The Kubernetes Workshop

You're reading from The Kubernetes Workshop

Product type: Book
Published in: Sep 2020
Publisher: Packt
ISBN-13: 9781838820756
Pages: 780
Edition: 1st
Authors (6): Zachary Arnold, Sahil Dua, Wei Huang, Faisal Masood, Mélony Qin, Mohammed Abu Taleb

Table of Contents (20 Chapters)

Preface
1. Introduction to Kubernetes and Containers
2. An Overview of Kubernetes
3. kubectl – Kubernetes Command Center
4. How to Communicate with Kubernetes (API Server)
5. Pods
6. Labels and Annotations
7. Kubernetes Controllers
8. Service Discovery
9. Storing and Reading Data on Disk
10. ConfigMaps and Secrets
11. Build Your Own HA Cluster
12. Your Application and HA
13. Runtime and Network Security in Kubernetes
14. Running Stateful Components in Kubernetes
15. Monitoring and Autoscaling in Kubernetes
16. Kubernetes Admission Controllers
17. Advanced Scheduling in Kubernetes
18. Upgrading Your Cluster without Downtime
19. Custom Resource Definitions in Kubernetes

12. Your Application and HA

Overview

In this chapter, we will explore Kubernetes cluster life cycle management through the use of Terraform and Amazon Elastic Kubernetes Service (EKS). We will also deploy an application and learn some principles to make applications better suited to the Kubernetes environment.

This chapter will walk you through using Terraform to create a fully functioning, highly available Kubernetes environment. You will deploy an application to the cluster and modify its functionality to make it suitable for a highly available environment. We will also learn how to get traffic from the internet to an application running in a cluster by using a Kubernetes ingress resource.

Introduction

In the previous chapter, we set up our first multi-node Kubernetes cluster in a cloud environment. In this section, we're going to talk about how we operationalize a Kubernetes cluster for our application—that is, we will use the cluster to run a containerized application other than the dashboard.

Since Kubernetes has as many uses as can be imagined by a cluster operator, no two use cases for Kubernetes are alike. So, we're going to make some assumptions about the type of application that we're operationalizing our cluster for. We're going to optimize a workflow for deploying a stateless web application with a stateful backend that has high-availability requirements in a cloud-based environment. In doing so, we're hopefully going to cover a large percentage of what people generally use Kubernetes clusters for.

Kubernetes can be used for just about anything. Even if what we cover does not exactly match your use case for Kubernetes,...

An Overview of Infrastructure Life Cycle Management

In simple terms, infrastructure life cycle management refers to how we manage our servers through each phase of their useful lives. This involves provisioning, maintaining, and decommissioning physical hardware or cloud resources. Since we are leveraging cloud infrastructure, we should use infrastructure life cycle management tools to provision and de-provision resources programmatically. To understand why this is important, let's consider the following example.

Imagine for a moment that you work as a system administrator, DevOps engineer, site reliability engineer, or any other role that requires you to deal with server infrastructure for a company that is in the digital news industry. What that means is that the primary output of the people who are working for this company is the information that they publish on their website. Now, imagine that the entirety of the website runs on one server in your company's server...

Terraform

In the last chapter, we used kops to create a Kubernetes cluster from scratch. However, this process is tedious and difficult to replicate, which creates a high probability of misconfiguration, resulting in unexpected events at application runtime. Luckily, there is a very powerful community-supported tool that solves this issue very well for Kubernetes clusters running on Amazon Web Services (AWS), as well as on several other cloud platforms, such as Azure and Google Cloud Platform (GCP).

Terraform is a general-purpose infrastructure life cycle management tool; that is, Terraform can manage the state of your infrastructure as defined through code. The goal of Terraform, when it was initially created, was to create both a language (HashiCorp Configuration Language (HCL)) and runtime that can create infrastructure in a repeatable manner and control changes to that infrastructure in the same way that we control changes to application source code—...
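As a sketch of what this looks like in practice, a minimal Terraform configuration in HCL might resemble the following. The region, resource names, and AMI ID here are illustrative placeholders, not values from this book's exercises:

```hcl
# Minimal illustrative HCL sketch: declare the provider and one resource.
# All names and IDs below are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "terraform-example"
  }
}
```

Running `terraform plan` shows the changes Terraform would make, and `terraform apply` reconciles the real infrastructure to match the code. Because the definition lives in a file, it can be versioned, reviewed, and reapplied just like application source.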

Kubernetes Ingress

In the early days of the Kubernetes project, the Service object was used to get traffic from outside the cluster to the running Pods. You had only two options for getting that traffic in: a NodePort Service or a LoadBalancer Service. The latter was preferred in public cloud provider environments because the cluster would automatically set up security groups/firewall rules and point the LoadBalancer to the correct ports on your worker nodes. However, there is one slight problem with that approach, especially for those who are just getting started with Kubernetes or who have tight cloud budgets: one LoadBalancer can only point to a single Kubernetes Service object.
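An Ingress resource solves this by letting one entry point route to many Services. As an illustrative sketch (the hostnames and Service names below are placeholders, and this uses the networking.k8s.io/v1 API, which may differ from the version available when this book was written), a single Ingress can route by host to two different backends:

```yaml
# Illustrative sketch: one Ingress fronting two Services, so a single
# cloud load balancer (pointed at the Ingress controller) serves both.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: shop.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-frontend   # placeholder Service name
            port:
              number: 80
  - host: api.example.com         # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-backend     # placeholder Service name
            port:
              number: 8080
```

Note that an Ingress resource does nothing on its own; an Ingress controller must be running in the cluster to satisfy it.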

Now, imagine that you have 100 microservices running in Kubernetes, all of which need to be exposed publicly. In AWS, the average cost of an ELB (a load balancer provided by AWS) is roughly $20 per month. So, in this scenario...
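To make the arithmetic concrete (using the rough $20/month figure quoted above as an assumption, not current AWS pricing), the cost difference can be sketched as:

```python
# Rough monthly cost comparison: exposing 100 microservices with one
# LoadBalancer Service each, versus a single load balancer fronting an
# Ingress controller. The per-ELB price is an illustrative assumption.
ELB_MONTHLY_USD = 20   # assumed average cost of one ELB per month
NUM_SERVICES = 100

# One LoadBalancer Service per microservice means one ELB each.
cost_without_ingress = NUM_SERVICES * ELB_MONTHLY_USD

# One ELB pointing at an Ingress controller that routes to all services.
cost_with_ingress = 1 * ELB_MONTHLY_USD

print(cost_without_ingress)  # 2000
print(cost_with_ingress)     # 20
```

Under these assumptions, direct exposure costs roughly $2,000 per month, against about $20 for a single Ingress-fronted load balancer.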

Highly Available Applications Running on Top of Kubernetes

Now that you've had a chance to spin up an EKS cluster and learn about Ingress, let's introduce our application. We have provided an example application with a flaw that prevents it from being cloud-native and truly horizontally scalable in Kubernetes. We will deploy this application in the following exercise and observe its behavior. Then, in the next section, we will deploy a modified version of this application and observe how it is better suited to our stated objective of being highly available.

Exercise 12.03: Deploying a Multi-Replica Non-HA Application in Kubernetes

In this exercise, we will deploy a version of the application that's not horizontally scalable. We will try to scale it and observe the problem that prevents it from being scaled horizontally:

Note

We have provided the source code for this application in the GitHub repository for reference. However...
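While the exercise's actual manifests live in the book's GitHub repository, the general shape of a multi-replica Deployment like the one used here can be sketched as follows (the name, labels, image, and port are placeholders, not the book's real manifest):

```yaml
# Illustrative sketch of a multi-replica Deployment. All names and the
# image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: non-ha-app            # placeholder name
spec:
  replicas: 3                 # several replicas behind one Service
  selector:
    matchLabels:
      app: non-ha-app
  template:
    metadata:
      labels:
        app: non-ha-app
    spec:
      containers:
      - name: app
        image: example/non-ha-app:latest   # placeholder image
        ports:
        - containerPort: 8080
```

With a Service load-balancing across these replicas, any state an individual replica keeps in memory becomes visible as inconsistent behavior, which is exactly the problem this exercise sets out to observe.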

Working with Stateful Applications

The previous exercise demonstrates the challenge of working with stateful applications in a distributed context. As a brief overview, a stateless app is an application program that does not save client data generated in one session for use in the next session with that client. This means that in general, a stateless application depends entirely on the input to derive its output. Imagine a server displaying a static web page that does not need to change for any reason. In the real world, stateless applications typically need to be combined with stateful applications in order to create a useful experience for clients or consumers of the application. There are, of course, exceptions to this.

A stateful application is one whose output depends on multiple factors, such as user input, input from other applications, and past saved events. These factors are called the "state" of the application, which determines its behavior. One of the most...
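To illustrate the distinction (this is a hypothetical sketch, not the book's example application), consider replicas of a hit-counter service. Replicas that keep the count in process memory diverge once a load balancer spreads requests across them, while stateless replicas backed by a shared store stay consistent:

```python
# Hypothetical sketch: why in-process state breaks horizontal scaling.

class StatefulReplica:
    """A replica that stores its state (a hit counter) in memory."""
    def __init__(self):
        self.hits = 0

    def handle_request(self):
        self.hits += 1
        return self.hits


class SharedStore:
    """Stand-in for an external store (e.g. a database or Redis)
    shared by all replicas."""
    def __init__(self):
        self.hits = 0


class StatelessReplica:
    """A replica that keeps no state of its own; its output depends only
    on the input and the shared store."""
    def __init__(self, store):
        self.store = store

    def handle_request(self):
        self.store.hits += 1
        return self.store.hits


# Round-robin six requests across three stateful replicas: each replica
# counts only the requests it happened to serve, so clients see
# inconsistent values.
stateful = [StatefulReplica() for _ in range(3)]
results_stateful = [stateful[i % 3].handle_request() for i in range(6)]
print(results_stateful)   # [1, 1, 1, 2, 2, 2]

# The same traffic against stateless replicas backed by one shared store
# yields a single consistent, monotonically increasing count.
store = SharedStore()
stateless = [StatelessReplica(store) for _ in range(3)]
results_stateless = [stateless[i % 3].handle_request() for i in range(6)]
print(results_stateless)  # [1, 2, 3, 4, 5, 6]
```

Pushing the state out of the replicas and into a shared backing store is the essence of making an application horizontally scalable.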

Summary

In an earlier chapter of this book, we explored how Kubernetes works favorably with a declarative approach to application management; that is, you define your desired state and let Kubernetes take care of the rest. Throughout this chapter, we took a look at some tools that help us manage our cloud infrastructure in a similar way. We introduced Terraform as a tool that can help us manage the state of our infrastructure and introduced the idea of treating your infrastructure as code.

We then created a mostly secure, production-ready Kubernetes cluster using Terraform in Amazon EKS. We took a look at the Ingress object and learned about the major motivations for using it, as well as the various advantages that it provides. Then, we deployed two versions of an application on a highly available Kubernetes cluster and explored some concepts that allow us to improve at horizontally scaling stateful applications. This gave us a glimpse of the challenges that come with running stateful...
