
You're reading from Kubernetes - A Complete DevOps Cookbook

Product type: Book
Published in: Mar 2020
Publisher: Packt
ISBN-13: 9781838828042
Edition: 1st
Author: Murat Karslioglu

Murat Karslioglu is a distinguished technologist with years of experience using infrastructure tools and technologies. Murat is currently the VP of products at MayaData, a start-up that builds a data agility platform for stateful applications, and a maintainer of the open source projects OpenEBS and Litmus. In his free time, Murat writes practical articles about DevOps best practices, CI/CD, Kubernetes, and running stateful applications on popular Kubernetes platforms on his blog, Containerized Me. Murat also runs a cloud-native news curator site, The Containerized Today, where he regularly publishes updates on the Kubernetes ecosystem.

Scaling and Upgrading Applications

In this chapter, we will discuss the methods and strategies that we can use to dynamically scale containerized services running on Kubernetes to handle the changing traffic needs of our service. After following the recipes in this chapter, you will have the skills needed to create load balancers that distribute traffic to multiple workers and increase bandwidth. You will also know how to handle upgrades in production with minimum downtime.

In this chapter, we will cover the following recipes:

  • Scaling applications on Kubernetes
  • Assigning applications to nodes with priority
  • Creating an external load balancer
  • Creating an ingress service and service mesh using Istio
  • Creating an ingress service and service mesh using Linkerd
  • Auto-healing pods in Kubernetes
  • Managing upgrades through blue/green deployments

Technical requirements

The recipes in this chapter assume that you have a functional Kubernetes cluster deployed by following one of the recommended methods described in Chapter 1, Building Production-Ready Kubernetes Clusters.

The Kubernetes command-line tool, kubectl, will be used for the rest of the recipes in this chapter since it's the main command-line interface for running commands against Kubernetes clusters. We will also use helm where Helm charts are available to deploy solutions.

Scaling applications on Kubernetes

In this section, we will perform application and cluster scaling tasks. You will learn how to manually and also automatically scale your service capacity up or down in Kubernetes to support dynamic traffic.

Getting ready

Clone the k8sdevopscookbook/src repository to your workstation to use the manifest files in the chapter7 directory, as follows:

$ git clone https://github.com/k8sdevopscookbook/src.git
$ cd src/chapter7/

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources.

How to do it…

This section is further divided into the following subsections to make this process easier:

  • Validating the installation of Metrics Server
  • Manually scaling an application
  • Autoscaling applications using Horizontal Pod Autoscaler

Validating the installation of Metrics Server

The Autoscaling applications using the Horizontal Pod Autoscaler recipe in this section also requires Metrics Server to be installed...
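The commands involved can be sketched as follows (assuming a hypothetical Deployment named `my-app`; substitute your own workload name):

```shell
# Verify that Metrics Server is serving resource metrics
kubectl top nodes

# Manually scale a deployment to five replicas
kubectl scale deployment my-app --replicas=5

# Create a Horizontal Pod Autoscaler targeting 50% average CPU,
# keeping between 2 and 10 replicas
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10

# Inspect the autoscaler's current state
kubectl get hpa
```

If `kubectl top nodes` returns an error rather than CPU and memory usage, Metrics Server is not installed or not healthy, and the HPA will have no metrics to act on.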

Assigning applications to nodes

In this section, we will make sure that pods are not scheduled onto inappropriate nodes. You will learn how to schedule pods onto Kubernetes nodes using node selectors, taints, tolerations, and priorities.

Getting ready

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources.

How to do it…

This section is further divided into the following subsections to make this process easier:

  • Labeling nodes
  • Assigning pods to nodes using nodeSelector
  • Assigning pods to nodes using node and inter-pod affinity

Labeling nodes

Kubernetes labels are key/value pairs attached to resources that specify identifying attributes and apply organizational structure to system objects. In this recipe, we will review the common labels that are applied to Kubernetes nodes and add a custom label to be used when scheduling pods onto nodes.

Let's perform the following steps to list some of the default labels...
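As a minimal sketch of the workflow (the node name `worker-1` and the `disktype=ssd` label are examples; use your own node names and labels):

```shell
# Show the default labels attached to every node
kubectl get nodes --show-labels

# Apply a custom label to one node
kubectl label node worker-1 disktype=ssd

# Schedule a pod onto nodes carrying that label via nodeSelector
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
EOF
```

If no node matches the `nodeSelector`, the pod stays in the `Pending` state until a matching node appears.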

Creating an external load balancer

The LoadBalancer service type is a relatively simple alternative to ingress that uses a cloud-based external load balancer. Support for this service type is limited to specific cloud providers, but these include the most popular ones: AWS, GCP, Azure, Alibaba Cloud, and OpenStack.

In this section, we will expose our workload ports using a load balancer. We will learn how to create an external GCE/AWS load balancer for clusters on public clouds, as well as for a private cluster using inlets-operator.

Getting ready

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources. In this recipe, we are using a cluster that's been deployed on AWS using kops, as described in Chapter 1, Building Production-Ready Kubernetes Clusters, in the Amazon Web Services recipe. The same instructions will work on all major cloud providers.

To access the example...
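A minimal Service manifest of type `LoadBalancer` looks roughly like this (the names `my-app-lb` and `app: my-app` are placeholders; the selector must match your pod labels):

```yaml
# The cloud provider provisions an external load balancer and
# assigns it a public address once this Service is created.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

After applying it, `kubectl get service my-app-lb` shows the external IP or hostname once the provider finishes provisioning; it reads `<pending>` until then.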

Creating an ingress service and service mesh using Istio

Istio is a popular open source service mesh. In this section, we will get basic Istio service mesh functionality up and running. You will learn how to create a service mesh to secure, connect, and monitor microservices.

A service mesh is a broad topic in itself, and we don't intend to cover detailed use cases here. Instead, we will focus on getting our service up and running.

Getting ready

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources.

Clone the https://github.com/istio/istio repository to your workstation, as follows:

$ git clone https://github.com/istio/istio.git 
$ cd istio

We will use the examples in the preceding repository to install Istio on our Kubernetes cluster.
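Depending on your Istio and Helm versions, a Helm 2-style installation from the cloned repository looks roughly like this (chart paths as shipped in Istio releases around the time of writing; verify them against your checkout before running):

```shell
# Install Istio's CRDs first, then the control plane,
# both into the istio-system namespace
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
helm install install/kubernetes/helm/istio --name istio --namespace istio-system
```

Newer Istio releases deprecate this layout in favor of `istioctl`, so treat the above as a sketch of the Helm path rather than the only supported method.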

How to do it…

This section is further divided into the following subsections to make this process easier:

  • Installing Istio using Helm
  • Verifying the installation
  • Creating an ingress gateway...
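An ingress gateway pairs a `Gateway` resource, which opens a port on Istio's ingress gateway pods, with a `VirtualService` that routes matching traffic to a backing Kubernetes Service. A sketch, with `my-gateway` and `my-app` as example names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway   # Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-app        # the Kubernetes Service to route to
        port:
          number: 80
```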

Creating an ingress service and service mesh using Linkerd

In this section, we will get a basic Linkerd service mesh up and running. You will learn how to create a service mesh to secure, connect, and monitor microservices.

A service mesh is a broad topic in itself, and we don't intend to cover detailed use cases here. Instead, we will focus on getting our service up and running.

Getting ready

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources.

To access the example files for this recipe, clone the k8sdevopscookbook/src repository to your workstation to use the configuration files in the src/chapter7/linkerd directory, as follows:

$ git clone https://github.com/k8sdevopscookbook/src.git
$ cd src/chapter7/linkerd/

After you've cloned the preceding repository, you can get started with the recipes.

How to do it…

This section is further divided into the following subsections to make this process easier...
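At a high level, a Linkerd 2.x installation follows this shape (the deployment name `my-app` is an example; substitute your own workload):

```shell
# Install the Linkerd CLI, validate the cluster, then install the control plane
curl -sL https://run.linkerd.io/install | sh
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check

# Inject the Linkerd sidecar proxy into an existing deployment
kubectl get deploy my-app -o yaml | linkerd inject - | kubectl apply -f -
```

`linkerd check` verifies that the control plane components are healthy before you start meshing workloads.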

Auto-healing pods in Kubernetes

Kubernetes has self-healing capabilities at the cluster level. It restarts containers that fail, reschedules pods when nodes die, and even kills containers that don't respond to your user-defined health checks.

In this section, we will add health checks to our applications. You will learn how to use liveness and readiness probes to monitor container health and trigger a restart action in the case of failures.

Getting ready

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources.

How to do it…

This section is further divided into the following subsections to make this process easier:

  • Testing self-healing pods
  • Adding liveness probes to pods

Testing self-healing pods

In this recipe, we will manually remove pods in our deployment to show how Kubernetes replaces them. Later, we will learn how to automate this using a user-defined health check. Now, let's test Kubernetes' self...
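A sketch of both steps (the label `app=my-app` and the pod name `liveness-demo` are examples):

```shell
# Delete one pod from a deployment and watch the controller replace it
kubectl delete pod $(kubectl get pod -l app=my-app -o name | head -n 1)
kubectl get pods -w

# Add an HTTP liveness probe so that failed health checks
# trigger an automatic container restart
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
```

If the probe fails repeatedly, the kubelet restarts the container; `kubectl describe pod liveness-demo` shows the probe events.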

Managing upgrades through blue/green deployments

The blue/green deployment architecture is a method used to reduce downtime by running two identical production environments that can be switched between when needed. These two environments are identified as blue and green. In this section, we will perform rollover application upgrades. You will learn how to roll over a new version of your application with persistent storage by using blue/green deployments in Kubernetes.

Getting ready

Make sure you have a Kubernetes cluster ready and kubectl and helm configured to manage the cluster resources.

For this recipe, we will need a persistent storage provider to take snapshots from one version of the application and use clones with the other version of the application to keep the persistent volume content. We will use OpenEBS as a persistent storage provider, but you can also use any CSI-compatible storage provider.

Make sure OpenEBS has been configured with the cStor storage engine...
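The traffic switch itself can be sketched as a Service whose selector is flipped between the two environments (names and labels here are examples, not the book's exact manifests):

```yaml
# The Service routes traffic by selector; changing 'version' from
# "blue" to "green" cuts traffic over to the new deployment, and
# changing it back rolls the upgrade back.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue    # change to "green" to switch environments
  ports:
  - port: 80
    targetPort: 8080
```

Because both deployments stay running, the switch is near-instant and rollback is a one-line change to the selector.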
