Introduction to Kubernetes

In the previous chapter, we learned how SwarmKit uses rolling updates to achieve zero-downtime deployments. We were also introduced to Docker configs, which are used to store nonsensitive data in a cluster and configure application services, as well as Docker secrets, which are used to share confidential data with application services running in a Docker Swarm.

In this chapter, we're going to introduce Kubernetes. Kubernetes is currently the clear leader in the container orchestration space. We will start with a high-level overview of the architecture of a Kubernetes cluster and then discuss the main objects used in Kubernetes to define and run containerized applications.

This chapter covers the following topics:

  • Kubernetes architecture
  • Kubernetes master nodes
  • Cluster nodes
  • Introduction to Minikube
  • Kubernetes support in Docker...

Technical requirements

Kubernetes architecture

A Kubernetes cluster consists of a set of servers. These servers can be VMs or physical servers; the latter are also called bare metal. Each member of the cluster has one of two roles: it is either a Kubernetes master or a (worker) node. The former is used to manage the cluster, while the latter runs the application workload. I have put worker in parentheses because, in Kubernetes parlance, a server that runs application workloads is simply called a node, whereas in Docker Swarm parlance the equivalent is a worker node. I think that the notion of a worker node describes the role of the server better than a simple node does.

In a cluster, you have a small and odd number of masters and as many worker nodes as needed. Small clusters might only have a few worker nodes, while more realistic clusters...
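Once such a cluster is up, the two roles are visible with kubectl. The following is a purely illustrative sketch for a cluster with three masters and two worker nodes; the names, ages, and versions will differ, and how the ROLES column is populated depends on how the cluster was provisioned:

$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   10d   v1.15.2
master2   Ready    master   10d   v1.15.2
master3   Ready    master   10d   v1.15.2
worker1   Ready    <none>   10d   v1.15.2
worker2   Ready    <none>   10d   v1.15.2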

Kubernetes master nodes

Kubernetes master nodes are used to manage a Kubernetes cluster. The following is a high-level diagram of such a master:

Kubernetes master

At the bottom of the preceding diagram, we have the Infrastructure, which can be a VM or a physical server (often called bare metal), either on-premises or in the cloud. Currently, Kubernetes masters only run on Linux. The most popular Linux distributions, such as RHEL, CentOS, and Ubuntu, are supported. On this Linux machine, we have at least the following four Kubernetes services running:

  • API server: This is the gateway to Kubernetes. All requests to list, create, modify, or delete any resources in the cluster must go through this service. It exposes a REST interface that tools such as kubectl use to manage the cluster and applications in the cluster.
  • Controller...
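On a running cluster, the API server is what kubectl talks to, and on many setups the master components themselves run as pods in the kube-system namespace. A hedged sketch of two commands that make this visible (the exact output varies by distribution, so it is omitted here):

$ kubectl cluster-info              # prints the URL of the Kubernetes API server
$ kubectl get pods -n kube-system   # lists control-plane components such as the scheduler and controller manager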

Cluster nodes

Cluster nodes are the nodes onto which Kubernetes schedules application workloads. They are the workhorses of the cluster. A Kubernetes cluster can have a few, dozens, hundreds, or even thousands of cluster nodes. Kubernetes has been built from the ground up for high scalability. Don't forget that Kubernetes was modeled after Google Borg, which has been running tens of thousands of containers for years:

Kubernetes worker node

A worker node can run on a VM or on bare metal, on-premises or in the cloud. Originally, worker nodes could only be configured on Linux, but since version 1.10 of Kubernetes, worker nodes can also run on Windows Server. It is perfectly fine to have a mixed cluster with Linux and Windows worker nodes.

On each node, we have three services that need to run, as follows:

  • Kubelet: This is the first and foremost service. ...
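kubectl can show, per node, which operating system it runs, the kubelet version, and the container runtime in use, which is handy in a mixed Linux/Windows cluster. A small sketch; the names, versions, and images are purely illustrative and some columns are omitted:

$ kubectl get nodes -o wide
NAME      STATUS   ROLES    VERSION   OS-IMAGE                         CONTAINER-RUNTIME
worker1   Ready    <none>   v1.15.2   Ubuntu 18.04.3 LTS               docker://18.9.7
worker2   Ready    <none>   v1.15.2   Windows Server 2019 Datacenter   docker://18.9.7

The VERSION column reports the version of the Kubelet running on that node.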

Introduction to Minikube

Minikube is a tool that creates a single-node Kubernetes cluster in VirtualBox or Hyper-V (other hypervisors are supported too), ready to be used during the development of a containerized application. In Chapter 2, Setting Up a Working Environment, we learned how Minikube and kubectl can be installed on our macOS or Windows laptop. As stated there, Minikube is a single-node Kubernetes cluster, and thus its node is, at the same time, a Kubernetes master and a worker node.

Let's make sure that Minikube is running with the following command:

$ minikube start

Once Minikube is ready, we can access its single-node cluster using kubectl. We should see something similar to the following:

Listing all nodes in Minikube
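The listing shown in the preceding screenshot is presumably produced with kubectl get nodes; the output should look similar to the following sketch (age and version are illustrative):

$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   15m   v1.15.2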

As we mentioned previously, we have a single-node cluster with a node called minikube. The...

Kubernetes support in Docker for Desktop

Since version 18.01-ce, Docker for macOS and Docker for Windows support Kubernetes out of the box. Developers who want to deploy their containerized applications to Kubernetes can use this orchestrator instead of SwarmKit. Kubernetes support is turned off by default and has to be enabled in the settings. The first time Kubernetes is enabled, Docker for macOS or Windows needs a moment to download all the components that are required to create a single-node Kubernetes cluster. In contrast to Minikube, which is also a single-node cluster, the version provided by the Docker tools uses containerized versions of all Kubernetes components:

Kubernetes support in Docker for macOS and Windows

The preceding diagram gives us a rough overview of how Kubernetes support has been added to Docker...
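Once Kubernetes support is enabled, kubectl needs to point at the cluster that Docker for Desktop provides. This is done by selecting the corresponding context; depending on the release, it is named docker-desktop or docker-for-desktop. A short sketch:

$ kubectl config get-contexts                 # list all contexts kubectl knows about
$ kubectl config use-context docker-desktop   # switch to the Docker for Desktop cluster (older releases: docker-for-desktop)
$ kubectl get nodes                           # should now show the single Docker for Desktop node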

Introduction to pods

Contrary to what is possible in Docker Swarm, you cannot run containers directly in a Kubernetes cluster. In a Kubernetes cluster, you can only run pods. Pods are the atomic units of deployment in Kubernetes. A pod is an abstraction of one or many co-located containers that share the same kernel namespaces, such as the network namespace. No equivalent exists in Docker SwarmKit. The fact that more than one container can be co-located and share the same network namespace is a very powerful concept. The following diagram illustrates two pods:

Kubernetes pods

In the preceding diagram, we have two pods, Pod 1 and Pod 2. The first pod contains two containers, while the second one only contains a single container. Each pod gets an IP address assigned by Kubernetes that is unique in the whole Kubernetes cluster. In our case, these are the...
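To make the shared network namespace more concrete, the following is a minimal sketch of a pod manifest with two co-located containers; the pod name, container names, and images (nginx, busybox) are chosen purely for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:alpine                 # serves HTTP on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # Both containers share the pod's network namespace,
    # so the sidecar can reach nginx simply via localhost.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost > /dev/null; sleep 10; done"]

Applying this manifest with kubectl apply -f web-pod.yaml creates one pod with a single IP address that is shared by both containers.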

Kubernetes ReplicaSet

A single pod in an environment with high availability requirements is insufficient. What if the pod crashes? What if we need to update the application running inside the pod but cannot afford any service interruption? These questions and more indicate that pods alone are not enough and we need a higher-level concept that can manage multiple instances of the same pod. In Kubernetes, the ReplicaSet is used to define and manage such a collection of identical pods that are running on different cluster nodes. Among other things, a ReplicaSet defines which container images are used by the containers running inside a pod and how many instances of the pod will run in the cluster. These properties and many others are called the desired state. 

The ReplicaSet is responsible for reconciling the desired state at all times...
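As a minimal sketch, this is roughly what such a desired state looks like in a ReplicaSet manifest; the name, label, replica count, and image are illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                   # desired number of identical pod instances
  selector:
    matchLabels:
      app: web                  # the ReplicaSet owns all pods carrying this label
  template:                     # pod template used to create missing replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine     # container image used by the pods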

Kubernetes deployment

Kubernetes takes the single-responsibility principle very seriously. All Kubernetes objects are designed to do one thing and one thing only, and they are designed to do this one thing very well. In this regard, we have to understand Kubernetes ReplicaSets and Deployments. A ReplicaSet, as we have learned, is responsible for achieving and reconciling the desired state of an application service. This means that the ReplicaSet manages a set of pods.

A Deployment augments a ReplicaSet by providing rolling update and rollback functionality on top of it. In Docker Swarm, the Swarm service incorporates the functionality of both a ReplicaSet and a Deployment. In this regard, SwarmKit is much more monolithic than Kubernetes. The following diagram shows the relationship of a Deployment to a ReplicaSet:

Kubernetes deployment

In the preceding...
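In practice, we rarely define ReplicaSets directly; we define a Deployment and let it create and manage the underlying ReplicaSets. A minimal sketch, with illustrative names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate         # pods are replaced gradually when the template changes
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17       # changing this tag triggers a rolling update via a new ReplicaSet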

Kubernetes service

The moment we start to work with applications consisting of more than one application service, we need service discovery. The following diagram illustrates this problem:

Service discovery

In the preceding diagram, we have a Web API service that needs access to three other services: payments, shipping, and ordering. The Web API should never have to care about how and where to find those three services. In the API code, we just want to use the name of the service we want to reach and its port number. An example is the URL http://payments:3000, which is used to access an instance of the payments service.

In Kubernetes, the payments application service is represented by a ReplicaSet of pods. Due to the nature of highly distributed systems, we cannot assume that pods have stable endpoints. A pod can...
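A Kubernetes Service gives such a set of pods a stable, DNS-resolvable name and a stable virtual IP. A minimal sketch for the payments example; the label and ports are assumptions made for illustration:

apiVersion: v1
kind: Service
metadata:
  name: payments                # other pods can now use http://payments:3000
spec:
  selector:
    app: payments               # requests are load balanced across all pods with this label
  ports:
  - port: 3000                  # port the service exposes inside the cluster
    targetPort: 3000            # port the payments containers listen on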

Context-based routing

Often, we want to configure context-based routing for our Kubernetes cluster. Kubernetes offers us various ways to do this. The preferred and most scalable way at this time is to use an IngressController. The following diagram tries to illustrate how this ingress controller works:

 Context-based routing using a Kubernetes ingress controller

In the preceding diagram, we can see how context-based (or layer 7) routing works when using an IngressController, such as Nginx. Here, we have the deployment of an application service called web. All the pods of this application service have the following label: app=web. Then, we have a Kubernetes service called web that provides a stable endpoint to those pods. The service has a (virtual) IP of 52.14.0.13 and exposes port 30044. That is, if a request comes to any node of...
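For reference, here is a hedged sketch of an Ingress resource that an ingress controller such as Nginx acts on; the host name, path, and port are illustrative, and older clusters use the extensions/v1beta1 or networking.k8s.io/v1beta1 API instead of networking.k8s.io/v1:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com       # layer 7 routing: match on host and path
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web           # forward matching requests to the web service
            port:
              number: 3000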

Comparing SwarmKit with Kubernetes

Now that we have learned a lot of details about the most important resources in Kubernetes, it is helpful to compare the two orchestrators, SwarmKit and Kubernetes, by matching important resources. Let's take a look:

SwarmKit       Kubernetes       Description
Swarm          Cluster          Set of servers/nodes managed by the respective orchestrator.
Node           Cluster member   Single host (physical or virtual) that's a member of the Swarm/cluster.
Manager node   Master           Node managing the Swarm/cluster. This is the control plane.
Worker node    Node             Member of the Swarm/cluster running application workload.
Container      Container**      An instance of a container image running on a node.
Task           Pod              An instance of a service (Swarm) or ReplicaSet...

**Note: In a Kubernetes cluster, we cannot run a container directly.

Summary

In this chapter, we learned about the basics of Kubernetes. We got an overview of its architecture and introduced the main resources that are used to define and run applications in a Kubernetes cluster. We also introduced Minikube and Kubernetes support in Docker for Desktop.

In the next chapter, we're going to deploy an application into a Kubernetes cluster. Then, we're going to update one of the services of this application using a zero-downtime strategy. Finally, we're going to provide application services running in Kubernetes with sensitive data using secrets. Stay tuned!

Questions

Please answer the following questions to assess your learning progress:

  1. Explain in a few short sentences what the role of a Kubernetes master is.
  2. List the elements that need to be present on each Kubernetes (worker) node.
  3. We cannot run individual containers in a Kubernetes cluster.

A. Yes
B. No

  4. Explain the reason why the containers in a pod can use localhost to communicate with each other.
  5. What is the purpose of the so-called pause container in a pod?
  6. Bob tells you "Our application consists of three Docker images: web, inventory, and db. Since we can run multiple containers in a Kubernetes pod, we are going to deploy all the services of our application in a single pod." List three to four reasons why this is a bad idea.
  7. Explain in your own words why we need Kubernetes ReplicaSets.
  8. Under which circumstances do we need Kubernetes deployments...

Further reading
