Kubernetes – Basics and Beyond, Fourth Edition: Deep dive into Kubernetes and learn how to create and operate world-class container-native systems
Welcome to Packt Early Access. We’re giving you an exclusive preview of this book before it goes on sale. It can take many months to write a book, but our authors have cutting-edge information to share with you today. Early Access gives you an insight into the latest developments by making chapter drafts available. The chapters may be a little rough around the edges right now, but our authors will update them over time.
You can dip in and out of this book or follow along from start to finish; Early Access is designed to be flexible. We hope you enjoy getting to know more about the process of writing a Packt book.
- Chapter 1: Understanding Kubernetes Architecture
- Chapter 2: Creating Kubernetes Clusters
- Chapter 3: High Availability and Reliability
- Chapter 4: Securing Kubernetes
- Chapter 5: Using Kubernetes...
What is Kubernetes?
Kubernetes is a platform that encompasses a huge, ever-growing number of services and capabilities. Its core functionality is scheduling container workloads across your infrastructure, but it doesn’t stop there. Here are some of the other capabilities Kubernetes brings to the table:
- Providing authentication and authorization
- Debugging applications
- Accessing and ingesting logs
- Rolling updates
- Scaling clusters with the Cluster Autoscaler
- Scaling workloads with the Horizontal Pod Autoscaler
- Replicating application instances
- Checking application health and readiness
- Monitoring resources
- Balancing loads
- Naming and service discovery
- Distributing secrets
- Mounting storage systems
We will cover all these capabilities in great detail throughout the book. At this point, just absorb and appreciate how much value Kubernetes can add to your system.
Kubernetes has impressive scope, but it is also important...
What Kubernetes is not
Kubernetes is not a Platform as a Service (PaaS). It leaves many important decisions to you or to systems built on top of Kubernetes, such as OpenShift and Tanzu. For example:
- Kubernetes doesn’t require a specific application type or framework
- Kubernetes doesn’t require a specific programming language
- Kubernetes doesn’t provide databases or message queues
- Kubernetes doesn’t distinguish apps from services
- Kubernetes doesn’t have a click-to-deploy service marketplace
- Kubernetes doesn’t provide a built-in Function-as-a-Service (FaaS) solution
- Kubernetes doesn’t mandate logging, monitoring, and alerting systems
- Kubernetes doesn’t provide a CI/CD pipeline
Understanding container orchestration
The primary responsibility of Kubernetes is container orchestration. That means ensuring that all the containers executing various workloads are scheduled to run on physical or virtual machines. The containers must be packed efficiently, following the constraints of the deployment environment and the cluster configuration. In addition, Kubernetes must keep an eye on all running containers and replace dead, unresponsive, or otherwise unhealthy containers. Kubernetes provides many more capabilities that you will learn about in the following chapters. In this section, the focus is on containers and their orchestration.
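To make the two responsibilities just described concrete — packing containers onto machines under resource constraints, and replacing unhealthy ones — here is a deliberately tiny Python sketch. It is not how the real kube-scheduler works (which runs filtering and scoring plugins over many predicates), just a first-fit illustration of the orchestration idea; all names here are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu: float          # available CPU cores
    mem: int            # available memory (MiB)
    pods: list = field(default_factory=list)

@dataclass
class Pod:
    name: str
    cpu: float          # requested CPU cores
    mem: int            # requested memory (MiB)
    healthy: bool = True

def schedule(pod, nodes):
    """First-fit placement: run the pod on the first node with enough capacity."""
    for node in nodes:
        if node.cpu >= pod.cpu and node.mem >= pod.mem:
            node.cpu -= pod.cpu
            node.mem -= pod.mem
            node.pods.append(pod)
            return node
    return None  # unschedulable: no node satisfies the request

def reconcile(nodes):
    """Evict unhealthy pods and schedule replacements, as an orchestrator would."""
    for node in nodes:
        for pod in [p for p in node.pods if not p.healthy]:
            node.pods.remove(pod)
            node.cpu += pod.cpu   # free the capacity the dead pod held
            node.mem += pod.mem
            schedule(Pod(pod.name, pod.cpu, pod.mem), nodes)

nodes = [Node("node-1", cpu=2.0, mem=4096), Node("node-2", cpu=4.0, mem=8192)]
web = Pod("web", cpu=1.5, mem=2048)
db = Pod("db", cpu=3.0, mem=4096)
print(schedule(web, nodes).name)  # node-1
print(schedule(db, nodes).name)   # node-2 (node-1 lacks the CPU)
```

The real scheduler considers far more than CPU and memory (affinity, taints, topology, and so on), but the loop is the same shape: observe desired state, find a feasible placement, and continuously reconcile reality back toward it.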
Physical machines, virtual machines, and containers
It all starts and ends with hardware. In order to run your workloads, you need some real hardware provisioned. That includes actual physical machines, with certain compute capabilities (CPUs or cores), memory, and some local persistent storage (spinning disks or SSDs...
Kubernetes concepts
In this section, we’ll briefly introduce many important Kubernetes concepts and give you some context as to why they are needed and how they interact with each other. The goal is to get familiar with these terms and concepts. Later, we will see how these concepts are woven together and organized into API groups and resource categories to achieve awesomeness. You can consider many of these concepts as building blocks. Some concepts, such as nodes and the control plane, are implemented as a set of Kubernetes components. These components are at a different abstraction level, and we will discuss them in detail in a dedicated section, Kubernetes components.
Here is the Kubernetes architecture diagram:
Figure 1.1: Kubernetes architecture
Node
A node is a single host. It may be a physical or virtual machine. Its job is to run pods. Each Kubernetes node runs several Kubernetes components, such as the kubelet, the container runtime, and the kube...
Diving into Kubernetes architecture in depth
Kubernetes has very ambitious goals. It aims to manage and simplify the orchestration, deployment, and management of distributed systems across a wide range of environments and cloud providers. It provides many capabilities and services that should work across all these diverse environments and use cases, while evolving and remaining simple enough for mere mortals to use. This is a tall order. Kubernetes achieves this by following a crystal-clear, high-level design and well-thought-out architecture that promotes extensibility and pluggability.
Kubernetes originally had many hard-coded or environment-aware components, but the trend is to refactor them into plugins and keep the core small, generic, and abstract.
In this section, we will peel Kubernetes like an onion, starting with various distributed systems design patterns and how Kubernetes supports them, then go over the surface of Kubernetes, which is its set of APIs, and then...
Kubernetes container runtimes
Kubernetes originally supported only Docker as a container runtime engine. But that is no longer the case. Kubernetes now supports any runtime that implements the CRI.
In this section, you’ll get a closer look at the CRI and get to know some runtime engines that implement it. At the end of this section, you’ll be able to make a well-informed decision about which container runtime is appropriate for your use case and under what circumstances you may switch or even combine multiple runtimes in the same system.
The Container Runtime Interface (CRI)
The CRI is a gRPC API, containing specifications/requirements and libraries for container runtimes to integrate with the kubelet on a node. In Kubernetes 1.7, the internal Docker integration was replaced with a CRI-based integration. This was a big deal. It opened the door to multiple implementations that can take advantage of advances in the container world. The...
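To give a feel for the shape of this API, here is a Python sketch mirroring a small subset of the CRI RuntimeService methods (RunPodSandbox, CreateContainer, StartContainer, StopContainer are real CRI calls; the actual interface is defined in protobuf and spoken over gRPC between the kubelet and the runtime). The `InMemoryRuntime` class is a made-up fake for illustration, not a real runtime:

```python
from abc import ABC, abstractmethod

class RuntimeService(ABC):
    """A small subset of the CRI RuntimeService, sketched as a Python interface."""

    @abstractmethod
    def run_pod_sandbox(self, config: dict) -> str: ...                     # RunPodSandbox
    @abstractmethod
    def create_container(self, sandbox_id: str, config: dict) -> str: ...   # CreateContainer
    @abstractmethod
    def start_container(self, container_id: str) -> None: ...               # StartContainer
    @abstractmethod
    def stop_container(self, container_id: str, timeout: int = 30) -> None: ...  # StopContainer

class InMemoryRuntime(RuntimeService):
    """Trivial fake runtime: tracks state in dictionaries instead of running containers."""
    def __init__(self):
        self.sandboxes, self.containers = {}, {}
        self._next = 0

    def _new_id(self, prefix):
        self._next += 1
        return f"{prefix}-{self._next}"

    def run_pod_sandbox(self, config):
        sid = self._new_id("sandbox")
        self.sandboxes[sid] = config
        return sid

    def create_container(self, sandbox_id, config):
        cid = self._new_id("container")
        self.containers[cid] = {"sandbox": sandbox_id, "config": config, "state": "CREATED"}
        return cid

    def start_container(self, container_id):
        self.containers[container_id]["state"] = "RUNNING"

    def stop_container(self, container_id, timeout=30):
        self.containers[container_id]["state"] = "EXITED"

# A kubelet-like caller: create the pod sandbox first, then containers inside it.
rt = InMemoryRuntime()
sandbox = rt.run_pod_sandbox({"name": "web-pod"})
ctr = rt.create_container(sandbox, {"image": "nginx:1.25"})
rt.start_container(ctr)
print(rt.containers[ctr]["state"])  # RUNNING
```

The key design point is the pod sandbox: the kubelet first asks the runtime for a sandbox (shared network and namespaces for the pod), then creates and starts containers inside it. Any runtime that answers these gRPC calls correctly — containerd, CRI-O, and others — can slot in beneath the kubelet.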
Summary
In this chapter, we covered a lot of ground. You learned about the organization, design, and architecture of Kubernetes. Kubernetes is an orchestration platform for microservice-based applications running as containers. Kubernetes clusters have a control plane and worker nodes. Containers run within pods. Each pod runs on a single physical or virtual machine. Kubernetes directly supports many concepts, such as services, labels, and persistent storage. You can implement various distributed systems design patterns on Kubernetes. Container runtimes just need to implement the CRI. Docker, containerd, CRI-O, and more are supported.
In Chapter 2, Creating Kubernetes Clusters, we will explore the various ways to create Kubernetes clusters, discuss when to use different options, and build a local multi-node cluster.
Join us on Discord!
Read this book alongside other users, cloud experts, authors, and like-minded professionals.
Ask questions, provide solutions to other...