Mastering Kubernetes - Fourth Edition

By Gigi Sayfan
About this book
The fourth edition of the bestseller Mastering Kubernetes includes the most recent tools and code to enable you to learn the latest features of Kubernetes 1.25. This book contains a thorough exploration of complex concepts and best practices to help you master the skills of designing and deploying large-scale distributed systems on Kubernetes clusters. You’ll learn how to run complex stateless and stateful microservices on Kubernetes, including advanced features such as horizontal pod autoscaling, rolling updates, resource quotas, and persistent storage backends. In addition, you’ll understand how to utilize serverless computing and service meshes. Further, two new chapters have been added. “Governing Kubernetes” covers the problem of policy management, how admission control addresses it, and how policy engines provide a powerful governance solution. “Running Kubernetes in Production” shows you what it takes to run Kubernetes at scale across multiple cloud providers, multiple geographical regions, and multiple clusters, and it also explains how to handle topics such as upgrades, capacity planning, dealing with cloud provider limits/quotas, and cost management. By the end of this Kubernetes book, you’ll have a strong understanding of, and hands-on experience with, a wide range of Kubernetes capabilities.
Publication date: June 2023
Publisher: Packt
Pages: 746
ISBN: 9781804611395

 

Creating Kubernetes Clusters

In the previous chapter, we learned what Kubernetes is all about: how it is designed, the concepts it supports, its architecture, and the various container runtimes it works with.

Creating a Kubernetes cluster from scratch is a non-trivial task. There are many options and tools to select from and many factors to consider. In this chapter, we will roll up our sleeves and build some Kubernetes clusters using Minikube, KinD, and k3d. We will discuss and evaluate other tools such as Kubeadm and Kubespray. We will also look into deployment environments such as local, cloud, and bare metal. The topics we will cover are as follows:

  • Getting ready for your first cluster
  • Creating a single-node cluster with Minikube
  • Creating a multi-node cluster with KinD
  • Creating a multi-node cluster using k3d
  • Creating clusters in the cloud
  • Creating bare-metal clusters from scratch
  • Reviewing other options for creating Kubernetes clusters

By the end of this chapter, you will have a solid understanding of the various options for creating Kubernetes clusters and the best-of-breed tools that support cluster creation, and you will have built several clusters yourself, both single-node and multi-node.

 

Getting ready for your first cluster

Before we start creating clusters, we should install a couple of tools such as the Docker client and kubectl. These days, the most convenient way to install Docker and kubectl on Mac and Windows is via Rancher Desktop. If you already have them installed, feel free to skip this section.

Installing Rancher Desktop

Rancher Desktop is a cross-platform desktop application that lets you run Docker on your local machine. It will install additional tools such as:

  • Helm
  • Kubectl
  • Nerdctl
  • Moby (open source Docker)
  • Docker Compose

Installation on macOS

The most streamlined way to install Rancher Desktop on macOS is via Homebrew:

brew install --cask rancher

Installation on Windows

The most streamlined way to install Rancher Desktop on Windows is via Chocolatey:

choco install rancher-desktop

Additional installation methods

For alternative methods to install Rancher Desktop, follow the instructions here:

https://docs.rancherdesktop.io/getting-started/installation/

Let’s verify Docker was installed correctly. Type the following command and make sure you don’t see any errors (the output doesn’t have to be identical if you installed a different version than mine):

$ docker version
Client:
 Version:           20.10.9
 API version:       1.41
 Go version:        go1.16.8
 Git commit:        c2ea9bc
 Built:             Thu Nov 18 21:17:06 2021
 OS/Arch:           darwin/arm64
 Context:           rancher-desktop
 Experimental:      true
Server:
 Engine:
  Version:          20.10.14
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.9
  Git commit:       87a90dc786bda134c9eb02adbae2c6a7342fb7f6
  Built:            Fri Apr 15 00:05:05 2022
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          v1.5.11
  GitCommit:        3df54a852345ae127d1fa3092b95168e4a88e2f8
 runc:
  Version:          1.0.2
  GitCommit:        52b36a2dd837e8462de8e01458bf02cf9eea47dd
 docker-init:
  Version:          0.19.0
  GitCommit:

And, while we’re at it, let’s verify kubectl has been installed correctly too:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6+k3s1", GitCommit:"418c3fa858b69b12b9cefbcff0526f666a6236b9", GitTreeState:"clean", BuildDate:"2022-04-28T22:16:58Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/arm64"}

The Server section may be empty if no active Kubernetes server is up and running. When you see this output, you can rest assured that kubectl is ready to go.

Meet kubectl

Before we start creating clusters, let’s talk about kubectl. It is the official Kubernetes CLI, and it interacts with your Kubernetes cluster’s API server via its API. It is configured by default using the ~/.kube/config file, which is a YAML file that contains metadata, connection info, and authentication tokens or certificates for one or more clusters. Kubectl provides commands to view your configuration and switch between clusters if it contains more than one. You can also point kubectl at a different config file by setting the KUBECONFIG environment variable or passing the --kubeconfig command-line flag.
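
For example, here are the standard commands to view your configuration and switch contexts (the context name minikube is just an example):

$ kubectl config view
$ kubectl config get-contexts
$ kubectl config use-context minikube

And two equivalent ways to point kubectl at a different config file (~/.kube/other-config is a hypothetical path):

$ KUBECONFIG=~/.kube/other-config kubectl get nodes
$ kubectl --kubeconfig ~/.kube/other-config get nodes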

The code below uses a kubectl command to check the pods in the kube-system namespace of the current active cluster:

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS         AGE
svclb-traefik-fv84n                       2/2     Running   6 (7d20h ago)    8d
local-path-provisioner-84bb864455-s2xmp   1/1     Running   20 (7d20h ago)   27d
metrics-server-ff9dbcb6c-lsffr            0/1     Running   88 (10h ago)     27d
coredns-d76bd69b-mc6cn                    1/1     Running   11 (22h ago)     8d
traefik-df4ff85d6-2fskv                   1/1     Running   7 (3d ago)       8d

Kubectl is great, but it is not the only game in town. Let’s look at some alternative tools.

Kubectl alternatives – K9S, KUI, and Lens

Kubectl is a no-nonsense command-line tool. It is very powerful, but some people find it difficult or less convenient to visually parse its output or to remember all the flags and options. There are many community-developed tools that can replace (or rather, complement) kubectl. The best ones, in my opinion, are K9S, KUI, and Lens.

K9S

K9S is a terminal-based UI for managing Kubernetes clusters. It has a lot of shortcuts and aggregated views that would otherwise require multiple kubectl commands.

Here is what the K9S window looks like:


Figure 2.1: K9S window

Check it out here: https://k9scli.io

KUI

KUI is a framework for adding graphics to CLIs (command-line interfaces). This is a very interesting concept. KUI is focused on Kubernetes, of course. It lets you run kubectl commands and returns the results as graphics. KUI also collects a lot of relevant information and presents it in a concise way, with tabs and detail panes to explore even deeper.

KUI is based on Electron, but it is fast.

Here is what the KUI window looks like:


Figure 2.2: KUI window

Check it out here: https://kui.tools

Lens

Lens is a very polished application. It also presents a graphical view of clusters and allows you to perform a lot of operations from the UI and drop to a terminal interface when necessary. I especially appreciate the ability to work easily with multiple clusters that Lens provides.

Here is what the Lens window looks like:


Figure 2.3: Lens window

Check it out here: https://k8slens.dev

All these tools run locally. I highly recommend that you start playing with kubectl and then give these tools a test drive. One of them may just be your speed.

In this section, we covered the installation of Rancher Desktop, introduced kubectl, and looked at some alternatives. We are now ready to create our first Kubernetes cluster.

 

Creating a single-node cluster with Minikube

In this section, we will create a local single-node cluster using Minikube. Local clusters are most useful for developers who want quick edit-test-deploy-debug cycles on their machine before committing their changes. Local clusters are also very useful for DevOps engineers and operators who want to play with Kubernetes locally without concerns about breaking a shared environment or creating expensive resources in the cloud and forgetting to clean them up. While Kubernetes is typically deployed on Linux in production, many developers work on Windows PCs or Macs. That said, there aren’t too many differences if you do want to install Minikube on Linux.


Figure 2.4: minikube

Quick introduction to Minikube

Minikube is the most mature local Kubernetes cluster. It runs the latest stable Kubernetes release. It supports Windows, macOS, and Linux. Minikube provides a lot of advanced options and capabilities:

  • LoadBalancer service type - via minikube tunnel
  • NodePort service type - via minikube service
  • Multiple clusters
  • Filesystem mounts
  • GPU support - for machine learning
  • RBAC
  • Persistent Volumes
  • Ingress
  • Dashboard - via minikube dashboard
  • Custom container runtimes - via the start --container-runtime flag
  • Configuring API server and kubelet options via command-line flags
  • Addons
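
To give you a taste of how these capabilities map to commands, here is a short sketch (it assumes a running cluster; the echo service is just a placeholder name):

$ minikube tunnel                                  # expose LoadBalancer services on the host
$ minikube service echo                            # open a NodePort service in your browser
$ minikube dashboard                               # launch the dashboard addon
$ minikube start --container-runtime=containerd    # use a custom container runtime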

Installing Minikube

The ultimate guide is here: https://minikube.sigs.k8s.io/docs/start/

But, to save you a trip, here are the latest instructions at the time of writing.

Installing Minikube on Windows

On Windows, I prefer to install software via the Chocolatey package manager. If you don’t have it yet, you can get it here: https://chocolatey.org/

If you don’t want to use Chocolatey, check the ultimate guide above for alternative methods.

With Chocolatey installed, the installation is pretty simple:

PS C:\Windows\system32> choco install minikube -y
Chocolatey v0.12.1
Installing the following packages:
minikube
By installing, you accept licenses for the packages.
Progress: Downloading Minikube 1.25.2... 100%
kubernetes-cli v1.24.0 [Approved]
kubernetes-cli package files install completed. Performing other installation steps.
Extracting 64-bit C:\ProgramData\chocolatey\lib\kubernetes-cli\tools\kubernetes-client-windows-amd64.tar.gz to C:\ProgramData\chocolatey\lib\kubernetes-cli\tools...
C:\ProgramData\chocolatey\lib\kubernetes-cli\tools
Extracting 64-bit C:\ProgramData\chocolatey\lib\kubernetes-cli\tools\kubernetes-client-windows-amd64.tar to C:\ProgramData\chocolatey\lib\kubernetes-cli\tools...
C:\ProgramData\chocolatey\lib\kubernetes-cli\tools
 ShimGen has successfully created a shim for kubectl-convert.exe
 ShimGen has successfully created a shim for kubectl.exe
 The install of kubernetes-cli was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-cli\tools'
Minikube v1.25.2 [Approved]
minikube package files install completed. Performing other installation steps.
 ShimGen has successfully created a shim for minikube.exe
 The install of minikube was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\Minikube'
Chocolatey installed 2/2 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

On Windows, you can work in different command-line environments. The most common ones are PowerShell and WSL (Windows Subsystem for Linux). Either one works. You may need to run them in Administrator mode for certain operations.

As far as console windows go, I recommend the official Windows Terminal these days. You can install it with one command:

choco install microsoft-windows-terminal --pre

If you prefer other console windows such as ConEmu or Cmder, that is totally fine.

I’ll use shortcuts to make life easy. If you want to follow along and copy the aliases into your profile, you can use the following for PowerShell and WSL.

For PowerShell, add the following to your $profile:

function k { kubectl.exe $args }
function mk { minikube.exe $args }

For WSL, add the following to .bashrc:

alias k='kubectl.exe'
alias mk='minikube.exe'

Let’s verify that minikube was installed correctly:

$ mk version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

Let’s create a cluster with mk start:

$ mk start
  minikube v1.25.2 on Microsoft Windows 10 Pro 10.0.19044 Build 19044
  Automatically selected the docker driver. Other choices: hyperv, ssh
  Starting control plane node minikube in cluster minikube
  Pulling base image ...
  Downloading Kubernetes v1.23.3 preload ...
    > preloaded-images-k8s-v17-v1...: 505.68 MiB / 505.68 MiB  100.00% 3.58 MiB
    > gcr.io/k8s-minikube/kicbase: 379.06 MiB / 379.06 MiB  100.00% 2.61 MiB p/
  Creating docker container (CPUs=2, Memory=8100MB) ...
  docker "minikube" container is missing, will recreate.
  Creating docker container (CPUs=2, Memory=8100MB) ...
  Downloading VM boot image ...
    > minikube-v1.25.2.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.25.2.iso: 237.06 MiB / 237.06 MiB [ 100.00% 12.51 MiB p/s 19s
  Starting control plane node minikube in cluster minikube
  Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
  This VM is having trouble accessing https://k8s.gcr.io
  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
  Enabled addons: storage-provisioner, default-storageclass
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

As you can see, the process is pretty complicated even for the default setup, and required multiple retries (automatically). You can customize the cluster creation process with a multitude of command-line flags. Type mk start -h to see what’s available.
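
For instance, a customized start might look like this (these are standard Minikube flags; the values are illustrative, so adjust them to your machine):

$ mk start --driver=hyperv --cpus=4 --memory=8g --kubernetes-version=v1.23.3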

Let’s check the status of our cluster:

$ mk status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

All is well!

Now let’s stop the cluster and later restart it:

$ mk stop
 Stopping node "minikube" ... 
 Powering off "minikube" via SSH ... 
 1 node stopped.

Restarting with the time command to measure how long it takes:

$ time mk start
  minikube v1.25.2 on Microsoft Windows 10 Pro 10.0.19044 Build 19044
  Using the hyperv driver based on existing profile
  Starting control plane node minikube in cluster minikube
  Restarting existing hyperv VM for "minikube" ...
  This VM is having trouble accessing https://k8s.gcr.io
  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
  Enabled addons: storage-provisioner, default-storageclass
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real    1m8.666s
user    0m0.004s
sys     0m0.000s

It took a little over a minute.

Let’s review what Minikube did behind the curtains for you. You’ll need to do a lot of it when creating a cluster from scratch:

  1. Started a Hyper-V VM
  2. Created certificates for the local machine and the VM
  3. Downloaded images
  4. Set up networking between the local machine and the VM
  5. Ran the local Kubernetes cluster on the VM
  6. Configured the cluster
  7. Started all the Kubernetes control plane components
  8. Configured the kubelet
  9. Enabled addons (for storage)
  10. Configured kubectl to talk to the cluster

Installing Minikube on macOS

On Mac, I recommend installing minikube using Homebrew:

$ brew install minikube
Running `brew update --preinstall`...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> Updated Formulae
Updated 39 formulae.
==> New Casks
contour                                                        hdfview                         rancher-desktop
==> Updated Casks
Updated 17 casks.
==> Downloading https://ghcr.io/v2/homebrew/core/kubernetes-cli/manifests/1.24.0
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/kubernetes-cli/blobs/sha256:e57f8f7ea19d22748d1bcae5cd02b91e71816147712e6dcd
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:e57f8f7ea19d22748d1bcae5cd02b91e71816147
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/minikube/manifests/1.25.2
Already downloaded: /Users/gigi.sayfan/Library/Caches/Homebrew/downloads/fa0034afe1330adad087a8b3dc9ac4917982d248b08a4df4cbc52ce01d5eabff--minikube-1.25.2.bottle_manifest.json
==> Downloading https://ghcr.io/v2/homebrew/core/minikube/blobs/sha256:6dee5f22e08636346258f4a6daa646e9102e384ceb63f33981745d
Already downloaded: /Users/gigi.sayfan/Library/Caches/Homebrew/downloads/ceeab562206fd08fd3b6523a85b246d48d804b2cd678d76cbae4968d97b5df1f--minikube--1.25.2.arm64_monterey.bottle.tar.gz
==> Installing dependencies for minikube: kubernetes-cli
==> Installing minikube dependency: kubernetes-cli
==> Pouring kubernetes-cli--1.24.0.arm64_monterey.bottle.tar.gz
  /opt/homebrew/Cellar/kubernetes-cli/1.24.0: 228 files, 55.3MB
==> Installing minikube
==> Pouring minikube--1.25.2.arm64_monterey.bottle.tar.gz
==> Caveats
zsh completions have been installed to:
  /opt/homebrew/share/zsh/site-functions
==> Summary
  /opt/homebrew/Cellar/minikube/1.25.2: 9 files, 70.3MB
==> Running `brew cleanup minikube`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
==> Caveats
==> minikube
zsh completions have been installed to:
  /opt/homebrew/share/zsh/site-functions

You can add aliases to your .bashrc file (similar to the WSL aliases on Windows):

alias k='kubectl'
alias mk='$(brew --prefix)/bin/minikube'

Now you can use k and mk and type less.

Type mk version to verify Minikube is correctly installed and functioning:

$ mk version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

Type k version to verify kubectl is correctly installed and functioning:

$ k version
I0522 15:41:13.663004   68055 versioner.go:58] invalid configuration: no configuration has been provided
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Note that the client version is 1.23. Don’t worry about the error message. There is no cluster running, so kubectl can’t connect to anything. That’s expected. The error message will disappear when we create the cluster.

You can explore the available commands and flags for both Minikube and kubectl by just typing the commands with no arguments.

To create the cluster on macOS, just run mk start.

Troubleshooting the Minikube installation

If something goes wrong during the process, try to follow the error messages. You can add the --alsologtostderr flag to get detailed error info to the console. Everything minikube does is organized neatly under ~/.minikube. Here is the directory structure:

$ tree ~/.minikube -L 2
C:\Users\the_g\.minikube\
|-- addons
|-- ca.crt
|-- ca.key
|-- ca.pem
|-- cache
|   |-- iso
|   |-- kic
|   `-- preloaded-tarball
|-- cert.pem
|-- certs
|   |-- ca-key.pem
|   |-- ca.pem
|   |-- cert.pem
|   `-- key.pem
|-- config
|-- files
|-- key.pem
|-- logs
|   |-- audit.json
|   `-- lastStart.txt
|-- machine_client.lock
|-- machines
|   |-- minikube
|   |-- server-key.pem
|   `-- server.pem
|-- profiles
|   `-- minikube
|-- proxy-client-ca.crt
`-- proxy-client-ca.key
13 directories, 16 files

If you don’t have the tree utility, you can install it.

On Windows: choco install -y tree

On macOS: brew install tree

Checking out the cluster

Now that we have a cluster up and running, let’s peek inside.

First, let’s ssh into the VM:

$ mk ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ uname -a
Linux minikube 4.19.202 #1 SMP Tue Feb 8 19:13:02 UTC 2022 x86_64 GNU/Linux
$

Great! That works. The weird symbols are ASCII art for “minikube.” Now, let’s start using kubectl because it is the Swiss Army knife of Kubernetes and will be useful for all clusters.

Disconnect from the VM via Ctrl+D or by typing:

$ logout

We will cover many of the kubectl commands in our journey. First, let’s check the cluster status using cluster-info:

$ k cluster-info
Kubernetes control plane is running at https://172.26.246.89:8443
CoreDNS is running at https://172.26.246.89:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can see that the control plane is running properly. To see a much more detailed view of all the objects in the cluster as JSON, type: k cluster-info dump. The output can be a little daunting, so let’s use more specific commands to explore the cluster.

Let’s check out the nodes in the cluster using get nodes:

$ k get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   62m   v1.23.3

So, we have one node called minikube. To get a lot more information about it, type:

k describe node minikube

The output is verbose; I’ll let you try it yourself.
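
If you only need a specific field from that verbose output, jsonpath is handy. For example, to extract just the kubelet version (matching the VERSION column above):

$ k get node minikube -o jsonpath='{.status.nodeInfo.kubeletVersion}'
v1.23.3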

Before we start putting our cluster to work, let’s check the addons minikube installed by default:

$ mk addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | third-party (ambassador)       |
| auto-pause                  | minikube | disabled     | google                         |
| csi-hostpath-driver         | minikube | disabled     | kubernetes                     |
| dashboard                   | minikube | disabled     | kubernetes                     |
| default-storageclass        | minikube | enabled      | kubernetes                     |
| efk                         | minikube | disabled     | third-party (elastic)          |
| freshpod                    | minikube | disabled     | google                         |
| gcp-auth                    | minikube | disabled     | google                         |
| gvisor                      | minikube | disabled     | google                         |
| helm-tiller                 | minikube | disabled     | third-party (helm)             |
| ingress                     | minikube | disabled     | unknown (third-party)          |
| ingress-dns                 | minikube | disabled     | google                         |
| istio                       | minikube | disabled     | third-party (istio)            |
| istio-provisioner           | minikube | disabled     | third-party (istio)            |
| kong                        | minikube | disabled     | third-party (Kong HQ)          |
| kubevirt                    | minikube | disabled     | third-party (kubevirt)         |
| logviewer                   | minikube | disabled     | unknown (third-party)          |
| metallb                     | minikube | disabled     | third-party (metallb)          |
| metrics-server              | minikube | disabled     | kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | third-party (nvidia)           |
| olm                         | minikube | disabled     | third-party (operator          |
|                             |          |              | framework)                     |
| pod-security-policy         | minikube | disabled     | unknown (third-party)          |
| portainer                   | minikube | disabled     | portainer.io                   |
| registry                    | minikube | disabled     | google                         |
| registry-aliases            | minikube | disabled     | unknown (third-party)          |
| registry-creds              | minikube | disabled     | third-party (upmc enterprises) |
| storage-provisioner         | minikube | enabled      | google                         |
| storage-provisioner-gluster | minikube | disabled     | unknown (third-party)          |
| volumesnapshots             | minikube | disabled     | kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

As you can see, minikube comes loaded with a lot of addons, but only enables a couple of storage addons out of the box.
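
Enabling any other addon is a one-liner. For example, to enable the metrics-server addon (we will enable the dashboard addon the same way later in this chapter):

$ mk addons enable metrics-server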

Doing work

Before we start, if you have a VPN running, you may need to shut it down when pulling images.

We have a nice empty cluster up and running (well, not completely empty, as the DNS service and dashboard run as pods in the kube-system namespace). It’s time to deploy some pods:

$ k create deployment echo --image=k8s.gcr.io/e2e-test-images/echoserver:2.5 
deployment.apps/echo created

Let’s check out the pod that was created. The -w flag means watch. Whenever the status changes, a new line will be displayed:

$ k get po -w
NAME                    READY   STATUS              RESTARTS   AGE
echo-7fd7648898-6hh48   0/1     ContainerCreating   0          5s
echo-7fd7648898-6hh48   1/1     Running             0          6s

To expose our pod as a service, type the following:

$ k expose deployment echo --type=NodePort --port=8080
service/echo exposed

Exposing the service as type NodePort means that it is exposed to the host on some port, but not the 8080 port the pod listens on; ports get mapped in the cluster. To access the service, we need the cluster IP and the exposed node port:

$ mk ip
172.26.246.89
$  k get service echo -o jsonpath='{.spec.ports[0].nodePort}'
32649

Now we can access the echo service, which returns a lot of information:

$ curl http://172.26.246.89:32649/hi
Hostname: echo-7fd7648898-6hh48
Pod Information:
        -no pod information available-
Server values:
        server_version=nginx: 1.14.2 - lua: 10015
Request Information:
        client_address=172.17.0.1
        method=GET
        real path=/hi
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://172.26.246.89:8080/hi
Request Headers:
        accept=*/*
        host=172.26.246.89:32649
        user-agent=curl/7.79.1
Request Body:
        -no body in request-
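
If you prefer a one-liner, you can combine both lookups into the URL (a small convenience sketch using the same service and path as above):

$ curl http://$(mk ip):$(k get service echo -o jsonpath='{.spec.ports[0].nodePort}')/hi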

Congratulations! You just created a local Kubernetes cluster, deployed a service, and exposed it to the world.

Examining the cluster with the dashboard

Kubernetes has a very nice web interface, which is deployed, of course, as a service in a pod. The dashboard is well designed and provides a high-level overview of your cluster, as well as the ability to drill down into individual resources, view logs, edit resource files, and more. It is the perfect weapon when you want to check out your cluster manually and don’t have local tools like KUI or Lens. Minikube provides it as an addon.

Let’s enable it:

$ mk addons enable dashboard
    ▪ Using image kubernetesui/dashboard:v2.3.1
    ▪ Using image kubernetesui/metrics-scraper:v1.0.7
  Some dashboard features require the metrics-server addon. To enable all features please run:
        minikube addons enable metrics-server
  The 'dashboard' addon is enabled

To launch it, type:

$ mk dashboard
  Verifying dashboard health ...
  Launching proxy ...
  Verifying proxy health ...
  Opening http://127.0.0.1:63200/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

Minikube will open a browser window with the dashboard UI.

Here is the Workloads view, which displays Deployments, Replica Sets, and Pods.


Figure 2.5: Workloads dashboard

It can also display daemon sets, stateful sets, and jobs, but we don’t have any in this cluster.

To delete the cluster we created, type:

$ mk delete
  Deleting "minikube" in docker ...
  Deleting container "minikube" ...
  Removing /Users/gigi.sayfan/.minikube/machines/minikube ...
  Removed all traces of the "minikube" cluster.

In this section, we created a local single-node Kubernetes cluster on Windows, explored it a little bit using kubectl, deployed a service, and played with the web UI. In the next section, we’ll move to a multi-node cluster.

 

Creating a multi-node cluster with KinD

In this section, we’ll create a multi-node cluster using KinD. We will also repeat the deployment of the echo server we deployed on Minikube and observe the differences. Spoiler alert - everything will be faster and easier!

Quick introduction to KinD

KinD stands for Kubernetes in Docker. It is a tool for creating ephemeral clusters (no persistent storage). It was built primarily for running the Kubernetes conformance tests. It supports Kubernetes 1.11+. Under the covers, it uses kubeadm to bootstrap Docker containers as nodes in the cluster. KinD is a combination of a library and a CLI. You can use the library in your code for testing or other purposes. KinD can create highly available clusters with multiple control plane nodes. Finally, KinD is a CNCF conformant Kubernetes installer. It had better be if it’s used for the conformance tests of Kubernetes itself.

KinD is super fast to start, but it has some limitations too:

  • No persistent storage
  • No support for alternative runtimes yet, only Docker

Let’s install KinD and get going.

Installing KinD

You must have Docker installed, as each KinD node literally runs as a Docker container. If you have Go installed, you can install the KinD CLI via:

go install sigs.k8s.io/kind@v0.14.0

Otherwise, on macOS type:

brew install kind

On Windows type:

choco install kind
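
Either way, you can verify the installation with:

$ kind version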

Dealing with Docker contexts

You may have multiple Docker engines on your system and the Docker context determines which one is used. You may get an error like:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

In this case, check your Docker contexts:

$ docker context ls
NAME              DESCRIPTION                               DOCKER ENDPOINT                                 KUBERNETES ENDPOINT                ORCHESTRATOR
colima            colima                                    unix:///Users/gigi.sayfan/.colima/docker.sock
default *         Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                     https://127.0.0.1:6443 (default)   swarm
rancher-desktop   Rancher Desktop moby context              unix:///Users/gigi.sayfan/.rd/docker.sock       https://127.0.0.1:6443 (default)

The context marked with * is the current context. If you use Rancher Desktop, then you should set the context to rancher-desktop:

$ docker context use rancher-desktop

Creating a cluster with KinD

Creating a cluster is super easy:

$ kind create cluster
Creating cluster "kind" ...
  Ensuring node image (kindest/node:v1.23.4) 
  Preparing nodes 
  Writing configuration 
  Starting control-plane 
  Installing CNI 
  Installing StorageClass 
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 

It takes less than 30 seconds to create a single-node cluster.

Now, we can access the cluster using kubectl:

$ k config current-context
kind-kind
$ k cluster-info
Kubernetes control plane is running at https://127.0.0.1:51561
CoreDNS is running at https://127.0.0.1:51561/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

KinD adds its kube context to the default ~/.kube/config file by default. When creating a lot of temporary clusters, it is sometimes better to store the KinD contexts in separate files and avoid cluttering ~/.kube/config. This is easily done by passing the --kubeconfig flag with a file path.

So, KinD creates a single-node cluster by default:

$ k get no
NAME                 STATUS   ROLES                  AGE   VERSION
kind-control-plane   Ready    control-plane,master   4m   v1.23.4

Let’s delete it and create a multi-node cluster:

$ kind delete cluster 
Deleting cluster "kind" ...

To create a multi-node cluster, we need to provide a configuration file with the specification of our nodes. Here is a configuration file that will create a cluster called multi-node-cluster with one control plane node and two worker nodes:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: multi-node-cluster
nodes:
- role: control-plane
- role: worker
- role: worker

Let’s save the configuration file as kind-multi-node-config.yaml and create the cluster, storing its kubeconfig in a separate file, $TMPDIR/kind-multi-node-config:

$ kind create cluster --config kind-multi-node-config.yaml --kubeconfig $TMPDIR/kind-multi-node-config
Creating cluster "multi-node-cluster" ...
  Ensuring node image (kindest/node:v1.23.4) 
  Preparing nodes 
  Writing configuration 
  Starting control-plane 
  Installing CNI 
  Installing StorageClass 
  Joining worker nodes 
Set kubectl context to "kind-multi-node-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-multi-node-cluster --kubeconfig /var/folders/qv/7l781jhs6j19gw3b89f4fcz40000gq/T//kind-multi-node-config
Have a nice day! 

Yeah, it works! And we got a local 3-node cluster in less than a minute:

$ k get nodes --kubeconfig $TMPDIR/kind-multi-node-config
NAME                               STATUS   ROLES                  AGE     VERSION
multi-node-cluster-control-plane   Ready    control-plane,master   2m17s   v1.23.4
multi-node-cluster-worker          Ready    <none>                 100s    v1.23.4
multi-node-cluster-worker2         Ready    <none>                 100s    v1.23.4

KinD is also kind enough (see what I did there) to let us create HA (highly available) clusters with multiple control plane nodes for redundancy. If you want a highly available cluster with three control plane nodes and two worker nodes, your cluster config file will be very similar:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ha-multi-node-cluster
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker

Let’s save the configuration file as kind-ha-multi-node-config.yaml and create a new HA cluster:

$ kind create cluster --config kind-ha-multi-node-config.yaml --kubeconfig $TMPDIR/kind-ha-multi-node-config
Creating cluster "ha-multi-node-cluster" ...
  Ensuring node image (kindest/node:v1.23.4) 
  Preparing nodes 
  Configuring the external load balancer 
  Writing configuration 
  Starting control-plane 
  Installing CNI 
  Installing StorageClass 
  Joining more control-plane nodes 
  Joining worker nodes 
Set kubectl context to "kind-ha-multi-node-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-ha-multi-node-cluster --kubeconfig /var/folders/qv/7l781jhs6j19gw3b89f4fcz40000gq/T//kind-ha-multi-node-config
Not sure what to do next?  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

Hmmm... there is something new here. Now KinD creates an external load balancer and joins more control plane nodes before joining the worker nodes. The load balancer is necessary to distribute requests across all the control plane nodes.

Note that the external load balancer doesn’t show as a node using kubectl:

$ k get nodes --kubeconfig $TMPDIR/kind-ha-multi-node-config
NAME                                   STATUS   ROLES                  AGE     VERSION
ha-multi-node-cluster-control-plane    Ready    control-plane,master   3m31s   v1.23.4
ha-multi-node-cluster-control-plane2   Ready    control-plane,master   3m19s   v1.23.4
ha-multi-node-cluster-control-plane3   Ready    control-plane,master   2m22s   v1.23.4
ha-multi-node-cluster-worker           Ready    <none>                 2m4s    v1.23.4
ha-multi-node-cluster-worker2          Ready    <none>                 2m5s    v1.23.4

But, KinD has its own get nodes command, where you can see the load balancer:

$ kind get nodes --name ha-multi-node-cluster
ha-multi-node-cluster-control-plane2
ha-multi-node-cluster-external-load-balancer
ha-multi-node-cluster-control-plane
ha-multi-node-cluster-control-plane3
ha-multi-node-cluster-worker
ha-multi-node-cluster-worker2

Our KinD cluster is up and running; let’s put it to work.

Doing work with KinD

Let’s deploy our echo service on the KinD cluster. It starts the same:

$ k create deployment echo --image=g1g1/echo-server:0.1 --kubeconfig $TMPDIR/kind-ha-multi-node-config
deployment.apps/echo created
$ k expose deployment echo --type=NodePort --port=7070 --kubeconfig $TMPDIR/kind-ha-multi-node-config
service/echo exposed

Checking our services, we can see the echo service front and center:

$ k get svc echo --kubeconfig $TMPDIR/kind-ha-multi-node-config
NAME   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
echo   NodePort   10.96.52.33   <none>        7070:31953/TCP   10s

But, there is no external IP for the service. With minikube, we got the IP of the minikube node itself via $(minikube ip), and we could use it in combination with the node port to access the service. That is not an option with KinD clusters. Let’s see how to use a proxy to access the echo service.

Accessing Kubernetes services locally through a proxy

We will go into a lot of detail about networking, services, and how to expose them outside the cluster later in the book.

Here, we will just show you how to get it done and keep you in suspense for now. First, we need to run the kubectl proxy command that exposes the API server, pods, and services on localhost:

$ k proxy --kubeconfig $TMPDIR/kind-ha-multi-node-config &
[1] 32479
Starting to serve on 127.0.0.1:8001

Then, we can access the echo service through a specially crafted proxy URL that includes the exposed service port (7070) and NOT the node port:

$ http http://localhost:8001/api/v1/namespaces/default/services/echo:7070/proxy/yeah-it-works
HTTP/1.1 200 OK
Audit-Id: 294cf10b-0d60-467d-8a51-4414834fc173
Cache-Control: no-cache, private
Content-Length: 13
Content-Type: text/plain; charset=utf-8
Date: Mon, 23 May 2022 21:54:01 GMT
yeah-it-works

I used httpie in the command above. You can use curl too. To install httpie, follow the instructions here: https://httpie.org/doc#installation.
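
For reference, the equivalent curl command hits the same proxy URL:

$ curl http://localhost:8001/api/v1/namespaces/default/services/echo:7070/proxy/yeah-it-works
yeah-it-works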

We will deep dive into exactly what’s going on in Chapter 10, Exploring Kubernetes Networking. For now, it is enough to demonstrate how kubectl proxy allows us to access our KinD services.

Let’s check out my favorite local cluster solution – k3d.

 

Creating a multi-node cluster with k3d

In this section, we’ll create a multi-node cluster using k3d from Rancher. We will not repeat the deployment of the echo server because it’s identical to the KinD cluster, including accessing it through a proxy. Spoiler alert – creating clusters with k3d is even faster and more user-friendly than KinD!

Quick introduction to k3s and k3d

Rancher created k3s, which is a lightweight Kubernetes distribution. Rancher says that k3s is 5 less than k8s, if that makes any sense. The basic idea is to remove features and capabilities that most people don’t need, such as:

  • Non-default features
  • Legacy features
  • Alpha features
  • In-tree storage drivers
  • In-tree cloud providers

K3s removed Docker completely and uses containerd instead. You can still bring Docker back if you depend on it. Another major change is that k3s stores its state in an SQLite DB instead of etcd. For networking and DNS, k3s uses Flannel and CoreDNS.

K3s also added a simplified installer that takes care of SSL and certificate provisioning.

The end result is astonishing – a single binary (less than 40MB) that needs only 512MB of memory.

Unlike Minikube and KinD, k3s is actually designed for production. The primary use case is for edge computing, IoT, and CI systems. It is optimized for ARM devices.

OK. That’s k3s, but what’s k3d? K3d takes all the goodness that is k3s and packages it in Docker (similar to KinD) and adds a friendly CLI to manage it.

Let’s install k3d and see for ourselves.

Installing k3d

Installing k3d on macOS is as simple as:

brew install k3d

And on Windows, it is just:

choco install -y k3d

On Windows, optionally add this alias to your WSL .bashrc file:

alias k3d='k3d.exe'

Let’s see what we have:

$ k3d version
k3d version v5.4.1
k3s version v1.22.7-k3s1 (default)

As you see, k3d reports its version, which shows all is well. Now, we can create a cluster with k3d.

Creating the cluster with k3d

Are you ready to be amazed? Creating a single-node cluster with k3d takes less than 20 seconds!

$ time k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] HostIP: using network gateway 172.19.0.1 address
INFO[0002] Starting cluster 'k3s-default'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-k3s-default-server-0'
INFO[0008] All agents already running.
INFO[0008] Starting helpers...
INFO[0008] Starting Node 'k3d-k3s-default-serverlb'
INFO[0015] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0017] Cluster 'k3s-default' created successfully!
INFO[0018] You can now use it like this:
kubectl cluster-info
real    0m18.154s
user    0m0.005s
sys     0m0.000s

Without a load balancer, it takes less than 8 seconds!

What about multi-node clusters? We saw that KinD was much slower, especially when creating an HA cluster with multiple control plane nodes and an external load balancer.

Let’s delete the single-node cluster first:

$ k3d cluster delete
INFO[0000] Deleting cluster 'k3s-default'
INFO[0000] Deleting cluster network 'k3d-k3s-default'
INFO[0000] Deleting 2 attached volumes...
WARN[0000] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually
INFO[0000] Removing cluster details from default kubeconfig...
INFO[0000] Removing standalone kubeconfig file (if there is one)...
INFO[0000] Successfully deleted cluster k3s-default!

Now, let’s create a cluster with 3 worker nodes. That takes a little over 30 seconds:

$ time k3d cluster create --agents 3
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating node 'k3d-k3s-default-agent-0'
INFO[0002] Creating node 'k3d-k3s-default-agent-1'
INFO[0002] Creating node 'k3d-k3s-default-agent-2'
INFO[0002] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] HostIP: using network gateway 172.22.0.1 address
INFO[0002] Starting cluster 'k3s-default'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-k3s-default-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting Node 'k3d-k3s-default-agent-0'
INFO[0008] Starting Node 'k3d-k3s-default-agent-2'
INFO[0008] Starting Node 'k3d-k3s-default-agent-1'
INFO[0018] Starting helpers...
INFO[0019] Starting Node 'k3d-k3s-default-serverlb'
INFO[0029] Injecting records for hostAliases (incl. host.k3d.internal) and for 5 network members into CoreDNS configmap...
INFO[0032] Cluster 'k3s-default' created successfully!
INFO[0032] You can now use it like this:
kubectl cluster-info
real    0m32.512s
user    0m0.005s
sys     0m0.000s

Let’s verify the cluster works as expected:

$ k cluster-info
Kubernetes control plane is running at https://0.0.0.0:60490
CoreDNS is running at https://0.0.0.0:60490/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:60490/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Here are the nodes. Note that there is just one control plane node called k3d-k3s-default-server-0:

$ k get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   5m33s   v1.22.7+k3s1
k3d-k3s-default-agent-0    Ready    <none>                 5m30s   v1.22.7+k3s1
k3d-k3s-default-agent-2    Ready    <none>                 5m30s   v1.22.7+k3s1
k3d-k3s-default-agent-1    Ready    <none>                 5m29s   v1.22.7+k3s1

You can stop and start clusters, create multiple clusters, and list existing clusters using the k3d CLI. Here are all the commands. Feel free to explore further:

$ k3d
Usage:
  k3d [flags]
  k3d [command]
Available Commands:
  cluster      Manage cluster(s)
  completion   Generate completion scripts for [bash, zsh, fish, powershell | psh]
  config       Work with config file(s)
  help         Help about any command
  image        Handle container images.
  kubeconfig   Manage kubeconfig(s)
  node         Manage node(s)
  registry     Manage registry/registries
  version      Show k3d and default k3s version
Flags:
  -h, --help         help for k3d
      --timestamps   Enable Log timestamps
      --trace        Enable super verbose output (trace logging)
      --verbose      Enable verbose output (debug logging)
      --version      Show k3d and default k3s version
Use "k3d [command] --help" for more information about a command.

You can repeat the steps for deploying, exposing, and accessing the echo service on your own. It works just like KinD.

OK. We created clusters using Minikube, KinD, and k3d. Let’s compare them, so you can decide which one works for you.

 

Comparing Minikube, KinD, and k3d

Minikube is an official local Kubernetes release. It’s very mature and very full-featured. That said, it requires a VM and is both slow to install and to start. It also can get into trouble with networking at arbitrary times, and sometimes the only remedy is deleting the cluster and rebooting. Also, minikube supports a single node only. I suggest using Minikube only if it supports some features that you need that are not available in either KinD or k3d. See https://minikube.sigs.k8s.io/ for more info.

KinD is much faster than Minikube, and it is used for Kubernetes conformance tests, so by definition it is a conformant Kubernetes distribution. It is the only local cluster solution that provides an HA cluster with multiple control plane nodes. It is also designed to be used as a library, which I don’t find to be a big attraction, because it is very easy to automate CLIs from code. The main downside of KinD for local development is that it is ephemeral. I recommend using KinD if you contribute to Kubernetes itself and want to test against it. See https://kind.sigs.k8s.io/.

k3d is the clear winner for me. It is lightning fast, and it supports multiple clusters and multiple worker nodes per cluster. It is also easy to stop and start clusters without losing state. See https://k3d.io/.

Honorable mention – Rancher Desktop Kubernetes cluster

I use Rancher Desktop as my Docker Engine provider, but it also comes with a built-in Kubernetes cluster. You don’t get to customize it and you can’t have multiple clusters or even multiple nodes in the same cluster. But, if all you need is a local single-node Kubernetes cluster to play with, then the rancher-desktop cluster is there for you.

To use this cluster, type:

$ kubectl config use-context rancher-desktop
Switched to context "rancher-desktop".

You can decide how many resources you allocate to its node, which is important if you try to deploy a lot of workloads on it because you get just the one node.


Figure 2.6: Rancher Desktop – Kubernetes settings

In this section, we covered creating Kubernetes clusters locally using Minikube, KinD, and K3d. In the next section, we will look at creating clusters in the cloud.


Creating clusters in the cloud (GCP, AWS, Azure, and Digital Ocean)

Creating clusters locally is fun. It’s also important during development and when trying to troubleshoot problems locally. But, in the end, Kubernetes is designed for cloud-native applications (applications that run in the cloud). Kubernetes doesn’t want to be aware of individual cloud environments because that doesn’t scale. Instead, Kubernetes has the concept of a cloud-provider interface. Every cloud provider can implement this interface and then host Kubernetes.

The cloud-provider interface

The cloud-provider interface is a collection of Go data types and interfaces. It is defined in a file called cloud.go, available at: https://github.com/kubernetes/cloud-provider/blob/master/cloud.go.

Here is the main interface:

type Interface interface {
    Initialize(clientBuilder ControllerClientBuilder, stop <-chan struct{})
    LoadBalancer() (LoadBalancer, bool)
    Instances() (Instances, bool)
    InstancesV2() (InstancesV2, bool)
    Zones() (Zones, bool)
    Clusters() (Clusters, bool)
    Routes() (Routes, bool)
    ProviderName() string
    HasClusterID() bool
}

This is very clear. Kubernetes operates in terms of instances, zones, clusters, and routes, and it also requires access to a load balancer and a provider name. The main interface is primarily a gateway; most of its methods return yet other interfaces.

For example, the Clusters() method returns the Clusters interface, which is very simple:

type Clusters interface {
    ListClusters(ctx context.Context) ([]string, error)
    Master(ctx context.Context, clusterName string) (string, error)
}

The ListClusters() method returns a list of cluster names. The Master() method returns the IP address or DNS name of the control plane of a given cluster.

The other interfaces are not much more complicated. The entire file is 313 lines long (at the time of writing), including lots of comments. The take-home point is that it is not too complicated to implement a Kubernetes provider if your cloud utilizes those basic concepts.
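To make this concrete, here is a minimal sketch of what a Clusters implementation might look like for a hypothetical provider that serves a fixed set of clusters. A real provider would call its cloud API instead, and all the names here are illustrative:

package myprovider

import (
    "context"
    "errors"
)

// staticClusters is a hypothetical, minimal implementation of the
// Clusters interface, backed by a fixed map of cluster names to
// control plane addresses.
type staticClusters struct {
    masters map[string]string
}

// ListClusters returns the names of all known clusters.
func (s *staticClusters) ListClusters(ctx context.Context) ([]string, error) {
    names := make([]string, 0, len(s.masters))
    for name := range s.masters {
        names = append(names, name)
    }
    return names, nil
}

// Master returns the control plane address of the given cluster.
func (s *staticClusters) Master(ctx context.Context, clusterName string) (string, error) {
    addr, ok := s.masters[clusterName]
    if !ok {
        return "", errors.New("unknown cluster: " + clusterName)
    }
    return addr, nil
}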

Creating Kubernetes clusters in the cloud

Before we look at the cloud providers and their support for managed and non-managed Kubernetes, let’s consider how you should create and maintain clusters. If you commit to a single cloud provider, and you are happy with using their tooling, then you are set. All cloud providers let you create and configure Kubernetes clusters using either a Web UI, a CLI, or an API. However, if you prefer a more general approach and want to utilize GitOps to manage your clusters, you should look into Infrastructure as Code solutions such as Terraform and Pulumi.

If you prefer to roll out non-managed Kubernetes clusters in the cloud, then kOps is a strong candidate. See: https://kops.sigs.k8s.io.

Later, in Chapter 17, Running Kubernetes in Production, we will discuss in detail the topic of multi-cluster provisioning and management. There are many technologies, open source projects, and commercial products in this space.

For now, let’s look at the various cloud providers.

GCP

The Google Cloud Platform (GCP) supports Kubernetes out of the box. Google Kubernetes Engine (GKE) is a container management solution built on Kubernetes. You don’t need to install Kubernetes on GCP, and you can use the Google Cloud API to create Kubernetes clusters and provision them. The fact that Kubernetes is a built-in part of GCP means it will always be well integrated and well tested, and you don’t have to worry about changes in the underlying platform breaking the cloud-provider interface.
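For example, creating a GKE cluster and fetching its credentials boils down to a couple of gcloud commands (the cluster name and zone here are just placeholders):

$ gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3
$ gcloud container clusters get-credentials my-cluster --zone us-central1-a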

If you prefer to manage Kubernetes yourself, then you can just deploy it directly on GCP instances (or use kOps alpha support for GCP), but I would generally advise against it as GKE does a lot of work for you and it’s integrated deeply with GCP compute, networking, and core services.

All in all, if you plan to base your system on Kubernetes and you don’t have any existing code on other cloud platforms, then GCP is a solid choice. It leads the pack in terms of maturity, polish, and depth of integration to GCP services, and is usually the first to update to newer versions of Kubernetes.

I spent a lot of time with Kubernetes on GKE, managing tens of clusters, upgrading them, and deploying workloads. GKE is production-grade Kubernetes for sure.

GKE Autopilot

GKE also has Autopilot, which takes care of managing worker nodes and node pools for you, so you can focus on deploying and configuring workloads.

See: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview.
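Creating an Autopilot cluster looks much the same as a standard GKE cluster, except you don’t specify nodes at all because GKE manages them for you (name and region are placeholders):

$ gcloud container clusters create-auto my-autopilot-cluster --region us-central1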

AWS

AWS has its own container management service called ECS, which is not based on Kubernetes. It also has a managed Kubernetes service called EKS. You can run Kubernetes yourself on AWS EC2 instances. Let’s talk about how to roll your own Kubernetes first and then we’ll discuss EKS.

Kubernetes on EC2

AWS was a supported cloud provider from the get-go. There is a lot of documentation on how to set it up. While you could provision some EC2 instances yourself and use kubeadm to create a cluster, I recommend using the kOps (Kubernetes Operations) project mentioned earlier. kOps initially supported only AWS and is generally considered the most battle-tested and feature-rich tool for self-provisioning Kubernetes clusters on AWS (without using EKS).

It supports the following features:

  • Automated Kubernetes cluster CRUD for the cloud (AWS)
  • HA Kubernetes clusters
  • Uses a state-sync model for dry-run and automatic idempotency
  • Custom support for kubectl addons
  • kOps can generate Terraform configuration
  • Based on a simple meta-model defined in a directory tree
  • Easy command-line syntax
  • Community support

To create a cluster, you need to do some IAM and DNS configuration, set up an S3 bucket to store the cluster configuration, and then run a single command:

kops create cluster \
    --name=${NAME} \
    --cloud=aws \
    --zones=us-west-2a \
    --discovery-store=s3://prefix-example-com-oidc-store/${NAME}/discovery
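Note that kops create cluster only records the cluster spec in the state store. To actually build the cluster and verify that it came up healthy, you would follow up with something like:

$ kops update cluster --name ${NAME} --yes
$ kops validate cluster --wait 10m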

The complete instructions are here: https://kops.sigs.k8s.io/getting_started/aws/.

At the end of 2017, AWS joined the CNCF and made two big announcements regarding Kubernetes: its own Kubernetes-based container orchestration solution (EKS) and a container-on-demand solution (Fargate).

Amazon EKS

Amazon Elastic Kubernetes Service (EKS) is a fully managed and highly available Kubernetes solution. It runs three control plane nodes across three AZs. EKS also takes care of upgrades and patching. The great thing about EKS is that it runs stock Kubernetes. This means you can use all the standard plugins and tools developed by the community. It also opens the door to convenient cluster federation with other cloud providers and/or your own on-premise Kubernetes clusters.

EKS provides deep integration with AWS infrastructure like IAM authentication, which is integrated with Kubernetes Role-Based Access Control (RBAC).

You can also use PrivateLink if you want to access your Kubernetes masters directly from your own Amazon VPC. With PrivateLink, your Kubernetes control plane and the Amazon EKS service endpoint appear as an elastic network interface with private IP addresses in your Amazon VPC.

Another important piece of the puzzle is a special CNI plugin (the AWS VPC CNI), which lets your pods and Kubernetes components talk to each other using native AWS VPC networking.

EKS keeps getting better and Amazon demonstrated that it is committed to keeping it up to date and improving it. If you are an AWS shop and getting into Kubernetes, I recommend starting with EKS as opposed to building your own cluster.

The eksctl tool is a great CLI for creating and managing EKS clusters and node groups for testing and development. I successfully created, deleted, and added nodes to several Kubernetes clusters on AWS using eksctl. Check out https://eksctl.io/.
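For example, spinning up a basic development cluster with three worker nodes can be as simple as this (name and region are placeholders):

$ eksctl create cluster --name dev-cluster --region us-west-2 --nodes 3

Tearing it down again is just as easy:

$ eksctl delete cluster --name dev-cluster --region us-west-2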

Fargate

Fargate lets you run containers directly without worrying about provisioning hardware. It eliminates a huge part of the operational complexity at the cost of losing some control. When using Fargate, you package your application into a container, specify CPU and memory requirements, define networking and IAM policies, and you’re off to the races. Fargate can run on top of ECS and EKS. It is a very interesting member of the serverless camp although it’s not specific to Kubernetes like GKE’s Autopilot.

Azure

Azure used to have its own container management service that could use the Mesos-based DC/OS or Docker Swarm to manage your containers. But you can also use Kubernetes, of course. You could also provision the cluster yourself (for example, using Azure’s desired state configuration) and then create the Kubernetes cluster using kubeadm. kOps has alpha support for Azure, and the Kubespray project is a good option too.

However, in the second half of 2017, Azure jumped on the Kubernetes bandwagon too and introduced AKS (Azure Kubernetes Service). It is similar to Amazon EKS, although it is a little further ahead in its implementation.

AKS provides a Web UI, CLI, and REST API to manage your Kubernetes clusters. Once an AKS cluster is configured, you can use kubectl and any other Kubernetes tooling directly.
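For example, with the Azure CLI, creating a cluster and wiring up kubectl looks roughly like this (the resource group and cluster names are placeholders):

$ az group create --name k8s-rg --location eastus
$ az aks create --resource-group k8s-rg --name my-aks --node-count 3 --generate-ssh-keys
$ az aks get-credentials --resource-group k8s-rg --name my-aks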

Here are some of the benefits of using AKS:

  • Automated Kubernetes version upgrades and patching
  • Easy cluster scaling
  • Self-healing hosted control plane (masters)
  • Cost savings – pay only for running agent pool nodes

AKS also offers integration with Azure Container Instances (ACI), which is similar to AWS Fargate and GKE Autopilot. This means that not only is the control plane of your Kubernetes cluster managed, but also the worker nodes.

Digital Ocean

Digital Ocean is not a cloud provider on the order of the big three (GCP, AWS, Azure), but it does provide a managed Kubernetes solution, and it has data centers across the world (US, Canada, Europe, Asia). It is also much cheaper than the alternatives, and cost is a major deciding factor when choosing a cloud provider. With Digital Ocean, the control plane doesn’t cost anything. Besides lower prices, Digital Ocean’s claim to fame is simplicity.

DOKS (Digital Ocean Kubernetes Service) gives you a managed Kubernetes control plane (which can be highly available) and integration with Digital Ocean’s droplets (for nodes and node pools), load balancers, and block storage volumes. This covers all the basic needs. Your clusters are of course CNCF conformant.

Digital Ocean will take care of system upgrades, security patches, and the installed packages on the control plane as well as the worker nodes.
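If you prefer the command line to the Web UI, the doctl CLI can create a DOKS cluster in a single command (name, region, and node count are placeholders):

$ doctl kubernetes cluster create my-cluster --region nyc1 --count 3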

Other cloud providers

GCP, AWS, and Azure are leading the pack, but there are quite a few other companies that offer managed Kubernetes services. In general, I recommend using these providers if you already have significant business connections or integrations.

Once upon a time in China

If you operate in China with its special constraints and limitations, you should probably use a Chinese cloud platform. There are three big ones: Alibaba, Tencent, and Huawei.

The Chinese Alibaba cloud is an up-and-comer on the cloud platform scene. It mimics AWS pretty closely, although its English documentation leaves a lot to be desired. Alibaba Cloud supports Kubernetes in several ways via ACK (Alibaba Cloud Container Service for Kubernetes) and allows you to:

  • Run your own dedicated Kubernetes cluster (you must create 3 master nodes and upgrade and maintain them)
  • Use the managed Kubernetes cluster (you’re just responsible for the worker nodes)
  • Use the serverless Kubernetes cluster via ECI (Elastic Container Instances), which is similar to Fargate and ACI

ACK is a CNCF-certified Kubernetes distribution. If you need to deploy cloud-native applications in China, then ACK looks like a solid option.

See https://www.alibabacloud.com/product/kubernetes.

Tencent is another large Chinese company with its own cloud platform and Kubernetes support. TKE (Tencent Kubernetes Engine) seems less mature than ACK. See https://intl.cloud.tencent.com/products/tke.

Finally, the Huawei cloud platform offers CCE (Cloud Container Engine), which is built on Kubernetes. It supports VMs, bare metal, and GPU accelerated instances. See https://www.huaweicloud.com/intl/en-us/product/cce.html.

IBM Kubernetes Service

IBM is investing heavily in Kubernetes. It acquired Red Hat at the end of 2018. Red Hat was, of course, a major player in the Kubernetes world, building its OpenShift Kubernetes-based platform and contributing RBAC to Kubernetes. IBM has its own cloud platform, and it offers the managed IBM Cloud Kubernetes Service (IKS). You can try it out for free with $200 of credit, and there is also a free tier.

IBM is also involved in the development of Istio and Knative, so you can expect IKS to have deep integration with those technologies.

IKS offers integration with a lot of IBM services.

See https://www.ibm.com/cloud/kubernetes-service.

Oracle Container Service

Oracle also has a cloud platform and, of course, it offers a managed Kubernetes service too, with high availability, bare-metal instances, and multi-AZ support.

Oracle Container Engine for Kubernetes (OKE) supports ARM and GPU instances and also offers a few control plane options.

See https://www.oracle.com/cloud/cloud-native/container-engine-kubernetes/.

In this section, we covered the cloud-provider interface and looked at the recommended ways to create Kubernetes clusters on various cloud providers. The scene is still young and the tools evolve quickly. I believe convergence will happen soon. Kubeadm has matured and is the underlying foundation of many other tools to bootstrap and create Kubernetes clusters on and off the cloud. Let’s consider now what it takes to create bare-metal clusters where you have to provision the hardware and low-level networking and storage too.


Creating a bare-metal cluster from scratch

In the previous section, we looked at running Kubernetes on cloud providers. This is the dominant deployment story for Kubernetes. But there are strong use cases for running Kubernetes on bare metal, such as Kubernetes on the edge. We don’t focus here on hosted versus on-premise. This is yet another dimension. If you already manage a lot of servers on-premise, you are in the best position to decide.

Use cases for bare metal

Bare-metal clusters are a bear to deal with, especially if you manage them yourself. There are companies that provide commercial support for bare-metal Kubernetes clusters, such as Platform9, but the offerings are not mature yet. A solid open-source option is Kubespray, which can deploy industrial-strength Kubernetes clusters on bare metal, AWS, GCE, Azure, and OpenStack.

Here are some use cases where it makes sense:

  • Price: If you already manage large-scale bare-metal clusters, it may be much cheaper to run Kubernetes clusters on your physical infrastructure
  • Low network latency: If you must have low latency between your nodes, then the VM overhead might be too much
  • Regulatory requirements: If you must comply with regulations, you may not be allowed to use cloud providers
  • You want total control over hardware: Cloud providers give you many options, but you may have special needs

When should you consider creating a bare-metal cluster?

The complexities of creating a cluster from scratch are significant. A Kubernetes cluster is not a trivial beast. There is a lot of documentation on the web on how to set up bare-metal clusters, but as the whole ecosystem moves forward, many of these guides get out of date quickly.

You should consider going down this route if you have the operational capability to troubleshoot problems at every level of the stack. Most of the problems will probably be networking-related, but filesystems and storage drivers can bite you too, as well as general incompatibilities and version mismatches between components such as Kubernetes itself, Docker (or other runtimes, if you use them), images, your OS, your OS kernel, and the various addons and tools you use. If you opt for using VMs on top of bare metal, then you add another layer of complexity.

Understanding the process

There is a lot to do. Here is a list of some of the concerns you’ll have to address:

  • Implementing your own cloud-provider interface or sidestepping it
  • Choosing a networking model and how to implement it (CNI plugin, direct compile)
  • Whether or not to use network policies
  • Selecting images for system components
  • The security model and SSL certificates
  • Admin credentials
  • Templates for components such as the API server, controller manager, and scheduler
  • Cluster services: DNS, logging, monitoring, and GUI

I recommend the following guide from the Kubernetes site to get a deeper understanding of what it takes to create an HA cluster from scratch using kubeadm:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
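As a taste of what the guide covers, bootstrapping the first control plane node behind a load balancer looks roughly like this (you must provision the load balancer yourself; the endpoint below is a placeholder):

$ sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs

Additional control plane nodes then join using the kubeadm join command (with the --control-plane flag) that kubeadm init prints out.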

Using the Cluster API for managing bare-metal clusters

The Cluster API (also known as CAPI) is a Kubernetes sub-project for managing Kubernetes clusters at scale. It uses kubeadm for provisioning. It can provision and manage Kubernetes clusters in any environment using providers. At work, we use it to manage multiple clusters in the cloud. But it also has multiple providers for bare-metal clusters (a sketch of the basic workflow follows below):

  • MAAS
  • Equinix Metal
  • Metal3
  • Sidero

See https://cluster-api.sigs.k8s.io.
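As a rough sketch of the workflow, you install the core components plus an infrastructure provider into a management cluster with clusterctl, then generate and apply a workload cluster manifest (the provider, version, and name here are illustrative):

$ clusterctl init --infrastructure metal3
$ clusterctl generate cluster my-cluster --kubernetes-version v1.27.3 > my-cluster.yaml
$ kubectl apply -f my-cluster.yaml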

Using virtual private cloud infrastructure

If your use case falls under the bare-metal use cases but you don’t have the necessary skilled manpower or the inclination to deal with the infrastructure challenges of bare metal, you have the option to use a private cloud such as OpenStack. If you want to aim a little higher in the abstraction ladder, then Mirantis offers a cloud platform built on top of OpenStack and Kubernetes.

Let’s review a few more tools for building Kubernetes clusters on bare metal. Some of these tools support OpenStack as well.

Building your own cluster with Kubespray

Kubespray is a project for deploying production-ready, highly available Kubernetes clusters. It uses Ansible and can deploy Kubernetes on a large number of targets, such as:

  • AWS
  • GCE
  • Azure
  • OpenStack
  • vSphere
  • Equinix Metal
  • Oracle Cloud Infrastructure (Experimental)

It is also used to deploy Kubernetes clusters on plain bare-metal machines.

It is highly customizable and supports multiple operating systems for the nodes, multiple CNI plugins for networking, and multiple container runtimes.

If you want to test it locally, it can deploy to a multi-node Vagrant setup too. If you’re an Ansible fan, Kubespray may be a great choice for you.
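As a rough sketch (using the sample inventory that ships with the Kubespray repo), you describe your machines in an Ansible inventory and run the cluster.yml playbook:

$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray
$ cp -r inventory/sample inventory/mycluster

After editing inventory/mycluster/hosts.yaml to point at your machines:

$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml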

See https://kubespray.io.

Building your cluster with Rancher RKE

Rancher Kubernetes Engine (RKE) is a friendly Kubernetes installer that can install Kubernetes on bare metal as well as virtualized servers. RKE aims to address the complexity of installing Kubernetes. It is open source and has great documentation. Check it out here: http://rancher.com/docs/rke/v0.1.x/en/.
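As a rough sketch of the workflow, rke config interactively generates a cluster.yml file describing your nodes, and rke up then builds the cluster from it:

$ rke config --name cluster.yml
$ rke up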

Running managed Kubernetes on bare metal or VMs

The cloud providers didn’t want to confine themselves to their own cloud only. They all offer multi-cloud and hybrid solutions where you can control Kubernetes clusters on multiple clouds as well as use their managed control plane on VMs anywhere.

GKE Anthos

Anthos is a comprehensive managed platform for deploying applications in both traditional and cloud-native environments. It lets you build and manage global fleets of applications while ensuring operational consistency across them.

EKS Anywhere

Amazon EKS Anywhere is a deployment option for Amazon EKS that enables you to create and manage Kubernetes clusters on your own infrastructure with AWS support. You can run Amazon EKS Anywhere on your own on-premises infrastructure using VMware vSphere, as well as on bare metal.

AKS Arc

Azure Arc encompasses a collection of technologies that extend Azure’s security and cloud-native services to hybrid and multi-cloud environments. It empowers you to safeguard and manage your infrastructure and applications across various locations while providing familiar tools and services to accelerate the development of cloud-native apps. These applications can then be deployed on any Kubernetes platform.

In this section, we covered creating bare-metal Kubernetes clusters, which gives you total control, but is highly complicated, and requires a tremendous amount of effort and knowledge. Luckily, there are multiple tools, projects, and frameworks to assist you.


Summary

In this chapter, we got into some hands-on cluster creation. We created single-node and multi-node clusters using tools like Minikube, KinD, and k3d. Then we looked at the various options to create Kubernetes clusters on cloud providers. Finally, we touched on the complexities of creating Kubernetes clusters on bare metal. The current state of affairs is very dynamic. The basic components are changing rapidly, the tooling is getting better, and there are different options for each environment. Kubeadm is now the cornerstone of most installation options, which is great for consistency and consolidation of effort. It’s still not completely trivial to stand up a Kubernetes cluster on your own, but with some effort and attention to detail, you can get it done quickly.

I highly recommend considering the Cluster API as the go-to solution for provisioning and managing clusters in any environment – managed, private cloud, VMs, and bare metal. We will discuss the Cluster API in depth in Chapter 17, Running Kubernetes in Production.

In the next chapter, we will explore the important topics of scalability and high availability. Once your cluster is up and running, you need to make sure it stays that way even as the volume of requests increases. This requires ongoing attention and building the ability to recover from failures, as well as adjusting to changes in traffic.

Join us on Discord!

Read this book alongside other users, cloud experts, authors, and like-minded professionals.

Ask questions, provide solutions to other readers, chat with the authors via Ask Me Anything sessions, and much more.

Scan the QR code or visit the link to join the community now.

https://packt.link/cloudanddevops
