Creating a multi-node cluster with KinD
In this section, we’ll create a multi-node cluster using KinD. We will also repeat the echo server deployment we performed on Minikube and observe the differences. Spoiler alert - everything will be faster and easier!
Quick introduction to KinD
KinD stands for Kubernetes in Docker. It is a tool for creating ephemeral clusters (no persistent storage). It was built primarily for running the Kubernetes conformance tests. It supports Kubernetes 1.11+. Under the covers, it uses kubeadm to bootstrap Docker containers as nodes in the cluster. KinD is a combination of a library and a CLI. You can use the library in your own code for testing or other purposes. KinD can create highly available clusters with multiple control plane nodes. Finally, KinD is a CNCF-certified conformant Kubernetes installer. It had better be if it’s used for the conformance tests of Kubernetes itself.
KinD is super fast to start, but it has some limitations too:
- No persistent storage
- No support for alternative runtimes yet, only Docker
Let’s install KinD and get going.
Installing KinD
You must have Docker installed, as KinD literally runs each cluster node as a Docker container. If you have Go installed, you can install the KinD CLI via:
go install sigs.k8s.io/kind@v0.14.0
Otherwise, on macOS type:
brew install kind
On Windows type:
choco install kind
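Whichever method you used, it’s a good idea to verify the installation before moving on. The kind version command prints the installed version (v0.14.0 if you used the go install command above):
$ kind version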
Dealing with Docker contexts
You may have multiple Docker engines on your system and the Docker context determines which one is used. You may get an error like:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
In this case, check your Docker contexts:
$ docker context ls
NAME              DESCRIPTION                               DOCKER ENDPOINT                                 KUBERNETES ENDPOINT                ORCHESTRATOR
colima            colima                                    unix:///Users/gigi.sayfan/.colima/docker.sock
default *         Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                     https://127.0.0.1:6443 (default)   swarm
rancher-desktop   Rancher Desktop moby context              unix:///Users/gigi.sayfan/.rd/docker.sock       https://127.0.0.1:6443 (default)
The context marked with * is the current context. If you use Rancher Desktop, then you should set the context to rancher-desktop:
$ docker context use rancher-desktop
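After switching, it’s worth confirming that the Docker CLI can actually reach the selected engine. Any Docker command will do as a sanity check, for example:
$ docker ps
If this lists containers (or an empty table) without an error, KinD will be able to use that engine too.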
Creating a cluster with KinD
Creating a cluster is super easy:
$ kind create cluster
Creating cluster "kind" ...
 
 Ensuring node image (kindest/node:v1.23.4) 
 
 Preparing nodes 
 
 Writing configuration 
 
 Starting control-plane 
 
 Installing CNI 
 
 Installing StorageClass 
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 
    It takes less than 30 seconds to create a single-node cluster.
Now, we can access the cluster using kubectl:
$ k config current-context
kind-kind
$ k cluster-info
Kubernetes control plane is running at https://127.0.0.1:51561
CoreDNS is running at https://127.0.0.1:51561/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
By default, KinD adds its kube context to the ~/.kube/config file. When creating a lot of temporary clusters, it is sometimes better to store the KinD contexts in separate files and avoid cluttering ~/.kube/config. This is easily done by passing the --kubeconfig flag with a file path.
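For example, here is a quick sketch of a throwaway cluster whose kubeconfig never touches ~/.kube/config (the cluster name scratch and the file path are just illustrations):
$ kind create cluster --name scratch --kubeconfig /tmp/scratch-kubeconfig
$ kubectl get nodes --kubeconfig /tmp/scratch-kubeconfig
$ kind delete cluster --name scratch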
So, KinD creates a single-node cluster by default:
$ k get no
NAME                 STATUS   ROLES                  AGE   VERSION
kind-control-plane   Ready    control-plane,master   4m   v1.23.4
Let’s delete it and create a multi-node cluster:
$ kind delete cluster 
Deleting cluster "kind" ...
To create a multi-node cluster, we need to provide a configuration file with the specification of our nodes. Here is a configuration file that will create a cluster called multi-node-cluster with one control plane node and two worker nodes:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: multi-node-cluster
nodes:
- role: control-plane
- role: worker
- role: worker
Let’s save the configuration file as kind-multi-node-config.yaml and create the cluster, storing its kubeconfig in a separate file, $TMPDIR/kind-multi-node-config:
$ kind create cluster --config kind-multi-node-config.yaml --kubeconfig $TMPDIR/kind-multi-node-config
Creating cluster "multi-node-cluster" ...
 
 Ensuring node image (kindest/node:v1.23.4) 
 
 Preparing nodes 
 
 Writing configuration 
 
 Starting control-plane 
 
 Installing CNI 
 
 Installing StorageClass 
 
 Joining worker nodes 
Set kubectl context to "kind-multi-node-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-multi-node-cluster --kubeconfig /var/folders/qv/7l781jhs6j19gw3b89f4fcz40000gq/T//kind-multi-node-config
Have a nice day! 
    Yeah, it works! And we got a local 3-node cluster in less than a minute:
$ k get nodes --kubeconfig $TMPDIR/kind-multi-node-config
NAME                               STATUS   ROLES                  AGE     VERSION
multi-node-cluster-control-plane   Ready    control-plane,master   2m17s   v1.23.4
multi-node-cluster-worker          Ready    <none>                 100s    v1.23.4
multi-node-cluster-worker2         Ready    <none>                 100s    v1.23.4
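By the way, if you need a specific Kubernetes version, KinD lets you pin the node image, either per node via an image field in the configuration file or for the whole cluster on the command line. For example, to explicitly request the same image KinD picked above:
$ kind create cluster --config kind-multi-node-config.yaml --image kindest/node:v1.23.4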
KinD is also kind enough (see what I did there) to let us create HA (highly available) clusters with multiple control plane nodes for redundancy. If you want a highly available cluster with three control plane nodes and two worker nodes, your cluster config file will be very similar:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ha-multi-node-cluster
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
Let’s save the configuration file as kind-ha-multi-node-config.yaml and create a new HA cluster:
$ kind create cluster --config kind-ha-multi-node-config.yaml --kubeconfig $TMPDIR/kind-ha-multi-node-config
Creating cluster "ha-multi-node-cluster" ...
 
 Ensuring node image (kindest/node:v1.23.4) 
 
 Preparing nodes 
 
 Configuring the external load balancer 
 
 Writing configuration 
 
 Starting control-plane 
 
 Installing CNI 
 
 Installing StorageClass 
 
 Joining more control-plane nodes 
 
 Joining worker nodes 
Set kubectl context to "kind-ha-multi-node-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-ha-multi-node-cluster --kubeconfig /var/folders/qv/7l781jhs6j19gw3b89f4fcz40000gq/T//kind-ha-multi-node-config
Not sure what to do next? 
 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Hmmm... there is something new here. Now KinD configures an external load balancer and joins the additional control plane nodes before joining the worker nodes. The load balancer is necessary to distribute API requests across all the control plane nodes.
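Since every piece of a KinD cluster, including the load balancer, runs as a Docker container, you can also inspect it at the Docker level. For example, this check (the grep pattern matches KinD’s default container naming) lists the containers backing the cluster:
$ docker ps --format '{{.Names}}' | grep ha-multi-node-cluster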
Note that the external load balancer doesn’t show as a node using kubectl:
$ k get nodes --kubeconfig $TMPDIR/kind-ha-multi-node-config
NAME                                   STATUS   ROLES                  AGE     VERSION
ha-multi-node-cluster-control-plane    Ready    control-plane,master   3m31s   v1.23.4
ha-multi-node-cluster-control-plane2   Ready    control-plane,master   3m19s   v1.23.4
ha-multi-node-cluster-control-plane3   Ready    control-plane,master   2m22s   v1.23.4
ha-multi-node-cluster-worker           Ready    <none>                 2m4s    v1.23.4
ha-multi-node-cluster-worker2          Ready    <none>                 2m5s    v1.23.4
But KinD has its own get nodes command, which does show the load balancer:
$ kind get nodes --name ha-multi-node-cluster
ha-multi-node-cluster-control-plane2
ha-multi-node-cluster-external-load-balancer
ha-multi-node-cluster-control-plane
ha-multi-node-cluster-control-plane3
ha-multi-node-cluster-worker
ha-multi-node-cluster-worker2
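Two related commands worth knowing: kind get clusters lists all the KinD clusters on your machine, and kind get kubeconfig prints the kubeconfig of a given cluster:
$ kind get clusters
$ kind get kubeconfig --name ha-multi-node-cluster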
Our KinD cluster is up and running; let’s put it to work.
Doing work with KinD
Let’s deploy our echo service on the KinD cluster. It starts the same:
$ k create deployment echo --image=g1g1/echo-server:0.1 --kubeconfig $TMPDIR/kind-ha-multi-node-config
deployment.apps/echo created
$ k expose deployment echo --type=NodePort --port=7070 --kubeconfig $TMPDIR/kind-ha-multi-node-config
service/echo exposed
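Before looking at the service, it’s interesting to check where the pod landed. With multiple worker nodes, the scheduler picks one of them, and the wide output includes a NODE column showing which:
$ k get pods -o wide --kubeconfig $TMPDIR/kind-ha-multi-node-config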
Checking our services, we can see the echo service front and center:
$ k get svc echo --kubeconfig $TMPDIR/kind-ha-multi-node-config
NAME   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
echo   NodePort   10.96.52.33   <none>        7070:31953/TCP   10s
But there is no external IP for the service. With Minikube, we got the IP of the Minikube node itself via $(minikube ip) and could use it in combination with the node port to access the service. That is not an option with KinD clusters, where the nodes are Docker containers whose addresses aren’t directly reachable from the host (at least not on macOS and Windows). Let’s see how to use a proxy to access the echo service.
Accessing Kubernetes services locally through a proxy
We will go into a lot of detail about networking, services, and how to expose them outside the cluster later in the book.
Here, we will just show you how to get it done and keep you in suspense for now. First, we need to run the kubectl proxy command that exposes the API server, pods, and services on localhost:
$ k proxy --kubeconfig $TMPDIR/kind-ha-multi-node-config &
[1] 32479
Starting to serve on 127.0.0.1:8001
Then, we can access the echo service through a specially crafted proxy URL that includes the proxy’s port (8001) and NOT the node port. The general pattern is /api/v1/namespaces/<namespace>/services/<service-name>:<port>/proxy/<path>:
$ http http://localhost:8001/api/v1/namespaces/default/services/echo:7070/proxy/yeah-it-works
HTTP/1.1 200 OK
Audit-Id: 294cf10b-0d60-467d-8a51-4414834fc173
Cache-Control: no-cache, private
Content-Length: 13
Content-Type: text/plain; charset=utf-8
Date: Mon, 23 May 2022 21:54:01 GMT
yeah-it-works
I used httpie in the command above. You can use curl too. To install httpie, follow the instructions here: https://httpie.org/doc#installation.
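For reference, the equivalent curl invocation is:
$ curl http://localhost:8001/api/v1/namespaces/default/services/echo:7070/proxy/yeah-it-works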
We will deep dive into exactly what’s going on in Chapter 10, Exploring Kubernetes Networking. For now, it is enough to demonstrate how kubectl proxy allows us to access our KinD services.
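One last housekeeping note: the proxy is still running in the background (kill %1 will stop it). As an alternative to the proxy URL scheme, kubectl port-forward can map the service to a local port directly. Here is a minimal sketch (the path it-works-too is arbitrary; the echo server just echoes it back):
$ k port-forward service/echo 7070:7070 --kubeconfig $TMPDIR/kind-ha-multi-node-config &
$ http http://localhost:7070/it-works-too
Either way, traffic reaches the echo service without any external IP.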
Let’s check out my favorite local cluster solution – k3d.