Building Production-Ready Kubernetes Clusters

This chapter covers the most common deployment methods used on popular cloud services as well as on-premises, although you will certainly find a number of other tutorials on the internet explaining other approaches. It also explains the differences between managed/hosted cloud services and self-managed cloud or on-premises Kubernetes deployments, as well as the advantages of one vendor over another.

In this chapter, we will be covering the following recipes:

  • Configuring a Kubernetes cluster on Amazon Web Services
  • Configuring a Kubernetes cluster on Google Cloud Platform
  • Configuring a Kubernetes cluster on Microsoft Azure
  • Configuring a Kubernetes cluster on Alibaba Cloud
  • Configuring and managing Kubernetes clusters with Rancher
  • Configuring Red Hat OpenShift
  • Configuring a Kubernetes cluster using Ansible
  • Troubleshooting installation issues

Technical requirements

It is recommended that you have a fundamental knowledge of Linux containers and Kubernetes in general. For preparing your Kubernetes clusters, using a Linux host is recommended. If your workstation is Windows-based, then we recommend that you use Windows Subsystem for Linux (WSL). WSL gives you a Linux command line and lets you run ELF64 Linux binaries directly on Windows.
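As a rough sketch (assuming a recent Windows 10 or Windows 11 build; older builds use a different enablement procedure), WSL with an Ubuntu distribution can be installed from an elevated PowerShell or Command Prompt:

# Enable WSL and install the Ubuntu distribution (assumes a recent Windows build)
wsl --install -d Ubuntu
# After a reboot, launch Ubuntu from the Start menu to finish the user setup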

It's always good practice to develop using the same environment (which means the same distribution and the same version) as the one that will be used in production. This will avoid unexpected surprises such as It Worked on My Machine (IWOMM). If your workstation is using a different OS, another good approach is to set up a virtual machine on your workstation. VirtualBox (https://www.virtualbox.org/) is a free and open source hypervisor that runs on Windows, Linux, and macOS.

In this chapter, we'll assume that you are using an Ubuntu host (18.04, code name Bionic Beaver at the time of writing). There are no specific hardware requirements since all the recipes in this chapter will be deployed and run on cloud instances. Here is the list of software packages that will be required on your localhost to complete the recipes:

  • cURL
  • Python
  • Vim or Nano (or your favorite text editor)
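On an Ubuntu 18.04 host, these prerequisites can typically be installed with something like the following; the exact package names (for example, python3 instead of python) are an assumption and may differ slightly on other distributions:

$ sudo apt-get update && sudo apt-get install -y curl python3 vim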

Configuring a Kubernetes cluster on Amazon Web Services

The recipes in this section will take you through how to get a fully functional Kubernetes cluster with fully customizable master and worker nodes that you can use for the recipes in the following chapters or in production.

In this section, we will cover both Amazon EC2 and Amazon EKS recipes so that we can run Kubernetes on Amazon Web Services (AWS).

Getting ready

All the operations mentioned here require an AWS account and an AWS user with a policy that has permission to use the related services. If you don't have one, go to https://aws.amazon.com/account/ and create one.

AWS provides two main options when it comes to running Kubernetes on it. You can consider using the Amazon Elastic Compute Cloud (Amazon EC2) if you'd like to manage your deployment completely and have specific powerful instance requirements. Otherwise, it's highly recommended to consider using managed services such as Amazon Elastic Container Service for Kubernetes (Amazon EKS).

How to do it…

Depending on whether you want to use the Amazon EC2 service or EKS, you can follow these recipes to get your cluster up and running using either the kops or eksctl tool:

  • Installing the command-line tools to configure AWS services
  • Installing kops to provision a Kubernetes cluster
  • Provisioning a Kubernetes cluster on Amazon EC2
  • Provisioning a managed Kubernetes cluster on Amazon EKS

Installing the command-line tools to configure AWS services

In this recipe, we will get the AWS Command-Line Interface (CLI) awscli and the Amazon EKS CLI eksctl to access and configure AWS services.

Let's perform the following steps:

  1. Install awscli on your workstation:

$ sudo apt-get update && sudo apt-get install awscli
  2. Configure the AWS CLI so that it uses your access key ID and secret access key:

$ aws configure
  3. Download and install the Amazon EKS command-line interface, eksctl:

$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
  4. Verify its version and make sure eksctl is installed:

$ eksctl version

To be able to perform the following recipes, the eksctl version should be 0.13.0 or later.

Installing kops to provision a Kubernetes cluster

In this recipe, we will get the Kubernetes Operations tool, kops, and Kubernetes command-line tool, kubectl, installed in order to provision and manage Kubernetes clusters.

Let's perform the following steps:

  1. Download and install the Kubernetes Operations tool, kops:
$ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x kops-linux-amd64 && sudo mv kops-linux-amd64 /usr/local/bin/kops
  2. Run the following command to make sure kops is installed and confirm that the version is 1.15.0 or later:
$ kops version
  3. Download and install the Kubernetes command-line tool, kubectl:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
  4. Verify its version and make sure kubectl is installed:
$ kubectl version --short

To be able to perform the following recipes, the kubectl version should be v1.15 or later.

Provisioning a Kubernetes cluster on Amazon EC2

This recipe will take you through how to get a fully functional Kubernetes cluster with fully customizable master and worker nodes that you can use for the recipes in the following chapters or in production.

Let's perform the following steps:

  1. Create a domain for your cluster.
It is a cloud management best practice to have subdomains and to divide your clusters with logical and valid DNS names, so that kops can successfully discover them.

As an example, I will use the k8s.containerized.me subdomain as our hosted zone. Also, if your domain is registered with a registrar other than Amazon Route 53, you must update the name servers with your registrar and add Route 53 NS records for the hosted zone to your registrar's DNS records:

$ aws route53 create-hosted-zone --name k8s.containerized.me \
--caller-reference k8s-devops-cookbook \
--hosted-zone-config Comment="Hosted Zone for my K8s Cluster"
  2. Create an S3 bucket to store the Kubernetes configuration and the state of the cluster. In our example, we will use s3.k8s.containerized.me as our bucket name:
$ aws s3api create-bucket --bucket s3.k8s.containerized.me \
--region us-east-1
  3. Confirm your S3 bucket by listing the available buckets:
$ aws s3 ls
2019-07-21 22:02:58 s3.k8s.containerized.me
  4. Enable bucket versioning:
$ aws s3api put-bucket-versioning --bucket s3.k8s.containerized.me \
--versioning-configuration Status=Enabled
  5. Set environmental parameters for kops so that you can use the locations by default:
$ export KOPS_CLUSTER_NAME=useast1.k8s.containerized.me
$ export KOPS_STATE_STORE=s3://s3.k8s.containerized.me
  6. Create an SSH key if you haven't done so already:
$ ssh-keygen -t rsa
  7. Create the cluster configuration with the list of zones where you want your master nodes to run:
$ kops create cluster --node-count=6 --node-size=t3.large \
--zones=us-east-1a,us-east-1b,us-east-1c \
--master-size=t3.large \
--master-zones=us-east-1a,us-east-1b,us-east-1c
  8. Create the cluster:
$ kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
  9. Wait a couple of minutes for the nodes to launch and validate:
$ kops validate cluster
  10. Now, you can use kubectl to manage your cluster:
$ kubectl cluster-info

By default, kops creates and exports the Kubernetes configuration under ~/.kube/config. Therefore, no additional steps are required to connect your clusters using kubectl.
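If you ever need to regenerate the kubeconfig, for example on another workstation, a command along the following lines should work (assuming the same environment variables that we exported earlier in this recipe):

$ kops export kubecfg --name ${KOPS_CLUSTER_NAME} --state ${KOPS_STATE_STORE}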

Provisioning a managed Kubernetes cluster on Amazon EKS

Perform the following steps to get your managed Kubernetes-as-a-service cluster up and running on Amazon EKS using eksctl:

  1. Create a cluster using the default settings:
$ eksctl create cluster
...
[√] EKS cluster "great-outfit-123" in "us-west-2" region is ready

By default, eksctl deploys a cluster with workers on two m5.large instances using the AWS EKS AMI in the us-west-2 region. eksctl creates and exports the Kubernetes configuration under ~/.kube/config. Therefore, no additional steps are required to connect your clusters using kubectl.
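If the defaults don't fit your needs, most of them can be overridden on the command line. The following is only a sketch; the cluster name, region, Kubernetes version, and instance type shown here are hypothetical values, not the ones eksctl picked for us above:

$ eksctl create cluster --name k8s-devops-cookbook \
--region us-east-1 --version 1.15 \
--nodes 3 --node-type t3.large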

  2. Confirm the cluster information and workers:
$ kubectl cluster-info && kubectl get nodes
Kubernetes master is running at https://gr7.us-west-2.eks.amazonaws.com
CoreDNS is running at https://gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
NAME STATUS ROLES AGE VERSION
ip-1-2-3-4.us-west-2.compute.internal Ready <none> 5m42s v1.13.8-eks-cd3eb0
ip-1-2-3-4.us-west-2.compute.internal Ready <none> 5m40s v1.13.8-eks-cd3eb0

Now, you have a two-node Amazon EKS cluster up and running.

How it works...

The first recipe on Amazon EC2 showed you how to provision multiple copies of master nodes that can survive a master node failure as well as single AZ outages. Although it is similar to what you get with the second recipe on Amazon EKS with Multi-AZ support, clusters on EC2 give you higher flexibility. When you run Amazon EKS instead, it runs a single-tenant Kubernetes control plane for each cluster, and the control plane consists of at least two API server nodes and three etcd nodes that run across three AZs within a region.

Let's take a look at the cluster options we used in step 7 with the kops create cluster command:

  • --node-count sets the number of worker nodes to create. In our example, this is 6. This configuration will deploy two worker nodes per zone defined with --zones=us-east-1a,us-east-1b,us-east-1c, for a total of three master nodes and six worker nodes.
  • --node-size and --master-size set the instance size for the worker and master nodes. In our example, t3.large is used for both the worker and master nodes. For larger clusters, a larger worker instance size is recommended.
  • --zones and --master-zones set the zones that the cluster will run in. In our example, we have used three zones called us-east-1a, us-east-1b, and us-east-1c.

For additional zone information, check the AWS Global Infrastructure link in the See also section.

AWS clusters cannot span across multiple regions and all the zones that have been defined for the master and worker nodes should be within the same region.

When deploying multi-master clusters, an odd number of master instances should be created. Also, remember that Kubernetes relies on etcd, a distributed key/value store. etcd quorum requires a majority (more than half) of the nodes to be available at any time. Therefore, with three master nodes, our control plane can only survive a single master node or AZ outage. If you need to handle more than that, you need to consider increasing the number of master instances.
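For example, if you want the control plane to survive two simultaneous master failures, you could ask kops for five masters. The following is only a sketch using the same instance types and zones as earlier in this recipe; kops distributes the five masters across the listed zones:

$ kops create cluster --node-count=6 --node-size=t3.large \
--master-count=5 --master-size=t3.large \
--master-zones=us-east-1a,us-east-1b,us-east-1c \
--zones=us-east-1a,us-east-1b,us-east-1c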

There's more…

It is also useful to have knowledge of the following information:

  • Using the AWS Shell
  • Using a gossip-based cluster
  • Using different regions for an S3 bucket
  • Editing cluster configuration
  • Deleting your cluster
  • Provisioning an EKS cluster using the Amazon EKS dashboard
  • Deploying Kubernetes Dashboard

Using the AWS Shell

Another useful tool worth mentioning here is aws-shell. It is an integrated shell that works with the AWS CLI. It uses the AWS CLI configuration and improves productivity with an autocomplete feature.

Install aws-shell using the following command and run it:

$ sudo apt-get install aws-shell && aws-shell

You can use AWS commands with aws-shell with less typing. Press the F10 key to exit the shell.

Using a gossip-based cluster

In this recipe, we created a domain (either purchased from Amazon or another registrar) and a hosted zone, because kops uses DNS for discovery. Although it needs to be a valid DNS name, starting with kops 1.6.2, DNS configuration became optional. Instead of an actual domain or subdomain, a gossip-based cluster can be easily created. By using a registered domain name, we make our clusters easier to share and accessible by others for production use.

If, for any reason, you prefer a gossip-based cluster, you can skip hosted zone creation and use a cluster name that ends with k8s.local:

$ export KOPS_CLUSTER_NAME=devopscookbook.k8s.local
$ export KOPS_STATE_STORE=s3://devops-cookbook-state-store

Setting the environmental parameters for kops is optional but highly recommended since it shortens your CLI commands.
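After exporting these variables, the same create and update flow we used earlier applies; kops detects the .k8s.local suffix and switches to gossip-based discovery. A minimal sketch, assuming the state store bucket above already exists:

$ kops create cluster --node-count=3 --zones=us-east-1a,us-east-1b,us-east-1c
$ kops update cluster --name ${KOPS_CLUSTER_NAME} --yes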

Using different regions for an S3 bucket

In order for kops to store cluster configuration, a dedicated S3 bucket is required.

An example for the eu-west-1 region would look as follows:

$ aws s3api create-bucket --bucket s3.k8s.containerized.me \
--region eu-west-1 --create-bucket-configuration \
LocationConstraint=eu-west-1

This S3 bucket will become the source of truth for our Kubernetes cluster configuration. For simplicity, it is recommended to use the us-east-1 region; otherwise, an appropriate LocationConstraint needs to be specified in order to create the bucket in the desired region.

Editing the cluster configuration

The kops create cluster command, which we used to create the cluster configuration, doesn't actually create the cluster itself and launch the EC2 instances; instead, it creates the configuration file in our S3 bucket.

After creating the configuration file, you can make changes to the configuration using the kops edit cluster command.

You can separately edit your node instance groups using the following command:

$ kops edit ig nodes 
$ kops edit ig master-us-east-1a

The configuration file is read from the state store location in the S3 bucket. If you prefer a different editor, you can set $KUBE_EDITOR=nano, for example, to change it.
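Keep in mind that kops edit only changes the desired state in the state store. To apply the changes to the running cluster, you typically need something like the following; the rolling update is only required when the change affects existing instances:

$ kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
$ kops rolling-update cluster --name ${KOPS_CLUSTER_NAME} --yes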

Deleting your cluster

To delete your cluster, use the following command:

$ kops delete cluster --name ${KOPS_CLUSTER_NAME} --yes

This process may take a few minutes and, when finished, you will get a confirmation.

Provisioning an EKS cluster using the Amazon EKS Management Console

In the Provisioning a managed Kubernetes cluster on Amazon EKS recipe, we used eksctl to deploy a cluster. As an alternative, you can also use the AWS Management Console web user interface to deploy an EKS cluster.

Perform the following steps to get your cluster up and running on Amazon EKS:

  1. Open your browser and go to the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
  2. Enter a cluster name and click on the Next Step button.
  3. On the Create Cluster page, select the Kubernetes Version, Role name, at least two availability zones from the subnets list, and Security groups.
  4. Click on Create.
  5. Cluster creation with EKS takes around 20 minutes. Refresh the page in 15-20 minutes and check its status.
  6. Use the following command to update your kubectl configuration:
$ aws eks --region us-east-1 update-kubeconfig \
--name K8s-DevOps-Cookbook
  7. Now, use kubectl to manage your cluster:
$ kubectl get nodes

Now that your cluster has been configured, you can use kubectl to manage it.

Deploying Kubernetes Dashboard

Last but not least, to deploy the Kubernetes Dashboard application on an AWS cluster, you need to follow these steps:

  1. At the time this recipe was written, Kubernetes Dashboard v2.0.0 was still in beta. Since the v1.x versions will be obsolete soon, I highly recommend that you install the latest version, that is, v2.0.0. The new version brings a lot of functionality and support for Kubernetes v1.16 and later versions. Before you deploy Dashboard, make sure to remove any previous version. Check the latest release by following the link in the following information box and deploy it, similar to the following:
$ kubectl delete ns kubernetes-dashboard
# Use the latest version link from https://github.com/kubernetes/dashboard/releases
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
As the Kubernetes version gets upgraded, the dashboard application also gets frequently updated. To use the latest version, find the latest link to the YAML manifest on the release page at https://github.com/kubernetes/dashboard/releases. If you experience compatibility issues with the latest version of Dashboard, you can always deploy the previous stable version by using the following command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
  2. By default, the kubernetes-dashboard service is exposed using the ClusterIP type. If you want to access it from outside, edit the service using the following command and replace the ClusterIP type with LoadBalancer; otherwise, use port forwarding to access it (see the port-forwarding sketch after these steps):
$ kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  3. Get the external IP of your dashboard from the kubernetes-dashboard service:
$ kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard LoadBalancer 100.66.234.228 myaddress.us-east-1.elb.amazonaws.com 443:30221/TCP 5m46s
  4. Open the external IP link in your browser. In our example, it is https://myaddress.us-east-1.elb.amazonaws.com.
  5. We will use the token option to access Kubernetes Dashboard. Now, let's find the token in our cluster using the following command. In this example, the command returns kubernetes-dashboard-token-bc2w5 as the token name:
$ kubectl get secrets -A | grep dashboard-token
kubernetes-dashboard kubernetes-dashboard-token-bc2w5 kubernetes.io/service-account-token 3 17m
  6. Replace the secret name with yours from the output of the previous command. Get the token details from the description of the Secret:
$ kubectl describe secrets kubernetes-dashboard-token-bc2w5 -n kubernetes-dashboard

  7. Copy the token section from the output of the preceding command and paste it into Kubernetes Dashboard to sign in to Dashboard:

Now, you have access to Kubernetes Dashboard to manage your cluster.
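As an alternative to exposing the kubernetes-dashboard service with a LoadBalancer in step 2, you can keep the ClusterIP type and reach Dashboard through a local port forward. This is only a sketch; the local port 8443 is an arbitrary choice:

$ kubectl port-forward -n kubernetes-dashboard svc/kubernetes-dashboard 8443:443

Dashboard then becomes available at https://localhost:8443 for as long as the port forward is running.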

See also

Configuring a Kubernetes cluster on Google Cloud Platform

This section will take you through step-by-step instructions to configure Kubernetes clusters on GCP. You will learn how to run a hosted Kubernetes cluster without needing to provision or manage master and etcd instances using GKE.

Getting ready

All the operations mentioned here require a GCP account with billing enabled. If you don't have one already, go to https://console.cloud.google.com and create an account.

On Google Cloud Platform (GCP), you have two main options when it comes to running Kubernetes. You can consider using Google Compute Engine (GCE) if you'd like to manage your deployment completely and have specific powerful instance requirements. Otherwise, it's highly recommended to use the managed Google Kubernetes Engine (GKE).

How to do it…

This section is further divided into the following subsections to make this process easier to follow:

  • Installing the command-line tools to configure GCP services
  • Provisioning a managed Kubernetes cluster on GKE
  • Connecting to GKE clusters

Installing the command-line tools to configure GCP services

In this recipe, we will get the primary CLI for Google Cloud Platform, gcloud, installed so that we can configure GCP services:

  1. Run the following command to download the gcloud CLI:

$ curl https://sdk.cloud.google.com | bash
  2. Initialize the SDK and follow the instructions given:
$ gcloud init
  3. During the initialization, when asked, select either an existing project that you have permissions for or create a new project.
  4. Enable the Compute Engine APIs for the project:
$ gcloud services enable compute.googleapis.com
Operation "operations/acf.07e3e23a-77a0-4fb3-8d30-ef20adb2986a" finished successfully.
  5. Set a default zone:
$ gcloud config set compute/zone us-central1-a
  6. Make sure you can start up a GCE instance from the command line:
$ gcloud compute instances create "devops-cookbook" \
--zone "us-central1-a" --machine-type "f1-micro"
  7. Delete the test VM:
$ gcloud compute instances delete "devops-cookbook"

If all the commands are successful, you can provision your GKE cluster.

Provisioning a managed Kubernetes cluster on GKE

Let's perform the following steps:

  1. Create a cluster:
$ gcloud container clusters create k8s-devops-cookbook-1 \
--cluster-version latest --machine-type n1-standard-2 \
--image-type UBUNTU --disk-type pd-standard --disk-size 100 \
--no-enable-basic-auth --metadata disable-legacy-endpoints=true \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--num-nodes "3" --enable-stackdriver-kubernetes \
--no-enable-ip-alias --enable-autoscaling --min-nodes 1 \
--max-nodes 5 --enable-network-policy \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade --enable-autorepair --maintenance-window "10:00"

Cluster creation will take 5 minutes or more to complete.

Connecting to Google Kubernetes Engine (GKE) clusters

To get access to your GKE cluster, you need to follow these steps:

  1. Configure kubectl to access your k8s-devops-cookbook-1 cluster:
$ gcloud container clusters get-credentials k8s-devops-cookbook-1
  2. Verify your Kubernetes cluster:
$ kubectl get nodes

Now, you have a three-node GKE cluster up and running.

How it works…

This recipe showed you how to quickly provision a GKE cluster using some default parameters.

In Step 1, we created a cluster with some default parameters. While all of the parameters are very important, I want to explain some of them here.

--cluster-version sets the Kubernetes version to use for the master and nodes. Only use it if you want to use a version that's different from the default. To get the available version information, you can use the gcloud container get-server-config command.

We set the instance type by using the --machine-type parameter. If it's not set, the default is n1-standard-1. To get the list of predefined types, you can use the gcloud compute machine-types list command.

The default image type is COS, but my personal preference is Ubuntu, so I used --image-type UBUNTU to set the OS image to UBUNTU. If this isn't set, the server picks the default image type, that is, COS. To get the list of available image types, you can use the gcloud container get-server-config command.
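For reference, the two discovery commands mentioned above can be used along the following lines; the zone is the default one we set earlier in this recipe:

$ gcloud container get-server-config --zone us-central1-a
$ gcloud compute machine-types list --zones us-central1-a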

GKE offers advanced cluster management features and comes with the automatic scaling of node instances, auto-upgrade, and auto-repair to maintain node availability. --enable-autoupgrade enables the GKE auto-upgrade feature for cluster nodes and --enable-autorepair enables the automatic repair feature, which is started at the time defined with the --maintenance-window parameter. The time that's set here is the UTC time zone and must be in HH:MM format.

There's more…

The following are some of the alternative methods that can be employed besides the recipe described in the previous section:

  • Using Google Cloud Shell
  • Deploying with a custom network configuration
  • Deleting your cluster
  • Viewing the Workloads dashboard

Using Google Cloud Shell

As an alternative to your Linux workstation, you can get a CLI interface on your browser to manage your cloud instances.

Go to https://cloud.google.com/shell/ to get a Google Cloud Shell.

Deploying with a custom network configuration

The following steps demonstrate how to provision your cluster with a custom network configuration:

  1. Create a VPC network:
$ gcloud compute networks create k8s-devops-cookbook \
--subnet-mode custom
  2. Create a subnet in your VPC network. In our example, this is 10.240.0.0/16:
$ gcloud compute networks subnets create kubernetes \
--network k8s-devops-cookbook --range 10.240.0.0/16
  3. Create a firewall rule to allow internal traffic:
$ gcloud compute firewall-rules create k8s-devops-cookbook-allow-int \
--allow tcp,udp,icmp --network k8s-devops-cookbook \
--source-ranges 10.240.0.0/16,10.200.0.0/16
  4. Create a firewall rule to allow external SSH, ICMP, and HTTPS traffic:
$ gcloud compute firewall-rules create k8s-devops-cookbook-allow-ext \
--allow tcp:22,tcp:6443,icmp --network k8s-devops-cookbook \
--source-ranges 0.0.0.0/0
  5. Verify the rules:
$ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
...
k8s-devops-cookbook-allow-ext k8s-devops-cookbook INGRESS 1000 tcp:22,tcp:6443,icmp False
k8s-devops-cookbook-allow-int k8s-devops-cookbook INGRESS 1000 tcp,udp,icmp False
  6. Add the --network k8s-devops-cookbook and --subnetwork kubernetes parameters to your container clusters create command and run it, as shown in the sketch below.
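Putting it together, a minimal version of the create command for the custom network could look as follows; the cluster name k8s-devops-cookbook-2 is hypothetical, and any other flags from the earlier recipe can be added as needed:

$ gcloud container clusters create k8s-devops-cookbook-2 \
--machine-type n1-standard-2 --num-nodes 3 \
--network k8s-devops-cookbook --subnetwork kubernetes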

Deleting your cluster

To delete your k8s-devops-cookbook-1 cluster, use the following command:

$ gcloud container clusters delete k8s-devops-cookbook-1

This process may take a few minutes and when finished, you will get a confirmation message.

Viewing the Workloads dashboard

On GCP, instead of using the Kubernetes Dashboard application, you can use the built-in Workloads dashboard and deploy containerized applications through Google Marketplace. Follow these steps:

  1. To access the Workload dashboard from your GCP dashboard, choose your GKE cluster and click on Workloads.
  2. Click on Show system workloads to see the existing components and containers that have been deployed in the kube-system namespace.

See also

Configuring a Kubernetes cluster on Microsoft Azure

In this section, we will cover a recipe using Microsoft Azure Kubernetes Service (AKS) in order to create a Kubernetes cluster on the Microsoft Azure Cloud.

Getting ready

All the operations mentioned here require a Microsoft Azure subscription. If you don't have one already, go to https://portal.azure.com and create a free account.

How to do it…

This section will take you through how to configure a Kubernetes cluster on Microsoft Azure. This section is further divided into the following subsections to make this process easier:

  • Installing the command-line tools to configure Azure services
  • Provisioning a managed Kubernetes cluster on AKS
  • Connecting to AKS clusters

Installing the command-line tools to configure Azure services

In this recipe, we will get the Azure CLI tool called az and kubectl installed.

Let's perform the following steps:

  1. Install the necessary dependencies:
$ sudo apt-get update && sudo apt-get install -y libssl-dev \
libffi-dev python-dev build-essential
  2. Download and install the az CLI tool:
$ curl -L https://aka.ms/InstallAzureCli | bash
  3. Verify the az version you're using:
$ az --version
  4. Install kubectl, if you haven't installed it already:
$ az aks install-cli

If all commands were successful, you can start provisioning your AKS cluster.

Provisioning a managed Kubernetes cluster on AKS

Let's perform the following steps:

  1. Log in to your account:
$ az login
  2. Create a resource group named k8sdevopscookbook in your preferred region:
$ az group create --name k8sdevopscookbook --location eastus
  3. Create a service principal and take note of your appId and password for the next steps:
$ az ad sp create-for-rbac --skip-assignment
{
"appId": "12345678-1234-1234-1234-123456789012",
"displayName": "azure-cli-2019-05-11-20-43-47",
"name": "http://azure-cli-2019-05-11-20-43-47",
"password": "12345678-1234-1234-1234-123456789012",
"tenant": "12345678-1234-1234-1234-123456789012"
  4. Create a cluster. Replace appId and password with the output from the preceding command:
$ az aks create --resource-group k8sdevopscookbook \
--name AKSCluster \
--kubernetes-version 1.15.4 \
--node-vm-size Standard_DS2_v2 \
--node-count 3 \
--service-principal <appId> \
--client-secret <password> \
--generate-ssh-keys

Cluster creation will take around 5 minutes. You will see "provisioningState": "Succeeded" when it has successfully completed.

Connecting to AKS clusters

Let's perform the following steps:

  1. Gather some credentials and configure kubectl so that you can use them:
$ az aks get-credentials --resource-group k8sdevopscookbook \
--name AKSCluster
  2. Verify your Kubernetes cluster:
$ kubectl get nodes

Now, you have a three-node AKS cluster up and running.

How it works…

This recipe showed you how to quickly provision an AKS cluster using some common options.

In step 4, the command starts with az aks create, followed by -g or --resource-group, so that you can select the name of your resource group. You can configure the default group using az configure --defaults group=k8sdevopscookbook and skip this parameter next time.

We used the --name AKSCluster parameter to set the name of the managed cluster to AKSCluster. The rest of the parameters are optional; --kubernetes-version or -k sets the version of Kubernetes to use for the cluster. You can use the az aks get-versions --location eastus --output table command to get the list of available options.

We used --node-vm-size to set the instance type for the Kubernetes worker nodes. If this isn't set, the default is Standard_DS2_v2.

Next, we used --node-count to set the number of Kubernetes worker nodes. If this isn't set, the default is 3. This can be changed using the az aks scale command.
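For example, to scale the worker pool of the cluster we just created to five nodes, something like the following should work:

$ az aks scale --resource-group k8sdevopscookbook \
--name AKSCluster --node-count 5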

Finally, the --generate-ssh-keys parameter is used to autogenerate the SSH public and private key files, which are stored in the ~/.ssh directory.

There's more…

Although Windows-based containers are now supported by Kubernetes, to be able to run Windows Server containers, you need to run Windows Server-based nodes. AKS nodes currently run on Linux OS and Windows Server-based nodes are not available in AKS. However, you can use Virtual Kubelet to schedule Windows containers on container instances and manage them as part of your cluster. In this section, we will take a look at the following:

  • Deleting your cluster
  • Viewing Kubernetes Dashboard

Deleting your cluster

To delete your cluster, use the following command:

$ az aks delete --resource-group k8sdevopscookbook --name AKSCluster

This process will take a few minutes and, when finished, you will receive confirmation of this.

Viewing Kubernetes Dashboard

To view Kubernetes Dashboard, you need to follow these steps:

  1. To start Kubernetes Dashboard, use the following command:
$ az aks browse --resource-group k8sdevopscookbook --name AKSCluster
  2. If your cluster is RBAC-enabled, then create a ClusterRoleBinding:
$ kubectl create clusterrolebinding kubernetes-dashboard \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:kubernetes-dashboard
  3. Open a browser window and go to the address where the proxy is running. In our example, this is http://127.0.0.1:8001/.

See also

Configuring a Kubernetes cluster on Alibaba Cloud

Alibaba Cloud (also known as Aliyun) offers multiple templates that you can use to provision a Kubernetes environment. There are four main service categories:

  • Kubernetes: Self-managed Kubernetes deployed with three masters on ECS instances within a single zone. Worker nodes can be on either ECS or bare-metal.
  • Managed Kubernetes: Similar to the Kubernetes cluster option, except master nodes are managed by Alibaba Cloud.
  • Multi-AZ Kubernetes: Similar to the Kubernetes cluster option, except the self-managed master and worker instances can be deployed in separate availability zones.
  • Serverless Kubernetes: A Kubernetes service offering where you deploy container applications without having to manage and maintain cluster instances.

In this section, we will cover how to provision a highly available Multi-AZ Kubernetes cluster without needing to provision or manage master and etcd instances.

Getting ready

All the operations mentioned here require an Alibaba Cloud account (also known as Aliyun) with an AccessKey. If you don't have one already, go to https://account.alibabacloud.com and create an account.

How to do it…

This section will take you through how to configure a Kubernetes cluster on Alibaba Cloud. This section is further divided into the following subsections to make this process easier:

  • Installing the command-line tools to configure Alibaba Cloud services
  • Provisioning a highly available Kubernetes cluster on Alibaba Cloud
  • Connecting to Alibaba Container Service clusters

Installing the command-line tools to configure Alibaba Cloud services

For this recipe, we will use the Alibaba Cloud console and generate the API request parameters from the dashboard that will be used with the CLI. You will also need the Alibaba Cloud CLI, aliyun, and kubectl installed.

  1. Run the following command to download the aliyun tool:
$ curl -O https://aliyuncli.alicdn.com/aliyun-cli-linux-3.0.15-amd64.tgz
You can find the link to the latest version here: https://github.com/aliyun/aliyun-cli.
  2. Extract the files and install them:
$ tar -zxvf aliyun-cli*.tgz && sudo mv aliyun /usr/local/bin/.
  3. Verify the aliyun CLI version you're using:
$ aliyun --version
  4. If you haven't created an AccessKey, go to Security Management in your account and create one (https://usercenter.console.aliyun.com/#/manage/ak).
  5. Complete the CLI configuration by entering your AccessKey ID, AccessKey Secret, and region ID:
$ aliyun configure
Configuring profile '' in '' authenticate mode...
Access Key Id []: <Your AccessKey ID>
Access Key Secret []: <Your AccessKey Secret>
Default Region Id []: us-west-1
Default Output Format [json]: json (Only support json)
Default Language [zh|en] en: en
Saving profile[] ...Done.
  6. Enable bash/zsh autocompletion:
$ aliyun auto-completion
  7. Go to the Container Service console (https://cs.console.aliyun.com) to give permissions to the container service to access cloud resources. Here, select AliyunCSDefaultRole, AliyunCSServerlessKubernetesRole, AliyunCSClusterRole, and AliyunCSManagedKubernetesRole and click on Confirm Authorization Policy.

Make sure you have the Resource Orchestration Service (ROS) and Autoscaling services enabled since they are required to get Kubernetes clusters deployed. ROS is used to automatically provision and configure resources for auto-deployment, operation, and maintenance based on your template, while Autoscaling is used to adjust compute resources based on demand.

Provisioning a highly available Kubernetes cluster on Alibaba Cloud

Let's perform the following steps:

  1. Open a browser window and go to the Alibaba Cloud Virtual Private Cloud console at https://vpc.console.aliyun.com.
  2. Make sure you select a region with at least three zones (most of the regions in mainland China have more than three zones) and click on Create VPC.
  3. Give a unique name to your VPC and select an IPv4 CIDR block. In our example, this is 10.0.0.0/8.
  4. Enter a name for your first VSwitch (k8s-1), and select a zone (Beijing Zone A).
  5. Set an IPv4 CIDR block. In our example, we used 10.10.0.0/16.
  6. Click on the Add button and repeat steps 4 and 5 to get different zones. Use the following CIDR block information:

                       VSwitch 2         VSwitch 3
     Name:             k8s-2             k8s-3
     Zone:             Beijing Zone B    Beijing Zone E
     IPv4 CIDR Block:  10.20.0.0/16      10.30.0.0/16

  7. Click OK to create your VPC and VSwitches.
  8. Open the Aliyun Web console on your web browser (https://cs.console.aliyun.com).
  9. Click on Create Kubernetes Cluster.
  10. Select Standard Managed Cluster.
  11. Click on the Multi-AZ Kubernetes tab, give your cluster a name, and select the same region that you used to create your VPCs and VSwitches.
  12. If you have selected the same region, the VPC dropdown will be populated with k8s-devops-cookbook-vpc. Now, select all three VSwitches that we've created:
  13. Set the instance types for the Master node configuration in each zone.
  14. Set the instance type for the Worker node configuration in each zone and the number of nodes in every zone to 3. Otherwise, use the defaults.
  15. Select the Kubernetes version (1.12.6-aliyun.1, at the time of writing).
  16. Select Key Pair Name from the drop-down menu, or create one by clicking Create a new key pair:
  17. Alibaba offers two CNI options: Flannel and Terway. The difference is explained in the There's more… section of this recipe. Leave the default network options using Flannel. The default parameters support up to 512 servers in the cluster.
  18. Monitoring and logging will be explained in Chapter 8, Observability and Monitoring on Kubernetes, and Chapter 10, Logging on Kubernetes. Therefore, this step is optional. Check the Install cloud monitoring plug-in on your ECS and Using Log Service options to enable monitoring and logging.
  19. Now, click on Create to provision your Multi-AZ Kubernetes cluster. This step may take 15-20 minutes to complete.

Connecting to Alibaba Container Service clusters

To get access to your cluster on Alibaba Cloud, you need to follow these steps:

  1. To get the cluster's credentials, go to the Clusters menu and click on the cluster name you want to access:
  2. Copy the content displayed in the KubeConfig tab to your local machine's $HOME/.kube/config file:
  3. Verify your Kubernetes cluster:
$ kubectl get nodes

As an alternative, see the Viewing the Kubernetes Dashboard instructions under the There's more... section to manage your cluster.

How it works…

This recipe showed you how to provision a managed Kubernetes cluster on Alibaba Cloud using a cluster template.

Under the Container Service menu, Alibaba Cloud offers seven Kubernetes cluster templates. We used the Standard Managed Cluster here. This option lets you manage the worker nodes only and saves you the cost of resources and management for the master nodes.

By default, accounts support up to 20 clusters and 40 nodes in each cluster. You can request a quota increase by submitting a support ticket.

There's more…

As an alternative way of using the Alibaba Cloud console, you can use REST API calls through aliyuncli to create the ECS instances and your cluster. Follow these steps to do so:

  1. After you've configured your cluster options on your Alibaba Cloud console, click on Generate API request Parameters right under the Create button to generate POST request body content to be used with the aliyun CLI.
  2. Save the content in a file. In our case, this file is called cscreate.json.
  3. For an explanation of the additional parameters listed in this section, please refer to the Create a Kubernetes section at https://www.alibabacloud.com/help/doc-detail/87525.htm.
  4. Use the following command to create your cluster:
$ aliyun cs POST /clusters --header "Content-Type=application/json" \
--body "$(cat cscreate.json)"

The Alibaba Cloud Container Service provides two network plugin options for their Kubernetes clusters: Terway and Flannel.

Flannel is based on the community Flannel CNI plugin. Flannel is a very common and stable networking plugin that provides basic networking functionality. It is the recommended option for most use cases, except it does not support the Kubernetes NetworkPolicy. Terway is a network plugin developed by Alibaba Cloud CS. It is fully compatible with Flannel. Terway can define access policies between containers based on the Kubernetes NetworkPolicy. Terway also supports bandwidth limiting for containers.
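To illustrate the difference, here is a minimal NetworkPolicy that denies all ingress traffic to pods in the default namespace; a Terway-based cluster enforces it, whereas a Flannel-based cluster silently ignores it. The policy name is hypothetical:

$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF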

Configuring and managing Kubernetes clusters with Rancher

Rancher is a container management platform with the flexibility to create Kubernetes clusters with Rancher Kubernetes Engine (RKE) or cloud-based Kubernetes services, such as GKE, AKS, and EKS, which we discussed in the previous recipes.

In this section, we will cover recipes for configuring Rancher so that we can deploy and manage Kubernetes services.

Getting ready

Rancher can be installed on Ubuntu, RHEL/CentOS, RancherOS, or even on Windows Server. You can bring up Rancher Server in a high availability configuration or a single node. Refer to the See also... section for links to the alternative installation instructions. In this recipe, we will run Rancher on a single node.

How to do it…

This section will take you through how to configure and manage Kubernetes clusters with Rancher. To that end, this section is further divided into the following subsections to make this process easier:

  • Installing Rancher Server
  • Deploying a Kubernetes cluster
  • Importing an existing cluster
  • Enabling cluster and node providers

Installing Rancher Server

Follow these steps to install Rancher Server:

  1. Install a supported version of Docker. You can skip this step if you have Docker installed already:
$ sudo apt-get -y install apt-transport-https ca-certificates curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get -y install docker-ce && docker --version
  2. Add a user to a Docker group:
$ sudo usermod -a -G docker $USER
  3. To install Rancher Server, run the following command:
$ docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 rancher/rancher:latest
  4. Open a browser window and go to https://localhost. Replace localhost with your host's IP if necessary.
  5. Set a new password and click on Continue.
  6. Set the public IP address of Rancher server and click on Save URL. This IP needs to be externally accessible from your clusters.

Deploying a Kubernetes cluster

To deploy a new cluster, you need to follow these steps:

  1. Click on Add Cluster.
  2. Choose a provider. In our example, we will use GKE. Some settings for other providers might be slightly different:
  3. Enter a cluster name.

If you have your GCP service account JSON file that we saved previously, skip to step 9.

  4. From the GCP navigation menu, go to IAM and click on the Service accounts link.
  5. Click on Create Service Account.
  6. Enter a service account name and click Create.
  7. Add the required minimum permissions; that is, Compute Viewer, Viewer, Kubernetes Engine Admin, and Service Account User, and click Continue.
  8. Click on Create Key. Use JSON as the key type in order to save your service account.
  9. On the Rancher UI, click on Read from a file and load the service account JSON file you saved previously.
  10. Customize the Cluster Options as needed; otherwise, use the default settings and click on Create to deploy your Kubernetes cluster:

Your cluster will be listed and ready to be managed immediately on your Rancher dashboard.

Importing an existing cluster

To import an existing cluster, you need to follow these steps:

  1. Click on Add Cluster.
  2. Click on Import:
  3. Enter a cluster name and click on Create.
  4. Follow the instructions shown and copy and run the kubectl command displayed on the screen to an existing Kubernetes cluster. This command will look similar to the following if you are running with an untrusted/self-signed SSL certificate:
  5. By clicking on Done, your cluster will be listed and ready to manage immediately on your Rancher dashboard:

The last step may take a minute to complete. Eventually, the state of your cluster will turn from Pending to Active when it is ready.

Enabling cluster and node providers

To be able to support multiple providers, Rancher uses cluster and node drivers. If you don't see your provider on the list, then it is most likely not enabled.

To enable additional providers, follow these steps:

  1. From Tools, click on Drivers.
  2. Find your provider on the list and click Activate:

From the same page, you can also deactivate the providers you don't intend to use.

How it works…

This recipe showed you how to quickly run Rancher Server to manage your Kubernetes clusters.

In step 3, we used a single node installation using a default self-signed certificate method. For security purposes, SSL is required to interact with the clusters. Therefore, a certificate is required.

If you prefer to use your own certificate signed by a recognized CA instead, you can use the following command and provide the path to your certificates to mount them in your container by replacing the FULLCHAIN.pem and PRIVATEKEY.pem files with your signed certificates:

$ docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERTDIRECTORY>/<FULLCHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERTDIRECTORY>/<PRIVATEKEY.pem>:/etc/rancher/ssl/key.pem \
rancher/rancher:latest --no-cacerts

Using a recognized certificate will eliminate the security warning on the login page.

There's more…

It is also useful to have knowledge of the following information:

  • Bind mounting a host volume to keep data
  • Keeping user volumes persistent
  • Keeping data persistent on a host volume
  • Running Rancher on the same Kubernetes nodes

Bind mounting a host volume to keep data

When using the single node installation, the persistent data is kept on the /var/lib/rancher path in the container.

To keep data on the host, you can bind mount a host volume to a location using the following command:

$ docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /opt/rancher:/var/lib/rancher \
-v /var/log/rancher/auditlog:/var/log/auditlog \
rancher/rancher:latest

Bind mounts have limited functionality compared to volumes. When Rancher is started using the bind mount, a directory on the host machine will be mounted to the specified directory in the container.

Keeping user volumes persistent

When using RancherOS, only specific directories keep the data defined by the user-volumes parameter persistent.

To add additional persistent user-volumes, for example, add the /var/openebs directory:

$ ros config set rancher.services.user-volumes.volumes \[/home:/home,/opt:/opt,/var/lib/kubelet:/var/lib/kubelet,/etc/kubernetes:/etc/kubernetes,/var/openebs]
$ system-docker rm all-volumes
$ reboot

After rebooting, data in the specified directories will be persistent.

Running Rancher on the same Kubernetes nodes

To add the node where you run Rancher Server on a cluster, replace the default ports -p 80:80 -p 443:443 as follows and use the following command to start Rancher:

$ docker run -d --restart=unless-stopped \
-p 8080:80 -p 8443:443 rancher/rancher:latest

In this case, Rancher Server will be accessible through https://localhost:8443 instead of the standard 443 port.

See also

Configuring Red Hat OpenShift

In this recipe, we will learn how to deploy Red Hat OpenShift on AWS, bare-metal, or VMware vSphere VMs.

The steps in the Provisioning an OpenShift cluster recipe can be applied to deploy OpenShift on either VMs running on a virtualized environment or bare-metal servers.

Getting ready

All the operations mentioned here require a Red Hat account with active Red Hat Enterprise Linux and OpenShift Container Platform subscriptions. If you don't have one already, go to https://access.redhat.com and create an account.

When you deploy on VMs, make sure that the zones you create for your Kubernetes nodes are actually physically located on separate hypervisor nodes.

For this recipe, we need to have a minimum of six nodes with Red Hat Enterprise CoreOS installed on them. These nodes can be either bare-metal, VMs, or a mix of bare-metal and VMs.

How to do it…

This section will take you through how to configure Red Hat OpenShift. To that end, this section is further divided into the following subsections to make this process easier:

  • Downloading OpenShift binaries
  • Provisioning an OpenShift cluster
  • Connecting to OpenShift clusters

Downloading OpenShift binaries

Make sure you are on the Terminal of your first master and that you have an account with root access, or you are running as a superuser. Follow these steps:

  1. Go to https://cloud.redhat.com/openshift/install and download the latest OpenShift Installer:
  2. Extract the installer files on your workstation:
$ tar -xzf openshift-install-linux-*.tar.gz

The preceding command will create a file called openshift-install in the same folder.

Provisioning an OpenShift cluster

In this recipe, we will use the AWS platform to deploy OpenShift:

  1. To get your OpenShift cluster up, use the following command:
$ ./openshift-install create cluster
  2. Choose aws as your platform and enter your AWS Access Key ID and Secret Access Key.
  3. Choose your region. In our example, this is us-east-1.
  4. Select a base domain. In our example, this is k8s.containerized.me.
  5. Enter a cluster name.
  6. Copy Pull Secret from the Red Hat site and paste it onto the command line:
  7. After the installation is complete, you will see the console URL and credentials for accessing your new cluster, similar to the following:
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/ubuntu/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.os.k8s.containerized.me
INFO Login to the console with user: kubeadmin, password: ABCDE-ABCDE-ABCDE-ABCDE
  8. Switch to the Red Hat site and click on the Download Command-Line Tools link to download openshift-client.
  9. Extract the openshift-client files in your workstation:
$ tar -xzf openshift-client-linux-*.tar.gz && sudo mv oc /usr/local/bin

The preceding command will create the kubectl and oc files in the same folder and move the oc binary to your PATH.

Connecting to OpenShift clusters

To connect to OpenShift clusters, follow these steps:

  1. To get access to your OpenShift cluster, use the following command:
$ export KUBECONFIG=~/auth/kubeconfig
  1. Log in to your OpenShift cluster after replacing password and cluster address:
$ oc login -u kubeadmin -p ABCDE-ABCDE-ABCDE-ABCDE \
https://api.openshift.k8s.containerized.me:6443 \
--insecure-skip-tls-verify=true

If you prefer to use the web console instead, open the web console URL address from the Provisioning an OpenShift cluster recipe, in step 7.

How it works…

This recipe showed you how to quickly deploy an OpenShift cluster on AWS.

In step 1, we created a cluster using the default configuration of the installer-provisioned infrastructure.

The installer asked a series of questions regarding user information and used mostly default values for other configuration options. These defaults can be edited and customized if needed using the install-config.yaml file.

To see the defaults that were used for the deployment, let's create an install-config.yaml file and view it:

$ ./openshift-install create install-config && cat install-config.yaml

As you can see from the following output, the file's default configuration creates a cluster consisting of three master and three worker nodes:

apiVersion: v1
baseDomain: k8s.containerized.me
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
...

Edit install-config.yaml as needed. Next time you create the cluster, new parameters will be used instead.
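One possible workflow (the directory name ocp-install is just an example) is to keep the customized file in a dedicated directory and point the installer at it; the installer picks up the install-config.yaml it finds there:

$ mkdir ocp-install && cp install-config.yaml ocp-install/
$ ./openshift-install create cluster --dir ocp-install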

There's more…

It is also useful to have knowledge of the following information:

  • Deleting your cluster

Deleting your cluster

To delete your cluster, use the following command:

$ ./openshift-install destroy cluster

This process will take a few minutes and, when finished, you will get a confirmation message.

See also

Configuring a Kubernetes cluster using Ansible

Powerful IT automation engines such as Ansible can be used to automate pretty much any day-to-day IT task, including the deployment of Kubernetes clusters on bare-metal clusters. In this section, we will learn how to deploy a simple Kubernetes cluster using Ansible playbooks.

Getting ready

    In this recipe, we will use an Ansible playbook. The examples that will be used in these recipes are accessible through the k8sdevopscookbook GitHub repository.

    Before you start executing the commands in this section's recipes, clone the Ansible playbook examples using the following command:

    $ git clone https://github.com/k8sdevopscookbook/src.git

    You will find the examples stored under the k8sdevopscookbook/src directory.

    How to do it…

    This section will take you through how to configure a Kubernetes cluster using Ansible. To that end, this section is further divided into the following subsections to make this process easier:

    • Installing Ansible
    • Provisioning a Kubernetes cluster using an Ansible playbook
    • Connecting to the Kubernetes cluster

    Installing Ansible

    In order to provision a Kubernetes cluster using an Ansible playbook, follow these steps:

    1. To install Ansible on your Linux workstation, first, we need to add the necessary repositories:
    $ sudo apt-get install software-properties-common
    $ sudo apt-add-repository --yes --update ppa:ansible/ansible
    2. Install Ansible using the following command:
    $ sudo apt-get update && sudo apt-get install ansible -y
    3. Verify its version and make sure Ansible is installed:
    $ ansible --version

    At the time this recipe was written, the latest Ansible version was 2.9.4.

    Provisioning a Kubernetes cluster using an Ansible playbook

    In order to provision a Kubernetes cluster using an Ansible playbook, follow these steps:

    1. Edit the hosts.ini file and replace the master and node IP addresses with your node IPs where you want Kubernetes to be configured:
    $ cd src/chapter1/ansible/ && vim hosts.ini
    2. The hosts.ini file should look as follows:
    [master]
    192.168.1.10
    [node]
    192.168.1.[11:13]
    [kube-cluster:children]
    master
    node
    3. Edit the groups_vars/all.yml file to customize your configuration. The following is an example of how to do this:
    kube_version: v1.14.0
    token: b0f7b8.8d1767876297d85c
    init_opts: ""
    kubeadm_opts: ""
    service_cidr: "10.96.0.0/12"
    pod_network_cidr: "10.244.0.0/16"
    calico_etcd_service: "10.96.232.136"
    network: calico
    network_interface: ""
    enable_dashboard: yes
    insecure_registries: []
    systemd_dir: /lib/systemd/system
    system_env_dir: /etc/sysconfig
    network_dir: /etc/kubernetes/network
    kubeadmin_config: /etc/kubernetes/admin.conf
    kube_addon_dir: /etc/kubernetes/addon
    4. Run the site.yaml playbook to create your cluster:
    $ ansible-playbook site.yaml

    Your cluster will be deployed based on your configuration.
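    Before running the playbook, or when a run fails, it can be useful to confirm that Ansible can reach every host in the inventory and that the playbook parses cleanly. A quick sketch, assuming the hosts.ini file shown above:

    $ ansible all -i hosts.ini -m ping
    $ ansible-playbook site.yaml --syntax-check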

    Connecting to the Kubernetes cluster

    To get access to your Kubernetes cluster, you need to follow these steps:

    1. Copy the configuration file from the master1 node:
    $ scp root@master:/etc/kubernetes/admin.conf ~/.kube/config
    2. Now, use kubectl to manage your cluster.

    See also

    Troubleshooting installation issues

    Kubernetes consists of many loosely coupled components and APIs. Based on environmental differences, you may run into problems where a little bit more attention is required to get everything up and running. Fortunately, Kubernetes provides many ways to point out problems.

    In this section, we will learn how to get cluster information in order to troubleshoot potential issues.

    How to do it…

    Follow these steps to gather cluster information in order to troubleshoot potential issues:

    1. Create a file dump of the cluster state called cluster-state:
    $ kubectl cluster-info dump --all-namespaces \
    --output-directory=$PWD/cluster-state
    2. Display the master and service addresses:
    $ kubectl cluster-info
    Kubernetes master is running at https://172.23.1.110:6443
    Heapster is running at https://172.23.1.110:6443/api/v1/namespaces/kube-system/services/heapster/proxy
    KubeDNS is running at https://172.23.1.110:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    3. Show the resource usage of the us-west-2.compute.internal node:
    $ kubectl top node us-west-2.compute.internal
    NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
    us-west-2.compute.internal 42m 2% 1690Mi 43%
    4. Mark the us-west-2.compute.internal node as unschedulable:
    $ kubectl cordon us-west-2.compute.internal
    5. Safely evict all the pods from the us-west-2.compute.internal node for maintenance:
    $ kubectl drain us-west-2.compute.internal
    6. Mark the us-west-2.compute.internal node as schedulable after maintenance:
    $ kubectl uncordon us-west-2.compute.internal

    How it works…

    This recipe showed you how to quickly troubleshoot common Kubernetes cluster issues.

    In step 1, when the kubectl cluster-info dump command was executed with the --output-directory parameter, Kubernetes dumped the content of the cluster state under a specified folder. You can see the full list using the following command:

    $ tree ./cluster-state
    ./cluster-state
    ├── default
    │ ├── daemonsets.json
    │ ├── deployments.json
    │ ├── events.json
    │ ├── pods.json
    │....

    In step 4, we marked the node as unavailable using the kubectl cordon command. Kubernetes has a concept of scheduling applications, meaning that it assigns pods to nodes that are available. If you know in advance that an instance on your cluster will be terminated or updated, you don't want new pods to be scheduled on that specific node. Cordoning means patching the node with node.Spec.Unschedulable=true. When a node is set as unavailable, no new pods will be scheduled on that node.

    In step 5, we used the kubectl drain command to evict the existing pods, because cordoning alone will not have an impact on the currently scheduled pods. The eviction API takes disruption budgets into account. If set by the owner, disruption budgets limit the number of pods of a replicated application that are down simultaneously from voluntary disruptions. If this isn't supported or set, the eviction API will simply delete the pods on the node after the grace period.
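    In practice, kubectl drain often needs a couple of extra flags, for example to skip DaemonSet-managed pods or to discard emptyDir data. The exact flag names vary slightly between kubectl versions, so treat this as a sketch:

    $ kubectl drain us-west-2.compute.internal --ignore-daemonsets --delete-local-data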

    There's more…

    It is also useful to have knowledge of the following information:

    • Setting log levels

    Setting log levels

    When using the kubectl command, you can set the output verbosity with the --v flag, followed by an integer for the log level, which is a number between 0 and 9. The general Kubernetes logging conventions and the associated log levels are described in the Kubernetes documentation at https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging.

    It is useful to get the output details in a specific format by adding one of the following parameters to your command:

    • -o=wide is used to get additional information on a resource. An example is as follows:
    $ kubectl get nodes -owide
    NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    ip-192-168-41-120.us-west-2.compute.internal Ready <none> 84m v1.13.8-eks-cd3eb0 192.168.41.120 34.210.108.135 Amazon Linux 2 4.14.133-113.112.amzn2.x86_64 docker://18.6.1
    ip-192-168-6-128.us-west-2.compute.internal Ready <none> 84m v1.13.8-eks-cd3eb0 192.168.6.128 18.236.119.52 Amazon Linux 2 4.14.133-113.112.amzn2.x86_64 docker://18.6.1
    • -o=yaml is used to return the output in YAML format. An example is as follows:
    $ kubectl get pod nginx-deployment-5c689d88bb-qtvsx -oyaml
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
          nginx'
      creationTimestamp: 2019-09-25T04:54:20Z
      generateName: nginx-deployment-5c689d88bb-
      labels:
        app: nginx
        pod-template-hash: 5c689d88bb
      name: nginx-deployment-5c689d88bb-qtvsx
      namespace: default
    ...

    As you can see, the output of the -o=yaml parameter can be used to create a manifest file out of an existing resource as well.
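    For example, the following redirects the YAML output of an existing Deployment into a file that can later be edited and re-applied; the Deployment name is the one from the preceding example:

    $ kubectl get deployment nginx-deployment -o yaml > nginx-deployment.yaml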

    See also
