Managing Kubernetes Using Ansible

With the containerization of applications and the shift to microservices, Kubernetes-based platforms have become popular. Containerization and container orchestration with Kubernetes add extra layers and complexity to the infrastructure, which calls for automated solutions to manage the large number of components involved.

In the previous chapter, you learned about the capabilities of Ansible to build and manage container images and containers. When it comes to container orchestration tools, such as Kubernetes or Red Hat OpenShift, there are Ansible collections available with modules and plugins for supporting and managing your Kubernetes and Red Hat OpenShift clusters and resources.

Using Ansible for Kubernetes resource management helps you implement more integrations in your DevOps workflows and Continuous Integration/Continuous Deployment (CI/CD) pipelines, so that you can deploy your applications with greater flexibility.

In this...

Technical requirements

The following are the technical requirements to proceed with this chapter:

  • One Linux machine for the Ansible control node
  • A working Kubernetes cluster with API access (refer to https://minikube.sigs.k8s.io/docs/start to spin up a local Kubernetes cluster; see the example after this list)
  • Basic knowledge about containers and Kubernetes
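
If you do not have a cluster available, a local minikube cluster is enough to follow along. The following is a minimal sketch, assuming minikube and kubectl are already installed and the Docker driver is available on your machine:

$ minikube start --driver=docker
$ kubectl get nodes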

All the Ansible code, Ansible playbooks, commands, and snippets for this chapter can be found in the GitHub repository at https://github.com/PacktPublishing/Ansible-for-Real-life-Automation/tree/main/Chapter-11.

An introduction to Kubernetes

Kubernetes is an open source container orchestration platform where we can deploy and manage our containerized applications without worrying about the underlying layers. This service model resembles Platform as a Service (PaaS), in that developers have the freedom to deploy their applications and other required resources, such as storage, network, and secrets, without assistance from the platform team.

The Kubernetes platform contains many components to manage container deployment and orchestration, as shown in Figure 11.1:

Figure 11.1 – The components of a Kubernetes cluster (source: https://kubernetes.io/docs/concepts/overview/components/)

Let’s briefly have a look at these components in the following sections.

The Kubernetes control plane

The control plane is responsible for making decisions on behalf of the cluster and application, such as scheduling the application Pods, detecting and responding to...

Managing Kubernetes clusters using Ansible

Deploying a Kubernetes cluster involves many steps, including preparing the nodes, installing container runtime packages, and configuring networking. There are multiple methods we can use to deploy Kubernetes clusters in testing or production environments. The installation method also depends on your requirements, for example, whether you need a single-node cluster or a multi-node cluster with high availability (HA), or whether you need to scale the cluster on demand.

Kubespray is a production-grade Kubernetes cluster deployment method that uses Ansible as its foundation for provisioning and orchestration. Using Kubespray, it is possible to deploy a Kubernetes cluster on top of bare-metal servers, virtual machines, and private or public cloud platforms (for example, AWS, GCE, Azure, and OpenStack).
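
The following is a rough sketch of a typical Kubespray run; the inventory directory name (mycluster) is illustrative, and you should refer to the Kubespray documentation for the exact steps for your environment:

$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray
$ pip install -r requirements.txt
$ cp -rfp inventory/sample inventory/mycluster
$ vi inventory/mycluster/inventory.ini      # add your cluster nodes here
$ ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml --become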

Kubespray is highly customizable and you can configure the cluster with different Kubernetes components of your...

Configuring Ansible for Kubernetes

Ansible can communicate with Kubernetes clusters using the Kubernetes Python libraries or directly via the Kubernetes API, as shown in Figure 11.3:

Figure 11.3 – Communication between Ansible and Kubernetes

Ansible modules and plugins for managing Kubernetes are available in the kubernetes.core Ansible collection. (The Ansible Kubernetes collection was released as community.kubernetes prior to the release of kubernetes.core 1.1.) We will install, configure, and use the kubernetes.core collection in the following sections.
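
If the collection is not already included in your Ansible installation, you can install it from Ansible Galaxy, as follows:

$ ansible-galaxy collection install kubernetes.core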

Python requirements

To communicate with the Kubernetes or OpenShift API, Ansible uses the Python client for the OpenShift API (https://github.com/openshift/openshift-restclient-python). Before using any of the Kubernetes modules, you need to install the required Python libraries, as follows:

$ pip install openshift
$ pip install PyYAML

If you are using Ansible inside a Python virtual...

Deploying applications to Kubernetes using Ansible

Containerized applications can be deployed inside Kubernetes via the Kubernetes dashboard (web UI) or using the kubectl CLI (https://kubernetes.io/docs/reference/kubectl). By using Ansible, we can automate most of the deployment operations that take place inside our Kubernetes clusters. Since Ansible integrates easily into CI/CD pipelines, it is possible to achieve more control over your application deployments in a containerized environment such as Kubernetes.

Applications are deployed inside logically isolated groups called Kubernetes namespaces. There are default namespaces and Kubernetes cluster-related namespaces, and we can also create additional namespaces as required to deploy applications. Figure 11.16 demonstrates the relationship between Deployments, Pods, Services, and namespaces in a Kubernetes cluster:

Figure 11.16 – Kubernetes Deployments and namespaces
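
As a hedged sketch of what such a deployment playbook can look like (the todo-app namespace and the deployment.yml manifest path are illustrative names, not necessarily those used in the exercise), the kubernetes.core.k8s module can create the namespace and apply the Deployment definition:

---
- name: Deploy a sample application to Kubernetes
  hosts: localhost
  tasks:
    - name: Ensure the application namespace exists
      kubernetes.core.k8s:
        state: present
        kind: Namespace
        name: todo-app

    - name: Apply the Deployment definition file
      kubernetes.core.k8s:
        state: present
        namespace: todo-app
        src: deployment.yml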

In the following exercise,...

Scaling Kubernetes applications

The ReplicaSet resource in Kubernetes ensures that a specified number of application Pod replicas are running as part of the Deployment. This mechanism helps to scale the application horizontally whenever needed, without additional resource configuration. A ReplicaSet resource is created automatically when you create a Deployment resource in Kubernetes, as shown in Figure 11.28:

Figure 11.28 – A ReplicaSet resource created as part of Deployment

Specify the initial number of replicas inside the Deployment definition file, for example, replicas: 1. The ReplicaSet will then maintain that number of Pods based on the replica count.
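
For reference, the replicas setting sits under spec in the Deployment definition; the following is a minimal sketch in which the todo-app name and the container image are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-app
spec:
  replicas: 1            # initial number of Pod replicas
  selector:
    matchLabels:
      app: todo-app
  template:
    metadata:
      labels:
        app: todo-app
    spec:
      containers:
        - name: todo-app
          image: quay.io/example/todo-app:latest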

When there is extra traffic on the application Pods, you can scale the application using the kubectl scale command, as follows (scale the Deployment, not the ReplicaSet):

Figure 11.29 – Scaling an application using kubectl
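
The same scaling operation can be automated with the kubernetes.core.k8s_scale module; the following is a minimal task sketch in which the Deployment name, namespace, and replica count are illustrative:

- name: Scale the application Deployment to three replicas
  kubernetes.core.k8s_scale:
    kind: Deployment
    name: todo-app
    namespace: todo-app
    replicas: 3
    wait_timeout: 120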

Wait for the replication changes to take effect and check the resource...

Executing commands inside a Kubernetes Pod

In normal situations, we do not need to log in to a Pod or container, as the application is exposed on specific ports and Services communicate over those exposed ports. However, when there are issues, we need to access the containers and check what is happening inside by checking logs, accessing other Pods, or running any necessary troubleshooting commands.

Use the kubectl exec command if you are doing this troubleshooting or information gathering manually:

Figure 11.36 – Execute commands inside a Pod using the kubectl utility

However, when we automate Kubernetes operations using Ansible, we can use the k8s_exec module to automate these verification and validation tasks as well.
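
The following is a minimal task sketch using kubernetes.core.k8s_exec; the namespace, Pod name, and command are illustrative:

- name: Run a validation command inside the application Pod
  kubernetes.core.k8s_exec:
    namespace: todo-app
    pod: todo-app-6c9f7d8b5-xk2pq
    command: curl -s http://localhost:8080/health
  register: exec_output

- name: Display the command output
  ansible.builtin.debug:
    var: exec_output.stdout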

For such scenarios, we can deploy debug Pods using suitable images (for example, images with required utilities, such as ping, curl, or netstat) and execute validation commands from these Pods. A typical deployment scenario with test...

Summary

In this chapter, we learned about the Ansible collections for Kubernetes cluster and resource management. We started by covering the basics of Kubernetes components and discussed how to use Kubespray to deploy and manage Kubernetes clusters, along with the features it supports.

After that, we learned how to connect Ansible to a Kubernetes cluster to automate cluster operations. We used the Kubernetes Ansible collection to deploy applications and scale Deployments, and we also learned how to execute commands inside a running Kubernetes Pod using Ansible, which can be utilized for validation and troubleshooting purposes. Overall, this chapter provided a brief introduction to Kubernetes automation using Ansible, including the Kubernetes content collection and the methods for connecting Ansible to Kubernetes.

In the next chapter, you will learn about the different available methods of integrating your CI/CD and communication tools using Ansible...

Further reading

For more information on the topics covered in this chapter, please refer to the following links:
