Understanding Kubernetes and Helm
Thank you for choosing this book, Learn Helm. If you are interested in this book, you are probably aware of the challenges that modern applications bring. Teams face tremendous pressure to ensure that applications are lightweight and scalable. Applications must also be highly available and able to withstand varying loads. Historically, applications have most commonly been deployed as monoliths: large, single-tiered applications served on a single system. As time has progressed, the industry has shifted toward a microservices approach: small, multi-tiered applications served on multiple systems. Because these microservices are often deployed using container technology, the industry has started leveraging tools such as Kubernetes to orchestrate and scale containerized microservices.
Kubernetes, however, comes with its own set of challenges. While it is an effective container orchestration tool, it presents a steep learning curve that can be difficult for teams to overcome. One tool that helps simplify the challenges of running workloads on Kubernetes is Helm. Helm allows users to more simply deploy and manage the life cycle of Kubernetes applications. It abstracts many of the complexities behind configuring Kubernetes applications and allows teams to be more productive on the platform.
In this book, you will explore each of the benefits offered by Helm and discover how Helm makes application deployment much simpler on Kubernetes. You will first assume the role of an end user, consuming Helm charts written by the community and learning the best practices behind leveraging Helm as a package manager. As this book progresses, you will assume the role of a chart developer and learn how to package Kubernetes applications in ways that are easily consumable and efficient. Toward the end of this book, you’ll learn about advanced patterns around application management and security with Helm.
In this chapter, we will cover the following main topics:
- From monoliths to modern microservices
- What is Kubernetes?
- Deploying a Kubernetes application
- Approaches to resource management
- Resource configuration challenges
- Helm to the rescue!
From monoliths to modern microservices
Software applications are a fundamental component of most modern technology. Whether they take the form of a word processor, web browser, or streaming service, they enable user interaction to complete one or more tasks. Applications have a long and storied history, from the days of Electronic Numerical Integrator and Computer (ENIAC)—the first general-purpose computer—to taking man to the moon in the Apollo space missions, to the rise of the World Wide Web (WWW), social media, and online retail.
These applications can operate on a wide range of platforms and systems, leveraging either physical or virtual computing resources. Depending on their purpose and resource requirements, entire machines may be dedicated to serving the compute and/or storage needs of an application. Fortunately, thanks in part to the realization of Moore’s law, the power and performance of microprocessors initially increased with each passing year, while the overall cost associated with the physical resources decreased. This trend has subsided in recent years, but its persistence over the first 30 years of the microprocessor’s existence was instrumental to the advances in technology.
Software developers took full advantage of this opportunity and bundled more features and components into their applications. As a result, a single application could consist of several smaller components, each of which could have been written as its own individual service. Initially, bundling components together yielded several benefits, including a simplified deployment process. However, as industry trends began to change and businesses focused more on the ability to deliver features rapidly, the design of a single deployable application brought with it a number of challenges. Whenever a change was required, the entire application and all of its underlying components needed to be validated once again to ensure the change had no adverse effects. This process potentially required coordination across multiple teams, which slowed the overall delivery of the feature.
Delivering features more rapidly, especially across traditional organizational divisions, was also something that organizations wanted. This concept of rapid delivery is fundamental to a practice called development-operations (DevOps), which rose in popularity around 2010. DevOps encouraged more iterative changes to applications over time, instead of extensive planning prior to development. In order to be sustainable in this new model, architectures evolved from a single large application toward several smaller applications that could be delivered faster. Because of this change in thinking, the more traditional application design was labeled monolithic, and the new approach of breaking an application down into separately deployable components gave those components their name: microservices. Microservices brought with them several desirable traits, including the ability to develop and deploy services independently of one another, as well as to scale them (increase the number of instances) independently.
The change in software architecture from monolithic to microservices also resulted in re-evaluating how applications are packaged and deployed at runtime. Traditionally, entire machines were dedicated to either one or two applications. Now, as microservices reduced the resources required by a single application, dedicating an entire machine to one or two microservices was no longer an efficient use of resources.
Fortunately, a technology called containers was introduced and gained popularity, filling in many of the gaps needed to create a microservices runtime environment. Red Hat defines a container as “a set of one or more processes that are isolated from the rest of the system and includes all of the files necessary to run”(https://www.redhat.com/en/topics/containers/whats-a-linux-container#:~:text=A%20Linux%C2%AE%20container%20is,testing%2C%20and%20finally%20to%20production.). Container technology has a long history in computing, dating back to the 1970s. Many of the foundational container technologies, including chroots (the ability to change the root directory of a process and any of its children to a new location on the filesystem) and jails, are still in use today.
The combination of a simple and portable packaging model, along with the ability to create many isolated sandboxes on each physical machine or virtual machine (VM), led to the rapid adoption of containers in the microservices space. This rise in container popularity in the mid-2010s can also be attributed to Docker, which brought containers to the masses through simplified packaging and runtimes that could be utilized on Linux, macOS, and Windows. The ability to distribute container images with ease led to the increase in the popularity of container technologies. This was because first-time users did not need to know how to create images but instead could make use of existing images that were created by others.
Containers and microservices became a match made in heaven. Applications had a packaging and distribution mechanism, along with the ability to share the same compute footprint while taking advantage of being isolated from one another. However, as more and more containerized microservices were deployed, the overall management became a concern. How do you ensure the health of each running container? What do you do if a container fails? What happens if your underlying machine does not have the compute capacity required? Enter Kubernetes, which helped answer this need for container orchestration.
In the next section, we will discuss how Kubernetes works and provides value to an enterprise.
What is Kubernetes?
Kubernetes, often abbreviated as k8s (pronounced as kaytes), is an open source container orchestration platform. Originating from Google’s proprietary orchestration tool, Borg, the project was open sourced in 2014 and renamed Kubernetes. Following the v1.0 release on July 21, 2015, Google and the Linux Foundation partnered to form the Cloud Native Computing Foundation (CNCF), which acts as the current maintainer of the Kubernetes project.
The word Kubernetes is Greek for helmsman or pilot. A helmsman is the person in charge of steering a ship, working closely with the ship’s officer to ensure a safe and steady course and the overall safety of the crew. Kubernetes has similar responsibilities with regard to containers and microservices: it is in charge of the orchestration and scheduling of containers, steering them to proper worker nodes that can handle their workloads. Kubernetes will also help ensure the safety of those microservices by providing high availability (HA) and health checks.
Let’s review some of the ways Kubernetes helps simplify the management of containerized workloads.
Container orchestration
The most prominent feature of Kubernetes is container orchestration. This is a fairly loaded term, so we’ll break it down into different pieces.
Container orchestration is about placing containers on certain machines from a pool of compute resources based on their requirements. The simplest use case for container orchestration is deploying containers on machines that can handle their resource requirements. In the following diagram, there is an application that requests 2 gibibytes (Gi) of memory (Kubernetes resource requests typically use power-of-two units, so this is roughly equivalent to 2 gigabytes (GB)) and one central processing unit (CPU) core. This means that the container will be allocated 2 Gi of memory and 1 CPU core from the underlying machine that it is scheduled on. It is up to Kubernetes to track which machines, or nodes, have the required resources available and to place an incoming container on one of them. If a node does not have enough resources to satisfy the request, the container will not be scheduled on that node, and if none of the nodes in the cluster have enough free resources, the container will not be deployed until one does:

Figure 1.1 – Kubernetes orchestration and scheduling
Container orchestration relieves you of the effort required to track the available resources on machines. Kubernetes and other monitoring tools provide insight into these metrics. So, a developer can simply declare the number of resources they expect a container to use, and Kubernetes will take care of the rest on the backend.
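To make this concrete, here is a minimal sketch of a Pod specification that declares the resource requests from the example above. The names and image used are illustrative, not taken from this book's examples:

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: main
      image: busybox          # illustrative image
      resources:
        requests:
          memory: 2Gi         # request 2 gibibytes of memory
          cpu: "1"            # request 1 CPU core

The scheduler compares these requests against the free capacity on each node when deciding where to place the Pod.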
HA
Another benefit of Kubernetes is that it provides features that help take care of redundancy and HA. HA is a characteristic that prevents application downtime. It is commonly achieved by placing a load balancer in front of multiple instances of an application and splitting incoming traffic across them. The premise of HA is that if one instance of an application goes down, other instances are still available to accept incoming traffic. In this regard, downtime is avoided, and the end user, whether a human or another microservice, remains completely unaware that there was a failed instance of the application. Kubernetes provides a networking mechanism, called a service, that allows applications to be load-balanced. We will talk about services in greater detail later on, in the Deploying a Kubernetes application section of this chapter.
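As a brief preview, here is a minimal sketch of a service resource that load-balances traffic across Pods carrying a given label. The name, label, and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # traffic is split across Pods with this label
  ports:
    - port: 80         # port exposed by the service
      targetPort: 8080 # port the application containers listen on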
Scalability
Given the lightweight nature of containers and microservices, developers can use Kubernetes to rapidly scale their workloads, both horizontally and vertically.
Horizontal scaling is the act of deploying more container instances. If a team running their workloads on Kubernetes were expecting increased load, they could simply tell Kubernetes to deploy more instances of their application. Since Kubernetes is a container orchestrator, developers would not need to worry about the physical infrastructure that those applications would be deployed on. It would simply locate a node within the cluster with the available resources and deploy the additional instances there. Each extra instance would be added to a load-balancing pool, which would allow the application to continue to be highly available.
Vertical scaling is the act of allocating additional memory and CPU to an application. Developers can modify the resource requirements of their applications while they are running. This will prompt Kubernetes to redeploy the running instances and reschedule them on nodes that can support the new resource requirements. Depending on how this is configured, Kubernetes can redeploy each instance in a way that prevents downtime while the new instances are being deployed.
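For example, assuming a deployment named my-deployment already exists, horizontal scaling can be as simple as running the following command:

kubectl scale deployment my-deployment --replicas=3

Vertical scaling, by contrast, involves changing the resources section of the Pod specification (for example, raising the memory request from 2Gi to 4Gi), which causes the affected Pods to be rescheduled.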
Active community
The Kubernetes community is an incredibly active open source community. As a result, Kubernetes frequently receives patches and new features. The community has also made many contributions to documentation, both to the official Kubernetes documentation and to professional or hobbyist blog websites. In addition to documentation, the community is highly involved in planning and attending meetups and conferences around the world, which helps increase education about the platform and innovation surrounding it.
Another benefit of Kubernetes’ large community is the number of different tools built to augment the abilities that are provided. Helm is one such tool. As we’ll see later in this chapter and throughout this book, Helm—a tool built by members of the Kubernetes community—vastly improves a developer’s experience by simplifying application deployments and life cycle management.
With an understanding of the benefits Kubernetes brings to managing containerized workloads, let’s now discuss how an application can be deployed in Kubernetes.
Deploying a Kubernetes application
Deploying an application on Kubernetes is fundamentally similar to deploying an application outside of Kubernetes. All applications, whether containerized or not, must consider the following configuration details:
- Networking
- Persistent storage and file mounts
- Resource allocation
- Availability and redundancy
- Runtime configuration
- Security
Configuring these details on Kubernetes is done by interacting with the Kubernetes application programming interface (API). The Kubernetes API serves as a set of endpoints that can be interacted with to view, modify, or delete different Kubernetes resources, many of which are used to configure different details of an application.
There are many different Kubernetes API resources, but the following table shows some of the most common ones:
| Resource Name | Definition |
| --- | --- |
| Pod | The smallest deployable unit in Kubernetes. Encapsulates one or more containers. |
| Deployment | Used to deploy and manage a set of Pods. Maintains the desired number of Pod replicas (1 by default). |
| StatefulSet | Similar to a Deployment resource, except a StatefulSet maintains a sticky identity for each Pod replica and can also provision PersistentVolumeClaim resources (explained further down in this table) unique to each Pod. |
| Service | Used to load-balance traffic between Pod replicas. |
| Ingress | Provides external access to services within the cluster. |
| ConfigMap | Stores application configuration to decouple configuration from code. |
| Secret | Used to store sensitive data such as credentials and keys. Data stored in Secret resources is only obfuscated using Base64 encoding, so administrators must ensure that proper access controls are in place. |
| PersistentVolumeClaim | A request for storage by a user. Used to provide persistence for running Pods. |
| Role | Represents a set of permissions to be allowed against the Kubernetes API. |
| RoleBinding | Grants the permissions defined in a Role to a user or set of users. |
Table 1.1 – Common Kubernetes resources
Creating resources is central to deploying and managing an application on Kubernetes, but what does a user need to do to create them? We will explore this question further in the next section.
Approaches to resource management
In order to deploy an application on Kubernetes, we need to interact with the Kubernetes API to create resources. kubectl is the tool we use to talk to the Kubernetes API. It is a command-line interface (CLI) tool used to abstract the complexity of the Kubernetes API from end users, allowing them to work more efficiently on the platform.
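For instance, if you want to see the full list of resource types your cluster's API supports, including those shown in Table 1.1, you can run the following command:

kubectl api-resources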
Let’s discuss how kubectl can be used to manage Kubernetes resources.
Imperative and declarative configurations
The kubectl tool provides a series of subcommands to create and modify resources in an imperative fashion. Here is a small list of these commands:

- create
- describe
- edit
- delete

The kubectl commands follow a common format, as shown here:
kubectl <verb> <noun> <arguments>
The verb refers to one of the kubectl subcommands, and the noun refers to a particular Kubernetes resource. For example, the following command can be run to create a deployment:

kubectl create deployment my-deployment --image=busybox

This would instruct kubectl to talk to the Deployment API endpoint and create a new deployment called my-deployment, using the busybox image from Docker Hub.
You could use kubectl to get more information on the deployment that was created by using the describe subcommand, as follows:

kubectl describe deployment my-deployment

This command would retrieve information about the deployment and present it in a readable format that allows developers to inspect the live my-deployment deployment on Kubernetes.
If a change to the deployment was desired, a developer could use the edit subcommand to modify it in place, like this:

kubectl edit deployment my-deployment

This command would open a text editor, allowing you to modify the deployment.
When it comes to deleting a resource, the user could run the delete subcommand, as illustrated here:

kubectl delete deployment my-deployment

This would call the appropriate API endpoint to delete the my-deployment deployment.
Kubernetes resources, once created, exist in the cluster as JavaScript Object Notation (JSON) resource files, which can be exported as YAML Ain’t Markup Language (YAML) files for greater human readability. An example resource in YAML format can be seen here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
        - name: main
          image: busybox
          args:
            - sleep
            - infinity

The preceding YAML presents a very basic use case. It deploys the busybox image from Docker Hub and runs the sleep command indefinitely to keep the Pod running.
While it may be easier to create resources imperatively using the kubectl subcommands we have just described, Kubernetes also allows you to manage the YAML resources directly, in a declarative fashion, to gain more control over resource creation. The kubectl subcommands do not always let you configure every possible resource option, but writing YAML files directly allows you to create resources more flexibly and fill in the gaps that the kubectl subcommands may leave.

When creating resources declaratively, users first write out the resource they want to create in YAML format. Next, they use the kubectl tool to apply the resource against the Kubernetes API. While imperative configuration uses a variety of kubectl subcommands to manage resources, declarative configuration relies primarily on a single subcommand: apply.
Declarative configuration often takes the following form:
kubectl apply -f my-deployment.yaml
This command gives Kubernetes a YAML resource that contains a resource specification, although the JSON format can be used as well. Kubernetes infers the action to perform on resources (create or modify) based on whether or not they exist.
An application may be configured declaratively by following these steps:
- First, the user can create a file called deployment.yaml and provide a YAML-formatted specification for the deployment. We will use the same example as before, as follows:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
            - name: main
              image: busybox
              args:
                - sleep
                - infinity
- A deployment can then be created with the following command:

    kubectl apply -f deployment.yaml

  Upon running this command, Kubernetes will attempt to create a deployment in the way you specified.
- If you wanted to make a change to the deployment by changing the number of replicas to 2, you would first modify the deployment.yaml file, as follows:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
            - name: main
              image: busybox
              args:
                - sleep
                - infinity
- You would then apply the change with kubectl apply, like this:

    kubectl apply -f deployment.yaml

  After running that command, Kubernetes would apply the provided deployment declaration over the previously applied deployment. At this point, the application would scale up from a replica value of 1 to 2.
- When it comes to deleting an application, the Kubernetes documentation actually recommends doing so in an imperative manner; that is, using the delete subcommand instead of apply, as illustrated here:

    kubectl delete -f deployment.yaml

  As you can see, the delete subcommand uses the -f flag to delete the resources defined in the given file.
With an understanding of how Kubernetes resources are created, let’s now discuss some of the challenges involved in resource configuration.
Resource configuration challenges
In the previous section, we covered how Kubernetes has two different configuration methods—imperative and declarative. One question to consider is this: What challenges do users need to be aware of when creating Kubernetes resources with imperative and declarative methodologies?
Let’s discuss some of the most common challenges.
The many types of Kubernetes resources
First of all, as described in the Deploying a Kubernetes application section, there are many different types of resources in Kubernetes. In order to be effective on Kubernetes, developers need to be able to determine which resources are required to deploy their applications, and they need to understand them at a deep enough level to configure them appropriately. This requires a lot of knowledge of and training on the platform. While understanding and creating resources may already sound like a large hurdle, this is actually just the beginning of many different operational challenges.
Keeping live and local states in sync
A method of configuring Kubernetes resources that we would encourage is to maintain the configuration in source control for teams to edit and share, which also allows the source control repository to become the source of truth. The configuration defined in source control (referred to as the local state) is then applied to the Kubernetes environment, and the resources become live, entering what can be called the live state. This sounds simple enough, but what happens when developers need to make changes to their resources? The proper answer would be to modify the files in source control and apply the changes to synchronize the local state to the live state. However, this isn’t what always ends up happening. It is often simpler, in the short term, to modify the live resource in place with kubectl edit or kubectl patch and skip over modifying the local files altogether. This results in inconsistency between the local and live states and makes it difficult to scale applications on Kubernetes.
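One way to detect this kind of drift, assuming the local files are still available, is to compare them against the live state before applying them:

kubectl diff -f deployment.yaml

This prints the differences between the file on disk and the resource currently running in the cluster, making it easier to spot changes that were made directly to the live state.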
Application life cycles are hard to manage
Life cycle management is a loaded term, but in this context, we’ll refer to it as the concept of installing, upgrading, and rolling back applications. In the Kubernetes world, an installation would include API resources for deploying and configuring an application. The initial installation would create what we refer to here as version 1 of an application.
An upgrade, then, can be thought of as a modification to one or many of those Kubernetes resources. Each batch of edits can be thought of as a single upgrade. A developer could modify a single service resource, which would bump the version number to version 2. The developer could then modify a deployment, a configmap, and a service at the same time, bumping the version count to version 3.
As newer versions of an application continue to be rolled out onto Kubernetes, it becomes more difficult to keep track of changes that have occurred across relevant API resources. Kubernetes, in most cases, does not have an inherent way of keeping a history of changes. While this makes upgrades harder to keep track of, it also makes restoring a prior version of an application much more difficult. Say, for example, a developer previously made an incorrect edit on a particular resource. How would a team know where to roll back to? The n-1 case is particularly easy to work out, as that is the most recent version. What happens, however, if the latest stable release was five versions ago? Teams often end up scrambling to resolve issues because they cannot quickly identify the latest stable configuration that worked previously.
Resource files are static
This is a challenge that primarily affects the declarative configuration style of applying YAML resources. Part of the difficulty in following a declarative approach is that Kubernetes resource files are not natively designed to be parameterized. Resource files are largely designed to be written out in full before being applied, and the contents remain the source of truth (SOT) until the file is modified. When dealing with Kubernetes, this can be a frustrating reality. Some API resources can be lengthy, containing many different customizable fields, and it can be quite cumbersome to write and configure YAML resources in full.
Static files lend themselves to becoming boilerplate. Boilerplate represents text or code that remains largely consistent in different but similar contexts. This becomes an issue if developers manage multiple different applications, where they could potentially manage multiple different deployment resources, multiple different services, and so on. In comparing the different applications’ resource files, you may find large numbers of similar YAML configurations between them.
The following screenshot depicts an example of two resources with significant boilerplate configuration between them. The blue text denotes lines that are boilerplate, while the red text denotes lines that are unique:

Figure 1.2 – An example of two resources with boilerplate
Notice, in this example, that both files are almost exactly the same. When managing files that are as similar as this, boilerplate becomes a major headache for teams managing their applications in a declarative fashion.
Helm to the rescue!
Over time, the Kubernetes community discovered that creating and maintaining Kubernetes resources to deploy applications is difficult. This prompted the development of a simple yet powerful tool that would allow teams to overcome the challenges posed by deploying applications on Kubernetes. The tool that was created is called Helm. Helm is an open source tool used for packaging and deploying applications on Kubernetes. It is often referred to as the Kubernetes package manager because of its similarities to any other package manager you would find on your favorite operating system (OS). Helm is widely used throughout the Kubernetes community and is a CNCF graduated project.
Given Helm’s similarities to traditional package managers, let’s begin exploring Helm by first reviewing how a package manager works.
Understanding package managers
Package managers are used to simplify the process of installing, upgrading, reverting, and removing a system’s applications. These applications are defined as packages that contain metadata around target software and its dependencies.
The idea behind package managers is simple. First, the user passes the name of a software package as an argument. The package manager then performs a lookup against a repository to see whether that package exists. If it is found, the package manager installs the application defined by the package and its dependencies to specified locations on the system.
Package managers make managing software very easy. As an example, let’s imagine you wanted to install htop, a Linux system monitor, to a Fedora machine. Installing this would be as simple as typing a single command, as follows:

dnf install htop --assumeyes

This instructs dnf, the Fedora package manager, to find htop in the Fedora package repository and install it. dnf also takes care of installing the htop package’s dependencies, so you don’t have to worry about installing its requirements beforehand. After dnf finds the htop package from the upstream repository, it asks you whether you’re sure you want to proceed. The --assumeyes flag automatically answers yes to this question and any other prompts that dnf may potentially ask.
Over time, newer versions of htop may appear in the upstream repository. dnf and other package managers allow users to efficiently upgrade to new versions of the software. The subcommand that allows users to upgrade using dnf is upgrade, as illustrated here:

dnf upgrade htop --assumeyes

This instructs dnf to upgrade htop to its latest version. It also upgrades its dependencies to the versions specified in the package’s metadata.
While moving forward is often better, package managers also allow users to move backward and revert an application to a prior version if necessary. dnf does this with the downgrade subcommand, as illustrated here:

dnf downgrade htop --assumeyes

This is a powerful process because the package manager allows users to quickly roll back if a critical bug or vulnerability is reported.
If you want to remove an application completely, a package manager can take care of that as well. dnf provides the remove subcommand for this purpose, as illustrated here:

dnf remove htop --assumeyes

In this section, we reviewed how the dnf package manager on Fedora can be used to manage a software package. Helm, as the Kubernetes package manager, is similar to dnf, both in its purpose and functionality. While dnf is used to manage applications on Fedora, Helm is used to manage applications on Kubernetes. We will explore this in greater detail next.
The Kubernetes package manager
Given that Helm was designed to provide an experience similar to that of package managers, experienced users of dnf or similar tools will immediately understand Helm’s basic concepts. Things become more complicated, however, when talking about the specific implementation details. dnf operates on RPM Package Manager (RPM) packages that provide executables, dependency information, and metadata. Helm, on the other hand, works with charts. A Helm chart can be thought of as a Kubernetes package. Charts contain the declarative Kubernetes resource files required to deploy an application. Similar to an RPM package, a chart can also declare one or more dependencies that the application needs in order to run.
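As a rough sketch, a typical chart is laid out as a directory of files similar to the following; the resource file names under templates/ are illustrative:

redis/
  Chart.yaml        # chart metadata: name, version, and dependencies
  values.yaml       # default configuration values for the chart
  templates/        # templated Kubernetes resource files
    deployment.yaml
    service.yaml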
Helm relies on repositories to provide widespread access to charts. Chart developers create declarative YAML files, package them into charts, and publish them to chart repositories. End users then use Helm to search for existing charts to deploy onto Kubernetes, similar to how end users of dnf will search for RPM packages to deploy to Fedora.
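For example, a commonly used public repository can be added and searched with the following commands (the repository URL shown is the location Bitnami has published; check the repository's own documentation for the current address):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo redis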
Let’s go through a basic example. Helm can be used to deploy Redis, an in-memory cache, to Kubernetes by using a chart from an upstream repository. This can be performed using Helm’s install command, as illustrated here:

helm install redis bitnami/redis --namespace=redis

This would install the redis chart from the bitnami repository to a Kubernetes namespace called redis. This installation would be referred to as the initial revision, or the initial installation of a Helm chart.
If a new version of the redis chart becomes available, users can upgrade to the new version using the upgrade command, as follows:

helm upgrade redis bitnami/redis --namespace=redis

This would upgrade redis to meet the specification defined by the newer redis chart.
With OSs, users may need to roll back an application if a bug or vulnerability is found. The same concern exists with applications on Kubernetes, and Helm provides the rollback command to handle this use case, as illustrated here:

helm rollback redis 1 --namespace=redis

This command would roll redis back to its first revision.
Finally, Helm provides the ability to remove redis altogether with the uninstall command, as follows:

helm uninstall redis --namespace=redis

Compare dnf and Helm’s subcommands, and the functions they serve, in the following table. Notice that dnf and Helm offer similar commands that provide a similar user experience (UX):
| dnf subcommands | Helm subcommands | Purpose |
| --- | --- | --- |
| install | install | Install an application and its dependencies. |
| upgrade | upgrade | Upgrade an application to a newer version. Upgrade dependencies as specified by the target package. |
| downgrade | rollback | Revert an application to a previous version. Revert dependencies as specified by the target package. |
| remove | uninstall | Delete an application. Each tool has a different philosophy around handling dependencies. |
Table 1.2 – Purpose of dnf and Helm subcommands
With an understanding of how Helm functions as a package manager, let’s discuss in greater detail the benefits that Helm brings to Kubernetes.
The benefits of Helm
Earlier in this chapter, we reviewed how Kubernetes applications are created by managing Kubernetes resources, and we discussed some of the challenges involved. Here are a few ways Helm can overcome these challenges.
Abstracting the complexity of Kubernetes resources
Let’s assume that a developer has been given the task of deploying a WordPress instance onto Kubernetes. The developer would need to create the resources required to configure its containers, network, and storage. The amount of Kubernetes knowledge required to configure such an application from scratch is high and is a big hurdle for new, and even intermediate, Kubernetes users to clear.
With Helm, a developer tasked with deploying a WordPress instance could simply search for WordPress charts from upstream chart repositories. These charts would have already been written by chart developers in the community and would already contain the declarative configuration required to deploy WordPress and a backing database. Vendor-owned chart repositories also tend to be well maintained, so teams using charts from them would not need to worry about keeping Kubernetes resources up to date. In this regard, developers with this kind of task would act as simple end users that consume Helm in a similar way to any other package manager.
Maintaining an ongoing history of revisions
Helm has a concept called release history. When a Helm chart is installed for the first time, Helm adds that initial revision to the history. The history is further modified as revisions increase via upgrades, keeping various snapshots of how the application was configured at varying revisions.
The following diagram depicts an ongoing history of revisions. The squares in blue illustrate resources that have been modified from their previous versions:

Figure 1.3 – An example of a revision history
The process of tracking each revision provides opportunities for rollback. Rollbacks in Helm are very simple. Users simply point Helm to a previous revision, and Helm reverts the live state to that of the selected revision. Helm allows users to roll back their applications as far as they desire, even back to the very first installation.
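The revision history for a release can be inspected at any time with the history subcommand, as shown here:

helm history redis --namespace=redis

Each entry lists a revision number that can be passed to helm rollback.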
Configuring declarative resources in a dynamic fashion
One of the biggest hassles with creating resources declaratively is that Kubernetes resources are static and cannot be parameterized. As you may recall from earlier, this results in resources becoming boilerplate across applications and similar configurations, making it more difficult for teams to configure their applications as code. Helm alleviates these issues by introducing values and templates.
Values can be thought of as parameters for charts. Templates are dynamically generated files based on a given set of values. These two constructs give chart developers the ability to write Kubernetes resources that are generated based on values that end users provide. By doing so, applications managed by Helm become more flexible, have less boilerplate, and are easier to maintain.
Values and templates allow users to do things such as this:
- Parameterize common fields, such as the image name in a deployment and the ports in a service.
- Generate long pieces of YAML configuration based on user input, such as volume mounts in a deployment or the data in a ConfigMap.
- Include or exclude resources based on user input.
The ability to dynamically generate declarative resource files makes it simpler to create YAML-based resources while still ensuring that applications are deployed in an easily reproducible fashion.
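To illustrate, here is a minimal sketch of how a chart developer might wire values into a template. The field names in values.yaml are chosen by the chart developer and are purely illustrative. A values.yaml file might contain the following:

replicaCount: 1
image: busybox

A file under templates/, such as templates/deployment.yaml, could then reference those values using Helm's template syntax:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: main
          image: {{ .Values.image }}

End users can then override replicaCount or image at install or upgrade time without ever touching the template itself.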
Simplifying local and live state synchronization
Package managers prevent users from having to manage all of the intricate details of an application and its dependencies. The same idea holds true with Helm. Using Helm’s values construct, users can provide configuration changes across an application’s life cycle by managing a small number of parameters instead of multiple full-length YAML resources. When the local state (values/parameters) is updated, Helm propagates the configuration change out to the relevant resources in Kubernetes. This workflow keeps Helm in control of managing intricate Kubernetes details and encourages users to manage the state locally instead of updating live resources directly.
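For instance, a single parameter can be changed at upgrade time with the --set flag; the parameter name shown here is hypothetical and depends on the chart being used:

helm upgrade redis bitnami/redis --namespace=redis --set replicaCount=2

Helm re-renders the chart's templates with the new value and applies the resulting changes to the relevant resources in the cluster.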
Deploying resources in an intelligent order
Helm simplifies application deployments by having a pre-determined order in which Kubernetes resources need to be created. This ordering exists to ensure that dependent resources are deployed first. For example, Secret instances and ConfigMap instances should be created before deployments, since a deployment would likely consume those resources as volumes. Helm performs this ordering without any interaction from the user, so this complexity is abstracted and prevents users from needing to understand the order in which resources should be applied.
Providing automated life cycle hooks
Similar to other package managers, Helm provides the ability to define life cycle hooks. Life cycle hooks are actions that take place automatically at different stages of an application’s life cycle. They can be used to do things such as the following:
- Perform a data backup on an upgrade.
- Restore data on a rollback.
- Validate a Kubernetes environment prior to installation.
Life cycle hooks are valuable because they abstract complexities around tasks that may not be Kubernetes-specific. For example, a Kubernetes user may not be familiar with the best practices behind backing up a database or may not know when such a task should be performed. Life cycle hooks allow experts to write automation that handles various life cycle tasks and prevents users from needing to handle them on their own.
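Hooks are regular Kubernetes resources in a chart's templates that carry a special annotation telling Helm when to run them. Here is a minimal sketch of a backup Job annotated to run before each upgrade; the image and its arguments are hypothetical:

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-before-upgrade
  annotations:
    "helm.sh/hook": pre-upgrade   # run this Job before every upgrade
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: example/backup-tool   # hypothetical backup image
          args: ["--run-backup"]       # hypothetical argument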
Summary
In this chapter, we began by exploring the trend of adopting microservice-based architectures to decompose monoliths into smaller applications. The creation of microservices that are more lightweight and easier to manage has led to utilizing containers as a packaging and runtime format to produce releases more frequently. Adopting containers introduced additional operational challenges, which were solved by using Kubernetes as a container orchestration platform to manage the container life cycle.
Our discussion turned to the various ways Kubernetes applications can be configured. These resources can be expressed using two distinct styles of application configuration: imperative and declarative. Each of these configuration styles contributes to a set of challenges involved in deploying Kubernetes applications, including the amount of knowledge required to understand how Kubernetes resources work and the challenge of managing application life cycles.
To better manage each of the assets that comprise an application, Helm was introduced as the package manager for Kubernetes. Through its rich feature set, the full life cycle of applications, from installation and upgrading to rollback and deletion, can be managed with ease.
In the next chapter, we’ll walk through the process of installing Helm and preparing an environment that can be used for following along with this book’s examples.
Further reading
For more information about the Kubernetes resources that make up an application, please see the Understanding Kubernetes Objects page from the Kubernetes documentation at https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/.
To reinforce some of the benefits of Helm discussed in this chapter, please refer to the Using Helm page of the Helm documentation at https://helm.sh/docs/intro/using_helm/. (This page also dives into some basic usage around Helm, which will be discussed throughout this book in greater detail.)
Questions
Here are some questions to test your knowledge of the chapter:
- What is the difference between a monolithic and a microservices application?
- What is Kubernetes? What kinds of problems was it designed to solve?
- What are some of the kubectl commands commonly used when deploying applications to Kubernetes?
- What challenges are often involved in deploying applications to Kubernetes?
- How does Helm function as a Kubernetes package manager? How does it address the challenges posed by Kubernetes?
- Imagine you want to roll back an application deployed on Kubernetes. Which Helm command allows you to perform this action? How does Helm keep track of your changes to make this rollback possible?
- What are the four primary Helm commands?