Evolution from Docker to Kubernetes
Process isolation has long been part of Linux in the form of control groups (cgroups) and namespaces. With cgroups, each process is limited in the resources (CPU, memory, and so on) it may consume. With a dedicated process (PID) namespace, processes within one namespace have no knowledge of processes running on the same node in other PID namespaces. Likewise, with a dedicated network namespace, processes cannot communicate with processes outside that namespace without explicit network configuration, even though they are running on the same node.
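These mechanisms can be observed directly through `/proc` on any Linux host. The short sketch below (a minimal illustration for this chapter, not part of any Docker or Kubernetes API) lists the namespaces the current process belongs to; two processes are isolated from each other exactly when these identifiers differ:

```python
import os

def namespace_ids(pid="self"):
    """Return the namespace identifiers of a process.

    On Linux, /proc/<pid>/ns contains one symlink per namespace type
    (pid, net, mnt, uts, ipc, ...). Two processes share a namespace
    exactly when the corresponding symlinks resolve to the same inode.
    """
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespace_ids().items():
        print(f"{name}: {ident}")
```

Running this inside a container and again on the host shows different identifiers for the isolated namespace types, which is precisely the isolation Docker builds on.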
With the release of the open-source Docker project in 2013, this process isolation became far easier for infrastructure and DevOps engineers to work with. Instead of managing namespaces and cgroups directly, engineers manage containers through Docker Engine. Docker containers leverage these Linux isolation mechanisms to run and manage microservices: each container gets its own cgroup and set of namespaces. Since its release, Docker has changed how developers build, share, and run applications, enabling them to deliver high-quality, secure apps quickly on the platform of their choice, whether that is Linux, Windows, serverless functions, or any other, using the tools and skills they already possess.
Before Docker, virtualization was primarily achieved through virtual machines (VMs), which required a full operating system for each application and therefore carried significant overhead in resources and performance. Docker introduced a lightweight, efficient, and portable alternative, initially built on LXC technology.
However, the problem of interdependency and complexity between processes remains, and orchestration platforms aim to solve it. While Docker simplified running single containers, it lacked built-in capabilities for managing container clusters, load balancing, auto-scaling, and deployment rollbacks, to name a few. Kubernetes, initially developed by Google and released as an open-source project in 2014, was designed to solve these challenges.
To better understand the natural evolution to Kubernetes, review some of the key advantages of Kubernetes over Docker:
- Kubernetes makes it easy to deploy, scale, and manage containerized applications on multiple nodes, ensuring they are always available
- It can automatically replace failed containers to keep applications running smoothly
- Kubernetes also includes built-in load balancing and service discovery to evenly distribute traffic among containers
- With declarative YAML files, Kubernetes lets you define how applications should behave, making environments easy to manage and duplicate
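As an illustration of the declarative style mentioned above, the following minimal Deployment manifest (all names and the image here are placeholders) asks Kubernetes for three identical replicas of a container; the platform then continuously reconciles the cluster toward that state, replacing failed Pods and spreading traffic across them:

```yaml
# Illustrative Deployment; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # Kubernetes keeps three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # any OCI-compliant image
        ports:
        - containerPort: 80
```

Applying this file with `kubectl apply -f deployment.yaml` creates the Deployment; reapplying the same file is idempotent, which is what makes environments so easy to duplicate.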
As Kubernetes adoption grew, the project deprecated Dockershim, the component that allowed the kubelet to drive Docker Engine directly as its container runtime, starting with version 1.20 (it was removed entirely in version 1.24). Kubernetes has since moved to containerd (a lightweight container runtime) and other OCI-compliant runtimes for better efficiency and performance.
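In practice, after the Dockershim removal the kubelet talks to whatever runtime is configured through the Container Runtime Interface (CRI). With containerd, this typically amounts to pointing the kubelet at containerd's CRI socket; the exact path below is containerd's common default and may vary by distribution:

```
# kubelet configuration (flag form); containerd's default CRI socket path
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

Because containerd speaks CRI natively, no Docker-specific shim sits between the kubelet and the runtime, which is where the efficiency gain comes from.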
As you have seen so far, Docker’s simplicity and approachability made containerization mainstream. However, as organizations began adopting containers at scale, new challenges emerged: managing hundreds or thousands of containers across multiple environments requires a more robust solution. This is where Kubernetes came into play. You should understand how Kubernetes evolved to address the complexities of deploying, scaling, and managing containerized applications in production environments, and learn the best practices for securing, managing, and scaling applications in a cloud-native world.
Kubernetes and its components are discussed in depth in the next section.