Software applications are a fundamental component of most modern technology. Whether they take the form of a word processor, web browser, or streaming service, they enable users to interact with a system to complete one or more tasks. Applications have a long and storied history: from the Electronic Numerical Integrator and Computer (ENIAC), the first general-purpose computer, to guiding the Apollo missions to the Moon, to the rise of the World Wide Web (WWW), social media, and online retail.
These applications can operate on a wide range of platforms and systems, leveraging either physical or virtual computing resources. Depending on an application's purpose and resource requirements, entire machines may be dedicated to serving its compute and/or storage needs. Fortunately, thanks in part to the realization of Moore's law, the power and performance of microprocessors increased with each passing year, while the cost associated with the underlying physical resources declined. This trend has subsided in recent years, but its persistence for the first 30 years of the microprocessor's existence was instrumental to the advances in technology.
Software developers took full advantage of this opportunity and bundled more features and components into their applications. As a result, a single application could consist of several smaller components, each of which could have been written as its own individual service. Initially, bundling components together yielded several benefits, including a simplified deployment process. However, as industry trends began to change and businesses focused on delivering features more rapidly, the design of a single deployable application brought with it a number of challenges. Whenever a change was required, the entire application and all of its underlying components needed to be validated once again to ensure the change had no adverse effects. This process potentially required coordination across multiple teams, which slowed the overall delivery of the feature.
Organizations also wanted to deliver features more rapidly across their traditional internal divisions. This concept of rapid delivery is fundamental to a practice called development-operations (DevOps), which rose in popularity around 2010. DevOps encouraged more iterative changes to applications over time, instead of extensive planning prior to development. To be sustainable in this new model, architectures evolved from a single large application toward several smaller applications that could be delivered faster. Because of this change in thinking, the more traditional application design was labeled monolithic, and the new approach of breaking components down into separate applications gave those components a name: microservices. Microservices brought with them several desirable traits, including the ability to develop and deploy services independently of one another as well as to scale them (increase the number of instances) independently.
The change in software architecture from monolithic to microservices also prompted a re-evaluation of how applications are packaged and deployed at runtime. Traditionally, entire machines were dedicated to one or two applications. Because microservices reduced the resources required by any single application, dedicating an entire machine to one or two microservices was no longer practical.
Fortunately, a technology called containers was introduced and gained popularity by filling in many of the gaps needed to create a microservices runtime environment. Red Hat defines a container as “a set of one or more processes that are isolated from the rest of the system and includes all of the files necessary to run” (https://www.redhat.com/en/topics/containers/whats-a-linux-container#:~:text=A%20Linux%C2%AE%20container%20is,testing%2C%20and%20finally%20to%20production.). Container technology has a long history in computing, dating back to the 1970s. Many of the foundational container technologies, including chroots (the ability to change the root directory of a process and any of its children to a new location on the filesystem) and jails, are still in use today.
The combination of a simple, portable packaging model and the ability to create many isolated sandboxes on each physical or virtual machine (VM) led to the rapid adoption of containers in the microservices space. This rise in container popularity in the mid-2010s can also be attributed to Docker, which brought containers to the masses through simplified packaging and runtimes that could be used on Linux, macOS, and Windows. The ease of distributing container images further fueled adoption: first-time users did not need to know how to create images but could instead make use of existing images created by others.
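The packaging model described above can be sketched with a minimal Dockerfile. This is an illustrative example only; the base image, file names, and start command are assumptions for a hypothetical small Python-based microservice:

```dockerfile
# Hypothetical Dockerfile for a small Python microservice.
# Base image, file names, and port are illustrative assumptions.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies in their own layer so it is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source into the image.
COPY app.py .

# The command run when a container starts from this image.
CMD ["python", "app.py"]
```

Building this (for example, `docker build -t my-service .`) produces an image that can be pushed to a registry and run unchanged on any machine with a compatible container runtime, which is the portability that drove adoption.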
Containers and microservices became a match made in heaven. Applications had a packaging and distribution mechanism, along with the ability to share the same compute footprint while taking advantage of being isolated from one another. However, as more and more containerized microservices were deployed, the overall management became a concern. How do you ensure the health of each running container? What do you do if a container fails? What happens if your underlying machine does not have the compute capacity required? Enter Kubernetes, which helped answer this need for container orchestration.
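The orchestration questions above map directly onto the kind of declarative specification Kubernetes accepts. The following is a minimal sketch of a Kubernetes Deployment; the service name, image, health-check path, port, and resource figures are all illustrative assumptions:

```yaml
# Hypothetical Deployment for a containerized microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                 # illustrative name
spec:
  replicas: 3                      # Kubernetes keeps three copies running,
                                   # replacing any container that fails
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0  # assumed image
        livenessProbe:             # how Kubernetes checks container health
          httpGet:
            path: /healthz         # assumed health endpoint
            port: 8080
        resources:
          requests:                # lets the scheduler place the pod on a
            cpu: 100m              # node with sufficient capacity
            memory: 128Mi
```

The `replicas` count addresses failed containers, the `livenessProbe` addresses health checking, and the resource `requests` let the scheduler avoid machines that lack capacity, touching each of the questions posed above.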
In the next section, we will discuss how Kubernetes works and provides value to an enterprise.