
You're reading from Building CI/CD Systems Using Tekton

Product type: Book
Published in: Sep 2021
Publisher: Packt
ISBN-13: 9781801078214
Edition: 1st Edition
Author: Joel Lord

Joel Lord (joel__lord on Twitter) is passionate about the web and technology in general. He likes to learn new things, but most of all, he wants to share his discoveries. He does so by traveling to various conferences all across the globe. He graduated from college with a degree in computer programming in the last millennium. Apart from a little break to get his BSc in computational astrophysics, he has always worked in the industry. In his daily job, Joel is a developer advocate with MongoDB, where he connects with software engineers to help them make the web better by using best practices around JavaScript. In his free time, he can be found stargazing on a campground somewhere or brewing a fresh batch of beer in his garage.

Deploying in the era of the cloud

The cloud brought drastic changes to the way applications were built and maintained. Until then, most software development shops or online businesses had their own servers, along with a team to maintain them. With the advent of AWS, all of this changed. It was now possible to spin up a new environment and use it directly on someone else's infrastructure. This new way of doing things meant less time spent managing actual hardware and the ability to create reproducible environments easily.

With what soon became known as the cloud, it was easier than ever to deploy a new application. A software developer could now spin up a virtual machine with the necessary software and runtimes, and then execute a batch of unit tests to ensure that the application ran on that specific server. You could also create an environment for customers to see the application changes at the end of each iteration, which helped them approve new features or provide feedback on a requested enhancement.

With server environments that were easier to start, faster to scale, and cheaper than actual hardware, more and more people moved to cloud-based software development. This move also facilitated the automation of many processes around software deployment practices. Using a command-line tool, it was now possible to start a new staging environment, spin up a new database, or take down a server that wasn't needed.

More and more companies established a presence on the web, and the race to ship new features or match the competition's features became a real problem. It was no longer acceptable to deploy every few months. If a competitor released a new feature, your product needed to implement it as soon as possible, or you risked losing market share. A delay in fixing a bug could also mean a significant revenue loss.

These fast changes were at the heart of a revolution in how teams worked to build and deploy applications. Until then, enterprises had teams of software engineers who were responsible for designing new features, fixing bugs, and preparing the next releases. On the other hand, a group of system administrators made sure that the infrastructure ran smoothly and that no bugs were introduced into the system. Despite sharing the goal of making the applications run better, those two teams often ended up at odds with each other due to the nature of their work.

The programmers were under pressure to release faster, but each release could introduce bugs or require software upgrades on the servers. The sysadmins were under pressure to keep the environment stable and pushed back on changes to avoid breaking the fragile equilibrium of the systems in place. This dichotomy led to a new philosophy in enterprises: DevOps.

The idea behind DevOps was to bridge the gap between the two teams so that deploying better software faster was finally possible. Many tools aim to make DevOps easier, and containers are one of those technologies.

Works on my machine!

One problem that has always existed in software engineering became more prevalent with the cloud – the "Works on my machine" syndrome. A programmer would install all the required software to run an application on their development machine, and everything would run smoothly. As soon as the software was shipped to a different device, though, everything stopped working.

This is a widespread problem at larger companies, where multiple teams have various environments. A programmer might have Apache 2.4 running PHP 8.0, while someone on the QA team would be running Apache 2.3 with PHP 7.2. Both of those setups have their reasons. Software developers tend to use the latest version available to benefit from all the new features that make their lives easier. The QA team, on the other hand, tries to use the most common setup – the one most of the customers would be on.

This was also very true a few years ago when browsers differed from one vendor to the next. Software engineers were typically using Chrome or Firefox because they provided the best developer tools. In contrast, the testing team would use Internet Explorer because they still had the largest market share.

When teams had different environments, they would get mixed results when running the applications. Some features wouldn't work, or bugs were raised that the developers couldn't reproduce. Fixing issues related to the environment is much more complicated than fixing problems that lie in the source code: you must compare the environments and understand why a specific piece of software did not behave as expected. Often, a note was added to the ticket saying either "Works on my machine" or "Cannot reproduce," and the ticket was closed without being fixed.

This type of scenario is prevalent not only between QA and software engineers, but also between system administrators and programmers. In the same way, developers typically used the latest version of a piece of software, say a database, while the version running on the servers would be years old so that it could continue to support older applications.

A software engineer could request that a newer version be installed, but that was no guarantee. Perhaps the more recent version had not been tested on the infrastructure. Maybe installing two different versions on a single machine was simply impossible. Either way, if the system administrators couldn't install the newer database, it meant going back to the drawing board, setting up a development environment that ran the version installed on the servers, and getting the application to work there.

In 2013, a new startup came up with a solution to those problems. Container technology was not new. In fact, process isolation, which is at the heart of how containers work, was introduced in 1979 with Unix V7. Yet it took close to 35 years for the technology to go mainstream.

That startup was Docker. They built a tool that made it much easier for software engineers to ship their applications along with the whole environment.

If you are not familiar with containers, think of them as giant ZIP files in which you package your source code, along with the required configuration, the necessary runtimes, and so on. Essentially, programmers now shipped everything needed to run an application. The other benefit of containers is that they run in complete isolation from the underlying operating system.

With containers, it was finally possible to work in a development environment that was identical to the production or testing environments. Containers were the solution to the "Works on my machine" problem. Since all the settings were identical, the bugs had to be related to the source code and not due to a difference in the associated runtimes.
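The packaging idea described above can be sketched with a Dockerfile, the file Docker reads to build a container image. This is a minimal, hypothetical example for a Node.js application (the `node:16` base image, the port, and the `server.js` entry point are assumptions for illustration):

```Dockerfile
# Start from an image that already contains the Node.js runtime
FROM node:16

# Copy the dependency manifest and install dependencies inside the image
WORKDIR /app
COPY package*.json ./
RUN npm install

# Copy the application source code into the image
COPY . .

# Document the port the application listens on and define how to start it
EXPOSE 8080
CMD ["node", "server.js"]
```

Anyone who builds an image from this file gets the same runtime, the same dependencies, and the same source code, regardless of what is installed on their own machine.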

Scripting overload

Containers rapidly became a solution of choice for deploying applications in the cloud. The major cloud computing providers added support for them, making container technology simpler and simpler to use.

The advent of containers was also a significant benefit to DevOps communities. Now, developers could pick the software they needed, and system operators didn't have to worry as much about breaking older legacy systems with newer versions.

Containers are designed to have minimal overhead: many of them can be started with far fewer resources than the equivalent virtual machines would require. The good thing about this is that large applications can be broken down into smaller pieces called microservices.

Instead of having one extensive application deployed occasionally, software engineers started to break these down into smaller standalone chunks that would communicate with each other. With microservices, it was much easier to deploy applications faster. Smaller modules reduced the risk of breaking the whole application, and the impact of each deployment was smaller, making it possible to deploy more frequently.

Like every solution that seems too good to be true, microservices also came with their own set of problems. It turns out that, even with container technology, an application with hundreds or even thousands of containers running simultaneously and communicating with each other can be challenging to manage. System administrators at large enterprises relied on scripting to manage those containers and ensure that they were up and running at any given time. A good understanding of the system was required, and those operators did a job similar to air traffic controllers: making sure that all those containers were working as they should and not crashing.

Google was an early adopter of container technology and was running thousands of them. They built a cluster manager called Borg to help them keep all those containers up and running. In 2014, they rewrote this project and released it as an open source project called Kubernetes.

Kubernetes is a container orchestration platform. It ensures that containers are always up and running and takes care of the networking between them. It also helps with deployment, as it introduces mechanisms that can redirect traffic during upgrades or distribute traffic between various containers for better release management.

Kubernetes is now the standard solution for container orchestration in large enterprises and is available from most cloud providers.
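As a sketch of what this looks like in practice, here is a minimal Kubernetes Deployment manifest. The image name `my-app:1.0` and the port are hypothetical; the manifest simply asks Kubernetes to keep three identical copies of the container running at all times:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                   # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0     # hypothetical container image
          ports:
            - containerPort: 8080
```

If a container crashes, Kubernetes notices and starts a replacement – exactly the "always up and running" guarantee described above.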

The cloud today – cloud native

Twenty years after my first application deployments, things have evolved quite a bit. More often than not, applications run in the cloud, and tooling makes it easier than ever to deploy and manage them. We are now in the era of cloud-native software development.

There is no need to access a user's machine to release new software; changes can be pushed directly via the web. Even for software that runs locally, such as mobile applications or phone operating systems, upgrades are usually done seamlessly through automated processes.

This means constant application upgrades and security patches without a manual process for the end user. These frequent upgrades keep your customers happier and more engaged with your product. If a user happens to find a bug, it can be fixed quickly and the fix delivered automatically.

For enterprises, this means being able to push new features more quickly than the competition. With tooling such as unit testing, new builds are generally more reliable, and the risk of regressions is greatly reduced. Even if a new feature breaks the application in place, tools such as Kubernetes let your team quickly revert to an older state while waiting for a patch to be released.

For software developers, this means less hassle when pushing a new version. System operators are more confident that the microservices won't interfere with other systems in place. With containers, friction between the teams is also less frequent, and there is greater trust in the quality of the software.

Not only are applications deployed to the cloud – they can also be tested, packaged, and even built directly with tooling that exists on the web. Tekton is one of those tools, and you can use it in your server infrastructure to manage your deployments.

The future of the cloud

Just like there was no way for me to predict the cloud 20 years ago, I can't predict what we will see in the future. One thing is certain, though: the cloud will stick around for many more years. The flexibility and ease of use that it brings to software development have helped software developers become more productive. Once all the tooling is in place, programmers can focus on their code and forget about the hassle of deployment.

Already, we can see the emergence of cloud-based IDEs. It is possible to code directly in a web browser. This code is then pushed to a repository that lives on the web and can be automatically deployed to a cluster via some tools that exist "as a Service."

These cloud-native tools will eventually bring even more uniformity across the environment and help software developers work even more closely to build better software.

So far, you've seen how the last 20 years have shaped the web as we now know it and the importance of deploying quicker and quicker. All of this brings us to our topic at hand: CI/CD.
