The DevOps movement, agile development, Continuous Integration (CI), and Continuous Delivery (CD) have all played a role in reshaping the landscape of software engineering efforts throughout the world. Gone are the days of manual environment provisioning, a priesthood of release engineering, and late-night stale-pizza release parties. While the pizza may have been a highlight, it was hardly worth the 4 a.m. deployment nightmares. These now-antiquated practices have been replaced with highly efficient delivery pipelines, scalable microservice architectures, and Infrastructure as Code (IaC) configuration-management techniques. As a result of these innovations, a new demand for automation engineers, configuration-management personnel, and DevOps-oriented engineers has cropped up. This demand for engineers capable of driving efficient development practices, automating configuration management, and implementing scalable software delivery has completely transformed the modern software organization.
In software engineering, the term DevOps is as diverse as it is popular. A simple Google search for the term DevOps yields roughly 18 million unique page results (that's a lot!). A search on Indeed.com for the term DevOps provides a diverse set of industry implementations. As with most culture-oriented terms, there is a buzzword definition and a deeper technical scope for the term DevOps. For the outsider, DevOps may seem a bit ambiguous. For this reason, it is often misunderstood by organizations as an operations person who can code, or a developer who acts as an operational resource. This misnomer of the DevOps engineer has led to significant confusion, and neither of these definitions is 100% accurate.
In this book, we will add clarity to the practices surrounding the implementation of DevOps and provide you with the knowledge you will need to become both a successful DevOps practitioner and an Ansible expert in your organization. We will explore Ansible implementations, learn how Ansible ties into DevOps solutions and processes, and see how to leverage it for scalable deployments, configuration management, and automation. Together, we will journey through the exciting world of DevOps with Ansible 2. Let's get started!
In this first chapter, we are going to dive into DevOps and its methodology constructs to cover the following topics:
- DevOps 101
- The History of DevOps
- DevOps in the modern software organization
- The DevOps assembly line
- DevOps architectures and patterns
In the years leading up to the 2009 DevOpsDays conference tour, the term "DevOps" was relatively unknown in the engineering and technology sphere. The seed of DevOps-oriented culture was planted by Patrick Debois at an agile infrastructure conference in 2008. During this conference, Patrick spoke of a highly collaborative development team he worked with during his tenure at a large enterprise. The most collaborative moments during this tenure came during site outages or emergencies. During these incidents, the developers and operations people seemed to be laser-focused and worked incredibly well together. This experience gave Patrick a yearning to encourage this behavior during everyday, non-emergency work as well.
It was at the agile infrastructure conference that Patrick Debois also connected with Andrew Shafer (who then worked at Puppet Labs, Inc.). The two soon found that they shared many of the same goals and ideologies, and in many senses this chance encounter encouraged Patrick to continue pushing the fledgling concept of DevOps forward. At subsequent agile infrastructure conferences, Patrick tried fervently yet unsuccessfully to encourage a more collaborative approach to software development and delivery. While the idea was novel, its practical implementation never seemed to gain traction at the venues available to Patrick.
It was in 2009 that Patrick Debois attended an O'Reilly Velocity conference, where he heard John Allspaw speak of how Ops and Dev could collaborate. From this speech, the idea of DevOps was seeded in his mind. Patrick decided to begin hosting a set of mini DevOpsDays conferences, which would eventually catapult the concept of DevOps into mainstream engineering cultures.
While there is yet to be a concise, one-line summary of everything that DevOps entails, there has come about a generally accepted agreement on the overarching concepts and practices that define DevOps: culture, automation, measurement, and sharing, or CAMS for short. The CAMS approach to DevOps was defined by Damon Edwards and John Willis at DevOpsDays in 2010. It is described in greater detail next.
One of the generally accepted concepts to arise out of the DevOps movement is a cultural one. With traditional IT operations isolated from development, silos are commonplace within organizations worldwide. In an effort to pave the way for rapid development and delivery, a fundamental change in organizational culture must take place, one that promotes collaboration, sharing, and a sense of synergy within the organization. This cultural change is probably the most difficult aspect of a DevOps adoption in an organization.
Automating once-manual processes is critical for a successful DevOps transformation. Automation takes the guesswork and magic out of building, testing, and delivering software and enforces the codification of software processes. Automation is also among the more visible aspects of DevOps and provides one of the highest returns on investment (ROI).
Measuring successes and failures provides critical business data and helps pave the way for higher efficiency through effective change. This simply emphasizes that business decisions can be made through data and metrics rather than gut reactions. For a DevOps transformation to be a success, measuring things such as throughput, downtime, rollback frequency, latency, and other related operational statistics can help pivot an organization toward higher efficiency and automation.
In stark contrast to the previously accepted paradigm of software development, sharing is pivotal for a successful DevOps transformation. This means that teams should be encouraged to share code, concepts, practices, processes, and resources. A successful DevOps-oriented organization may even go so far as to embed an operations employee or QA resource in the development team in order to facilitate autonomy and collaborative teams. Some organizations may also have shared or overlapping roles. This may be realized through some modern development techniques (TDD, BDD, and so on).
At the time of writing, there are hundreds if not thousands of DevOps-specific tools, all designed to make the lives of engineering organizations better or more efficient. While the tools aspect of DevOps is important, it is equally important not to let a given tool define your organization's DevOps process for you. Once again, this implementation CANNOT be achieved without applying the CAMS model first. Throughout the course of this book, we will reference and walk through an array of different tools and technologies. For you, specifically, it's important to select and leverage the right tool for the right job.
Prior to the widespread adoption of DevOps, organizations would often commit to developing and delivering a software system within a specified time frame and, more often than not, miss release deadlines. The failure to meet required deadlines put additional strains on organizations financially and often meant that the business would bleed financial capital. Release deadlines in software organizations are missed for any number of reasons, but some of the most common are listed here:
- The time needed to complete pure development efforts
- The amount of effort involved in integrating disparate components into a working software title
- The number of quality issues identified by the testing team
- Failed deployments of software or failed installations onto customers' machines
The amount of extra effort (and money) required to complete a software title (beyond its originally scheduled release date) sometimes drained company coffers so severely that it forced the organization into bankruptcy. Even companies once at the top of their industry, such as Apogee Software, faltered and eventually faded into the background of failed businesses and dead software titles as a result of missed release dates and a failure to compete.
The primary risk of this era was not so much in the amount of time engineering would often take to create a title, but instead in the amount of time it would take to integrate, test, and release a software title after initial development was completed. Once the initial development of a software title was completed, there were oftentimes long integration cycles coupled with complex quality-assurance measures. As a result of the quality issues identified, major rework would need to be performed before the software title was adequately defect-free and releasable. Eventually, the releases were replicated onto disk or CD and shipped to customers.
Some of the side effects of this paradigm were that, during development, integration, quality assurance, or pre-release periods, the software organization could not capitalize on the software, and the business was often kept in the dark on progress. This inherently created a significant amount of risk, which could result in the insolvency of the business. With software engineering risks at an all-time high and businesses averse to Vegas-style gambling, something needed to be done.
In an effort to codify the development, integration, testing, and release steps, companies strategized and created the software development life cycle (SDLC). The SDLC provided a basic outline process flow, which engineering would follow in an effort to understand the current status of an under-construction software title. These process steps included the following:
- Requirements gathering
- Design
- Implementation
- Testing and verification
- Release and maintenance
The process steps in the SDLC were found to be cyclic in nature, meaning that once a given software title was released, the next iteration (including bug fixes, patches, and so on) was planned, and the SDLC would be restarted. In the 90s, this meant a revision in the version number, major reworks of features, bug fixes, added enhancements, a new integration cycle, quality assurance cycle, and eventually a reprint of CDs or disks. From this process, the modern SDLC was born.
An illustration of the SDLC is provided next:
Through the creation and codification of the SDLC, businesses now had an effective way to manage the software creation and release process. While this process properly identified a repeatable software process, it did not mitigate the risk of the integration phase. The major problem with the integration phase was the risk inherent in merging. In the time before DevOps, CI, CD, and agile, software marching orders would traditionally be divided among teams, and individual developers would retreat to their workstations and code. They would progress in their development efforts in relative isolation until everyone was done and a subsequent integration phase of development took place.
During the integration phase, individual working copies were cobbled together to eventually form one cohesive and operational software title. At the time, the integration phase posed the most amount of risk to a business, as this phase could take as long as (or longer than) the process of creating the software title itself. During this period, engineering resources were expensive and the risk of failure was at its highest; a better solution was needed.
The risk the integration phase posed to businesses was oftentimes very high, and a unique approach was finally identified by a few software pundits, one that would ultimately pave the way for the future. Continuous Integration is a development practice in which developers merge their local workstation development changes incrementally (and very frequently) into a shared source-control mainline. In a CI environment, basic automation would typically be created to validate each incremental change and ensure nothing was inadvertently broken. In the unfortunate event something broke, the developer could quickly fix it or revert the change. The idea of continuously merging contributions meant that organizations would no longer need an integration phase, and QA could begin as the software was developed.
Continuous Integration would eventually be popularized through successful mainstream software-engineering implementations and through the tireless efforts of Kent Beck and Martin Fowler. These two industry pundits successfully scaled basic continuous-integration techniques during their tenure at the Chrysler corporation in the mid 90s. As a result of the success of their CI solution, they noticed a marked reduction in the integration-phase risk to the business, and they eagerly touted the newfound methodology as the way of the future. Not long after CI began to gain visibility, other software organizations took notice and successfully applied the core techniques as well.
By the late 90s and early 2000s, Continuous Integration was in full swing. Software engineering teams were clamoring to integrate more frequently and verify changes faster, and they diligently worked to develop releasable software incrementally. In many ways, this was a golden era of engineering. It was at the height of the Continuous Integration revolution that, in 2001, 17 software engineering pundits met in a retreat at a mountain resort in Snowbird, Utah, to discuss a new approach to software development. The result of this meeting of the minds, now known as agile development, is broken down into four central pillars, which are:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
This set of simple principles combined with the 12 core philosophies of agile development would later become known as the agile manifesto. The complete agile manifesto can be found at http://agilemanifesto.org/.
In 2001, the agile manifesto was officially published, and organizations soon began breaking work into smaller chunks and holding short stand-up meetings instead of lengthy sit-down status meetings. Functionality was prioritized, and work items were divided across team members for completion. This meant that the team now worked in timeboxed iterations, with 2- to 4-week deliverable deadlines.
While this was a step in the right direction, it was limited in scope to the development group alone. Once the software system was handed from development to quality assurance, the development team would often remain hands-off as the software eventually made its way to a release. The most notable problem in this era was large, complex deployments into physical infrastructure performed by people who had little to no understanding of how the software worked.
As software organizations evolved, so did the other departments. For example, quality assurance (QA) practices became more modern and automated. Programmers began writing automated test suites and worked to validate software changes in an automated way. From the revolution in QA, modern practices such as Test-driven Development (TDD), Behavior-driven Development (BDD), and A/B Testing evolved.
The agile movement came about in the year 2001 with the signing and release of the agile manifesto. The principles identified in the agile manifesto in many ways identified a lot of the core concepts that the DevOps movement has since adopted and extended. The agile manifesto represented a radical shift in development patterns when it was released. It argued for shorter iterative development cycles, rapid feedback, and higher levels of collaboration. Sound familiar?
It was also about this time that Continuous Integration began to take root in software organizations and engineers began to take notice of broken builds, failed unit tests, and release engineering.
The solution to the issue of silos in an organization, it would seem, was to alter the culture, simplify and automate the delivery of software changes (by doing it more often), change the architecture of software solutions (away from monoliths), and pave the way for the organization to outmaneuver the competition through synergy, agility, and velocity. The idea is that if a business can deliver features that customers want faster than the competition, they will outdo their opponents.
It was for these reasons that modern DevOps approaches came to fruition, approaches that also allowed for incremental DevOps adoption within an organization.
In the infancy of computer science, computer programmers were wizards, their code was a black art, and organizations paid hefty sums to develop and release software. Oftentimes, software projects would falter and companies would go bankrupt attempting to release a software title to the market. Computer science back then was very risky and entailed long development cycles with painful integration periods and oftentimes failed releases.
In the mid-2000s, cloud computing took the world by storm. The idea of an elastic implementation of computing resources, which could scale with ease as organizations expanded rapidly, provided a wave of innovation for the future. By 2012, cloud computing was a huge trend, and hundreds if not thousands of companies were clamoring to get to the cloud.
As software engineering matured in the early 2000s and the widespread use of computers grew, a new software paradigm came to fruition: Software as a Service (SaaS). In the past, software was shipped to customers on CD or floppy disk, or installed directly onsite, and the widely accepted pricing model was a one-time purchase. The new paradigm provided a subscription-based revenue model and touted an elastic and highly scalable infrastructure, with promises of recurring revenue for businesses. It was known as the cloud.
With cloud computing on the rise and the software-use paradigm changing dramatically, the previously accepted big-bang release strategy began to look antiquated. As a result of the shifting mentality around software releases, organizations could no longer wait over a year for an integration cycle to take place before quality assurance test plans were executed, nor could the business wait two years for engineering and QA to sign off on a given release. To help solve this problem, Continuous Integration was born, and the beginnings of an assembly-line system for software development began to take shape. The point of DevOps was more than just a collaborative edge within teams; the premise was in fact a business strategy to get features into customers' hands more efficiently through DevOps cultural implementations.
Prior to the Industrial Revolution, goods were mostly handcrafted and developed in small quantities. This approach limited the quantity a craftsman could create as well as the customer base they could sell their goods to. This process of handcrafting goods proved to be expensive, time-consuming, and wasteful. When Henry Ford began developing the automobile, he looked to identify a more efficient method of manufacturing goods. The result of his quest was to implement a standardization methodology and adopt a progressive assembly-line approach for developing automobiles.
In the 1980s and 90s, software engineering efforts would oftentimes drain company finances. This was the result of inefficiencies in processes, poor communication, a lack of coordinated development efforts, and an inadequate release process. Inefficiencies such as integration phases, manual quality assurance, verification release plans, and execution often added a significant amount of time to the overall development and release strategies of the business. As a way to begin mitigating these risks, new practices and processes began to take shape.
As a result of these trends, software organizations began to apply manufacturing techniques to software engineering. One of the more prevalent manufacturing concepts to be applied to software development teams is the manufacturing assembly line (also known as progressive assembly). In factories all around the world, factory assembly lines have helped organize product-creation processes and have helped ensure that, prior to shipping and delivery, manufactured goods are carefully assembled and verified. The assembly-line approach provides a level of repeatability and quantifiable verification for mass-produced products. Factories adopt the progressive assembly approach to minimize waste, maximize efficiency, and deliver products of higher quality. In recent years, software engineering organizations have begun to gravitate towards this progressive assembly-line practice to also help reduce waste, improve throughput, and release products of higher quality. From this approach, the overarching DevOps concept was born.
From the DevOps movement, a set of software architectural patterns and practices have become increasingly popular. The primary logic behind the development of these architectural patterns and practices is derived from the need for scalability, no-downtime deployments, and minimizing negative customer reactions to upgrades and releases. Some of these you may have heard of (microservices), while others may be a bit vague (blue-green deployments).
In this section, we will outline some of the more popular architectures and practices to evolve from the DevOps movement and learn how they are being leveraged to provide flexibility and velocity at organizations worldwide.
In software development, encapsulation often means different things to different people. In the context of DevOps architecture, it simply means modularity. This is an important implementation requirement for DevOps organizations because it provides a way for components to be updated and replaced individually. Modular software is easier to develop, maintain, and upgrade than monolithic software. This applies both to the grand architectural approach and at the object level in object-oriented programming. If you have ever worked at a software organization with a monolithic legacy code base, you are probably quite familiar with spaghetti code or the monolithic "fractal onion" software approach. A diagram comparing a monolithic software architecture with an encapsulated architecture is provided next:
As we can see from the above diagram, the modular organized software solution is significantly easier to understand and potentially manage than the monolithic one.
Microservice architectures cropped up around the same time as containerization and portable virtualization. The general concept behind a microservice architecture is to architect a software system in such a way that large development groups have a simple way to update software through repeatable deployments, upgrading only the parts that have changed. In some ways, microservices provide a basic constraint and solution to development sprawl, ensuring that software components don't become monolithic. One way to think about upgrading only the parts that have changed is replacing the tires on a car instead of replacing the entire car every time the tires become worn.
A microservice development paradigm requires discipline from development personnel to ensure the structure and content of the microservice don't grow beyond its initially defined scope. As such, the basic components of a microservice are listed here:
- Each microservice should have an API or externally facing mode of communication
- Each microservice, where applicable, should have a unique database component
- Each microservice should only be accessible through its API or externally facing mode of communication
So from what we've learned, microservices vs monolithic architectures could be summed up in the following basic diagram:
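The three rules listed above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the `UserService` name, its methods, and the in-memory dict standing in for the service's unique database component behind a real networked API.

```python
# A minimal sketch of a microservice boundary. The three rules are modeled
# as: an externally facing API (the public methods), a unique database
# (the private store), and no access path other than the API.

class UserService:
    """Hypothetical "user" microservice with its own private datastore."""

    def __init__(self):
        # Each microservice owns a unique database component; here a simple
        # in-memory dict stands in for a real, service-private datastore.
        self._db = {}
        self._next_id = 1

    # The externally facing API: the ONLY supported way to touch the data.
    def create_user(self, name):
        user_id = self._next_id
        self._next_id += 1
        self._db[user_id] = {"id": user_id, "name": name}
        return user_id

    def get_user(self, user_id):
        return self._db.get(user_id)


service = UserService()
uid = service.create_user("grace")
print(service.get_user(uid))  # {'id': 1, 'name': 'grace'}
```

In a real deployment, the public methods would be HTTP endpoints and the private dict would be a database that no other service is allowed to query directly.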
Continuous Integration and Continuous Delivery, better known in the software industry as CI/CD, have become a fundamental component of the DevOps movement. The implementation of these practices varies widely across organizations, owing to significant variance in CI/CD maturity and evolution.
Continuous Integration represents a foundation for a completely automated build and deployment solution and is usually the starting point in a CI/CD quest. Continuous Integration represents a specific set of development practices, which aim to validate each change to a source-controlled software system through automation. The specific practice of CI in many regards also represents mainline software development coupled with a set of basic verification systems to ensure the commit didn't cause any code compilation issues and does not contain any known landmines.
The general practice of CI is provided here:
- A developer commits code changes to the source-control mainline at least once a day (per Martin Fowler's formulation of CI). This ensures that code is shared and integrated EVEN if changes are incomplete.
- An automation system detects the check-in and validates that the code can be compiled (syntax check).
- The same automation system executes a set of unit tests against the newly updated code base.
- The system notifies the committer if there are any identifiable defects related to the check-in.
If at the end of the CI cycle for a given commit there exist any identifiable defects, the committer has two potential options:
- Fix the issue quickly.
- Revert the change from the source control (to ensure the system is in a known working state).
While the practice of CI may sound quite easy, in many ways, it's quite difficult for development organizations to implement. This is usually related to the cultural atmosphere of the team and organization.
It is worth noting that the description of CI given here comes from Martin Fowler and James Shore. These software visionaries were instrumental in creating and advocating CI implementations and solid development practices. CI is also the base platform required for Continuous Delivery, which was codified by Jez Humble and David Farley in their 2010 book, Continuous Delivery.
Continuous Delivery represents a continuation of CI and requires CI as a foundational starting point. Continuous Delivery aims to start by validating each committed change to a software system through the basic CI process described earlier. The main addition that Continuous Delivery offers is that, once the validation of the code change is completed, the CD system will deploy (install) the software onto a mock environment and perform additional testing as a result.
The Continuous Delivery practice aims to provide instant feedback to developers on the quality of their commit and the potential reliability of their code base. The end goal is to keep the software in a releasable form at all times. When implemented correctly, CI/CD provides significant business value to the organization and can help reduce wasted development cycles debugging complex merges and commits that don't actually work or provide business value.
Based on what we described previously, Continuous Delivery has the following basic flow of operations:
- User commits code to source-control mainline
- Automated CI process detects the change
- Automated CI process builds/syntax-checks the code base for compilation issues
- Automated CI process creates a uniquely versioned deployable package
- Automated CI process pushes the package to an artifact repository
- Automated CD process pulls the package onto a given environment
- Automated CD process deploys/installs the package onto the environment
- Automated CD process executes a set of automated tests against the environment
- Automated CD process reports any failures
- Automated CD process deploys the package onto additional environments
- Automated CD process allows additional manual testing and validation
In a Continuous Delivery implementation, not every change automatically goes into production; instead, the principles of Continuous Delivery yield a releasable-at-any-time software product. The idea is that the software COULD be pushed into production at any moment but isn't necessarily always released.
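The flow of operations listed above can be modeled as plain functions. All names here (`artifact_repo`, the environment dicts, the version string) are hypothetical; a real pipeline would call a build server, an artifact repository such as Nexus or Artifactory, and a deployment tool such as Ansible.

```python
# A hedged sketch of the Continuous Delivery flow: build a uniquely
# versioned package, push it to an artifact repository, then pull and
# deploy it onto successive environments with automated tests at each step.

import hashlib

artifact_repo = {}  # stands in for an artifact-repository service

def build_package(source, version):
    """CI: build a uniquely versioned, checksummed deployable package."""
    checksum = hashlib.sha256(source.encode()).hexdigest()
    return {"version": version, "payload": source, "sha256": checksum}

def push_to_repo(package):
    """CI: push the package to the artifact repository."""
    artifact_repo[package["version"]] = package

def run_smoke_tests(environment):
    """CD: automated tests against the freshly deployed environment."""
    return environment.get("installed") is not None

def deploy(version, environment):
    """CD: pull the package from the repo and install it on an environment."""
    package = artifact_repo[version]
    environment["installed"] = package["version"]
    return run_smoke_tests(environment)

# Walk the flow across pre-production environments. Production is NOT
# automatic: the package is merely releasable at any time.
pkg = build_package("app-code", "1.0.42")
push_to_repo(pkg)
for env in ({"name": "dev"}, {"name": "qa"}, {"name": "stage"}):
    assert deploy("1.0.42", env)
print("1.0.42 is releasable")
```

The key design point is that the same immutable package flows through every environment; nothing is rebuilt between stages.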
Generally, the CI/CD process flow is illustrated through three diagrams: the flow of Continuous Integration, the flow of Continuous Delivery, and the components of Continuous Delivery.
Microservices and modularity are similar in nature but not entirely the same. The basic concept of modularity is to avoid creating a monolithic implementation of a software system. A monolithic software system is inadvertently developed in such a way that components are tightly coupled and rely heavily on each other, so much so that updating one component requires updating many others just to improve functionality or fix a defect.
Monolithic software development implementations are most common in legacy code bases that were poorly designed or rushed through the development phase. They can often result in brittle software functionality and force the business to continue to spend significant amounts of time updating and maintaining the code base.
On the other hand, a modular software system has a neatly encapsulated set of modules, which can be easily updated and maintained due to the lack of tightly coupled components. Each component in a modular software system provides a generally self-reliant piece of functionality and can be swapped out for a replacement in a much more efficient manner.
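The swap-out idea above can be shown in a few lines. The class and function names below are illustrative only; the point is that a component depending solely on a narrow interface never notices when the module behind it is replaced.

```python
# A small sketch of modularity: each component hides its internals behind
# a narrow interface, so one module can be swapped for a replacement
# without touching its neighbors.

class InMemoryStore:
    """One self-contained module: a trivial key-value store."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class LoggingStore:
    """A drop-in replacement module exposing the same interface."""
    def __init__(self, inner):
        self._inner = inner
        self.log = []
    def save(self, key, value):
        self.log.append(("save", key))
        self._inner.save(key, value)
    def load(self, key):
        return self._inner.load(key)

def register_user(store, name):
    # Depends only on the save/load interface, not on a concrete module.
    store.save(name, {"name": name})
    return store.load(name)

# Swapping the module requires no change to register_user at all.
print(register_user(InMemoryStore(), "ada")["name"])                 # ada
print(register_user(LoggingStore(InMemoryStore()), "ada")["name"])   # ada
```

A tightly coupled (monolithic) version would instead reach into `_data` directly, so replacing the store would break every caller.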
Horizontal scaling is an approach to software delivery that allows larger cloud-based organizations to spin up additional instances of a specific service in a given environment. The incoming traffic to this service is then load-balanced across the instances to provide consistent performance for the end user. Horizontal scaling must be considered during the design and development phases of the SDLC and requires a level of discipline on the developer's part.
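A minimal sketch of the idea: identical service instances are spun up, and requests are distributed across them. The round-robin balancer below is a toy assumption; real deployments use a load balancer such as HAProxy or a cloud provider's ELB, and the `ServiceInstance` class is hypothetical.

```python
# Horizontal scaling in miniature: three identical instances of a service,
# with incoming requests load-balanced round-robin across them.

from itertools import cycle

class ServiceInstance:
    def __init__(self, name):
        self.name = name
        self.handled = 0
    def handle(self, request):
        self.handled += 1
        return f"{self.name} handled {request}"

# "Spin up" three identical instances of the service.
instances = [ServiceInstance(f"web-{i}") for i in range(3)]
balancer = cycle(instances)  # round-robin instance selection

for request_id in range(6):
    next(balancer).handle(f"req-{request_id}")

# Traffic spreads evenly, giving consistent per-instance load.
print([inst.handled for inst in instances])  # [2, 2, 2]
```

The developer-discipline requirement mentioned above boils down to this: instances must be interchangeable (stateless, or sharing state externally), or the balancer cannot route a request to just any copy.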
Blue-green is a development and deployment concept requiring two copies of the product, one called blue and the other green, with one copy being the current release of the product. The other copy is in active development to become the next release as soon as it is deemed fit for production. Another benefit of this development/deployment model is the ability to roll back to the previous release should the need arise. Blue-green deployments are vital to the concept of Continuous Delivery because, without the future release being developed in conjunction with the current release, hotfixes and fire/damage control become the norm, with innovation and overall focus suffering as a result.
Blue-green deployments specifically allow zero-downtime deployments to take place and allow rollbacks to occur seamlessly (since the previous instance is never destroyed). Some very notable organizations have successfully implemented blue-green deployments.
As a result of blue-green deployments, there have been some very notable successes within the DevOps world that have minimized the risk of deployment and increased the stability of the software systems.
Artifact management plays a pivotal role in a DevOps environment. The artifact-management solution provides a single source of truth for all things deployable. In addition to that, it provides a way for the automation system to shrink-wrap a build or potential release candidate and ensure it doesn't get tampered with after the initial build. In many ways, an artifact-management system is to binaries what source control is to source code.
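The "shrink-wrap" property described above is usually implemented with checksums recorded at publish time, so later tampering is detectable. The sketch below assumes a dict standing in for the repository; real artifact managers perform equivalent fingerprinting for every stored binary.

```python
# A sketch of shrink-wrapping a build: the repository records a SHA-256
# fingerprint when an artifact is published, so any modification after the
# initial build can be detected on verification.

import hashlib

def publish(repo, name, content):
    """Store the artifact together with its SHA-256 fingerprint."""
    repo[name] = {
        "content": content,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(repo, name):
    """True only if the stored artifact still matches its fingerprint."""
    entry = repo[name]
    return hashlib.sha256(entry["content"]).hexdigest() == entry["sha256"]

repo = {}
publish(repo, "app-1.0.tar.gz", b"binary build output")
print(verify(repo, "app-1.0.tar.gz"))   # True

# Tampering with the binary after the initial build is now detectable.
repo["app-1.0.tar.gz"]["content"] = b"tampered"
print(verify(repo, "app-1.0.tar.gz"))   # False
```

This is the sense in which an artifact-management system is to binaries what source control is to source code: both give you a tamper-evident, single source of truth.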
In the software industry, there are many options for artifact management. Some are free to use and others require the purchase of a specific tool. Some of the more popular options include JFrog Artifactory, Sonatype Nexus, and Apache Archiva.
Now that we have a basic understanding of artifact management, let's take a look at how an artifact repository fits into the general workflow of a DevOps-oriented environment. A diagram depicting this solution's place within a DevOps-oriented environment is provided next:
In a rapid-velocity deployment environment (where changes are pushed through a delivery pipeline rapidly), it is absolutely critical that any pre-production and production environments maintain a level of symmetry. That is to say, the deployment procedures and resulting installation of a software system are identical in every way possible among environments. For example, an organization may have the following environments:
- Development: Here, developers can test their changes and integration tactics. This environment acts as a playground for all things development oriented and provides developers with an area to validate their code changes and test the resulting impact.
- Quality-assurance environment: This environment comes after the development environment and provides QA personnel with a location to test and validate the code and the resulting installation. Builds in this environment must pass stricter quality standards before a given build is signed off for release.
- Stage: This environment represents the final location prior to production, where all automated deployment techniques are validated and tested.
- Production: This environment represents the location where users/customers are actually working with the live install.
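Environment symmetry can be checked mechanically: every environment should run the identical deployment procedure and differ only in declared parameters such as capacity. The environment definitions below are illustrative assumptions, not a real inventory format.

```python
# A small sketch of verifying environment symmetry: all four environments
# share the same deploy procedure and differ only in allowed parameters.

environments = {
    "development": {"deploy_steps": ["pull", "install", "migrate"], "replicas": 1},
    "qa":          {"deploy_steps": ["pull", "install", "migrate"], "replicas": 2},
    "stage":       {"deploy_steps": ["pull", "install", "migrate"], "replicas": 4},
    "production":  {"deploy_steps": ["pull", "install", "migrate"], "replicas": 8},
}

def symmetric(envs, allowed_differences=("replicas",)):
    """True if every environment shares an identical deploy procedure."""
    reference = None
    for env in envs.values():
        procedure = {k: v for k, v in env.items() if k not in allowed_differences}
        if reference is None:
            reference = procedure
        elif procedure != reference:
            return False
    return True

print(symmetric(environments))  # True
```

In an Ansible-based setup, the same effect comes from running one playbook against every environment, with only inventory variables differing; we will see this pattern later in the book.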
In this chapter, we talked about how DevOps and the DevOps movement began; we learned about the various components of DevOps (CAMS); we discussed the roles agile, Continuous Integration, Continuous Delivery, and microservices have played within DevOps; and we discussed some of the other various architectural techniques that DevOps requires.
In the next chapter, we will delve into configuration management, which also plays a pivotal role in DevOps. By understanding the techniques of configuration management, we will begin to understand the concepts of Infrastructure as Code (which is something Ansible does very well). We will delve into what it means to version your configuration states, how to go about developing code that maintains the infrastructure state, and what the ins and outs are for creating a successful configuration management (CM) solution.