DevOps for Web Development

By Mitesh Soni

About this book

The DevOps culture is growing at a massive rate, as many organizations are adopting it. However, implementing it for web applications is one of the biggest challenges experienced by many developers and admins, which this book will help you overcome using various tools, such as Chef, Docker, and Jenkins.

On the basis of the functionality of these tools, the book is divided into three parts. The first part shows you how to use Jenkins 2.0 for Continuous Integration of a sample JEE application. The second part explains the Chef configuration management tool, and provides an overview of Docker containers, resource provisioning in Cloud environments using Chef, and Configuration Management in a Cloud environment. The third part explores Continuous Delivery and Continuous Deployment in AWS, Microsoft Azure, and Docker, all using Jenkins 2.0.

This book combines the skills of both web application deployment and system configuration as each chapter contains one or more practical hands-on projects. You will be exposed to real-world project scenarios that are progressively presented from easy to complex solutions. We will teach you concepts such as hosting web applications, configuring a runtime environment, monitoring and hosting on various cloud platforms, and managing them. This book will show you how to essentially host and manage web applications along with Continuous Integration, Cloud Computing, Configuration Management, Continuous Monitoring, Continuous Delivery, and Deployment.

Publication date:
October 2016


Chapter 1. Getting Started – DevOps Concepts, Tools, and Technologies


"The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency."

 -- Bill Gates

DevOps is not a tool or technology; it is an approach or culture that makes things better. This chapter describes in detail how DevOps solves different problems of the traditional application delivery cycle. It also describes how DevOps can make development and operations teams efficient and effective, shortening time to market by improving the culture. It also explains key concepts essential for evolving a DevOps culture.

You will learn about the DevOps culture, its lifecycle and key concepts, and tools, technologies, and platforms used for automating different aspects of application lifecycle management.

In this chapter, we will cover the following topics:

  • Understanding the DevOps movement

  • The DevOps lifecycle—it's all about "continuous"

  • Continuous integration

  • Configuration management

  • Continuous delivery/continuous deployment

  • Continuous monitoring

  • Continuous feedback

  • Tools and technologies

  • Overview of a sample Java EE application


Understanding the DevOps movement

Let's try to understand what DevOps is. Is it a real, technical word? No, because DevOps is not just about technical stuff. It is neither simply a technology nor an innovation. In simple terms, DevOps blends development and operations practices. It can be considered a concept, a culture, a development and operational philosophy, or a movement.

To understand DevOps, let's revisit the old days of any IT organization. Consider that there are multiple environments where an application is deployed. The following sequence of events takes place when a new feature is implemented or a bug is fixed:

  1. The development team writes code to implement a new feature or fix a bug. This new code is deployed to the development environment and generally tested by the development team.

  2. The new code is deployed to the QA environment, where it is verified by the testing team.

  3. The code is then provided to the operations team for deploying it to the production environment.

  4. The operations team is responsible for managing and maintaining the code.

Let's list the possible issues in this approach:

  • The transition of the current application build from the development environment to the production environment takes weeks or months.

  • The priorities of the development, QA, and IT operations teams differ within an organization, and effective, efficient coordination becomes a necessity for smooth operations.

  • The development team is focused on the latest development release, while the operations team cares about the stability of the production environment.

  • The development and operations teams are not aware of each other's work and work culture.

  • Both teams work in different types of environments; the development team may face resource constraints and therefore use a different kind of configuration. The application may work on the localhost or in the dev environment.

  • The operations team works on production resources, so there will be a huge gap between the configuration and deployment environments. The application may not work where it needs to run: in the production environment.

  • Assumptions are key in such a scenario, and it is improbable that both teams will work under the same set of assumptions.

  • There is manual work involved in setting up the runtime environment and in configuration and deployment activities. The biggest issue with a manual application deployment process is its lack of repeatability and its error-prone nature.

  • The development team has the executable files, configuration files, database scripts, and deployment documentation, which they hand over to the operations team. All these artifacts are verified in the development environment, not in production or staging.

  • Each team may take a different approach for setting up the runtime environment and the configuration and deployment activities, considering resource constraints and resource availability.

  • In addition, the deployment process needs to be documented for future usage. Now, maintaining the documentation is a time-consuming task that requires collaboration between different stakeholders.

  • Both teams work separately and hence there can be a situation where both use different automation techniques.

  • Both teams are unaware of the challenges faced by each other and hence may not be able to visualize or understand an ideal scenario in which the application works.

  • While the operations team is busy in deployment activities, the development team may get another request for a feature implementation or bug fix; in such a case, if the operations team faces any issues in deployment, they may try to consult the development team, who are already occupied with the new implementation request. This results in communication gaps, and the required collaboration may not happen.

  • There is hardly any collaboration between the development team and the operations team. Poor collaboration causes many issues in the application's deployment to different environments, resulting in back-and-forth communication through e-mail, chat, calls, meetings, and so on, and it often ends in quick fixes.

  • Challenges for the development team:

    • The competitive market creates on-time delivery pressure.

    • They have to take care of production-ready code management and new feature implementation.

    • The release cycle is often long and hence the development team has to make assumptions before the application deployment finally takes place. In such a scenario, it takes more time to fix the issues that occurred during deployment in the staging or production environment.

  • Challenges for the operations team:

    • Resource contention: It's difficult to handle increasing resource demands

    • Redesigning or tweaking: This is needed to run the application in the production environment

    • Diagnosing and rectifying: They are supposed to diagnose and rectify issues after application deployment in isolation

DevOps with the changing times

Time changes everything. In the modern era, customers expect and demand extremely quick responses, and we need to deliver new features continuously to stay in business. Users and customers today have rapidly changing needs; they expect 24/7 connectivity and reliability, and they access services over smartphones, tablets, and PCs. Irrespective of whether they focus on development or operations, software product vendors need to push updates frequently to satisfy customers' needs and stay relevant. In short, organizations are facing the following challenges:

A change in the behavior of customers or market demand affects the development process.

The waterfall model

The waterfall model follows a sequential design process for software development. It offers good control but no scope for revision: it is goal-based development in which completed phases are not revisited. The waterfall model has long been used for software development:

It has its advantages, as follows:

  • Easy to understand

  • Easy to manage: the input and output of each phase are defined

  • Sequential process—order is maintained

  • Better control

However, it is only useful in scenarios where requirements are predefined and fixed. As it is a rigid model with a sequential process, we can't go back to any phase and change things. It has its share of disadvantages, as follows:

  • No revision

  • No outcome or application package until all phases are completed

  • Not possible to integrate feedback until all phases are completed

  • Not suitable for changing requirements

  • Not suitable for long-term and complex projects

The agile model

Inefficient estimation, long time to market, and other issues led to a change from the waterfall model to the agile model. Agile development, or the agile methodology, is a method of building an application by empowering individuals and encouraging interactions, giving importance to working software and to customer collaboration (using feedback for improvement in subsequent iterations), and responding to change in an efficient manner. It emphasizes customer satisfaction through continuous delivery of specific features in small iterations, or sprints, over short timelines.

The following diagram illustrates the working mechanism of agile:

One of the most attractive benefits of agile development is continuous delivery in short time frames or, in agile terms, sprints. Now, it is not a one-time deployment, but multiple deployments. Why? After each sprint, a version of the application with some features is ready for showcasing. It needs to be deployed in specific environments for demonstration, and thus, deployment is no longer a one-time activity.

It is vital from an organization's perspective to meet the changing demands of customers. To do this efficiently, communication and collaboration between all cross-functional teams is essential. Many organizations have adopted the agile methodology.

In such a case, traditional manual deployment processes act as speed barriers for incremental deployments. Hence, it is necessary to change other processes along with the change in application development methodology. One key can't be used for all locks; similarly, the waterfall model is not suitable for all projects. We need to understand that agile is customer-focused and feedback is vital. Changes happen based on customer feedback, and the number of releases may increase. Just imagine a scenario where inputs are high but input processing is slow. Consider the example of a shoe company where one department makes shoes and another department handles final touches and packaging. What would happen if the packaging process were slow and inefficient? Shoes would pile up in the packaging department. Now let's add a twist: what if the shoe-making department brings in new machines and improves its process, making shoe-making two to three times faster? Imagine the state of the packaging department. Similarly, cloud computing and DevOps have gained momentum, increasing the speed of delivery and improving the quality of the end product. Thus, the agile approach to application development, improvements in technology, and disruptive innovations and approaches have created a gap between development and operations teams.


DevOps attempts to fill these gaps by developing a partnership between the development and operations teams. The DevOps movement emphasizes communication, collaboration, and integration between software developers and IT operations. DevOps promotes collaboration, and collaboration is facilitated by automation and orchestration in order to improve processes. In other words, DevOps essentially extends the continuous development goals of the agile movement to continuous integration and release. DevOps is a combination of agile practices and processes leveraging the benefits of cloud solutions. Agile development and testing methodologies help us meet the goals of continuously integrating, developing, building, deploying, testing, and releasing applications. It provides a mechanism for constant feedback from different teams and stakeholders. It also provides transparency in the form of a platform for collaboration across teams, such as business analysts, developers, and testers. In short, agile and DevOps are compatible and increase each other's value.

One of the most popular sayings is that practice makes perfect. What if that saying were applied to a production-like environment? It becomes much easier to repeat the entire process, as there are no last-minute surprises, and most of the issues in deployment have already been experienced and dealt with. The development team supports operational requirements such as deploy scripts, diagnostics, and load and performance testing from the beginning of the application delivery lifecycle, and the operations team provides knowledgeable support and feedback before, during, and after deployment. The remedy is to integrate the testing, deployment, and release activities into the development process. This is done by performing all these activities multiple times and making them an ongoing part of development, so that by the time you are ready to release your system into production there is little to no risk, because the deployment process has already been rehearsed many times in progressively more production-like environments.

Cloud computing - the disruptive innovation

A major challenge is managing the infrastructure for all environments. Virtualization and cloud environments can help you get started with this. The cloud helps us overcome this hurdle by providing flexible on-demand resources and environments. It provides distributed access across the globe and helps in the effective utilization of resources. The cloud provides a repository of software tools that can be used on an on-demand basis. We can clone environments and reproduce required versions as and when required. The entire development, test, and production environments can be monitored and managed using the facilities provided by cloud providers. With the advent of cloud computing, it is easy to recreate every piece of infrastructure used by an application using automation. This means that operating systems, OS configuration, runtime environments and configuration, infrastructure configuration, and so forth can all be managed. In this way, it is easy to recreate the production environment exactly in an automated fashion. Thus, DevOps on cloud brings in the best-of-breed solution from both agile development and cloud solutions. It helps in providing a distributed agile environment in the cloud, leading to continuous accelerated delivery.

Why DevOps?

DevOps is effective because of new methodologies, automation tools, agile resources of cloud service providers, and other disruptive innovations, practices, and technologies. However, it is not only about tools and technology; DevOps is more about culture than about tools or technology alone.


"Technology is just a tool. In terms of getting the kids working together and motivating them, the teacher is the most important."  

 -- Bill Gates

There is an urgent need for a change in the way development and operations teams collaborate and communicate. Organizations need a change in culture and long-term business goals that include DevOps in their vision. It is important to establish the pain points and obstacles experienced by different teams or business units and use that knowledge to refine business strategy and fix goals.


"People always fear change. People feared electricity when it was invented, didn't they? People feared coal; they feared gas-powered engines... There will always be ignorance, and ignorance leads to fear. But with time, people will come to accept their silicon masters."

 -- Bill Gates

If we identify the common issues faced by different sections of an organization and change our strategy to bring more value, that can be a stepping stone in the direction of DevOps. With old values and objectives, it is difficult to adopt any new path, so it is very important to align people with the new process first. For example, a team has to understand the value of the agile methodology; otherwise, its members will resist using it. They might resist it because they are comfortable with the old process. Hence, it is important to make them realize the benefits as well as to empower them to bring about the change.


"Change is hard because people overestimate the value of what they have—and underestimate the value of what they may gain by giving that up."

 -- James Belasco and Ralph Stayer

Self-reliant teams bring out their best when they are empowered. We also need to understand that power comes with accountability and responsibility. Cross-functional teams work together and enhance quality by providing their expertise in the development process; however, quality is not an isolated function. Communication and collaboration across teams raise quality far higher.

The end objective of the DevOps culture is continuous improvement. We learn from our mistakes, and it becomes experience. Experience helps us identify robust design patterns and minimize errors in processes. This leads to an enhancement of productivity, and hence, we achieve new heights with continuous innovations.


"Software innovation, like almost every other kind of innovation, requires the ability to collaborate and share ideas with other people, and to sit down and talk with customers and get their feedback and understand their needs."

 -- Bill Gates

The benefits of DevOps

The following diagram covers the benefits of DevOps:

Collaboration among different stakeholders brings many business and technical benefits that help organizations achieve their business goals.


The DevOps lifecycle - it's all about "continuous"

Continuous Integration (CI), Continuous Testing (CT), and Continuous Delivery (CD) are significant parts of the DevOps culture. CI includes automating builds, unit tests, and packaging processes, while CD is concerned with the application delivery pipeline across different environments. CI and CD accelerate the application development process through automation across different phases, such as build, test, and code analysis, and enable users to achieve end-to-end automation in the application delivery lifecycle:

Continuous integration and continuous delivery or deployment are well supported by cloud provisioning and configuration management. Continuous monitoring helps identify issues or bottlenecks in the end-to-end pipeline and helps make the pipeline effective.

Continuous feedback is an integral part of this pipeline; it tells stakeholders whether they are close to the required outcome or heading in a different direction.


"Continuous effort – not strength or intelligence – is the key to unlocking our potential."

 -- Winston Churchill

The following diagram shows a mapping of different parts of an application delivery pipeline with the toolset for Java web applications:

We will use a sample Spring application throughout this book for demonstration purposes, which is why the toolset is related to Java.

Build automation

An automated build helps us create an application build using build automation tools such as Apache Ant and Apache Maven. An automated build process includes the following activities:

  • Compiling source code into class files or binary files

  • Providing references to third-party library files

  • Providing the path of configuration files

  • Packaging class files or binary files into WAR files in the case of Java

  • Executing automated test cases

  • Deploying WAR files on local or remote machines

  • Reducing manual effort in creating the WAR file

Maven and Ant automate the build process and make it simple, repeatable, and less error-prone, as it is a create-once-run-multiple-times concept. Build automation is the basis of any kind of automation in the application delivery pipeline:

Build automation is essential for continuous integration, and the rest of the automation is effective only if the build process is automated. All CI servers, such as Jenkins and Atlassian Bamboo, use build files for continuous integration and for creating their application delivery pipelines.
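The essence of an automated build is a fixed, ordered set of steps that fails fast and can be rerun identically on any machine. The following is a minimal, hypothetical Python sketch of that idea (the step names are illustrative stand-ins for compiling, testing, and packaging, not a real build tool's API):

```python
def run_build(steps):
    """Run build steps in order; fail fast and report which step broke."""
    for name, step in steps:
        if not step():
            return f"FAILURE: {name}"
    return "SUCCESS"

# Illustrative stand-ins for compile, unit-test, and packaging phases.
steps = [
    ("compile", lambda: True),
    ("unit-tests", lambda: True),
    ("package-war", lambda: True),
]

print(run_build(steps))                            # SUCCESS
print(run_build([("compile", lambda: False)]))     # FAILURE: compile
```

Because the step order lives in one definition rather than in someone's head, the same process is repeatable across developer machines and CI servers, which is exactly what Maven's and Ant's build files provide.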

Continuous integration

What is continuous integration? In simple words, CI is a software engineering practice where each check-in made by a developer is verified by either of the following:

  • Pull mechanism: Executing an automated build at a scheduled time

  • Push mechanism: Executing an automated build when changes are saved in the repository

This step is followed by executing a unit test against the latest changes available in the source code repository:
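The pull and push triggers described above can be sketched in a few lines of Python. This is a hedged, simplified model (the repository is just a dict holding its latest revision id, and the class and method names are illustrative, not any real CI server's API):

```python
class CIServer:
    def __init__(self):
        self.last_built = None
        self.builds = []

    def build(self, revision):
        self.builds.append(revision)
        self.last_built = revision

    # Pull mechanism: the CI server polls on a schedule and builds only
    # if the repository head has moved since the last build.
    def poll(self, repo):
        head = repo["head"]
        if head != self.last_built:
            self.build(head)

    # Push mechanism: the repository (for example, via a webhook)
    # notifies the CI server the moment a change is saved.
    def on_push(self, revision):
        self.build(revision)

repo = {"head": "rev1"}
ci = CIServer()
ci.poll(repo)          # scheduled poll finds rev1 and builds it
ci.poll(repo)          # nothing changed, so no new build
repo["head"] = "rev2"
ci.on_push("rev2")     # push trigger builds immediately
print(ci.builds)       # ['rev1', 'rev2']
```

The practical difference is latency: polling delays feedback until the next scheduled check, while a push trigger gives feedback as soon as the commit lands.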

The main benefit of continuous integration is quick feedback based on the result of build execution. If the build is successful, all is well; otherwise, assign responsibility to the developer whose commit broke the build, notify all stakeholders, and fix the issue.

So why is CI needed? Because it makes things simple and helps us identify bugs or errors in the code at a very early stage of development, when it is relatively easy to fix them. Just imagine if the same scenario takes place after a long duration and there are too many dependencies and complexities we need to manage. In the early stages, it is far easier to cure and fix issues; consider health issues as an analogy, and things will be clearer in this context.

Continuous integration is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

CI is a significant part of, and in fact a base for, the release-management strategy of any organization that wants to develop a DevOps culture.

Following are immediate benefits of CI:

  • Automated integration with pull or push mechanism

  • Repeatable process without any manual intervention

  • Automated test case execution

  • Coding standard verification

  • Execution of scripts based on requirement

  • Quick feedback: build status notification to stakeholders via e-mail

  • Teams can focus on their work rather than on managing processes

Jenkins, Apache Continuum, Buildbot, GitLab CI, and so on are some examples of open source CI tools. AnthillPro, Atlassian Bamboo, TeamCity, Team Foundation Server, and so on are some examples of commercial CI tools.

Best practices

We will now be looking at best practices that can be useful when considering a continuous integration implementation:

  • Maintain a code repository such as Git or SVN.

  • Check in third-party JAR files, build scripts, other artifacts, and so on to the code repository.

  • Execute builds fully from the code repository: Use a clean build.

  • Automate the build using Maven or Ant for Java.

  • Make the build self-testing: Create unit tests.

  • Commit all changes at least once a day per feature.

  • Every commit should be built to verify the integrity of changes.

  • Authenticate users and enforce access control (authentication and authorization).

  • Use alphanumeric characters for build names and avoid symbols.

  • Keep different build jobs to maintain granularity and manage operations in a better way. A single job for all tasks is difficult to troubleshoot. Separate jobs also make it easier to assign build execution to slave instances, if that concept is supported by the CI server.

  • Back up the home directory of the CI server regularly, as it contains archived builds and other artifacts that may be useful in troubleshooting.

  • Make sure the CI server has enough free disk space available as it stores a lot of build-related details.

  • Do not schedule multiple jobs to start at the same time, or use a master-slave concept, where specific jobs are assigned to slave instances so that multiple build jobs can be executed at the same time.

  • Set up an e-mail, SMS, or Twitter notification to specific stakeholders of a project or an application. It is advisable to use customized e-mails for specific stakeholders.

  • It is advisable to use community plugins.

Cloud computing

Cloud computing is regarded as one of the groundbreaking innovations of recent years; it is reshaping the technology landscape. With breakthroughs in service and business models, cloud computing has expanded its role to become a backbone of IT services. Over time, organizations have progressed from dedicated servers to consolidation, then to virtualization and cloud computing:


Cloud computing provides elastic and unlimited resources that can be efficiently utilized at times of peak load and normal load with a pay-per-use pricing model. The pay-as-you-go feature is a boon for development teams that have faced resource scarcity for years. It is possible to automate resource provisioning and configuration based on your requirements, which reduces a lot of manual effort. For more information, refer to NIST SP 800-145, The NIST Definition of Cloud Computing.

It has opened up various opportunities in terms of the availability of application deployment environments, considering three service models and four deployment models, as shown in the following diagram:

There are four cloud deployment models, each addressing specific requirements:

  • Public cloud: This cloud infrastructure is available to the general public

  • Private cloud: This cloud infrastructure is operated for and by a single organization

  • Community cloud: This cloud infrastructure is shared by a specific community that has shared concerns

  • Hybrid cloud: This cloud infrastructure is a composition of two or more cloud deployment models

Cloud computing is pivotal if we want to achieve our goal of automation and inculcate a DevOps culture in any organization. Infrastructure can be treated like code when creating resources, configuring them, and managing them with configuration management tools. Cloud resources play an essential role in the successful adoption of a DevOps culture. Elastic, scalable, pay-as-you-go resource consumption enables organizations to use the same types of cloud resources across different environments. The major problems across environments are inconsistency and limited capacity; cloud computing solves these problems and brings economic benefits as well.

Configuration management

Configuration management (CM) manages changes in the system or, to be more specific, in the server runtime environment. Let's consider an example where we need to manage multiple servers with the same kind of configuration. For example, we need to install Tomcat on each server. What if we need to change the port on all servers, update some packages, or provide rights to some users? Any such modification done manually is an error-prone process. As the same configuration is being used for all the servers, automation can be useful here. Automating the installation and modification of the server runtime environment and permissions brings servers up to spec effectively.

CM is also about keeping track of versions and of details related to the state of specific nodes or servers. It is a far better situation when nodes can detect configuration drift and update themselves. A centralized change can trigger this, or nodes can ask the CM server whether they need to update themselves. CM tools make this process efficient: only changed behavior is updated, rather than the entire installation and modification being applied again to the server nodes.

There are many popular configuration management tools in the market, such as Chef, Puppet, Ansible, and Salt. Each tool is different in the way it works, but the characteristics and end goal are the same: to bring standardized behavior to the state changes of specific nodes without any errors.
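The convergence behavior shared by these tools can be illustrated with a small, hedged Python sketch: compare the desired state with the node's current state and apply only the differences, so that a second run changes nothing (idempotence). The state keys below are illustrative, not any real Chef or Puppet resource names:

```python
def converge(current, desired):
    """Mutate current toward desired; return the list of changes applied."""
    changes = []
    for key, value in desired.items():
        if current.get(key) != value:
            current[key] = value
            changes.append(f"set {key}={value}")
    return changes

# A node that has drifted: Tomcat is installed but on the wrong port,
# and a required user right is missing entirely.
node = {"tomcat": "installed", "port": 8080}
desired = {"tomcat": "installed", "port": 9090, "user_rights": "dev-team"}

print(converge(node, desired))  # only the drifted and missing keys change
print(converge(node, desired))  # second run: [] — the node is converged
```

This only-apply-the-diff, run-repeatedly-without-harm property is what lets CM tools manage hundreds of servers without reinstalling everything on every run.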

Continuous delivery/continuous deployment

Continuous delivery and continuous deployment are used interchangeably more often than not. However, there is a small difference between them. Continuous delivery is a process of deploying an application in any environment in an automated fashion and providing continuous feedback to improve its quality. Continuous deployment, on the other hand, is all about deploying an application with the latest changes to the production environment. In other words, we can say that continuous deployment implies continuous delivery, but the converse isn't true:

Continuous delivery is significant because of the incremental releases that follow short spans of implementation, or sprints in agile terms. Deploying a feature-ready application from development to testing may involve multiple iterations within a sprint due to changes in requirements or their interpretation. At the end of a sprint, the final, feature-ready application is deployed to the production environment. Given that there are multiple deployments to a testing environment even within a short span of time, it is advisable to automate the process. Scripts that create infrastructure and runtime environments for all environments are useful, as they make it easier to provision resources in those environments.
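The delivery/deployment distinction can be made concrete with a hedged sketch: both practices automate the pipeline up to production, but continuous delivery keeps a manual approval gate in front of the production stage, while continuous deployment removes it. The stage names below are illustrative:

```python
STAGES = ["build", "test", "staging", "production"]

def run_pipeline(auto_deploy, prod_approved=False):
    """Run stages in order; gate production on approval unless auto_deploy."""
    completed = []
    for stage in STAGES:
        if stage == "production" and not auto_deploy and not prod_approved:
            break  # continuous delivery: wait for a human decision
        completed.append(stage)
    return completed

print(run_pipeline(auto_deploy=True))                       # continuous deployment
print(run_pipeline(auto_deploy=False))                      # stops at staging
print(run_pipeline(auto_deploy=False, prod_approved=True))  # approved release
```

In both cases every build is production-ready; the only difference is whether a person or the pipeline makes the final promotion decision.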

For example, to deploy an application in Microsoft Azure, we need the following resources:

  • The Azure web app configured with specific types of resources

  • A storage account to store BACPAC files to create the database

Then, we need to follow these steps:

  1. Create a SQL Server instance to host the database.

  2. Import BACPAC files from the storage account to create a new database.

  3. Deploy the web application to Microsoft Azure.

In this scenario, we may consider using a configuration file for each environment with respect to naming conventions and paths. We need similar types of resources in each environment; the configuration of those resources may change from one environment to another, but that can be managed in each environment's configuration file. Automation scripts can then pick the configuration file for the target environment, create the resources, and deploy the application to them. Hence, repetitive steps can be easily managed by an automated approach, which helps in both continuous delivery and continuous deployment.
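The per-environment configuration idea can be sketched as follows. This is an illustrative model only: the environment names, prefixes, and tiers are invented for the example, and the printed steps merely describe the Azure actions listed above rather than calling any real Azure API:

```python
# One small config per environment; the deployment steps stay identical.
CONFIGS = {
    "dev":  {"prefix": "myapp-dev",  "tier": "Free"},
    "test": {"prefix": "myapp-test", "tier": "Shared"},
    "prod": {"prefix": "myapp-prod", "tier": "Standard"},
}

def plan_deployment(env):
    """Produce the ordered deployment steps for a given environment."""
    cfg = CONFIGS[env]
    return [
        f"create web app {cfg['prefix']}-web (tier={cfg['tier']})",
        f"create storage account {cfg['prefix']}store",
        f"import BACPAC into database {cfg['prefix']}-db",
        f"deploy application to {cfg['prefix']}-web",
    ]

for step in plan_deployment("dev"):
    print(step)
```

Because only the configuration varies, the same automated sequence can be rehearsed in dev and test before it ever runs against production.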

Best practices for continuous delivery

The following are some common practices we should follow to implement continuous delivery:

  • Plan to automate everything in the application delivery pipeline: Consider a situation where a single commit is all that is required to deploy an application to the target environment. It should include compilation, unit test execution, code verification, notification, instance provisioning, setting up the runtime environment, and deployment. You must remember to automate:

    • Repetitive tasks

    • Difficult tasks

    • Manual tasks

  • Develop and test the newly implemented bug fixes in a production-like environment; it is possible now with pay-per-use resources provided by cloud computing.

  • Deploy frequently in the development and test environments to gain experience and consistency.


To learn more, refer to Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, and to articles comparing continuous delivery versus continuous deployment.

Continuous monitoring

Continuous monitoring is the backbone of the end-to-end delivery pipeline, and open source monitoring tools are like toppings on an ice cream scoop. It is desirable to monitor almost every stage in order to have transparency about all the processes, as shown in the following diagram. Monitoring also helps us troubleshoot quickly.

Monitoring should be a well-thought-out implementation of a plan, and it should be a part of each of the components mentioned in the following diagram. Consider monitoring practices from continuous integration through to continuous delivery/deployment:

There is a likely scenario where end-to-end deployment is implemented in an automated fashion but issues still arise due to coding problems, query-related problems, infrastructure-related issues, and so on. We can consider different types of monitoring, as shown in the following diagram:

However, there is normally a tendency to monitor only infrastructure resources. The question one must ask is whether it is enough or whether we must focus on other types of monitoring as well. To answer this question, we must have a monitoring strategy in place in the planning stage itself. It is always better to identify stakeholders, monitoring aspects, and so on based on the culture and experience of an organization.

Continuous feedback

Continuous feedback is the last important component of the DevOps culture, and it provides a means of improvement and innovation. Feedback drives improvement when it comes from stakeholders who know what they need and what the outcome should be. Feedback from the customer after deployment can serve as input for developers to improve on, as shown in the following diagram, and integrating it correctly keeps the customer happy:

Here, we are considering a situation where a feature implementation is delivered to the stakeholders and they provide their feedback. In the waterfall model, the feedback cycle is very long, so developers may not know whether the end product is what the customer asked for, or whether the interpretation of what needed to be delivered changed somewhere along the way. In an agile or DevOps culture, a shorter feedback cycle makes a major difference: stakeholders can actually see the result of each small implementation phase, so the outcome is verified multiple times. If customers are not satisfied, feedback arrives at a stage where it is not very tedious to change things. In the waterfall model, this would have been a disaster, as feedback used to arrive very late; with time and growing dependencies, complexity increases, and changes in such situations take a long time. In addition, no one remembers what they wrote two months earlier. Hence, a faster feedback cycle improves the overall process, connects the endpoints, and helps teams find patterns in mistakes, learn lessons, and apply improved practices. Continuous feedback not only improves the technical aspects of the implementation but also provides a way to assess whether current features fit into the overall scenario or there is still room for improvement. It is important to realize that continuous feedback plays a significant role in making customers happy by providing an improved experience.


Tools and technologies

Tools and technologies play an important role in the DevOps culture; however, it is not the only part that needs attention. For all parts of the application delivery pipeline, different tools, disruptive innovations, open source initiatives, community plugins, and so on are required to keep the entire pipeline running to produce effective outcomes.

Code repositories – Git

Subversion (SVN) is a version control system used to track all the changes made to files and folders. Using it, you can keep track of an application as it is built; even features added months ago can be traced through the version history. It is all about tracking the code. Whenever a new feature is added or new code is written, it is first tested and then committed by the developer. The commit sends the code to the repository, which tracks the changes and assigns it a new revision. The developer can also add a comment so that other developers can easily understand what was changed. Other developers only have to update their working copies to see the changes.


The following are some advantages of using source code repositories:

  • Many developers can work simultaneously on the same code

  • If a computer crashes, the code can still be recovered as it had been committed in the server

  • If a bug occurs, the new code can be easily reverted to the previous version

Git is an open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. It is easy to learn and has good performance. Every working copy is a full-fledged repository with complete history and version-tracking capabilities, independent of a central server or network access. Git was designed and developed by Linus Torvalds in 2005.


The following are some significant characteristics of Git:

  • It provides support for nonlinear development

  • It is compatible with existing systems and protocols

  • It ensures the cryptographic authentication of history

  • It has well-designed pluggable merge strategies

  • It consists of toolkit-based designs

  • It supports various merging techniques, such as resolve, octopus, and recursive
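These characteristics can be seen with a few commands. The following sketch creates a local repository and commits to it without any central server; the directory name, identity, and commit message are illustrative:

```shell
# Create a local repository and commit to it; no central server or
# network access is needed, because the clone IS a full repository.
git init demo
git -C demo config user.email "dev@example.com"   # identity for the commit
git -C demo config user.name  "Dev"

echo "Hello DevOps" > demo/README.txt
git -C demo add README.txt              # stage the change explicitly
git -C demo commit -m "Add README"      # commit to the LOCAL repository
git -C demo log --oneline               # full history, available offline
```

Sharing the work is a separate, explicit step (git push to a remote repository), which is the key difference from Subversion's commit-straight-to-server model.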

Differences between SVN and Git

SVN and Git are both very popular source code repositories; however, Git has become more popular in recent times. The major differences between them are described in the following comparison:



SVN: Centralized version control system
Git: Distributed version control system

SVN: A snapshot of a specific version of the project is available on the developer's machine
Git: A complete clone of the full-fledged repository is available on the developer's machine

SVN: Operations such as commit, merge, blame, and revert, as well as branch and log inspection, are performed against a central repository
Git: Operations such as commit, merge, and blame, as well as branch and log inspection, are performed against the local repository; pull and push operations against a remote repository are used when the developer needs to share work with others

SVN: URLs are used for trunks, branches, or tags, for example https://<URL/IP Address>/svn/trunk/AntExample1/
Git: .git is the root of the project, and commands address branches rather than URLs, for example [email protected]:mitesh51/game-of-life.git

SVN: File changes are included in the next commit
Git: File changes have to be staged explicitly, and only then are they included in the next commit

SVN: Committed work is transferred directly to the central repository, so a direct connection to the repository must be available
Git: Committed work goes to the local repository; to share it with other developers, it must be pushed to the remote repository, and only then is a connection to the remote repository needed

SVN: Each commit gets an ascending revision number
Git: Each commit gets a commit hash rather than an ascending revision number

SVN: The working copy contains a .svn directory structure
Git: The working copy contains a .git directory structure

SVN: Short learning curve
Git: Long learning curve

Build tools – Maven

Apache Maven is a build tool licensed under the Apache License 2.0. It is primarily used for Java projects and runs in cross-platform environments; it can also be used to build projects written in Ruby, Scala, C#, and other languages.

One of Maven's most important features is the Project Object Model (POM): the pom.xml file contains information such as the name of the application, owner information, how the application distribution file is created, and how dependencies are managed.

The Maven build lifecycle consists of predefined phases, such as validate, generate-sources, process-sources, generate-resources, process-resources, compile, process-test-sources, process-test-resources, test-compile, test, package, install, and deploy.

The following is an example of a sample pom.xml file used in Maven:
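A minimal pom.xml along these lines is sketched below; the coordinates and dependency are illustrative placeholders, not the book's exact sample:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- Coordinates identifying the application (placeholder values) -->
  <groupId>com.example</groupId>
  <artifactId>sample-webapp</artifactId>
  <version>1.0-SNAPSHOT</version>

  <!-- How the distribution file is created: a WAR for web deployment -->
  <packaging>war</packaging>

  <!-- Dependency management: Maven resolves and downloads these -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

Running a lifecycle phase such as package executes all the earlier phases (validate, compile, test, and so on) before producing the WAR file.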

Continuous integration tools – Jenkins

Jenkins was originally an open source continuous integration tool written in Java and distributed under the MIT License. Jenkins 2, however, is an open source automation server that focuses on any kind of automation, including continuous integration and continuous delivery.

Jenkins can be used across different platforms, such as Windows, Ubuntu/Debian, Red Hat/Fedora, Mac OS X, openSUSE, and FreeBSD. Jenkins enables users to utilize continuous integration services for software development in an agile environment. It can be used to build freestyle software projects based on Apache Ant and Maven 2/Maven 3. It can also execute Windows batch commands and shell scripts.

It can be easily customized with the use of plugins. There are different kinds of plugins available for customizing Jenkins based on specific needs for setting up continuous integration. Categories of plugins include source code management (the Git, CVS, and Bazaar plugins), build triggers (the Accelerated Build Now and Build Flow plugins), build reports (the Code Scanner and Disk Usage plugins), authentication and user management (the Active Directory and GitHub OAuth plugins), and cluster management and distributed build (Amazon EC2 and Azure Slave plugins).


To know more about Jenkins, please refer to Jenkins Essentials.

Jenkins accelerates the software development process through automation:

Key features and benefits

Here are some striking benefits of Jenkins:

  • Easy install, upgrade, and configuration.

  • Supported platforms: Windows, Ubuntu/Debian, Red Hat/Fedora/CentOS, Mac OS X, openSUSE, FreeBSD, OpenBSD, Solaris, and Gentoo.

  • Manages and controls development lifecycle processes.

  • Supports non-Java projects, such as .NET, Ruby, PHP, Drupal, Perl, C++, Node.js, Python, Android, and Scala.

  • A development methodology of daily integrations verified by automated builds.

  • Every commit can trigger a build.

  • Jenkins is a fully featured technology platform that enables users to implement CI and CD.

  • The use of Jenkins is not limited to CI and CD. It is possible to include a model and orchestrate the entire pipeline with the use of Jenkins as it supports shell and Windows batch command execution. Jenkins 2.0 supports a delivery pipeline that uses a Domain-Specific Language (DSL) for modeling entire deployments or delivery pipelines.

  • Pipeline as code provides a common language, a DSL, to help the development and operations teams collaborate effectively.

  • Jenkins 2 brings a new GUI with stage view to observe the progress across the delivery pipeline.

  • Jenkins 2.0 is fully backward compatible with the Jenkins 1.x series.

  • Jenkins 2 now requires Servlet 3.1 to run.

  • You can use embedded Winstone-Jetty or a container that supports Servlet 3.1 (such as Tomcat 8).

  • GitHub, Collabnet, SVN, TFS code repositories, and so on are supported by Jenkins for collaborative development.

  • Continuous integration: Automates build, automated testing (continuous testing), packaging, and static code analysis.

  • Supports common test frameworks such as HP ALM Tools, JUnit, Selenium, and MSTest.

  • For continuous testing, Jenkins has plugins for both functional and performance testing; Jenkins slaves can execute test suites on different platforms.

  • Jenkins supports static code analysis tools, such as code verification with CheckStyle and FindBugs. It also integrates with Sonar.

  • Continuous delivery and continuous deployment: It automates the application deployment pipeline, integrates with popular configuration management tools, and automates environment provisioning.

  • To achieve continuous delivery and deployment, Jenkins supports automatic deployment; it provides a plugin for direct integration with IBM uDeploy.

  • Highly configurable: A plugin-based architecture provides support for many technologies, repositories, build tools, and test tools; the open source CI server offers over 400 plugins for extensibility.

  • Supports distributed builds: Jenkins supports "master/slave" mode, where the workload of building projects is delegated to multiple slave nodes.

  • It has a machine-consumable remote access API to retrieve information from Jenkins for programmatic consumption, to trigger a new build, and so on.

  • It delivers a better application faster by automating the application development lifecycle, allowing faster delivery.
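The pipeline-as-code model mentioned above is expressed in a Jenkinsfile. The following is a minimal sketch of the Jenkins 2 pipeline DSL, assuming a Maven-based project; the stage names and the deploy script are hypothetical:

```groovy
// Jenkinsfile -- a minimal sketch of the Jenkins 2 pipeline DSL
node {
    stage('Build') {
        sh 'mvn clean package'   // compile and package the WAR
    }
    stage('Test') {
        sh 'mvn test'            // run the JUnit test suite
    }
    stage('Deploy') {
        sh './deploy.sh test'    // hypothetical deployment script
    }
}
```

Because the pipeline lives in the repository alongside the code, the development and operations teams can review and evolve the delivery process together.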

The Jenkins build pipeline (quality gate system) provides a build pipeline view of upstream and downstream connected jobs, as a chain of jobs, each one subjecting the build to quality-assurance steps. It can define manual triggers for jobs that require intervention prior to execution, such as an approval process outside of Jenkins. Quality gates and the orchestration of the build pipeline are illustrated in the following diagram:

Jenkins can be used with tools in the following categories (where the Java and .NET ecosystems differ, both are listed):

Code repositories: Subversion, Git, CVS, StarTeam

Build tools: Ant and Maven (Java); NAnt and MSBuild (.NET)

Code analysis tools: Sonar, CheckStyle, FindBugs, NCover, Visual Studio Code Metrics, PowerTool

Continuous integration: Jenkins

Continuous testing: Jenkins plugins (HP Quality Center 10.00 with the QuickTest Professional add-in, HP Unified Functional Testing 11.5x and 12.0x, HP Service Test 11.20 and 11.50, HP LoadRunner 11.52 and 12.0x, HP Performance Center 12.xx, HP QuickTest Professional 11.00, HP Application Lifecycle Management 11.00, 11.52, and 12.xx, HP ALM Lab Management 11.50, 11.52, and 12.xx, JUnit, MSTest, and VsTest)

Infrastructure provisioning: Chef (configuration management tool)

Virtualization/cloud service providers: VMware, AWS, Microsoft Azure (IaaS), traditional environments

Continuous delivery/deployment: Chef, deployment plugins, shell scripts, PowerShell scripts, Windows batch commands

Configuration management tools – Chef

Software Configuration Management (SCM) is a software engineering discipline comprising the tools and techniques that an organization uses to manage changes in software components. It includes the technical aspects of a project, communication, and control of modifications to the project during development. It is also called software control management. SCM practices apply to all software projects, from development and rapid prototyping to ongoing maintenance, and they enrich the reliability and quality of software.

Chef is a configuration management tool that transforms infrastructure into code. It automates the building, deploying, and managing of infrastructure; by using Chef, infrastructure can be treated as code. The concept behind Chef is reusability. It uses recipes to automate infrastructure. Recipes are instructions for configuring databases, web servers, and load balancers; they describe every part of the infrastructure and how it should be configured, deployed, and managed. Recipes use building blocks known as resources. A resource describes a part of the infrastructure, such as a template, a package, or a file to be installed.

These recipes and configuration data are stored on Chef servers. The Chef client is installed on each node of the network. A node can be a physical or virtual server.

As shown in the following diagram, the Chef client periodically checks the Chef server for the latest recipes and to see whether the node is in compliance with the policy defined by the recipes. If it is out of date, the Chef client runs them on the node to bring it up to date:


The following are some important features of the Chef configuration management tool:

  • The Chef server:

    • It manages a huge number of nodes

    • It maintains a blueprint of the infrastructure

  • The Chef client:

    • It manages various operating systems, such as Linux, Windows, Mac OS, Solaris, and FreeBSD

    • It provides integration with cloud providers

    • It is easy to manage the containers in a versionable, testable, and repeatable way

    • Chef provides an automation platform to continuously define, build, and manage cloud infrastructure used for deployment

    • It enables resource provisioning and the configuration of resources programmatically, and it will help in the deployment pipeline in order to automate provisioning and configuration

The following three basic concepts of Chef will enable organizations to quickly manage any infrastructure:

  • Achieving the desired state

  • Centralized modeling of IT infrastructure

  • Resource primitives that serve as building blocks
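Putting these concepts together, a minimal hypothetical recipe might declare the desired state of a web server with a few resources; the package and file names below are assumptions for a Debian/Ubuntu node:

```ruby
# A minimal hypothetical Chef recipe: each block is a *resource*
# declaring a desired state; chef-client converges the node to it.
package 'apache2' do
  action :install            # ensure the web server package is present
end

service 'apache2' do
  action [:enable, :start]   # started now, and enabled on boot
end

file '/var/www/html/index.html' do
  content '<h1>Configured by Chef</h1>'
  mode '0644'
end
```

Because each resource describes a desired state rather than a sequence of commands, running the recipe repeatedly is safe: chef-client only changes what is out of compliance.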


To learn more about Chef, refer to Learning Chef.

Cloud service providers

AWS and Microsoft Azure are popular public cloud providers right now. They provide cloud services in different areas, and both have their strong areas. Based on the organization's culture and past partnerships, either can be considered after a detailed assessment based on requirements.

The following is a side-by-side comparison:


AWS / Microsoft Azure

Virtual machines: Amazon EC2 / Azure Virtual Machines

Platform as a Service (web applications): AWS Elastic Beanstalk / Azure Web Apps

Container services: Amazon EC2 Container Service / Azure Container Service

Relational databases: Amazon RDS / Azure SQL Database

Big data: Amazon EMR / Azure HDInsight

Networking: Amazon VPC / Azure Virtual Network

Caching: Amazon ElastiCache / Azure Redis Cache

Data import/export: AWS Import/Export / Azure Import/Export

Search: Amazon CloudSearch / Azure Search

Content delivery network: Azure CDN

Identity and access management: AWS IAM and Directory Services / Azure Active Directory

Automation: AWS OpsWorks / Azure Automation

Container technology

Containers use OS-level virtualization, where the kernel is shared between isolated user spaces. Docker and OpenVZ are popular open source examples of OS-level virtualization technologies.


Docker is an open source initiative to wrap code, the runtime environment, system tools, and libraries. Docker containers share the kernel they are running on and hence start instantly and in a lightweight manner. Docker containers run on Windows as well as Linux distributions. It is important to understand how containers and virtual machines are different. Here is a comparison table of virtual machines and containers:


You can download Docker by visiting .
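As a sketch of how Docker wraps code and its runtime environment together, a hypothetical Dockerfile for a Java web application might look like the following; the base image tag and file names are assumptions:

```dockerfile
# Base image: Tomcat 8 on Java 8 (tag is an assumption)
FROM tomcat:8-jre8

# Copy the application archive into Tomcat's deployment directory
COPY petclinic.war /usr/local/tomcat/webapps/

# The container listens on Tomcat's default port
EXPOSE 8080
```

On a host with Docker installed, the image is built with docker build -t petclinic . and run with docker run -d -p 8080:8080 petclinic; the container starts almost instantly because it shares the host kernel rather than booting a full guest operating system.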

Monitoring tools

There are many open source tools available for monitoring resources. Zenoss and Nagios are two of the most popular open source tools and have been adopted by many organizations.


Zenoss is an agentless, open source management platform for applications, servers, and networks, released under the GNU General Public License (GPL) version 2 and based on the Zope application server. Zenoss Core combines the extensible Python programming language, the object-oriented Zope web application server, network monitoring protocols, RRDtool for graphing and logging time-series data, MySQL, and the event-driven networking engine Twisted. It provides an easy-to-use web portal to monitor alerts, performance, configuration, and inventory. Zenoss features are illustrated in the following diagram:


You can visit Zenoss Core 5 website at .


Nagios is a cross-platform, open source monitoring tool for infrastructure and networks. It monitors network services such as FTP, HTTP, SSH, and SMTP; it monitors resources, detects problems, and alerts stakeholders. Nagios empowers organizations and service providers to identify and resolve issues in such a way that outages have minimal impact on the IT infrastructure and processes, ensuring the highest adherence to SLAs. Nagios can also monitor cloud resources such as compute, storage, and network.


You can get more information by navigating to Nagios official website at .
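Nagios checks are declared through object definitions in its configuration files. The following is a minimal hypothetical example for monitoring HTTP on a web server; the host name, address, and template names are assumptions:

```
# A hypothetical Nagios object definition: monitor HTTP on a web
# server and notify stakeholders when the check fails.
define host {
    use        linux-server        ; inherit defaults from a template
    host_name  webserver01
    address    192.168.1.10
}

define service {
    use                  generic-service
    host_name            webserver01
    service_description  HTTP
    check_command        check_http   ; plugin that probes port 80
}
```

Similar definitions cover FTP, SSH, SMTP, and custom checks, which is how a single Nagios instance can watch an entire delivery pipeline's infrastructure.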

Deployment orchestration/continuous delivery - Jenkins

The build pipeline, also called the deployment or application delivery pipeline, can be used to achieve end-to-end automation for all operations, including continuous integration, cloud provisioning, configuration management, continuous delivery, continuous deployment, and notifications. The following tools, integrated through Jenkins plugins, can be used for the overall orchestration of all the activities involved in end-to-end automation:

  • Continuous integration: Jenkins

  • Configuration management: Chef

  • Cloud service providers: AWS, Microsoft Azure

  • Container technology: Docker

  • Continuous delivery/deployment: ssh

  • End-to-end orchestration: Jenkins plugins

Here is a sample representation of end-to-end automation using different tools:

Jenkins can be used to manage unit testing and code verification; Chef can be used for setting up a runtime environment; Knife plugins can be used for creating a virtual machine in AWS or Microsoft Azure; the build pipeline or deployment pipeline plugins in Jenkins can be used for managing deployment orchestration.

From a single pipeline dashboard, we can view the status of all the builds configured in the pipeline. Each build in the pipeline acts as a kind of quality gate: if one build fails, the execution won't go any further. Additional dimensions can be added, such as notifications on compilation failures, unit test failures, or unsuccessful deployments. The final deployment can be gated on approval from a specific stakeholder. What about scenarios involving a parameterized build or the promoted build concept? All will be revealed in the chapters to follow!

The DevOps dashboard

One of the most liked components of DevOps culture is the dashboard or GUI that provides a combined status of all end-to-end activities. For automation tools, an easy-to-use web GUI is handy for managing resources. For end-to-end automation in application deployment activities, multiple open source or commercial tools are used. There is a high possibility that a single product may not be used for all activities, for example, Git or SVN as the repository, Jenkins as the CI server, and IBM UrbanCode Deploy as the deployment orchestration tool. In such a scenario, it is easier if there is a single-pane-of-glass view where we can track multiple tools for a specific application.

Hygieia is an open source DevOps dashboard that provides a way to track the status of a deployment pipeline. As of now, it tracks six different areas: features (Jira, VersionOne), code repositories (GitHub, Subversion), builds (Jenkins, Hudson), quality (Sonar, Cucumber/Selenium), monitoring, and deployment (IBM UrbanCode Deploy). The following is a sample image of a configured DevOps dashboard:


Download Hygieia from here .


An overview of a sample Java EE application

We are going to use PetClinic, available on GitHub. It is a sample Spring application with JUnit test cases already written for it.


A sample Spring-based application .

The PetClinic sample application demonstrates the use of Spring's core functionality to build simple yet robust database-oriented applications. It is accessible via a web browser:

A few use cases:

  • Add a new pet owner, a new pet, and information pertaining to a visit to the pet's visitation history

  • Update the information pertaining to a pet or a pet owner

  • View lists of veterinarians and their specialties, pet owners, pets, and pets' visitation histories

Once a WAR file is created, we can deploy it in Tomcat or another web application server. To verify it on the localhost, visit http://localhost:8080/petclinic. You will see something like this:

The list of tasks

These are the tasks we will try to complete in the rest of the chapters:

  • Jenkins installation, configuration, UI personalization

  • Java configuration (JAVA_HOME) in Jenkins

  • Maven or Ant configuration in Jenkins

  • Plugin installation and configuration in Jenkins

  • Security (access control, authorization, and project-based security) in Jenkins

  • Jenkins build configuration and execution

  • Email notification configuration

  • Deploying a WAR file to a web application server

  • Creating and configuring a build/deployment pipeline

  • Installing and configuring Chef

  • Installing and configuring Docker

  • Creating and configuring a virtual machine in AWS, Microsoft Azure, and containers

  • Deploying a WAR file into a virtual machine and a container

  • Configuring infrastructure monitoring

  • Orchestrating the application delivery pipeline using Jenkins plugins


Self-test questions

  1. Which of the following statements is not related to the development team in a traditional environment?

    • A competitive market creates pressure for on-time delivery of features or bug fixes

    • Production-ready code management and new feature implementation

    • The release cycle is often long and hence the development team has to make assumptions before the application deployment finally takes place

    • Redesigning or tweaking is needed to run the application in a production environment

  2. Which of the following are benefits of DevOps?

    • Collaboration, management, and security for the complete application development lifecycle management

    • Continuous innovation because of continuous development of new ideas

    • Faster delivery of new features or resolution of issues

    • Automated deployments and standardized configuration management for different environments

    • All of these

  3. Which of the following are parts of the DevOps culture or application delivery pipeline?

    • Continuous integration

    • Cloud provisioning

    • Configuration management

    • Continuous delivery/deployment

    • Continuous monitoring

    • Continuous feedback

  4. Which of the following are by-products of the DevOps culture or application delivery pipeline?

    • Continuous integration

    • Continuous delivery/deployment

    • Continuous monitoring

    • Continuous feedback

    • Continuous improvement

    • Continuous innovation

  5. State whether the following statements are true or false:

    • Jenkins and Atlassian Bamboo are build automation tools

    • Apache Ant and Apache Maven are continuous integration tools

    • Chef is a configuration management tool

    • Build automation is essential for continuous integration and the rest of the automation is effective only if the build process is automated

    • Subversion is a distributed version control system

    • Git is a centralized version control system

    • AWS and Microsoft Azure are public cloud service providers

  6. Which of the following are cloud deployment models according to NIST's definition of cloud computing?

    • Public cloud

    • Private cloud

    • Community cloud

    • Hybrid cloud

    • All of these

  7. Which of the following are cloud service models according to NIST's definition of cloud computing?

    • Software as a Service

    • Platform as a Service

    • Infrastructure as a Service

    • All of these

  8. Which of the following are major components of a Chef installation?

    • Chef server/hosted chef

    • Chef workstation

    • Nodes

    • All of these



Summary

In this chapter, we learned about the difficulties faced by development and operations teams in a traditional environment and how agile development helps in such a scenario. We saw what changed after the arrival of agile development and what challenges it brought with it. We covered the important aspects of the DevOps culture, including continuous integration and continuous delivery, as well as the cloud computing and configuration management practices that enhance these processes and help organizations adopt a DevOps culture.

In terms of tools and technologies, we covered a brief overview of SVN, Git, Apache Maven, Jenkins, AWS, Microsoft Azure, Chef, Nagios, Zenoss, and the DevOps dashboard Hygieia.

In the next chapter, we will see how to install and configure Jenkins 2.0 and implement continuous integration using a sample Spring application available on GitHub.

It is the right time to quote Charles Darwin as it is relevant in the context of DevOps culture:

"It is not the most intellectual or the strongest species that survives, but the species that survives is the one that is able to adapt to or adjust best to the changing environment in which it finds itself."

About the Author

  • Mitesh Soni

    Mitesh Soni is a DevOps enthusiast. He has worked on projects for DevOps enablement using Microsoft Azure and Visual Studio Team Services. He also has experience of working with other DevOps-enabling tools, such as Jenkins, Chef, IBM UrbanCode Deploy, and Atlassian Bamboo.

    He is a CSM, SCJP, SCWCD, VCP, IBM Bluemix, and IBM UrbanCode certified professional.
