About this book

Cloud-native software development is based on developing distributed applications that focus on speed, stability, and high availability. With this paradigm shift, software development has changed substantially and shifted to a more agile environment where distributed teams develop distributed applications. In addition, the environment where the software is built, tested, and deployed has changed from bare-metal servers to cloud systems. In this course, the new concepts of cloud-native Continuous Integration and Delivery are discussed in depth. Cloud-native tooling and services, such as cloud providers (AWS, Google Cloud), containerization with Docker, and container orchestrators such as Kubernetes, are part of this course, which teaches you how to analyze and design modern software delivery pipelines.

Publication date: December 2018
Publisher: Packt
Pages: 148
ISBN: 9781789805659

 

Chapter 1. Cloud-Native CI/CD Concepts

Note

Learning Objectives

By the end of this chapter, you will be able to:

  • Compare conventional approaches of software development with DevOps

  • Describe the DevOps toolchain steps

  • Identify the benefits of cloud-native architecture for software development

  • Describe the DevOps patterns for a cloud-native environment

  • Create a CI/CD pipeline in the cloud

Note

This chapter introduces basic principles of DevOps and cloud-native approaches in addition to presenting the steps for creating a CI/CD pipeline in the cloud.

 

Introduction to Cloud-Native CI/CD Concepts


In the past few years, there have been several paradigm shifts in software development and operations, presenting the industry with innovative methods for creating and deploying applications. Most importantly, two significant paradigm shifts have consolidated their capabilities for developing, deploying, and managing scalable applications: DevOps and cloud-native architecture. DevOps introduced a culture shift with an increased focus on small teams and agile development instead of large groups and long development cycles. Cloud-native microservices emerged with cloud-based horizontal scaling, making it possible to serve millions of customers. With these two powerful approaches combined, organizations now have the capability to create scalable, robust, and reliable applications with a high level of collaboration and information sharing among small teams. Before we begin expanding on DevOps and cloud-native architecture, we will explore the limitations posed by conventional software development and how it compares with the new approaches.

Conventional software development can be likened to manufacturing a passenger aircraft. The end product is enormous and requires considerable resources, such as a large infrastructure, capital, and personnel, to name a few. Requirements collection and planning are rigid and usually take weeks to months to finalize. Similarly, in the conventional development cycle, different parts of the software are always combined on the same product line, which is specifically built for a bulky, monolithic end product. Once developed, the product is finally delivered to customers. Note that there is very little flexibility offered to customers in terms of how they wish to use the product's features or in terms of dynamically altering them as per market fluctuations.

Nowadays, software development can be likened to manufacturing drones. The goal is to have leaner end products that can be mass produced and distributed. Unlike bulky software, today, software is distributed as microservices and built with relatively smaller and sometimes even geographically distributed teams. Requirements collection and planning are more flexible, and it is possible to let customers decide how to use these services and configure them on the fly. In addition to maintenance, managing and updating the software-as-a-service is included in the life cycle. This revolution in software development is possible due to new cloud-based architectural approaches and DevOps related cultural changes in organizations.

It is challenging to develop and run scalable cloud-native applications with the tools and mindset of conventional software development. Unlike large software packages that are delivered on disk drives and other storage devices, current implementations are designed as microservices and packaged as containers that are available via cloud systems. These containers run in clusters that are managed by cloud providers such as Amazon Web Services (AWS) or Google Cloud Platform (GCP). Thus, organizations do not have to invest in costly servers or bother with running them in-house. To develop, test, and run cloud-native applications, two essential DevOps practices must be implemented in a cloud-native manner: continuous integration (CI) and continuous delivery/deployment (CD).

Creating CI pipelines to test microservices and create containers is the first prerequisite of cloud-native application development. CD is based on delivering applications for customer usage and deploying them for production. With best practices, checklists, and real-life examples of cloud-native CI/CD pipelines, organizations can now bridge the gap between developers and customers efficiently and create reliable, robust, and scalable applications.

This chapter explores the impetus for the shift toward a DevOps culture and its impact on software development. Next, cloud-native architecture and the essential characteristics of the applications of the future are listed. The chapter also explains how cloud-native architecture complements DevOps and contributes toward successful organizations. The DevOps practices for cloud-native architecture, namely CI and CD, are described, and guidelines are provided for selecting the appropriate tools.

 

DevOps Culture


Software development organizations traditionally worked in a fast-paced environment without focusing on inter-team collaboration. Development teams attempted to produce software as quickly as possible for deployment by the operations team. Without clear communication between development and operations, conflicts and product failures were inevitable. When organizations examined the problems in depth, they realized that development teams had almost no idea about the runtime environment, and the operations team had practically no sound understanding of the requirements and features of the applications they were deploying. With enormous barriers between these teams, organizations created applications that did not simultaneously account for the runtime environment and the software requirements. Consequently, neither the development nor the operations team took responsibility for many of the problems, while both attempted to address numerous customer tickets, leading to wasted engineer hours and money.

The DevOps culture—the name is derived from Development and Operations—came into being to increase collaboration between development and operations. Organizations built DevOps teams with engineers from development and operations backgrounds to eliminate the communication barrier between these groups. In addition, many practices and tools were implemented to increase automation, decrease delivery times, and minimize risks. Eventually, this culture shift in organizations fostered quality and reliability with reduced lead times. In these new teams, developers acquired operational knowledge, such as cloud providers and customer environments.

Operations engineers also gained insight into the applications that they were deploying. Enhanced overall efficiency and advances in cloud-native architectures increased the adoption of the DevOps culture at various levels of organizations, from start-ups to enterprise companies.

Note

Although the term DevOps is used with various meanings in job postings and company culture manifestos, there is still no single accepted academic or practical definition. The term DevOps was coined by Patrick Debois in 2009 and first used at the DevOpsDays conference, which started in Belgium.

To summarize, we first described the issues encountered in the conventional method of software development. We discussed how DevOps increases collaboration and mitigates problems that are encountered in conventional approaches for software development. In the next section, we will discuss the best practices for implementing DevOps.

DevOps Practices

Organizations adopt unique methods to implement DevOps; thus, there are no specific standards in terms of implementation practices. In other words, it is difficult to find a single approach to implementing a DevOps culture shift when considering unique product requirements and organizational structures. However, there are certain core best practices that have been implemented in the industry by successful companies. The following ideas cover the core of the DevOps philosophy:

  • Continuous Integration (CI): Continuous integration focuses on integrating changes from different sources as soon as possible. Integration covers building, reviewing code quality, and testing. The main idea of CI is finding bugs and conflicts as quickly as possible and solving them early in the software life cycle.

  • Continuous Delivery (CD): Continuous delivery focuses on delivering and packaging the software under test as soon as possible. Similar to CI, CD aims to create production-ready packages and deploy them to the production environment. With this idea, all changes quickly reach customers, and developers are able to see their recent commits live.

  • Monitoring and logging: Monitoring metrics and collecting application logs are critical for investigating the causes of problems. Creating notifications based on key parameters and actively controlling systems help to create a reliable environment for end users. One of the most crucial points is that monitoring creates a proactive path for finding problems, rather than waiting for customers to encounter issues and raise tickets; a minimal alerting sketch follows this list.

  • Communication and collaboration: The communication and collaboration of different stakeholders is crucial to success in DevOps. All tools, procedures, and organizational changes should be implemented to increase communication and cooperation. Knowledge and information sharing with open communication channels between teams enables transparency and leads to successful organizations.
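
As a minimal sketch of such proactive alerting, consider the following hypothetical Prometheus alerting rule; Prometheus is one of the metric collection tools listed later in this chapter, and the metric name and threshold here are illustrative assumptions rather than values from this book:

    groups:
    - name: availability
      rules:
      - alert: HighErrorRate
        # Fire when more than 5% of requests return HTTP 500 for 10 minutes,
        # notifying the team before customers start raising tickets.
        expr: sum(rate(http_requests_total{status="500"}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Error rate above 5% for 10 minutes"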

Until now, we have discussed the best practices for implementing the DevOps approach. In the next section, we will describe how the DevOps toolchain, in conjunction with the aforementioned best practices, leads to the creation of a value chain.

DevOps Toolchain

The DevOps toolchain lists the practices that connect the development and operations teams with the aim of creating a value chain. The main stages of the chain, along with their interconnectivity, are presented as follows:

Figure 1.1: The DevOps toolchain

The inputs and outputs of each stage are presented in the following flow chart:

Figure 1.2: Detailed steps of the DevOps toolchain

When a new project is on the table, the chain originates from planning and then progresses to creating the software. The next steps are verification and packaging. The completion of packaging marks the end of the development phase. Thereafter, operations begin with release, followed by configuration. Any feedback and insights obtained at the end of monitoring feed into the development phase again, thereby completing the cycle. It is important not to skip any part of the toolchain and to create an environment where processes feed each other with complete data. For instance, if monitoring fails to provide accurate information about the production environment, development may not have any idea about outages in production. The development team will be under the false impression that their application is running and scaling with customer demand. However, if monitoring feeds planning with accurate information, development teams can plan for and fix their scaling problems. As DevOps tries to remove the barriers between development and operations, meticulous execution of each stage in the DevOps toolchain is crucial. The most natural and expected benefits of DevOps can be summarized as follows:

  • Speed: The DevOps model and its continuous delivery principles decrease the time to deliver new features to the market.

  • Reliability: Continuous integration and testing throughout the product's life cycle help to increase the reliability of products. With the metrics collected by monitoring systems, applications evolve to be more stable and reliable.

  • Scalability: Not only software but also infrastructure is managed as code in the DevOps environment. This approach makes it easier to manage, deploy, and scale with customer demand, as sketched below.
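
As a minimal illustration of infrastructure managed as code, consider the following hypothetical Kubernetes manifest; the names and image are illustrative assumptions. The desired capacity lives in a version-controlled file instead of being configured by hand:

    # deployment.yaml - the desired state is kept in source control
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 5                  # scale with demand by changing this value
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: frontend
            image: nginx:1.15      # placeholder image for illustration

Applying the file with kubectl apply -f deployment.yaml both deploys and scales the application, and rolling back is as simple as reverting the file in version control.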

The DevOps culture, with its best practices and toolchains, provides many benefits to organizations. However, before implementation, it is crucial to understand the company's current culture and create a feasible action plan for introducing DevOps. The following sections explain in more detail how DevOps practices are implemented for applications and introduce cloud-native architecture.

 

Cloud-Native Architecture


Cloud-native application development focuses on building and running applications that utilize the advantages of cloud services. This focus does not mandate any specific cloud provider for deploying the applications; rather, it concentrates on the approach to development and deployment. It consists of creating agile, reusable, and scalable components and deploying them as resilient services in dynamically orchestrated cloud environments. Cloud-native services are essential since they serve millions of customers daily and empower social networks such as Facebook and Twitter, online retailers such as Amazon, real-time car-sharing applications such as Uber, and many more.

Note

For more comprehensive information on cloud-native technologies, please refer to the following link: https://github.com/cncf/toc/blob/master/DEFINITION.md.

Knowing the differences between conventional and cloud-native approaches is essential. Specifically, conventional and cloud-native application development can be distinguished through the following four views:

  • Focus: Conventionally, applications are designed for long lifespans and bundled with years-long maintenance agreements. However, cloud-native applications focus on how quickly applications can be market-ready, with flexible subscription agreements.

  • Team: Conventional software teams work independently of each other and focus on their specified areas, such as development, operations, security, and quality. In contrast, cloud-native applications are collaboratively developed and maintained by DevOps teams that comprise members focusing on different areas.

  • Architecture: Monolithic applications, with tightly coupled dependencies between their components, are the mainstream architectural approach of conventional software. An example of a monolithic design could be an e-commerce website where four different software components, namely the user frontend, checkout, buy, and user promotions, are packaged as a single Java package and deployed to production. Each of these components may have several different functions aimed at addressing certain objectives of the website. All components call each other via function calls, and thus they are strictly dependent on each other. For instance, while creating an invoice, the buy package will directly call, for example, the CreateInvoice function of the checkout package. On the contrary, cloud-native applications are loosely coupled services communicating over defined APIs. When the same e-commerce website is designed with loosely coupled components, all of the components call each other over API calls. With this approach, the buy package will create a POST request to a REST API endpoint, for example, /v1/invoice, to create the invoice, as shown in the sketch after this list.

  • Infrastructure: Conventional applications are installed on and deployed to large servers that have been configured according to the end user environment. On the contrary, cloud-native applications run as containers and are ready to run irrespective of vendor-specific requirements. In addition, capacity planning targets peak demand in traditional software systems, whereas cloud-native applications run on a scalable, on-demand infrastructure.
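
To make the contrast concrete, the following is a minimal sketch of the loosely coupled call described above; only the /v1/invoice endpoint comes from the text, while the hostname and payload fields are illustrative assumptions:

    # Monolith: the buy component calls CreateInvoice() in-process.
    # Microservices: the buy service makes an HTTP request over a defined API:
    curl -X POST https://checkout.example.com/v1/invoice \
      -H "Content-Type: application/json" \
      -d '{"orderId": "1234", "amount": 42.50}'

Because the contract is the HTTP API rather than a shared codebase, the checkout service can be developed, deployed, and scaled independently of the buy service.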

In addition to the comparison with conventional software development, there are more characteristics of the cloud-native architecture. In the following section, all key cloud-native architecture characteristics are explained in depth. Note that most features have emerged with cloud-native applications and have changed the method of software development.

Cloud-Native Application Characteristics

Characteristics of cloud-native applications can be grouped into three categories: design, development, and operations. These groups also indicate how cloud-native architectures are implemented throughout the life cycle of software development. First, design characteristics focus on how cloud-native applications are structured and planned. Then, development characteristics focus on the essential points for creating cloud-native applications. Finally, operations concentrate on the installation, runtime, and infrastructure features of cloud-native architecture. In this section, we will discuss all three characteristics in detail.

Design: Design is categorized into microservices and API communication:

  • Microservices: Applications are designed as loosely coupled services that exist independently of each other. These microservices focus on a small subset of functionalities and discover other services at runtime. For instance, frontend and backend services run independently, and the frontend finds the IP address of the backend from service discovery to send queries. Each service focuses only on its functionalities and does not directly depend on another service.

  • API Communication: Services are designed to use lightweight API protocols to communicate with each other. APIs are versioned, and services interact with each other without any inconsistency. For instance, the frontend service reaches the backend via a REST API, and all API endpoints are versioned; consider a versioned endpoint such as /v1/orders. When the backend is updated and changes its REST API, the endpoints will start with v2 instead of v1. This ensures that the frontend still works with the v1 endpoints until it is updated to work with the v2 endpoints, without any inconsistency; see the sketch below.
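
As a minimal sketch of this versioning scheme, both API versions can be served side by side during a transition; the hostname is an illustrative assumption, and only the /v1/orders endpoint comes from the text:

    # Older frontends keep using the stable v1 endpoint...
    curl https://backend.example.com/v1/orders
    # ...while updated frontends consume the new v2 endpoint,
    # so no client breaks during the rollout:
    curl https://backend.example.com/v2/orders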

Development: Development is categorized into the most suitable programming language and lightweight containers:

  • Most Suitable Programming Language: Cloud-native applications are developed using the programming language and framework that are most suited to their functionality. The aim is to have various programming languages working together while exploiting their best features. For instance, REST APIs could be developed in Go for concurrency, and the streaming service could be implemented in Node.js using WebSockets. Frontend services are not required to know the implementation details, as long as the services implement the expected APIs.

  • Lightweight containers: Cloud-native applications are packaged and delivered as containers. Each container consists of the minimum requirements, such as the operating system and the dependency libraries of the service, in order to be as lightweight as possible. Containers enable scalability and encapsulate microservices for efficient management. For instance, the frontend and its JavaScript libraries make up one Docker container, whereas the backend and its database connectors make up another Docker container. Each service is packaged, self-sufficient, and can be scaled easily without knowing the internals of other services; a minimal packaging sketch follows this list.
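
The following is a minimal sketch of this packaging flow using the Docker CLI; the image and registry names are illustrative assumptions:

    # Package the frontend service and its dependencies into an image:
    docker build -t registry.example.com/frontend:v1.0.0 .
    # Publish the image so that any environment can pull it:
    docker push registry.example.com/frontend:v1.0.0
    # Run the self-sufficient container anywhere Docker is available:
    docker run -d -p 8080:80 registry.example.com/frontend:v1.0.0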

Operations: Operations is categorized into isolation, elastic cloud infrastructure, and automation:

  • Isolation: Cloud-native applications are isolated from their runtime and operating system dependencies, as well as those of other applications. This feature enables service portability without making any further modifications. For instance, the same container image of the frontend service could simultaneously run on the laptop of the developer for testing new features and on AWS servers to serve millions of customers.

  • Elastic Cloud Infrastructure: Cloud-native applications should run on a flexible infrastructure that could expand with usage. These flexible infrastructures are public or on-premise cloud systems that are shared by multiple services, users, and even companies, to achieve cost efficiency.

  • Automation: Cloud-native applications and their life cycles should be automated as much as possible. Every step of development and deployment, such as integration, testing, provisioning of infrastructure, monitoring, log collection, auto-scaling, and alerting, needs automation. Automation is crucial for reliable, scalable, and resilient cloud-native applications. Without automation, there are numerous manual steps to provision the infrastructure, configure the applications, run them, and check their statuses. All of these manual steps are prone to human error, and it is nearly impossible to create reliable, robust, and scalable systems with them.

In this section, we saw the basic characteristics of cloud-native applications. In the next section, we will see how cloud-native architectures and the DevOps culture complement each other to yield successful application development and deployment.

 

DevOps Patterns for Cloud-Native Architecture


The DevOps culture and cloud-native architecture complement each other in bringing about a change in software development and in making companies successful. While DevOps focuses on collaboration and the fast delivery of applications, the cloud offers scalability and automation tools to deliver applications to customers. In this section, we will discuss how the DevOps culture and cloud-native architecture work in sync and can be practically implemented.

DevOps attempts to eliminate time and resource wastage in software development by increasing automation and collaboration. Cloud-native application development focuses on building and running applications that utilize the advantages of cloud services. When they melt in the same pot, cloud-native architecture and cloud computing enable and adopt the DevOps culture in two main directions:

  • Platform: Cloud computing provides all platform requirements for DevOps processes such as testing, development, and production. This enables organizations to smoothly run every step of the DevOps toolchain on cloud platforms. For instance, the verification, release, and production stages of the DevOps toolchain can easily be placed in separate Kubernetes namespaces in GCP with complete isolation, as sketched after this list.

  • Tooling: The DevOps culture focuses on automation, and it needs reliable and scalable tooling for continuous integration, deployment, and delivery. To exploit the scalability and reliability provided by cloud platforms, these tools inevitably have to be cloud-native. For instance, AWS CodeBuild, a continuous build tool by AWS, is widely used as a reliable, managed, and secure method of testing and integrating applications.
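
As a minimal sketch of this stage isolation, each toolchain stage gets its own Kubernetes namespace; the namespace names and manifest folder are illustrative assumptions:

    # Create one namespace per DevOps toolchain stage:
    kubectl create namespace verification
    kubectl create namespace release
    kubectl create namespace production
    # Deploy each stage's workloads into its own namespace:
    kubectl apply -f release/ --namespace release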

Not only DevOps or cloud-native applications on their own, but also their combination, has changed software development. In the following table, the fundamental changes are briefly summarized:

Figure 1.3: Differences between Pre-DevOps and DevOps approaches

Today, and presumably in the future, organizations will not only deliver software but also provide services. Similar to the Software-as-a-Service (SaaS) products of today, this approach will become more mainstream, and microservices-as-a-service products will spread throughout the market. To deliver and manage the cloud-native services of the future, you will need to implement DevOps practices. The most critical cloud-native DevOps practices are continuous integration and continuous delivery/deployment. Each of these is briefly discussed as follows:

  • Continuous Integration (CI): This practice concentrates on integrating the code several times a day or, more commonly, with every commit. With every commit, a hierarchy of tests is run, starting from unit tests and moving up to integration tests. Also, software executables are built to check for inconsistencies between libraries or dependencies. With the validation provided by the test and build results, successes or failures indicate whether the codebase works. CI tests and builds could run on on-premise systems or cloud providers. CI systems are expected to be always up and running and on the lookout for future commits from developers.

  • Continuous Delivery (CD): CD is an extension of CI to automatically deliver or deploy packages that have been validated by CI. In the cloud-native world, CD focuses on creating and publishing software packages and Docker containers automatically with every commit, and on updating applications on the customer side or public clouds with the latest packages. In this book, both continuous delivery and deployment are covered and abbreviated as CD.

In the next section, we will explore the essential characteristics of CI/CD tools in order to help you choose the right subset for your organization.

 

Choosing the best CI/CD tools


The DevOps culture and the practices of CI/CD require modern tools for building, testing, packaging, deploying, and monitoring. There are many open source, licensed, and vendor-specific tools on the market with different prominent features. In this section, we will first categorize CI/CD tools and then present a guideline for choosing the appropriate ones.

DevOps tools can be categorized as follows, starting from source code to the application running in a production environment:

  • Version Control Systems: GitHub, GitLab, and Bitbucket

  • Continuous Integration: Jenkins, Travis CI, and Concourse

  • Test Frameworks: Selenium, JUnit, and pytest

  • Artifact Management: Maven, Docker, and npm

  • Continuous Delivery/Deployment: AWS CodePipeline, Codefresh, and Wercker

  • Infrastructure Provisioning: AWS, GCP, and Microsoft Azure

  • Release Management: Octopus Deploy, Spinnaker, and Helm

  • Log Aggregation: Splunk, ELK stack, and Loggly

  • Metric Collection: Heapster, Prometheus, and InfluxData

  • Team Communication: Slack, Stride, and Microsoft Teams

There are plenty of tools on the market with robust features that qualify them for more than one of the preceding categories. When selecting an appropriate tool, weighing the pros and cons of each is difficult, owing to the uniqueness of organizations and software requirements. Therefore, the following guidelines could help you evaluate the core features of tools within a continuously evolving industry:

Note

No Silver Bullet—Essence and Accident in Software Engineering was written by Turing Award winner Fred Brooks in 1986. In the paper, it is argued that "There is no single development, in either technology or management technique, which by itself promises even one order of magnitude (tenfold) improvement within a decade in productivity, in reliability, in simplicity." This idea is still valid for most software development fields due to complexity.

Enhanced collaboration: To have a successful DevOps culture in place, all of the tools in the DevOps chain should focus on increasing collaboration. Although there are specific tools, such as Slack, that have cooperation as their main focus, it is crucial to select tools that improve collaboration for every step of software development and delivery. For instance, if you need a source code versioning system, the most basic approach is to set up a bare git server with a single line of code: sudo apt-get install git-core.
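
The following is a minimal sketch of that bare-server approach; the server path and hostname are illustrative assumptions:

    # On the server: install git and create a bare repository
    sudo apt-get install git-core
    git init --bare /srv/git/project.git
    # On each developer machine: all interaction happens via the CLI
    git clone user@git.example.com:/srv/git/project.git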

With that setup, everyone is required to use the git command-line tools to interact with the git server, and the team will carry discussions and code reviews to other platforms, such as email. In contrast, tools such as GitHub, GitLab, or Bitbucket integrate code reviews, pull requests, and discussion capabilities, so everyone on the team can quickly check the latest pull requests, code reviews, and issues. This eventually increases collaboration. The differences in the usage experience can be seen in the following screenshots, which compare the bare git server's command-line interface with GitLab's merge request screen. The crucial point is that both the git server setup and GitLab solve the source code versioning problem, but they differ in how much they increase collaboration.

One of the key points when evaluating tools is taking their collaboration capabilities into consideration, even if all of the tools provide the same main functionality. The following screenshot shows how GitLab provides a better collaborative experience in comparison to a bare git server:

Figure 1.4: CLI for git server versus GitLab merge request

API integration: The DevOps toolchain and its operations need a high level of automation and configuration. You should not need to hire people to configure the infrastructure for every run of the integration tests. Instead, all stages of DevOps, from source code to production, are required to expose their APIs. This enables applications to communicate with each other, sending build results, configurations, and artifacts. Rich API functionality enables new tools to be plugged into the toolchain and work without heavy customization. Therefore, exposing APIs is crucial not only for the initial setup of the DevOps toolchain, but also when new tools replace old ones. Accordingly, the second key point when evaluating tools is API exposure and the composition of APIs to create a value chain.
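
For instance, GitLab exposes a pipeline trigger endpoint in its REST API, so any tool in the chain can start a build with a plain HTTP call. In this minimal sketch, the project ID and trigger token are placeholders:

    # Trigger a CI/CD pipeline on the master branch via the GitLab API:
    curl -X POST \
      -F token=<TRIGGER_TOKEN> \
      -F ref=master \
      https://gitlab.com/api/v4/projects/<PROJECT_ID>/trigger/pipeline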

Learning curve: Cloud-native DevOps tools try to keep up with the latest cloud industry standards and vendor-specific requirements. In addition, most tools focus on user experience while remaining open to extension with custom plugins. These different features can make DevOps tools complicated for beginners. Although there might be experienced users on the team, it is vital to select tools with exponential learning curves, starting from zero experience. In other words, tools should allow users to gain competency in a short amount of time, as illustrated in the following diagram. There are three key points to check for a tool that is suitable for everyone on the team:

Documentation: Official documentation, example applications, and references are essential to learn and gain competency.

Community and Support: Online and offline communities and support are critical for solving problems, considering the broad scope of DevOps tools and cloud integrations.

Multiple Access: Having multiple methods of access, such as an API, web interface, and CLI, is essential. This enables beginners to discover tools using the web interface, while experienced users can automate and configure extensively using the API and command line:

Figure 1.5: Exponential learning curve to a competence limit

DevOps practices for the cloud-native world can be viewed as a voyage in which both naive and experienced sailors are in the same boat. All parties need to gain some competency, and the selected tools should allow for this with their learning curves. This is the third crucial point when choosing a tool for an organization.

The three main points of enhanced collaboration, API exposure, and the learning curve are the must-have features of DevOps tools and also important measures for comparing tools against each other. In the next section, the most popular cloud-native DevOps tools are evaluated using the guideline points mentioned.

Exercise 1: Building, Deploying, and Updating Your Blog in the Cloud

Blog websites are popular as company websites, technology news outlets, and places to share personal journeys. In this exercise, we aim to create a blog where the contents are written and kept in the source code repository, and the site is generated and automatically published by CI/CD pipelines in the cloud.

An up and running live blog website with its contents should appear as follows:

Figure 1.6: A sample blog with a single post

When a new blog post is added to the source code repository, the site should automatically be updated with the new material:

Figure 1.7: An updated blog with a new post added

Before we begin this exercise, ensure that the following prerequisites are satisfied:

Note

The code files for this exercise can be found here: https://bit.ly/2PBKisL

To successfully complete this exercise, we need to ensure that the following steps are executed:

  1. Fork the project on GitLab to your namespace by clicking the forking icon, as shown in the following screenshot:

    Figure 1.8: Forking the project to create a copy

  2. Review the hierarchy of the files and their usages by running the tree command in the shell:

    Figure 1.9: Tree view of the folder

    As we can see, the following files appear in the tree: .gitignore lists the files that are ignored in the repository, while .gitlab-ci.yaml defines the CI/CD pipeline on GitLab. The config.toml file defines the configuration for Hugo. The content folder holds the source content of the blog and contains a post folder and an _index.md file. _index.md is the Markdown source for the index page, and the post folder contains a single file, 2018-10-01-kubernetes-deployment.md, which is the only blog post live now.
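
    Since Figure 1.9 is a screenshot, here is a text approximation of the expected layout, reconstructed from the file descriptions above (run tree -a so that the dotfiles are included):

        .
        ├── .gitignore
        ├── .gitlab-ci.yaml
        ├── config.toml
        └── content
            ├── _index.md
            └── post
                └── 2018-10-01-kubernetes-deployment.md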

  3. Open the CI/CD pipeline defined in the .gitlab-ci.yaml file and check the following code:

    image: registry.gitlab.com/pages/hugo:latest
    
    stages:
      - validate
      - pages
      
    validate:
      stage: validate
      script:
      - hugo
    
    pages:
      stage: pages
      script:
      - mkdir -p themes/beautifulhugo && git clone https://github.com/halogenica/beautifulhugo.git themes/beautifulhugo
      - hugo --theme beautifulhugo
      only: 
      - master
      artifacts:
        paths:
        - public

    The image: block defines the Docker container image in which the pipeline steps will run, and the stages block defines the sequential order of the jobs. In this pipeline, validate runs first and, if it is successful, pages runs next.

    The validate block defines the required testing before publishing the changes. In this block, there is only one command: hugo. This command verifies whether the contents are correct for creating a website.

    The pages block generates the website with its template and finally publishes it. In the script section, first the template is installed, and then the site is generated with the installed theme. Note that pages has an only block containing master. This means that the website will be updated only for the master branch. In other words, the pipeline can run on other branches for validation, but the actual site will only be deployed from the master branch.

  4. Check the CI/CD pipelines from the left-hand side of the menu bar by clicking Pipelines under the CI/CD tab, as shown in the following screenshot:

    Figure 1.10: The CI/CD pipeline view

    As expected, the page shows that there are no pipelines as we have not yet created them.

  5. Click Run Pipeline on the top right-hand corner of the interface. It will redirect you to the following page; then, click Create pipeline:

    Figure 1.11: Creating a pipeline on GitLab

    You will be able to view the running pipeline instance. With a successful run, it is expected that you will see three successful jobs for Validate, Pages, and Deploy, as shown in the following screenshot:

    Figure 1.12: Successful Validate, Pages, and Deploy jobs

  6. Click on the pages tab, as displayed in the preceding screenshot. You will obtain the following output log:

    Figure 1.13: A log of the jobs run in containers

    In the log screen, all of the lines until Cloning repository... show the preparation steps for the build environment. After that, the source code repository is cloned, and the beautifulhugo template is retrieved. This part is essential, since it enables us to combine the blog and its style at build time. This approach makes it easy to switch to another style template in the future without any source code changes. Then, the HTML files are generated in the Building sites part, and finally, the artifacts are uploaded to be served by GitLab.

  7. Type the following URL in your browser window: https://<USERNAME>.gitlab.io/blog-pipeline-example/. The published website is shown as follows:

    Figure 1.14: Screen shot of the live blog

    Note

    It could take up to 10 minutes for DNS resolution of the <USERNAME>.gitlab.io address. If you see a "404 - The page you're looking for could not be found" error, then please ensure that your address is correct and wait patiently until your blog works.

  8. Create another file with the name 2018-10-02-kubernetes-scale.md under the content/post folder and type in the following code:

    ---
    title: Scaling My Kubernetes Deployment
    date: 2018-10-02
    tags: ["kubernetes", "code"]
    ---
    //[...]
        NAME                                   READY     STATUS    RESTARTS   AGE       IP           NODE
        kubernetes-bootcamp-5c69669756-9jhz9   1/1       Running   0          3s        172.18.0.7   minikube
        kubernetes-bootcamp-5c69669756-lrjwz   1/1       Running   0          3s        172.18.0.5   minikube
        kubernetes-bootcamp-5c69669756-slht6   1/1       Running   0          3s        172.18.0.6   minikube
        kubernetes-bootcamp-5c69669756-t4pcs   1/1       Running   0          28s       172.18.0.4   minikube
    ```

    We are doing this because we expect the blog to be updated with the new content, thanks to the CI/CD pipeline that we activated with the .gitlab-ci.yaml file.

    Note

    The file and the complete code is available under the new-post branch of this project: https://bit.ly/2rz8O3Y.

  9. Click on Pipelines under the CI/CD tab to check whether there are further instances of the pipeline. The CI/CD pipeline automatically runs and creates a new deployment as soon as a new post is added:

    Figure 1.15: The pipeline runs whenever there is a new commit

  10. Type https://<USERNAME>.gitlab.io/blog-pipeline-example/ and check whether the website is updated when the pipeline is finished:

    Figure 1.16: The website is updated with the new post automatically

    As you can see, compared to the previous output (as listed in step 7), the website has now been updated.

Thus, in this exercise, we created a blog where the contents were written in a repository, and we observed that the site was automatically generated and published by CI/CD pipelines. We will now summarize this chapter.

 

Summary


In this chapter, we first described the conventional method of software development and established its limitations. Specifically, we described how conventional methods failed to encourage collaboration between development and operations, thus ultimately resulting in the loss of engineer hours and money. Then, we discussed the motivation for the origin of the DevOps culture shift. We expanded the discussion by listing DevOps best practices and introduced the DevOps toolchain.

We then progressed to introduce cloud-native architecture and described how it complements DevOps in bringing about a paradigm shift in software development. Also presented in this chapter was a set of guidelines to help you choose the best CI/CD tools for implementing two critical cloud-native DevOps patterns, namely continuous integration and continuous delivery/deployment, for enhanced collaboration. Finally, we ended this chapter by creating and running a pipeline for a blog application on GitLab.

In the next chapter, we will describe the fundamentals of continuous integration for cloud-native architecture and introduce container technology. Additionally, we will identify and run several levels of testing for microservices.

About the Author

  • Onur Yılmaz

    Onur Yılmaz is a senior software engineer in a multinational enterprise software company. He is a certified Kubernetes administrator (CKA) and works on Kubernetes and cloud management systems. He is a keen supporter of cutting-edge technologies including Docker, Kubernetes, and cloud-native applications. He has one master's and two bachelor's degrees in the engineering field.

