Modern DevOps Practices

You're reading from Modern DevOps Practices

Product type: Book
Published: Sep 2021
Publisher: Packt
ISBN-13: 9781800562387
Pages: 530
Edition: 1st
Author: Gaurav Agarwal

Table of Contents (19 chapters)

Preface
1. Section 1: Container Fundamentals and Best Practices
2. Chapter 1: The Move to Containers
3. Chapter 2: Containerization with Docker
4. Chapter 3: Creating and Managing Container Images
5. Chapter 4: Container Orchestration with Kubernetes – Part I
6. Chapter 5: Container Orchestration with Kubernetes – Part II
7. Section 2: Delivering Containers
8. Chapter 6: Infrastructure as Code (IaC) with Terraform
9. Chapter 7: Configuration Management with Ansible
10. Chapter 8: IaC and Config Management in Action
11. Chapter 9: Containers as a Service (CaaS) and Serverless Computing for Containers
12. Chapter 10: Continuous Integration
13. Chapter 11: Continuous Deployment/Delivery with Spinnaker
14. Chapter 12: Securing the Deployment Pipeline
15. Section 3: Modern DevOps with GitOps
16. Chapter 13: Understanding DevOps with GitOps
17. Chapter 14: CI/CD Pipelines with GitOps
18. Other Books You May Enjoy

Chapter 2: Containerization with Docker

In the last chapter, we briefly covered containers, their history, and how the technology has redefined today's software ecosystem. We also saw why it is vital for modern DevOps engineers to be familiar with containers and how containers follow configuration management principles from the ground up.

In this chapter, we'll get hands-on and explore Docker – the de facto container runtime. By the end of this chapter, you should be able to install and configure Docker, run your first container, and then monitor it. This chapter will also form the basis for the following chapters, as we will use the same setup for the demos later.

In this chapter, we're going to cover the following main topics:

  • Installing tools
  • Installing Docker
  • Introducing Docker storage drivers and volumes
  • Running your first container
  • Docker logging and logging drivers
  • Docker monitoring with Prometheus
  • Declarative container management with Docker Compose

Technical requirements

You will need a Linux machine running Ubuntu 16.04 Xenial LTS or later with sudo access for this chapter.

You will also need to clone the following GitHub repository for some of the exercises: https://github.com/PacktPublishing/Modern-DevOps-Practices

Installing tools

Before we dive into installing Docker, we need a few supporting tools to make progress. So, let's first install Git and vim.

Git is the command-line tool you will use to clone code from Git repositories. We will use several repositories for our exercises in future chapters.

Vim is a popular text editor for Linux and Unix operating systems, and we will use it extensively in this and the coming chapters. There are alternatives to vim, such as the GNU nano editor, VS Code, and Sublime Text. Feel free to use whatever you are comfortable with.

Installing Git

Open your shell terminal and run the following command:

$ sudo apt update -y && sudo apt install -y git 

Tip

If you get an output such as E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?, that's because apt update is already running, so you should wait for 5 minutes and then retry.

To confirm that Git is installed successfully, check the version it reports.
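To verify the installation, you can ask Git for its version; any recent release is fine for the exercises in this book:

```shell
# Print the installed Git version; a successful install reports
# something like "git version 2.25.1".
git --version
```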

Installing Docker

First, install the supporting packages that Docker needs to run:

$ sudo apt install apt-transport-https ca-certificates curl \
gnupg-agent software-properties-common

Download the Docker gpg key and add it to the apt package manager:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

You then need to add the Docker repository to your apt config so that you can download packages from there:

$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Now, finally, install the Docker engine by using the following commands:

$ sudo apt update -y
$ sudo apt install -y docker-ce docker-ce-cli containerd.io

To verify whether Docker is installed successfully, run sudo docker version and check that details for both the client and the server (engine) are listed.

Introducing Docker storage drivers and volumes

Docker containers are ephemeral workloads. That means whatever data you store on your container filesystem gets wiped out once the container is gone. The data lives on a disk during the container life cycle, but it does not persist beyond it. Pragmatically speaking, most applications in the real world are stateful. They need to store data beyond the container life cycle, and they want data to persist.

So, how do we deal with that? Docker provides several ways to store data. By default, all data is stored on the writable container layer, which is ephemeral. The writable container layer interacts with the host filesystem via a storage driver. Because of this abstraction, writing files to the container layer is slower than writing directly to the host filesystem.

To solve that problem and also provide persistent storage, Docker provides volumes, bind mounts, and tmpfs. With them, you can interact directly with the host filesystem...
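As a quick sketch of the difference, the commands below first create and mount a named volume, then bind-mount a host directory instead. The names web-data, web1, web2, and the path /home/user/html are invented examples, and a working Docker installation is assumed:

```shell
# Create a named volume managed by Docker (stored under
# /var/lib/docker/volumes on the host by default).
sudo docker volume create web-data

# Mount the named volume into a container; data written to
# /usr/share/nginx/html survives container deletion.
sudo docker run -d --name web1 \
  -v web-data:/usr/share/nginx/html nginx

# Bind-mount an existing host directory instead; the container
# sees the host's files directly.
sudo docker run -d --name web2 \
  -v /home/user/html:/usr/share/nginx/html nginx
```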

Running your first container

You create Docker containers out of Docker container images. While we will discuss container images and their architecture in the following chapters, an excellent way to visualize one is as a copy of all the files, application libraries, and dependencies that comprise your application environment, similar to a virtual machine image.

To run a Docker container, we will use the docker run command, which has the following structure:

$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

Let's look at each of them using working examples.

In its simplest form, you can use docker run by simply typing the following:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:e7c70bb24b462baa86c102610182e3efcb12a04854e8c582838d92970a09f323
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
...

If you remember...
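Building on the synopsis above, here is a hedged sketch of some commonly used docker run options; the container name web and the port numbers are arbitrary examples:

```shell
# Run nginx detached (-d), give the container a name, map host
# port 8080 to container port 80, and pin an image TAG.
sudo docker run -d --name web -p 8080:80 nginx:latest

# Supply a COMMAND and ARGs: run "ls -l /" inside an ubuntu
# container instead of its default command.
sudo docker run ubuntu ls -l /
```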

Docker logging and logging drivers

Docker not only changed how applications are deployed but also the workflow for log management. Instead of writing logs to files, containers write logs to the console (stdout/stderr). Docker then uses a logging driver to export container logs to chosen destinations.

Container log management

Log management is an essential function within Docker, as it is for any application. But due to the transient nature of Docker workloads, it becomes even more critical: when a container is deleted or crashes, its filesystem, and potentially its logs, are lost with it. We should therefore use logging drivers to export logs to a durable destination, where they can be stored and persisted. If you have a log analytics solution, that is the best place for your logs to go. Docker supports multiple log targets via logging drivers. Let's have a look.
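To make Docker's default json-file driver concrete, here is a small Python sketch that parses a log line in the format that driver writes. The sample line is invented for illustration, but the log, stream, and time fields are the ones the driver actually emits to /var/lib/docker/containers/&lt;id&gt;/&lt;id&gt;-json.log:

```python
import json

# A line in the shape the json-file logging driver writes
# (sample data, one JSON object per log line).
sample_line = ('{"log":"Hello from Docker!\\n",'
               '"stream":"stdout",'
               '"time":"2021-09-01T10:00:00.000000000Z"}')

entry = json.loads(sample_line)

# Each entry carries the raw log text, the stream it came from
# (stdout/stderr), and an RFC 3339 timestamp.
print(entry["stream"], entry["log"].strip())
```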

Logging drivers

As of the time of writing this book, the following logging drivers are available:

  • none: There are no logs...

Docker monitoring with Prometheus

Monitoring Docker nodes and containers is an essential part of managing Docker. There are various tools available for monitoring Docker. While you can use traditional tools such as Nagios, Prometheus is gaining ground in cloud-native monitoring because of its simplicity and pluggable architecture.

Prometheus is a free, open source monitoring tool that provides a dimensional data model, straightforward and efficient querying through the Prometheus query language (PromQL), an efficient time-series database, and modern alerting capabilities.

It has several exporters available for exporting data from various sources and supports both virtual machines and containers. Before we delve into the details, let's look at some of the challenges with container monitoring.
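As a sketch of how the pieces fit together: the Docker daemon can expose its own metrics if you add an experimental metrics-addr setting to /etc/docker/daemon.json, and a minimal Prometheus scrape configuration can then pull them. The port 9323 and the job name below follow Docker's documented example, but treat this as an assumption-laden starting point rather than a complete setup:

```yaml
# /etc/docker/daemon.json – expose daemon metrics (experimental):
#   {
#     "metrics-addr": "0.0.0.0:9323",
#     "experimental": true
#   }

# prometheus.yml – scrape the Docker daemon's metrics endpoint
scrape_configs:
  - job_name: docker
    static_configs:
      - targets: ['localhost:9323']
```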

Challenges with container monitoring

From a conceptual point of view, there is no difference between container monitoring and the traditional method. You would still need metrics...

Declarative container management with Docker Compose

Docker Compose helps you manage multiple containers in a declarative way. You create a YAML file and specify what you want to build, what containers you want to run, and how the containers interact with each other. You can define mounts, networks, port mapping, and many different configurations in the YAML file.

After that, you can simply run docker-compose up to get your entire containerized application running.

Declarative management is fast gaining ground because of the power and simplicity it offers. Now, sysadmins don't need to remember what commands they had run or write lengthy scripts or playbooks to manage containers. Instead, they can simply declare what they want in a YAML file, and docker-compose or other tools can help them achieve that state.
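As an illustrative sketch (the service and volume names are invented), a minimal docker-compose.yml declaring a web service with a port mapping and a named volume might look like this:

```yaml
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"               # host:container port mapping
    volumes:
      - web-data:/usr/share/nginx/html
volumes:
  web-data:                     # named volume managed by Docker
```

Running docker-compose up -d in the directory containing this file starts the whole application in that declared state.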

Installing Docker Compose

Installing Docker Compose is very simple. You download the docker-compose binary from its official repository, make it executable, and place it somewhere on your PATH.
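The steps above correspond to something like the following; the version number 1.29.2 is an example release from around this book's publication, so check the project's releases page for the latest:

```shell
# Download the docker-compose binary for this OS/architecture.
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose

# Make it executable and confirm it runs.
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```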

Summary

This chapter started with installing Docker; we then ran our first Docker container, looked at the various modes of running containers, and explored Docker volumes and storage drivers. We also learned how to choose the right storage driver and volume options, along with some best practices. All these skills will help you set up a production-ready Docker server with ease. We also covered logging drivers and how you can quickly ship Docker logs to multiple destinations, such as journald, Splunk, and JSON files, to help you monitor your containers. Finally, we looked at managing Docker containers declaratively using docker-compose and ran a complete composite container application.

In the following chapter, we will look at Docker images, creating and managing them, and some best practices.

Questions

  1. You should use Overlay2 for CentOS and RHEL 7 and below – True or false?
  2. Which of the following statements are true? (Multiple answers are possible)

    a. Volumes increase IOps.

    b. Volumes decrease IOps.

    c. tmpfs mounts use system memory.

    d. You can use bind mounts to mount host files to containers.

    e. You can use volume mounts for multi-instance active-active configuration.

  3. Changing the storage driver removes existing containers from the host – True or false?
  4. devicemapper is a better option than overlay2 for write-intensive containers – True or false?
  5. Which one of the following logging drivers are supported by Docker? (Multiple answers are possible)

    a. journald

    b. Splunk

    c. JSON files

    d. Syslog

    e. Logstash

  6. Docker Compose is an imperative approach for managing containers – True or false?
  7. Which of the following docker run configurations are correct? (Multiple answers are possible)

    a. docker run nginx

    b. docker run --name nginx nginx...

Answers

  1. False – You should use devicemapper for CentOS and RHEL 7 and below as they do not support overlay2
  2. b, c, d, e
  3. True
  4. True
  5. a, b, c, d
  6. False – Docker Compose is a declarative approach for container management.
  7. a, b, c