The Docker Workshop

By Vincent Sesto, Onur Yılmaz, Sathsara Sarathchandra

About this book

Docker containers are, without doubt, the future of highly scalable software systems, bringing cost- and runtime-efficient supporting infrastructure. But learning Docker can seem complex, as it comes with many technicalities. This is where The Docker Workshop will help you.

Through this workshop, you’ll quickly learn how to work with containers and Docker with the help of practical activities. 

The workshop starts with Docker containers, enabling you to understand how they work. You'll run third-party Docker images and also create your own images using Dockerfiles and multi-stage Dockerfiles. Next, you'll create environments for Docker images and expedite your deployment and testing process with continuous integration. Moving ahead, you'll tap into interesting topics and learn how to implement production-ready environments using Docker Swarm. You'll also apply best practices to secure Docker images and to ensure that production environments run at maximum capacity. Towards the end, you'll gather the skills needed to successfully move Docker from development to testing and then into production. While doing so, you'll learn how to troubleshoot issues, clear resource bottlenecks, and optimize the performance of services.

By the end of this workshop, you’ll be able to utilize Docker containers in real-world use cases.

Publication date: October 2020
Publisher: Packt
Pages: 792
ISBN: 9781838983444

 

1. Running My First Docker Container

Overview

In this chapter, you will learn the basics of Docker and containerization, and explore the benefits of migrating traditional multi-tier applications to a fast and reliable containerized infrastructure. By the end of this chapter, you will have a firm understanding of the benefits of running containerized applications as well as the basics of running containers using the docker run command. This chapter will not only introduce you to the fundamentals of Docker but also provide a solid understanding of the Docker concepts that will be built upon throughout this workshop.

 

Introduction

In recent years, technological innovations across all industries have rapidly increased the rate at which software products are delivered. Due to trends in technology, such as agile development (a methodology for quickly writing software) and continuous integration pipelines, which enable the rapid delivery of software, operations staff have struggled to build infrastructure quickly enough to meet the increasing demand. In order to keep up, many organizations have opted to migrate to cloud infrastructure.

Cloud infrastructure provides hosted virtualization, network, and storage solutions that can be leveraged on a pay-as-you-go model. These providers allow any organization or individual to sign up and receive access to infrastructure that would traditionally require large amounts of space and expensive hardware to implement on-site or in a data center. Cloud providers such as Amazon Web Services and Google Cloud Platform provide easy-to-use APIs that allow for the creation of large fleets of virtual machines (or VMs) almost instantly.

Deploying infrastructure to the cloud provided a solution to many of the dilemmas that organizations faced with traditional infrastructure, but it also created additional problems around managing the cost of running these services at scale. How do companies manage the ongoing monthly and yearly expenditure of running expensive servers 24 hours a day, 7 days a week?

VMs revolutionized infrastructure by leveraging hypervisors to create smaller servers on top of larger hardware. The downside of virtualization is how resource-intensive it is to run a VM. VMs themselves look, act, and feel like real bare-metal hardware, since hypervisors such as Xen, KVM, and VMware allocate resources to boot and manage an entire operating system image. The dedicated resources associated with VMs make them large and somewhat difficult to manage. Moving VMs between an on-premises hypervisor and the cloud could potentially mean moving hundreds of gigabytes of data per VM.

To provide a greater degree of automation, make better use of compute density, and optimize their cloud presence, companies are moving toward containerization and microservices architectures as a solution. Containers provide process-level isolation, running software services within isolated sections of the kernel of the host operating system. Instead of running an entire operating system kernel to provide isolation, containers share the kernel of the host operating system to run multiple software applications. This is accomplished in the Linux kernel through features known as control groups (cgroups) and namespace isolation. On a single VM or bare-metal machine, a user could potentially run hundreds of containers, each running an individual software application instance, on a single host operating system.

This is in stark contrast to a traditional VM architecture. Generally, when we deploy a VM, we dedicate that machine to running a single server or a small subset of services. This wastes valuable CPU cycles that could be allocated to other tasks and serve other requests. We could, in theory, resolve this dilemma by installing multiple services on a single VM. However, this can create a tremendous amount of confusion regarding which machine is running which service, and it concentrates the ownership of hosting multiple software installations and their backend dependencies in a single operating system.

A containerized microservices approach solves this by allowing the container runtime to schedule and run containers on the host operating system. The container runtime does not care what application is running inside the container, only that a container exists and can be downloaded and executed on the host operating system. It doesn't matter whether the application running inside the container is a Go web API, a simple Python script, or a legacy COBOL application. Since the container is in a standard format, the container runtime will download the container image and execute the software within it. Throughout this book, we will study the Docker container runtime and learn the basics of running containers both locally and at scale.

Docker is a container runtime that was developed in 2013 and designed to take advantage of the process isolation features of the Linux kernel. What separated Docker from other container runtime implementations is that Docker developed a system to not only run containers but also to build and push containers to container repositories. This innovation led to the concept of container immutability—only changing containers by building and pushing new versions of the containers when software changes occur.

As seen in the following diagram (Figure 1.1), we have a series of containerized applications deployed across two Docker servers. Between the two server instances, seven containerized applications have been deployed. Each container hosts its own set of binaries, libraries, and self-contained dependencies. When Docker runs a container, the container itself hosts everything that it requires to function properly. It is even possible to deploy different versions of the same application framework, since each container exists in its own isolated namespace:

Figure 1.1: Seven containers running across two different container servers


In this chapter, you will get to know various advantages provided by Docker with the help of containerization. You will also learn the basics of running containers using the docker run command.

 

Advantages of Using Docker

In a traditional VM approach, code changes would require operations staff or a configuration management tool to access that machine and install a new version of the software. The principle of immutable containers means that when a code change occurs, a new version of the container image is built and created as a new artifact. If this change needed to be rolled back, it would be as easy as downloading and restarting the older version of the container image.

Leveraging a containerized approach also enables software development teams to test applications predictably and reliably in various scenarios and multiple environments locally. Since the Docker runtime provides a standard execution environment, software developers can quickly recreate issues and debug problems easily. Because of container immutability, developers can be assured that the same code runs across all environments, because the same Docker images can be deployed in any environment. This means that configuration issues such as invalid database connection strings, API credentials, or other environment-specific variances become the primary sources of failures. This eases the operational burden and provides an unparalleled degree of efficiency and reusability.

Another advantage of using Docker is that containerized applications are traditionally quite small and flexible compared to their traditional infrastructure counterparts. Instead of providing a full operating system kernel and execution environment, containers generally only provide the necessary libraries and packages that are required to run an application.

When building Docker containers, developers are no longer at the mercy of packages and tools installed on the host operating system, which may differ between environments. They can pack inside a container image only the exact versions of libraries and utilities that the application requires to run. When deployed onto a production machine, developers and operations teams are no longer concerned about what hardware or operating system version the container is running on, as long as their container is running.

For example, as of January 1, 2020, Python 2 is no longer supported. As a result, many software repositories are phasing out Python 2 packages and runtimes. Leveraging a containerized approach, you can continue to run legacy Python 2 applications in a controlled, secure, and reliable fashion until the legacy applications can be rewritten. This removes the worry that installing operating-system-level patches may remove Python 2 support and break legacy application stacks. These Python 2 containers can even run in parallel on Docker servers with Python 3 applications to provide precise testing as these applications are migrated to the new modernized stacks.
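
To make this idea concrete, the following is a minimal sketch, assuming the public python images on Docker Hub (the docker run command itself is introduced later in this chapter):

$ docker run --rm python:2.7 python --version
$ docker run --rm python:3.8 python --version

Each command runs its own interpreter in an isolated container, so both versions can coexist on the same host without their installations interfering with each other.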

Now that we have taken a look at what Docker is and how it works, we can start to work with Docker to get an idea of how process isolation differs from virtualization and other similar technologies.

Note

Before you can begin to run containers, you must first have a working installation of Docker on your local development workstation. For details, please review the Preface section of this book.

 

Docker Engine

Docker Engine is the interface that provides access to the process isolation features of the Linux kernel. Since only Linux exposes the features that allow containers to run, Windows and macOS hosts leverage a Linux VM in the background to make container execution possible. For Windows and macOS users, Docker provides the "Docker Desktop" suite of packages that deploys and runs this VM in the background for you. This allows Docker commands to be executed natively from the terminal or PowerShell console of the macOS or Windows host. Linux hosts can execute the Docker Engine natively, because modern versions of the Linux kernel support cgroups and namespace isolation.

Note

Since Windows, macOS, and Linux have fundamentally different operating system architectures in terms of networking and process management, a few of the examples in this book (specifically in regard to networking) behave differently depending on the operating system running on your development workstation. These differences are called out as they occur.

Docker Engine supports not only the execution of container images but also provides built-in mechanisms for building and testing container images from source code files known as Dockerfiles. When container images are built, they can be pushed to container image registries. An image registry is a repository of container images from which other Docker hosts can download and execute container images. Docker Engine supports running container images, building container images, and even hosting container image registries when configured to do so.

When a container is started, Docker will, by default, download the container image, store it in its local container image cache, and finally execute the container's entrypoint directive. The entrypoint directive is the command that will start the primary process of the application. When this process stops or goes down, the container will also cease to run.

Depending on the application running inside the container, the entrypoint directive might be a long-running server daemon that is available all the time, or it could be a short-lived script that will naturally stop when the execution is completed. Alternatively, many containers execute entrypoint scripts that complete a series of setup steps before starting the primary process, which could be long- or short-lived.

Before running any container, it is a best practice to first understand the type of application that will be running inside the container and whether it will be a short-lived execution or a long-running server daemon.
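
As a quick check before starting anything, you can read these directives straight from the image metadata. The following is a minimal sketch using the docker inspect command (covered in more detail later in this chapter) against the public nginx image:

$ docker pull nginx:latest
$ docker inspect --format 'Entrypoint: {{.Config.Entrypoint}} Cmd: {{.Config.Cmd}}' nginx:latest

For a server image such as nginx, this typically reveals a long-running daemon command; for a utility image, it is often a short-lived script.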

 

Running Docker Containers

Best practices for building containers and microservices architecture dictate that a container should only run a single process. Keeping this principle in mind, we can design containers that are easy to build, troubleshoot, scale, and deploy.

The life cycle of a container is defined by the state of the container and the running processes within it. A container can be in a running or stopped state based on actions taken by the operator, the container orchestrator, or the state of the application running inside the container itself. For example, an operator can manually stop or start a container using the docker stop or docker start command-line interface (CLI) commands. Docker itself may automatically stop or restart a container if it detects that the container has entered an unhealthy state. Furthermore, if the primary application running inside the container fails or stops, the running container instance should also stop. Many container runtime platforms, such as Docker, even provide automated mechanisms to restart containers that enter a stopped state. Many container platforms use this principle to build job and task execution functionality.
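
One such mechanism in Docker is the restart policy, set with the --restart flag of docker run. The following is a minimal sketch, assuming the public nginx image (the -d flag, which runs the container in the background, is covered in the exercises later in this chapter):

$ docker run -d --restart unless-stopped nginx:latest

With the unless-stopped policy, Docker automatically restarts the container whenever its primary process dies, but leaves it stopped if an operator explicitly stops it.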

Since containers terminate when the primary process within the container finishes, containers are excellent platforms for executing scripts and other types of jobs that have a finite lifespan. The following figure (Figure 1.2) illustrates the life cycle of a typical container:


Figure 1.2: The life cycle of a typical container
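
A one-shot job container illustrates this life cycle well. The following is a minimal sketch, assuming the public alpine image; the --rm flag removes the container automatically after it exits:

$ docker run --rm alpine:latest sh -c 'date; echo "job complete"'

The container starts, the primary process (the short shell script) runs to completion, and the container stops and is removed, all in a single step.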

Once you have Docker downloaded and installed on your targeted operating system, you can start running containers. The Docker CLI has an aptly named docker run command specifically for starting and running Docker containers. As we learned previously, containers provide isolation from the rest of the applications and processes running on the system. Due to this fact, the life cycle of a Docker container is determined by the primary process running inside that container. When a container stops, Docker may attempt to restart it, depending on the configured restart policy, to ensure continuity of the application.

To see the running containers on our host system, we will also be leveraging the docker ps command. The docker ps command is similar to the Unix-style ps command that is used to show the running processes on a Linux or Unix-based operating system.

Remember that when Docker first runs a container, if it does not have the container image stored in its local cache, it will download the container image from a container image registry. To view the container images that are stored locally, use the docker images command.

The following exercise will demonstrate how to use the docker run, docker ps, and docker images commands to start and view the status of a simple hello-world container.

Exercise 1.01: Running the hello-world Container

A simple "Hello World" application is generally the first line of code a developer writes when learning software development or starting a new programming language, and containerization is no different. Docker has published a hello-world container that is extremely small in size and simple to execute. This container demonstrates the nature of containers that run a single process with a finite lifespan.

In this exercise, you will use the docker run command to start the hello-world container and the docker ps command to view the status of the container after it has finished execution. This will provide a basic overview of running containers in your local development environment:

  1. Enter the docker run command in a Bash terminal or PowerShell window. This instructs Docker to run a container called hello-world:
    $ docker run hello-world

    Your shell should return output similar to the following:

    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    0e03bdcc26d7: Pull complete 
    Digest: sha256:
    8e3114318a995a1ee497790535e7b88365222a21771ae7e53687ad76563e8e76
    Status: Downloaded newer image for hello-world:latest
    Hello from Docker!
    This message shows that your installation appears to be working 
    correctly.
    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from the 
    Docker Hub.
        (amd64)
     3. The Docker daemon created a new container from that image 
    which runs the executable that produces the output you are 
    currently reading.
     4. The Docker daemon streamed that output to the Docker 
    client, which sent it to your terminal.
    To try something more ambitious, you can run an Ubuntu 
    container with:
     $ docker run -it ubuntu bash
    Share images, automate workflows, and more with a free Docker ID:
     https://hub.docker.com/
    For more examples and ideas, visit:
     https://docs.docker.com/get-started/

    What just happened? You told Docker to run the container hello-world. So, first, Docker looks in its local container image cache for an image by that name. If it doesn't find one, it looks to a container registry on the internet in an attempt to satisfy the command. By simply specifying the name of the container image, Docker will, by default, query Docker Hub for a published image by that name.

    As you can see, it was able to find a container image called library/hello-world and began the process of pulling in the container image layer by layer. You will get a closer look into container images and layers in Chapter 2, Getting Started with Dockerfiles. Once the image has fully downloaded, Docker runs the image, which displays the Hello from Docker output. Since the primary process of this image is simply to display that output, the container then stops itself and ceases to run after the output displays.

  2. Use the docker ps command to see what containers are running on your system. In your Bash or PowerShell terminal, type the following command:
    $ docker ps

    This will return output similar to the following:

    CONTAINER ID      IMAGE     COMMAND      CREATED
      STATUS              PORTS                   NAMES

    The output of the docker ps command is empty because it only shows currently running containers by default. This is similar to the Linux/Unix ps command, which only shows the running processes.

  3. Use the docker ps -a command to display all the containers, even the stopped ones:
    $ docker ps -a

    In the output returned, you should see the hello-world container instance:

    CONTAINER ID     IMAGE           COMMAND     CREATED
      STATUS                          PORTS         NAMES
    24c4ce56c904     hello-world     "/hello"    About a minute ago
      Exited (0) About a minute ago                 inspiring_moser

    As you can see, Docker gave the container a unique container ID. It also displays the IMAGE that was run, the COMMAND within that image that was executed, when the container was CREATED, and the STATUS of the process running the container, as well as a unique human-readable name under NAMES. This particular container was created approximately one minute ago and executed the program /hello. You can tell that the program ran and exited successfully since it resulted in an Exited (0) code.

  4. You can query your system to see what container images Docker cached locally. Execute the docker images command to view the local cache:
    $ docker images

    The returned output should display the locally cached container images:

    REPOSITORY     TAG        IMAGE ID        CREATED         SIZE
    hello-world    latest     bf756fb1ae65    3 months ago    13.3kB

    The only image cached so far is the hello-world container image. This image has the latest tag, was created 3 months ago, and is 13.3 kB in size. From the preceding output, you know that this Docker image is incredibly slim and that its developers haven't published a code change for this image in 3 months. This output can be very helpful for troubleshooting differences between software versions in the real world.

    Since you simply told Docker to run the hello-world container without specifying a version, Docker will pull the latest version by default. You can specify different versions by specifying a tag in your docker run command. For example, if the hello-world container image had a version 2.0, you could run that version using the docker run hello-world:2.0 command.

    Imagine for a minute that the container was a bit more complex than a simple hello-world application. Imagine your colleague wrote software with the requirement to download very specific versions of many third-party libraries. If you run this application traditionally, you would have to download the runtime environment for the language they develop in, plus all of the third-party libraries, as well as detailed instructions on how to build and execute their code.

    However, if they publish a Docker image of their code to an internal Docker registry, all they have to provide to you is the docker run syntax for running the container. Since you have Docker, the container image will run the same no matter what your underlying platform is. The container image itself already has the libraries and runtime details baked in.

  5. If you execute the same docker run command again, a new container instance will be created for each docker run command a user inputs. One of the benefits of containerization is the ability to easily run multiple instances of a software application. To see how Docker handles multiple container instances, run the same docker run command again to create another instance of the hello-world container:
    $ docker run hello-world

    You should see the following output:

    Hello from Docker!
    This message shows that your installation appears to be 
    working correctly.
    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from 
        the Docker Hub.
        (amd64)
     3. The Docker daemon created a new container from that image 
        which runs the executable that produces the output you 
        are currently reading.
     4. The Docker daemon streamed that output to the Docker client, 
        which sent it to your terminal.
    To try something more ambitious, you can run an Ubuntu container 
    with:
     $ docker run -it ubuntu bash
    Share images, automate workflows, and more with a free Docker ID:
     https://hub.docker.com/
    For more examples and ideas, visit:
     https://docs.docker.com/get-started/

    Notice that, this time, Docker did not have to download the container image from Docker Hub again. This is because you now have that container image cached locally. Instead, Docker was able to directly run the container and display the output to the screen. Let's see what your docker ps -a output looks like now.

  6. In your terminal, run the docker ps -a command again:
    docker ps -a

    In the output, you should see that the second instance of this container image has completed its execution and entered a stopped state, as indicated by Exited (0) in the STATUS column of the output:

    CONTAINER ID     IMAGE           COMMAND       CREATED
      STATUS                      PORTS               NAMES
    e86277ca07f1     hello-world     "/hello"      2 minutes ago
      Exited (0) 2 minutes ago                        awesome_euclid
    24c4ce56c904     hello-world     "/hello"      20 minutes ago
      Exited (0) 20 minutes ago                       inspiring_moser

    You now have a second instance of this container showing in your output. Each time you execute the docker run command, Docker creates a new instance of that container with its own attributes and data. You can run as many instances of a container as your system resources will allow. In this example, you created the first instance 20 minutes ago and the second instance 2 minutes ago.

  7. Check the base image again by executing the docker images command once more:
    $ docker images

    The returned output will show the single base image that Docker created two running instances from:

    REPOSITORY     TAG       IMAGE ID        CREATED         SIZE
    hello-world    latest    bf756fb1ae65    3 months ago    13.3kB

In this exercise, you used docker run to start the hello-world container. To accomplish this, Docker downloaded the image from the Docker Hub registry and executed it in the Docker Engine. Once the base image was downloaded, you could create as many instances of that container as you wanted using subsequent docker run commands.

Docker container management is more complex than simply starting and viewing the status of containers running in your development environment. Docker also supports many other actions that help provide insight into the status of applications running on Docker hosts. In the next section, we will learn how to manage Docker containers using different commands.

 

Managing Docker Containers

Throughout our container journey, we will be pulling, starting, stopping, and removing containers from our local environment quite frequently. Prior to deploying a container in a production environment, it is critical that we first run the container locally to understand how it functions and what normal behavior looks like. This includes starting containers, stopping containers, getting verbose details about how the container is running, and, of course, accessing the container logs to view critical details about the applications running inside the containers. These basic commands are outlined as follows:

  • docker pull: This command downloads a container image to the local cache
  • docker stop: This command stops a running container instance
  • docker start: This command starts a container instance that is no longer in a running state
  • docker restart: This command restarts a running container
  • docker attach: This command allows users to gain access (or attach) to the primary process of a running Docker container instance
  • docker exec: This command executes a command inside a running container
  • docker rm: This command deletes a stopped container
  • docker rmi: This command deletes a container image
  • docker inspect: This command shows verbose details about the state of a container
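
Taken together, these commands cover a container's full life cycle. The following is a minimal sketch of that life cycle, assuming the public nginx image; the -d and --name flags are explained in the exercises that follow:

$ docker pull nginx:latest     # download the image to the local cache
$ docker run -d --name web nginx:latest
$ docker inspect web           # verbose details about the container's state
$ docker stop web              # stop the running instance
$ docker start web             # start it again
$ docker stop web
$ docker rm web                # delete the stopped container
$ docker rmi nginx:latest      # delete the image from the local cache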

Container life cycle management is a critical component of effective container management in production environments. Knowing how to investigate running containers is critical when looking to evaluate the health of your containerized infrastructure.

In the following exercise, we are going to work with these commands individually to get an in-depth understanding of how they work and how they can be leveraged to provide visibility into the health of your containerized infrastructure.

Exercise 1.02: Managing Container Life Cycles

When managing containers in both development and production environments, it is critical to understand the status of container instances. Many developers use base container images that contain a specific baseline configuration on top of which their applications can be deployed. Ubuntu is a commonly used base image for packaging applications.

Unlike a full operating system image, the Ubuntu base container image is quite slim and intentionally leaves out many of the packages a full operating system installation includes. Most base images do, however, ship with a package manager that will allow you to install any missing packages.

Keep in mind that when building container images, you want to keep the base images as slim as possible, only installing the most necessary packages. This ensures that container images can quickly be pulled and started by Docker hosts.

In this exercise, you will work with the official Ubuntu base container image. This image will be used to start container instances that will be used to test the various container life cycle management commands, such as docker pull, docker start, and docker stop. This container image is useful because the default base image allows us to run container instances in long-running sessions to understand how the container life cycle management commands function. In this exercise, you will also pull the Ubuntu 18.04 container image and compare it with the Ubuntu 19.04 container image:

  1. In a new terminal or PowerShell window, execute the docker pull command to download the Ubuntu 18.04 container image:
    $ docker pull ubuntu:18.04

    You should see the following output indicating that Docker is downloading all the layers of the base image:

    18.04: Pulling from library/ubuntu
    5bed26d33875: Pull complete 
    f11b29a9c730: Pull complete 
    930bda195c84: Pull complete 
    78bf9a5ad49e: Pull complete 
    Digest: sha256:bec5a2727be7fff3d308193cfde3491f8fba1a2ba392
            b7546b43a051853a341d
    Status: Downloaded newer image for ubuntu:18.04
    docker.io/library/ubuntu:18.04
  2. Use the docker pull command to download the Ubuntu 19.04 base image:
    $ docker pull ubuntu:19.04

    You will see similar output as Docker downloads the Ubuntu 19.04 base image:

    19.04: Pulling from library/ubuntu
    4dc9c2fff018: Pull complete 
    0a4ccbb24215: Pull complete 
    c0f243bc6706: Pull complete 
    5ff1eaecba77: Pull complete 
    Digest: sha256:2adeae829bf27a3399a0e7db8ae38d5adb89bcaf1bbef
            378240bc0e6724e8344
    Status: Downloaded newer image for ubuntu:19.04
    docker.io/library/ubuntu:19.04
  3. Use the docker images command to confirm that the container images are downloaded to the local container cache:
    $ docker images

    The contents of the local container cache will display the Ubuntu 18.04 and Ubuntu 19.04 base images, as well as our hello-world image from the earlier exercise:

    REPOSITORY     TAG        IMAGE ID         CREATED         SIZE
    ubuntu         18.04      4e5021d210f6     4 weeks ago     64.2MB
    ubuntu         19.04      c88ac1f841b7     3 months ago    70MB
    hello-world    latest     bf756fb1ae65     3 months ago    13.3kB
  4. Before running these images, use the docker inspect command to get verbose output about what makes up the container images and how they differ. In your terminal, run the docker inspect command, passing the image ID of the Ubuntu 18.04 container image as the main argument:
    $ docker inspect 4e5021d210f6

    The inspect output will contain a large list of all the attributes that define that container. For example, you can see what environment variables are configured within the container, whether the container has a hostname set, when the image was last updated, and a breakdown of all the layers that define that container. This output contains critical debugging details that can prove valuable when planning an upgrade. The following is the truncated output of the inspect command. In the Ubuntu 18.04 image, the "Created" parameter should provide the date and time the container image was built:

    "Id": "4e5021d210f6d4a0717f4b643409eff23a4dc01c4140fa378b1b
           f0a4f8f4",
    "Created": "2020-03-20T19:20:22.835345724Z",
    "Path": "/bin/bash",
    "Args": [],
  5. Inspecting the Ubuntu 19.04 container image, you can see that this parameter is different. Run the docker inspect command with the Ubuntu 19.04 container image ID:
    $ docker inspect c88ac1f841b7

    In the displayed output, you will see that this container image was created on a different date from the 18.04 container image:

    "Id": "c88ac1f841b74e5021d210f6d4a0717f4b643409eff23a4dc0
           1c4140fa"
    "Created": "2020-01-16T01:20:46.938732934Z",
    "Path": "/bin/bash",
    "Args": []

    This could be critical if you knew that a security vulnerability might be present in an Ubuntu base image. This information can also prove vital to helping you determine which version of the container you want to run.

  6. After inspecting both container images, it will be clear that your best choice is to stick with the Ubuntu 18.04 Long Term Support (LTS) release. As you saw from the preceding outputs, the 18.04 release is more up to date than the 19.04 release. This is to be expected, as Ubuntu generally provides ongoing, stable updates to its long-term support releases.
  7. Use the docker run command to start an instance of the Ubuntu 18.04 container:
    $ docker run -d ubuntu:18.04

    Notice that this time we are using the docker run command with the -d flag. This tells Docker to run the container in daemon mode (or in the background). If we omit the -d flag, the container will take over our current terminal until the primary process inside the container terminates.

    Note

    A successful invocation of the docker run command will usually only return the container ID as output. Some versions of Docker will not return any output.

  8. Check the status of the container using the docker ps -a command:
    $ docker ps -a

    This will reveal a similar output to the following:

    CONTAINER ID     IMAGE           COMMAND        CREATED
      STATUS                     PORTS         NAMES
    c139e44193de     ubuntu:18.04    "/bin/bash"    6 seconds ago
      Exited (0) 4 seconds ago                 xenodochial_banzai

    As you can see, your container is stopped and exited. This is because the primary process inside the container is /bin/bash, which is a shell. The Bash shell cannot run without being executed in an interactive mode since it expects text input and output from a user.

  9. Run the docker run command again, passing in the -i flag to make the session interactive (expecting user input) and the -t flag to allocate a pseudo-TTY handler to the container. The pseudo-TTY handler essentially links the user's terminal to the interactive Bash shell running inside the container, allowing Bash to run properly since the container now runs in an interactive mode that expects user input. You can also give the container a human-readable name by passing in the --name flag. Type the following command in your Bash terminal:
    $ docker run -i -t -d --name ubuntu1 ubuntu:18.04
  10. Execute the docker ps -a command again to check the status of the container instance:
    $ docker ps -a 

    You should now see the new instance running, as well as the instance that exited moments ago:

    CONTAINER ID    IMAGE          COMMAND         CREATED
      STATUS            PORTS               NAMES
    cfaa37795a7b    ubuntu:18.04   "/bin/bash"     4 seconds ago
      Up 2 seconds                          ubuntu1
    c139e44193de    ubuntu:18.04   "/bin/bash"     5 minutes ago
      Exited (0) 5 minutes ago              xenodochial_banzai
  11. You now have an Ubuntu container up and running. You can run commands inside this container using the docker exec command. Run the exec command to access a Bash shell, which will allow us to run commands inside the container. Similar to docker run, pass in the -i and -t flags to make it an interactive session. Also pass in the name or ID of the container, so that Docker knows which container you are targeting. The final argument of docker exec is always the command you wish to execute. In this case, it will be /bin/bash to start a Bash shell inside the container instance:
    docker exec -it ubuntu1 /bin/bash

    You should immediately see your prompt change to a root shell. This indicates that you have successfully launched a shell inside your Ubuntu container. The hostname of the container, cfaa37795a7b, is taken from the first twelve characters of the container ID. This lets the user know for certain which container they are accessing, as seen in the following example:

    root@cfaa37795a7b:/#
  12. From inside the container, you are very limited in terms of what tools you have available. Unlike a VM image, container images are extremely minimal in terms of the packages that come preinstalled. The echo command should be available, however. Use echo to write a simple message to a text file:
    root@cfaa37795a7b:/# echo "Hello world from ubuntu1" > hello-world.txt
  13. Run the exit command to exit from the Bash shell of the ubuntu1 container. You should return to your normal terminal shell:
    root@cfaa37795a7b:/# exit

    The command will return output like the following. Please note that the output may vary for every user running the command:

    user@hostname:~$
  14. Now create a second container called ubuntu2 that will also run in your Docker environment using the Ubuntu 19.04 image:
    $ docker run -i -t -d --name ubuntu2 ubuntu:19.04
  15. Run docker exec to access a shell of this second container. Remember to use the name or container ID of the new container you created. Likewise, access a Bash shell inside this container, so the final argument will be /bin/bash:
    $ docker exec -it ubuntu2 /bin/bash

    You should observe your prompt change to a Bash root shell, similar to how it did for the Ubuntu 18.04 container image:

    root@875cad5c4dd8:/#
  16. Run the echo command inside the ubuntu2 container instance to write a similar hello-world-type greeting:
    root@875cad5c4dd8:/# echo "Hello-world from ubuntu2!" > hello-world.txt
  17. Currently, you have two Ubuntu container instances running in your Docker environment, each with its own hello-world greeting file written inside it. Use docker ps to see the two running container images:
    $ docker ps

    The list of running containers should reflect the two Ubuntu containers, as well as the time elapsed since they have been created:

    CONTAINER ID    IMAGE            COMMAND        CREATED
      STATUS              PORTS               NAMES
    875cad5c4dd8    ubuntu:19.04     "/bin/bash"    3 minutes ago
      Up 3 minutes                            ubuntu2
    cfaa37795a7b    ubuntu:18.04     "/bin/bash"    15 minutes ago
      Up 15 minutes                           ubuntu1
  18. Instead of using docker exec to access a shell inside our containers, use it to display the output of the hello-world.txt files you wrote by executing the cat command inside the containers:
    $ docker exec -it ubuntu1 cat hello-world.txt

    The output will display the hello-world message you passed into the container in the previous steps. Notice that as soon as the cat command completed and the output was displayed, you were moved back to the context of your main terminal. This is because the docker exec session only exists for as long as the command the user is executing is running.

    In the earlier example of the Bash shell, Bash will only exit if the user terminates it by using the exit command. In this example, only the Hello world output is displayed because the cat command displayed the output and exited, ending the docker exec session:

    Hello world from ubuntu1

    You will observe the contents of the hello-world file displayed, followed by a return to your main terminal session.

  19. Run the same cat command in the ubuntu2 container instance:
    $ docker exec -it ubuntu2 cat hello-world.txt

    Similar to the first example, the ubuntu2 container instance will display the contents of the hello-world.txt file provided previously:

    Hello-world from ubuntu2!

    As you can see, Docker was able to allocate an interactive session on both the containers, execute the command, and return the output directly in our running container instances.

  20. In a similar manner to how you executed commands inside your running containers, you can also stop, start, and restart them. Stop one of your container instances using the docker stop command. In your terminal session, execute the docker stop command, followed by the name or container ID of the ubuntu2 container:
    $ docker stop ubuntu2

    This command should return no output.

  21. Use the docker ps command to view all running container instances:
    $ docker ps

    The output will display the ubuntu1 container up and running:

    CONTAINER ID    IMAGE           COMMAND        CREATED
      STATUS              PORTS               NAMES
    cfaa37795a7b    ubuntu:18.04    "/bin/bash"    26 minutes ago
      Up 26 minutes                           ubuntu1
  22. Execute the docker ps -a command to view all container instances, regardless of whether they are running, to see your container in a stopped state:
    $ docker ps -a

    The command will return the following output:

    CONTAINER ID     IMAGE            COMMAND         CREATED
      STATUS                      PORTS             NAMES
    875cad5c4dd8     ubuntu:19.04     "/bin/bash"     14 minutes ago
      Exited (0) 6 seconds ago                      ubuntu2
  23. Use the docker start or docker restart command to restart the container instance:
    $ docker start ubuntu2

    This command will return no output, although some versions of Docker may display the container ID.

  24. Verify that the container is running again by using the docker ps command:
    $ docker ps

    Notice that STATUS shows that this container has only been up for a short period (1 second), although the container instance was created 17 minutes ago:

    CONTAINER ID    IMAGE           COMMAND         CREATED
      STATUS              PORTS               NAMES
    875cad5c4dd8    ubuntu:19.04    "/bin/bash"     17 minutes ago
      Up 1 second                             ubuntu2
    cfaa37795a7b    ubuntu:18.04    "/bin/bash"     29 minutes ago
      Up 29 minutes                           ubuntu1

    From this state, you can experiment with starting, stopping, or executing commands inside these containers.

  25. The final stage of the container management life cycle is cleaning up the container instances you created. Use the docker stop command to stop the ubuntu1 container instance:
    $ docker stop ubuntu1

    This command will return no output, although some versions of Docker may return the container ID.

  26. Perform the same docker stop command to stop the ubuntu2 container instance:
    $ docker stop ubuntu2
  27. When container instances are in a stopped state, use the docker rm command to delete the container instances altogether. Use docker rm followed by the name or container ID to delete the ubuntu1 container instance:
    $ docker rm ubuntu1

    This command will return no output, although some versions of Docker may return the container ID.

    Perform this same step on the ubuntu2 container instance:

    $ docker rm ubuntu2
  28. Execute docker ps -a to see all containers, even the ones in a stopped state. You will find that the ubuntu1 and ubuntu2 containers no longer exist, as they were deleted by the previous commands. You may delete the hello-world container instances as well, using the container IDs captured from the earlier docker ps -a output:
    $ docker rm 24c4ce56c904 e86277ca07f1
  29. To completely reset the state of our Docker environment, delete the base images you downloaded during this exercise as well. Use the docker images command to view the cached base images:
    $ docker images

    The list of Docker images and all associated metadata in your local cache will display:

    REPOSITORY     TAG        IMAGE ID        CREATED         SIZE
    ubuntu         18.04      4e5021d210f6    4 weeks ago     64.2MB
    ubuntu         19.04      c88ac1f841b7    3 months ago    70MB
    hello-world    latest     bf756fb1ae65    3 months ago    13.3kB
  30. Execute the docker rmi command followed by the image ID to delete the first image ID:
    $ docker rmi 4e5021d210f6

    Just as docker pull downloaded the image layer by layer, the docker rmi command deletes the image and all its associated layers:

    Untagged: ubuntu:18.04
    Untagged: ubuntu@sha256:bec5a2727be7fff3d308193cfde3491f8fba1a2b
    a392b7546b43a051853a341d
    Deleted: sha256:4e5021d210f65ebe915670c7089120120bc0a303b9020859
    2851708c1b8c04bd
    Deleted: sha256:1d9112746e9d86157c23e426ce87cc2d7bced0ba2ec8ddbd
    fbcc3093e0769472
    Deleted: sha256:efcf4a93c18b5d01aa8e10a2e3b7e2b2eef0378336456d86
    53e2d123d6232c1e
    Deleted: sha256:1e1aa31289fdca521c403edd6b37317bf0a349a941c7f19b
    6d9d311f59347502
    Deleted: sha256:c8be1b8f4d60d99c281fc2db75e0f56df42a83ad2f0b0916
    21ce19357e19d853

    Perform this step for each image you wish to delete, substituting in the various image IDs. For each base image you delete, you will see all of the image layers get untagged and deleted along with it.

It is important to periodically clean up your Docker environment as frequently building and running containers can cause large amounts of hard disk usage over time. Now that you know how to run and manage Docker containers in your local development environment, you can use more advanced Docker commands to understand how a container's primary process functions and how to troubleshoot issues. In the next section, we will look at the docker attach command to directly access the primary process of a container.

Note

To streamline the process of cleaning up your environment, Docker provides a prune command that will automatically remove old containers and base images:

$ docker system prune -fa

Executing this command will remove any container images that are not tied to an existing running container, along with any other resources in your Docker environment.
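
If you prefer more targeted cleanup than docker system prune provides, Docker also offers per-resource prune subcommands. The following is a minimal sketch:

$ docker container prune -f
$ docker image prune -a -f

The first command removes all stopped containers; the second removes all images that are not referenced by at least one container.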

 

Attaching to Containers Using the attach Command

In the previous exercise, you saw how to use the docker exec command to spin up a new shell session in a running container instance in which to execute commands. The docker exec command is very good for quickly gaining access to a containerized instance for debugging, troubleshooting, and understanding the context the container is running in.

However, as covered earlier in the chapter, a Docker container's life cycle is tied to the primary process running inside it. When this process exits, the container will stop. If you want to access the primary process inside the container directly (as opposed to a secondary shell session), Docker provides the docker attach command to attach to the primary running process inside the container.

When using docker attach, you are gaining access to the primary process running in the container. If this process is interactive, such as a Bash or Bourne shell session, you will be able to execute commands directly through a docker attach session (similar to docker exec). However, if the primary process in your container terminates, so will the entire container instance, since the Docker container life cycle is dependent on the running state of the primary process.

In the following exercise, you will use the docker attach command to directly access the primary process of an Ubuntu container. By default, the primary process of this container is /bin/bash.

Exercise 1.03: Attaching to an Ubuntu Container

The docker attach command is used to attach to a running container in the context of the primary process. In this exercise, you will use the docker attach command to attach to running containers and investigate the main container entrypoint process directly:

  1. Use the docker run command to start a new Ubuntu container instance. Run this container in interactive mode (-i), allocate a TTY session (-t), and run it in the background (-d). Call this container attach-example1:
    docker run -itd --name attach-example1 ubuntu:latest

    This will start a new Ubuntu container instance named attach-example1 using the latest version of the Ubuntu container image.

  2. Use the docker ps command to check that this container is running in our environment:
    docker ps 

    The details of the running container instance will be displayed. Take note that the primary process of this container is a Bash shell (/bin/bash):

    CONTAINER ID    IMAGE            COMMAND          CREATED
      STATUS              PORTS               NAMES
    90722712ae93    ubuntu:latest    "/bin/bash"      18 seconds ago
      Up 16 seconds                           attach-example1
  3. Run the docker attach command to attach to the primary process inside this container (/bin/bash). Use docker attach followed by the name or ID of the container instance:
    $ docker attach attach-example1

    This should drop you into the primary Bash shell session of this container instance. Note that your terminal session should change to a root shell session, indicating you have successfully accessed the container instance:

    root@90722712ae93:/#

    It should be noted here that using commands such as exit to terminate a shell session will result in stopping the container instance because you are now attached to the primary process of the container instance. By default, Docker provides the shortcut key sequence of Ctrl + P and then Ctrl + Q to gracefully detach from an attach session.

  4. Use the keyboard combinations Ctrl + P and then Ctrl + Q to detach from this session gracefully:
    root@90722712ae93:/# CTRL-p CTRL-q

    Note

    You will not type the words CTRL-p CTRL-q; rather, you will press and hold the Ctrl key, press the P key, and then release both keys. Then, press and hold the Ctrl key again, press the Q key, and then again release both keys.

    Upon successful detachment of the container, the words read escape sequence will be displayed before returning you to your main terminal or PowerShell session:

    root@90722712ae93:/# read escape sequence
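
    If the default Ctrl + P, Ctrl + Q sequence conflicts with a key binding in your terminal, docker attach accepts a --detach-keys flag to override it. The following is a minimal sketch that remaps detaching to Ctrl + X:

    $ docker attach --detach-keys="ctrl-x" attach-example1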
  5. Use docker ps to verify that the Ubuntu container is still running as expected:
    $ docker ps

    The attach-example1 container will be displayed, still running as expected:

    CONTAINER ID    IMAGE            COMMAND          CREATED
      STATUS              PORTS               NAMES
    90722712ae93    ubuntu:latest    "/bin/bash"      13 minutes ago
      Up 13 minutes                           attach-example1
  6. Use the docker attach command to attach once more to the attach-example1 container instance:
    $ docker attach attach-example1

    You should be put back into the Bash session of the primary process:

    root@90722712ae93:/#
  7. Now, terminate the primary process of this container using the exit command. In the Bash shell session, type the exit command:
    root@90722712ae93:/# exit

    The terminal session should have exited, returning you once more to your primary terminal.

  8. Use the docker ps command to observe that the attach-example1 container should no longer be running:
    $ docker ps

    This should return no running container instances:

    CONTAINER ID    IMAGE            COMMAND              CREATED
      STATUS              PORTS               NAMES
  9. Use the docker ps -a command to view all the containers, even ones that have been stopped or have exited:
    $ docker ps -a

    This should display the attach-example1 container in a stopped state:

    CONTAINER ID      IMAGE                COMMAND 
      CREATED            STATUS    PORTS           NAMES
    90722712ae93      ubuntu:latest        "/bin/bash"
      20 minutes ago     Exited (0) 3 minutes ago  attach-example1

    As you can see, the container has gracefully terminated (Exited (0)) approximately 3 minutes ago. The exit command gracefully terminates a Bash shell session.

  10. Use the docker system prune -fa command to clean up the stopped container instances:
    docker system prune -fa

    This should remove all stopped container instances, including the attach-example1 container instance, as seen in the following output:

    Deleted Containers:
    90722712ae93...
    Deleted Images:
    untagged: ubuntu:latest
    deleted: sha256:...

In this exercise, we used the docker attach command to gain direct access to the primary process of a running container. This differs from the docker exec command we explored earlier in the chapter because docker exec executes a new process inside a running container, whereas docker attach attaches to the main process of a container directly. Careful attention must be paid, however, when attaching to a container not to stop the container by terminating the main process.

In the next activity, we will put together the Docker management commands we covered in this chapter to start putting together the building block containers that will become the Panoramic Trekking microservices application stack.

Activity 1.01: Pulling and Running the PostgreSQL Container Image from Docker Hub

Panoramic Trekking is a multi-tier web application that we will build throughout this book. Like any web application, it will consist of a web server container (NGINX), a Python Django backend application, and a PostgreSQL database. Before you can deploy the web application or the frontend web server, you must first deploy the backend database.

In this activity, you are asked to start a PostgreSQL version 12 database container with default credentials.

Note

The official Postgres container image provides many environment variable overrides you can leverage to configure the PostgreSQL instance. Review the documentation for the container on Docker Hub at https://hub.docker.com/_/postgres.
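
As a hint rather than a complete solution, environment variables are passed to docker run with one -e flag per variable. The following is a minimal sketch of the pattern with placeholder values; POSTGRES_USER and POSTGRES_PASSWORD are among the variables documented on the Docker Hub page above:

$ docker run -d -e POSTGRES_USER=<username> -e POSTGRES_PASSWORD=<password> postgres:12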

Perform the following steps:

  1. Create a Postgres database container instance that will serve as the data tier of our application stack.
  2. Use environment variables to configure the container at runtime to use the following database credentials:
    username: panoramic
    password: trekking
  3. Verify whether the container is running and healthy.

Expected Output:

The following output should be returned on running the docker ps command:

CONTAINER ID  IMAGE         COMMAND                 CREATED
  STATUS              PORTS               NAMES
29f115af8cdd  postgres:12   "docker-entrypoint.s…"  4 seconds ago
  Up 2 seconds        5432/tcp            blissful_kapitsa

Note

The solution for this activity can be found via this link.

In the next activity, you will access the database that has just been set up in this activity inside the container instance. You will also interact with the container to fetch the list of databases running in the container.

Activity 1.02: Accessing the Panoramic Trekking App Database

This activity will involve accessing the database running inside the container instance using the PSQL CLI utility. Once you have logged in using the credentials (panoramic/trekking), you will query for the list of databases running in the container.

Perform the following steps:

  1. Log in to the Postgres database container using the PSQL command-line utility.
  2. Once logged in to the database, return a list of databases in Postgres by default.

    Note

    If you are not familiar with the PSQL CLI, the following is a list of reference commands to assist you with this activity:

    Logging in: psql --username username --password

    Listing the database: \l

    Quitting the PSQL shell: \q
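
    Since psql runs inside the container, one possible way to invoke it is through docker exec; the following is a minimal sketch, assuming the container name shown in the Activity 1.01 output (your container's name will differ, so check the NAMES column of docker ps):

    $ docker exec -it blissful_kapitsa psql --username panoramic --password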

Expected Output:

Figure 1.3: Expected output of Activity 1.02


Note

The solution for this activity can be found via this link.

 

Summary

In this chapter, you learned the fundamentals of containerization, the benefits of running applications in containers, and the basic Docker life cycle commands to manage containerized instances. You learned that containers serve as a universal software deployment package that truly can be built once and run anywhere. Because we are running Docker locally, we can know for certain that the same container images running in our local environment can be deployed in production and run with confidence.

Using commands such as docker run, docker start, docker exec, docker ps, and docker stop, we have explored the basics of container life cycle management through the Docker CLI. Through the various exercises, we launched container instances from the same base image, configured them using docker exec, and cleaned up the deployments using other basic container life cycle commands such as docker rm and docker rmi.

In the final portion of this chapter, we jumped in head-first, taking the first steps toward running our Panoramic Trekking application by launching a PostgreSQL database container instance. Using environment variables that we placed within the docker run command, we created an instance configured with a default username and password. We tested the configuration by executing the PSQL command-line tool from inside the container and querying for the list of default databases.

Although this is only scratching the surface of what Docker is capable of, we hope it was able to whet your appetite for the material that will be covered in the upcoming chapters. In the next chapter, we will discuss building truly immutable containers using Dockerfiles and the docker build command. Writing custom Dockerfiles to build and deploy unique container images will demonstrate the power of running containerized applications at scale.

About the Authors

  • Vincent Sesto

    Vincent Sesto is a DevOps engineer, endurance athlete, and author. As a DevOps engineer, he specializes in working with Linux and open-source applications. He is particularly interested in helping people get more from the tools they have and is currently developing his skills in DevOps, continuous integration, security, Splunk (UI and reporting), and Python development.

    Browse publications by this author
  • Onur Yılmaz

    Onur Yılmaz is a senior software engineer in a multinational enterprise software company. He is a certified Kubernetes administrator (CKA) and works on Kubernetes and cloud management systems. He is a keen supporter of cutting-edge technologies including Docker, Kubernetes, and cloud-native applications. He has one master's and two bachelor's degrees in the engineering field.

    Browse publications by this author
  • Sathsara Sarathchandra

    Sathsara Sarathchandra is a DevOps engineer experienced in building and managing Kubernetes-based production deployments both in the cloud and on-premises. He has over 8 years of experience working in several companies, ranging from small start-ups to enterprises. He is a Certified Kubernetes Administrator (CKA) and a Certified Kubernetes Application Developer (CKAD). He holds a master's degree in business administration and a bachelor's degree in computer science.

    Browse publications by this author