Mastering Docker - Second Edition

By Russ McKendrick, Scott Gallagher

About this book

Docker has been a game-changer when it comes to how modern applications are deployed and architected. It has now grown into a key driver of innovation beyond system administration, with an impact on the world of web development and more. But how can you make sure you're keeping up with the innovations it's driving? How can you be sure you're using it to its full potential?

This book shows you how; it not only demonstrates how to use Docker more effectively, it also helps you rethink and reimagine what's possible with Docker.

You will also cover basic topics such as building, managing, and storing images, along with best practices to make you confident, before delving more deeply into Docker security.

You'll find everything related to extending and integrating Docker in new and innovative ways. Docker Swarm and Docker Compose will help you take control of your containers in an efficient way.

By the end of the book, you will have a broad and detailed sense of exactly what's possible with Docker and how seamlessly it fits in with a range of other platforms and tools.

Publication date: July 2017
Publisher: Packt
Pages: 392
ISBN: 9781787280243

 

Chapter 1. Docker Overview

Welcome to Mastering Docker, Second Edition! This first chapter will cover the Docker basics that you should already have a pretty good handle on. If you don't already have the required knowledge at this point, this chapter will help you with the basics so that subsequent chapters don't feel as heavy. By the end of the book, you should be a Docker master, able to implement Docker in your own environments, building and supporting applications on top of them.

In this chapter, we're going to review the following high-level topics:

  • Understanding Docker
  • Differences between dedicated hosts, virtual machines, and Docker
  • Docker installers/installation
  • The Docker command-line client
  • The Docker ecosystem
 

Understanding Docker


Let's start by trying to define what Docker is. Docker's website currently sums Docker up with the following statement:

"Docker is the world's leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps."

That is a pretty bold opening statement; however, it is backed up by the figures presented by the then Docker CEO, Ben Golub, during the opening keynote of DockerCon 2017, which were:

  • 14M Docker hosts
  • 900K Docker apps
  • 77K% growth in Docker job listings
  • 12B image pulls
  • 3,300 project contributors

For a technology that is only four years old, I am sure you will agree that is quite impressive. To view the opening keynote, please see https://www.slideshare.net/Docker/dockercon-2017-general-session-day-1-ben-golub.

Let's begin getting an understanding of the problems that Docker aims to solve, starting with developers.

Developers

The "works on my machine" problem is probably summed up best by the following image macro, based on the Disaster Girl meme, which started popping up in presentations, forums, and Slack channels a few years ago:

While it is funny, unfortunately it is an all-too-real problem and one I have personally been on the receiving end of.

The problem

Even in a world where DevOps best practices are followed, it is still all too easy for a developer's working environment to not match the final production environment.

For example, a developer using the macOS version of, say, PHP will probably not be running the same version as the Linux server that hosts the production code. Even if the versions match, you then have to deal with differences in the configuration and overall environment the version of PHP is running on, such as differences in the way file permissions are handled between the operating system versions, to name just one potential problem.

All of this comes to a head when it is time for a developer to deploy their code to the host and it doesn't work. Should the production environment be configured to match the developer's machine, or should developers only do their work in environments that match production?

In an ideal world, everything should be consistent, from the developer's laptop all the way through to your production servers; however, traditionally this utopia has been difficult to achieve. Everyone has their own way of working and personal preferences--enforcing consistency across multiple platforms is difficult enough when it is a single engineer working on their own systems, let alone a team of engineers working with a team of potentially hundreds of developers.

The Docker solution

Using Docker for Mac or Docker for Windows, a developer can easily wrap their code in a container that they have either defined themselves or created by working alongside operations, in the form of a Dockerfile--we will be covering this in Chapter 2, Building Container Images--or a Docker Compose file, which we will go into more detail about in Chapter 5, Docker Compose.

They can continue to use their chosen IDE and maintain their own workflows for working with the code. As we will see in the upcoming sections of this chapter, installing and using Docker is not difficult; in fact, considering how much of a chore it was to maintain consistent environments in the past, even with automation, Docker feels a little too easy, almost like cheating.
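
To give you a feel for how little is involved, the following is a minimal sketch of the kind of Dockerfile a developer might write for a simple PHP application; the base image tag and paths here are purely illustrative, and we will cover Dockerfiles properly in Chapter 2, Building Container Images:

FROM php:7.1-apache
# Copy the application code from the developer's machine into the image
COPY . /var/www/html/

Building and running this locally is then just a case of using the docker image build and docker container run commands, which we will try out later in this chapter, and the exact same image can be handed to operations to run in production.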

Operators

I have been working in operations for more years than I would like to admit to, and this problem has cropped up regularly.

The problem

Let's say you are looking after five servers: three load-balanced web servers and two database servers in a master-slave configuration, running Application 1. You are using a tool such as Puppet or Chef to automatically manage the software stack and the configuration across your five servers.

Everything is going great until you are told "we need to deploy Application 2 on the same servers that are running Application 1." On the face of it, no problem, you can tweak your Puppet or Chef configuration to add new users, vhosts, pull the new code down, and so on. However, you notice that Application 2 requires a higher version of the software you are running for Application 1.

To make matters worse, you already know that Application 1 flat-out refuses to work with the new software stack and that Application 2 is not backward compatible.

Traditionally, this leaves you with a few choices, all of which just add to the problem one way or another:

  1. Ask for more servers? While this traditionally is probably the safest technical solution, it does not automatically mean that there will be the budget for additional resources.
  2. Re-architect the solution? Taking one of the web and database servers out of the load-balancer or replication and redeploying them with the software stack for Application 2 may seem like the next easiest option from a technical point of view. However, you are introducing single points of failure for Application 2 and also reducing the redundancy for Application 1: there was probably a reason why you were running three web and two database servers.
  3. Attempt to install the new software stack side by side on your servers? Well, this certainly is possible and may seem like a good short-term plan to get the project out of the door, but it could leave you with a house of cards that could come tumbling down when the first critical security patch is needed for either software stack.

The Docker solution

This is where Docker starts to come into its own. If you have Application 1 running across your three web servers in containers, you may actually be running more than three containers; in fact, you could already be running six, doubling up on the containers, allowing you to run rolling deployments of your application without reducing the availability of Application 1.

Deploying Application 2 in this environment is as easy as simply launching more containers across your three hosts and then routing to the newly deployed application using your load balancer. As you are just deploying containers, you do not need to worry about the logistics of deploying, configuring, and managing two versions of the same software stack on the same server.
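
As a hedged sketch of what this looks like in practice--the image names, tags, and ports here are purely hypothetical--running the two incompatible software stacks side by side on one host boils down to something like the following, with your load balancer then routing traffic to the relevant host ports:

# Application 1, still on the older software stack
$ docker container run -d --name app1 -p 8080:80 app1:legacy
# Application 2, which needs the newer stack, on the same host
$ docker container run -d --name app2 -p 8081:80 app2:latest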

We will work through an example of this exact scenario in Chapter 5, Docker Compose.

Enterprise

Enterprises suffer from the same problems described previously, as they have both developers and operators; however, they face them on a much larger scale, and there is also a lot more risk involved.

The problem

Because of this risk, and because any downtime could cost sales or damage their reputation, enterprises need to test every deployment before it is released. This means that new features and fixes are stuck in a holding pattern while:

  • Test environments are spun up and configured
  • Applications are deployed across the newly launched environments
  • Test plans are executed and the application and configuration are tweaked until the tests pass
  • Requests for change are written, submitted, and discussed to get the updated application deployed to production

This process can take anywhere from a few days to weeks or even months, depending on the complexity of the application and the risk the change introduces. While the process is required to ensure continuity and availability for the enterprise at a technology level, it does potentially introduce risk at the business level. What if you have a new feature stuck in this holding pattern and a competitor releases a similar, or worse still, the same feature ahead of you?

This scenario could be just as damaging to sales and reputation as the downtime that the process has been put in place to protect you against.

The Docker solution

Let me start by saying that Docker does not remove the need for a process such as the one just described to exist or be followed. However, as we have already touched upon, it does make things a lot easier as you are already working in a consistent way. It means that your developers have been working with the exact same container configuration that is running in production. This means that it is not much of a step for the methodology to be applied to your testing.

For example, when a developer checks in their code that they know works on their local development environment (as that is where they have been doing all of their work) your testing tool can launch the same containers to run your automated tests against. Once the containers have been used, they can be removed to free up resources for the next lot of tests. This means that all of a sudden your testing process and procedures are a lot more flexible and you can continue to reuse the same environment rather than redeploy or reimage servers for the next set of testing.
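
A hedged sketch of what such a test run might look like--the image name and test command are hypothetical and will vary with your toolchain--is simply building the same image the developer used, running the test suite inside a throwaway container, and letting Docker clean up afterwards:

$ docker image build -t myapp:candidate .
# --rm removes the container as soon as the test suite finishes, freeing resources
$ docker container run --rm myapp:candidate ./run-tests.sh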

This streamlining of the process can be taken as far as having your new application containers pushed all the way through to production.

The quicker this process can be completed, the quicker you can confidently launch new features or fixes and keep ahead of the curve.

 

Differences between dedicated hosts, virtual machines, and Docker


We know what problems Docker was developed to solve; now we need to discuss what exactly Docker is and what it does.

Docker is a container-management system that helps us manage Linux Containers (LXC) in an easier and more universal fashion. It lets you create images in virtual environments on your laptop and run commands against them. The actions you perform against the containers that you run locally on your own machine will be the same commands or operations you run against them when they are running in your production environment.

This means that you do not have to do things differently when you go from a development environment, like the one on your local machine, to a production environment on your server. Now, let's take a look at the differences between Docker containers and typical virtual machine environments.

The following illustration demonstrates the difference between a dedicated, bare-metal server and a server running virtual machines:

As you can see, for a dedicated machine we have three applications, all sharing the same orange software stack. Running virtual machines allows us to run three applications on two completely different software stacks. The following illustration shows the same orange and green applications running in containers using Docker:

This illustration gives us a lot of insight into the single biggest benefit of Docker: there is no need for a complete operating system every time we need to bring up a new container, which cuts down on the overall size of containers. Docker relies on the Linux kernel of the host OS it is running on, such as Red Hat, CentOS, or Ubuntu (since almost all versions of Linux use the standard kernel model). For this reason, you can have almost any Linux OS as your host operating system and layer other Linux-based operating systems on top of the host. That is, a full operating system is never actually installed in the container; your applications are simply led to believe there is one, while only the binaries--such as a package manager, Apache/PHP, and the libraries needed to provide just enough of an operating system--are included for your applications to run.

For example, in the earlier illustration, we could have Red Hat running for the orange application and Debian running for the green application, but there would never be a need to actually install Red Hat or Debian on the host. Thus, another benefit of Docker is the size of images when they are created: they are built without the largest piece, the kernel or the operating system. This makes them incredibly small, compact, and easy to ship.
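
You will be able to see both of these points for yourself once Docker is installed later in this chapter. The following commands are a small sketch using the official Alpine Linux image as an example: the first two pull the image and show that it is only a few megabytes in size, and the last two demonstrate that a container uses the host's kernel rather than shipping one of its own (on a Linux host, both uname -r commands print the same kernel release):

$ docker image pull alpine
$ docker image ls alpine
$ docker container run --rm alpine uname -r
$ uname -r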

 

Docker installers/installation


Installers are one of the first pieces you need to get up and running with Docker, both on your local machine and in your server environments. Let's first take a look at which environments you can install Docker in:

  • Linux (various Linux flavors)
  • Apple macOS
  • Windows 10 Professional

In addition, you can run Docker on public clouds such as Amazon Web Services, Microsoft Azure, and DigitalOcean, to name a few. With the various installers listed earlier, Docker operates on the operating system in different ways. Docker runs natively on Linux, so if you are using Linux, it's pretty straightforward how Docker runs right on your system. However, if you are using macOS or Windows 10, it operates a little differently, since it relies on using Linux.

Let's look at quickly installing Docker on a Linux desktop running Ubuntu 16.04 and then on macOS and Windows 10.

Installing on Linux (Ubuntu 16.04)

As already mentioned, this is the most straightforward installation of the three systems we will be looking at. To install Docker, simply run the following commands from a Terminal session:

$ curl -sSL https://get.docker.com/ | sh
$ sudo systemctl start docker

These commands will download, install, and configure the latest version of Docker from Docker themselves; at the time of writing, the version of Docker installed by the official install script on Linux was 17.05.
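
Depending on your setup, you may need to prefix docker commands with sudo, as the daemon runs as root by default. A common optional step, assuming your distribution has a docker group (the official packages create one), is to add your own user to that group and then log out and back in:

$ sudo usermod -aG docker $USER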

Running the following command should confirm that Docker is installed and running:

$ docker version

You should see something similar to the following output:

There are two supporting tools we are going to be using in future chapters; they are installed as part of the Docker for Mac and Docker for Windows installers, but not by the Linux install script, so we should install them now to ensure that we are ready to use them in later chapters. The first tool is Docker Machine. To install it, run the following commands:

Note

You can check whether you are installing the latest version by visiting the releases section of the project's GitHub page at https://github.com/docker/machine/releases/. To install a version other than v0.11.0, simply replace the version number in the following command.

$ curl -L "https://github.com/docker/machine/releases/download/v0.11.0/docker-machine-$(uname -s)-$(uname -m)" -o /tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo mv /tmp/docker-machine /usr/local/bin/docker-machine

To download and install Docker Compose, run the following commands, again checking that you are running the latest version by visiting the releases page at https://github.com/docker/compose/releases/:

$ curl -L "https://github.com/docker/compose/releases/download/1.13.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ chmod +x /tmp/docker-compose
$ sudo mv /tmp/docker-compose /usr/local/bin/docker-compose

Once they are installed, you should be able to run the following two commands:

$ docker-compose version
$ docker-machine version

Installing on macOS

Unlike the command-line Linux installation, Docker for Mac has a graphical installer.

Note

Before downloading, you should make sure that you are running Apple macOS Yosemite 10.10.3 or above. If you are running an older version, all is not lost; you can still run Docker. Refer to the Older operating systems section of this chapter.

You can download the installer from the Docker store at https://store.docker.com/editions/community/docker-ce-desktop-mac/; just click on Get Docker. Once it's downloaded, you should have a DMG file. Double-clicking on it will mount the image, and opening the image mounted on your desktop should present you with something like this:

Once you have dragged the Docker icon to your Applications folder, double-click on it and you will be asked whether you want to open the application you have downloaded. Saying yes will open the Docker installer:

Click on Next and follow the on-screen instructions. Once it is installed and started, you should see a Docker icon in the menu bar at the top of your screen. Clicking on the icon and selecting About Docker should show you something similar to the following:

You can also open a Terminal window. Run the following command as we did on the Linux installation:

$ docker version

You should see something similar to the following Terminal output:

You can also run this to check the versions of Docker Compose and Docker Machine that were installed alongside Docker Engine:

$ docker-compose version
$ docker-machine version

Installing on Windows 10 Professional

Like Docker for Mac, Docker for Windows uses a graphical installer.

Note

Before downloading, you should make sure that you are running Microsoft Windows 10 Professional or Enterprise 64-bit. If you are running an older version or an unsupported edition of Windows 10, you can still run Docker; refer to the Older operating systems section of this chapter for more information. Docker for Windows has this requirement due to its reliance on Hyper-V. Hyper-V is Windows' native hypervisor and allows you to run x86-64 guests on your Windows machine, be it Windows 10 Professional or Windows Server. It even forms part of the Xbox One operating system.

You can download the Docker for Windows installer from the Docker Store at https://store.docker.com/editions/community/docker-ce-desktop-windows/; just click on the Get Docker button to download the installer. Once it's downloaded, run the MSI package and you will be greeted with the following:

Click on Install and then follow the on-screen prompts, which will not only install Docker but also enable Hyper-V if you do not already have it enabled.

Once it's installed, you should see a Docker icon in the icon tray in the bottom right of your screen. Clicking on it and selecting About Docker from the menu will show the following:

Open a PowerShell window and type the following command:

$ docker version

This should also show you similar output to the Mac and Linux versions:

Again, you can also run the following:

$ docker-compose version
$ docker-machine version

Older operating systems

If you are not running a sufficiently new operating system on Mac or Windows, then you will need to use Docker Toolbox. You may have noticed something about the output from running the following command:

$ docker version

On all three of the installations we have performed so far, it shows two different versions: a client and a server. Predictably, the Linux installation shows that the architecture for both the client and server is Linux; however, you may notice that the Mac version shows the client running on Darwin, which is Apple's Unix-like kernel, and the Windows version shows Windows. Yet both of the servers show the architecture as being Linux--what gives?

That is because both the Mac and Windows versions of Docker download and run a virtual machine in the background, and this machine runs a small, lightweight operating system based on Alpine Linux. The virtual machine runs using Docker's own libraries, which connect to the built-in hypervisor for your chosen environment: for macOS, this is the built-in Hypervisor Framework (https://developer.apple.com/reference/hypervisor/), and for Windows, Hyper-V (https://www.microsoft.com/en-gb/cloud-platform/server-virtualization).

To ensure that no one misses out on the Docker experience, a version of Docker that does not use these built-in hypervisors is available for older versions of macOS and unsupported Windows versions. These versions utilize VirtualBox as the hypervisor to run the Linux server for your local client to connect to.

Note

VirtualBox is an open source x86 and AMD64/Intel64 virtualization product developed by Oracle. It runs on Windows, Linux, Macintosh, and Solaris hosts, with support for many Linux, Unix, and Windows guest operating systems. For more information on VirtualBox, see https://www.virtualbox.org/.

For more information on Docker Toolbox, see the project's website at https://www.docker.com/products/docker-toolbox/, where you can also download the macOS and Windows installers.
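
As a rough sketch of what Docker Toolbox does for you behind the scenes, the Docker Machine binary we installed earlier in this chapter can be used on any machine with VirtualBox installed to launch a VirtualBox-backed Docker host and point your local client at it; the VM name default below is just a convention:

$ docker-machine create --driver virtualbox default
$ eval $(docker-machine env default)
$ docker version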

Note

This book assumes you have installed the latest Docker version on Linux or have used Docker for Mac or Docker for Windows. While Docker installations using Docker Toolbox should be able to support the commands in this book, you may run into issues around file permissions and ownership when mounting data from your local machine to your containers.

 

The Docker command-line client


Now that we have Docker installed, let's look at some Docker commands that you should be familiar with already. We will start with some common commands and then take a peek at the commands used for Docker images. We will then take a dive into the commands used for containers.

Note

Docker recently restructured their command-line client into more logical groupings of commands due to the number of features provided by Docker growing quickly and commands starting to cross over each other. Throughout this book, we will be using the new structure. For more information on the command-line client changes, read the following blog post: https://blog.docker.com/2017/01/whats-new-in-docker-1-13/

The first command we will be taking a look at is one of the most useful commands, not only in Docker but in any command-line utility you use: the help command. It is run simply like this:

$ docker help

This command will give you a full list of all the Docker commands at your disposal and a brief description of what each command does. For further help with a particular command, you can run the following:

$ docker <COMMAND> --help

Next up, let's run the hello-world container. To do this, simply run:

$ docker container run hello-world

It doesn't matter which host you are running Docker on; the same thing will happen on Linux, macOS, and Windows. Docker will download the hello-world container image and then execute it, and once it has executed, the container will be stopped.

Your Terminal session should look like the following:

Let's try something a little more adventurous; let's download and run an NGINX container by running the following two commands:

$ docker image pull nginx
$ docker container run -d --name nginx-test -p 8080:80 nginx

The first of the two commands downloads the NGINX container image, and the second launches a container in the background called nginx-test using the nginx image we pulled. It also maps port 8080 on our host machine to port 80 on the container, making it accessible to our local browser at http://localhost:8080/.
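
If you would rather confirm this from the command line than a browser, the following should return the HTML of the default NGINX welcome page (curl ships with macOS and most Linux distributions; on Windows, the browser check is the simplest option):

$ curl http://localhost:8080/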

As you can see from the following screenshots, the command and results are exactly the same on all three OS types. Here we have Linux:

This is macOS:

And this is how it looks on Windows:

In the following three chapters, we will look at using the Docker command-line client in more detail; for now, let's stop and remove our nginx-test container by running the following:

$ docker container stop nginx-test
$ docker container rm nginx-test
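
To confirm that the container has indeed been removed, you can list all containers, both running and stopped; nginx-test should no longer appear in the output:

$ docker container ls -a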
 

The Docker ecosystem


There are a lot of tools supplied and supported by Docker; some we have already mentioned, and others we will cover in later chapters. Before we finish this, our first chapter, we should get an idea of the tools we are going to be using. The most important of them is Docker Engine.

This is the core of Docker, and all of the other tools we will be covering use it. We have already been using it, as we installed it in the Docker installers/installation and The Docker command-line client sections of this chapter. There are currently two versions of Docker Engine: Docker Enterprise Edition (Docker EE) and Docker Community Edition (Docker CE). We will be using Docker CE throughout this book. The release cycle for Docker Engine has recently changed to become more predictable.

Docker CE and EE have stable editions that are updated once every quarter; as you may have noticed, we have been running Docker CE v17.03.x for our Docker for Mac and Docker for Windows installations. There is also an Edge version, which introduces features at a quicker pace--every month, in fact. This version may not be suitable for production workloads; as you may have noticed, when we installed Docker on Linux, it was running the Edge version, Docker CE 17.05.x. The other tools supplied and supported by Docker that we will be using are as follows:

  • Docker Compose: A tool that allows you to define and share multi-container definitions; it is detailed in Chapter 5, Docker Compose.
  • Docker Machine: A tool to launch Docker hosts on multiple platforms; we will cover this in Chapter 6, Docker Machine.
  • Docker Hub: A repository for your Docker images, covered in the next three chapters.
  • Docker Store: A storefront for official Docker images and plugins as well as licensed products. Again, we will cover this in the next three chapters.
  • Docker Swarm: A multi-host-aware orchestration tool, covered in detail in Chapter 7, Docker Swarm.
  • Docker for Mac: We have covered Docker for Mac in this chapter.
  • Docker Cloud: Docker's Container as a Service (CaaS) offering, covered in detail in Chapter 10, Docker Cloud.
  • Docker for Windows: We have covered Docker for Windows in this chapter.
  • Docker for Amazon Web Services: A best-practice Docker Swarm installation that targets AWS, covered in Chapter 10, Docker Cloud.
  • Docker for Azure: A best-practice Docker Swarm installation that targets Azure, covered in Chapter 10, Docker Cloud.

We will also be looking at some third-party services in later chapters.

 

Summary


In this chapter, we covered the basic information you should already know (or now know) for the chapters ahead. We went over the basics of what Docker is and how it fares compared to other host types. We went over the installers, how they operate on different operating systems, and how to control Docker through the command line. Be sure to look at the requirements for the installers to ensure you use the correct one for your operating system.

Then, we took a small dive into the basic Docker commands to get you started. We will be looking at all the management commands in future chapters to get a more in-depth understanding of what they are as well as how and when to use them. Finally, we discussed the Docker ecosystem and the responsibilities of each of the different tools.

In the next chapters, we will be taking a look at how to build base containers, and we will also look in depth at Dockerfiles and places to store your images, as well as using environment variables and Docker volumes.

About the Authors

  • Russ McKendrick

    Russ McKendrick is an experienced system administrator who has been working in IT and related industries for over 25 years. During his career, he has had varied responsibilities, from looking after an entire IT infrastructure to providing first-line, second-line, and senior support in both client-facing and internal teams for large organizations.

    Russ supports open source systems and tools on public and private clouds at N4Stack, a Node4 company, where he is the practice manager (SRE and DevOps). In his spare time, he has written several books including Mastering Docker, Learn Ansible and Kubernetes for Serverless Applications, all published by Packt Publishing.

  • Scott Gallagher

    Scott Gallagher has been fascinated with technology since he played Oregon Trail in elementary school. His love for it continued through middle school as he worked on more Apple IIe computers. In high school, he learned how to build computers and program in BASIC. His college years were all about server technologies such as Novell, Microsoft, and Red Hat. After college, he continued to work on Novell, all the while maintaining an interest in all technologies. He then moved on to manage Microsoft environments and, eventually, what he was most passionate about: Linux environments. Now, his focus is on Docker and cloud environments.

