These days, Docker is gaining more market share and mind share among information technology (IT) professionals across the globe. In this chapter, we would like to shed more light on Docker and show why it is being touted as the next best thing for the impending cloud IT era. In order to make this book relevant to software engineers, we have listed the steps needed for crafting highly usable application-aware containers, registering them in a public registry repository, and then deploying them in multiple IT environments (on-premises as well as off-premises). We have clearly explained the prerequisites and the most important details of Docker, with the help of the education and experience we gained through a series of careful implementations of several useful Docker containers on different systems, including our own laptops as well as a few leading public Cloud Service Providers (CSPs).
We would like to introduce you to the practical side of Docker and the game-changing, Docker-inspired containerization movement.
In this chapter, we will cover the following topics:
An introduction to Docker
Docker on Linux
Differentiating between containerization and virtualization
Installing the Docker engine
Understanding the Docker setup
Downloading the first image
Running the first container
Troubleshooting the Docker containers
Due to its overwhelming usage across industry verticals, the IT domain has seen many new and pathbreaking technologies, used not only for bringing in more decisive automation but also for overcoming existing complexities. Virtualization set out to optimize IT infrastructure and make it portable. However, virtualization technology has serious drawbacks, such as the performance degradation caused by heavyweight virtual machines (VMs), the lack of application portability, the slow provisioning of IT resources, and so on. Therefore, the IT industry has been steadily embarking on a Docker-inspired containerization journey. The Docker initiative has been specifically designed to make the containerization paradigm easier to grasp and use, and it enables the containerization process to be accomplished in a risk-free and accelerated fashion.
Precisely speaking, Docker is an open source containerization engine, which automates the packaging, shipping, and deployment of any software application, presented as a lightweight, portable, and self-sufficient container that will run virtually anywhere.
A Docker container is a software bucket comprising everything necessary to run the software independently. There can be multiple Docker containers in a single machine and containers are completely isolated from one another as well as from the host machine.
In other words, a Docker container includes a software component along with all of its dependencies (binaries, libraries, configuration files, scripts, jars, and so on). Docker containers run fluently on any x64 Linux kernel that supports namespaces, control groups, and a suitable file system, such as Another Union File System (AUFS). However, as indicated later in this chapter, there are pragmatic workarounds for running Docker on other mainstream operating systems, such as Windows, Mac, and so on. A Docker container has its own process space and network interface. It can also run things as root and have its own /sbin/init, which can be different from the host machine's.
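Before installing anything, you can check whether your host exposes the kernel features just mentioned. The following is a minimal sketch on Ubuntu 14.04; the outputs will vary from machine to machine, and the aufs entry shows up only when the AUFS module has been loaded:

$ uname -r                      # a reasonably recent kernel is needed
3.13.0-45-generic
$ ls /proc/self/ns              # the namespaces isolating a container's view
ipc  mnt  net  pid  user  uts
$ grep aufs /proc/filesystems   # present only if the AUFS module is loaded
nodev   aufs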
In a nutshell, the Docker solution lets us quickly assemble composite, enterprise-scale, and business-critical applications out of different and distributed software components. Containers eliminate the friction that comes with shipping code to distant locations, and Docker lets us test the code and then deploy it in production as fast as possible. The Docker solution primarily consists of the following components:
The Docker engine
The Docker Hub
The Docker engine enables the realization of purpose-specific as well as generic Docker containers. The Docker Hub is a fast-growing repository of Docker images that can be combined in different ways to produce publicly findable, network-accessible, and widely usable containers.
Suppose that we want to directly run the containers on a Linux machine. The Docker engine produces, monitors, and manages multiple containers as illustrated in the following diagram:

The preceding diagram vividly illustrates how future IT systems would have hundreds of application-aware containers, which would innately be capable of facilitating their seamless integration and orchestration for deriving modular applications (business, social, mobile, analytical, and embedded solutions). These contained applications could fluently run on converged, federated, virtualized, shared, dedicated, and automated infrastructures.
It is pertinent and paramount to extract and expound the game-changing advantages of the Docker-inspired containerization movement over the widely used and fully matured virtualization paradigm. The containerization paradigm achieves strategically sound optimizations through a few crucial and well-defined rationalizations and the insightful sharing of compute resources. Some of the innate and hitherto underutilized capabilities of the Linux kernel have been rediscovered, and they bring in the much-wanted automation and acceleration that will enable the fledgling containerization idea to reach greater heights in the days ahead, especially in the cloud era. The noteworthy business and technical advantages include bare metal-scale performance, real-time scalability, higher availability, and so on. All the unwanted bulges and flab are eliminated to speed up the roll-out of hundreds of application containers in seconds and to reduce the time to market in a cost-effective fashion. The following diagram on the left-hand side depicts the virtualization aspect, whereas the diagram on the right-hand side illustrates the simplifications achieved in containers:

The following table gives a direct comparison between virtual machines and containers:
Virtual Machines (VMs) | Containers
---|---
Represents hardware-level virtualization | Represents operating system virtualization
Heavyweight | Lightweight
Slow provisioning | Real-time provisioning and scalability
Limited performance | Native performance
Fully isolated and hence more secure | Process-level isolation and hence less secure
A hybrid model, having features of both virtual machines and containers, is being developed: the emerging system containers, as illustrated in the preceding right-hand-side diagram. Traditional hypervisors, which implicitly represent hardware virtualization, secure the environment directly with the help of the server hardware. That is, VMs are completely isolated from other VMs as well as from the underlying system. For containers, however, this isolation happens at the process level, and hence they are more liable to security incursions. Furthermore, some vital features that are available in VMs are not available in containers; for instance, there is no support for SSH, TTY, and other security functionalities in containers. On the other hand, VMs are resource-hungry and their performance gets substantially degraded; hence, only a few VMs can be provisioned and made to work on a single machine. In containerization parlance, the overhead of a classic hypervisor and a guest operating system is eliminated to achieve bare metal performance. Thus, on one hand, we have fully isolated VMs with average performance, and on the other, containers that lack some key features but are blessed with high performance. Having understood the ensuing needs, product vendors are working on system containers. The objective of this new initiative is to provide full system containers with the performance you would expect from bare metal servers, but with the experience of virtual machines. The system containers in the preceding right-hand-side diagram represent the convergence of two important concepts (virtualization and containerization) for smarter IT. We will hear and read more about this blending in the future.
Having recognized the role and relevance of the containerization paradigm for IT infrastructure augmentation and acceleration, a few technologies that leverage the unique and decisive impact of the containerization idea have come into existence. They are enumerated as follows:
LXC (Linux Containers): This is the father of all kinds of containers and it represents an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux machine.
The article LXC on the Wikipedia website states that:
"The Linux kernel provides the cgroups functionality that allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need for starting any virtual machines, and namespace isolation functionality that allows complete isolation of an applications' view of the operating environment, including process trees, networking, user IDs and mounted file systems."
You can get more information from http://en.wikipedia.org/wiki/LXC. A hands-on sketch of this cgroups functionality follows this list.
OpenVZ: This is an OS-level virtualization technology based on the Linux kernel and the operating system. OpenVZ allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs).
The FreeBSD jail: This is a mechanism that implements an OS-level virtualization, which lets the administrators partition a FreeBSD-based computer system into several independent mini-systems called jails.
AIX Workload Partitions (WPARs): These are software implementations of the OS-level virtualization technology, which provide application environment isolation and resource control.
Solaris Containers (including Solaris Zones): This is an implementation of the OS-level virtualization technology for the x86 and SPARC systems. A Solaris Container is a combination of the system resource controls and boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance.
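To get a hands-on feel for the cgroups functionality mentioned in the LXC quote earlier, you can drive the cgroup filesystem by hand, without any container tooling. The following is a minimal sketch, assuming the cgroup v1 hierarchy is mounted under /sys/fs/cgroup (as it is on Ubuntu 14.04); the group name demo is purely illustrative:

# create a memory cgroup, cap it at 100 MB, and move the current shell into it
$ sudo mkdir /sys/fs/cgroup/memory/demo
$ echo 104857600 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
$ echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks

Every process started from this shell is now subject to the 100 MB memory cap; this is precisely the kind of resource limitation that LXC, and Docker after it, automate for you.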
In this book, considering the surging popularity and mass adoption of Docker, we have chosen to dig deeper and dwell in detail on the Docker platform, the one-stop solution for the simplified and streamlined containerization movement.
The Docker engine is built on top of the Linux kernel and it extensively leverages the kernel's features. Therefore, at this point in time, the Docker engine can be directly run only on Linux OS distributions. Nonetheless, the Docker engine can be run on the Mac and Microsoft Windows operating systems by using lightweight Linux VMs with the help of adapters, such as Boot2Docker. Due to the surging growth of Docker, it is now being packaged by all major Linux distributions so that they can retain their loyal users as well as attract new ones. You can install the Docker engine by using the corresponding packaging tool of the Linux distribution; for example, by using the apt-get command for Debian and Ubuntu, and the yum command for Red Hat, Fedora, and CentOS.
Note
We have chosen the Ubuntu Trusty 14.04 (LTS) (64-bit) Linux distribution for all practical purposes.
This section explains the steps involved in installing the Docker engine from the Ubuntu package repository in detail. At the time of writing this book, the Ubuntu repository had packaged Docker 1.0.1, whereas the latest version of Docker was 1.5. We strongly recommend installing Docker version 1.5 or greater by using any one of the methods described in the next section.
However, if for any reason you have to install the Ubuntu packaged version, then please follow the steps described here:
The best practice for installing the Ubuntu packaged version is to begin the installation process by resynchronizing with the Ubuntu package repository. This step updates the package repository with the latest published packages, thus ensuring that we always get the latest published version; use the command shown here:
$ sudo apt-get update
Tip
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Kick-start the installation by using the following command. This setup will install the Docker engine along with a few more support files, and it will also start the docker service instantaneously:

$ sudo apt-get install -y docker.io

For your convenience, you can create a soft link for docker.io called docker. This will enable you to execute Docker commands as docker instead of docker.io. You can do this by using the following command:

$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
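A quick sanity check confirms that the soft link resolves; the version string will reflect whatever release Ubuntu has packaged:

$ which docker
/usr/local/bin/docker
$ sudo docker -v              # prints the packaged version, 1.0.1 at the time of writing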
The official distributions might not package the latest version of Docker. In such a case, you can install the latest version of Docker either manually or by using the automated scripts provided by the Docker community.
For installing the latest version of Docker manually, follow these steps:
Add the Docker release tool's repository path to your APT sources, as shown here:
$ sudo sh -c "echo deb https://get.docker.io/ubuntu \ docker main > /etc/apt/sources.list.d/docker.list"
Import the Docker release tool's public key by running the following command:
$ sudo apt-key adv --keyserver \
hkp://keyserver.ubuntu.com:80 --recv-keys \
36A1D7869245C8950F966E92D8576A8BA88D21E9
Resynchronize with the package repository by using the command shown here:
$ sudo apt-get update
Install docker and then start the docker service:

$ sudo apt-get install -y lxc-docker
The Docker community has taken a step forward by hiding these details in an automated install script. This script enables the installation of Docker on most of the popular Linux distributions, either through the curl command or through the wget command, as shown here:

For the curl command:

$ sudo curl -sSL https://get.docker.io/ | sh

For the wget command:

$ sudo wget -qO- https://get.docker.io/ | sh
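Whichever method you choose, a common post-install convenience is to add your user to the docker group, so that the docker client can be invoked without sudo. This is a sketch, assuming the installation created a docker group (the automated script does this); note that membership in this group effectively grants root-equivalent access to the Docker daemon:

$ sudo usermod -aG docker $USER
# log out and log back in for the group change to take effect, then verify:
$ docker version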
It's important to understand Docker's components and their versions, storage, execution drivers, file locations, and so on. Incidentally, the quest for understanding the Docker setup will also reveal whether the installation was successful or not. You can accomplish this by using two docker subcommands, namely docker version and docker info.
Let's start our docker journey with the docker version subcommand, as shown here:
$ sudo docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
Although the docker version subcommand lists many lines of text, as a Docker user, you should know what the following output lines mean:
The client version
The client API version
The server version
The server API version
The client and server versions considered here are 1.5.0, and the client API and server API versions are both 1.17.
If we dissect the internals of the docker version subcommand, then it will first list the client-related information that is stored locally. Subsequently, it will make a REST API call to the server over HTTP to obtain the server-related details.
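You can observe this client-server conversation yourself by talking to the daemon's UNIX socket directly. The following is a minimal sketch, assuming a netcat build with UNIX socket support, such as the netcat-openbsd package that ships with Ubuntu:

$ echo -e "GET /version HTTP/1.0\r\n" | sudo nc -U /var/run/docker.sock

The JSON that comes back carries the same server-side fields that the docker version subcommand renders as text.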
Let's learn more about the Docker environment using the docker info subcommand:
$ sudo docker -D info
Containers: 0
Images: 0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.13.0-45-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 4
Total Memory: 3.908 GiB
Name: dockerhost
ID: ZNXR:QQSY:IGKJ:ZLYU:G4P7:AXVC:2KAJ:A3Q5:YCRQ:IJD3:7RON:IJ6Y
Debug mode (server): false
Debug mode (client): true
Fds: 10
Goroutines: 14
EventsListeners: 0
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
As you can see in the output of a freshly installed Docker engine, the number of Containers and Images is invariably zero. The Storage Driver has been set up as aufs, and the directory has been given the /var/lib/docker/aufs location. The Execution Driver has been set to the native mode. This command also lists details, such as the Kernel Version, the Operating System, the number of CPUs, the Total Memory, and the Name of the Docker host.
Having installed the Docker engine successfully, the next logical step is to download images from the Docker registry. The Docker registry is an application repository, which hosts a range of applications that vary between basic Linux images and advanced applications. The docker pull subcommand is used for downloading any number of images from the registry. In this section, we will download a tiny version of Linux called the busybox image by using the following command:
$ sudo docker pull busybox
511136ea3c5a: Pull complete
df7546f9f060: Pull complete
ea13149945cb: Pull complete
4986bf8c1536: Pull complete
busybox:latest: The image you are pulling has been verified.
Important: image verification is a tech preview feature and should not be relied on to provide security.
Status: Downloaded newer image for busybox:latest
Once the image has been downloaded, it can be verified by using the docker images subcommand, as shown here:
$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED        VIRTUAL SIZE
busybox      latest   4986bf8c1536   12 weeks ago   2.433 MB
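Each of the hash-prefixed lines in the earlier pull output corresponds to an image layer. You can list the layers that make up the downloaded image with the docker history subcommand, as sketched here:

$ sudo docker history busybox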
Now, you can start your first Docker container. It is standard practice to start with the basic Hello World! application. In the following example, we will echo Hello World! by using the busybox image, which we have already downloaded, as shown here:
$ sudo docker run busybox echo "Hello World!"
"Hello World!"
Cool, isn't it? You have set up your first Docker container in no time. In the preceding example, the docker run subcommand has been used for creating a container and for printing Hello World! by using the echo command.
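The container above ran a single command and exited immediately. As a variation, you can keep a container alive interactively by allocating a pseudo-terminal with the -i and -t options, as sketched here:

$ sudo docker run -i -t busybox sh
/ # hostname          # prints the container's own hostname, not the host's
/ # exit

Typing exit terminates the shell, and with it the container.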
Amazon Web Services (AWS) announced the availability of Docker containers at the beginning of 2014, as a part of its Elastic Beanstalk offering. At the end of 2014, AWS revolutionized Docker deployment and provided its users with the following options for running Docker containers:
The Amazon EC2 container service (only available in preview mode at the time of writing this book)
Docker deployment by using the Amazon Elastic Beanstalk service
The Amazon EC2 container service lets you start and stop container-enabled applications with the help of simple API calls. AWS has introduced the concept of a cluster for viewing the state of your containers. You can view the tasks from a centralized service, and it gives you access to many familiar Amazon EC2 features, such as security groups, EBS volumes, and IAM roles.
Please note that this service is still not available in the AWS console. You need to install the AWS CLI on your machine to deploy, run, and access this service.
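To give you a taste of the CLI-driven workflow, here is a minimal sketch, assuming that your AWS CLI build already includes the ECS preview commands and that demo-cluster is just an illustrative name:

$ aws ecs create-cluster --cluster-name demo-cluster
$ aws ecs list-clusters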
The AWS Elastic Beanstalk service supports the following:
A single container that supports Elastic Beanstalk by using a console. Currently, it supports the PHP and Python applications.
A single container that supports Elastic Beanstalk by using a command line tool called eb. It supports the same PHP and Python applications.
The use of multiple container environments by using Elastic Beanstalk.
Currently, AWS supports the latest Docker version, which is 1.5.
This section provides a step-by-step process to deploy a sample application on a Docker container running on AWS Elastic Beanstalk. The following are the steps of deployment:
Log in to the AWS Elastic Beanstalk console by using the URL https://console.aws.amazon.com/elasticbeanstalk/.
Select a region where you want to deploy your application, as shown here:
Select the Docker option from the drop-down menu, and then click on Launch Now. The next screen will appear after a few minutes, as shown here:
Now, click on the URL that is next to Default-Environment (Default-Environment-pjgerbmmjm.elasticbeanstalk.com), as shown here:
Most of the time, you will not encounter any issues when installing Docker. However, unplanned failures might occur, so it is necessary to discuss prominent troubleshooting techniques and tips. Let's begin with the troubleshooting know-how in this section. The first tip is to check the running status of Docker by using the following command:
$ sudo service docker status
However, if Docker has been installed by using the Ubuntu package, then you will have to use docker.io as the service name. If the docker service is running, then this command will print the status as start/running along with its process ID.
If you are still experiencing issues with the Docker setup, then you could open the Docker log file at /var/log/upstart/docker.log for further investigation.
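If the log file does not make the failure obvious, a useful last resort is to stop the service and run the daemon in the foreground with debug output. In Docker 1.5, the daemon is started through the same binary by using the -d flag, as sketched here:

$ sudo service docker stop
$ sudo docker -d -D    # run the daemon in the foreground with debug logging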
Containerization is going to be a dominant and decisive paradigm for enterprise as well as cloud IT environments in the future because of its hitherto unforeseen automation and acceleration capabilities. There are several mechanisms in place for taking the containerization movement to greater heights. However, Docker has zoomed ahead of everyone in this hot race, and it has successfully decimated the previously elucidated barriers.
In this chapter, we have concentrated exclusively on the practical side of Docker to give you a head start in learning about this most promising technology. We have listed the appropriate steps and tips for effortlessly installing the Docker engine in different environments, and for building, installing, and running a few sample Docker containers, both in local as well as remote environments. We will dive deep into the world of Docker in the ensuing chapters to extract and share tactically and strategically sound information with you. Please read on to gain the required knowledge about advanced topics, such as container integration, orchestration, management, governance, security, and so on, through the Docker engine. We will also discuss a bevy of third-party tools.