Initially, Internet services ran on hardware and life was okay. To scale a service to handle peak capacity, one needed to buy enough hardware to handle the load. When the peak passed, the hardware sat unused or underused but ready to serve. Unused hardware is a waste of resources. There was also the constant threat of configuration drift because of the subtle changes made with each new install.
Then came VMs and life was good. VMs could be scaled to the size that was needed and no more. Multiple VMs could be run on the same hardware. If there was an increase in demand, new VMs could be started on any physical server that had room. More work could be done on less hardware. Even better, new VMs could be started in minutes when needed, and destroyed when the load slackened. It was even possible to outsource the hardware to companies such as Amazon, Google, and Microsoft. Thus elastic computing was born.
VMs, too, had their problems. Each VM required that additional memory and storage space be allocated to support the operating system. In addition, each virtualization platform had its own way of doing things. Automation that worked with one system had to be completely retooled to work with another. Vendor lock-in became a problem.
Then came Docker. What VMs did for hardware, Docker does for the VM. Services can be started across multiple servers and even multiple providers. Once deployed, containers can be started in seconds without the resource overhead of a full VM. Even better, applications developed in Docker can be deployed exactly as they were built, minimizing the problems of configuration drift and package maintenance.
The question is: how does one do it? That process is called orchestration, and like an orchestra, there are a number of pieces needed to build a working cluster. In the following chapters, I will show a few ways of putting those pieces together to build scalable, reliable services with faster, more consistent deployments.
Let's go through a quick review of the basics so that we are all on the same page. The following topics will be covered:
How to install Docker Engine on Amazon Web Services (AWS), Google Compute Engine (GCE), Microsoft Azure, and a generic Linux host with docker-machine
An introduction to Docker-specific distributions including CoreOS, RancherOS, and Project Atomic
Starting, stopping, and inspecting containers with Docker
Managing Docker images
Docker Engine is the process that actually runs and controls containers on each Docker host. It is the engine that makes your cluster work. It provides the daemon that runs and manages the containers, an API that the various tools use to interact with Docker, and a command-line interface.
Docker Engine is easy to install with a script provided by Docker. The Docker project recommends that you pipe the download through sh:
$ wget -qO - https://get.docker.com/ | sh
I cannot state strongly enough how dangerous that practice is. If get.docker.com is compromised, the script that you download could compromise your systems. Instead, download the file locally and review it to ensure that you are comfortable with what it does. After you have reviewed it, you could load it onto a local web server for easy access or push it out with a configuration management tool such as Puppet, Chef, or Ansible:
$ wget -qO install-docker.sh https://get.docker.com/
After you have reviewed the script, run it:
$ sh install-docker.sh
If you are running a supported Linux distribution, the script will prepare your system and install Docker. Once installed, Docker will be updated by the local package system, such as apt on Debian and Ubuntu or yum on CentOS and Red Hat Enterprise Linux (RHEL). The install script starts Docker and configures it to start on system boot.
Note
A list of supported operating systems, distributions, and cloud providers is located at https://docs.docker.com/engine/installation/.
By default, anyone using Docker locally will need root privileges. You can change that by adding users to the docker group, which is created by the install packages. They will be able to use Docker without root, starting with their next login.
Docker provides a very nice tool called Docker Machine to facilitate the deployment and management of Docker hosts on various cloud services and Linux hosts. Docker Machine is installed as part of the Docker Toolbox but can be installed separately. Full instructions can be found at https://github.com/docker/machine/releases/.
Docker Machine supports many different cloud services including AWS, Microsoft Azure, and GCE. It can also be configured to connect to any existing supported Linux server. The driver docker-machine uses is defined by the --driver flag. Each driver has its own specific flags that control how docker-machine works with the service.
AWS is a great way to run Docker hosts, and docker-machine makes it easy to start and manage them. You can use the Elastic Load Balancer (ELB) to send traffic to containers running on a specific host or to load balance among multiple hosts.

First of all, you will need to get your access credentials from AWS. You can use them in a couple of ways. First, you can include them on the command line when you run docker-machine:
$ docker-machine create --driver amazonec2 --amazonec2-access-key AK*** --amazonec2-secret-key DM*** ...
Second, you can add them to ~/.aws/credentials. Putting your credentials in a credentials file means that you will not have to include them on the command line every time you use docker-machine to work with AWS. It also keeps your credentials off the command line and out of the process list. The following examples will assume that you have created a credentials file to keep from cluttering the command line:
[default]
aws_access_key_id = AK***
aws_secret_access_key = DM***
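If you prefer to script this step, the following is a minimal sketch that writes the file from the shell. The key values are the placeholders from the example above; substitute your real keys:

```shell
# Sketch: create the credentials file docker-machine reads for AWS.
# AK*** and DM*** are placeholders, not real keys.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AK***
aws_secret_access_key = DM***
EOF
chmod 600 ~/.aws/credentials   # keep the secrets readable only by you
```

Setting the file to mode 600 keeps other local users from reading your keys.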
A new Docker host is created with the create subcommand. You can specify the region using the --amazonec2-region flag. By default, the host will be started in the us-east-1 region. The last item on the command line is the name of the instance, in this case dm-aws-test:
$ docker-machine create --driver amazonec2 --amazonec2-region us-west-2 dm-aws-test
Creating CA: /home/user/.docker/machine/certs/ca.pem
Creating client certificate: /home/user/.docker/machine/certs/cert.pem
Running pre-create checks...
Creating machine...
(dm-aws-test) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env dm-aws-test

The command takes a couple of minutes to run, but when it's complete, you have a fully functional Docker host ready to run containers. The ls subcommand will show you all the machines that docker-machine knows about:
$ docker-machine ls
NAME          ACTIVE   DRIVER      STATE     URL                      SWARM   DOCKER    ERRORS
dm-aws-test   -        amazonec2   Running   tcp://52.43.71.87:2376           v1.12.1
The machine's IP address is listed in the output of docker-machine ls, but you can also get it by running docker-machine ip. To start working with your new machine, set up your environment by running eval $(docker-machine env dm-aws-test).
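The env subcommand simply prints shell export lines, which eval applies to your current session. A sketch of typical output — the IP matches the ls output above, and the cert path is a typical default, so treat the values as illustrative rather than authoritative:

```shell
# Illustrative output of `docker-machine env dm-aws-test`.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://52.43.71.87:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/dm-aws-test"
export DOCKER_MACHINE_NAME="dm-aws-test"
# Run this command to configure your shell:
#   eval $(docker-machine env dm-aws-test)
```

Once these variables are set, the local docker client talks to the remote engine over TLS instead of the local daemon.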
Now when you run Docker, it will talk to the instance running on AWS. It is even possible to ssh into the server using docker-machine:
$ docker-machine ssh dm-aws-test
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-18-generic x86_64)
 * Documentation: https://help.ubuntu.com/
  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud
New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
ubuntu@dm-aws-test:~$
Once you are done with the instance, you can stop it with docker-machine stop and remove it with docker-machine rm:
$ docker-machine stop dm-aws-test
Stopping "dm-aws-test"...
Machine "dm-aws-test" was stopped.
$ docker-machine rm dm-aws-test
About to remove dm-aws-test
Are you sure? (y/n): y
Successfully removed dm-aws-test
Note
There are a number of options that can be passed to docker-machine create, including options to use a custom AMI, instance type, or volume size. Complete documentation is available at https://docs.docker.com/machine/drivers/aws/.
GCE is another big player in cloud computing. Its APIs make it very easy to start up new hosts running on Google's high-powered infrastructure. Google is an excellent choice to host your Docker hosts, especially if you are already using other Google Cloud services.
You will need to create a project in GCE for your containers. Authentication happens through Google Application Default Credentials (ADC). This means that authentication will happen automatically if you run docker-machine from a host on GCE. If you are running docker-machine from your own computer, you will need to authenticate using the gcloud tool. The gcloud tool requires Python 2.7 and can be downloaded from https://cloud.google.com/sdk/:
$ gcloud auth login
The gcloud tool will open a web browser to authenticate using OAuth 2. Select your account, then click Allow on the next page. You will be redirected to a page that shows that you have been authenticated. Now, on to the fun stuff:
$ docker-machine create --driver google \
    --google-project docker-test-141618 \
    --google-machine-type f1-micro \
    dm-gce-test

It will take a few minutes to complete depending on the size of the image you choose. When it is done, you will have a Docker host running on GCE. You can now use the ls, ssh, and ip subcommands just as you did with AWS. When you are done, run docker-machine stop and docker-machine rm to stop and remove the instance.
Note
There are a number of options that can be passed to docker-machine, including options to set the zone, image, and machine type. Complete documentation is available at https://docs.docker.com/machine/drivers/gce/.
Microsoft is a relative newcomer to the cloud services game but they have built an impressive service. Azure underpins several large systems including Xbox Live.
Azure uses the subscription ID for authentication. You will be given an access code and directed to enter it at https://aka.ms/devicelogin. Select Continue, choose your account, then click on Accept. You can close the browser window when you are done:
$ docker-machine create --driver azure --azure-subscription-id 30*** dm-azure-test

Again, it will take some time to finish. Once done, you will be able to run containers on your new host. As always, you can manage your new host with docker-machine. There is an important notice in the output when you remove a machine on Azure. It is worth making sure that everything does get cleaned up:
$ docker-machine rm dm-azure-test
About to remove dm-azure-test
Are you sure? (y/n): y
(dm-azure-test) NOTICE: Please check Azure portal/CLI to make sure you have no leftover resources to avoid unexpected charges.
(dm-azure-test) Removing Virtual Machine resource. name="dm-azure-test"
(dm-azure-test) Removing Network Interface resource. name="dm-azure-test-nic"
(dm-azure-test) Removing Public IP resource. name="dm-azure-test-ip"
(dm-azure-test) Removing Network Security Group resource. name="dm-azure-test-firewall"
(dm-azure-test) Attempting to clean up Availability Set resource... name="docker-machine"
(dm-azure-test) Removing Availability Set resource... name="docker-machine"
(dm-azure-test) Attempting to clean up Subnet resource... name="docker-machine"
(dm-azure-test) Removing Subnet resource... name="docker-machine"
(dm-azure-test) Attempting to clean up Virtual Network resource... name="docker-machine-vnet"
(dm-azure-test) Removing Virtual Network resource... name="docker-machine-vnet"
Successfully removed dm-azure-test
Note
There are many options for the Azure driver including options to choose the image, VM size, location, and even which ports need to be open on the host. For full documentation refer to https://docs.docker.com/machine/drivers/azure/ .
You can also use the generic driver of docker-machine to install and manage Docker on an existing host running a supported Linux distribution. There are a few things to keep in mind. First, the host must already be running; Docker can be pre-installed, which is useful if you are installing Docker as part of your host build process. Second, if Docker is running, it will be restarted, which means that any running containers will be stopped. Third, you need to have an existing SSH key pair.
The following command will use SSH to connect to the server specified by the --generic-ip-address flag, using the key identified by --generic-ssh-key and the user set with --generic-ssh-user. There are two important things to keep in mind for the SSH user. First, the user must be able to use sudo without a password prompt. Second, the public key must be in the authorized_keys file in the user's $HOME/.ssh/ directory:
$ docker-machine create --driver generic \
    --generic-ip-address 52.40.113.7 \
    --generic-ssh-key ~/.ssh/id_rsa \
    --generic-ssh-user ubuntu \
    dm-ubuntu-test
This process will take a couple of minutes, though it will be faster than creates on cloud services, which also have to provision the VM. Once it is complete, you can manage the host with docker-machine and start running containers.
The only difference between the generic driver and the cloud drivers is that the stop subcommand does not work. This means that stopping a generic Docker host has to be done from the host itself.
Note
Full documentation can be found at https://docs.docker.com/machine/drivers/generic/ .
One of the benefits of running services with Docker is that the server distribution no longer matters. If your application needs CentOS tools, it can run in a container based on CentOS. The same is true for Ubuntu. In fact, services running in containers based on different distributions can run side-by-side without issue. This has led to a push for very thin, Docker-specific distributions.
These distributions have one purpose: to run Docker containers. As such, they are very small and very limited in what comes out of the box. This is a huge benefit to cloud wranglers everywhere. Fewer tools mean fewer updates and more uptime. It also means that the host OS has a much smaller attack surface, giving you greater security.
Their focus on Docker is a great strength, but it can also be a weakness. You may find yourself up against a wall if you need something specific on your host that is not available. On the positive side, many tools that might not be available in the default install can be run from a container.
CoreOS (https://coreos.com) was one of the first Docker-specific distributions. They have since started their own container project called rkt, but still include Docker. It is supported on all major cloud providers including Amazon EC2, Microsoft Azure, and GCE, and can be installed locally on bare metal or in a local cloud environment.
CoreOS uses the same system that Google uses on their Chromebooks to manage updates. If the updates cause a problem, they can be easily rolled back to the previous version. This can help you maintain stable and reliable services.
CoreOS is unusual in that it is designed to update the system automatically. The idea is that automatically updating the OS is the best way to maintain the security of the infrastructure. By default, this process ensures that only one host in a CoreOS cluster is rebooting at a time. It can also be configured to update only during maintenance windows, or be turned off completely. Before you decide to turn it off, remember that a properly configured orchestration system will keep services up and running even while the hosts they run on reboot.
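The reboot behavior is set through cloud-config. A minimal sketch selecting the one-host-at-a-time strategy; etcd-lock is the CoreOS option name, and other accepted values include reboot and off:

```
#cloud-config
coreos:
  update:
    reboot-strategy: etcd-lock
```

With etcd-lock, a host takes a cluster-wide lock in etcd before rebooting, which is how the one-at-a-time guarantee is enforced.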
CoreOS includes Docker but does not enable it. The following example from the CoreOS documentation shows how to enable Docker on boot. This is done by creating a new systemd unit file through cloud-init. On AWS, this is placed in the user data instance configuration:
#cloud-config
coreos:
  units:
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        BindIPv6Only=both
        Service=docker.service

        [Install]
        WantedBy=sockets.target
CoreOS uses a default core user. Users can be added through the cloud-config file:
#cloud-config
users:
  - name: "demo"
    passwd: "$6$HpqJOCs8XahT$mSgRYAn..."
    groups:
      - "sudo"
      - "docker"
An SSH key can also be added with the ssh-authorized-keys option in the users block. You can add any number of keys to each user:
#cloud-config
users:
  - default
  - name: "demo"
    ssh-authorized-keys:
      - "ssh-rsa AAAAB3Nz..."
CoreOS also supports sssd for authentication against LDAP and Active Directory (AD). Like Docker, it is enabled through cloud-config:
#cloud-config
coreos:
  units:
    - name: "sssd.service"
      command: "start"
      enable: true
The sssd configuration is in /etc/sssd/sssd.conf. Like the rest of CoreOS, the configuration can be added through cloud-config:
#cloud-config
write_files:
  - path: "/etc/sssd/sssd.conf"
    permissions: "0644"
    owner: "root"
    content: |
      config_file_version = 2
      ...
Note
Full configuration of sssd is beyond the scope of this book. Complete documentation is available at https://jhrozek.fedorapeople.org/sssd/1.13.1/man/sssd.conf.5.html.
RancherOS (http://rancher.com) was designed from the ground up to run Docker containers. It supports multiple orchestration tools including Kubernetes, Mesos, and Docker Swarm. There are ISOs available for installation to hardware and images for Amazon EC2, GCE, or OpenStack. You can even install RancherOS on a Raspberry Pi!
RancherOS is unique in that even the system tools run in Docker. Because of this, admins can choose a console that fits what they are comfortable with. Supported consoles include CentOS, Debian, Fedora, and Ubuntu, in addition to the default BusyBox-based console.
Rancher provides a very nice web interface for managing containers and clusters. It also makes it easy to run multiple environments including multiple orchestration suites. Rancher will be covered in more detail in Chapter 7, Using Simpler Orchestration Tools - Fleet and Cattle.
The cloud-init package is used to configure RancherOS. You can configure it to start containers on boot, format persistent disks, or do all sorts of other cool things. One thing it cannot do is add additional users. The idea is that there is very little reason to log in to the host once Docker is installed and configured. However, you can add SSH keys to the default rancher user to allow unique logins for different users:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nz...
If you need to add options to Docker, set them with cloud-init:
#cloud-config
rancher:
  docker:
    args: [daemon, ...]
Project Atomic (http://projectatomic.io) grew out of the Fedora Project but now supports CentOS, Fedora, and RHEL. Images are available for Linux KVM and Xen-based virtualization platforms, as well as for Amazon EC2 and bare-metal installation.
It uses OSTree and rpm-OSTree to provide atomic updates. In other words, every package is updated at the same time, in one chunk. You do not have to worry that one package failed to update and left the system with an older package. It also provides for easy rollback in case the updates cause problems.
Project Atomic comes pre-installed with Docker and Kubernetes. (Kubernetes is covered in detail in Chapter 5, Deploying and Managing Services with Kubernetes.) This makes it an ideal base for Kubernetes-based clusters. The addition of SELinux adds an extra level of security in case one of the running containers is compromised.
Deployment on almost any local cloud system or EC2 is made easier by the use of cloud-init. The cloud-init package lets you configure your Atomic hosts automatically on boot, instantly growing your Kubernetes cluster.
You can use cloud-init to set the password and enable SSH logins for the default user:
#cloud-config
password: c00l-pa$$word
ssh_pwauth: True
You can also add SSH keys to the default user's authorized_keys file:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nz...
  - ssh-rsa AAAAB3Nz...
Before we get into the nuts and bolts of Docker orchestration, let's run through the basics of running single applications in Docker. Seeing as this is a tech book, the first example is always some variant of Hello World, and this one is no different.
Note
By default, docker must be run as the root user or with sudo. Instead, you could add your user to the docker group and run containers without root.
$ docker run --rm ubuntu echo "Hello World"
This example is really simple. It downloads the ubuntu Docker image and uses that image to run the echo "Hello World" command. Simple, right? There is actually a lot going on here that you need to understand before you get into orchestration.
First of all, notice the word ubuntu in that command. It tells Docker that you want to use the ubuntu image. By default, Docker will download images from Docker Hub. There are a large number of images there, most uploaded by the community, but there are also a number of official images of various projects, of which ubuntu is one. These form a great base for almost any application.
Second, take special note of the --rm flag. When docker runs, it creates a writable layer for the container that holds any changes to the base image. Those changes persist as long as the container exists, even if the container is stopped. The --rm flag tells docker to remove the container and its writable layer as soon as it stops running. When you start automating containers with orchestration tools, you will often want containers removed when they stop. I'll explain more in the next section.
Lastly, take a look at the echo command. Yes, it is an echo all right, and it outputs Hello World just like one would expect. There are two important points here. First, the command can be anything in the image, and second, it must be in the image. For example, if you tried to run nginx in that command, Docker would throw an error similar to the following:
$ sudo docker run --rm ubuntu nginx
exec: "nginx": executable file not found in $PATH
Error response from daemon: Cannot start container 821fcd4e8ae76668d8c508190b338e166247dc46cb6bc2582731566e7f2c705a: [8] System error: exec: "nginx": executable file not found in $PATH
The "Hello World" examples are all good, but what if you want to do something actually useful? To quote the old iPhone ads: there's an app for that. There are many official applications available on Docker Hub. Let's continue with nginx and start a container running nginx to serve a simple website:
$ docker run --rm -p 80:80 --name nginx nginx
This command starts a new container based on the nginx image, downloading it if needed, and tells docker to forward TCP port 80 on the host to port 80 in the container. Now you can go to http://localhost and see a most welcoming website:

Welcoming people to Nginx is all well and good, but obviously, you will want to do more than that. That will be covered in more detail in Chapter 2, Building Multi-Container Applications with Docker Compose. For now, the default will be sufficient.
If you run the preceding example, you will notice that the console appears to hang. That's because docker starts processes in the foreground. What you are seeing is nginx waiting for a request. If you go to http://localhost, you should see messages from the nginx access log printed to the console. Another option is to add -d to your run command. That will detach the process from the console:
$ docker run -d -p 80:80 --name nginx nginx
There are multiple ways to stop a container. The first is to end the process running in the container. This will happen automatically for short-running processes. When a container is started in the foreground, as nginx was in the preceding example, pressing Ctrl + C in the session will stop nginx and the container. The other way is to use docker stop, which takes the image ID or name of the container. For example, to stop the container that was started earlier, you would run docker stop nginx.
Let's take a moment and look at how Docker deals with remote images. Remember, when you first ran docker run with the ubuntu or nginx images, docker had to first download the images from Docker Hub. When you run them again, Docker will use the downloaded images. You can see the images Docker knows about with the docker images command:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       latest   ae81bbda2b6c   5 hours ago   126.6 MB
nginx        latest   bfdd4ced794e   3 days ago    183.4 MB
Unwanted images can be deleted with the docker rmi command:
$ docker rmi ubuntu
Ack! What do you do if you deleted an image but you still need it? You have two options. First, you can run a container that uses the image. That works, but it can be cumbersome if running a container changes data or conflicts with something that is already running. Fortunately, there is the docker pull command:
$ docker pull ubuntu
This command will pull the default version of the ubuntu image from the repository on Docker Hub. Specific versions can be pulled by specifying them in the command:
$ docker pull ubuntu:trusty
The docker pull command is also used to update a previously downloaded image. For example, the ubuntu image is regularly updated with security fixes and other patches. If you do not pull the updates, docker on your host will continue to use the old image. Simply run docker pull again and any updates to the image will be downloaded.
Let's take a quick diversion and consider what this means for your hosts when you begin to orchestrate Docker. Unless you or your tools update the images on your hosts, you will find that some hosts are running old images while others are running the new, shiny image. This can open your systems up to intermittent failures or security holes. Most modern tools will take care of that for you or, at least, have an option to force a pull before deployment. Others may not; keep that in mind as you look at orchestration tools and strategies.
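If your tooling does not force a pull, a small helper can refresh everything a host already has. A sketch, assuming a docker CLI that supports the --format flag; the list_images_to_pull name is mine, not a Docker command:

```shell
# Sketch: turn `docker images` output into a deduplicated list of images
# worth re-pulling, skipping dangling <none> entries.
list_images_to_pull() {
    grep -v '<none>' | sort -u
}

# Sample input mirroring the `docker images` output shown earlier:
printf 'ubuntu:latest\nnginx:latest\nubuntu:latest\n<none>:<none>\n' \
    | list_images_to_pull
# On a real host, feed the result to docker pull:
#   docker images --format '{{.Repository}}:{{.Tag}}' \
#     | list_images_to_pull | xargs -n1 docker pull
```

Running this from cron or a configuration management tool keeps all hosts on the same image versions between deployments.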
At some point, you will want to see what containers are running on a specific host. Your orchestration tools will help with that, but there will be times when you need to go straight to the source to troubleshoot a problem. For that, there is the docker ps command. To demonstrate, start up a few containers:
$ for i in {1..4}; do docker run -d --name nginx$i nginx ; done
Now run docker ps:
$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED              STATUS              PORTS             NAMES
e5b302217aeb   nginx   "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp   nginx4
dc9d9e1e1228   nginx   "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp   nginx3
6009967479fc   nginx   "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp   nginx2
67ac8125983c   nginx   "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp   nginx1
You should see the containers that were just started, as well as any others that you may have running. If you stop the containers, they will disappear from docker ps:
$ for i in {1..4}; do docker stop nginx$i ; done
nginx1
nginx2
nginx3
nginx4
As you can see if you run docker ps, the containers are gone:
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
However, since the --rm flag was not used, docker still knows about them and could restart them:
$ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS                          PORTS   NAMES
e5b302217aeb   nginx   "nginx -g 'daemon off"   3 minutes ago   Exited (0) About a minute ago           nginx4
dc9d9e1e1228   nginx   "nginx -g 'daemon off"   3 minutes ago   Exited (0) About a minute ago           nginx3
6009967479fc   nginx   "nginx -g 'daemon off"   3 minutes ago   Exited (0) About a minute ago           nginx2
67ac8125983c   nginx   "nginx -g 'daemon off"   3 minutes ago   Exited (0) About a minute ago           nginx1
These are all the stopped nginx containers. The docker rm command will remove the containers:
$ for i in {1..4}; do docker rm nginx$i ; done
nginx1
nginx2
nginx3
nginx4
Until a container is removed, all the data is still available. You can restart the container and it will chug along quite happily with whatever data existed when the container was stopped. Once the container is removed, all the data within that container is removed right along with it. In many cases, you might not care but, in others, that data might be important. How you deal with that data will be an important part of planning out your orchestration system. In Chapter 3, Cluster Building Blocks - Registry, Overlay Networks, and Shared Storage, I will show you how you can move your data into shared storage to keep it safe.
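As a preview, one common local mechanism is a named volume, which survives docker rm of the container that used it. A sketch, where nginx-data is a volume name I made up and the DRY_RUN=echo guard prints the command instead of running it:

```shell
# Sketch: keep web content in a named volume so that removing the
# container does not remove the data. Docker creates the named volume
# automatically if it does not already exist.
DRY_RUN=echo   # remove this line to actually run the command
$DRY_RUN docker run -d -p 80:80 --name nginx \
    -v nginx-data:/usr/share/nginx/html nginx
```

Shared storage, covered in Chapter 3, extends the same idea across hosts.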
There comes a time in the life of anyone working with containers when you will need to jump into a running container and see what is going on. Fortunately, Docker has just the tool for you in the form of docker exec. The exec subcommand takes two arguments: the name of the container and the command to run:
$ docker exec -it nginx bash
I slipped an option in there that is important if you are starting an interactive process. The -it option tells Docker that you have an interactive process and that you want a new TTY. This is essential if you want to start a shell:
$ sudo docker exec -it nginx bash
root@fd8533fa2eda:/# ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 nginx: master process nginx -g daemon off;
    6 ?        S      0:00 nginx: worker process
   13 ?        Ss     0:00 bash
   18 ?        R+     0:00 ps ax
root@fd8533fa2eda:/# exit
In the preceding example, I connected to the container and ran ps ax to see every process that the container knew about. Getting a shell in the container can be invaluable when debugging. You can verify that files were added correctly or that internal scripts are properly handling environment variables passed in through docker.
It is also possible to run non-interactive programs. Let's use the same ps example as earlier:
$ sudo docker exec nginx ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 nginx: master process nginx -g daemon off;
    6 ?        S      0:00 nginx: worker process
   19 ?        Rs     0:00 ps ax
As you might expect, there is not much to see here, but it should give you an idea of what is possible. I often use non-interactive exec commands like this when debugging and a full shell is not needed.
In this chapter, you have learned the basics of using Docker. You saw how to quickly install Docker on AWS, GCE, Azure, and a generic Linux host. You were introduced to a few Docker-specific distributions and saw a little bit of how to configure them. Finally, we reviewed the basics of using the docker command. In the next chapter, I will walk you through using Docker Compose to build single and multi-container applications.