Part 1: The big picture stuff
1: Containers from 30,000 feet
Containers have taken over the world!
In this chapter we’ll cover why we have containers, what they do for us, and where we can use them.
The bad old days
Applications are at the heart of businesses. If applications break, businesses break. Sometimes they even go bust. These statements get truer every day!
Most applications run on servers. In the past we could only run one application per server. The open-systems world of Windows and Linux just didn’t have the technologies to safely and securely run multiple applications on the same server.
As a result, the story went something like this… Every time the business needed a new application, the IT department would buy a new server. Most of the time nobody knew the performance requirements of the new application, forcing the IT department to make guesses when choosing the model and size of the server to buy.
As a result, IT did the only thing it could do — it bought big fast servers that cost a lot of money. After all, the last thing anyone wanted, including the business, was under-powered servers unable to execute transactions and potentially losing customers and revenue. So, IT bought big. This resulted in over-powered servers operating as low as 5-10% of their potential capacity. A tragic waste of company capital and environmental resources!
Hello VMware!
Amid all of this, VMware, Inc. gave the world a gift — the virtual machine (VM). And almost overnight, the world changed into a much better place. We finally had a technology that allowed us to run multiple business applications safely on a single server. Cue wild celebrations!
This was a game changer. IT departments no longer needed to procure a brand-new oversized server every time the business needed a new application. More often than not, they could run new apps on existing servers that were sitting around with spare capacity.
All of a sudden, we could squeeze massive amounts of value out of existing corporate assets, resulting in a lot more bang for the company’s buck ($).
VMwarts
But… and there’s always a but! As great as VMs are, they’re far from perfect!
The fact that every VM requires its own dedicated operating system (OS) is a major flaw. Every OS consumes CPU, RAM and other resources that could otherwise be used to power more applications. Every OS needs patching and monitoring. And in some cases, every OS requires a license. All of this results in wasted time and resources.
The VM model has other challenges too. VMs are slow to boot, and portability isn’t great — migrating and moving VM workloads between hypervisors and cloud platforms is harder than it needs to be.
Hello Containers!
For a long time, the big web-scale players, like Google, have been using container technologies to address the shortcomings of the VM model.
In the container model, the container is roughly analogous to the VM. A major difference is that containers do not require their own full-blown OS. In fact, all containers on a single host share the host’s OS. This frees up huge amounts of system resources such as CPU, RAM, and storage. It also reduces potential licensing costs and reduces the overhead of OS patching and other maintenance. Net result: savings on the time, resource, and capital fronts.
Containers are also fast to start and ultra-portable. Moving container workloads from your laptop, to the cloud, and then to VMs or bare metal in your data center is a breeze.
Linux containers
Modern containers started in the Linux world and are the product of an immense amount of work from a wide variety of people over a long period of time. Just as one example, Google LLC has contributed many container-related technologies to the Linux kernel. Without these, and other contributions, we wouldn’t have modern containers today.
Some of the major technologies that enabled the massive growth of containers in recent years include kernel namespaces, control groups, capabilities, and of course Docker. To re-emphasize what was said earlier — the modern container ecosystem is deeply indebted to the many individuals and organizations that laid the strong foundations that we currently build on. Thank you!
Despite all of this, containers remained complex and outside of the reach of most organizations. It wasn’t until Docker came along that containers were effectively democratized and accessible to the masses.
Note: There are many operating system virtualization technologies similar to containers that pre-date Docker and modern containers. Some even date back to System/360 on the Mainframe. BSD Jails and Solaris Zones are some other well-known examples of Unix-type container technologies. However, in this book we are restricting our conversation to modern containers made popular by Docker.
Hello Docker!
We’ll talk about Docker in a bit more detail in the next chapter. But for now, it’s enough to say that Docker was the magic that made Linux containers usable for mere mortals. Put another way, Docker, Inc. made containers simple!
Docker and Windows
Microsoft has worked extremely hard to bring Docker and container technologies to the Windows platform.
At the time of writing, Windows desktop and server platforms support both of the following:
- Windows containers
- Linux containers
Windows containers run Windows apps and require a host system with a Windows kernel. Windows 10 and Windows 11, as well as all modern versions of Windows Server, have native support for Windows containers.
Any Windows host running WSL 2 (the Windows Subsystem for Linux) can also run Linux containers. This makes Windows 10 and 11 great platforms for developing and testing Windows and Linux containers.
However, despite all of the work Microsoft has done developing Windows containers, the vast majority of containers are Linux containers. This is because Linux containers are smaller and faster, and the majority of tooling exists for Linux.
All of the examples in this edition of the book are Linux containers.
Windows containers vs Linux containers
It’s vital to understand that a container shares the kernel of the host it’s running on. This means containerized Windows apps need a host with a Windows kernel, whereas containerized Linux apps need a host with a Linux kernel. Only… it’s not always that simple.
As previously mentioned, it’s possible to run Linux containers on Windows machines with the WSL 2 backend installed.
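One way to see this kernel sharing for yourself on a Linux Docker host is to compare kernel versions on the host and inside a container. The following is only a sketch — it assumes Docker and the ubuntu:latest image are available, and prints a fallback message if Docker isn’t installed:

```shell
# A container shares the host's kernel, so both commands report the same version.
echo "host kernel: $(uname -r)"
if command -v docker >/dev/null 2>&1; then
  echo "container kernel: $(docker run --rm ubuntu:latest uname -r)"
else
  echo "docker not installed on this machine"
fi
```

If Docker is running, both lines print the same kernel version, because there is only one kernel.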
What about Mac containers?
There is currently no such thing as Mac containers.
However, you can run Linux containers on your Mac using Docker Desktop. This works by seamlessly running your containers inside of a lightweight Linux VM on your Mac. It’s extremely popular with developers, who can easily develop and test Linux containers on their Mac.
What about Kubernetes?
Kubernetes is an open-source project out of Google that has quickly emerged as the de facto orchestrator of containerized apps. That’s just a fancy way of saying Kubernetes is the most popular tool for deploying and managing containerized apps.
Note: A containerized app is an application running as a container.
Kubernetes used to use Docker as its default container runtime – the low-level technology that pulls images and starts and stops containers. However, modern Kubernetes clusters have a pluggable container runtime interface (CRI) that makes it easy to swap out different container runtimes. At the time of writing, most new Kubernetes clusters use containerd. We’ll cover more on containerd later in the book, but for now it’s enough to know that containerd is the small specialized part of Docker that does the low-level tasks of starting and stopping containers.
Check out these resources if you need to learn Kubernetes. Quick Start Kubernetes is ~100 pages and will get you up-to-speed with Kubernetes in a day! The Kubernetes Book is a lot more comprehensive and will get you very close to being a Kubernetes expert.

Chapter Summary
We used to live in a world where every time the business needed a new application we had to buy a brand-new server. Then VMware came along and allowed us to drive more value out of new and existing IT assets. As good as VMware and the VM model are, they’re not perfect. Following the success of VMware and hypervisors came a newer, more efficient, and more portable virtualization technology called containers. But containers were initially hard to implement and were only found in the data centers of web giants with Linux kernel engineers on staff. Then Docker came along and made containers easy and accessible to the masses.
Speaking of Docker… let’s go find who, why, and what Docker is!
2: Docker
No book or conversation about containers is complete without talking about Docker. But when we say “Docker”, we can be referring to either of the following:
- Docker, Inc. the company
- Docker the technology
Docker - The TLDR
Docker is software that runs on Linux and Windows. It creates, manages, and can even orchestrate containers. The software is currently built from various tools from the Moby open-source project. Docker, Inc. is the company that created the technology and continues to create technologies and solutions that make it easier to get the code on your laptop running in the cloud.
That’s the quick version. Let’s dive a bit deeper.
Docker, Inc.
Docker, Inc. is a technology company based out of San Francisco founded by French-born American developer and entrepreneur Solomon Hykes. Solomon is no longer at the company.

The company started out as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, the dotCloud platform was built on Linux containers. To help create and manage these containers, they built an in-house tool that they eventually nicknamed “Docker”. And that’s how the Docker technology was born!
It’s also interesting to know that the word “Docker” comes from a British expression meaning dock worker — somebody who loads and unloads cargo from ships.
In 2013 they got rid of the struggling PaaS side of the business, rebranded the company as “Docker, Inc.”, and focussed on bringing Docker and containers to the world. They’ve been immensely successful in this endeavour.
Throughout this book we’ll use the term “Docker, Inc.” when referring to Docker the company. All other uses of the term “Docker” will refer to the technology.
The Docker technology
When most people talk about Docker, they’re referring to the technology that runs containers. However, there are at least three things to be aware of when referring to Docker as a technology:
- The runtime
- The daemon (a.k.a. engine)
- The orchestrator
Figure 2.2 shows the three layers and will be a useful reference as we explain each component. We’ll get deeper into each later in the book.

The runtime operates at the lowest level and is responsible for starting and stopping containers (this includes building all of the OS constructs such as namespaces and cgroups). Docker implements a tiered runtime architecture with high-level and low-level runtimes that work together.
The low-level runtime is called runc and is the reference implementation of the Open Containers Initiative (OCI) runtime-spec. Its job is to interface with the underlying OS and start and stop containers. Every container on a Docker node was created and started by an instance of runc.
The higher-level runtime is called containerd. This manages the entire container lifecycle, including pulling images and managing runc instances. containerd is pronounced “container-dee” and is a graduated CNCF project used by Docker and Kubernetes.
A typical Docker installation has a single long-running containerd process instructing runc to start and stop containers. runc is never a long-running process and exits as soon as a container is started.
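You can observe this process model for yourself on a Linux Docker host. The sketch below assumes a standard Linux install and simply prints a message if the processes aren’t there:

```shell
# containerd is a long-lived daemon, so it should appear in the process list.
pgrep -f containerd || echo "no containerd process found on this machine"
# runc exits as soon as each container has started, so you normally
# won't find any long-lived runc processes.
pgrep -x runc || echo "no long-lived runc processes (expected)"
```

Even on a busy Docker host with many containers running, the second command usually finds nothing, because each runc instance exits as soon as its container is up.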
The Docker daemon (dockerd) sits above containerd and performs higher-level tasks such as exposing the Docker API, managing images, managing volumes, managing networks, and more…
A major job of the Docker daemon is to provide an easy-to-use standard interface that abstracts the lower levels.
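In fact, on a Linux host you can talk to that API yourself over the daemon’s Unix socket using nothing but curl. This is a sketch that assumes the default socket path of a Linux install and prints a fallback message if no daemon is listening:

```shell
# Ask the daemon for its version info via the Docker Engine API's /version endpoint.
# /var/run/docker.sock is the default socket location on Linux installs.
curl --silent --unix-socket /var/run/docker.sock http://localhost/version \
  || echo "no Docker daemon listening on /var/run/docker.sock"
```

The docker CLI is doing essentially the same thing under the hood when you run commands such as docker version.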
Docker also has native support for managing clusters of nodes running Docker. These clusters are called swarms and the native technology is called Docker Swarm. Docker Swarm is easy to use and many companies are using it in real-world production. It’s a lot simpler to install and manage than Kubernetes but lacks many of the advanced features and the ecosystem of Kubernetes.
The Open Container Initiative (OCI)
Earlier in the chapter we mentioned the Open Containers Initiative — OCI.
The OCI is a governance council responsible for standardizing the low-level fundamental components of container infrastructure. In particular it focusses on image format and container runtime (don’t worry if you’re not comfortable with these terms yet, we’ll cover them in the book).
It’s also true that no discussion of the OCI is complete without mentioning a bit of history. And as with all accounts of history, the version you get depends on who’s doing the talking. So, this is container history according to Nigel :-D
From day one, use of Docker grew like crazy. More and more people used it in more and more ways for more and more things. So, it was inevitable that some parties would get frustrated. This is normal and healthy.
The TLDR of this history according to Nigel is that a company called CoreOS (acquired by Red Hat which was then acquired by IBM) didn’t like the way Docker did certain things. So, they created an open standard called appc that defined things like image format and container runtime. They also created an implementation of the spec called rkt (pronounced “rocket”).
This put the container ecosystem in an awkward position with two competing standards.
Getting back to the story, this threatened to fracture the ecosystem and present users and customers with a dilemma. While competition is usually a good thing, competing standards usually are not. They cause confusion and slow down user adoption. Not good for anybody.
With this in mind, everybody did their best to act like adults and came together to form the OCI — a lightweight agile council to govern container standards.
At the time of writing, the OCI has published three specifications (standards):
- The image-spec
- The runtime-spec
- The distribution-spec
An analogy that’s often used when referring to these standards is rail tracks. They’re like agreeing on standard sizes and properties of rail tracks, leaving everyone else free to build better trains, better carriages, better signalling systems, better stations… all safe in the knowledge that they’ll work on the standardized tracks. Nobody wants two competing standards for rail track sizes!
It’s fair to say that the OCI specifications have had a major impact on the architecture and design of the core Docker product. All modern versions of Docker and Docker Hub implement the OCI specifications.
The OCI is organized under the auspices of the Linux Foundation.
Chapter summary
In this chapter, we learned about Docker, Inc. the company, and the Docker technology.
Docker, Inc. is a technology company out of San Francisco with an ambition to change the way we do software. They were arguably the first-movers and instigators of the modern container revolution.
The Docker technology focuses on running and managing application containers. It runs on Linux and Windows, can be installed almost anywhere, and its containerd runtime is currently the most popular container runtime used by Kubernetes.
The Open Container Initiative (OCI) was instrumental in standardizing low-level container technologies such as runtimes, image format, and registries.
3: Installing Docker
There are lots of ways and places to install Docker. There’s Windows, Mac, and Linux. You can install in the cloud, on premises, and on your laptop. And there are manual installs, scripted installs, wizard-based installs…
But don’t let that scare you. They’re all really easy, and a simple search for “how to install docker on <insert your choice here>” will reveal up-to-date instructions that are easy to follow. As a result, we won’t waste too much space here. We’ll cover the following.
- Docker Desktop
  - Windows
  - MacOS
- Multipass
- Server installs on Linux
- Play with Docker
Docker Desktop
Docker Desktop is a desktop app from Docker, Inc. that makes it super-easy to work with containers. It includes the Docker engine, a slick UI, and an extension system with a marketplace. These extensions add some very useful features to Docker Desktop such as scanning images for vulnerabilities and making it easy to manage images and disk space.
Docker Desktop is free for educational purposes, but you’ll have to pay if you start using it for work and your company has over 250 employees or does more than $10M in annual revenue.
It runs on 64-bit versions of Windows 10, Windows 11, MacOS, and Linux.
Once installed, you have a fully working Docker environment that’s great for development, testing, and learning. It includes Docker Compose and you can even enable a single-node Kubernetes cluster if you need to learn Kubernetes.
Docker Desktop on Windows can run native Windows containers as well as Linux containers. Docker Desktop on Mac and Linux can only run Linux containers.
We’ll walk through the process of installing on Windows and MacOS.
Windows pre-reqs
Docker Desktop on Windows requires all of the following:
- 64-bit version of Windows 10/11
- Hardware virtualization support must be enabled in your system’s BIOS
- WSL 2
Be very careful changing anything in your system’s BIOS.
Installing Docker Desktop on Windows 10 and 11
Search the internet or ask your AI assistant how to “install Docker Desktop on Windows”. This will take you to the relevant download page where you can download the installer and follow the instructions. You may need to install and enable the WSL 2 backend (Windows Subsystem for Linux).
Once the installation is complete you may have to manually start Docker Desktop from the Windows Start menu. It may take a minute to start but you can watch the start progress via the animated whale icon on the Windows task bar at the bottom of the screen.
Once it’s up and running you can open a terminal and type some simple docker commands.
$ docker version
Client:
Cloud integration: v1.0.31
Version: 20.10.23
API version: 1.41
Go version: go1.18.10
Git commit: 7155243
Built: Thu Jan 19 01:20:44 2023
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.23
<Snip>
OS/Arch: linux/amd64
Experimental: true
Notice the output is showing OS/Arch: linux/amd64 for the Server component. This is because a default installation assumes you’ll be working with Linux containers.
You can easily switch to Windows containers by right-clicking the Docker whale icon in the Windows notifications tray and selecting Switch to Windows containers…
Be aware that any existing Linux containers will keep running in the background but you won’t be able to see or manage them until you switch back to Linux containers mode.
Run another docker version command and look for the windows/amd64 line in the Server section of the output.
C:\> docker version
Client:
<Snip>
Server:
Engine:
<Snip>
OS/Arch: windows/amd64
Experimental: true
You can now run and manage Windows containers (containers running Windows applications).
Congratulations. You now have a working installation of Docker on your Windows machine.
Installing Docker Desktop on Mac
Docker Desktop for Mac is like Docker Desktop on Windows — a packaged product with a slick UI that gets you a single-engine installation of Docker that’s ideal for local development needs. You can also enable a single-node Kubernetes cluster.
Before proceeding with the installation, it’s worth noting that Docker Desktop on Mac installs all of the Docker engine components in a lightweight Linux VM that seamlessly exposes the API to your local Mac environment. This means you can open a terminal on your Mac and use the regular Docker commands without ever knowing it’s all running in a hidden VM. This is also why Docker Desktop on Mac only works with Linux containers – it’s all running inside a Linux VM. This is fine, as Linux is where most of the container action is.
Figure 3.1 shows the high-level architecture for Docker Desktop on Mac.

The simplest way to install Docker Desktop on your Mac is to search the web or ask your AI how to “install Docker Desktop on MacOS”. Follow the links to the download and then complete the simple installer.
Once the installation is complete you may have to manually start Docker Desktop from the MacOS Launchpad. It may take a minute to start but you can watch the animated Docker whale icon in the status bar at the top of your screen. Once it’s started you can click the whale icon to manage Docker Desktop.
Open a terminal window and run some regular Docker commands. Try the following.
$ docker version
Client:
Cloud integration: v1.0.31
Version: 23.0.5
API version: 1.42
<Snip>
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.19.0 (106363)
Engine:
Version: dev
API version: 1.43 (minimum version 1.12)
<Snip>
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.6.20
GitCommit: 2806fc1057397dbaeefbea0e4e17bddfbd388f38
runc:
Version: 1.1.5
GitCommit: v1.1.5-0-gf19387a
<Snip>
Notice that the OS/Arch: for the Server component is showing as linux/amd64 or linux/arm64. This is because the daemon is running inside the Linux VM mentioned earlier. The Client component is a native Mac application and runs directly on the Mac OS Darwin kernel, which is why it shows as either darwin/amd64 or darwin/arm64.
You can now use Docker on your Mac.
Installing Docker with Multipass
Multipass is a free tool for creating cloud-style Linux VMs on your Linux, Mac, or Windows machine. It’s my go-to choice for Docker testing on my laptop as it’s incredibly easy to spin-up and tear-down Docker VMs.
Just go to https://multipass.run/install and install the right edition for your hardware and OS.
Once installed you’ll only need the following three commands:
$ multipass launch
$ multipass ls
$ multipass shell
Let’s see how to launch and connect to a new VM that will have Docker pre-installed.
Run the following command to create a new VM called node1 based on the docker image. The docker image has Docker pre-installed and ready to go.
$ multipass launch docker --name node1
It’ll take a minute or two to download the image and launch the VM.
List VMs to make sure it launched properly.
$ multipass ls
Name State IPv4 Image
node1 Running 192.168.64.37 Ubuntu 22.04 LTS
172.17.0.1
172.18.0.1
You’ll use the IP address starting with 192 when working with the examples later in the book.
Connect to the VM with the following command.
$ multipass shell node1
You’re now logged on to the VM and can run regular Docker commands.
Just type exit to log out of the VM. Use multipass delete node1 and then multipass purge to delete it.
Installing Docker on Linux
There are lots of ways to install Docker on Linux and most of them are easy. The recommended way is to search the web or ask your AI how to do it. The instructions in this section may be out of date and just for guidance purposes.
In this section we’ll look at one of the ways to install Docker on Ubuntu Linux 22.04 LTS. The procedure assumes you’ve already installed Linux and are logged on.
- Remove existing Docker packages.
$ sudo apt-get remove docker docker-engine docker.io containerd runc
- Update the apt package index and install prerequisite packages.
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg
- Add the Docker GPG key.
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
    sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ sudo chmod a+r /etc/apt/keyrings/docker.gpg
- Set up the repository.
$ echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- Install Docker from the official repo.
$ sudo apt-get update
$ sudo apt-get install \
    docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Docker is now installed and you can test by running some commands.
$ sudo docker --version
Docker version 24.0.0, build 98fdcd7
$ sudo docker info
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 24.0.0
Storage Driver: overlay2
...
Play with Docker
Play with Docker (PWD) is a fully functional internet-based Docker playground that lasts for 4 hours. You can add multiple nodes and even cluster them in a swarm.
Sometimes performance can be slow, but for a free service it’s excellent!
Visit https://labs.play-with-docker.com/
Chapter Summary
You can run Docker almost anywhere and most of the installation methods are simple.
Docker Desktop provides a fully functional Docker environment on your Linux, Mac, or Windows machine. It’s easy to install, includes the Docker engine, has a slick UI, and has a marketplace with lots of extensions that extend its capabilities. It’s a great choice for a local Docker development environment and even lets you spin up a single-node Kubernetes cluster.
Packages exist to install the Docker engine on most Linux distros.
Play with Docker is a free 4-hour Docker playground on the internet.
4: The big picture
The aim of this chapter is to paint a quick big picture of what Docker is all about before we dive in deeper in later chapters.
We’ll break this chapter into two parts:
- The Ops perspective
- The Dev perspective
In the Ops Perspective section, we’ll download an image, start a new container, log in to the new container, run a command inside of it, and then destroy it.
In the Dev Perspective section, we’ll focus more on the app. We’ll clone some app code from GitHub, inspect a Dockerfile, containerize the app, and run it as a container.
These two sections will give you a good idea of what Docker is all about and how the major components fit together. It’s recommended that you read both sections to get the dev and the ops perspectives. DevOps anyone?
Don’t worry if some of the stuff we do here is totally new to you. We’re not trying to make you an expert in this chapter. This is about giving you a feel of things — setting you up so that when we get into the details in later chapters, you have an idea of how the pieces fit together.
If you want to follow along, all you need is a single Docker host with an internet connection. I recommend Docker Desktop for your Mac or Windows PC. However, the examples will work anywhere that you have Docker installed. All of the examples use Linux containers.
If you can’t install software and don’t have access to a public cloud, another great way to get Docker is Play With Docker (PWD). This is a web-based Docker playground that you can use for free. Just point your web browser to https://labs.play-with-docker.com/ and you’re ready to go (you’ll need a Docker Hub or GitHub account to be able to login).
As we progress through the chapter, we may use the terms “Docker host” and “Docker node” interchangeably. Both refer to the system that you are running Docker on.
The Ops Perspective
When you install Docker, you get two major components:
- The Docker client
- The Docker engine (sometimes called the “Docker daemon”)
The engine implements the runtime, API and everything else required to run containers.
In a default Linux installation, the client talks to the daemon via a local IPC/Unix socket at /var/run/docker.sock. On Windows this happens via a named pipe at npipe:////./pipe/docker_engine. Once installed, you can use the docker version command to test that the client and daemon (server) are running and talking to each other.
> docker version
Client: Docker Engine - Community
Version: 24.0.0
API version: 1.43
Go version: go1.20.4
Git commit: 98fdcd7
Built: Mon May 15 18:48:45 2023
OS/Arch: linux/arm64
Context: default
Server: Docker Engine - Community
Engine:
Version: 24.0.0
API version: 1.43 (minimum version 1.12)
Go version: go1.20.4
Git commit: 1331b8c
Built: Mon May 15 18:48:45 2023
OS/Arch: linux/arm64
Experimental: false
<Snip>
If you get a response back from the Client and Server, you’re good to go.
If you are using Linux and get an error response from the Server component, make sure Docker is up and running. Also, try the command again with sudo in front of it – sudo docker version. If it works with sudo, you will need to add your user account to the local docker group or prefix all docker commands with sudo.
Images
It’s useful to think of a Docker image as an object that contains an OS filesystem, an application, and all application dependencies. If you work in operations, it’s like a virtual machine template. A virtual machine template is essentially a stopped virtual machine. In the Docker world, an image is effectively a stopped container. If you’re a developer, you can think of an image as a class.
Run the docker images command on your Docker host.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
If you are working from a freshly installed Docker host, or Play With Docker, you’ll have no images and it will look like the previous output.
Getting images onto your Docker host is called pulling. Pull the ubuntu:latest image.
$ docker pull ubuntu:latest
latest: Pulling from library/ubuntu
dfd64a3b4296: Download complete
6f8fe7bff0be: Download complete
3f5ef9003cef: Download complete
79d0ea7dc1a8: Download complete
docker.io/library/ubuntu:latest
Run the docker images command again to see the image you just pulled.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest dfd64a3b4296 1 minute ago 106MB
We’ll get into the details of where the image is stored and what’s inside of it in later chapters. For now, it’s enough to know that an image contains enough of an operating system (OS), as well as all the code and dependencies, to run whatever application it’s designed for. The ubuntu image that we’ve pulled has a stripped-down version of the Ubuntu Linux filesystem and a few of the common Ubuntu utilities.
If you pull an application container, such as nginx:latest, you’ll get an image with a minimal OS as well as the code to run the app (NGINX).
It’s also worth noting that each image gets its own unique ID. When referencing images, you can refer to them using either IDs or names. If you’re working with image IDs, it’s usually enough to type the first few characters of the ID — as long as it’s unique, Docker will know which image you mean.
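As a sketch of what that looks like in practice (assumes Docker is installed; the dfd6 prefix shown in the comment is just a hypothetical example based on the earlier output — substitute the first few characters of one of your own image IDs):

```shell
if command -v docker >/dev/null 2>&1; then
  # List local images with their IDs.
  docker images --format '{{.ID}}  {{.Repository}}:{{.Tag}}'
  # Any unique prefix of an ID works wherever a full ID is accepted, e.g.:
  #   docker inspect dfd6
else
  echo "docker not installed on this machine"
fi
```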
Containers
Now that we have an image pulled locally, we can use the docker run command to launch a container from it.
$ docker run -it ubuntu:latest /bin/bash
root@6dc20d508db0:/#
Look closely at the output from the previous command. You’ll see that the shell prompt has changed. This is because the -it flags switch your shell into the terminal of the container — your shell is now inside of the new container!
Let’s examine that docker run command.
docker run tells Docker to start a new container. The -it flags tell Docker to make the container interactive and to attach the current shell to the container’s terminal (we’ll get more specific about this in the chapter on containers). Next, the command tells Docker that we want the container to be based on the ubuntu:latest image. Finally, it tells Docker which process we want to run inside of the container – a Bash shell.
Run a ps command from inside of the container to list all running processes.
root@6dc20d508db0:/# ps -elf
F S UID PID PPID NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S root 1 0 0 - 4560 - 13:38 pts/0 00:00:00 /bin/bash
0 R root 9 1 0 - 8606 - 13:38 pts/0 00:00:00 ps -elf
There are only two processes:
- PID 1. This is the /bin/bash process that we told the container to run with the docker run command.
- PID 9. This is the ps -elf command/process that we ran to list the running processes.
The presence of the ps -elf process in the output can be a bit misleading, as it is a short-lived process that dies as soon as the ps command completes. This means the only long-running process inside the container is the /bin/bash process.
Press Ctrl-PQ
to exit the container without terminating it. This will land your shell back at the terminal of your Docker host. You can verify this by looking at your shell prompt.
Now that you are back at the shell prompt of your Docker host, run the ps
command again.
Notice how many more processes are running on your Docker host compared to the container you just ran.
Pressing Ctrl-PQ
from inside a container will exit you from the container without killing it. You can see all running containers on your system using the docker ps
command.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
6dc20d508db0 ubuntu:latest "/bin/bash" 7 mins Up 7 min vigilant_borg
The output shows a single running container. This is the one you created earlier and proves it’s still running. You can also see it was created 7 minutes ago and has been running for 7 minutes.
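As an aside, docker ps accepts a --format flag that takes a Go template, which is handy when the default columns are too wide for your terminal. The output shown is illustrative and will reflect your own container’s name and status.

$ docker ps --format "{{.Names}}: {{.Status}}"
vigilant_borg: Up 7 minutes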
Attaching to running containers
You can attach your shell to the terminal of a running container with the docker exec
command. As the container from the previous steps is still running, let’s make a new connection to it.
This example references a container called “vigilant_borg”. The name of your container will be different, so remember to substitute “vigilant_borg” with the name or ID of the container running on your Docker host.
$ docker exec -it vigilant_borg bash
root@6dc20d508db0:/#
Notice that your shell prompt has changed again. You are logged into the container again.
The format of the docker exec command is: docker exec <options> <container-name or container-id> <command/app>. We used the -it flags to attach our shell to the container’s shell, referenced the container by name, and told it to run the Bash shell. We could just as easily have referenced the container by its hex ID.
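You don’t have to attach a shell. docker exec can also run a single command inside the container and print its output to your terminal, which is handy for quick checks. The directory listing below is illustrative.

$ docker exec vigilant_borg ls /
bin   dev   home  media  opt   root  sbin  sys  usr
boot  etc   lib   mnt    proc  run   srv   tmp  var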
Exit the container again by pressing Ctrl-PQ.
Your shell prompt should be back to your Docker host.
Run the docker ps
command again to verify that your container is still running.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
6dc20d508db0 ubuntu:latest "/bin/bash" 9 mins Up 9 min vigilant_borg
Stop the container and delete it using the docker stop and docker rm commands. Remember to substitute the names/IDs of your own containers.
$ docker stop vigilant_borg
vigilant_borg
It may take a few seconds for the container to gracefully stop.
$ docker rm vigilant_borg
vigilant_borg
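As an aside, you can combine both steps by giving docker rm the -f flag, which force-removes a running container without a graceful stop. Use it with care, as the container’s main process is killed rather than being asked to exit cleanly.

$ docker rm vigilant_borg -f
vigilant_borg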
Verify that the container was successfully deleted by running the docker ps command with the -a flag. Adding -a tells Docker to list all containers, even those in the stopped state.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Congratulations, you’ve just pulled a Docker image, started a container from it, attached to it, executed a command inside it, stopped it, and deleted it.
The Dev Perspective
Containers are all about the apps.
In this section, we’ll clone an app from a Git repo, inspect its Dockerfile, containerize it, and run it as a container.
The Linux app can be cloned from: https://github.com/nigelpoulton/psweb.git
Run all of the following commands from a terminal on your Docker host.
Clone the repo locally. This will pull the application code to your local Docker host ready for you to containerize it.
$ git clone https://github.com/nigelpoulton/psweb.git
Cloning into 'psweb'...
remote: Enumerating objects: 63, done.
remote: Counting objects: 100% (34/34), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 63 (delta 13), reused 25 (delta 9), pack-reused 29
Receiving objects: 100% (63/63), 13.29 KiB | 4.43 MiB/s, done.
Resolving deltas: 100% (21/21), done.
Change directory into the cloned repo’s directory and list its contents.
$ cd psweb
$ ls -l
total 40
-rw-r--r--@ 1 ubuntu ubuntu 338 24 Apr 19:29 Dockerfile
-rw-r--r--@ 1 ubuntu ubuntu 396 24 Apr 19:32 README.md
-rw-r--r--@ 1 ubuntu ubuntu 341 24 Apr 19:29 app.js
-rw-r--r-- 1 ubuntu ubuntu 216 24 Apr 19:29 circle.yml
-rw-r--r--@ 1 ubuntu ubuntu 377 24 Apr 19:36 package.json
drwxr-xr-x 4 ubuntu ubuntu 128 24 Apr 19:29 test
drwxr-xr-x 3 ubuntu ubuntu 96 24 Apr 19:29 views
The app is a simple Node.js web app that serves some static HTML.
The Dockerfile is a plain-text document that tells Docker how to build the app and its dependencies into a Docker image.
List the contents of the Dockerfile.
$ cat Dockerfile
FROM alpine
LABEL maintainer="nigelpoulton@hotmail.com"
RUN apk add --update nodejs nodejs-npm
COPY . /src
WORKDIR /src
RUN npm install
EXPOSE 8080
ENTRYPOINT ["node", "./app.js"]
For now, it’s enough to know that each line represents an instruction that Docker uses to build the app into an image.
At this point we’ve pulled some application code from a remote Git repo and we’ve looked at the application’s Dockerfile that contains the instructions Docker uses to build it as an image.
Use the docker build command to create a new image using the instructions in the Dockerfile. This example creates a new Docker image called test:latest.
Be sure to run the command from within the directory containing the app code and Dockerfile.
$ docker build -t test:latest .
[+] Building 36.2s (11/11) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
<Snip>
=> => naming to docker.io/library/test:latest 0.0s
=> => unpacking to docker.io/library/test:latest 0.7s
Once the build is complete, check to make sure that the new test:latest
image exists on your host.
$ docker images
REPO TAG IMAGE ID CREATED SIZE
test latest 1ede254e072b 7 seconds ago 154MB
You have a newly-built Docker image with the app and dependencies inside.
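If you’re curious how the Dockerfile instructions map to the image, the docker history command lists the layer created by each instruction, newest first. The IDs and sizes below are illustrative; your output will differ.

$ docker history test:latest
IMAGE          CREATED BY                                      SIZE
1ede254e072b   ENTRYPOINT ["node" "./app.js"]                  0B
<missing>      EXPOSE map[8080/tcp:{}]                         0B
<missing>      RUN /bin/sh -c npm install                      14.2MB
<Snip>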
Run a container from the image and test the app.
$ docker run -d \
--name web1 \
--publish 8080:8080 \
test:latest
Open a web browser and navigate to the DNS name or IP address of the Docker host that you are running the container from, and point it to port 8080. You will see the following web page.
If you’re following along on Docker Desktop, you’ll be able to connect to localhost:8080
or 127.0.0.1:8080
. If you’re following along on Play With Docker, you will be able to click the 8080
hyperlink above the terminal screen.
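If you don’t have a browser handy, you can also test the app from the command line on the Docker host with curl. You should get the app’s HTML page back (output omitted).

$ curl localhost:8080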

Well done. You’ve copied some application code from a remote Git repo, built it into a Docker image, and run it as a container. We call this “containerizing an app”.
Chapter Summary
In the Ops section of the chapter, you downloaded a Docker image, launched a container from it, logged into the container, executed a command inside of it, and then stopped and deleted the container.
In the Dev section you containerized a simple application by pulling some source code from GitHub and building it into an image using instructions in a Dockerfile. You then ran the containerized app.
This big-picture view should help you with the upcoming chapters, where we’ll dig deeper into images and containers.