
Beginning DevOps with Docker

By Joseph Muli
About this book
Making sure that your application runs across different systems as intended is quickly becoming a standard development requirement. With Docker, you can ensure that what you build will behave the way you expect it to, regardless of where it's deployed. By guiding you through Docker from start to finish (from installation, to the Docker Registry, all the way through to working with Docker Swarms), we’ll equip you with the skills you need to migrate your workflow to Docker with complete confidence.
Publication date:
May 2018
Publisher
Packt
Pages
96
ISBN
9781789532401

 

Chapter 1. Images and Containers

This lesson covers the fundamental concepts of containerization as a foundation for the images and containers we will later build. We will also see how and why Docker fits into the DevOps ecosystem. Before we begin, we will look at how virtualization differs from containerization in Docker.

 

Lesson Objectives


By the end of this lesson, you will be able to:

  • Describe how Docker improves a DevOps workflow

  • Interpret Dockerfile syntax

  • Build images

  • Set up containers and images

  • Set up a local dynamic environment

  • Run applications in Docker containers

  • Obtain a basic overview of how Docker manages images via Docker Hub

  • Deploy a Docker image to Docker Hub

 

Virtualization versus Containerization


This block diagram gives an overview of a typical virtual machine setup:

In a virtual machine, the physical hardware is abstracted, so many virtual servers can run on a single physical server. A hypervisor manages this abstraction.

Virtual machines can take time to start up and are expensive in capacity (they can be gigabytes in size). Their greatest advantage over containers is the ability to run completely different operating systems, each with its own kernel, for example CentOS alongside Ubuntu:

In containerization, only the app layer (where code and dependencies are packaged) is abstracted, making it possible for many containers to run on the same OS kernel, each in its own user space.

Containers use less space and boot quickly. This makes development easier, since you can delete and start up containers on the fly without worrying about how much server capacity or developer workspace you have.

Let's begin the lesson with a quick overview on how Docker comes into play in a DevOps workflow and the Docker environment.

 

How Docker Improves a DevOps Workflow


DevOps is a mindset, a culture, and a way of thinking. The ultimate goal is to continuously improve and automate processes as much as possible. In layman's terms, DevOps asks you to think like the "laziest" engineer: make most, if not all, processes as automatic as possible.

Docker is an open source containerization platform that improves the shipping process of the development life cycle. Note that it is not a replacement for existing platforms, nor does Docker intend it to be.

Docker abstracts away the complexity of configuration management tools such as Puppet. With this kind of setup, provisioning shell scripts become unnecessary. Docker can be used for small or large deployments, from a hello world application to a full-fledged production server.

Whether you are a beginner or an expert developer, you may have used Docker without even realizing it. If you have set up a continuous integration pipeline to run your tests online, chances are the CI server used Docker to build and run them.

Docker has gained a lot of support in the tech community because of its agility and, as such, a lot of organizations are running containers for their services. Such organizations include the following:

  • Continuous integration and continuous delivery platforms such as Circle CI, Travis CI, and Codeship

  • Cloud platforms such as Amazon Web Services (AWS) and Google Cloud Platform (GCP), which allow developers to run applications in containers

  • Cisco and the Alibaba group also run some of their services in containers

Docker's place in the DevOps workflow involves, but is not limited to, the following:

Note

Examples of Docker's use cases in a development workflow.

Unifying requirements refers to using a single configuration file: Docker abstracts and limits requirements to a single Dockerfile.

Abstraction of OS means one doesn't need to worry about building the OS, because prebuilt images already exist.

Velocity refers to how quickly one can define a Dockerfile and build containers to test in, or use an already built image without writing a Dockerfile at all. Docker also allows development teams to avoid investing in the steep learning curve of shell scripts or of an "automation tool X" that is too complicated.

Recap of the Docker Environment

We walked through the fundamentals of containerization earlier. Allow me to emphasize the alternative workflow that Docker brings to us.

Normally, we have two pieces to a working application: the project code base and the provisioning script. The code base is the application code. It is managed by version control and hosted in GitHub, among other platforms.

The provisioning script could be a simple shell script to be run in a host machine, which could be anywhere from a Windows workstation to a fully dedicated server in the cloud.

Using Docker does not interfere with the project code base, but innovates on the provisioning aspect, improving the workflow and delivery velocity. This is a sample setup of how Docker implements this:

The Dockerfile takes the place of the provisioning script. The two combined (project code and Dockerfile) make a Docker image. A Docker image can be run as an application. This running application sourced from a Docker image is called a Docker container.

The Docker container allows us to run the application in a completely new environment on our computers, which is completely disposable. What does this mean?

It means that we are able to declare and run Linux or any other operating system on our computers and then, run our application in it. This also emphasizes that we can build and run the container as many times as we want without interfering with our computer's configuration.
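As a minimal sketch of this workflow (my-app is a hypothetical image name used only for illustration):

# The project code base and its Dockerfile sit side by side.
# Build an image from them; the trailing dot is the build context.
docker build -t my-app .

# Run the image; the running instance is a Docker container.
docker run my-app

Both commands are covered in detail later in this lesson.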

With this, I have brought to your attention four key words: image, container, build, and run. We will get to the nitty-gritty of the Docker CLI next.

 

Basic Docker Terminal Commands


Open a terminal (or Command Prompt on Windows) to check that Docker is installed on your workstation. Entering the command docker on your terminal should show the following:

This is the list of available subcommands for Docker. To understand what each subcommand does, enter docker <subcommand> --help on the terminal:

Run docker info and note the following:

  • Containers

  • Images

  • Server Version

This command displays system-wide information. The server version number is important at times, especially when new releases introduce something that is not backward-compatible. Docker has stable and edge releases for their Community Edition.

We will now look at a few commonly used commands.

This command searches Docker Hub for images:

docker search <term> (for example, docker search ubuntu)

Docker Hub is the default Docker registry. A Docker registry holds named Docker images. Docker Hub is basically the "GitHub for Docker images". Earlier, we looked at running an Ubuntu container without building one; this is where the Ubuntu image is stored and versioned:

"There are private Docker registries, and it is important that you are aware of this now."? Docker Hub is at hub.docker.com. Some images are hosted at store.docker.com but Docker Store contains official images. However, it mainly focuses on the commercial aspect of an app store of sorts for Docker images and provides workflows for use.

The register page is as shown here:

The log in page is as shown here:

From the results, you can tell how users have rated an image by its number of stars. You can also tell whether an image is official, meaning it is promoted by the registry, in this case Docker Hub. New Docker users are advised to use official images, since they have great documentation, are secure, promote best practices, and are designed for most use cases. Once you have settled on an image, you'll need to have it locally.
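For reference, docker search prints its results as a table; the column names below are indicative and may vary slightly between Docker versions:

docker search ubuntu
# NAME      DESCRIPTION      STARS     OFFICIAL     AUTOMATED
# ubuntu    <description>    <stars>   [OK]
# ...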

Note

Ensure you are able to search for at least one image from Docker Hub. Image variety ranges from operating systems to libraries, such as Ubuntu, Node.js, and Apache.


This command pulls an image from the registry to your local machine:

docker pull <image>

For example, docker pull ubuntu.

As this command runs, you'll notice that it uses the default tag: latest. On Docker Hub, you can see the list of tags. For Ubuntu, they are listed at https://hub.docker.com/r/library/ubuntu/, along with their respective Dockerfiles:

The Ubuntu image profile on Docker Hub can be viewed at: https://hub.docker.com/r/library/ubuntu/.
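For example, you can pull the default latest tag or name a specific tag explicitly:

# Pulls ubuntu:latest, since no tag is specified
docker pull ubuntu

# Pulls a specific tagged version of the same image
docker pull ubuntu:16.04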

Activity 1 — Utilizing the docker pull Command

The aim of this activity is to get you conversant with the docker pull command.

The goal is to gain a firm understanding of the docker pull CLI, not only by running the listed commands, but also by seeking help on other commands while exploring and manipulating the pulled images.

  1. Is Docker up and running? Type docker on the terminal or command-line application.

  2. This command is used to pull an image from Docker Hub:

    docker pull <image>

For example, docker pull ubuntu. Image variety ranges from operating systems to libraries, such as Ubuntu, Node.js, and Apache.

This command lists the Docker images we have locally:

  • docker images

When we run the command, if we have pulled images from Docker Hub, we will be able to see a list of images:

They are listed according to the repository, tag, image ID, creation date, and size. The repository is simply the image name, unless it is sourced from a different registry. In that case, the image name is prefixed with the registry hostname (without the http://), such as registry.heroku.com/<image-name> for the Heroku registry.
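As an indicative sketch (your IDs, dates, and sizes will differ):

docker images
# REPOSITORY     TAG        IMAGE ID       CREATED          SIZE
# ubuntu         latest     <image ID>     <time created>   <size>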

This command will check whether the image by the name hello-world exists locally:

docker run <image>

For example, docker run hello-world:

If the image is not local, it will be pulled from the default registry, Docker Hub, and run as a container, by default.

This command lists the running containers:

docker ps

If there aren't any running containers, you should have a blank screen with the headers:
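For reference, the header row of docker ps looks like the following (it may vary slightly between Docker versions):

docker ps
# CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES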

Activity 2 — Analyzing the Docker CLI

Ensure you have the Docker CLI running by typing docker on your terminal.

You have been asked to demonstrate the commands covered so far.

The aim of this activity is to get you conversant with the Docker CLI. The goal is to gain a firm understanding of the Docker CLI, not only by running the listed commands, but also by seeking help on other commands while exploring and manipulating the built containers. You should become flexible enough with the CLI to use it in a real-world scenario, such as running an automated script.

  1. Is Docker up and running? Type docker on the terminal or command-line application.

  2. Search for the official Apache image using the CLI, using docker search apache:

  3. Attempt to pull the image using docker pull apache.

  4. Confirm the availability of the image locally using docker images.

  5. Bonus: Run the image as a container using docker run apache.

  6. Bonus: Stop the container using docker stop <container ID>.

  7. Bonus: Delete the container using docker rm <container ID>, and then delete the image using docker rmi <image ID>.

 

Dockerfile Syntax


Every Docker image starts from a Dockerfile. To create an image of an application or script, simply create a file called Dockerfile.

Note

It does not have an extension and begins with a capital letter D.

A Dockerfile is a simple text document where all the commands that template a container are written. The Dockerfile always starts with a base image. It contains steps to create the application or to run the script in mind.

Before we build, let's take a quick look at a few best practices on writing Dockerfiles.

Some best practices include, but are not limited to, the following:

  • Separation of concern: Ensure each Dockerfile is, as much as possible, focused on one goal. This will make it so much easier to reuse in multiple applications.

  • Avoid unnecessary installations: This will reduce complexity and make the image and container compact enough.

  • Reuse already built images: There are several built and versioned images on Docker Hub; thus, instead of implementing an already existing image, it's highly advisable to reuse by importing.

  • Have a limited number of layers: A minimal number of layers allows for a compact, smaller build. Image size is a key factor to consider when building images and containers, because it also affects the consumers of the image, or the clients, as illustrated in the sketch below.
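As a minimal sketch of the last point, assuming a Debian/Ubuntu-based image, chaining commands into a single RUN instruction produces one layer instead of three:

FROM ubuntu

# Three separate RUN instructions would create three layers:
#   RUN apt-get update
#   RUN apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*

# Chaining them produces a single layer and a smaller image:
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*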

We'll start simply with a Python script and a JavaScript script. The choice of these languages is based on their popularity and ease of demonstration.

Writing Dockerfiles for Python and JavaScript examples

Note

No prior experience with the selected languages is required, as they are only meant to give a dynamic view of how any language can adopt containerization.

Python

Before we begin, create a new directory or folder; let's use this as our workspace.

Open the directory and run docker search python. We'll pick the official image: python. The official image has the value [OK] in the OFFICIAL column:

Go to hub.docker.com or store.docker.com and search for python to get the correct tag or at least know what version the Python image with the latest tag is. We will talk more about tags in Topic D.

The image tag should be a version number with a syntax that looks like 3.x.x or 3.x.x-rc.

Create a file by the name run.py and enter the first line as follows:

print("Hello Docker - PY")

Create a new file on the same folder level and name it Dockerfile.

Note

We do not have an extension for the Dockerfile.

Add the following in the Dockerfile:

FROM python
ADD . .
RUN ls
CMD python run.py

The FROM command, as alluded to earlier, specifies the base image.

The command can also be viewed from an inheritance point of view. This means you do not have to include extra package installations in the Dockerfile if an image with those packages already exists.

The ADD command copies the specified files at source to the destination within the image's filesystem. This means the contents of the script will be copied to the directory specified.

In this case, because run.py and the Dockerfile are on the same level, run.py is copied to the working directory of the filesystem of the base image we are building upon.

The RUN command is executed while the image is being built. ls being run here is simply for us to see the contents of the image's filesystem.

The CMD command is executed only when a container is run from the image we'll create using this Dockerfile. In other words, it defines what the container will run once it starts.

JavaScript

Exit the previous directory and create a new one. This one will be demonstrating a node application.

Create a file named run.js, add the following line to it, and save:

console.log("Hello Docker - JS")

Run docker search node - we'll pick the official image: node

Remember that the official image has the value [OK] in the OFFICIAL column:

Note that Node.js is a JavaScript runtime based on Google's high-performance, open source JavaScript engine, V8.

Go to hub.docker.com and search for node to get the correct tag or at least know what version the node image with the latest tag is.

Create a new Dockerfile, on the same file level as the script, and add the following:

FROM node
ADD . .
RUN ls
CMD node run.js

These are the only Dockerfile instructions we'll cover for now.

Activity 3 — Building the Dockerfile

Ensure you have the Docker CLI running by typing docker on your terminal.

The aim of this activity is to get you conversant with Dockerfile syntax. The goal is to help you understand and practice working with third-party images and containers, giving a bigger picture of how collaboration can be achieved through containerization. This increases product delivery pace by building on features or resources that already exist.

You have been asked to write a simple Dockerfile that prints hello-world.

  1. Is Docker up and running? Type docker on the terminal or command-line application.

  2. Create a new directory and create a new Dockerfile.

  3. Write a Dockerfile that includes the following steps:

    FROM ubuntu:xenial
    RUN apt-get update
    RUN apt-get install -y apt-transport-https curl software-properties-common python-software-properties
    RUN curl -fsSL https://apt.dockerproject.org/gpg | apt-key add -
    RUN echo 'deb https://apt.dockerproject.org/repo ubuntu-xenial main' > /etc/apt/sources.list.d/docker.list
    RUN apt-get update
    RUN apt-get install -y python3-pip
    RUN apt-get install -y build-essential libssl-dev libffi-dev python-dev
    CMD echo "hello-world"
    
 

Building Images


Before we begin building images, let's understand the context first. An image is a standalone package that can run an application or a dedicated service. Images are built from Dockerfiles, which are templates that define how the images are to be built.

A container is defined as a runtime instance or version of an image. Note this will run on your computer or the host as a completely isolated environment, which makes it disposable and viable for tasks such as testing.

With the Dockerfiles ready, let's get to the Python Dockerfile directory and build the image.

docker build

The command to build images is as follows:

docker build -t <image-name> <relative location of the Dockerfile>

-t stands for tag. The <image-name> can include a specific tag, say, latest. It is advisable to always tag the image in this way.

The relative location of the Dockerfile here would be a dot (.), meaning that the Dockerfile is on the same level as the rest of the code; that is, it is at the root level of the project. Otherwise, you would enter the directory the Dockerfile is in.

If, for example, it is in a folder named docker, you would have docker build -t <image-name> docker; if it is one level above the current directory, you would use two dots (..), and two levels up would be ../.. in place of the single dot.

Note

Observe the output on the terminal and compare it to the steps written in the Dockerfile. You may want to have two or more Dockerfiles to configure different situations, say, a Dockerfile to build a production-ready app and another one for testing. Whatever your reason, Docker has a solution.

The default Dockerfile is, yes, Dockerfile. Any additional one is, by best practice, named Dockerfile.<name>, say, Dockerfile.dev.

To build an image using a Dockerfile aside from the default one, run the following: docker build -f Dockerfile.<name> -t <image-name> <relative location of the Dockerfile>
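For instance, assuming a hypothetical image name myapp and an alternative file named Dockerfile.dev at the project root:

# Build using the default Dockerfile in the current directory
docker build -t myapp:latest .

# Build the same project using Dockerfile.dev instead
docker build -f Dockerfile.dev -t myapp:dev .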

Note

If you rebuild the image with a change to the Dockerfile, without specifying a different tag, a new image will be built and the previous image is named <none>.

The docker build command has several options that you can see for yourself by running docker build --help. Tagging images with names such as latest is also used for versioning. We will talk more about this in Topic F.

To build the image, run the following command in the Python workspace:

$ docker build -t python-docker .

Note

The trailing dot is an important part of the syntax here:

Open the JavaScript directory and build the JavaScript image as follows:

$ docker build -t js-docker .

Running the commands will outline the four steps based on the four lines of commands in the Dockerfile.

Running docker images lists the two images you have created and any other image you had pulled before.

Removing Docker Images

The docker rmi <image-id> command is used to delete an image. Let me remind you that the image ID can be found by running the docker images command.

To delete the images that are non-tagged (assumed not to be relevant), knowledge of bash scripting comes in handy. Use the following command:

docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

This simply searches the docker images output for rows beginning with <none> and returns the image IDs found in the third column:
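If you prefer to avoid grep and awk, Docker can do the filtering itself; the following is equivalent, where -f dangling=true selects untagged images and -q prints only their IDs:

docker rmi $(docker images -f dangling=true -q)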

Activity 4 — Utilizing the Docker Image

Ensure you have the Docker CLI running by typing docker on your terminal.

The aim of this activity is to get you conversant with running containers from images.

You have been asked to build an image from the Dockerfile written in Activity 3. Stop the running container, delete the image, and rebuild it using a different name.

  1. Is Docker up and running? Type docker on the terminal or command-line application.

  2. Open the JavaScript example directory.

  3. Run docker build -t <choose a name> . (observe the steps and take note of the result).

  4. Run docker run <the-name-you-chose>.

  5. Run docker stop <container ID>.

  6. Run docker rmi <add the image ID here>.

  7. Run docker build -t <choose a new name> . again.

  8. Run docker images (note the result; the old image should no longer exist).

 

Running Containers From Images


Remember when we mentioned containers are built from images? The command docker run <image> creates a container based on that image. One can say that a container is a running instance of an image. Another reminder is that this image could either be local or in the registry.

Go ahead and run the already created images docker run python-docker and docker run js-docker:

What do you notice? Each container runs and prints its output on its respective line in the terminal. Notice that the command specified by CMD in the Dockerfile is the one that runs.

Now remove the CMD line from each Dockerfile and rebuild the images, this time with a test tag:

docker build -t python-docker:test .  and docker build -t js-docker:test .

Then, run the following:

docker run python-docker:test and docker run js-docker:test

Note

You will not see any output on the terminal.

This is not because there is no CMD at all to run as soon as the container is up: for both images, built from Python and Node, there is a CMD inherited from the base image.

Note

Images created always inherit from the base image.

The two containers we have run contain scripts that run once and exit. Examining the results of docker ps, you'll have nothing listed from the two containers run earlier. However, running docker ps -a reveals the containers and their state as exited.

There is a COMMAND column that shows the CMD of the image from which the container was built.

When running a container, you can specify the name as follows:

docker run --name <container-name> <image-name> (for example, docker run --name py-docker-container python-docker):

We outlined earlier that you only want to have relevant Docker images and not the <none> tagged Docker images.

As for containers, you need to be aware that you can have several containers from one image. docker rm <container-id> is the command for removing containers. This works for exited containers (those that are not running).

Note

For the containers that are still running, you would have to either:

Stop the containers before removing them (docker stop <container-id>)

Remove the containers forcefully (docker rm <container-id> -f)

No container will be listed if you run docker ps, but sure enough if we run docker ps -a, you will notice that the containers are listed and their command columns will show the inherited CMD commands: python3 and node:

Python

The CMD in the Dockerfile for the Python base image is python3. This means that the python3 command runs in the container and then the container exits.

Note

With this in mind, one gets to run Python without installing Python in one's machine.

Try running this: docker run -it python-docker:test (with the image we created last).

We get an interactive Python shell inside the container. The -it flags instruct Docker to run the container interactively and allocate a terminal for it. The shell runs python3, which is the CMD in the Python base image:

In the command docker run -it python-docker:test python3 run.py, python3 run.py is run inside the container just as you would run it in a terminal. Note that run.py is inside the container, so it runs. Running docker run -it python python3 run.py would fail, indicating the absence of the run.py script:

The same applies to JavaScript, showing that the concept applies across the board.

docker run -it js-docker:test (the image we created last) will have a shell running node (the CMD in the node base image):

docker run -it js-docker:test node run.js will output Hello Docker - JS:

That proves the inheritance factor in Docker images.

Now, return the Dockerfiles to their original state with the CMD commands on the last line.

 

Versioning Images and Docker Hub


Remember talking about versioning images in Topic D? We did that by adding latest and using some numbers against our images, such as 3.x.x or 3.x.x-rc.

In this topic, we'll go through using tags for versioning and look at how official images have been versioned in the past, thereby learning best practices.

The command in use here is the following:

docker build -t <image-name>:<tag> <relative location of the Dockerfile>

Say, for example, we know that Python has several versions: Python 3.6, 3.5, and so on. Node.js has several more. If you take a look at the official Node.js page on Docker Hub, you see the following at the top of the list:

9.1.0, 9.1, 9, latest (9.1/Dockerfile) (as of November 2017):

This versioning system is called semver: semantic versioning. Version numbers take the form MAJOR.MINOR.PATCH, incremented as follows:

MAJOR: For a change that is backward-incompatible

MINOR: For when you have a backward-compatible change

PATCH: For when you make bug fixes that are backward-compatible

You'll notice labels such as rc and other prerelease and build metadata attached to the image.

When building your images, especially for release to the public or your team, using semver is the best practice.

That said, I advocate that you do this always and have this as a personal mantra: semver is key. It will remove ambiguity and confusion when working with your images.
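As a brief sketch, using a hypothetical image called myapp, you can build a release and then add the shorter semver tags that point to the same image:

# Build and tag a specific release
docker build -t myapp:1.0.0 .

# Add convenience tags that point to the same image
docker tag myapp:1.0.0 myapp:1.0
docker tag myapp:1.0.0 myapp:1
docker tag myapp:1.0.0 myapp:latest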

 

Deploying a Docker Image to Docker Hub


Every time we run docker build, the image created is locally available. Normally, the Dockerfile is hosted together with the code base; therefore, on a new machine, one would need to use docker build to create the Docker image.

With Docker Hub, any developer has the opportunity to have a Docker image hosted to be pulled into any machine running Docker. This does two things:

  • Eliminates the repetitive task of running docker build

  • Adds an additional way of sharing your application, one that is simpler to set up than sharing a link to your app's code base and a README detailing the setup process

docker login is the command to run to connect to Docker Hub via the CLI. You need to have an account on hub.docker.com, and you enter your username and password through the terminal.

docker push <docker-hub-username/image-name[:tag]> is the command to send the image to the registry, Docker Hub:

A simple search for your image on hub.docker.com will show your Docker image.

In a new machine, a simple docker pull <docker-hub-username/your-image-name> command will produce a copy of your image locally.
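Putting the whole cycle together (docker-hub-username and my-app are placeholders for your own account and image names):

# Log in to Docker Hub; you will be prompted for your credentials
docker login

# Tag the local image with your Docker Hub namespace, then push it
docker tag my-app docker-hub-username/my-app:1.0.0
docker push docker-hub-username/my-app:1.0.0

# On any other machine running Docker, pull the published image
docker pull docker-hub-username/my-app:1.0.0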

 

Summary


In this lesson, we have done the following:

  • Reviewed the DevOps workflow and a few use cases for Docker

  • Walked through Dockerfile syntax

  • Gained a high-level understanding of building images for applications and running containers

  • Constructed a number of images, versioned them, and pushed them to Docker Hub

About the Author
  • Joseph Muli

    Joseph Muli loves programming, writing, teaching, gaming, and traveling. Currently, he works as a software engineer at Andela and Fathom, and specializes in DevOps and Site Reliability. Previously, he worked as a software engineer and technical mentor at Moringa School.
