Learn OpenShift

Learn OpenShift: Deploy, build, manage, and migrate applications with OpenShift Origin 3.9

By Denis Zuev, Artemii Kropachev, Aleksey Usov


Product Details

Publication date: Jul 30, 2018
Length: 504 pages
Edition: 1st
Language: English
ISBN-13: 9781788992329



Containers and Docker Overview

This book is about much more than just the fundamentals of OpenShift. It is about the past, present, and future of microservices and containers in general. We are going to cover OpenShift and its surroundings, including the fundamentals of containers and Docker basics, and we will work with both Kubernetes and OpenShift in order to become comfortable with them.

During our OpenShift journey, we will walk you through all the main components of OpenShift and most of the advanced ones. We are going to cover OpenShift security and networking, as well as application development for OpenShift using its most popular built-in DevOps tools, such as CI/CD with Jenkins and Source-to-Image (S2I) in conjunction with GitHub.

We will also learn about the most critical part for anyone who wants to actually implement OpenShift in their company: the design. We are going to show you how to properly design and implement OpenShift, examining the most common mistakes made by those who are just starting out with it.

This chapter focuses on container and Docker technologies. We will describe container concepts and Docker basics, from the architecture down to the low-level technologies involved, and we will learn how to use the Docker CLI to manage Docker containers and images. A significant part of the chapter is devoted to building and running Docker container images, and along the way you will be asked to develop a number of Dockerfiles and to containerize several applications.

In this chapter, we will look at the following:

  • Containers overview
  • Docker container architecture
  • Understanding Docker images and layers
  • Understanding Docker Hub and Docker registries
  • Installing and configuring Docker software
  • Using the Docker command line
  • Managing images via Docker CLI
  • Managing containers via Docker CLI
  • Understanding the importance of environment variables inside Docker containers
  • Managing persistent storage for Docker containers
  • Building a custom Docker image

Technical requirements

In this chapter, we are going to use the following technologies and software:

  • Vagrant
  • Bash Shell
  • GitHub
  • Docker
  • Firefox (recommended) or any other browser

The Vagrant installation and all the code we use in this chapter are located on GitHub at https://github.com/PacktPublishing/Learn-OpenShift.

Instructions on how to install and configure Docker are provided in this chapter as we learn.

Bash Shell will be used as a part of your virtual environment based on CentOS 7.

Firefox or any other browser can be used to navigate through Docker Hub.

As a prerequisite, you will need a stable internet connection from your laptop.

Containers overview

Traditionally, software applications were developed following a monolithic architecture approach, meaning all the services and components were locked to each other: you could not take out one part and replace it with something else. That approach changed over time into the N-tier approach, which in turn is one step on the road toward container and microservices architectures.

The major drawbacks of the monolithic architecture were its lack of reliability, scalability, and high availability. Monolithic applications were really hard to scale by their very nature, and their reliability was also questionable, because you could rarely operate and upgrade them without downtime. There was no way to efficiently scale out a monolithic application: you could not simply add another one, five, or ten instances back to back and let them coexist with each other.

We had monolithic applications in the past, but then people and companies started thinking about application scalability, security, reliability, and high availability (HA), and that is what led to the N-tier design. The N-tier design is a standard application design, such as a 3-tier web application with a web tier, an application tier, and a database backend. It is pretty standard. Now it is all evolving into microservices. Why do we need them? The short answer is better numbers: microservices are cheaper, much more scalable, and more secure. Containerized applications bring you to a whole new level, where you can benefit from automation and DevOps.

Containers are a new generation of virtual machines, and they bring software development to a whole new level. A container is an isolated set of rules and resources inside a single operating system, which means containers can provide the same benefits as virtual machines while using far less CPU, memory, and storage. There are several popular container providers, including LXC, rkt, and Docker, which is the one we are going to focus on in this book.

Container features and advantages

This architecture brings a lot of advantages to software development.

Some of the major advantages of containers are as follows:

  • Efficient hardware resource consumption
  • Application and service isolation
  • Faster deployment
  • Microservices architecture
  • The stateless nature of containers

Efficient hardware resource consumption

Whether you run containers natively on a bare-metal server or use virtualization techniques, containers allow you to utilize resources (CPU, memory, and storage) in a much more efficient manner. On a bare-metal server, containers let you run tens or even hundreds of the same or different containers, providing far better resource utilization than the usual one application per dedicated server. We have seen servers in the past whose utilization at peak times was only 3%, which is a waste of resources. And if you try to run several of the same or different applications on the same server without isolation, they are going to conflict with each other; even if they work, you will face a lot of problems during day-to-day operations and troubleshooting.

If you instead isolate these applications by introducing popular virtualization techniques such as KVM, VMware, XEN, or Hyper-V, you run into a different issue: overhead. In order to virtualize your app using any hypervisor, you need to install a guest operating system on top of the hypervisor's host OS, and that guest OS needs CPU and memory to function; for example, each VM has its own kernel and kernel space associated with it. A perfectly tuned container platform can give you up to four times more containers than standard VMs on the same hardware. That may be insignificant when you have five or ten VMs, but when we are talking about hundreds or thousands, it makes a huge difference.

Application and service isolation

Imagine a scenario where we have ten different applications hosted on the same server. Each application has a number of dependencies (packages, libraries, and so on). Updating an application usually involves updating the process and its dependencies, and if you update all the related dependencies, that will most likely affect the other applications and services, possibly breaking them. Sure, to a degree these issues are addressed by environment managers such as virtualenv for Python and rbenv/rvm for Ruby, and dependencies on shared libraries can be isolated via LD_LIBRARY_PATH, but what if you need different versions of the same package? Containers and virtualization solve that issue: both VMs and containers provide environment isolation for your applications.

But, in comparison to bare-metal application deployment, container technology (for example, Docker) provides an efficient way to isolate applications, along with their libraries and other resources, from each other. It not only lets these applications coexist on the same OS, but also provides efficient security, which is a must for every customer-facing and content-sensitive application. And it allows you to update and patch your containerized applications independently of each other.

Faster deployment

Using container images, discussed later in this book, allows us to speed up container deployment. We are talking about seconds to completely restart a container, versus minutes or tens of minutes for bare-metal servers and VMs. The main reason for this is that a container does not need to restart the whole OS; it just needs to restart the application itself.

Microservices architecture

Containers bring application deployment to a whole new level by introducing the microservices architecture. A monolithic or N-tier application usually has many different services communicating with each other, and containerizing your services allows you to break such an application down into multiple pieces and work with each of them independently. Let's say you have a standard application that consists of a web server, an application, and a database. You could run it on one or three different servers, in three different VMs, or in three simple containers, one for each part of the application. Each of these options requires a different amount of effort, time, and resources. Later in this book, you will see how simple this is to do using containers.

The stateless nature of containers

Containers are stateless, which means that you can bring containers up and down, create or destroy them at any time, and this will not affect your application performance. That is one of the greatest features of containers. We are going to delve into this later in this book.

Docker container architecture

Docker is one of the most popular application containerization technologies these days. So why do we want to use Docker when other container options are available? Because collaboration and contribution are key in the era of open source, and Docker has accomplished things in this area that other technologies have not.

For example, Docker partnered with other container developers, such as Red Hat, Google, and Canonical, to jointly work on its components. Docker also contributed its software container format and runtime to the Linux Foundation's Open Container Initiative. Docker has made containers very easy to learn about and use.

Docker architecture

As we mentioned already, Docker is the most popular container platform. It allows for creating, sharing, and running applications inside Docker containers. Docker separates running applications from the infrastructure. It allows you to speed up the application delivery process drastically. Docker also brings application development to an absolutely new level. In the diagram that follows, you can see a high-level overview of the Docker architecture:

Docker architecture

Docker uses a client-server type of architecture:

  • Docker server: This is a service running as a daemon in an operating system. This service is responsible for downloading, building, and running containers.
  • Docker client: The CLI tool is responsible for communicating with Docker servers using the REST API.

Docker's main components

Docker uses three main components:

  • Docker containers: Isolated user-space environments running the same or different applications and sharing the same host OS. Containers are created from Docker images.
  • Docker images: Docker templates that include application libraries and applications. Images are used to create containers, so you can bring up containers immediately. You can create and update your own custom images, as well as download pre-built images from Docker's public registry.
  • Docker registries: An image store. Docker registries can be public or private, meaning that you can work with images available over the internet or create your own registry for internal purposes. One popular public Docker registry is Docker Hub, discussed later in this chapter.

Linux containers

As mentioned in the previous section, Docker containers are secured and isolated from each other. In Linux, Docker containers use several standard features of the Linux kernel. This includes:

  • Linux namespaces: A feature of the Linux kernel that isolates resources from each other, allowing one set of Linux processes to see one group of resources while another set of processes sees a different group. There are several kinds of namespaces in Linux: Mount (mnt), Process ID (PID), Network (net), User ID (user), Control group (cgroup), and Interprocess Communication (IPC). The kernel can place specific system resources, normally visible to all processes, into a namespace. Inside a namespace, a process can see only the resources associated with other processes in the same namespace. You can associate a process or a group of processes with its own namespace or, with network namespaces, you can even move a network interface into a namespace. For example, two processes in two different mount namespaces may have different views of what the mounted root filesystem is. Each container can be associated with a specific set of namespaces, and these namespaces are used inside that container only.
  • Control groups (cgroups): These provide an effective mechanism for resource limitation. With cgroups, you can control and manage system resources per Linux process, increasing overall resource utilization efficiency. Cgroups allow Docker to control resource utilization per container.
  • SELinux: Security-Enhanced Linux (SELinux) is a mandatory access control (MAC) mechanism used for granular system access control, initially developed by the National Security Agency (NSA). It is an additional security layer for Debian- and RHEL-based distributions such as Red Hat Enterprise Linux, CentOS, and Fedora. Docker uses SELinux for two main reasons: to protect the host and to isolate containers from each other. Container processes run with limited access to system resources, using special SELinux rules.

The beauty of Docker is that it leverages the aforementioned low-level kernel technologies, but hides all complexity by providing an easy way to manage your containers.
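
None of this machinery is Docker-specific, so you can poke at it directly. The following sketch assumes only the standard iproute package on CentOS 7, and the namespace name demo is just an illustration; it creates a network namespace by hand and shows how little a process inside it can see, while the last line shows cgroups at work through Docker's real --memory flag:

$ sudo ip netns add demo              # create a new, empty network namespace
$ sudo ip netns exec demo ip link     # inside it, only an inactive loopback device exists
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN ...
$ sudo ip netns delete demo           # clean up
$ docker run -d --memory 256m httpd   # cgroups cap this container at 256 MB of RAM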

Understanding Docker images and layers

A Docker image is a read-only template used to build containers. An image consists of a number of layers that are combined into a single virtual filesystem accessible for Docker applications. This is achieved by using a special technique which combines multiple layers into a single view. Docker images are immutable, but you can add an extra layer and save them as a new image. Basically, you can add or change the Docker image content without changing these images directly. Docker images are the main way to ship, store, and deliver containerized applications. Containers are created using Docker images; if you do not have a Docker image, you need to download or build one.

Container filesystem

The container filesystem, used for every Docker image, is represented as a list of read-only layers stacked on top of each other. These layers eventually form a base root filesystem for a container; different storage drivers are used to make this happen. All changes to the filesystem of a running container are made in the topmost, writable layer, called the container layer. This means that several containers may share access to the same underlying Docker image layers while writing their changes locally, uniquely to each container. This process is shown in the following diagram:

Docker layers

Docker storage drivers

A Docker storage driver is the main component that enables and manages container images. Two main technologies are used for this: copy-on-write and stackable image layers. The storage driver is designed to handle the details of how these layers interact with each other. Several drivers are available; they all do pretty much the same job, but each one does it differently. The most common storage drivers are AUFS, Overlay/Overlay2, Devicemapper, Btrfs, and ZFS. All storage drivers can be categorized into three different types:

Storage driver category        Storage drivers
Union filesystems              AUFS, Overlay, Overlay2
Snapshotting filesystems       Btrfs, ZFS
Copy-on-write block devices    Devicemapper
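
To see which storage driver your own installation uses, check docker info. On the CentOS 7 setup used in this chapter the stock package typically defaults to devicemapper, while newer installations tend to default to overlay2, so treat the output below as illustrative:

$ docker info | grep "Storage Driver"
Storage Driver: devicemapper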

Container image layers

As previously mentioned, a Docker image contains a number of layers that are combined into a single filesystem using a storage driver. The layers (also called intermediate images) are generated when commands are executed during the Docker image build process. Usually, Docker images are created using a Dockerfile, the syntax of which will be described later. Each layer represents an instruction in the image's Dockerfile.

Each layer, except the very last one, is read-only:

Docker image layers

A Docker image usually consists of several layers, stacked one on top of the other. The top layer has read-write permissions, and all the remaining layers have read-only permissions. This concept is very similar to the copy-on-write technology. So, when you run a container from the image, all the changes are done to this top writable layer.
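
To make this concrete, here is a hypothetical two-instruction build; Dockerfile syntax is covered in detail later in the book, and the image name myhttpd and the index.html file are ours, not from the text. docker history then lists the layers, one per instruction, with the lower layers shared with the httpd base image:

$ echo "Hello layers" > index.html
$ cat Dockerfile
FROM httpd:latest
COPY index.html /usr/local/apache2/htdocs/
$ docker build -t myhttpd .
$ docker history myhttpd     # one line per layer, newest on top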

Docker registries

As mentioned earlier, a Docker image is the way to deliver applications. You can create a Docker image and share it with other users using a public or private registry service. A registry is a stateless, highly scalable server-side application which you can use to store and download Docker images; the Docker registry is an open source project under the permissive Apache license. Once an image is available on a registry service, other users can pull it and use it to create new Docker images or to run containers.

Docker supports two types of registry:

  • Public registry
  • Private registry

Public registry

You can start a container from an image stored in a public registry. By default, the Docker daemon looks for and downloads Docker images from Docker Hub, which is a public registry provided by Docker. However, many vendors add their own public registries to the Docker configuration at installation time. For example, Red Hat has its own proven and blessed public Docker registry which you can use to pull Docker images and to build containers.

Private registry

Some organizations or teams don't want to share their custom container images with everyone, for good reason. They still need a service for sharing Docker images, but for internal use only. In that case, a private registry service can be useful. A private registry can be installed and configured as a service on a dedicated server or a virtual machine inside your network.

You can easily install a private Docker registry by running a Docker container from a public registry image. The private Docker registry installation process is no different from running a regular Docker container with additional options.
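
As a sketch of that idea, the official registry image on Docker Hub can be run like any other container. The name registry and port 5000 are that image's conventional defaults, and the exact docker run options are only explained later in this chapter, so treat this as illustrative. Docker trusts localhost registries by default, so no TLS setup is needed for this local experiment:

$ docker run -d -p 5000:5000 --name registry registry:2
$ docker tag httpd localhost:5000/httpd   # re-tag an image under the local registry
$ docker push localhost:5000/httpd        # push it there instead of Docker Hub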

Accessing registries

A Docker registry is accessed via the Docker daemon service using a Docker client. The Docker command line uses a RESTful API to request process execution from the daemon. Most of these commands are translated into HTTP requests and may be transmitted using curl.

The process of using Docker registries works as follows: a developer creates a Docker image and pushes it to a private or public registry. Once the image is uploaded, it can immediately be used to run containers or to build other images.
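
Because a registry speaks plain HTTP(S), you can talk to its v2 REST API directly with curl. Assuming the local registry from the previous sketch is still running and holds the httpd image we pushed there, two of the standard endpoints look like this:

$ curl http://localhost:5000/v2/_catalog
{"repositories":["httpd"]}
$ curl http://localhost:5000/v2/httpd/tags/list
{"name":"httpd","tags":["latest"]}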

Docker Hub overview

Docker Hub is a cloud-based registry service that allows you to build and test your images, push those images, and link to Docker Cloud so you can deploy images to your hosts. Docker Hub provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.

Docker Hub is the public registry managed by the Docker project, and it hosts a large set of container images, including those provided by major open source projects, such as MySQL, Nginx, Apache, and so on, as well as customized container images developed by the community.

Docker Hub provides some of the following features:

  • Image repositories: You can find and download images managed by other Docker Hub users. You can also push or pull images from private image libraries you have access to.
  • Automated builds: You can automatically create new images when you make changes to a source code repository.
  • Webhooks: Action triggers that allow you to automate builds and other workflows when an image is pushed to a repository.
  • Organizations: The ability to create groups and manage access to image repositories.

In order to start working with Docker Hub, you need to log in to Docker Hub using a Docker ID. If you do not have one, you can create your Docker ID by following the simple registration process. It is completely free. The link to create your Docker ID if you do not have one is https://hub.docker.com/.

You can search for and pull Docker images from Docker Hub without logging in; however, to push images you must log in. Docker Hub gives you the ability to create public and private repositories: public repositories are available to anyone, and private repositories are restricted to a set of users or organizations.

Docker Hub contains a number of official repositories. These are public, certified repositories from different vendors and Docker contributors. It includes vendors like Red Hat, Canonical, and Oracle.

Docker installation and configuration

Docker software is available in two editions: Community Edition (CE) and Enterprise Edition (EE).

Docker CE is a good point from which to start learning Docker and using containerized applications. It is available on different platforms and operating systems. Docker CE comes with an installer so you can start working with containers immediately. Docker CE is integrated and optimized for infrastructure so you can maintain a native app experience while getting started with Docker.

Docker Enterprise Edition (EE) is a Container-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructures, both on-premises and in the cloud. In other words, Docker EE is similar to Docker CE, but it additionally comes with commercial support from Docker Inc.

Docker software supports a number of platforms and operating systems. The packages are available for most popular operating systems such as Red Hat Enterprise Linux, Fedora Linux, CentOS, Ubuntu Linux, Debian Linux, macOS, and Microsoft Windows.

Docker installation

The Docker installation process depends on the particular operating system, and in most cases it is well described on the official Docker portal: https://docs.docker.com/install/. As part of this book, we will be working with Docker software on CentOS 7.x. Docker installation and configuration on other platforms are not covered in this book; if you need to install Docker on another operating system, just visit the official Docker web portal.

Usually, the Docker node installation process looks like this:

  1. Installation and configuration of an operating system
  2. Docker packages installation
  3. Configuring Docker settings
  4. Running the Docker service

We assume that our readers have sufficient knowledge to install and configure a CentOS-based virtual machine (VM) or bare-metal host. If you do not know how to use Vagrant, please follow the guidelines at https://www.vagrantup.com/intro/getting-started/.

Once you have properly installed Vagrant on your system, just run vagrant init centos/7 followed by vagrant up. You can verify whether the VM is up with the vagrant status command, and finally you can ssh into the VM using the vagrant ssh command.
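
Collected in one place, that sequence looks like this (the first vagrant up also downloads the centos/7 box, so it takes a while):

$ vagrant init centos/7   # generate a Vagrantfile in the current directory
$ vagrant up              # boot the VM
$ vagrant status          # should report the machine as "running"
$ vagrant ssh             # open a shell inside the VM
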
Since Docker is supported on all the most popular OSes, you also have the option of installing Docker directly on your desktop OS. We advise you to use Vagrant or another virtualization provider, such as VMware or KVM, because we have done all the tests inside a virtual environment on CentOS 7. If you still want to install Docker on your desktop OS, follow this link: https://docs.docker.com/install/.

Docker CE is available on CentOS 7 from the standard repositories. The installation process comes down to installing the docker package:

# yum install docker -y
...
output truncated for brevity
...
Installed:
docker.x86_64 2:1.12.6-71.git3e8e77d.el7.centos.1
Dependency Installed:
...
output truncated for brevity
...

Once the installation is completed, you need to run the Docker daemon to be able to manage your containers and images. On RHEL7 and CentOS 7, this just means starting the Docker service like so:

# systemctl start docker
# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

You can verify that your Docker daemon works properly by showing Docker information provided by the docker info command:

# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
...
output truncated for brevity
...
Registries: docker.io (secure)

Docker configuration

Docker daemon configuration is managed by the Docker configuration file (/etc/docker/daemon.json), and Docker daemon startup options are usually controlled by the systemd unit named docker. On Red Hat-based operating systems, some configuration options are also available in /etc/sysconfig/docker and /etc/sysconfig/docker-storage. Modifying these files allows you to change Docker parameters such as the UNIX socket path, listening on TCP sockets, registry configuration, storage backends, and so on.
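
As one hedged example (the registry address is a placeholder, and the exact set of supported keys depends on your Docker version), a daemon.json that trusts a plain-HTTP internal registry and pins the storage driver might look like this; the daemon must be restarted for changes to take effect:

# cat /etc/docker/daemon.json
{
  "insecure-registries": ["registry.example.com:5000"],
  "storage-driver": "devicemapper"
}
# systemctl restart docker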

Using the Docker command line

In order to start using Docker CLI, you need to configure and bring up a Vagrant VM. If you are using macOS, the configuration process using Vagrant will look like this:

$ mkdir vagrant; cd vagrant
$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = 'node1.example.com'
  config.vm.network "private_network", type: "dhcp"
  config.vm.provision "shell", inline: "groupadd docker; usermod -aG docker vagrant; yum install docker -y; systemctl enable docker; systemctl start docker"
end
$ vagrant up
$ vagrant ssh

Using Docker man, help, info

The Docker daemon listens on unix:///var/run/docker.sock but you can bind Docker to another host/port or a Unix socket. The Docker client (the docker utility) uses the Docker API to interact with the Docker daemon.

The Docker client supports dozens of commands, each with numerous options, so an attempt to list them all would just result in a copy of the CLI reference from the official documentation. Instead, we will provide you with the most useful subsets of commands to get you up and running.

You can always check available man pages for all Docker sub-commands using:

$ man -k docker

This gives you a list of man pages for Docker and all the available sub-commands, any of which you can then open directly:

$ man docker
$ man docker-info
$ man Dockerfile

Another way to get information regarding a command is to use docker COMMAND --help:

# docker info --help
Usage: docker info
Display system-wide information
--help Print usage

The docker utility allows you to manage container infrastructure. All sub-commands can be grouped as follows:

Activity type            Related subcommands
Managing images          search, pull, push, rmi, images, tag, export, import, load, save
Managing containers      run, exec, ps, kill, stop, start
Building custom images   build, commit
Information gathering    info, inspect

Managing images using Docker CLI

The first step in running and using a container on your server or laptop is to search for and pull a Docker image from a Docker registry using the docker search command.

Let's search for the web server container. The command to do so is:

$ docker search httpd
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
httpd ... 1569 [OK]
hypriot/rpi-busybox-httpd ... 40
centos/httpd 15 [OK]
centos/httpd-24-centos7 ... 9

Alternatively, we can go to https://hub.docker.com/ and type httpd in the search window. It will give us something similar to the docker search httpd results:

Docker Hub search results

Once the container image is found, we can pull this image from the Docker registry in order to start working with it. To pull a container image to your host, you need to use the docker pull command:

$ docker pull httpd

The output of the preceding command is as follows:

Note that Docker uses concepts from union filesystem layers to build Docker images. This is why you can see seven layers being pulled from Docker Hub. One stacks up onto another, building a final image.

By default, Docker will try to pull the image with the latest tag, but we can also download an older, more specific version of an image we are interested in using different tags. The best way to quickly find available tags is to go to https://hub.docker.com/, search for the specific image, and click on the image details:

Docker Hub image details

There we are able to see all the image tags available for us to pull from Docker Hub. There are also ways to achieve the same goal using the docker search CLI command, which we are going to cover later in this book. For now, let's pull a specific version by its tag:

$ docker pull httpd:2.2.29

The output of the preceding code should look something like the following:

You may notice that the download time for the second image was significantly shorter than for the first. This is because the first image we pulled (httpd:latest) has most of its layers in common with the second image (httpd:2.2.29), so there is no need to download those layers again. This is very useful and saves a lot of time in large environments.

Working with images

Now we want to check the images available on our local server. To do this, we can use the docker images command:

$ docker images

The output of the preceding command will be as shown in the following screenshot:


If we downloaded the wrong image, we can always delete it from the local server using the docker rmi (remove image) command. In our case, we have two versions of the same image, so we specify a tag to identify the image we want to delete:

$ docker rmi httpd:2.2.29

The output of the preceding command will be as shown in the following screenshot:

At this point, we have only one image left, which is httpd:latest:

$ docker images

The output of the preceding command will be as shown in the following screenshot:


Saving and loading images

The Docker CLI allows us to export and import Docker images and container layers using the export/import or save/load Docker commands. The difference is that save/load works with images, including their metadata, whereas export/import uses only container layers and does not include any image metadata, such as the name, tags, and so on. In most cases, the save/load combination is the more relevant one, and it works properly for images without special needs. The docker save command packs the layers and metadata of all the chains required to build the image; you can then load this saved image chain into another Docker instance and create containers from it.

The docker export command fetches the whole container, like a snapshot of a regular VM. It saves the OS, of course, but also any changes you made and any data files written during the container's life. This is more like a traditional backup:

$ docker save httpd -o httpd.tar

$ ls -l httpd.tar

To load the image back from the file, we can use the docker load command. Before we do that, though, let's remove the httpd image from the local repository first:

$ docker rmi httpd:latest

The output of the preceding command will be as shown in the following screenshot:

We verify that we do not have any images in the local repository:

 $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

Now load the image file we previously saved with the docker save command. Just as docker export pairs with docker import, docker load pairs with docker save, and it is used for loading a saved image archive, with all its intermediate layers and metadata, into the Docker cache:

$ docker load -i httpd.tar

The output of the preceding command will be as shown in the following screenshot:

Check the local Docker images with the docker images command:

$ docker images

The output of the preceding command will be as shown in the following screenshot:
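
For comparison, here is what the export/import pair from the start of this section looks like in practice. This is a sketch: docker run is only covered in the next section, and the container ID shown is a placeholder for whatever ID you get back:

$ docker run -d httpd                           # start a throwaway container
0123456789ab
$ docker export 0123456789ab -o httpd-fs.tar    # flattened filesystem, no image metadata
$ docker import httpd-fs.tar myhttpd:imported   # comes back as a single-layer image
$ docker images myhttpd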

Uploading images to the Docker registry

Now we know how to search, pull, remove, save, load, and list available images. The last piece we are missing is how to push images back to Docker Hub or a private registry.

To upload an image to Docker Hub, we need to do a few tricks and follow these steps:

  1. Log in to Docker Hub:
$ docker login
Username: #Enter your username here
Password: #Enter your password here
Login Succeeded
  2. Copy the Docker image you want to push to a different path in the Docker repository, prefixed with your Docker Hub username (replace flashdumper with your own):
$ docker tag httpd:latest flashdumper/httpd:latest
  3. Finally, push the copied image back to Docker Hub:
$ docker push flashdumper/httpd:latest

The output of the preceding command will be as shown in the following screenshot:

Now the image is pushed to your Docker Hub account and is available for anyone to download. You can verify this with a search:

$ docker search flashdumper/*

The output of the preceding command will be as shown in the following screenshot:

You can check the same result using a web browser. If you go to https://hub.docker.com/ you should be able to see this httpd image available under your account:

Docker Hub account images

Managing containers using Docker CLI

The next step is to actually run a container from an image we pulled from Docker Hub or a private registry in the previous section. We are going to use the docker run command for this. Before we do that, let's check whether we have any containers running already by using the docker ps command:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Run a container with the docker run command:

$ docker run httpd

The output of the preceding command will be as shown in the following screenshot:

The container is running, but it occupies our terminal in the foreground, so we cannot continue working. The only way to escape it is to send a TERM signal (Ctrl + C), killing the container.

Docker ps and logs

Run the docker ps command to show that there are no running containers:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Run docker ps -a to show both running and stopped containers:

$ docker ps -a

The output of the preceding command will be as shown in the following screenshot:

There are a few things to note here. The STATUS field says that container 5e3820a43ffc exited about one minute ago. In order to get container log information, we can use the docker logs command:

$ docker logs 5e3820a43ffc

The output of the preceding command will be as shown in the following screenshot:

The last message says caught SIGTERM, shutting down. It happened after we pressed Ctrl + C. In order to run a container in background mode, we can use the -d option with the docker run command:

$ docker run -d httpd
5d549d4684c8e412baa5e30b20697b72593d87130d383c2273f83b5ceebc4af3

Docker generates a random 64-character ID, the first 12 characters of which are used as the short container ID. Along with the generated ID, a random container name is also generated.
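
If you would rather pick the name yourself, docker run accepts a --name option (the name webserver below is just an example), which makes later commands easier to type than a random ID:

$ docker run -d --name webserver httpd
$ docker ps --format "{{.ID}}: {{.Names}}"   # show short IDs with their names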

Run docker ps to verify the container ID, name, and status:

$ docker ps

The output of the preceding command will be as shown in the following screenshot:

Executing commands inside a container

From the output, we can see that the container status is UP. Now we can execute some commands inside the container using the docker exec command with different options:

$ docker exec -i 00f343906df3 ls -l /
total 12
drwxr-xr-x. 2 root root 4096 Feb 15 04:18 bin
drwxr-xr-x. 2 root root 6 Nov 19 15:32 boot
drwxr-xr-x. 5 root root 360 Mar 6 21:17 dev
drwxr-xr-x. 42 root root 4096 Mar 6 21:17 etc
drwxr-xr-x. 2 root root 6 Nov 19 15:32 home
...
Output truncated for brevity
...

The -i (--interactive) option lets you run a command inside a container without dropping into the container itself. But we can easily override this behavior and enter the container by combining the -i and -t (--tty) options (or just -it):

$ docker exec -it 00f343906df3 /bin/bash
root@00f343906df3:/usr/local/apache2#

This drops us into the container's bash CLI. From here, we can execute other general Linux commands, a trick that is very useful for troubleshooting. To exit the container console, just type exit or press Ctrl + D.

Starting and stopping containers

We can also stop and start running containers with the docker stop and docker start commands.

Enter the following command to stop the container:

$ docker stop 00f343906df3
00f343906df3

Enter the following command to start the container:

$ docker start 00f343906df3
00f343906df3

Docker port mapping

In order to actually benefit from the container, we need to make it publicly accessible from the outside. This is where the -p option with a few arguments comes in when running the docker run command:

$ docker run -d -p 8080:80 httpd
3b1150b5034329cd9e70f90ee21531b8b1ab1d4a85141fd3a362cd40db80e193

The -p option maps container port 80 to server port 8080. Verify that you have an httpd container exposed and a web server running:

$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>
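
You can ask Docker for the active mappings of a running container with docker port, and -p also accepts a host address if you want to restrict where the port is exposed. Both are shown below as a sketch; the first output reflects the container we just started, and the second command uses a different host port to avoid a conflict:

$ docker port 3b1150b50343
80/tcp -> 0.0.0.0:8080
$ docker run -d -p 127.0.0.1:8081:80 httpd   # bind to loopback only, on a free host port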

Inspecting the Docker container

While the container is running, we can inspect its parameters by using the docker inspect command. The output is provided in JSON format and it gives us a very comprehensive output:

$ docker inspect 00f343906df3
[
{
"Id": "00f343906df3f26c24e02cd61d6a37bbc36106b3b0372073673c2983cb6f",
...
output truncated for brevity
...
}
]
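
Since the JSON is large, docker inspect supports a --format option that uses Go template syntax to extract a single field. The fields below are standard, though the values on your system will differ:

$ docker inspect --format '{{.State.Status}}' 00f343906df3
running
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' 00f343906df3
172.17.0.2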

Removing containers

In order to delete a container, you can use the docker rm command. If the container you want to delete is running, you can stop and delete it or use the -f option and it will do the job:

$ docker rm 3b1150b50343
Error response from daemon: You cannot remove a running container 3b1150b5034329cd9e70f90ee21531b8b1ab1d4a85141fd3a362cd40db80e193. Stop the container before attempting removal or force remove

Let's try using the -f option:

$ docker rm -f 3b1150b50343

Another trick you can use to delete all containers, both stopped and running, is the following command:

$ docker rm -f $(docker ps -qa)
830a42f2e727
00f343906df3
5e3820a43ffc
419e7ce2567e

Verify that all the containers are deleted:

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
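
If you want a gentler cleanup that removes only stopped containers and leaves running ones untouched, docker ps can filter by status (a variant of the command above, not from the original text):

$ docker rm $(docker ps -qa -f status=exited)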

Key benefits

  • Gain hands-on experience of working with Kubernetes and Docker
  • Learn how to deploy and manage applications in OpenShift
  • Get a practical approach to managing applications on a cloud-based platform
  • Explore multi-site and HA architectures of OpenShift for production

Description

Docker containers transform application delivery technologies to make them faster and more reproducible, and to reduce the amount of time wasted on configuration. Managing Docker containers in a multi-node or multi-datacenter environment is a big challenge, which is why container management platforms are required. OpenShift is a new generation of container management platforms, built on top of both Docker and Kubernetes. It brings additional functionality to the table, something that is lacking in Kubernetes, and this functionality significantly helps software development teams bring their development processes to a whole new level. In this book, we'll start by explaining container architecture and giving overviews of Docker and CRI-O. Then, we'll look at container orchestration and Kubernetes. We'll cover OpenShift installation and its basic and advanced components. Moving on, we'll dive deep into concepts such as deploying applications in OpenShift. You'll learn how to set up an end-to-end delivery pipeline while working with applications in OpenShift as a developer or in DevOps. Finally, you'll discover how to properly design OpenShift for production environments. This book gives you hands-on experience of designing, building, and operating OpenShift Origin 3.9, as well as building new applications or migrating existing applications to OpenShift.

What you will learn

  • Understand the core concepts behind containers and container orchestration tools
  • Understand Docker, Kubernetes, and OpenShift, and their relation to CRI-O
  • Install and work with Kubernetes and OpenShift
  • Understand how to work with persistent storage in OpenShift
  • Understand basic and advanced components of OpenShift, including security and networking
  • Manage deployment strategies and application migration in OpenShift
  • Understand and design OpenShift high availability


Table of Contents

Preface
1. Containers and Docker Overview
2. Kubernetes Overview
3. CRI-O Overview
4. OpenShift Overview
5. Building an OpenShift Lab
6. OpenShift Installation
7. Managing Persistent Storage
8. Core OpenShift Concepts
9. Advanced OpenShift Concepts
10. Security in OpenShift
11. Managing OpenShift Networking
12. Deploying Simple Applications in OpenShift
13. Deploying Multi-Tier Applications Using Templates
14. Building Application Images from Dockerfile
15. Building PHP Applications from Source Code
16. Building a Multi-Tier Application from Source Code
17. CI/CD Pipelines in OpenShift
18. OpenShift HA Architecture Overview
19. OpenShift HA Design for Single and Multiple DCs
20. Network Design for OpenShift HA
21. What is New in OpenShift 3.9?
22. Assessments
23. Other Books You May Enjoy

