In the previous chapters, we looked at the components used to launch virtual instances in OpenStack. So far, those instances have all been virtual machines running on the compute nodes. Containers have become a very popular alternative to virtual machines for certain workloads. In this chapter, we will work through the configuration changes required to deploy Docker containers using OpenStack.
A container shares the kernel and system resources of its host. This differs from a virtual machine, which has its own kernel and related resources. Containers do, however, have their own filesystem tree. A basic use of containers is to isolate applications and their runtimes from each other; there are more advanced uses that we are not able to explore here. What is important here is that OpenStack can be used to manage these containers. Containers are more lightweight than virtual machines but still provide a level of isolation for deployments, and conceptually they can be managed very much like a virtual machine.
To integrate Docker with OpenStack, a compute node's virtualization driver needs to be changed from libvirt to docker. This converts that entire compute node's worth of virtualization capacity into container capacity: Docker containers and virtual machines cannot coexist on the same compute node. They can, however, coexist in the same OpenStack cluster as long as there are at least two compute nodes, one for containers and one for virtual machines. For now, you will convert your existing compute node to docker; then, in Chapter 11, Scaling Horizontally, you will attach another compute node to re-enable virtual machine management.
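The driver switch itself amounts to a single option in the compute node's Nova configuration. The following is a sketch based on the nova-docker documentation; the driver class name comes from that project, but the exact file layout may differ across releases:

```ini
# /etc/nova/nova.conf on the compute node (illustrative)
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver
```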
To configure a compute node to use Docker, a driver needs to be installed on the compute node for Nova to use as its compute driver. Directions for the full setup are in the documentation in the GitHub repository for the nova-docker driver at https://github.com/openstack/nova-docker.
The first task is to delete any running virtual machines. Once the following configuration is completed, OpenStack will no longer be able to manage the virtual machines running on the compute node, so go ahead and delete any running instances you have.
Next, let's walk through the setup instructions. Start by logging into a compute node, installing git, docker, and Docker Python support, cloning the nova-docker GitHub repository, and running setup.py to install the driver:
compute# yum install -y git docker python-docker-py
compute# systemctl enable docker
compute# git clone https://github.com/openstack/nova-docker
compute# cd nova...
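The remaining compute-node steps can be sketched as follows, based on the nova-docker README. The rootwrap filter path and the openstack-nova-compute service name are assumptions for an RDO-style installation and may differ on your distribution:

```shell
# Install the driver from the cloned repository.
compute# python setup.py install
# Allow Nova's rootwrap to run the Docker-related commands the driver needs.
compute# cp etc/nova/rootwrap.d/docker.filters /etc/nova/rootwrap.d/
# Start Docker and restart Nova compute so it picks up the new driver.
compute# systemctl start docker
compute# systemctl restart openstack-nova-compute
```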
When Docker boots a container, it needs to start the container from a Docker image, similar to how it needs to start a virtual machine from a disk image. Nova will need to pull this image from Glance, which means that Glance needs to know how to manage Docker images. In Mitaka, Glance knows how to handle them by default. This can be confirmed by looking in the /etc/glance/glance-api.conf file and verifying that docker is listed in the container_formats list. Note that the container in container_formats here does not refer to a Docker container; it relates to the image storage formats that Glance can handle.
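For reference, the relevant option looks something like the following sketch; the section name and the exact list of defaults are assumptions that vary by Glance release, but docker should appear in the list:

```ini
# /etc/glance/glance-api.conf (illustrative)
[image_format]
container_formats = ami,ari,aki,bare,ovf,ova,docker
```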
Now that it has been confirmed that Glance can manage Docker images, one must be imported into Glance and made available to boot a container instance. To do this, you use Docker to pull images and then import them into Glance so that they are available to Nova at spawn time. Docker needs to be installed...
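The pull-and-import flow can be sketched as below, following the pattern in the nova-docker documentation. The image name centos matches the image used later in this chapter; the exact glance CLI flags may vary by release:

```shell
# Pull the image from the Docker registry.
undercloud# docker pull centos
# Export it and import the tarball into Glance with the docker container format.
undercloud# docker save centos | glance image-create --container-format=docker --disk-format=raw --name centos
```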
Now that there is a compute node configured to use the docker driver and a Glance image available to launch from, you are ready to boot a Docker container with OpenStack. The Docker integration that you have completed enables a Docker instance to be launched the same way that a virtual machine instance is launched:
undercloud# openstack server create --flavor 1 --image centos --key-name openstack --nic net-id={internal net-id} "My First Docker Instance"
The container will be spawned on the compute node that supports Docker and will become active. Once it is active, the question is what to do with it. Generally, a container is launched with the intent of running a specific process. As an example, we can tell the Docker image to run the SSH daemon as its process on boot. Doing this, the container can be connected to over SSH, similar to a virtual machine:
undercloud# glance image-update --property os_command_line='/usr/sbin/sshd -D' centos
undercloud# openstack...
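Because the docker driver talks to the Docker daemon on the compute node, instances launched through Nova can also be seen with Docker's own tooling. As a sanity check, you can list them directly on the compute node:

```shell
# List the running containers that Nova has spawned through Docker.
compute# docker ps
```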
In this chapter, we have explored the integration of Docker into an OpenStack cluster. In a cluster that has multiple compute nodes, Docker instances and virtual machine instances can coexist with one another, but never on the same compute node: a single compute node is dedicated either to virtual machines or to Docker containers.
In Chapter 11, Scaling Horizontally, we will add a second compute node to restore virtual machine capability to your OpenStack deployment. Once the new node is added, you will be able to run both Docker containers and virtual machines in your OpenStack cluster.