Chapter 10. Docker

In the previous chapters, we have looked at the components that are used to launch instances in OpenStack, with the end result being virtual machines running on the compute nodes. Containers have become a very popular alternative to virtual machines for certain workloads. In this chapter, we will work through the configuration changes required to deploy Docker containers using OpenStack.

Containers


A container shares the kernel and system resources of its host. This is different from a virtual machine, which has its own kernel and related resources. Containers do, however, have their own filesystem tree. A very basic use of containers is to separate applications and their runtimes from each other; there are more advanced uses that we are not able to explore here. What is important here is that OpenStack can be used to manage these containers. Containers are more lightweight than virtual machines but still provide a level of isolation for deployments, and conceptually they can be managed very much like virtual machines.
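As a quick illustration of the shared kernel, the following commands (a hypothetical example; the centos image and the host prompt are placeholders, and Docker is assumed to already be installed on the host) print the same kernel release from the host and from inside a container:

host# uname -r
host# docker run --rm centos uname -r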

OpenStack integration


To integrate Docker with OpenStack, a compute node's virtualization driver needs to be changed from libvirt to docker. This converts that compute node's entire virtualization capacity into container capacity; Docker containers and virtual machines cannot coexist on the same compute node. They can, however, coexist in the same OpenStack cluster as long as there are at least two compute nodes: one for containers and one for virtual machines. For now, you will convert your existing compute node to docker, and in Chapter 11, Scaling Horizontally, you will attach another compute node to re-enable virtual machine management.

Nova compute configuration


To configure a compute node to use Docker, a Docker driver needs to be installed on the compute node and set as Nova compute's virtualization driver. The steps we will walk through are documented in the GitHub repository for the nova-docker driver at https://github.com/openstack/nova-docker.

The first task is to delete any virtual machines that are currently running. Once the following configuration is completed, OpenStack will no longer be able to manage the virtual machines on this compute node, so go ahead and delete any running instances you have.

Next, let's walk through the setup instructions. Start by logging into a compute node, installing git, docker, and Docker Python support, cloning the nova-docker GitHub repository, and running setup.py to install the driver:

compute# yum install -y git docker python-docker-py
compute# systemctl enable docker
compute# git clone https://github.com/openstack/nova-docker
compute# cd nova...
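The repository's README covers the rest of the installation. As a rough sketch of the remaining steps (based on the nova-docker documentation; verify the exact file names and option values against the README for your release), the driver is installed with setup.py from inside the cloned repository, its rootwrap filter is copied into place, and the services are started:

compute# python setup.py install
compute# cp etc/nova/rootwrap.d/docker.filters /etc/nova/rootwrap.d/
compute# systemctl start docker
compute# systemctl restart openstack-nova-compute

The driver itself is selected in /etc/nova/nova.conf:

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver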

Glance configuration


When Docker boots a container, it needs to start the container from a Docker image, similar to how a virtual machine is started from a disk image. Nova will need to pull this image from Glance, which means that Glance needs to know how to manage Docker images. In Mitaka, Glance knows how to handle Docker containers by default. This can be confirmed by looking in the /etc/glance/glance-api.conf file and verifying that docker is listed in the container_formats list. Note that the container in container_formats does not refer to a Docker container; it refers to the image container formats that Glance is able to store.
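As a rough sketch of what to look for (the section name is an assumption here and may differ between releases), the relevant line in glance-api.conf should include docker among the supported container formats:

[image_format]
container_formats = ami,ari,aki,bare,ovf,ova,docker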

Importing a Docker image to Glance

Now that it has been confirmed that Glance can manage Docker images, one must be imported into Glance and made available to boot a container instance. To do this, you use Docker to pull images and then import them into Glance so that they are available to Nova at spawn time. Docker needs to be installed...
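As a hedged sketch of the usual nova-docker import workflow (the centos image name is an example, and the commands assume Docker and the Glance client are available on the node they are run from), an image is pulled with Docker and then piped into Glance using the docker container format and the raw disk format:

undercloud# docker pull centos
undercloud# docker save centos | glance image-create --container-format=docker --disk-format=raw --name centos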

Launching a Docker instance


Now that there is a compute node configured to use the docker driver and there is a Glance image available to launch from, you are ready to boot a Docker container with OpenStack. The Docker integration that you have completed enables a Docker instance to be launched the same way that a virtual machine instance is launched:

undercloud# openstack server create --flavor 1 --image centos --key-name openstack --nic net-id={internal net-id} "My First Docker Instance"

The container will be spawned on the compute node that supports Docker and will become active. Once it is active, there is the question of what to do with it. Generally, a container is launched with the intent to run a specific process. As an example, we can tell the Docker image to run the SSH daemon on boot as its process. Doing this, the container can be connected to over SSH, similar to a virtual machine:

undercloud# glance image-update --property os_command_line='/usr/sbin/sshd -D' centos
undercloud# openstack...
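As a hedged sketch of how this usually finishes (the flavor, key name, network ID, instance name, and login user are placeholders carried over from the earlier example, and key injection depends on how the image was built), the updated image is launched again and the container is reached over SSH at the address it is assigned:

undercloud# openstack server create --flavor 1 --image centos --key-name openstack --nic net-id={internal net-id} "My SSH Docker Instance"
undercloud# openstack server list
undercloud# ssh root@{instance ip}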

Summary


In this chapter, we have explored the integration of Docker into an OpenStack cluster. In a cluster that has multiple compute nodes, Docker instances and virtual machine instances can coexist with one another. Remember that they cannot coexist on the same compute node; a single compute node is dedicated either to virtual machines or to Docker containers.

In Chapter 11, Scaling Horizontally, we will add a second compute node to restore virtual machine capability to your OpenStack deployment. Once the new node is added, you will be able to run both Docker containers and virtual machines in your OpenStack cluster.
