OpenStack Administration with Ansible 2 - Second Edition

By Walter Bentley

About this book

Most organizations are seeking methods to improve business agility because they have realized just having a cloud is not enough. Being able to improve application deployments, reduce infrastructure downtime, and eliminate daily manual tasks can only be accomplished through some sort of automation.

We start with a brief overview of OpenStack and Ansible 2 and highlight some best practices. Each chapter will provide an introduction to handling various Cloud Operator administration tasks such as managing containers within your cloud; setting up/utilizing open source packages for monitoring; creating multiple users/tenants; taking instance snapshots; and customizing your cloud to run multiple active regions. Each chapter will also supply a step-by-step tutorial on how to automate these tasks with Ansible 2.

Packed with real-world OpenStack administrative tasks, this book will walk you through working examples and explain how these tasks can be automated using one of the most popular open source automation tools on the market today.

Publication date:
December 2016


Chapter 1. Introduction to OpenStack

This chapter will serve as a high-level overview of OpenStack and the projects that make up this cloud platform. Laying a clear foundation for OpenStack is very important when describing the OpenStack components, concepts, and verbiage. Once the overview is covered, we will transition into discussing the core features and benefits of OpenStack. Finally, the chapter will finish up with two working examples of how you can consume the OpenStack services via the application program interface (API) and the command-line interface (CLI).

  • An overview of OpenStack

  • Reviewing the OpenStack services

  • OpenStack supporting components

  • Features and benefits

  • Working examples: listing the services


An overview of OpenStack

In the simplest definition possible, OpenStack can be described as an open source cloud operating platform that can be used to control large pools of compute, storage, and networking resources throughout a data center, all managed through a single interface via an API, a CLI, and/or a web graphical user interface (GUI) dashboard. The power OpenStack offers administrators is the ability to control all of those resources, while still empowering cloud consumers to provision those very same resources through self-service models. OpenStack was built in a modular fashion; the platform is made up of numerous components. Some of those components are considered core services and are required in order to have a functioning cloud, whereas the other services are optional and only required if they fit into your personal use case.

The OpenStack Foundation

Back in early 2010, Rackspace was just a technology hosting company focused on providing service and support through an offering named Fanatical Support. The company decided to create an open source cloud platform, and in July 2010 it partnered with NASA to launch the OpenStack project.

The OpenStack Foundation is made up of voluntary members governed by an appointed board of directors and project-based technical committees. Collaboration occurs around a six-month, time-based major code release cycle. The release names run in alphabetical order and reference the region encompassing the location where the corresponding OpenStack Summit is held. Each release cycle incorporates an OpenStack Design Summit, which is meant to build collaboration among OpenStack operators and consumers, allowing project developers to hold live working sessions and also agree on release items.

As an OpenStack Foundation member, you can take an active role in helping develop any of the OpenStack projects. There is no other cloud platform that allows for such participation.

To learn more about the OpenStack Foundation, you can go to the OpenStack website.


Reviewing the OpenStack services

Getting to the meat and potatoes of what makes up OpenStack as a project means reviewing the services that make up this cloud ecosystem. One thing to keep in mind in reference to the OpenStack services is that each service has an official name and a code name associated with it. The use of the code names has become very popular in the community, and most documentation refers to the services in that manner. Becoming familiar with the code names is important and will ease the adoption process.

The other thing to keep in mind is that each service is developed as an API-driven RESTful web service. All actions are executed via that API, enabling ultimate consumption flexibility. Even when using the CLI or the web-based GUI, API calls are being executed and interpreted behind the scenes.

As of the Newton release, the OpenStack project consists of six of what are called Core Services and thirteen Optional Services. The services will be reviewed in order of release to show an overall services timeline. That timeline shows the natural progression of the OpenStack project overall, and also how it is now surely enterprise ready.

A great recent addition to the OpenStack community is the creation of Project Navigator. Project Navigator is intended to be a living guide for consumers of the OpenStack projects, aimed at sharing each service's community adoption, maturity, and age. Personally, I have found this resource to be very useful and informative. The navigator can be found on the OpenStack Foundation website.

OpenStack Compute (code-name Nova)

Integrated in release: Austin

Core Service

This was one of the first services, and it is still the most important part of the OpenStack platform. Nova is the component that provides the bridge to the underlying hypervisor used to manage the computing resources.


One common misunderstanding is that Nova is a hypervisor in itself, which is simply not true. Nova is a hypervisor manager of sorts, and it is capable of supporting many different types of hypervisors.

Nova is responsible for scheduling instance creation, sizing options for the instance, managing the instance location, and, as mentioned before, keeping track of the hypervisors available to the cloud environment. It also handles the functionality of segregating your cloud into isolation groups called cells, regions, and availability zones.

OpenStack Object Storage (code-name Swift)

Integrated in release: Austin

Core Service

This service was also one of the first services of the OpenStack platform. Swift is the component that provides Object Storage as a Service to your OpenStack cloud, capable of storing petabytes of data; in turn, it adds a highly available, distributed, and eventually consistent object/blob store. Object storage is intended to be a cost-effective storage solution for static data, such as images, backups, archives, and static content. The objects can then be streamed over standard web protocols (HTTP/S) to or from the object server to the end user initiating the web request. The other key feature of Swift is that all data is automatically made highly available, as it is replicated across the cluster. The storage cluster is meant to scale horizontally simply by adding new servers.

OpenStack Image Service (code-name Glance)

Integrated in release: Bexar

Core Service

This service was introduced during the second OpenStack release, and it is responsible for managing/registering/maintaining server images for your OpenStack cloud. It includes the capability to upload or export OpenStack-compatible images and to store instance snapshots for use as a template/backup later. Glance can store those images in a variety of locations, such as locally and/or on distributed storage, for example, object storage. Most Linux distributions already have OpenStack-compatible images available for download. You can also create your own server images from existing servers. Support exists for multiple image formats, including Raw, VHD, qcow2, VMDK, OVF, and VDI.

OpenStack Identity (code-name Keystone)

Integrated in release: Essex

Core Service

This service was introduced during the fifth OpenStack release. Keystone is the authentication and authorization component built into your OpenStack cloud. Its key role is to handle the creation, registry, and management of users, tenants, and all the other OpenStack services. Keystone is the first component to be installed when standing up an OpenStack cloud. It has the capability to connect to external directory services such as LDAP. Another key feature of Keystone is that it is built on role-based access control (RBAC), allowing cloud operators to grant cloud consumers distinct role-based access to individual service features.

OpenStack Dashboard (code-name Horizon)

Integrated in release: Essex

This was the second service to be introduced in the fifth OpenStack release. Horizon provides cloud operators and consumers with a web-based GUI to control their compute, storage, and network resources. The OpenStack dashboard runs on top of Apache and the Django web framework, making it very easy to integrate into and extend to meet your personal use case. On the backend, Horizon uses the native OpenStack APIs. The basis behind Horizon was to provide cloud operators with a quick overall view of the state of their cloud, and cloud consumers with a self-service provisioning portal to the cloud's resources designated to them.


Keep in mind that Horizon can handle approximately 70% of the overall available OpenStack functionality. To leverage 100% of the OpenStack functionality, you would need to use the APIs directly and/or the CLI for each service.

OpenStack Networking (code-name Neutron)

Integrated in release: Folsom

Core Service

This service is probably the second most powerful component within your OpenStack cloud, next to Nova.

OpenStack Networking is intended to provide a pluggable, scalable and API-driven system for managing networks and IP addresses.

This quote was taken directly from the OpenStack Networking documentation, as it best reflects exactly the purpose behind Neutron. Neutron is responsible for creating your virtual networks within your OpenStack cloud. This entails the creation of virtual networks, routers, subnets, firewalls, load balancers, and similar network functions. Neutron was developed with an extension framework, which allows for the integration of additional network components (physical network device control) and models (flat, Layer 2, and/or Layer 3 networks). Various vendor-specific plugins and adapters have been created to work in line with Neutron. This service adds to the self-service aspect of OpenStack, removing the network from being a roadblock to consuming your cloud.

With Neutron being one of the most advanced and powerful components within OpenStack, a whole book was dedicated to it.

OpenStack Block Storage (code-name Cinder)

Integrated in release: Folsom

Core Service

Cinder is the component that provides Block Storage as a Service to your OpenStack cloud by leveraging local disks or attached storage devices. This translates into persistent block-level storage volumes available to your instances. Cinder is responsible for managing and maintaining the block volumes created, attaching/detaching those volumes, and also creating backups of those volumes. One of the highly notable features of Cinder is its ability to connect to multiple types of backend shared storage platforms at the same time. This capability spectrum also spans all the way down to being able to leverage simple Linux server storage as well. As an added bonus, quality of service (QoS) roles can be applied to the different types of backends, extending the ability to use block storage devices to meet various application requirements.

OpenStack Orchestration (code-name Heat)

Integrated in release: Havana

This was one of the two services introduced in the eighth OpenStack release. Heat provides orchestration capability over your OpenStack cloud resources. It is described as a mainline project of the OpenStack orchestration program. This implies that additional automation functionality is in the pipeline for OpenStack.

The built-in orchestration engine is used to automate the provisioning of applications and their components, known as a stack. A stack might include instances, networks, subnets, routers, ports, router interfaces, security groups, security group rules, auto-scaling rules, and so on. Heat utilizes templates to define a stack, written in a standard markup format, YAML. You will hear these templates referred to as HOT (Heat Orchestration Template) templates.
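As a sketch of what such a template looks like, here is a minimal HOT template that launches a single instance. The resource name my_instance and the cirros/m1.tiny defaults are illustrative assumptions; substitute an image and flavor that actually exist in your cloud:

```yaml
heat_template_version: 2016-10-14

description: Minimal single-instance stack

parameters:
  image_name:
    type: string
    default: cirros
  flavor_name:
    type: string
    default: m1.tiny

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }

outputs:
  instance_ip:
    description: First IP address of the instance
    value: { get_attr: [my_instance, first_address] }
```

A template like this would typically be launched with a command such as openstack stack create -t my_stack.yaml my_stack.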

OpenStack Telemetry (code-name Ceilometer)

Integrated in release: Havana

This is the second of the two services introduced in the eighth OpenStack release. Ceilometer collects cloud usage and performance statistics into one centralized data store. This capability becomes a key component for a cloud operator, as it gives clear metrics into the overall cloud, which can be used to make scaling decisions.


You have the option of choosing the data store backend for Ceilometer. Options include MongoDB, MySQL, PostgreSQL, HBase, and DB2.

OpenStack Database (code-name Trove)

Integrated in release: Icehouse

Trove is the component that provides Database as a Service to your OpenStack cloud. This capability includes providing scalable and reliable relational and nonrelational database engines. The goal behind this service is to remove the burden of needing to understand database installation and administration. With Trove, cloud consumers can provision database instances just by leveraging the service's API. Trove supports multiple single-tenant databases within a Nova instance.


The data store types currently supported are MySQL, MongoDB, Cassandra, Redis, and CouchDB.

OpenStack Data Processing (code-name Sahara)

Integrated in release: Juno

Sahara is the component that provides Data Processing as a Service to your OpenStack cloud. This capability includes the ability to provision an application cluster tuned to handle large amounts of analytical data. The data store options available are Hadoop and/or Spark. This service also aids the cloud consumer by abstracting away the complication of installing and maintaining this type of cluster.

OpenStack Bare Metal Provisioning (code-name Ironic)

Integrated in release: Kilo

This service has been one of the most anxiously awaited components of the OpenStack project. Ironic provides the capability to provision physical bare metal servers from within your OpenStack cloud. It is commonly known as a bare metal hypervisor API and leverages a set of plugins to enable interaction with the bare metal servers. It is the newest service to be introduced into the OpenStack family and is still under development.

Other optional services

There are a few additional services still in the early phases of maturity, listed as follows. The scope and depth of some of them are still being determined, so it felt best not to possibly misrepresent them here in writing. The bigger takeaway is the depth of added capability these new services will bring to your OpenStack cloud when they are ready:




  • Messaging service (code-name Zaqar)

  • Shared filesystems (code-name Manila)

  • DNS service (code-name Designate)

  • Key management (code-name Barbican)

  • Application catalog (code-name Murano)




OpenStack supporting components

Very similar to any traditional application, there are dependent core components that are pivotal to OpenStack's functionality, yet are not necessarily part of the application itself. In the case of the base OpenStack architecture, two core components would be considered the backbone of the cloud. OpenStack functionality requires access to an SQL-based backend database service and an AMQP (Advanced Message Queuing Protocol) software platform. Just as with any other technology, OpenStack has base supported reference architectures for us to follow. From a database perspective, the common choice is MySQL, and the default AMQP package is RabbitMQ. These two dependencies must be installed, configured, and functional before you can start an OpenStack deployment.

There are additional optional software packages that can also be used to provide further stability as part of your cloud design. Information about this management software and further OpenStack architecture details can be found in the OpenStack documentation.


Features and benefits

The power of OpenStack has been proven true by numerous enterprise-grade organizations, thus gaining the focus of many leading IT companies. As this adoption increases, we will surely see an increase in consumption and additional improved features/functionality. For now, let's review some of OpenStack's features and benefits.

Fully distributed architecture

Every service within the OpenStack platform can be grouped together and/or separated to meet your personal use case. Also, as mentioned earlier, only a minimal set of core services (such as Keystone, Nova, and Glance) is required to have a functioning cloud; all other components can be optional. This level of flexibility is something every administrator seeks in an Infrastructure as a Service (IaaS) platform.

Using commodity hardware

OpenStack was uniquely designed to accommodate almost any type of hardware. The underlying OS is the only dependency of OpenStack. As long as OpenStack supports the underlying OS and that OS is supported on the particular hardware, you are all set! There is no requirement to purchase OEM hardware or even hardware with specific specs. This gives administrators yet another level of deployment flexibility. A good example of this is giving the old hardware sitting around in your data center new life within an OpenStack cloud.

Scaling horizontally or vertically

The ability to easily scale your cloud is another key feature of OpenStack. Adding additional compute nodes is as simple as installing the necessary OpenStack services on the new server. The same process is used to expand the OpenStack services control plane as well. Just as with other platforms, you can also add more computing resources to any node as another approach to scaling up.

Meeting high availability requirements

OpenStack is able to certify meeting high availability (99.9%) requirements for its own infrastructure services when implemented via the documented best practices.

Compute isolation and multi-DC support

Another key feature of OpenStack is the support to handle compute hypervisor isolation and the ability to support multiple OpenStack regions across data centers. Compute isolation includes the ability to separate multiple pools of hypervisors distinguished by hypervisor type, hardware similarity, and/or vCPU ratio.

The ability to support multiple OpenStack regions, each of which is a complete installation of a functioning OpenStack cloud with shared services such as Keystone and Horizon, across data centers is a key function in maintaining highly available infrastructure. This model eases overall cloud administration by allowing a single pane of glass to manage multiple clouds.

Robust role-based access control

All the OpenStack services allow RBAC when assigning authorization to cloud consumers. This gives cloud operators the ability to decide the specific functions allowed for cloud consumers. An example would be granting a cloud user the ability to create instances but denying the ability to upload new server images or adjust instance-sizing options.
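As an illustration of how these rules are expressed, each service has traditionally shipped a policy.json file mapping API actions to access rules. The entries below are a sketch in the Nova policy format of the era; exact rule names vary by service and release, so treat them as examples of the syntax rather than drop-in values:

```json
{
    "admin_or_owner": "role:admin or project_id:%(project_id)s",

    "os_compute_api:servers:create": "rule:admin_or_owner",
    "os_compute_api:servers:resize": "role:admin"
}
```

With rules like these, a regular project member could create instances, while resizing would be reserved for admins.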


Working examples – listing the services

So far we have covered what OpenStack is, the services that make up OpenStack, and some of its key features. It is only appropriate to now show a working example of the OpenStack functionality and the methods available to manage/administer your OpenStack cloud.

To re-emphasize, OpenStack management, administration, and consumption of services can be accomplished via an API, a CLI, and/or a web dashboard. When considering some level of automation, the web dashboard is normally not involved. So for the remainder of this book, we will focus solely on using the OpenStack APIs and CLIs.

Listing the OpenStack services

Now, let's take a look at how you can use either the OpenStack API or CLI to check for the available services active within your cloud.


The first step in using the OpenStack services is authenticating against Keystone. You must always first authenticate (tell the API who you are) and then receive authorization (the API ingests your username and determines the predefined task(s) you can execute) based on what your user is allowed to do. The complete process ends by providing you with an authentication token.


Keystone can provide four different types of token formats: UUID, fernet, PKI, and PKIZ. A typical UUID token looks like this: 53f7f6ef0cc344b5be706bcc8b1479e1. Most do not use the PKI token, as it is a much longer string and harder to work with. There are great performance benefits in using fernet tokens instead of UUID, as they do not require persistence. It is suggested that you set Keystone to provide fernet tokens within your cloud.

Here is an example of making an authentication request for a secure token. Making API requests using cURL, a useful tool for interacting with RESTful APIs, is the easiest approach. Using cURL with various options, you are able to simulate actions similar to using the OpenStack CLI or the Horizon dashboard:

$ curl -d @credentials.json -X POST http://<keystone-host>:5000/v3/auth/tokens \
  -H "Content-Type: application/json" | python -mjson.tool

Here, <keystone-host> is a placeholder for your cloud's Keystone endpoint.


Because the credential string is fairly long and easy to manipulate incorrectly, it is suggested to use the -d @<filename> functionality of cURL. This allows you to place the credential string in a file and then pass it into your API request just by referencing the file. This exercise is very similar to creating a client environment script (also known as an OpenRC file). Adding | python -mjson.tool to the end of your API request makes the JSON output much easier to read.

An example of the credential string would look like this:

  "auth": { 
    "identity": { 
      "methods": [ 
      "password": { 
        "user": { 
          "name": "admin", 
          "domain": { 
            "id": "default" 
          "password": "passwd" 


Downloading the example code

Detailed steps to download the code bundle are mentioned in the Preface of this book.

The code bundle for the book is also hosted on GitHub, along with other code bundles from our rich catalog of books and videos. Check them out!

When the example is executed against the Keystone API, it will respond with an authentication token. The token is actually returned within the HTTP header of the response. That token should be used for all subsequent API requests. Keep in mind that the token does expire, but traditionally, a token is configured to last 24 hours from the creation timestamp.

As mentioned earlier, the token can be found in the HTTP header of the API response message. The HTTP header property name is X-Subject-Token:

HTTP/1.1 201 Created 
Date: Tue, 20 Sep 2016 21:20:02 GMT 
Server: Apache 
X-Subject-Token: gAAAAABX4agC32nymQSbku39x1QPooKDpU2T9oPYapF6ZeY4QSA9EOqQZ8PcKqMT2j5m9uvOtC9c8d9szObciFr06stGo19tNueHDfvHbgRLFmjTg2k8Scu1Q4esvjbwth8aQ-qMSe4NRTWmD642i6pDfk_AIIQCNA 
Vary: X-Auth-Token 
x-openstack-request-id: req-8de6fa59-8256-4a07-b040-c21c4aee6064 
Content-Length: 283 
Content-Type: application/json 
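Because HTTP header names are case-insensitive, it is worth normalizing when fishing the token out programmatically. The following sketch uses a plain dictionary to stand in for a real response's headers (the token value is trimmed from the example above):

```python
def extract_token(headers):
    """Return the Keystone token from the X-Subject-Token response header."""
    for name, value in headers.items():
        if name.lower() == "x-subject-token":
            return value
    raise KeyError("X-Subject-Token header not found")

# Simulated response headers, trimmed from the example above
response_headers = {
    "Date": "Tue, 20 Sep 2016 21:20:02 GMT",
    "Server": "Apache",
    "X-Subject-Token": "gAAAAABX4agC32nymQSbku39x1QPooKDpU2T9oPYapF6ZeY4QSA9",
    "Content-Type": "application/json",
}
token = extract_token(response_headers)
```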

Once you have the authentication token, you can begin crafting subsequent API requests to request information about your cloud and/or execute tasks. Now we will request the list of services available to your cloud:

$ curl -X GET http://<keystone-host>:5000/v3/services \
  -H "Accept: application/json" \
  -H "X-Auth-Token: 907ca229af164a09918a661ffa224747" | python -mjson.tool

Here, <keystone-host> is a placeholder for your cloud's Keystone endpoint.

The output from this API request will be the complete list of services registered within your cloud by name, description, type, id, and whether it is active. An abstract of the output would look similar to the following code:

  "links": { 
    "next": null, 
    "previous": null, 
    "self": "" 
  "services": [ 
      "description": "Nova Compute Service", 
      "enabled": true, 
      "id": "1999c3a858c7408fb586817620695098", 
      "links": { 
      "name": "nova", 
      "type": "compute" 
      "description": "Cinder Volume Service V2", 
      "enabled": true, 
      "id": "39216610e75547f1883037e11976fc0f", 
      "links": { 
      "name": "cinderv2", 
      "type": "volumev2" 


All the base principles that applied to using the API earlier also apply to using the CLI. The major difference is that with the CLI, all you need to do is create an OpenRC file with your credentials and execute defined commands. The CLI handles formatting the API calls behind the scenes, grabbing the token for subsequent requests, and also formatting the output.

Just as earlier, you first need to authenticate against Keystone to be granted a secure token. This is accomplished by first sourcing your OpenRC file and then executing the service list command; the next example will demonstrate this in more detail. Now that there are two active versions of the Keystone service, version 2.0 and 3.0, you can choose which version you wish to have active to handle authentication/authorization.

Here is an example of an OpenRC file v2.0 named openrc:

# To use an OpenStack cloud you need to authenticate against keystone. 
export OS_ENDPOINT_TYPE=internalURL 
export OS_USERNAME=admin 
export OS_TENANT_NAME=admin 
export OS_AUTH_URL= 
# With Keystone you pass the keystone password. 
echo "Please enter your OpenStack Password: " 
read -sr OS_PASSWORD_INPUT 
export OS_PASSWORD=$OS_PASSWORD_INPUT 

The OpenRC file v3.0 would look similar to this:

# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other 
# OpenStack API is version 3. For example, your cloud provider may implement 
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is 
# only for the Identity API served through keystone. 
export OS_AUTH_URL= 
# With the addition of Keystone we have standardized on the term **project** 
# as the entity that owns the resources. 
export OS_PROJECT_ID=5408dd3366e943b694cae90a04d71c88 
export OS_PROJECT_NAME="admin" 
export OS_USER_DOMAIN_NAME="Default" 
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi 
# unset v2.0 items in case set 
# In addition to the owning entity (tenant), OpenStack stores the entity 
# performing the action as the **user**. 
export OS_USERNAME="admin" 
# With Keystone you pass the keystone password. 
echo "Please enter your OpenStack Password: " 
read -sr OS_PASSWORD_INPUT 
export OS_PASSWORD=$OS_PASSWORD_INPUT 
# If your configuration has multiple regions, we set that information here. 
# OS_REGION_NAME is optional and only valid in certain environments. 
export OS_REGION_NAME="RegionOne" 
# Don't leave a blank variable, unset it if it was empty 
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi 

Once you create and source the OpenRC file, you can begin using the CLI to execute commands such as requesting the list of services. Take a look at the following working example:
$ source openrc
$ openstack service list

The output will look similar to the following (trimmed to the two services shown earlier):

+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 1999c3a858c7408fb586817620695098 | nova     | compute  |
| 39216610e75547f1883037e11976fc0f | cinderv2 | volumev2 |
+----------------------------------+----------+----------+

The adoption of OpenStack among enterprises has taken off since the first revision of this book. Many large companies, such as Walmart, BMW, Volkswagen, AT&T, and Comcast, have come forward sharing their success stories and continued support for OpenStack. I hope this chapter has cleared up any questions you had about OpenStack and maybe even dispelled any myths you may have heard.

We will now transition to learning about Ansible and why using it in conjunction with OpenStack is a great combination.

About the Author

  • Walter Bentley

    Walter Bentley is a Rackspace Private Cloud Technical Marketing Engineer and author with a diverse background in production systems administration and solutions architecture. He has more than 15 years of experience in sectors such as online marketing, financial, insurance, aviation, the food industry, education, and now in technology. In the past, he was typically the requestor, consumer, and advisor to companies in the use of technologies such as OpenStack. Today, he’s an OpenStack promoter and cloud educator. In his current role, Walter helps customers build, design, and deploy private clouds built on OpenStack. That includes professional services engagements around operating OpenStack clouds and DevOps engagements creating playbooks/roles with Ansible. He presents and speaks regularly at OpenStack Summits, AnsibleFest, and other technology conferences, plus webinars, blog posts and technical reviews. His first book, OpenStack Administration with Ansible, was released in 2016.

