
OpenStack Essentials - Second Edition

By Dan Radez
About this book
OpenStack is a widely popular platform for cloud computing. Applications built for this platform are resilient to failure and convenient to scale. This book, an update to our extremely popular OpenStack Essentials (published in May 2015), will help you master not only the essentials but also the new features of the latest OpenStack release, Mitaka, showcasing how to put them to work straight away. The book begins with the installation and a demonstration of the architecture. It then teaches you the eight core components of OpenStack: Keystone for identity management, Glance for image management, Neutron for network management, Nova for instance management, Cinder for block storage, Swift for object storage, Ceilometer for telemetry, and Heat for orchestration. Furthermore, you will learn how to launch and configure Docker containers and scale them horizontally, and how to monitor and troubleshoot OpenStack.
Publication date:
August 2016
Publisher
Packt
Pages
182
ISBN
9781786462664

 

Chapter 1. RDO Installation

OpenStack has a very modular design, and because of this design, there are lots of moving parts. It is overwhelming to start walking through installing and using OpenStack without understanding the internal architecture of the components that make up OpenStack. In this chapter, we'll look at these components. Each component in OpenStack manages a different resource that can be virtualized for the end user. Separating the management of each of the types of resources that can be virtualized into separate components makes the OpenStack architecture very modular. If a particular service or resource provided by a component is not required, then the component is optional to an OpenStack deployment. Once the components that make up OpenStack have been covered, we will discuss the configuration of a community-supported distribution of OpenStack called RDO.

 

OpenStack architecture


Let's start by outlining some simple categories to group these services into. Logically, the components of OpenStack are divided into three groups:

  • Control

  • Network

  • Compute

The control tier runs the Application Programming Interface (API) services, web interface, database, and message bus. The network tier runs network service agents for networking, and the compute tier is the virtualization hypervisor. It has services and agents to handle virtual machines. All of the components use a database and/or a message bus. The database can be MySQL, MariaDB, or PostgreSQL. The most popular message buses are RabbitMQ, Qpid, and ActiveMQ. For smaller deployments, the database and messaging services usually run on the control node, but they could have their own nodes if required.

In a simple multi-node deployment, the control and networking services are installed on one server and the compute services are installed onto another server. OpenStack could be installed on one node or more than two nodes, but a good baseline for being able to scale out later is to put control and network together and compute by itself. An OpenStack cluster can scale far beyond a few nodes, and we will look at scaling beyond this basic deployment in Chapter 11, Scaling Horizontally.

Now that a base logical architecture of OpenStack has been defined, let's look at what components make up this basic architecture. To do that, we'll first touch on the web interface and then work toward collecting the resources necessary to launch an instance. Finally, we will look at what components are available to add resources to a launched instance.

 

Dashboard


The OpenStack dashboard is the web interface component provided with OpenStack. You'll sometimes hear the terms dashboard and Horizon used interchangeably. Technically, they are not the same thing. This book will refer to the web interface as the dashboard. The team that develops the web interface maintains both the dashboard interface and the Horizon framework that the dashboard uses.

More important than getting these terms right is understanding the commitment that the team that maintains this code base has made to the OpenStack project. They have pledged to include support for all the officially accepted components that are included in OpenStack. Visit the OpenStack website (http://www.openstack.org/) to get an official list of OpenStack components.

The dashboard cannot do anything that the API cannot do. All the actions that are taken through the dashboard result in calls to the API to complete the task requested by the end user. Throughout this book, we will examine how to use the web interface and the API clients to execute tasks in an OpenStack cluster. Next, we will discuss both the dashboard and the underlying components that the dashboard makes calls to when creating OpenStack resources.

 

Keystone


Keystone is the identity management component. The first thing that needs to happen while connecting to an OpenStack deployment is authentication. In its most basic installation, Keystone will manage tenants, users, and roles and be a catalog of services and endpoints for all the components in the running cluster.

Everything in OpenStack must exist in a tenant. A tenant is simply a grouping of objects. Users, instances, and networks are examples of objects. They cannot exist outside of a tenant. Another name for a tenant is a project. On the command line, the term tenant is used. In the web interface, the term project is used.

Users must be granted a role in a tenant. It's important to understand this relationship between the user and a tenant via a role. In Chapter 2, Identity Management, we will look at how to create the user and tenant and how to associate the user with a role in a tenant. For now, understand that a user cannot log in to the cluster unless they are a member of a tenant. Even the administrator has a tenant. Even the users the OpenStack components use to communicate with each other have to be members of a tenant to be able to authenticate.

Keystone also keeps a catalog of services and endpoints of each of the OpenStack components in the cluster. This is advantageous because all of the components have different API endpoints. By registering them all with Keystone, an end user only needs to know the address of the Keystone server to interact with the cluster. When a call is made to connect to a component other than Keystone, the call will first have to be authenticated, so Keystone will be contacted regardless.

Within the communication to Keystone, the client also asks Keystone for the address of the component the user intended to connect to. This makes managing the endpoints easier. If all the endpoints were distributed to the end users, then it would be a complex process to distribute a change in one of the endpoints to all of the end users. By keeping the catalog of services and endpoints in Keystone, a change is easily distributed to end users as new requests are made to connect to the components.

By default, Keystone uses username/password authentication to request a token, and the acquired token is then used for subsequent requests. All the components in the cluster can use the token to verify the user and the user's access. Keystone can also be integrated with other common authentication systems instead of relying on the username and password authentication provided by Keystone. In Chapter 2, Identity Management, each of these resources will be explored. We'll walk through creating a user and a tenant and look at the service catalog.
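As a small taste of what Chapter 2 covers, once a cluster is running, a tenant, a user, and a role assignment can be created with the unified openstack client. This is only a hedged sketch: the names cookbook and danradez are illustrative, and the default member role name can differ between releases.

openstack project create cookbook
openstack user create --password supersecret danradez
openstack role add --project cookbook --user danradez _member_
openstack catalog list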

 

Glance


Glance is the image management component. Once we're authenticated, there are a few resources that need to be available for an instance to launch. The first resource we'll look at is the disk image to launch from. Before a server is useful, it needs to have an operating system installed on it. This is a boilerplate task that cloud computing has streamlined by creating a registry of pre-installed disk images to boot from. Glance serves as this registry within an OpenStack deployment. In preparation for an instance to launch, a copy of a selected Glance image is first cached to the compute node where the instance is being launched. Then, a copy is made to the ephemeral disk location of the new instance. Subsequent instances launched on the same compute node using the same disk image will use the cached copy of the Glance image.

The images stored in Glance are sometimes called sealed-disk images. These images are disk images that have had the operating system installed but have had things such as the Secure Shell (SSH) host key and network device MAC addresses removed. This makes the disk images generic, so they can be reused and launched repeatedly without the running copies conflicting with each other. To do this, the host-specific information is provided or generated at boot. The provided information is passed in through a post-boot configuration facility called cloud-init.

Usually, these images are downloaded from a distribution's download pages. If you search the Internet for your favorite distribution's name and cloud image, you will probably get a link to where you can download a generic pre-built copy of a Glance image, also known as a cloud image.
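For example, assuming a downloaded CirrOS cloud image and a working client environment, registering it with Glance is a short operation; the image version and URL here are illustrative, and Chapter 3 covers this properly:

curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.3.4-x86_64-disk.img cirros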

The images can also be customized for special purposes beyond a base operating system installation. If there was a specific purpose for which an instance would be launched many times, then some of the repetitive configuration tasks could be performed ahead of time and built into the disk image. For example, if a disk image was intended to be used to build a cluster of web servers, it would make sense to install a web server package on the disk image before it was used to launch an instance. It would save time and bandwidth to do it once before it is registered with Glance instead of doing this package installation and configuration over and over each time a web server instance is booted.

There are quite a few ways to build these disk images. The simplest way is to do a virtual machine installation manually, make sure that the host-specific information is removed, and include cloud-init in the built image. Cloud-init is packaged in most major distributions; you should be able to simply add it to a package list. There are also tools to make this happen in a more autonomous fashion. Some of the more popular tools are virt-install, Oz, and appliance-creator. The most important thing about building a cloud image for OpenStack is to make sure that cloud-init is installed. Cloud-init is a script that should run post boot to connect back to the metadata service.

In Chapter 3, Image Management, when Glance is covered in greater detail, we will download a pre-built image and use it to demonstrate how Glance works.

 

Neutron


Neutron is the network management component. With Keystone, we're authenticated, and from Glance, a disk image will be provided. The next resource required for launch is a virtual network. Neutron is an API frontend (and a set of agents) that manages the Software Defined Networking (SDN) infrastructure for you. When an OpenStack deployment is using Neutron, it means that each of your tenants can create virtual isolated networks. Each of these isolated networks can be connected to virtual routers to create routes between the virtual networks. A virtual router can have an external gateway connected to it, and external access can be given to each instance by associating a floating IP on an external network with an instance. Neutron then puts all the configuration in place to route the traffic sent to the floating IP address through these virtual network resources into a launched instance. This is also called Networking as a Service (NaaS). NaaS is the capability to provide networks and network resources on demand via software.
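As a hedged preview of Chapter 4, the flow described above maps to a handful of neutron client calls. The network names here are illustrative and assume an external network named external has already been created by an administrator:

neutron net-create internal
neutron subnet-create --name internal_subnet internal 192.168.37.0/24
neutron router-create my_router
neutron router-interface-add my_router internal_subnet
neutron router-gateway-set my_router external
neutron floatingip-create external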

By default, the OpenStack distribution we will install uses Open vSwitch to orchestrate the underlying virtualized networking infrastructure. Open vSwitch is a virtual managed switch. As long as the nodes in your cluster have simple connectivity to each other, Open vSwitch can be the infrastructure configured to isolate the virtual networks for the tenants in OpenStack. There are also many vendor plugins that would allow you to replace Open vSwitch with a physical managed switch to handle the virtual networks. Neutron even has the capability to use multiple plugins to manage multiple network appliances. As an example, Open vSwitch and a vendor's appliance could be used in parallel to manage virtual networks in an OpenStack deployment. This is a great example of how OpenStack is built to provide flexibility and choice to its users.

Networking is the most complex component of OpenStack to configure and maintain. This is because Neutron is built around core networking concepts. To successfully deploy Neutron, you need to understand these core concepts and how they interact with one another. In Chapter 4, Network Management, we'll spend time covering these concepts while building the Neutron infrastructure for an OpenStack deployment.

 

Nova


Nova is the instance management component. An authenticated user who has access to a Glance image and has created a network for an instance to live on is almost ready to tie all of this together and launch an instance. The last resources that are required are a key pair and a security group. A key pair is simply an SSH key pair. OpenStack will allow you to import your own key pair or generate one to use. When the instance is launched, the public key is placed in the authorized_keys file so that a password-less SSH connection can be made to the running instance.

Before that SSH connection can be made, the security groups have to be opened to allow the connection to be made. A security group is a firewall at the cloud infrastructure layer. The OpenStack distribution we'll use will have a default security group with rules to allow instances to communicate with each other within the same security group, but rules will have to be added for Internet Control Message Protocol (ICMP), SSH, and other connections to be made from outside the security group.

Once there's an image, network, key pair, and security group available, an instance can be launched. The resource's identifiers are provided to Nova, and Nova looks at what resources are being used on which hypervisors, and schedules the instance to spawn on a compute node. The compute node gets the Glance image, creates the virtual network devices, and boots the instance. During the boot, cloud-init should run and connect to the metadata service. The metadata service provides the SSH public key needed for SSH login to the instance and, if provided, any post-boot configuration that needs to happen. This could be anything from a simple shell script to an invocation of a configuration management engine.
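A hedged sketch of that flow with the nova client looks as follows; the flavor, image, key, and network identifiers are illustrative placeholders and would come from your own cluster:

nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova boot --flavor m1.small --image cirros --nic net-id=<internal-net-uuid> \
  --key-name mykey --security-groups default my_instance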

In Chapter 5, Instance Management, we'll walk through each of the pieces of Nova and see how to configure them so that instances can be launched and communicated with.

 

Cinder


Cinder is the block storage management component. Volumes can be created and attached to instances. Then they are used on the instances as any other block device would be used. On the instance, the block device can be partitioned and a filesystem can be created and mounted. Cinder also handles snapshots. Snapshots can be taken of the block volumes or of instances. Instances can also use these snapshots as a boot source.
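A minimal sketch with the cinder and nova clients, assuming an instance named my_instance already exists; flag names vary slightly between cinderclient API versions:

cinder create --name my_volume 1
nova volume-attach my_instance <volume-uuid> auto
cinder snapshot-create --name my_snapshot <volume-uuid>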

There is an extensive collection of storage backends that can be configured as the backing store for Cinder volumes and snapshots. By default, Logical Volume Manager (LVM) is configured. GlusterFS and Ceph are two popular software-based storage solutions. There are also many plugins for hardware appliances.

In Chapter 6, Block Storage, we'll take a look at creating and attaching volumes to instances, taking snapshots, and configuring additional storage backends to Cinder.

 

Swift


Swift is the object storage management component. Object storage is a simple content-only storage system. Files are stored without the metadata that a block filesystem has. These are simply containers and files. The files are simply content. Swift has two layers as part of its deployment: the proxy and the storage engine. The proxy is the API layer. It's the service that the end user communicates with. The proxy is configured to talk to the storage engine on the user's behalf. By default, the storage engine is the Swift storage engine. It's able to do software-based storage distribution and replication. GlusterFS and Ceph are also popular storage backends for Swift. They have similar distribution and replication capabilities to those of Swift storage.
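A hedged sketch of basic object operations with the swift client; the container and file names are illustrative:

swift post my_container
swift upload my_container my_file.txt
swift list my_container
swift download my_container my_file.txt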

In Chapter 7, Object Storage, we'll work with object content and the configuration involved in setting up an alternative storage backend for Swift.

 

Ceilometer


Ceilometer is the telemetry component. It collects resource measurements and is able to monitor the cluster. Ceilometer was originally designed as a metering system for billing users. As it was being built, there was a realization that it would be useful for more than just billing, and it was turned into a general-purpose telemetry system.

Ceilometer meters measure the resources being used in an OpenStack deployment. When Ceilometer reads a meter, it's called a sample. These samples get recorded on a regular basis. A collection of samples is called a statistic. Telemetry statistics will give insights into how the resources of an OpenStack deployment are being used.
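A hedged sketch of reading meters, samples, and statistics with the ceilometer client; the cpu_util meter and the resource ID placeholder are illustrative:

ceilometer meter-list
ceilometer sample-list --meter cpu_util --query resource_id=<instance-uuid>
ceilometer statistics --meter cpu_util --period 3600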

The samples can also be used for alarms. Alarms are nothing but monitors that watch for a certain criterion to be met. These alarms were originally designed for Heat autoscaling. We'll look more at getting statistics and setting alarms in Chapter 8, Telemetry.

Let's finish listing out OpenStack components by talking about Heat.

 

Heat


Heat is the orchestration component. Orchestration is the process of launching multiple instances that are intended to work together. In orchestration, there is a file, known as a template, used to define what will be launched. In this template, there can also be ordering or dependencies set up between the instances. Data that needs to be passed between the instances for configuration can also be defined in these templates. Heat is also compatible with AWS CloudFormation templates and implements features beyond the AWS CloudFormation template language.

To use Heat, one of these templates is written to define a set of instances that needs to be launched. When a template launches, it creates a collection of virtual resources (instances, networks, storage devices, and so on); this collection of resources is called a stack. When a stack is spawned, the ordering and dependencies, shared configuration data, and post-boot configuration are coordinated via Heat.
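As a hedged sketch of what such a template looks like, the following writes a minimal HOT template with a single server resource and launches it as a stack; the image, flavor, key, and network names are illustrative and must already exist in your cluster:

cat > my_stack.yaml <<'EOF'
heat_template_version: 2015-10-15
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
      key_name: mykey
      networks:
        - network: internal
EOF
heat stack-create -f my_stack.yaml my_stack
heat stack-list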

Heat is not configuration management. It is orchestration. It is intended to coordinate launching the instances, passing configuration data, and executing simple post-boot configuration. A very common post-boot configuration task is invoking an actual configuration management engine to execute more complex post-boot configuration. In Chapter 9, Orchestration, we'll explore creating a Heat template and launching a stack using Heat.

 

OpenStack installation


The list of components that have been covered is not the full list. This is just a small subset to get you started with using and understanding OpenStack. Further components that are defaults in an OpenStack installation provide many advanced capabilities that we will not be able to cover. Now that we have introduced the OpenStack components, we will illustrate how they work together as a running OpenStack installation. To illustrate an OpenStack installation, we first need to install one. Let's use the RDO Project's OpenStack distribution to do that. RDO has two installation methods; we will discuss both of them and focus on one of them throughout this book.

Manual installation and configuration of OpenStack involves installing, configuring, and registering each of the components we have just covered, and also multiple databases and a messaging system. It's a very involved, repetitive, error-prone, and sometimes confusing process. Fortunately, there are a few distributions that include tools to automate this installation and configuration process.

One such distribution is the RDO Project distribution. RDO, as a name, doesn't officially mean anything; it is just the name of a community-supported distribution of OpenStack. The RDO Project takes the upstream OpenStack code, packages it in RPMs, and provides documentation, forums, IRC channels, and other resources for the RDO community to use and support each other in running OpenStack on RPM-based systems. There are no modifications to the upstream OpenStack code in the RDO distribution; the RDO Project packages the code that is in each of the upstream releases of OpenStack. This means that we'll use an open source, community-supported distribution of vanilla OpenStack for our example installation. RDO can be run on any RPM-based system.

We will now look at the two installation tools that are part of the RDO Project: Packstack and RDO Triple-O. We will focus on using RDO Triple-O in this book. The RDO Project recommends RDO Triple-O for installations that intend to deploy a more feature-rich environment; one example is high availability, since RDO Triple-O can do HA deployments and Packstack cannot. There is still great value in doing an installation with Packstack, which is intended to give you a very lightweight, quick way to stand up a basic OpenStack installation. Let's start by taking a quick look at Packstack so you are familiar with how quick and lightweight it is.

 

Installing RDO using Packstack


Packstack is an installation tool for OpenStack intended for demonstration and proof-of-concept deployments. Packstack uses SSH to connect to each of the nodes and invokes a puppet run (specifically, a puppet apply) on each of the nodes to install and configure OpenStack.

The RDO Project quick start gives instructions to install RDO using Packstack in three simple steps:

  1. Update the system and install the RDO release rpm as follows:

    sudo yum update -y
    sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
    
  2. Install Packstack as shown in the following command:

    sudo yum install -y openstack-packstack
    
  3. Run Packstack as shown in the following command:

    sudo packstack --allinone
    

The all-in-one installation method works well to run on a virtual machine as your all-in-one OpenStack node. In reality, however, a cluster will usually use more than one node beyond a simple learning environment. Packstack is capable of doing multinode installations, though you will have to read the Packstack documentation on the RDO Project wiki for the details; the general shape of that workflow is sketched just below. We will not go any deeper with Packstack than the all-in-one installation we have just walked through.
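For reference, a multinode Packstack run revolves around an answer file rather than the --allinone flag. This is only a hedged sketch: the answer-file key shown is an example and should be checked against the file Packstack generates for your release.

sudo packstack --gen-answer-file=packstack-answers.txt
# edit values such as CONFIG_COMPUTE_HOSTS in packstack-answers.txt, then:
sudo packstack --answer-file=packstack-answers.txt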

Tip

Don't avoid doing an all-in-one installation; it really is as simple as the steps make it out to be, and there is value in getting an OpenStack installation up and running quickly.

 

Installing RDO using Triple-O


The Triple-O project is an OpenStack installation tool developed by the OpenStack community. A Triple-O deployment consists of two OpenStack deployments: one is an all-in-one OpenStack installation used as a provisioning tool to deploy a multi-node target OpenStack deployment, and that target deployment is the one intended for end users. Triple-O stands for OpenStack on OpenStack. OpenStack on OpenStack would be OOO, which lovingly became referred to as Triple-O. It may sound like madness to use OpenStack to deploy OpenStack, but consider that OpenStack is really good at provisioning virtual instances; Triple-O applies this strength to bare-metal deployments to deploy a target OpenStack environment.

In Triple-O, the two OpenStacks are called the undercloud and the overcloud. The undercloud is a baremetal-management-enabled, all-in-one OpenStack installation that will be built for you in a very prescriptive way. Baremetal management enabled means that it is intended to manage physical machines instead of virtual machines. The overcloud is the target deployment of OpenStack that is intended to be exposed to end users. The undercloud takes a cluster of nodes provided to it and deploys the overcloud, a fully featured OpenStack deployment, to them. In real deployments, this is done with a collection of baremetal nodes. Fortunately, for learning purposes, we can mock having a bunch of baremetal nodes by using virtual machines.

Mind blown yet? Let's get started with this Triple-O based OpenStack installation to start unraveling what all this means. There is a Triple-O quickstart project that we will use to get going.

The RDO Triple-O wiki page will be the most up-to-date place to get started with RDO Triple-O. If you have trouble with the directions in this book, please refer to the wiki. Open source changes rapidly, and RDO Triple-O is no exception. In particular, note that these directions refer to the Mitaka release of OpenStack. The name of the release will most likely be the first thing to change on the wiki page that will impact your future deployments with RDO Triple-O.

Start by downloading the pre-built undercloud image from the RDO Project's repositories. This is something you could build yourself but it would take much more time and effort to build than it would take to download the pre-built one. As mentioned earlier, the undercloud is a pretty prescriptive all-in-one deployment which lends itself well to starting with a pre-built image. These instructions come from the readme of the Triple-O quickstart GitHub repository (https://github.com/redhat-openstack/tripleo-quickstart/):

myhost# mkdir -p /usr/share/quickstart_images/
myhost# cd /usr/share/quickstart_images/
myhost# wget https://ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2.md5 \
https://ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2

Make sure that your ssh key exists:

myhost# ls ~/.ssh

If you don't see the id_rsa and id_rsa.pub files in that directory list, run the command ssh-keygen. Then make sure that your public key is in the authorized keys file:

myhost# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Once you have the undercloud image and your SSH keys, pull a copy of the quickstart.sh file, install the dependencies, and execute the quickstart script:

myhost# cd ~
myhost# wget https://raw.githubusercontent.com/redhat-openstack/tripleo-quickstart/master/quickstart.sh
myhost# sh quickstart.sh -u \
file:///usr/share/quickstart_images/undercloud.qcow2 \
localhost

quickstart.sh will use Ansible to set up the undercloud virtual machine and will define a few extra virtual machines that will be used to mock a collection of baremetal nodes for an overcloud deployment. To see the list of virtual machines that quickstart.sh created, use virsh to list them:

myhost# virsh list --all
 Id    Name          State
----------------------------------------------------
 17    undercloud    running
 -     ceph_0        shut off
 -     compute_0     shut off
 -     control_0     shut off
 -     control_1     shut off
 -     control_2     shut off

Along with the undercloud virtual machine, there are ceph, compute, and control virtual machine definitions. These are the nodes that will be used to deploy the OpenStack overcloud. Using virtual machines like this to deploy OpenStack is not suitable for anything but your own personal OpenStack enrichment. These virtual machines represent the physical machines that would be used in a real deployment exposed to end users. To continue the undercloud installation, connect to the undercloud virtual machine and run the undercloud configuration:

myhost# ssh -F /root/.quickstart/ssh.config.ansible undercloud
undercloud# openstack undercloud install

The undercloud install command will set up the undercloud machine as an all-in-one OpenStack installation, ready to be told how to deploy the overcloud. Once the undercloud installation is completed, the final steps are to seed the undercloud with configuration about the overcloud deployment and execute the overcloud deployment:

undercloud# source stackrc
undercloud# openstack overcloud image upload
undercloud# openstack baremetal import --json instackenv.json
undercloud# openstack baremetal configure boot
undercloud# neutron subnet-list
undercloud# neutron subnet-update <subnet-uuid> --dns-nameserver 8.8.8.8

There are also some scripts and other automated ways to make these steps happen: look at the output of the quickstart script or Triple-O quickstart docs in the GitHub repository to get more information about how to automate some of these steps.

The source command puts information into the shell environment to tell the subsequent commands how to communicate with the undercloud. We will look at this in more depth in Chapter 2, Identity Management. The image upload command uploads disk images into Glance that will be used to provision the overcloud nodes. We will look at this more in Chapter 3, Image Management. The first baremetal command imports information about the overcloud environment that will be deployed. This information was written to the instackenv.json file when the undercloud virtual machine was created by quickstart.sh. The second configures the images that were just uploaded in preparation for provisioning the overcloud nodes. The two neutron commands configure a DNS server for the network that the overcloud nodes will use, in this case Google's. Finally, execute the overcloud deploy:

undercloud# openstack overcloud deploy --control-scale 1 --compute-scale 1 --templates --libvirt-type qemu --ceph-storage-scale 1 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml

Let's talk about what this command is doing. In OpenStack, there are two basic node types: control and compute. A control node runs the OpenStack API services, the OpenStack scheduling service, database services, and messaging services. Pretty much everything except the hypervisors is part of the control tier and is segregated onto control nodes in a basic deployment. In an HA deployment, there are at least three control nodes; this is why you see three control nodes in the list of virtual machines quickstart.sh created. RDO Triple-O can do HA deployments, though we will focus on non-HA deployments in this book.

Note that in the command you have just executed, control scale and compute scale are both set to 1. This means that you are deploying one control node and one compute node; the other virtual machines will not be used. Take note of the libvirt-type parameter. It is only required if the compute node itself is virtualized, which is what we are doing with RDO Triple-O; it sets the configuration properly for the instances to be nested. Nested virtualization is when virtual machines are running inside a virtual machine. In this case, the instances will be virtual machines running inside the compute node, which is itself a virtual machine. Finally, the ceph storage scale and storage environment file parameters will deploy Ceph as the storage backend for Glance and Cinder. If you leave off the Ceph and storage environment file parameters, one fewer virtual machine will be used for the deployment. There is more information on storage backends in Chapter 6, Block Storage. The indication that the overcloud deployment has succeeded is a Keystone endpoint and a success message:

Overcloud Endpoint: http://192.0.2.6:5000/v2.0
Overcloud Deployed
 

Connecting to your Overcloud


Finally, before we dig into looking at the OpenStack components that have been installed and configured, let's identify three ways that you can connect to the freshly installed overcloud deployment:

  • From the undercloud: This is the quickest way to access the overcloud. When the overcloud deployment completed, a file named overcloudrc was created. In Chapter 2, Identity Management, we will investigate this file in more detail. Throughout the rest of the book, this method will be used.

  • Install the client libraries: Both RDO Triple-O and Packstack were installed from the RDO release repository. By installing this release repository on another computer, in the same way that was demonstrated earlier for Packstack, the OpenStack client libraries can be installed on that computer. If these libraries are installed on a computer that can route to the network the overcloud was installed on, then the overcloud can be accessed from that computer just as it can from the undercloud. This is helpful if you do not want to be tied to jumping through the undercloud node to access the overcloud:

    laptop# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
    laptop# sudo yum install python-openstackclient
    

In addition to the client package, you will also need the overcloudrc file from the undercloud.

As an example, you can install the packages on the host machine on which you have just run quickstart.sh and make the overcloud routable by adding an IP address to the OVS bridge the virtual machines are attached to:

myhost# sudo ip addr add 192.0.2.222/24 dev bridget
myhost# sudo ip link set up dev bridget

Once this is done, the commands in the subsequent chapters could be run from the host machine instead of the undercloud virtual machine.
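As a hedged example of that workflow, the overcloudrc file can be copied from the undercloud and sourced on the host machine; the paths below assume the quickstart defaults and that overcloudrc sits in the login user's home directory on the undercloud:

myhost# scp -F /root/.quickstart/ssh.config.ansible undercloud:overcloudrc .
myhost# source overcloudrc
myhost# openstack catalog list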

  • The OpenStack dashboard: OpenStack's included web interface is called the dashboard. Each chapter in this book will conclude by walking through how to complete the same actions shown on the command-line interface with the web interface, if the functionality exists. In the installation you have just completed, you can access the overcloud's dashboard by first running the two ip commands from the previous method (to add an IP address to the OVS bridge), then connecting to the IP address indicated as the overcloud endpoint, but on port 80 instead of port 5000:

    http://192.0.2.6/
    
 

Summary


After looking at the components that make up an OpenStack installation, we used RDO Triple-O as a provisioning tool to install OpenStack. Now that OpenStack is installed and running, let's walk through each of the components discussed to learn how to use them.

In the next chapter, you will take a look at Keystone to manage users, tenants, and roles used in managing identities within the OpenStack cluster.

About the Author
  • Dan Radez

    Dan Radez joined the OpenStack community in 2012 in an operator role. His experience is focused on installing, maintaining, and integrating OpenStack clusters. He has been given the opportunity to internationally present OpenStack content to a range of audiences of varying expertise. In January 2015, Dan joined the OPNFV community and has been working to integrate RDO Manager with SDN controllers and the networking features necessary for NFV. Dan's experience includes web application programming, systems release engineering, and virtualization product development. Most of these roles have had an open source community focus to them. In his spare time, Dan enjoys spending time with his wife and three boys, training for and racing triathlons, and tinkering with electronics projects.
