You're reading from OpenStack Essentials - Second Edition

Product type: Book
Published in: Aug 2016
Publisher: Packt
ISBN-13: 9781786462664
Edition: 2nd Edition

Author: Dan Radez

Dan Radez joined the OpenStack community in 2012 in an operator role. His experience is focused on installing, maintaining, and integrating OpenStack clusters. He has been given the opportunity to internationally present OpenStack content to a range of audiences of varying expertise. In January 2015, Dan joined the OPNFV community and has been working to integrate RDO Manager with SDN controllers and the networking features necessary for NFV. Dan's experience includes web application programming, systems release engineering, and virtualization product development. Most of these roles have had an open source community focus to them. In his spare time, Dan enjoys spending time with his wife and three boys, training for and racing triathlons, and tinkering with electronics projects.

Chapter 11. Scaling Horizontally

One of the foundations of OpenStack is that it was built to run on generic commodity hardware and is intended to scale out horizontally very easily. Scaling horizontally means adding more commodity servers to get the job done. Scaling vertically means getting larger, more specialized servers. Whether the servers you run have a handful of processors and a few gigabytes of RAM, or double digits of processors and RAM approaching or exceeding terabytes, OpenStack will run on your servers. Further, whatever assortment of servers of varying horsepower you have collected, they can all be joined into an OpenStack cluster to run the API services, service agents, and hypervisors within the cluster. The only hard requirement is that your processors have virtualization extensions built into them, which is pretty much a standard feature in most modern-day processors. In this chapter, we will look at the process of scaling an OpenStack cluster horizontally on the control...

Scaling compute nodes


The first and easiest way to scale an OpenStack cluster is to add compute power. Remember installing RDO in Chapter 1, RDO Installation? We have come a long way since then! In that example, only one compute node was installed, but one control node can support a large collection of compute nodes. The exact number it can handle depends on the demand that end users put on the cluster. It is safe to say that the capacity provided by a single compute node will not meet most use cases, so let's take a look at how to add additional compute nodes to our OpenStack installation.

Beyond NTP and the supporting networking infrastructure, there are only two OpenStack services that a new compute node needs to run before it can be joined into an OpenStack cluster and start sharing the computing workload. These two services are the Nova compute service and the Neutron Open vSwitch agent. In our example installation...
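A minimal sketch of bringing these services up on a new RDO-style node might look like the following. The package and service names are assumptions based on the RDO packaging of this era, and the node's nova.conf and Neutron agent configuration files would still need to be populated to match the existing cluster before the services will join it:

```shell
# Hedged sketch for an RDO-based compute node; package and service
# names are assumptions from RDO packaging, not taken from this book's
# example installation.
yum install -y ntp openstack-nova-compute openstack-neutron-openvswitch

# Time sync and the virtual switch must be running before the agents.
systemctl enable --now ntpd openvswitch

# The two OpenStack services a compute node needs: Nova compute and
# the Neutron Open vSwitch agent.
systemctl enable --now openstack-nova-compute neutron-openvswitch-agent
```

Once both services check in, the new node appears in the scheduler's pool and begins receiving instances alongside the existing compute nodes.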

Installing more control nodes


Adding more control nodes adds more complexity than adding compute nodes. Generally, when additional control nodes are added, each of the control services needs to be scaled, and the database and message bus need to become highly available.

Triple-O has the capability to install and configure highly available, scaled services for you. There is a wealth of documentation in OpenStack's Triple-O documentation at http://docs.openstack.org/developer/tripleo-docs/. Be sure to take a look there if you are trying to deploy something that has not been covered in this book.

As long as there are bare-metal nodes available, just a few extra parameters need to be added to the overcloud deploy command to deploy multiple control nodes. Updating an existing control tier is not recommended; instead, plan ahead and deploy all the control nodes from the first deployment using these parameters. Here is an example of a deployment that would have three...
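As a hedged illustration, a multi-control-node deployment could be requested with scale flags like the ones below. The flag names follow the Triple-O CLI of this period, but the scale values, NTP server, and environment file path are placeholders rather than the book's exact command:

```shell
# Hypothetical overcloud deploy invocation: three control nodes and
# two compute nodes, with the Pacemaker environment enabled so the
# control tier is configured highly available.
openstack overcloud deploy --templates \
    --control-scale 3 \
    --compute-scale 2 \
    --ntp-server pool.ntp.org \
    -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
```

The key point is that the control scale is chosen up front; Triple-O then handles distributing and clustering the control services across the three nodes.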

Load-balancing control services


When more compute services are added to the cluster, OpenStack's scheduler distributes new instances across them appropriately. When new control or network services are added, however, traffic has to be deliberately sent to them; nothing in OpenStack natively distributes traffic across the API services. A load-balancing service called HAProxy can do this for us. HAProxy can run anywhere it can access the endpoints that will be balanced: on its own node, or on a node that already has part of OpenStack installed on it. Triple-O will run HAProxy on each of the control nodes.

HAProxy has a concept of frontends and backends. The frontends are where HAProxy listens for incoming traffic, and the backends define where the incoming traffic will be sent to and balanced across. When a user makes an API call to one of the OpenStack services, the HAProxy frontend assigned to the service will receive the request...
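To make the frontend/backend relationship concrete, here is a hypothetical haproxy.cfg fragment balancing the Keystone public API across three control nodes. The section names and IP addresses are placeholders, not values from this book's deployment:

```
# Frontend: the address users call; HAProxy listens here.
frontend keystone-public
    bind 192.0.2.10:5000
    default_backend keystone-public-api

# Backend: the real API services the traffic is balanced across.
# "check" enables the health monitoring mentioned in the next section.
backend keystone-public-api
    balance roundrobin
    server controller0 192.0.2.11:5000 check
    server controller1 192.0.2.12:5000 check
    server controller2 192.0.2.13:5000 check
```

Each OpenStack API service gets its own frontend/backend pair of this shape, so a failed control node simply stops receiving its share of requests.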

High availability


While HAProxy has monitors built into it to check the health of a host, these only determine whether or not to send traffic to the host. HAProxy has no capability to recover a host from failure.

To make the control tier highly available, Pacemaker is added to the cluster to monitor services, filesystems, networking resources, and other resources for which load balancing alone is not sufficient; they need to be made highly available. Pacemaker is capable of moving services from node to node in a Pacemaker cluster and monitoring the nodes to know whether action needs to be taken to recover a particular resource or even an entire node. Triple-O will install and configure a highly available control tier with the options passed to overcloud deploy shown previously in this chapter.
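Day-to-day interaction with such a cluster typically goes through the pcs command-line tool that Pacemaker clusters ship with. A hedged sketch of common operations follows; the resource and node names are placeholders:

```shell
# Inspect overall cluster health and which node runs each resource.
pcs status

# List the resources Pacemaker is managing (databases, VIPs, services).
pcs resource show

# Ask Pacemaker to recover a single managed resource.
pcs resource restart openstack-keystone

# Put a node into standby, draining its resources to the other nodes
# before maintenance.
pcs cluster standby controller1
```

This is the recovery capability HAProxy lacks: Pacemaker not only notices a failure but relocates or restarts the resource in response.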

There are two major infrastructure considerations that go into designing a Pacemaker cluster. These points are related to the installation of Pacemaker and preparing it to start managing...

Highly available database and message bus


The database and the message bus are not themselves OpenStack services, but they are services that OpenStack depends on, so you want to be sure they are highly available too. One option for making the database highly available is to add it to Pacemaker with a shared storage device: if the database were to fail on a node, Pacemaker would move the shared storage to another node and start the database there. There are also active/passive and active/active replication scenarios that can be configured for the database. Active/passive means that more than one instance of the database is running, but only one of them is used as the active, writable instance; the other instance(s) are passive backups that only become active if the current active instance needs to fail over for some reason. Active/active means that there is more than one active, writable instance of the database. These are running on different nodes and each can...
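For MariaDB, the common way to get an active/active configuration is Galera replication. As a hypothetical illustration (Galera is not necessarily what this book's deployment uses, and the addresses and cluster name below are placeholders), the relevant my.cnf fragment looks roughly like this:

```
# Hypothetical Galera settings for a three-node active/active cluster.
[mysqld]
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.0.2.11,192.0.2.12,192.0.2.13"
wsrep_cluster_name="openstack_db"
# Galera requires row-based binary logging.
binlog_format=ROW
innodb_autoinc_lock_mode=2
```

Every node listed in the cluster address accepts writes, and Galera replicates each transaction synchronously to the others, so losing one node does not lose committed data.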

Summary


In this chapter, we looked at the concepts involved in scaling and load-balancing OpenStack services. We have also touched upon the concepts involved in making OpenStack highly available. Now that an OpenStack cluster is up and running and we have discussed how it could be scaled to meet demand, in the next chapter, we are going to take a look at monitoring the cluster to keep track of its health and help diagnose trouble when it arises.
