Cinder is the block storage component in OpenStack. In the previous chapter, all the necessary resources were collected to launch an instance. Now that this instance is running, let's look at the use case for block storage and the process of attaching virtual block storage to the OpenStack instance. Then, we will take a look at the storage engine used to store these block devices and the other options available for the backing store.
You're reading from OpenStack Essentials - Second Edition
OpenStack instances run on ephemeral disks – disks that only exist for the life of the instance. When an instance is terminated, the disk is discarded. This is a problem if there is any information that requires persistence. Block storage is one type of storage that can be used to persist data beyond the termination of an OpenStack instance.
Using Cinder, users can create block devices on demand and present them to running instances. The instances see this as a normal, everyday block device – as if an extra hard drive was plugged into the machine. The extra drive can be used as any other block device by creating partitions and filesystems on it. Let's look now at how to create and present a block storage device using Cinder.
Creating a block device is as simple as specifying the size and a name for the block device being created:
undercloud# openstack volume create --size 1 my_volume
This command created a virtual block device with 1 GB of storage space. To see the devices, use the list command:
undercloud# openstack volume list
The volume will be listed with information about it. As with the components already covered, the admin user can see all volumes that are in Cinder and non-privileged users see only the Cinder volumes in their project. When volumes are created, they cycle through a progression of states that indicate the status of the new block device. When the status reaches Available, it is ready to be attached to an instance.
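The state progression can also be watched from the command line. A minimal sketch, assuming the openstack client is configured with sourced credentials and the volume is named my_volume as above:

undercloud# while true; do
    # Query only the status field of the volume
    status=$(openstack volume show my_volume -f value -c status)
    echo "my_volume status: ${status}"
    # Stop polling once the volume is ready to attach
    [ "${status}" = "available" ] && break
    sleep 2
done

For a 1 GB LVM- or Ceph-backed volume, the status typically moves from creating to available within a few seconds.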
The virtual storage device we just created is not much good to us unless it is attached to an instance that can make use of it. Luckily for us, we just launched an OpenStack instance and logged in to it. Perform the following steps to attach the block storage to an instance:
To show the attachment, start by connecting to the instance and listing the existing block devices on the instance that is running:
instance# ls /dev/vd*
/dev/vda  /dev/vda1
The boot device for this instance is vda; this is the Glance image that was used to boot. Now attach the volume you just created to the instance you have running. When you list the devices on the instance again, you will see the Cinder volume show up as vdb:

undercloud# openstack server add volume "My First Instance" my_volume
instance# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vdb
The Cinder volume was attached to the instance as vdb. Now that we have a new block device on the instance, we treat it just as we...
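Continuing on the instance, putting the new device to work follows the usual pattern for any freshly attached disk. A sketch, assuming the guest image provides mkfs.ext4 and the device appeared as vdb as above:

instance# mkfs.ext4 /dev/vdb
instance# mkdir /mnt/my_volume
instance# mount /dev/vdb /mnt/my_volume
instance# df -h /mnt/my_volume

Any data written under /mnt/my_volume now lives on the Cinder volume and will survive the termination of the instance; the volume can later be detached and attached to a different instance.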
Now that we have used the command line to manage Cinder volumes, let's take a look at using the web interface to accomplish the same thing:
Log in to the web interface as your non-administrative user and select the Volumes submenu from the Compute menu.
In the top-right corner, click on the Create Volume button.
Fill in the name and size and click on Create Volume on the form:
The web interface will update itself as the volume status changes. Once it becomes available, click on the More menu on the volume page and select Edit Attachments. In this dialog, the volume will be connected to the running instance. The following screenshot captures this step:
In the Attachments dialog, select the instance to attach the volume to and click on the Attach Volume button, as shown in the following screenshot:
Once again, the web interface will get updated as the status of the volume changes. The volume's status will become In-Use when it is attached to the instance...
Now that you have seen how to use Cinder, you may be wondering where the volume you created was stored. The cloud may be a facade of endless resources, but the reality is that there are actual physical resources that have to back the virtual resources of the cloud. By default, Cinder is configured to use LVM as its backing store. In Chapter 1, RDO Installation, Ceph was selected for configuration. If that had not been done, Triple-O would have created a virtual disk, mounted it as a loopback device on the control node, and used it as the LVM physical volume for a cinder-volumes volume group. The Cinder volume you just created would then have been a logical volume in the cinder-volumes volume group. This is not an ideal place to store a virtual storage resource for anything more than a demonstration: a virtual disk mounted as a loopback device performs very poorly and will quickly become a bottleneck under load.
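With the LVM backing store, this mapping can be inspected directly with the standard LVM tools. A sketch, assuming the default cinder-volumes volume group name; the control# prompt here stands in for a root shell on the control node:

control# pvs
control# vgs cinder-volumes
control# lvs cinder-volumes

The pvs output shows the loopback device acting as the physical volume, and each Cinder volume appears in the lvs output as a logical volume named after its Cinder volume ID.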
Triple-O has built-in support to deploy a Ceph cluster and use it as the primary backing store for both Cinder and Glance. In Chapter 1, RDO Installation, a ceph parameter and a storage environment file were passed to the overcloud deployment command; together they set up Ceph as the backing store for your Triple-O deployment. If you left those two options out of the deployment command, Triple-O would fall back to LVM as the backing store for Cinder.
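The shape of those options resembles the following sketch. The Ceph scale flag and the environment file path shown here are the stock tripleo-heat-templates defaults of that era and are assumptions; your exact Chapter 1 command may differ:

undercloud$ openstack overcloud deploy --templates \
    --ceph-storage-scale 1 \
    -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml

Dropping both the scale flag and the -e line from the command is what produces the LVM-backed deployment described above.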
The following demonstration starts from an LVM-configured backing store. If you would like to follow along, you will need a deployment that was configured with LVM rather than Ceph.
Conveniently enough, a simple GlusterFS installation is not complicated to set up. Assume three RPM-based Linux nodes named gluster1, gluster2, and gluster3, each with an sdb drive attached for use by the GlusterFS storage cluster. The XFS filesystem is recommended, although an ext4 filesystem will work fine in...
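The setup just described can be sketched as the following command sequence. The brick path and the volume name cinder-gluster are illustrative assumptions; run the peer and volume commands from gluster1 after preparing all three nodes:

# On each of gluster1, gluster2, and gluster3: prepare the brick
mkfs.xfs /dev/sdb
mkdir -p /export/brick1
mount /dev/sdb /export/brick1
mkdir -p /export/brick1/data
systemctl enable --now glusterd

# From gluster1 only: form the trusted pool and create a replicated volume
gluster peer probe gluster2
gluster peer probe gluster3
gluster volume create cinder-gluster replica 3 \
    gluster1:/export/brick1/data \
    gluster2:/export/brick1/data \
    gluster3:/export/brick1/data
gluster volume start cinder-gluster

Using a subdirectory of the mount point (here /export/brick1/data) as the brick is a common safeguard: if the sdb mount is ever missing, Gluster refuses to write into the empty mount point on the root disk.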
In this chapter, we looked at creating Cinder volumes and adding an additional storage type definition. Cinder block storage is just one virtual storage option available. In the next chapter, we will take a look at the Swift object storage system to compare the storage options available to OpenStack instances. Cinder offers block storage that attaches directly to the instances. Swift offers an API-based object storage system. Each storage offering has its advantages and disadvantages and is chosen to meet specific needs in different use cases. It is important to know how each of these works so that you can make an informed decision about which is right for you when the time comes to choose a storage solution.