In this chapter, we will cover:
Creating a sandbox environment with VirtualBox
Installing OpenStack Compute packages
Configuring database services
Configuring OpenStack Compute
Stopping and starting Nova services
Creating a cloudadmin account and project
Installation of command-line tools
Uploading a sample machine image
Launching your first cloud instance
Terminating your instance
OpenStack Compute, also known as Nova, is the compute component of the open source cloud operating system, OpenStack. It is the component that allows you to run multiple instances of virtual machines on any number of hosts running the OpenStack Compute service, allowing you to create a highly scalable and redundant cloud environment. The open source project strives to be hardware and hypervisor agnostic. Nova compute is analogous to Amazon's EC2 (Elastic Compute Cloud) environment and can be managed in a similar way, demonstrating the power and potential of this service.
This chapter gets you up to speed quickly by giving you the information you need to create a cloud environment running entirely from your desktop machine. At the end of this chapter, you will be able to create and access virtual machines using the same command line tools you would use to manage Amazon's own EC2 compute environment.
Creating a sandbox environment using VirtualBox allows us to discover and experiment with the OpenStack Compute service, known as Nova. VirtualBox gives us the ability to spin up virtual machines and networks without affecting the rest of our working environment and is freely available from http://www.virtualbox.org for Windows, Mac OSX, and Linux. This test environment can then be used for the rest of this chapter.
It is assumed the computer you will be using to run your test environment in has enough processing power and has hardware virtualization support (modern AMDs and Intel iX processors) with at least 4 GB RAM. Remember we're creating a virtual machine that itself will be used to spin up virtual machines, so the more RAM you have, the better.
To begin with, we must download VirtualBox from http://www.virtualbox.org/ and then follow the installation procedure once this has been downloaded.
We will also need to download the Ubuntu 12.04 LTS Server ISO CD-ROM image from http://www.ubuntu.com/.
To create our sandbox environment within VirtualBox, we will create a single virtual machine that allows us to run all of the OpenStack Compute services required to run cloud instances. This virtual machine will be configured with at least 2 GB RAM and 20 GB of hard drive space and have three network interfaces. The first will be a NAT interface that allows our virtual machine to connect to the network outside of VirtualBox to download packages, a second interface which will be the public interface of our OpenStack Compute host, and the third interface will be for our private network that OpenStack Compute uses for internal communication between different OpenStack Compute hosts.
Carry out the following steps to create the virtual machine that will be used to run OpenStack Compute services:
In order to use a public and private network in our OpenStack environment, we first create these under VirtualBox. To do this, we can use the VirtualBox GUI by going to System Preferences then Network, or use the VBoxManage command from our VirtualBox install and run the following commands in a shell on our computer to create two host-only networks:
# Public Network vboxnet0 (172.16.0.0/16)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.16.0.254 --netmask 255.255.0.0

# Private Network vboxnet1 (10.0.0.0/8)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.0.0.254 --netmask 255.0.0.0
20 GB Hard Disk
Three Network Adapters, with the attached Ubuntu 12.04 ISO
This can either be done using the VirtualBox New Virtual Machine Wizard or by running the following commands in a shell on our computer:
# Create VirtualBox Machine
VBoxManage createvm --name openstack1 --ostype Ubuntu_64 --register
VBoxManage modifyvm openstack1 --memory 2048 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0 --nic3 hostonly --hostonlyadapter3 vboxnet1

# Create CD-Drive and Attach ISO
VBoxManage storagectl openstack1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on
VBoxManage storageattach openstack1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium Downloads/ubuntu-12.04-server-amd64.iso

# Create and attach SATA Interface and Hard Drive
VBoxManage storagectl openstack1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on
VBoxManage createhd --filename openstack1.vdi --size 20480
VBoxManage storageattach openstack1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium openstack1.vdi
We are now ready to power on our openstack1 node. Do this by selecting the openstack1 virtual machine and then clicking on the Start button, or by running the following command:
VBoxManage startvm openstack1 --type gui
This will take us through a standard text-based Ubuntu installer, as this is the server edition. Choose appropriate settings for your region and choose eth0 as the main interface (this is the first interface in your VirtualBox VM settings, our NATed interface). When prompted for software selection, just choose SSH Server and continue. Create a user named openstack with the password openstack. This will help when using this book to troubleshoot your own environment.
Once installed, log in as the openstack user.
We can now configure networking on our OpenStack Compute node. To do this we will create a static address on the second interface, eth1, which will be the public interface, and also configure our host to bring up eth2 without an address, as this interface will be controlled by OpenStack to provide the private network. To do this, edit the /etc/network/interfaces file with the following contents:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

# Public Interface
auto eth1
iface eth1 inet static
  address 172.16.0.1
  netmask 255.255.0.0
  network 172.16.0.0
  broadcast 172.16.255.255

# Private Interface
auto eth2
iface eth2 inet manual
  up ifconfig eth2 up
Save the file and bring up the interfaces with the following commands:
sudo ifup eth1
sudo ifup eth2
We have now created a virtual machine that is the basis of our OpenStack Compute host. It has the necessary networking in place so that we can access this virtual machine from our host personal computer.
There are a number of virtualization products available that are suitable for trying OpenStack; VMware Server and VMware Player, for example, are equally suitable. With VirtualBox, you can also script your installations using a tool named Vagrant. While outside the scope of this book, the steps provided here allow you to investigate this option at a later date.
To do this, we will create a machine that runs all the appropriate services for running OpenStack Nova. The services are as follows:
Ensure that you are logged in to the openstack1 VirtualBox virtual machine as the openstack user.
Installation of OpenStack under Ubuntu 12.04 is simply achieved using the familiar apt-get tool, as the OpenStack packages are available from the official Ubuntu repositories.
We can install the required packages with the following command:
sudo apt-get update
sudo apt-get -y install rabbitmq-server nova-api nova-objectstore nova-scheduler nova-network nova-compute nova-cert glance qemu unzip
Once the installation has completed, we need to install and configure NTP as follows:
sudo apt-get -y install ntp
NTP is important in any multi-node environment, and in the OpenStack environment it is a requirement that server times are kept in sync. Although we are configuring only one node, accurate time-keeping will not only help with troubleshooting, but will also allow us to grow our environment as needed in the future. To do this we edit /etc/ntp.conf with the following contents:
# Replace ntp.ubuntu.com with an NTP server on your network
server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Once NTP has been configured correctly, we restart the service to pick up the change:
sudo service ntp restart
Installation of OpenStack Nova from the main Ubuntu package repository represents a very straightforward and well-understood way of getting OpenStack onto our Ubuntu server. This adds a greater level of certainty around stability and upgrade paths by not deviating away from the main archives.
There are various ways to install OpenStack, from source code building to installation from packages, but this represents the easiest and most consistent method available. There are also alternative releases of OpenStack available. The ones available from Ubuntu 12.04 LTS repositories are known as Essex and represent the latest stable release at the time of writing.
Deviating from stable releases is appropriate when you are helping develop or debug OpenStack, or require functionality that is not available in the current release. To enable different releases, add different Personal Package Archives (PPA) to your system. To view the OpenStack PPAs, visit http://wiki.openstack.org/PPAs. To use them we first install a pre-requisite tool that allows us to easily add PPAs to our system:
sudo apt-get update
sudo apt-get -y install python-software-properties
To use a particular release PPA we issue the following commands:
For Milestones (periodic releases leading up to a stable release):
sudo add-apt-repository ppa:openstack-ppa/milestone sudo apt-get update
For Bleeding Edge (Master Development Branch):
sudo add-apt-repository ppa:openstack-ppa/bleeding-edge sudo apt-get update
Once you have configured apt to look for an alternative place for packages, you can repeat the preceding process for installing packages if you are creating a new machine based on a different package set, or simply type:
sudo apt-get upgrade
This will make apt look in the new package archive areas for later releases of packages (which they will be, as they are more recent revisions of the code under development).
OpenStack supports a number of database backends: an internal SQLite database (the default), MySQL, and PostgreSQL. SQLite is used only for testing and is not supported or used in a production environment; the choice between MySQL and PostgreSQL comes down to the experience of your database staff. For the remainder of this book we shall use MySQL.
Setting up MySQL is easy and allows for you to grow this environment as you progress through the chapters of this book.
Ensure that you are logged in to the openstack1 VirtualBox virtual machine as the openstack user.
We first set some options to pre-seed our installation of MySQL to streamline the process. This includes the default root password, which we'll set as openstack. Complete this step as the root user.
cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password openstack
mysql-server-5.1 mysql-server/root_password_again password openstack
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED
sudo apt-get update
sudo apt-get -y install mysql-server
sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
sudo service mysql restart
Once that's done, we then configure an appropriate database user, called nova, and privileges for use by OpenStack Compute:
MYSQL_PASS=openstack
mysql -uroot -p$MYSQL_PASS -e 'CREATE DATABASE nova;'
mysql -uroot -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'"
mysql -uroot -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$MYSQL_PASS');"
We now simply reference our MySQL server in our /etc/nova/nova.conf file to use MySQL by adding in the --sql_connection flag.
MySQL is an essential service to OpenStack, as a number of services rely on it. Configuring MySQL appropriately ensures your servers operate smoothly. We first configured the Ubuntu debconf utility to set some defaults for our installation so that when MySQL gets installed, it finds values for the root user's password and skips the part where it asks you for this information during installation. We then added a database called nova, which will eventually be populated by tables and data from the OpenStack Compute services, and granted all privileges to the nova database user so that user can use it.
See also: The MySQL clustering using Galera recipe in Chapter 11, In the Datacenter.
The /etc/nova/nova.conf file is a very important file and is referred to many times in this book. This file informs each OpenStack Compute service how to run and what to connect to in order to present OpenStack to our end users. This file will be replicated amongst our nodes as our environment grows.
To run our sandbox environment, we will configure OpenStack Compute so that it is accessible from our underlying host computer. We will have the API service (the service our client tools talk to) listen on our public interface and configure the rest of the services to run on the correct ports. The complete nova.conf file as used by the sandbox environment is laid out next, and an explanation of each line (known as flags) follows.
First, we amend the /etc/nova/nova.conf file to have the following contents:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--force_dhcp_release
--iscsi_helper=tgtadm
--libvirt_use_virtio_for_bridges
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--ec2_private_dns_show_ip
--sql_connection=mysql://nova:[email protected]/nova
--use_deprecated_auth
--s3_host=172.16.0.1
--rabbit_host=172.16.0.1
--ec2_host=172.16.0.1
--ec2_dmz_host=172.16.0.1
--public_interface=eth1
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.16.0.1:9292
--auto_assign_floating_ip=true
--scheduler_default_filters=AllHostsFilter
For the nova-compute service, we specify that we are using software virtualization by specifying the following code in /etc/nova/nova-compute.conf:

--libvirt_type=qemu
We then issue a command that ensures the database has the correct tables schema installed and initial data populated with the right information:
sudo nova-manage db sync
We can then proceed to create the private network that will be used by our OpenStack Compute instances internally:
sudo nova-manage network create vmnet --fixed_range_v4=10.0.0.0/8 --network_size=64 --bridge_interface=eth2
And finally we can create the public network that will be used to access the instances from our personal computer:
sudo nova-manage floating create --ip_range=172.16.1.0/24
--dhcpbridge=/usr/bin/nova-dhcpbridge is the location of the nova-dhcpbridge executable, which manages the DHCP leases for instances.
--force_dhcp_release releases the DHCP-assigned IP address when the instance is terminated.
--logdir=/var/log/nova writes all service logs here. This area will be written to as the root user.
--state_path=/var/lib/nova is an area on your host that Nova will use to maintain various states about the running service.
--lock_path=/var/lock/nova is where Nova can write its lock files.
--connection_type=libvirt specifies the connection to use, in this case libvirt, to manage the virtualization layer.
--libvirt_use_virtio_for_bridges uses the virtio driver for bridges.
--root_helper=sudo nova-rootwrap specifies a helper script to allow the OpenStack Compute services to obtain root privileges.
--use_deprecated_auth tells Nova to not use the new Keystone authentication service.
--s3_host=172.16.0.1 tells OpenStack services where to look for the nova-objectstore service.
--rabbit_host=172.16.0.1 tells OpenStack services where to find the rabbitmq message queue service.
--ec2_host=172.16.0.1 denotes the external IP address of the nova-api service.
--ec2_dmz_host=172.16.0.1 denotes the internal IP address of the nova-api service.
--public_interface=eth1 is the interface on your hosts running nova that your clients will use to access your instances.
--glance_api_servers=172.16.0.1:9292 specifies the server that is running the Glance Imaging service.
--auto_assign_floating_ip=true specifies that when an instance is created, it automatically gets an IP address assigned from the range created in step 5 in the previous section.
--scheduler_default_filters=AllHostsFilter specifies that the scheduler can send requests to all compute hosts.
--libvirt_type=qemu sets the virtualization mode. Qemu is software virtualization, which is required for running under VirtualBox. Other options include kvm and xen.
The networking is set up so that internally the guests are given an IP in the range 10.0.0.0/8. We specified that we would use only 64 addresses in this network range. Be mindful of how many you want. It is easy to create a large range of addresses, but it will also take longer to create them in the database, as each address is a row in the nova.fixed_ips table, where these ultimately get recorded and updated. Creating a small range now allows you to try OpenStack Compute, and later on you can extend this range very easily.
The public range of IP addresses is created in the 172.16.1.0/24 address space. Remember we created our VirtualBox host-only adapter with access to 172.16.0.0/16, which means we will have access to the running instances in that range.
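As a quick sanity check of this address math (a sketch, not part of the installation), a --network_size of 64 corresponds to a /26 subnet, and the /24 floating range holds 256 addresses:

```shell
# Find the smallest power of two that covers network_size addresses
network_size=64
bits=0
size=1
while [ "$size" -lt "$network_size" ]; do
  size=$((size * 2))
  bits=$((bits + 1))
done
fixed_prefix=$((32 - bits))          # 64 addresses -> /26
floating_count=$((1 << (32 - 24)))   # a /24 leaves 8 host bits -> 256 addresses
echo "fixed subnet: /$fixed_prefix, floating addresses: $floating_count"
```

This is why a large fixed range is expensive: every one of those addresses becomes a row in nova.fixed_ips.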
There is a wide variety of options available for configuring OpenStack Compute. These will be explored in more detail in later chapters, as the nova.conf file underpins most of the OpenStack Compute services.
You can find a description of each flag at the OpenStack website at http://wiki.openstack.org/NovaConfigOptions.
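As a small convenience (not part of the official tooling), you can list which flags a nova.conf-style file sets with a one-line grep; the sample file written below is illustrative:

```shell
# Write a small sample in the nova.conf flag format so grep has input
cat > /tmp/nova.conf.sample <<'EOF'
--sql_connection=mysql://nova:[email protected]/nova
--public_interface=eth1
--libvirt_type=qemu
EOF
# Print just the flag names, dropping their values
grep -o '^--[a-z0-9_]*' /tmp/nova.conf.sample
```

Run against your real /etc/nova/nova.conf, this gives a quick inventory of the flags to look up on the wiki page above.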
Now that we have configured our OpenStack Compute installation, it's time to start our services so that they're running on our OpenStack1 Virtual Machine ready for us to launch our own private cloud instances.
If you haven't done so already, ssh to our virtual machine as the openstack user, either using a command-line tool or a client such as PuTTY if you're using Windows.
This ensures that we can access our virtual machine, as we will need access to spin up instances from your personal computer.
The services that run as part of our openstack1 setup are:
As part of the package installation, the OpenStack Compute services start up by default so the first thing to do is to stop them by using the following commands:
sudo stop nova-compute
sudo stop nova-network
sudo stop nova-api
sudo stop nova-scheduler
sudo stop nova-objectstore
sudo stop nova-cert
There are also other services that we installed that are stopped in the same way:
sudo stop libvirt-bin
sudo stop glance-registry
sudo stop glance-api
Starting the OpenStack Compute services is done in a similar way to stopping them:
sudo start nova-compute
sudo start nova-network
sudo start nova-api
sudo start nova-scheduler
sudo start nova-objectstore
sudo start nova-cert
There are also other services that we installed that are started in the same way:
sudo start libvirt-bin
sudo start glance-registry
sudo start glance-api
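Since each service is controlled the same way, the per-service stop/start lines above can be wrapped in a shell loop. This sketch echoes the commands rather than running them, so it works outside the VM; drop the echo on the openstack1 node to actually restart everything:

```shell
# All services managed by Upstart in this sandbox
services="libvirt-bin glance-registry glance-api nova-compute nova-network nova-api nova-scheduler nova-objectstore nova-cert"
for svc in $services; do
  echo "sudo restart $svc"   # remove echo to execute for real
done
```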
As part of our installation we specified --use_deprecated_auth, which means that we are using a simple way of storing users, roles, and projects within our OpenStack Compute environment. This is an ideal way to start working with OpenStack within a small development environment such as our sandbox. For larger, production-ready environments, Keystone is used, which is covered in Chapter 6, Administering OpenStack Storage.
The cloudadmin account group is the equivalent of the root user on a Unix/Linux host. It has access to all aspects of your Nova cloud environment, and so the first account we need to create must have this credential.
Each user has a project—a tenancy in the cloud that has access to certain resources and network ranges. In order to spin up instances in your private cloud environment, a user is assigned to a project. This project can then be kept separate from other users' projects, and equally other users can belong to the same project.
We first create the admin user, named openstack, as follows:

sudo nova-manage user admin openstack
We then assign the openstack user to the cloudadmin role as follows:
sudo nova-manage role add openstack cloudadmin
Once we have that role assigned, which is all this section requires of the cloudadmin role, we can create a project for this user, which we will call cookbook. We do this as follows:
sudo nova-manage project create cookbook openstack
At this point, we have all the required files set up for us to begin to use OpenStack Compute, but we need to ship these over to our underlying host computer (the computer running the VirtualBox software) so that we can access OpenStack Compute from there. OpenStack provides an option to package these credential files up as a ZIP file for this purpose.
sudo nova-manage project zipfile cookbook openstack
The result of this is a file called nova.zip in your current directory.
We first create the initial user, which is an administrator of the cloud project. This admin user is then assigned elevated privileges, known as cloudadmin, by use of the nova-manage command. The nova-manage command is used throughout this book and is instrumental in administering OpenStack Compute. It must be executed with root privileges, so we always run it with sudo.
We then create a project for our user to operate in. This is a tenancy in our OpenStack Compute environment that has access to various resources such as disks and networks. As we are cloudadmin, we have access to all resources, and this is sufficient for this section.
Once the project has been created, the details of the project are zipped up ready for transporting back to the client that will operate the cloud.
Management of OpenStack Compute from the command line is achieved by using euca2ools and Nova Client. Euca2ools is a suite of tools that work with the EC2 API presented by OpenStack. This is the same API that allows you to manage your AWS EC2 cloud instances: start them up and terminate them, create security groups, and troubleshoot your instances. The Nova Client tool uses the OpenStack Compute API, OS-API, which allows greater control of our OpenStack environment. Understanding these tools is invaluable in understanding the flexibility and power of cloud environments, not least in allowing you to create powerful scripts to manage your cloud.
The tools will be installed on your host computer and it is assumed that you are running a version of Ubuntu, which is the easiest way to get hold of the Nova Client and euca2ools packages ready to manage your cloud environment.
The euca2ools and Nova Client packages are conveniently available from the Ubuntu repositories. If the host PC isn't running Ubuntu, creating a Ubuntu virtual machine alongside our OpenStack Compute virtual machine is a convenient way to get access to these tools.
As a normal user on our Ubuntu machine, type the following commands:
sudo apt-get update
sudo apt-get install euca2ools python-novaclient unzip
Now that the tools have been installed, we need to grab the nova.zip file that we created at the end of the previous section and unpack it on your Ubuntu computer. We do this as follows:
cd
mkdir openstack
cd openstack
scp [email protected]:nova.zip .
unzip nova.zip
We can now source the credentials file, named novarc, into our shell environment with the following command, setting up our environment so that our command-line tools can communicate with OpenStack:

. novarc
We now must create a keypair that allows us to access our cloud instance. Keypairs are SSH private and public key combinations that together allow you to access a resource. You keep the private portion safe, but you can give the public key to anyone or any computer without fear of compromising your security; only your private portion will match, enabling you to be authorized. Cloud instances rely on keypairs for access.
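If keypairs are new to you, the same idea can be seen locally with ssh-keygen. This is purely an illustration (the file name demo is arbitrary), not part of the OpenStack workflow; the OpenStack commands below generate the pair for you:

```shell
# Generate a throwaway RSA keypair in a temporary directory
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmpdir/demo" -q
# Two files result: "demo" (private, keep safe) and "demo.pub" (public, shareable)
ls "$tmpdir"
```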
The following commands will create a keypair named openstack.
To create our keypair using euca2ools, use the following commands:
euca-add-keypair openstack > openstack.pem
chmod 0600 *.pem
To create your keypair using Nova Client, use the following commands:
nova keypair-add openstack > openstack.pem
chmod 0600 *.pem
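The chmod 0600 step matters because SSH refuses to use private keys that other users can read. A quick local check of what 0600 produces (demo.pem here is just a stand-in file, not your real key):

```shell
# Create a stand-in key file and restrict it to owner read/write
touch demo.pem
chmod 0600 demo.pem
# GNU stat prints the octal permission bits
stat -c '%a' demo.pem
```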
Using either euca2ools or Nova Client on Ubuntu is a very natural way of managing our OpenStack cloud environment. We open up a shell and copy over the nova.zip file created in the previous section. When we unpack it, we can source in the contents of the novarc file. This file contains our Access Key and Secret Key (two vital pieces of information required to access our cloud environment using the EC2 API), our Nova API Key and Nova Username (required for accessing the OS-API), certificate files used for uploading images to our environment, and the addresses to use when connecting to our environment.
When you look at your environment now with the env command, you will see these details, for example:
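The exact values depend on your project and host, but this sketch shows the kind of variables a novarc file exports (the values here are illustrative only):

```shell
# Write an illustrative novarc-style file, then source it as we did above
cat > novarc.example <<'EOF'
export EC2_ACCESS_KEY="openstack:cookbook"
export EC2_SECRET_KEY="secret"
export EC2_URL="http://172.16.0.1:8773/services/Cloud"
export NOVA_USERNAME="openstack"
EOF
. ./novarc.example
# The variables are now visible to any tool we run from this shell
env | grep -E '^(EC2|NOVA)_' | sort
```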
By also adding a keypair at this point, we are ready to launch our instance. The euca-add-keypair and nova keypair-add commands create a public and private key combination for you. The public key is stored in the database, referenced by the name you gave it (in our case we matched our username, openstack), and the details of the private key are output. We must keep the private key safe; if you lose it or delete it, the keypair will be invalid. A requirement of SSH, which we will use to connect to our instance later on, is that the private key has permissions readable/writeable by the owner only, so we set this with the chmod command.
Now that we have a running OpenStack Compute environment, it's time to upload an image for us to use. An image is a machine template, which is cloned when we spin up new cloud instances. Images used in Amazon, known as AMIs (or Amazon Machine Images) can often be used in OpenStack. For this next section, we will use an Ubuntu Enterprise Cloud image, which can be used in both Amazon and our OpenStack Compute cloud instance.
These steps are to be carried out on your Ubuntu machine under the user that has access to your OpenStack Compute environment credentials (as created in the Installation of command-line tools recipe).
Ensure you have sourced your OpenStack Compute environment credentials as follows:
cd ~/openstack . novarc
To upload an image into our OpenStack Compute environment, we perform the following steps:
We first download the Ubuntu UEC Cloud Image from ubuntu.com:
Once downloaded, we need to install the cloud-utils package, which provides tools to upload images to our OpenStack Compute environment:
sudo apt-get update
sudo apt-get -y install cloud-utils
We can then proceed to upload this to our OpenStack Compute installation using the cloud-publish-tarball command provided by the cloud-utils package:
cloud-publish-tarball ubuntu-12.04-server-cloudimg-i386.tar.gz images i386
You should see output such as the following:
For Nova Client:
You should see output like the following:
The key information from the output is the ami (and optionally ari) ID from the euca2ools output, and the ID string generated in the Nova Client output. We use this information to launch our cloud instances.
We first downloaded an Ubuntu UEC image that has been created to run in our OpenStack environment. This tarball contained two components needed to run our instance: a kernel and a machine image. We used the command-line tool cloud-publish-tarball, from the cloud-utils package, to upload this to our Glance service, which populated the nova-objectstore service with the machine images. Note that we specified an option here named images. This references a bucket in our object store, which is a place on the disk(s) where this image can be found by the OpenStack Compute service.
When we list the images, the information that gets used when spinning up cloud instances is the ami- values for use with euca2ools and the image IDs for use with the Nova Client tools. Note that a RAM disk doesn't always need to be present for a cloud instance to work (as in the previous example), but sometimes you may come across cloud images that have one.
See also: The Using public cloud images recipe in Chapter 2, Administering OpenStack Compute.
Now that we have a running OpenStack Compute environment and a machine image to use, it's time to spin up our first cloud instance! This section explains how to use the information from the euca-describe-images or nova image-list commands to reference an image on the command line when launching the instance that we want.
These steps are to be carried out on our Ubuntu machine under the user that has access to our OpenStack Compute credentials (as created in the Installation of command-line tools recipe).
Before we spin up our first instance, we must create the default security settings that define the access rights. We do this only once (or whenever we need to adjust them) using either the euca-authorize command under euca2ools or the secgroup-add-rule command under Nova Client. The following set of commands gives us SSH access (port 22) from any IP address and also allows us to ping the instance to help with troubleshooting. Note that the default group and its rules are always applied if no security group is mentioned on the command line.
euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
euca-authorize default -P icmp -t -1:-1
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
From our output of the image-listing commands, we were presented with two images: one was the machine image and the other was the kernel image. To launch our instance, we need this information and we specify it on the command line.
To launch an instance using euca2ools, we issue the following, specifying the machine image ID:
euca-run-instances ami-00000002 -t m1.small -k openstack
To launch an instance using Nova Client, we issue the following, specifying the image ID:

nova boot myInstance --image 0e2f43a8-e614-48ff-92bd-be0c68da19f4 --flavor 2 --key_name openstack
You should see output like the following when you launch an instance:
Listing instances using euca2ools:

euca-describe-instances
Listing instances using Nova Client:
nova list
nova show f10fd940-dcaa-4d60-8eda-8ac0c777f69c
This brings back output similar to the output of the previous command lines, yet this time it has created the instance and it is now running and has IP addresses assigned to it.
Once the instance is running and has an IP address assigned, we can connect to it using SSH and our keypair:

ssh -i openstack.pem [email protected]
Congratulations! We have successfully launched and connected to our first OpenStack cloud instance.
After creating the default security settings, we made a note of our machine image identifier (the ami- or ID value) and then called a tool from euca2ools or Nova Client to launch our instance. Part of that command line refers to the keypair to use. We then connect to the instance using the private key from the keypair generated.
How does the cloud instance know what key to use? As part of the boot scripts for this image, it makes a call back to the meta-server, which is a function of the nova-api service. The meta-server is a go-between that bridges our instance and the real world; the cloud-init boot process calls it and, in this case, downloaded a script to inject our private key into the Ubuntu user's .ssh/authorized_keys file. We can modify what scripts are called during this boot process, which is covered later on.
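The meta-server answers at the standard EC2 link-local address, 169.254.169.254. From inside an instance, the key fetch amounts to something like the following; the command is echoed here as a sketch rather than executed, since it only works from within a running instance:

```shell
# Standard EC2-style metadata path for the instance's injected public key
meta_url="http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"
# The cloud-init boot scripts do, in effect:
echo "curl -s $meta_url >> ~/.ssh/authorized_keys"
```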
When a cloud instance is launched, it produces a number of useful details about that instance, the same details that are output from the euca-describe-instances and nova list commands. For euca2ools output there is a RESERVATION section and an INSTANCE section; in the INSTANCE section, we get details of our running instance.
For Nova Client, similar information is presented by the list and show commands. The list command shows a convenient short listing with the ID, name, status, and IP addresses of our instance. The show command provides more detail, similar to that of euca-describe-instances.
The type of instance we chose, with the -t option of euca-run-instances, was m1.small. This is an Amazon EC2 way of naming instance types. The same type was specified as an ID of 2 when using the nova boot command. The instance types supported can be listed by running the following command (there is no euca2ools equivalent):

nova flavor-list
These flavors (specs of instances) are summarized as follows:
(The table here summarizes each instance type: its name, memory, vCPUs, disk, and whether 32-bit and 64-bit images are supported.)
Cloud environments are designed to be dynamic and this implies that cloud instances are being spun up and terminated as required. Terminating a cloud instance is easy to do, but equally it is important to understand some basic concepts of cloud instances.
Cloud instances such as the instance we have used are not persistent. This means that the data and work you do on that instance only exists for the time that it is running. A cloud instance can be rebooted, but once it has been terminated, all data is lost.
To ensure no loss of data, an OpenStack Compute service named nova-volume provides persistent data storage: volumes that are not destroyed on termination and that can be attached to running instances. A volume is like a USB drive attached to your instance.
From our Ubuntu machine, first list the running instances to identify the instance you want to terminate.
To terminate an instance using euca2ools, pass its instance ID to euca-terminate-instances, for example:

euca-terminate-instances i-00000001

You can re-run euca-describe-instances again to ensure your instance has terminated.
To terminate an instance using Nova Client:

nova delete myInstance

You can re-run nova list again to ensure your instance has terminated.
We simply identify the instance we wish to terminate: by its ID, in the format i-00000000, when viewing instances using euca-describe-instances, or by name (or ID) when using nova delete. Once identified, we specify it as the instance to terminate. Once terminated, that instance no longer exists; it has been destroyed. So if you had any data on there, it will have been deleted along with the instance.