Supporting hypervisors in OpenNebula

(For more resources on Open Source, see here.)

A host is a server that has the ability to run virtual machines using a special software component called a hypervisor that is managed by the OpenNebula frontend.

The hosts do not all need to have a homogeneous configuration; you can use different hypervisors on different GNU/Linux distributions within a single OpenNebula cluster.

Using different hypervisors in your infrastructure is not just a technical exercise; it gives you greater flexibility and reliability. A few examples where having multiple hypervisors proves beneficial are as follows:

  • A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (let's say, for example, Windows 2000 Service Pack 4), but you can run it on hypervisor B without any problem.

  • You have a production infrastructure running a closed source, free-to-use hypervisor, and during the next year the software house developing that hypervisor might start requiring license payments or go bankrupt due to an economic crisis.

The current version of OpenNebula gives you great flexibility regarding hypervisor usage, since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future, it will probably also support VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

  1. Create a dedicated oneadmin UNIX account (which should have sudo privileges for executing particular tasks, for example, iptables/ebtables and any network hooks that we have configured).

  2. The frontend and host hostnames should be resolvable by a local DNS or a shared /etc/hosts file.

  3. The oneadmin on the frontend should be able to connect remotely through SSH to the oneadmin on the hosts without a password.

  4. Configure the shared network bridge that will be used by VMs to reach the physical network.


The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands.

If you did not create it during the operating system installation, create a oneadmin user on the host by using the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now, if you connect from the oneadmin account on the frontend to the oneadmin account of the host using the following command, you should get a shell prompt without entering any password:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of the oneadmin UID number
Later, we will learn about the possible storage solutions available with OpenNebula. However, keep in mind that if we are going to set up shared storage, we need to make sure that the UID number of the oneadmin user is the same on the frontend and on every host. In other words, check with the id command that the oneadmin UID matches on both the frontend and the hosts.
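The UID check described above can be scripted. A minimal sketch, assuming the example hostnames used in this article and working passwordless SSH (the helper function name is our own):

```shell
#!/bin/sh
# Compare the oneadmin UID on the frontend with the UID on each host.
uid_matches() {
    # Succeeds when the two UID arguments are identical.
    [ "$1" = "$2" ]
}

# Fall back to the current user's UID if no oneadmin account exists here.
FRONT_UID=$(id -u oneadmin 2>/dev/null || id -u)
for h in kvm01 xen01 esx01; do
    REMOTE_UID=$(ssh -o BatchMode=yes -o ConnectTimeout=5 \
        "oneadmin@$h" id -u 2>/dev/null) || continue
    if uid_matches "$FRONT_UID" "$REMOTE_UID"; then
        echo "$h: OK (UID $REMOTE_UID)"
    else
        echo "$h: MISMATCH (frontend=$FRONT_UID host=$REMOTE_UID)"
    fi
done
```

BatchMode makes SSH fail instead of prompting, so the script never hangs on an unreachable or misconfigured host.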

Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server's key and ask for your permission to continue with the following message:

The authenticity of host 'host01' can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (saved in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks.

For this reason, you need to connect from the oneadmin user on the frontend to every host in order to save the fingerprints of the remote hosts in the oneadmin known_hosts for the first time. Not doing this will prevent OpenNebula from connecting to the remote hosts.
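One way to prime the fingerprint cache non-interactively (a sketch, assuming the example hostnames from this article) is ssh-keyscan, which fetches host keys without opening a session:

```shell
#!/bin/sh
# Collect the SSH host keys of every host into oneadmin's known_hosts,
# so OpenNebula's non-interactive connections never hit the prompt.
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
for h in kvm01 xen01 esx01; do
    # -H hashes the hostnames in the output, -T limits the wait per host.
    ssh-keyscan -T 5 -H "$h" >> ~/.ssh/known_hosts 2>/dev/null || true
done
# Drop duplicate entries accumulated over repeated runs.
sort -u ~/.ssh/known_hosts -o ~/.ssh/known_hosts
```

Run it once as oneadmin on the frontend whenever you add hosts.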

In large environments, this requirement may slow down the configuration of new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote host key, adding the following content to ~/.ssh/config:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set one up), you can manually manage the /etc/hosts file on every host, mapping the IP address of each node to its hostname: localhost, on-front, kvm01, xen01, and esx01 in our examples.
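As a sketch of the layout (every IP address below is a hypothetical placeholder; substitute your own addresses):

```shell
#!/bin/sh
# Example /etc/hosts layout; all addresses here are placeholders.
hosts_example='127.0.0.1    localhost
192.168.1.10 on-front
192.168.1.11 kvm01
192.168.1.12 xen01
192.168.1.13 esx01'
echo "$hosts_example"
```

The same file must be kept in sync on every node, which is exactly the chore the dnsmasq setup below avoids.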

Now you should be able to remotely connect from one node to another using its hostname, with the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing the plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally providing DHCP and TFTP as well) that serves a small-scale network well.

The OpenNebula frontend may be a good place to install it.

For an Ubuntu/Debian installation use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that /etc/resolv.conf looks similar to the following:

# dnsmasq
nameserver ...
# another local DNS
nameserver ...
# ISP or public DNS
nameserver ...

The /etc/hosts configuration will look similar to the file shown earlier, with one IP-address/hostname entry per line for localhost, on-front, kvm01, xen01, and esx01.

Configure any other hostname in the hosts file on the frontend running dnsmasq. On the other hosts, configure /etc/resolv.conf as follows:

# IP where dnsmasq is installed
nameserver ...

Now you should be able to remotely connect from one node to another using its plain hostname, with the following command:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically be resolvable from every other host, thanks to dnsmasq.

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration, which should contain lines such as the following:

# /etc/sudoers
Defaults env_reset
root    ALL=(ALL) ALL
%sudo   ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without having to enter their password before each command.

Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people. However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges can save you some headaches. This is a suggested configuration, but it is not required to run OpenNebula.
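To verify the setup, sudo's -n (non-interactive) flag makes the check scriptable: it fails instead of prompting when a password would be required. A quick sketch, to be run as oneadmin on a host:

```shell
#!/bin/sh
# Succeeds silently only if sudo works without a password prompt.
if sudo -n true 2>/dev/null; then
    result="passwordless sudo OK"
else
    result="sudo still asks for a password"
fi
echo "$result"
```

OpenNebula drivers run non-interactively, so any configuration that would trigger a password prompt will simply fail at runtime; this check catches that early.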

Configuring network bridges

Every host should have its bridges configured with the same name. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
iface eth0 inet manual
auto lan0
iface lan0 inet static
bridge_ports eth0
bridge_stp off
bridge_fd 0

You can have as many bridges as you need, bound or not bound to a physical network. By omitting the bridge_ports parameter, you get a purely virtual network for your VMs, but remember that without a physical network, VMs on different hosts cannot communicate with each other.
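You can inspect the result with brctl show (from the bridge-utils package). As a sketch, this awk filter lists which physical interface is enslaved to each bridge; the sample variable below stands in for real brctl output:

```shell
#!/bin/sh
# Sample `brctl show` output; on a real host use: brctl_output=$(brctl show)
brctl_output='bridge name	bridge id		STP enabled	interfaces
lan0		8000.0015c5f9b080	no		eth0'
# Skip the header line and print bridge -> first enslaved interface.
bridges=$(echo "$brctl_output" | awk 'NR > 1 {print $1 " -> " $4}')
echo "$bridges"
```

An empty interfaces column would indicate a purely virtual bridge with no bridge_ports configured.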


Managing hosts in OpenNebula

For each hypervisor supported by OpenNebula, we will describe the necessary steps to install and configure it. Prior knowledge of the hypervisors is highly recommended but not needed to achieve a working virtualization host; still, experience with at least one hypervisor helps a lot in understanding how things work (and how they don't). The suggested hypervisor for newcomers is KVM because, as will be outlined later, it is the easiest to set up.

To use a particular host on our OpenNebula cluster, it is required to register that host in OpenNebula using the following onehost command:

onehost command [args] [options]

The command with relevant available options is as follows:

$ onehost create hostname im_mad vmm_mad tm_mad vnm_mad

The hostname is the name of the remote host, which should be managed by the current OpenNebula frontend. It should be a correctly configured domain name (try to connect to it through ssh oneadmin@hostname). The parameters present in the command are used to specify the names of the scripts that will be used to retrieve information, manage virtual machines, and transfer images to a particular host. They also depend on the hypervisor running on the host or on our configuration needs.

The last parameter is used to specify a network driver to enforce traffic management (for example, iptables, ebtables, and vlan). You will learn which script should be used depending on the hypervisor, later in this article.

In order to delete a previously created host we use the following command:

$ onehost delete range|hostid_list

The command should be used only when you are dismissing a host. A hostid_list is a comma separated list of IDs or names of hosts, and a range is a ranged list of IDs such as 1..8.

Enabling or disabling a registered host resumes or bypasses its monitoring and allows or prevents the launch of new instances on it. Disabling a host is useful when you need to perform maintenance work on it and are migrating machines off it. The commands are as follows:

$ onehost enable range|hostid_list
$ onehost disable range|hostid_list

In order to launch an editor for changing properties of an already existing host, we use the following command. It can also be used to change an incorrect hostname.

$ onehost update hostid

We can also re-synchronize the monitoring scripts on the remote hosts. This should be used if you have modified something under /var/lib/one/remotes or $ONE_LOCATION/var/remotes, or if after a onehost create the connection to the remote host fails for whatever reason (for example, you forgot to do the ssh-copy-id oneadmin@host and ssh oneadmin@host). The command for re-synchronization is as follows:

$ onehost sync

In order to list all the registered hosts on this frontend, use the following command:

$ onehost list

In order to show the configuration details and latest errors of a particular host, use the following command:

$ onehost show hostid

In order to show the list of registered hosts in a top-style view that is refreshed automatically every few seconds (set with the -d option), use the following command:

$ onehost top -d 3

Networking drivers

The last parameter while creating a new host is used to configure a particular network driver that will be used when launching every new VM.

The available network drivers are as follows:

  • dummy: This is the default driver that does not enforce any particular network policy. Every VM connected on the same physical bridge will be able to freely talk to the others.

  • fw: This automatically creates iptables rules on the host executing the VM. This driver can be used to filter different TCP/UDP ports and ICMP for every VM.

  • ebtables: This automatically creates ebtables rules on the host to enable network isolation between different VMs running on the same bridge, but only on different /24 networks.

  • 802.1Q: This is used to enable network isolation provided through host-managed VLANs with the 802.1Q standard. A bridge will be automatically created for each OpenNebula virtual network, and a VLAN tag will be attached to all the bridge traffic. Note that 802.1Q-compliant network switches must be used.

  • ovswitch: This uses Open vSwitch, a complete network switching solution (comparable, for example, to the VMware vNetwork Distributed Switch). It supports VLANs, traffic filtering, QoS, and monitoring through standard interfaces (for example, NetFlow, sFlow, SPAN, and RSPAN).

  • vmware: This is the specific VMware driver that can be used to achieve network isolation between VMs and 802.1Q VLAN support when using ESXi hosts.

We are going to take a look only at the fw and ebtables network drivers as they are the simplest to configure and do not need any special networking hardware to use them.

Configuring the fw support

In order to use the fw networking driver, the hosts need to have the iptables package installed. Install it using the following command:

$ sudo apt-get install iptables

The iptables command must be made available to the oneadmin user through sudo. If needed, run sudo visudo and add the following rule:

oneadmin ALL = NOPASSWD: /sbin/iptables

Adding the new sudo rule is not needed if we have configured full sudo privileges to oneadmin as suggested earlier.

In order to enable fw support for a particular host, we should add it using the following command:

$ onehost create host01 im_kvm vmm_kvm tm_shared fw

Configuring the ebtables support

In order to use the ebtables networking driver, we need to install the ebtables package on every host using the following command:

$ sudo apt-get install ebtables

In order to enable sudo access to ebtables for the oneadmin user, if needed, run sudo visudo and add the following rule:

oneadmin ALL = NOPASSWD: /sbin/ebtables

Although it is the most easily usable driver, as it does not require any special hardware configuration, it lacks the ability to share IPs of the same subnet among different VNETs (for example, if a VNET is using some leases of a subnet, another VNET cannot use the remaining available IPs in that subnet).

In order to enable ebtables support for a particular host, we should add it using the following command:

$ onehost create host01 im_kvm vmm_kvm tm_shared ebtables

KVM installation

KVM is currently the easiest hypervisor to configure. The core module is included in the mainline Linux kernel and most distributions enable it in the generic kernels.

It runs on hosts that support hardware virtualization and can virtualize almost all operating systems. It is the recommended choice if you do not have experience with any other virtualization technologies. For installing a KVM host for OpenNebula, you need to install and configure a base system plus the following components:

  • A kernel with kvm-intel or kvm-amd module

  • A libvirt daemon

  • A Ruby interpreter

  • The qemu-kvm package

In order to check that your current kernel has the needed module available, try to load it. On an Intel CPU, use the following command:

$ sudo modprobe kvm-intel

If you are running an AMD CPU, use the following command:

$ sudo modprobe kvm-amd

The command should not return any message. To double-check if the module has been correctly loaded, issue the following command:

$ lsmod|grep kvm

You should see the module loaded.
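If the module fails to load, first check that the CPU actually exposes the hardware virtualization extensions (the vmx flag on Intel, svm on AMD) and that they are enabled in the BIOS. A quick sketch:

```shell
#!/bin/sh
# Count logical CPUs advertising vmx (Intel) or svm (AMD) flags.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
count=${count:-0}
if [ "$count" -gt 0 ]; then
    echo "hardware virtualization available on $count logical CPUs"
else
    echo "no vmx/svm flags found - check the BIOS settings"
fi
```

A zero count on hardware that should support virtualization usually means the extensions are disabled in the BIOS/firmware.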

For the other needed packages on Ubuntu it is sufficient to install the required libvirt, Ruby, and qemu packages. Use the following command:

$ sudo apt-get install libvirt-bin qemu-kvm ruby

In order to add the oneadmin user to the libvirtd group, use the following command:

$ sudo adduser oneadmin libvirtd

In order to enable live migrations, as they are directly managed by libvirt, you should enable the libvirt TCP port in /etc/default/libvirt-bin using the following code:

# options passed to libvirtd, add "-l" to listen on tcp
libvirtd_opts="-d -l"

Then, in /etc/libvirt/libvirtd.conf, uncomment the following line:

# This is disabled by default, uncomment this to enable it.
listen_tcp = 1

As a test, add this brand new host to the OpenNebula pool from the frontend using the following command:

$ onehost create kvm01 im_kvm vmm_kvm tm_ssh dummy

In order to check whether the kvm01 host has been successfully added to OpenNebula pool use the following command:

$ onehost list


If something goes wrong, you will see the string err in the STAT column. In this case, double-check that you can remotely connect without a password from the oneadmin user on the frontend to the oneadmin user of kvm01, with the following command:

$ ssh oneadmin@kvm01
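A quick way to spot failed hosts is filtering on the STAT column; the sample below stands in for real onehost list output (the column layout is assumed and may differ between OpenNebula releases):

```shell
#!/bin/sh
# Sample `onehost list` output; on a frontend use: sample=$(onehost list)
sample='  ID NAME       RVM   TCPU   FCPU   ACPU   TMEM   FMEM STAT
   0 kvm01        0    400    400    400   7.7G   7.2G   on
   1 xen01        0    400    398    398   7.7G   7.1G  err'
# Print the names of hosts whose last column reads "err".
failed=$(echo "$sample" | awk 'NR > 1 && $NF == "err" {print $2}')
echo "failed hosts: $failed"
```

This kind of filter is handy in cron jobs that alert you when a host drops out of monitoring.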

That is all! As you can see, KVM host configuration is pretty straightforward and does not need any special fine-tuning. The following paragraphs dealing with KVM are optional, but they will improve your KVM experience.

Enabling kernel samepage merging

KSM is a Linux feature that de-duplicates identical pages in memory. It is very useful when a number of homogeneous virtual machines with similar software versions are running on the same host. This not only maximizes the available memory on the host, but actually permits over-committing memory without the performance penalties of swapping.

In order to check if KSM is enabled by default on the host, use the following command:

$ cat /sys/kernel/mm/ksm/run

If the resulting output of the command is 0, KSM is currently disabled. If No such file or directory is printed, the current kernel does not support KSM.

In order to enable KSM on boot, you can edit your /etc/rc.local configuration file with the following:

#!/bin/sh -e
echo 1 > /sys/kernel/mm/ksm/run
exit 0

After some minutes, you can check the effectiveness of the KSM feature by checking the pages_shared value, which should now be greater than zero. Use the following command:

$ cat /sys/kernel/mm/ksm/pages_shared
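As a rough estimate of the memory KSM is saving, multiply the shared page count by the page size. A sketch that falls back to a sample value on kernels without KSM:

```shell
#!/bin/sh
# Estimate memory de-duplicated by KSM (pages_shared * page size).
pages=$(cat /sys/kernel/mm/ksm/pages_shared 2>/dev/null || true)
pages=${pages:-25600}                 # sample fallback when KSM is absent
page_size=$(getconf PAGESIZE)         # typically 4096 bytes
saved_mib=$((pages * page_size / 1024 / 1024))
echo "approx. ${saved_mib} MiB de-duplicated by KSM"
```

Note that this is only an approximation; the other counters under /sys/kernel/mm/ksm give a more detailed picture of sharing effectiveness.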

Using an updated kernel in Ubuntu Lucid

As the core module of KVM is included in the mainline Linux kernel, using a more recent kernel means getting a more recent KVM module. This will ensure that you have the latest improvements in terms of features, performance, and stability for your virtual instances. On the other hand, a recent kernel is less tested and your mileage may vary; do not update if you are happy with the standard Lucid kernel.

However, building a newer kernel is not an easy task for a newbie. Fortunately, both Ubuntu and Debian provide special backports repositories, which contain recompiled packages from newer releases that were not originally included in the current version.

You can see the currently available backported kernels for Ubuntu Lucid at the following link:

In order to install a backported kernel in Ubuntu Lucid, append the following line to your /etc/apt/sources.list file:

deb lucid-backports main restricted universe multiverse

In order to update the indexes and search for a package, use the following commands respectively:

$ sudo apt-get update
$ sudo apt-cache search linux-image-server-lts-backport

As an example, install the backported natty kernel with the following command:

$ sudo apt-get install linux-image-server-lts-backport-natty

On reboot, GRUB will automatically start the most recent kernel available.


Xen installation

Xen was the first open source hypervisor available on Linux. Nowadays, it is probably the hypervisor most widely used by IT businesses, with tons of guides and howtos available on the Web.

Like KVM, it supports full virtualization using the native CPU extensions, with very similar performance (Xen uses a patched QEMU as well). It also supports plain old paravirtualization, which works only on supported OSes (Linux, NetBSD, FreeBSD, OpenSolaris, and Novell NetWare) but has lower virtualization overhead, providing more raw performance and scalability than full virtualization.

Unfortunately, Ubuntu 10.04 does not include any pre-built binary packages for Xen, but Debian 6.0 does. So our first approach to Xen will be a fast and easy installation on Debian Squeeze using Debian packages.

Installing on Debian Squeeze through standard repositories

You should make a clean Debian Squeeze install using a partitioning scheme according to your needs.

Make a stripped-down installation
During the setup process, the Debian installer will ask you about the packages that should be installed along with the "Standard base system". If you de-select them, you will get a working base system occupying less than 200 MB of space!

After the base system installation is complete, log in as root and install the required packages using the following command:

# apt-get install sudo openssh-server ruby xen-hypervisor-4.0-amd64 linux-image-xen-amd64 xen-qemu-dm-4.0

Remember that you need to create the oneadmin user and configure sudo, DNS, and the network as described earlier in this article.

Let us take a look inside the Debian Xen packages. They are as follows:

  • xen-hypervisor-4.0-amd64: This is the core of Xen. It is the kernel that will execute the Dom0 and DomU instances; it is booted by GRUB before anything else, and it controls CPU and memory sharing between the running instances.

  • linux-image-xen-amd64: This is a Linux kernel with support for Dom0 (the instance used for managing the entire Xen system) and DomU (the kernel for virtual machines).

  • xen-qemu-dm-4.0: This is a QEMU version patched for specific Xen support. With this, you can run fully virtualized machines using CPU virtualization support.

To boot the Xen system, you need to reboot your system using the newly installed Xen-enabled Linux kernel.

However, in the default GRUB configuration, the first kernel is the new linux-image-xen-amd64 without the Xen hypervisor enabled. To make the default entry the one that also starts the hypervisor, lower the priority of the GRUB script that autodetects the standard local kernels, using the following commands:

$ sudo mv /etc/grub.d/10_linux /etc/grub.d/50_linux
$ sudo update-grub

Let's take a look at our new /boot/grub/grub.cfg auto-generated configuration.

The code is as follows:

### BEGIN /etc/grub.d/20_linux_xen ###
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-xen-amd64 and XEN 4.0-amd64' --class debian --class gnu-linux --class gnu --class os --class xen {
insmod part_msdos
insmod ext2
set root='(hd0,msdos2)'
search --no-floppy --fs-uuid --set bf1132b4-0727-4c4b-a91f-617913a2ad48
echo 'Loading Linux 2.6.32-5-xen-amd64 ...'
multiboot /boot/xen-4.0-amd64.gz placeholder

The parameter multiboot is used to load the Xen core component:

module /boot/vmlinuz-2.6.32-5-xen-amd64 placeholder root=UUID=bf1132b4-0727-4c4b-a91f-617913a2ad48 ro quiet

The first module parameter defines the kernel used to boot the Dom0 instance, from which we will manage the entire Xen environment.

echo 'Loading initial ramdisk ...'
module /boot/initrd.img-2.6.32-5-xen-amd64

The second module parameter defines the standard initrd image to load.

}
### END /etc/grub.d/20_linux_xen ###

After this section, you will actually find the standard kernel entries that can boot for an environment without a Xen instance running. It is useful for maintenance purposes or if something goes really wrong with the Xen enabled instance.

The code to be used is as follows:

### BEGIN /etc/grub.d/25_linux ###
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-xen-amd64' --class debian --class gnu-linux --class gnu --class os {
insmod part_msdos
insmod ext2
set root='(hd0,msdos2)'
search --no-floppy --fs-uuid --set bf1132b4-0727-4c4b-a91f-617913a2ad48
echo 'Loading Linux 2.6.32-5-xen-amd64 ...'
linux /boot/vmlinuz-2.6.32-5-xen-amd64 root=UUID=bf1132b4-0727-4c4b-a91f-617913a2ad48 ro quiet
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-2.6.32-5-xen-amd64
}
### END /etc/grub.d/25_linux ###

As you can see, the Xen-enabled kernel is used, but it runs standalone without the Xen core.

Now if you reboot, you should boot the following default entry:

Debian GNU/Linux, with Linux 2.6.32-5-xen-amd64 and XEN 4.0-amd64

You should also be able to execute the following command as root:

$ sudo xm dmesg

If you get a screen message as follows:

WARNING! Can't find hypervisor information in sysfs!
Error: Unable to connect to xend: No such file or directory. Is xend running?

Then you are probably running the Xen-enabled kernel, but without using Xen. Make sure that the GRUB entry you boot is the one that contains the multiboot command with xen*.gz.

Installing Xen through sources

If Xen is not available on your distribution or it is quite outdated, you can compile it from the upstream source tarball.

Download it from Products | Xen Hypervisor | Download the latest stable release and unpack it using the following commands:

$ wget 4.1.2/xen-4.1.2.tar.gz
$ tar xvf xen-*.tar.gz

On Ubuntu and Debian systems, you need to install the build-essential tools and some libraries. It can be done by using the following command:

$ sudo apt-get install build-essential bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils

If you are running an amd64 distribution, you also need gcc multilib support:

$ sudo apt-get install gcc-multilib

If you have trouble installing Xen dependencies, always take a look at the release notes on the official Xen Wiki for the version you are trying to install.

While you proceed with the Xen compilation, please note that an active Internet connection is required, as the build process downloads some specific patches:

$ cd xen-*
$ make xen
$ make tools
$ make stubdom
$ sudo make install-xen
$ sudo make install-tools PYTHON_PREFIX_ARG=
$ sudo make install-stubdom

Please note that specifying an empty PYTHON_PREFIX_ARG is currently required for Ubuntu/Debian systems; check the release notes for additional information.

Speed up compilation
In order to speed up the compilation process, add -j5, or the number of CPU cores of your system plus one, to the make commands.

Now enable the automatic start up of Xen services on system boot with the following commands:

$ sudo update-rc.d xencommons defaults 19 18
$ sudo update-rc.d xend defaults 20 21
$ sudo update-rc.d xendomains defaults 21 20
$ sudo update-rc.d xen-watchdog defaults 22 23

A suitable kernel with dom0 support – Debian Squeeze

Now that you have installed the Xen core and utilities, you need a Linux kernel with support for the dom0 instance.

On Ubuntu 10.04, the most straightforward way to get a stable kernel with dom0 support is to use the Debian Squeeze kernel (yes, it will work!).

Type the following URL into your browser; it is the main page of the meta-package for Linux with Xen support. At the center of the page, you can find the latest available binary package, that is:

dep: linux-image-2.6.32-5-xen-amd64

On this new page, scroll down to the Download linux-image-2.6.32-5-xen-amd64 section.

Click on amd64, pick a mirror from the ones listed, and download using the following command:

$ wget linux-image-2.6.32-5-xen-amd64_2.6.32-38_amd64.deb

Now download the linux-base package, which is a dependency of the linux-image package:

$ wget linux-base_2.6.32-38_all.deb

In order to fix the dependencies, install the packages through dpkg using the following commands:

$ sudo dpkg -i linux-image-*-xen-*.deb linux-base*.deb
$ sudo apt-get install -f

Now configure GRUB as in Debian Squeeze with the following command:

$ sudo mv /etc/grub.d/10_linux /etc/grub.d/50_linux

Append the following code to /etc/grub.d/40_custom:

menuentry "Xen" {
insmod ext2
set root='(hd0,1)'
multiboot (hd0,1)/boot/xen.gz dummy=dummy
module (hd0,1)/boot/vmlinuz-2.6.32-5-xen-amd64 dummy=dummy root=/dev/sda1 ro
module (hd0,1)/boot/initrd.img-2.6.32-5-xen-amd64
}

Carefully check the device definition (hd0,1) and the root parameter passed to vmlinuz against the already existing GRUB entries in /boot/grub/grub.cfg; otherwise, the system will not boot.

You can find all the currently available options of a kernel with dom0 support for main distributions at .

A suitable kernel with dom0 support – Oneiric backport

Ubuntu 11.10 Oneiric contains a kernel with support for dom0. Even though it comes from a different distribution, it is advisable to use the Debian Squeeze kernel rather than the Ubuntu Oneiric kernel on Ubuntu Lucid, for stability purposes.

However, you might prefer the backported kernel as it is more recent than the Debian kernel, and it is directly installable and upgradeable through the standard Ubuntu backports repository.

In order to enable the backports repository in your /etc/apt/sources.list configuration file, use the following line:

deb lucid-backports main restricted universe multiverse

In order to install the backport package of the Oneiric kernel on Lucid, use the following commands:

$ sudo apt-get update
$ sudo apt-get install linux-image-server-lts-backport-oneiric

If apt complains that the package cannot be found, it may still be in the proposed repository for testing purposes. If you want to use it, enable that repository using the following line:

deb lucid-proposed main restricted universe multiverse

Please note that lucid-backports and lucid-proposed contain many packages. Revert your /etc/apt/sources.list file if you do not intend to install or upgrade other packages to more recent (and possibly buggy) releases.

Now configure GRUB using the following command:

$ sudo mv /etc/grub.d/10_linux /etc/grub.d/50_linux

Append the following code to /etc/grub.d/40_custom:

menuentry "Xen 4 with Linux 3.x" {
insmod ext2
set root='(hd0,1)'
multiboot (hd0,1)/boot/xen.gz dummy=dummy
module (hd0,1)/boot/vmlinuz-3.0.0-13-server dummy=dummy root=/dev/sda1 ro
module (hd0,1)/boot/initrd.img-3.0.0-13-server
}

Checking if your current kernel has Xen support

For every kernel installed on an Ubuntu or Debian distribution, Xen support can be checked under /boot in a file named config-*, which is the configuration file used to build that kernel.

Alternatively, you can get the configuration of a running kernel through /proc/config.gz.

The kernel options required for domU support are as follows:


The kernel options required for dom0 support in addition to domU kernel options are as follows:


You can check if the CONFIG options mentioned earlier are available with the following command:

grep CONFIG_XEN /boot/config-2.6.32-5-xen-amd64

Alternatively, you can also use the following command:

zgrep CONFIG_XEN /boot/config.gz

In the output from the previous command, =y means the option is built in, =m means it is available as a module, and is not set means it is not available.
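These three states can be checked mechanically. A small sketch using a sample config file in place of a real /boot/config-* (the helper function name is our own):

```shell
#!/bin/sh
# Classify a kernel config option as built in, module, or not set.
check_opt() {
    if grep -q "^$1=y" "$2"; then echo "$1: built in (=y)"
    elif grep -q "^$1=m" "$2"; then echo "$1: module (=m)"
    else echo "$1: not set"
    fi
}

# Sample config standing in for a real /boot/config-* file.
printf 'CONFIG_XEN=y\nCONFIG_XENFS=m\n' > /tmp/sample-config
check_opt CONFIG_XEN /tmp/sample-config
check_opt CONFIG_XENFS /tmp/sample-config
check_opt CONFIG_XEN_DOM0 /tmp/sample-config
```

Point the second argument at /boot/config-$(uname -r) to audit the running kernel.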

Building a custom kernel with dom0 and domU support

For v2.x Linux kernels, you should download kernel sources with Xen support from the following Git repository link:


Scroll down and click on tags | xen-2.6.32.* | snapshot; download the snapshot, then unpack it and enter the newly created directory using the following commands:

$ tar xvf xen-*.tar.gz $ cd xen-*

Now copy your current kernel configuration file here as .config and run make oldconfig to update the standard Ubuntu kernel configuration for the new kernel, with the following commands:

$ cp /boot/config-2.6.32-34-server .config $ make oldconfig

You can use the -generic configuration file without problems too, but with -server most options are already configured in the way that is best for server equipment.

You will be prompted about enabling some new features that were not present in the default Ubuntu kernel, especially the Xen-specific ones. You should reply y if you need to include support for a particular feature, m to include it as a module, or n to exclude it.

In this situation it is okay to enable all the Xen-related features except the debugging ones:

Paravirtualized guest support (PARAVIRT_GUEST) [Y/n/?] y
Xen guest support (XEN) [Y/n/?] y
Enable Xen debug and tuning parameters in debugfs (XEN_DEBUG_FS) [N/y/?] n
Enable Xen privileged domain support (XEN_DOM0) [N/y/?] (NEW) y
Enable support for Xen PCI passthrough devices (XEN_PCI_PASSTHROUGH) [N/y/?] (NEW) y
Xen PCI Frontend (XEN_PCIDEV_FRONTEND) [Y/n/m/?] (NEW) y
Xen Watchdog support (XEN_WDT) [N/m/y/?] (NEW) y
Xen virtual frame buffer support (XEN_FBDEV_FRONTEND) [M/n/y/?] m
Xen memory balloon driver (XEN_BALLOON) [Y/n/?] y
Scrub pages before returning them to system (XEN_SCRUB_PAGES) [Y/n/?] y
Xen /dev/xen/evtchn device (XEN_DEV_EVTCHN) [M/n/y/?] m
Backend driver support (XEN_BACKEND) [Y/n/?] (NEW) y
Xen backend network device (XEN_NETDEV_BACKEND) [N/m/y/?] (NEW) y
Block-device backend driver (XEN_BLKDEV_BACKEND) [N/m/y/?] (NEW) y
Block-device tap backend driver (XEN_BLKDEV_TAP) [N/m/y/?] (NEW) y
PCI-device backend driver (XEN_PCIDEV_BACKEND) [Y/n/m/?] (NEW) y
PCI Backend Mode
  1. Virtual PCI (XEN_PCIDEV_BACKEND_VPCI) (NEW)
  2. Passthrough (XEN_PCIDEV_BACKEND_PASS) (NEW)
  3. Slot (XEN_PCIDEV_BACKEND_SLOT) (NEW)
choice[1-3]: 1
PCI Backend Debugging (XEN_PCIDEV_BE_DEBUG) [N/y] (NEW) n
Xen filesystem (XENFS) [M/n/y/?] m
Create compatibility mount point /proc/xen (XEN_COMPAT_XENFS) [Y/n/?] y
Create xen entries under /sys/hypervisor (XEN_SYS_HYPERVISOR) [Y/n/?] y
userspace grant access device driver (XEN_GNTDEV) [N/m/y/?] (NEW) y
xen platform pci device driver (XEN_PLATFORM_PCI) [M/n/y/?] (NEW) m

Now you could build the new kernel with the default options, but we can easily fine-tune a few other settings first.

In order to run the kernel configuration utility, use the following command:

$ make menuconfig

After this a text-based menu will appear and we will be able to change the kernel configuration interactively.


A first simple but effective change is to tune the new kernel to your specific CPU model, as follows:

Go to Processor type and features | Processor family and change it from Generic-x86-64 to your specific CPU model (probably Core 2/newer Xeon or Opteron/Athlon64).

Under Virtualization, disable Kernel-based Virtual Machine (KVM) support; as we are building for Xen, we will certainly not use KVM.

  1. Once you enter the Device drivers section you will see a lot of specific hardware drivers for which support can be disabled. This is done in order to optimize speed and save space by not building the drivers that will not be used at all.

  2. If this is the first time you are recompiling a kernel, or if you do not want to waste hours figuring out why your new kernel will not boot, move directly to compilation without touching anything else.

We will use the native Debian/Ubuntu tools to generate a new kernel DEB package with the following commands:

$ sudo apt-get install kernel-package fakeroot
$ make-kpkg clean
$ CONCURRENCY_LEVEL=5 fakeroot make-kpkg --initrd --append-to-version=-myversion kernel-image kernel-headers

Setting CONCURRENCY_LEVEL=5 makes the make-kpkg command behave like make -j5, running five build jobs in parallel.
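
If you want the build to scale with the machine it runs on, a common rule of thumb is one job more than the number of CPU cores; a minimal sketch, using nproc from GNU coreutils:

```shell
# Derive a parallelism level of CPUs + 1, the usual heuristic for
# CPU-bound builds, instead of hardcoding CONCURRENCY_LEVEL=5.
JOBS=$(($(nproc) + 1))
echo "CONCURRENCY_LEVEL=$JOBS"
```

You would then export CONCURRENCY_LEVEL=$JOBS (or prefix the make-kpkg invocation with it) before starting the build.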

If nothing went wrong, you can now install your shiny new kernel packages, which were generated in the directory above the unpacked source tree, using the following commands:

$ sudo dpkg -i ../linux-image-2.6.*-myxen_2.6.*-myxen-10.00.Custom_amd64.deb
$ sudo update-initramfs -c -k

On systems other than Debian and Ubuntu, install the kernel the standard way with the following commands:

$ make bzImage
$ make modules
$ sudo make modules_install
$ sudo cp -a .config /boot/config-
$ sudo cp -a /boot/
$ sudo cp -a arch/x86/boot/bzImage /boot/vmlinuz-
$ sudo mkinitrd -f /boot/initrd.img- version

Please consult your distribution manual about kernel compilation for more information.

Now you can configure GRUB as usual by using the following command:

$ sudo mv /etc/grub.d/10_linux /etc/grub.d/50_linux

Append the following code to /etc/grub.d/40_custom file:

menuentry "Xen 4 with custom Linux 2.6.32" {
    insmod ext2
    set root='(hd0,1)'
    multiboot (hd0,1)/boot/xen.gz dummy=dummy
    module (hd0,1)/boot/vmlinuz- dummy=dummy root=/dev/sda1 ro
    module (hd0,1)/boot/initrd.img-
}

Autoloading necessary modules

Depending on your kernel, it may be necessary to manually add a reference to a bunch of Xen modules, one per line, to the /etc/modules configuration file, so that the kernel auto-loads them on system start up. The Xen dom0 modules are as follows:

  • xen-evtchn
  • xen-gntdev
  • xen-netback
  • xen-blkback
  • xenfs
  • blktap
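
The list above can be appended to /etc/modules with a small idempotent loop. The ensure_modules helper below is just a sketch; on a real host you would run it against /etc/modules as root, while here it is demonstrated on a scratch file:

```shell
# ensure_modules FILE MODULE...: append each module name to FILE unless
# an identical line is already present (grep -x matches whole lines).
ensure_modules() {
    file=$1; shift
    for m in "$@"; do
        grep -qx "$m" "$file" 2>/dev/null || echo "$m" >> "$file"
    done
}

# Demonstration on a scratch file instead of /etc/modules
tmp=$(mktemp)
ensure_modules "$tmp" xen-evtchn xen-gntdev xen-netback xen-blkback xenfs blktap
# Running it a second time is idempotent: no duplicate lines are added
ensure_modules "$tmp" xen-evtchn xen-gntdev xen-netback xen-blkback xenfs blktap
wc -l < "$tmp"   # 6
```

Because the loop skips lines that already exist, it is safe to re-run after adding new modules to the list.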

Now you should be able to install Xen in every environment that you face.

Onehost create for Xen hosts

After we have finished configuring our Xen host, we should let it join the OpenNebula host pool in a similar way as we did with KVM. Use the following command:

$ onehost create xen01 im_xen vmm_xen tm_ssh dummy

Installing VMware ESXi

The third hypervisor choice in our OpenNebula environment could be one of the VMware virtualization products, such as VMware ESXi. It is a lightweight bare-metal hypervisor available free of charge (but it is not open source like KVM and Xen, and a free registration is required to obtain a serial number).

Although the current VMware driver also supports the VMware ESX and VMware Server products, here we will cover only ESXi, as it is simpler to configure and because the other two are becoming legacy software; VMware itself advises its customers to migrate existing deployments to VMware ESXi.

The installation on the host is really simple: you just need to download and burn the ISO available from the VMware website, boot from your CD/DVD, and follow the easy installation wizard.

During the installation, your hardware is checked to make sure it meets the minimum requirements for ESXi, which are as follows:

  • A 64-bit CPU with VT-x or AMD-V available
  • At least 2 GB of RAM
  • A supported RAID or AHCI/SATA controller

Before buying your hardware, please remember to check the VMware Certified Compatibility Guides or the community-supported hardware and software lists.

If you are performing a fresh install, your local storage will be automatically wiped and repartitioned. Hence, make a backup of your existing data, if any.

Required software on the frontend

To be able to manage VMware hypervisors, you need the following software components on the frontend:

  • A libvirt build configured with the --with-esx flag, used by the OpenNebula VMware drivers

  • The VMware driver add-ons available as a separate download from the OpenNebula site

Installing Libvirt with ESX support

Most distributions do not include support for the VMware hypervisors in their libvirt package, so we will probably need to recompile it.

Browse to the libvirt website and click on Downloads | Official Releases | HTTP Server, and download the latest libvirt-x.x.x.tar.gz package. Unpack it on your frontend and install some required build dependencies using the following command:

$ sudo apt-get install build-essential gnutls-dev libdevmapper-dev python-dev libcurl4-gnutls-dev libnl-dev

In order to configure, build, and install use the following commands:

$ ./configure --with-esx
$ make
$ sudo make install
$ sudo ldconfig

Libvirt does not need to be started as a daemon; it will be called directly by the VMware management scripts.
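
After make install, you can sanity-check that the ESX driver was actually compiled in: virsh -V lists, among other things, the hypervisor drivers the build supports, and "ESX" should appear after a --with-esx build. The has_esx filter below simply greps such output; the sample lines are illustrative, since this sketch does not assume libvirt is installed:

```shell
# has_esx: read `virsh -V`-style output on stdin and report whether an
# ESX driver is mentioned.
has_esx() { grep -qi 'esx' && echo yes || echo no; }

# Sample lines standing in for real `virsh -V` output
printf 'Hypervisors: QEMU/KVM ESX Test\n' | has_esx   # yes
printf 'Hypervisors: QEMU/KVM Test\n' | has_esx       # no
```

On the frontend you would pipe the real output through it: virsh -V | has_esx.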

Adding a oneadmin user with privileges

A new user with administration privileges needs to be created in ESXi with the same UID and username as the oneadmin user of the OpenNebula frontend. A matching UID is required because of the storage limitations in ESXi.
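
The UID to replicate can be looked up on the frontend with the id command before you create the ESXi user. The uid_of wrapper below is not an OpenNebula tool, just a small illustration that also handles a missing account gracefully:

```shell
# uid_of USER: print the numeric UID of USER, or "missing" if the
# account does not exist on this system.
uid_of() { id -u "$1" 2>/dev/null || echo missing; }

uid_of root                    # 0 on any Unix system
uid_of no-such-user-here       # missing
```

On the frontend, uid_of oneadmin prints the value to enter in the ESXi UID field.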

To create the new user, you need to download the VMware vSphere client on a Windows machine: type the IP address of your ESXi node into your browser and click on Download vSphere Client.

After the download and install, connect to your ESXi node:


  1. Click on the Local Users & Groups tab.

  2. Right-click on the list and click on Add.

  3. Insert the new user information: User: oneadmin, UID: 1000 (check the ID of oneadmin with the id command on the frontend), and group membership: root.

  4. Switch to the Permission tab and right-click on Add Permission.

  5. Add the oneadmin user with Assigned Role as Administrator and confirm by clicking on OK.


Now you need to configure the user credentials in the vmware configuration files inside the OpenNebula /etc folder:

# Libvirt configuration
:libvirt_uri: "'esx://@HOST@/?no_verify=1&auto_answer=1'"

# Username and password of the VMware hypervisor
:username: "oneadmin"
:password:
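
OpenNebula replaces the @HOST@ placeholder in libvirt_uri with the actual host name when it contacts each hypervisor. When testing a URI by hand with virsh, you can mimic the substitution yourself; the sed one-liner below is just an illustration with a hypothetical host name, esx01:

```shell
# Substitute the @HOST@ placeholder the way the driver does at run time.
uri='esx://@HOST@/?no_verify=1&auto_answer=1'
echo "$uri" | sed 's/@HOST@/esx01/'   # esx://esx01/?no_verify=1&auto_answer=1
```

The resulting URI is exactly what you would pass to virsh -c for a manual connection test.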

Now you can register your ESXi host on your OpenNebula frontend with the following command:

$ onehost create esx01 im_vmware vmm_vmware tm_vmware dummy

The dummy driver for networking is the simplest choice with VMware. Through the vSphere Client, you must manually configure at least one vSphere standard switch from the Networking section of the Configuration tab.


The name of the network will be used in the same way as the standard KVM/Xen host bridge name (for example, lan0) when we see how to configure virtual networks. To update it, click on Properties to the right of Standard Switch: vSwitch0, select the default Virtual Machine Port Group, click on the Edit button, and finally update the Network Label.


An alternative for advanced network setups, including VLAN support, has been available since the OpenNebula 3.2 release. The vmware network driver can dynamically allocate vSphere standard switches and group each VM in different VLANs.

However, network switches with IEEE 802.1Q support are needed, and this setup will not be covered in this article. For additional information, please refer to the OpenNebula documentation.

Wait for the next monitoring cycle to start and afterwards check the correctness of the procedure with the following command:

$ onehost list

If for whatever reason the first probe is unsuccessful and host resources are not reported correctly, try to connect from the frontend to the esx host with the following command:

$ virsh -c esx://esx01/?no_verify=1


In this article, we have learned how to install and configure all the hypervisors supported by OpenNebula on our host machines, and seen what each hypervisor needs before it is ready to be used. KVM is a lot easier to set up than Xen, as it is integrated in the mainline Linux kernel; however, Xen may be a good choice for a skilled system integrator who is already accustomed to it. ESXi hosts are easy to set up too, but the lack of freedom can be a problem when working in a heterogeneous environment.


You've been reading an excerpt of:

OpenNebula 3 Cloud Computing
