oVirt is a flexible, feature-rich virtualization management solution that makes managing a virtual infrastructure easy and practical. The oVirt platform avoids unnecessary complexity and significantly reduces the cost of implementing virtualization and of the subsequent infrastructure maintenance. The main goal of oVirt is to help users build an easily managed, highly available infrastructure for running virtual machines (VMs). oVirt enables the rapid deployment of both desktop and server virtualization infrastructure, and it is a good choice in the following cases:
Combining various types of hardware in a single virtualization platform
Establishing a center for managing VMs with GUI control
Simplifying the management of large numbers of VMs
Automating VM clustering and load balancing
Automating hardware and VM failover through live migration
oVirt is open source software developed with backing from Red Hat, and it is the upstream base for Red Hat Enterprise Virtualization (RHEV). RHEV is a stable commercial product, while oVirt is its bleeding-edge upstream: features land in oVirt first and are merged into RHEV once they are tested and stable. As a result, oVirt is more modern but less stable than RHEV and has no commercial support, whereas RHEV, although less advanced, is stable and recommended for enterprise and mission-critical systems. oVirt is also supported and developed with the involvement of companies such as IBM, Cisco, Intel, Canonical, NetApp, and SUSE.
Flexible management of virtualization infrastructure:
Centralized management portal for administrative tasks
Multilevel control that allows you to manage the physical infrastructure at the level of logical objects
The ability to add existing virtual machines on existing servers into the oVirt environment
Flexible user management with external directory servers
Resource usage efficiency:
A resource scheduler that dynamically balances resource usage
Power-saving policies that can reduce energy and cooling costs
Quotas and resource limitations
Fast deployment of virtual machines:
Template management that simplifies the creation and management of virtual machines
Snapshots, cloning, and pre-started virtual machines that are ready for usage
Flexible storage management:
Storage virtualization for consistent treatment of shared storage from any server
Ability to use different types of storage
oVirt is a dynamically developing product based on modern technologies. It is built on Linux and libvirt (the official FAQ is available at http://wiki.libvirt.org/page/FAQ). Libvirt is a virtualization management tool that can manage virtual machines hosted on QEMU/KVM, Xen, VirtualBox, and LXC; oVirt, however, focuses on QEMU with the Kernel-based Virtual Machine (KVM). (For more information about KVM, refer to http://www.linux-kvm.org/page/Main_Page.)
oVirt uses KVM that requires processors with hardware virtualization extensions (for more information on virtualization extensions refer to http://en.wikipedia.org/wiki/X86_virtualization), such as Intel VT-x or AMD-V. KVM supports x86 processors and has been ported to ARM, IA-64, PowerPC, and S/390 platforms.
oVirt can be used at almost any scale. We can set up a small experimental installation on a desktop or build a large infrastructure that meets the needs of a whole enterprise. This is achieved by supporting a wide variety of hardware platforms. The storage layer likewise supports different types of storage, such as NFS, iSCSI, Fibre Channel, and GlusterFS.
oVirt will be of interest to engineers, system administrators, and professionals who work with virtualization and Linux. To start working with oVirt, you must have basic skills in working with the Linux console. For more information, you can visit the following resources:
oVirt Engine: It is the control unit used for administrative tasks: managing the global configuration of the entire virtualization infrastructure, the virtual machines, the storage, and the network settings.
Storage and network infrastructure (external disk capacity units): These can be direct-attached or network-attached storage (DAS/NAS) or high-performance storage area networks (SAN). Disk capacity units hold virtual machine images and OS installation images. Network devices, such as switches, provide connectivity between the engine, the nodes, and the storage.
The oVirt architecture is as shown in the following figure:
oVirt Engine is a set of software and services that implements the functionality of the central control infrastructure. This control unit is the platform's core and provides an interface to the other infrastructure components. Using the oVirt Engine interfaces, the administrator can run the whole setup inside oVirt; with its help, we achieve one of oVirt's main goals: centralized management.
Virtualization hosts (oVirt Nodes) are x86_64 Linux servers running the libvirt daemon and the VDSM (Virtual Desktop and Server Manager) host-agent service. These are the set of packages and supporting services required for the rapid deployment of virtualization. The best-supported and preferred distributions for building nodes are CentOS and Red Hat Enterprise Linux. Alternatively, we can use oVirt Node, a stripped-down, minimalistic Fedora-based distribution containing only the packages necessary for integration into the oVirt platform.
Storage is an external component of the oVirt infrastructure, but it is required by oVirt. (We can also use local storage, where the storage is located on the compute node itself.) Storage nodes use a block or file storage type and can be either local or remote, accessible via the following protocols: NFS (additional information on network-attached storage is available at http://en.wikipedia.org/wiki/Network-attached_storage), iSCSI, and Fibre Channel (information about storage area networks is available at http://en.wikipedia.org/wiki/Storage_area_network). The cluster filesystem GlusterFS (the GlusterFS community is at http://www.gluster.org/) is also supported, since oVirt 3.3, through a special storage type called POSIXFS. For most cases, NFS or GlusterFS storage is a good choice that does not cost much. GlusterFS storage nodes are grouped into single storage filesystems known as volumes, which guarantee high data availability and redundancy. Storage can be configured to replicate data, so that when one of the GlusterFS nodes fails, the storage can continue its work.
oVirt provides the ability to work with multiple types of storage simultaneously. However, there is a significant limitation: a single data center can use only one type of storage.
Additionally, oVirt Engine can be configured to use an external identification and authorization service, such as Active Directory (the Active Directory wiki page can be found at http://en.wikipedia.org/wiki/Active_Directory) or IPA (FreeIPA's official website is http://www.freeipa.org/page/Main_Page), for user authentication. Such services are third-party in relation to oVirt and are not included in the oVirt packages.
We have reviewed the overall architecture of oVirt; however, we need to know the components of a platform and their purpose. It's especially important when troubleshooting. oVirt consists of several components, each responsible for a part of the work. It is shown in the following diagram:
At first glance, it might seem that the internal structure of oVirt is quite complex. However, this is not the case. Once you understand the basic components, which are given in the following points, it becomes clear that everything is simple.
Data Warehouse: The Data Warehouse (DWH) component, based on Talend data integration software, performs ETL (Extract, Transform, Load) processing and loads the results into a dedicated history database. Additional information on the Talend software can be found at http://www.talend.com/products/data-integration.
oVirt Engine is a Java-based application running as a web application. This service communicates directly with the VDSM agents placed on the virtualization hosts. It is a scalable, centralized management tool for server and desktop virtualization, which internally uses modern technologies such as the JBoss application server (for more information, visit https://access.redhat.com/site/documentation/JBoss_Enterprise_Web_Server/#) and the Java and Python programming languages. oVirt Engine provides the following functions:
Full life cycle management of virtual machines
Authentication against LDAP providers (Active Directory or IPA)
Network configuration management, for creating logical networks and connecting them to hosts
Storage management, for managing storage domains (NFS, iSCSI, Fibre Channel, GlusterFS, or local) and the disk images of virtual machines
High availability functions, to automatically restart virtual machines on other nodes after a hardware or network failure of the source host
A system scheduler, implementing load balancing of virtual machines based on resource usage policies
Image management, for allocation based on templates, thin provisioning, and snapshots
Monitoring of platform objects, such as virtual machines, hosts, the network environment, and storage
As we can see, this component provides most of the functionality inherent to oVirt. The Engine is the core of the system, through which the main oVirt goal is achieved: centralized management and task automation.
Admin portal is a convenient web-based graphical administration interface designed for centralized infrastructure management and the administration of virtual machines. The Admin portal allows you to manage all aspects of your virtual infrastructure, from the creation of data centers and clusters to the maintenance of the VMs' life cycle. The Admin portal is an intermediary between the administrator and oVirt Engine: all commands issued by the administrator are passed to oVirt Engine for execution.
User portal is a simple graphical interface that provides access to the life cycle management of virtual machines. The User portal does not expose the global configuration settings and does not allow you to affect the operation of the entire infrastructure. Through the User portal, we can start and stop virtual machines, manage templates, and perform simple monitoring.
User and Admin portals allow you to achieve another goal, that is, to provide a convenient interface to manage the virtual infrastructure.
An oVirt Node, or compute virtualization node, runs the VDSM component (the host agent). This component is written in Python and provides all of the oVirt functionality related to the compute nodes. Tasks issued in the Admin portal are passed to oVirt Engine and then transferred to the VDSM agent for direct execution. The tasks can be very different: creating or running a virtual machine, starting a migration, adding or removing a VM device, reconfiguring the network environment, and so on. To manage virtual environments, the host agent uses libvirt, the virtualization management library. Thus, it can be concluded that the VDSM agents perform all of the work associated with virtualization.
The Guest agent component provides communication between the host system and the virtual machine through a VirtIO connection (VirtIO para-virtualized drivers are described at http://www.ibm.com/developerworks/library/l-virtio/). A VirtIO serial channel is connected to the host via a character device driver. The Guest agent provides additional information to oVirt Engine, for example, about memory usage, devices, and the internal state of the guest environment. Based on the data obtained, the Engine gets an idea of what is going on inside the infrastructure.
Database storage, based on the PostgreSQL RDBMS, is used to store various information related to oVirt: the global configuration, the configuration of virtual machines, clusters, and data centers, and the various journals and statistical data accumulated along the way. The database helps achieve another of oVirt's goals: centralized configuration storage.
Data Warehouse (DWH) is an internal component that processes the data on which reports are built: data associated with the operation of data centers, clusters, and virtual machines. This resource usage data is collected while oVirt is running. The data is stored in a PostgreSQL database and can be used for further processing. The DWH ETL (Extract, Transform, Load) component uses Talend's data integration tooling. The DWH component periodically collects data from the underlying PostgreSQL database, processes it, and loads the results into a separate history database, ready for reporting.
Report Engine is a tool for generating reports on the use of resources within the infrastructure, based on Jasper Reports (JasperReports' official website is http://community.jaspersoft.com/project/jasperreports-library). It is an open source reporting engine written in Java that can use different data sources for creating reports. This is quite a flexible tool that lets you add or export different types of reports. Report Engine is quite functional and provides features such as scheduled reports, filters, export to various formats, and special tools for creating reports. Reports can show a variety of useful information; based on them, we can perform analysis and make predictions about how resources are used and how the infrastructure should be developed.
The developer interfaces are the REST API, the SDK, and the CLI. A REST-oriented programming interface is used for integration with oVirt Engine. This interface can be used from any programming language for various oVirt tasks (creating oVirt resources, managing VMs, and so on); the language used must be able to perform HTTP requests and handle XML. A software development kit (SDK) is available in Python and Java flavors. With the SDK, we can perform operations on objects in oVirt while enjoying a complete abstraction of the protocol; it is fully compatible with the architecture of the oVirt API. The development kit is simple to use and easy to learn. The command-line interface (CLI), also written in Python, provides information about and performs various operations on objects in oVirt. Like the development kit, the CLI is fully compatible with the oVirt API and is intuitive by nature.
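As a minimal sketch of driving the REST API from Python, the snippet below builds the HTTP Basic Authorization header and the XML Accept header that a request to the Engine's API endpoint carries; the engine URL and the credentials are placeholder assumptions, not values from this chapter.

```python
import base64

# Hypothetical engine endpoint -- a placeholder, not a real host.
ENGINE_API = "https://engine.example.com/api"

def basic_auth_headers(user, password):
    """Build the HTTP headers for an oVirt REST API request:
    Basic authentication plus an XML Accept header (the 3.x API speaks XML)."""
    token = base64.b64encode("{0}:{1}".format(user, password).encode()).decode()
    return {
        "Authorization": "Basic " + token,
        "Accept": "application/xml",
    }

# Placeholder credentials; a GET request to ENGINE_API + "/vms" with these
# headers would list the virtual machines. Any HTTP client (urllib, requests)
# can send it.
headers = basic_auth_headers("admin@internal", "password")
```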
This section describes the minimum and recommended hardware requirements that must be met for a successful installation of the oVirt platform. Deploying an oVirt environment requires several components acting in different roles. Depending on the role performed, each can be a dedicated server or a workstation. We will need the following components:
One machine will act as the management server (oVirt Engine).
One or more machines will be used as virtualization nodes; we need at least two machines to perform live migration or to implement the power management features.
One or more machines will act as clients to access the administrator portal. This may be a simple desktop workstation.
A storage infrastructure that provides its storage over one of the supported protocols (NFS, iSCSI, Fibre Channel, GlusterFS, or POSIX).
The most difficult task is the selection of the storage infrastructure; it is often the most expensive component of a virtualization infrastructure. Shared storage can be provided by a variety of equipment: dedicated storage servers, NAS devices, or a SAN infrastructure. The shared storage must be reachable by and enabled for the virtualization hosts.
It is clear that more complex configurations require a more careful approach and an analysis of the requirements before making the final decision. However, there are some simple and universal rules:
A large number of VMs require more virtualization hosts
A large amount of disk space inside VMs requires a large capacity of shared storage
Good performance requires high performance of hardware (network bandwidth, storage and disk performance, CPU, and so on)
More memory is better
The following requirements are the lower limits below which oVirt performance will be poor.
A dual or a quad core CPU
4 GB system RAM that is not being used by existing processes
10 GB of locally accessible and writable disk space
1 Network Interface Card (NIC) with a bandwidth of at least 1 gigabit per second
An installed and running x86_64 operating system such as CentOS 6, RHEL 6, or Fedora 18/19
When you create a management server, it is worth bearing in mind that the load on this server is not constant, so increasing the number of cores does not significantly improve its performance. Memory matters more: for the oVirt services to work comfortably, 4 to 8 GB of RAM will be enough. The performance of ordinary SATA drives will also be quite sufficient, as there is no constant disk I/O load on the management node. Gigabit network cards have become an integral part of server hardware, so bandwidth should not pose any difficulties.
A dual or quad core CPU
4 GB system RAM that is not being used by existing processes
25 GB of locally accessible and writable disk space
1 NIC with a bandwidth of at least 1 gigabit per second
An installed and running x86_64 operating system such as CentOS 6, RHEL 6, or Fedora 18/19
Virtualization hosts must have at least one CPU supporting the Intel 64 or AMD64 extensions and the Intel VT or AMD-V hardware virtualization extensions. In this case, the more CPUs or cores, the better: more cores mean that we can run more VMs simultaneously. Currently, a maximum of 128 physical CPUs per virtualization host is supported. To check whether your processor supports the required virtualization extensions and whether they are enabled, enter the following command in the Linux terminal:
# grep -E 'svm|vmx' /proc/cpuinfo
This command searches for the flags that indicate virtualization support in the installed CPUs. No output means that hardware virtualization is not supported and the machine cannot serve as a virtualization host. Also check the BIOS settings: some manufacturers turn off hardware virtualization support by default.
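The same check can be done programmatically. The following Python sketch (a hypothetical helper, not part of oVirt) scans /proc/cpuinfo-style text for the same vmx (Intel VT-x) or svm (AMD-V) flags that the grep command above looks for.

```python
def has_virt_extensions(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None, depending on
    which hardware virtualization flag appears in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Flag lines look like: "flags\t\t: fpu vme ... vmx ..."
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real Linux host:
#   has_virt_extensions(open("/proc/cpuinfo").read())
```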
In the case of memory, the phrase "No amount of memory is too much" is fully justified. The minimum amount of memory required by a virtualization host is 4 GB (although the oVirt developers suggest 10 GB, you can start with less); it all depends on how many virtual machines you run and how much memory they are configured with. More VMs need more memory! A fairly universal rule is that the amount of memory for the host system is the sum of the memory of all its VMs plus 2 GB of RAM for the host operating system itself.
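The sizing rule above is simple arithmetic; as an illustrative sketch:

```python
def host_memory_gb(vm_memory_gb, host_reserve_gb=2):
    """Rule of thumb from the text: host RAM should cover the sum of
    all VM memory plus about 2 GB for the host OS itself."""
    return sum(vm_memory_gb) + host_reserve_gb

# Three VMs configured with 4, 8, and 2 GB of RAM need a host with
# roughly 4 + 8 + 2 + 2 = 16 GB.
```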
It is also known that KVM can overcommit physical memory; this allows you to allocate to virtual machines more memory than is physically available. However, we must be very careful, because a simultaneous peak load on these virtual machines can lead to swapping and significant performance degradation, so use this with care. oVirt also supports memory ballooning and KSM, which dynamically adjust the memory usage of virtual machines and use memory efficiently by merging duplicate pages. The supported upper limit is 1 TB of RAM per virtualization host.
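To make the overcommit risk concrete, here is a small illustrative helper (not an oVirt API) that computes how far the memory promised to VMs exceeds the physical RAM:

```python
def memory_overcommit_ratio(vm_memory_gb, physical_ram_gb):
    """Ratio of memory allocated to VMs versus physical RAM.
    Values above 1.0 rely on KVM overcommit (ballooning, KSM) and
    risk swapping if the VMs hit peak load simultaneously."""
    return sum(vm_memory_gb) / float(physical_ram_gb)

# Three 8 GB VMs on a 16 GB host give a ratio of 1.5, that is,
# 50% more memory is promised than physically exists.
```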
Normal operation of virtualization hosts requires a local storage that will store host configuration files, work logs, core dumps, and more. The recommended amount of space for local storage is 10 GB.
The minimum supported internal storage for each oVirt Node is the total amount of space required to provision the following partitions:
Root partition with the operating system files, at least 2 GB
Log file storage, at least 2048 MB
Data storage, at least 1 GB
Swap partition, at least 2 GB
The swap partition size can also be determined by the following formula:
TOTAL_SWAP = (TOTAL_RAM x 0.5) + 4 GB
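Expressed as code, the formula gives, for example, 12 GB of swap for a host with 16 GB of RAM:

```python
def recommended_swap_gb(total_ram_gb):
    """TOTAL_SWAP = (TOTAL_RAM x 0.5) + 4 GB, as given above."""
    return total_ram_gb * 0.5 + 4

# recommended_swap_gb(16) -> 12.0 GB of swap
```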
A physical virtualization host must have at least one network card with a speed of at least 1 gigabit per second for the normal execution of tasks that depend directly on network bandwidth, such as live migration. In large oVirt installations, where intensive exchange with the shared storage is expected, it is also recommended to use a high-speed network connection. If possible, use two network cards: one for the main production network and a second connected to a separate management network. Alternatively, we can aggregate network cards into a bond device for network connection redundancy or performance.
It may happen that we have no free physical servers but really want to try oVirt. Well, we can use virtual servers in an existing virtual environment. Yes, oVirt Engine can run inside a virtual machine. To run the virtualization hosts, our physical virtualization server must support nested virtualization and have it enabled (all modern processors are able to do this). As for storage, take a virtual machine with large virtual disks and run an NFS server on it. Such a setup can't be used for a production oVirt deployment, but it is quite suitable for experiments. Additional information about nested virtualization can be found at http://www.ibm.com/developerworks/cloud/library/cl-nestedvirtualization/.
In this chapter, we discussed the basic theoretical issues to be aware of when installing oVirt. When we install any complex system, it is important to know how it works from the inside; this understanding simplifies further work with the system and helps us grasp its subtleties. Knowledge of the overall architecture and the system requirements will help in choosing hardware and making better use of the available resources. Information about the components and their interaction gives you a clear understanding of how to further optimize resource usage with respect to each component. In Chapter 2, Installing oVirt, we move on to the installation of oVirt Engine and the virtualization hosts. We will have a detailed look at the main stages of the installation and the initial configuration of oVirt.