With rising energy costs and ever-increasing processor power and memory capacity, running separate, under-utilized servers dedicated to specific roles has become a luxury few can afford, and thus was born the technology we so affectionately call virtualization. The term virtualization mainly refers to virtualizing servers, but of late it is also being used for network or infrastructure virtualization. Server and network virtualization enable us to truly create a virtual world where anything is possible. Virtual data centers, virtual storage systems, and virtual desktop PCs are just a few real-world applications where virtualization is heavily used. Although the world is now being swept by virtualization, the term itself is not new. The concept was used in the mainframes of the 1960s, where virtualization was a way to logically divide a mainframe's resources among different applications.
A hypervisor is the underlying platform or foundation upon which a virtual world can be built. In a way, it is the very building block of all virtualization. A bare metal hypervisor acts as a bridge between physical hardware and the virtual machines by creating an abstraction layer. Because of this unique feature, an entire virtual machine can be moved over a vast distance across the Internet and still function exactly the same. A virtual machine does not see the hardware directly; instead, it sees the layer of the hypervisor, which is the same no matter what hardware the hypervisor has been installed on.
The Proxmox hypervisor is one of the best kept secrets in the modern computing world. The reason is simple: it allows an enterprise-class virtual infrastructure to be built at a small-business price tag without sacrificing stability, performance, or ease of use. Whether it is a massive data center serving millions of people, a small educational institution, or a home server for family members, Proxmox can fulfil the needs of just about any situation. Even a novice networker can get a stable virtualization platform up and running in less than an hour.
A Proxmox cluster consists of two or more computer nodes with Proxmox as the operating system and connected in the same network. A virtual machine can migrate from one node to another in the same cluster, which allows redundancy should a node fail for any reason. Refer to the following diagram of a very basic two-node Proxmox cluster with FreeNAS shared storage. Please note that while this form of setup is good enough for learning purposes, it may not be enough for a production environment where uptime is essential. In later chapters, we will see how to add more Proxmox nodes and storage nodes into the cluster to ensure redundancy.

In this and the upcoming chapters, we will see the mighty power of Proxmox from the inside out. We will deconstruct scenarios and create a very complex virtual environment, which will challenge us to think outside the box. We will also see some real incident-based issues and how to troubleshoot them. So strap yourself in and let's dive into the virtual world with the mighty hypervisor, Proxmox. In this chapter, we will look at the Proxmox GUI and its menu system, and then set up a basic Proxmox cluster with virtual machines.
A hands-on approach has been followed throughout this book to allow the reader to learn Proxmox in a practical way. If you do not have a Proxmox cluster set up or have no access to an existing cluster, you can set up a basic-level Proxmox cluster by following the installation instructions laid out in the Setting up a basic cluster section of this chapter. If you already have a cluster, follow along from the next section, The Proxmox Graphical User Interface (GUI).
The Proxmox Graphical User Interface, or Proxmox GUI, allows users to interact with the Proxmox cluster graphically using menus and a visual representation of the cluster status. Even though all of the management can be done from the Command-line Interface (CLI), it can be overwhelming at times, and managing a cluster can become a daunting task. To properly utilize a Proxmox cluster, it is very important to have a clear understanding of the Proxmox GUI. The GUI can be easily accessed from just about any browser through a URL similar to https://192.168.1.1:8006
, as shown in the following screenshot:

The various fields marked in the previous screenshot are as follows:
1 shows the URL to access the Proxmox GUI through a browser
2 shows the logout button to exit the Proxmox GUI
3 shows the button to open the virtual machine creation dialog box
4 shows the button to open the OpenVZ container creation dialog box
5 shows the Proxmox tabbed menu bar
6 shows the drop-down menu to change the period of the status graphs
7 shows the status information block for Proxmox nodes, virtual machines, or containers
8 shows the OpenVZ containers
9 shows the available virtual machine template for cloning
10 shows the KVM virtual machines
11 shows the Proxmox nodes
12 shows the shared storages
13 shows the resource pools
14 shows the graphical representation of various statuses
15 shows the task log
The Proxmox GUI is a one-page administration control panel. This means that no matter which feature one is managing, the browser does not open a new page or leave the existing page. Menus on the admin page change depending on the feature that is being administered. For example, in the previous screenshot, the node pmxvm02 is selected, so the main menu only shows node-specific menus. When a virtual machine is selected, the menu looks like the following screenshot:

Some features of the Proxmox GUI, such as the VNC console and shell, require Java or IcedTea (http://icedtea.classpath.org/wiki/Main_Page) to be installed on the computer from which you are accessing the GUI. The GUI works great with Firefox and Google Chrome. The latest version of Internet Explorer may have issues functioning properly when not in compatibility mode. The Proxmox GUI also works with the Opera browser.
The following chart is a visual representation of the Proxmox GUI menu system. Some menu options need to be set up only once and do not need regular attention, such as DNS, Time, Services, and so on. Other menus, such as Summary, Syslog, Backup, Permissions, and so on, are used regularly to ensure a healthy cluster environment.

In this book, we will mostly look at the menu options relevant for regular maintenance of a Proxmox cluster. We will also look at some of the advanced menu options that are needed to create a complex network infrastructure, such as VLAN, bridge, and so on. The rest of the menu options are very basic in nature and are very much self-explanatory.
In the Proxmox GUI, Datacenter is the top-level folder of the Proxmox node/VM tree. Each Datacenter folder can hold only one Proxmox cluster.
To use the Search tab, navigate to GUI | Server View | Datacenter | Search Tab | Search Box.
It is very easy to manage a cluster with a small number of virtual machines with an even smaller number of Proxmox and storage nodes. When maintaining hundreds or thousands of virtual machines with several dozen Proxmox nodes in a cluster, the Search option makes it easier to find a particular virtual machine. Scrolling through a list of virtual machines to find a particular one is very time consuming. The following screenshot shows the Search option:

The search box under Datacenter | Search shows the results in real time as you type in the box. It can search for any string in the Type or Description columns, such as the partial name of a VM, a VMID, or the VM type (qemu, openvz). The preceding screenshot shows all the virtual machines that have the word ceph in the description.
To use the Storage tab, navigate to GUI | Server View | Datacenter | Storage.
The Storage tab is probably one of the most important options in the Proxmox GUI. This is where the Proxmox cluster and storage systems come together. This is the only menu through which any storage system, whether local or shared (directory, NFS, RBD, LVM, and so on), can be attached to Proxmox. The following screenshot shows the Storage tab:

As of Version 3.2, Proxmox supports the following storage types:
Directory: This is mostly local storage
LVM: These are local or shared iSCSI targets
NFS: This can be OmniOS, FreeNAS, Ubuntu, and so on
GlusterFS: Visit www.gluster.org for more information
RBD: Visit www.ceph.com for more information
In this book, we will be using NFS for basic low-end setup and RBD for advanced, distributed storage with a high level of redundancy. The following screenshot shows the storage types:

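Storage attached through this menu is recorded in the cluster-wide configuration file /etc/pve/storage.cfg. The following is a minimal sketch of what an NFS entry may look like; the server IP and export path are placeholders for your own storage server:

nfs: vm-nfs-01
        path /mnt/pve/vm-nfs-01
        server 192.168.145.3
        export /mnt/vol1/vm-nfs-01
        content images,rootdir
        options vers=3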
Cluster-wide backup schedules are created through this menu. Backup is the first line of defense against any form of cluster disaster. With a good backup plan, downtime can be minimized and valuable data can be saved. Although the Proxmox backup system cannot do a granular file backup of a virtual machine, the ability to do a full virtual machine backup is one of the strengths of Proxmox. The included backup system is one of the best in the industry, and it "just works". The backup menu can be found in the following menu system.
To use the Backup tab, navigate to GUI | Server View | Datacenter | Backup. The following screenshot shows the Backup tab:

Proxmox only allows schedule creation on a daily and weekly basis. Select the VMs to be backed up, the day of the week, and the time of day, and the backup will run on its own. The following screenshot shows the dialog box used to create a schedule:

LZO compression and Snapshot mode are the defaults in Proxmox backup. Compression can be set to None, LZO, or GZIP. In most cases, LZO works great; it compresses less, but it is fast and easy on the hardware. GZIP can compress further, but it also consumes a lot of CPU resources during backup.
The Snapshot mode allows live backup without needing to shut down the running VM, thus minimizing downtime. In an always-on network environment, downtime may not be permitted. Other modes, such as Suspend and Stop, may be used in special cases where pausing or shutting down the VM during backup is absolutely necessary to ensure data integrity.
Please note that this Snapshot mode is not the same as the Snapshots option for a virtual machine. During the full backup of a live virtual machine, an LVM snapshot is used, whereas Live Snapshots preserve the running state of a KVM-based virtual machine. Live Snapshots cannot be done for OpenVZ containers in Proxmox.
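The schedule created in this dialog box is stored in the cluster-wide file /etc/pve/vzdump.cron, and the same backup can also be triggered manually with the vzdump command. The following is a minimal sketch; VMID 101 matches our later example, while the storage name backup-nfs-01 is an illustrative placeholder:

# vzdump 101 --storage backup-nfs-01 --mode snapshot --compress lzo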
Tip
If the selected VMs are scattered over multiple nodes, it is very important to keep in mind that when the backup starts at the scheduled time, it will simultaneously create backups of VMs on multiple nodes to a single backup storage. If the backup storage is not powerful enough to handle all the incoming data from multiple Proxmox nodes, the backup process may fail.
The node-specific tabs apply to each individual node in the cluster. New menu tabs become visible when a node is selected.
To use the Summary tab, navigate to GUI | Server View | Node | Summary.
The Summary tab for a node is a visual representation of the node's health. It shows vital information, such as Uptime and Resource Consumption. As you can see in the following screenshot, the Summary screen also shows CPU usage, Server Load, Memory Usage, and Network Traffic in a very easy-to-understand graph. An administrator can get the necessary information about a node just by glancing at the summary. Summary can be viewed on hourly, daily, weekly, monthly, and yearly bases.

To use the Network tab, navigate to GUI | Server View | Node | Network.
The Network menu acts as glue between all virtual machines, nodes, and shared storage systems. Without a proper Network Interface Card (NIC) or Virtual NIC (vNIC) and a virtual bridge setup, no communication can take place. Deeper understanding of this menu will allow you to create a very complex web of clusters, nodes, and virtual machines. Due to the importance of this menu option, we will look into this menu in greater detail later in this chapter.

To use the Syslog tab, navigate to GUI | Server View | Node | Syslog.
The Syslog option allows an administrator to view the system log in real time. Syslog gives feedback as events happen on the node, and it also allows scrolling up to view past logs. More importantly, if any error occurs on the node, Syslog reports it in real time with a date and time stamp, which helps to pinpoint exactly when an issue occurred. For example, if the node cannot connect to a storage system, the Syslog screen will show the error that is preventing the connection.
The following screenshot shows the Syslog option:

To use the UBC tab, navigate to GUI | Server View | Node | UBC.
User Bean Counters (UBC) are a set of limits and guarantees that control resources per container. This is a vital component of OpenVZ container resource management. The UBC menu option in the Proxmox GUI is for viewing only; there is no option to edit any of the limits.
The UBC screen only gets populated when an OpenVZ container is selected, as shown in the following screenshot:

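The same counters can also be read from the node's command line. This is a quick sketch; /proc/user_beancounters lists every container on the node, and the failcnt column shows how many times each limit has been hit:

# cat /proc/user_beancounters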
To use the Subscription tab, navigate to GUI | Server View | Node | Subscription. Proxmox can be downloaded and used for free without any restriction on any feature. It is by no means trialware, shareware, or an n-day evaluation hypervisor. However, Proxmox also has a subscription model, which gives access to the enterprise repository. The free version of Proxmox only comes with the standard repository. The main difference between the enterprise and standard repositories is that enterprise packages go through a higher level of testing to ensure a very stable cluster environment. The following screenshot shows the Subscription tab:

Note
Keep in mind that even the free version of Proxmox is still very stable. Do not let the subscription model fool you into thinking that the free version is not even worth considering.
This level of testing is mandatory for an enterprise-class network environment where a small issue can cost a company a lot of money. Such rigor is usually not needed on a home-based platform or in a small business environment. The Subscription tab allows activating a purchased subscription on a node.
Tip
Proxmox has some of the best subscription prices in the virtualization industry. The operating cost of a Proxmox cluster is minimal compared with a giant virtualization product such as VMware. Proxmox provides big-business virtualization at a small-business cost. For details of the different subscription levels, visit http://proxmox.com/proxmox-ve/pricing.
To use the Updates tab, navigate to GUI | Server View | Node | Updates.
A Proxmox node can be updated right from the GUI through the Updates tab. Each node checks daily for available updates and alerts the administrator through e-mail if there are any. It is important to keep all nodes up to date by updating regularly. The Updates menu enables upgrading with just a few mouse clicks. The following screenshot shows the Updates tab:

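The same update can be performed over SSH using the standard Debian tools, which is handy when managing many nodes. Run the following on each node:

# apt-get update
# apt-get dist-upgrade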
Ceph is a robust and powerful distributed storage system that can be used as shared storage for a Proxmox cluster. Ceph provides the RADOS Block Device (RBD) storage backend. A Ceph storage cluster can scale out to several petabytes. Ceph is powerful enough to handle infrastructure of any size while being resilient enough to provide great storage redundancy. Understanding the true potential of Ceph, we have dedicated an entire chapter of this book to showing you how to set up a Ceph cluster using both the command line and the Proxmox GUI to build truly enterprise-class, complex virtual infrastructures.
Starting with Proxmox VE 3.2, the Ceph server is included as a technology preview. This allows both Proxmox and Ceph to coexist on the same node. Ceph itself does not come with any graphical user interface to manage Ceph storage, the exception being the subscription version of Ceph. Proxmox enables us to manage a Ceph cluster almost entirely from the Proxmox GUI. Currently, the CRUSH map cannot be edited and multiple Ceph clusters cannot be managed through the GUI. The following screenshot shows the Ceph menu tab along with Ceph-related tabs:

We will look into the Ceph tab in greater detail in Chapter 7, High Availability Storage for High Availability Cluster. The following list is a short description of Ceph-related tabs and their functions:
Status: This shows the current status or health of a Ceph storage
Config: This displays the content of the Ceph cluster configuration file, ceph.conf
Monitor: This starts, stops, creates, removes, and displays a list of Ceph Monitors (MONs)
Disks: This displays the available drives attached to the Proxmox nodes and creates new OSDs
Pools: This creates, removes, and displays the list of pools
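Most of these tabs have command-line counterparts in the pveceph utility. The following is a rough sketch of the usual order on a Proxmox 3.2 node; the network subnet and the OSD device name are placeholders for your own environment:

# pveceph install
# pveceph init --network 192.168.145.0/24
# pveceph createmon
# pveceph createosd /dev/sdb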
The following menu tabs are available when a virtual machine is selected.
The Summary menu tab presents information similar to that found by navigating to Node | Summary. Valuable information can be gathered here, showing the real-time status of a virtual machine. One additional feature of this menu is the Notes textbox. Double-clicking on the Notes box brings up a multiline textbox where an administrator can enter data such as the department, the usage the VM is intended for, or just about any other information that needs to be on hand.

An initially created and configured virtual machine sometimes needs further resource allocation. As the VM's role grows, it becomes necessary to add additional virtual drives or network interfaces. The Hardware menu tab under the virtual machine is where the adding and removing of devices happens. The following screenshot shows the Hardware tab:

Through the Add menu, additional CD drives, hard drives, and network interfaces (bridge, vNIC, and so on) can be added to a virtual machine, as shown in the following screenshot:

Each of these additions requires the virtual machine to be fully powered off; a simple restart/reboot is not enough. Ejecting an ISO image file to attach a different one does not require any VM power cycle. By adding some arguments to the virtual machine configuration file, it is possible to hot swap a virtual hard disk into a VM. This configuration is further explained in Chapter 4, A Virtual Machine for a Virtual World.
Besides the Add menu, other menus such as Remove, Edit, Resize Disk, and Move Disk are also available through the Hardware menu. All these additional menus except Add require a hardware item to be selected. Resize Disk and Move Disk will be enabled for clicking when a virtual drive is selected. We will see these in greater detail in later chapters.
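In the meantime, note that the same device additions can be made from the command line with the qm utility. A minimal sketch, assuming VMID 101 and the storage vm-nfs-01 from our basic cluster; the 10 GB size is arbitrary:

# qm set 101 --virtio1 vm-nfs-01:10
# qm set 101 --net1 e1000,bridge=vmbr0

The first command creates and attaches a new 10 GB virtio disk; the second adds a second network interface bridged to vmbr0.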
The Options menu under a virtual machine allows further tweaking, such as changing the name, the boot order, and so on. Most of the options here can be left at their defaults.
Tip
If you want the virtual machine to auto-start as soon as the Proxmox node reboots, set the Start at boot option to Yes.
The Options tab is shown in the following screenshot:

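The Start at boot option can also be set from the command line; a one-line sketch, assuming VMID 101:

# qm set 101 --onboot 1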
A good backup plan is the first line of defense against any disaster that can cause major or minor data loss. In our ultra-modern digital world, data is more valuable than ever before. Every administrator struggles with the backup strategy for his or her virtual environment. The following screenshot shows the Backup tab:

The fine line between granular file backup and entire-machine backup is somewhat diminished in a virtual environment. To take the daily struggle of backup planning out of the equation, Proxmox includes an excellent backup system right in the hypervisor itself. Although the backup system cannot back up individual files inside a virtual machine, it works well for backing up an entire virtual machine.
Note
The Proxmox backup system can only do a full backup of a virtual machine; it cannot be used to back up individual files inside the virtual machine at a granular level.
Proxmox backups can be scheduled over multiple storage systems and multiple days. A backup system is only as good as its ability to restore. Both backup and restore can be done from a single menu under the virtual machine, which also allows browsing backups and manually deleting any of them. All of this is done from a single interface with a few mouse clicks. Due to the importance of backup strategy in a virtual environment, we will look into the Proxmox backup system in much greater detail in Chapter 4, A Virtual Machine for a Virtual World.
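As a preview, a backup archive can be restored from the command line with qmrestore. A minimal sketch, where the archive path, the storage name backup-nfs-01, and the new VMID 102 are illustrative placeholders:

# qmrestore /mnt/pve/backup-nfs-01/dump/vzdump-qemu-101-<timestamp>.vma.lzo 102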
Proxmox Snapshots is a way to roll back a KVM-based virtual machine to a previous state. Although it provides protection similar to Proxmox Backup, it is extremely fast by comparison, allowing a user to take several snapshots a day. The following screenshot shows the Snapshots tab:

A common scenario where Snapshots can be used is when a user wants to install or update software. He or she can take a snapshot, run the installer, and if anything goes wrong, simply roll back to the previous state. A snapshot can include the contents of RAM, so the virtual machine resumes exactly as it was running. Live snapshots are not included in full virtual machine backups.
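Snapshots can also be taken and rolled back with the qm utility. A short sketch, assuming VMID 101; the snapshot name is arbitrary, and --vmstate 1 includes the RAM content mentioned previously:

# qm snapshot 101 preupdate --vmstate 1
# qm rollback 101 preupdate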
The Permissions menu allows the management of user permissions for a particular virtual machine. It is possible to give multiple users access to the same virtual machine. Click on Add to add users or groups to the permission. The following screenshot shows the Permissions tab:

A common scenario of permission usage is in an office setup where there is one accounting virtual machine and multiple staff need to access data. A permission can be set either at the user or the group level.
This section will help you to create a shopping list of the components that you need and provide step-by-step instructions to set up a basic Proxmox cluster. The steps in this section are presented in a much simpler form to get you up and running quickly. You can find Proxmox setup instructions in greater detail in the Proxmox wiki documentation at http://pve.proxmox.com/wiki/Installation.
To set up shared storage to be used with a Proxmox cluster, we are going to use Ubuntu or FreeNAS storage. There are options other than FreeNAS, such as OpenMediaVault, NAS4Free, GlusterFS, and DRBD, to name a few. FreeNAS is an excellent choice for shared storage due to its ZFS filesystem implementation, simplicity of installation, large active community, and lack of licensing cost. Although we have used FreeNAS in this book, you can use just about any flavor of shared storage with NFS and iSCSI support. A full FreeNAS installation guide is beyond the scope of this book.
Note
For complete setup instructions for FreeNAS, visit http://doc.freenas.org/index.php/Installing_from_CDROM.
Ubuntu is also a great choice for learning how shared storage works with Proxmox. Almost anything you can set up with FreeNAS, you can also set up in Ubuntu. The only difference is that Ubuntu has no user-friendly graphical interface like FreeNAS does. Later in the book, we will look at the ultimate shared storage solution using Ceph. But to get our first basic cluster up and running, we will use Ubuntu or FreeNAS to set up NFS and iSCSI shares.
Note
For installation instructions for Ubuntu server, visit http://www.ubuntu.com/download/server.
The following is a list of the hardware components that we will need to put together our first basic Proxmox cluster. If you already have some components you would like to use to set up your cluster, it is important to check whether they support virtualization. Not all hardware platforms support virtualization, especially older ones. To get details on how to check your components, visit http://virt-tools.org/learning/check-hardware-virt/.
A quicker way to check is to go into the BIOS and look for one of the following settings. Any one of these should be Enabled in order for the hypervisor to work:
Intel ® Virtualization Technology
Virtualization Technology (VTx)
Virtualization
Note
This list of hardware is to build a bare minimum Proxmox cluster for learning purposes only and not suitable for enterprise-class infrastructure.
Component type | Brand/model | Quantity
---|---|---
CPU/Processor | Intel i3-2120 3.30 GHz 4 Core | 2
Motherboard | Asus P8B75-M/CSM | 2
RAM | Kingston 8 GB 1600 MHz DDR3 240-Pin Non-ECC | 3
HDD | Seagate Momentus 250 GB 2.5" SATA | 2
USB stick | Patriot Memory 4 GB | 1
Power supply | 300+ Watt | 3
LAN switch | Netgear GS108NA 8-Port Gigabit Switch | 1
Download the software given in the following table in the ISO format from their respective URLs, and then create a CD from the ISO images.

Software | Download link
---|---
Proxmox VE | 
FreeNAS | 
Ubuntu Server | 
ClearOS Community | 
The next diagram is a network diagram of a basic Proxmox cluster. We will start with a two-node cluster and one shared storage node set up with either Ubuntu or FreeNAS. The setup in the illustration is a guideline only. Depending on your level of experience, budget, and the hardware available on hand, you can set it up any way you see fit. Regardless of the setup you use, it should meet the following requirements:
Two Proxmox nodes with two Network Interface Cards
One shared storage with NFS and iSCSI connectivity
One physical firewall
One 8+ port physical switch
One KVM virtual machine
One OpenVZ/container machine
This book is intended for the beyond-beginner-level user; therefore, full instructions for the hardware assembly process are not detailed here. After connecting all the equipment together, it should resemble the following diagram:

Perform the following simple steps to install Proxmox VE on Proxmox nodes:
Assemble all three nodes with proper components, and connect all of them with a LAN switch.
Power up the first node and access BIOS to make necessary changes such as enabling virtualization.
Boot the node from the Proxmox installation disc.
Follow along the Proxmox graphical installation process. Enter the IP address 192.168.145.1, or any other subnet you wish, when prompted. Also enter pmxvm01.domain.com or any other hostname that you choose to use.
Perform steps 3 and 4 for the second node. Use the IP address 192.168.145.2 or any other subnet, and use pmxvm02.domain.com as the hostname or any other hostname.
We are now going to create a Proxmox cluster with the two Proxmox nodes we just installed. From the admin PC (Linux or Windows), log in to Proxmox node #1 (pmxvm01) through a secure login. If the admin PC is Windows based, use a program such as PuTTY to remotely log in to the Proxmox node.
Note
Download PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html.
Linux users use the following command to securely log in to Proxmox node:
# ssh root@192.168.145.1
After logging in, it is now time to create our cluster. The command to create a Proxmox cluster is as follows:
# pvecm create <cluster_name>
This command can be executed on any of the Proxmox nodes but only once.
Tip
Never run a cluster creation command on more than one node in the same cluster. The cluster creation process must be completed on one node before adding nodes to the cluster.
The cluster does not operate on a master/slave basis, but on quorum. In order to achieve a healthy cluster status, all nodes need to be online. Let's execute the following command to create the cluster:
root@pmxvm01:~# pvecm create pmx-cluster
The preceding command will display the following messages on the screen as it creates a new cluster and activates it:
Restarting pve cluster filesystem: pve-cluster[dcdb] notice: wrote new cluster config '/etc/cluster/cluster.conf'
Starting cluster:
   Checking if cluster has been disabled at boot... [ OK ]
   Checking Network Manager... [ OK ]
   Global setup... [ OK ]
   Loading kernel modules... [ OK ]
   Mounting configfs... [ OK ]
   Starting cman... [ OK ]
   Waiting for quorum... [ OK ]
   Starting fenced... [ OK ]
   Starting dlm_controld... [ OK ]
   Tuning DLM kernel config... [ OK ]
   Unfencing self... [ OK ]
root@pmxvm01:~#
After cluster creation is complete, check its status by using the following command:
root@pmxvm01:~# pvecm status
The preceding command will display the following output:
Version: 6.2.0
Config Version: 1
Cluster Name: pmx-cluster
Cluster ID: 23732
Cluster Member: Yes
Cluster Generation: 4
Membership status: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Quorum: 1
Active subsystem: 5
Flags:
Ports Bound: 0
Node name: pmxvm01
Node ID: 1
Multicast addresses: 239.192.92.17
Node addresses: 192.168.145.1
root@pmxvm01:~#
The status shows the vital information needed to see how the cluster is doing and which other nodes are members of the cluster. Although we can visually check cluster health from the Proxmox GUI, the command line gives a somewhat more in-depth picture.
After the cluster has been created, the next step is to add Proxmox nodes into the cluster. Securely log in to the other node and run the following command:
root@pmxvm02:~# pvecm add 192.168.145.1
Verify that this node is now joined with the cluster with the following command:
root@pmxvm02:~# pvecm nodes
It should print the following list of nodes that are members of the cluster we have just created:
Node  Sts   Inc   Joined               Name
   1   M   4900   2014-01-26 16:02:34  pmxvm01
   2   M   4774   2014-01-26 16:12:19  pmxvm02
The next step is to log in to Proxmox Web GUI to see the cluster and attach shared storage. Use the URL in the following format from a browser on the admin computer to access the Proxmox graphical user interface:
https://<ip_proxmox_node>:8006
The cluster should look similar to the following screenshot:

On a cleanly installed Proxmox node, the paid subscription-based repository is enabled by default. When you log in to the Proxmox GUI, the following message will pop up after you enter the login information:

If you want to continue using Proxmox without a subscription, perform the following steps to disable the enterprise repository and enable the subscription-less repository. This needs to be done on all Proxmox nodes in the cluster.
Run the following command:
# nano /etc/apt/sources.list.d/pve-enterprise.list
Comment out the enterprise repository as follows:
#deb https://enterprise.proxmox.com/debian wheezy pve-enterprise
Run the following command:
# nano /etc/apt/sources.list
Add a subscription-less repository as follows:
deb http://download.proxmox.com/debian wheezy pve-no-subscription
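After editing both files, refresh the package lists so that the subscription-less repository takes effect:

# apt-get update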
A Proxmox cluster can function with local storage just fine. But shared storage has many advantages over local storage, especially when we throw migration and disaster-related downtime into the mix. Live migration of a powered-on virtual machine is not possible without shared storage. We will start our journey into Proxmox with NFS/iSCSI shared storage, using Ubuntu or FreeNAS.
Tip
Pause reading right here, and set up the third node with Ubuntu or FreeNAS. Both the Ubuntu and FreeNAS websites have complete instructions to get you up and running.
After you have set up the shared storage server of your choice, attach the storage to Proxmox by navigating to Datacenter | Storage. There should be three shares, as shown in the following table:
Share ID | Share type | Content | Purpose
---|---|---|---
ISO-nfs-01 | NFS | ISO, templates | To store ISO images
vm-nfs-01 | NFS | Image, containers | To store VM images in the qcow2 format
nas-lvm-01 | iSCSI | Image | To store raw VM images
After setting up both the NFS and iSCSI shares, the Proxmox GUI should look like the following screenshot:

With our cluster up and running, it is time to add some virtual machines to it. Click on Create VM to start a KVM virtual machine creation process. The option window to create a virtual machine looks like the following screenshot:

The virtual machine we are going to create will act as the main server for the rest of the virtual machines in the cluster. It will provide services such as DHCP, DNS, and so on. You can use any Linux flavor you are familiar with to set up the DHCP/DNS server. The ClearOS Community edition is a great choice since it allows putting all the services in one machine, and it actually works very well.
Note
ClearOS is an open source, server-in-a-box Linux distribution, which means it can pull the weight of multiple servers/services in one setup. ClearOS is a Linux replacement for Windows Small Business Server. Learn more details and download it from http://www.clearfoundation.com/Software/overview.html.
Before creating a KVM-based virtual machine from scratch, we have to upload an ISO image of an operating system into Proxmox. This also applies to any ISO image we want a user to have access to, such as an ISO image of the installation disk for Microsoft Office or any other software. Not all storage types can store ISO images. As of this writing, only local Proxmox storage, NFS, CephFS, and GlusterFS can be used to store ISO images. To upload an ISO image, perform the following steps:
Select proper storage from the Datacenter or Storage view on the Proxmox GUI.
Click on the Content tab.
Click on the Upload button to open the upload dialog box as shown in the following screenshot:
Click on the Select File… button to select the ISO image, and then click on the Upload button. After uploading, the ISO will show up on the content page as shown in the following screenshot:
Since the upload happens through the browser, a timeout error may occur while uploading a large ISO file. In such cases, use a client program such as FileZilla to upload the ISO image. The usual Proxmox directory path to upload an ISO file is /mnt/pve/<storage_name>/template/iso.
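Any SCP/SFTP-capable tool can use that path. For example, from a Linux admin PC, an upload to the ISO-nfs-01 storage on node pmxvm01 may look like the following; the ISO filename is illustrative:

# scp ClearOS-community.iso root@192.168.145.1:/mnt/pve/ISO-nfs-01/template/iso/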
After the ISO image is in place, we can proceed with KVM virtual machine creation using the configuration in the following table:
VM creation tab | Specification | Selection
---|---|---
General | Node | pmxvm01
 | Virtual machine ID | 101
 | Virtual machine name | pmxMS01
OS | Linux/other OS types | Linux 3.x/2.6 Kernel
CD/DVD | Use CD/DVD disc image file | ClearOS 6 Community
Hard disk | Bus/device | virtio
 | Storage | vm-nfs-01
 | Disk size (GB) | 25
 | Format | QEMU image (qcow2)
CPU | Sockets | 1
 | Cores | 1
 | Type | Default (kvm64)
Memory | Automatically allocate memory within range | Max. 1024 MB / Min. 512 MB
Network | Bridge | vmbr0
 | Model | Intel E1000 / VirtIO
After the main server is set up, we are now going to create a second virtual machine with Ubuntu as the operating system. Proxmox has a cloning feature, which saves a lot of time when deploying VMs with the same operating system and configuration. We will use the Ubuntu virtual machine as the template for all Linux-based VMs throughout this book. Create the Ubuntu VM with the following configuration:
VM creation tab | Specification | Selection
---|---|---
General | Node | pmxvm01
 | Virtual machine ID | 201
 | Virtual machine name | template-Ubuntu
OS | Linux/other OS types | Linux 3.x/2.6 Kernel
CD/DVD | Use CD/DVD disc image file | Ubuntu server ISO
Hard disk | Bus/device | virtio
 | Storage | vm-nfs-01
 | Disk size (GB) | 30
 | Format | QEMU image (qcow2)
CPU | Sockets | 1
 | Cores | 1
 | Type | Default (kvm64)
Memory | Automatically allocate memory within range | Max. 1024 MB / Min. 512 MB
Network | Bridge | vmbr0
 | Model | Intel E1000 / VirtIO
Now we will create one OpenVZ container virtual machine. OpenVZ is container-based virtualization for Linux where all containers share the base host operating system. At this moment, only Linux OpenVZ containers are possible; there are no Windows-based containers. Although OpenVZ containers act as independent virtual machines, they rely heavily on the underlying Linux kernel of the hypervisor, and all containers in a cluster share the same kernel of the same version. The biggest advantage of the OpenVZ container is soft memory allocation, where memory not used by one container can be used by others. Since each container does not carry its own full copy of the operating system, container backups are much smaller than those of KVM-based virtual machines. OpenVZ is a great option for environments such as web hosting providers, where many instances run simultaneously to host client sites.
Tip
Go to http://openvz.org/Main_Page for more details on OpenVZ.
Unlike a KVM virtual machine, OpenVZ containers cannot be installed from an ISO image. Proxmox uses templates to create OpenVZ containers and comes with the very nice feature of a template repository. At the time of this writing, the repository has close to 400 templates ready to download through the Proxmox GUI.
Templates can also be user-created with specific configurations. Creating your own template can be a difficult task and usually requires extensive knowledge of the operating system. To take the difficulty out of the equation, Proxmox provides an excellent script called Debian Appliance Builder (DAB) to create OpenVZ templates. Consult the DAB documentation on the Proxmox wiki before undertaking OpenVZ template creation.
From the Proxmox GUI, click on the Templates button as shown in the following screenshot to open the built-in template browser dialog box and to download templates:

For our OpenVZ virtual machine lesson, we will be using the Ubuntu 12.04 template under Section: system as shown in the following screenshot:

Create the OpenVZ container using the following specifications:
OpenVZ creation tab | Specification | Selection
---|---|---
General | Node | pmxvm01
 | Virtual machine ID | 121
 | Virtual machine hostname | ubuntuCT-01
 | Storage | vm-nfs-01
 | Password | any
Template | Storage | ISO-nfs-01
 | Template | ubuntu-12.04-standard
Resources | Memory | 1024 MB
 | Swap | 512 MB
 | Disk size (GB) | 30
 | CPUs | 1
Network | Bridged mode | vmbr0
Tip
OpenVZ containers cannot be cloned for mass deployment. If such mass deployment is required, the container can be backed up and restored with different VM IDs as many times as required, as sketched in the following example.
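A rough sketch of this from the command line, where the backup storage backup-nfs-01, the archive name, and the new container ID 122 are illustrative placeholders:

# vzdump 121 --storage backup-nfs-01 --compress lzo
# vzrestore /mnt/pve/backup-nfs-01/dump/vzdump-openvz-121-<timestamp>.tar.lzo 122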
With all three of the virtual machines created, the Proxmox cluster GUI should look like the following screenshot:

One of the great features of Proxmox is the ability to clone a virtual machine for mass deployment. It saves an enormous amount of time while deploying virtual machines with similar operating systems.
It is entirely possible to clone a virtual machine without ever creating a template. The main advantage of creating a template is virtual machine organization within the cluster.
A template has a distinct icon, as seen in the following screenshot, which easily distinguishes it from a standard virtual machine. Just create a VM with the desired configuration, and then, with a single mouse click, turn the VM into a template. Whenever a new VM is required, just clone the template.

For a small cluster with a few virtual machines, this is not an issue. But in an enterprise cluster with hundreds, if not thousands, of virtual machines, finding the right template can become a tedious task. Right-clicking on a VM pulls up a context menu, which shows the options related to that VM.
Menu item | Function
---|---
Start | Starts the virtual machine.
Migrate | Allows online/offline migration of the virtual machine.
Shutdown | Safely powers down the virtual machine.
Stop | Powers down the virtual machine immediately. Might cause data loss. Similar to holding down the Power button for 6 seconds on a physical machine.
Clone | Clones the virtual machine.
Convert to Template | Transforms a virtual machine into a template for cloning. Templates themselves cannot be used as regular virtual machines.
Console | Opens the console of the virtual machine.
Let's turn the Ubuntu virtual machine we created in the Creating a KVM virtual machine section into a template. Perform the following steps:
Right-click on a virtual machine to open the context menu.
Click on Convert to template as shown in the following screenshot. This will convert the VM into a template that can be used to clone an unlimited number of virtual machines. Keep in mind that a template itself cannot be used as a virtual machine, but it can be migrated to a different host just like one.
The template is now ready for cloning. Right-clicking on the template will open up the context menu, which will have only two menu options: Migrate and Clone. Click on Clone to open the template cloning option window as shown in the following screenshot:

The most important option to notice in this menu is the Mode option. A clone can be created from a template using either Full Clone or Linked Clone.
The following is a comparison of the features of Full Clone and Linked Clone:

Mode | Features
---|---
Full Clone | A complete, independent copy of the template. It consumes as much storage as the original image, but it does not depend on the template and offers better performance.
Linked Clone | A copy that references the template's base image and stores only the changes. It saves a significant amount of storage space, but the template must remain intact for the clone to keep working.
From the previous table, we can see that both Full Clone and Linked Clone have pros and cons. One rule of thumb is that if performance is the main focus, go with Full Clone. If storage space conservation is the focus, then go with Linked Clone.
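Cloning can also be scripted with the qm utility. A minimal sketch using our template VMID 201; the new VMID 105 and the name are illustrative, and omitting --full produces a linked clone on storage that supports it:

# qm clone 201 105 --name web01 --full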
Proxmox migration allows a VM or OpenVZ container to be moved to another Proxmox node in both offline and online modes. The most common scenario for VM migration is when a Proxmox node needs a reboot due to a major kernel update. Without the migration option, each reboot would be very difficult for an administrator, as all the running VMs would have to be stopped before the reboot, causing major downtime in a mission-critical virtual environment.
With the migration option, a running VM can be moved to another node without any downtime. During live migration, the VM does not experience any major slowdown. After the node reboots, simply migrate the VMs back to the original node. Any offline VM can also be moved with ease.

Proxmox takes a very minimalistic approach to the migration process. Just select the destination node and the online/offline checkbox, then hit the Migrate button to start the migration. Depending on the size of the virtual drive and the allocated memory of the VM, the total migration time can vary.
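The same action is available from the command line. A one-line sketch that live-migrates VMID 101 to the second node, assuming shared storage as in our setup:

# qm migrate 101 pmxvm02 --online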
In this chapter, we saw what a basic Proxmox cluster looks like and went through the setup process of Proxmox nodes. We took a closer look at the Proxmox GUI, where we will spend almost all of our virtual infrastructure administrative life. We also set up a basic-level Proxmox cluster, which will serve as a foundation for the rest of the book and help us gain knowledge of the inner workings of Proxmox.
We created virtual machines in our cluster and learned how cloning and templates can save an enormous amount of time. We attached shared storage to our cluster using FreeNAS. FreeNAS is an excellent open source choice for all Network Attached Storage (NAS) needs. It supports NFS, CIFS, AFP, iSCSI, FTP, TFTP, RSYNC, ZFS, and many more storage-related features.
There is a lot of information on Proxmox available at the official wiki page at https://pve.proxmox.com/wiki/Main_Page.
With the introductory chapter out of the way, in the next chapter we will take a look under the hood of the Proxmox hypervisor. We will see how the Proxmox folder structure is laid out to hold some of the important files that make Proxmox run so effectively. Most importantly, we will go deeper into some of the configuration files and see their functions line by line. In order to build a complex, enterprise-class Proxmox cluster, it is important to be quite familiar with these configurations. A Proxmox cluster can be tweaked and tailored beyond the GUI through these files.
For more information, you can visit http://www.masteringproxmox.com/. You can use this forum to discuss Proxmox and related topics.