A Virtual Machine for a Virtual World


Creating a VM from a template

Let us start by creating our second virtual machine from the Ubuntu template. Right-click on the template and select Clone, as shown in the following screenshot:

Use the settings shown in the following screenshot for the new virtual machine. You can also use any virtual machine name you like. A VM name can only be alphanumeric without any special characters.

You can also use any other VM you have already created in your own virtual environment. Access the virtual machine through the Proxmox console after cloning and set up network connectivity, such as the IP address and hostname. For our Ubuntu virtual machine, we are going to edit /etc/network/interfaces, /etc/hostname, and /etc/hosts.
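As a sketch of that step, the following shows the kind of contents those three files end up with. The IP address, gateway, and hostname used here are assumptions for illustration, and the files are written locally as *.example files (rather than to /etc/) so they can be previewed safely:

```shell
# Preview of the guest network settings (address/gateway/hostname are assumptions).
# On the real VM, write these to /etc/network/interfaces, /etc/hostname,
# and /etc/hosts respectively.
cat <<'EOF' > interfaces.example
auto eth0
iface eth0 inet static
    address 192.168.1.102
    netmask 255.255.255.0
    gateway 192.168.1.1
EOF

echo "pmxUB02" > hostname.example

cat <<'EOF' > hosts.example
127.0.0.1       localhost
192.168.1.102   pmxUB02
EOF

grep "address" interfaces.example
```

After editing the real files inside the guest, restart networking or reboot the VM for the changes to take effect.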

Advanced configuration options for a VM

We will now look at some of the advanced configuration options we can use to extend the capability of a KVM virtual machine.

The hotplugging option for a VM

Although it is not a very common occurrence, a virtual machine can unexpectedly run out of storage, whether due to overprovisioning or improper storage requirement planning. On a physical server with hot-swap bays, we can simply add a new hard drive, partition it, and be up and running. Now imagine a situation where you have to add a virtual network interface to a VM right away but cannot afford to shut the VM down to add the vNIC. The hotplug option covers both cases: it allows adding virtual disks as well as virtual network interfaces without shutting down the VM.

Proxmox virtual machines do not support hotplugging by default. A few extra steps need to be followed in order to enable hotplugging for devices such as virtual disks and virtual network interfaces. Without the hotplug option, the virtual machine needs to be completely powered off and then powered on after adding a new virtual disk or virtual interface; simply rebooting the virtual machine will not activate the newly added virtual device. In Proxmox 3.2 and later, the hotplug option is not shown in the Proxmox GUI. It has to be enabled through the CLI by adding options to the <vmid>.conf file. Enabling the hotplug option for a virtual machine is a three-step process:

  1. Shut down VM and add the hotplug option into the <vmid>.conf file.
  2. Power up VM and then load modules that will initiate the actual hotplugging.
  3. Add a virtual disk or virtual interface to be hotplugged into the virtual machine.

The hotplugging option for <vmid>.conf

Shut down the cloned virtual machine we created earlier. Securely log in to the Proxmox node (or use the console in the Proxmox GUI) and open the configuration file with the following command:

# nano /etc/pve/nodes/<node_name>/qemu-server/102.conf

With default options added during the virtual machine creation process, the following code is what the VM configuration file looks like:

balloon: 512
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
kvm: 0
memory: 1024
name: pmxUB01
net0: e1000=56:63:C0:AC:5F:9D,bridge=vmbr0
ostype: l26
sockets: 1
virtio0: vm-nfs-01:102/vm-102-disk-1.qcow2,format=qcow2,size=32G

Now, at the bottom of the 102.conf configuration file located under /etc/pve/nodes/<node_name>/qemu-server/, we will add the following option to enable hotplugging in the virtual machine:

hotplug: 1

Save the configuration file and power up the virtual machine.

Loading modules

After the hotplug option is added and the virtual machine is powered up, it is now time to load two modules into the virtual machine, which will allow hotplugging a virtual disk anytime without rebooting the VM. Securely log in to VM or use the Proxmox GUI console to get into the command prompt of the VM. Then, run the following commands to load the acpiphp and pci_hotplug modules. Do not load these modules to the Proxmox node itself:

# sudo modprobe acpiphp
# sudo modprobe pci_hotplug


The acpiphp and pci_hotplug modules are two hot plug drivers for the Linux operating system. These drivers allow addition of a virtual disk image or virtual network interface card without shutting down the Linux-based virtual machine.

The modules can also be loaded automatically during the virtual machine boot by inserting them in /etc/modules. Simply add acpiphp and pci_hotplug on two separate lines in /etc/modules.
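That step can be scripted. The sketch below appends each module name only if it is not already present; it targets a local copy named modules.example so it is safe to run anywhere, while on the real guest you would point MODULES_FILE at /etc/modules:

```shell
# Idempotently add the hotplug modules to the modules file.
MODULES_FILE=modules.example          # on the real VM: /etc/modules
: > "$MODULES_FILE"                   # start from an empty example file
for m in acpiphp pci_hotplug; do
    # append only if the exact module name is not already listed
    grep -qx "$m" "$MODULES_FILE" || echo "$m" >> "$MODULES_FILE"
done
cat "$MODULES_FILE"
```

Running the loop a second time leaves the file unchanged, so the snippet can be re-run without creating duplicate entries.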

Adding virtual disk/vNIC

After loading both the acpiphp and pci_hotplug modules, all that remains is adding a new virtual disk or virtual network interface in the virtual machine through a web GUI. On adding a new disk image, check that the virtual machine operating system recognizes the new disk through the following command:

# sudo fdisk -l

For a virtual network interface, simply add a new virtual interface from a web GUI and the operating system will automatically recognize a new vNIC. After adding the interface, check that the vNIC is recognized through the following command:

# sudo ifconfig -a

Please note that while the hotplugging option works great with Linux-based virtual machines, it is somewhat problematic on Windows XP/7-based VMs. Hotplug works well with both the 32- and 64-bit versions of Windows Server 2003/2008/2012 VMs. The best practice for a Windows XP/7-based virtual machine is to simply power cycle the virtual machine to activate newly added virtual disk images; forcing such a Windows VM through hotplugging can cause an unstable operating environment. This is a limitation of KVM itself.

Nested virtual environment

In simple terms, a virtual environment inside another virtual environment is known as a nested virtual environment. If the hardware resources permit, a nested virtual environment can open up whole new possibilities for a company. The most common scenario for a nested virtual environment is to set up a fully isolated test environment to test software such as hypervisors, or operating system updates/patches, before applying them in a live environment.

A nested environment can also be used as a training platform to teach computer and network virtualization, where students can set up their own virtual environments from the ground up without breaking the main system. This eliminates the high cost of hardware for each student or test environment. When an isolated test platform is needed, it is just a matter of cloning some real virtual machines and giving access to authorized users. A nested virtual environment has the potential to give the network administrator an edge in the real world by cutting costs and getting things done with limited resources.

One very important thing to keep in mind is that a nested virtual environment will have significantly lower performance than a real virtual environment. A nested virtual environment usually also contains virtualized storage to provide storage for the nested virtual machines, which allows a fully isolated nested environment with its own subnet and virtual firewall; however, virtualizing the storage as well degrades performance further. The loss of performance can be partially offset by building the nested environment on an SSD storage backend.

There are many debates about the viability of a nested virtual environment, and both pros and cons can be argued equally. In the end, it comes down to the administrator's grasp of his or her existing virtual environment and a good understanding of the requirements. Nesting allowed us to build a fully functional Proxmox cluster from the ground up without using additional hardware. The following screenshot is a side-by-side representation of a nested virtual environment scenario:

In the previous comparison, on the right-hand side we have our basic cluster we have been building so far. On the left-hand side we have the actual physical nodes and virtual machines used to create the nested virtual environment.

Our nested cluster is completely isolated from the rest of the physical cluster with a separate subnet. Internet connectivity is provided to the nested environment by using a virtualized firewall 1001-scce-fw-01.

Like the hotplugging option, nesting is also not enabled in a Proxmox cluster by default. Enabling nesting will allow nested virtual machines to have KVM hardware virtualization, which increases their performance. To enable KVM hardware virtualization, we have to edit the /etc/modules file of the physical Proxmox node and the <vmid>.conf file of the virtual machine. We can see that the option is disabled for our cloned nested virtual machine in the following screenshot:

Enabling KVM hardware virtualization

KVM hardware virtualization can be added just by performing the following few additional steps:

  1. In each Proxmox node, add the following line to the /etc/modules file (on Intel CPUs, use kvm-intel nested=1 instead):

    kvm-amd nested=1

  2. Migrate or shut down all virtual machines of Proxmox nodes and then reboot.
  3. After the Proxmox nodes reboot, add the following argument in the <vmid>.conf file of the virtual machines used to create a nested virtual environment:

    args: -enable-nesting

  4. Enable KVM hardware virtualization from the virtual machine option menu through GUI. Restart the nested virtual machine.
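A quick way to confirm the result of these steps is to look for the CPU virtualization flag from inside the nested guest. The check below is a generic sketch for a Linux guest; separately, on the Proxmox node itself, `cat /sys/module/kvm_amd/parameters/nested` (or the kvm_intel equivalent) should print 1 after the reboot:

```shell
# Inside the nested guest: does the virtual CPU expose svm (AMD) or vmx (Intel)?
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
case "$flags" in
    *svm*|*vmx*) result="hardware virtualization visible" ;;
    *)           result="no svm/vmx flag - nesting not active" ;;
esac
echo "$result"
```

If the flag is missing, re-check that the args line was added to the guest's <vmid>.conf and that KVM hardware virtualization was enabled in the GUI before the VM was restarted.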

Network virtualization

Network virtualization is a software approach to setting up and maintaining a network without physical hardware. Proxmox has great features to virtualize the network for both real and nested virtual environments. By using virtualized networking, management becomes simpler and centralized. Since there is no physical hardware to deal with, network capacity can be extended at a minute's notice. The use of a virtualized network is especially prominent in a nested virtual environment, and a good grasp of the Proxmox network features is required to set one up successfully. With the introduction of Open vSwitch (www.openvswitch.org) in Proxmox 3.2 and later, network virtualization is now much more efficient.

Backing up a virtual machine

A good backup strategy is the last line of defense against disasters such as hardware failure, environmental damage, accidental deletion, and misconfiguration. In a virtual environment, a backup strategy can turn into a daunting task because of the number of machines that need to be backed up. In a busy production environment, virtual machines may be created and discarded at any time, and without a proper backup plan the entire backup task can go out of control. Gone are the days when we only had a few physical servers to deal with and backing them up was an easy task. Today's backup solutions have to deal with several dozen or possibly several hundred virtual machines.

Depending on the requirements, an administrator may have to back up all the virtual machines regularly instead of just the files inside them. Backing up entire virtual machines takes up a very large amount of space over time, depending on how many previous backups are kept. A granular file backup helps to quickly restore just the file needed, but it is a poor choice if the virtual server is damaged to the point of being inaccessible. Here, we will see the different backup options available in Proxmox, along with their advantages and disadvantages.

Proxmox backup and snapshot options

Proxmox has the following two backup options:

  • Full backup: This backs up the entire virtual machine.
  • Snapshot: This only creates a snapshot image of the virtual machine.

Proxmox 3.2 and above can only do a full backup and cannot do any granular file backup from inside a virtual machine. Proxmox also does not use any backup agent.

Backing up a VM with a full backup

All full backups are in the .tar format, containing both the configuration file and the virtual disk image file. The TAR file is all you need to restore the virtual machine on any node and on any storage. Full backups can also be scheduled on a daily and weekly basis. Full virtual backup files are named based on the following format:


The following screenshot shows what a typical list of virtual machine backups looks like:

Proxmox 3.2 and above cannot do full backups on LVM and Ceph RBD storage. Full backups can only occur on local, Ceph FS, and NFS-based storages, which are defined as backup during storage creation. Please note that Ceph FS and RBD are not the same type of storage even though they both coexist on the same Ceph cluster. The following screenshot shows the storage feature through the Proxmox GUI with backup-enabled attached storages:

The backup menu in Proxmox is a true example of simplicity. With only three choices to select, it is as easy as it can get. The following screenshot is an example of a Proxmox backup menu. Just select the backup storage, backup mode, and compression type and that's it:

Creating a schedule for Backup

Schedules can be created from the virtual machine backup option. We will see each option box in detail in the following sections. The options are shown in the following screenshot:


Node

By default, a backup job applies to all nodes. If you want to apply the backup job to a particular node, then select it here. With a node selected, the backup job will be restricted to that node only: if a virtual machine on node 1 was selected for backup and later moved to node 2, it will not be backed up, since only node 1 was selected for this backup task.


Storage

Select a backup storage destination where all full backups will be stored. Typically, an NFS server is used for backup storage. NFS servers are easy to set up and do not require a lot of upfront investment due to their low performance requirements; backup servers can be much leaner than computing nodes since they do not have to run any virtual machines. Backups are supported on local, NFS, and Ceph FS storage systems. Ceph FS storages are mounted locally on Proxmox nodes and selected as a local directory. Both Ceph FS and RBD can coexist on the same Ceph cluster.

Day of Week

Select which day or days the backup task applies to. Days are selectable from a drop-down menu. If the backup task should run daily, then select all the days from the list.

Start Time

Unlike Day of Week, only one time slot can be selected; backing up at multiple times of the day through a single task is not possible. If the backup must run multiple times a day, create a separate task for each time slot.

Selection mode

The All selection mode will select all the virtual machines within the whole Proxmox cluster. The Exclude selected VMs mode will back up all VMs except the ones selected. Include selected VMs will back up only the ones selected.

Send email to

Enter a valid e-mail address here so that the Proxmox backup task can send an e-mail upon completion or if any issue occurred during backup. The e-mail includes the entire log of the backup task. It is highly recommended to enter an e-mail address here so that an administrator or backup operator receives the backup feedback, which reveals both any issues during backup and how long the task actually took, helping to spot performance problems. The following screenshot is a sample of a typical e-mail received after a backup task:


Compression

By default, the LZO compression method is selected. LZO (http://en.wikipedia.org/wiki/Lempel–Ziv–Oberhumer) is based on a lossless data compression algorithm designed with decompression speed in mind; it is capable of fast compression and even faster decompression. GZIP will create smaller backup files at the cost of high CPU usage to achieve a higher compression ratio, which makes it a slower backup process. Do not select the None compression option, since it will create very large backup files without compression. With the None method, a 200 GB RAW disk image with 50 GB used will produce a 200 GB backup image, whereas with compression turned on, the backup image size will be around 70-80 GB.


Mode

Typically, all running virtual machine backups occur with the Snapshot option. Do not confuse this Snapshot mode with Live Snapshots of a VM: the Snapshot mode allows live backup while the virtual machine is turned on, whereas a Live Snapshot captures the state of the virtual machine at a certain point in time. With the Suspend or Stop mode, the backup task will try to suspend the running virtual machine or forcefully stop it prior to commencing the full backup. After the backup is done, Proxmox will resume or power up the VM. Since Suspend only freezes the VM during backup, it has less downtime than the Stop mode because the VM does not need to go through the entire reboot cycle. Both the Suspend and Stop modes can be used for VMs that can tolerate partial or full downtime without disrupting regular infrastructure operation, while the Snapshot mode is used for VMs whose downtime would have a significant impact.

Creating snapshots

Snapshots are a great way to preserve the state of a virtual machine. Creating one is much faster than a full backup since it does not copy all the data. A snapshot is not really a backup in the strict sense and does not perform granular-level backup; it captures the state at a point in time and allows rollback to that previous state. Snapshots are a great feature to use in between full backups. A Proxmox snapshot can even capture the memory of the virtual machine, so when rolled back, it is almost as if nothing ever changed. The following screenshot shows our virtual machine pmxUB01 without any snapshots:

The actual snapshot creation process is very straightforward. Just enter a name, select whether to include the RAM content, and type in a description. The Name textbox does not allow any spaces, and the name must start with a letter, as shown in the following screenshot:

Keep in mind that when you select to include RAM in the snapshot, the bigger the RAM allocation is for the virtual machine, the longer it will take to create a snapshot. But it is still much faster than full backup. The Snapshot feature is only available for KVM virtual machines and not for OpenVZ containers.

Now, we have our very first snapshot of a virtual machine. If we want to go back to the snapshot image, just select the snapshot we want to go back to and click on Rollback. The following screenshot shows the Snapshots tab:

Rollback will erase all the changes that happened to the virtual machine between the time of rolling back and the snapshot being rolled back to. When full backup occurs on a virtual machine with snapshots, Proxmox only backs up the main configuration file and disk image file. It does not include any snapshot images.

Proxmox Snapshot does not offer any scheduling option. All snapshots are taken through a manual process. A lot of feature requests have been made to make snapshot scheduling available. Hopefully, in the future versions this feature will be included. In an environment with several dozen virtual machines, manual snapshots can become a time-consuming task. It is possible to set up snapshot scheduling by using bash, cron, and qm, but it is known to be somewhat unstable and, therefore, not recommended for production environment.

Snapshots are also a great way to test new software or configuration on a virtual machine. Take a snapshot before installing a software or applying new configuration. After the software test or if the configuration does not work as intended, simply roll back to the previous snapshot. This is much faster and cleaner instead of uninstalling the tested software itself.

Deleting old backups

Depending on the backup strategy and business requirement, there may be a need to keep certain periods of backup history. Proxmox allows both automatic and manual deletion of any backups outside the required history range. The following screenshot shows the storage edit dialog box where we can set the maximum number of backups we want to keep. This maximum backup value is for each virtual machine backup:

We can enter any number between 0 and 365 as Max Backups. For example, our NFS storage has a Max Backups value of 3, which means that during a full backup, Proxmox will keep the three newest backups of each virtual machine and delete anything beyond that. If we did a daily backup, we could potentially keep up to one year's worth of backups at any given time; if we backed up every other day, it would be two years' worth. It is possible to create separate shared storages for daily, weekly, monthly, and yearly backups. As long as the schedules are created just right, Proxmox will do the rest.
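The pruning behavior can be illustrated with a small simulation. The file names below follow a vzdump-style pattern but are created empty, purely for demonstration:

```shell
# Simulate five daily backups of VM 102, then keep only the 3 newest
# (this mirrors what a Max Backups value of 3 does during a backup run).
mkdir -p dump.example
for d in 01 02 03 04 05; do
    touch "dump.example/vzdump-qemu-102-2014_05_${d}-01_00_00.tar"
done
# sort by name (dates are zero-padded, so lexical order == chronological order),
# drop the last 3 (the newest), and delete the rest
ls dump.example | sort | head -n -3 \
    | sed 's|^|dump.example/|' | xargs rm
ls dump.example | wc -l    # 3 backups remain
```

The same idea scales to each VM ID independently, which is why the Max Backups value applies per virtual machine rather than per storage.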

For higher backup redundancy and assurance, we can attach multiple shared storages to store backups. In the example shown in the following screenshot, there are four separate shared storages attached for daily, weekly, monthly, and yearly backups. There are also two backup tasks scheduled: one for daily backup, which keeps six days of backups, and a second for weekly backup, which keeps four backups. The following screenshot shows the Backup tab:

Since Proxmox does not have any option to back up monthly or yearly, the only option we have is to modify the second backup task in the previous screenshot and change the destination storage appropriately based on monthly or yearly backup needs. The reason to separate the storages for weekly or monthly backups is so that the wrong backups do not get deleted automatically. If we store all backups under one share and set the Max Backups value of that storage, then, regardless of when the backups were made, older backups beyond the Max Backups range will get deleted. The following screenshot shows the configuration for daily backup on the host set up to be the daily backup node:

The following screenshot shows the configuration for weekly backup on the host set up to be the weekly backup node:

Having multiple physical hosts to store backups separately provides a very high level of redundancy and assurance, since not all backups are kept on one backup node. The monthly or yearly backup node can even be taken offsite when not in use and brought back once a month or once a year for its backup run. Although Proxmox cannot perform granular file backup inside a VM or offer detailed scheduling options, its backup feature is still powerful and simple enough to support Proxmox clusters of all sizes.

Restoring a virtual machine

To keep up with the simplicity theme, the Proxmox Restore option also features a very simple interface. We just need to select which virtual machine we want to restore, the destination storage, and virtual machine ID, as shown in the following screenshot:

If the same VM ID is kept, then the existing virtual machine with that ID will be deleted and restored from the backup version. One important thing to remember is that a full backup created for a virtual machine with a qcow2 or vmdk image format can only be restored to directory-based storages such as local, Ceph FS, or NFS, since RBD and LVM storages do not support the qcow2 or vmdk image types. A virtual machine with the RAW image format, however, can be restored on just about any storage system. There is no restoration for snapshot images; snapshots can only be rolled back to a previous state.

Command-line vzdump

The entire backup process can be handled from the command line in case the GUI becomes inaccessible. The command to start a backup is as follows:

# vzdump <vmid> <options>

There is a long list of vzdump options that can be used with the command. The following are a few of the most commonly used ones:

-all: Default is 0. This option backs up all the available virtual machines in a Proxmox node.

-bwlimit: Adjusts the backup bandwidth in KBPS.

-compress: Default is lzo. Sets the compression type or disables compression. The available options are 0, 1, gzip, and lzo.

-mailto: E-mail address to send the backup report to.

-maxfiles: Integer number. Sets the maximum number of backup files to be kept.

-mode: Default is stop. Sets the backup mode. The available options are snapshot, stop, and suspend.

-remove: Default is 1. Removes older backups if there are more than the value entered in -maxfiles.
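Putting a few of these options together, a typical invocation might look like the following. The VM ID, storage name, and e-mail address are assumptions, and the command is only echoed here rather than executed:

```shell
# Hypothetical snapshot-mode backup of VM 102 with LZO compression,
# keeping 3 backup files and e-mailing the report.
cmd="vzdump 102 -mode snapshot -compress lzo -maxfiles 3 \
-storage nfs-backup01 -mailto admin@example.com"
echo "$cmd"
```

Run on a Proxmox node (without the echo), this would produce the same kind of backup as the GUI task, which makes it handy inside cron jobs or maintenance scripts.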

There are two commands available to restore KVM and OpenVZ virtual machines. They are as follows:

  • For KVM machines, the command is as follows:

    #qmrestore <backup_file> <vmid> <options>

    -force: 0 or 1. This option allows overwriting the existing VM. Use this option with caution.

    -unique: 0 or 1. Assigns a unique random Ethernet address to the virtual network interface.

  • For OpenVZ machines, the command is as follows:

    #vzrestore <backup_file> <vmid> <options>

    -force: 0 or 1. This option allows overwriting the existing VM. Use this option with caution.

    -unique: 0 or 1. Assigns a unique random Ethernet address to the virtual network interface.

For a complete list of options for vzdump, qmrestore, and vzrestore, consult their man pages on a Proxmox node (man vzdump, man qmrestore, and man vzrestore).
Backup configuration file – vzdump.conf

The vzdump.conf file in Proxmox allows advanced backup configuration to go beyond just the default. For example, if we want to limit the backup speed so that the backup task does not consume all of the available network bandwidth, we can limit it with the #bwlimit option. In Proxmox Version 3.2 and above, the vzdump.conf file cannot be edited from GUI. It has to be done from CLI using an editor. The following code is the default vzdump.conf file on a new Proxmox cluster:

# vzdump default settings

#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST

All the options are commented out by default in the file because Proxmox has a set of default options already encoded in the operating system. Changing the vzdump.conf file overrides the default settings and allows us to customize Proxmox backups.


bwlimit

The most common option to edit in vzdump.conf is the backup speed limit. This is usually adjusted when backups are stored remotely and the backup traffic shares an interface with VM production traffic, which can saturate it. The value is expressed in KB/s; for example, to limit backup bandwidth to 200 MB/s, make the following adjustment:

bwlimit: 200000
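Since the value is expressed in KB/s, a small helper makes the conversion from a target MB/s figure explicit (the 200 MB/s target is just an example):

```shell
# Convert a desired backup ceiling in MB/s to the KB/s value
# that vzdump.conf's bwlimit option expects.
mb_to_kb() { echo $(( $1 * 1000 )); }
mb_to_kb 200    # prints 200000, i.e. bwlimit: 200000
```

Pick a ceiling below the capacity of the slowest link between the node and the backup storage, so that production traffic still has headroom during a backup window.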


lockwait

The Proxmox backup uses a global lock file to prevent multiple instances from running simultaneously, as more instances put extra load on the server. The default lock wait in Proxmox is 180 minutes. Depending on the virtual environment and the number of virtual machines, the lock wait time may need to be increased. If the limit needs to be 10 hours, or 600 minutes, adjust the option as follows:

lockwait: 600

The lock prevents the VM from migrating or shutting down while the backup task is running. Any backup interruptions such as failback storage, I/O bottleneck, and so on can cause the VM to remain locked. In such cases, the VM needs to be unlocked with the following command from CLI:

# qm unlock <vmid>


stopwait

The stopwait value is the maximum time, in minutes, that the backup will wait for a VM to stop. A use case is a VM that takes much longer to shut down, for example, an Exchange or database server. If the VM is not stopped within the allocated time, the backup is skipped for that VM.


script

It is possible to create backup scripts and hook them into a backup task. Such a script is a set of instructions that is called at various phases of the backup task to accomplish backup-related work, such as starting/stopping a backup or shutting down/suspending a VM. We can add a customized script as follows:

script: /etc/pve/script/my-script.pl

The following is a sample publicly available hook script for a backup task, courtesy of Dietmar Maurer from the Proxmox staff. It is shown here to point you in the right direction:

#!/usr/bin/perl -w
# example hook script for vzdump (--script option)

use strict;

print "HOOK: " . join (' ', @ARGV) . "\n";

my $phase = shift;

if ($phase eq 'job-start' || $phase eq 'job-end' || $phase eq 'job-abort') {
    my $dumpdir = $ENV{DUMPDIR};
    my $storeid = $ENV{STOREID};
    print "HOOK-ENV: dumpdir=$dumpdir;storeid=$storeid\n";
    # do what you want
} elsif ($phase eq 'backup-start' || $phase eq 'backup-end' ||
         $phase eq 'backup-abort' || $phase eq 'log-end' ||
         $phase eq 'pre-stop' || $phase eq 'pre-restart') {
    my $mode = shift; # stop/suspend/snapshot
    my $vmid = shift;
    my $vmtype = $ENV{VMTYPE}; # openvz/qemu
    my $dumpdir = $ENV{DUMPDIR};
    my $storeid = $ENV{STOREID};
    my $hostname = $ENV{HOSTNAME};
    # tarfile is only available in phase 'backup-end'
    my $tarfile = $ENV{TARFILE};
    # logfile is only available in phase 'log-end'
    my $logfile = $ENV{LOGFILE};
    print "HOOK-ENV: vmtype=$vmtype;dumpdir=$dumpdir;storeid=$storeid;" .
          "hostname=$hostname;tarfile=$tarfile;logfile=$logfile\n";
    # example: copy resulting backup file to another host using scp
    if ($phase eq 'backup-end') {
        #system ("scp $tarfile backup-host:/backup-dir") == 0 ||
        #    die "copy tar file to backup-host failed";
    }
    # example: copy resulting log file to another host using scp
    if ($phase eq 'log-end') {
        #system ("scp $logfile backup-host:/backup-dir") == 0 ||
        #    die "copy log file to backup-host failed";
    }
} else {
    die "got unknown phase '$phase'";
}

exit (0);


exclude-path

To exclude certain folders from a backup, use the exclude-path option. All paths must be entered on one line without breaks. Please keep in mind that this option is only for OpenVZ containers:

exclude-path: "/log/.+" "/var/cache/.+"

The previous example will exclude all the files and directories under /log and /var/cache. To manually exclude other directories from being backed up, simply use the following format:

exclude-path: "/<directory_tree>/.+"


In this article, we covered some interesting topics, such as the nested virtual environment, the hotplug feature, and Proxmox backup/restore. A good backup plan can save the day many times over. Although Proxmox does not provide everything you might need, such as granular file backup, its ability to back up entire virtual machines is very helpful. The backup features of the Proxmox platform have proven to be reliable in production environments and during actual disaster scenarios.


You've been reading an excerpt of:

Mastering Proxmox
