Linux virtual machine with KVM from the command line

If we want to run virtual machines in a Linux environment that has no graphical environment, we can create them from the command line using XML templates.

This article explains how to create KVM virtual machines from the command line with libvirt and, afterwards, how a test environment deployed with KVM and Ansible works internally.

Install Qemu-KVM and Libvirt


First we must install libvirt and Qemu-KVM. In Ubuntu / Debian it is installed with:

$ sudo apt-get install -y libvirt-daemon-system python-libvirt python-lxml

And in CentOS / Redhat with:

$ sudo yum install -y libvirt-daemon-kvm python-lxml

To launch the service we must run:

$ sudo systemctl enable libvirtd && sudo systemctl start libvirtd

Configure a network template

Libvirt provides us with a powerful tool for managing virtual machines called ‘virsh’, which we will use to manage KVM virtual machines from the command line.

For a virtual machine we mainly need three elements. The first is a network configuration that provides IPs to the virtual machines via DHCP. For this, libvirt needs an XML template like the following (which we will name “net.xml”):

<network>
  <name>NETWORK_NAME</name>
  <forward mode='nat'>
    <nat>
      <port start='1' end='65535'/>
    </nat>
  </forward>
  <bridge name='BRIDGE_NAME' stp='on' delay='0'/>
  <ip address='IP_HOST' netmask='NETWORK_MASK'>
    <dhcp>
      <range start='BEGIN_DHCP_RANGE' end='END_DHCP_RANGE'/>
    </dhcp>
  </ip>
</network>

Whose main elements are:

  • NETWORK_NAME: Descriptive name that we are going to use to designate the network, for example, “test_net” or “production_net”.
  • BRIDGE_NAME: Each network creates an interface on the host server that will serve as gateway for the input/output packets of that network to the outside. Here we assign a descriptive name that lets us identify the interface.
  • IP_HOST: The IP that this interface will have on the host server, which will be the gateway of the virtual machines.
  • NETWORK_MASK: Depends on the network; for testing, a class C mask (255.255.255.0) is usually enough.
  • BEGIN_DHCP_RANGE: To assign IPs to virtual machines, libvirt runs an internal DHCP server (based on dnsmasq); here we define the first IP of the range that can be served to virtual machines.
  • END_DHCP_RANGE: And here we define the last IP that virtual machines can obtain.
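As an illustration, the template might be filled in like this; every concrete value (network name, bridge name, addressing) is hypothetical and should be adapted to your host:

```xml
<network>
  <name>test_net</name>
  <forward mode='nat'>
    <nat>
      <port start='1' end='65535'/>
    </nat>
  </forward>
  <!-- virbr-test is an arbitrary example bridge name -->
  <bridge name='virbr-test' stp='on' delay='0'/>
  <!-- 192.168.100.1 becomes the host-side gateway for the guests -->
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
```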

Preparing the operating system image

The second element is the virtual machine image. This image can be created or downloaded; the second option is recommended to reduce deployment time. An image source for virtual machines with KVM / libvirt is Vagrant: to obtain the image that interests us, we download it from a URL in which APP_NAME is the name of the application we want (e.g. debian), APP_TAG is the distribution of this application (e.g. stretch64) and APP_VERSION is its version (e.g. 9.9.0). There are also distribution-specific image repositories, such as the one CentOS provides.

Once the file is obtained, it must be decompressed with tar -xzf (note: -x to extract, not -c, which would create an archive), which produces three files. One of them is the image of the virtual machine (box.img); we will rename it to NAME_DESCRIPTIVE.qcow2, copy it to the standard folder for libvirt virtual machine images (/var/lib/libvirt/images) and give the libvirt user permission to manage it (chown libvirt /var/lib/libvirt/images/NAME_DESCRIPTIVE.qcow2).

It is important to know that to access the virtual machine we will need Vagrant’s insecure private key, which we can download with wget -O insecure_private_key, and we must restrict its permissions so that only the current user can read it ($ chmod 600 insecure_private_key).
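The unpack-and-stage steps above can be sketched as follows. So that the block is safe to run, it works against a scratch directory with a stand-in .box file; in practice IMAGES_DIR would be /var/lib/libvirt/images and the chown to the libvirt user would also be needed:

```shell
set -e
WORK=$(mktemp -d)
IMAGES_DIR="$WORK/images"        # stands in for /var/lib/libvirt/images
mkdir -p "$IMAGES_DIR"

# Stand-in for the downloaded Vagrant .box (a gzipped tar containing box.img):
mkdir -p "$WORK/box"
: > "$WORK/box/box.img"
tar -czf "$WORK/debian.box" -C "$WORK/box" box.img

# The actual steps from the text: extract (-x, not -c), rename, set permissions
tar -xzf "$WORK/debian.box" -C "$WORK"
mv "$WORK/box.img" "$IMAGES_DIR/NAME_DESCRIPTIVE.qcow2"
chmod 644 "$IMAGES_DIR/NAME_DESCRIPTIVE.qcow2"
ls "$IMAGES_DIR"
```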

Virtual Machine Template

The third element, once the previous two are ready, is the virtual machine template itself, which for a recent libvirt release has the following form:

<domain type='kvm'>
  <name>VM_NAME</name>
  <memory unit='MB'>VM_MEMORY</memory>
  <vcpu>CPU_NUMBER</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <bootmenu enable='no'/>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='IMAGE_STORAGE_PATH/IMAGE_NAME.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='NETWORK_NAME'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' port='5900' autoport='yes' listen=''>
      <listen type='address' address=''/>
      <image compression='off'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
    </video>
    <memballoon model='virtio'/>
  </devices>
</domain>

Whose elements are:

  • VM_NAME: The descriptive name that will allow us to identify the virtual machine
  • VM_MEMORY: How much memory in MB we allocate to the virtual machine
  • CPU_NUMBER: How many virtual CPUs will the virtual machine have
  • IMAGE_STORAGE_PATH: The path where we have saved the virtual machine image; the standard is /var/lib/libvirt/images
  • IMAGE_NAME: The descriptive name we gave to the image in the previous stage
  • NETWORK_NAME: The descriptive name we gave to the network that will serve the virtual machines.

Deploying Virtual Machines

Once the templates are created and the image is deployed, we execute:

# sudo virsh net-create net.xml
# sudo virsh create virtual_machine.xml

Finally we must check that everything works: first we verify that the virtual machines are running with # sudo virsh list, then we obtain the IP in use with # sudo virsh net-dhcp-leases NETWORK_NAME, and finally we access the assigned IP with # ssh -i insecure_private_key vagrant@IP
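To script the last step, the IP can be pulled out of the lease table. The lease line below is illustrative sample output (timestamp, MAC and address are made up), not what your host will print:

```shell
# Sample of what 'sudo virsh net-dhcp-leases NETWORK_NAME' prints (made-up values):
LEASES='Expiry Time          MAC address        Protocol  IP address         Hostname
---------------------------------------------------------------------------------
2019-07-01 12:00:00  52:54:00:aa:bb:cc  ipv4      192.168.100.10/24  debian'

# On the lease lines the address (with /prefix) sits in the 5th column:
IP=$(printf '%s\n' "$LEASES" | awk '$4 == "ipv4" {print $5}' | cut -d/ -f1)
echo "$IP"   # would then be used as: ssh -i insecure_private_key vagrant@"$IP"
```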

KVM, Ansible and how to deploy a test environment

In local development environments there is always a need to simulate more powerful environments, as usually happens when preparing demos.

For this we’ll always follow the “KISS” philosophy (keep it simple, stupid!) and we will choose services so that our Linux requires as few resources as possible. We’ll need two tools to simplify the work: Ansible for deployment and KVM as hypervisor.

Images for the test environment

The first step is to set up a source that provides us with images in the simplest way possible. We’ll find that Vagrant is a wonderful source of images. We have two ways to use it:

  1. Download the Vagrant package and install it (with sudo dpkg -i vagrant_VERSION_x86_64.deb in Debian / Ubuntu environments or with sudo rpm -i vagrant_VERSION_x86_64.rpm in RHEL / CentOS environments). To get an image as small as possible we will use a Debian 9.9.0 with the following command:
    $ vagrant box add --provider libvirt debian/stretch64
    ==> box: Loading metadata for box 'debian/stretch64'
    box: URL:
    ==> box: Adding box 'debian/stretch64' (v9.9.0) for provider: libvirt
    box: Downloading:
    box: Download redirected to host:
    ==> box: Successfully added box 'debian/stretch64' (v9.9.0) for 'libvirt'!
    The downloaded image will be in ~/.vagrant.d/boxes/debian-VAGRANTSLASH-stretch64/9.9.0/libvirt, in the form of three files; the one that interests us is box.img, an image in QCOW format.

  2. Directly download the images that we will use, for example a CentOS image and a Debian image.
    To make deployment easier, Ansible has been configured to download the image automatically, save it in /root/.images and use it directly, without the need to do anything else.

The next thing we need is to download the Ansible tasks that will allow us to launch our test environment. The “package” is built around a really important file called “inventory.yml”, where we define what our demo will look like, plus a “creation” and a “destruction” playbook for the virtualized environment. The rest of the files are variables and tasks that are executed when necessary to bring up our environment. We proceed to download the environment with:

$ git clone
$ cd disasterproject/ansible

Inside the “ansible” directory we’ll find a “files” directory that contains Vagrant’s insecure private key, which will let us access each of the machines that we deploy. This key comes from the Vagrant project; we proceed to change the permissions so that the key is accepted by SSH:

$ chmod 600 files/insecure_private_key

Download Ansible

In order to run the Ansible tasks, we must install Ansible following the instructions in the Ansible installation guide; for example, for Debian you have to run the following:

$ sudo sh -c 'echo "deb trusty main" >>/etc/apt/sources.list'
$ sudo apt-key adv --keyserver --recv-keys 93C4A3FD7BB9C367
$ sudo apt-get update && sudo apt-get install -y ansible 

Installing libvirt and KVM

Next we install the rest of the packages that we will need; in Debian they are:

$ sudo apt-get install -y libvirt-daemon-system python-libvirt python-lxml

And in CentOS:

$ sudo yum install -y libvirt-daemon-kvm python-lxml

To start the service we’ll execute $ sudo systemctl enable libvirtd && sudo systemctl start libvirtd

Create an inventory for the environment

In the next step we will edit the “inventory.yml” file, which should have a format like this:

all:
  children:
    vms:                        # grouping keys are illustrative; only the
      hosts:                    # indented values below appear in the text
        debian:
          memory: 1024
          vcpus: 1
          vm_ip: ""
          linux_flavor: "debian"
      vars:
        network: "192.168.8"

We can see the definition of the machine(s) (name: debian, memory in MB, number of vcpus, IP and Linux flavor; for now only debian or centos) and a series of global values, such as a domain and the network without the last octet (the network will be a class C, with the typical /24 mask, sufficient for a demo environment). {{network}}.1 is the IP acting as gateway for all the virtual machines that we’ll deploy; the IPs of the virtual machines are configured via DHCP and must belong to that network. Both {{network}}.1 and the range {{network}}.240/28 are reserved and can’t be used for virtual machines.
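With the sample value network: "192.168.8", the reservation rules above work out as follows (a trivial sketch that only makes the text’s rules explicit; the network value itself is the example from the inventory):

```shell
NETWORK="192.168.8"            # value taken from inventory.yml

echo "gateway (reserved):   ${NETWORK}.1"
echo "dhcp-usable range:    ${NETWORK}.2 - ${NETWORK}.239"
echo "reserved block (/28): ${NETWORK}.240 - ${NETWORK}.255"
```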

Deploy the virtual machines

The next step is to launch the VMs with Ansible; for this we execute:

$ ansible-playbook -i inventory.yml create.yml --ask-become-pass

During the run, Ansible shows all the steps it performs to deploy the virtual machine.

When the execution finishes correctly, the virtual machine(s) are started and ready to use; in our example we can log in to the VM at its assigned IP with:

$ ssh -i files/insecure_private_key vagrant@

The vagrant user has sudo rights, so we can manage the virtual machine without problems.

Comparing KVM and VirtualBox. Why KVM?

Finally, some conclusions on the performance of the KVM and VirtualBox hypervisors. Although it is true that the VirtualBox console is convenient, the performance of this virtualization solution suffers under heavy disk and/or CPU access; even though the latest version (6.x) has clearly improved this, it remains behind KVM and is not recommended for development or demos. More information is available on openbenchmarking.

Here are some results from iozone (/usr/bin/iozone -s24m -r64 -i 0 -i 1 -+u -t 1), which show better performance for the KVM VM in 3 of the 4 tests performed against the same VM under VirtualBox:

Test (avg throughput per process)    KVM (kB/sec)    VirtualBox (kB/sec)
Throughput for 1 initial writers       922105.38           712577.69
Throughput for 1 rewriters            1097535.38          1244981.12
Throughput for 1 readers              2971712.50          1833079.75
Throughput for 1 re-readers           2219869.75           559970.25
