If we want to create virtual machines in a Linux environment that has no graphical environment, we can launch them from the command line using XML templates.
This article explains how to deploy a KVM / libvirt test environment from the command line, which is also what a deployment performed with Ansible's libvirt modules does internally.
Install Qemu-KVM and Libvirt
First, we must install libvirt and Qemu-KVM. On Ubuntu / Debian they are installed with:
$ sudo apt-get install -y qemu-kvm libvirt-daemon-system python-libvirt python-lxml
And on CentOS / Red Hat with:
$ sudo yum install -y libvirt-daemon-kvm python-lxml
To enable and start the service we run:
$ sudo systemctl enable libvirtd && sudo systemctl start libvirtd
Configure a network template
Libvirt provides a powerful tool for managing virtual machines called 'virsh', which we will use to manage KVM virtual machines from the command line.
For a virtual machine we mainly need three elements. The first is a network configuration that provides IPs to the virtual machines via DHCP. For this, libvirt needs an XML template like the following (which we will name "net.xml"):
<network>
  <name>NETWORK_NAME</name>
  <forward mode='nat'>
    <nat>
      <port start='1' end='65535'/>
    </nat>
  </forward>
  <bridge name='BRIDGE_NAME' stp='on' delay='0'/>
  <ip address='IP_HOST' netmask='NETWORK_MASK'>
    <dhcp>
      <range start='BEGIN_DHCP_RANGE' end='END_DHCP_RANGE'/>
    </dhcp>
  </ip>
</network>
Whose main elements are:
- NETWORK_NAME: Descriptive name that we are going to use to designate the network, for example, “test_net” or “production_net”.
- BRIDGE_NAME: Each network creates an interface on the host server that acts as the gateway for traffic between that network and the outside. Here we assign a descriptive name that lets us identify the interface.
- IP_HOST: The IP that such interface will have on the host server and that will be the gateway of the virtual machines.
- NETWORK_MASK: Depends on the network; for testing, a class C mask (255.255.255.0) is usually used.
- BEGIN_DHCP_RANGE: libvirt assigns IPs to virtual machines through an internal DHCP server (based on dnsmasq); here we define the first IP of the range that can be served to virtual machines.
- END_DHCP_RANGE: And here we define the last IP that virtual machines can obtain.
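As an illustration, substituting example values into the placeholders yields a complete net.xml; the network name, bridge name and addresses below are invented for the example:

```xml
<network>
  <name>test_net</name>
  <forward mode='nat'>
    <nat>
      <port start='1' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr-test' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
```

With this configuration the host reaches the guests at 192.168.100.1 and the guests obtain addresses from 192.168.100.2 to 192.168.100.254.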
Preparing the operating system image
The second element is the virtual machine image. This image can be created from scratch or downloaded; the latter is recommended to reduce deployment time. One source of images for KVM / libvirt virtual machines is Vagrant (https://app.vagrantup.com/boxes/search?provider=libvirt). To obtain the image that interests us, we download it from https://app.vagrantup.com/APP_NAME/boxes/APP_TAG/versions/APP_VERSION/providers/libvirt.box, where APP_NAME is the name of the application we want to download (e.g. debian), APP_TAG is the distribution tag of this application (e.g. stretch64) and APP_VERSION is the version of the application (e.g. 9.9.0). There are also distribution-specific image repositories, such as the CentOS one at http://cloud.centos.org/centos/7/vagrant/x86_64/images.
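The download URL can be assembled from those three values. The sketch below uses the debian / stretch64 / 9.9.0 example from the text; the wget line is commented out so the block only prints the URL:

```shell
# Build the Vagrant Cloud download URL from its three components.
APP_NAME=debian        # distribution name
APP_TAG=stretch64      # box tag / variant
APP_VERSION=9.9.0      # box version
URL="https://app.vagrantup.com/${APP_NAME}/boxes/${APP_TAG}/versions/${APP_VERSION}/providers/libvirt.box"
echo "$URL"
# wget -O libvirt.box "$URL"
```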
Once the libvirt.box file is obtained, it must be unpacked with tar -zxf libvirt.box (note -x, extract), which produces three files, one of which is the image of the virtual machine (box.img). We rename it to NAME_DESCRIPTIVE.qcow2, copy it to the standard folder for libvirt virtual machine images (/var/lib/libvirt/images), and give the libvirt user ownership so that it can manage the image (chown libvirt /var/lib/libvirt/images/NAME_DESCRIPTIVE.qcow2).
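The unpack-and-rename sequence can be sketched as follows. To keep the example safe to run, it fabricates a dummy libvirt.box in a scratch directory instead of using a real downloaded box, and the privileged install steps are left commented; "debian9" is an example image name:

```shell
# Create a dummy box so the example is self-contained
# (in real use, libvirt.box is the file downloaded from Vagrant Cloud).
mkdir -p box_demo && cd box_demo
echo "fake disk image" > box.img
tar -zcf libvirt.box box.img && rm box.img

# The actual steps from the article:
tar -zxf libvirt.box                 # -x extracts; box.img is among the files
mv box.img debian9.qcow2             # NAME_DESCRIPTIVE.qcow2
# sudo mv debian9.qcow2 /var/lib/libvirt/images/
# sudo chown libvirt /var/lib/libvirt/images/debian9.qcow2
ls debian9.qcow2
```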
It is important to know that to access the virtual machine we will need the Vagrant insecure SSH private key, which we can download with
wget -O insecure_private_key https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant, and we must restrict its permissions so that only the current user can read it (
$ chmod 600 insecure_private_key).
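SSH refuses to use a private key that other users can read, which is why the chmod 600 step matters. A quick way to verify the permission bits, shown here on a placeholder file rather than the real key:

```shell
# Placeholder file standing in for insecure_private_key.
touch demo_key
chmod 600 demo_key
stat -c %a demo_key    # prints the octal permissions; should be 600
```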
Virtual Machine Template
The third element, once the previous two are ready, is the virtual machine template (which we will name "vm.xml"); for a recent libvirt release it has the following form:
<domain type='kvm'>
  <name>VM_NAME</name>
  <memory unit='MB'>VM_MEMORY</memory>
  <vcpu>CPU_NUMBER</vcpu>
  <os>
    <type>hvm</type>
    <bootmenu enable='no'/>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='IMAGE_STORAGE_PATH/IMAGE_NAME.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='NETWORK_NAME'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
    </video>
    <memballoon model='virtio'/>
  </devices>
</domain>
Whose elements are:
- VM_NAME: The descriptive name that will allow us to identify the virtual machine
- VM_MEMORY: How much memory in MB we allocate to the virtual machine
- CPU_NUMBER: How many virtual CPUs will the virtual machine have
- IMAGE_STORAGE_PATH: The path where we have saved the virtual machine image; the standard is /var/lib/libvirt/images
- IMAGE_NAME: A descriptive name we have given to the image in the previous stage
- NETWORK_NAME: The descriptive name we gave to the network that will serve the virtual machines.
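One convenient way to fill in these placeholders is sed. The sketch below substitutes example values (test_vm, 1024 MB, 1 CPU, all invented for the example) into a minimal stand-in template rather than the full one above:

```shell
# Minimal stand-in for the full VM template (placeholders only).
cat > mini_template.xml <<'EOF'
<domain type='kvm'><name>VM_NAME</name><memory unit='MB'>VM_MEMORY</memory><vcpu>CPU_NUMBER</vcpu></domain>
EOF

# Substitute example values; add more -e expressions for the remaining placeholders.
sed -e 's/VM_NAME/test_vm/' \
    -e 's/VM_MEMORY/1024/' \
    -e 's/CPU_NUMBER/1/' mini_template.xml > test_vm.xml
cat test_vm.xml
```

The same approach works on the full template, adding one -e expression per placeholder (IMAGE_STORAGE_PATH, IMAGE_NAME, NETWORK_NAME).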
Deploying Virtual Machines
Once the templates are created and the image is in place, we execute:
$ sudo virsh net-create net.xml
$ sudo virsh create vm.xml
Finally we must check that everything works. First, verify that the virtual machine is running with
$ sudo virsh list
Second, obtain the IP it was assigned with
$ sudo virsh net-dhcp-leases NETWORK_NAME
And finally, access the assigned IP with
$ ssh -i insecure_private_key vagrant@IP