JustKernel

Ray Of Hope

qemu-kvm and virtio

By itself, QEMU uses a software virtualization approach and emulates guest instructions in software. This approach has a negative impact on speed.

But nowadays processors provide hardware virtualization support (Intel VT-x / AMD-V), and QEMU utilizes this hardware support through KVM. KVM is a group of kernel modules (kvm.ko plus kvm_intel.ko or kvm_amd.ko) that allows guest code to be executed directly on the host processor. QEMU uses KVM to reap the benefits of hardware virtualization.
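
To make the QEMU/KVM relationship concrete, here is a minimal, hypothetical userspace sketch of how a program such as QEMU talks to the kvm.ko module through the /dev/kvm device and its ioctl interface:

    /* Minimal illustration: query the KVM API version and create an empty VM.
     * This is roughly the first thing QEMU does when KVM acceleration is enabled. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);      /* fails if kvm.ko is not loaded */
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0); /* a bare VM, no vCPUs or memory yet */
        if (vmfd < 0)
            perror("KVM_CREATE_VM");
        return 0;
    }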

Installation of qemu-kvm: I did this on my Fedora 17 32-bit box.

-> yum install @virtualization (yum groupinfo virtualization shows which packages the group pulls in)

-> systemctl start libvirtd (on Fedora, the libvirt packages provide the libvirtd daemon and a nice GUI, virt-manager, to manage your VMs; internally it wraps QEMU and issues the appropriate QEMU command for each operation)

Then, to start the VM manager and add a new VM, run this command: "# virt-manager".

To build QEMU from source:

1) git clone git://git.qemu-project.org/qemu.git

2) I needed the following additional packages: yum -y install dtc-devel, yum -y install pixman-devel, yum -y install glib2-devel (the GLib 2.x development headers).

3) ./configure, then make and make install.

Command to launch a VM from the command line:

    sudo /usr/bin/qemu-system-i386 \
        --enable-kvm \
        -m 1024 \
        -drive file=/var/lib/libvirt/images/win7.img \
        -chardev stdio,id=mon0 \
        -mon chardev=mon0

Remote kernel debugging using QEMU.

1) Set up two Windows VMs, and enable serial-port debugging on the target VM.

2) Launch the host (debugger) VM with the following command. The -serial tcp::4445,server,nowait option makes QEMU expose this VM's serial port as a TCP server waiting for a connection on port 4445:

    sudo /usr/bin/qemu-system-i386 \
        --enable-kvm \
        -m 1024 \
        -drive file=/var/lib/libvirt/images/win7.img \
        -chardev stdio,id=mon0 \
        -mon chardev=mon0 \
        -serial tcp::4445,server,nowait

3) Launch the target VM (with kernel debugging enabled) so that its serial port connects to that TCP port:

    /usr/bin/qemu-system-i386 \
        --enable-kvm \
        -m 1024 \
        -drive file=/var/lib/libvirt/images/xp.img \
        -chardev stdio,id=mon0 \
        -mon chardev=mon0 \
        -serial tcp:127.0.0.1:4445

4) You can easily do this with virt-manager as well: add a new serial device of type TCP and fill in the configuration.

Virtio and why to use it

Frontend drivers are implemented in the guest, and backend drivers are implemented in the hypervisor. Virtio serves as the communication channel between the frontend and backend drivers, and it implements this channel by way of queues (virtqueues): the frontend driver adds packets/data to a virtio queue, and the backend consumes that data.

How is this different from the traditional approach? In the traditional approach we present emulated hardware to the guest. The guest sees the emulated hardware as real hardware, so when, for example, the guest network driver makes a transmit (xmit) request, the driver programs the emulated device, a trap into the hypervisor occurs, and the call is then routed to the real hardware.

But if we use virtio, there is no need to emulate hardware. Instead, the modified (paravirtualized) guest driver routes the call directly to the hypervisor via virtio, which is much more efficient.

For example, take the network and storage code paths:

On the guest Windows side: ParaNDIS_DPCWorkBody -> ParaNDIS_ProcessTx -> ParaNDISDoSubmitPackets -> add_buf() // DPC-based path: adds buffers to be read directly by the hypervisor.

On the guest Windows side: ParaNDIS5_SendPackets -> ParaNDIS_ProcessTx -> ParaNDISDoSubmitPackets -> add_buf() // interrupt-based path: adds buffers to be read directly by the hypervisor.

On the Linux/QEMU host side:
virtio-net.c: virtio_net_flush_tx() handles the transmission of packets. It works on the tx (and rx) virtqueues shared with the guest's frontend driver and performs the actual transmission of the data.

And on the guest Windows storage (viostor) side: virtio_stor_hw_helper.c: RhelDoReadWrite() -> add_buf(): adds request buffers to be read directly by the hypervisor.

Virtio details

When running a virtual machine, the virtual environment has to present devices to the guest OS – disks and network being the main two (plus video, USB, timers, and others). Effectively, this is the hardware that the VM guest sees.

Now, if the guest is to be kept entirely ignorant of the fact that it is virtualised, this means that the host must emulate some kind of real hardware. This is quite slow (particularly for network devices), and is the major cause of reduced performance in virtual machines.

However, if you are willing to let the guest OS know that it’s in a virtual environment, it is possible to avoid the overheads of emulating much of the real hardware, and use a far more direct path to handle devices inside the VM. This approach is called paravirtualisation. In this case, the guest OS needs a particular driver installed which talks to the paravirtual device. Under Linux, this interface has been standardised, and is referred to as the “virtio” interface.

Accessing your virtio device

In the guest OS, you will need the modules virtio-blk and virtio-pci loaded. This should then create devices /dev/vda, /dev/vdb, etc., one for each -drive option with if=virtio that you specified on the QEMU command line. These devices can be treated just like any other hard disk: they can be partitioned, formatted, and filesystems mounted on them. Note that at the moment, there doesn't seem to be any support for booting off them, so you will need at least one non-virtio device in the VM.

virtio buffers

Guest (front-end) drivers communicate with hypervisor (back-end) drivers through buffers. For an I/O, the guest provides one or more buffers representing the request. For example, you could provide three buffers, with the first representing a Read request and the subsequent two buffers representing the response data. Internally, this configuration is represented as a scatter-gather list (with each entry in the list representing an address and a length).
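
As a hypothetical sketch of that idea, a Linux guest driver could describe such a three-buffer read request with the kernel's scatterlist helpers; the my_req/my_resp structures below are invented purely for illustration:

    #include <linux/scatterlist.h>
    #include <linux/types.h>

    /* Hypothetical request/response layout, for illustration only. */
    struct my_req  { u8 type; u64 sector; };
    struct my_resp { u8 status; };

    /* Describe one read as three buffers: a header the host reads ("out"),
     * plus a data buffer and a status byte the host writes back ("in"). */
    static void describe_read_request(struct scatterlist sg[3],
                                      struct scatterlist *sgs[3],
                                      struct my_req *req, void *data, size_t len,
                                      struct my_resp *resp)
    {
        sg_init_one(&sg[0], req, sizeof(*req));   /* out: the read request   */
        sg_init_one(&sg[1], data, len);           /* in: filled by the host  */
        sg_init_one(&sg[2], resp, sizeof(*resp)); /* in: completion status   */
        sgs[0] = &sg[0];
        sgs[1] = &sg[1];
        sgs[2] = &sg[2];
    }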

Core API

Linking the guest driver and hypervisor driver occurs through the virtio_device and most commonly through virtqueues. The virtqueue supports its own API consisting of five functions. You use the first function, add_buf, to provide a request to the hypervisor. This request is in the form of the scatter-gather list discussed previously. To add_buf, the guest provides the virtqueue to which the request is to be enqueued, the scatter-gather list (an array of addresses and lengths), the number of buffers that serve as out entries (destined for the underlying hypervisor), and the number of in entries (for which the hypervisor will store data and return to the guest). When a request has been made to the hypervisor through add_buf, the guest can notify the hypervisor of the new request using the kick function. For best performance, the guest should load as many buffers as possible onto the virtqueue before notifying through kick.
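
A minimal sketch of that submit path, assuming the scatter-gather entries from the previous sketch. The add_buf and kick names above correspond to virtqueue_add_sgs() and virtqueue_kick() in current Linux kernels, which are used here:

    #include <linux/gfp.h>
    #include <linux/virtio.h>

    /* Hypothetical submit path: enqueue one request and notify the hypervisor. */
    static int submit_request(struct virtqueue *vq, struct scatterlist *sgs[3],
                              void *token)
    {
        /* 1 "out" entry readable by the host, 2 "in" entries written by the host.
         * 'token' identifies the request and is handed back by get_buf later. */
        int err = virtqueue_add_sgs(vq, sgs, 1, 2, token, GFP_ATOMIC);
        if (err)
            return err;

        /* The kick: notify the host. Queuing several requests per kick is cheaper. */
        virtqueue_kick(vq);
        return 0;
    }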

Responses from the hypervisor occur through the get_buf function. The guest can poll simply by calling this function or wait for notification through the provided virtqueue callback function. When the guest learns that buffers are available, it calls get_buf to retrieve the completed buffers.
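
And the completion side, again as a hedged sketch: virtqueue_get_buf() is the modern counterpart of get_buf, and the callback below stands in for the per-virtqueue callback that the driver registers when it sets up the queue:

    #include <linux/virtio.h>

    /* Hypothetical completion handler, invoked when the hypervisor notifies the
     * guest that buffers on this virtqueue have been consumed or filled. */
    static void my_vq_callback(struct virtqueue *vq)
    {
        unsigned int len;
        void *token;

        /* Drain all completed requests; each token is the pointer passed at submit. */
        while ((token = virtqueue_get_buf(vq, &len)) != NULL) {
            /* 'len' is the number of bytes the host wrote into the "in" buffers;
             * complete_my_request(token, len) would be the driver's bookkeeping. */
        }
    }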

http://www.ibm.com/developerworks/library/l-virtio/

http://www.linux-kvm.org/wiki/images/d/dd/KvmForum2007%24kvm_pv_drv.pdf

www.justkernel.com

Anshul Makkar, anshul_makkar@justkernel.com

 
