Author: Siro Mugabi

Category: qemu

Tags: qemu linux virtio

This material is probably obsolete.

This entry presents a few examples that make use of the QEMU-virtio framework.

QEMU-virtio and Device I/O

Standard QEMU I/O virtual devices emulate all the essential aspects of the physical hardware, including control and status registers, I/O ports, device memory, IRQ lines, etc. This detailed emulation of real hardware incurs a significant performance cost, but it is necessary for the respective guest OS device drivers to continue to function as they do on physical hardware. As a platform for training engineers or teaching students, virtual devices present an elegant, cost-effective (albeit limited) platform for driver development. In embedded systems development, full hardware emulation of custom and non-existent devices enables parallel development of complete device driver and software stacks alongside hardware implementation.

QEMU-virtio presents an alternative framework for I/O: instead of emulating the details of real hardware, a more direct path to the host's backends is used instead. This approach generally results in an overall improvement in performance since the emulation overhead of real hardware artifacts is now avoided.

However, this now implies the use of special device drivers in the guest OS to operate the QEMU-virtio devices. In this respect, the QEMU-virtio scheme introduces an element of paravirtualization in QEMU's paradigm of full hardware virtualization via software emulation. Use of these special paravirtualized virtio device drivers technically means that the guest OS is now aware that it is executing in a virtual environment.

The motivation behind the development of the virtio framework was to provide a standardized API for hypervisor implementations that would represent interactions with an I/O device. This API would then be used by guest OS paravirtualized device drivers.

qemu-system supports a set of virtual devices which export virtio interfaces that enable the guest OS' paravirtualized device drivers to perform guest-host communication. Upstream support for most of the QEMU-virtio features described here has been available since around QEMU 0.12 and Linux 2.6.35. As a point of clarification, in the context of Linux guest and QEMU-virtio, the virtio framework is within the QEMU userspace process and does not depend on the KVM hypervisor.

In brief, in order to exploit QEMU-virtio:

  • Ensure that the required virtio support is enabled in the guest OS kernel.

  • Configure guest OS userspace accordingly, e.g. device files (/dev/vdN, /dev/virtio-ports, /dev/hvcN, etc). In addition, certain QEMU-virtio functionality such as virtconsole will require extra configuration, e.g. agetty(1) for a login terminal.

  • Specify the required virtio options on QEMU's command line, in the disk image's bootloader, etc.

Virtio Support in the Linux Guest Kernel

The exact set of Linux virtio configuration options to be enabled, and whether a particular option should be built-in or a module, will depend on the use case. Mileage will vary. At a minimum, for most of the examples in this entry, the following were enabled (Linux 3.x). If in doubt, configure the options statically at first, then selectively build some as modules in later test iterations:
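For a Linux 3.x guest, a representative set of Kconfig symbols covering the devices exercised in this entry (block, network, console), built statically as recommended above, would be:

```
CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
```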


A typical distro configuration, e.g. Ubuntu's, will resemble:
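Such a distro configuration typically builds the virtio drivers as modules (loaded from the initramfs at boot), along the lines of:

```
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_VIRTIO_BALLOON=m
```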


QEMU-virtio Commandline

Virtio block devices

The simplest way to map a disk image on the host as a virtio-blk device in the guest is by specifying the -drive file=${DISKIMAGE},if=virtio QEMU commandline option and something like /dev/vda on the Linux commandline:

  • Example 1:

    $ qemu-system-x86_64 -kernel bzImage -append "console=ttyS0 rw root=/dev/vda2" -nographic  \
        -drive file=brdisk-img.raw,if=virtio

    If the disk image had udev(7) installed, a /dev/virtio_blk device file should be created automatically. The ad-hoc Buildroot disk image used in this particular instance did not feature udev(7) and so some manual device file creation was performed:

    [root@buildroot ~]# cat /proc/devices | grep virt  
    253 virtblk
    [root@buildroot ~]# mknod -m 640 /dev/virtio_blk b 253 0
    [root@buildroot ~]# fdisk -u -l /dev/virtio_blk
    Disk /dev/virtio_blk: 2684 MB, 2684354560 bytes
    16 heads, 63 sectors/track, 5201 cylinders, total 5242880 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x56a8c587
                        Device Boot      Start         End      Blocks   Id  System
    /dev/virtio_blk1            2048     1026047      512000   83  Linux
    /dev/virtio_blk2         1026048     5242879     2108416   83  Linux

    You may also run the following checks:

    [root@buildroot ~]# basename `readlink /sys/bus/virtio/devices/virtio0/driver`
    ## QEMU monitor console
    (qemu) info block
    virtio0: /tmp/brdisk-img.raw (raw)
  • Example 2: Booting off a GRUB installed disk image:

    $ qemu-system-x86_64 -drive file=brdisk-img.raw,if=virtio

[Image: Grub QEMU-virtio]

There exist more sophisticated ways of specifying virtio-blk usage on the QEMU commandline, e.g.:

-drive file=disk-img,if=none,id=guest,cache=unsafe -device virtio-blk-pci,drive=guest
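This form separates the host backend (-drive ...,if=none) from the guest-visible frontend (-device virtio-blk-pci). Rewriting Example 1 in this style (same disk image; the id= and cache= values are illustrative) gives:

```
$ qemu-system-x86_64 -kernel bzImage -append "console=ttyS0 rw root=/dev/vda2" -nographic \
    -drive file=brdisk-img.raw,if=none,id=guest,cache=unsafe \
    -device virtio-blk-pci,drive=guest
```

The -device form also accepts virtio-blk-pci properties, e.g. a fixed PCI slot via addr=, that the if=virtio shorthand does not expose.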

Virtio network devices

Generally, for virtio-net devices, no special Linux guest commandline or userspace configuration is required. For a simple invocation, just specify -net nic,model=virtio for the QEMU network device model. For example:

$ sudo qemu-system-x86_64 (...) -net nic,model=virtio -net tap,ifname=tap0[,...]

Upon boot, proceed with normal eth0 (or whatever) configuration:

Welcome to Buildroot
buildroot login: root

[root@buildroot ~]# ifconfig eth0 netmask

[root@buildroot ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 44:33:22:11:66:77  
                    inet addr:  Bcast:  Mask:

[root@buildroot ~]# basename `readlink /sys/bus/virtio/devices/virtio0/driver`

## QEMU monitor console
(qemu) info network 
hub 0
 \ tap.0: index=0,type=tap,ifname=tap0,,
 \ virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=64:32:55:84:89:27

virtio-net PCI devices also include other options for things such as MSI vectors, device address, etc. Consult qemu(1).
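For instance, the equivalent -netdev/-device form with some of these properties set (the MAC address and MSI-X vector count here are illustrative values):

```
$ sudo qemu-system-x86_64 (...) \
    -netdev tap,id=net0,ifname=tap0 \
    -device virtio-net-pci,netdev=net0,mac=44:33:22:11:66:77,vectors=4
```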

Virtio serial

The virtio-serial device supports the virtconsole and virtserialport device options. virtconsole is used to provide a guest console/terminal while virtserialport may be used for other I/O operations such as file transfer and sending/receiving control messages to/from the guest. This section illustrates using virtconsole for a guest system console. Check out Using QEMU Character Devices for an example of using virtserialport for guest-host data transfer.

Linux boot messages on virtconsole

To provide a system console on the virtconsole device, CONFIG_VIRTIO_CONSOLE will first have to be statically built into the kernel. Then, just as for an ordinary system console, specify, say, console=hvc0 on the Linux commandline. hvc0 will correspond to the first virtconsole device on the QEMU commandline. For instance:

$ qemu-system-x86_64 (...) -append "root=/dev/vda2 console=hvc0 console=ttyS0 rw" -chardev stdio,id=virtiocon0 -device virtio-serial -device virtconsole,chardev=virtiocon0

Here, Linux boot messages will appear on both the serial port system console, ttyS0 (on a QEMU SDL VC, e.g. CTRL+ALT+3), and on virtconsole (stdio). Also see QEMU Serial Port System Console for tips on using different host backends (replacing ttySN in the examples with hvcN). Check out Using QEMU Character Devices for an example of using USB-serial as an alternative serial port device for a guest system console.

Login via Virtio Console

To receive a login prompt and enable a terminal on virtconsole, there are a few things that must first be properly configured in the guest root filesystem:

  • First, the /dev/hvc0 device file must be present during system boot. Additional device files, e.g. hvc1, hvc2, etc, correspond to additional virtconsole devices specified on the QEMU commandline. If the guest root filesystem has udev(7) support, these files will be created automatically. Otherwise, these device files may have to be installed statically.

  • Configure getty(8):

    • SysV init(8):

      [root@buildroot ~]# cat /etc/inittab
      # Buildroot's default inittab for Busybox.
      1:1:respawn:/sbin/getty 38400 tty1
      2:1:respawn:/sbin/getty 38400 tty2
      S0:1:respawn:/sbin/getty -L 115200 ttyS0 vt100 # GENERIC_SERIAL
      hvc0:1:respawn:/sbin/getty -L 115200 hvc0  vt100 # VIRTIO SERIAL
    • Upstart init(5):

      $ cat /etc/init/hvc0.conf
      start on stopped rc or RUNLEVEL=[12345]
      stop on runlevel [!12345]
      exec /sbin/getty -L 115200 hvc0 vt102
  • Depending on the guest's login configuration, the /etc/securetty file may have to be edited to include hvcN entries. This lets pam_securetty(8) permit root login on virtconsole. On one Ubuntu 12.04 distro, the file already had this default configuration:

    # IBM iSeries/pSeries virtual console

    With the ad-hoc Buildroot image used here, hvcN settings had to be appended to its /etc/securetty.
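For a rootfs without udev(7), the static installation of the hvcN device files mentioned earlier can be sketched as follows (character major 229 is the hvc console major in the kernel's Documentation/devices.txt):

```
[root@buildroot ~]# mknod -m 600 /dev/hvc0 c 229 0
[root@buildroot ~]# mknod -m 600 /dev/hvc1 c 229 1
```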

Note that for login via virtconsole, CONFIG_VIRTIO_CONSOLE need not be statically built into the kernel - especially if the guest has udev(7) support. However, in this case, no Linux boot messages will appear on virtconsole. For instance, the following QEMU commandline was used against a distro kernel which had CONFIG_VIRTIO_CONSOLE[=m] settings:

$ qemu-system-x86_64 -enable-kvm -smp 2 -m 1G \
    -drive file=vm_test-img.raw,if=virtio \
    -chardev stdio,id=virtiocon0 \
    -device virtio-serial \
    -device virtconsole,chardev=virtiocon0

    Ubuntu 12.04.4 LTS swara hvc0
    swara login:

Also See

  • Using QEMU Character Devices for an example of using -device virtserialport, and for examples of specifying -device virtio-serial options including MSI vectors, number of ports, multiple devices per guest, hot-plug/hot-unplug via the QEMU monitor console, etc.

  • QEMU Serial Port System Console