Tag Archives: qemu

Fresh Fedora 26 images uploaded

Fedora 26 is out the door, and here are fresh Fedora 26 images.

There are raspberry pi images. The aarch64 image requires a model 3; the armv7 image boots on both the 2 and 3 models. Unlike the images for previous Fedora releases, the new images use the standard Fedora kernels instead of a custom kernel. So, the kernel update service for the older images will stop within the next few weeks.

There are efi images for qemu. The i386 and x86_64 images use systemd-boot as bootloader. grub2 doesn’t work due to bug 1196114 (unless you create a boot menu entry manually in uefi setup). The arm images use grub2 as bootloader. armv7 isn’t supported by systemd-boot in the first place, and the aarch64 version throws an exception. The efi images can also be booted as containers, using "systemd-nspawn --boot --image <file>", but you have to convert them to raw first.
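
For example, converting an image and booting it as a container could look like this (a minimal sketch; the file names are placeholders, and it assumes the downloaded image is in qcow2 format):

qemu-img convert -O raw <image>.qcow2 <image>.raw
systemd-nspawn --boot --image <image>.raw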

The images don’t have a root password. You have to set one using virt-customize -a <image> --root-password "password:<secret>", otherwise you can’t log in after boot.

The images have been created with imagefish.

Fedora 25 images for qemu and raspberry pi 3 uploaded

I’ve uploaded three new images to https://www.kraxel.org/repos/rpi2/images/.

The fedora-25-rpi3 image is for the raspberry pi 3.
The fedora-25-efi images are for qemu (virt machine type with edk2 firmware).

The images don’t have a root password set. You must use libguestfs-tools to set the root password …

virt-customize -a <image> --root-password "password:<your-password-here>"

… otherwise you can’t log in after boot.

The rpi3 image is partitioned similarly to the official (armv7) fedora 25 images: the firmware is on a separate vfat partition holding only the firmware and u-boot (mounted at /boot/fw). /boot is an ext2 filesystem now and holds the kernels only. Well, for compatibility with the f24 images (all images use the same kernel rpms) firmware files are in /boot too, but they are not used. So, if you want to tweak something in config.txt, go to /boot/fw, not /boot.

The rpi3 images also have swap commented out in /etc/fstab. The reason is that the swap partition must be reinitialized: swap partitions created while running on a 64k pages kernel (CONFIG_ARM64_64K_PAGES=y) are not compatible with 4k pages (CONFIG_ARM64_4K_PAGES=y). Fix this by running “swapon --fixpgsz <device>” once, then you can uncomment the swap line in /etc/fstab.
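
A minimal sketch of that fix (the device name is an assumption, check your actual partition layout first):

# find the swap partition (the device name used below is just a guess)
lsblk -o NAME,FSTYPE,MOUNTPOINT
# reinitialize the swap area with the current page size
swapon --fixpgsz /dev/mmcblk0p3
# now uncomment the swap line in /etc/fstab, then activate it
swapon -a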

local display for intel vgpu starts working

The intel vgpu guest display shows up in the qemu gtk window. This is a linux kernel booted to the dracut emergency shell prompt, with the framebuffer console running on inteldrmfb. Booting a full linux guest with X11 and/or wayland is not tested yet. There are rendering glitches too, visible in the “dmesg” output.

So, a bunch of issues to solve before this is ready for users, but it’s a nice start.

For the brave:

host kernel: https://www.kraxel.org/cgit/linux/log/?h=intel-vgpu
qemu: https://www.kraxel.org/cgit/qemu/log/?h=work/intel-vgpu

Take care, both branches are moving targets (aka: rebasing at times).

virtual gpu support landing upstream

The upstreaming process of virtual gpu support (vgpu) made a big step forward with the 4.10 merge window. Two important pieces have been merged:

First, the mediated device framework (mdev). Basically this allows kernel drivers to present virtual pci devices, using the vfio framework and interfaces. Both nvidia and intel will use mdev to partition the physical gpu of the host into multiple virtual devices which then can be assigned to virtual machines.

Second, intel landed initial mdev support for the i915 driver too. There is quite some work left to do in future kernel releases though. Accessing the guest display is not supported yet, so you must run x11vnc or similar tools in the guest to see the screen. Also there are some stability issues to find and fix.

If you want to play with this nevertheless, here is how to do it. But be prepared for crashes, and better don’t try this on a production machine.

On the host: create virtual devices

On the host machine you obviously need a 4.10 kernel. Also the intel graphics device (igd) must be broadwell or newer. In the kernel configuration enable vfio and mdev (all CONFIG_VFIO_* options). Enable CONFIG_DRM_I915_GVT and CONFIG_DRM_I915_GVT_KVMGT for intel vgpu support. Building the mtty sample driver (CONFIG_SAMPLE_VFIO_MDEV_MTTY, a virtual serial port) can be useful too, for testing.

Boot the new kernel. Load all modules: vfio-pci, vfio-mdev, optionally mtty. Also i915 and kvmgt of course, but that probably happened during boot already.
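
A quick sketch of that step (module names as listed above; most of them are probably loaded automatically anyway):

# vfio and mdev modules
modprobe vfio-pci
modprobe vfio-mdev
# optional: the sample virtual serial port driver
modprobe mtty
# intel gpu driver plus the kvmgt glue (usually loaded at boot already)
modprobe i915
modprobe kvmgt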

Go to the /sys/class/mdev_bus directory. This should look like this:

kraxel@broadwell ~# cd /sys/class/mdev_bus
kraxel@broadwell .../class/mdev_bus# ls -l
total 0
lrwxrwxrwx. 1 root root 0 17. Jan 10:51 0000:00:02.0 -> ../../devices/pci0000:00/0000:00:02.0
lrwxrwxrwx. 1 root root 0 17. Jan 11:57 mtty -> ../../devices/virtual/mtty/mtty

Each driver with mdev support has a directory there. Go to $device/mdev_supported_types to check what kind of virtual devices you can create.

kraxel@broadwell .../class/mdev_bus# cd 0000:00:02.0/mdev_supported_types
kraxel@broadwell .../0000:00:02.0/mdev_supported_types# ls -l
total 0
drwxr-xr-x. 3 root root 0 17. Jan 11:59 i915-GVTg_V4_1
drwxr-xr-x. 3 root root 0 17. Jan 11:57 i915-GVTg_V4_2
drwxr-xr-x. 3 root root 0 17. Jan 11:59 i915-GVTg_V4_4

As you can see intel supports three different configurations on my machine. They differ in the amount of video memory and in the number of instances you can create. Check the description and available_instance files in the directories:

kraxel@broadwell .../0000:00:02.0/mdev_supported_types# cd i915-GVTg_V4_2
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# cat description 
low_gm_size: 64MB
high_gm_size: 192MB
fence: 4
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# cat available_instance 
2

Now it is possible to create virtual devices by writing a UUID into the create file:

kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# uuid=$(uuidgen)
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# echo $uuid
f321853c-c584-4a6b-b99a-3eee22a3919c
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# sudo sh -c "echo $uuid > create"

The new vgpu device will show up as a subdirectory of the host gpu:

kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# cd ../../$uuid
kraxel@broadwell .../0000:00:02.0/f321853c-c584-4a6b-b99a-3eee22a3919c# ls -l
total 0
lrwxrwxrwx. 1 root root    0 17. Jan 12:31 driver -> ../../../../bus/mdev/drivers/vfio_mdev
lrwxrwxrwx. 1 root root    0 17. Jan 12:35 iommu_group -> ../../../../kernel/iommu_groups/10
lrwxrwxrwx. 1 root root    0 17. Jan 12:35 mdev_type -> ../mdev_supported_types/i915-GVTg_V4_2
drwxr-xr-x. 2 root root    0 17. Jan 12:35 power
--w-------. 1 root root 4096 17. Jan 12:35 remove
lrwxrwxrwx. 1 root root    0 17. Jan 12:31 subsystem -> ../../../../bus/mdev
-rw-r--r--. 1 root root 4096 17. Jan 12:35 uevent

You can see the device landed in iommu group 10. We’ll need that in a moment.

On the host: configure guests

Ideally this would be as simple as adding <hostdev> to your guest’s libvirt xml config. The mdev devices don’t have a pci address on the host though, and because of that they must be passed to qemu using the sysfs device path instead of the pci address. libvirt doesn’t (yet) support sysfs paths though, so it is a bit more complicated for now. A lot of the setup libvirt does automatically for hostdevs must be done manually instead.

First, we must allow qemu to access /dev. By default libvirt uses control groups to restrict access; that must be turned off. Edit /etc/libvirt/qemu.conf, uncomment the cgroup_controllers line and remove "devices" from the list, then restart libvirtd.
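
The result should look roughly like this (the exact list of controllers varies between libvirt versions; the point is only that "devices" is gone):

cgroup_controllers = [ "cpu", "cpuacct", "cpuset", "memory", "blkio" ]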

Second, we must allow qemu to access the iommu group (10 in my case). A simple chmod will do:

kraxel@broadwell ~# chmod 666 /dev/vfio/10

Third, we must update the guest configuration:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  [ ... ]
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memoryBacking>
    <locked/>
  </memoryBacking>
  [ ... ]
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,addr=05.0,sysfsdev=/sys/class/mdev_bus/0000:00:02.0/f321853c-c584-4a6b-b99a-3eee22a3919c'/>
  </qemu:commandline>
</domain>

There is a special qemu namespace which can be used to pass extra command line arguments to qemu. We use it here for a qemu feature not yet supported by libvirt (sysfs paths for vfio-pci). Also we must explicitly allow guest memory to be locked.

Now we are ready to go:

kraxel@broadwell ~# virsh start --console $guest

In the guest

It is a good idea to prepare the guest a bit before adding the vgpu to the guest configuration. Set up a serial console, so you can talk to it even in case graphics are broken. Blacklist the i915 module and load it manually, at least until you have a known-working configuration. Also, booting to runlevel 3 (aka multi-user.target) instead of 5 (aka graphical.target) and starting the xorg server manually is better for now.
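
One possible way to do that preparation inside the guest, using the standard modprobe and systemd mechanisms (a sketch, adapt as needed):

# keep i915 from loading automatically
echo "blacklist i915" > /etc/modprobe.d/i915-blacklist.conf
# boot to the text console by default
systemctl set-default multi-user.target
# once booted and ready to test, load the driver by hand
modprobe i915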

For the guest machine intel recommends the 4.8 kernel. In theory newer kernels should work too; in practice they didn’t last time I tested (4.10-rc2). Also make sure the xorg server uses the modesetting driver; the intel driver didn’t work in my testing. This config file will do:

root@guest ~# cat /etc/X11/xorg.conf.d/intel.conf 
Section "Device"
        Identifier  "Card0"
#       Driver      "intel"
        Driver      "modesetting"
        BusID       "PCI:0:5:0"
EndSection

I’m starting the xorg server with x11vnc, xterm and mwm (motif window manager) using this little script:

#!/bin/sh

# debug
echo "# $0: DISPLAY=$DISPLAY"

# start server
if test "$DISPLAY" != ":4"; then
        echo "# $0: starting Xorg server"
        exec startx $0 -- /usr/bin/Xorg :4
        exit 1
fi
echo "# $0: starting session"

# configure session
xrdb $HOME/.Xdefaults

# start clients
x11vnc -rfbport 5904 &
xterm &
exec mwm

The session runs on display 4, so you should be able to connect from the host this way:

kraxel@broadwell ~# vncviewer $guest_ip:4

Have fun!

advanced network booting

I use network booting a lot. Very convenient, I rarely have to fiddle with boot iso images these days. My setup has evolved over the years, and I’m going to describe here how it looks today.

general intro

First thing needed is a machine which runs the dhcp server. In a typical setup the home internet router also provides the dhcp service, but usually you can’t configure the dhcp server much. So I turned off the dhcp server in the router and configured a linux machine with dhcpd instead. These days this is a raspberry pi, running fedora 24, serving dhcp and dns.

I used to run this on my x86 server acting as NAS, but that turned out to be a bad idea. We have power failures now and then, the NAS checks the filesystems at boot, which can easily take half an hour, and there used to be no dhcp and dns service during that time. In contrast the raspberry pi is back in service in less than a minute.

Obviously the raspberry pi itself can’t get an ip address via dhcp; it needs a static address assigned instead. systemd-networkd does the job with this ethernet.network config file:

[Match]
Name=eth0

[Network]
Address=192.168.2.10/24
Gateway=192.168.2.1
DNS=127.0.0.1
Domains=home.kraxel.org

Address=, Gateway= and Domains= must be adjusted according to your local network setup of course. The same applies to all the other config file snippets below.

dhcp setup

Ok, let’s walk through the dhcpd.conf file:

option arch code 93 = unsigned integer 16;      # RFC4578

This defines the option arch, which will be used later.

default-lease-time 86400;                       # 1d
max-lease-time 604800;                          # 7d

With long lease times there will be less chatter in the log file, and nobody will lose network connectivity in case the dhcpd server is down for a moment (for example when fiddling with the config and dhcpd not restarting properly due to syntax errors).

ddns-update-style none;
authoritative;

No ddns used here. I assign static ip addresses instead (more on this below).

subnet 192.168.2.0 netmask 255.255.255.0 {
        # network
        range 192.168.2.200 192.168.2.249;
        option routers 192.168.2.1;

Default ip address and network of my router (Fritzbox). The small range configured here is used for dynamic ip addresses only; the room below 200 is left for static ip addresses.

        # dns, ntp
        option domain-name "home.kraxel.org";
        option domain-name-servers 192.168.2.10;
        option ntp-servers ntp.home.kraxel.org;

Oh, right, almost forgot to mention: the raspberry pi also runs ntp, so not each and every machine has to talk to the ntp pool servers. We announce that here (together with the dns server), so the dhcp clients can pick up that information.

Nothing tricky so far, now the network boot configuration starts:

        if (option arch = 00:0b) {
                # EFI aarch64
                next-server 192.168.2.14;
                filename "efi-a64/grubaa64.efi";

Here I use the option arch to figure out what kind of client is booting and pick a matching boot file. This is for 64bit arm machines, loading the grub2 efi binary. grub in turn will look for a grub.cfg file in the efi-a64/ directory, so placing stuff in sub-directories separates things nicely.

        } else if (option arch = 00:09) {
                # EFI x64 -- ipxe
                next-server 192.168.2.14;
                filename "efi-x64/shim.efi";
        } else if (option arch = 00:07) {
                # EFI x64 -- ovmf
                next-server 192.168.2.14;
                filename "efi-x64/shim.efi";

Same for x86_64. For some reason ovmf (with the builtin virtio-net driver) and ipxe efi drivers use different arch ids to signal x86_64. I simply list both here to get things going no matter what.

Update (Nov 9th): 7 is the correct value for x86_64. Looks like RFC 4578 got this wrong initially; there is an errata for this. ipxe is fixed meanwhile.

        } else if ((exists vendor-class-identifier) and
                   (option vendor-class-identifier = "U-Boot.armv8")) {
                # rpi 3
                next-server 192.168.2.108;
                filename "rpi3/boot.conf";

This is uboot (64bit) on a raspberry pi 3 trying to netboot. Well, any aarch64 device to be exact, but I don’t own any other. The boot.conf file is a dummy and doesn’t exist. uboot will pick up the rpi3/ sub-directory though, and after failing to load boot.conf it will try to boot pxelinux style, i.e. look for config files in the rpi3/pxelinux.cfg/ directory.

        } else if ((exists vendor-class-identifier) and
                   (option vendor-class-identifier = "U-Boot.armv7")) {
                # rpi 2
                next-server 192.168.2.108;
                filename "rpi2/boot.conf";

Same for armv7 (32bit) uboot, raspberry pi 2 in my case. raspberry pi 3 with 32bit uboot will land here too.

        } else if ((exists user-class) and ((option user-class = "iPXE") or
                                            (option user-class = "gPXE"))) {
                # bios -- gpxe/ipxe
                next-server 192.168.2.14;
                filename "http://192.168.2.14/tftpboot/pxelinux.0";

This is for ipxe/gpxe on classic bios-based x86 machines. ipxe can load files over http, and pxelinux running with ipxe as network driver can do this too. So we can specify an http url as filename, and it’ll work fine, and faster than using tftp.

        } else {
                # bios -- chainload gpxe
                next-server 192.168.2.14;
                filename "gpxe-undionly.kpxe";
        }

Everything else chainloads gpxe. I think this has been unused for ages; new physical machines have EFI these days and virtual machines already use gpxe/ipxe roms when running with seabios.

}
include "/etc/dhcp/home.4.dhcp.inc";
include "/etc/dhcp/xeni.4.dhcp.inc";

Here the static ip addresses are assigned. This is in an include file because these files are generated by a script. I basically have a file with a table listing hostname, mac address and ip address, and my script generates include files for dhcpd.conf and dns zone files from that. Inside the include there are lots of entries looking like this:

host photosmart {
        hardware ethernet 10:60:4b:11:71:34;
        fixed-address 192.168.2.53;
}

So all machines get a static ip address assigned, based on the mac address. Together with the dns configuration this allows connecting to the machines by hostname.
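
A stripped-down sketch of such a generator, covering just the dhcpd part (the hosts.txt format with whitespace-separated hostname, mac and ip columns is an assumption, not my actual file format):

#!/bin/sh
# read "hostname mac ip" lines and emit dhcpd host entries
while read name mac ip; do
        printf 'host %s {\n' "$name"
        printf '\thardware ethernet %s;\n' "$mac"
        printf '\tfixed-address %s;\n}\n' "$ip"
done < hosts.txt > /etc/dhcp/home.4.dhcp.inc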

In case you want a different netboot configuration for a specific host it is also possible to add next-server and filename entries here to override the network defaults configured above.

That’s it for dhcp.

tftp setup

Ok, a tftp server is needed too. It can run on the same machine, but it doesn’t have to. You can even have multiple machines serving tftp. You might have noticed that not all entries above have the same next-server. The raspberry pi entries point to my workstation where I cross-compile arm kernels now and then, so I can boot them directly. All other entries point to the NAS where all the install trees are located. Getting tftp running is easy these days:

  1. dnf install tftp-server
  2. systemctl enable tftp
  3. Place the boot files in /var/lib/tftpboot

Placing the boot files is left as an exercise for the reader, otherwise this becomes too long. Maybe I’ll do a separate post on the topic later.

For now just a general hint: in.tftpd has a --verbose switch. Turning that on and watching the log is a good way to see which files clients are trying to load. Helpful for trouble-shooting.
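
With the socket-activated tftp service on Fedora a systemd drop-in is one way to turn that switch on (the ExecStart line below is an assumption based on the stock unit; check "systemctl cat tftp" for yours):

# systemctl edit tftp.service, then add:
[Service]
ExecStart=
ExecStart=/usr/sbin/in.tftpd --verbose -s /var/lib/tftpboot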

httpd setup

Only needed if you want to boot over http as mentioned above. Unless you have apache httpd already running you have to install and enable it of course. Then you can drop this tftpboot.conf file into /etc/httpd/conf.d to make /var/lib/tftpboot available over http too:

<Directory "/var/lib/tftpboot">
        Options Indexes FollowSymLinks Includes
        AllowOverride None
        Require ip 127.0.0.1/8 192.168.0.0/16
</Directory>

Alias   "/tftpboot"      "/var/lib/tftpboot"

libvirt setup

So, of course I want to netboot my virtual machines too, even if they are on a NAT-ed libvirt network. In that case they are not using the dhcp server running on my raspberry pi, but the dnsmasq server started by libvirt on the virtualization host. Luckily it is possible to configure network booting in the network configuration, like this:

<network>
  <name>default</name>
  <forward mode='nat'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.10' end='192.168.123.99'/>
      <bootp file='http://192.168.2.14/tftpboot/pxelinux.0'/>
    </dhcp>
  </ip>
</network>

That’ll work just fine for seabios guests, but not when running ovmf. Unfortunately libvirt doesn’t support serving different boot files depending on client architecture. But there is an easy way out: just define a separate network for ovmf guests:

<network>
  <name>efi-x64</name>
  <forward mode='nat'/>
  <ip address='192.168.132.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.132.10' end='192.168.132.99'/>
      <bootp file='efi-x64/shim.efi' server='192.168.2.14'/>
    </dhcp>
  </ip>
</network>

Ok, almost there. While the tcp-based http protocol goes through the NAT forwarding without a hitch, the udp-based tftp protocol doesn’t. It needs some extra help: the nf_nat_tftp kernel module handles that. You can use the systemd module load service to make sure it gets loaded on each boot.
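
That boils down to one modprobe plus a small config file for systemd-modules-load (the file name is arbitrary):

# load it right away
modprobe nf_nat_tftp
# and on every boot
echo nf_nat_tftp > /etc/modules-load.d/nf_nat_tftp.conf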

using qemu directly

The qemu user network stack has netboot support too; you point it at the tftp server this way:

qemu-system-x86_64 \
        -netdev user,id=net0,bootfile=http://192.168.2.14/tftpboot/pxelinux.0 \
        -device virtio-net-pci,netdev=net0 \
        [ more args follow here ]

In case the tftpboot directory is available on the local machine it is also possible to use the builtin tftp server instead:

qemu-system-x86_64 \
        -netdev user,id=net0,tftp=/var/lib/tftpboot,bootfile=pxelinux.0 \
        -device virtio-net-pci,netdev=net0 \
        [ more args follow here ]

Using virtio-gpu with libvirt and spice

It’s been a while since the last virtio-gpu status report. So, here we go with an update.

I gave a talk about qemu graphics at KVM Forum 2016 in Toronto, covering (among other things) virtio-gpu. Here are the slides. I’ll summarize the important stuff below for those who want to play with it. The new blog picture above is the Toronto skyline btw.

First the good news: Almost everything needed is upstream meanwhile, so using virtio-gpu (with virgl aka opengl acceleration) is a lot easier these days. Fedora 24 supports it out-of-the-box for both host and guest.

Requirements for the guest:

  • linux kernel 4.4 (version 4.2 without opengl)
  • mesa 11.1
  • xorg server 1.19, or 1.18 with commit "5627708 dri2: add virtio-gpu pci ids" backported

Requirements for the host:

  • qemu 2.6
  • virglrenderer
  • spice server 0.13.2 (development release)
  • spice-gtk 0.32 (used by virt-viewer & friends)
    Note that 0.32 got a new shared library major version, so the tools using it must be compiled against that version.
  • mesa 10.6
  • libepoxy 1.3.1

The libvirt domain config snippet:

<graphics type='spice'>
  <listen type='none'/>
  <gl enable='yes'/>
</graphics>
<video>
  <model type='virtio'/>
</video>

And the final important bit is that spice needs a unix socket connection for opengl to work, and you’ll get it by attaching the viewer to qemu this way:

virt-viewer --attach $domain

If everything went fine you should see this in the guest’s linux kernel log:

root@fedora ~# dmesg | grep '\[drm\]'
[drm] Initialized drm 1.1.0 20060810
[drm] pci: virtio-vga detected
[drm] virgl 3d acceleration enabled
[drm] virtio vbuffers: 272 bufs, 192B each, 51kB total.
[drm] number of scanouts: 1
[drm] number of cap sets: 1
[drm] cap set 0: id 1, max-version 1, max-size 308
[drm] Initialized virtio_gpu 0.0.1 0 on minor 0

And glxinfo should print something like this:

root@fedora ~# glxinfo | grep ^OpenGL
OpenGL vendor string: Red Hat
OpenGL renderer string: Gallium 0.4 on virgl
OpenGL core profile version string: 3.3 (Core Profile) Mesa 12.0.2
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 12.0.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 12.0.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:

Using virtio-input with libvirt

The new virtio input devices are not that new any more. Support was merged in qemu 2.4 (host) and linux kernel 4.1 (guest), which means that most distributions should have picked up support for virtio-input meanwhile. libvirt gained support for virtio-input too (version 1.3.0 & newer), so using virtio-input devices is as simple as adding

<input type='tablet' bus='virtio'/>

to your domain xml configuration. Or replace the usb tablet with a virtio tablet, possibly eliminating the need for a usb host adapter in your virtual machine, as the usb tablet is often the only usb device.

There are also virtio keyboard and mouse devices. Using them on x86 isn’t very useful as every virtual machine has ps/2 keyboard and mouse anyway. For ppc64, arm and aarch64 architectures the virtio keyboard is a possible alternative to the usb keyboard:

<input type='keyboard' bus='virtio'/>

At the moment the firmware (edk2/slof) lacks support for virtio keyboards, so switching from usb to virtio loses the ability to do any keyboard input before the linux kernel driver loads, i.e. you can’t operate the grub boot menu. I hope this changes in the future.

If you have to stick to the usb keyboard or usb tablet due to missing guest drivers for virtio input, I strongly suggest using xhci as the usb host adapter:

<controller type='usb' model='nec-xhci'/>
<input type='tablet' bus='usb'/>

xhci emulation needs noticeably fewer cpu cycles compared to the uhci, ohci and ehci host adapters. That of course requires xhci driver support in the guest, but that should be less of an issue these days. Windows 7 is probably the only guest without xhci support which is still in widespread use.

Two new images uploaded

Uploaded two new images.

First, a centos7 image for the raspberry pi 3 (arm64). Very similar to the fedora images, see the other raspberry pi posts for details.

Second, an armv7 qemu image with the grub2 boot loader. It boots with efi firmware (edk2/tianocore), so there is no need to copy kernel+initrd from the image and pass that to qemu, making the boot process much more convenient. Looks like this will not land in Fedora though. The Fedora maintainers prefer to stick with u-boot for armv7 and plan to improve u-boot instead so it works better with qemu. Which will of course solve the boot issue too, but that doesn’t exist today.

Usage: Fetch edk2.git-arm from the firmware repo. Use this to define a libvirt guest:

<domain type='qemu'>
  <name>fedora-armv7</name>
  <memory unit='KiB'>1048576</memory>
  <os>
    <type arch='armv7l' machine='virt'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2.git/arm/QEMU_EFI-pflash.raw</loader>
    <nvram template='/usr/share/edk2.git/arm/vars-template-pflash.raw'/>
  </os>
  <features>
    <gic version='2'/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>cortex-a15</model>
  </cpu>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/path/to/arm-qemu-f24-uefi.raw'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </controller>
    <console/>
  </devices>
</domain>
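
With the xml saved to a file, defining and starting the guest is the usual virsh routine (the file name here is just an example):

virsh define fedora-armv7.xml
virsh start --console fedora-armv7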

Update 1: root password for the qemu uefi image is “uefi”.
Update 2: comments closed due to spam flood, sorry.

latest updates from virtio-gpu department

It’s been a while since the last virtio-gpu status report. The qemu 2.6 release is just around the corner, time for a writeup about what is coming.

virgl support was added to qemu 2.5 already. It was supported only by the SDL2 and gtk UIs, which is a nice start, but has the drawback that qemu needs direct access to your display. You can’t log out from your desktop session without also killing the virtual machine.

qemu 2.6 features headless opengl support and spice integration. qemu will use render nodes, render the guest display into dma-bufs, and pass those dma-bufs on to the spice client for display (if connected). This works for the local display only for now, i.e. spice will simply display the dma-bufs. The long-term plan (after the spice video support overhaul currently in progress is merged) is to also support hardware-assisted video encoding for a remote display.

If you want to play with this you need:

  • latest spice-gtk version (0.31)
  • latest spice-server development release (0.13.1)
  • virglrenderer (packages should start showing up in distros now).

If you are running fedora or rhel7 you can use my copr repo.
And of course you need latest qemu (2.6 release candidate or final release once out) too.

OpenGL support is off by default in qemu; you have to enable it manually by appending gl=on to the -spice options. libvirt 1.3.0 and newer has support for that in the domain xml. The spice client must be attached (virt-viewer --attach) using unix sockets, because they are used to pass the dma-buf file descriptors.
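
For reference, a plain qemu invocation with opengl enabled over a unix socket could look roughly like this (the socket path is arbitrary; double-check the -spice suboptions against your qemu version):

qemu-system-x86_64 \
        -vga virtio \
        -spice unix,addr=/run/user/1000/vm.sock,disable-ticketing,gl=on \
        [ more args follow here ]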

Have fun!

virtio-gpu and libvirt

libvirt has no support for virtio-vga yet. There is a bug for virtio-vga support in libvirt. But for the time being we need to play some tricks. Luckily libvirt has some special xml syntax to pass command line options to qemu. So here we go:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  [ ... ]
  <devices>
    [ ... ]
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    [ ... ]
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.video0.driver=virtio-vga'/>
  </qemu:commandline>
</domain>

The idea is to tell libvirt we want a cirrus vga. That one has no config options, that’s why it works best for our purposes. Then we use the -set switch to change the emulation driver from cirrus to virtio-vga. video0 is the id libvirt assigns to the (first and only) video device.

Next level is turning on virgl & opengl support. Initially the sdl and gtk user interfaces will be supported (qemu 2.5 most likely); spice will follow later. So let’s pick gtk. Here we go:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  [ ... ]
  <devices>
    [ ... ]
    <graphics type='sdl' display=':0' xauth='/path/to/.Xauthority'/>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    [ ... ]
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.video0.driver=virtio-vga'/>
    <qemu:arg value='-display'/>
    <qemu:arg value='gtk,gl=on'/>
  </qemu:commandline>
</domain>

Similar approach: we ask libvirt for sdl support, picking that one because libvirt will take care to pass xauth info to qemu so it gets access to the X11 display. Then we use the -display switch to override the libvirt display configuration with gtk, and turn on opengl.

One final remark: On modern linux systems xauth info is stored in /run/gdm/auth-for-$USER-$RANDOM/database instead of $HOME/.Xauthority. I have a little script to copy the xauth info to a fixed place and also make it readable so libvirt can access it:

#!/bin/sh
if test "$XAUTHORITY" != "$HOME/.Xauthority"; then
    xauth extract "$HOME/.Xauthority" "$DISPLAY"
    chmod +r $HOME/.Xauthority
fi

Note that making .Xauthority world-readable effectively gives every user on your machine access to your X11 display. On your private laptop where you are the only user that should be ok, but on a shared machine you probably want to do something more secure instead.