Category Archives: Tech Tips

Fedora 25 images for qemu and raspberry pi 3 uploaded

I’ve uploaded three new images to https://www.kraxel.org/repos/rpi2/images/.

The fedora-25-rpi3 image is for the raspberry pi 3.
The fedora-25-efi images are for qemu (virt machine type with edk2 firmware).

The images don’t have a root password set. You must use libguestfs-tools to set the root password …

virt-customize -a <image> --root-password "password:<your-password-here>"

… otherwise you can’t login after boot.

The rpi3 image is partitioned similar to the official (armv7) fedora 25 images: The firmware and uboot are on a separate vfat partition of their own (mounted at /boot/fw). /boot is an ext2 filesystem now and holds the kernels only. Well, for compatibility reasons with the f24 images (all images use the same kernel rpms) firmware files are in /boot too, but they are not used. So, if you want to tweak something in config.txt, go to /boot/fw, not /boot.

The rpi3 images also have swap commented out in /etc/fstab. The reason is that the swap partition must be reinitialized: swap partitions created while running on a 64k pages kernel (CONFIG_ARM64_64K_PAGES=y) are not compatible with 4k pages (CONFIG_ARM64_4K_PAGES=y). This can be fixed by running “swapon --fixpgsz <device>” once, then you can uncomment the swap line in /etc/fstab.
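
For reference, the one-time fix looks like this (a sketch; /dev/mmcblk0p3 is just an example, use whatever your swap partition actually is):

# reinitialize the swap signature for the page size of the running kernel
swapon --fixpgsz /dev/mmcblk0p3
# then remove the comment sign from the swap line in /etc/fstab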

tweak arm images with libguestfs-tools

So, when using the official fedora arm images on your raspberry pi (or any other arm board) you might have faced the problem that they are not easy to use for a headless machine (i.e. no keyboard and display connected). There is no default password, fedora asks you to set one on the first boot instead. Which is surely better from a security point of view than shipping with a fixed password. But for headless machines it is quite inconvenient …

Luckily there is an easy way out: libguestfs-tools. The tools have been created to configure virtual machine images (this is where the name comes from), but they work fine with sdcards too.

I’m using a usb sdcard reader which shows up as /dev/sdc on my system. I can just pass /dev/sdc as image to the tools (take care, the device is probably something else for you). For example, to set a root password:

virt-customize -a /dev/sdc --root-password "password:<your-password-here>"

The initial setup on the first boot is a systemd service, and it can be turned off by simply removing the symlinks which enable the service:

virt-customize -a /dev/sdc \
  --delete /etc/systemd/system/multi-user.target.wants/initial-setup.service \
  --delete /etc/systemd/system/graphical.target.wants/initial-setup.service

You can use virt-copy-in (or virt-tar-in) to copy config files to the disk image. Small (or empty) configuration files can also be created with the write command:

virt-customize -a /dev/sdc --write "/.autorelabel:"

Adding the .autorelabel file will force selinux relabeling on the first boot (takes a while). It is a good idea to do that in case you copy files to the sdcard, to make sure the new files are labeled correctly. Especially in case you copy security-sensitive things like ssh keys or ssh config files. Without relabeling selinux will not allow sshd to access those files, which in turn can break remote logins.

There is a lot more the virt-* tools can do for you. Check out the manual pages for more info. And you can easily script things: virt-customize has a --commands-from-file switch which accepts a file with a list of commands.
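
For example (just a sketch, the file name is made up), such a command file could collect the customizations from above, one command per line and without the leading dashes:

# customize.cmds
root-password password:<your-password-here>
delete /etc/systemd/system/multi-user.target.wants/initial-setup.service
delete /etc/systemd/system/graphical.target.wants/initial-setup.service
write /.autorelabel:

Then run all of them in one go:

virt-customize -a /dev/sdc --commands-from-file customize.cmds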

virtual gpu support landing upstream

The upstreaming process of virtual gpu support (vgpu) made a big step forward with the 4.10 merge window. Two important pieces have been merged:

First, the mediated device framework (mdev). Basically this allows kernel drivers to present virtual pci devices, using the vfio framework and interfaces. Both nvidia and intel will use mdev to partition the physical gpu of the host into multiple virtual devices which then can be assigned to virtual machines.

Second, intel landed initial mdev support in the i915 driver too. There is quite some work left to do in future kernel releases though. Accessing the guest display is not supported yet, so you must run x11vnc or similar tools in the guest to see the screen. Also there are some stability issues left to find and fix.

If you want to play with this nevertheless, here is how to do it. But be prepared for crashes, and better don’t try this on a production machine.

On the host: create virtual devices

On the host machine you obviously need a 4.10 kernel. Also the intel graphics device (igd) must be broadwell or newer. In the kernel configuration enable vfio and mdev (all CONFIG_VFIO_* options). Enable CONFIG_DRM_I915_GVT and CONFIG_DRM_I915_GVT_KVMGT for intel vgpu support. Building the mtty sample driver (CONFIG_SAMPLE_VFIO_MDEV_MTTY, a virtual serial port) can be useful too, for testing.
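
The config fragment looks roughly like this (a sketch, not a complete list; add the remaining CONFIG_VFIO_* options as needed):

CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VFIO_MDEV=m
CONFIG_DRM_I915_GVT=y
CONFIG_DRM_I915_GVT_KVMGT=m
CONFIG_SAMPLE_VFIO_MDEV_MTTY=m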

Boot the new kernel. Load all modules: vfio-pci, vfio-mdev, optionally mtty. Also i915 and kvmgt of course, but that probably happened during boot already.
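
Something along these lines (a sketch, module names as built from the options above):

modprobe vfio-pci
modprobe vfio-mdev
modprobe mtty        # optional, the sample serial port driver
modprobe i915        # most likely loaded at boot already
modprobe kvmgt       # likewise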

Go to the /sys/class/mdev_bus directory. This should look like this:

kraxel@broadwell ~# cd /sys/class/mdev_bus
kraxel@broadwell .../class/mdev_bus# ls -l
total 0
lrwxrwxrwx. 1 root root 0 17. Jan 10:51 0000:00:02.0 -> ../../devices/pci0000:00/0000:00:02.0
lrwxrwxrwx. 1 root root 0 17. Jan 11:57 mtty -> ../../devices/virtual/mtty/mtty

Each driver with mdev support has a directory there. Go to $device/mdev_supported_types to check what kind of virtual devices you can create.

kraxel@broadwell .../class/mdev_bus# cd 0000:00:02.0/mdev_supported_types
kraxel@broadwell .../0000:00:02.0/mdev_supported_types# ls -l
total 0
drwxr-xr-x. 3 root root 0 17. Jan 11:59 i915-GVTg_V4_1
drwxr-xr-x. 3 root root 0 17. Jan 11:57 i915-GVTg_V4_2
drwxr-xr-x. 3 root root 0 17. Jan 11:59 i915-GVTg_V4_4

As you can see intel supports three different configurations on my machine. They differ in the amount of video memory and in the number of instances you can create. Check the description and available_instance files in the directories:

kraxel@broadwell .../0000:00:02.0/mdev_supported_types# cd i915-GVTg_V4_2
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# cat description 
low_gm_size: 64MB
high_gm_size: 192MB
fence: 4
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# cat available_instance 
2

Now it is possible to create virtual devices by writing a UUID into the create file:

kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# uuid=$(uuidgen)
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# echo $uuid
f321853c-c584-4a6b-b99a-3eee22a3919c
kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# sudo sh -c "echo $uuid > create"

The new vgpu device will show up as subdirectory of the host gpu:

kraxel@broadwell .../mdev_supported_types/i915-GVTg_V4_2# cd ../../$uuid
kraxel@broadwell .../0000:00:02.0/f321853c-c584-4a6b-b99a-3eee22a3919c# ls -l
total 0
lrwxrwxrwx. 1 root root    0 17. Jan 12:31 driver -> ../../../../bus/mdev/drivers/vfio_mdev
lrwxrwxrwx. 1 root root    0 17. Jan 12:35 iommu_group -> ../../../../kernel/iommu_groups/10
lrwxrwxrwx. 1 root root    0 17. Jan 12:35 mdev_type -> ../mdev_supported_types/i915-GVTg_V4_2
drwxr-xr-x. 2 root root    0 17. Jan 12:35 power
--w-------. 1 root root 4096 17. Jan 12:35 remove
lrwxrwxrwx. 1 root root    0 17. Jan 12:31 subsystem -> ../../../../bus/mdev
-rw-r--r--. 1 root root 4096 17. Jan 12:35 uevent

You can see the device landed in iommu group 10. We’ll need that in a moment.

On the host: configure guests

Ideally this would be as simple as adding a <hostdev> entry to your guest’s libvirt xml config. The mdev devices don’t have a pci address on the host though, and because of that they must be passed to qemu using the sysfs device path instead of the pci address. libvirt doesn’t (yet) support sysfs paths though, so it is a bit more complicated for now. A lot of the setup libvirt does automatically for hostdevs must be done manually instead.

First, we must allow qemu to access /dev. By default libvirt uses control groups to restrict access; that must be turned off. Edit /etc/libvirt/qemu.conf, uncomment the cgroup_controllers line and remove "devices" from the list. Then restart libvirtd.
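
The edited line in /etc/libvirt/qemu.conf then looks roughly like this (your stock list may contain slightly different controllers, just drop "devices" from whatever is there):

cgroup_controllers = [ "cpu", "memory", "blkio", "cpuset", "cpuacct" ]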

Second, we must allow qemu to access the iommu group (10 in my case). A simple chmod will do:

kraxel@broadwell ~# chmod 666 /dev/vfio/10

Third, we must update the guest configuration:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  [ ... ]
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memoryBacking>
    <locked/>
  </memoryBacking>
  [ ... ]
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,addr=05.0,sysfsdev=/sys/class/mdev_bus/0000:00:02.0/f321853c-c584-4a6b-b99a-3eee22a3919c'/>
  </qemu:commandline>
</domain>

There is a special qemu namespace which can be used to pass extra command line arguments to qemu. We use it here for a qemu feature not yet supported by libvirt (sysfs paths for vfio-pci). Also we must explicitly allow the guest memory to be locked.

Now we are ready to go:

kraxel@broadwell ~# virsh start --console $guest

In the guest

It is a good idea to prepare the guest a bit before adding the vgpu to the guest configuration. Set up a serial console, so you can talk to it even in case graphics are broken. Blacklist the i915 module and load it manually, at least until you have a known-working configuration. Also booting to runlevel 3 (aka multi-user.target) instead of 5 (aka graphical.target) and starting the xorg server manually is better for now.
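
A rough sketch of that preparation inside the guest (standard systemd and modprobe knobs, adjust to taste):

# boot to multi-user.target instead of graphical.target
systemctl set-default multi-user.target
# get a login on the serial console
systemctl enable serial-getty@ttyS0.service
# keep i915 from loading automatically ...
echo "blacklist i915" > /etc/modprobe.d/i915-blacklist.conf
# ... and load it by hand when you want to test
modprobe i915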

For the guest machine intel recommends the 4.8 kernel. In theory newer kernels should work too; in practice they didn’t last time I tested (4.10-rc2). Also make sure the xorg server uses the modesetting driver; the intel driver didn’t work in my testing. This config file will do:

root@guest ~# cat /etc/X11/xorg.conf.d/intel.conf 
Section "Device"
        Identifier  "Card0"
#       Driver      "intel"
        Driver      "modesetting"
        BusID       "PCI:0:5:0"
EndSection

I’m starting the xorg server with x11vnc, xterm and mwm (motif window manager) using this little script:

#!/bin/sh

# debug
echo "# $0: DISPLAY=$DISPLAY"

# start server
if test "$DISPLAY" != ":4"; then
        echo "# $0: starting Xorg server"
        exec startx $0 -- /usr/bin/Xorg :4
        exit 1
fi
echo "# $0: starting session"

# configure session
xrdb $HOME/.Xdefaults

# start clients
x11vnc -rfbport 5904 &
xterm &
exec mwm

The session runs on display 4, so you should be able to connect from the host this way:

kraxel@broadwell ~# vncviewer $guest_ip:4

Have fun!

raspberry pi status update

You might have noticed meanwhile that Fedora 25 ships with raspberry pi support and might have wondered what this means for my packages and images.

The fedora images use a different partition layout than my images. Specifically the fedora images have a separate vfat partition for the firmware and uboot and the /boot partition with the linux kernels lives on ext2. My images have a vfat /boot partition with everything (firmware, uboot, kernels), and the rpms in my repo will only work properly on such a sdcard. You can’t mix & match stuff and there is no easy way to switch from my sdcard layout to the fedora one.

Current plan forward:

I will continue to build rpms for armv7 (32bit) for a while, for existing installs. There will be no new fedora 25 images though. For new devices or reinstalls I recommend using the official fedora images instead.

Fedora 25 has no aarch64 (64bit) support, although it is expected to land in one of the next releases. Most likely I’ll create new Fedora 25 images for aarch64 (after final release), and of course I’ll continue to build kernel updates too.

Finally some words on the upstream kernel status:

The 4.8 dwc2 usb host adapter driver has some serious problems on the raspberry pi. 4.7 works ok, and so do the 4.9-rc kernels. But 4.7 doesn’t get stable updates any more, so I jumped straight to the 4.9-rc kernels for mainline. You might have noticed already if you updated your rpi recently. The raspberry pi foundation kernels don’t suffer from that issue as they use a different (not upstream) driver for the dwc.

advanced network booting

I use network booting a lot. Very convenient, I rarely have to fiddle with boot iso images these days. My setup has evolved over the years, and I’m going to describe here how it looks today.

general intro

First thing needed is a machine which runs the dhcp server. In a typical setup the home internet router also provides the dhcp service, but usually you can’t configure the dhcp server much. So I turned off the dhcp server in the router and configured a linux machine with dhcpd instead. These days this is a raspberry pi, running fedora 24, serving dhcp and dns.

I used to run this on my x86 server acting as NAS, but that turned out to be a bad idea. We have power failures now and then, the NAS checks the filesystems at boot, which can easily take half an hour, and there used to be no dhcp and dns service during that time. In contrast the raspberry pi is back in service in less than a minute.

Obviously the raspberry pi itself can’t get an ip address via dhcp, it needs a static address assigned instead. systemd-networkd does the job with this ethernet.network config file:

[Match]
Name=eth0

[Network]
Address=192.168.2.10/24
Gateway=192.168.2.1
DNS=127.0.0.1
Domains=home.kraxel.org

Address=, Gateway= and Domains= must be adjusted according to your local network setup of course. The same applies to all other config file snippets following below.

dhcp setup

Ok, let’s walk through the dhcpd.conf file:

option arch code 93 = unsigned integer 16;      # RFC4578

Defines option arch which will be used later.

default-lease-time 86400;                       # 1d
max-lease-time 604800;                          # 7d

With long lease times there will be less chatter in the log file, and nobody will lose network connectivity in case the dhcpd server is down for a moment (for example when fiddling with the config and dhcpd not restarting properly due to syntax errors).

ddns-update-style none;
authoritative;

No ddns used here. I assign static ip addresses instead (more on this below).

subnet 192.168.2.0 netmask 255.255.255.0 {
        # network
        range 192.168.2.200 192.168.2.249;
        option routers 192.168.2.1;

Default ip address and network of my router (Fritzbox). The small range configured here is used for dynamic ip addresses only, the room below 200 is left for static ip addresses.

        # dns, ntp
        option domain-name "home.kraxel.org";
        option domain-name-servers 192.168.2.10;
        option ntp-servers ntp.home.kraxel.org;

Oh, right, almost forgot to mention: the raspberry pi also runs ntp, so not each and every machine has to talk to the ntp pool servers. We announce that here (together with the dns server), so the dhcp clients can pick up that information.

Nothing tricky so far, now the network boot configuration starts:

        if (option arch = 00:0b) {
                # EFI aarch64
                next-server 192.168.2.14;
                filename "efi-a64/grubaa64.efi";

Here I use the arch option to figure out what kind of client is booting, to pick a matching boot file. This entry is for 64bit arm machines, loading the grub2 efi binary. grub in turn will look for a grub.cfg file in the efi-a64/ directory, so placing stuff in sub-directories separates things nicely.

        } else if (option arch = 00:09) {
                # EFI x64 -- ipxe
                next-server 192.168.2.14;
                filename "efi-x64/shim.efi";
        } else if (option arch = 00:07) {
                # EFI x64 -- ovmf
                next-server 192.168.2.14;
                filename "efi-x64/shim.efi";

Same for x86_64. For some reason ovmf (with the builtin virtio-net driver) and ipxe efi drivers use different arch ids to signal x86_64. I simply list both here to get things going no matter what.

Update (Nov 9th): 7 is the correct value for x86_64. Looks like RFC 4578 got this wrong initially; there is an errata for this. ipxe is fixed meanwhile.

        } else if ((exists vendor-class-identifier) and
                   (option vendor-class-identifier = "U-Boot.armv8")) {
                # rpi 3
                next-server 192.168.2.108;
                filename "rpi3/boot.conf";

This is uboot (64bit) on a raspberry pi 3 trying to netboot. Well, any aarch64 to be exact, but I don’t own any other device. The boot.conf file is a dummy and doesn’t exist. uboot will pick up the rpi3/ sub-directory though, and after failing to load boot.conf it will try to boot pxelinux style, i.e. look for config files in the rpi3/pxelinux.cfg/ directory.

        } else if ((exists vendor-class-identifier) and
                   (option vendor-class-identifier = "U-Boot.armv7")) {
                # rpi 2
                next-server 192.168.2.108;
                filename "rpi2/boot.conf";

Same for armv7 (32bit) uboot, raspberry pi 2 in my case. raspberry pi 3 with 32bit uboot will land here too.

        } else if ((exists user-class) and ((option user-class = "iPXE") or
                                            (option user-class = "gPXE"))) {
                # bios -- gpxe/ipxe
                next-server 192.168.2.14;
                filename "http://192.168.2.14/tftpboot/pxelinux.0";

This is for ipxe/gpxe on classic bios-based x86 machines. ipxe can load files over http, and pxelinux running with ipxe as network driver can do this too. So we can specify a http url as filename, and it’ll work fine and faster than using tftp.

        } else {
                # bios -- chainload gpxe
                next-server 192.168.2.14;
                filename "gpxe-undionly.kpxe";
        }

Everything else chainloads gpxe. I think this has been unused for ages, new physical machines have EFI these days and virtual machines already use gpxe/ipxe roms when running with seabios.

}
include "/etc/dhcp/home.4.dhcp.inc";
include "/etc/dhcp/xeni.4.dhcp.inc";

Here the static ip addresses are assigned. This is in an include file because these files are generated by a script. I basically have a file with a table listing hostname, mac address and ip address, and my script generates include files for dhcpd.conf and the dns zone files from that. Inside the include there are lots of entries looking like this:

host photosmart {
        hardware ethernet 10:60:4b:11:71:34;
        fixed-address 192.168.2.53;
}

So all machines get a static ip address assigned, based on the mac address. Together with the dns configuration this allows connecting to the machines by hostname.
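
Just to illustrate the idea (a sketch only, my actual generator script and table format differ), such a generator for the dhcpd include could be as simple as this:

#!/bin/sh
# hosts.txt has three columns: hostname, mac address, ip address
while read name mac ip; do
        echo "host $name {"
        echo "        hardware ethernet $mac;"
        echo "        fixed-address $ip;"
        echo "}"
done < hosts.txt > /etc/dhcp/home.4.dhcp.inc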

In case you want a different netboot configuration for a specific host it is also possible to add next-server and filename entries here to override the network defaults configured above.

That’s it for dhcp.

tftp setup

Ok, a tftp server is needed too. That can run on the same machine, but doesn’t have to. You can even have multiple machines serving tftp. You might have noticed that not all entries above have the same next-server. The raspberry pi entries point to my workstation, where I cross-compile arm kernels now and then, so I can boot them directly. All other entries point to the NAS where all the install trees are located. Getting tftp running is easy these days:

  1. dnf install tftp-server
  2. systemctl enable tftp
  3. Place the boot files in /var/lib/tftpboot

Placing the boot files is left as an exercise to the reader, otherwise this becomes too long. Maybe I’ll do a separate post on the topic later.

For now just a general hint: in.tftpd has a --verbose switch. Turning that on and watching the log is a good way to see which files clients are trying to load. Helpful for trouble-shooting.
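
One way to turn it on is a systemd drop-in for the tftp service (a sketch; check the ExecStart line of the unit your tftp-server package ships before copying this):

# systemctl edit tftp.service -- add:
[Service]
ExecStart=
ExecStart=/usr/sbin/in.tftpd --verbose -s /var/lib/tftpboot

# restart, then watch the log while a client boots:
systemctl restart tftp.socket tftp.service
journalctl -u tftp -f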

httpd setup

Only needed if you want to boot over http as mentioned above. Unless you have apache httpd running already you have to install and enable it of course. Then you can drop this tftpboot.conf file into /etc/httpd/conf.d to make /var/lib/tftpboot available over http too:

<Directory "/var/lib/tftpboot">
        Options Indexes FollowSymLinks Includes
        AllowOverride None
        Require ip 127.0.0.1/8 192.168.0.0/16
</Directory>

Alias   "/tftpboot"      "/var/lib/tftpboot"

libvirt setup

So, of course I want to netboot my virtual machines too, even if they are on a NAT-ed libvirt network. In that case they are not using the dhcp server running on my raspberry pi, but the dnsmasq server started by libvirt on the virtualization host. Luckily it is possible to configure network booting in the network configuration, like this:

<network>
  <name>default</name>
  <forward mode='nat'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.10' end='192.168.123.99'/>
      <bootp file='http://192.168.2.14/tftpboot/pxelinux.0'/>
    </dhcp>
  </ip>
</network>

That’ll work just fine for seabios guests, but not when running ovmf. Unfortunately libvirt doesn’t support serving different boot files depending on the client architecture. But there is an easy way out: just define a separate network for ovmf guests:

<network>
  <name>efi-x64</name>
  <forward mode='nat'/>
  <ip address='192.168.132.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.132.10' end='192.168.132.99'/>
      <bootp file='efi-x64/shim.efi' server='192.168.2.14'/>
    </dhcp>
  </ip>
</network>

Ok, almost there. While the tcp-based http protocol goes through the NAT forwarding without a hitch, the udp-based tftp protocol doesn’t. It needs some extra help: the nf_nat_tftp kernel module handles that. You can use the systemd modules-load service to make sure it gets loaded on each boot.
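
For example (the file name is just a suggestion):

# /etc/modules-load.d/nat-tftp.conf
nf_nat_tftp

# load it right away too
modprobe nf_nat_tftp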

using qemu directly

The qemu user network stack has netboot support too, you point it to the tftp server this way:

qemu-system-x86_64 \
        -netdev user,id=net0,bootfile=http://192.168.2.14/tftpboot/pxelinux.0 \
        -device virtio-net-pci,netdev=net0 \
        [ more args follow here ]

In case the tftpboot directory is available on the local machine it is also possible to use the builtin tftp server instead:

qemu-system-x86_64 \
        -netdev user,id=net0,tftp=/var/lib/tftpboot,bootfile=pxelinux.0 \
        -device virtio-net-pci,netdev=net0 \
        [ more args follow here ]

Using virtio-gpu with libvirt and spice

It’s been a while since the last virtio-gpu status report. So, here we go with an update.

I gave a talk about qemu graphics at KVM Forum 2016 in Toronto, covering (among other things) virtio-gpu. Here are the slides. Below I’ll summarize the important stuff for those who want to play with it. The new blog picture above is the Toronto skyline btw.

First the good news: almost everything needed is upstream meanwhile, so using virtio-gpu (with virgl aka opengl acceleration) is a lot easier these days. Fedora 24 supports it out-of-the-box for both host and guest.

Requirements for the guest:

  • linux kernel 4.4 (version 4.2 without opengl)
  • mesa 11.1
  • xorg server 1.19, or 1.18 with commit "5627708 dri2: add virtio-gpu pci ids" backported

Requirements for the host:

  • qemu 2.6
  • virglrenderer
  • spice server 0.13.2 (development release)
  • spice-gtk 0.32 (used by virt-viewer & friends)
    Note that 0.32 got a new shared library major version, therefore the tools using this must be compiled against that version.
  • mesa 10.6
  • libepoxy 1.3.1

The libvirt domain config snippet:

<graphics type='spice'>
  <listen type='none'/>
  <gl enable='yes'/>
</graphics>
<video>
  <model type='virtio'/>
</video>

The final important bit is that spice needs a unix socket connection for opengl to work; you’ll get one by attaching the viewer to qemu this way:

virt-viewer --attach $domain

If everything went fine you should see this in the guest’s linux kernel log:

root@fedora ~# dmesg | grep '\[drm\]'
[drm] Initialized drm 1.1.0 20060810
[drm] pci: virtio-vga detected
[drm] virgl 3d acceleration enabled
[drm] virtio vbuffers: 272 bufs, 192B each, 51kB total.
[drm] number of scanouts: 1
[drm] number of cap sets: 1
[drm] cap set 0: id 1, max-version 1, max-size 308
[drm] Initialized virtio_gpu 0.0.1 0 on minor 0

And glxinfo should print something like this:

root@fedora ~# glxinfo | grep ^OpenGL
OpenGL vendor string: Red Hat
OpenGL renderer string: Gallium 0.4 on virgl
OpenGL core profile version string: 3.3 (Core Profile) Mesa 12.0.2
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 12.0.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 12.0.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:

Using virtio-input with libvirt

The new virtio input devices are not that new any more. Support was merged in qemu 2.4 (host) and linux kernel 4.1 (guest). Which means that most distributions should have picked up support for virtio-input meanwhile. libvirt gained support for virtio-input too (version 1.3.0 & newer), so using virtio-input devices is as simple as adding

<input type='tablet' bus='virtio'/>

to your domain xml configuration. Or replacing the usb tablet with a virtio tablet, possibly eliminating the need to have a usb host adapter in your virtual machine, as often the usb tablet is the only usb device.

There are also virtio keyboard and mouse devices. Using them on x86 isn’t very useful as every virtual machine has ps/2 keyboard and mouse anyway. For ppc64, arm and aarch64 architectures the virtio keyboard is a possible alternative to the usb keyboard:

<input type='keyboard' bus='virtio'/>

At the moment the firmware (edk2/slof) lacks support for virtio keyboards, so switching from usb to virtio loses the ability to do any keyboard input before the linux kernel driver loads, i.e. you can’t operate the grub boot menu. I hope this changes in the future.

If you have to stick to the usb keyboard or usb tablet due to missing guest drivers for virtio input, I strongly suggest using xhci as usb host adapter:

<controller type='usb' model='nec-xhci'/>
<input type='tablet' bus='usb'/>

xhci emulation needs noticeably fewer cpu cycles compared to uhci, ohci and ehci host adapters. That of course requires xhci driver support in the guest, but that should be less of an issue these days. Windows 7 is probably the only guest without xhci support which is still in widespread use.

Two new images uploaded

Uploaded two new images.

First a centos7 image for the raspberry pi 3 (arm64). Very similar to the fedora images, see the other raspberry pi posts for details.

Second an armv7 qemu image with grub2 boot loader. Boots with efi firmware (edk2/tianocore). No need to copy kernel+initrd from the image and pass that to qemu, making the boot process much more convenient. Looks like this will not land in Fedora though. The Fedora maintainers prefer to stick with u-boot for armv7 and plan to improve u-boot instead so it works better with qemu. Which will of course solve the boot issue too, but that improvement doesn’t exist today.

Usage: Fetch edk2.git-arm from the firmware repo. Use this to define a libvirt guest:

<domain type='qemu'>
  <name>fedora-armv7</name>
  <memory unit='KiB'>1048576</memory>
  <os>
    <type arch='armv7l' machine='virt'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2.git/arm/QEMU_EFI-pflash.raw</loader>
    <nvram template='/usr/share/edk2.git/arm/vars-template-pflash.raw'/>
  </os>
  <features>
    <gic version='2'/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>cortex-a15</model>
  </cpu>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/path/to/arm-qemu-f24-uefi.raw'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </controller>
    <console/>
  </devices>
</domain>
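
Assuming you saved the xml above as fedora-armv7.xml, defining and starting the guest looks like this:

virsh define fedora-armv7.xml
virsh start --console fedora-armv7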

Update 1: root password for the qemu uefi image is “uefi”.
Update 2: comments closed due to spam flood, sorry.

New Raspberry PI images uploaded.

I’ve uploaded new images. Both a Fedora 23 refresh and new Fedora 24 images. There are not many changes, almost all notes from the two older articles here and here still apply.

A noteworthy change is that the 32bit images don’t have a 64bit kernel for the rpi3 any more, so both rpi2 and rpi3 boot an armv7 kernel. If you want to run your rpi3 in 64bit mode go for the arm64 image, which has both 64bit kernel and userspace. The reason for that is that 32bit dnf running on a 64bit kernel throws errors due to the unknown aarch64 architecture, so you can’t update the system. And, no, using “linux32 dnf …” doesn’t work either, dnf still complains, it doesn’t know what armv8 is …

Fedora on Raspberry PI updates

Some updates for the Running Fedora on the Raspberry PI 2 article.

There has been great progress on getting the Raspberry PI changes merged upstream in the last half year. u-boot has decent support meanwhile for the whole family, including 64bit support for the Raspberry PI 3. Raspberry PI 2 support has been merged upstream during the 4.6 merge window. It looks like Raspberry PI 3 will follow in the next (4.7) merge window. Time to have a closer look at all this new stuff.

arm images

There are two different kinds of arm images now:

First, arm-rpi2-f23-foundation. These images use a kernel built from the tree maintained by the Raspberry PI Foundation at github. They are like the older ones, just with newer packages. The kernels are booted directly by the firmware.

Second, arm-rpi2-f23-mainline. These images use a mainline kernel, with a few patches on top for Raspberry PI 3 support (device tree, 64bit, wifi). I expect the number of patches will decrease quickly as things get merged mainline with the next merge windows. The kernel is booted using u-boot. On the Raspberry PI 3 it’ll boot a 64bit kernel (with 32bit userspace) by default. If you don’t want that you can just run “dnf remove uboot-rpi3-64”, which will remove both 64bit uboot and kernel, then reboot.

There are no big differences between the two image types. At the end of the day it is just a different set of packages. You can move from the foundation kernels (package name “kernel-rpi2”) to the mainline kernels (package names “kernel-main” and “kernel-main64”) and back without problems. Having them installed in parallel works fine too. The foundation kernels set kernel= in config.txt in their postinstall script, so one just needs to install and reboot. Switching to mainline is done by installing them, then commenting out the kernel= line in config.txt. The next boot will then load uboot, which in turn loads the mainline kernel.
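
To illustrate, the relevant bit of config.txt is just this line (the image name is a placeholder, the kernel packages fill in the real one):

# boot the foundation kernel directly
kernel=<foundation-kernel-image>
# ... or comment it out to fall back to uboot, which then boots the mainline kernel
#kernel=<foundation-kernel-image>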

arm64 images

While looking at the image directory you might have noticed the arm64-rpi3-f23-mainline image. That is a full 64bit (aka aarch64) build, with both 64bit kernel and userspace. Will boot on the Raspberry PI 3 only, for obvious reasons.

kvm

There is progress with kvm too. Recent firmware loads the kernel in hyp mode, so the hypervisor extensions are available to the linux kernel. Also it seems the kvm core and irqchip (GIC) emulation are separated now: kvm initializes successfully, but the kernel doesn’t provide an irqchip because the Raspberry PI doesn’t have a GIC. The result is this:

[kraxel@pi-dpy ~]$ sudo lkvm run
# lkvm run -k /boot/vmlinuz-4.5.1-103-rpi2 -m 448 -c 4 --name guest-10565
Error: Unsupported KVM extension detected: KVM_CAP_IRQCHIP
Fatal: Failed to create virtual GIC

So, userspace can’t deal with the missing GIC emulation (qemu fails too). But it looks like by doing GIC emulation in userspace it should be possible to run kvm on the Raspberry PI.