When developing cross-platform code, testing is critical. Access to various families of operating systems allows rapid identification of (some) inadvertent platform-specific assumptions. While there are public testing services like Travis CI, you will inevitably need to test locally at some point. I use a set of small virtual machines on my development machine for this.

I could use Xen or VirtualBox, but I am happier with KVM/QEMU in my particular environment. Unfortunately, because my network is WiFi-based, it is (in general) impossible to use the standard bridged networking setup that systems like libvirtd are geared to support. Thankfully Bodhi Zazen’s guide outlines a method for layer 3 ‘bridging’ which suits my needs just fine.

What follows is a brief outline of my configuration for my Gentoo host, but the concepts should transfer to other Linux hosts.

Creating the devices

/etc/conf.d/net
modules="wpa_supplicant"

config_wlan0="192.168.178.38/24"
routes_wlan0="default via 192.168.178.1"
wpa_supplicant_wlan0="-d -f /var/log/wpa_supplicant.log"

tuntap_tap0="tap"
config_tap0="null"
routes_tap0="-net 192.168.178.128/25 dev tap0"

My physical machines sit on the network 192.168.178.0/24, my gateway at 192.168.178.1, and my development machine on 192.168.178.38. I partitioned off the top 128 addresses for virtual machines, i.e. the subnet 192.168.178.128/25. Note: this might be problematic if you need to support multiple virtualisation hosts on the same network.

I create a single tap device, tap0, which the VMs connect to for their networking. Note that I didn’t give it an IP address; instead, I forward packets intended for the VMs via static routes.

This is sufficient for the guests’ packets to reach the host machine, but packets from a guest won’t leave the host, and packets from the wider network won’t reach a guest (neither IP nor ARP). For this I need to set up forwarding.
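For a quick one-off test, or on a system without netifrc, the same tap setup can be reproduced by hand with iproute2 — a sketch of the equivalent commands, run as root:

```shell
ip tuntap add dev tap0 mode tap            # create the tap device
ip link set tap0 up                        # bring it up; no IP address assigned
ip route add 192.168.178.128/25 dev tap0   # steer the guest subnet at the tap
```

These mirror the tuntap_tap0, config_tap0, and routes_tap0 lines above, minus persistence across reboots.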

Forwarding the packets

/etc/local.d/vm_forwarding.start
#!/bin/sh

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/default/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp

I enable IP and ARP forwarding using a local.d script. Permanently enabling forwarding may not be the greatest idea on a more public network, but it works fine for testing on a small private one. I only considered IPv4 due to the specifics of my development network.
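The same settings can be made persistent through sysctl rather than a local.d script — an equivalent fragment (the file name is illustrative):

```
# /etc/sysctl.d/30-vm-forwarding.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 1
net.ipv4.conf.all.proxy_arp = 1
```

Either mechanism works; the local.d variant just makes it easier to pair with a matching .stop script should you want forwarding off by default.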

Virtual Distributed Ethernet

If you want to run multiple guests simultaneously, the VDE networking option is hands down the simplest solution; without it, the tap device is locked by the first VM. I’ve seen comments that VDE is unmaintained, but it works and requires zero configuration, which is a massive win.

/etc/conf.d/vde
VDE_MODPROBE_TUN="yes"

VDE_TAP="tap0"

VDE_SOCK_CHMOD="770"
VDE_SOCK_CHOWN=":qemu"

VDE_OPTS="-tap ${VDE_TAP} -daemon"

I believe I used the default vde configuration file and simply enabled vde on boot with rc-update add vde default. Your system will likely vary given the popularity of OpenRC…
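For reference, the service amounts to little more than launching the switch by hand. Assuming vde2’s vde_switch (check the flags against your version’s man page), the configuration above boils down to roughly:

```shell
vde_switch -tap tap0 -daemon -m 770 -g qemu
```

The daemon attaches to tap0 and creates its control socket at the default location, which is what qemu’s -net vde option connects to when given no explicit sock path.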

Running the VM

run-vm.sh
#!/bin/sh

die () { echo "$@" 1>&2; exit 1; }

[ -n "$1" ] || die "usage: $0 <name>"

. "./$1.conf" || die "invalid configuration file"

[ -n "${MAC}" ] || die "MAC not supplied"
[ -n "${IMG}" ] || die "IMG not supplied"

qemu-system-x86_64 -enable-kvm -m 4G -smp 8 \
    -drive "file=img/${IMG}.raw,format=raw" \
    -usb \
    -net nic,macaddr="${MAC}" -net vde \
    -nographic -vga qxl \
    -spice "disable-ticketing,unix,addr=/tmp/virt-${IMG}.sock"

freebsd.conf
IP="192.168.178.131"
MAC="52:C8:E9:A3:DE:D9"
IMG="freebsd"

To run a VM I whipped up a terrible shell script and configuration file. I would wholeheartedly recommend a more systematic approach to error handling than the above.
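One cheap improvement, for instance, is sanity-checking values before qemu ever sees them. A minimal POSIX-sh sketch (valid_mac is a hypothetical helper, not part of the script above):

```shell
#!/bin/sh
# valid_mac: succeed only for aa:bb:cc:dd:ee:ff style addresses (case-insensitive).
valid_mac () {
    echo "$1" | grep -Eiq '^([0-9a-f]{2}:){5}[0-9a-f]{2}$'
}

valid_mac "52:C8:E9:A3:DE:D9" && echo "ok"    # the MAC from freebsd.conf
valid_mac "not-a-mac" || echo "rejected"
```

Plugging something like `valid_mac "${MAC}" || die "malformed MAC"` into the script turns a cryptic qemu error into an immediate, readable one.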

Given a local directory img holding the guest disk images, and a local freebsd.conf, we can invoke the script with ./run-vm.sh freebsd.

For any semblance of performance you really do need the -enable-kvm option (with the associated kernel modules and hardware requirements). This shouldn’t be an issue on any halfway decent distribution or hardware from the last five years.

QXL/SPICE is the friendliest method for interacting with your VM, assuming your guests have the appropriate drivers. If you have the spice-gtk package available then spicy is a decent frontend. In this example I would use spicy --uri=spice+unix:///tmp/virt-${IMG}.sock as the invocation.

There are more aggressive KVM/QEMU options available, but the above should serve you well on recent versions of Linux and FreeBSD.

Future extensions

The largest pain point of the above is the need to manually configure each guest’s network settings. Ideally I would run a local DHCP server purely for the guests, but my enthusiasm for this project is rapidly waning. Then again, how often does one really need to reconfigure their networking?…
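Should I ever get around to it, a dnsmasq instance bound to the tap device would probably suffice — a hypothetical fragment, with addresses mirroring the examples above:

```
# /etc/dnsmasq.conf — serve DHCP only on the guest subnet
interface=tap0
dhcp-range=192.168.178.129,192.168.178.254,255.255.255.128,12h
dhcp-host=52:C8:E9:A3:DE:D9,192.168.178.131   # pin the freebsd guest's address
```

The dhcp-host line keeps the per-guest static addresses from the .conf files while letting ad-hoc guests pick up a lease automatically.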