OpenEmbedded Xen Network Driver VM

I wrote about a similar topic what feels like ages ago, and I guess it was (8 months is a long time in this business). Since then I’ve been throwing some spare time at this same problem and I’ve actually made measurable progress. A number of serendipitous events came together to make this possible, the most important of which is the massive update to the Xen recipe in meta-virtualization. With this it’s super easy to crank out a Xen pvops kernel, so combined with an image that has the right plumbing in place, building an NDVM isn’t as hard as you might think.

So armed with the new Xen stuff from meta-virtualization I set out to build a reference NDVM. This isn’t intended to replace the NDVM in a system like XenClient-XT, which is far more sophisticated. It’s just for experimentation, and I don’t plan to build anything fancier than a dumb Ethernet bridge into it.

To host this I’ve started a layer I call ‘meta-integral’. I know, all the good names were taken. Anyways this is intended to be a sort of distro layer where I can experiment with Xen stuff. Currently I’ve got a distro config for dom0 and an NDVM. The dom0 work is still very much a work in progress, but the NDVM (much simpler) will actually boot as a PV guest.

To build this just clone my git repo with the build scripts and it’ll do all of the hard work for you:

git clone https://github.com/flihp/oe-build-scripts.git
cd oe-build-scripts
git checkout ndvm
./build.sh | tee build.log

This will crank out an image suitable to run on an Intel SandyBridge (SNB) system. I’ve only tested PV guests so you’ll have to set up a config like the following:

kernel = "/usr/lib/xen-common/bzImage-sugarbay.bin"
extra = "root=/dev/xvda console=hvc0"
iommu = "soft"
memory = "512"
name = "ndvm"
uuid = "a9ae8853-f1e9-41ca-9904-0e906efeb4af"
vcpus = "1"

disk = ['phy:/dev/loop0,xvda,w']
pci = ['0000:04:00.0']

Both the kernel image and the rootfs image must be copied over to the Xen dom0 you want to test the NDVM on. The kernel is the one listed on the kernel line above and can be found at tmp-eglibc/deploy/images/sugarbay/bzImage-sugarbay.bin relative to your build root. The rootfs image will be in the same directory and called something like integral-image-ndvm-sugarbay.ext3. Notice that the disk config is pointing at a loopback device; you’ll have to set this up with losetup just like any other loopback device. The part that differentiates this from any other PV guest is that we’re passing a PCI network device through to it so it can offer up a bridge to other guest VMs. The definitive documentation on how to do this with Xen is here: http://wiki.xen.org/wiki/Xen_PCI_Passthrough
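
For reference, here’s roughly what that prep looks like. This is only a sketch: the dom0 hostname and destination paths are placeholders I’ve picked to match the config above, so adjust to taste.

scp tmp-eglibc/deploy/images/sugarbay/bzImage-sugarbay.bin root@dom0:/usr/lib/xen-common/
scp tmp-eglibc/deploy/images/sugarbay/integral-image-ndvm-sugarbay.ext3 root@dom0:/root/

# on dom0: back /dev/loop0 with the rootfs so the disk line above resolves
losetup /dev/loop0 /root/integral-image-ndvm-sugarbay.ext3

# on dom0: make the NIC assignable for passthrough (rebinds it to pciback)
xl pci-assignable-add 0000:04:00.0

From there, xl create against the config above should get the NDVM booting.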

The bit that I had to wrangle to get the bridge set up properly with OE was the integration between a network interfaces file and the bridge. I’ve been spoiled by Debian and its seamless integration between the two. OE has no such niceties. In this situation I had to choose between hacking up a script manually or finding the scripts that integrate the interfaces configuration with the bridge and baking them into the bridge-utils package from meta-oe. I figured getting bridges integrated with interfaces would be useful to others, so I went through the Debian source package, extracted the scripts and baked them into OE directly. Likely this should go ‘upstream’ but for now this specialization is just sitting in my meta-integral layer.
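
For the curious: on Debian these are the ifupdown hooks that parse the bridge_* options, so if the packaging did its job you should see them in the NDVM rootfs. The paths below are what Debian’s bridge-utils uses; I’m assuming the OE packaging keeps the same layout.

# run inside the NDVM (or against the mounted rootfs image)
ls /etc/network/if-pre-up.d/bridge /etc/network/if-post-down.d/bridge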

So after fixing up the bridge-utils package so it plays nice with the interfaces file, the interfaces file in our NDVM looks like this:

# /etc/network/interfaces -- configuration file for ifup(8), ifdown(8)
 
# The loopback interface
auto lo
iface lo inet loopback

# real interface
auto eth0
iface eth0 inet manual

# xen bridge
auto xenbr0
iface xenbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_waitport 0
        bridge_fd 0

So that’s it. Boot up this NDVM and it’ll have a physical network device and a bridge ready for consumption by other guests. I’ve not yet gone through and tested adding additional guests to the bridge, so I’m assuming there’s still a bit of work lurking there. I’ll give this last bit a go and hopefully have positive results to post sooner rather than later. I’ve also not tested this on XenClient-XT as the most recent stable release is getting a bit old and there are likely going to be incompatibilities in the netfront / netback stuff. This approach is likely a great starting point if you’re building a service VM you want to run on our next release of XT though, so feel free to fork and experiment.
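
When I do get around to wiring other guests into the bridge, I expect it to amount to a single vif line in each guest’s config, something like this (untested, so treat it as a sketch rather than a recipe):

# in another PV guest's config: have the NDVM, not dom0, back the vif
vif = [ 'bridge=xenbr0,backend=ndvm' ]

Here bridge= names the bridge inside the backend domain and backend= names the domain running netback, which in this case is the NDVM.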

UPDATE: Gave my NDVM a test just by giving the dom0 that was hosting it a vif. You can do this like so:

# xl network-attach Domain-0 backend=ndvm

The above assumes your NDVM has been named ‘ndvm’ in its VM config, naturally. Anyways this will pop up a vif in dom0 backed by the NDVM. Pretty slick IMHO. Now to wrap this whole thing up so dom0 and the NDVM can be built as a single image with OE … Sounds easy enough 🙂