TPM2 Certificate Authority

Certificate authorities use some serious security measures to protect their signing keys (or at least they're supposed to). These high security requirements are the realm of the traditional hardware security module (HSM). These devices address threat models that include attacks against both software and hardware, sometimes very sophisticated ones (like decapping). You get what you pay for, and so HSMs can get pretty pricey. These extreme levels of security and the associated price seem reasonable to me, at least for the CAs whose keys are trusted by the likes of Chrome and Firefox (and subsequently all of their users).

Not all CAs need that kind of security though. Definitely not the CA I use for various test applications. For this I’ve always just kept the keys on a thumb drive as a way to keep them off of my laptop.

The type, strength and cost of the security measures appropriate for an application aren't binary. The two example applications above are likely the extremes since the key protection schemes have such radically different security properties: an HSM vs a commodity OS storing keys / files on disk. But between these two applications lies a spectrum of others with varying threat models.

root CA signing key in TPM2

What these applications are isn't the interesting part so long as we're willing to admit that they exist. What *is* interesting are the technologies available to mitigate relevant threats. Historically, there haven't been many options available between the two extremes described above. Recently, however, some products have emerged to fill this space, including TPM2. Microsoft added TPM2 to their logo requirements for Windows 10 so it's effectively ubiquitous in newer laptops. TPM2 is an interesting option because it mitigates key theft through the use of "shielded locations" where sensitive operations (use of private keys) are carried out separate from the main CPU. For my particular application this is "good enough".

background

I’ve always used the OpenSSL tools and the associated commands to manage my local CA. The available documentation and collective knowledge on the internet make this tool indispensable and I’ve got my workflow scripted. What I want to do is integrate the TPM2 OpenSSL engine into my existing scripts and configurations.

Documentation for building and installing the tpm2tss OpenSSL engine is here: https://github.com/tpm2-software/tpm2-tss-engine/blob/master/INSTALL.md. The rest of this document assumes you have it installed and properly configured.

make your root key

Using the TPM2 to protect your CA signing keys is surprisingly easy. I typically shy away from using the word "easy" when talking about the TPM but in this case, thanks to the TPM2 OpenSSL engine, it really is. The `tpm2tss` engine provides a binary, `tpm2tss-genkey`, for key generation. For this example a simple RSA 2k key is generated:


$ tpm2tss-genkey --alg rsa --keysize 2048 ca-root.key.tss

I’ve given the root key the extension `.tss` because it’s in a form unique to the tpm2tss engine.

Once you've got your CA root key you can use the `openssl` command line tool to generate a self-signed certificate from it.
NOTE: The details of the openssl configuration file used for root CA signing keys are beyond the scope of this document. I'm using unmodified versions of these same files from the exceptional OpenSSL Certificate Authority guide by Jamie Nguyen, openssl.cnf.


$ openssl req \
    -config openssl.cnf \
    -new -x509 \
    -engine tpm2tss \
    -key ca-root.key.tss \
    -keyform engine \
    -days 7300 \
    -sha256 \
    -extensions v3_ca \
    -out ca-root.cert

Notice that the options for this command include `-engine tpm2tss` as well as `-keyform engine`. These tell openssl to use the TPM2-resident key when creating and self-signing the certificate. The output is a self-signed cert for the `ca-root.key.tss` key. It's possible to include the engine configuration in `openssl.cnf`; the options are provided on the command line here for emphasis.

issuing subkeys

The rest of the CA work is mechanical: we use the `openssl` tool's `req` and `ca` commands to issue subkeys for various purposes while providing the new engine-specific command line options. All of this is signing operations and certificate generation, all of which is supported by the tpm2tss OpenSSL engine. I followed Jamie Nguyen's documentation above, substituting in use of the `tpm2tss` engine on the command line where appropriate, and everything worked as expected.
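As a concrete sketch of what that looks like, here's roughly how issuing an intermediate signing key backed by the TPM2 might go. The file names and the `v3_intermediate_ca` extensions section are placeholders borrowed from Jamie Nguyen's intermediate CA configuration, so adjust them for whatever your own config uses:

$ tpm2tss-genkey --alg rsa --keysize 2048 ca-intermediate.key.tss

$ openssl req \
    -config openssl.cnf \
    -new \
    -engine tpm2tss \
    -key ca-intermediate.key.tss \
    -keyform engine \
    -sha256 \
    -out ca-intermediate.csr

$ openssl ca \
    -config openssl.cnf \
    -engine tpm2tss \
    -keyform engine \
    -extensions v3_intermediate_ca \
    -days 3650 \
    -notext \
    -in ca-intermediate.csr \
    -out ca-intermediate.cert

The `-keyform engine` option on the `ca` command tells openssl that the CA private key named in `openssl.cnf` (the TPM2 root key generated above) is in the engine's format.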

conclusions etc

TPM2 is a powerful tool and thanks to the `tpm2-tss-engine` we can continue to use the `openssl` command line tools we all know and love while benefiting from the protections offered by TPM2. The example above is just an example though. Root signing keys are rarely used and so storing them offline or in a token like a yubikey is often the best choice. TPM2 may be a better fit for intermediate keys, like those on a signing server integrated into a CI pipeline. Might be fun to build an "ideal" home CA architecture with a yubikey for the root keys and an embedded platform with a TPM2 for issuing credentials as an intermediate CA for some application.

Getting serial output on my Ivy Bridge NUC

I'd been using a rather old Sandy Bridge system (Intel DQ67EP + i7 2600S) to test my work on meta-measured for a long time. Very nice, very stable system. But with Intel getting out of the motherboard business I started eyeing their new venture: the NUC.

The DC53427HYE vPro IVB NUC

Everything is getting smaller and thankfully Intel has finally caught on. Better yet they're supporting TXT on some of these systems and so when the Haswell NUC was released over the summer the price on the vPro Ivy Bridge NUC (DC53427HYE) finally dropped enough to put it in my price range. Intel opted to skip the vPro NUC for Haswell anyways so it was my only option.

Let the fun of testing TXT on a new system begin! Like any new system we hope it works “out of the box”. But with TXT, odds are it won’t. My SNB system was great but this NUC … not so much, yet. The kicker though is that as systems get smaller something’s got to give. Space ain’t free and … well who needs a serial port anyways right?

NUC IVB guts

Where’s my serial?

So without serial hardware, debugging TXT / tboot is pretty much a lost cause. Sure you can slow down the VGA output with the vga_delay command line option. But if you want to actually analyze the output you need to be able to capture the text somehow and setting vga_delay to a large value and then copying the output by hand doesn’t scale (and it’s a stupid idea to boot). So the search for serial output continues.

To get TXT we must … ::cough:: … endure the presence of the Management Engine (ME) and it’s supposed to have a serial console built in. The docs for the system even say you can get BIOS output from the ME serial console. But for whatever reason, I spent an afternoon messing about with it and made no progress.

I’ve no way to know where the problem with this lies. There are tools for accessing the ME serial console for Linux but I couldn’t get early boot output. Setting up a serial console login for a bare metal Linux system worked but no early boot stuff (BIOS, grub or tboot). Judging by the AMT docs for Linux: you can’t count on the ME serial interface for much. The docs state that if you use Xen then the ME will get the DHCP address all messed up and that setting a static address in the ME interface just doesn’t work. So long story short, the ME serial interface is limited at best and these limitations preclude getting early boot messages like those from tboot.

Now that the ME bashing is done we must fall back on real serial hardware. Thankfully this thing has both a half height and a full height mini-PCIe slot and a market for these arcane serial things still exists. StarTech fills this need with the 2s1p mini PCIe card. This is a great little piece of hardware but the I/O ports aren't the default (likely to prevent conflict with on-board serial hardware) so we've gotta do some work before tboot will use it for output messages.

StarTech mini-PCIe serial card

NUC IVB with serial card

We have serial! Now what?

With some real serial hardware we’re half way there. Now we need to get tboot to talk to it. Unfortunately just adding serial to the logging= parameter in the boot config isn’t sufficient. The default base address for the serial I/O port used by tboot is 0x3F8 (see the README). This address corresponds to the “default serial port” aka COM1. So our shiny new mini-PCIe serial hardware must be using a different port.

tboot will log to an alternative port but we need to find the right I/O port address for the add on card. If you’re like me you keep a bootable Linux image on a USB drive handy for times like these. So we boot up the NUC and break out lspci to dump some data about our new serial card:

02:00.0 Serial controller: NetMos Technology PCIe 9912 Multi-I/O Controller (prog-if 02 [16550])
02:00.1 Serial controller: NetMos Technology PCIe 9912 Multi-I/O Controller (prog-if 02 [16550])

Not a bad start. This card has two serial ports and it shows up as two distinct serial devices. To get the I/O port base address we need to throw some -vvv at lspci. I'll trim off the irrelevant bits:

02:00.0 Serial controller: NetMos Technology PCIe 9912 Multi-I/O Controller (prog-if 02 [16550])
        Subsystem: Device a000:1000
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 17
        Region 0: I/O ports at e030 [size=8]
        Region 1: Memory at f7d05000 (32-bit, non-prefetchable) [size=4K]
        Region 5: Memory at f7d04000 (32-bit, non-prefetchable) [size=4K]
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
.
.
.
02:00.1 Serial controller: NetMos Technology PCIe 9912 Multi-I/O Controller (prog-if 02 [16550])
        Subsystem: Device a000:1000
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin B routed to IRQ 18
        Region 0: I/O ports at e020 [size=8]
        Region 1: Memory at f7d03000 (32-bit, non-prefetchable) [size=4K]
        Region 5: Memory at f7d02000 (32-bit, non-prefetchable) [size=4K]
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
.
.
.

The lines we care about here are:

Region 0: I/O ports at e030 [size=8]
Region 0: I/O ports at e020 [size=8]

So the I/O port address for 02:00.0 is 0xe030 and 02:00.1 is 0xe020. The 9 pin headers on the board are labeled S1 and S2 so you can probably guess which is which. With the NUC booted off my Linux USB key we can dump more data about the hardware so we know for sure, but with a serial cable hooked up to S1 I just threw some text at the device to see if something would come out the other end:

echo "test" > /dev/ttyS0

Sure enough I got "test" out. So I know my cable is hooked up to ttyS0. Now to associate /dev/ttyS0 with one of the PCI devices so we can get the I/O port. Poking around in sysfs is the thing to do here:

ls /sys/bus/pci/devices/02:00.0/tty/
ttyS0

With all of this we know we want tboot to log data to I/O port 0xe030 so we need the following options on the command line: logging=serial serial=115200,8n1,0xe030.
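For reference, a grub2 menuentry using these options might look roughly like the following. The kernel, initrd and SINIT module names are placeholders; substitute whatever your system actually boots:

menuentry 'Debian GNU/Linux with tboot' {
    insmod multiboot
    multiboot /boot/tboot.gz logging=serial serial=115200,8n1,0xe030
    module /boot/vmlinuz-3.2.0-4-amd64 root=/dev/sda1 ro console=ttyS0,115200
    module /boot/initrd.img-3.2.0-4-amd64
    module /boot/ivb_sinit.bin
}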

Next time

Now that I’ve got some real serial hardware and a way to get tboot to dump data out to it I can finally debug TXT / tboot. We’ll save that for next time.

building HVM Xen guests

On my Xen systems I’ve run pretty much 99% of my Linux guests paravirtualized (PV). Mostly this was because I’m lazy. Setting up a PV guest is super simple. No need for partitions, boot loaders or any of that complicated stuff. Setting up a PV Linux guest is generally as simple as setting up a chroot. You don’t even need to install a kernel.

There's been a lot of work over the past 5+ years to add stuff to processors and Xen to make the PV extensions to Linux unnecessary. After checking out a presentation by Stefano Stabellini a few weeks back I decided I'm long overdue for some HVM learning. Since performance of HVM guests is now better than PV for most cases it's well worth the effort.

This post will serve as my documentation for setting up HVM Linux guests. My goal was to get an HVM Linux installed using typical Linux tools and methods like LVM and chroots. I explicitly was trying to avoid using RDP or anything that isn’t a command-line utility. I wasn’t completely successful at this but hopefully I’ll figure it out in the next few days and post an update.

Disks and Partitions

Like every good Linux user, LVM is my friend. I'd love a more flexible disk backend (something that could be sparsely populated) but blktap2 is pretty much unmaintained these days. I'll stop before I fall down that rabbit hole but long story short, I'm using LVM volumes to back my guests.
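For completeness, carving out a volume for a guest disk is a one-liner. The volume group name myvg and the 10G size here are just assumptions that match the device paths used below:

$ lvcreate -L 10G -n hvmdisk myvg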

There’s a million ways to partition a disk. Generally my VMs are single-purpose and simple so a simple partitioning scheme is all I need. I haven’t bothered with extended partitions as I only need 3. The layout I’m using is best described by the output of sfdisk:

# partition table of /dev/mapper/myvg-hvmdisk
unit: sectors

/dev/mapper/myvg-hvmdisk1 : start=     2048, size=  2097152, Id=83
/dev/mapper/myvg-hvmdisk2 : start=  2099200, size=  2097152, Id=82
/dev/mapper/myvg-hvmdisk3 : start=  4196352, size= 16775168, Id=83
/dev/mapper/myvg-hvmdisk4 : start=        0, size=        0, Id= 0

That’s 3 partitions, the first for /boot, the second for swap and the third for the rootfs. Pretty simple. Once the partition table is written to the LVM volume we need to get the kernel to read the new partition table to create devices for these partitions. This can be done with either the partprobe command or kpartx. I went with kpartx:

$ kpartx -a /dev/mapper/myvg-hvmdisk

After this you’ll have the necessary device nodes for all of your partitions. If you use kpartx as I have these device files will have a digit appended to them like the output of sfdisk above. If you use partprobe they’ll have the letter ‘p’ and a digit for the partition number. Other than that I don’t know that there’s a difference between the two methods.

Then get udev to refresh the links in /dev/disk/by-uuid (we'll use these later):

$ udevadm trigger

Now we can set up the filesystems we need:

$ mkfs.ext2 /dev/mapper/myvg-hvmdisk1
$ mkswap /dev/mapper/myvg-hvmdisk2
$ mkfs.ext4 /dev/mapper/myvg-hvmdisk3

Install Linux

Installing Linux on these partitions is just like setting up any other chroot. First step is mounting everything. The following script fragment mounts the new filesystems and binds in the pseudo-filesystems from the host:

# mount VM disks (partitions in new LV)
if [ ! -d /media/hdd0 ]; then mkdir /media/hdd0; fi
mount /dev/mapper/myvg-hvmdisk3 /media/hdd0
if [ ! -d /media/hdd0/boot ]; then mkdir /media/hdd0/boot; fi
mount /dev/mapper/myvg-hvmdisk1 /media/hdd0/boot

# bind dev/proc/sys/tmpfs file systems from the host
if [ ! -d /media/hdd0/proc ]; then mkdir /media/hdd0/proc; fi
mount --bind /proc /media/hdd0/proc
if [ ! -d /media/hdd0/sys ]; then mkdir /media/hdd0/sys; fi
mount --bind /sys /media/hdd0/sys
if [ ! -d /media/hdd0/dev ]; then mkdir /media/hdd0/dev; fi
mount --bind /dev /media/hdd0/dev
if [ ! -d /media/hdd0/run ]; then mkdir /media/hdd0/run; fi
mount --bind /run /media/hdd0/run
if [ ! -d /media/hdd0/run/lock ]; then mkdir /media/hdd0/run/lock; fi
mount --bind /run/lock /media/hdd0/run/lock
if [ ! -d /media/hdd0/dev/pts ]; then mkdir /media/hdd0/dev/pts; fi
mount --bind /dev/pts /media/hdd0/dev/pts

Now that all of the mounts are in place we can debootstrap an install into the chroot:

$ sudo debootstrap wheezy /media/hdd0/ http://http.debian.net/debian/

We can then chroot to the mountpoint for our new VMs rootfs and put on the finishing touches:

$ chroot /media/hdd0

Bootloader

Unlike a PV guest, you’ll need a bootloader to get your HVM up and running. A first step in getting the bootloader installed is figuring out which disk will be mounted and where. This requires setting up your fstab file.

At this point we start to run into some awkward differences between our chroot and what our guest VM will look like once it’s booted. Our chroot reflects the device layout of the host on which we’re building the VM. This means that the device names for these disks will be different once the VM boots. On our host they’re all under the LVM /dev/mapper/myvg-hvmdisk and once the VM boots they’ll be something like /dev/xvda.

The easiest way to deal with this is to set our fstab up using UUIDs. It looks something like this:

# / was on /dev/xvda3 during installation
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /               ext4    errors=remount-ro 0       1
# /boot was on /dev/xvda1 during installation
UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy /boot           ext2    defaults        0       2
# swap was on /dev/xvda2 during installation
UUID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz none            swap    sw              0       0

By using UUIDs we can make our fstab accurate even in our chroot.
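If you don't have the UUIDs handy, blkid will report them for the mapped partitions on the host. A quick sketch using the device names from above:

$ blkid /dev/mapper/myvg-hvmdisk1 /dev/mapper/myvg-hvmdisk2 /dev/mapper/myvg-hvmdisk3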

After this we need to set up the /etc/mtab file needed by lots of Linux utilities. I found that when installing Grub2 I needed this file in place and accurate.

Some data I’ve found on the web says to just copy or link the mtab file from the host into the chroot but this is wrong. If a utility consults this file to find the device file that’s mounted as the rootfs it will find the device holding the rootfs for the host, not the device that contains the rootfs for our chroot.

The way I made this file was to copy it off of the host where I’m building the guest VM and then modify it for the guest. Again I’m using UUIDs to identify the disks / partitions for the rootfs and /boot to keep from having data specific to the host platform leak into the guest. My final /etc/mtab looks like this:

rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=253371,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=203892k,mode=755 0 0
/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=617480k 0 0
/dev/disk/by-uuid/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy /boot ext2 rw,relatime,errors=continue,user_xattr,acl 0 0

Finally we need to install both a kernel and the grub2 bootloader:

$ apt-get install linux-image-amd64 grub2

Installing Grub2 is a pain. All of the additional disks kicking around in my host confused the hell out of the grub installer scripts. I was given the option to install grub on a number of these disks and none were the one I wanted to install it on.

In the end I had to select the option to not install grub on any disk and fall back to installing it by hand:

$ grub-install --force --no-floppy --boot-directory=/boot /dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

And then generate the grub config file:

update-grub

If all goes well the grub boot loader should now be installed on your disk and you should have a grub config file in your chroot /boot directory.

Final Fixups

Finally you’ll need to log into the VM. If you’re confident it will boot without you having to do any debugging then you can just configure the ssh server to start up and throw a public key in the root homedir. If you’re like me something will go wrong and you’ll need some boot logs to help you debug. I like enabling the serial emulation provided by qemu for this purpose. It’ll also allow you to login over serial which is convenient.

This is pretty standard stuff. No paravirtual console through the xen console driver. The qemu emulated serial console will show up at ttyS0 like any physical serial hardware. You can enable serial interaction with grub by adding the following fragment to /etc/default/grub:

GRUB_TERMINAL_INPUT=serial
GRUB_TERMINAL_OUTPUT=serial
GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"

To get your kernel to log to the serial console as well set the GRUB_CMDLINE_LINUX variable thusly:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,38400n8"

Finally to get init to start a getty with a login prompt on the console add the following to your /etc/inittab:

T0:23:respawn:/sbin/getty -L ttyS0 38400 vt100

Stefano Stabellini has done another good write-up on the details of using both the PV and the emulated serial console here: http://xenbits.xen.org/docs/4.2-testing/misc/console.txt. Give it a read for the gory details.

Once this is all done you need to exit the chroot, unmount all of those bind mounts and then unmount your boot and rootfs from the chroot directory. Once we have a VM config file created this VM should be bootable.
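A minimal sketch of that teardown, mirroring the mount fragment above (run it after exiting the chroot; the kpartx line also drops the partition mappings):

umount /media/hdd0/dev/pts
umount /media/hdd0/run/lock
umount /media/hdd0/run
umount /media/hdd0/dev
umount /media/hdd0/sys
umount /media/hdd0/proc
umount /media/hdd0/boot
umount /media/hdd0
kpartx -d /dev/mapper/myvg-hvmdisk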

VM config

Then we need a configuration file for our VM. This is what my generic HVM template looks like. I’ve disabled all graphical stuff: sdl=0, stdvga=0, and vnc=0, enabled the emulated serial console: serial='pty' and set xen_platform_pci=1 so that my VM can use PV drivers.

The other stuff is standard for HVM guests and stuff like memory, name, and uuid that should be customized for your specific installation. Things like uuid and the mac address for your virtual NIC should be unique. There are websites out there that will generate these values. Xen has its own prefix for MAC addresses so use a generator to make a proper one.

builder = "hvm"
memory = "2048"
name = "myvm"
uuid = "uuuuuuuu-uuuu-uuuu-uuuu-uuuuuuuuuuuu"
vcpus = 1
cpus = '0-7'
pae=1
acpi=1
apic=1
boot='c'
xen_platform_pci=1
sdl=0
vnc=0
vnclisten='0.0.0.0'
stdvga=0
serial='pty'

disk = [
    '/dev/ssdraid1/wwwhome,raw,xvda,rw'
]
vif = [
    'mac=XX:XX:XX:XX:XX:XX,model=e1000',
]

Boot

Booting this VM is just like booting any PV guest:

xl create -c /etc/xen/vms/myvm.cfg

I've included the -c option to attach to the VM's serial console and ideally we'd be able to see grub and the kernel dump a bunch of data as the system boots.

TODO

I've tested these instructions twice now on a Debian Wheezy system with Xen 4.3.1 installed from source. Both times Grub installs successfully but fails to boot. After enabling VNC for the VM and connecting with a viewer it's apparent that the VM hangs when SeaBIOS tries to kick off grub.

As a work-around both times I've booted the VM from a Debian rescue ISO, set up a chroot much like in these instructions (the disk is now /dev/xvda though) and re-installed Grub. This does the trick and rebooting the VM from the disk now works. So I can only conclude that something in my instructions with respect to installing Grub is wrong, but I think that's unlikely as they're confirmed by numerous other "install grub in a chroot" instructions on the web.

The source of the problem is speculation at this point. Part of me wants to dump the first 2M of my disk both after installing Grub using these instructions and again after fixing it with the rescue CD, then compare the two. Now that I think about it the version of Grub installed in my chroot is probably a different version than the one on the rescue CD so that could have something to do with it.

Really though, I’ll probably just install syslinux and see if that works first. My experiences with Grub have generally been bad any time I try to do something out of the ordinary. It’s incredibly complicated and generally I just want something simple like syslinux to kick off a very simple VM.

I’ll post an update once I’ve got to the bottom of this mystery. Stay tuned.

Chrome web sandbox on XenClient

There's lots of software out there that sets up a "sandbox" to protect your system from untrusted code. The examples that come to mind are Chrome and Adobe's Flash sandbox. The strength of these sandboxes is an interesting point of discussion. Strength is always related to the mechanism, and if you're running on Windows the separation guarantees you get are only as strong as the separation Windows affords to processes. If this is a strong enough guarantee for you then you probably won't find this post very useful. If you're interested in using XenClient and the Xen hypervisor to get yourself the strongest separation that I can think of, then read on!

Use Case

XenClient allows you to run any number of operating systems on a single piece of hardware. In my case this is a laptop. I’ve got two VMs: my work desktop (Windows 7) for email and other work stuff and my development system that runs Debian testing (Wheezy as of now).

Long story short, I don’t trust some of the crap that’s out on the web to run on either of these systems. I’d like to confine my web browsing to a separate VM to protect my company’s data and my development system. This article will show you how to build a bare bones Linux VM that runs a web browser (Chromium) and little more.

Setup

You’ll need a linux VM to host your web browser. I like Debian Wheezy since the PV xen drivers for network and disk work out of the box on XenClient (2.1). There’s a small bug that required you use LVM for your rootfs but I typically do that anyways so no worries there.

Typically I do an install omitting even the "standard system tools" to keep things as small as possible. This results in a root file system that's < 1G. All you need to do then is install the web browser (chromium), rungetty, and the xinit package. Next is a bit of scripting and some minor configuration changes.

inittab

When this VM boots we want the web browser to launch and run full screen. We don’t want a window manager or anything. Just the browser.

When Linux boots, the init process parses the /etc/inittab file. Among the things specified in inittab are the processes that init starts, like getty. Typically inittab starts getty's on 6 ttys but we want it to start chrome for us. We can do this by having init execute rungetty (read the man page!) which we can then have execute arbitrary commands for us:

# /sbin/getty invocations for the runlevels.
#
# The "id" field MUST be the same as the last
# characters of the device (after "tty").
#
# Format:
#  <id>:<runlevels>:<action>:<process>
#
# Note that on most Debian systems tty7 is used by the X Window System,
# so if you want to add more getty's go ahead but skip tty7 if you run X.
#
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/rungetty tty6 -u root /usr/sbin/chrome-restore.sh

Another configuration change you’ll have to make is in /etc/X11/Xwrapper.config. The default configuration in this file prevents users from starting X if their controlling TTY isn’t a virtual console. Since we’re kicking off chromium directly we need to relax this restriction:

allowed_users=anybody

chromium-restore script

Notice that we have rungetty execute a script for us and it does so as the root user. We don’t want chromium running as root but we need to do some set-up before we kick off chromium as an unprivileged user. Here’s the chrome-restore.sh script:

#!/bin/sh

USER=chromium
HOMEDIR=/home/${USER}
HOMESAFE=/usr/share/${USER}-clean
CONFIG=${HOMEDIR}/.config/chromium/Default/Preferences
LAUNCH=$(which chromium-launch.sh)
if [ ! -x "${LAUNCH}" ]; then
	echo "web-launch.sh not executable: ${LAUNCH}"
	exit 1
fi
CMD="${LAUNCH} ${CONFIG}"

rsync -avh --delete ${HOMESAFE}/ ${HOMEDIR}/ > /dev/null 2>&1
chown -R ${USER}:${USER} ${HOMEDIR}

/bin/su - -- ${USER} -l -c "STARTUP=\"${CMD}\" startx" < /dev/null
shutdown -Ph now

The first part of this script is setting up the home directory for the user (chromium) that will be running chromium. This is the equivalent of us restoring the user's home directory to a "known good state". This means that the directory located at /usr/share/chromium-clean is a "known good" home directory for us to start from. On my system it's basically an empty directory with chrome's default config.

The second part of the script, well really the last two lines just runs startx as an unprivileged user. startx kicks off the X server but first we set a variable STARTUP to be the name of another script: chromium-launch.sh. When this variable is set, startx runs the command from the variable after the X server is started. This is a convenient way to kick off an X server that runs just a single graphical application.

The last command shuts down the VM. The shutdown command will only be run after the X server terminates which will happen once the chromium process terminates. This means that once the last browser tab is closed the VM will shutdown.

chromium-launch script

The chromium-launch.sh script looks like this:

#!/bin/sh

CONFIG=$1
if [ ! -f "${CONFIG}" ]; then
	echo "cannot locate CONFIG: ${CONFIG}"
	exit 1
fi

LINE=$(xrandr -q 2> /dev/null | grep Screen)
WIDTH=$(echo ${LINE} | awk '{ print $8 }')
HEIGHT=$(echo ${LINE} | awk '{ print $10 }' | tr -d ',')

sed -i -e "s&\(\s\+\"bottom\":\s\+\)-\?[0-9]\+&\1${HEIGHT}&" ${CONFIG}
sed -i -e "s&\(\s\+\"left\":\s\+\)-\?[0-9]\+&\10&" ${CONFIG}
sed -i -e "s&\(\s\+\"right\":\s\+\)-\?[0-9]\+&\1${WIDTH}&" ${CONFIG}
sed -i -e "s&\(\s\+\"top\":\s\+\)-\?[0-9]\+&\10&" ${CONFIG}
sed -i -e "s&\(\s\+\"work_area_bottom\":\s\+\)-\?[0-9]\+&\1${HEIGHT}&" ${CONFIG}
sed -i -e "s&\(\s\+\"work_area_left\":\s\+\)-\?[0-9]\+&\10&" ${CONFIG}
sed -i -e "s&\(\s\+\"work_area_right\":\s\+\)-\?[0-9]\+&\1${WIDTH}&" ${CONFIG}
sed -i -e "s&\(\s\+\"work_area_top\":\s\+\)-\?[0-9]\+&\10&" ${CONFIG}

chromium

It's a pretty simple script. It takes one parameter which is the path to the main chromium config file. It queries the X server through xrandr to get the screen dimensions (WIDTH and HEIGHT) which means it must be run after the X server starts. It then re-writes the relevant values in the config file to the maximum screen width and height so the browser is run "full screen". Pretty simple stuff … once you figure out the proper order to do things and the format of the Preferences file, which was non-trivial.

User Homedir

The other hard part is creating the “known good” home directory for your unprivileged user. What I did was start up chromium once manually. This causes the standard chromium configuration to be generated with default values. I then copied this off to /usr/share to be extracted on each boot.
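A quick sketch of capturing that pristine profile, using the same paths the chrome-restore.sh script expects (run it after quitting the browser):

rsync -avh --delete /home/chromium/ /usr/share/chromium-clean/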

Conclusion

So hopefully these instructions are enough to get you a Linux system that boots and runs Chromium as an unprivileged user. It should restore that user's home directory to a known good state on each boot so that any downloaded data will be wiped clean. When the last browser tab is closed it will power off the system.

I use this on my XenClient XT system for browsing sites that I want to keep separate from my other VMs. It's not perfect though and as always there is more that can be done to secure it. I'd start by making the root file system read-only, and adding SELinux would be fun. Also the interface is far too minimal. Finding a way to handle edge cases like making pop-ups manageable and allowing us to do things like control volume levels would also be nice. This may require configuring a minimal window manager which is a pretty daunting task. If you have any other interesting ways to make this VM more usable or lock it down better you should leave them in the comments.

openembedded yocto native hello world

NOTE: I took the time to get to the bottom of the issue discussed in this post. There’s a new post here that explains the “right way” to use Makefiles with yocto. As always, the error in this post was mine 🙂

I've officially "drank the Kool-Aid" and I'm convinced openembedded and Yocto are pretty awesome. I've had a blast building small Debian systems on PCEngines hardware in the past and while I'm waiting for my Raspberry Pi to arrive I've been trying to learn the ins and outs of Yocto. The added bonus is that the XenClient team at Citrix uses openembedded for our build system so this work can also fall under the heading of "professional development".

Naturally the first task I took on was way too complicated so I made a bunch of great progress (more about that in a future post once I get it stable) but then I hit a wall that I ended up banging my head against for a full day. I posted a cry for help on the mailing list and didn’t get any responses so I set out to remove as many moving parts as possible and find the root cause.

First things first: read the Yocto development manual and the Yocto reference for whatever release you're using. This is essential because no one will help you till you've read and understood these 🙂

So the software I’m trying to build is built using raw Makefiles, none of that fancy autotools stuff. This can be a bit of a pain because depending on the Makefiles, it’s not uncommon for assumptions to be made about file system paths. Openembedded is all about cross compiling so it wants to build and install software under all sorts of strange roots and some Makefiles just can’t handle this. I ran into a few of these scenarios but nothing I couldn’t overcome.

Getting a package for my target architecture wasn’t bad but I did run into a nasty problem when I tried to get a native package built. From the searches I did on the interwebs it looks like there have been a number of ways to build native packages. The current “right way” is simply to have your recipe extend the native class. Thanks to XorA for documenting his/her new package workflow for that nugget.

BBCLASSEXTEND = "native"
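With that one line in place the native variant is built by appending -native to the recipe name, something like:

bitbake hello-native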

After having this method blow up for my recipe I was tempted to hack together some crazy workaround. I really want to upstream the stuff I'm working on though and I figure having crazy shit in my recipe to work around my misunderstanding of the native class was setting the whole thing up for failure. So instead I went back to basics and made a "hello world" program and recipe (included at the end of this post) hoping to recreate the error and hopefully figure out what I was doing wrong at the same time.

It took a bit of extra work but I was able to recreate the issue with a very simple Makefile. First the error message:

NOTE: package hello-native-1.0-r0: task do_populate_sysroot: Started
ERROR: Error executing a python function in /home/build/poky-edison-6.0/meta-test/recipes-test/helloworld/hello_1.0.bb:
CalledProcessError: Command 'tar -cf - -C /home/build/poky-edison-6.0/build/tmp/work/i686-linux/hello-native-1.0-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i
686-linux -ps . | tar -xf - -C /home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux' returned non-zero exit status 2 with output tar: /home/build/poky-edison-6.0/build/tmp/work
/i686-linux/hello-native-1.0-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux: Cannot chdir: No such file or directory
tar: Error is not recoverable: exiting now
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors


ERROR: The stack trace of python calls that resulted in this exception/failure was:
ERROR:   File "sstate_task_postfunc", line 10, in 
ERROR:
ERROR:   File "sstate_task_postfunc", line 4, in sstate_task_postfunc
ERROR:
ERROR:   File "sstate.bbclass", line 19, in sstate_install
ERROR:
ERROR:   File "/home/build/poky-edison-6.0/meta/lib/oe/path.py", line 59, in copytree
ERROR:     check_output(cmd, shell=True, stderr=subprocess.STDOUT)
ERROR:
ERROR:   File "/home/build/poky-edison-6.0/meta/lib/oe/path.py", line 121, in check_output
ERROR:     raise CalledProcessError(retcode, cmd, output=output)
ERROR:
ERROR: The code that was being executed was:
ERROR:      0006:        bb.build.exec_func(intercept, d)
ERROR:      0007:    sstate_package(shared_state, d)
ERROR:      0008:
ERROR:      0009:
ERROR:  *** 0010:sstate_task_postfunc(d)
ERROR:      0011:
ERROR: (file: 'sstate_task_postfunc', lineno: 10, function: )
ERROR:      0001:
ERROR:      0002:def sstate_task_postfunc(d):
ERROR:      0003:    shared_state = sstate_state_fromvars(d)
ERROR:  *** 0004:    sstate_install(shared_state, d)
ERROR:      0005:    for intercept in shared_state['interceptfuncs']:
ERROR:      0006:        bb.build.exec_func(intercept, d)
ERROR:      0007:    sstate_package(shared_state, d)
ERROR:      0008:
ERROR: (file: 'sstate_task_postfunc', lineno: 4, function: sstate_task_postfunc)
ERROR: Function 'sstate_task_postfunc' failed
ERROR: Logfile of failure stored in: /home/build/poky-edison-6.0/build/tmp/work/i686-linux/hello-native-1.0-r0/temp/log.do_populate_sysroot.30718
Log data follows:
| NOTE: QA checking staging
| ERROR: Error executing a python function in /home/build/poky-edison-6.0/meta-test/recipes-test/helloworld/hello_1.0.bb:
| CalledProcessError: Command 'tar -cf - -C /home/build/poky-edison-6.0/build/tmp/work/i686-linux/hello-native-1.0-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots
/i686-linux -ps . | tar -xf - -C /home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux' returned non-zero exit status 2 with output tar: /home/build/poky-edison-6.0/build/tmp/wo
rk/i686-linux/hello-native-1.0-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux: Cannot chdir: No such file or directory
| tar: Error is not recoverable: exiting now
| tar: This does not look like a tar archive
| tar: Exiting with failure status due to previous errors
|
|
| ERROR: The stack trace of python calls that resulted in this exception/failure was:
| ERROR:   File "sstate_task_postfunc", line 10, in 
| ERROR:
| ERROR:   File "sstate_task_postfunc", line 4, in sstate_task_postfunc
| ERROR:
| ERROR:   File "sstate.bbclass", line 19, in sstate_install
| ERROR:
| ERROR:   File "/home/build/poky-edison-6.0/meta/lib/oe/path.py", line 59, in copytree
| ERROR:     check_output(cmd, shell=True, stderr=subprocess.STDOUT)
| ERROR:
| ERROR:   File "/home/build/poky-edison-6.0/meta/lib/oe/path.py", line 121, in check_output
| ERROR:     raise CalledProcessError(retcode, cmd, output=output)
| ERROR:
| ERROR: The code that was being executed was:
| ERROR:      0006:        bb.build.exec_func(intercept, d)
| ERROR:      0007:    sstate_package(shared_state, d)
| ERROR:      0008:
| ERROR:      0009:
| ERROR:  *** 0010:sstate_task_postfunc(d)
| ERROR:      0011:
| ERROR: (file: 'sstate_task_postfunc', lineno: 10, function: )
| ERROR:      0001:
| ERROR:      0002:def sstate_task_postfunc(d):
| ERROR:      0003:    shared_state = sstate_state_fromvars(d)
| ERROR:  *** 0004:    sstate_install(shared_state, d)
| ERROR:      0005:    for intercept in shared_state['interceptfuncs']:
| ERROR:      0006:        bb.build.exec_func(intercept, d)
| ERROR:      0007:    sstate_package(shared_state, d)
| ERROR:      0008:
| ERROR: (file: 'sstate_task_postfunc', lineno: 4, function: sstate_task_postfunc)
| ERROR: Function 'sstate_task_postfunc' failed
NOTE: package hello-native-1.0-r0: task do_populate_sysroot: Failed
ERROR: Task 3 (virtual:native:/home/build/poky-edison-6.0/meta-test/recipes-test/helloworld/hello_1.0.bb, do_populate_sysroot) failed with exit code '1'
ERROR: 'virtual:native:/home/build/poky-edison-6.0/meta-test/recipes-test/helloworld/hello_1.0.bb' failed

So even with the most simple Makefile I could cause a native recipe build to blow up. Here’s the Makefile:

.PHONY : all clean install uninstall

PREFIX ?= $(DESTDIR)/usr
BINDIR ?= $(PREFIX)/bin

HELLO_src = hello.c
HELLO_bin = hello
HELLO_tgt = $(BINDIR)/$(HELLO_bin)

all : $(HELLO_bin)

$(HELLO_bin) : $(HELLO_src)

$(HELLO_tgt) : $(HELLO_bin)
	install -d $(BINDIR)
	install -m 0755 $^ $@

clean :
	rm $(HELLO_bin)

install : $(HELLO_tgt)

uninstall :
	rm $(HELLO_tgt)

And here’s the relevant install method from the bitbake recipe:

do_install () {
    oe_runmake DESTDIR=${D} install
}

Notice I’m using the variable DESTDIR to tell the Makefile the root (not just /) to install things to. This should work right? It works for a regular package but not for a native one! This drove me nuts for a full day.

The solution to this problem lies in some weirdness in the Yocto native class when combined with the populate_sysroot method. The way I figured this out was by inspecting the differences in the environment when building hello vs hello-native. When building the regular package for the target architecture variables like bindir and sbindir were what I would expect them to be:

bindir="/usr/bin"
sbindir="/usr/sbin"

but when building hello-native they get a bit crazy:

bindir="/home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux/usr/bin"
sbindir="/home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux/usr/sbin"

This is a hint at the source of crazy path that staging is trying to tar up above in the error message. Further if you look in the build directory for a regular target arch package you’ll see your files where you expect in ${D}sysroot-destdir/usr/bin but for a native build you’ll see stuff in ${D}sysroot-destdir/home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux/usr/bin. Pretty crazy right? I’m sure there’s a technical reason for this but it’s beyond me.

So the way you can work around this is by telling your Makefiles about paths like bindir through the recipe. A fixed do_install would look like this:

do_install () {
    oe_runmake DESTDIR=${D} BINDIR=${D}${bindir} install
}

For more complicated Makefiles you can probably specify a PREFIX and set this equal to the ${prefix} variable but YMMV. I’ll be trying this out to keep my recipes as simple as possible.
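As a sketch, for a hypothetical Makefile where every install path hangs off PREFIX (like the one above), the do_install would look something like this:

do_install () {
    oe_runmake PREFIX=${D}${prefix} install
}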

If you want to download my example the recipe is here. This will pull down the hello world source code and build the whole thing for you.

Troubles with Ovi Store after upgrading N8 to Anna

I took some time yesterday to upgrade my beloved Nokia N8 to the new(ish) Symbian^3 Anna build and it wasn’t a smooth upgrade. Upgrading through the Ovi Suite went well enough but applications like the Ovi Store just wouldn’t work after the upgrade. The Ovi Store application would install and start but would sit on the splash screen showing the “Loading …” message. At this point I did a hard reset that did nothing.

After combing through a handful of forum posts I ran across this one in the Nokia Europe discussion boards. Initially I had no idea what this guy was talking about. I guess I'm used to my phone being an appliance so downgrading packages seemed pretty crazy, especially since I had no idea where to obtain the packages (sis files) described in the post. A few web searches later and I ran across the "Fix Symbian" site. Pretty encouraging name all things considered. Anyways all I did was download the sis files from the post titled "S^3 QT 4.7.3 MOBILITY 1.1.3" which downgrades a number of components in the "Notifications Support Package Symbian3 v1.1.11120" package. One quick reboot and Ovi Store was back up and running.

So I guess the Anna upgrade that Nokia is shipping is broken? Pretty strange really but not something that can’t be worked around thanks to some contributions from the interwebs. Unfortunately the upgrade to Anna didn’t fix the problems I’ve had with my N8 and my wireless access point. I’ll debug this sooner or later, likely when I upgrade my home network with the ALIX boards I got in the mail last week.

Ethernet Bonding on Debian Squeeze

Spent a few minutes searching for a howto for setting up ethernet interface bonding on a new file server I’m building today. Nothing special but I found a bunch that aren’t that great … I know, welcome to the internet right? But I did find one that’s awesome from tuxhelp.org.

My final config went like this:

echo -e "bonding\nmii" | sudo tee -a /etc/modules

With an /etc/network/interfaces file that looks like this:

auto lo bond0
iface lo inet loopback

iface bond0 inet dhcp
    bond_mode balance-rr
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200
    slaves eth0 eth1
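Once the bond comes up, the bonding driver's status file is a quick sanity check that both slaves are active:

cat /proc/net/bonding/bond0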

What was lacking in all other (even Debian specific) howto’s is that they always use direct invocation of ifenslave and pass options to the bonding driver manually. IMHO it’s so much nicer to use the facilities built in to ifup like the slaves option instead of using something like:

up /sbin/ifenslave bond0 eth0 eth1

That said I haven’t had much luck finding documentation for options like this specific to a driver and how to use them in the interfaces file. Given the above example I can guess but I’m looking for a definitive source … Anyone out there know?

Thruxton ignition relocation

The weather’s starting to get nice and since I don’t have a garage to work in over the winter, I had to wait for nice weather to work on my Thruxton … in my driveway. Joker Machine makes some really nice bolt-ons and they’re pretty spendy so over the winter I picked up a few when I had a buck or two kicking around. A few days back when it finally hit 65 degrees outside I put on my ignition relocation kit.

The ignition location is a common complaint from Thruxton owners. It’s located on the headlight mount which is a bit odd, but really I’ve become used to it by now:

The relocation kit uses two bolts on the front of the frame as the anchor for the new bracket:

Removing the ignition is simple but it does require removing the headlight bracket to access the screws holding it in place:

After removing the ignition the fun begins. You can’t simply attach the new bracket with the existing cable. There just isn’t enough of it.

The ignition wires hook up to the main harness at a plug that’s housed in the headlight bucket. Actually just about everything that hooks up to the harness on the front end of the bike does so in the headlight bucket. So there are basically two options:

  • extend the ignition wires
  • cut into the harness and hope there’s enough wire in there to get the ignition to its new home

I went for the second option because I only needed a few extra inches of wire but it came at a cost: I couldn’t keep the connector between the ignition and the wiring harness in the headlight bucket. Here’s a shot of the harness with the cuts I had to make:

After that, wrap up the harness with electrical tape and stuff the connector up under the frame. Be sure to clean off the harness before you put the tape on it. Dirt makes tape pretty ineffective:

In the end it’ll look pretty cool:

The new location for the ignition isn’t any more convenient than the original if you ask me. Looks cool though.

Force apache2 digest auth over SSL

This may seem like a strange reason to be configuring an authenticated and encrypted HTTP connection … but it's tax season! There's a story behind this naturally but first a quick overview. Recently I've had to exchange sensitive documents with someone. To do this I had to configure my web server to require digest authentication for all URLs below a certain directory. Further, to protect the data in transit I force traffic to these URLs over SSL. Pretty simple but very useful and worth a quick howto.

The Story

The guy that does my taxes is actually a friend's dad. He's a great CPA, a great guy and I completely trust him with my financials. The problem is … well he's my buddy's dad and he's probably in his mid to late 60's so he's not super tech savvy. He's got email down (unlike my parents) but last year we ran into a problem.

I sent him all of my tax-relevant docs in hard copy as we've always done. What I didn't expect was for him to send all of my tax documents back for me to sign in soft copy. This is great right? He meant well but I really wasn't thrilled that he sent documents that have my SSN over email and in plain text. I tried to explain to him how to get an email certificate so we could encrypt our email exchanges but, well, I think we ran into the technological version of the generation gap. Needless to say, we fell back on hard copy last year.

After collecting up all of my soft copy forms for this years taxes I couldn’t bear the thought of having to find a printer to convert them into hard copy. That and I just wanted to get them out the door to my CPA same day. So with the available tools (a web server) I came up with a way to get my docs over to my tax guy with a level of security I’m comfortable with. Here’s how:

apache2 digest auth

There’s a million docs on the web describing how to set up the common auth modules for apache2. Frankly though my search turned up some pretty wild .htaccess files. I just wanted something that I could drop into the directory I wanted to protect and it would work. Here’s the digest auth part:

AuthType Digest
AuthName "taxes"
AuthDigestDomain ./
AuthUserFile /etc/apache2/taxes.digest
Require valid-user

This assumes you've got the auth_digest module (mod_auth_digest) enabled already (enabling it is distro specific much of the time). Here's a quick breakdown of the configuration directives. For the best reference see the apache mod_auth_digest docs.

  • AuthName directive specifies the name of the authentication realm. Any user that accesses this directory on the server must have credentials defined in this realm. For my tax documents I’ve named the realm taxes.
  • AuthDigestDomain tells the server which URIs will require authentication. In line with my desire to just have a drop in .htaccess file, I’ve used ./ which is the current directory. All subdirectories will require authentication as well.
  • AuthUserFile is the database file that has all of the user credentials … more on this in a minute
  • Require is how we specify additional constraints on which users can access the domain we’ve defined. I’m using valid-user which simply requires that the user specify credentials belonging to a … you guessed it, a valid user in the taxes realm. There’s a lot you can do with Require so you should read the docs for this one.

For any of this to work we need to specify the users' names and their passwords. Apache has a tool that does this for us and it's called htdigest. Check out the manpage for details but for the above example .htaccess file I used the following command:

sudo htdigest -c /etc/apache2/taxes.digest taxes username

Force SSL

This is a pretty easy task but the method I came up with to solve it requires using Apache's mod_rewrite which is basically regex black magic in apache config files. This is very much like driving in a tack nail with a 10lb sledge hammer. You can do some serious damage with mod_rewrite if you're not sure of what you're doing. For a simple task like this the solution should be simple, and if you use mod_rewrite properly the result actually is very simple.

DISCLAIMER: before using mod_rewrite you should read the mod_rewrite docs front to back and be comfortable matching strings with regex patterns (play around with grep on the command line).

RewriteCond %{HTTPS} ^off$
RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI}

To protect the directory containing my tax documents I only needed two mod_rewrite directives in my .htaccess:

  • RewriteCond specifies the conditions under which the following rule will be checked. Here I've indicated that the rule only applies to requests that aren't over HTTPS (the HTTPS server variable is set to off)
  • RewriteRule is where all the work is done and I’m using its most simple form. The first string is the regex that is matched against the requested URI. I’ve specified a pure wild card, it will match everything. Since this is an .htaccess file the rule will only be processed for URIs that are under this directory so I want it to match everything. The following string is the replacement text. This ends up being a simple redirect to the same URI that was requested but over SSL (trust me). If you’re skeptical hit the docs.
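Putting the two pieces together, the drop-in .htaccess ends up looking roughly like this. Note the RewriteEngine On line is my addition; mod_rewrite wants it enabled explicitly in .htaccess context:

RewriteEngine On
RewriteCond %{HTTPS} ^off$
RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI}

AuthType Digest
AuthName "taxes"
AuthDigestDomain ./
AuthUserFile /etc/apache2/taxes.digest
Require valid-user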

Conclusion

That’s it, you can get the taxes.htaccess. Nothing really novel here, just a practical use of my web server for exchanging sensitive documents, I thought that was worth a quick post.

Naturally this isn't a perfect solution but it's good enough. My tax guy can still screw things up on his end by having malware on his office computer or he may sell my personal info to the highest bidder but these things are largely out of my control and would be a problem even if I sent him all of my stuff in hard copy. This also doesn't scale at all but I've got only one guy doing my taxes in any one year so that's a non-issue. Next is figuring out a way to get my completed documents back from him. I'm thinking I'll have to code up a quick upload script … more to come.

What does acpi_fakekeyd do?

In setting up SELinux on my laptop running Squeeze I'm taking a pretty standard approach. First off I'm working off the packages provided in Sid maintained by Russell Coker so most of the hard work has been done. There are a few programs, mostly specific to a laptop, that still aren't in the right domains. We can see this by dumping out the running programs and their domains:

ps auxZ

Determining the “right domain” for a process is a bit harder but there’s a pretty obvious place to start. No daemons should be running in initrc_t!
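To pick out the offenders you can just filter that output for the domain:

ps auxZ | grep initrc_t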

initrc_t is the domain given to scripts run by the init daemon. That’s pretty much any script in /etc/init.d. If a daemon is running in this domain after startup it likely means that there was no transition rule in place to put it into a domain specific to the daemon. I figured I’d take these on alphabetically and started with acpi_fakekeyd 🙂

A policy for acpi_fakekeyd

All of the power management stuff like acpid runs in the apmd_t domain so the first thing I tried was running acpi_fakekeyd in this domain. You can go through the trouble of adding the path /usr/sbin/acpi_fakekeyd to the apmd_t policy module, rebuilding it and reloading it (which really isn't that hard these days) or you can take a shortcut like so:

echo "system_u:system_r:apmd_exec_t:s0" | sudo attr -S -s selinux /usr/sbin/acpi_fakekeyd

This sets the label on the executable such that when init runs the start up script, the daemon will end up in the apmd_t domain.

Once the label is set you can restart the daemon using run_init, assuming your user is in a domain that can run init scripts (unconfined, admin etc). If all goes well the daemon will end up running in the right domain. I then did what I thought was exercising the domain to see if it would cause any AVCs. This required sending the daemon a few characters using the acpi_fakekey command directly as well as putting my laptop to sleep and into hibernation (see the /etc/acpi/sleep.sh script). There weren't any AVCs so I concluded the apmd_t domain had all of the permissions that the fakekey daemon needed. I was wrong but we'll get to that.
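Mechanically, the restart-and-verify step looks something like this. The init script name is a guess on my part, so adjust for whatever the package actually installs:

run_init /etc/init.d/acpi-fakekey restart
ps -eZ | grep acpi_fakekey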

acpi_fakekeyd in its own domain

I was really expecting a few denial messages so I decided to put acpi_fakekeyd into its own domain with no privileges. The idea was to see some AVCs and to get a feeling for what exactly the daemon does.

The policy module I whipped up is super simple:
acpi_fakekeyd.te

policy_module(acpi_fakekeyd, 0.1)

########################################
#
# Declarations
#
type acpi_fakekeyd_t;
type acpi_fakekeyd_exec_t;
init_daemon_domain(acpi_fakekeyd_t, acpi_fakekeyd_exec_t)

acpi_fakekeyd.fc

/usr/sbin/acpi_fakekeyd --      gen_context(system_u:object_r:acpi_fakekeyd_exec_t,s0)

No interfaces yet so the acpi_fakekeyd.if file was empty.

After restarting the daemon, checking it’s in the right domain and exercising my ACPI system … there still weren’t any AVCs! Obviously I’m missing something so a bit of research turned up this bug report which explains pretty much everything.

acpi_fakekeyd deprecated

To save you a bunch of reading it turns out that toward the end of the discussion thread (about 8 months after the initial post) it’s identified that the functionality of acpi_fakekeyd is deprecated in kernels after 2.6.24. It seems that the functionality should instead be provided by an in-kernel driver which my laptop (ThinkPad x61s) has.

So why is this daemon installed and running? If I disable it my laptop ACPI still works fine. But the acpi_support package which is required to put my laptop to sleep depends on the acpi_fakekey package. This is likely because the scripts provided by acpi_support call the acpi_fakekey application for backwards compatibility on some systems. This doesn't make much sense to me though since Squeeze ships with a 2.6.32 kernel.

The answer to the question I pose as the title of this post is: It doesn't do anything on my system. I don't even need to have it running so I just shut it off. Problem solved I guess, and from a security perspective this is an even better solution than running it in its own SELinux domain. If it's not running, it can't do any damage. I'd rather be able to remove the package completely though.

Does anyone out there have a laptop that requires this daemon? I’m tempted to file a bug against the package … Anyway on to the next daemon 🙂