LaTeX for your logic homework

It’s been a long time since I’ve posted. The day job has taken over my life and has had me attached to a keyboard working 12-hour days for pretty much a month now. I haven’t had much chance or desire to geek out after getting home from work as a result.

With the fall semester starting up (my last class!) I have spent some time playing with LaTeX again. I’ve been using LaTeX off and on for a few years to prepare labs and reports for school, and even some proposals for work, though I’ve never done more than minor edits to existing LaTeX styles. For my fall class I’m using LaTeX in a similar capacity but with a fun twist: I’m formatting propositional logic proofs.

First off the class is CSE 774: Access Control, Security, and Trust taught by Dr. Shiu-Kai Chin at Syracuse University. He co-authored a book by the same name if you’re interested in the content. Long story short, Dr. Chin and Dr. Older developed a small logic language based on the logic of Abadi and Lampson. They use their logic to formally reason about access control decisions in a number of systems. Interesting stuff.

Most of the homework I’ve been doing requires either doing formal proofs or manipulating formulas and constructing Kripke structures. The first assignment got messy quickly, so instead of copying all of my work over I decided I’d type it up in LaTeX. I managed to use generic math mode (the align environment) for most of my work, but the structured proofs needed something extra.

Thankfully some industrious academics out there ran into this problem long ago and wrote two different styles for typesetting Fitch-style structured deduction. Both have their strengths and in the end I used a mix of the two. The fitch style is simple and straightforward. That said, I couldn’t figure out a way to get it to display multi-line formulas, which was a problem since the logic we’re using is quite verbose. I managed to track down another fitch style, this one by Peter Selinger, which handles multi-line formulas quite well.
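To give a taste of what this looks like, here’s a minimal sketch in Selinger’s style. Fair warning: the macro names (the nd environment, \hypo, \have, and justification shorthands like \ie) are from my reading of his docs, so double check them against the version you download. The multi-line trick I mentioned is his \havecont command, which continues a long formula onto the next line of a proof.

\documentclass{article}
\usepackage{fitch} % Selinger's fitch.sty somewhere on your TeX path

\begin{document}
% A one-step modus ponens. The nd environment is used inside math mode;
% \hypo introduces premises, \have adds derived lines, and \ie{1,2}
% justifies line 3 by implication elimination from lines 1 and 2.
$
\begin{nd}
  \hypo {1} {P \supset Q}
  \hypo {2} {P}
  \have {3} {Q} \ie{1,2}
\end{nd}
$
\end{document}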

I know people are always looking for a way to get their work done more quickly, and LaTeX isn’t going to help you there. So in the end my homework didn’t get done any faster, but it sure looked nice when I was done 🙂

Debian Logo Rights and Misuse

I’ve been traveling a whole bunch for work lately. A month or two ago I was on the road and the hotel I was staying in was hosting a job fair put on by a company called the Catalyst Career Group. I couldn’t help but notice their logo looked familiar.

So after standing on my head in the middle of the lobby I realized that it’s just the Debian logo upside down. The Debian project is pretty explicit about the license they distribute their logo under [1] and this doesn’t follow the license by my reading of it.

Now what to do about it? The Debian legal mailing list [2] looks like the right place. I’ll update on the results of this one. Interesting stuff.

[1] http://www.debian.org/logos/
[2] http://www.debian.org/legal/

Xen Network Driver Domain: How

In my last post I went into the reasons why exporting the network hardware from dom0 to an unprivileged driver domain is good for security. This time the “how” is our focus. The documentation out there isn’t perfect and it could use a bit of updating so expect to see a few edits to the relevant Xen wiki page [1] in the near future.

Basic setup

How you configure your Xen system is super important. The remainder of this post assumes you’re running the latest Xen from the unstable mercurial repository (4.0.1) with the latest 2.6.32 paravirt_ops kernel [2] from Jeremy Fitzhardinge’s git tree (2.6.32.16). If you’re running older versions of either Xen or the Linux kernel this may not work so you should consider updating.
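For reference, this is roughly where I pulled both trees from. The URLs and branch name are from memory and both projects shuffle their hosting around, so treat these as hints rather than gospel:

# Xen from the unstable mercurial repository...
hg clone http://xenbits.xensource.com/xen-unstable.hg
# ...and the pvops dom0 kernel from Jeremy Fitzhardinge's tree.
git clone git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-xen
cd linux-2.6-xen
git checkout -b build origin/xen/stable-2.6.32.x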

For this post I’ll have three virtual machines (VMs).

  1. the administrative domain (dom0) which is required to boot the system
  2. an unprivileged domain (domU) that we’ll call “nicdom” which is short for network interface card (NIC) domain. You guessed it, this will become our network driver domain.
  3. another unprivileged domain (domU or client domain) that will get its virtual network interface from nicdom

I don’t really care how you build your virtual machines. Use whatever method you’re comfortable with. Personally I’m a command line junkie so I’ll be debootstrapping mine on LVM partitions as minimal Debian squeeze/sid systems running the latest pvops kernel. Initially the configuration files used to start up these two domUs will be nearly identical:
nicdom:

kernel="/boot/vmlinuz-2.6.32.16-xen-amd64"
ramdisk="/boot/initrd.img-2.6.32.16-xen-amd64"
memory=256
name="nicdom"
disk=["phy:/dev/lvgroup/nicdom_root,xvda,w"]
root="/dev/xvda ro"
extra="console=hvc0 xencons=tty"

client domain:

kernel="/boot/vmlinuz-2.6.32.16-xen-amd64"
ramdisk="/boot/initrd.img-2.6.32.16-xen-amd64"
memory=1024
name="client"
disk=[
    "phy:/dev/lvgroup/client_root,xvda,w",
    "phy:/dev/lvgroup/client_swap,xvdb,w",
]
root="/dev/xvda ro"
extra="console=hvc0 xencons=tty"

I’ve given the client a swap partition and more RAM because I intend to turn it into a desktop. The nicdom (driver domain) has been kept as small as possible since it’s basically a utility domain that won’t have many logins. Obviously there’s more to it than just loading up these config files, but installing VMs is beyond the scope of this document.

PCI pass through

The first step in configuring the nicdom is passing the network card directly through to it, and the xen-pciback driver is where this process starts. It hides the PCI device from dom0, which later allows us to bind the device to a domU through its configuration when we boot it using xm.

There are two ways to configure the xen-pciback driver:

  1. kernel parameters at dom0 boot time
  2. dynamic configuration using sysfs

xen-pciback kernel parameter

The first is the easiest so we’ll start there. You need to pass the kernel some parameters to tell it which PCI device to hand over to the xen-pciback driver. Your grub kernel line should look something like this:

module /vmlinuz-2.6.32.16-xen-amd64 /vmlinuz-2.6.32.16-xen-amd64 root=/dev/something ro console=tty0 xen-pciback.hide=(00:19.0) intel_iommu=on

The important part here is the xen-pciback.hide parameter that identifies the PCI device to hide. I’m using a mixed Debian squeeze/sid system so getting used to grub2 is a bit of a task. Automating the configuration through grub is outside the scope of this document so I’ll assume you have a working grub.cfg or a way to build one.

Once you boot up your dom0 you’ll notice that lspci still shows the PCI device. That’s fine: the device is still there, it’s just that the kernel is ignoring it. What’s important is that when you issue an ip addr you don’t see a network device for this PCI device. On my system all I see is the loopback (lo) device, no eth0.

dynamic configuration with sysfs

If you don’t want to restart your system you can pass the network device to the xen-pciback driver dynamically. First you need to unload all drivers that access the device; in my case that’s the e1000e driver.
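If you’re not sure which driver has claimed the card, lspci will tell you. The device address below is the one from my system, so substitute your own:

# Show the kernel driver currently bound to the device at 00:19.0.
lspci -k -s 00:19.0
# Then unload it so nothing in dom0 is touching the hardware.
modprobe -r e1000e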

Next we tell the xen-pciback driver to hide the device by passing it the device address:

echo "0000:00:19.0" | sudo tee /sys/bus/pci/drivers/pciback/new_slot
echo "0000:00:19.0" | sudo tee /sys/bus/pci/drivers/pciback/bind
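To sanity check the result, you can ask sysfs which driver the device ended up bound to:

# Prints a path ending in .../drivers/pciback if the bind worked.
readlink /sys/bus/pci/devices/0000:00:19.0/driver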

Some of you may be thinking “what’s a slot?” and I’ve got no good answer. If someone reading this knows, leave me something in the comments.

passing the PCI device to the driver domain

Now that dom0 isn’t using the PCI device we can pass it off to our nicdom. We do this by including the line:

pci=['00:19.0']

in the configuration file for the nicdom. We can pass more than one device to this domain by placing another address between the square brackets like so:

pci=['00:19.0', '03:00.0']

Also we want to tell Xen that this domain is going to be a network driver domain and we have to configure IOMMU:

netif="yes"
extra="console=hvc0 xencons=tty iommu=soft"

Honestly, I’m not sure exactly what these last two configuration lines do. There are a number of mailing list posts giving various magic configurations that are supposedly required to get PCI passthrough working right. These worked for me, so YMMV. If anyone wants to explain, please leave a comment.

Now when this domU boots we can lspci and we’ll see these two devices listed. Their addresses may be the same as in dom0, but this depends on how you’ve configured your kernel. Make sure to read the Xen wiki page on PCIPassthrough [4] as it’s quite complete.

Depending on how you’ve set up your nicdom you may already have some networking configuration in place. I’m partial to debootstrapping my installs on an LVM partition, so I end up doing the network configuration by hand. I’ll dedicate a whole post to configuring the networking in the nicdom later. For now just get it working however you know how.

the driver domain

As much as we want to just jump in and make the driver domain work, there are still a few configuration steps that we need to run through first.

Xen split drivers

Xen split drivers exist in two halves. The backend of the driver is located in the domain that owns the physical device. Each client domain serviced by the backend has a frontend driver that exposes a virtual device in that client. This design is typically referred to as Xen split drivers [3].

The Xen networking drivers follow this same pattern. For our nicdom to serve its purpose we need to load the xen-netback driver along with xen-evtchn and xenfs. We’ve already discussed what the xen-netback driver does, so let’s talk about what the others are.

The xenfs driver exposes some Xen-specific stuff from the kernel to user space through the /proc file system. Exactly what this “stuff” is I’m still figuring out. If you dig into the code for the xen tools (xenstored and the various xenstore-* utilities) you’ll see a number of references to files in /proc. From my preliminary reading this is where a lot of the xenstore data is exposed to domUs.

The xen-evtchn is a bit more mysterious to me at the moment. The name makes me think it’s responsible for the events used for communication between backend and frontend drivers but that’s just a guess.

So long story short, we need these modules loaded in nicdom:

modprobe -a xenfs xen-evtchn xen-netback

In the client we need the xenfs, xen-evtchn and the xen-netfront modules loaded.
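A quick sanity check in each domain doesn’t hurt:

# The loaded xen modules should include the netback half in nicdom
# and the netfront half in the client.
lsmod | grep xen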

Xen scripts and udev rules

Just like the Xen wiki says, we need to install the udev rules and the associated networking scripts. If you’re like me you like to know exactly what’s happening though, so you may want to trigger the backend/frontend creation and watch the events coming from udev before you blindly copy these files over.

udev events

To do this you need both the nicdom and the client VM up and running with no networking configured (see configs above). Once they’re both up, start udevadm monitor --kernel --udev in each VM. Then try to create the network frontend and backend using xm. This is done from dom0 with a command like:

xm network-attach client mac=XX:XX:XX:XX:XX:XX,backend=nicdom

I’ll let the man page for xm explain the parameters 🙂
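Two related subcommands worth knowing while you experiment, both also covered in the man page:

# List the vifs a domain currently has...
xm network-list client
# ...and detach one by its device id if you want to start over.
xm network-detach client 0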

In the nicdom you should see the udev events creating the backend vif:

KERNEL[timestamp] online   /devices/vif-4-0 (xen-backend)
UDEV_LOG=3
ACTION=online
DEVPATH=/devices/vif-4-0
SUBSYSTEM=xen-backend
XENBUS_TYPE=vif
XENBUS_PATH=backend/vif/4/0
XENBUS_BASE_PATH=backend
script=/etc/xen/scripts/vif-nat
vif=vif4.0

There are actually quite a few events but this one is the most important, mostly because of the script and vif values. script names the script that the udev rule runs to configure the network interface in the driver domain, and vif tells us the new interface name.

Really we don’t care what udev events happened in the client since the kernel will just magically create an eth0 device like any other. You can configure it using /etc/network/interfaces or any other method. If you’re interested in which events are triggered in the client I recommend recreating this experiment for yourself.
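On Debian that’s a stanza like the following, assuming the frontend shows up as eth0 and there’s a DHCP server reachable through nicdom (both assumptions on my part):

# /etc/network/interfaces in the client
auto eth0
iface eth0 inet dhcp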

Without any udev rules and scripts in place the xm network-attach command should fail after a timeout period. If you’re into reading network scripts or xend log files you’ll see that xend is waiting for the nicdom to report the status of the network-attach in a xenstore variable:

DEBUG (DevController:144) Waiting for 0.
DEBUG (DevController:628) hotplugStatusCallback /local/domain/1/backend/vif/3/0/hotplug-status

installing rules, scripts and tools

Now that we’ve seen the udev events, we want to install the rules for Xen that will wait for the right event and then trigger the necessary script. From the udevadm output above we’ve seen that dom0 passes the script name through the udev event. This script name is actually configured in the xend-config.sxp file in dom0:

(vif-script vif-whatever)

You can use whatever xen networking script you want (bridge is likely the easiest).

So how do we install the udev rules and the scripts? Well, you could just copy them over manually (mount the nicdom partition in dom0 and literally cp them into place). This method got me in trouble though, and this detail is omitted from the relevant Xen wiki page [1]. What I didn’t know is the info I just supplied above: that dom0 waits for the driver domain to report its status through the xenstore. The networking scripts that get run in nicdom report this status, but they require some xenstore-* utilities that aren’t installed in a client domain by default.
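For the record, the manual copy itself looks roughly like this. The source paths are from my reading of the xen-unstable tree and the nicdom root is assumed to be mounted at /mnt, so adjust both for your setup:

# Run from the root of the xen-unstable tree in dom0.
cp tools/hotplug/Linux/xen-backend.rules /mnt/etc/udev/rules.d/
cp tools/hotplug/Linux/vif-* /mnt/etc/xen/scripts/
cp tools/hotplug/Linux/xen-*.sh /mnt/etc/xen/scripts/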

Worse yet, I couldn’t see any logging output from the script indicating that it was trying to execute xenstore-write and failing because there wasn’t an executable by that name on its path. Once I tracked down this problem (literally two weeks of code reading and bugging people on mailing lists) it was smooth sailing. You can install these utilities by hand to keep your nicdom as minimal as possible. What I did was copy the whole xen-unstable source tree to my home directory on nicdom with the make tools target already built. Then I just ran make -C tools install to install all of the tools.
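If you do care about keeping nicdom lean, something like this should work instead. It’s an untested sketch: the paths and the library name are what I remember from the build tree, so verify them before trusting it:

# From the built xen-unstable tree, inside nicdom: install just the
# xenstore client utilities and the library they link against.
cp tools/xenstore/xenstore-* /usr/bin/
cp tools/xenstore/libxenstore.so.3.0 /usr/lib/
ldconfig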

The full tools install is a bit heavy-handed since it drags in xend and xenstored, which we don’t need, but that’s not a big deal IMHO at this point. That’s pretty much it. If you want your vif to be created when your client VM is created, just add a vif line to its configuration:

vif=["mac=XX:XX:XX:XX:XX:XX,backend=nicdom"]

Conclusion

In short, the Xen DriverDomain wiki page [1] has nearly all the information you need to get a driver domain up and running. What it’s missing are the little configuration tweaks that likely change from time to time, and the fact that the xenstore-* tools need to be installed in the driver domain. This last bit really stumped me since there seems to be virtually no debug info that comes out of the networking scripts.

If anyone out there tries to follow this leave me some feedback. There’s a lot of info here and I’m sure I forgot something. I’m interested in any way I can make this better / more clear so let me know what you think.

[1] http://wiki.xen.org/xenwiki/DriverDomain
[2] http://wiki.xensource.com/xenwiki/XenParavirtOps
[3] http://wiki.xen.org/xenwiki/XenSplitDrivers
[4] http://wiki.xen.org/xenwiki/XenPCIpassthrough

Xen Network Driver Domain: Why

If you’ve been watching xen-users, xen-devel or the #xen channel on freenode you’ve probably seen me asking questions about setting up a driver domain. You also may have noticed that the Xen wiki page dedicated to the topic [1] is a bit light on details. I’ve even had a few people contact me directly through email and my blog to see if I have this working yet, which I think is great. I’m glad to know there are other people interested in this besides me.

This post is the first in a series in which I’ll explain how I went about configuring Xen to bind a network device to an unprivileged domain (typically called domU) and how I configured this domU (Linux) to act as a network backend for other Linux domUs. This first post will frame the problem and tell you why you should care. My next post will dig into the details of what needs to be done to set up a network driver domain.

Why

First off, why is this important? Every Xen configuration I’ve seen had all devices hosted in the administrative domain (dom0) and everything worked fine. What do we gain by removing the device from dom0 and having it hosted in a domU?

The answer to these questions is all about security. If all you care about is functionality then don’t bother configuring a driver domain. You get no new “features” and no performance improvement (that I know of). What you do get is a dom0, the most security critical domain, with a reduced attack surface.

Let’s consider an attack scenario to make this concrete: say an exploit exists in whichever Linux network driver you use, and it allows a remote attacker to send a specially crafted packet to your NIC and execute arbitrary code in your kernel. This is a worst-case scenario, probably not something that will happen, but it is possible. If dom0 is hosting this and all other devices and their drivers, your system is hosed. The attacker can manipulate all of your domUs, the data in your domUs, everything.

Now suppose you’ve bound your NIC to a domU and configured this domU to act as a network backend for other domUs. Using the same (slightly far-fetched) vulnerability as an example, have we reduced the impact of the exploit?

The driver domain makes a significant difference here. Instead of having malicious code executing in dom0, it’s in an unprivileged domain. This isn’t to say that the exploit has no effect on the system. What we have achieved though is reducing the effects of the exploit. Instead of a full system compromise we’re now faced with a single unprivileged domain compromise and a denial of service on the networking offered to the other VMs.

The attacker can take down the networking for all of your VMs, but they can’t get at their disks, they can’t shut them down, and they can’t create new VMs. Sure, the attacker could sit in the driver domain and snoop on your domUs’ traffic, but this sort of snooping is possible remotely and the appropriate use of encryption solves the disclosure problem. In the worst case the attacker could use this exploit as a first step in a more complex attack on the other VMs by manipulating the network driver frontends and possibly triggering another exploit.

In short a driver domain can reduce the attack surface of your Xen system. It’s not a cure-all but it’s good for your overall system security. I’m pretty burned out so I’ll wrap up with the matching “How” part of this post in the next day or two. Stay tuned.

[1] http://wiki.xen.org/xenwiki/DriverDomain

Debian Lenny make-kpkg broken with new kernel

If you use make-kpkg to build your kernels and you’re running Lenny you may have had problems building 2.6.34 when it came out. With kernel-package version 11.015 I’m getting the following error:

The UTS Release version in include/linux/version.h
""
does not match current version:
"2.6.34"
Please correct this

I’m sure more recent packages have this bug squashed, but on Lenny it’s still a problem. What’s happening is make-kpkg is looking for a version string in $(KERN_ROOT)/include/linux/version.h and it’s not there. Every once in a while the kernel maintainers move stuff around, and that’s exactly what happened: the UTS_RELEASE definition was moved from $(KERN_ROOT)/include/linux/utsrelease.h to $(KERN_ROOT)/include/generated/utsrelease.h.

I found it confusing that the error message lists the version.h file. Turns out this definition has been moved before, and when make-kpkg can’t find it in $(KERN_ROOT)/include/linux/utsrelease.h it falls back to version.h in the same directory. So we fix it with a quick patch.

--- ./version_vars.mk	2008-11-24 12:01:32.000000000 -0500
+++ ./version_vars.mk.new	2010-06-29 21:51:50.000000000 -0400
@@ -138,10 +138,10 @@
 EXTRAV_ARG :=
 endif
 
-UTS_RELEASE_HEADER=$(call doit,if [ -f include/linux/utsrelease.h ]; then  \
-	                       echo include/linux/utsrelease.h;            \
+UTS_RELEASE_HEADER=$(call doit,if [ -f include/generated/utsrelease.h ]; then  \
+	                       echo include/generated/utsrelease.h;            \
 	                   else                                            \
-                               echo include/linux/version.h ;              \
+                               echo include/linux/utsrelease.h ;              \
 	                   fi)
 UTS_RELEASE_VERSION=$(call doit,if [ -f $(UTS_RELEASE_HEADER) ]; then                    \
                  grep 'define UTS_RELEASE' $(UTS_RELEASE_HEADER) |                       \

Download version_vars.mk.patch.gz. Copy this patch to /usr/share/kernel-package/ruleset/misc/ and apply it:

zcat version_vars.mk.patch.gz | sudo patch -p1

If you’ve already tried to build your kernel and had it fail because of this bug, you should copy the version_vars.mk we just patched to $(KERN_ROOT)/debian/ruleset/misc/ and run make-kpkg again. This should keep you from having to rebuild the whole kernel … which takes an age on my laptop.
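Concretely, that recovery looks something like this; kernel_image is the usual target, so substitute whatever you were building:

# KERN_ROOT is wherever your kernel source lives.
cp /usr/share/kernel-package/ruleset/misc/version_vars.mk $KERN_ROOT/debian/ruleset/misc/
cd $KERN_ROOT
make-kpkg --initrd kernel_image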

Can’t wait for Squeeze to go stable but that always comes with a whole set of new problems 🙂

xend fails silently

I’ve been giving myself a crash course in Xen recently. I’ve played with Xen in the past but it’s always annoyed me that the kernel for dom0 was super old (2.6.18). Debian has always been good with their support and the 2.6.26 kernel that ships with Lenny has worked well.

Now that Xen 4.0 is out and I’m getting ready to transition over to Squeeze, I wanted to go back and find where the upstream dom0 kernel was maintained. Turns out there’s a git tree hosted on kernel.org with a stable 2.6.32 and more bleeding edge kernels as well. Building both Xen 4 and the 2.6.32.10 kernel from git was pretty uneventful. I did however run into a problem getting xend to start.

There’s nothing worse than a program that fails without giving any good indication as to why. xend failed with this error message in /var/log/xen/xend-debug.log:

Error: (111, 'Connection refused')

That’s it. Not helpful at all. I started fishing through an strace but got lost in the output pretty quickly. Luckily there was some help to be had in the #xen channel on freenode.

The folks on IRC told me it’s quite common for xend to fail quietly like this if there’s a needed Xen kernel module that isn’t loaded. The usual suspects are xen-evtchn or xen-gntdev, but it may be some other component that wasn’t compiled in. In my situation I had xen-gntdev built in, but xen-evtchn was built as a module and hadn’t been loaded. A quick modprobe later and xend was up and running. I went one step further and recompiled this piece into the kernel directly.
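The whole diagnosis and fix amounted to three commands; the xend init script path is from the stock install, so yours may differ:

# See which xen modules are actually loaded.
lsmod | grep xen
# Load the missing event channel driver and retry xend.
modprobe xen-evtchn
/etc/init.d/xend start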

Now I’m happily running Xen 4.0 with a 2.6.32 kernel on Debian Squeeze. Good times.

Running OP for the First Time

OP is a little wonky to run the first time. Not because it doesn’t work, but because it doesn’t appear to. Sounds strange, right? Well, the first time I fired it up (by running the run-op2 script in the root of the source tree) it loaded the web page of one of the students working on the project. His name is Shuo Tang [1].

Unfortunately OP loads most of his page, sometimes the whole thing, and then seems to get lost. Attempts to load any other page will fail. The little icon in the tab which indicates that it’s “thinking” continues to spin but nothing happens. If you fire up tcpdump and watch for HTTP traffic you’ll see that OP is making web requests and getting responses, but it doesn’t render the page at all.
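Something like this is enough to watch the requests go by; the interface name is whatever your VM uses:

# Print HTTP traffic in ASCII so the requests and responses are readable.
tcpdump -i eth0 -A 'tcp port 80'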

Unfortunately there isn’t a whole lot of debug output. Since I’m not very familiar with the architecture yet, there may be some debug that I just don’t know where to find. There is a warning that gets dumped to the console indicating that a clientClosed signal is being ignored, but this warning is generated by a lot of pages, not just Shuo’s page. I did get a few weird errors from X but I couldn’t seem to reproduce those reliably.

So to get OP working you need to fire it up using the run-op2 script and let it get all messed up by Shuo’s page. Then pull down the Menu button that’s in the upper right side of the screen, go into the Edit tab, and select Preferences. In here, change the default home page to something that won’t muck up the browser (I chose Google). Then restart the browser. This time you’ll start out on a different home page and hopefully this one won’t screw up OP.

I’m gonna hold off on speculating as to why Shuo’s page messes up OP. Fixing this bug would be nice but it’s not really my focus. My next post will detail how OP starts up and hopefully some architectural details (which parts talk to each other, etc.). Documenting this will be the first step towards laying out what a Type Enforcement policy (a la SELinux) for OP should look like.

[1] http://www.cs.uiuc.edu/homes/stang6/

OP browser install

Building / installing “research” software is always fun. OP was better than most as far as the building goes. There isn’t a way to install it (at least not through the build system) so we’ll leave that part out. For posterity the code I’m using is a tarball they put up on google code back in October [1]. I had hoped to use the svn tree that they advertise [2] but it’s just an empty directory, no code.

Their directions are pretty good. I started from their google code wiki page that has directions for doing the install on Ubuntu. I did the install on a Debian Squeeze system that I’m running as a VM on my laptop. OP uses the WebKit rendering engine, and those of you familiar with building WebKit already know how long it’s gonna take me to build this in a wimpy little VM 🙂

Packages

The first time you run OP’s build script it will fail. There’s a bunch of development software you’ll need that’s not part of a default Squeeze install. Save yourself some time and just apt-get install these packages:

apt-get install gcc g++ flex bison gperf qt4-qmake libqt4-dev libsqlite3-dev libphonon-dev libxext-dev x11proto-xext-dev libfontconfig1-dev

Some of these libraries, like libphonon-dev and libxext-dev, were discovered as dependencies through trial and error. That is to say, I ran the build script, it errored out with a cryptic error like a missing header file (SomethingPhonon.h, say), and then I apt-cache searched for a development package with the keywords phonon and dev until I found the right one. Trial and error is pretty time consuming when you’re compiling on a very low powered machine. Some additional packages may be pulled in as dependencies, but the above list should be enough to get you what you need. If you run into any problems recreating this let me know in the comments.
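The loop looks like this. SomethingPhonon.h is a stand-in for whatever header the build complains about, and apt-file is an extra tool you’d have to install and update first:

# Guess the package from keywords in its name...
apt-cache search phonon | grep -i dev
# ...or find the exact package that ships the missing header.
apt-file search SomethingPhonon.h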

OP recommends that you install Qt directly from Nokia, but everything built fine for me using the Qt4 shipped with Squeeze. There are webkit and libqt4-webkit packages on Squeeze and I tried these first. I’m pretty sure the libqt4-webkit package is missing some headers that OP needs, since the build failed looking for headers that are supplied in the Qt bindings from the WebKit source tree. Nothing’s perfect; just use the WebKit source and the Qt4 bindings that come with it.

Building WebKit

This is the part where I start complaining about building WebKit. Not just how long it takes (that’s my laptop’s fault) but the crazy build system. I guess I’ve been spoiled by all the great open source packages out there that build with the standard ./configure && make && sudo make install. WebKit ships with an autobuild.sh which will bootstrap the standard gnu autotools infrastructure, but it will only build the WebKit core; it won’t build the Qt bindings we need for OP.

OP goes above and beyond in that they ship a script that downloads the WebKit code from the “nightly build” that OP was developed against [3]. It applies a set of patches too.

You can build WebKit through the script supplied by OP or you can do it yourself. If you choose the latter, all you need to do is:

wget http://builds.nightly.webkit.org/files/trunk/src/WebKit-r48592.tar.bz2
tar jxvf WebKit-r48592.tar.bz2
mv WebKit-r48592 web-app/WebKit
cd WebKit; cat ../webkit_patches/*r48592.diff | patch -p0; cd ../
./web-app/WebKit/WebKitTools/Scripts/build.sh --qt --release

That’s downloading the right nightly build, extracting it, renaming the directory (OP has this path hard coded in their scripts), patching it, and running the build script. I did this manually because the build kept failing and I wanted to narrow down the problem. Even with all of the right libraries installed I was getting a strange error from g++ indicating that there was a bug in the compiler itself:

g++: Internal error: Killed (program cc1plus)
Please submit a full bug report.
See <file:///usr/share/doc/gcc-4.4/README.Bugs> for instructions.
make[1]: *** [obj/release/FrameLoader.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory `/home/myuname/opbrowser-release-2009_09_30/webapp/WebKit/WebKitBuild/Release/WebCore'
make: *** [sub-WebCore-make_default-ordered] Error 2

Google this error real quick and you’ll find that the problem is usually the machine doing the compile running out of RAM. There are some great mailing list posts from people compiling glibc on a system with 32MB of RAM and 128MB of swap space. That makes my VM look like a supercomputer (512MB of RAM and a 1GB swap disk), so my first reaction was to think that there was no way I was running out of RAM.

More swap!!

So how to test this? Run the build again and this time run free -m -s 2 in parallel. This polls your RAM and swap usage every 2 seconds, printing the numbers to the console. Sure enough, the build was using up all of my RAM and swap, which is pretty ridiculous IMHO.

So just throw more RAM at it, right? Getting KVM to give this VM more RAM takes a restart, so I restarted it with 1GB of RAM (double what it had previously) and left the 1GB of swap alone. FAIL: free still showed us running out of both RAM and swap.

OK, no messing around this time. On my host system I allocated a 5GB logical volume and passed it to the VM as an additional hard disk (another restart of the VM). I then dropped the old swap space and set this 5GB disk up as swap. This turned out to be enough. Watching free showed my swap usage going well over 2GB … jeez, that’s greedy. Something in this build is assuming that my system is pretty beefy.
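Inside the VM the switch looked roughly like this. The device name is an assumption; check dmesg for what KVM actually exposed to you:

# Stop using the old 1GB swap, format the new disk, and switch over.
swapoff -a
mkswap /dev/vdb
swapon /dev/vdb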

One final problem I ran into was a significant number of undefined references turning up in the final linking. This was from the build failing so many times previously. Typically you’d hope the build system would rebuild anything that failed, but that’s not the case here. In fact, even if you run the build script with the --clean switch it doesn’t clean enough to remove broken object files. I had to manually delete the WebKitBuild directory, which is under the WebKit root, and rebuild WebKit one last time. You’ll see this message when you’re done:

===========================================================
 WebKit is now built (2h:45m:25s). 
 To run QtLauncher with this newly-built code, use the
 "./run-launcher" script.
===========================================================

That’s right, almost 3 hours to build this beast.

Building OP

In comparison to WebKit, building OP was a breeze. The only additional dependencies were a Java toolchain and a few libraries from boost [4]. These are as follows:

apt-get install ant sun-java6-jdk libboost-dev libboost-regex-dev

Installing libboost-regex-dev will pull in a bunch of boost dev packages one of which is libboost-dev. I’ve included libboost-dev in the list above just for completeness.

I’m pretty sure the OpenJDK java packages would work but since we’re trying to minimize the possible problems we may run into I just grabbed the “non-free” sun packages. That way if I end up having to get in touch with the guys that wrote OP with a question / problem they won’t have the opportunity to say “we don’t support the OpenJDK packages, make sure you’re using the genuine Sun (Oracle?) Java”. If anyone gives this a go with the OpenJDK packages let me know how it turns out in the comments.

OP ships with a build script in its root named build.sh. Once you’re done accepting the licensing agreement ::sigh:: run it and OP should be good to go. Again, this makes some assumptions about your system since it passes make the -j4 flag, which runs four jobs in parallel; that’s what you’d want on a multi-core machine. Since my VM only has one CPU, I went through and removed it:

cat build.sh | sed 's/-j4//' > build-single.sh

Then run it and you should be good to go.

I couldn’t figure out how to actually install WebKit once it was built. OP gets around this by setting the LD_LIBRARY_PATH and DYLD_LIBRARY_PATH environment variables in its launch script to contain the WebKit release library directory: web-app/WebKit/WebKitBuild/Release/lib. There is also a hard coded reference to that path in some of OP’s Makefiles.

This can cause a problem if you didn’t pass the WebKit build script the --release flag (maybe you built it with --debug instead). OP won’t build right in this case; it will fail complaining about a bunch of undefined references. If you did this by mistake and you don’t want to rebuild WebKit (because it takes around 3 hours), you can just use a soft link.
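Assuming the debug build landed in WebKitBuild/Debug next to where Release would have been (the layout I saw), the link is just:

# Make the Release path OP expects point at the Debug build.
cd web-app/WebKit/WebKitBuild
ln -s Debug Release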

So now it’s built. This post is long enough so I’ll comment on running OP next time.

[1] http://code.google.com/p/op-web-browser/downloads/list
[2] http://code.google.com/p/op-web-browser/source/checkout
[3] http://builds.nightly.webkit.org/files/trunk/src/WebKit-r48592.tar.bz2
[4] http://www.boost.org/

CIS791: OP browser based project statement

As part of the web security class I’m taking this semester we’re required to put together a project in the last six-ish weeks of class. The intent is to get us familiar with doing research and formulating a project based on that research. This project is a short one; really I don’t expect to do much more than scratch the surface and show how cool it could be given enough time. Gotta have something interesting to talk about by the end of the semester, but little more. What I don’t want it to be, though, is a boring project that evaluates some platform or technology.

After doing a bunch of reading about “safe” subsets of Java and JavaScript (which has decidedly no relation to Java) I came to the conclusion that JavaScript is a mess. I love the idea of object capability languages, but as close as some have gotten to a safe subset of JavaScript, I’m pretty sure trying to fix JavaScript isn’t for me.

the OP web browser

After reading the OP web browser paper when it was published back in 2008 I started looking for a reason to play around with it. It’s much more in line with my interests (software architecture as well as security). I especially like the idea of structuring an application using well established architectural concepts from OS design.

Separation is a huge deal in the OS (think separate process address spaces) and in the application space it’s making a comeback. Breaking the browser up into smaller components with well defined interfaces and communication semantics is a great idea. From the security perspective it keeps browser plugins / components from stomping all over each other when one gets compromised. It’s also an excellent way to exploit multi-core systems. I always get super pissed when my flash player pins one of my CPUs at 100% and the whole browser gets dragged down with it while the other CPU is sitting idle.

potential contribution of this work

What I’m interested in is, of course, learning some of the insides of this browser. But more specifically I’m interested in the code that is interposed between the different components of the browser (aptly named the kernel) and how much like a reference monitor it is or can be. The range of security policies it does or could enforce would also be very interesting to discuss.

This latter point may actually require some work: it’s likely that OP could enforce any number of policies, but that may take some heavy lifting to generalize the enforcement logic (this is pure speculation at this point). This would start looking like the LSM work from the Linux kernel.

When we start talking about policies, the granularity with which policies can be specified becomes important. Subjects and objects in operating systems are well understood for the most part. In a web browser that’s not so clear. The obvious things that come to mind are plugins (including instances of the JavaScript engine) as subjects, and components of the DOM and passive user data (cookies, history, saved passwords, etc.) as objects.

Being the SELinux fanboy that I am, I’m pretty convinced that a lot of the enforcement can be done in a user space object manager. This gives us the policy language (Type Enforcement) and policy enforcement point (the Linux kernel) for free, but leaves the details of object definition and labeling to us. It also does nothing for us as far as verifying the expected properties of our reference monitor (complete mediation, tamperproofness, and analysis for correctness). The design of OP itself will likely be a big help in verifying these properties, but this isn’t something I plan to spend much time on.

Conclusion

So there isn’t much to conclude yet. This is a basic statement of the project I’m undertaking for the rest of this semester, a jumping off point. Hopefully it won’t be too painful, but just getting OP to install is a non-trivial task (I’m currently waiting for WebKit to compile). I’ve already run into some quirks in their build system which were pretty easily fixed, but there isn’t a mailing list for the project or anything, so I’m trying to track down a way to communicate with the project owners beyond sending emails to their personal accounts. We’ll see how receptive they are to suggestions soon enough.

MAC SEED Lab Requirements

It’s been way too long since my last post on this subject. I’ve been rolling around ideas in my head for the elements that will make this lab good, or bad for that matter. I’m just going to dump them here and refine them either as I run across new requirements or as I realize that something on this list is a bad idea.

  • The lab task should focus on policy development and what it means to the system as a whole. Integrity and secrecy should both be addressed as part of the lab.
  • Setup of the lab should take no effort on the part of the student. SELinux should already be installed with a known good policy on their systems. The only things they should be concerned with are writing policy (and maybe some code to confine), building the policy, inserting it into the kernel, and debugging the output.
  • This lab is intended to reinforce the mandatory access control concept. It’s not an SELinux lab per se. SELinux is just the MAC system used to reinforce the MAC concept. This implies that the Lab shouldn’t be bogged down in the details of managing an SELinux system. Since the labs are intended to be run from a Ubuntu VM we have to be sure SELinux is well supported or already set up on this VM.
  • Following the previous point it’s important that the MAC concepts from the class lecture be incorporated into the lab explicitly. It’s been a while since I took the class that this lab will be taught in so getting a copy of the lecture notes and ensuring I’m reinforcing the right concepts is important. I may need to make suggestions for new topics to be discussed in class but I’ll try to keep changes to the lecture minimal.
  • It would be nice if we could make practical links to a previous lab showing how MAC can defend against specific attacks. There is a lab used in this class showing buffer overflows at work. Showing the code previously developed for the buffer overflow lab thwarted by SELinux would be cool. After walking through an example this would make a good independent task for the students to undertake.
  • Simplicity. Keeping the policy we develop from getting too scary is a must.
  • The Reference Policy is by far the policy “language” to use when developing “real” policy, but exposure to the raw policy is a must. It may make sense to have students determine the raw policy needed to perform a task and then hunt through the Reference Policy interfaces searching for the right interface to use (see the sketch just after this list). That could get ugly though, and may not even be practical since reading the Reference Policy requires a certain amount of skill. This one’s gonna take some thought.
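To make that contrast concrete, here’s the flavor of thing I have in mind. The student_t type is hypothetical; files_read_etc_files is a real Reference Policy interface:

# Raw policy: the access spelled out as an allow rule.
allow student_t etc_t : file { read getattr open };
# Reference Policy: the same access requested through an interface.
files_read_etc_files(student_t)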

I’m going to let these sit for a day or two and do some thinking. Refinements will follow, as will a task list derived from the “final” requirements. I know, requirements are never final, but I’ll pretend they are when I move on to the tasks … at least until they change. I’m interested in any comments or suggestions the interwebs may have, so let me know what you think.