How XMir and Mir fit together

Mir is Canonical's new display server. It fulfils a broadly similar role to Wayland and Android's Surfaceflinger, in that it takes final responsibility for getting pixels onto the screen. XMir is an X server that runs on top of Mir. It permits applications that know how to speak the X protocol but don't know how to speak to Mir (ie, approximately all of them at present) to run in a Mir-based environment.

For Ubuntu 13.10, Canonical are proposing to use Mir by default. This doesn't mean that most applications will be using Mir, though - instead, the default session will run XMir as a full-screen client and a normal X environment will be run on that. This lets Canonical deploy Mir without forcing anyone to update their applications, allowing them to take a gradual approach. By 14.10, Canonical expect the default Unity session to be a Mir client rather than an X client. In theory it will then be possible to run an Ubuntu system without any X applications at all, leaving XMir to do nothing other than run legacy applications.

Of course, this requires a certain amount of replumbing. X would normally be responsible for doing things like setting up the screen and pushing the pixels out to the hardware, but this is now handled by Mir instead. Where a native X server would allocate a framebuffer in video memory and render into it, XMir asks Mir for a window corresponding to the size of the screen, renders into that and then simply asks Mir to display it. This step is actually more interesting than it sounds.

Unless you're willing to throw lots of CPU at them, unaccelerated graphics are slow. Even if you are, you're going to end up consuming more power for the same performance, so XMir would be impractical if it didn't provide access to accelerated hardware graphics functions. It makes use of the existing Xorg accelerated X drivers to do this, which is as simple as telling the drivers to render into the window that XMir requested from Mir rather than into the video framebuffer directly. In other words, when displaying through XMir, you're using exactly the same display driver stack as you would be if you were using Xorg. In theory you'd expect identical performance - in practice there's a 10-20% performance hit right now, but that's being actively worked on. Fullscreen 3D apps will also currently take a hit due to there being no support for skipping compositing, which is being fixed. XMir should certainly be capable of performing around as well as native X, but there's no reason for it to be any faster.

The output drivers aren't the only part of the stack. XMir still needs some way to get input events. You might naively expect that Mir would forward input events to XMir, but the current state of affairs is that XMir loads the existing X input drivers which in turn open the input devices themselves. This explains why XMir currently shows two cursors[1] - the first cursor is the Mir cursor that's being driven by the input events Mir is receiving, while the second is the cursor being drawn by XMir in response to the input events that it's receiving[2].

The final main piece of duplicated functionality is output management. In the case where X is its own display server, it can simply ask the kernel to program the outputs. XMir can't do that, so would have to ask Mir to do it instead. Sadly, that functionality's not implemented yet. Running xrandr on XMir gives:

Screen 0: minimum 320 x 320, current 1366 x 768, maximum 8192 x 8192
XMIR-1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   XMIR mode of death[3]   60.0*+
which translates as "I have a screen that can be any size from 320x320 to 8192x8192, and have one output of unknown physical size that can only display 1366x768 at 60Hz". Other than the 1366x768, the result will be identical no matter how many outputs you have - XMir currently has no support for configuring multiple displays, and will always report 60Hz. It also has no support for placing monitors into any kind of DPMS state.

Obviously, this is still software under active development. There are three months between now and when Ubuntu is due to release with this functionality enabled by default, and the only significant missing features are xrandr and DPMS support. As long as Mir exposes interfaces for controlling those, they shouldn't be a problem to implement. To most users, the transition from native Xorg to XMir should be completely invisible.

Which is kind of the problem. What benefits does the user gain from using XMir on Mir rather than native Xorg?

Features? XMir is a total of around 1000 lines of code on top of Xorg. It implements no functionality that isn't present in Xorg.

Performance? XMir effectively is Xorg - it's the same code running the same drivers. It's not going to be any faster. At the moment it's actually slower, but that's probably fixable.

Security? Again, XMir is effectively Xorg. It still runs as root. It has the same privileges. There's no security benefit to using XMir instead of Xorg.

Like I said, the transition should be completely invisible - no drawbacks, but no benefits. What it does do is get Mir deployed on a larger number of systems, which means Canonical can both get wider testing of the basic Mir functionality and argue that the number of deployed Mir systems means hardware vendors should support it. Users won't see any of the benefits until Unity transitions to being a native Mir client, which is slated for 14.10 next year.

As a PR move, it's pretty great. The public perception is that Mir has gone from not existing to being sufficient to run a full desktop environment in very little time, and it's natural to compare it to Wayland, which has been in development for longer and still isn't running a fully featured desktop session. The problem with that perception is that Mir is doing very little at the moment. It's setting an output mode (by asking the kernel to), allocating a window for XMir to draw into and then pushing that window onto the screen whenever XMir tells it that it's changed. This isn't advanced display server functionality, it's the bare minimum that a display server could possibly do[4]. Which is a shame, because Mir is significantly more functional than that - it's just not being used to do anything we couldn't already do.

In summary: XMir on Mir in Ubuntu provides no user benefits and isn't a compelling technology demo. Mir itself will permit a range of additional features, but isn't slated to be running a user session itself until 14.10. The only obvious benefit to Canonical in shipping XMir on Mir is to gain additional testing, which makes using it in 14.04, a supposedly stable and long term release, a somewhat surprising choice.

[1] Although this is in the process of being fixed
[2] This also explains why, on systems with a trackpoint and a touchpad, only the trackpoint moves the Mir cursor while both move the XMir cursor. The X touchpad driver has put the touchpad in absolute mode, which means it's now reporting events that Mir currently ignores.
[3] "XMIR mode of death" is hardcoded into XMir. I'm really hoping it's a Regular Show reference.
[4] So why hasn't anyone done this with Wayland? Mostly because, as described above, there's no technical benefit in doing so. Wayland does have an X server for compatibility purposes, and if you wanted you could run it as a full screen client and run an entire session underneath it. But you'd gain nothing by doing so. The Wayland X server is intended for running individual clients rather than an entire X session. Run an X client under Wayland and it'll pop up in its own individual window and be managed by your Wayland session's window manager, just like it would under X. XMir currently has no support for this "rootless" mode - right now if you want to run X apps under Mir, you'll need to launch an entire X session with its own window manager.

Syndicated 2013-07-15 16:10:30 from Matthew Garrett

Explaining Intel Rapid Start Technology

Recent Intel-based systems often implement something called Intel Rapid Start Technology. Like many things with the word "Technology" in the name, there's a large part of this that's marketing. The relatively small amount of technical documentation available implies that it's tied to your motherboard chipset and CPU, but as far as I can tell it's entirely implemented in firmware and could work just as well on, say, a Cyrix on a circa 1996 SIS-based motherboard if someone wrote the BIOS code[1]. But since nobody has, we're stuck with the vendors who've met Intel's requirements and licensed the code.

The concept of IRST is pretty simple. There's a firmware mechanism for setting a sleep timeout. If you suspend your computer and this timeout expires, it'll resume. However, instead of handing control back to the OS, the firmware just copies the entire contents of RAM to a special partition and turns the computer off. Next time you hit the power button, the firmware dumps the partition contents back into RAM and resumes as if nothing had changed. This takes a few seconds longer than resume from S3 but is far faster than resume from hibernation since it starts the moment the system gets power.

At a more technical level, it's a little more complicated. The first thing to know about this feature is that it's entirely invisible unless your hard drive is set up correctly. There needs to be a partition that's at least the size of your system's physical RAM. For GPT systems, this needs to have a type GUID of D3BFE2DE-3DAF-11DF-BA40-E3A556D89593. For MBR systems, you need a partition type of 0x84[2]. If the firmware doesn't find an appropriate partition then the OS will get no indication that the firmware supports it. Boo.

(The second thing is that it seems like it really does have to be on an SSD, and if you try to do this on spinning media your firmware will ignore it anyway)

If all the prerequisites are in place, an ACPI device with an HID of INT3392 will exist. It has four methods associated with it: GFFS, SFFS, GFTV and SFTV. GFFS returns an integer representing the events that will cause the system to wake up from S3 and suspend to SSD. The system will wake after the timeout expires if bit 0 is set, and will wake when the battery becomes critically low if bit 1 is set. The other bits appear to be unused at the moment[3]. SFFS sets the wakeup events, using the same bit values as GFFS. GFTV returns an integer containing the current wakeup timeout in minutes. SFTV sets it. Values above 1440 (ie, 24 hours) seem to be considered invalid - if I set them the value instead ends up as 10 and the timeout flag gets cleared from the wakeup events field.

I've submitted a patch that adds a sysfs interface for setting these values, and unless anyone objects it'll probably end up in 3.11. There's still the remaining question of how userspace should make use of these, and also how installers should behave when it comes to systems that support IRST. As previously mentioned, there's no obvious indication to the OS that the feature is supported unless the appropriate partition already exists. The easiest way to deal with this is for installers to default to retaining any partitions with the magic IDs, but I'm still looking into whether it's possible to get the firmware to cough up some more information so it can be created automatically even if the drive's entirely blank.
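
To illustrate, here's roughly what driving that interface might look like from userspace. The attribute names and device path below are provisional guesses rather than a stable ABI - the patch hasn't been merged yet, so check what your kernel actually exposes:

    /* Sketch: configure IRST via the proposed sysfs interface.
     * Paths and attribute names are provisional - check your kernel. */
    #include <stdio.h>

    /* Bit values used by GFFS/SFFS, as described above */
    #define IRST_WAKE_TIMEOUT (1 << 0)  /* wake when the timeout expires */
    #define IRST_WAKE_BATTERY (1 << 1)  /* wake on critically low battery */

    static int write_attr(const char *path, int value)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fprintf(f, "%d\n", value);
        return fclose(f);
    }

    int main(void)
    {
        const char *dev = "/sys/bus/acpi/devices/INT3392:00";
        char path[256];

        /* Suspend to SSD two hours after entering S3 (GFTV/SFTV take
         * minutes, with 1440 apparently the maximum) */
        snprintf(path, sizeof(path), "%s/wakeup_time", dev);
        if (write_attr(path, 120))
            return 1;

        /* Wake on timeout expiry or critical battery (GFFS/SFFS bits) */
        snprintf(path, sizeof(path), "%s/wakeup_events", dev);
        return write_attr(path, IRST_WAKE_TIMEOUT | IRST_WAKE_BATTERY) ? 1 : 0;
    }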

And now, having got this working on a test machine, I just need to split my Thinkpad's swap partition in half and make sure it works here as well. Woo.

[1] Note: I am not going to do this.
[2] Conveniently, the same as the partition type that APM systems used for suspend to disk back when dubstep hadn't been invented yet
[3] At least, if you attempt to set them they get ignored.

Syndicated 2013-07-03 16:50:49 from Matthew Garrett

Automating HP server BIOS setup

One of the goals of our work at Nebula is making it as easy as possible for someone to set up a private cloud. In an ideal world that would involve being able to just plug in the controller hardware, wire up a rack of servers and turn everything on, but right now there are cases where the default firmware configuration on the servers doesn't match our desired configuration. Plugging a console into individual servers just to set some BIOS options is (based on personal experience) about as much fun as writing a doctoral thesis on the experience of watching paint dry, so it seemed worth trying to find a way to avoid people having to deal with that. Thankfully, it turns out that the industry has come to a similar set of conclusions. Recent Dell hardware lets you use WS-MAN, which makes it easy to do things like enable security features as long as you have an authenticated connection to the iDRAC management system. This actually works out wonderfully - the first time a Dell node sends a PXE request to the Cloud Controller, it can push back some configuration changes, reboot the system and then boot it in a known good configuration. Thanks, Dell![1].

HP's slightly trickier. As far as I've been able to work out, the remotely-available programmatic interfaces only provide configuration interfaces to the iLO device itself, along with a range of chassis-level monitoring. Instead, HP provide a couple of utilities that allow the OS to change values. The first of these is called conrep. Conrep is able to modify BIOS settings, but needs an XML file which tells it which configuration values are at which addresses. That's a bit of a pain. Thankfully it's being replaced with a new tool called hprcu which is able to directly query the firmware in order to figure out which configuration options are available and which values have to be stored where in order to set them. You run hprcu once and it spits out a file containing your existing settings. You modify that, feed it back into hprcu, reboot and you're set.

Sounds pretty ideal. What's the problem? The first is that hprcu is currently only shipped as a 32-bit binary, and right now we don't deploy any 32-bit support code on our nodes. It'd be irritating to have to do so just to configure the firmware. The second is that it works by doing raw port IO, and that's not going to be possible from userspace once we've moved to a UEFI Secure Boot setup.

So, obviously, I've been working on reimplementing it. The first step was to figure out what it was actually doing, so I straced it to see if it was using the kernel IPMI interface. strace showed no accesses to /dev/ipmi*. My next thought was that it could be using some sort of shared memory segment with the iLO hardware, so I used MMIOTrace to dump the memory accesses it made. Turned out there weren't many, and certainly not enough to do anything interesting. Then I went back to the strace logs and saw that it was calling iopl() a bunch, and then I got sad.

iopl() allows a userspace application to change its io privilege levels. The Linux manpage for the iopl() call is amazingly helpful, informing us with a straight face that "This call is necessary to allow 8514-compatible X servers to run under Linux." What it actually does is grant userspace access to the full range of io ports.

So, what's an io port? The x86 architecture (and some others) provide two ways to communicate with hardware. The one that's mostly used these days is to map the hardware into the same address space as memory (memory-mapped io), but x86 has an entirely separate address range that can be used. This is significantly more limited, as only 65,536 addresses are available for the entire system. Further, you can only read or write a maximum of 4 bytes in a single transaction. However, while slower and less generally useful than memory-mapped io, port io has the advantage that it's much simpler to implement. As a result, a lot of the original PC hardware was intended to be accessed via port io, and a bunch of that's still present. Want to reprogram your real-time clock? Port io. Want to read from the legacy keyboard controller? Port io. For low-bandwidth transactions, it's a completely reasonable way to implement hardware.
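
As a quick illustration, here's what userspace port io looks like - a minimal sketch that reads the seconds register from the real-time clock I just mentioned, using the indexed 0x70/0x71 interface. It needs root, and poking io ports on a machine you care about is at your own risk:

    /* Minimal userspace port io on Linux/x86: read the RTC seconds
     * register via the indexed 0x70/0x71 interface. Needs root. */
    #include <stdio.h>
    #include <sys/io.h>

    int main(void)
    {
        if (iopl(3)) {            /* raise our io privilege level */
            perror("iopl");
            return 1;
        }

        outb(0x00, 0x70);         /* write the register index... */
        /* ...then read the value (BCD, so it reads naturally in hex) */
        printf("RTC seconds: %02x\n", inb(0x71));
        return 0;
    }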

Applications[2] perform port io by executing in and out instructions. Since these instructions allow you to do things like, say, hit the keyboard controller directly[3], userspace is generally forbidden from calling them. Sufficiently privileged applications can call iopl() to raise their privileges and gain access to the full set of io ports. But, since in and out are CPU-level instructions, the actual port io accesses won't show up in strace. So I had to take another approach.

The correct way of doing this would be to LD_PRELOAD something that intercepted iopl(), didn't call it, let the application perform an in or out instruction, caught the fault, looked at the stack frame, called iopl(), performed the access, dumped debug details, dropped iopl() again and restored state. Because that seemed like a bunch of work, I took advantage of the fact that hprcu had separate in and out functions and used gdb. I told gdb to break on every call to in or out, dump the register state and then continue. Then I hacked up a script to parse the register dumps and tell me where the reads and writes were going. Astonishingly, it worked. And then I read the results and got really sad.

PCI setup is hard, if you're a BIOS. You've got PCI devices with large memory windows and you've got to arrange them all somehow and look you've only got 640K of RAM you can access while you're doing this and come on, seriously? So PC-type systems define a port IO interface to performing PCI configuration. Each PCI device has an address made up of a bus, a device and a function. You take a 32-bit integer, set the top bit to indicate that you want PCI access, put the bus in bits 16-23, the device in bits 11-15, the function in bits 8-10 and the configuration register on that device you want to access in bits 0-7. Then you write that to io port 0xcf8. Reads or writes to io port 0xcfc then read or write from that configuration register on that device. This lets the BIOS tell each PCI device where it's going to live in mmio address space without having to actually perform any mmio. Good work, BIOS developers.
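
In code, that looks something like this sketch - reading the vendor and device ID of the host bridge via 0xcf8/0xcfc. Note that it suffers from exactly the race condition described below, so it's for illustration only:

    /* Sketch: legacy PCI configuration access via io ports 0xcf8/0xcfc */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>

    static uint32_t pci_read_config(uint8_t bus, uint8_t dev, uint8_t fn,
                                    uint8_t reg)
    {
        uint32_t addr = (1u << 31) |         /* config access enable bit */
                        ((uint32_t)bus << 16) |
                        ((uint32_t)dev << 11) |
                        ((uint32_t)fn << 8) |
                        (reg & 0xfc);        /* dword-aligned register */

        outl(addr, 0xcf8);                   /* select bus/device/function/reg */
        return inl(0xcfc);                   /* read the selected register */
    }

    int main(void)
    {
        if (iopl(3))
            return 1;

        /* Register 0 holds the vendor and device ID; 00:00.0 is the
         * host bridge on most systems */
        printf("%08x\n", pci_read_config(0, 0, 0, 0));
        return 0;
    }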

This works fine in the BIOS, because nothing else is running while the BIOS is. It even works fine in the kernel, which uses this approach on older hardware that doesn't provide a memory-mapped mechanism to access PCI configuration registers. But it works really badly if you're doing it in userspace, because it shares a problem with all other indexed accesses. You write an address to 0xcf8. You write a value to 0xcfc. What happens if the kernel writes to 0xcf8 before you write to 0xcfc? Your access goes to some other register on some other device and suddenly you've just mapped a PCI device over the top of RAM and well I sure hope you saved all your work.

So, I wasn't thrilled to discover that hprcu was communicating with the iLO by using 0xcf8/0xcfc port io accesses. Not only was it going to stop working once we started deploying UEFI Secure Boot, it had the potential to cause really annoyingly hard to debug problems. Working out how to reimplement it became something of a priority.

By looking at the XML files it generated, and by following the port IO accesses it was performing, I could figure out pretty much how it worked. Some configuration values are stored in the real-time clock CMOS. These were just done in the standard way - write the address to port 0x70, read or write the value to port 0x71[4]. Other configuration values were stored in NVRAM, with accesses going via PCI. This was a little trickier to figure out, because sometimes different NVRAM addresses went to the same PCI address. I finally figured it out - there's a 48 byte window into NVRAM via the PCI configuration registers, and another register which chooses which set of NVRAM is visible. So, take the NVRAM address, divide by 48, write that value to register 0xa6, take the modulus, add it to 0x80 and write the desired value to that address.
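
Putting the pieces together, the access scheme looks roughly like the following sketch. The iLO's PCI address is a placeholder here, and the exact window semantics are my reconstruction from watching hprcu rather than anything documented:

    /* Sketch of hprcu's NVRAM access scheme: NVRAM appears 48 bytes at
     * a time in PCI config space, with config register 0xa6 selecting
     * the bank and the window itself starting at register 0x80. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>

    /* Placeholder PCI address for the iLO - not the real location */
    #define ILO_BUS 0x01
    #define ILO_DEV 0x00
    #define ILO_FN  0x00

    static void cfg_select(uint8_t reg)
    {
        outl((1u << 31) | (ILO_BUS << 16) | (ILO_DEV << 11) |
             (ILO_FN << 8) | (reg & 0xfc), 0xcf8);
    }

    static uint8_t cfg_read8(uint8_t reg)
    {
        cfg_select(reg);
        return inb(0xcfc + (reg & 3));
    }

    static void cfg_write8(uint8_t reg, uint8_t val)
    {
        cfg_select(reg);
        outb(val, 0xcfc + (reg & 3));
    }

    static uint8_t nvram_read(unsigned int addr)
    {
        cfg_write8(0xa6, addr / 48);          /* pick the 48-byte bank */
        return cfg_read8(0x80 + addr % 48);   /* read within the window */
    }

    int main(void)
    {
        if (iopl(3))
            return 1;
        printf("nvram[0] = %#04x\n", nvram_read(0));
        return 0;
    }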

So now I knew enough to be able to take an existing XML file and deploy it. Definitely progress, and I could even add support to the kernel. But I still wanted to know how hprcu generated that XML file in the first place. Running strings over the binary showed a bunch of debug output that it never actually printed, but immediately after the help text it also printed SsLlFf:HhAaTtOoDd. I've lost enough years of my life to this kind of thing to be able to identify that as a getopt format string, and there was that tantalising D option at the end that went entirely unmentioned in the help text. Re-running hprcu with the -D argument gave me huge piles of debug output, including references to a DMI entry and an $RBS table.

DMI is a standard for exposing system information, and one aspect of it is providing data from the firmware to the operating system. The OS can look for a defined signature in a fixed area of memory and then pull out a bunch of data about the hardware, including things like the vendor name, model, serial number, BIOS version and more. Some of these information tables are standardised and some are vendor-specific. hprcu was looking for a vendor-specific table and then scanning it for a known signature. That table turned out to contain a set of signatures and addresses. The address corresponding to the signature hprcu was looking for contained a bunch of data starting with "$RBS". It also contained a huge quantity of ASCII that looked awfully like the strings in the XML file that hprcu had written. Success!

So, the rest of today was spent on working out the format of this table. This was only marginally less tedious than setting up BIOS settings on 20 servers by hand, but I've made enough progress to be able to figure out how to write a kernel-level driver for this. That's about half-done now - the parsing code is all implemented, I just need to add the sysfs glue and I am nowhere near drunk enough for that right now[5], so for now you're just getting a format description.

To start with, look for a DMI table with a type of 0xe5. This contains several entries of 4 bytes of signature, 8 bytes of address and 4 bytes of length. Find the entry with a signature of "$CRQ" and map the corresponding address. The first four bytes of the mapped region should be "$RBS". The first record is 21 bytes into the file. Each record starts with a single byte representing the type and two bytes representing the record length. Records of type 1 contain an ascii string that starts 8 bytes into the record, and have a further type number 6 bytes into the record that tells you what type of string it is. 0x05 indicates that it's a "Feature string", which defines the overall classification of the following features. 0x06 is an "Option string", which provides a human readable explanation of what this configuration value corresponds to. 0x60 is an "Optional warning string", which tells the user which further configuration may be required before the configuration change takes effect.

All other types appear to be configuration options. Bytes 3 to 6 provide the little-endian name of the configuration option. Bytes 7 and 8 provide a unique numerical identifier for the configuration option. Byte 13 indicates the type. 0x03 is stored in CMOS. 0x04 is stored in NVRAM. 0x05 appears to be some different kind of CMOS store. I haven't figured out the others. There then follows a set of choices. These vary in length depending on the type - 0x03 are 14 bytes long, 0x04 are 6 bytes long, 0x05 are 5 bytes long. Type 0x03 choices have the choice id at byte 0, the CMOS offset at byte 2, the mask at byte 3 and the value at byte 4. Type 0x04 choices have the choice id at byte 0, the nvram offset at bytes 1 and 2, the mask at byte 3 and the value at byte 4. Type 0x05 choices have the choice id at byte 0, the cmos offset at byte 1, the mask at byte 2 and the value at byte 3. There are optional flags following each choice - if the final byte of the choice isn't 0, there then follow 6 bytes of flags. The first four bytes provide the name of a configuration option (little-endian, again), the fifth byte refers to a choice on that option and the sixth byte indicates what kind of flag. 0x1 appears to indicate that the option is mutually exclusive with that other option. Examples include the embedded serial port and virtual serial port options, where it's impossible to map both to the same address at once. A choice can have multiple flags.

After a configuration option, there'll be a set of option string records. In turn, these correspond to the previous choices - if a configuration option had three choices, there'll be three option string records. I haven't figured out whether there's a stronger way of binding the option string records to the configuration choice yet. The other thing I haven't entirely figured out are the details of configuring the platform to perform a cold reboot on the next power cycle, but the io traces for that don't look too bad.
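
Here's a sketch of walking that record stream, assuming (and this is an assumption - I haven't verified it against enough tables) that the length field is little-endian and covers the whole record including its header:

    /* Sketch: iterate over $RBS records, given the mapped $CRQ region */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void walk_rbs(const uint8_t *t, size_t len)
    {
        size_t off = 21;                 /* first record is 21 bytes in */

        if (len < 4 || memcmp(t, "$RBS", 4) != 0)
            return;

        while (off + 3 <= len) {
            uint8_t type = t[off];       /* one byte of record type... */
            uint16_t rlen = t[off + 1] | (t[off + 2] << 8); /* ...two of length */

            if (rlen < 3 || off + rlen > len)
                break;

            if (type == 1 && rlen > 8) {
                /* String record: subtype at offset 6 (0x05 feature string,
                 * 0x06 option string, 0x60 optional warning), text at 8 */
                printf("string %#04x: %.*s\n", t[off + 6],
                       (int)(rlen - 8), (const char *)(t + off + 8));
            } else if (rlen > 13) {
                /* Configuration option: 4-byte name at offset 3, 16-bit
                 * id at offset 7, storage type (0x03/0x04/0x05) at 13 */
                printf("option %.4s id %d store %#04x\n",
                       (const char *)(t + off + 3),
                       t[off + 7] | (t[off + 8] << 8), t[off + 13]);
            }
            off += rlen;
        }
    }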

The aim is to provide a kernel driver that exposes all these configuration options via sysfs, including indicating the current value, the available values and letting the user set a new value. In an ideal world we'd have a wonderfully generic interface to this kind of functionality, but I'm (sadly) not sure that that's possible.

And that's how I spent the past two days.

[1] (Thell)
[2] Or, indeed, kernels
[3] And, hence, read your keystrokes
[4] Which has exactly the same problems as the PCI access, in that if you happen to ask the clock what the time is while hprcu is accessing CMOS, at least one of you is going to get very confused
[5] Obviously, I would never write kernel code while drunk. You're certainly not running any of it in your enterprise kernel.

Syndicated 2013-06-27 05:47:47 from Matthew Garrett

Mir, the Canonical CLA and skewing the playing field

Mir is Canonical's equivalent to Wayland - a display server, responsible for getting application pixmaps onto a screen. It's intended to scale from mobile devices to the desktop, and as such is expected to turn up in Ubuntu Phone before too long[1]. There's already plenty of discussion about whether the technical differences between Wayland and Mir are sufficient to justify Canonical going their own way, so I'm not planning on talking about that.

Like many Canonical-led projects, Mir is under GPLv3 - a strong copyleft license. There's a couple of aspects of GPLv3 that are intended to protect users from being unable to make use of the rights that the license grants them. The first is that if GPLv3 code is shipped as part of a user product, it must be possible for the user to replace that GPLv3 code. That's a problem if your device is intended to be locked down enough that it can only run vendor code. The second is that it grants an explicit patent license to downstream recipients, permitting them to make use of those patents in derivative works.

One of the consequences of these obligations is that companies whose business models depend on either selling locked-down devices or licensing patents tend to be fairly reluctant to ship GPLv3 software. In effect, this is GPLv3 acting entirely as intended - unless you're willing to guarantee that a user can exercise the freedoms defined by the free software definition, you don't get to ship GPLv3 material. Some companies have decided that shipping GPLv3 code would be more expensive than either improving existing code under a more liberal license or writing new code from scratch. Android's a pretty great example of this - it contains no GPLv3 code, and even GPLv2 code (outside the kernel) is kept to a minimum.

Which, given Canonical's focus on pushing Ubuntu into GPLv3-hostile markets, makes the choice of GPLv3 an odd one. This isn't a problem as long as they're the sole copyright holder, because the copyright holder is obviously free to ship their code under as many licenses as they want. But Canonical still aim to foster community involvement, and ideally that includes accepting external contributions to their code. If Canonical simply accepted those contributions under GPLv3 then they'd no longer have the right to relicense the entire codebase, so any contributions are only accepted if the contributor has signed a Contributor License Agreement.

Canonical's CLA is pretty simple. In essence, it grants Canonical the right to use, modify and distribute your code, and it grants Canonical a patent license under any patents you own that may cover the code in question. But, most importantly, it grants Canonical the right to relicense your contribution under their choice of license. This means that, despite not being the sole copyright holder, Canonical are free to relicense your code under a proprietary license.

Given Canonical's market goals, this makes sense. They can relicense Mir (and any other GPLv3 projects they own) under licenses that keep their hardware partners happy, and they can ship in the phone market. Everyone's a winner.

Except, if Canonical want to ship proprietary versions, why not just license Mir under a license that permits that in the first place? This is where the asymmetry comes in. The Android userland is released under a permissive license that allows anyone to take Google's code, modify it as they wish and ship it on whatever hardware they want. I could legally start a company that provided customised versions of Android to phone vendors without them having any GPLv3 concerns. I won't be able to do that with Ubuntu Phone.

I'm a fan of GPLv3. I think the provisions it contains to support user freedom are important. I hate the growing trend of using free software to build devices that are, effectively, impossible for the end user to modify. If Canonical were releasing software under GPLv3 because of a commitment to free software then that would be an amazing thing. But it's pretty much impossible to square the CLA's requirement that contributors grant Canonical the right to ship under a proprietary license with a commitment to free software. Instead you end up with a situation that looks awfully like Canonical wanting to squash competition by making it impossible for anyone else to sell modified versions of Canonical's software in the same market.

Canonical aren't doing anything illegal or immoral here. They're free to run their projects in any way they choose. But retaining the right to produce proprietary versions of external contributions without granting equivalent reciprocal rights isn't consistent with caring about free software or contributing to the wider Linux community, especially if it means you get to exclude those external contributors from the market you're selling their code into.

(Edit to add: a friend in the contracting industry points out that it also prevents vendors who won't ship GPLv3 from using external contractors to work on Mir - they have to go to Canonical, because only Canonical can relicense contributions under a proprietary license.)

[1] Right now Ubuntu Phone is using Surfaceflinger, the Android display server, but that's apparently just an interim solution.

Syndicated 2013-06-19 22:15:47 from Matthew Garrett

Dealing with UEFI non-volatile memory quirks

Since I wrote this, we've made some worthwhile progress on avoiding damaging Samsung hardware. The first is that the samsung-laptop driver appeared to be causing the firmware to attempt to write to an area of memory that was marked as protected in the chipset, triggering a Machine Check Exception. That was what generated the pstore output that caused the problem originally. The driver now refuses to load if EFI is enabled, which avoids the problem. It's not ideal, since it's currently the only mechanism we have for certain functionality on Samsung laptops, but there you go.

The second problem was that avoiding crashing on boot didn't actually fix the problem in any fundamental way. Even with pstore disabled, it was possible for userspace to fill the nvram and trigger the same problem. Our first approach to this was to prevent any writes to nvram if the UEFI QueryVariableInfo() call reported that more than 50% of the nvram storage space would be used. That was safe, but led to another issue. The nvram storage area is typically implemented as part of the same flash chip as the firmware. Flash isn't arbitrarily accessible - changing the contents of a block typically involves rewriting the entire block. It's impractical to rewrite the entire nvram area on every write, so what actually happens is that deleting variables just results in them being marked as inactive but doesn't actually free up the space. The firmware can later perform some sort of garbage collection to free it up.

This caused us problems, since inactive space that hasn't been garbage collected yet isn't actually available, and as a result firmware implementations tend to count it as used. Say you had 64KB of nvram and wrote 32KB of variables. We'd then refuse to write any more because you'd drop below 50%. So you delete 16KB of the variables you've created and try again. Unfortunately, the firmware still thinks that there's 32KB in use and Linux would still refuse.

If you were lucky, rebooting would trigger a garbage collection run. If you weren't, it wouldn't. Problematic. Our next approach was to try to account for the space actually actively used by the variables, rather than relying on what the firmware told us via QueryVariableInfo(). This seems simple enough - just add up the size of all the variables and subtract that from the overall size to determine how much of the "used" space is actually just old inactive variables that can be ignored. However, there's still some problems there. The first is that each variable has some additional overhead associated with it, and the size of that overhead varies depending on the system vendor. We had to make a conservative guess, which could cause problems if systems had large numbers of small variables. The second is that the only variables the kernel can see are those that are flagged as runtime-visible. There may also be a significant quantity of nvram used to store variables that are only visible in boot services code. We could work around this by adding up sizes while we're still in boot services code, but on some systems calling QueryVariableInfo() before ExitBootServices() results in later calls to GetNextVariable() jumping to invalid addresses and crashing the kernel. Not a great approach.

Meanwhile, Samsung got back to us and let us know that their systems didn't require more than 5KB of nvram space to be available, which meant we could get rid of the 50% value and replace it with 5KB. The hope was that any system that booted with only 5KB of space available in nvram would trigger a garbage collection run. Unfortunately, it turned out that that wasn't true - some systems will only trigger garbage collection if the OS actually makes an attempt to write a variable that won't otherwise fit.

Hence this patch. The new approach is to ask the firmware how much space is available. If the size of the new variable would reduce this to less than 5K, we attempt to create a variable bigger than the remaining space. This should cause the firmware to realise that it's out of room and either (depending on implementation) perform a garbage collection run at runtime or set a flag that will cause the system to perform garbage collection on the next reboot. We then call QueryVariableInfo() again to see whether a garbage collection run actually happened, and if so check whether we now have enough space. If so, we go ahead and write the variable. If not, we tell userspace that there's not enough space.
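
In rough pseudocode - this is a paraphrase of the logic rather than the literal patch, with locking and error handling stripped out:

    /* Paraphrased pseudocode - see the actual patch for the real thing */
    QueryVariableInfo(attrs, &storage_size, &remaining, &max_size);

    if (remaining - new_var_size < 5 * 1024) {
        /* Attempt to create a variable that can't possibly fit into
         * the remaining space, hoping to provoke garbage collection */
        SetVariable(L"DUMMY", &dummy_guid, attrs, remaining + 1024, data);

        /* Did a collection run actually reclaim anything? */
        QueryVariableInfo(attrs, &storage_size, &remaining, &max_size);
        if (remaining - new_var_size < 5 * 1024)
            return EFI_OUT_OF_RESOURCES;  /* tell userspace: no room */
    }

    /* Enough space is genuinely available - write the real variable */
    SetVariable(name, guid, attrs, new_var_size, new_var_data);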

This seems to work in all the situations I've tested, and it should avoid ending up in a situation where a Samsung can end up bricked. However, it's firmware, so who knows whether it's going to break things for someone else.

Syndicated 2013-06-03 15:25:23 from Matthew Garrett

Secure Boot isn't the only problem facing Linux on Windows 8 hardware

There's now no shortage of Linux distributions that support Secure Boot out of the box, so that's a mostly solved problem. But even if your distribution supports it entirely you still need to boot your install media in the first place.

Hardware initialisation is a slightly odd thing. There's no specification that describes the state ancillary hardware has to be in after firmware→OS handover, so the OS effectively has to reinitialise it again. This means that certain bits of hardware end up being initialised twice, and that's slow in some cases. The most obvious is probably USB, which has various timeouts as you wait for hardware to settle. Full USB support in the firmware probably adds a couple of seconds to boot time, and it's arguably wasted because the OS then has to do the same thing (but, thankfully, can at least do other things at the same time). So, looking for USB boot media takes time, and since the overwhelmingly common case is that users don't want to boot off USB, it's time that's almost always wasted.

One of the requirements for Windows 8 certified hardware is that it must complete firmware initialisation within a specific amount of time, something that Microsoft refer to as "Fast Boot". Meeting these requirements effectively makes it impossible to initialise USB, and it's likely that certain other things will also be skipped. If you've got a USB keyboard then this obviously means that your keyboard won't work until the OS starts, but even i8042 setup takes time and so some laptops with traditional PS/2-style keyboards may not set it up. That means the system will ignore the keyboard no matter how much you hammer it at boot, and the firmware will boot whichever OS it finds.

For a newly purchased device, that's going to be Windows 8. It's not too much of a problem with a fully installed Windows 8, since you can hold down shift while clicking the reboot icon and get a menu that lets you reboot into the firmware menu. Windows sets a flag in a UEFI variable and reboots the system, the firmware sees that flag and does full hardware initialisation and then drops you into the setup environment. It takes slightly longer to get into the firmware, but that's countered by the time you save every time you don't want to get into the firmware on boot.
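
For reference, the variable in question is OsIndications, and the flag is the boot-to-firmware-UI bit. On a Linux system with efivarfs mounted you can set the same bit yourself - here's a minimal sketch, which clobbers any other OsIndications bits rather than doing a proper read-modify-write:

    /* Sketch: ask the firmware to drop into its setup UI on next boot
     * by setting the boot-to-firmware-UI bit in OsIndications */
    #include <stdint.h>
    #include <stdio.h>

    #define EFI_VARIABLE_NON_VOLATILE        0x1
    #define EFI_VARIABLE_BOOTSERVICE_ACCESS  0x2
    #define EFI_VARIABLE_RUNTIME_ACCESS      0x4
    #define EFI_OS_INDICATIONS_BOOT_TO_FW_UI 0x1ULL

    int main(void)
    {
        /* efivarfs layout: 4 bytes of attributes, then the variable data */
        struct {
            uint32_t attributes;
            uint64_t value;
        } __attribute__((packed)) var = {
            .attributes = EFI_VARIABLE_NON_VOLATILE |
                          EFI_VARIABLE_BOOTSERVICE_ACCESS |
                          EFI_VARIABLE_RUNTIME_ACCESS,
            .value = EFI_OS_INDICATIONS_BOOT_TO_FW_UI,
        };
        FILE *f = fopen("/sys/firmware/efi/efivars/"
                        "OsIndications-8be4df61-93ca-11d2-aa0d-00e098032b8c",
                        "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fwrite(&var, sizeof(var), 1, f);
        fclose(f);
        return 0;
    }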

So what's the problem? Well, the Windows 8 setup environment doesn't offer that reboot icon. Turn on a brand new Windows 8 system and you have two choices - agree to the Windows 8 license, or power the machine off. The only way to get into the firmware menu is to either agree to the Windows 8 license or to disassemble the machine enough that you can unplug the hard drive[1] and force the system to fall back to offering the boot menu.

I understand the commercial considerations that result in it ranging from being difficult to impossible to buy new hardware without Windows pre-installed, but up until now it was still straightforward to install an alternative OS without agreeing to the Windows license. Now, installing alternative operating systems on many new systems will require you to give up certain rights even if you want nothing other than to reach the system firmware menu.

I'm firmly of the opinion that there are benefits to Secure Boot. I'm also in favour of setups like Fast Boot. But I don't believe that anyone should be forced to agree to a EULA purely in order to be able to boot their own choice of OS on a system that they've already purchased.

[1] Which is a significant and probably warranty-voiding exercise on many systems, and that's assuming that it's not an SSD soldered to the motherboard…

Syndicated 2013-05-28 21:41:43 from Matthew Garrett

A short introduction to TPMs

I've been working on TPMs lately. It turns out that they're moderately awful, but what's significantly more awful is basically all the existing documentation. So here's some of what I've learned, presented in the hope that it saves someone else some amount of misery.

What is a TPM?

TPMs are devices that adhere to the Trusted Computing Group's Trusted Platform Module specification. They're typically microcontrollers[1] with a small amount of flash, and attached via either i2c (on embedded devices) or LPC[2] (on PCs). While designed for performing cryptographic tasks, TPMs are not cryptographic accelerators - in almost all situations, carrying out any TPM operations on the CPU instead would be massively faster[3]. So why use a TPM at all?

Keeping secrets with a TPM

TPMs can encrypt and decrypt things. They're not terribly fast at doing so, but they have one significant benefit over doing it on the CPU - they can do it with keys that are tied to the TPM. All TPMs have something called a Storage Root Key (or SRK) that's generated when the TPM is initially configured. You can ask the TPM to generate a new keypair, and it'll do so, encrypt it with the SRK (or another key descended from the SRK) and hand it back to you. Other than the SRK (and another key called the Endorsement Key, which we'll get back to later), these keys aren't actually kept on the TPM - the running OS stores them on disk. If the OS wants to encrypt or decrypt something, it loads the key into the TPM and asks it to perform the desired operation. The TPM decrypts the key and then goes to work on the data. For small quantities of data, the secret can even be stored in the TPM's nvram rather than on disk.

All of this means that the keys are tied to a system, which is great for security. An attacker can't obtain the decrypted keys, even if they have a keylogger and full access to your filesystem. If I encrypt my laptop's drive and then encrypt the decryption key with the TPM, stealing my drive won't help even if you have my passphrase - any other TPM simply doesn't have the keys necessary to give you access.

That's fine for keys which are system specific, but what about keys that I might want to use on multiple systems, or keys that I want to carry on using when I need to replace my hardware? Keys can optionally be flagged as migratable, which makes it possible to export them from the TPM and import them to another TPM. This seems like it defeats most of the benefits, but there's a couple of features that improve security here. The first is that you need the TPM ownership password, which is something that's set during initial TPM setup and then not usually used afterwards. An attacker would need to obtain this somehow. The other is that you can set limits on migration when you initially import the key. In this scenario the TPM will only be willing to export the key by encrypting it with a pre-configured public key. If the private half is kept offline, an attacker is still unable to obtain a decrypted copy of the key.

So I just replace the OS with one that steals the secret, right?

Say my root filesystem is encrypted with a secret that's stored on the TPM. An attacker can replace my kernel with one that grabs that secret once the TPM's released it. How can I avoid that?

TPMs have a series of Platform Configuration Registers (PCRs) that are used to record system state. These all start off programmed to zero, but applications can extend them at runtime by writing a sha1 hash into them. The new hash is concatenated to the existing PCR value and another sha1 calculated, and then this value is stored in the PCR. The firmware hashes itself and various option ROMs and adds those values to some PCRs, and then grabs the bootloader and hashes that. The bootloader then hashes its configuration and the files it reads before executing them.
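
The extend operation itself is simple - the new PCR value is just the sha1 of the old value concatenated with the measurement. A quick sketch using OpenSSL's SHA1():

    /* The TPM extend operation: pcr = sha1(pcr || measurement) */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    #define PCR_SIZE SHA_DIGEST_LENGTH        /* 20 bytes for sha1 */

    static void pcr_extend(unsigned char pcr[PCR_SIZE],
                           const unsigned char measurement[PCR_SIZE])
    {
        unsigned char buf[2 * PCR_SIZE];

        memcpy(buf, pcr, PCR_SIZE);
        memcpy(buf + PCR_SIZE, measurement, PCR_SIZE);
        SHA1(buf, sizeof(buf), pcr);
    }

    int main(void)
    {
        unsigned char pcr[PCR_SIZE] = { 0 };  /* PCRs start at zero */
        unsigned char m[PCR_SIZE];
        int i;

        /* "Measure" a component: hash it, then extend the PCR with it */
        SHA1((const unsigned char *)"some bootloader", 15, m);
        pcr_extend(pcr, m);

        for (i = 0; i < PCR_SIZE; i++)
            printf("%02x", pcr[i]);
        printf("\n");
        return 0;
    }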

This chain of trust means that you can verify that no prior system component has been modified. If an attacker modifies the bootloader then the firmware will calculate a different hash value, and there's no way for the attacker to force that back to the original value. Changing the kernel or the initrd will result in the same problem. Other than replacing the very low level firmware code that controls the root of trust, there's no way an attacker can replace any fundamental system components without changing the hash values.

TPMs support using these hash values to decide whether or not to perform a decryption operation. If an attacker replaces the initrd, the PCRs won't match and the TPM will simply refuse to hand over the secret. You can actually see this in use on Windows devices using Bitlocker - if you do anything that would change the PCR state (like booting into recovery mode), the TPM won't hand over the key and Bitlocker has to prompt for a recovery key. Choosing which PCRs to care about is something of a balancing act. Firmware configuration is typically hashed into PCR 1, so changing any firmware configuration options will change it. If PCR 1 is listed as one of the values that must match in order to release the secret, changing any firmware options will prevent the secret from being released. That's probably overkill. On the other hand, PCR 0 will normally contain the firmware hash itself. Including this means that the user will need to recover after updating their firmware, but failing to include it means that an attacker can subvert the system by replacing the firmware.

What about using TPMs for DRM?

In theory you could populate TPMs with DRM keys for media playback, and seal them such that the hardware wouldn't hand them over. In practice this is probably too easily subverted or too user-hostile - changing default boot order in your firmware would result in validation failing, and permitting that would allow fairly straightforward subverted boot processes. You really need a finer grained policy management approach, and that's something that the TPM itself can't support.

This is where Remote Attestation comes in. Rather than keep any secrets on the local TPM, the TPM can assert to a remote site that the system is in a specific state. The remote site can then make a policy determination based on multiple factors and decide whether or not to hand over session decryption keys. The idea here is fairly straightforward. The remote site sends a nonce and a list of PCRs. The TPM generates a blob with the requested PCR values, sticks the nonce on, encrypts it and sends it back to the remote site. The remote site verifies that the reply was encrypted with an actual TPM key, makes sure that the nonce matches and then makes a policy determination based on the PCR state.

But hold on. How does the remote site know that the reply was encrypted with an actual TPM? When TPMs are built, they have something called an Endorsement Key (EK) flashed into them. The idea is that the only way to have a valid EK is to have a TPM, and that the TPM will never release this key to anything else. There's a couple of problems here. The first is that proving you have a valid EK to a remote site involves having a chain of trust between the EK and some globally trusted third party. Most TPMs don't have this - the only ones I know of that do are recent Infineon and STMicro parts. The second is that TPMs only have a single EK, and so any site performing remote attestation can cross-correlate you with any other site. That's a pretty significant privacy concern.

There's a theoretical solution to the privacy issue. TPMs never actually sign PCR quotes with the EK. Instead, TPMs can generate something called an Attestation Identity Key (AIK) and sign it with the EK. The OS can then provide this to a site called a PrivacyCA, which verifies that the AIK is signed by a real EK (and hence a real TPM). When a third party site requests remote attestation, the TPM signs the PCRs with the AIK and the third party site asks the PrivacyCA whether the AIK is real. You can have as many AIKs as you want, so you can provide each service with a different AIK.

As long as the PrivacyCA only keeps track of whether an AIK is valid and not which EK it was signed with, this avoids the privacy concerns - nobody would be able to tell that multiple AIKs came from the same TPM. On the other hand, it makes any PrivacyCA a pretty attractive target. Compromising one would not only allow you to fake up any remote attestation requests, it would let you violate user privacy expectations by seeing that (say) the TPM being used to attest to HolyScriptureVideos.com was also being used to attest to DegradingPornographyInvolvingAnimals.com.

Perhaps unsurprisingly (given the associated liability concerns), there are no public and trusted PrivacyCAs yet, and even if there were, (a) many computers are still being sold without TPMs and (b) even those with TPMs often don't have the EK certificate that would be required to make remote attestation possible. So while remote attestation could theoretically be used to impose DRM in a way that would require you to be running a specific OS, practical concerns make it pretty difficult for anyone to deploy that at any point in the near future.

Is this just limited to early OS components?

Nope. The Linux kernel has support for measuring each binary run or each module loaded and extending PCRs accordingly. This makes it possible to ensure that the running binaries haven't been modified on disk. There's not a lot of distribution infrastructure for setting this up, but in theory a distribution could deploy an entirely signed userspace and allow the user to opt into only executing correctly signed binaries. Things get more interesting when you add interpreted scripts to the mix, so there's still plenty of work to do there.

So what can I actually use a TPM for?

Drive encryption is probably the best example (Bitlocker does it on Windows, and there's a LUKS-based implementation for Linux here) - while in theory you could do things like use your TPM as a factor in two-factor authentication or tie your GPG key to it, there's not a lot of existing infrastructure for handling all of that. For the majority of people, the most useful feature of the TPM is probably the random number generator. rngd has support for pulling numbers out of it and stashing them in /dev/random, and it's probably worth doing that unless you have an Ivy Bridge or other CPU with an RNG.

Things get more interesting in more niche cases. Corporations can bind VPN keys to corporate machines, making it possible to impose varying security policies. Intel use the TPM as part of their anti-theft technology on education-oriented devices like the Classmate. And in the cloud, projects like Trusted Computing Pools use remote attestation to verify that compute nodes are in a known good state before scheduling jobs on them.

Is there a threat to freedom?

At the moment, probably not. The lack of any workable general purpose remote attestation makes it difficult for anyone to impose TPM-based restrictions on users, and any local code is obviously under the user's control - got a program that wants to read the PCR state before letting you do something? LD_PRELOAD something that gives it the desired response, or hack it so it ignores failure. It's just far too easy to circumvent.

Summary?

TPMs are useful for some very domain-specific applications, along with drive encryption and random number generation. The current state of technology doesn't make them a practical mechanism for limiting end-user freedom.

[1] Ranging from 8-bit things that are better suited to driving washing machines, up to full ARM cores
[2] "Low Pin Count", basically ISA without the slots.
[3] Loading a key and decrypting a 5 byte payload takes 1.5 seconds on my laptop's TPM.

Syndicated 2013-05-07 17:18:31 from Matthew Garrett

Update on leaked UEFI signing keys - probably no significant risk

According to the update here, the signing keys are supposed to be replaced by the hardware vendor. If vendors do that, this ends up being uninteresting from a security perspective - you could generate a signed image, but nothing would trust it. It should be easy enough to verify, though. Just download a firmware image from someone using AMI firmware, pull apart the capsule file, decompress everything and check whether the leaked public key is present in the binaries.

The real risk here is that even if most vendors have replaced that key, some may not have done. There's certainly an argument that shipping test keys at all increases the probability that a vendor will accidentally end up using those rather than generating their own, and it's difficult to rule out the possibility that that's happened.

Syndicated 2013-04-05 23:21:05 from Matthew Garrett

Leaked UEFI signing keys

A hardware vendor apparently had a copy of an AMI private key on a public FTP site. This is concerning, but it's not immediately obvious how dangerous this is, for a few reasons. The first is that this is apparently the firmware signing key, not any of the Secure Boot keys. That means it can't be used to sign a UEFI executable or bootloader, so can't be used to sidestep Secure Boot directly. The second is that it's AMI's key, not a board vendor's - we don't (yet) know if this key is used to sign any actual shipping firmware images, or whether it's effectively a reference key. And, thirdly, the code apparently dates from early 2012 - even if it was an actual signing key, it may have been replaced before any firmware based on this code shipped.

But there's still the worst case scenario that this key is used to sign most (or all) AMI-based vendor firmware. Can this be used to subvert Secure Boot? Plausibly. The attack would involve producing a new, signed firmware image with Secure Boot either disabled or with an additional key installed, and then to reflash that firmware. Firmware images are very board-specific, so unless you're engaging in a very targeted attack you either need a large repository of firmware for every board you want to attack, or you need to perform in-place modification.

Taking a look at the firmware update tool used for AMI systems, the latter might be possible. It seems that the AMI firmware driver allows you to dump the existing ROM to a file. It'd then be a matter of pulling apart the firmware image, modifying the key database, putting it back together, signing it and flashing it. It looks like doing this does require that the user enter the firmware password if one's set, so the simplest mitigation strategy would be to do that.

So. If this key is used by most vendors shipping AMI-based firmware, and if it's a current (rather than test) key, then it may well be possible for it to be deployed in an automated malware attack that subverts the Secure Boot trust model on systems running AMI-based firmware. The obvious lesson here is that handing out your private keys to third parties that you don't trust is a pretty bad idea, as is including them in source repositories.

(Wow, was this really as long ago as 2004? How little things change)

Syndicated 2013-04-05 19:18:36 from Matthew Garrett

Secure Boot and Restricted Boot.

I gave a presentation at Libreplanet this weekend on the topic of Secure Boot and Restricted Boot. There's a copy of the video here - it should be up on the conference site at some point. It turned out to be excellent timing, in that a group in Spain filed a complaint with the European Commission this morning arguing that Microsoft's imposition of Secure Boot on the x86 client PC market is anticompetitive. I suspect that this is unlikely to succeed (the Commission has already stated that the current implementation appears to conform to EU law), and I fear that it's going to make it harder to fight the real battle we face.

Secure Boot means different things to different people. I think the FSF's definition is a useful one - Secure Boot is any boot validation scheme in which ultimate control is in the hands of the owner of the device, while Restricted Boot is any boot validation scheme in which ultimate control is in the hands of a third party. What Microsoft require for x86 Windows 8 devices falls into the category of Secure Boot - assuming that OEMs conform to Microsoft's requirements, the user must be able to both disable Secure Boot entirely and also leave Secure Boot enabled, but with their own choice of trusted keys and binaries. If the FSF set up a signing service to sign operating systems that met all of their criteria for freeness, Microsoft's requirements would permit an end user to configure their system such that it refused to run non-free software. My system is configured to trust things shipped by Fedora or built locally by me, a decision that I can make because Microsoft require that OEMs support it. Any system that meets Microsoft's requirements is a system that respects the freedom of the computer owner to choose how restrictive their computer's boot policy is.

This isn't to say that it's ideal. The lack of any common UI or key format between hardware vendors makes it difficult for OS vendors to document the steps users must take to assert this freedom. The presence of Microsoft as the only widely trusted key authority leaves people justifiably concerned as to whether Microsoft will be equally aggressive in blacklisting its own products as it will be in blacklisting third party ones. Implementation flaws in a (very) small number of systems have resulted in correctly signed operating systems failing to boot, requiring users to update their firmware before being able to install anything but Windows.

But concentrating on these problems misses the wider point. The x86 market remains one where users are able to run whatever they want, but the x86 market is shrinking. Users are purchasing tablets and other ARM-based ultraportables. Some users are using phones as their primary computing device. In contrast to the x86 market, Microsoft's policies for the ARM market restrict user freedom. Windows Phone and Windows RT devices are required to boot only signed binaries, with no option for the end user to disable the signature validation or install their own keys. While the underlying technology is identical, this differing set of default policies means that Microsoft's ARM implementation is better described as Restricted Boot. The hardware vendors and Microsoft define which software will run on these systems. The owner gets no say.

And, unfortunately, Microsoft aren't alone. Apple, the single biggest vendor in this market, implement effectively identical restrictions. Some Android vendors provide unlockable bootloaders, but others (either through personal preference or at the behest of phone carriers) lock down their platforms. A naive user is likely to end up purchasing a device that will, in the absence of exploited security flaws, refuse to run if any system components are modified. Even in cases where the underlying components are built using free software, there's no guarantee that the user will have the ability to assert any of those freedoms.

Why does this matter? Some of these platforms (notably Windows RT and iOS, but also some Android-based devices) will even refuse to run unsigned applications. Users are unable to write their own software and distribute it to others without agreeing to often onerous restrictions. Users with the misfortune of living in the wrong country may be forbidden from even that opportunity. The vendor may choose to block applications that compete with their own, reducing innovation. The ability to explore and tinker with the components of the system is restricted, making it harder for users to learn how modern operating systems work. If I own a perfectly functional phone that no longer receives vendor updates, I don't even have the option of paying a third party to ensure that I can't be compromised by a malicious website and risk the loss of passwords or financial details. The user is directly harmed by these restrictions.

I won't argue that there are no benefits to curated software ecosystems. I won't even argue against devices shipping with a locked down policy by default. I will strongly argue that the owner of a device should not only have the freedom to choose whether they wish to remain within those locked-down boundaries, but should also have the freedom to impose their own boundaries. There should be no forced choice between freedom and security.

Those who argue against Secure Boot risk depriving us of the freedom to make a personal decision as to who we trust. Those who argue against Secure Boot while ignoring Restricted Boot risk depriving us of even more. The traditional PC market is decreasing in importance. Unless we do something about it, free software will be limited to a niche group of enthusiasts who've carefully chosen from a small set of devices that respect user freedom. We should have been campaigning against Restricted Boot 10 years ago. Don't delay it even further by fighting against implementations that already respect user freedom.

Syndicated 2013-03-27 00:28:32 from Matthew Garrett
