Older blog entries for mjg59 (starting at number 313)

Feeding the trolls

A few years ago I got up on stage and briefly talked about how the Linux community contained far too many people who were willing to engage in entirely inappropriate behaviour, how this discouraged people from getting involved and how we weren't very good at standing up against that sort of behaviour. Despite doing this in front of several hundred people, and despite the video of me doing so then being uploaded to the internet, this got me a sum total of:

  • No death threats
  • No discussion about any of my physical attributes or lack thereof
  • No stalkers
  • No accusations that I was selling out the Linux community
  • No accusations that I was a traitor to my gender
  • No real negative feedback at all[1]

Which is, really, what you'd expect, right? The internet seems intent on telling me otherwise:

Well, she didn't do herself any favors by talking at conferences about women in tech, or setting up a feminist movement. If you wanted to attract abuse, that's a good way to go about it. It should be expected.
(Source)

MikeeUSA is a troll. He has no means to actually harm anyone, and he does it purely for the lulz.

Thus, MikeeUSA trolled a woman, and she took the bait. I just don't get why this is news, I've been trolled before, I don't get a news story.

(Source)

I was going to start a rant about how this behavior is encouraged by the macho men online, but this was just one guy harassing her. "Due to harassment" reads as due to harassment from the community, but she gave in to one idiot. She let him win.
(Source)

The full comment thread has rather more examples. If you stand up and say anything controversial, you should expect abuse. And if you let that abuse change your behaviour in any way, you've let the trolls win.

These attitudes are problematic.

The immediate assumption underlying such advice is that the degree of abuse is related to what you've said, not who you are. I'm reasonably visible in the geek world. I've said a few controversial things. The worst thing that's happened to me has been Ryan Farmer deciding to subscribe me to several thousand mailing lists. Inconvenient, but not really threatening. I haven't, for instance, been sent death threats. Nobody has threatened to rape me. And even if they had done, I wouldn't need to worry too much - there's a rather stronger track record of violent antifeminism being targeted at women than men.

I don't have to worry about this kind of thing. That means I don't get to tell other people that they should have expected it. Nor do I get to tell them that they should ignore it, or that if they don't call the police then they have no grounds to complain. And nor does anyone else.

The trolls don't win because someone decides that getting out of the tech business is more appealing than continuing to face abuse. The trolls win because we consider their behaviour acceptable and then blame the victim for inviting them in the first place. That needs to change.

[1] It was justifiably pointed out that saying all this while standing on stage next to a mostly naked guy wearing a loincloth with a raccoon tail covering his penis may have weakened my message somewhat.

Syndicated 2011-10-28 07:29:01 from Matthew Garrett

Management of UEFI secure booting

This post is made in a personal capacity. While it represents the outcome of multiple discussions on the topic with a variety of people, nothing written below should be construed as Red Hat's corporate position.

The FSF have released a statement on UEFI secure boot. It explains the fundamental issue here, which isn't something as simple as "will OEMs let me install Linux". It's "Does the end user have the ability to manage their own keys".

Secure boot is a valuable feature. It does neatly deal with the growing threat of pre-OS malware. There is an incentive for it to be supported under Linux. I discussed the technical aspects of implementing support for it here - it's not a huge amount of work, and it is being worked on. So let's not worry about that side of things. The problem is with the keys.

Secure boot is implemented in a straightforward way. Each section of a PE-COFF file is added together and a hash taken[1]. This hash is signed with the private half of a signing key and embedded into the binary. When you attempt to execute a file under UEFI, the firmware attempts to decrypt the embedded hash. This requires that the firmware have either a copy of the public half of the signing key in its key database, or for there to be a chain of trust from the signing key to a key in its key database. Once it has the decrypted hash, it generates its own hash of the binary and compares them. If they match, the binary is executed.
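For illustration, here's a minimal sketch (in Python, using the pyca/cryptography library) of the check the firmware performs. It's not the real Authenticode algorithm - it naively hashes the whole file and treats the signature and public key as separate files, where the real thing hashes the image section by section and carries the signature and certificates inside the binary - but the decrypt-and-compare step is the same idea.

    # Conceptual only: the file layout and names here are invented for the example.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def would_firmware_run_it(binary_path, signature_path, trusted_key_path):
        image = open(binary_path, "rb").read()              # really hashed per-section
        signature = open(signature_path, "rb").read()       # really embedded in the PE
        key = serialization.load_pem_public_key(open(trusted_key_path, "rb").read())
        try:
            # "Decrypt the embedded hash and compare" is, in practice, verifying
            # an RSA signature over the firmware's own hash of the image.
            key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
            return True        # hashes match: the binary gets executed
        except InvalidSignature:
            return False       # mismatch: the firmware refuses to run it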

What happens if it doesn't match? Per the UEFI specification, the firmware can then prompt the user and ask if they want to execute it anyway. If the user accepts then the hash of the binary is remembered[4] and the binary can continue to be executed in future. This is similar to what you get when you visit a self-signed https site, or when you connect to an ssh server for the first time - the user must explicitly state that they trust the software that is being booted.

This is a little awkward for a few reasons. First, it means that any updates to the bootloader would require the user to manually accept the binary again. Second, as we've seen with https, there's the risk of users just blindly accepting things. If it's acceptable to do this off internal media[5] then malware could just rely on some number of users saying "yes". And third, we've been told that Microsoft's Windows logo requirements forbid this option.

The first is problematic but manageable with appropriate documentation. The second is a real risk that makes this approach far less compelling. And the third is probably a showstopper.

So signed but untrusted binaries are probably out, on technological, UI and policy grounds. What else?

We can sign binaries with a trusted key. This gets us interoperability, but comes at a cost. The first issue here is that we still don't know what the process for getting a trusted key is going to be. One option is for something like a generic Linux key that's provided by all firmware vendors. As long as the OEM doesn't remove it, then that Linux key could be used to sign other keys that are used by individual Linux vendors. The main difficulty involved in this is having the Linux key be managed by an authority that OEMs can trust to only provide keys to trustworthy organisations and maintain control of the private half of the key. Remember, if any malware is signed with any of these keys, it'll boot on any machine carrying this key. It's a lot of trust. It's not obvious that there's an organisation with sufficient trust at the moment.

The second approach is for each Linux vendor to generate their own key and get every OEM to carry it. Let's rule this out on the basis that it's an approach that doesn't scale at all. Red Hat could probably get its key into a bunch of OEM systems. Canonical too. Oracle? Probably for the vendors they care about. It's likely to be much harder for everyone else, and you still end up with the risk that you've just bought a computer that'll boot Red Hat but not Ubuntu.

The third approach is for each Linux vendor to obtain a key from someone who's already trusted. The most obvious authority here is Microsoft. Ignoring the PR unpleasantness involved with Linux vendors having to do business with Microsoft to get signed, so far Microsoft haven't given us any indication that this will be possible.

So every approach that involves us signing with a trusted key comes with associated problems. There's also a common problem with all of them - you only get to play if you have your own trusted key. Corporate Linux vendors can probably find a way to manage this. Everyone else (volunteer or hobbyist vendors, end users who want to run their own OS, anyone who wants to use a modified bootloader or kernel) is shut out.

The obvious workaround is for them to just turn off secure boot. Ignoring the arguments over whether or not OEMs will provide that option[6], it benefits nobody for Linux installation to require disabling a legitimate security feature. It's also not likely to be in a standard location on all systems and may have different naming. It's a support nightmare. Let's focus on trying to find a solution that provides the security and doesn't have obvious scaling issues.

There's a pretty simple one. If the problem centres around installing signing keys, why not have a standardised way of installing signing keys?

What we've proposed for this is an extension of the current UEFI standard for booting off removable media. There's a standardised location for putting the bootloader for USB sticks or optical media. For 32-bit PCs, it's EFI/BOOT/BOOTIA32.EFI. For 64-bit, it's EFI/BOOT/BOOTX64.EFI. Our proposal is to specify a path for key storage on the media. If the user attempts to boot off an item of removable media and the firmware finds a key file there, we would like the firmware to prompt the user to install the key. Once the key is installed, the firmware will attempt to boot the bootloader on the removable media.
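As a concrete (and entirely hypothetical - no key path has been standardised, which is the whole point of the proposal) example, install media for this scheme might be staged something like this:

    # Sketch only: EFI/BOOT/BOOTX64.EFI is the existing standard path for 64-bit
    # removable media; the KEYS/vendor.cer location is made up for illustration.
    import shutil
    from pathlib import Path

    def stage_install_media(mountpoint, signed_bootloader, vendor_cert):
        boot_dir = Path(mountpoint) / "EFI" / "BOOT"
        key_dir = boot_dir / "KEYS"                               # hypothetical key location
        key_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy(signed_bootloader, boot_dir / "BOOTX64.EFI")  # signed with the vendor key
        shutil.copy(vendor_cert, key_dir / "vendor.cer")          # public half, offered to the firmware

    # stage_install_media("/mnt/usb", "grubx64-signed.efi", "vendor.cer")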

How does this avoid the problems associated with prompting the user to boot untrusted binaries? The first is that there's no problem with updates. Because a key has been imported, as long as future bootloader updates are signed with the same key, they'll boot without prompting the user. The second is that it can be limited to removable media. If malware infects the system and installs itself onto the hard drive, the firmware won't prompt for key installation. It'll just refuse to boot and fall back on whatever recovery procedures the OEM has implemented. The only way it could get on the system would involve the user explicitly booting off removable media, which would be a significant hurdle. And an attacker who can talk the user into doing that could just as easily talk them into disabling secure boot entirely.

This proposal has been brought up with the UEFI standards working group, and with the UEFI plugfest taking place next week we're hoping it'll be discussed. It seems to satisfy the dual requirements of maintaining system security and ensuring that users have the flexibility to choose what they run on their system. But as with all technical proposals, feedback is entirely welcome.

[1] Hilariously, Microsoft's signing tool gets this wrong by also adding the contents of gaps between sections in direct contravention of their own specification. This is fine for binaries generated by Microsoft's toolchain because they don't have any gaps, but since our binaries do contain gaps[2] and since the standard firmware implementation[3] does implement the specification correctly, any Linux-generated binaries signed with the Microsoft tool fail validation. Go team.
[2] Something that is, as far as we can tell, permitted by the PE-COFF specification
[3] Written by Intel, not Microsoft
[4] Note that this means that only the specific binary is considered trusted. No key is imported, and any other binaries signed with the same key will also need to be manually trusted in the same way.
[5] As would be required if you ever want to be able to update your bootloader
[6] And to forestall panic, at this point we expect that most OEMs will provide this option on most hardware, if only because customers will still want to boot Windows 7. We do know that some hardware will ship without it. It's not implausible that some OEMs will remove it in order to reduce their support burden. But for the foreseeable future, you'll be able to buy hardware that runs Linux.

Syndicated 2011-10-19 17:58:19 from Matthew Garrett

Margaret Dayhoff

It's become kind of a cliché for me to claim that the reason I'm happy working on ACPI and UEFI and similarly arcane pieces of convoluted functionality is that no matter how bad things are there's at least some form of documentation and there's a well-understood language at the heart of them. My PhD was in biology, working on fruitflies. They're a poorly documented set of layering violations which only work because of side-effects at the quantum level, and they tend to die at inconvenient times. They're made up of 165 million bases of a byte code language that's almost impossible to bootstrap[1] and which passes through an intermediate representation before it does anything useful[2]. It's an awful field to try to do rigorous work in because your attempts to impose any kind of meaningful order on what you're looking at are pretty much guaranteed to be sufficiently naive that your results bear a resemblance to reality more by accident than design.

The field of bioinformatics is a fairly young one, and because of that it's very easy to be ignorant of its history. Crick and Watson (and those other people) determined the structure of DNA. Sanger worked out how to sequence proteins and nucleic acids. Some other people made all of these things faster and better and now we have huge sequence databases that mean we can get hold of an intractable quantity of data faster than we could ever plausibly need to, and what else is there to know?

Margaret Dayhoff graduated with a PhD in quantum chemistry from Columbia, where she'd performed computational analysis of various molecules to calculate their resonance energies[3]. The next few years involved plenty of worthwhile research that isn't relevant to the story, so we'll (entirely unfairly) skip forward to the early 60s and the problem of turning a set of sequence fragments into a single sequence. Dayhoff worked on a suite of applications called "Comprotein". The original paper can be downloaded here, and it's a charming look back at a rigorous analysis of a problem that anyone in the field would take for granted these days. Modern fragment assembly involves taking millions of DNA sequence reads and assembling them into an entire genome. In 1960, we were still at the point where it was only just getting impractical to do everything by hand.

This single piece of software was arguably the birth of modern bioinformatics, the creation of a computational method for taking sequence data and turning it into something more useful. But Dayhoff didn't stop there. The 60s brought a growing realisation that small sequence differences between the same protein in related species could give insight into their evolutionary past. In 1965 Dayhoff released the first edition of the Atlas of Protein Sequence and Structure, containing all 65 protein sequences that had been determined by then. Around the same time she developed computational methods for analysing the evolutionary relationship of these sequences, helping produce the first computationally generated phylogenetic tree. Her single-letter representation of amino acids was born of necessity[4] but remains the standard for protein sequences. And the atlas of 65 protein sequences developed into the Protein Information Resource, a dial-up database that allowed researchers to download the sequences they were interested in. It's now part of UniProt, the world's largest protein database.
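For anyone who hasn't run into it, that single-letter code is the mapping still used in every FASTA file today. A rough illustration (the peptide is just the start of insulin's B chain):

    # Dayhoff's one-letter amino acid codes, shown by converting a short peptide
    # from the older three-letter form that footnote [4] refers to.
    THREE_TO_ONE = {
        "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
        "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
        "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
        "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
    }

    def to_one_letter(residues):
        return "".join(THREE_TO_ONE[r] for r in residues)

    # Phe-Val-Asn-Gln-His -> "FVNQH"
    print(to_one_letter(["Phe", "Val", "Asn", "Gln", "His"]))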

Her contributions to the field were immense. Every aspect of her work on bioinformatics is present in the modern day — larger, faster and more capable, but still very much tied to the techniques and concepts she pioneered. And so it still puzzles me that I only heard of her for the first time when I went back to write the introduction to my thesis. She's remembered today in the form of the Margaret Oakley Dayhoff award for women showing high promise in biophysics, having died of a heart attack at only 57.

I don't work on fruitflies any more, and to be honest I'm not terribly upset by that. But it's still somewhat disconcerting that I spent almost 10 years working in a field so defined by one person that I knew so little about. So my contribution to Ada Lovelace day is to highlight a pivotal woman in science who heavily influenced my life without me even knowing.

[1] You think it's difficult bringing up a compiler on a new architecture? Try bringing up a fruitfly from scratch.
[2] Except for the cases where the low-level language itself is functionally significant, and the cases where the intermediate representation is functionally significant.
[3] Something that seems to have involved a lot of putting punch cards through a set of machines, getting new cards out, and repeating. I'm glad I live in the future.
[4] The three-letter representation took up too much space on punch cards

Syndicated 2011-10-07 15:47:48 from Matthew Garrett

Supporting UEFI secure boot on Linux: the details

An obvious question is why Linux doesn't support UEFI secure booting. Let's ignore the issues of key distribution and the GPL and all of those things, and instead just focus on what would be required. There's two components - the signed binary and the authenticated variables.

The UEFI 2.3.1 spec describes the modification to the binary format required to produce a signed binary. It's not especially difficult - you add an extra entry to the image directory, generate a hash of the entire binary other than the checksum, the certificate directory entry and the signatures themselves, encrypt that hash with your key and embed the encrypted hash in the binary. The problem has been that there was a disagreement between Microsoft and Intel over whether this signature was supposed to include the PKCS header or not, and until earlier this week the only widely available developer firmware (Intel's) was incompatible with the only widely available signed OS (Microsoft's). There's further hilarity in that the specification lists six supported hash algorithms, but the implementations will only accept two. So pretty normal, really. Developing towards a poorly defined target is a pain. Now that there's more clarity we'll probably have a signing tool before too long.
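As a rough sketch of the hashing rule described above - deliberately simplified, since the real algorithm also walks the headers and sections in a defined order, which this ignores - the interesting part is what gets skipped:

    # Hash a PE binary while skipping the CheckSum field, the certificate table
    # directory entry and the attribute certificate data itself. Offsets follow
    # the PE-COFF layout; this is an illustration, not a signing tool.
    import hashlib
    import struct

    def pe_image_hash(path):
        data = open(path, "rb").read()
        pe_off = struct.unpack_from("<I", data, 0x3C)[0]       # e_lfanew
        opt_off = pe_off + 4 + 20                              # skip "PE\0\0" + COFF header
        magic = struct.unpack_from("<H", data, opt_off)[0]
        checksum_off = opt_off + 64                            # same offset for PE32 and PE32+
        dirs_off = opt_off + (96 if magic == 0x10B else 112)   # start of the data directories
        certdir_off = dirs_off + 4 * 8                         # entry 4: certificate table
        cert_off, cert_size = struct.unpack_from("<II", data, certdir_off)

        skip = [(checksum_off, 4), (certdir_off, 8)]
        if cert_size:
            skip.append((cert_off, cert_size))                 # the embedded signatures

        h = hashlib.sha256()
        pos = 0
        for off, length in sorted(skip):
            h.update(data[pos:off])
            pos = off + length
        h.update(data[pos:])
        return h.hexdigest()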

Authenticated variables are the other part of the puzzle. If a variable requires authentication, the operating system's attempt to write it will fail unless the new data is appropriately signed. The key databases (white and blacklists) are examples of authenticated variables. The signing actually takes place in userspace, and the handoff between the kernel and firmware is identical for both this case and the unauthenticated case. The only problem in Linux's support here is that our EFI variable support was written to a pre-1.0 version of the EFI specification which stated that variables had a maximum size of 1024 bytes, and this limitation ended up exposed to userspace. So all we really need to do there is add a new interface to let arbitrary sized variables be written.
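For the curious, here's roughly what reading a variable looks like from Linux userspace via that sysfs interface. Hedged, because the exact paths depend on kernel version, and writing an authenticated variable would additionally need the signed payload described above:

    # Sketch: read an EFI variable through /sys/firmware/efi/vars. The GUID is
    # the standard EFI global variable GUID; a SecureBoot value of 0x01 means
    # the firmware booted with secure boot enforced.
    from pathlib import Path

    EFI_GLOBAL_VARIABLE = "8be4df61-93ca-11d2-aa0d-00e098032b8c"

    def read_efi_variable(name, guid=EFI_GLOBAL_VARIABLE):
        data = Path("/sys/firmware/efi/vars") / f"{name}-{guid}" / "data"
        return data.read_bytes()

    # print(read_efi_variable("SecureBoot"))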

Summary: We don't really support secure boot right now, but that's ok because you can't buy any hardware that supports it yet. Adding support is probably about a week's worth of effort at most.

Syndicated 2011-09-23 15:48:20 from Matthew Garrett

UEFI secure booting (part 2)

Microsoft have responded to suggestions that Windows 8 may make it difficult to boot alternative operating systems. What's interesting is that at no point do they contradict anything I've said. As things stand, Windows 8 certified systems will make it either more difficult or impossible to install alternative operating systems. But let's have some more background.

We became aware of this issue in early August. Since then, we at Red Hat have been discussing the problem with other Linux vendors, hardware vendors and BIOS vendors. We've been making sure that we understood the ramifications of the policy in order to avoid saying anything that wasn't backed up by facts. These are the facts:


  • Windows 8 certification requires that hardware ship with UEFI secure boot enabled.
  • Windows 8 certification does not require that the user be able to disable UEFI secure boot, and we've already been informed by hardware vendors that some hardware will not have this option.
  • Windows 8 certification does not require that the system ship with any keys other than Microsoft's.
  • A system that ships with UEFI secure boot enabled and only includes Microsoft's signing keys will only securely boot Microsoft operating systems.

Microsoft have a dominant position in the desktop operating system market. Despite Apple's huge comeback over the past decade, their worldwide share of the desktop market is below 5%. Linux is far below that. Microsoft own well over 90% of the market. Competition in that market is tough, and vendors will take every break they can get. That includes the Windows logo program, in which Microsoft give incentives to vendors to sell hardware that meets their certification requirements. Vendors who choose not to follow the certification requirements will be at a disadvantage in the marketplace. So while it's nominally up to vendors to choose whether or not to follow the certification requirements, Microsoft's dominant position means that choosing not to follow them is likely to cost sales.

Why is this a problem? Because there's no central certification authority for UEFI signing keys. Microsoft can require that hardware vendors include their keys. Their competition can't. A system that ships with Microsoft's signing keys and no others will be unable to perform secure boot of any operating system other than Microsoft's. No other vendor has the same position of power over the hardware vendors. Red Hat is unable to ensure that every OEM carries their signing key. Nor is Canonical. Nor is Nvidia, or AMD or any other PC component manufacturer. Microsoft's influence here is greater than even Intel's.

What does this mean for the end user? Microsoft claim that the customer is in control of their PC. That's true, if by "customer" they mean "hardware manufacturer". The end user is not guaranteed the ability to install extra signing keys in order to securely boot the operating system of their choice. The end user is not guaranteed the ability to disable this functionality. The end user is not guaranteed that their system will include the signing keys that would be required for them to swap their graphics card for one from another vendor, or replace their network card and still be able to netboot, or install a newer SATA controller and have it recognise their hard drive in the firmware. The end user is no longer in control of their PC.

If Microsoft were serious about giving the end user control, they'd be mandating that systems ship without any keys installed. The user would then have the ability to make an informed and conscious decision to limit the flexibility of their system and install the keys. The user would be told what they'd be gaining and what they'd be giving up.

The final irony? If the user has no control over the installed keys, the user has no way to indicate that they don't trust Microsoft products. They can prevent their system booting malware. They can prevent their system booting Red Hat, Ubuntu, FreeBSD, OS X or any other operating system. But they can't prevent their system from running Windows 8.

Microsoft's rebuttal is entirely factually accurate. But it's also misleading. The truth is that Microsoft's move removes control from the end user and places it in the hands of Microsoft and the hardware vendors. The truth is that it makes it more difficult to run anything other than Windows. The truth is that UEFI secure boot is a valuable and worthwhile feature that Microsoft are misusing to gain tighter control over the market. And the truth is that Microsoft haven't even attempted to argue otherwise.

Syndicated 2011-09-23 13:01:12 from Matthew Garrett

UEFI secure booting

Since there are probably going to be some questions about this in the near future:

The UEFI secure boot protocol is part of recent UEFI specification releases. It permits one or more signing keys to be installed into a system firmware. Once enabled, secure boot prevents executables or drivers from being loaded unless they're signed by one of these keys. Another set of keys (KEK, the key exchange keys) permits communication between an OS and the firmware. An OS with a key matching one installed in the firmware's KEK may add additional keys to the whitelist. Alternatively, it may add keys to a blacklist. Binaries signed with a blacklisted key will not load.

There is no centralised signing authority for these UEFI keys. If a vendor key is installed on a machine, the only way to get code signed with that key is to get the vendor to perform the signing. A machine may have several keys installed, but if you are unable to get any of them to sign your binary then it won't be installable.

This impacts both software and hardware vendors. An OS vendor cannot boot their software on a system unless it's signed with a key that's included in the system firmware. A hardware vendor cannot run their hardware inside the EFI environment unless their drivers are signed with a key that's included in the system firmware. If you install a new graphics card that either has unsigned drivers, or drivers that are signed with a key that's not in your system firmware, you'll get no graphics support in the firmware.

Microsoft requires that machines conforming to the Windows 8 logo program and running a client version of Windows 8 ship with secure boot enabled. The two alternatives here are for Windows to be signed with a Microsoft key and for the public part of that key to be included with all systems, or for each OEM to include their own key and sign the pre-installed versions of Windows. The second approach would make it impossible to run boxed copies of Windows on Windows logo hardware, and also impossible to install new versions of Windows unless your OEM provided a new signed copy. The former seems more likely.

A system that ships with only OEM and Microsoft keys will not boot a generic copy of Linux.

Now, obviously, we could provide signed versions of Linux. This poses several problems. Firstly, we'd need a non-GPL bootloader. Grub 2 is released under the GPLv3, which explicitly requires that we provide the signing keys. Grub legacy is under GPLv2, which lacks the explicit requirement for keys, but it could be argued that the requirement for the scripts used to control compilation includes that. It's a grey area, and exploiting it would be a pretty good show of bad faith. Secondly, in the near future the design of the kernel will mean that the kernel itself is part of the bootloader. This means that kernels will also have to be signed. Making it impossible for users or developers to build their own kernels is not practical. Finally, if we self-sign, it's still necessary to get our keys included by every OEM.

There's no indication that Microsoft will prevent vendors from providing firmware support for disabling this feature and running unsigned code. However, experience indicates that many firmware vendors and OEMs are interested in providing only the minimum of firmware functionality required for their market. It's almost certainly the case that some systems will ship with the option of disabling this. Equally, it's almost certainly the case that some systems won't.

It's probably not worth panicking yet. But it is worth being concerned.

Syndicated 2011-09-20 18:23:20 from Matthew Garrett

The Android/GPL situation

There was another upsurge in discussion of Android GPL issues last month, triggered by a couple of posts by Edward Naughton, followed by another by Florian Mueller. The central thrust is that section 4 of GPLv2 terminates your license on violation, and you need the copyright holders to grant you a new one. If they don't then you don't get to distribute any more copies of the code, even if you've now come into compliance. TL;DR: most Android vendors are no longer permitted to distribute Linux.

I'll get to that shortly. There's a few other issues that could do with some clarification. The first is Naughton's insinuation that Google are violating the GPL due to Honeycomb being closed or their "license washing" of some headers. There's no evidence whatsoever that Google have failed to fulfil their GPL obligations in terms of providing source to anyone who received GPL-covered binaries from them. If anyone has some, please do get in touch. Some vendors do appear to be unwilling to hand over code for GPLed bits of Honeycomb. That's an issue with the vendors, not Google.

His second point is more interesting, but the summary is "Google took some GPLed header files and relicensed them under Apache 2.0, and they've taken some other people's GPLv2 code and put it under Apache 2.0 as well". As far as the headers go, there's probably not much to see here. The intent was to produce a set of headers for the C library by taking the kernel headers and removing the kernel-only components. The majority of what's left is just structure definitions and function prototypes, and is almost certainly not copyrightable. And remember that these are the headers that are distributed with the kernel and intended for consumption by userspace. If any of the remaining macros or inline functions are genuinely covered by the GPLv2, any userspace application including them would end up a derived work. This is clearly not the intention of the authors of the code. The risk to Google here is indistinguishable from zero.

How about the repurposing of other code? Naughton's most explicit description is:

For example, Android uses “bootcharting” logic, which uses “the 'bootchartd' script provided by www.bootchart.org, but a C re-implementation that is directly compiled into our init program.” The license that appears at www.bootchart.org is the GPLv2, not the Apache 2.0 license that Google claims for its implementation.

But there's no indication that Google's reimplementation is a derived work of the GPLv2 original.

In summary: No sign that Google's violating the GPL.

Florian's post appears to be pretty much factually correct, other than this bit discussing the SFLC/Best Buy case:

I personally believe that intellectual property rights should usually be enforced against infringing publishers/manufacturers rather than mere resellers, but that's a separate issue.

The case in question was filed against Best Buy because Best Buy were manufacturing infringing devices. It was a set of own-brand Blu-ray players that incorporated Busybox. Best Buy were not a mere reseller.

Anyway. Back to the original point. Nobody appears to disagree that section 4 of the GPLv2 means that violating the license results in total termination of the license. The disagreement is over what happens next. Armijn Hemel, who has done various work on helping companies get back into compliance, believes that simply downloading a new copy of the code will result in a new license being granted, and that he's received legal advice that supports that. Bradley Kuhn disagrees. And the FSF seem to be on his side.

The relevant language in v2 is:

You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License.

The relevant language in v3 is:

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License

which is awfully similar. However, v3 follows that up with:

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

In other words, with v3 you get your license back providing you're in compliance. This doesn't mesh too well with the assumption that you can get a new license by downloading a new copy of the software. It seems pretty clear that the intent of GPLv2 was that the license termination was final and required explicit reinstatement.

So whose interpretation is correct? At this point we really don't know - the only people who've tried to use this aspect of the GPL are the SFLC, and as part of their settlements they've always reinstated permission to distribute Busybox. There's no clear legal precedent. Which makes things a little awkward.

It's not possible to absolutely say that many Android distributors no longer have the right to distribute Linux. But nor is it possible to absolutely say that they haven't lost that right. Any sufficiently motivated kernel copyright holder probably could engage in a pretty effective shakedown racket against Android vendors. Whether they will do so remains to be seen, but honestly if I were an Android vendor I'd be worried. There's plenty of people out there who hold copyright over significant parts of the kernel. Would you really bet on all of them being individuals of extreme virtue?

Syndicated 2011-09-01 21:39:36 from Matthew Garrett

Further adventures in EFI booting

Many people still install Linux from CDs. But a growing number install from USB. In an ideal world you'd be able to download one image that would let you do either, but it turns out that that's quite difficult. Shockingly enough, it's another situation where the system firmware exists to make your life difficult.

Booting a hard drive is pretty easy. The BIOS reads the first 512 bytes off the drive, copies them to RAM and executes them. That code is then responsible for either starting your bootloader or identifying the currently active partition and jumping to its boot sector, but before too long you're in a happy place where you're executing whatever you want to. Life is good. So you'd think that CDs would work in a similar way. The ISO 9660 format even leaves a whole 32KB at the start of a filesystem, which is enough space for a pretty awesome bootloader. But no. This is not how CDs work. That would be far too easy.
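All the BIOS really checks before copying the sector into RAM and jumping into it is the 0x55AA signature in its last two bytes. A trivial sketch:

    # Read the first 512 bytes of a drive or image and check the boot signature.
    import struct

    def looks_bootable(path):
        with open(path, "rb") as f:
            sector = f.read(512)
        return len(sector) == 512 and struct.unpack_from("<H", sector, 510)[0] == 0xAA55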

Let's imagine we're back in the 90s. People want to be able to boot off CD without needing a boot floppy to do so. And you're a PC vendor with a BIOS that's been lovingly[1] forced into a tiny piece of flash and which has to execute out of an almost as tiny piece of RAM if you want your users to be able to play any games. Letting boot code read arbitrary content off the CD would mean adding a new set of interrupt hooks, and that's going to be even more complicated because CDs have a sector size of 2K while hard drives are 512 bytes[2] and who's going to pay to implement this and for the extra flash and RAM and look surely there has to be another way?

So, of course, another way was found. The El Torito specification defines a way for shoving a reference to some linear blocks into the ISO 9660 header. The BIOS reads those blocks into memory and then redirects either the floppy or hard drive access interrupts (depending on the El Torito type) to that region. The boot code can then proceed as if it had been read off a floppy without all the trouble of actually putting a floppy in the machine, and the extra code required in the system BIOS is minimal.
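If you want to see this for yourself, here's a small sketch that scans an ISO image's volume descriptors (which start at sector 16) for the El Torito boot record and returns the sector holding the boot catalogue:

    # Walk the 2K volume descriptors looking for the type-0 "boot record"
    # descriptor carrying the El Torito system identifier; offset 0x47 holds
    # the absolute sector of the boot catalogue.
    import struct

    SECTOR = 2048

    def find_boot_catalog(path):
        with open(path, "rb") as f:
            lba = 16
            while True:
                f.seek(lba * SECTOR)
                desc = f.read(SECTOR)
                if desc[1:6] != b"CD001":
                    return None
                if desc[0] == 0 and desc[7:30] == b"EL TORITO SPECIFICATION":
                    return struct.unpack_from("<I", desc, 0x47)[0]
                if desc[0] == 255:                 # volume descriptor set terminator
                    return None
                lba += 1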

USB sticks, however, are treated as hard drives. The BIOS won't look for El Torito images on them. Instead, it'll try to execute a boot sector. That isn't there on a CD image. Sigh.

A few years ago a piece of software called isohybrid popped up and solved this problem nicely. isohybrid is a companion to isolinux, which itself is a bootloader that fits into an El Torito image and can then load your kernel and installer from CD. isohybrid takes an ISO image, adds an x86 boot sector and partition table and does some more fiddling to turn a valid ISO image into one that can be copied directly onto a USB stick and booted. The world suddenly becomes a better place.

But that's BIOS. EFI makes this easier, right? Right?

No. EFI does not make this easier.

Despite EFI being a modern firmware for the modern world[3], EFI implementations are not required to be able to understand ISO 9660. In fact, I've never seen one that does. FAT is all the spec requires, and FAT is typically all you get. Nor will EFI just execute some arbitrary boot code from the start of the CD. So, how does EFI boot off CD?

El Torito. Obviously.

It's not quite as bad as it sounds, merely almost as bad as it sounds. While the typical way of using El Torito for a long time was to use floppy or hard drive emulation, it also supports a "No emulation" mode. It also supports setting a type flag for your media, which means you can distinguish between images intended for BIOS booting and EFI booting. But the fact remains that your CD has to include an embedded FAT partition that then contains a bootloader that's able to read ISO 9660 because your firmware is too inept to handle that itself[4].

How about USB sticks? Thankfully, booting these on EFI doesn't require any boot sectors at all. Instead you just have to have a partition table, a FAT partition and a bootloader in a well known location in that FAT partition. The required partition is, in fact, identical to the one you need in an El Torito image. And so this is where we start introducing some extra hacks.

Like I said earlier, isohybrid fakes up an MBR and adds some boot code that points at the actual bootloader. It needs to do a little more on EFI. The first problem is that the isohybrid MBR partition has to cover the entire ISO 9660 filesystem on the USB stick so that the operating system can access it later, but the El Torito FAT image is inside that partition. A lot of MBR-based code becomes very unhappy if you try to set up a partition that's a subset of another partition. So we can't really use MBR. On to GPT.

GPT, or the GUID Partition Table, is the EFI era's replacement for MBR partitions. It has two main advantages over MBR - firstly it can cover partitions larger than 2TB without having to increase sector size, and secondly it doesn't have the primary/logical partition horror that still makes MBR more difficult than it has any right to be. The format is pretty simple - you have a header block 1 logical block into the media (so 512 bytes on a typical USB stick), and then a pointer to a list of partitions. There's then a secondary table one block from the end of the disk, which points at another list of partitions. Both blocks have multiple CRCs that guarantee that neither the header nor the partition list have been corrupted. It turns out to be a relatively straightforward modification of isohybrid to get it to look for a secondary EFI image and construct a GPT entry pointing at it. This works surprisingly well, and media prepared this way will boot EFI machines if burned to a CD or written to a USB stick.
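A short sketch of what reading that primary header looks like for an image with 512-byte logical blocks: check the signature, verify the header CRC (calculated with the CRC field zeroed out), and pull out where the partition entries live.

    # Parse the primary GPT header at LBA 1 of a disk image.
    import struct
    import zlib

    def read_gpt_header(path, block_size=512):
        with open(path, "rb") as f:
            f.seek(block_size)                     # the header lives one block in
            raw = f.read(block_size)
        if raw[:8] != b"EFI PART":
            return None
        header_size = struct.unpack_from("<I", raw, 12)[0]
        stored_crc = struct.unpack_from("<I", raw, 16)[0]
        zeroed = raw[:16] + b"\0\0\0\0" + raw[20:header_size]
        if zlib.crc32(zeroed) != stored_crc:
            return None                            # corrupt header; fall back to the backup copy
        entries_lba, n_entries, entry_size = struct.unpack("<QII", raw[72:88])
        return {"entries_lba": entries_lba, "entries": n_entries, "entry_size": entry_size}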

There's a few quirks. Macs will show two boot icons for these CDs[6], one marked "EFI Boot" and one helpfully marked "Windows"[7], with the latter booting the BIOS El Torito image. That's a little irritating, but not insurmountable. The other issue is that older Macs won't look for boot loaders in the legacy locations. This is where things start getting horrible.


Back in the old days, Apple boot media used to have a special "blessed" folder. Attempting to boot would involve the firmware looking for such a folder and then using that to start itself up. Any folder in the filesystem could be blessed. Modern hardware doesn't use boot folders, but does use boot files. For an HFS+ filesystem, the inode of the bootloader is written to a specific offset in the filesystem superblock and the firmware simply finds that inode and executes it. And this appears to be all that older Macs support.

So, having written a small tool to bless an HFS+ partition, I tried the obvious first step of burning a CD with three El Torito images (one BIOS, one FAT, one HFS+). It failed. While rEFIt could see the bootloader in the HFS+ image, the firmware appeared to have no interest at all in booting off it. Yet Apple install media would boot. What was the difference?

The difference, obviously, was that these earlier Macs don't appear to support El Torito booting. The Apple install media contained an Apple partition map.

The Apple partition map (APM) is Apple's legacy partition table format. Apple mostly dropped it when they went to x86, but it's retained for two purposes. The first is for drives that need to be shared between Intel Macs and PPC ones. The second seems to be for their install DVDs. Some further playing revealed that burning a CD with an APM entry pointing at the HFS+ filesystem on the CD gave me a boot icon. Problem solved?

Not really. Remember how I earlier mentioned that ISO 9660 leaves 32KB at the start of the image, and that an isohybrid image then writes an MBR and boot sector in the first 512 bytes of that, and the GPT header starts 512 bytes into a drive? That means that it's easy to produce an ISO that has both a boot sector, MBR partition table and GPT. None of them overlap. APM, on the other hand, has a header that's located at byte 0 of the media, overlapping with the boot sector. And it has a partition listing that's located at sector 1, overlapping with the GPT. Is all lost?

No. Merely sanity.

The first thing to remember is that the boot sector is just raw assembler. It's a byte stream that's executed by the CPU. And there's a lot of things you can tell a CPU to do that result in nothing happening. Peter Jones pointed out that the only bits of the APM header you actually need are the letters "ER", followed by the sector size as a two-byte big-endian integer. These disassemble to harmless instructions, so we can simply move the boot code down a little and stick these at the beginning. A PC that executes it will read straight through the bizarre (but harmless) Apple bytes and then execute the real boot code.

The second thing that's important here is that we were just given the opportunity to specify the sector size. The GPT is only relevant when the image is written to a USB stick, so assumes a sector size of 512 bytes. So when the GPT starts one sector into the drive, it's actually starting 512 bytes into the drive. APM also starts one sector into the drive, but we can simply put a different sector size into the header and suddenly we're able to choose where that's going to be. 2K seems like a good choice, and so the firmware will now look for the header at byte 2048.

That's still in the middle of the GPT partition listing, though. Except we can avoid that as well. GPT lets you specify where the partition listing starts and doesn't require it to be immediately after the header. So we can offset the partition listing to, say, byte 8192 and leave a hole for the Apple partition map.
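Pulling the whole trick together, here's a quick sketch that checks an image for all three structures at the offsets described above:

    # Look for an x86 boot sector that doubles as an APM driver descriptor
    # ("ER" plus a big-endian block size of 2048), a GPT header at byte 512,
    # and APM partition map entries ("PM") starting at byte 2048.
    import struct

    def check_hybrid_layout(path):
        with open(path, "rb") as f:
            head = f.read(8192)
        return {
            "boot_sector": head[510:512] == b"\x55\xaa",
            "apm_signature": head[0:2] == b"ER",
            "apm_block_size": struct.unpack_from(">H", head, 2)[0],   # 2048 for these images
            "gpt_header": head[512:520] == b"EFI PART",
            "apm_entries": head[2048:2050] == b"PM",
        }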

And, shockingly, this works. Setting up a CD this way gives a boot icon on old Macs. On new Macs, it gives three - one for legacy boot, one for EFI boot via FAT and one for EFI boot via HFS. Less than ideal, but eh. The one remaining problem is that this doesn't work for USB sticks (the firmware sees the GPT and ignores the APM), so we also need to add a GPT entry for the HFS+ partition. Job done.

So, it is possible to produce install media that will work if burned to CD or written to a USB stick. It's even possible to produce a version that will work on Macs, as long as you're willing to put up with three partition tables and an x86 boot sector that doubles as an APM header. And patches to isohybrid to do all of this will be turning up as soon as I tidy the code to the point where it works without having to hack in offsets by hand.

[1] Insert some other adverb here if you feel like it
[2] Why yes, 15 years later BIOSes still tend to assume 512 bytes. Which is why your 4K sector disk is much harder to work with than you'd like it to be.
[3] Ever noticed how the modern world involves a great deal of suffering, misery and death? EFI fits into that world perfectly.
[4] Obviously if you want your media to be bootable via both BIOS and EFI you need to produce a CD with two El Torito images. BIOS systems should ignore the image that says it's for EFI, and EFI systems should ignore the BIOS one. Some especially creative BIOS authors[5] have decided that users shouldn't have their choices limited in such a way, and so pop up a screen that says:


Select CD-ROM boot type:

1.
2.

and wait for the user to press a key. The lack of labels after the numbers is not a typographical error on my part.
[5] Older (pre-2009, and some 2009 models) Apple hardware has this bug if a dual-El Torito CD is booted via the BIOS compatibility layer. This is especially unfortunate because said machines often fail to provide a working keyboard emulation at this stage, resulting in you being stuck forever at an impressively unhelpful screen. This isn't a Linux bug, since it's happening before we've run any of our code at all. It's not even limited to Linux. 64-bit install media for Vista SP1, Windows 7 and Server 2008 all have similar El Torito layout and all trigger the same bug on Apple hardware. Apple's aware of this, and has resolved the issue by declaring that these machines don't support 64 bit Windows.
[6] Even further investigation reveals that Apple will show you as many icons as there are El Torito images, which is a rare example of Apple giving the user the freedom to brutally butcher their extremities if they so desire
[7] "Windows" is Apple code for "Booting via BIOS compatibility". The Apple boot menu will call any filesystem with a BIOS boot sector Windows.

Syndicated 2011-07-26 12:48:46 from Matthew Garrett

OSCON

Last night, Tim O'Reilly posted about O'Reilly's desire to ensure that conferences they run are free of harassment and as welcoming to as much of the community as possible. This comes on the back of a brief campaign by various people concerned that the absence of such a policy at one of the largest open source conferences was a problem. O'Reilly turned out to be highly responsive, and it's a credit to everyone involved that this got worked out in such a short space of time.

OSCON's an interesting conference. First, it's huge. There's upwards of 15 simultaneous tracks in the main conference. It covers a huge range of topics, from Linux to pretty much any piece of open source middleware you can think of, to web technologies to hardware hacking to free culture. It's a cross section of pretty much everything that can plausibly be described as open source. As such, the demographics are very different to a typical Linux conference, or even a typical single-field technical conference such as PyCon. We may have discussed these issues at length in the Linux community, but that's such a small part of the OSCON audience that it's not surprising that awareness is lower.

Which means this is a pretty big deal. Sexual harassment isn't something that's limited to Linux conferences. It's prevalent throughout the entire open source world. Having a conference that represents a broader part of that world than any other accept that this is a genuine problem is a massive step towards raising awareness of it in the wider community.

Syndicated 2011-07-25 22:25:47 from Matthew Garrett

Contributor License Agreement corner cases

If you don't have any interest in tedious licensing discussion then I recommend not reading this. It's not going to be a great deal of fun.

Having said that:

I'm looking at the Canonical Individual Contributor License Agreement (pdf here). In contrast to the previous copyright assignment, it merely grants a broad set of rights to Canonical, including the right to relicense the work under any license they choose. Notably, it does not transfer copyright to Canonical. The contributor retains copyright.

So here's a thought experiment. Canonical release a project under the GPLv3. I produce a significant modification to it. It's now clearly a derived work of Canonical's project, so I have to distribute it under the GPLv3. I have no right to distribute it under a proprietary license. I sign the CLA and provide my modification to Canonical. The grant I give them includes giving them permission to relicense my work under any license they choose. As copyright holders to the original work, they may also change the license of that work. But, notably, I have not granted them copyright to my work. I continue to hold that.

Now someone else decides to extend the functionality that I added. By doing so they are creating a work that's both a derivative work of Canonical's code and also a derivative work of my code. How can they sign the CLA? I granted Canonical the right to grant extra permissions on my work. I didn't grant anyone else that right.

This wasn't a problem with the copyright assignment case, because as copyright holders Canonical could simply grant that permission to all downstream recipients. But Canonical aren't the copyright holder, and unless they explicitly relicense my work I don't see any way that they can accept derivatives of it. The only way I can see this working is if all Canonical code is actually distributed under an implicit license that's slightly more permissive than the GPLv3. But there's nothing saying that it is at present.

Now, I'm obviously not a lawyer. I may be entirely wrong about the above. But asymmetric CLAs introduce an additional level of complexity into the entire process of contributing that make it even more difficult for a potential contributor to become involved. I've spent far more time than most worrying about licenses and even I don't understand exactly what I'm giving up, which is ironic given that a stated aim is usually that they increase certainty about licensing. Is the opportunity to relicense really worth alienating people who would otherwise be doing free work for you?

Syndicated 2011-07-25 17:25:03 from Matthew Garrett
