Older blog entries for mjg59 (starting at number 321)

TVs are all awful

A discussion a couple of days ago about DPI detection (which is best summarised by this and this and I am not having this discussion again) made me remember a chain of other awful things about consumer displays and EDID and there not being enough gin in the world, and reading various bits of the internet and wikipedia seemed to indicate that almost everybody who's written about this has issues with either (a) technology or (b) English, so I might as well write something.

The first problem is unique (I hope) to 720p LCD TVs. 720p is an HD broadcast standard that's defined as having a resolution of 1280x720. A 720p TV is able to display that image without any downscaling. So, naively, you'd expect them to have 1280x720 displays. Now obviously I wouldn't bother mentioning this unless there was some kind of hilarious insanity involved, so you'll be entirely unsurprised when I tell you that most actually have 1366x768 displays. So your 720p content has to be upscaled to fill the screen anyway, but given that you'd have to do the same for displaying 720p content on a 1920x1080 device this isn't the worst thing ever in the world. No, it's more subtle than that.

EDID is a standard for a blob of data that allows a display device to express its capabilities to a video source in order to ensure that an appropriate mode is negotiated. It allows resolutions to be expressed in a bunch of ways - you can set a bunch of bits to indicate which standard modes you support (1366x768 is not one of these standard modes), you can express the standard timing resolution (the horizontal resolution divided by 8, followed by an aspect ratio) and you can express a detailed timing block (a full description of a supported resolution).

1366/8 = 170.75. Hm.

Ok, so 1366x768 can't be expressed in the standard timing resolution block. The closest you can provide for the horizontal resolution is either 1360 or 1368. You also can't supply a vertical resolution - all you can do is say that it's a 16:9 mode. For 1360, that ends up being 765. For 1368, that ends up being 769.
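If you want to play along at home, here's a minimal sketch of the arithmetic, assuming the usual standard timing encoding of (horizontal resolution / 8) - 31 packed into a byte alongside some aspect ratio bits:

#include <stdio.h>

int main(void)
{
    int widths[] = { 1280, 1360, 1366, 1368 };

    for (int i = 0; i < 4; i++) {
        int w = widths[i];

        if (w % 8) {
            /* 1366 lands here: it simply can't be encoded */
            printf("%d: not representable\n", w);
            continue;
        }

        /* The stored byte is (width / 8) - 31; with the 16:9 aspect
         * ratio bits set, the vertical resolution is implied as
         * width * 9 / 16 */
        printf("%d: byte 0x%02x, implied vertical %d\n",
               w, w / 8 - 31, w * 9 / 16);
    }

    return 0;
}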

It's ok, though, because you can just put this in the detailed timing block, except it turns out that basically no TVs do, probably because the people making them are the ones who've taken all the gin.

So what we end up with is a bunch of TVs that people assume are 1280x720 but are actually 1366x768, except they're telling your computer that they're either 1360x765 or 1368x769. And you're probably running an OS that's doing sub-pixel anti-aliasing, which requires being able to address individual pixels directly - obviously difficult if you think the screen is one size when it's actually another. Thankfully Linux takes care of you here, and this code makes everything ok. Phew, eh?
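The fix amounts to a quirk along these lines - an illustration of the approach rather than the actual kernel code:

struct mode { int hdisplay, vdisplay; };

/* If EDID reports one of the resolutions that a 1366x768 panel is
 * forced to advertise, assume it's really 1366x768 */
static void fixup_1366x768(struct mode *m)
{
    if ((m->hdisplay == 1360 || m->hdisplay == 1368) &&
        (m->vdisplay == 765 || m->vdisplay == 768 ||
         m->vdisplay == 769)) {
        m->hdisplay = 1366;
        m->vdisplay = 768;
    }
}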

But ha ha, no, it's worse than that. And the rest applies to 1080p ones as well.

Back in the old days when TV signals were analogue and got turned into a picture by a bunch of magnets waving a beam of electrons about all over the place, it was impossible to guarantee that all TV sets were adjusted correctly and so you couldn't assume that the edges of a picture would actually be visible to the viewer. In order to put text on screen without risking bits of it being lost, you had to steer clear of the edges. Over time this became roughly standardised and the areas of the signal that weren't expected to be displayed were called overscan. Now, of course, we're in a mostly digital world and such things can be ignored, except that when digital TVs first appeared they were mostly used to watch analogue signals so still needed to overscan because otherwise you'd have the titles floating weirdly in the middle of the screen rather than towards the edges, and so because it's never possible to kill technology that's escaped into the wild we're stuck with it.

tl;dr - Your 1920x1080 TV takes a 1920x1080 signal, chops the edges off it and then stretches the rest to fit the screen because of decisions made in the 1930s.

So you plug your computer into a TV and even though you know what the resolution really is you still don't get to address the individual pixels. Even worse, the edges of your screen are missing.

The best thing about overscan is that it's not rigorously standardised - different broadcast bodies have different recommendations, but you're then still at the mercy of what your TV vendor decided to implement. So what usually happens is that graphics vendors have some way in their drivers to compensate for overscan, which involves you manually setting the degree of overscan that your TV provides. This works very simply - you take your 1920x1080 framebuffer and draw different sized black borders until the edge of your desktop lines up with the edge of your TV. The best bit about this is that while you're still scanning out a 1920x1080 mode, your desktop has now shrunk to something more like 1728x972 and your TV is then scaling it back up to 1920x1080. Once again, you lose.
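The arithmetic is depressingly simple. A sketch, assuming a TV that throws away 5% at each edge (the real figure varies by vendor, which is rather the point):

#include <stdio.h>

int main(void)
{
    int w = 1920, h = 1080;
    double per_edge = 0.05; /* assumed: 5% lost at each edge */

    /* The framebuffer is still 1920x1080; only the middle survives */
    int usable_w = (int)(w * (1.0 - 2 * per_edge));
    int usable_h = (int)(h * (1.0 - 2 * per_edge));

    printf("scanned out %dx%d, visible desktop %dx%d\n",
           w, h, usable_w, usable_h);
    return 0;
}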

The HDMI spec actually defines an extension block for EDID that indicates whether the display will overscan or not, but doesn't provide any way to work out how much it'll overscan. We haven't seen many of those in the wild. It's also possible to send an HDMI information frame that indicates whether the video source expects to be overscanned, but (a) we don't do that and (b) it'd probably be ignored even if we did, because who ever tests this stuff? The HDMI spec also says that the default behaviour for 1920x1080 (but not 1366x768) should be to assume overscan. Charming.

The best thing about all of this is that the same TV will often have different behaviour depending on whether you connect via DVI or HDMI, but some TVs will still overscan DVI. Some TVs have options in the menu to disable overscan and others don't. Some monitors will overscan if you feed them an HD resolution over HDMI, so if you have HD content and don't want to lose the edges then your hardware needs to scale it down and let the display scale it back up again. It's all awful. I recommend you drink until everything's already blurry and then none of this will matter.

Syndicated 2012-01-03 17:46:40 from Matthew Garrett

Clarifying the "secure boot attack"

Yesterday I wrote about an alleged attack on the Windows 8 secure boot implementation. As I later clarified, it turns out that the story was, to put it charitably, entirely wrong. The attack is a bootkit that targets BIOS-based boot. It lives in the MBR. It'll never be executed on any UEFI system, let alone a secure boot one. In fact, this is precisely the kind of attack that secure boot is intended to protect against. So, context.

The MBR contains code that's executed by the BIOS at boot time. This code is unverifiable - it's permitted to have arbitrary functionality. There's only 440 bytes, but that's enough to jump to somewhere else and read code from elsewhere. There's no way for the BIOS to know that this code is malicious. And one thing this code can obviously do is load the normal boot code and modify it to behave differently. Any self-validation code in the loader can be patched out at this point. The modified loader will then load the kernel, and potentially also modify it. At this point, you've lost. Any attempts to validate the code can be redirected to the original code and so everything will look fine, up until the point where the user runs a specific application and suddenly your kernel is sending all your keystrokes over UDP to someone in Nigeria.
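For reference, the layout being abused here looks roughly like this - a sketch of the conventional MBR format:

#include <stdint.h>

struct mbr_partition {
    uint8_t  status;        /* 0x80 marks the bootable flag */
    uint8_t  chs_first[3];
    uint8_t  type;          /* partition type byte */
    uint8_t  chs_last[3];
    uint32_t lba_first;     /* 32-bit start sector */
    uint32_t sector_count;  /* 32-bit length - hence the 2.2TB limit */
} __attribute__((packed));

struct mbr {
    uint8_t  bootstrap[440];      /* arbitrary, unverifiable code */
    uint32_t disk_signature;
    uint16_t reserved;
    struct mbr_partition partitions[4];
    uint16_t signature;           /* 0xaa55 */
} __attribute__((packed));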

These attacks exist now. They're in the wild. In a normal UEFI world you'd do the same thing by just replacing the UEFI bootloader. But with secure boot you'll be able to validate that the bootloader is appropriately signed and if someone's modified it you'll drop into some remediation mode that recovers your files, from install media if necessary.

Obviously, this protection is based on all the components of secure boot (ie, everything that runs before ExitBootServices() is called) being perfect. As I said, if any of them accept untrusted input and misinterpret it in such a way that they can be tricked into running arbitrary code, you'll still have problems. But when discussing the pros and cons of secure boot, it's important to make sure that we're talking about reality rather than making provably false assertions.

Syndicated 2011-11-18 21:03:08 from Matthew Garrett

Attacks on secure boot

This is interesting. It's obviously still light on details, but it does highlight one weakness of secure boot. The security of secure boot is all rooted in the firmware - there's no external measurement to validate that everything functioned as expected. That means that if you can cause any trusted component to execute arbitrary code then you've won. So, what reads arbitrary user data? The most obvious components are any driver that binds to user-controlled hardware, any filesystem driver that reads user-provided filesystems and any signed bootloader that reads user-configured data. A USB drive could potentially trigger a bug in the USB stack and run arbitrary code. A malformed FAT filesystem could potentially trigger a bug in the FAT driver and run arbitrary code. A malformed bootloader configuration file or kernel could potentially trigger a bug in the bootloader and run arbitrary code. It may even be possible to find bugs in the PE-COFF binary loader. And once you have the ability to run arbitrary code, you can replace all the EFI entry points and convince the OS that everything is fine anyway.

None of this should be surprising. Secure boot is predicated upon the firmware only executing trusted material until the OS handoff. If you can sidestep that restriction then the entire chain of trust falls down. We're talking about a large body of code that was written without the assumption that it would have to be resistant to sustained attack, and which has now been put in a context where people are actively trying to break it. Bugs are pretty inevitable. I'd expect a lot of work to be done on firmware implementations between now and the Windows 8 ship date.

Syndicated 2011-11-17 16:52:03 from Matthew Garrett

GPT disks in a BIOS world

Starting with Fedora 16 we're installing using GPT disklabels by default, even on BIOS-based systems. This is worth noting because most BIOSes have absolutely no idea what GPT is, which you'd think would create some problems. And, unsurprisingly, it does. Shock. But let's have an overview.

GPT, or GUID Partition Table, is part of the UEFI specification. It defines a partition table format that allows up to 128 partitions per disk, with 64 bit start and end values allowing partitions up to 9.4ZB (assuming 512 byte blocks). This is great, because the existing MBR partitioning format only allows up to 2.2TB when using 512 byte blocks. But most BIOSes (and most older operating systems) don't understand GPT, so plugging in a GPT-partitioned disk would result in the system believing that the drive was uninitialised. This is avoided by specifying a protective MBR. This is a valid MBR partition table with a single partition covering the entire disk (or the first 2.2TB of the disk if it's larger than that) and the partition type set to 0xee ("GPT Protective"). GPT-unaware BIOSes and operating systems will see a partition they don't understand and simply ignore it.
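The only mildly interesting bit is sizing the protective entry, since the 32-bit sector count simply saturates. A sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t disk_sectors = 7814037168ULL; /* e.g. a 4TB disk */

    /* The protective entry starts at LBA 1 (LBA 0 holds the MBR
     * itself) and covers the rest of the disk, capped at the
     * largest value a 32-bit sector count can hold (~2.2TB) */
    uint64_t covered = disk_sectors - 1;
    uint32_t entry = covered > UINT32_MAX ? UINT32_MAX
                                          : (uint32_t)covered;

    printf("type 0xee, start LBA 1, %u sectors\n", entry);
    return 0;
}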

But how do we boot a GPT-labelled disk with a protective MBR on a system that doesn't understand GPT? The key here is that BIOS is pretty dumb. Typically a BIOS will see a disk and just attempt to execute the code in the first sector. This MBR code knows how to do the rest of the boot, including parsing the partition table if necessary. The BIOS doesn't need to care at all.

Of course, some BIOSes choose to care. We've seen a small number of machines that, when exposed to a GPT disk, refuse to boot because they parse the MBR partition map and don't like what they see. This is typically accompanied by a message along the lines of "No operating system found". What we've found is that they're looking for a partition marked with the bootable flag, and if no partitions are marked bootable they assume that there's no OS. This is in contrast to the traditional use of the flag, which is merely a hint to the MBR code as to which partition's boot code it should execute.

So, should we set that flag? The UEFI specification specifically forbids it - table 15 states that the BootIndicator byte must be set to 0. Once again we're left in an unfortunate position where the specification and reality collide in an awkward way.

If this happens to you after a Fedora 16 install, you have two choices. The first is to reinstall with the "nogpt" boot argument. The installer will then set up a traditional MBR partition table. The second is to boot off a live CD and run fdisk against the boot disk. It'll give a bunch of scary warnings. Ignore them. Hit "a", then "1", then "w" to write it to disk. Things ought to work then. We'll figure out something better for F17.
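For the curious, that fdisk incantation boils down to setting a single byte. A sketch, assuming /dev/sda is the boot disk - obviously don't run things like this against a disk you care about without checking first:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The first partition entry starts at offset 446; byte 0 of an
     * entry is the status field, where 0x80 means bootable */
    unsigned char flag = 0x80;
    int fd = open("/dev/sda", O_WRONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (pwrite(fd, &flag, 1, 446) != 1)
        perror("pwrite");
    close(fd);
    return 0;
}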

Syndicated 2011-11-17 15:54:30 from Matthew Garrett

Making timeouts work with suspend

A reasonably common design for applications that want to run code at a specific time is something like:

time_t wakeup_time = get_next_event_time();
time_t now = time(NULL);
sleep(wakeup_time - now);

This works absolutely fine, except that sleep() doesn't count time the system spends suspended. If you sleep(3600) and then immediately suspend for 45 minutes, you'll wake up after 105 minutes, not 60. Which probably isn't what you want. If you want a timer that'll expire at a specific time (or immediately after resume if that time passed during suspend), use the POSIX timer family (timer_create, timer_settime and friends) with CLOCK_REALTIME. It's a signal-driven interface rather than a blocking one, so the implementation may be a little more complicated, but it has the advantage of actually working.
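A minimal sketch of that approach - get_next_event_time() is the hypothetical helper from the fragment above, stubbed out here, and older glibc wants -lrt for the timer functions:

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t fired;

static void on_timer(int sig)
{
    (void)sig;
    fired = 1;
}

static time_t get_next_event_time(void)
{
    return time(NULL) + 3600; /* stand-in for real scheduling logic */
}

int main(void)
{
    struct sigaction sa = { .sa_handler = on_timer };
    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                            .sigev_signo = SIGALRM };
    struct itimerspec its = { 0 };
    timer_t timer;

    sigaction(SIGALRM, &sa, NULL);
    timer_create(CLOCK_REALTIME, &sev, &timer);

    /* TIMER_ABSTIME: expire at a wall-clock time, so time spent
     * suspended counts, and a deadline that passes during suspend
     * fires immediately on resume */
    its.it_value.tv_sec = get_next_event_time();
    timer_settime(timer, TIMER_ABSTIME, &its, NULL);

    while (!fired)
        pause();

    printf("timer fired\n");
    return 0;
}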

Syndicated 2011-11-17 13:51:47 from Matthew Garrett

Properly booting a Mac

This is mostly for my own reference, but since it might be useful to others:

By "Properly booting" I mean "Integrating into the boot system as well as Mac OS X does". The device should be visible from the boot picker menu and should be selectable as a startup disk. For this to happen the boot should be in HFS+ format and have the following files:

  • /mach_kernel (can be empty)
  • /System/Library/CoreServices/boot.efi (this is what gets booted; it can be a symlink to the actual bootloader)
  • /System/Library/CoreServices/SystemVersion.plist which should look something like
    <?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0">
    <dict>
            <key>ProductName</key>
            <string>Linux</string>
            <key>ProductVersion</key>
            <string>Fedora 16</string>
    </dict>
    </plist>
That's enough to get it to appear in the startup disk boot pane. Getting it in the boot picker requires it to be blessed. You probably also want a .VolumeIcon.icns in / in order to get an appropriate icon.

Now all I need is an aesthetically appealing boot loader.

Syndicated 2011-11-09 22:06:07 from Matthew Garrett

Understanding the current state of UEFI

This story has been floating around for a week or so. The summary is that someone bought a system that has UEFI and is having trouble installing Linux on it. In itself, not a problem. But various people have either conflated this with the secure boot issue or suggested that UEFI is a fundamentally anti-Linux technology.

Right now there are no machines shipping to the public with secure boot enabled. None at all. If you're having problems installing Linux on a machine with UEFI then it's not because of secure boot. So what is actually causing the problem?

UEFI is a complicated specification, with 2.3.1A being 2214 pages long. It's a large body of code with a lot of subtleties, and it's very easy for people to get things wrong. For example, we've seen issues where calling SetVirtualAddressMap() resulted in the firmware referencing boot services code, a clear violation of the spec on the firmware authors' part. We've also found machines that failed to boot because grub wasn't aligning its stack properly, a clear violation of the spec on our part.

Software is difficult. People make mistakes. When something mysteriously fails to work the immediate assumption should be that you've found a bug, not a conspiracy. Over time we'll find those bugs and fix them, but until then just treat UEFI boot failures like any other bug - annoying, but not malicious.

Syndicated 2011-11-03 17:47:36 from Matthew Garrett

UEFI secure boot white paper

As people have probably noticed, last week we published a white paper on the UEFI secure boot issue. This was written in collaboration with Canonical and wouldn't have been possible without the tireless work of Jeremy Kerr. It's the kind of problem that affects the entire Linux community, and I'm glad that we've both demonstrated that being competitors is less important than working together for the benefit of everyone.

Syndicated 2011-11-03 17:47:12 from Matthew Garrett

Feeding the trolls

A few years ago I got up on stage and briefly talked about how the Linux community contained far too many people who were willing to engage in entirely inappropriate behaviour, how this discouraged people from getting involved and how we weren't very good at standing up against that sort of behaviour. Despite doing this in front of several hundred people, and despite the video of me doing so then being uploaded to the internet, this got me a sum total of:

  • No death threats
  • No discussion about any of my physical attributes or lack thereof
  • No stalkers
  • No accusations that I was selling out the Linux community
  • No accusations that I was a traitor to my gender
  • No real negative feedback at all[1]

Which is, really, what you'd expect, right? The internet seems intent on telling me otherwise:

Well, she didn't do herself any favors by talking at conferences about women in tech, or setting up a feminist movement. If you wanted to attract abuse, that's a good way to go about it. It should be expected.

MikeeUSA is a troll. He has no means to actually harm anyone, and he does it purely for the lulz.

Thus, MikeeUSA trolled a woman, and she took the bait. I just don't get why this is news, I've been trolled before, I don't get a news story.


I was going to start a rant about how this behavior is encouraged by the macho men online, but this was just one guy harassing her. "Due to harassment" reads as due to harassment from the community, but she gave in to one idiot. She let him win.

The full comment thread has rather more examples. If you stand up and say anything controversial, you should expect abuse. And if you let that abuse change your behaviour in any way, you've let the trolls win.

These attitudes are problematic.

The immediate assumption underlying such advice is that the degree of abuse is related to what you've said, not who you are. I'm reasonably visible in the geek world. I've said a few controversial things. The worst thing that's happened to me has been Ryan Farmer deciding to subscribe me to several thousand mailing lists. Inconvenient, but not really threatening. I haven't, for instance, been sent death threats. Nobody has threatened to rape me. And even if they had done, I wouldn't need to worry too much - there's a rather stronger track record of violent antifeminism being targeted at women than men.

I don't have to worry about this kind of thing. That means I don't get to tell other people that they should have expected it. Nor do I get to tell them that they should ignore it, or that if they don't call the police then they have no grounds to complain. And nor does anyone else.

The trolls don't win because someone decides that getting out of the tech business is more appealing than continuing to face abuse. The trolls win because we consider their behaviour acceptable and then blame the victim for inviting them in the first place. That needs to change.

[1] It was justifiably pointed out that saying all this while standing on stage next to a mostly naked guy wearing a loincloth with a raccoon tail covering his penis may have weakened my message somewhat.

Syndicated 2011-10-28 07:29:01 from Matthew Garrett

Management of UEFI secure booting

This post is made in a personal capacity. While it represents the outcome of multiple discussions on the topic with a variety of people, nothing written below should be construed as Red Hat's corporate position

The FSF have released a statement on UEFI secure boot. It explains the fundamental issue here, which isn't something as simple as "will OEMs let me install Linux". It's "Does the end user have the ability to manage their own keys".

Secure boot is a valuable feature. It does neatly deal with the growing threat of pre-OS malware. There is an incentive for it to be supported under Linux. I discussed the technical aspects of implementing support for it here - it's not a huge amount of work, and it is being worked on. So let's not worry about that side of things. The problem is with the keys.

Secure boot is implemented in a straightforward way. Each section of a PE-COFF file is added together and a hash taken[1]. This hash is signed with the private half of a signing key and embedded into the binary. When you attempt to execute a file under UEFI, the firmware attempts to decrypt the embedded hash. This requires that the firmware have either a copy of the public half of the signing key in its key database, or for there to be a chain of trust from the signing key to a key in its key database. Once it has the decrypted hash, it generates its own hash of the binary and compares them. If they match, the binary is executed.
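In pseudocode the flow looks something like this - every function name below is an invented stand-in rather than a real UEFI interface:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helpers, named purely for illustration */
bool decrypt_embedded_hash(const unsigned char *image, size_t len,
                           unsigned char *out);
void hash_pe_sections(const unsigned char *image, size_t len,
                      unsigned char *out);

bool verify_binary(const unsigned char *image, size_t len)
{
    unsigned char embedded[32], computed[32];

    /* Recover the signed hash using a key from the firmware's key
     * database, or one with a chain of trust leading to it */
    if (!decrypt_embedded_hash(image, len, embedded))
        return false;

    /* Hash each PE-COFF section and compare */
    hash_pe_sections(image, len, computed);

    return memcmp(embedded, computed, sizeof(computed)) == 0;
}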

What happens if it doesn't match? Per the UEFI specification, the firmware can then prompt the user and ask if they want to execute it anyway. If the user accepts then the hash of the binary is remembered[4] and can continue to be executed in future. This is similar to what you get when you visit a self-signed https site, or when you connect to an ssh server for the first time - the user must explicitly state that they trust the software that is being booted.

This is a little awkward for a few reasons. First, it means that any update to the bootloader would require the user to manually accept the binary again. Second, as we've seen with https, there's the risk of users just blindly accepting things. If it's acceptable to do this off internal media[5] then malware could just rely on some number of users saying "yes". And third, we've been told that Microsoft's Windows logo requirements forbid this option.

The first is problematic but manageable with appropriate documentation. The second is a real risk that makes this approach far less compelling. And the third is probably a showstopper.

So signed but untrusted binaries are probably out, on technological, UI and policy grounds. What else?

We can sign binaries with a trusted key. This gets us interoperability, but comes at a cost. The first issue here is that we still don't know what the process for getting a trusted key is going to be. One option is for something like a generic Linux key that's provided by all firmware vendors. As long as the OEM doesn't remove it, then that Linux key could be used to sign other keys that are used by individual Linux vendors. The main difficulty involved in this is having the Linux key be managed by an authority that OEMs can trust to only provide keys to trustworthy organisations and maintain control of the private half of the key. Remember, if any malware is signed with any of these keys, it'll boot on any machine carrying this key. It's a lot of trust. It's not obvious that there's an organisation with sufficient trust at the moment.

The second approach is for each Linux vendor to generate their own key and get every OEM to carry it. Let's rule this out on the basis that it's an approach that doesn't scale at all. Red Hat could probably get its key into a bunch of OEM systems. Canonical too. Oracle? Probably for the vendors they care about. It's likely to be much harder for everyone else, and you still end up with the risk that you've just bought a computer that'll boot Red Hat but not Ubuntu.

The third approach is for each Linux vendor to obtain a key from someone who's already trusted. The most obvious authority here is Microsoft. Ignoring the PR unpleasantness involved with Linux vendors having to do business with Microsoft to get signed, so far Microsoft haven't given us any indication that this will be possible.

So every approach that involves us signing with a trusted key comes with associated problems. There's also a common problem with all of them - you only get to play if you have your own trusted key. Corporate Linux vendors can probably find a way to manage this. Everyone else (volunteer or hobbyist vendors, end users who want to run their own OS, anyone who wants to use a modified bootloader or kernel) is shut out.

The obvious workaround is for them to just turn off secure boot. Ignoring the arguments over whether or not OEMs will provide that option[6], it benefits nobody for Linux installation to require disabling a legitimate security feature. The option also isn't likely to be in a standard location on all systems and may be named differently, so it's a support nightmare. Let's focus on trying to find a solution that provides the security and doesn't have obvious scaling issues.

There's a pretty simple one. If the problem centres around installing signing keys, why not have a standardised way of installing signing keys?

What we've proposed for this is an extension of the current UEFI standard for booting off removable media. There's a standardised location for putting the bootloader for USB sticks or optical media. For 32-bit PCs, it's EFI/BOOT/BOOTIA32.EFI. For 64-bit, it's EFI/BOOT/BOOTX64.EFI. Our proposal is to specify a path for key storage on the media. If the user attempts to boot off an item of removable media and finds a key file, we would like the firmware to prompt the user to install the key. Once the key is installed, the firmware will attempt to boot the bootloader on the removable media.
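In outline, the firmware-side logic we're proposing looks something like this - all the names are invented for illustration, and the actual key path is precisely what needs standardising:

#include <stdbool.h>

struct media;
struct key;

/* Hypothetical helpers, invented for this sketch */
struct key *find_key_file(struct media *m);
bool user_confirms_install(struct key *k);
void install_key_to_db(struct key *k);
bool execute_verified(struct media *m, const char *path);

bool boot_removable_media(struct media *m)
{
    struct key *k = find_key_file(m); /* at the standardised path */

    /* Only removable media triggers the prompt; malware that's
     * installed itself on the hard drive never gets this chance */
    if (k && user_confirms_install(k))
        install_key_to_db(k);

    /* The bootloader still has to validate against the (possibly
     * just updated) key database */
    return execute_verified(m, "EFI/BOOT/BOOTX64.EFI");
}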

How does this avoid the problems associated with prompting the user to boot untrusted binaries? The first is that there's no problem with updates. Because a key has been imported, as long as future bootloader updates are signed with the same key, they'll boot without prompting the user. The second is that it can be limited to removable media. If malware infects the system and installs itself onto the hard drive, the firmware won't prompt for key installation. It'll just refuse to boot and fall back on whatever recovery procedures the OEM has implemented. The only way it could get on the system would involve the user explicitly booting off removable media, which would be a significant hurdle. If you're at that stage then you can also convince the user to disable secure boot entirely.

This proposal has been brought up with the UEFI standards working group, and with the UEFI plugfest taking place next week we're hoping it'll be discussed. It seems to satisfy the dual requirements of maintaining system security and ensuring that users have the flexibility to choose what they run on their system. But as with all technical proposals, feedback is entirely welcome.

[1] Hilariously, Microsoft's signing tool gets this wrong by also adding the contents of gaps between sections in direct contravention of their own specification. This is fine for binaries generated by Microsoft's toolchain because they don't have any gaps, but since our binaries do contain gaps[2] and since the standard firmware implementation[3] does implement the specification correctly, any Linux-generated binaries signed with the Microsoft tool fail validation. Go team.
[2] Something that is, as far as we can tell, permitted by the PE-COFF specification
[3] Written by Intel, not Microsoft
[4] Note that this means that only the specific binary is considered trusted. No key is imported, and any other binaries signed with the same key will also need to be manually trusted in the same way.
[5] As would be required if you ever want to be able to update your bootloader
[6] And to forestall panic, at this point we expect that most OEMs will provide this option on most hardware, if only because customers will still want to boot Windows 7. We do know that some hardware will ship without it. It's not implausible that some OEMs will remove it in order to reduce their support burden. But for the foreseeable future, you'll be able to buy hardware that runs Linux.

Syndicated 2011-10-19 17:58:19 from Matthew Garrett
