Older blog entries for mjg59 (starting at number 413)

Hacker News metrics (first rough approach)

I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative (meritocracy, technology will fix everything, regulation is the cancer killing agile startups) while discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:

  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page
then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)
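Expressed as code, the test is just a pairwise comparison over the snapshot data - something like this sketch (the struct fields are invented names for the scraped values, not the actual scraper's output):

/* Illustrative sketch of the penalty test described above. */
#include <stddef.h>

struct story {
        int age_minutes;    /* time since submission */
        int score;
        int rank;           /* 1 = top of the front page */
        int on_front_page;  /* non-zero if currently on the front page */
};

/* A story counts as penalised if some other story is older, has a lower
 * score, ranks better, and at least one of the two is on the front page. */
static int is_penalised(const struct story *s, const struct story *stories, size_t n)
{
        for (size_t i = 0; i < n; i++) {
                const struct story *o = &stories[i];

                if (o == s)
                        continue;
                if (s->age_minutes < o->age_minutes &&
                    s->score > o->score &&
                    s->rank > o->rank &&
                    (s->on_front_page || o->on_front_page))
                        return 1;
        }
        return 0;
}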

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:

  Keyword     Penalised   Unpenalised
  Women           13            4
  Harass           2            0
  Female           5            1
  Intel            2            3
  x86              3            4
  ARM              3            4
  Airplane         1            2
  Startup         46           26

A few things to note:
  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters.
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.

This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?


Syndicated 2014-10-30 15:19:57 from Matthew Garrett

On joining the FSF board

I joined the board of directors of the Free Software Foundation a couple of weeks ago. I've been travelling a bunch since then, so haven't really had time to write about it. But since I'm currently waiting for a test job to finish, why not?

It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.

Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.

However.

Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.

That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.

So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.

People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to ask to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.


Syndicated 2014-10-30 00:45:32 from Matthew Garrett

Linux Container Security

First, read these slides. Done? Good.

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].
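For a sense of what that mitigation looks like in practice, here's a minimal seccomp filter built with libseccomp. The allowlist is purely illustrative (a real container runtime's default profile is far larger), but it shows how the set of kernel entry points reachable by untrusted code gets cut down:

/* Minimal sketch of syscall filtering with libseccomp; the allowlist is
 * illustrative, not a realistic container profile.
 * Build with: gcc -o filter filter.c -lseccomp */
#include <errno.h>
#include <unistd.h>
#include <seccomp.h>

int main(void)
{
        /* Default action: refuse unlisted syscalls with EPERM rather than
         * letting them reach the kernel. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
        if (!ctx)
                return 1;

        /* Allow only the syscalls this workload actually needs. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) < 0) {
                seccomp_release(ctx);
                return 1;
        }
        seccomp_release(ctx);

        /* From here on, anything outside the allowlist fails with EPERM. */
        write(1, "filter loaded\n", 14);
        return 0;
}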

Does this mean containers provide reduced security? That's an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker's ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I'm not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument's stronger in the Virtual Private Server case, but there you're trading that off against losing some other security features - sure, you're deploying seccomp, but you can't use selinux inside your container, because the policy isn't per-namespace[2].

So that seems like kind of a wash - there's maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that's always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don't care. The aim isn't to get containers to the same level of security as hypervisors, it's to get them close enough that the difference doesn't matter.

I don't think we're there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it's not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches - these will tend to mitigate several (but not all) kernel vulnerabilities, but there's tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).

But this isn't intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn't matter. But who's going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:


  • Strong auditing and aggressive fuzzing of containers under realistic configurations
  • Support for meaningful nesting of Linux Security Modules in namespaces
  • Introspection of container state and (more difficult) the host OS itself in order to identify compromises

These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a "convenient, cheap, secure: choose two" tradeoff. That's not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you're not providing more emulated hardware than necessary to your guests. If you're using KVM, ensure that you're using sVirt (either selinux or apparmor backed) in order to restrict qemu's privileges.
[2] There's apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.


Syndicated 2014-10-23 07:47:36 from Matthew Garrett

Actions have consequences (or: why I'm not fixing Intel's bugs any more)

A lot of the kernel work I've ended up doing has involved dealing with bugs on Intel-based systems - figuring out interactions between their hardware and firmware, reverse engineering features that they refuse to document, improving their power management support, handling platform integration stuff for their GPUs and so on. Some of this I've been paid for, but a bunch has been unpaid work in my spare time[1].

Recently, as part of the anti-women #GamerGate campaign[2], a set of awful humans convinced Intel to terminate an advertising campaign because the site hosting the campaign had dared to suggest that the sexism present throughout the gaming industry might be a problem. Despite being awful humans, it is absolutely their right to request that a company choose to spend its money in a different way. And despite it being a dreadful decision, Intel is obviously entitled to spend their money as they wish. But I'm also free to spend my unpaid spare time as I wish, and I no longer wish to spend it doing unpaid work to enable an abhorrently-behaving company to sell more hardware. I won't be working on any Intel-specific bugs. I won't be reverse engineering any Intel-based features[3]. If the backlight on your laptop with an Intel GPU doesn't work, the number of fucks I'll be giving will fail to register on even the most sensitive measuring device.

On the plus side, this is probably going to significantly reduce my gin consumption.

[1] In the spirit of full disclosure: in some cases this has resulted in me being sent laptops in order to figure stuff out, and I was not always asked to return those laptops. My current laptop was purchased by me.

[2] I appreciate that there are some people involved in this campaign who earnestly believe that they are working to improve the state of professional ethics in games media. That is a worthy goal! But you're allying yourself to a cause that disproportionately attacks women while ignoring almost every other conflict of interest in the industry. If this is what you care about, find a new way to do it - and perhaps deal with the rather more obvious cases involving giant corporations, rather than obsessing over indie developers.

For avoidance of doubt, any comments arguing this point will be replaced with the phrase "Fart fart fart".

[3] Except for the purposes of finding entertaining security bugs


Syndicated 2014-10-02 16:40:29 from Matthew Garrett

My free software will respect users or it will be bullshit

I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


Syndicated 2014-09-24 06:59:09 from Matthew Garrett

ACPI, kernels and contracts with firmware

ACPI is a complicated specification - the latest version is 980 pages long. But that's because it's trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can't define the full behaviour of an ACPI system. It doesn't explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we're forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as "Windows" but refuses to identify itself as Linux. The short version is that "Linux" doesn't actually identify the behaviour of the kernel in a meaningful way. "Linux" doesn't tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. "Linux" doesn't tell you whether the OS knows how to deal with an HPET. "Linux" doesn't tell you whether the OS can reinitialise graphics hardware.

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it's also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that's intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it's compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.
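Concretely, that handshake happens via ACPI's _OSI method: the firmware asks about an interface string and branches on the answer. A rough sketch of the OS side, using the usual Windows interface strings (illustrative only, not the kernel's actual table or code):

/* Rough sketch of the OS side of the _OSI handshake; illustrative only. */
#include <stdbool.h>
#include <string.h>

/* Each string is a promise to behave like that version of Windows. */
static const char *supported_interfaces[] = {
        "Windows 2001",         /* Windows XP */
        "Windows 2009",         /* Windows 7 */
        "Windows 2012",         /* Windows 8: implies Windows 8 backlight behaviour */
};

/* Firmware evaluates _OSI("Windows 2012"); answering true commits the OS
 * to the corresponding behaviour, spec violations and all. */
static bool osi_query(const char *interface)
{
        for (size_t i = 0; i < sizeof(supported_interfaces) / sizeof(supported_interfaces[0]); i++) {
                if (strcmp(interface, supported_interfaces[i]) == 0)
                        return true;
        }
        return false;
}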

This sounds horrifying, but it's actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won't suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we're compatible with the Windows behaviour.

That's fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren't targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It's going to make assumptions about ordering. It's going to mishandle some cases. And it's the operating system's job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don't have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we've resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there's been no need to - we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren't telling us which behaviour they expect, so we're left with the prospect of adding machine-specific quirks. This isn't scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven't had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump it whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn't a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that's going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.


Syndicated 2014-09-16 22:51:31 from Matthew Garrett

Self-signing custom Android ROMs

The security model on the Google Nexus devices is pretty straightforward. The OS is (nominally) secure and prevents anything from accessing the raw MTD devices. The bootloader will only allow the user to write to partitions if it's unlocked. The recovery image will only permit you to install images that are signed with a trusted key. In combination, these facts mean that it's impossible for an attacker to modify the OS image without unlocking the bootloader[1], and unlocking the bootloader wipes all your data. You'll probably notice that.

The problem comes when you want to run something other than the stock Google images. Step number one for basically all of these is "Unlock your bootloader", which is fair enough. Step number two is "Install a new recovery image", which is also reasonable in that the key database is stored in the recovery image and so there's no way to update it without doing so. Except, unfortunately, basically every third party Android image is either unsigned or is signed with the (publicly available) Android test keys, so this new recovery image will flash anything. Feel free to relock your bootloader - the recovery image will still happily overwrite your OS.

This is unfortunate. Even if you've encrypted your phone, anyone with physical access can simply reboot into recovery and reflash /system with something that'll stash your encryption key and mail your data to the NSA. Surely there's a better way of doing this?

Thankfully, there is. Kind of. It's annoying and involves a bunch of manual processes and you'll need to re-sign every update yourself. But it is possible to configure Nexus devices in such a way that you retain the same level of security you had when you were using the Google keys without losing the freedom to run whatever you want. Here's how.

Note: This is not straightforward. If you're not an experienced developer, you shouldn't attempt this. I'm documenting this so people can create more user-friendly approaches.

First: Unlock your bootloader. /data will be wiped.
Second: Get a copy of the stock recovery.img for your device. You can get it from the factory images available here
Third: Grab mkbootimg from here and build it. Run unpackbootimg against recovery.img.
Fourth: Generate some keys. Get this script and run it.
Fifth: zcat recovery.img-ramdisk.gz | cpio -id to extract your recovery image ramdisk. Do this in an otherwise empty directory.
Sixth: Get DumpPublicKey.java from here and run it against the .x509.pem file generated in step 4. Replace /res/keys from the recovery image ramdisk with the output. Include the "v2" bit at the beginning.
Seventh: Repack the ramdisk image (find . | cpio -o -H newc | gzip > ../recovery.img-ramdisk.gz) and rebuild recovery.img with mkbootimg.
Eighth: Write the new recovery image to your device
Ninth: Get signapk from here and build it. Run it against the ROM you want to sign, using the keys you generated earlier. Make sure you use the -w option to sign the whole zip rather than signing individual files.
Tenth: Relock your bootloader
Eleventh: Boot into recovery mode and sideload your newly signed image.

At this point you'll want to set a reasonable security policy on the image (eg, if it grants root access, ensure that it requires a PIN or something), but otherwise you're set - the recovery image can't be overwritten without unlocking the bootloader and wiping all your data, and the recovery image will only write images that are signed with your key. For obvious reasons, keep the key safe.

This, well. It's obviously an excessively convoluted workflow. A *lot* of it could be avoided by providing a standardised mechanism for key management. One approach would be to add a new fastboot command for modifying the key database, and only permit this to be run when the bootloader is unlocked. The workflow would then be something like

  • Unlock bootloader
  • Generate keys
  • Install new key
  • Lock bootloader
  • Sign image
  • Install image
which seems more straightforward. Long term, individual projects could do the signing themselves and distribute their public keys, resulting in the install process becoming as easy as
  • Unlock bootloader
  • Install ROM key
  • Lock bootloader
  • Install ROM
which is actually easier than the current requirement to install an entirely new recovery image.

I'd actually previously criticised Google on the grounds that using custom keys wasn't possible on Android devices. I was wrong. It is, it's just that (as far as I can tell) nobody's actually documented it before. It's important that users not be forced into treating security and freedom as mutually exclusive, and it's great that Google have made that possible.

[1] This model fails if it's possible to gain root on the device. Thankfully this would never hold on what's that over there is that a distraction?


Syndicated 2014-07-04 22:10:02 from Matthew Garrett

The desktop and the developer

I was at the OpenStack Summit this week. The overwhelming majority of OpenStack deployments are Linux-based, yet the most popular laptop vendor (by a long way) at the conference was Apple. People are writing code with the intention of deploying it on Linux, but they're doing so under an entirely different OS.

But what's really interesting is the tools they're using to do so. When I looked over people's shoulders, I saw terminals and a web browser. They're not using Macs because their development tools require them, they're using Macs because of what else they get - an aesthetically pleasing OS, iTunes and what's easily the best trackpad hardware/driver combination on the market. These are people who work on the same laptop that they use at home. They'll use it when they're commuting, either for playing videos or for getting a head start so they can leave early. They use an Apple because they don't want to use different hardware for work and pleasure.

The developers I was surrounded by aren't the same developers you'd find at a technical conference 10 years ago. They grew up in an era that's become increasingly focused on user experience, and the idea of migrating to Linux because it's more tweakable is no longer appealing. People who spend their working day making use of free software (and in many cases even contributing or maintaining free software) won't run a free software OS because doing so would require them to compromise on things that they care about. Linux would give them the same terminals and web browser, but Linux's poorer multitouch handling is enough on its own to disrupt their workflow. Moving to Linux would slow them down.

But even if we fixed all those things, why would somebody migrate? The best we'd be offering is a comparable experience with the added freedom to modify more of their software. We can probably assume that this isn't a hugely compelling advantage, because otherwise it'd probably be enough to overcome some of the functional disparity. Perhaps we need to be looking at this differently.

When we've been talking about developer experience we've tended to talk about the experience of people who are writing software targeted at our desktops, not people who are incidentally using Linux to do their development. These people don't need better API documentation. They don't need a nicer IDE. They need a desktop environment that gives them access to the services that they use on a daily basis. Right now if someone opens an issue against one of their projects, they'll get an email. They'll have to click through that in order to get to a webpage that lets them indicate that they've accepted the bug. If they know that the bug's already fixed in another branch, they'll probably need to switch to github in order to find the commit that contains the bug number that fixed it, switch back to their issue tracker and then paste that in and mark it as a duplicate. It's tedious. It's annoying. It's distracting.

If the desktop had built-in awareness of the issue tracker then they could be presented with relevant information and options without having to click through two separate applications. If git commits were locally indexed, the developer could find the relevant commit without having to move back to a web browser or open a new terminal to find the local checkout. A simple task that currently involves multiple context switches could be made significantly faster.

That's a simple example. The problem goes deeper. The use of web services for managing various parts of the development process removes the need for companies to maintain their own infrastructure, but in the process it tends to force developers to bounce between multiple websites that have different UIs and no straightforward means of sharing information. Time is lost to this. It makes developers unhappy.

A combination of improved desktop polish and spending effort on optimising developer workflows would stand a real chance of luring these developers away from OS X with the promise that they'd spend less time fighting web browsers, leaving them more time to get on with development. It would also help differentiate Linux from proprietary alternatives - Apple and Microsoft may spend significant amounts of effort on improving developer tooling, but they're mostly doing so for developers who are targeting their platforms. A desktop environment that made it easier to perform generic development would be a unique selling point.

I spoke to various people about this during the Summit, and it was heartening to hear that there are people who are already thinking about this and hoping to improve things. I'm looking forward to that, but I also hope that there'll be wider interest in figuring out how we can make things easier for developers without compromising other users. It seems like an interesting challenge.


Syndicated 2014-05-19 02:53:16 from Matthew Garrett

Oracle continue to circumvent EXPORT_SYMBOL_GPL()

Oracle won their appeal regarding whether APIs are copyrightable. There'll be ongoing argument about whether Google's use of those APIs is fair use or not, and perhaps an appeal to the Supreme Court, but that's the new status quo. This will doubtless result in arguments over whether Oracle's implementation of Linux APIs in Solaris 10 was a violation of copyright or not (and presumably Google are currently checking whether they own any code that Oracle reimplemented), but that's not what I'm going to talk about today.

Oracle own some code called DTrace (Wikipedia has a good overview here - what it actually does isn't especially relevant) that was originally written as part of Solaris. When Solaris was released under the CDDL, so was DTrace. The CDDL is a file-level copyleft license with some restrictions not present in the GPL - as a result, combining GPLed code with CDDLed code will (in the absence of additional permission grants) result in a work that is under an inconsistent license and cannot legally be distributed.

Oracle wanted to make DTrace available for Linux as part of their Unbreakable Linux product. Integrating it directly into the kernel would obviously cause legal issues, so instead they implemented it as a kernel module. The copyright status of kernel modules is somewhat unclear. The GPL covers derivative works, but the definition of derivative works is a function of copyright law and judges. Making use of explicitly exported API may not be sufficient to constitute a derivative work - on the other hand, it might. This is largely untested in court. Oracle appear to believe that they're legitimate, and so have added just enough in-kernel code (and GPLed) to support DTrace, while keeping the CDDLed core of DTrace separate.

The kernel actually has two levels of exposed (non-userspace) API - those exported via EXPORT_SYMBOL() and those exported via EXPORT_SYMBOL_GPL(). Symbols exported via EXPORT_SYMBOL_GPL() may only be used by modules that claim to be GPLed, with the kernel refusing to load them otherwise. There is no technical limitation on the use of symbols exported via EXPORT_SYMBOL().

(Aside: this should not be interpreted as meaning that modules that only use symbols exported via EXPORT_SYMBOL() will not be considered derivative works. Anything exported via EXPORT_SYMBOL_GPL() is considered by the author to be so fundamental to the kernel that using it would be impossible without creating a derivative work. Using something exported via EXPORT_SYMBOL() may result in the creation of a derivative work. Consult lawyers before attempting to release a non-GPLed Linux kernel module)

DTrace integrates very tightly with the host kernel, and one of the things it needs access to is a high-resolution timer that is guaranteed to monotonically increase. Linux provides one in the form of ktime_get(). Unfortunately for Oracle, ktime_get() is only exported via EXPORT_SYMBOL_GPL(). Attempting to call it directly from the DTrace module would fail.

Oracle work around this in their (GPLed) kernel abstraction code. A function called dtrace_gethrtimer() simply returns the value of ktime_get(). dtrace_gethrtimer() is exported via EXPORT_SYMBOL() and therefore can be called from the DTrace module.
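The shape of that shim is roughly this (a sketch of the pattern, not Oracle's actual code):

/* Sketch of the re-export pattern described above; not Oracle's code. */
#include <linux/module.h>
#include <linux/ktime.h>

/* This lives in the GPLed glue module, so it may call GPL-only symbols. */
ktime_t dtrace_gethrtimer(void)
{
        return ktime_get();     /* ktime_get() is EXPORT_SYMBOL_GPL() */
}
/* ...but it is itself exported without the _GPL restriction, so the
 * CDDLed DTrace module can call it. */
EXPORT_SYMBOL(dtrace_gethrtimer);

MODULE_LICENSE("GPL");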

So, in the face of a technical mechanism designed to enforce the author's beliefs about the copyright status of callers of this function, Oracle deliberately circumvent that technical mechanism by simply re-exporting the same function under a new name. It should be emphasised that calling an EXPORT_SYMBOL_GPL() function does not inherently cause the caller to become a derivative work of the kernel - it only represents the original author's opinion of whether it would. You'd still need a court case to find out for sure. But if it turns out that the use of ktime_get() does cause a work to become derivative, Oracle would find it fairly difficult to argue that their infringement was accidental.

Of course, as copyright holders of DTrace, Oracle could solve the problem by dual-licensing DTrace under the GPL as well as the CDDL. The fact that they haven't implies that they think there's enough value in keeping it under an incompatible license to risk losing a copyright infringement suit. This might be just the kind of recklessness that Oracle accused Google of back in their last case.


Syndicated 2014-05-11 15:14:50 from Matthew Garrett

Home entertainment implementations are pretty appalling

I picked up a Panasonic BDT-230 a couple of months ago. Then I discovered that even though it appeared fairly straightforward to make it DVD region free (I have a large pile of PAL region 2 DVDs), the US models refuse to play back PAL content. We live in an era of software-defined functionality. While Panasonic could have designed a separate hardware SKU with a hard block on PAL output, that would seem like unnecessary expense. So, playing with the firmware seemed like a reasonable start.

Panasonic provide a nice download site for firmware updates, so I grabbed the most recent and set to work. Binwalk found a squashfs filesystem, which was a good sign. Less good was the block at the end of the firmware with "RSA" written around it in large letters. The simple approach of hacking the firmware, building a new image and flashing it to the device didn't appear likely to work.

Which left dealing with the installed software. The BDT-230 is based on a Mediatek chipset, and like most (all?) Mediatek systems runs a large binary called "bdpprog" that spawns about eleventy billion threads and does pretty much everything. Running strings over that showed, well, rather a lot, but most promisingly included a reference to "/mnt/sda1/vudu/vudu.sh". Other references to /mnt/sda1 made it pretty clear that it was the mount point for USB mass storage. There were a couple of other constraints that had to be satisfied, but soon attempting to run Vudu was actually setting a blank root password and launching telnetd.

/acfg/config_file_global.txt was the next stop. This is a set of tokens and values with useful looking names like "IDX_GB_PTT_COUNTRYCODE". I tried changing the values, but unfortunately made a poor guess - on next reboot, the player had reset itself to DVD region 5, Blu Ray region C and was talking to me in Russian. More inconveniently, the Vudu icon had vanished and I couldn't launch a shell any more.

But where there's one obvious mechanism for running arbitrary code, there's probably another. /usr/local/bin/browser.sh contained the wonderful line:

export LD_PRELOAD=/mnt/sda1/bbb/libSegFault.so
so it was just a matter of building a library that hooked open() and launched inetd, dropping it into the right place, and then opening the browser.
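The preload library itself only needs a few lines - something like the following reconstruction (the inetd path is a guess, and the real library presumably did a little more):

/* Sketch of the LD_PRELOAD shim: masquerade as libSegFault.so, intercept
 * open(), start a network listener once, then behave normally.
 * Build with: gcc -shared -fPIC -o libSegFault.so shim.c -ldl */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdlib.h>

int open(const char *path, int flags, ...)
{
        static int (*real_open)(const char *, int, mode_t);
        static int spawned;

        /* Look up the real open() so normal behaviour is preserved. */
        if (!real_open)
                real_open = (int (*)(const char *, int, mode_t))
                                dlsym(RTLD_NEXT, "open");

        if (!spawned) {
                spawned = 1;
                /* Path is a guess; run whatever gets you a shell. */
                system("/usr/sbin/inetd");
        }

        mode_t mode = 0;
        if (flags & O_CREAT) {
                va_list ap;
                va_start(ap, flags);
                mode = (mode_t)va_arg(ap, int);
                va_end(ap);
        }
        return real_open(path, flags, mode);
}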

This time I set the country code correctly, rebooted and now I can actually watch Monkey Dust again. Hurrah! But, at the same time, concerning. This software has been written without any concern for security, and it listens on the network by default. If it took me this little time to find two entirely independent ways to run arbitrary code on the device, it doesn't seem like a stretch to believe that there are probably other vulnerabilities that can be exploited with less need for physical access.

The depressing part of this is that there's no reason to believe that Panasonic are especially bad here - especially since a large number of vendors are shipping much the same Mediatek code, and so probably have similar (if not identical) issues. The future is made up of network-connected appliances that are using your electricity to mine somebody else's Dogecoin. Our nightmarish dystopia may be stranger than expected.


Syndicated 2014-04-21 00:37:43 from Matthew Garrett

