Older blog entries for dkg (starting at number 63)

in memory of Aaron Swartz

I was upset to learn about Aaron Swartz's death last week. I continue to be upset about his loss, and about our loss. He didn't just show promise of great things to come -- he had already done more work for the public good than many of us will ever do. I'd only met him IRL a couple times, but (like many others) i had encountered him on the 'net in many places. He was a good person, someone i didn't need to always agree with to respect.

I read Russ Allbery's posts about Aaron and "slacktivism" with much appreciation. I had been ambivalent about signing the whitehouse.gov petition asking for the removal of the prosecutor for overreach, because I generally distrust the effectiveness of online petitions (and offline petitions, for that matter). But Russ's analysis convinced me to go ahead and sign it. The petition is concrete, clear (despite wanting a grammatical proofread), and actionable.

For people willing to go beyond petition signing to civil disobedience, The Aaron Swartz Memorial JSTOR Liberator is an option. It makes it straightforward to potentially violate the onerous JSTOR terms of service by re-publishing a public-domain article from JSTOR to archive.org, where it will be accessible to anyone directly.

As someone who builds and maintains information/communications infrastructure, i have very mixed feelings about most online civil disobedience, since it often takes the form of a Distributed Denial of Service (DDoS) attack of some sort. DDoS attacks of public services are notoriously difficult to defend against without having huge resources to throw at the problem, so encouraging participation in a DDoS often feels a little bit like handing out cans of gasoline when you know that everyone is living in a house of straw.

However, the JSTOR Liberator is not a DDoS at all -- it's simply a facilitation of people breaking the JSTOR Terms of Service (ToS), some of the same terms that Aaron was facing charges for violating. So it is a well-targeted way to demonstrate that the prosecutions were overreaching.

I wanted to take issue with one of Russ' statements, though. In his second post about the situation, Russ wrote:

Social activism and political disobedience are important and often valuable things, but performing your social activism using other people's stuff is just rude. I think it can be a forgivable rudeness; people can get caught up in the moment and not realize what they're doing. But it's still rude, and it's still not the way to go about civil disobedience.
While i generally agree with Russ' thoughtful consideration of consent, I have to take issue with this elevation of some sort of hyper-extended property right over the moral agency that drives civil disobedience.

To use someone else's property for the sake of a just cause without damaging the property or depriving the owner of its use is not "forgivable rudeness" -- it's forgivable, laudable even, because it is just. And the person using the property doesn't need to be "caught up in the moment and not realize what they're doing" for it to be acceptable.

Civil disobedience often involves putting some level of inconvenience or discomfort on other people, including innocent people. It might be the friends and family of the activist who have to deal with the jail time; it might be the drivers stuck in a traffic jam caused by a demonstration; it might be the people forced to shop elsewhere because the store's doors are barricaded by protestors.

All of these people could be troubled by the civil disobedience more than MIT's network users and admins were troubled by Aaron's protest, and that doesn't make the protests described worse or "not the way to go about civil disobedience." The trouble highlights a more significant injustice, and in its troubling way does what it can to help right it.

Aaron was a troublemaker, and a good one. He will be missed.

Tags: aaronsw

Syndicated 2013-01-15 08:35:00 from Weblogs for dkg

universally accessible storage for the wary user

A friend wrote me a simple question today. My response turned out to be longer than i expected, but i hope it's useful (and maybe other people will have better suggestions) so i thought i'd share it here too:

Angela Starita wrote:

I'd like to save my work in a location where I can access it from any computer. I'm wary of using the mechanisms provided by Google and Apple. Can you suggest another service?
Here's my reply:

I think you're right to be wary of the big cloud providers, who have a tendency to inspect your data to profile you, to participate in arbitrary surveillance regimes, and to try to sell your eyeballs to advertisers.

Caveat: You have to trust the client machine too

But it's also worth remembering that the network service provider is not the only source of risk. If you really mean "accessing your data from any computer", that means the computer you're using to access the data can do whatever it wants with it. That is, you need to trust both the operator of these "cloud" services, *and* the administrator/operating system of the client computer you're using to access your data. For example, if you log into any "secure" account from a terminal in a web café, that leaves you vulnerable to the admins of the web café (and, in the rather-common case of sloppily-administered web terminals, vulnerable to the previous user(s) of the terminal as well).

Option 0: Portable physical storage

One way to make your data accessible from "any computer" is to not rely on the network at all, but rather to carry a high-capacity MicroSD card (and a USB adapter) around with you. You'll probably want to format the card with a widely-understood filesystem like FAT32: alternatives like NTFS, HFS+, and ext4 are each understood by only some of the major operating systems, not all of them.

Almost every computer these days has either a microSD slot or a USB port, and this approach works even on computers that aren't connected to any network. It also means that you don't have to rely on someone else to manage servers that keep your data available all the time.

Note that going the microSD route doesn't remove the caveat about needing to trust the client workstation you're using, and it has another consideration:

You'd be responsible for your own backup in the case of hardware failure. You're responsible for your own backup in the case of online storage too, of course -- but the better online companies are probably better equipped than most of us to deal with hardware failure. OTOH, they're also susceptible to some data loss scenarios that we aren't as individual humans (e.g. the company might go bankrupt, or get bought by a competitor who wants to terminate the service, or have a malicious employee who decides to take revenge). Backup of a MicroSD card isn't particularly hard, though: just get a USB stick that's the same size, and regularly duplicate the contents of the MicroSD card to the USB stick.
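That duplication step can be a one-line copy. Here's a minimal sketch -- the mount points are hypothetical, so adjust them to wherever your system mounts the card and the stick:

```shell
# Minimal mirror sketch.  /media/microsd and /media/usbstick are
# hypothetical mount points; substitute your own.
CARD="${CARD:-/media/microsd}"
STICK="${STICK:-/media/usbstick}"

if [ -d "$CARD" ] && [ -d "$STICK" ]; then
    # cp -a copies recursively and preserves timestamps (FAT32 won't
    # keep unix permissions, which is fine for this purpose).
    cp -a "$CARD/." "$STICK/"
fi
```

A tool like rsync (with --archive --delete) would go one step further and also remove files from the stick that you've since deleted from the card, making the stick a true mirror.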

One last consideration is storage size -- MicroSD cards are currently limited to 32GB or 64GB. If you have significantly more data than that, this approach might not be possible, or you might need to switch to a USB hard disk, which would limit your ability to use the data on computers that don't have a USB port (such as some smartphones).

Option 1: Proprietary service providers

If you don't think this portable physical storage option is the right choice for you, a couple of proprietary service providers offer some flavor of "cloud" storage while claiming to not look at the contents of your data: Wuala and SpiderOak.

I'm not particularly happy with either of those, though, in part because the local client software they want you to run is proprietary, so there's no way to verify that they are actually unable to access the contents of your data. But i'd be a lot happier with either wuala or spideroak than i would be with google drive, dropbox, or iCloud.

Option 2: What i really want

I'm much more excited about the network-accessible, free-software, privacy-sensitive network-based storage tool known as git-annex assistant. The project is spearheaded by Joey Hess, who is one of the most skilled and thoughtful software developers i know of.

"assistant" (and git-annex, from which it derives) has the advantage of being pretty agnostic about the backend service (many plugins for many different cloud providers) and allows you to encrypt your data locally before sending it to the remote provider. This also means you can put your encrypted data in more than one provider, so that if one of the providers fails for some reason, you can be relatively sure that you have another copy available.

But "assistant" won't be ready for Windows or Android for several months (builds are available for Linux and Mac OS X now), so i don't know if it meets the criterion for "accessible from any computer". And, of course, even with the encryption capabilities, the old caveat about needing to trust the local client machine still applies.

Syndicated 2013-01-09 01:12:00 from Weblogs for dkg

libasound2-plugins is a resource hog!

I run mpd on debian on "igor", an NSLU2 -- a very low-power ~266MHz armel machine, with no FPU and a scanty 32MiB of RAM. This serves nicely to feed my stereo with music that is controllable from anywhere on my LAN. When playing music and talking to a single mpd client, the machine is about 50% idle.

However, during a recent upgrade, something wanted to pull in pulseaudio, which in turn wanted to pull in libasound2-plugins, and i distractedly (foolishly) let it. With that package installed, after an mpd restart, the CPU was completely thrashed (100% utilization) and music only played in stutters of 1 second interrupted by a couple seconds of silence. igor was unusable for its intended purpose.

Getting rid of pulseaudio was my first attempt to fix the stuttering, but the problem remained even after pulse was all gone and mpd was restarted.

Then i searched for the packages that had been freshly installed in the recent run:

grep ' install .* <none> ' /var/log/dpkg.log
and used that to pick out the offending package.

After purging libasound2-plugins and restarting mpd, igor is back in action.

Lesson learned: on low-overhead machines, don't allow apt to install recommends!

echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf
And it should go without saying, but sometimes i get sloppy: i need to pay closer attention during an "apt-get dist-upgrade".

Tags: alsa, apt, low-power, mpd

Syndicated 2012-12-21 01:50:00 from Weblogs for dkg

set default margins for OpenOffice as a sysadmin?

I'm maintaining a lab of debian squeeze machines that run OpenOffice.org (i'm considering upgrading to LibreOffice from squeeze-backports). I'd like to adjust the default page margins for all users of Writer. Most instructions i've found suggest ways to do this as a single user, but not how to make the change system-wide. I don't want to ask every user of these machines to do this (and i also don't want to tamper with each home directory directly -- that's not something i can maintain reliably).

Alas, i can find no documentation about how to change the default page margins system-wide for either Oo.o or LibreOffice. Surely this is something that can be done without a recompile. What am i missing?

Syndicated 2012-12-06 17:30:00 from Weblogs for dkg

Error messages are your friend (postgres is good)

Here is a bit of simple (yet subtly-flawed) sql, which produces different answers on different database engines:

0 dkg@pip:~$ cat test.sql
drop table if exists foo;
create table foo (x int, y int);
insert into foo VALUES (1,3);
insert into foo VALUES (1,5);
select y from foo group by x;
0 dkg@pip:~$ sqlite3 < test.sql
0 dkg@pip:~$ mysql -N dkg < test.sql
0 dkg@pip:~$ psql -qtA dkg < test.sql
ERROR:  column "foo.y" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: select y from foo group by x;
0 dkg@pip:~$ 
  • Clear error reporting and
  • an insistence on explicit disambiguation
are two of the many reasons postgresql is my database engine of choice.
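To illustrate the explicit disambiguation postgres is asking for, here are two rewrites of the query that every engine accepts. (They're run through sqlite3 below only because it needs no server; the SQL itself is portable.)

```shell
# Two unambiguous rewrites of the query above, run against an in-memory
# SQLite database purely for illustration.
sqlite3 :memory: <<'EOF'
create table foo (x int, y int);
insert into foo VALUES (1,3);
insert into foo VALUES (1,5);
-- either aggregate the ambiguous column explicitly ...
select x, max(y) from foo group by x;
-- ... or group by every selected column:
select x, y from foo group by x, y;
EOF
```

Once the query says which y it means, there's no answer left for the engine to invent.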

Tags: errors, postgresql, sql

Syndicated 2012-12-04 20:32:00 from Weblogs for dkg

more proprietary workarounds, sigh

In supporting a labful of Debian GNU/Linux machines with NFS-mounted home directories, i find some of my users demand a few proprietary programs. Adobe Flash is one of the most demanded, in particular because some popular streaming video services (like Amazon Prime and Hulu) seem to require it.

I'm not a fan of proprietary network services, but i'm happy to see that Amazon Prime takes Linux support seriously enough to direct users to Adobe's Linux Flash "Protected Content" troubleshooting page (Amazon Prime's rival NetFlix, by comparison, has an abysmal record on this platform). Of course, none of this will work on any platform but i386, since the flash player is proprietary software and its proprietors have shown no interest in porting it or letting others port it :(

One of the main issues with proprietary network services is their inclination to view their customers as adversaries, as evidenced by various DRM schemes. Two examples: the Flash Player's DRM module appears to break arbitrarily when you use one home directory across multiple machines, and it appears to depend on HAL, a subsystem that most of the common distributions are deprecating.

Why bother with this kind of gratuitous breakage? We know that video streaming can and does work fine without DRM. With modern browsers, freely-formatted video, and HTML5 video tags, video just works, and it works under the control of the user, on any platform. But Flash appears to throw up unnecessary hurdles, requiring not only proprietary code, but deprecated subsystems and fiddly workarounds to get it functional.

I'm reminded of Mako's concept of "antifeatures" -- how much engineering time and effort went into making this system actually be less stable and reliable than it would have otherwise been? How could that work have been better-directed?

Syndicated 2012-11-27 08:39:00 from Weblogs for dkg

KVM, Windows XP, and Stop Error Code 0x0000007B

i dislike having to run Windows as much as the next free software developer, but like many sysadmins, i am occasionally asked to maintain some legacy systems.

A nice way to keep these systems available (while not having to physically maintain them) is to put them in a virtual sandbox using a tool like kvm. While kvm makes it relatively straightforward to install WinXP from a CD (as long as you have the proper licensing key), it is more challenging to transition a pre-existing hardware Windows XP installation into a virtual instance, because Windows will only boot on the IDE chipset it remembers being installed to.

In particular, booting a disk image pulled from a soon-to-be-discarded physical disk can produce a Blue Screen of Death (BSOD) with the message:

Stop error code 0x0000007B

This seems like it's roughly the equivalent (in a standard debian GNU/Linux environment) of specifying MODULES=dep in /etc/initramfs-tools/initramfs.conf, and then trying to swap out all the hardware.

At first blush, Microsoft's knowledge base suggests doing an in-place upgrade or a full repartition and reinstall, which are both fairly drastic measures -- you might as well just start from scratch, which is exactly what you don't want to have to do for a nursed-along legacy system when no one who originally set it up is even with the organization any more.

Fortunately, a bit more digging in the Knowledge Base turned up an unsupported set of steps that appears to be the equivalent of setting MODULES=most (at least for the IDE chipsets). Running this on the old hardware before imaging the disk worked for me, though i did need to re-validate Windows XP after the reboot by typing in the long magic code again. i guess they're keying it to the hardware, which clearly changed in this instance.
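For reference, the Debian-side equivalent of that fix is a two-line configuration change -- a sketch assuming a stock Debian system, run as root:

```shell
# Configuration sketch (Debian, as root): include most drivers in the
# initramfs so the image stays bootable after a hardware/chipset change.
sed -i 's/^MODULES=.*/MODULES=most/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u    # rebuild the initramfs with the new setting
```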

Such silliness to spend time working around, really, when i'd rather be spending my time working on free software. :/

Syndicated 2012-05-29 23:49:00 from Weblogs for dkg

Compromising webapps: a case study

This paper should be required reading for anyone developing, deploying, or administering web applications.

Syndicated 2012-03-16 01:42:00 from Weblogs for dkg

Adobe leaves Linux AIR users vulnerable

A few months ago, Adobe announced a slew of vulnerabilities in its Flash Player, which is a critical component of Adobe AIR:

Adobe recommends users of Adobe AIR 2.6.19140 and earlier versions for Windows, Macintosh and Linux update to Adobe AIR

June 14, 2011 - Bulletin updated with information on Adobe AIR

However, looking at Adobe's instructions for installing AIR on "Linux" systems, we see that it is impossible for people running a free desktop OS to follow Adobe's own recommendations:
Beginning June 14 2011, Adobe AIR is no longer supported for desktop Linux distributions. Users can install and run AIR 2.6 and earlier applications but can't install or update to AIR 2.7. The last version to support desktop Linux distributions is AIR 2.6.
So on the exact same day, Adobe said "we recommend you upgrade, as the version you are using is vulnerable" and "we offer you no way to upgrade".

I'm left with the conclusion that Adobe's aggregate corporate message is "users of desktops based on free software should immediately uninstall AIR and stop using it".

If Adobe's software was free, and they had a community around it, they could turn over support to the community if they found it too burdensome. Instead, once again, users of proprietary tools on free systems get screwed by the proprietary vendor.

And they wonder why we tend to be less likely to install their tools?

Application developers should avoid targeting AIR as a platform if they want to reach everyone.

Tags: adobe, proprietary software, security

Syndicated 2011-11-21 17:41:00 from Weblogs for dkg

unreproducible buildd test suite failures

I've been getting strange failures on some architectures for xdotool. xdotool is a library and a command-line tool to allow you to inject events into an existing X11 session. I'm trying to understand (or even to reproduce) these errors so i can fix them.

The upstream project ships an extensive test suite; this test suite is failing on three architectures: ia64, armel, and mipsel; it passes fine on the other architectures (the hurd-i386 failure is unrelated, and i know how to fix it). The suite is failing on some "typing" tests -- some symbols "typed" are getting dropped on the failing architectures -- but it is not failing in a repeatable fashion. You can see two attempted armel builds failing with different outputs:

The first failure shows [ and occasionally < failing under a us,se keymap (that is, after the test-suite's invocation of setxkbmap -option grp:switch,grp:shifts_toggle us,se):

Running test_typing.rb
Setting up keymap on new server as us
Loaded suite test_typing
Finished in 19.554214 seconds.

  1) Failure:
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:78:in `test_us_se_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+]{}|;:\",./>?:\",./<>?">.

14 tests, 14 assertions, 1 failures, 0 errors
The second failure, on the same buildd, a day later, shows no failures under us,se, but several failures under other keymaps:
Running test_typing.rb
Setting up keymap on new server as us
Loaded suite test_typing
Finished in 16.784192 seconds.

  1) Failure:
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:118:in `test_de_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+]{}|;:\",./<>?:\",./<>?">.

  2) Failure:
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:108:in `test_se_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+[]{|;:\",./<>?:\",./<>?">.

  3) Failure:
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:88:in `test_se_us_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+{}|;:\",./>?:\",./<>?">.

14 tests, 14 assertions, 3 failures, 0 errors
I've tried to reproduce on a cowbuilder instance on my own armel machine; I could not reproduce the problem -- the test suites pass for me.

I've asked for help on the various buildd lists, and from upstream; no resolutions have been proposed yet. I'd be grateful for any suggestions or hints about things i might want to look for. It would be a win just to be able to reproduce the errors.

Of course, one approach would be to disable the test suite as part of the build process, but it has already helped catch a number of other issues with the upstream source. It would be a shame to lose those benefits.

Any thoughts?

Syndicated 2011-06-21 15:05:00 from Weblogs for dkg
