Older blog entries for dkg (starting at number 62)

universally accessible storage for the wary user

A friend wrote me a simple question today. My response turned out to be longer than i expected, but i hope it's useful (and maybe other people will have better suggestions) so i thought i'd share it here too:

Angela Starita wrote:

I'd like to save my work in a location where I can access it from any computer. I'm wary of using the mechanisms provided by Google and Apple. Can you suggest another service?
Here's my reply:

I think you're right to be wary of the big cloud providers, who have a tendency to inspect your data to profile you, to participate in arbitrary surveillance regimes, and to try to sell your eyeballs to advertisers.

Caveat: You have to trust the client machine too

But it's also worth remembering that the network service provider is not the only source of risk. If you really mean "accessing your data from any computer", that means the computer you're using to access the data can do whatever it wants with it. That is, you need to trust both the operator of these "cloud" services, *and* the administrator/operating system of the client computer you're using to access your data. For example, if you log into any "secure" account from a terminal in a web café, that leaves you vulnerable to the admins of the web café (and, in the rather-common case of sloppily-administered web terminals, vulnerable to the previous user(s) of the terminal as well).

Option 0: Portable physical storage

One way to make your data accessible from "any computer" is to not rely on the network at all, but rather to carry a high-capacity MicroSD card (and USB adapter) around with you. You'll probably want to format the card with a widely-understood filesystem like FAT32; NTFS, HFS+, and ext4 are each understood by only some of the major operating systems.

Here is some example hardware:

Almost every computer these days has either a microSD slot or a USB port, even computers that aren't connected to the network at all. This also means that you don't have to rely on someone else to manage servers that keep your data available all the time.

Note that going the microSD route doesn't remove the caveat about needing to trust the client workstation you're using, and it has another consideration:

You'd be responsible for your own backup in the case of hardware failure. You're responsible for your own backup in the case of online storage too, of course -- but the better online companies are probably better equipped than most of us to deal with hardware failure. OTOH, they're also susceptible to some data loss scenarios that we aren't as individual humans (e.g. the company might go bankrupt, or get bought by a competitor who wants to terminate the service, or have a malicious employee who decides to take revenge). Backup of a MicroSD card isn't particularly hard, though: just get a USB stick that's the same size, and regularly duplicate the contents of the MicroSD card to the USB stick.
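Mirroring the card to the backup stick can be a single rsync invocation (the mount points below are placeholders for wherever your system mounts the two devices):

# copy everything from the card to the stick, deleting anything stale on the stick
rsync -av --delete /media/microsd/ /media/backup-stick/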

One last consideration is storage size -- MicroSD cards are currently limited to 32GB or 64GB. If you have significantly more data than that, this approach might not be possible, or you might need to switch to a USB hard disk, which would limit your ability to use the data on computers that don't have a USB port (such as some smartphones).

Option 1: Proprietary service providers

If you don't think this portable physical storage option is the right choice for you, there are a couple of proprietary service providers -- Wuala and SpiderOak -- who offer some flavor of "cloud" storage while claiming not to look at the contents of your data.

I'm not particularly happy with either of those, though, in part because the local client software they want you to run is proprietary, so there's no way to verify that they are actually unable to access the contents of your data. But i'd be a lot happier with either wuala or spideroak than i would be with google drive, dropbox, or iCloud.

Option 2: What i really want

I'm much more excited about the network-accessible, free-software, privacy-sensitive network-based storage tool known as git-annex assistant. The project is spearheaded by Joey Hess, who is one of the most skilled and thoughtful software developers i know of.

"assistant" (and git-annex, from which it derives) has the advantage of being pretty agnostic about the backend service (many plugins for many different cloud providers) and allows you to encrypt your data locally before sending it to the remote provider. This also means you can put your encrypted data in more than one provider, so that if one of the providers fails for some reason, you can be relatively sure that you have another copy available.

But "assistant" won't be ready for Windows or Android for several months (builds are available for Linux and Mac OS X now), so i don't know if it meets the criterion for "accessible from any computer". And, of course, even with the encryption capabilities, the old caveat about needing to trust the local client machine still applies.
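To give a flavor of the encrypted, multi-provider setup described above, here's a rough sketch of how it can be wired up with git-annex itself (the remote names and rsync destinations are placeholders i made up, not endorsements of any particular provider):

git init ~/annex && cd ~/annex
git annex init "laptop"
# define two encrypted special remotes; content is gpg-encrypted locally
# before upload, so neither provider ever sees cleartext
git annex initremote providerA type=rsync rsyncurl=userA@hostA:annex encryption=shared
git annex initremote providerB type=rsync rsyncurl=userB@hostB:annex encryption=shared
# add files, then push encrypted copies to both providers
git annex add .
git annex copy --to providerA
git annex copy --to providerB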

Syndicated 2013-01-09 01:12:00 from Weblogs for dkg

libasound2-plugins is a resource hog!

I run mpd on debian on "igor", an NSLU2 -- a very low-power ~266MHz armel machine, with no FPU and a scanty 32MiB of RAM. This serves nicely to feed my stereo with music that is controllable from anywhere on my LAN. When playing music and talking to a single mpd client, the machine is about 50% idle.
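For a sense of what that looks like from a client machine, any mpd client on the LAN can drive playback; with the standard mpc client it's roughly:

# point mpc at igor, check the current state, and toggle play/pause
MPD_HOST=igor mpc status
MPD_HOST=igor mpc toggle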

However, during a recent upgrade, something wanted to pull in pulseaudio, which in turn wanted to pull in libasound2-plugins, and i distractedly (foolishly) let it. With that package installed, after an mpd restart, the CPU was completely thrashed (100% utilization) and music only played in stutters of 1 second interrupted by a couple seconds of silence. igor was unusable for its intended purpose.

Getting rid of pulseaudio was my first attempt to fix the stuttering, but the problem remained even after pulse was all gone and mpd was restarted.

Then i did a little search of which packages had been freshly installed in the recent run:

grep ' install .* <none> ' /var/log/dpkg.log
and used that to pick out the offending package.

After purging libasound2-plugins and restarting mpd, the igor is back in action.
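In concrete terms, that amounted to roughly the following (the init script path is the usual Debian one):

apt-get purge libasound2-plugins
/etc/init.d/mpd restart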

Lesson learned: on low-overhead machines, don't allow apt to install recommends!

echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf
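The same thing can also be requested for a single run, without a permanent configuration change, via apt-get's --no-install-recommends option:

apt-get --no-install-recommends dist-upgrade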
And it should go without saying, but sometimes i get sloppy: i need to pay closer attention during an "apt-get dist-upgrade".

Tags: alsa, apt, low-power, mpd

Syndicated 2012-12-21 01:50:00 from Weblogs for dkg

set default margins for OpenOffice as a sysadmin?

I'm maintaining a lab of debian squeeze machines that run OpenOffice.org (i'm considering upgrading to LibreOffice from squeeze-backports). I'd like to adjust the default page margins for all users of Writer. Most instructions i've found suggest ways to do this as a single user, but not how to make the change system-wide. I don't want to ask every user of these machines to do this (and i also don't want to tamper with each home directory directly -- that's not something i can maintain reliably).

Alas, i can find no documentation about how to change the default page margins system-wide for either Oo.o or LibreOffice. Surely this is something that can be done without a recompile. What am i missing?

Syndicated 2012-12-06 17:30:00 from Weblogs for dkg

Error messages are your friend (postgres is good)

Here is a bit of simple (yet subtly-flawed) sql, which produces different answers on different database engines:

0 dkg@pip:~$ cat test.sql
drop table if exists foo;
create table foo (x int, y int);
insert into foo VALUES (1,3);
insert into foo VALUES (1,5);
select y from foo group by x;
0 dkg@pip:~$ sqlite3 < test.sql
5
0 dkg@pip:~$ mysql -N dkg < test.sql
3
0 dkg@pip:~$ psql -qtA dkg < test.sql
ERROR:  column "foo.y" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: select y from foo group by x;
               ^
0 dkg@pip:~$ 
  • Clear error reporting and
  • an insistence on explicit disambiguation
are two of the many reasons postgresql is my database engine of choice.
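For completeness, one way to make that query unambiguous (so it behaves the same everywhere) is to name an aggregate explicitly; picking max() is just one possible reading of the intent:

echo 'select x, max(y) from foo group by x;' | psql -qtA dkg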

Tags: errors, postgresql, sql

Syndicated 2012-12-04 20:32:00 from Weblogs for dkg

more proprietary workarounds, sigh

In supporting a labful of Debian GNU/Linux machines with NFS-mounted home directories, i find some of my users demand a few proprietary programs. Adobe Flash is one of the most demanded, in particular because some popular streaming video services (like Amazon Prime and Hulu) seem to require it.

I'm not a fan of proprietary network services, but i'm happy to see that Amazon Prime takes Linux support seriously enough to direct users to Adobe's Linux Flash "Protected Content" troubleshooting page (Amazon Prime's rival NetFlix, by comparison, has an abysmal record on this platform). Of course, none of this will work on any platform but i386, since the flash player is proprietary software and its proprietors have shown no interest in porting it or letting others port it :(

One of the main issues with proprietary network services is their inclination to view their customers as adversaries, as evidenced by various DRM schemes. Two examples: the Flash Player's DRM module appears to break arbitrarily when you use one home directory across multiple machines, and it also appears to depend on HAL, which most of the common distributions are deprecating.

Why bother with this kind of gratuitous breakage? We know that video streaming can and does work fine without DRM. With modern browsers, freely-formatted video, and HTML5 video tags, video just works, and it works under the control of the user, on any platform. But Flash appears to throw up unnecessary hurdles, requiring not only proprietary code, but deprecated subsystems and fiddly workarounds to get it functional.

I'm reminded of Mako's concept of "antifeatures" -- how much engineering time and effort went into making this system actually be less stable and reliable than it would have otherwise been? How could that work have been better-directed?

Syndicated 2012-11-27 08:39:00 from Weblogs for dkg

KVM, Windows XP, and Stop Error Code 0x0000007B

i dislike having to run Windows as much as the next free software developer, but like many sysadmins, i am occasionally asked to maintain some legacy systems.

A nice way to keep these systems available (while not having to physically maintain them) is to put them in a virtual sandbox using a tool like kvm. While kvm makes it relatively straightforward to install WinXP from a CD (as long as you have the proper licensing key), it is more challenging to transition a pre-existing hardware Windows XP installation into a virtual instance, because Windows will only boot from an IDE chipset whose driver was set up when it was originally installed.
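The mechanics of the migration itself are simple enough -- something like the following sketch, where the device and image names are placeholders:

# image the old physical disk, then boot the image under kvm with an emulated IDE disk
dd if=/dev/sdb of=winxp.img bs=4M conv=noerror,sync
kvm -m 1024 -drive file=winxp.img,format=raw,if=ide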

In particular, booting a disk image pulled from a soon-to-be-discarded physical disk can produce a Blue Screen of Death (BSOD) with the message:

Stop error code 0x0000007B
or
(INACCESSIBLE_BOOT_DEVICE)

This seems like it's roughly the equivalent (in a standard debian GNU/Linux environment) of specifying MODULES=dep in /etc/initramfs-tools/initramfs.conf, and then trying to swap out all the hardware.
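On the Debian side, the way out of that analogous corner would be to make the initramfs generic again before swapping the hardware, for example:

# include drivers for "most" hardware in the initramfs, then rebuild it
sed -i 's/^MODULES=.*/MODULES=most/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u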

At first blush, Microsoft's knowledge base suggests doing an in-place upgrade or a full repartition and reinstall, both fairly drastic measures -- you might as well just start from scratch, which is exactly what you don't want to have to do for a nursed-along legacy system whose original installers are no longer with the organization.

Fortunately, a bit more digging in the Knowledge Base turned up an unsupported set of steps that appears to be the equivalent of setting MODULES=most (at least for the IDE chipsets). Running this on the old hardware before imaging the disk worked for me, though i did need to re-validate Windows XP after the reboot by typing in the long magic code again. i guess they're keying it to the hardware, which clearly changed in this instance.

Such silliness to spend time working around, really, when i'd rather be spending my time working on free software. :/

Syndicated 2012-05-29 23:49:00 from Weblogs for dkg

Compromising webapps: a case study

This paper should be required reading for anyone developing, deploying, or administering web applications.

Syndicated 2012-03-16 01:42:00 from Weblogs for dkg

Adobe leaves Linux AIR users vulnerable

A few months ago, Adobe announced a slew of vulnerabilities in its Flash Player, which is a critical component of Adobe AIR:

Adobe recommends users of Adobe AIR 2.6.19140 and earlier versions for Windows, Macintosh and Linux update to Adobe AIR 2.7.0.1948.

[...]

June 14, 2011 - Bulletin updated with information on Adobe AIR

However, looking at Adobe's instructions for installing AIR on "Linux" systems, we see that it is impossible for people running a free desktop OS to follow Adobe's own recommendations:
Beginning June 14 2011, Adobe AIR is no longer supported for desktop Linux distributions. Users can install and run AIR 2.6 and earlier applications but can't install or update to AIR 2.7. The last version to support desktop Linux distributions is AIR 2.6.
So on the exact same day, Adobe said "we recommend you upgrade, as the version you are using is vulnerable" and "we offer you no way to upgrade".

I'm left with the conclusion that Adobe's aggregate corporate message is "users of desktops based on free software should immediately uninstall AIR and stop using it".

If Adobe's software were free, and they had a community around it, they could turn over support to the community if they found it too burdensome. Instead, once again, users of proprietary tools on free systems get screwed by the proprietary vendor.

And they wonder why we tend to be less likely to install their tools?

Application developers should avoid targeting AIR as a platform if they want to reach everyone.

Tags: adobe, proprietary software, security

Syndicated 2011-11-21 17:41:00 from Weblogs for dkg

unreproducible buildd test suite failures

I've been getting strange failures on some architectures for xdotool. xdotool is a library and a command-line tool to allow you to inject events into an existing X11 session. I'm trying to understand (or even to reproduce) these errors so i can fix them.

The upstream project ships an extensive test suite; this test suite is failing on three architectures: ia64, armel, and mipsel; it passes fine on the other architectures (the hurd-i386 failure is unrelated, and i know how to fix it). The suite is failing on some "typing" tests -- some symbols "typed" are getting dropped on the failing architectures -- but it is not failing in a repeatable fashion. You can see two attempted armel builds failing with different outputs:

The first failure shows [ and occasionally < failing under a us,se keymap (that is, after the test-suite's invocation of setxkbmap -option grp:switch,grp:shifts_toggle us,se):

Running test_typing.rb
Setting up keymap on new server as us
Loaded suite test_typing
Started
...........F..
Finished in 19.554214 seconds.

  1) Failure:
test_us_se_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:78:in `test_us_se_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+]{}|;:\",./>?:\",./<>?">.

14 tests, 14 assertions, 1 failures, 0 errors
The second failure, on the same buildd, a day later, shows no failures under us,se, but several failures under other keymaps:
Running test_typing.rb
Setting up keymap on new server as us
Loaded suite test_typing
Started
..F.F.F.......
Finished in 16.784192 seconds.

  1) Failure:
test_de_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:118:in `test_de_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+]{}|;:\",./<>?:\",./<>?">.

  2) Failure:
test_se_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:108:in `test_se_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+[]{|;:\",./<>?:\",./<>?">.

  3) Failure:
test_se_us_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in `_test_typing'
     test_typing.rb:88:in `test_se_us_symbol_typing']:
<"`12345678990-=~ !@\#$%^&*()_+[]{}|;:\",./<>?:\",./<>?"> expected but was
<"`12345678990-=~ !@\#$%^&*()_+{}|;:\",./>?:\",./<>?">.

14 tests, 14 assertions, 3 failures, 0 errors
I've tried to reproduce on a cowbuilder instance on my own armel machine; I could not reproduce the problem -- the test suites pass for me.
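The reproduction attempt was roughly along these lines -- a sketch, with the chroot path and distribution chosen for illustration:

# set up a sid cowbuilder chroot, then rebuild the package (test suite and all) inside it
sudo cowbuilder --create --distribution sid --basepath /var/cache/pbuilder/sid.cow
apt-get source xdotool
sudo cowbuilder --build xdotool_*.dsc --basepath /var/cache/pbuilder/sid.cow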

I've asked for help on the various buildd lists, and from upstream; no resolutions have been proposed yet. I'd be grateful for any suggestions or hints about things i might want to look for. It would be a win if i could just reproduce the errors.

Of course, one approach would be to disable the test suite as part of the build process, but it has already helped catch a number of other issues with the upstream source. It would be a shame to lose those benefits.

Any thoughts?

Syndicated 2011-06-21 15:05:00 from Weblogs for dkg

the bleeding edge: btrfs (poor performance, alas)

I'm playing with btrfs to get a feel for what's coming up in linux filesystems. To be daring, i've configured a test machine using only btrfs for its on-disk filesystems. I really like some of the ideas put forward in the btrfs design. (i'm aware that btrfs is considered experimental-only at this point).

I'm happy to report that despite several weeks of regular upgrade/churn from unstable and experimental, i have yet to see any data loss or other serious forms of failure.

Unfortunately, i'm not impressed with the performance. The machine feels sluggish in this configuration, compared to how i remember it running with previous non-btrfs installations. So i ran some benchmarks. The results don't look good for btrfs in its present incarnation.

The simplified test system i'm running has Linux kernel 2.6.39-rc6-686-pae (from experimental), 1GiB of RAM (no swap), and a single 2GHz P4 CPU. It has one parallel ATA hard disk (WDC WD400EB-00CPF0), with two primary partitions (one btrfs and one ext3). The root filesystem is btrfs; the ext3 filesystem is mounted at /mnt.

I used bonnie++ to benchmark the ext3 filesystem against the btrfs filesystem as a non-privileged user.
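The invocations were something like the following (a sketch; the target directories are examples, and bonnie++'s default file size of twice RAM was left alone):

# one run against a directory on the ext3 partition, one against the btrfs root filesystem
bonnie++ -d /mnt/bonnie-test -m loki | tee bonnie-stats.ext3
bonnie++ -d ~/bonnie-test -m loki | tee bonnie-stats.btrfs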

Here are the results on the test ext3 filesystem:

consoleuser@loki:~$ cat bonnie-stats.ext3 
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M   331  98 23464  11 10988   4  1174  85 39629   6 130.4   5
Latency             92041us    1128ms    1835ms     166ms     308ms    6549ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9964  26 +++++ +++ 13035  26 11089  27 +++++ +++ 11888  24
Latency             17882us    1418us    1929us     489us      51us     650us
1.96,1.96,loki,1,1305039600,2264M,,331,98,23464,11,10988,4,1174,85,39629,6,130.4,5,16,,,,,9964,26,+++++,+++,13035,26,11089,27,+++++,+++,11888,24,92041us,1128ms,1835ms,166ms,308ms,6549ms,17882us,1418us,1929us,489us,51us,650us
consoleuser@loki:~$ 
And here are the results for btrfs (on the main filesystem):
consoleuser@loki:~$ cat bonnie-stats.btrfs 
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M    43  99 22682  17 10356   6  1038  79 28796   6  86.8  99
Latency               293ms     727ms    1222ms   46541us     504ms   13094ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1623  33 +++++ +++  2182  57  1974  27 +++++ +++  1907  44
Latency             78474us    6839us    8791us    1746us      66us   64034us
1.96,1.96,loki,1,1305040411,2264M,,43,99,22682,17,10356,6,1038,79,28796,6,86.8,99,16,,,,,1623,33,+++++,+++,2182,57,1974,27,+++++,+++,1907,44,293ms,727ms,1222ms,46541us,504ms,13094ms,78474us,6839us,8791us,1746us,66us,64034us
consoleuser@loki:~$ 
As you can see, btrfs is significantly slower in several categories:
  • writing character-at-a-time is *much* slower: 43K/sec vs. 331K/sec
  • reading block-at-a-time is slower: 28796K/sec vs. 39629K/sec
  • all forms of file creation and deletion are nearly an order of magnitude slower
  • Random seeks are almost as fast, but they swamp the CPU
I'm hoping that i just configured the test wrong somehow, or that i've done something grossly unfair in the system setup and configuration. (or maybe i'm mis-reading the bonnie++ output?) Maybe someone can point out my mistake, or give me pointers for what to do to try to speed up btrfs.

I like the sound of the features we will eventually get from btrfs, but these performance figures seem like a pretty rough tradeoff.

Tags: benchmarks, bonnie, btrfs

Syndicated 2011-05-10 18:17:00 from Weblogs for dkg

