Older blog entries for raph (starting at number 199)

Printing

I had lunch with, then spent the afternoon with Brian Stell of Mozilla. He is on a mission to make Mozilla printing work well on Linux, especially the i18n bits. Already, he's provided us with fixes and test cases for incremental Type 11 fonts.

There are a lot of tricky issues, and the GNU/Linux community has historically done a poor job dealing with printing. It's an area where cooperation between many diverse pieces is needed, but there's nothing that's really motivating a solution except people's frustration. Brian is trying to solve the problem the Right Way for Mozilla, with the hope that others might follow.

Among other things, Brian is trying to figure out which versions of PostScript and PDF to target. For compatibility with existing printers, you want to generate lowest common denominator PostScript. But there are also advantages to generating more recent versions. For example, you can drop alpha-transparent PNG's into a PDF 1.4 file and they'll render correctly. That's not possible with any version of PostScript, or with earlier versions of PDF. On the font side, many (most?) existing printers can't handle incremental Type 11 fonts, even though they're part of PostScript LanguageLevel 3 and were supported by many Adobe interpreters even before that (versions 2015 and later).

A good solution would be to generate the latest stuff, and have converters that downshift it so it works with older printers. Alas, no good solution exists now. Ghostscript can rasterize without a problem, but sending huge bitmap rasters to PostScript printers is slow and generally not a good idea. pdftops can preserve the higher level structure (fonts, Beziers, etc.), but is limited in many other ways, among other reasons because it doesn't contain an imaging engine. So, at least for the time being, it seems like the best compromise is to have a codebase that generates various levels of PostScript and PDF.

A chronic problem for GNU/Linux is the lack of a mechanism for users to install fonts, and for applications to find them. At least five major application platforms need fonts: Gnome, KDE, Mozilla, OpenOffice, and Java. You also have a number of important traditional (I don't really want to say "legacy") applications that use fonts: TeX, troff, and Ghostscript among them. Plenty of other applications need fonts, including all the vector graphics editors, Gimp, and so on. I suppose I should mention X font servers too.

Most applications that need fonts have a "fontmap" file of some kind. This file is essentially an associative array from font name to pathname in the local file system where the font can be found. Actually, you want a lot more information than just that, including encoding, glyph coverage, and enough metadata at least to group the fonts into families. In some cases, you'll want language tags, in particular for CJK. Unicode has a unified CJK area, so a Japanese, a Simplified Chinese and a Traditional Chinese font can all cover the same code point, but actually represent different glyphs. If you're browsing a Web page that has correct language tagging, ideally you want the right font to show up. Unfortunately, people don't generally do language tagging. In fact, this is one area where you get more useful information out of non-Unicode charsets than from the Unicode way (a big part of the reason why CJK people hate Unicode, I think). If the document (or font) is in Shift-JIS encoding, then it's a very good bet that it's Japanese and not Chinese.
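
To make this concrete, here is a hypothetical sketch, in Python syntax, of the kind of record a fontmap entry needs to carry. The field names and values are invented for illustration; no real fontmap format uses exactly this layout.

    # Invented-for-illustration fontmap entry: name -> path plus the extra
    # metadata (encoding, coverage, family, language) discussed above.
    entry = {
        'name': 'Kochi-Mincho',
        'path': '/usr/share/fonts/truetype/kochi-mincho.ttf',
        'format': 'TrueType',
        'family': 'Kochi Mincho',
        'encoding': 'Unicode',
        'coverage': ['latin', 'kana', 'cjk-ideographs'],
        'language': 'ja',        # disambiguates unified CJK code points
        'metrics': 'embedded',   # or the path to an .afm file for a Type1 font
    }

    fontmap = {entry['name']: entry}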

This is why, for example, the gs-cjk team created a new fontmap format (CIDFnmap) for Ghostscript. In addition to the info in the classic Ghostscript Fontmap, the CIDFnmap contains a TTC font index (for .ttc font files which contain multiple fonts), and a mapping from the character set encoding to CID's, for example /Adobe-GB1 for Simplified Chinese or /Adobe-CNS1 for Traditional Chinese.

To make matters even more complicated, as of Ghostscript 7.20 we have yet another fontmap format, the xlatmap. Its goals are similar to the CIDFnmap's, but with different engineering tradeoffs. One of my tasks is to figure out what to do to unify these two branches.

In any case, there are really three places where you need to access fonts, and hence fontmaps. First, the app needs to be able to choose a font and format text in that font. That latter task requires font metrics information, including kerning and ligature information for Latin fonts, and potentially very sophisticated rules for combining characters in complex scripts. These rules are sometimes embedded in the font, particularly in OpenType fonts, but more often not in older formats such as the Type1 family. Interestingly, you don't need the glyphs for the formatting step.

The second place where you need the font is to display it on the screen. Historically, these fonts have lived on the X server. But the new way is for the client to manage the font. The XRender extension supports this approach well, as it supports server-side compositing of glyphs supplied by the client. Even without XRender, it makes sense to do the glyph rendering and compositing client-side, and just send it to the X server as an image. Maybe 15 years ago, the performance tradeoff would not have been acceptable, but fortunately CPU power has increased a bit since then.

The Xft library is one possible way to do font rendering, but I'm not very impressed so far. Among other things, it doesn't do subpixel positioning, so rendering quality will resemble Windows 95 rather than OS X.

The third place where you need the font is when you're printing the document. In most cases today, it's a good tradeoff for the app to embed the font in the file you're sending to the printer. That way, you don't have to worry about whether the printer has the font, or has a different, non-matching version. If you do that, then Ghostscript doesn't actually need to rely on fontmaps at all; it just gets the fonts from the file. However, a lot of people don't embed fonts, so in that case, Ghostscript has to fudge.

So how do you install a font into all these fontmaps? Currently, it's a mess. There are various hackish scripts that try to update multiple fontmaps, but nothing systematic.

One way out of the mess would be to have a standard fontmap file (or possibly API), and have all interested apps check that file. Keith Packard's fontconfig package is an attempt to design such a file format, but so far I'm not happy with it. For one, it's not telling me all the information I need to do substitution well (the main motivation in Ghostscript for the CIDFnmap and xlatmap file formats). Another matter of taste is that it's an XML format file, so we'd need to link in an XML parser just to figure out what fonts are installed. I'd really prefer not to have to do this.

I realize that the Right Thing is to provide enough feedback to KeithP so that he can upgrade the file format, and we can happily use it in Ghostscript. But right now, I don't feel up to it. The issues are complex and I barely feel I understand them myself. Also, I'm concerned that even fixing fontconfig for Ghostscript still won't solve the problems for other apps. After all, Ghostscript doesn't really need the font metrics, just the glyphs. Even thoroughly obsolete apps like troff need to find .afm metrics files for Type1 fonts (to say nothing of extracting metrics from TrueType fonts). GUI apps on GNU/Linux haven't really caught up to the sophistication of mid-'80s desktop publishing apps on the Mac. As far as I can tell, fontconfig currently has no good story for metrics or language info.

What would really impress me in a standard for fontmap files is a working patch to get TeX to use the fonts. But perhaps this is an overly ambitious goal.

In any case, I really enjoyed meeting Brian in person today, and commend him for having the courage to attack this huge problem starting from the Mozilla corner.

31 May 2002 (updated 1 Jun 2002 at 19:09 UTC) »
Farmer's market

We took the kids to the Benicia Farmer's Market for the first time of the season. Max loved the pony ride and petting zoo. Alan has become good friends with Lacey, the 9yo daughter of the family that brings the animals, and they had a great time running around together. Alan also ran into about 4 kids he knows from school. It was pleasant weather, the sunset was beautiful, and it was altogether a very nice evening.

More on grain size

See David McCusker's response to yesterday's posting on grain size.

A couple of people pointed out that my estimate for modem latency was too low. Assume 150ms and 4K/s; that gives you a product of 0.6K. That actually makes the clustering around 4K even better.

I've been thinking some more about the relatively large "optimum" grain size for disks. David has obviously put a lot of thought into the problem. What I'll present tonight is a simple technique, optimized for ease of analysis. I'm sure that better techniques are possible, and IronDoc may already implement them. But this is a lecture, not a product :)

Assume 4K blocks and 1M grains, so a grain contains 256 blocks. Coincidentally, if each child reference takes 16 bytes, 256 is also the maximum number of children each internal node can have. The minimum number is half the maximum. That's a big part of what makes it a btree.

In a large tree, each subtree consisting of a node one level up from the bottom, together with its immediate children (all leaf nodes), gets its own grain. Grains are fixed size. The file consists of a sequence of grains; they're not interleaved at all. Maybe the file is a little shorter because the last grain isn't completely full.

Now we can analyze the utilization. For files smaller than one grain (1M), utilization is the same as for classical btrees: 1/2 in the worst case. For larger files, the utilization of blocks within a grain is also 1/2 in the worst case, so the total worst-case utilization is 1/4.

A grain can contain free blocks, but the file need not store any free grains. If you want to delete a grain, you copy the last grain in the file over it, then truncate the file. As a result, the order of grains within the file can become totally random, but you really don't care. The grain is large enough that one disk seek per grain isn't a killer.
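
Here's a minimal sketch of that deletion trick in Python. It assumes every grain in the file is full-sized, and it doesn't show the bookkeeping a real implementation would need (updating the parent's reference to the grain that moved).

    import os

    GRAIN_SIZE = 1 << 20  # 1M grains

    def delete_grain(path, grain_index):
        """Delete a grain by copying the file's last grain over it, then
        truncating. Grain order becomes arbitrary, which is fine: the grain
        is big enough that one seek per grain isn't a killer."""
        with open(path, 'r+b') as f:
            size = os.fstat(f.fileno()).st_size
            last = size // GRAIN_SIZE - 1       # index of the last grain
            if grain_index != last:
                f.seek(last * GRAIN_SIZE)
                data = f.read(GRAIN_SIZE)
                f.seek(grain_index * GRAIN_SIZE)
                f.write(data)
            f.truncate(size - GRAIN_SIZE)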

There are tricks you can do to improve utilization. One special case bears mentioning - the append-only scenario. In the classical btree algorithm, you get worst-case utilization. When you overflow a block, you split it in half. The first half never gets touched again, so the utilization remains 1/2. It's pretty easy to tweak it so that in this case, utilization is near unity. It's a common enough case that I think this tweak is sufficient. Remember, 1/4 is only worst case, and disk is really cheap.

As an order-of-magnitude ballpark, reading or writing an entire grain should take about 40ms. Reading or writing an individual block is around 20ms. Sometimes you'll need to do an operation on an entire grain, for example when splitting or merging a subtree 1 level up from the bottom.

I'm not trying to make the argument that this is the best file layout. I'm sure there are better tradeoffs, preserving similar locality while improving utilization. But I do like things that are simple to understand. See also McCusker on engineering design tradeoffs, short and well worth reading.

XOR metric

The XOR metric as proposed in the Kademlia work has a lot of people excited. I've been thinking about similar things in the context of my PhD research for about 3 years now. But none of my thinking was really written down anywhere, much less published. There's some discussion on the bluesky list of other similar ideas in the literature, as well.

The Kademlia people have actually done the work and published the paper. Perhaps just as important as the design is their analysis. Designers of p2p networks tend to be very optimistic that the resulting real-world network will have the desired properties. I'm pleased to see p2p design moving toward real quantitative design.

The XOR metric is good in very large nets. You assign an ID to each node, for example the hash of its public key. Suppose that you distribute blocks among the nodes based on the hash of the block, so that a block is likely to be found in nodes with "nearby" hashes. Then, if you know the hash of a block, you have the problem of finding a nearby node.

If the net is reasonably small, then maybe you broadcast contact information for new nodes joining. That way, all nodes know about all other nodes, so the problem is simple. But as the net scales up, this strategy won't work as well.

Hence the XOR metric. Simplified, the "nearness" of two id's is the number of bits in common in the prefix. So, the nearness of "10100..." and "10010..." is 2, because the first two bits (but not the first three) are identical. Each node now keeps track of a small number of other nodes, as few as one for each distinct "nearness" value (which scales as lg N).

Now if you have a target id in hand, keep iterating this step: choose the closest node to the target id. Ask them for contact information for a closer node. You'll get to the closest node within lg N steps, as each step increases the nearness by one. A picture for the analysis resembles walking down a binary tree.
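
Here's a toy sketch of that lookup in Python. It only illustrates the prefix-nearness idea and the greedy walk; node.closer_contact() is a hypothetical RPC, and all of Kademlia's actual bucket maintenance is glossed over.

    ID_BITS = 160

    def nearness(a, b):
        """Number of leading bits the two ids share (higher = closer).
        Equivalently, ID_BITS minus the bit length of a XOR b."""
        x = a ^ b
        return ID_BITS if x == 0 else ID_BITS - x.bit_length()

    def lookup(start_node, target_id):
        """Greedy walk: keep asking the closest known node for a closer contact.
        Each hop increases the prefix nearness by at least one, so the walk
        terminates within lg N steps."""
        node = start_node
        while True:
            candidate = node.closer_contact(target_id)   # hypothetical RPC
            if candidate is None or \
               nearness(candidate.id, target_id) <= nearness(node.id, target_id):
                return node
            node = candidate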

Many p2p network designs need to find arbitrary other nodes, and are expected to scale up. If so, there are two choices: use an algorithm such as Kademlia's (or based on Chord or Tapestry, which were the inspirations for Kademlia) to find short routes, or pray. It's a good test, I think, of the health of the project.

The stamp-trading network described in Chapter 7 of my thesis is another twist on this idea. The lg N topology is similar, but requests flow only along trusted links in the network. It's an open question whether it will in fact find short routes given this additional constraint.

(Thanks to Roger Dingledine for comments on an earlier draft, and to Zooko for good discussion and links. Responsibility for oversimplification remains mine.)

casper

casper.ghostscript.com seems to be down. I hope it's just because transbay.net, the colo host, is moving, and that it'll be up again in the morning. Ah well.

Linux or Mac

David McCusker wants a development machine for home. I think he would be happy with either a Linux box or a Mac running OS X. In the former case, he'll have to spend some time dealing with Linux weirdness, in the latter, he'll have to spend some time dealing with OS X weirdness.

If the goal is raw price/performance, Wintel is a clear win. Buying the cheapest parts on Pricewatch is somewhere around 50% of a comparable-spec Mac. But Macs are arguably more elegant than Wintel. And, if stuff like having digital cameras Just Work when you plug them in is important, there is no contest.

When I ordered spectre, raw horsepower was in fact a driving factor. I need to do regression testing as quickly as possible. Otherwise, I would have been sorely tempted to get a Mac.

David also expresses the wish to do cross-platform GUI coding. Unfortunately, there's no good story here. Drawing is only a small part of the overall problem, so using something like OpenGL won't help much. I do expect the situation to improve over time. wxWindows is probably closest, and has OSX support in development versions.

What is the optimum grain size?

One of the most important factors affecting performance of a tree access protocol is grain size. Here I'll present a very simplistic analysis. A lot of people never bother to do any.

How do you slice up a tree into grains? I propose to serialize the tree, chop the serialization into fixed-size blocks (leaf nodes), and use a btree-like structure to tie this together. The cool new idea is to count parentheses in the btree nodes. This lets you fly around the tree without having to dig into the leaves.
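
Here's my rough sketch of the paren-counting idea, in Python; the summary fields and helpers are my own invention for illustration, not Athshe's actual design. Each leaf block's open/close counts are kept in the btree's internal nodes, so you can, for example, find where a subtree ends without reading the leaf blocks themselves.

    # Serialize the tree as an S-expression-like token stream, chop it into
    # fixed-size blocks, and summarize each block by its parenthesis counts.

    def summarize(block_text):
        """Per-block summary stored in the parent btree node (assumed design)."""
        return {'opens': block_text.count('('), 'closes': block_text.count(')')}

    def find_closing_block(summaries, start_block):
        """Walk the block summaries, tracking paren balance, to find the block
        in which a subtree opening at the start of `start_block` is closed."""
        balance = 0
        for i in range(start_block, len(summaries)):
            balance += summaries[i]['opens'] - summaries[i]['closes']
            if balance <= 0:
                return i
        return len(summaries) - 1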

There are other ways to do it, of course. You can use one grain per node. You can use one grain for the whole tree, perfectly reasonable if the tree is smallish. You can also try to aggregate subtrees into grains. On disk, the advantage of fixed-size blocks is good utilization. On a network, you don't care about utilization of blocks, but you still might care about a large variance in grain size.

For simplicity, let's assume that the tree is read-only. We will analyze two usage patterns. The first is simply to traverse the whole tree. The second is to navigate to random nodes in sequence.

Traversing the tree means scanning the serialization linearly. This is cool. You only touch each block once. Assume that the time to fetch a block is a latency value, plus the size of the block divided by the bandwidth. Total time for traversing the tree is (tree size in bytes) / (bandwidth in bytes per second) + (tree size in bytes) * latency / (block size in bytes). It is easy to see that large blocks are more efficient.

The random node case is different. When you fetch a block, only a small part of it is relevant. The rest is waste. The time to fetch a node is latency + (block size in bytes) / bandwidth. This time, small blocks are more efficient.

What is the optimum grain size, then, for a mixture of both cases? When (block size) = latency * bandwidth, both individual cases are exactly a factor of two slower than their limiting best case (infinitely large blocks in the case of a traversal, infinitely small in the case of random nodes). Thus, the optimum will be on the order of latency * bandwidth.
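
Here are the two cost formulas in Python, just as a toy model to play with; the device numbers plugged in at the end are the rough figures from the table below.

    def traverse_time(tree_bytes, block_bytes, latency_s, bandwidth_Bps):
        """Whole-tree traversal: pay bandwidth for every byte, plus one
        latency per block."""
        return tree_bytes / bandwidth_Bps + (tree_bytes / block_bytes) * latency_s

    def random_node_time(block_bytes, latency_s, bandwidth_Bps):
        """Fetching one random node: one latency, plus a whole (mostly
        wasted) block over the wire."""
        return latency_s + block_bytes / bandwidth_Bps

    # At block_size = latency * bandwidth, each case is exactly twice its own
    # best case, so that product is the order-of-magnitude optimum.
    disk_optimum = 0.010 * 50e6    # 10ms, 50M/s  -> about 500K
    modem_optimum = 0.050 * 4e3    # 50ms, 4K/s   -> about 0.2K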

What is latency * bandwidth for real devices? Here's a quick table. Don't worry about small inaccuracies. We're trying to get the order of magnitude right. This is just disk and network. Memory hierarchy is important too, but the analysis is considerably different, so I won't do that tonight.

modern disk: latency 10ms, max bw 50M/s: 500K
wireless net 802.11b: latency 2.5ms, max bw 0.5M/s: 1.25K
modem: latency 50ms, max bw 4K/s: 0.2K
100Mb lan: latency 0.3ms, max bw 10M/s: 3K
dsl down from a nearby server: latency 20ms, max bw 100K/s: 2K
dsl up to a nearby server: latency 20ms, max bw 10K/s: 0.2K
dsl down international: latency 200ms, max bw 100K/s: 20K
dsl up international: latency 200ms, max bw 10K/s: 2K

In Unix, the traditional block size is 4K. It's interesting that this value is not far from the mark for networks, even over a very broad range of performance. So the traditional block size is actually still reasonable.

Disk is the outlier. What's more, latency * bandwidth scales very roughly as the square root of areal density, and that's scaling like mad. It used to be 4K, but that was a long time ago.

But 500K is two whole orders of magnitude bigger. If we believe this analysis, then access to a tree on disk will spend 99% of the time seeking, and 1% accessing useful data. That would be bad.

The story is a bit more complex, though. Real disk-based systems spend a huge amount of effort trying to increase locality, or the clustering of items likely to be accessed in a cluster. If this effort is successful, then the effective grain size goes up. Caching with prefetching is a particularly effective technique. Modern OS kernels implement prefetching, and so do drives. In fact, when you request a 4K block from a drive, it will usually spend on the order of 10ms seeking to the right track, then 5ms waiting for the disk to spin to the right sector. Given a typical raw transfer rate of 50M/s, that means 250K or so of data will fly past the read head. In a modern disk, all that goes into a cache. Then, when the kernel requests blocks from that range, it gets them immediately.

So, to do a btree efficiently, you have (at least) two choices. You could specify a really large block size, and not worry about the order of blocks on the disk. Another method is to use a small block size, but try hard to cluster nearby blocks. This demands more intelligence when allocating blocks within the file. It's also well known from filesystems that it's hard to avoid fragmentation when the utilization is very high. When there is ample free space, there are more candidates to choose from to try to optimize locality.

Of course, in the read-only case, you can allocate the blocks exactly in traversal order. In this case, 4K blocks are again reasonable. The problem is avoiding fragmentation as you start updating the tree.

David McCusker talks about this a bit in database areas. There, he suggests 2K blocks, but allocation logic that works in grains of 16K. That's still not good enough (by more than an order of magnitude) if the grains are randomly scattered in the file. Maybe he's doing something else to try to cluster grains; it's not clear to me. But I do believe it is a tricky problem.

The network version is in many ways simpler (this is a relief, because in other important ways it is harder). You don't have to worry about locality, as there really is no such concept. The network latency is the same no matter which block you request. You also don't have to worry as much about utilization, because it's possible to simply skip sending unused byte ranges. Blocks can be variable in size, too, unlike the fixed blocks of disks.

As I warned, this analysis is oversimplified. Yet, I think it is useful to understand real performance problems. It gives a lot of insight into the appeal of flat files for mailboxes. At 50M/s, you can grep an entire 250M mailbox in five seconds. A dumb database design, by contrast, may use a hash table to allocate individual messages effectively at random within the file. Thus, if you try to search through the messages in order, each read will take 15ms or so. Five seconds will give you enough time to search 300 messages, consistent with the two orders of magnitude discrepancy between the typical block size of 4K and the optimum grain size of about 500K for disks.

Thus, sophisticated file formats have a danger of creating serious performance problems. But I consider that a quantitative problem, one that yields well to careful analysis and design. To me, those are the most fun!

29 May 2002 (updated 29 May 2002 at 08:33 UTC) »
Linked

I'm reading Linked: The New Science of Networks, by Barabasi. I'll have quite a bit more to say when I'm finished reading it, but in the meantime, if you're interested in networks, I can highly recommend it. In particular, if you're trying to do p2p, then run, do not walk, to your friendly local independent bookstore.

Advogato and community

anselm: you raise good questions. With luck, Advogato can become more vital without sacrificing its thoughtful tone.

In any case, I think the secret to success in online community is for it to be firmly based on real, human community. That's sometimes tricky in the highly geographically dispersed world of free software, but worth cultivating.

Why Athshe will not be based on XML

Athshe is a (completely vaporware) suite of tools for tree-structured data, including an API for tree access, a simple serialized file format, a more sophisticated btree-based random access file format, and a network protocol for remote tree access and update. Everybody is doing trees these days, because XML is hot. Yet, I do not plan to base Athshe on XML. Why?

In short, because it makes sense for Athshe to work with a simpler, lower level data language. The simplicity is a huge win because Athshe will take on quite a bit of complexity trying to optimize performance and concurrency (transactions). Also, XML is weak at doing associative arrays, and I think Athshe should be strong there. Lastly, the goals of Athshe are focussed on performance, which is a bit of an impedance mismatch with most of the XML community.

I hardly feel I have to justify the goal of simplicity - it's such an obvious win. So I won't.

The children of an XML element are arranged in a sequence. However, in a filesystem, the children of a directory are named; the directory is an associative array mapping names to child inodes. I believe this to be an incredibly useful and powerful primitive to expose. Many applications can use associative arrays to good advantage if they are available. One example is mod_virgule, which leverages the associative array provided by the filesystem to store the acct/<username> information.

XML (or, more precisely, DOM) actually does contain associative array nodes. Unfortunately, these are Attribute nodes, so their children are constrained to be leaves. So you get the complexity without the advantages :)

A very common technique is to simulate an associative array by using a sequence of key/value pairs. This is essentially the same concept as the property list from Lisp lore. XPath even defines syntax for doing plist lookup, for example child::para[attribute::type="warning"] selects all para children of the context node that have a type attribute with value warning. However, mandating this simulation has obvious performance problems, and may also make it harder to get the desired semantics on concurrent updates. In particular, two updates to different keys in an associative array may not interfere, but the two updates to the plist simulation very likely will.

Nonetheless, this concept of simulation is very powerful. I believe it is the answer to the "impedance mismatch" referenced above. Athshe's language consists of only strings, lists, associative arrays, and simple metadata tags. It's not at all hard to imagine mapping "baby XML" into this language. In Python syntax, you'd end up with something like:

    <p>A <a href="#foo">link</a> and some <b>bold</b> text.</p>

becomes:

    ['p', {}, 'A ', ['a', {'href': '#foo'}, 'link'], ' and some ', ['b', {}, 'bold'], ' text.']
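
A minimal sketch of that mapping, using Python's standard ElementTree module (my own illustration, not Athshe code), which handles the mixed-content text/tail bookkeeping:

    import xml.etree.ElementTree as ET

    def to_nested_lists(elem):
        """Map an XML element to the [tag, attrs, child, child, ...] form."""
        out = [elem.tag, dict(elem.attrib)]
        if elem.text:
            out.append(elem.text)
        for child in elem:
            out.append(to_nested_lists(child))
            if child.tail:
                out.append(child.tail)
        return out

    elem = ET.fromstring('<p>A <a href="#foo">link</a> and some <b>bold</b> text.</p>')
    print(to_nested_lists(elem))
    # ['p', {}, 'A ', ['a', {'href': '#foo'}, 'link'], ' and some ', ['b', {}, 'bold'], ' text.']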

With a little more imagination, you could get DTD, attributes, entities, processing instructions, and CDATA sections in there.

In fact, mappings like this are a common pattern in computer science. They resemble the use of an intermediate language (or virtual machine) in compilers. These types of mappings also tend to be interfaces or boundaries between communities. Often, the lower-level side has a more quantitative, performance-oriented approach, while the higher-level side is more concerned with abstraction. Cisco giveth, and the W3C taketh away :)

Credit where credit is due

My recent piece on link encryption drew deeply on an IRC conversation with Roger Dingledine and Bram, and Zooko provided the OCB link.

A good application for an attack-resistant trust metric

A best-seller on Amazon.

Note especially how the shill reviews have very high "x out of y people found the following review helpful" ratings. A perfect example of a system which is not attack resistant. Reminds me of the /. moderation system :)

Thanks to Will Cox for the link.

Ghostscript

I've been busy hacking on Well Tempered Screening. I've got it random-access, and I've coded up the enumeration so you can use PostScript code to define the spot function. I still have to grapple with the "device color" internals, which intimidate me.

On a parallel track, the DeviceN code tree, which up to now has lived in a private branch, is shaping up fairly nicely, at least using regression testing as a metric. We should be able to get it checked into HEAD soon.

Web UI's and robustness

Yesterday, I linked a claim by Dave Winer that users like to use Web browsers as their UI, and a note of frustration by Paul Snively, who lost some work on his blog, and blamed the "Web browser as UI" approach. Paul has since retracted his original post, but I think there's a deeper issue that deserves to be explored. I've lost a fair amount of work posting to Advogato as well, so let that be the whipping boy rather than Radio.

Using the Web browser for the UI has a different set of design tradeoffs than a traditional desktop app. Some things get easier, others get harder. Lots of things can lead to a bad (or good) user experience, including things not foreseen by the software's designer. I know I didn't pay a great deal of attention to robustness when designing Advogato.

I'm not going to try to catalog all the advantages and disadvantages of Web-based UI's - that's a daunting task. Instead, I'll focus on robustness, or the risk of losing work.

I used to lose coding work fairly frequently. Now I don't, because I do lots of things to improve robustness. First, I use an editor with good robustness features. Second, I check my work into a remote CVS server. Third, I back up the repository to CD-R every week. I also frequently copy local working files to another machine on my network, and send patches to mailing lists. As a result, it's been quite a while since I've lost any coding work.

I still lose Advogato posts, though, most recently about a week ago. Why? For one, Netscape 4.7x's text entry box doesn't have any of the paranoia that real editors have about losing work. In fact, pressing Alt-Q (the same command as "format paragraph" in Emacs) is a quick shortcut for "lose all state and quit". This could be fixed in the client, but as the designer of Advogato, I don't have much say about that.

There is more I could do but don't, though. I could make the "Preview" button store the draft in a log, persistent for a day or so and then garbage collected. You could, if you chose, edit in the browser and click "Preview" regularly, much as I regularly press Ctrl-X Ctrl-S in Emacs. In fact, I rather like this idea, as it has other advantages. For example, you could share the draft link with others.

Similarly, I could implement something like version control for diary entries. When you post a new edit, it saves the old version (or, perhaps, the diffs) somewhere. Thus, if you clobber your work (as I did a few days ago), you can get it back. Again, there are other advantages to this approach. This is basically ReversibleChange, one of the "soft security" ideas popular in the Wiki world.

A very high-tech client might even be able to implement something analogous to the autosave feature of Emacs. It would periodically upload the current state of the text entry to the server for safekeeping. However, this would require some pretty major changes to the way the Web works, so I'm not holding my breath.

In the meantime, there are alternatives. For one, it's possible to use an XML-RPC client instead of a Web browser. Many of these clients encourage you to use your favorite editor, which helps with robustness. The client could also keep a log of everything submitted. Such an approach would be complementary to the server-side tweaks I mentioned above.

For now, I generally write my diary entries in Emacs, then simply cut-and-paste into the browser. It's not perfect, but it works.

Baseball

A Quaker friend of ours, Ricki Anne Jones, invited us (Heather, the kids, and me) to today's A's game. She gets a luxury box every year because she's such a loyal fan, and she invites a bunch of her friends. It was fun. Alan and Max had another kid to play with, and they avoided melting down, so this was the first time they'd lasted through the entire game.

W3C

Dan Brickley of the W3C showed up on #p2p-hackers tonight. We talked about AaronSw's manifesto, among other things. It was a fairly pleasant conversation, but I still feel that the W3C is pretty badly broken. Dan encouraged me to write up some of my Athshe stuff (I was trying to talk about it online). As a result, my next Athshe blog will be "why Athshe will not be based on XML."

New computer

I ordered my new dual-Athlon from KC Computers. I'll keep a running log, especially in case other people want to follow the same recipe.

The final price was about $1600. Picking the absolute bottom price from Pricewatch, the parts add up to around $1000, not including assembly and shipping. I consider it money well spent, because I figure I have a considerably lower risk of something going wrong and eating up lots of my time.

Even so, I've been quite satisfied every time I've bought something off Pricewatch, even when the prices seem too good to be true. Good reputation information about the sellers (not unlike what ebay does) would seem to decrease the risk even more.

There are lots of things that can go wrong. The seller could turn out to be shady. The parts could turn out to be defective, possibly seconds or returned merchandise. The parts could be completely legitimate, but of low quality (like the infamous IBM Deskstar 75GXP drives). They could be individually ok, but subtly incompatible with each other, apparently a very serious problem with early Athlon platforms. They could be just fine, but not well supported by the operating system. This last problem has a wide range of variability, as it varies depending on the OS flavor. Something like recompiling the kernel to update a driver may be perfectly reasonable for a sophisticated user, but out of reach for others.

In all these cases, good metadata could help. If I knew I was getting parts with a good chance of working well in the system, I'd have no problem with going through Pricewatch-style vendors, and wielding the screwdriver myself.

Such a metadata system could be quite high-tech. For one, it could compute total cost, including aggregation of shipping and sales tax. It could take shipping delay into account, as well. Optimizing this sounds like a dramatically scaled-down version of the ITA fare calculation used on Orbitz. It's appealing because it maps to minimizing labor and energy costs in the real world, not just getting the best outcome of a game.

You could also do stuff like autogenerating recipes (select this BIOS option, set hdparm to that, etc), and incorporating feedback from others with similar configurations. Even more extreme would be to customize a distribution. Custom kernels, in particular, seem like a win.

A huge part of the value of a brand (such as IBM or Dell) is the QA work they do, essentially creating metadata about reliability and compatibility as part of building and delivering systems. Even so, the assurance is far from absolute. For example, IBM Thinkpad 600's have a defective battery design, causing them to die too early. Metadata from TP600 owners may be more useful input to a buying decision than "IBM is a good brand".

Another reason to believe that a high-tech metadata system is useful is the huge variability in the needs of users who run free software. One size most definitely does not fit all. This, I think, is one reason why companies such as VA Research have had such a difficult time being competitive.

There was a lot of talk about "mass customization" being part of the new economy, but not much follow-through. Most dot-com retailers were little more than mail order outfits that happened to publish their catalog through the Web rather than on paper (in fact, many have since added paper catalogs to their mix).

I'm certainly not going to put this kind of metadata system together myself, but I do think it would make an interesting project for someone. Clearly, this type of service would be worth real money. I'm not alone in believing that metadata is important. Amazon has very high quality metadata about books, and that's why their site is so valuable.

26 May 2002 (updated 26 May 2002 at 08:50 UTC) »
Advice to young standards authors

aaronsw posted The Standards Manifesto a few days ago. In it, he expresses frustration with the W3C. I can certainly identify.

I've had a fair amount of experience with standards processes over the years. I'm sure I'll have more. Most, but not all, of the experiences have been unpleasant. Standards are incredibly important, but not much fun. Standards committees, in particular, tend to be tedious, and not very smart (even if the individual members of the committee are). In fact, standards committees are in many ways the least qualified entities to be writing standards.

Designing good standards is an art, and an underappreciated one to boot. One of the most important quality metrics is complexity. A standard should be as free from unneeded complexity as possible. The cost of implementing a feature goes up by roughly an order of magnitude from spec to prototype, and again from prototype to production-quality implementation. It's too easy for glorified technical writers to come up with a new feature if they don't have to implement it themselves.

Standards reflect the process that creates them. The biggest problem with standards processes is lack of openness. There is a spectrum, ranging from complete transparency (the IETF at its best, where everybody is welcome to show up at meetings, and "rough consensus and working code" carries the day), to evil cartels such as the DVD CCA and the BPDG. In these extreme cases, only members are allowed to participate, only members are allowed to see the spec, and there are NDA's and all kinds of legal restraints to "protect" implementations of the standard. The W3C is somewhere in the middle of this continuum. In general, you have to be a paying member to participate, the deliberations are private (and NDA'd), but the resulting spec is public, comments from the public are solicited, and there are (usually) no patent royalties. The W3C seemed to be headed down a darker path, including promotion of patent licensing, but to their credit they responded to public outcry and backed off.

It is often said that "the great thing about standards is that there are so many to choose from." I propose that the same is true of standards processes. Small, motivated groups (or even individuals) can and should make standards. In fact, their work is often the most important. Examples abound. While there was a standards committee around JPEG, the really important work was done by the IJG (mostly Thomas Lane), which standardized a patent-free subset of JPEG and produced a very high quality free implementation. Sockets, developed by Bill Joy in the early eighties and quickly established as the universal API for the Internet, were ignored by standards committees until just a few years ago. Committees tended to favor things like XTI, now mercifully dead.

Standards bodies are reasonably good at codifying existing practice. They suck at doing research. A good process winnows needless complexity from a standard, focussing on the essence of the problem. It's almost a quantitative science, as different approaches to the same problem may differ quite significantly in complexity.

A naive person might assume that building a global information network, scaling from coke machines to supercomputers, would be a harder problem than, say, synchronizing audio and video in multimedia. Yet, the TCP/IP documents (RFC's 793 and 791) weigh in at about 128 pages and are reasonably complete, while SMIL is about 500 pages, and includes lots of other standards by reference.

The economic incentives for closed, proprietary (and complex) standards are powerful. Who would spend grueling hours in a standards committee to help create a beautiful, free, simple standard, out of pure altruism? In fact, much of this work is similar to writing free code, but it tends to be quite a bit less fun.

I think the salvation lies in making the creation of new standards more fun. I'm not sure the best way to do this, but can offer my own experiences. The most fun I've had in a standards process has been IJS. It was in many ways a purposeful experiment in process. I deliberately narrowed the scope (things like UI and i18n are not included), while still trying to solve an important problem (making it easy to create printer drivers decoupled from the rasterization engine). I also acted as a dictator with respect to the spec. I didn't include suggestions from the mailing list until I was convinced that they were necessary, and properly thought through. Another key part of the process was the reference implementation, not merely free but also designed explicitly to adapt easily to other people's drivers rather than impose my own framework.

Also important was the fact that IJS built on the work done by HPIJS, a working protocol and codebase that merely had a few portability issues and some aspects specific to HP printers. I didn't have to take on a whole research project.

IJS is of course not perfect, but I do think it's doing its job. My time working on it is justified in the improved user experience and decreased support load, and it was kinda fun. (see, it wasn't altruism, it was enlightened self-interest :) The next time I get involved in a standards process, it's going to look a lot more like IJS than a W3C working group.

So, my advice to Aaron? First, by all means seek a better process than the W3C's. I have no doubts that such a process can be found. Second, be clear on whether the task is primarily research into what should be standardized, or codifying a well-understood domain. In the former case, it makes sense to just go do stuff. In the latter case, finding consensus is more important. Third, strive for simplicity. Feel free to ignore those who wish to add their own pet complication, especially if they don't share your vision.

Last, and perhaps most important, treat it as something that's supposed to be fun. Don't get too emotionally wrapped up, especially over the question of whether the rest of the world is getting on board. If you create something really good, it will have an impact. If the standard is simple, it will be expedient for others to implement.

Selected quotes from other blogs

From Dave Winer's Scripting News:

Anyway, we went far and wide and swung around to desktop websites, a subject near and dear to my heart. He wondered why more Mac developers weren't using the combo of PHP and Apache that comes bundled with every Mac. I think it's just a matter of time before Unix developers get there. Users like apps that run in the browser.

From Paul Snively's Gadfly Redux:

OK, this is twice now that I've had a ton of news queued up, posted some things, and... *poof*. Dozens of news items gone. Thank God I know that three pressing ones are actually comments from Paul Prescod and are safely ensconced with YACCS. But there are stacks of other things I wanted to respond to, and they're gone.

You know, I try to be patient. I try to be reasonable. I get a chuckle out of it when Dave says in best ha-ha-only-serious fashion that they write software that sucks. But Radio has a combination of serious architectural flaws: a browser interface that allows the browser's notion of object identity and the database's to get out of sync, possibly due to browser page caching and navigation using the back and forward buttons; and the lack of transactions in the underlying database. Sooner or later, this combination will result in what I've now seen twice.

I'd be a lot happier if Radio would just be an ordinary desktop application. Editing in the browser isn't a win, especially on the Mac. I'm totally with the local flyweight server idea. It's just that I want a well-integrated, rich client to go with it.

I myself have lost plenty of work editing in a browser. Here, I think, we have the classic quality tradeoff between a universal client, and one optimized for a specific task. This is one of the reasons I'm so happy to see all the client work happening around Advogato :)

PhD

Heather got the paper with the gold seal on it. We went out for sushi to celebrate.

New machine

I got a bunch of recommendations for getting a new machine built. Thanks, Zooko, wmf, and Krishnakumar B. Any of the recommendations I got would be a better deal than fiddling with it myself.

Creative Commons

The Creative Commons looks both interesting and useful. I'll probably release some stuff (other than software) under one of their licenses.

Link encryption

For fun, I've been looking at link level encryption techniques. TLS (formerly SSL) is surprisingly not too hairy. The main thing wrong with it is lots of options, and X.509 certs. The latter, of course, are a big turnoff.

One of the (numerous) gotchas is the "side channel" attack against RSA encryption with PKCS #1 padding. Basically, if the server will tell you whether the decryption of an attacker-provided RSA integer has valid padding, an attacker can use that as an oracle to decrypt intercepted ciphertexts (this is Bleichenbacher's attack). The way to avoid trouble is to not return an immediate error indication. Ideally, you just treat an invalidly padded decryption as random, and wait for a future message authentication to fail.
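
As an illustration of the "treat it as random" approach, here's a sketch using the pycryptodome library (my choice of library for the example; its sentinel argument exists precisely for this purpose):

    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_v1_5
    from Crypto import Random

    key = RSA.generate(2048)
    ciphertext = PKCS1_v1_5.new(key.publickey()).encrypt(b'premaster secret')

    # On a padding failure, decrypt() returns the sentinel instead of raising,
    # so the caller behaves identically in both cases. Random garbage as the
    # sentinel means a later MAC check will simply fail.
    sentinel = Random.get_random_bytes(48)
    plaintext = PKCS1_v1_5.new(key).decrypt(ciphertext, sentinel)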

One of the major issues in link encryption is to get both privacy and authentication (which implies integrity). It's easy to get this wrong. In fact, all of the traditional cipher modes such as CBC, CFB, etc., allow some systematic modification of the decrypted plaintext without necessarily being detected.

Thus, conservative designs (including TLS) generally apply both an encryption step and an authentication (MAC) step. TLS does MAC and then encryption, but a relatively recent paper by Bellare and Namprempre proves that doing encryption first, then MAC, is somewhat stronger.

I really like the paper, because it spells out pretty clearly what's secure. I believe that the traditional rules of thumb for choosing a cipher mode (such as the discussion in Applied Crypto) go out the window. All of the privacy-protecting modes (including CBC, CFB, and counter aka CTR) are equally good. CTR, in particular, has gotten a bad name because it doesn't even try to do integrity protection, but it's nice in many ways: it parallelizes, it's simple, and the proof of security is pretty simple too. This paper makes a nice case for it.

The preferred MAC algorithm these days tends to be HMAC, which is just about as fast as the underlying hash algorithm, and is secure against extension attacks. If you want a conservative recipe, CBC or CTR plus HMAC seem like good choices. Another choice is a CBC-MAC. The advantage is that the security is well understood, and you have one less primitive to rely on. If the encryption is sound, your authentication will also be sound. The disadvantage is that a CBC-MAC based on AES is somewhere around half as fast as SHA1.

A lot of people are trying to do encryption and authentication in one go, to save the time of doing the separate MAC step. It turns out to be a very hard problem. Quite a few of the proposed modes for AES fall into this category. Of them, IAPM and OCB seem to be the most appealing. Unlike the many failed previous attempts (including the NSA's own DCTR mode), they have proofs of security. In addition, they're parallelizable, unlike traditional MAC's. The big problem is that they're patented.

I can't figure out whether ABC (accumulated block chaining) is any good or not. It claims to do a good job on authentication, but in its simplest form can leak bits about the plaintext. They have a fix (a 1-bit rotation), but I'm not smart enough to figure out whether it's as secure as possible. Another disadvantage is that it's just as unparallelizable as CBC.

It's interesting that most of the academic literature deals with a single message of known size. The link encryption problem is subtly different. Typically, you have a stream of messages, with a length prefixed to each. You want to check the authenticity of each message, and also prevent replay attacks. The patent free "generic composition" mode advocated by Rogaway comes up a bit short. In particular, you can't decrypt a block unless you know whether it's the last block in the message. This could be a problem if you want to allow very short messages. You could of course specify a minimum message length (in which case the first block would never be the last block, and after decrypting it you have the length field so you know where the last block is), or try the decryption both ways. Neither is all that satisfying.

In addition, having a length prefix lets you avoid a lot of the complexity that gets added to avoid extension attacks. If you were trying to do link encryption as simply as possible, patent-free, and with conservative design, I propose the following recipe. Use AES in CTR mode to generate a pseudorandom stream. For each message in the sequence, prepend the length, and then XOR against this stream. This is your ciphertext for the message. Take the SHA1 hash of (the session key) + (a sequence number) + (the ciphertext). This is your MAC tag (actually, you only need the first 96 or 128 bits of it to attain reasonable security). The sequence numbers can be byte counts or message counts; the essential thing is that they never repeat. The sender sends alternating message ciphertexts and MAC tags. The receiver decrypts the length header first, then decrypts the rest of the ciphertext and computes the same MAC tag as the sender. If this tag matches the one sent in the stream, the message is valid.
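
Here's a minimal sketch of that recipe in Python, using the pycryptodome library for AES (my choice; the text above doesn't name one). It simplifies one thing: instead of a single running keystream for the whole session, each message gets its own counter range derived from the sequence number. As said above, don't handroll this for production use; it's only to make the recipe concrete.

    import hashlib, struct
    from Crypto.Cipher import AES

    TAG_LEN = 12  # 96-bit truncated tag, as suggested above

    def seal(session_key, nonce, seq, message):
        """Prepend a 4-byte length, XOR with an AES-CTR keystream, then tag
        with SHA1(session key + sequence number + ciphertext).
        `nonce` is an 8-byte per-session value shared by both sides."""
        ctr = AES.new(session_key, AES.MODE_CTR, nonce=nonce,
                      initial_value=seq << 16)   # room for 64K blocks/message
        ciphertext = ctr.encrypt(struct.pack('>I', len(message)) + message)
        tag = hashlib.sha1(session_key + struct.pack('>Q', seq)
                           + ciphertext).digest()[:TAG_LEN]
        return ciphertext + tag

    def open_message(session_key, nonce, seq, blob):
        """Recompute and check the tag, then decrypt; None means reject."""
        ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
        expect = hashlib.sha1(session_key + struct.pack('>Q', seq)
                              + ciphertext).digest()[:TAG_LEN]
        if tag != expect:            # use a constant-time compare in real code
            return None
        plain = AES.new(session_key, AES.MODE_CTR, nonce=nonce,
                        initial_value=seq << 16).decrypt(ciphertext)
        (length,) = struct.unpack('>I', plain[:4])
        return plain[4:4 + length]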

Of course, don't take my advice for it. In most cases, just using a good TLS implementation is a better idea than trying to handroll your own. But I had fun learning this.

Soapy (in)security

I got this email from Paul Prescod:

Don't be so sanguine about SOAP security. It could have been designed NOT to use port 80 and NOT to look like HTTP traffic. It was clearly designed to camoflage itself as Web traffic. It would have been easy to send it over sockets and another port. You say that any solution would have security issues. Well yes, everything networked has security issues, but SOAP's very generic RPC design encourages problems like the SOAP::Lite dispatch problem. Some people are writing programs to wrap any COM object in SOAP. Standard application protocols like HTTP and SMTP strongly encourage you to build an abstraction layer that maps from the worlds they talk of "resources" and "mailboxes" to the implementation world: "objects". SOAP does not encourage the introduction of that mapping layer. In fact, standard SOAP/WSDL tools encourage you to take existing code and "generate" a SOAP/WSDL interface to it. Three examples: Visual Studio.NET, axis "java2wsdl" and SOAP::Lite.

Well said. People who know something about security are saying that the nature of Soap makes it especially fertile ground for security vulnerabilities.

I said, "I believe Dave when he says that Soap wasn't explicitly designed to go through firewalls." But my point here is that it doesn't matter. The intent of the designers is irrelevant, unless they did careful work to make Soap security as good as it could have been. I see no signs of such work.

New machine

I need to buy a fast machine, in large part to run the Ghostscript regression tests. By far the best bang for the buck appears to be a dual Athlon. A Tyan Tiger MPX motherboard, two Athlon MP 1900+'s, half a gig of RAM, and a disk cost less than $1000 in parts, as listed on PriceWatch.

It's been a while since I put a machine together myself. I'm a little apprehensive, because I know it will take a nontrivial amount of time, and there's a chance it won't work right when I do get it together. I'd rather pay somebody else to do it. Unfortunately, the big name PC's don't do a good job providing this kind of machine. Is there a smaller shop that can do this? Last time around, I bought a dual Celery from The Computer Underground, which was great, but they're not in the box biz anymore.

Alternatively, is there an Advogatan who can do a good job building such a box and putting Debian on it, and who'd like to make a couple hundred bucks?

Otherwise, I'll just order the parts and hope for the best. It doesn't look that hard, and I can probably get some help on the hardware side locally.

I'm also interested in hearing recommendations and anti-recommendations from people who have gone this route. My friend at the coffee shop suggests a water-cooled case as a way of cutting down noise. Is this reasonable?

Ok, the response to my bit on Soap security was pretty negative. I didn't communicate what I was trying to say very well. Apologies.

Soap is probably not any worse than CGI or other Web things. But my point is that it's not any better, either. I guess it all depends on whether you think that's good enough. (I don't.)

Firewalls are pretty useless. The main security implication of going through port 80 is that it will eliminate the need to explicitly open a Soap-specific port in the firewall, in the off chance it wasn't left open anyway. So talking about Soap and firewalls isn't very enlightening. In any case, I believe Dave when he says that Soap wasn't explicitly designed to go through firewalls.

PhinisheD

This afternoon, I helped Heather print out the final paper copies of her dissertation. Tomorrow, she hands it in for binding in the library and microfilming. The entire time I've known her, this has been a goal to work towards. Now it's actually done.

For historical reasons, the thesis is in troff format. I consider myself a smart guy, but fixing things was a usability nightmare. I had to rely on a mix of man pages, "helpful" web tutorials, a book on troff, and the groff s.tmac source. Even then, I ended up hand-tweaking a bunch of stuff. Ugh.

I helped format some Chinese hanzi as illustrations. I feel absolutely terrible about the fact that I used a Japanese font even though the context was Chinese, but didn't want to spend the time to redo all the eps's. Fortunately, all of the hanzi are quite similar visually to the corresponding Japanese kanji, so it's pretty unlikely that anyone will notice, much less care. rillian accuses me of perpetuating Unicode FUD, though. I guess I'll just have to live with that on my conscience.

Kids

Alan is resisting doing homework with me, and also resisting reading. I'm not sure why. He has a lot of anxiety, and these days seems to be expressing it a lot more.

Max loves cooking. Tonight he helped me make a stir-fry. He had a little difficulty stirring, and decided that he needed a "big" spatula. When I told him that it was a big spatula, he asked for "giant". I thought it was cute, anyway :)

It's fun to play the "can you say X" game with both kids. For Max, the edge is currently "peanut butter jelly belly". A week or so ago, he had trouble saying this, usually dropping one of the words. Alan has no trouble with "uvulopalatopharyngoplasty". Finding harder words is going to be challenging :)

