Older blog entries for raph (starting at number 192)


Heather got the paper with the gold seal on it. We went out for sushi to celebrate.

New machine

I got a bunch of recommendations for getting a new machine built. Thanks, Zooko, wmf, and Krishnakumar B. Any of the recommendations I got would be a better deal than fiddling with it myself.

Creative Commons

The Creative Commons looks both interesting and useful. I'll probably release some stuff (other than software) under one of their licenses.

Link encryption

For fun, I've been looking at link level encryption techniques. TLS (formerly SSL) is surprisingly not too hairy. The main thing wrong with it is lots of options, and X.509 certs. The latter, of course, are a big turnoff.

One of the (numerous) gotchas is the "side channel" attack against RSA encryption with PKCS #1 padding. Basically, if the server will tell you whether the decryption of an attacker-provided RSA integer has valid padding, an attacker can use that oracle to gradually decrypt a captured ciphertext (this is Bleichenbacher's "million message" attack). The way to avoid trouble is to not return an immediate error indication. Ideally, you just treat an invalidly padded decryption as random, and wait for a future message authentication to fail.
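The countermeasure can be sketched in a few lines. This is a hypothetical illustration, not any real library's API: `decrypt` and `padding_ok` stand in for a real RSA implementation's primitives.

```python
import os

def unwrap_secret(decrypt, padding_ok, ciphertext, secret_len):
    # `decrypt` and `padding_ok` are hypothetical hooks standing in
    # for a real RSA implementation.
    plaintext = decrypt(ciphertext)
    if padding_ok(plaintext):
        return plaintext[-secret_len:]
    # No immediate error: substitute random bytes and proceed, so the
    # later message authentication fails instead. The attacker gets no
    # signal about whether the padding was valid.
    return os.urandom(secret_len)
```

The key design point is that both branches look identical from the outside; the failure only surfaces later, when the MAC check fails, which happens for garbage input either way.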

One of the major issues in link encryption is to get both privacy and authentication (which implies integrity). It's easy to get this wrong. In fact, all of the traditional cipher modes such as CBC, CFB, etc., allow some systematic modification of the decrypted plaintext without necessarily being detected.

Thus, conservative designs (including TLS) generally apply both an encryption step and an authentication (MAC) step. TLS does MAC and then encryption, but a relatively recent paper by Bellare and Namprempre proves that doing encryption first, then MAC, is somewhat stronger.

I really like the paper, because it spells out pretty clearly what's secure. I believe that the traditional rules of thumb for choosing a cipher mode (such as the discussion in Applied Crypto) go out the window. All of the privacy-protecting modes (including CBC, CFB, and counter aka CTR) are equally good. CTR, in particular, has gotten a bad name because it doesn't even try to do integrity protection, but it's nice in many ways: it parallelizes, it's simple, and the proof of security is pretty simple too. This paper makes a nice case for it.
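CTR mode really is as simple as the paper suggests. Here's a sketch with a SHA-256-based PRF standing in for AES (it is not a real cipher, just a stand-in so the example is self-contained); a real implementation would use an AES block encryption here.

```python
import hashlib

BLOCK = 32  # keystream bytes per counter value (SHA-256 output size)

def prf(key, nonce, counter):
    # Stand-in for AES: any good pseudorandom function of
    # (key, nonce, counter) yields a usable keystream.
    return hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()

def ctr_xor(key, nonce, data):
    # CTR mode: XOR the data against E_K(nonce || counter) blocks.
    # Encryption and decryption are the same operation, and the blocks
    # are independent, so the loop parallelizes trivially.
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = prf(key, nonce, i // BLOCK)
        out.extend(b ^ k for b, k in zip(data[i:i + BLOCK], ks))
    return bytes(out)
```

Note that CTR provides no integrity at all: an attacker can flip any ciphertext bit and flip the corresponding plaintext bit. That's exactly why the separate MAC step below is mandatory.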

The preferred MAC algorithm these days tends to be HMAC, which is just about as fast as the underlying hash algorithm, and is secure against extension attacks. If you want a conservative recipe, CBC or CTR plus HMAC seem like good choices. Another choice is a CBC-MAC. The advantage is that the security is well understood, and you have one less primitive to rely on. If the encryption is sound, your authentication will also be sound. The disadvantage is that a CBC-MAC based on AES is somewhere around half as fast as SHA1.
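HMAC is in the Python standard library, so the conservative recipe is only a few lines. The key and message here are made-up placeholders:

```python
import hashlib
import hmac

key = b"a separate MAC key"          # placeholder key
ciphertext = b"ciphertext to authenticate"

tag = hmac.new(key, ciphertext, hashlib.sha1).digest()  # 20 bytes
tag96 = tag[:12]   # truncating to 96 bits is a common tradeoff

# Verification should always use a constant-time comparison:
expected = hmac.new(key, ciphertext, hashlib.sha1).digest()[:12]
ok = hmac.compare_digest(tag96, expected)
```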

A lot of people are trying to do encryption and authentication in one go, to save the time of doing the separate MAC step. It turns out to be a very hard problem. Quite a few of the proposed modes for AES fall into this category. Of them, IAPM and OCB seem to be the most appealing. Unlike the many failed previous attempts (including the NSA's own DCTR mode), they have proofs of security. In addition, they're parallelizable, unlike traditional MACs. The big problem is that they're patented.

I can't figure out whether ABC (accumulated block chaining) is any good or not. It claims to do a good job on authentication, but in its simplest form can leak bits about the plaintext. They have a fix (a 1-bit rotation), but I'm not smart enough to figure out whether it's as secure as possible. Another disadvantage is that it's just as unparallelizable as CBC.

It's interesting that most of the academic literature deals with a single message of known size. The link encryption problem is subtly different. Typically, you have a stream of messages, with a length prefixed to each. You want to check the authenticity of each message, and also prevent replay attacks. The patent-free "generic composition" mode advocated by Rogaway comes up a bit short. In particular, you can't decrypt a block unless you know whether it's the last block in the message. This could be a problem if you want to allow very short messages. You could of course specify a minimum message length (in which case the first block would never be the last block, and after decrypting it you have the length field so you know where the last block is), or try the decryption both ways. Neither is all that satisfying.

In addition, having a length prefix lets you avoid a lot of the complexity that gets added to avoid extension attacks. If you were trying to do link encryption as simply as possible, patent-free, and with conservative design, I propose the following recipe. Use AES in CTR mode to generate a pseudorandom stream. For each message in the sequence, prepend the length, and then XOR against this stream. This is your ciphertext for the message. Take the SHA1 hash of (the session key) + (a sequence number) + (the ciphertext). This is your MAC tag (actually, you only need the first 96 or 128 bits of it to attain reasonable security). The sequence numbers can be byte counts or message counts; the essential thing is that they never repeat. The sender sends alternating message ciphertexts and MAC tags. The receiver decrypts the length header first, then decrypts the rest of the ciphertext and computes the same MAC tag as the sender. If this tag matches the one sent in the stream, the message is valid.
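The recipe above can be sketched in code. Two loud caveats: the SHA-256-based keystream is a stand-in for real AES-CTR, and for simplicity each message's keystream is derived from the sequence number rather than running one continuous counter over the session; all function names are my own invention. A production version should also compare tags in constant time.

```python
import hashlib

TAG_LEN = 12  # send the first 96 bits of the SHA1 hash

def keystream(key, seq, n):
    # Stand-in for AES-CTR: SHA-256 of (key, seq, block counter) used
    # as a PRF. A real implementation would use AES here.
    out = bytearray()
    block = 0
    while len(out) < n:
        ctr = seq.to_bytes(8, "big") + block.to_bytes(8, "big")
        out.extend(hashlib.sha256(key + ctr).digest())
        block += 1
    return bytes(out[:n])

def seal(key, seq, msg):
    plain = len(msg).to_bytes(4, "big") + msg          # prepend length
    ks = keystream(key, seq, len(plain))
    ct = bytes(p ^ k for p, k in zip(plain, ks))
    tag = hashlib.sha1(key + seq.to_bytes(8, "big") + ct).digest()[:TAG_LEN]
    return ct + tag

def open_sealed(key, seq, wire):
    ct, tag = wire[:-TAG_LEN], wire[-TAG_LEN:]
    want = hashlib.sha1(key + seq.to_bytes(8, "big") + ct).digest()[:TAG_LEN]
    if tag != want:       # the never-repeating seq also blocks replays
        raise ValueError("authentication failed")
    plain = bytes(c ^ k for c, k in zip(ct, keystream(key, seq, len(ct))))
    n = int.from_bytes(plain[:4], "big")
    return plain[4:4 + n]
```

The receiver would really decrypt the 4-byte length header first, then read exactly that much ciphertext plus the tag off the wire; the sketch takes a whole framed message for brevity.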

Of course, don't just take my word for it. In most cases, just using a good TLS implementation is a better idea than trying to handroll your own. But I had fun learning this.

Soapy (in)security

I got this email from Paul Prescod:

Don't be so sanguine about SOAP security. It could have been designed NOT to use port 80 and NOT to look like HTTP traffic. It was clearly designed to camouflage itself as Web traffic. It would have been easy to send it over sockets and another port. You say that any solution would have security issues. Well yes, everything networked has security issues, but SOAP's very generic RPC design encourages problems like the SOAP::Lite dispatch problem. Some people are writing programs to wrap any COM object in SOAP. Standard application protocols like HTTP and SMTP strongly encourage you to build an abstraction layer that maps from the worlds they talk of ("resources" and "mailboxes") to the implementation world: "objects". SOAP does not encourage the introduction of that mapping layer. In fact, standard SOAP/WSDL tools encourage you to take existing code and "generate" a SOAP/WSDL interface to it. Three examples: Visual Studio.NET, axis "java2wsdl" and SOAP::Lite.

Well said. People who know something about security are saying that the nature of Soap makes it especially fertile ground for security vulnerabilities.

I said, "I believe Dave when he says that Soap wasn't explicitly designed to go through firewalls." But my point here is that it doesn't matter. The intent of the designers is irrelevant, unless they did careful work to make Soap security as good as it could have been. I see no signs of such work.

New machine

I need to buy a fast machine, in large part to run the Ghostscript regression tests. By far the best bang for the buck appears to be a dual Athlon. A Tyan Tiger MPX motherboard, two MP 1900+ Athlons, half a gig of RAM, and a disk costs less than $1000 in parts, as listed on PriceWatch.

It's been a while since I put a machine together myself. I'm a little apprehensive, because I know it will take a nontrivial amount of time, and there's a chance it won't work right when I do get it together. I'd rather pay somebody else to do it. Unfortunately, the big name PC's don't do a good job providing this kind of machine. Is there a smaller shop that can do this? Last time around, I bought a dual Celery from The Computer Underground, which was great, but they're not in the box biz anymore.

Alternatively, is there an Advogatan who can do a good job building such a box and putting Debian on it, and who'd like to make a couple hundred bucks?

Otherwise, I'll just order the parts and hope for the best. It doesn't look that hard, and I can probably get some help on the hardware side locally.

I'm also interested in hearing recommendations and anti-recommendations from people who have gone this route. My friend at the coffee shop suggests a water-cooled case as a way of cutting down noise. Is this reasonable?

Ok, the response to my bit on Soap security was pretty negative. I didn't communicate what I was trying to say very well. Apologies.

Soap is probably not any worse than CGI or other Web things. But my point is that it's not any better, either. I guess it all depends on whether you think that's good enough. (I don't.)

Firewalls are pretty useless. The main security implication of going through port 80 is that it will eliminate the need to explicitly open a Soap-specific port in the firewall, in the off chance it wasn't left open anyway. So talking about Soap and firewalls isn't very enlightening. In any case, I believe Dave when he says that Soap wasn't explicitly designed to go through firewalls.


This afternoon, I helped Heather print out the final paper copies of her dissertation. Tomorrow, she hands it in for binding in the library and microfilming. The entire time I've known her, this has been a goal to work towards. Now it's actually done.

For historical reasons, the thesis is in troff format. I consider myself a smart guy, but fixing things was a usability nightmare. I had to rely on a mix of man pages, "helpful" web tutorials, a book on troff, and the groff s.tmac source. Even then, I ended up hand-tweaking a bunch of stuff. Ugh.

I helped format some Chinese hanzi as illustrations. I feel absolutely terrible about the fact that I used a Japanese font even though the context was Chinese, but didn't want to spend the time to redo all the eps's. Fortunately, all of the hanzi are quite similar visually to the corresponding Japanese kanji, so it's pretty unlikely that anyone will notice, much less care. rillian accuses me of perpetuating Unicode FUD, though. I guess I'll just have to live with that on my conscience.


Alan is resisting doing homework with me, and also resisting reading. I'm not sure why. He has a lot of anxiety, and these days seems to be expressing it a lot more.

Max loves cooking. Tonight he helped me make a stir-fry. He had a little difficulty stirring, and decided that he needed a "big" spatula. When I told him that it was a big spatula, he asked for "giant". I thought it was cute, anyway :)

It's fun to play the "can you say X" game with both kids. For Max, the edge is currently "peanut butter jelly belly". A week or so ago, he had trouble saying this, usually dropping one of the words. Alan has no trouble with "uvulopalatopharyngoplasty". Finding harder words is going to be challenging :)

Expensive infrastructure

I'll try my best not to make David McCusker's head explode tonight.

His IronDoc and my Athshe are both examples of "expensive infrastructure." To do a good job implementing the ideas requires a lot of complex code.

David again makes the crucial distinction between interface complexity and implementation complexity. Ideally, you have a system which is easy to use, but might do very smart, sophisticated things under the hood. In the best case, doing things well at a lower level actually makes things simpler in the higher levels.

Filesystems are the perfect example. The basic interface is pretty simple. In fact, files and directories are the basic concepts exported to the UI, even in systems designed for novice users. Yet, you have implementations such as XFS, which weighs in at 140 kloc or so.

Simpler implementations are possible, of course. In fact, the only thing that could possibly justify this kind of complexity is to be a lot better. And if you compare a modern filesystem implementation against, say, the original Minix filesystem in Linux (about 2000 lines of code), there is no contest. Performance, scalability, and robustness have all improved dramatically over the years.

Yet, that original 2 kloc implementation was important. If you had told Linus that he had to implement a highly scalable journalling filesystem before he could release the Linux kernel, the project would have been dead in the water. As the kernel became more popular, however, the payoff for more sophisticated implementations became clearer. For one, a better filesystem implementation improves matters for all of the applications (totalling many millions of lines of code, no doubt) running on Linux, without making those apps any more complex.

What is the "right level" of complexity for something like IronDoc? I'm not sure. It's not popular enough yet for the amortized payoff argument to work. This is a bit of a Catch-22; it won't become popular until it's reasonably good. The usual way around this dilemma is to build a much simpler prototype first.

I think this kind of "expensive infrastructure" is a good place for highly talented programmers to spend their time. It is a theme which runs through quite a bit of my work, including, today, Ghostscript and the continuing evolution of its graphics library.

Soap and security

Security experts such as Bruce Schneier and Alan Cox are rightfully concerned about the security vulnerabilities of new systems built on protocols such as Soap. At his ETCon talk, Schneier quipped that Soap is "firewall-friendly" in the same way that a bullet is "skull-friendly".

Dave Winer's response is pigheaded. In fact, his new "standard response" says almost nothing about security.

Dave has asserted that getting through firewalls was not an explicit goal of Soap. Could be; doesn't matter. The fact is that it will be used to get through firewalls. More precisely, firewall configurations based on Web assumptions will be used to filter traffic that breaks these assumptions. In general, you have to look pretty deeply inside a Soap message to figure out how much (if any) damage it can do.

Firewalls are only part of the problem. Expect a flowering of attacks that simply request things through Soap unanticipated by the designers of the system. With highly dynamic scripting languages, it's even harder to imagine all the ways that a clever attacker might put together the pieces made available to him; I am sure that the SOAP::Lite security hole was merely the first of its kind to come to light.

In theory, it should be possible to create secure services based on Soap. In practice, it's just not going to happen. As Schneier argued so well in his talk, increasing security is expensive. In addition to the labor and expertise needed for audits, paying attention to security slows down development, curbs features, and generally makes software less competitive in the market. The nature of the software industry guarantees that products will be developed and shipped with security holes.

I'm not saying that Soap is a bad thing. Indeed, for many applications, the advantages surely outweigh the disadvantages. Also, if you try to build those same applications with competing techniques, there will no doubt be security vulnerabilities from those, as well. What I am saying is that people should be conscious of the security problems that history and reflection tell us will occur.

I met Dave briefly at ETCon. I'm sorry we didn't have a chance to chat more.

Trees, again

A general suite of format/protocol/API for trees such as Athshe is a pretty complicated piece of code. McCusker's blob spec alone is daunting in its complexity. I'm trying to get a sense of where the complexity is coming from. Obviously, some of it is optimization, for example trying to maximize leaf occupancy, or minimize the number of mutated nodes in a copy-on-write case.

McCusker asks what I mean about network protocols and file formats. His observation that disk access and network latencies are similar is spot-on. I am not really talking about tuning the optimization. My main concern is concurrency. A network protocol must deal with locking and change notification. Transactions would be nice too.

Consider the read-only case first. Concurrency becomes a non-issue. In fact, a perfectly good protocol is to request blocks from a btree file format. If the block size divided by the bandwidth is comparable to the request latency, then you're keeping the connection reasonably full (I'm not worried about small factors).
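The back-of-envelope arithmetic behind that claim is worth making concrete. The numbers below are made up for illustration: an 8 KB block on a 1 MB/s link takes about as long to transfer as a typical round trip, so a naive request-per-block loop keeps the link a bit under half full.

```python
block = 8 * 1024           # bytes per request
bandwidth = 1024 * 1024    # bytes/second
latency = 0.010            # seconds per round trip

transfer = block / bandwidth                   # 0.0078125 s on the wire
utilization = transfer / (transfer + latency)  # fraction of time link is busy
```

With these numbers utilization comes out around 0.44; doubling the block size pushes it past 0.6, which is the sense in which the factors involved are small.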

If you do care, you can hide the latency in a number of ways too, including prefetching. If the client task is to fetch a complete subtree, it can pipeline several requests, hiding almost all latency without the risk of fetching unneeded blocks. However, this requires asynchrony in the client API, a source of complexity. Such is the tradeoff.

I'm not going to go into the details of adding network writes tonight. A simple locking discipline, with caching in the kernel, might work well in a concurrent local disk case. The client drills down from the root every time it accesses the tree, but these requests will usually hit in the cache. If you did something similar over the network, latency could kill you.

One approach is for the server to invalidate blocks (possibly) in the client's cache. The response to a write lock request guarantees that all such invalidations have been sent (we assume message ordering). The bandwidth for the invalidations is trivial compared with the blocks themselves. The main cost is the latency round-trip for the write lock. Also, contention for the write lock could be a serious problem.
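A toy simulation of the invalidation scheme, with all names made up. The server tracks which clients cache each block; before acting on a write it pushes invalidations to every other caching client, so (given ordered message delivery) their caches are consistent by the time the write-lock reply would arrive.

```python
class Client:
    def __init__(self):
        self.cache = {}

    def invalidate(self, block_id):
        # Server-initiated: drop the block if we have it cached.
        self.cache.pop(block_id, None)

class Server:
    def __init__(self):
        self.blocks = {}
        self.cached_by = {}   # block_id -> set of clients caching it

    def read(self, client, block_id):
        self.cached_by.setdefault(block_id, set()).add(client)
        client.cache[block_id] = self.blocks[block_id]
        return client.cache[block_id]

    def write(self, writer, block_id, data):
        # Flush invalidations to every other caching client before
        # the write takes effect (stands in for the write-lock grant).
        for c in self.cached_by.get(block_id, set()) - {writer}:
            c.invalidate(block_id)
        self.cached_by[block_id] = {writer}
        self.blocks[block_id] = data
        writer.cache[block_id] = data
```

The invalidation messages cost almost nothing in bandwidth; the real cost, as noted above, is the round trip for the lock itself, which this sketch elides.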

A much more sophisticated approach is to optimistically send write requests under the assumption that the relevant blocks are still valid. The server checks, and only commits if so. Invalidations are still useful as a way of improving the chances of a valid commit, but are not needed for correctness. In any case, cleaning up in case the optimism was unwarranted is a challenge.

Cache invalidation is related to change notification, which is the stronger property of actually notifying the application when a node has changed. Change notification is essential for the Model/View paradigm, as it is what triggers recomputation of the View. Doing it generally and efficiently is a hard problem, but one that has been studied in detail. Many new designs include some ad-hoc form of change notification. Polling is very simple and robust, but is not very efficient, especially if you care about notification latency. Thus, you see crude things such as the <cloud> interface in RSS (and OPML). The prospect of solving the problem well is a good motivation for Athshe.
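The core of change notification is tiny; the hard part is doing it efficiently at scale. A bare-bones version in the Model/View spirit, with views subscribing to a node so mutation triggers their callbacks instead of polling:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self._listeners = []

    def subscribe(self, callback):
        # A View registers here to be recomputed on change.
        self._listeners.append(callback)

    def set(self, value):
        self.value = value
        for callback in self._listeners:
            callback(self)   # push the change to each View
```

The hard problems start exactly where this sketch stops: batching notifications, notifying on subtree changes rather than single nodes, and doing it across a network boundary.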

I'll write about these ideas in more detail later.

20 May 2002 (updated 20 May 2002 at 20:18 UTC) »
Doctor Mommy

Heather's PhD graduation ceremony was today. She has two of three signatures, and still needs to submit an archival copy, but otherwise is basically done. I am so proud of her.

The valedictory speech by Daniel Hernandez (also known as the editor of the Daily Cal) was very good. Prof. Christ's speech was also good. Prof. Adelman's speech was not bad, but had a somewhat defensive tone regarding the importance of English vs other more ostensibly practical pursuits, including a potshot at computer science.

Heather's father and youngest sister were there, as were the kids and I. Also, Marty and Jan. Jan seemed offended that I shook her hand in parting (as opposed to a hug). Oh well.

Low tech trust vs. high tech trust

I had an interesting discussion with Roger Dingledine and Luke (hacker-name armitage) over lunch at ETCon about the importance of manual input into attack-resistant trust metrics. Advogato uses the explicit,

[shit - this part of my entry got deleted - anyone have a cached version? Add "retrieve previous version" to m/ wishlist.]

Outline follows:

Trust metrics need manual input. Advogato uses manually-entered trust links. Google uses hyperlinks, which are usually manually entered.

"Low tech trust" is a pattern of manual information flow across trust links. Example: links to recommended content, represented by "hot lists" in the early days of the Web, blogrolling today. You can find a well recommended blog by starting with one you know, and clicking blogroll links. Google's PageRank calculates something similar.

Example: personal music recommendations. Automated recommendation systems (Firefly, Amazon) don't work as well as just getting recommendations from friends.

Both examples are attack-resistant, borrowing the analysis from trust metrics. In blog rolls, trust graph is explicit. In music recommendations, trust graph is implicit in flow of recommendations (they only flow between friends).

Low tech trust is good. Automated computation is harder, but opens interesting doors. Example: manual clicking on links is good at finding high quality random site, but Google is good at finding high quality site relevant to search term (and quickly, too). Killer apps for trust metrics will probably follow the same pattern.
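The automated end of this spectrum can be sketched concretely. Below is a minimal PageRank-style power iteration over an explicit trust graph, in the spirit of following blogroll links at random; the graph and parameter values are made up for illustration.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it links to.
    Every node must have at least one outgoing link in this sketch."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            for m in outs:
                # A node splits its rank evenly among the pages it
                # links to -- the "random blogroll click".
                new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank
```

A node that many trusted nodes link to ends up with high rank, while a node nobody links to stays near the floor, which is the attack-resistance property in miniature.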


Why am I so fascinated by trees?

One of my pie-in-the-sky projects is Athshe, a somewhat nebulous data format for trees. The writeup isn't that good, apologies. In any case, I've been kicking the ideas around in my head for quite some time now, and I still think it's worthwhile.

I'll probably write about some of the more detailed pieces, but I want to get some sense of the large scale across this time.

I think formats for tree-structured data have many intertwingled aspects. It might help to consider XML and filesystems as two special cases.

Abstract tree language

This is basically the language of what's allowed. Obviously, there are trees. The children of a node may be primarily sequential (as in XML), or primarily mapped by name (as in filesystems). What's in a leaf is also important - Unicode text in XML, arbitrary binary in filesystems.

It's good for the abstract tree language to be simple. Often, if you want more complex stuff, you find you can map it into simpler types fairly easily.

Athshe has exactly four types in the language: list, map, bytes, and tag (similar to MIME Content-Type). It's an appealing set, but I'm not sure it's quite right. In particular, I get the feeling that references (perhaps analogous to the # part of URL's) may be important too.
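One way to render the four-type language concretely (this is my own sketch, not anything from the Athshe writeup; the `Tag` name and fields are assumptions):

```python
from dataclasses import dataclass
from typing import Dict, List, Union

@dataclass
class Tag:
    media_type: str     # e.g. "text/plain; charset=utf-8", MIME-style
    value: "Node"

# The four types: list, map, bytes, and tag.
Node = Union[List["Node"], Dict[str, "Node"], bytes, Tag]

doc: Node = Tag("application/x-example",
                {"title": b"hello", "items": [b"a", b"b"]})
```

More complex structures (attributes, mixed content) would be mapped down into these four, per the remark above about mapping complex stuff into simpler types.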


Serialization

Once you have a tree in hand, you want to serialize it, so you can store it in a file or move it over the net as one chunk. Also, because parsing is so well understood, it's often more useful to talk about the serialization than the abstract tree language.

For a given tree language, there may be many different serializations. In the case of filesystems, there's tar, pax, zip, and many others. XML is mostly just XML, but there's stuff like wbxml too. ASN.1 is another example of an abstract tree language with many different encodings.

Access API

If you want to actually process trees, you'll need an API to access the tree data. In the case of XML, the API of choice is DOM. In the case of filesystems, it's the traditional Unix open/read/write/close API, or a variation.

The access API has many implications for performance. For one, grain size is important. For another, it's important for nodes to be very, very cheap, but active node references (open file descriptors) can be much less cheap. DOM doesn't quite get this right.

Another important API consideration is how calls map across the process boundary. For files, most calls cross the process boundary, which generally implies that files are not good for very small grains. However, note that fread() and fwrite() manage to cache a goodly fraction of file accesses within the process.

This is a good place for complexity to set in. There are all kinds of important issues in real systems only incidentally related to the abstract data language, such as asynchrony (nonblocking I/O), integrity (see fsync), access control, and so on. Applications may also want generalized undo (versioning), copy-on-write, transactions, and more.

Finally, the access API is a reasonably good place for change notification to happen. Unix has been traditionally deficient in this area; it's a standard feature in NT. In the case of XML, DOM has it, but with flaws.


Network protocol

It used to be enough to access a tree on local disk or in memory. Now, you usually want to do it over the network too. For this, you need a protocol.

Protocols for remote access to filesystems are a very rich, fertile, and increasingly mature area. People use this stuff heavily, though not quite yet universally. Performance is important, and there are lots of clever ideas. A major performance goal is to cut down on the number of network boundary crossings.

In the XML world, the most important protocol is HTTP, which basically gives you the entire tree as one chunk. You can also run DOM over CORBA, as described in the DOM spec, but the performance would be horrible. There's no good general way to access a large tree incrementally.

Random access file format

Filesystem trees typically don't get manipulated as tarfiles. Usually, the filesystem implementation stores the tree on disk in some random access format. Only a small part needs to be in RAM, and it's pretty quick to get to an arbitrary place in the tree. Again, this is a mature area. In the case of filesystems, it's almost a solved problem.

XML doesn't really have a random access file format. There are such things as "hierarchical databases", but they are not popular.

The main idea of Athshe is that a simple abstract data language makes the problem of building a high-performance suite of tools more tractable. The major performance theme is scalability. Operations on a small local tree should be almost as fast as a pure in-memory implementation. On the other end of the scale, it should be possible to access huge remote trees efficiently. Chunk size in your process and network boundary crossings should match the chunk size of the application; in particular, traversing a complete subtree should resemble transferring it as a serialization.

I find it very difficult to communicate why this is such a good thing. But see a recent thread on wmf's discussion page, which reminded me of my Athshe ideas.


AaronSw's ETCon schedule with weblog links is useful and interesting. One day soon, most conferences will be annotated like this, probably in realish time.

17 May 2002 (updated 17 May 2002 at 23:07 UTC) »

I'll start by responding to a few threads here, namely simonstl on ASN.1, and sej on EPS/PDF.


simonstl recently brought up ASN.1 as a possible alternative to XML. It's worth looking at, if for no other reason than to be aware of history so you don't have to repeat it. The advantages are clear: it is much more compact than XML for binary-oriented data, and it is quite mature. However, the associated schema language isn't especially nice (schema language design is a very hard problem), and it carries quite a bit of historical baggage.

In particular, implementations have tended to be horrible. In the olden days, most implementations were "compilers" that autogenerated C code to marshal and unmarshal ASN.1 data in and out of C structures. This mostly-static approach loses a lot of flexibility, of course. Code quality is another issue. I am unaware of any general ASN.1 tool that approaches the quality of open source XML tools such as DV's excellent libxml (although it's possible things have improved since I last looked).

Another major piece of baggage is X.509, a truly horrid data format for public key certificates using ASN.1 as the binary syntax. Most sane people who have encountered it run away screaming. See Peter Gutmann's X.509 Style Guide for a more detailed critique.

So it's worth looking at ASN.1, but it's certainly no magic bullet.

Incidentally, ASN.1 is one of many attempts to define low-level binary data formats for tree structured data. I hear good things about bxxp/beep, but haven't studied it carefully enough to critique it. Off the top of my head, there's sexp, which is the binary data format for SDSI (the name reflects its kinship with Lisp S-expressions). Further, most RPC mechanisms define binary encodings. Don't forget IIOP (Corba), XDR (Sun RPC), and so on. I'm sure all have advantages and disadvantages.

What exactly are the goals? As I see it, the main problems with XML are bloat (for general data; XML bloat in marked up text is quite acceptable) and complexity (XML is only medium-bad here). Binary formats help a lot with the former, and may or may not with the latter. But there are a lot of other considerations, including:

  • Quality of implementations (XML wins big).

  • Associated schema languages (messy; importance varies).

  • Suitability for dynamic mutation (XML's DOM is a reasonable, if inefficient, API for local tree manipulation; most other formats don't have anything similar, and their more rigid nature would make it harder).

  • Scaling. XML itself doesn't scale particularly well to huge trees, but is usually implemented in conjunction with URI's, which do. A binary data format can either help or hinder this goal.

Update: ASCII vs Binary, by David Reed.

Transparency, PostScript, PDF

sej brings up the desire to handle vector objects with transparency. Basically, PostScript can't do it, and probably never will. PDF 1.4, on the other hand, has very rich transparency capabilities. It's a bit more complex, and caution is needed when dealing with a standard controlled by a highly proprietary organization such as Adobe, but it has compelling technical advantages.

Here are two signs that the rest of the world is moving to PDF: pdflatex appears to be far more vital than dvips, and Mac OS X uses PDF as its imaging and printing metafile.


I went to the O'Reilly Emerging Technology Conference (formerly known as the P2P conference) on Wednesday. It was great fun. I met a lot of people, some of whom I've known for years (such as Jim McCoy). Others I've known online, but met for the first time in person (Kevin Burton, Aaron Swartz, Dave Winer).

I particularly enjoyed hanging out with Jim, Roger Dingledine, Bram Cohen, Zooko, and Wes Felter. It is such a luxury to be able to interact with people on such a deep level. In my opinion, the best people to associate with are those you can teach and learn from. Thanks all of you for making my Wednesday a day of intense teaching and learning.

A lot of people have heard of my work, and are reading my thesis. I think Roger is largely to blame for this; a number of people mentioned hearing about it from him. I had great conversations with the MusicBrainz and nocatauth folks, both of which would be killer applications for attack-resistance.

I have this uneasy feeling there's something important that I left out. Also, one risk of naming names is that people might infer something from a name being left out. Not to worry - I genuinely enjoyed meeting everybody I met at ETCon.


Reading David McCusker's log fills me with sadness. The personal troubles he's going through resonate deeply for me, as I've been through similar things. I'm in a much happier space now, for which I consider myself very fortunate.

His technical work is also quite compelling. When we meet, one of the main things I want to talk about is whether to join forces on some of our pie-in-the-sky dream projects. Collaboration can be a nightmare when the other person doesn't understand what you're trying to do, or particularly share your goals. I get the feeling that working with him might be otherwise.

David talks about being busy and not having much time for things. I can certainly relate to that. He does, of course, have time to meet with me, whether he realizes it or not. Other things will of course slide, so it's worth considering their importance in the long run.

11 May 2002 (updated 11 May 2002 at 23:00 UTC) »

I've been thinking quite a bit about runtimes recently. It's an interesting topic in general, but I also have a more specific interest: I'm designing Fitz, a next-generation 2D graphics library, and I want this library to be as useful as possible. So I want to avoid gratuitous runtime incompatibility.

Bertrand Meyer's Polyglot Programming is a very insightful discussion of some of the consequences of Microsoft's Common Language Runtime (CLR). Here's a particularly interesting quote:

The language openness of .NET is a welcome relief after the years of incessant Java attempts at language hegemony. For far too long, the Sun camp has preached the One Language doctrine. The field of programming language design has a long, rich history, and there is no credible argument that the alpha and omega of programming, closing off any future evolution, was uttered in Silicon Valley in 1995. Microsoft's .NET breaks this lock.

Absolutely. What the CLR provides instead is a runtime hegemony. It seems to be a fairly high quality attempt, and is well on its way to achieving huge market share. The idea of a runtime hegemony provides almost all the advantages of a language hegemony, with the added advantage of providing a much easier migration path for components written in non-native languages to be assimilated into the runtime. Do note the echoes between Bertrand's discussion of libraries and Paul Graham's in Being Popular.

It remains to be seen, I believe, whether it's worth writing new CLR code in any language other than C#. It's a nice enough language, and you have the advantage of not having to worry about the language-to-runtime mapping Bertrand mentions towards the end of his essay. But other languages might turn out to provide compelling advantages over C# for this environment, and the CLR will, as Bertrand argues, support them.

There is now a flowering of thought, design, and implementation of runtimes on the free software side. This world is resistant to hegemony. Instead of one dominant runtime, we will see lots of different approaches. In the short term, this is a serious disadvantage compared with CLR, because it doesn't provide a good story for people (such as myself) who just want to write software. In the long term, I think, it could lead to much stronger working knowledge about how to knit together systems out of disparate components.

I'll list here a few selected projects I find interesting, with some comments focused on runtime and integration.

  • CPython. The CPython runtime is carefully layered on top of the C runtime, supporting the highly dynamic Python, while allowing full access to the wealth of C libraries. In fact, CPython + C can be seen as an "aggregate language", actually a fairly compelling platform. Other language implementations, such as Ruby, fall into this class as well.

  • Pyrex. Pyrex is essentially a hybrid language with syntax and semantics fairly similar to Python, but able to call C directly. Thus, we essentially have a 3-language aggregate, consisting of CPython + Pyrex + C.

  • Lisp. Lisp is a mature language, with mature, high quality implementations. Most Lisp implementations provide a Foreign Function Interface (FFI), which is capable of directly calling C. However, the details of the FFI tend to differ from implementation to implementation, so you can't really have a Lisp + C aggregate, but rather a family of CMUCL + C, AutoLisp + C, etc. See Design Issues for Foreign Function Interfaces for a much deeper discussion. Also note Arc, which could well become the most compelling Lisp dialect.

  • Parrot. See this interview with Dan Sugalski for much more info on why Parrot is interesting.

  • Ruby/Python. This project attempts to knit Ruby and Python together. I'm including it largely because it illustrates some technical difficulties in integrating runtimes: the last posted version integrates Python 1.5 (which uses reference counting) with Ruby (which is fully gc'ed). It's relatively straightforward to integrate N runtimes when only one is gc'ed - it is the master, and all others are slaves. It's far harder to have more than one, and Ruby/Python seems not to have grappled with the problem yet.

  • JVM (Java) + JNI. This is still a reasonable approach, but free implementations are still not very good. They will get there in time. Note also Jython, which knits the Python language into the JVM runtime.

  • Swig, an "interface language" for easy integration of C and scripting languages. One of the most fascinating aspects of Swig is that a single interface can generate wrappers for many different scripting languages.

  • Corba. Most people these days use Corba to autogenerate low quality network protocols, layered on top of IIOP. However, one of the original goals of Corba was to facilitate integration between a number of different languages. It is a mature standard, and fairly well implemented even in the free world, but is fairly painful in practice.

  • Mono and DotGNU, free clones of Microsoft's CLR.
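The CPython + C "aggregate language" idea above can be illustrated without even writing an extension module. As a modern-day sketch (ctypes postdates this entry, so this is purely illustrative), Python can reach directly into the C library, with the C signature declared on the Python side:

```python
# Sketch of the CPython + C aggregate: calling the C library's strlen
# directly from Python via ctypes. Illustrative only; ctypes is a later
# addition to the Python standard library.
import ctypes
import ctypes.util

# On Linux, CDLL(None) exposes the symbols of the running process,
# which include libc; find_library("c") works on most other platforms.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature so ctypes marshals arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"aggregate"))  # -> 9
```

The point is that the two runtimes share one process and one C calling convention, which is what makes the pair feel like a single platform.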

These projects have widely varying goals and ambitions. Some will no doubt succeed, others fail. I expect our community to learn a lot in the process. And perhaps, some day, programmers can sit down and write code without facing difficult decisions about which runtimes to be compatible with, and which to be incompatible with.

Update: Stacking them up: a Comparison of Virtual Machines by John Gough. A well-written, technical comparison of JVM and CLR.

A poem on awakening

Twelve people emerge from sleep camp
Untroubled by dreams
Completely oblivious
