Older blog entries for raph (starting at number 366)

ltnb

Wow, it's been a long time since my last post. I've been kinda in hermit-mode, spending more time with the family, and not feeling as productive as I should be with work. I think I'm emerging now. Things are going fairly well, and I have quite a few things I want to write about here.

OT

I also have a few things I want to write about which are more political and less related to free software. I finally did a bit of a brain-dump on my other blog. If you've been following the updates on my boys, then by all means don't miss the bit of improv radio theater we recorded a few days ago.

Life

It's been a busy few weeks - my mom, then my brother and his girlfriend came to visit, and somewhere in the middle of all that we had an Artifex staff meeting.

Things are quieting down now. We had a nice family evening, playing video games and doing a little papercraft. I tried the tiger, just on cheap paper and b/w laser printing, and it came out ok. Of course, Max then wanted to do one of the motorcycles, but I convinced him that we'd do one some other day.

BitTorrent and RSS

There's a thread going around the net on the benefits of combining RSS with BitTorrent. I agree there's something there, but want to make a distinction between the "easy" combination which is quite feasible right now, and one that requires a bit more rocket science (actually, Internet protocol design, but from what I know of both, the latter is more difficult to do well). In the "easy" combination, you have your whole RSS infrastructure exactly as it is now, but use BitTorrent to distribute the "attachments". People have been experimenting with RSS enclosures (for speech, music, video, and whatnot) for a while, but they're not hugely popular yet. One of the reasons is the difficulty and expense of providing download bandwidth for the large files that people will typically want to enclose. BitTorrent can solve that.

In fact, BitTorrent's strengths seem to mesh well with RSS. BT shines when lots of people want to download the same largish file at the same time - it's weaker at providing access to diverse archives with more random patterns of temporal access. Also, BT scales nicely with the number of concurrent downloaders - you get about equally good performance with a dozen or ten thousand. So if someone shoots a really cool digital video, posts it to their blog, then gets Slashdotted, it all still flows.

Integrating BT with a daemon that retrieves RSS feeds in the background has other advantages, as well. If the person opens the file a while after the download begins (which might be as soon as the RSS is updated), most or all of the latency of downloading that file can be hidden. Further, since the BT implementation is released under a near-public domain license, it should be relatively easy for people to integrate it into their blog-browsing applications.
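
To make the "easy" combination concrete, here's a rough sketch of such an integration in Python. It assumes the third-party feedparser module, and the BitTorrent client command at the top is a placeholder invocation, not a real command line - substitute whatever console client you actually have installed.

    # Minimal sketch: poll an RSS feed and hand any enclosure URLs to a
    # BitTorrent client running in the background. Assumes the third-party
    # "feedparser" module; the client command below is a placeholder.
    import subprocess
    import time

    import feedparser

    FEED_URL = "http://example.org/blog/index.rss"   # hypothetical feed
    BT_CLIENT = ["btdownloadheadless", "--url"]      # placeholder invocation
    POLL_INTERVAL = 30 * 60                          # seconds between polls

    seen = set()

    def poll_once():
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            for enclosure in entry.get("enclosures", []):
                url = enclosure.get("href")
                if not url or url in seen:
                    continue
                seen.add(url)
                # Kick off the download in the background; by the time the
                # reader opens the post, most of the latency is hidden.
                subprocess.Popen(BT_CLIENT + [url])

    if __name__ == "__main__":
        while True:
            poll_once()
            time.sleep(POLL_INTERVAL)

The point is that the RSS machinery stays exactly as it is; only the transfer of the attachment changes.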

An example of a blog that would work superbly with BT is Chris Lydon's series of interviews.

But Steve Gillmor's article isn't primarily about enclosures - it suggests that we can use BT to manage the RSS feed itself. I think there's something to the idea, but the existing protocol and implementation isn't exactly what's needed. BT is best at downloading large static files. You start with a "torrent" file, which is essentially a list of cryptographic hashes of the file's pieces, packaged up with a URL where the "tracker" can be reached. All peers uploading and downloading the file register with the tracker, and get a list of other peers to connect with. Then, peers exchange blocks of the file with each other, using very clever techniques to optimize the overall throughput. After each block is transferred, its hash is checked against what's in the torrent file, and the block is discarded if it doesn't match.

But RSS files themselves are relatively small, so it's unlikely that all that much bandwidth would be saved sending torrent files and running a tracker, as opposed to simply sending the RSS file itself. Further, the big performance problem with RSS is the tradeoff between polling the RSS feed infrequently, which means large latencies between the time the feed is updated and the time viewers get to see it, and polling it frequently, which chews up tons of bandwidth from the server. BT doesn't do much to help with this - you'd be polling the torrent file exactly as frequently as you're polling the RSS file now.

I believe, however, that the BitTorrent protocol could be adapted into one that solves the problem of change notification. The protocol is very smart, and already has much of the infrastructure that's needed. In particular, peers already do notify each other when they receive new blocks. That's not change notification because the contents of the blocks are immutable (and that's enforced by checking the hash), but it's not too hard to see how it could be adapted. At heart, you'd replace the static piece hashes of the existing torrent file format with a digital signature. The "publisher" node would then send new digitally signed blocks into the network, where they'd be propagated by the peers. There'd be essentially no network activity in between updates, and, as in the existing BitTorrent protocol, the load on the publisher node would be about the same whether it was feeding a dozen or ten thousand listeners. I'd expect latency to scale very nicely as well (probably as the log of the number of peers, and with fast propagation along the low latency "backbone" of the peer network).
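
To make the idea a little more concrete, here's a minimal Python sketch of what a mutable, publisher-signed block and its propagation rule might look like. The signature check is a stub standing in for a real public-key scheme, and none of this corresponds to the actual BitTorrent wire format - it's just the shape of the idea.

    # Sketch of a signed, mutable block for a change-notification variant of
    # BitTorrent. verify_signature() is a stub standing in for a real
    # public-key signature check; this is not BitTorrent's actual format.

    def verify_signature(publisher_key, data, signature):
        # Placeholder: a real design would verify e.g. an RSA or DSA signature
        # over `data` using the publisher's public key.
        return True

    class SignedBlock:
        def __init__(self, serial, payload, signature):
            self.serial = serial        # monotonically increasing update number
            self.payload = payload      # new feed contents (bytes)
            self.signature = signature  # publisher's signature over (serial, payload)

    class Peer:
        def __init__(self, publisher_key):
            self.publisher_key = publisher_key
            self.current = None    # latest accepted block
            self.neighbors = []    # peers we gossip with

        def receive(self, block):
            # Accept only authentic blocks that are strictly newer than what we
            # already have, then propagate. This rule replaces the immutable
            # piece-hash check of the existing protocol.
            signed_data = ("%d:" % block.serial).encode() + block.payload
            if not verify_signature(self.publisher_key, signed_data, block.signature):
                return
            if self.current is not None and block.serial <= self.current.serial:
                return
            self.current = block
            for neighbor in self.neighbors:
                neighbor.receive(block)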

I'd hate to see such a beautiful work of engineering restricted to just providing RSS feeds - ideally, it would be general enough to handle all sorts of different applications which require change notification. One such is the propagation of RPM or Debian package updates, which obviously has strong requirements for both scaling and robustness. The main thing that's keeping it from happening, I think, is the dearth of people who really understand the BitTorrent protocol.

Proof systems

I've been hacking a bit on my toy proof language. Aside from slowly bringing the verifier up to the point where it checks everything that should be checked, I'm also hacking up an implementation of the HOL inference rules constructed in ZF set theory.

It's immensely satisfying to construct proofs that are correct with high assurance, which is such a contrast from hacking code - any time you write nontrivial code, you know it's got lots of bugs in it, many of which no doubt can be exploited to create security vulnerabilities.

Fat chance

Oooh. Turns out the FAT file system is patented, and Microsoft has a commercially reasonable licensing plan for it. Of course, given the way the patent system is structured, this kind of thing is absolutely inevitable.

As I've written before, I think Paul Krugman got it wrong when he argued that Microsoft's monopoly position is not much of a threat. On the other hand, kudos to him for publicizing the rotten state of the voting machine industry. Those in the know have been aware of serious problems for some time, but the media so far has been doing a good job of keeping the unwashed masses blissfully unaware. Love Krugman or hate him, it'll be harder to keep this issue quiet now.

Color profiling

A followup to my recent post calling into question Argyll's multidimensional interpolation algorithm. I had some email back and forth with Graeme, and it's now clear that the multidimensional interpolation itself is quite good. The "wiggliness" in my profiles seems to have been caused by an overzealous attempt to optimize the per-channel curves. With those disabled, I'm now getting some really, really good profiles.

I still think that my idea of a diffusion equation based interpolation algorithm might have merit, but considering the good results I'm seeing now from Argyll, it pretty much goes on the back burner along with two or three dozen others. It might become interesting if reasonably priced handheld spectrocolorimeters with a USB port ever make it to market - typical professional profiles are made from 3000 or more patches, but it might be possible to get good results from a much smaller sample, if done right.

More proof links

Claus Dahl sent in a link to yet another proof system/language. In the meantime, I've been corresponding with Norm Megill of Metamath fame, Freek Wiedijk, and Michael Meyling of Hilbert II. There's something addictive about formal proofs. How else to explain why there are so many projects to work on them?

Turkey day

We had a good family Thanksgiving at Heather's father's house, with her youngest sister also. We had a roast turkey and all the traditional trimmings. Sometimes I'm pretty cynical, but this year all the tradition, and being close to family, felt good.

Yet another letter-quality display

Sharp has introduced its latest Zaurus in the US. It's the successor of the C7x0 series, which had a nice 216 dpi screen, but was only available in Japan. This one looks pretty good. It's got impressive wireless connectivity, a good CPU (400MHz xScale), and, unlike the first widely-available letter quality handheld, runs a decent operating system (it uses a Linux kernel).

It's pretty clear to me by now that handhelds are quickly moving to high resolution displays. The maxed-out handheld and ultralight laptop categories are blending into each other, so don't be surprised to see these displays become even more widespread in the next couple of years or so.

Semantic Web

I've long been skeptical of the Semantic Web. It's always had the feel of academic research to me. Standards organizations are best at codifying existing practice, not at doing basic research, and I haven't been all that impressed by the W3C in other domains either, so I've had low expectations.

Of course, I'm not the only one to engage in W3C-bashing. Lately, Clay Shirky wrote an essay which pretty much shreds the idea of the Semantic Web to ribbons. I've been reading some of the ensuing discussion, and much of it is quite interesting.

If you find Shirky's anti-Semantic Web arguments compelling, please read what Tim Bray has to say on the matter. Tim's been involved with the Semantic Web and the W3C for a long time (he's in their Technical Advisory Group now), so you could say he's not an impartial observer. But his essay is a skeptical, hardheaded look at what the first step toward a real semantic web is likely to look like.

I want to make one observation. The metadata Tim talks about (roughly, the kind of data you find in corporate annual reports, only fresher and potentially more detailed) does not have a serious trust issue. In general, corporations will post their own data, and they will be accountable for any errors. Nothing on the Web is really free of authenticity problems, but this kind of data is as close as it gets.

I think this property is important, and stands in stark contrast to most of the metadata you might be interested in. For example, a really nice piece of metadata would be "e-commerce store X has excellent customer service / an alarmingly high rate of complaints." Since this particular bit of metadata is likely to have a significant effect on the flow of revenue, the incentive to manipulate it is simply too great for amoral businesses to resist. People manipulate Google for much the same reasons, and with pretty good success, even though Google has sophisticated mechanisms in place for resisting such attacks.

If the Semantic Web is to become a general tool for automatically generating useful inferences, rather than merely a research toy, the questions of how to make sure the metadata is trustworthy will have to be resolved. I think that's especially true for the more AI-like goals of the project. In the mean time, I think there is much lower-hanging fruit, as Tim Bray describes; data which inherently has a lot of structure, and clear criteria for authenticity and accuracy, but which is now published on the Web in a hodge-podge of flashy web-page formats.

Proof systems

I stumbled across Yet Another Proof System. This one still looks somewhat embryonic, but possibly interesting. I'm definitely not convinced that the GFDL is the right license for mathematical proofs. It's gone through a few name changes recently, but it was interesting to see the similarity between one of their recent ones, "Hilbert II", and Ghilbert, the name of my toy proof system.

Meanwhile, my system is steadily gaining capabilities. It now accepts the entire set.mm of Metamath (translated into Lisp-like syntax). I've also got a Python program which can prove statements of the form "x + y = z", where x and y are arbitrary natural numbers (currently expressed in binary). I'm also playing with axiomatizations of Z2 and HOL, and indeed have been able to prove the first dozen or so derived inference rules in Chapter 8 of the HOL System description (Kananaskis version). If I can get two major axiom systems to connect in some meaningful way (so that proofs in one can make use of theorems in the other), that would be cooool.
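
For the curious, the addition prover is really just a recursion on the binary representation, emitting one lemma application per bit. Here's a toy sketch of the shape of it - the lemma names are invented for illustration and are not the actual Metamath or Ghilbert theorem names:

    # Toy sketch: "prove" x + y = z for binary naturals by recursing on the
    # low-order bits. The lemma names are invented for illustration; they are
    # not the real Metamath/Ghilbert theorem names.

    def prove_add(x, y, steps):
        """Return z = x + y, appending the lemma applications used."""
        if x == 0:
            steps.append("add_zero_left: 0 + %d = %d" % (y, y))
            return y
        if y == 0:
            steps.append("add_zero_right: %d + 0 = %d" % (x, x))
            return x
        # x + y = 2*(x>>1 + y>>1) + (low bits, including any carry)
        hi = prove_add(x >> 1, y >> 1, steps)
        z = 2 * hi + (x & 1) + (y & 1)
        steps.append("add_bits_%d%d: %d + %d = %d" % (x & 1, y & 1, x, y, z))
        return z

    steps = []
    print("13 + 7 =", prove_add(13, 7, steps))   # 20, plus a list of steps
    for step in steps:
        print("   ", step)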

Video jukebox

I want to get one of those little low-noise 1GHz Via C3 systems for the family this Christmas. I'm not so concerned right now about the software it'll run (in particular, it's not essential to record video off-air, nor provide splashy on-screen graphics). On the other hand, it must run MPlayer, output to NTSC video, and have 802.11b. I'd really like it to run Linux, but I understand that certain things like video-out can still be a bit painful. An outboard VGA->NTSC converter would be the easiest way to solve that problem, but I worry that it might not deliver quite the quality of a direct NTSC output, especially when it comes to smoothness of motion.

Has anyone here successfully put together such a system? I'd especially like recommendations for things that work smoothly with a Linux kernel, and/or recipes for configuring the pieces to work together. I'll happily post the winning recipes when I get the stuff going.

chema

It's hard to believe that Chema Celorio has indeed passed away. We spent quite a bit of time together at the first Linux Printing Summit. I was really impressed by his passion, and have enjoyed working with him since. He will be sorely missed.

Multidimensional interpolation

The Argyll color management system uses a very sophisticated technique based on thin-plate splines for its profile creation. The core operation is to take the measured colors for a few hundred patches and fit them to a smooth multidimensional function, typically from R^4 to R^3 (CMYK tint values to a 3-dimensional colorspace such as Lab).
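
For readers who haven't run into this kind of fitting before, here's a generic sketch in Python/numpy of a plain radial basis function fit with the thin-plate kernel. This is not Argyll's actual code (a full thin-plate spline also adds an affine polynomial term, and Argyll does a great deal more), but it shows the shape of the problem: solve a linear system so that a smooth function passes through the measured patches.

    # Generic thin-plate-spline style RBF fit (numpy). NOT Argyll's code;
    # a real TPS also includes an affine polynomial term, omitted here.
    import numpy as np

    def tps_kernel(r):
        # Thin-plate radial basis: r^2 * log(r), taking the limit 0 at r = 0.
        with np.errstate(divide="ignore", invalid="ignore"):
            k = r * r * np.log(r)
        return np.where(r > 0, k, 0.0)

    def fit_rbf(inputs, outputs):
        """inputs: (n, 4) CMYK tints; outputs: (n, 3) Lab readings."""
        r = np.linalg.norm(inputs[:, None, :] - inputs[None, :, :], axis=-1)
        # Least-squares solve for the weights of each output channel.
        return np.linalg.lstsq(tps_kernel(r), outputs, rcond=None)[0]

    def evaluate(inputs, weights, query):
        r = np.linalg.norm(query[:, None, :] - inputs[None, :, :], axis=-1)
        return tps_kernel(r) @ weights

    # Tiny made-up example: 5 random CMYK patches with made-up Lab readings.
    rng = np.random.RandomState(0)
    patches, lab = rng.rand(5, 4), rng.rand(5, 3) * 100.0
    w = fit_rbf(patches, lab)
    print(evaluate(patches, w, patches) - lab)   # ~zero: it passes through the patches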

I'm seeing some wiggliness in the profiles I'm generating. I'm not sure why; it might be measurement error on my part. But I'm starting to suspect it may be the interpolation algorithm itself. I've read a recent survey, but it doesn't provide clear guidance on which technique provides the smoothest results.

My own personal inclination would be to create the interpolation function by solving the heat-diffusion equation with a source at each patch point, so as to fit the measurements. I'm convinced that this approach would give an optimum level of smoothness, but I see problems at the boundaries, even when using the free boundary assumption.

To simplify the discussion, assume that we have to extrapolate beyond the range of colors measured. Even in the simple case of two points, one at (0, 0) and another at (1, 1), the only solution I'd consider optimally smooth is the straight line f(x)=x. Yet, a straightforward solution of the heat diffusion equation with sources at 0 and 1 is going to have a slope of zero at the asymptotes.
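
Here's a quick numerical check of that claim, using Gaussian kernels as a stand-in for the heat-diffusion sources (a sketch, not a proper PDE solve):

    # Interpolate (0, 0) and (1, 1) with two Gaussian "heat" sources and look
    # at the behavior far from the data: it flattens out, unlike f(x) = x.
    import numpy as np

    def gaussian(x, center, sigma=1.0):
        return np.exp(-0.5 * ((x - center) / sigma) ** 2)

    centers = np.array([0.0, 1.0])
    targets = np.array([0.0, 1.0])
    K = gaussian(centers[:, None], centers[None, :])
    weights = np.linalg.solve(K, targets)   # make the fit hit both points exactly

    def fit(x):
        return gaussian(x[:, None], centers[None, :]) @ weights

    print(fit(np.array([0.0, 1.0])))        # [0, 1]: interpolates the data
    print(fit(np.array([-10.0, 10.0])))     # essentially zero: slope ~0 at the asymptotes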

I'm thinking of two possibilities: first, there might be a different boundary assumption that converges to a slope of 1 at both asymptotes. It's not too hard to imagine that somebody else might have come across such a thing; you'd need it to minimize the global curvature (the magnitude of the second derivative) of the solution.

Another possibility is to solve the heat diffusion equation for the derivative of the desired solution. In the 1-D case, it's pretty easy to see how it would work. For your sources, you use a derivative-of-Gaussian shape. For the example above, the derivative would obviously converge to a constant 1 (the free boundary assumption for the derivative seems like the right thing here). However, it's not immediately obvious to me how to generalize this to higher dimensions (in fairness, rillian tried to explain the usual finite-difference multidimensional integration technique, but I didn't quite grasp it at the time; I'll ask again).

But it's possible I'm going over well-known territory, and a reader better versed in diffusion problems can just tell me the answer. I'll find out soon, and even if not I'll report again on what I find.

Politics

My new favorite blog is Talking Points Memo. That's where I found a link to this video clip in which Wesley Clark kicks the ass of some lame Fox News reporter. One of the most satisfying bits of video I've seen in a while.

8 Nov 2003 (updated 8 Nov 2003 at 08:26 UTC) »
How much resolution is enough?

Faithful readers of this blog will have already noted my fascination with high resolution displays. Desktops seem stuck at 100dpi dot matrix quality for the moment, but the portable space is heating up nicely. For one, it looks like the recent release of the Toshiba e805 is a harbinger of the next major rev of the Pocket PC platform. And if 200dpi isn't good enough, Sharp has started showing a prototype of a 300 dpi model.

Just how much resolution is enough? Are pixels going to keep getting smaller, or is there some natural resolution beyond which there isn't a noticeable improvement? I've just submitted the manuscript for my paper on this topic, which I'll present at the EI04 conference in San Jose next January. The quick answer is that, for desktops and laptops, this natural resolution is around 170 dpi, and about 240 dpi for handhelds. Devices within striking range of these targets are available now, but not yet mainstream.

Attacks on the GPL

I'm not sure people realize how seriously the GPL is under attack these days. Yeah, SCO is trash-talking it, but I think this could ultimately be good for us, as they're on the verge of becoming thoroughly discredited, even to the point where shill reporters won't be able to repeat their baloney with a straight face.

I find the official policy of the BSA more disturbing. Of course one expects the BSA to lobby for the benefit of the proprietary software industry, but don't forget that IBM and Novell (the proud new owner of SuSE) are also members in good standing. (Thanks to this Groklaw article for the heads-up)

Take a look at this paragraph in particular:

2. Ensure that government funded research is available to all. Government plays an important role in financing basic software research. When public funds are used to support software research and development, such funding should be made equally available to all developers, regardless of the development model chosen to do the research. The innovations that result from this work should be licensed in ways that take into account both the desirability of building a commons of knowledge and the desirability of applying basic research advances to commercialized products. The dissemination of results broadly in this manner has contributed to a virtuous cycle of innovation in which government funding for basic research advances the set of knowledge available to the public by spurring advances in commercial products. These commercial products in turn generate growth and tax revenue necessary to fund future rounds of public research.

Unless I'm totally misreading it, it's saying that government-funded computer science research projects should not release GPL software, preferring BSD-like licensing if it's going to be released freely. I'm not sure how close this kind of trade-association policy is to becoming the law of the land (not that such things ever happen here in America), but I consider it highly alarming. At the very least, I consider it plausible that this lobbying is having a chilling effect on American government funding of GPL'ed projects. At least other countries seem to be starting to see the light.

Is ZF set theory a hack?

It's hard to believe, but apparently some offhand comments made here have inspired a serious paper comparing various axiomatizations for the foundations of mathematics, primarily in terms of the complexity needed to represent these axioms. Check out "Is ZF a hack?" on Freek Wiedijk's notes page.

I don't think there is a One True Axiomatization for mathematics. In fact, I think the lesson we should take away from Gödel's incompleteness theorems is that, in a very real formal sense, such a thing is impossible.

But, if we can't have that, what can we use? As I've written before, I think the answer is a variant of Postel's Law: be conservative in what axioms you require, but liberal in what axioms you're compatible with.

In that case, you'll use ZFC if you want to prove things about hairy infinite objects, but probably a weaker axiomatization such as Z2 or HOL for most everyday work.

Actually, what I think is needed is a system of polymorphic axioms that express basic concepts such as finite data structures and functions, that can be linked with any of the popular primitive axiom-sets. Finite data structures are fairly straightforward (although I did manage to make an embarrassing typo in the Z2 formulation for pair - it should be (a + b) * (a + b) + b). Functions are going to be harder. Among other things, I'll need to deal with partiality in some reasonable way. But I think that good engineering solutions are possible.

Such an axiom system wouldn't be any better justified from a philosophical perspective, but I do think it would be a serious improvement in one practical sense: making formal theorems as portable as possible. I think ordinary mathematicians don't worry about this too much, because it usually goes without saying that ordinary, everyday mathematics can be ported (at least in principle) to any appropriate formal framework, so it isn't actually necessary to do it. Things are different, though, if you're trying to stitch together a world-wide repository of formal proofs.

Paul Krugman

Oh, and berend, if I had to pick an entry for the Krugman Gotcha Contest, it would probably be this piece saying that Microsoft's monopoly is not so dangerous after all. Does it surprise you that I don't agree with everything the man has ever written? And yes, I've also been disappointed at the level of sloppiness in some of his columns, if for no other reason than the ammunition it gives to his detractors.

But I find that he has a lot to say. Even when I don't agree with him, I find that he writes thoughtfully, and in such a way as to provoke thoughtful discussion. I'm not sure I see the point of the more vicious Krugman detractors (of whom Luskin seems to be staking out a leadership role). It seems to me that their central point is that we should disagree with everything that Krugman has ever written because he's been wrong a few times. That's one big giant ad hominem attack, and I have little patience for such nonsense.

26 Oct 2003 (updated 26 Oct 2003 at 00:03 UTC) »
Groklaw

I'm finding that the SCO case is a far more compelling drama than any of the law TV shows I've indulged in watching over the years. It doesn't hurt, of course, that I find the outcome important.

And what better way to watch this drama than by reading newspapers and articles in the technology press? I'm very impressed by their probing investigative reporting, and very reassured that their professional objectivity is giving me a clear, unbiased view of the story.

Not!

If you want to really follow the SCO case, the only game in town is Groklaw. Investigative journalism may well be dead in the corporate media, but it is alive and well, nay thriving, at Groklaw. Over there, they actually do the "shoe-leather journalism" that the traditional press claims as their domain but has largely forsaken. When a document is filed at the courthouse in Utah or Delaware, somebody (for example, Frank Sorenson in Utah) goes and scans it. A deep analysis, with plenty of links and citations, is posted by the next day. Any flaws or weak spots in the analysis are pointed out by thorough readers. Pamela Jones, the editor of Groklaw, picks through these reader comments, and includes the best of them in the next posting. I don't know if judges are supposed to read outside sources, but if so they'll certainly find Groklaw to be a treasure trove.

By contrast, coverage in respected newspapers is quite lame. John Markoff ran a piece in the NYT a couple of weeks ago, in which he makes numerous important (as well as trivial) errors. And the scoop he got, while real (a company formerly affiliated with SCO settled a GPL infringement lawsuit, claiming that they shouldn't pay damages because the infringement was unintentional), is thin gruel compared with the hearty meals served up almost daily at Groklaw.

And the so-called "trade press" is barely worthy of our derision. They're just as likely to cite a scumbag shill such as Rob Enderle as an authoritative source such as Groklaw, and they invariably get their facts wrong, sometimes painfully so. But I need hardly bad-mouth the trade press in this community. We get our news from good outlets such as LWN.

Groklaw shows off what a distributed, open-source news gathering and reporting community, with a strong editor, can do. There are many, many more such news stories, possibly too many for an amateur community to be able to handle. SCO is surely a special case. They have provoked the collective anger of the free software community; a unique accomplishment, with a unique result. Whether that result can generalize up to news-gathering on a wider scale, I have no idea, but if it can, that would be hugely exciting.

Epson 2200

I've been writing a high quality inkjet driver for Ghostscript, and I've been doing my testing on an Epson Stylus Photo 2200. I'm passionate about printing, but this is the first time I've really fallen in love with a device in a long time. As far as I'm concerned, this device has solved essentially all of the problems of making high quality digital prints except for the cost of the materials.

First, the photo quality is stunning, truly comparable to real photographic prints. It accomplishes this by using ridiculously tiny (4 picoliter) dots, real 1440x720 resolution, and seven colors: the usual CMYK plus light cyan, light magenta, and light black. I find the light black ink to be an especially significant improvement - on 6-color devices, whenever the black starts phasing in, you really see those big, ugly black dots against the colored-dot background. The light black ink makes a nice gray, blending beautifully with the other colors. You simply cannot see the dither patterns with your naked eye. Under a magnifier, you do see them, but they look more like film grain than the kind of halftoning usually associated with inkjets.

The second problem they've solved is the colorfastness of the inks. The 2200 is pigment-based, while the consumer devices (such as the 870) use dyes. Dye-based prints are simply not suitable if you plan on keeping them around for any period of time. Further, if you expose them to any kind of sunlight, the fading and color shifts become noticeable within days.

The specs say that the new pigment inks are a lot more stable than the dyes, but there were a few reasons to be skeptical. The previous generation printer, the 2000, also had pigment inks, but they were far from archival, exhibiting a serious orange shift. Also, they sell an "Archival Matte" paper that, as far as I can tell, has the equivalent of sunscreen on its surface, so that the paper will absorb as much of the UV as possible before it gets to the inks. Are they engaging in heroic measures to make up for weak longevity of the inks themselves?

I'm never really satisfied with such questions until I do an experiment myself. So, I printed a test pattern on a piece of plain paper, and taped it to the outside of my south-facing window, so it would be subject to the full measure of California sun and the elements. A month later, the test pattern is still there. In particular, the cyan+magenta+yellow patches show no sign of color shift. Believe me, neither commercial offset printing nor ordinary color prints would hold up nearly so well under these conditions. In fact, the paper is starting to show some signs of degrading, including a slight yellowing and a more brittle-feeling texture.

So, it's not exactly a scientific test, but I think you can make prints on the 2200 with confidence that your grandchildren will still be able to enjoy them. Highly recommended.

[Full disclosure: Epson generously donated the 2200 I used, but has not provided any additional funding. Hey, I'm a programmer and blogger, not a journalist, and I love getting toys! Let this be a notice that sending me toys can result in excellent publicity. I could use, hmmm, let's see: a high-res handheld with WiFi, a mini-ITX box for media playback at the house, and of course a digital camera.]

GE optical mini-mouse

Another piece of gear is the $15 GE HO98094 USB optical mini-mouse I bought at Target last week. It tracks well, works flawlessly under RH 9, and feels good in my hand (the full size Mitsumi on my desktop, which I've always been happy with, now feels clunky). Recommended, and kudos to the people responsible for making the multiplexing between built-in and USB mouse so seamless.

By the way, about a year ago I made the transition from focus-follows-mouse to click-to-focus, based on what appeared to me to be a consensus that the latter was becoming the default. It's ok, but I do miss being able to change focus just by moving the mouse. On the other hand, I wasn't using Alt-Tab much, and obviously that's a huge convenience with click-to-focus. So I was somewhat pleasantly surprised to find that the scroll wheel has (mostly) focus-follows-mouse semantics even when the window manager is set to click-to-focus. The one annoying exception seems to be emacs, which always scrolls the buffer with the keyboard focus.

I'm still not totally happy with the way focus works overall. By far my biggest annoyance is the way that Mozilla tends to arbitrarily grab focus when I'm in the middle of typing, based on

Correspondence

I'm having a number of fascinating correspondences in email now. For one, I decided to follow up by email to Berend De Boer's recommendation of a piece criticizing a recent op-ed by Krugman, and I'm glad of it. I think there's a good chance we'll both learn something from the discourse -- something that seems to be happening more rarely as the public debate between "liberals" and "conservatives" seems to be getting more polarized.

Also, I'm corresponding with both Freek Wiedijk and Norman Megill (of Metamath fame) about proof systems, in particular my nutty idea to design a proof format much like Metamath, but with sound definitions and good support for modular, portable proofs. Freek is urging me to take a closer look at Mizar. I don't care for their closed-source approach (including the centralization of the theorem library), and there doesn't seem to be much good documentation for beginners, but I agree; it seems to have a lot going for it. With luck, I can rip off, uhm, I mean adapt their best ideas. :)

24 Oct 2003 (updated 24 Oct 2003 at 06:20 UTC) »
Letter quality handheld

The first mass-market computer with a "letter quality" display (which I define as 192 dpi or higher) was released today. Not surprisingly, it's a handheld. Too bad it ships with such a lousy OS.

AltiVec and SSE2

One of the projects I've been working on lately is SIMD-optimized versions of my inkjet error diffusion code. It'll be released soon, so all you guys can take a look, but in the meantime here are my impressions.

AltiVec and SSE2 are very, very similar. Both operate on 128 bit words, which can be segmented into four single-precision floats or 32-bit ints, eight shorts, or 16 chars (plus a couple more, depending on the chip). In both cases, the raw computational bandwidth available is absolutely stunning - well-tuned code will run at two instructions per clock cycle. That's something like 24 GFLOPS theoretical peak for a 3 GHz P4. Wow.

Not surprisingly, it's tricky to get that peak performance. A lot depends on how parallel the problem is. A lot of 2D graphics code is embarrassingly parallel, which makes it pretty easy to get good SIMD performance. Not so error diffusion algorithms, which have some very tight serialization constraints. I get around this by running four planes at a time. This is a very reasonable approach for printers such as my Epson Photo 2200 which use 7 inks (so only 1/8 of the compute bandwidth is wasted), but does make things a bit trickier.
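
To illustrate what running four planes at a time buys, here's a scalar sketch in Python/numpy of one row of a simplified error diffusion (plain one-dimensional error propagation, not the actual kernel in the driver). The pixel loop stays serial, because each pixel needs its neighbor's error, but the four planes march in lockstep - and that lockstep per-pixel work is exactly what maps onto the SIMD lanes in the real code.

    # Simplified plane-parallel error diffusion (1-D error propagation only;
    # not the driver's actual kernel). The pixel loop is serial, but all four
    # planes are processed in lockstep per pixel - that's the SIMD-friendly part.
    import numpy as np

    def diffuse_row(row):
        """row: (width, 4) contone values in [0, 255], one column per ink plane."""
        out = np.zeros(row.shape, dtype=np.uint8)
        err = np.zeros(4)                  # running error, one entry per plane
        for x in range(row.shape[0]):      # serial: each pixel depends on err
            val = row[x] + err             # all four planes at once
            on = val >= 128                # threshold, four "lanes" in parallel
            out[x] = np.where(on, 255, 0)
            err = val - out[x]             # error carried to the next pixel
        return out

    row = np.random.RandomState(1).rand(16, 4) * 255
    print(diffuse_row(row))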

You really feel the constraint of pipeline latencies. On the P4, you can't use the result of a load until 6 cycles after the load instruction (and that's assuming it hits in L1 cache). Considering that ideally you're issuing two instructions per cycle, that means you can't use the results of a load until twelve instructions down. That's a lot, especially when you've only got 8 registers. On the flip side, the bandwidth is incredible. You can issue one load operation per cycle, for an awe-inspiring peak L1 bandwidth of 48 GB/s.

Bottom line, running 4 planes in AltiVec is about the same speed as running one plane in the old C code. SSE2 is about 1.5 times the speed. So, at least for this problem, the potential for speedup is greater on the P4 architecture than on the G4. I haven't analyzed the code closely, but I suspect that the main culprit is the conditional branches in the C version of the code, which have all been replaced by bitwise logical operations in the vectorized version. Mispredicted conditional branches are performance death on the P4.

As I've said, I found the two SIMD approaches more alike than different. Here are some differences I noted, though:

  • AltiVec feels cleaner, richer, and better designed, even though SSE2 has more instructions. I'm sure a big part of the problem is that it took several generations for SSE2 to evolve - I consider MMX seriously underpowered, and SSE (Pentium III) lacks packed integer operations, critically important for image processing.

  • AltiVec has 32 registers, SSE2 8.

  • The tools for AltiVec are more mature. I really appreciated having the C compiler schedule the instructions for me (using "intrinsics"). Intrinsics are available for SSE2 in bleeding-edge versions of GCC, but don't ship with RH9.

  • It's easier to understand AltiVec performance; it's better documented, and tools like CHUD really help (I used amber and simg4).

  • All that said, with currently available chips, the raw bandwidth of the P4 outstrips the G4/G5. While the G4's implementation of AltiVec is excellent, the clock speed is pitifully slow by today's standards. The G5 runs with faster clock, but takes a step backward in how much gets done per cycle (in many more cases, only one instruction can be issued per clock).

I think both architectures are becoming reasonably stable, but it's still easy to find computers that don't support SIMD well, especially laptops and the cool-running Via chips. My desktop is a dual Athlon, which is sadly SSE-only. I also hear that AMD64's implementation of SSE2 is lackluster. So, the performance win still depends a lot on the particulars of the system you're using. I suspect that'll improve with time, as newer models phase in.

SIMD and graphics hardware represent two significantly different approaches for 2D graphics optimization, with different strengths and weaknesses. I feel that SIMD is ultimately the biggest win for printing applications, largely because it easily accommodates sophisticated color transformations and so on. Even so, its raw bandwidth will lead to great performance on interactive display applications. I'd be happy to stake out the SIMD-optimized rendering territory, while largely leaving optimization through offloading to the video card to Xrender/Cairo. In any case, it looks like some fun times are ahead!

On Grokking BitTorrent

Tim Bray recently tried out BitTorrent and didn't see what all the fuss is about. Maybe I can help.

At heart, BitTorrent is nothing more or less than a way of distributing largish (by today's standards, anyway) files around the Internet efficiently, reliably, and without the need for the original distributor to subsidize the bandwidth costs. It is truly legality-neutral, in that there are compelling applications in each of the domains of legal, illegal, and ethical but technically illegal. It is not spyware or anything nasty like most of the commercial P2P clients. Rather, it's distributed under a very liberal license, which makes it possible to integrate deeply with other applications that have need of large files. You haven't seen any of these applications yet because BitTorrent is still young.

Combined with three other technological developments, BitTorrent makes digital video practical for ordinary people. (The other developments are, of course, spiffy video codecs such as MPEG-4, GHz computers, and Mbit/s Internet.) From a technological point of view, people can watch what they want, when they want. I think that's a game-changer.

Of course, most of the movies and TV shows that people want to watch are illegal, but that's just because Hollywood hasn't figured out a business model that takes into account the fact that such a thing is now, just this year, practical on a wide scale. Actually, I feel it's a shame that the MPAA isn't successful in their crusade to reeducate Internet movie traders; if they were, it would greatly increase interest in public domain materials, as well as artists who want to release their work under licenses such as Creative Commons. Ultimately, of course, I hope a good business model does emerge, preferably one that rewards the actual creators.

In the meantime, there's plenty of perfectly legal content to be had, including Linux distros, freely available audio such as etree, game demos, and more. I'm not sure whether Fanimatrix counts as perfectly legal or not, but in any case I'm sure they appreciate not having to cough up a dime for bandwidth every time someone clicks on the download link.

And anime, one of the earliest adopters of BitTorrent, is also very interesting. While releasing Japanese shows with English subtitles is technically a copyright violation, most in the scene abide by an ethical code which looks out for the interests of the creators - as soon as a show is licensed in the US, it is removed. Before then, I'm sure the exposure to hardcore anime fans helps shows find a market. It also helps out the US licensees, because they get to see which shows are popular. It's hard to point to anyone who's really getting hurt.

As it happens, our family watches the majority of its TV off BitTorrent now. Heather and I are both more than a bit interested in Japanese culture (she spent two years there and has written a novel based on her experiences), and the kids really love the shows. I like reading the subtitles to them; it feels like a nice middle ground between slackjawed TV watching and reading from books (of course, we do more than our share of the latter, as well). Just this evening, Max was asking when the next episode of Machine Robo Rescue was coming out, and yesterday he was absolutely thrilled to get a Hyper Fire Robo toy, imported to the anime/manga store at our local Asian market. Commerce!

There's another impact BT is poised to make. Now, all of a sudden, digital video is on the computer technology curve, not the video technology curve. HDTV has been the video technology of the future for about 20 years now. I still don't have any HDTV gear, but I've watched some movie trailers in 720p, downloaded using BitTorrent, of course. HD isn't widespread yet, but all you need is 3 GHz computers instead of 1 GHz, nice monitors, and fast DSL like they have in Korea. Does anyone seriously doubt that this will happen, and soon?

Lastly, the BT protocol itself is exceedingly robust (it uses cryptographically sound hashes to ensure the integrity of the files it transfers), and has a simple interface to the rest of the world. That's maybe not so easy to see on a PC where flakiness and needless complexity rule, but quite important when integrating BitTorrent into an appliance. I'm hoping to buy one of those mini-ITX boxes for the family this Christmas, and stick some simple Python scripts on it so the kids can select their own shows to download.

P.S. Tim, the reason why your player didn't have the right codec is essentially political, and easily fixed by, say, a full MPlayer install. Maybe you haven't seen it with your own eyes yet, but hopefully I've helped you see why your Editor-of-a-very-well-known-publication friend is right. BitTorrent is a game-changer.

Voting

The Independent has a story reinforcing my recent comments about shady dealings in electronic voting. It's nice to see this story getting more widespread attention.

I have one nutty idea about how to maybe make a difference: take out an ad in the IACREOT newsletter. Unlike the NYT full-page ad idea that's floating around in blogspace, it would only cost a few hundred bucks, and it would give a good opportunity to roundly criticize election officials for rolling over for corrupt corporate merchants of tamper-friendly voting machines, and encourage them to clearly take sides. I imagine that very few election officials actively want to preside over one of the darkest eras in democracy facing this country.

Google failure

A couple of followups on my piece on Google's vulnerability to DNS abuse. Charles Cox points out a WebmasterWorld post from a Google person this March which makes it clear that they're aware of the problem and are trying to fix it. I'm not sure exactly why they're still so vulnerable, then.

Also, Andrew Clausen sent me his draft paper analyzing the attack-resistance of Google and its PageRank algorithm. I haven't had a chance to study it in detail yet, but from a skim it looks very good. If you're at all interested in trust issues, it's a must-read.

Proofs

Thanks, chalst, for your link to the Girard book. I will indeed study it, which should help clear up my ignorance of the finer points of consequence relations and consistency proofs.

I haven't been writing much about my thoughts on proof systems, but I have been continuing to play with them quite a bit. I've been chipping away at a prototype of the ideas I wrote about here. It's all based very solidly on Metamath, but adds a sound definitional framework (so that axioms and definitions are clearly separated), and the outline of a modular system so that individual proofs can import and export "interfaces", and then you can just wire up a big mesh (acyclic network) of these proofs by making the interfaces match up. I think that my current design can do this, but I'm never certain about things until I implement them.

And perhaps there my goal is a bit overly ambitious for my current level of ability. I'm trying to cook up not just one set of axioms but three (based on Z2, HOL, and ZFC, in order of increasing strength), so that you can write individual proofs using the least powerful axiomatization necessary, but then use those proofs in any system at least as strong. Not only that, but I want the theorems proved to be polymorphic. For example, some of the theorems most interesting to me are over the structures induced by pairs and "options" (typically defined as either None or Some x). These include sequences, dictionaries, trees, graphs, and so on, in other words the usual space of "data structures".

All the axiom systems listed above can easily support these data structures, but just proving your theorems in Z2 is not fully satisfying. If you prove a general theorem about trees, there's no reason it shouldn't work for, say, trees of transfinite ordinals, even though there's no way to represent transfinite ordinals in Z2. Most people designing proof systems just pick an axiomatization. That's part of the reason why it's traditionally difficult to port proofs from one system to another - there's no guarantee you can do so when the underlying axiomatizations differ. If I'm successful, then proof authors and proof users will be subject to a form of Postel's Law: be conservative in what axioms you need, but liberal in what axioms you can work with.

Pairs and options in three axiomatizations

I've never seen these all together like this, and I consider them pretty. They're particularly simple variants of functions that pair up primitive values, and set up option types. The specification of correctness is:

pair(a, b) = pair(a', b') <-> a = a' ^ b = b'
some(a) = some(a') <-> a = a'
some(a) != none().

Z2:
pair(a, b) = (a + b) * (a + b) + b * b
some(a) = a + 1
none() = 0

HOL:
pair(a, b) = \lambda x \lambda y. x = a ^ y = b
some(a) = \lambda x. x = a
none() = \lambda x. false

ZFC:
pair(a, b) = {{a}, {a, b}}
some(a) = {a}
none() = {}
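
As a sanity check on the correctness spec, here's a small Python script that exhaustively tests the arithmetic (Z2-style) encoding over a small range of naturals. Note that it uses the corrected pairing formula (a + b) * (a + b) + b, per the erratum in the "Is ZF set theory a hack?" entry above.

    # Exhaustive sanity check of the Z2-style encoding against the correctness
    # spec, for 0 <= a, b < N. Uses the corrected pairing formula
    # (a + b) * (a + b) + b from the erratum above.

    def pair(a, b):
        return (a + b) * (a + b) + b

    def some(a):
        return a + 1

    def none():
        return 0

    N = 50
    codes = {}
    for a in range(N):
        for b in range(N):
            c = pair(a, b)
            assert c not in codes, "pair is not injective"   # pair(a,b)=pair(a',b') -> a=a', b=b'
            codes[c] = (a, b)

    assert len({some(a) for a in range(N)}) == N             # some is injective
    assert all(some(a) != none() for a in range(N))          # some(a) != none()
    print("all checks pass for 0 <= a, b <", N)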
