Older blog entries for raph (starting at number 357)

Voting

The Independent has a story reinforcing my recent comments about shady dealings in electronic voting. It's nice to see this story getting more widespread attention.

I have one nutty idea about how to maybe make a difference: take out an ad in the IACREOT newsletter. Unlike the NYT full-page ad idea that's floating around in blogspace, it would only cost a few hundred bucks, and it would give a good opportunity to roundly criticize election officials for rolling over for corrupt corporate merchants of tamper-friendly voting machines, and encourage them to clearly take sides. I imagine that very few election officials actively want to preside over one of the darkest eras in democracy facing this country.

Google failure

A couple of followups on my piece on Google's vulnerability to DNS abuse. Charles Cox points out a WebmasterWorld post from a Google person this March that clearly points out that they're aware of the problem and are trying to fix it. I'm not sure exactly why they're still so vulnerable, then.

Also, Andrew Clausen sent me his draft paper analyzing the attack-resistance of Google and its PageRank algorithm. I haven't had a chance to study it in detail yet, but from a skim it looks very good. If you're at all interested in trust issues, it's a must-read.

Proofs

Thanks, chalst, for your link to the Girard book. I will indeed study it, which should help clear up my ignorance of the finer points of consequence relations and consistency proofs.

I haven't been writing much about my thoughts on proof systems, but I have been continuing to play with them quite a bit. I've been chipping away at a prototype of the ideas I wrote about here. It's all based very solidly on Metamath, but adds a sound definitional framework (so that axioms and definitions are clearly separated), and the outline of a modular system so that individual proofs can import and export "interfaces", and then you can just wire up a big mesh (acyclic network) of these proofs by making the interfaces match up. I think that my current design can do this, but I'm never certain about things until I implement them.

And perhaps here my goal is a bit overly ambitious for my current level of ability. I'm trying to cook up not just one set of axioms but three (based on Z2, HOL, and ZFC, in order of increasing strength), so that you can write individual proofs using the least powerful axiomatization necessary, but then use those proofs in any system at least as strong. Not only that, but I want the theorems proved to be polymorphic. For example, some of the theorems most interesting to me are over the structures induced by pairs and "options" (typically defined as either None or Some x). These include sequences, dictionaries, trees, graphs, and so on, in other words the usual space of "data structures".

All the axiom systems listed above can easily support these data structures, but just proving your theorems in Z2 is not fully satisfying. If you prove a general theorem about trees, there's no reason it shouldn't work for, say, trees of transfinite ordinals, even though there's no way to represent transfinite ordinals in Z2. Most people designing proof systems just pick an axiomatization. That's part of the reason why it's traditionally difficult to port proofs from one system to another - there's no guarantee you can when the underlying axiomatizations differ. If I'm successful, then proof authors and proof users will be subject to a form of Postel's Law: be conservative in what axioms you need, but liberal in what axioms you can work with.

Pairs and options in three axiomatizations

I've never seen these all together like this, and I consider them pretty. They're particularly simple variants of functions that pair up primitive values, and set up option types. The specification of correctness is:

pair(a, b) = pair(a', b') <-> a = a' ^ b = b'
some(a) = some(a') <-> a = a'
some(a) != none().

Z2:
pair(a, b) = (a + b) * (a + b) + b
some(a) = a + 1
none() = 0

HOL:
pair(a, b) = \lambda x \lambda y. x = a ^ y = b
some(a) = \lambda x. x = a
none() = \lambda x. false

ZFC:
pair(a, b) = {{a}, {a, b}}
some(a) = {a}
none() = {}
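The correctness spec above is easy to sanity-check numerically for the Z2-style encoding. Here's a small Python sketch (my own illustration; I use the standard injective pairing (a + b) * (a + b) + b, and the function names follow the spec):

```python
# Sanity-check the pair/some/none spec using a Z2-style numeric encoding.
# The pairing (a+b)^2 + b is injective: for sum s, codes fall in the
# disjoint ranges [s^2, s^2 + s].

def pair(a, b):
    return (a + b) * (a + b) + b

def some(a):
    return a + 1

def none():
    return 0

# pair(a, b) = pair(a', b')  <->  a = a' and b = b'
seen = {}
for a in range(50):
    for b in range(50):
        code = pair(a, b)
        assert code not in seen, (seen[code], (a, b))
        seen[code] = (a, b)

# some(a) = some(a')  <->  a = a',  and  some(a) != none()
assert len({some(a) for a in range(100)}) == 100
assert all(some(a) != none() for a in range(100))
print("spec holds on tested range")
```

The same three assertions could be checked symbolically against the HOL and ZFC encodings; here the arithmetic version has the advantage of being directly executable.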
How to defeat Google's PageRank

I've been noticing a lot more evil spam results from Google searches. The easiest way to see them is to try a somewhat dodgy search query, such as "snes pokemon rom". Obviously, I find this interesting, because PageRank is supposed to be an attack-resistant trust metric, just like here at Advogato. If someone has succeeded in attacking it, it would be interesting to find out why.

As far as I was able to figure out, these spam sites use a handful of techniques to achieve high Google ranking. Some are related to PageRank, and then there's the generation of random, Markov-chain text to fake out the relevance scores. For example, the top hit, www.jrcrush.com/pc_pokemon_game/pokemon/pokemon_snes_rom.asp, shows up with this context:

... true to life heaviness. Another toughest pokemon snes rom passionately downloads the evolution for a battle. When you see an avariciousness ...

But this isn't the result you get when you actually visit the page; it seems to be custom-generated just for search engines. I've seen other pages that seem to be dynamically generated based on the query in the referer URL. Serving search engines different content than real visitors see has many problems, not the least of which is that it's the best way to get around Google's otherwise solid policy of not returning porn pages for non-adult searches. I'm no prude, but I don't think the average person searching for "pokemon snes roms" ought to be served porn ads.
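The Markov-chain trick itself is simple to sketch (my illustration, not the spammers' actual code): record which words follow which in a topical corpus, then do a random walk to generate plausible-looking nonsense.

```python
# Minimal sketch of Markov-chain text generation, the technique spam
# pages use to fake topical relevance. Corpus is illustrative only.
import random

def markov_text(words, n=15, seed=0):
    rng = random.Random(seed)
    # map each word to the list of words observed to follow it
    nexts = {}
    for w, follow in zip(words, words[1:]):
        nexts.setdefault(w, []).append(follow)
    out = [words[0]]
    for _ in range(n - 1):
        choices = nexts.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the toughest pokemon snes rom downloads the evolution "
          "for a battle the pokemon rom downloads a battle").split()
print(markov_text(corpus))
```

Output of a walk like this is locally word-pair plausible but globally gibberish, which is exactly the texture of the "avariciousness" snippet above.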

But this is just relevance. To get to the top of a search, a site has to have good relevance and a high PageRank score. How did such an obvious spam site achieve such a good score? The answer, not surprisingly, is abuse of DNS. In the case of jrcrush.com, it used to be the web site for the Columbus Crush, a junior hockey league based in Ohio. Then, the domain lapsed and got parked at Go Daddy. Within a few months, a scammer took it over. In the meantime, plenty of pages still link to it, even though the link has rotted. There's also evidence that it was listed directly in the Yahoo directory until recently.

Even though Google is showing itself to be vulnerable, the theory of attack resistance is holding up well. According to my analysis, in an attack-resistant system, there should be a near-linear relationship between the "cost" of the attack and the amount of damage done. Quantifying the cost is tricky, of course, because no abstract model will precisely capture real-world cost. The way I do it is to divide up all nodes (in the case of PageRank, a node is roughly equal to a webpage) into bad and otherwise. The latter category is further divided into "good" and "confused". A confused node is one that has a link to a bad node, for whatever reason. My quantification of attack cost is simply the number of confused nodes.
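To make the quantification concrete, here's a toy sketch of the metric (my own illustration; graph and node names are made up): given a directed link graph and a set of bad nodes, the attack cost is the number of "confused" nodes, i.e. non-bad nodes linking to at least one bad node.

```python
# Toy illustration of the attack-cost metric: a node is "confused" if it
# is not itself bad but links to at least one bad node.

def confused_nodes(links, bad):
    """links: dict mapping node -> iterable of nodes it links to."""
    return {n for n, outs in links.items()
            if n not in bad and any(t in bad for t in outs)}

links = {
    "hockey-fan-page": ["jrcrush.com"],   # rotted link to a lapsed domain
    "yahoo-directory": ["jrcrush.com"],
    "jrcrush.com":     ["spam-1", "spam-2"],
    "spam-1":          ["jrcrush.com"],
    "honest-blog":     ["hockey-fan-page"],
}
bad = {"jrcrush.com", "spam-1", "spam-2"}

print(sorted(confused_nodes(links, bad)))
# attack cost here is 2: the two pages with rotted links
```

Note that "honest-blog" isn't confused even though it is two hops from the bad set; only direct links to bad nodes count toward the cost.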

And now we see that by subverting DNS, an attacker can, in one fell swoop, exploit a potentially large number of "confused" nodes. In any situation involving security, the attacker will always go after the most vulnerable link. DNS has many great strengths (without it, URLs, and thus the Web, would have been infinitely more painful), but it sits in a position where all Internet users are forced to trust it, and it has not earned that trust.

There are any number of ways to fix the attack outlined above (and I'm sure Google is working on it), but, long term, the best way is to fix DNS itself. It's clearly broken, but it's not obvious how best to fix it. To me, it's obvious that people need to be building research prototypes for better DNS-like services. Obviously, I think that trust needs to be baked in, but others may have even better ideas.

Another letter quality display

As I've pointed out before, the real movement in high-resolution displays these days is in very small devices. Fujitsu is developing a 250 dpi 4" display, and recently showed a prototype at a Japanese trade show. Still a while before it'll be at your local Fry's, but you can get 216 dpi in Japan now.

Arnold

We, the voters of California, have apparently just lost our friggin' collective minds. I just hope Arnold doesn't do too much damage.

The Great Unraveling

I just read Paul Krugman's book, "The Great Unraveling". It's a great, great book. I'm especially impressed at the way Krugman was able to gain so much insight about the Bush administration simply by looking critically at their economic policy. Highly recommended.

Chris Lydon

Also highly recommended: Chris Lydon's series of audio interviews posted to the Net. These are a great demonstration of how the Net can deliver content completely beyond the reach of the corporate media. I'll be listening, and hope to see more things like it.

QXGA

170 dpi laptop displays haven't reached commodity status, but they're getting closer. Eurocom has a "mobile workstation" with one, priced $600 above the next lowest res, which is actually fairly affordably priced.

My guess is that we'll see more of these. It will be very hard to pass one up when it comes time to replace my laptop.

The Final Solution

You might be an anti-spam kook if you have discovered the final, ultimate solution to the spam problem (FUSSP). I scored shockingly high on the test. Of course, I realize that using a trust metric to defeat spam, while probably effective, won't be easy.

Electronic voting

Something is seriously rotten in the land of electronic voting. Consider:

  • Rebecca Mercuri was thrown out of a meeting of the IACREOT (International Association of Clerks, Recorders, Election Officials, and Treasurers) a couple of months ago for voicing criticism of the electronic voting machines being sold.
  • A group of researchers published a searing criticism of Diebold's touchscreen voting machines. These machines are a total joke in terms of security - they're based on Microsoft Access, so everything, even the audit logs, can easily be tampered with. Further, their use of crypto is spotty and contains amateurish mistakes such as reusing IVs in CBC mode. Diebold's response is lame, simply ignoring many of the points scored in the original paper.
  • The State of Maryland, on the verge of buying lots of Diebold machines, commissioned an "independent" study of the machines from SAIC (another cog in the military-industrial machine), which identified "several high-risk vulnerabilities" and concludes that the system is not compliant with Maryland's standards. The somewhat unbelievable response from the president of Diebold is: "The thorough system assessment conducted by SAIC verifies that the Diebold voting station provides an unprecedented level of election security."
  • The chief executive of Diebold is also working for the Bush campaign, and, in a recent fund raising letter, wrote that he is "committed to helping Ohio deliver its electoral votes to the president next year."
  • Even though Diebold is emerging as the owner of the most smoking gun, the other election machine vendors aren't coming across as being much better.
  • Diebold successfully takes down blackboxvoting.org by issuing a DMCA notice to their ISP, based solely on links posted at the site.
  • Leaked memos clearly indicate that Diebold routinely violates election guidelines, among other things by using versions of their software other than those certified.
  • In spite of all this, the state of Maryland is going forward with the Diebold contract.

This is a big story, I think. Even the mainstream press is starting to cover it. If there are any people reading this in Maryland who are good with smart cards, just put in a million votes for the Green candidates. That ought to wake up the powers that be, and maybe the winner can do some good in the meantime.

It's also clear that we can do some good by raising a stink. The IEEE was all set to approve incredibly weak standards for electronic voting machines, but in response to the EFF's action alert, they actually sent it back to the drawing board.

Diary backlog

It's been a while since my last entry, and there are a lot of things I want to write about. Fortunately, most of them will keep.

Much of the past two weeks was taken up with our Ghostscript staff meeting, our exhibit at the Seybold show, and the various preparation and followup.

One of the nicest things about that time was having fellow hackers stay with us, first Ralph Giles, then Tor Andersson. The kids got to spend time with both, and I think it was very enriching for them, and I hope enjoyable for our guests.

In fact, this was my first time meeting Tor in person. I'm really enjoying working with him - it's clear we have many goals in common. We had a few days of very intense discussion, covering the good and bad in TeX, the Fitz design (of course), and many meditations on the way software should be built.

Silliness

I've enjoyed drinking from a Sun Java mug since I first became its proud owner, back when Sun was making the original push. A few days ago, Sun announced the "Java Desktop", with all kinds of neat indemnification for its buyers. A few days ago, the handle broke off of my Java mug. I have never dropped it and didn't abuse it in any way, just mostly used it for nuking water for tea. Is there a causal relation between these two events?

I come up with all kinds of funny things in my dreams, but rarely such a good pun as this one: it's clear that discount merchandising needs its own XML language for describing the various flavors of clearance sales, everyday discounts, and so on. Thus, I propose "Markdown Markup language". It's a shame I didn't hook up with Tim Bray at Seybold; we were both there, and I'm sure he would have gotten a chuckle out of it.

Two notes on performant systems

It used to be that performance was a central component of just about every computer project. It had to be; computer time cost so much that wasting it was a real problem. These days, it is so cheap that we tend to be focused on how to waste it most effectively - should all that power go into interpreter cycles in a high-level language (such as Python), into building more robust abstractions for storage (such as SQL databases), or somewhere else entirely? The tradeoff is often stated as computer cycles (cheap) vs programmer cycles (expensive), but I'm not sure that's it entirely; the latter can always be outsourced to China.

So two notes on projects emphasizing performance caught my attention the last couple of days. The first is a criticism of Subversion by Tom Lord, who happens to be the author of Arch, a competing version control system (thanks to jdub for the link). The other is a post by Tim Bray writing about his recent experience writing a performance critical module in C. Both writers have lots of insight and experience, and are worth listening to.

A common element of both posts is how you have to take care to preserve performance in the currently trendy XML world. But one of the interesting things about Tim Bray's experience is that he was still able to get the performance he needed, and not too painfully either.

One of the nice things about XML is that it doesn't force you to work at a high level of abstraction. The spec itself is essentially a bridge between a low-level representation (a sequence of textual bytes in a fairly simple grammar) and a higher-level abstraction (trees of textlike thingies). By contrast, DOM is an abstraction that basically forces you to work at the higher level, with essentially a 1-1 mapping between the things in the abstraction (nodes and so on) and the objects that represent them. If Tim were forced to do his project in DOM, rather than having the choice of using low-level XML tools such as expat and his hand-rolled finite state machine, performance would have suffered unbearably.
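As a sketch of the low-level option (the general technique, not Tim Bray's actual code): Python's stdlib binding to expat lets you process a document as a stream of events, never materializing a DOM tree.

```python
# Streaming (event-driven) XML processing with expat: count elements
# without ever building a tree of node objects in memory.
import xml.parsers.expat

counts = {}

def start_element(name, attrs):
    counts[name] = counts.get(name, 0) + 1

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start_element

doc = b"<log><entry>one</entry><entry>two</entry></log>"
p.Parse(doc, True)  # True: this is the final chunk of input

print(counts)  # {'log': 1, 'entry': 2}
```

The same count under DOM would require allocating an object per node first; the event-driven style keeps memory proportional to the state you actually care about, which is the performance headroom the low level buys you.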

The use of XML gives runtime compatibility with tools designed at a higher level of abstraction. In particular, I'm sure Tim could easily describe what his C program does in terms of XML nodes, etc. This used to be central to the craft of programming: take a description of the desired task, and create an efficient implementation of that task. These days, the world is more complex, so trying to figure out what the desired task is takes most of the programmer's time, and, as for efficiency, we can let Moore's Law take care of that.

Designers of abstractions should take this lesson from XML. Being a bridge between two different levels of abstraction is a good thing. A number of my favorite things have that flavor: the Unix process abstraction, the Python/C runtime embedding. Compare the latter with the JVM, which basically forces you to do everything at the higher level of abstraction of the virtual machine.

Lego Bionicle game

The kids stumbled across the Lego Mata Nui Online Game II. It's interesting because it's one of the more intensive uses of 2D graphics I've seen in a while. However, the implementation leaves something to be desired. It's too slow to be playable on the kids' 500MHz iMac, but fine on my dual 1.6GHz Athlon box. Even so, the implementation (in Flash) is a bit flaky.

The storyline is very much reminiscent of those Infocom text-adventure games of the '80s (and of course, today's retro revival), but with prettier graphics, orders of magnitude more computing requirement, and lots more crashing. Other than that, I'm not very knowledgeable about the Myst family of games, but it's probably a direct rip-off^Wtribute.

Handhelds

I've been looking at handheld devices, and have gotten a Sony Clie SJ20 ($100, 160dpi grayscale screen, 33MHz 68000) to play with. This class of machine is just a little too puny to take seriously; int's are 16 bits, you start running into various 64k limitations when you do anything real, and there's no real libc. However, the next generation of Palms is starting to look interesting indeed - these tend to have reasonably fast ARM chips, and the pricing is moving down, likely squeezing out the 68000-based models pretty soon.

High-end Palm 5 devices such as the Sony UX50 are even more interesting, not least because they now have WiFi networking built-in. But the big story for me is the display. The resolution is going up, and they're getting nicer in other ways as well.

Maybe someday, we'll all be running a free environment such as GPE on our handhelds, but in the meantime there's a demand for apps on PalmOS.

I haven't really looked into the build system for PalmOS 5, but it seems a little daunting. In particular, the free tools seem to be lagging the official ones (Mac and Windows only). In an ideal world, building would "just work", but we're certainly not there yet.

Pattern analysis

Here's a little amateur "pattern analysis" of my own. On one side, this quote from a c't interview of SCO's Chris Sontag (translated from German original by "Apogee"):

c't: You are acting fairly belligerent on this forum. You declared war against open source, since it becomes destructive for the software industry. Does the whole movement have to die so that a few software companies can live well?

McBride: Actually, that was more aimed at the GPL, not open source as a whole. There's a lot of very valuable effort in open source. But the extreme interpretation that nobody himself owns anything that he developed himself, that can't remain like this. With this, created value gets destroyed. The GPL must change or it will not survive in the long run. I have discussed with many exponents of the open source side about this already.

On the other side, Bill Gates, in the keynote address of the Microsoft Government Leaders Conference in Seattle, April 2002:

"Then you get to the issue of who is going to be the most innovative. You know, will it be capitalism, or will it be just people working at night? There's always been a free software world. And you should understand Microsoft thinks free software is a great thing. Software written in universities should be free software. But it shouldn't be GPL software. GPL software is like this thing called Linux, where you can never commercialize anything around it; that is, it always has to be free. And, you know, that's just a philosophy. Some said philosophy wasn't around much anymore, but it's still there. And so that's where we part company."

I'm not much of a conspiracy theorist myself, so I'll leave it to the rest of our crack IBM-funded* team of rocket scientists to run the spectral recognition and see what comes out.

* Full disclosure: in point of fact, IBM is a significant customer of my employer. Irony abounds, no?

Anger

I find Steven Rainwater's suggestion of an organized counterattack against SCO appealing, but ultimately I think our interests are best served by acting honorably and being careful to tell the truth. That way, the contrast between our approach and SCO's should be most apparent, even to lawyers and judges.

Even so, I want to acknowledge the incredible feeling of anger that is rising in me. It is incredibly unfair that a bunch of opportunists can hire a bunch of unethical* lawyers, stir up a tremendous amount of press for their increasingly outrageous lies, and ultimately profit through insider stock trades, while those of us who actually create value by making software have to struggle.

Of course, I realize that the world is under no obligation to be fair, but the very institutions which claim to uphold justice and fairness, namely the courts and the press, seem compromised. I'm quite sure that when Judge Kimball finally rules on the case, he will not be kind to SCO. But this could take years. In the mean time, the press and the stock players continue to take SCO quite seriously, just on the basis of having filed a multibillion dollar lawsuit.

In the long term, this case could be very good for Linux and free software in general. It is very clearly a case of good guys vs. bad guys. The SCO execs and lawyers are, in fact, playing the latter role quite admirably. As such, I think the story has a much better chance of playing to the public than a dry philosophical debate over copyright, the public domain, and the public interest. It also has a much better chance of playing to the public than a venture capitalist-fueled hype wave, which, keep in mind, is the last taste the mass public has gotten of the Linux story.

So I think there is something we can do. Most newspapers at least pay lipservice to factual accuracy. Adopt your local paper and hold them to it, at least for articles on the SCO case (of course, no harm is done if this effort spills over into other aspects of free software). When doing so, be very professional. Don't fight FUD with counter-FUD. Concentrate on clearly verifiable inaccuracies, and provide journalist-friendly support for all your claims.

* Among the ethical lapses of Boies, Schiller, and Flexner in the SCO case, the clearest and most egregious are making of frivolous claims, and being a party to all the lying. The firm is not new to ethical controversy, including misrepresentations in the 2000 election case. Indeed, a Bar grievance committee in Tallahassee recently found probable cause that Boies had violated rules against misconduct. I sincerely hope their role in the SCO mess does not go uninvestigated.

8.11

Ralph did most of the actual work getting Ghostscript 8.11 out. It looks like a really good release. I'm pleased.

Email is pretty useless

Our email server has been completely inundated with the latest worm. As a result, email has been pretty much non-functional, and I've had to put in way too much time to nurse it along.

The entire email infrastructure is really decrepit. It's a classic tragedy-of-the-commons situation - nobody is really responsible for keeping it healthy. I'm also not at all impressed by the technical performance of the sendmail + mailman + procmail + spamassassin combo I'm using. It just melts down when you start throwing real load at it. For one, the default configuration of mailman allows qrunner (which runs once a minute from a cron job) to process no more than 300 messages per run. So, if you're getting more than 5 spams a second, prepare to watch the disk fill up. Fix that (set QRUNNER_MAX_MESSAGES higher in /var/lib/mailman/Mailman/mm_cfg.py), and watch the system reach its fork limit. This is really bad engineering.
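For reference, the override described above is just a line in the site config file (a sketch for the mailman 2.x layout mentioned in the text; the value itself is illustrative, tune it to your traffic):

```python
# /var/lib/mailman/Mailman/mm_cfg.py -- site overrides for Mailman defaults
# (defaults live in Defaults.py; never edit that file directly).

# Raise the per-run message cap so qrunner can keep up under spam load
# instead of letting the incoming queue pile up on disk.
QRUNNER_MAX_MESSAGES = 10000
```

Of course, as the text notes, raising this cap just moves the bottleneck to the next resource limit, which is the deeper engineering problem.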

Between this and the spam problem, email is looking like it has a really poor cost/performance ratio. It's no wonder people are flocking to alternatives wherever possible. A big part of the reason casper is hit so heavily is that it hosts the mailing lists where we do most of our work. I guess we're going to need to start looking carefully at migrating all that to web-boards, for all the disadvantages those entail.

I don't see any encouraging signs that suggest that email is going to get fixed any time soon (after all, this is version F of the worm, which means we've had the benefit of A through E as training exercises). That's sad, but it also means there's a tremendous opportunity for somebody to create something better. It will come as no surprise to my readers that I have some ideas how to do this. I feel tempted to write a more detailed essay, but don't really have the time right now.

Honor at low levels

Thanks to Jesper Louis Anderson for pointing me to Judy. It is damn interesting. There are some good lessons in the code. I've been discussing it a bit with Tor, because there are some parallels with the way I want to lay out Fitz trees in memory.

No honor at all

I'm astonished at the gall of the thieving liars that call themselves SCO. I'm even more astonished at the fact that their stock price seems to be doing fairly well, in spite of all the insider trading and stock manipulation that's going on. The fact that these parasites are making serious money rather than going to jail gives me despair about the way society is run these days.

LWN has had incredible coverage, but in general the mainstream media shows itself to be utterly useless. By giving these clowns a forum to make their outrageous claims, they're just giving them credibility in the eyes of the sheep readership (which, no doubt, overlaps considerably with the poor schnooks who are taking long positions in the stock).

Clear and informative error messages

For any software which is to be considered mission-critical, one of the top priorities must be to produce clear and informative error messages when something goes wrong. It might be helpful to consider this the primary goal, with production of the correct result a pleasant side effect of the special case of no errors.

Of course, as maintainer of Ghostscript, I bear a great deal of responsibility for violating this principle myself. So, at the risk of the pot calling the kettle black, I humbly present criticisms of some existing free software projects, and suggestions about how to improve matters.

My most recent bad experience with cryptic error messages was a simple permissions problem in Subversion. A log file had 644 permissions, where 664 was needed. However, the actual error report looked something like this:

svn: Couldn't find a repository
svn: No repository found in 'svn+ssh://svn.ghostscript.com/home/subversion/fitz'

Trying to track the problem down, I ran svn locally on the machine hosting the repository, resulting in this error:

svn: Couldn't open a repository.
svn: Unable to open an ra_local session to URL
svn: Unable to open repository 'file:///home/subversion/fitz'
svn: Berkeley DB error
svn: Berkeley DB error while opening environment for filesystem /home/subversion/fitz/db:
DB_RUNRECOVERY: Fatal error, run database recovery

I ended up diagnosing the problem using strace, which did print out a clear and informative error message, once I found it:

open("/home/subversion/fitz/db/log.0000000002", O_RDWR|O_CREAT|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)

How did Subversion succeed in transforming such a clear error condition into such a confusing (and alarming) report? I think it's likely that the main culprit is the use of abstractions which do not support the error reporting goal as stated above. If you have a tower of abstractions, then it is essential for each abstraction in the tower to support it.
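One way for each layer of a tower of abstractions to support the error-reporting goal (a general sketch, not Subversion's actual architecture) is to wrap lower-level errors while preserving them, so the root cause - here a permissions problem - survives all the way to the top-level report:

```python
# Each abstraction layer adds its own context but chains the original
# exception, so the low-level diagnosis (the EACCES equivalent) is
# never lost on the way up the tower.

class RepositoryError(Exception):
    pass

def open_log_file(path):
    # lowest layer: may raise PermissionError / FileNotFoundError (OSError)
    return open(path, "r+")

def open_repository(path):
    # middle layer: translates into its own abstraction, but keeps the cause
    try:
        return open_log_file(path + "/db/log.0000000002")
    except OSError as e:
        raise RepositoryError(f"can't open repository at {path}") from e

try:
    open_repository("/no/such/repo")
except RepositoryError as e:
    # the chained __cause__ keeps the root cause available to the UI layer
    print(e, "| caused by:", e.__cause__)
```

Had svn's layers chained errors like this, the top-level message could have said "couldn't open repository: permission denied on db/log.0000000002" instead of "No repository found".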

Of course, aside from Ghostscript, one of the absolute worst offenders for error reporting is the auto* toolchain. A simple problem such as a missing library often results in cryptic error messages, usually the fallout from incorrect macro substitution.

Macro substitution, while an appealingly powerful abstraction, is absolutely hopeless when it comes to mission-critical error recovery. In a typical scenario, you'd use macro expansion to rewrite your goal (create a suitable configuration file for building a program) into subgoals (such as testing whether certain compiler flags work), and so on. However, when something goes unexpectedly wrong in one of the subgoal steps, it's all but impossible to trace that back up to the original goal - the only thing that remains is the expansion. Using procedures to break a goal into subgoals works in much the same way as macro expansion, but doesn't suffer from this inherent problem - when something goes wrong, the caller can look at the error returned by the callee. Of course, it's still the responsibility of the coder to actually check the return code and do something appropriate with it - a responsibility all too often ignored.

chalst: is this link evidence enough of vendor participation?
