Older blog entries for raph (starting at number 319)

Booger

Joey DeVilla's favorite amalgam of "Google" and "Blogger" is "Booger". Yeah!

There is one thing, I think, that Google and a blog hosting engine inside the same trust boundary can do that would be somewhat difficult otherwise: making backlinks work really well, based on both linguistic analysis for relevance and, of course, PageRank. It's possible to use a trust metric to automate links between blogs in a more distributed context, but so far nobody's been smart enough and motivated enough to actually try to build it. It's probably a lot more likely to happen in a centralized, infrastructure-rich setting.

Off-topic

Here are two interesting and related interviews. The first describes how child psychiatry has a history of being science-resistant, but advances in the field are overcoming this. The second describes some of the cutting-edge research being done at the NIMH, and the palpable enthusiasm of Dr. Manji in being part of the community. I've long been fascinated by the signalling and computation that goes on in networks of cells, and found my interest rekindled by this interview.

This essay by Kanan Makiya is interesting. He's far from a disinterested party, of course, but I certainly agree that these kinds of discussions should be taking place out in the open. Real democracy is messy and unpredictable. Perhaps it's even true that, as the State Department under Colin Powell and the CIA believe, "it could have a destabilising influence on the region."

Google buys Blogger

Breaking news: Google buys Pyra. This only kinda makes sense to me. As I've written before, Google and blogs have a synergistic relationship, but to pick a single platform in this time of experimentation and ferment seems odd.

cactus: I see your point about the word "blog" being the latest hype fad, but it is a useful word. To my mind, it simply means posting your writings online in a reverse-chronological format, with plenty of Web links for further reading. Advogato diaries qualify.

Of course, what people do with the format varies widely. Some write about their cat's hairballs. Others use it as a tool for intellectual inquiry, and perhaps to participate in the distributed leadership of the free software community. In fact, by numbers alone there are many more of the former.

One of these days I'm going to have to write up my thoughts on "humble elitism". (When I mentioned this phrase to Heather, she asked me if it was like "compassionate conservatism", so I think I'll have to pick a different name.) I strive to make my blog one of the elite, but only by pouring thought and good writing into it. Usually, "elitism" refers to some kind of caste system. And of course, on any given diary entry I'm liable not to live up to my goals. In any case, I certainly enjoy trying.

A country code for VoIP

I saw on boingboing a few days ago that there's now a country code reserved for Internet phones. I had a little difficulty understanding what that meant, but think I've got it now. Essentially, this is a way to bring VoIP phones into the standard phone number namespace. It is in this sense a dual of ENUM, which is a gateway to access the phone number namespace through DNS.

From what I can see, this new country code is being run by FWD (Free World Dialup). You register for a free account using a simple, straightforward Web form, and you get a number. Mine is 18408. Then, you point your SIP phone's config to the FWD server, register, and then when people query the FWD server for your number, they find your phone. For example, to reach my phone, dial sip:18408@fwd.pulver.com (try it; I'll try to keep a phone app running).
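
To make that concrete, here's a rough sketch of the shape of the SIP REGISTER request a phone sends to the FWD proxy so that lookups for your number find you. The local address, the port, and the header values are illustrative only, and a real registration also needs digest authentication and periodic refresh; this is just to show how little is involved.

```python
# Minimal sketch (not a full SIP client): the REGISTER request a SIP phone
# sends to the FWD registrar. Addresses, ports, and tag/branch values below
# are illustrative; a real client would also answer the digest auth challenge
# and re-register before the registration expires.
import socket, uuid

fwd_proxy = ("fwd.pulver.com", 5060)   # FWD's SIP server; 5060 is the usual port (assumed)
my_number = "18408"
my_host, my_port = "192.0.2.10", 5060  # example local address for the Contact header

register = (
    f"REGISTER sip:fwd.pulver.com SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {my_host}:{my_port};branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
    f"Max-Forwards: 70\r\n"
    f"From: <sip:{my_number}@fwd.pulver.com>;tag={uuid.uuid4().hex[:8]}\r\n"
    f"To: <sip:{my_number}@fwd.pulver.com>\r\n"
    f"Call-ID: {uuid.uuid4().hex}\r\n"
    f"CSeq: 1 REGISTER\r\n"
    f"Contact: <sip:{my_number}@{my_host}:{my_port}>\r\n"
    f"Expires: 3600\r\n"
    f"Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(register.encode("ascii"), fwd_proxy)
print(sock.recv(4096).decode("ascii", "replace"))  # expect a 401/407 challenge or 200 OK
```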

This number also now exists in the POTS number namespace, but your phone company won't route to it yet because they're evil. As soon as public pressure overcomes their evilness, you'll be able to reach my VoIP phone simply by dialing 011 87810 18408 from your US phone.

I think this is a huge step. To the extent that people can call your phone, it makes it practical to go VoIP only. Of course, you can do that today with a service such as Vonage, but that costs $40/month, and this is free.

From what I can gather, FWD is going to make a little money off "long distance charges" from phone companies that peer with them. I like this idea - it would seem to provide a revenue stream that would actively promote the use of VoIP phones. You can bet that the telcos are going to drag their heels as much as possible.

I think there's one more piece to this, which is phone cards. Even if your scumbag incumbent telco won't peer with FWD, you'll probably be able to shell out $20 for a phone card with a company that will. There's no reason why these companies can't provide service for a penny or two a minute. The standard phonecard service, after all, is basically two telco-to-Internet gateways joined back-to-back. Here, the caller just buys one of them. So this basically solves the problem of being callable by my Mom. All she has to dial is 1-800-call-crd, then a (typically 10 digit) PIN, then 011 87810 18408. Only 34 digits, but at least she'll be able to reach me.

Phones

PCs running phone software don't make good phones. A dedicated piece of hardware is better. Even aside from the general flakiness of sound cards and drivers, phones are a lot better at ringing and being always on.

You can buy a phone like a Cisco ATA 186 for about $150 from eBay, but I think the price is going to come down to $50 or so once D-Link or Linksys gets into the game. Basically, it's the same gear as a phone with a built-in digital answering machine (AT&T brand $30 at Best Buy), plus a 10/100 Ethernet interface.

In any case, I tried out kphone and gnome-meeting again, and was successfully able to complete calls with both. I had trouble compiling GM 0.96, so no doubt I'll give it another go when I upgrade to RH 8.1.

I'm less impressed with kphone. I could receive audio ok but not transmit, so I took a look at the code to see what was wrong. The actual audio interface code is buggy and unsophisticated. One of the most basic problems is their use of usleep(0) to wait for the next timer tick for basic scheduling. This, of course, is hideously dependent on the details of the underlying kernel scheduler, and in any case, gives you very poor temporal resolution on PC hardware. Even worse, if 5 ticks go by without an audio packet being ready, the code reads a packet and drops it on the floor, for what reason I don't know.

There's also a problem with the kernel audio drivers I'm using (alsa 0.90beta12 with Linux 2.4.19). Even though kphone does a SNDCTL_DSP_SETFRAGMENT ioctl to set the fragment size to 128 bytes, the actual value, as returned from SNDCTL_DSP_GETISPACE, is 2048 bytes, which is way too big (it's 125ms). Combined with the packet-dropping logic above, the net result was no audio.
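
For the curious, here's the back-of-the-envelope arithmetic, plus how the SNDCTL_DSP_SETFRAGMENT argument is encoded (low 16 bits hold the log2 of the fragment size in bytes, high 16 bits the maximum fragment count). I'm assuming the usual telephony format of 8000 Hz, 16-bit mono; this is just a sketch of the numbers, not kphone's actual code.

```python
# Why a 2048-byte fragment is too big for telephony, plus the encoding of the
# SNDCTL_DSP_SETFRAGMENT ioctl argument. Sample format assumed: 8000 Hz,
# 16-bit mono (16000 bytes/sec).

def setfragment_arg(frag_bytes, max_frags=0x7fff):
    # OSS encodes the request as (max fragments << 16) | log2(fragment size).
    log2_size = frag_bytes.bit_length() - 1
    assert frag_bytes == 1 << log2_size, "fragment size must be a power of two"
    return (max_frags << 16) | log2_size

def fragment_latency_ms(frag_bytes, rate=8000, bytes_per_sample=2):
    return 1000.0 * frag_bytes / (rate * bytes_per_sample)

print(hex(setfragment_arg(128)))    # what kphone asks for
print(fragment_latency_ms(128))     # 8 ms per fragment: fine for voice
print(fragment_latency_ms(2048))    # 128 ms: what the driver actually delivered
```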

People should not have to worry about this. I think it makes sense to wait until you can get a Chinese-made phone with Speex in it at commodity prices. Hopefully, this will happen soon.

A good homepage

I came across Miles Nordin's web site last night after following a link from the Java discussion on our front page. I found myself immediately absorbed. Miles writes well, is well read, and has a fabulously critical attitude. Many of the other pages, especially those having to do with wireless networking, are worth reading.

Word

cinamod: I basically agree with everything you say. If Abiword or OO are good enough, and the code is clean enough to be split out as a batch renderer, then there's no need for a separate codebase.

I've had a look at the Word document format, and it's not quite so bad as I was expecting. The documentation is atrocious, but the format itself seems fairly reasonable. Of course, I'm sure that if I got into the details I'd find lots of corner cases and bad hacks.

The main thing not to like is the obvious lack of design for forwards and backwards compatibility. No doubt, this is economically motivated - gotta keep that upgrade treadmill going.

On the plus side, the format was clearly designed with an implementation in mind (as opposed to the W3C process, for which implementation is a distasteful afterthought). It's fairly easy to see how to process a Word file very efficiently, in both CPU time and memory usage. For example, resolving stylesheets is a straightforward linear chain, as opposed to all the nutjob nondeterministic stack automaton stuff in CSS, or the mini-Lisp in DSSSL/XSLT.
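
To illustrate what I mean by a linear chain, here's a toy sketch. The style names and properties below are made up; real Word styles chain through the stylesheet's based-on links, with direct formatting layered on top. But the control flow really is just a walk along that chain with later settings overriding earlier ones, rather than any kind of selector matching.

```python
# A minimal sketch of resolving formatting through a linear "based on" chain.
# The styles and properties here are invented for illustration.

STYLES = {
    "Normal":   {"based_on": None,     "props": {"font": "Times", "size": 10}},
    "Heading1": {"based_on": "Normal", "props": {"size": 14, "bold": True}},
}

def resolve(style_name, direct_props):
    # Collect the based-on chain, then apply it from the root style down,
    # finishing with any direct formatting on the run itself.
    chain = []
    name = style_name
    while name is not None:
        chain.append(STYLES[name])
        name = STYLES[name]["based_on"]
    props = {}
    for style in reversed(chain):
        props.update(style["props"])
    props.update(direct_props)
    return props

print(resolve("Heading1", {"italic": True}))
# {'font': 'Times', 'size': 14, 'bold': True, 'italic': True}
```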

I'm tempted to write here about Word's plex/fkp/character run architecture as opposed to the more generic tree approach we tend to see these days, but probably most people would be bored with that level of detail. The top-level point is that algorithms for manipulating Word's structures on-disk are straightforward, while manipulating trees efficiently on-disk seems to require a lot of cleverness. Of course, with RAM so cheap these days, it's reasonable to ask whether memory-constrained processing of files is important at all.
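
For a small taste anyway, here's a simplified sketch of a plex lookup. A plex is an array of n+1 character positions followed by n fixed-size data elements, with element i covering the half-open range from cp[i] to cp[i+1]; finding the element for a given position is just a binary search, which is part of why the on-disk algorithms stay cheap. The payload here is plain strings, purely for illustration.

```python
# A simplified plex: n+1 character positions (CPs) plus n data elements.
# Element i applies to the range [cps[i], cps[i+1]). The real structures carry
# things like file offsets and property exceptions; strings stand in here.
import bisect

class Plex:
    def __init__(self, cps, data):
        assert len(cps) == len(data) + 1
        self.cps, self.data = cps, data

    def lookup(self, cp):
        i = bisect.bisect_right(self.cps, cp) - 1
        if not (0 <= i < len(self.data)):
            raise IndexError("cp outside the plex")
        return self.data[i], (self.cps[i], self.cps[i + 1])

runs = Plex([0, 40, 95, 200], ["piece A", "piece B", "piece C"])
print(runs.lookup(50))   # ('piece B', (40, 95))
```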

The Word format is too tightly bound to a specific implementation, and it certainly shows in what documentation Microsoft has produced. They often seem to confuse the interface, which in this case is the on-disk representation of the document, with the implementation details.

In any case, I'm glad I've learned more about the file format. Its popularity means we have to deal with it somehow. Further, as PDF continues to become more document-like and less of a pure graphical representation, it's important to understand the influence that the Word design has on its evolution.

I've commented before on the need for a good, open, editable document format. The lack of adequate documentation and Microsoft's proprietary lock on change control make the Word format unappealing. I've certainly thought about designing my own document format, but it's not easy to make a word-processing format much better than Word, or a graphics-oriented format much better than PDF. So that's probably a windmill I'd be happiest not tilting at.

UTF-8

forrest: Yes, Unicode/UTF-8 should be the default charset and encoding for Advogato (technically, UTF-8 is not a charset). So basically I need to convert all the Latin-1 stuff in the database over, then switch over the reported charset.
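
The conversion step itself is mechanical, since every Latin-1 byte sequence decodes cleanly; something like this little sketch, applied to each stored entry (the real migration would walk the database rather than a couple of literals):

```python
# A minimal sketch of the Latin-1 -> UTF-8 conversion step. Latin-1 decoding
# never fails, so the only real work is walking the stored entries and then
# switching the charset that gets reported to browsers.

def latin1_to_utf8(raw: bytes) -> bytes:
    return raw.decode("latin-1").encode("utf-8")

print(latin1_to_utf8(b"na\xefve caf\xe9"))  # b'na\xc3\xafve caf\xc3\xa9'
```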

By the way, Google search results are now multilingual, with Russian, Japanese, and other alphabets all mixed in on the same page. They seem to have gone back and forth on this; even recently I got the "results can not be displayed in this character set" message. In any case, I think it's cool.

More blog navel-gazing

I expected to get a lot of response from my last entry, but I didn't. I tried to argue it fairly and carefully, to best reach an audience of journalists (to whom I expect it would seem quite controversial), but to my usual readers I expect I'm preaching to the choir. Perhaps if I had blamed the media for their role in the unbelievable ignorance of Americans, it would have stirred up more response.

In any case, there are some downsides to blogging, or at least areas where it needs work. For one, not everybody is capable of critical reading (from the survey above, the fraction would seem to be less than 17%). The mainstream media is actually pretty good at distilling a story down to a form where busy people can absorb it quickly. Blogs aren't, at least not yet. I'm hopeful that technical innovations can help with that, not least the use of trust metrics to ferret out the good material, but of course people have to be writing that first.

Needless to say, I didn't get any e-mails from newspaper editors on why they're not covering Bruce Kushnick's book. The most parsimonious answer is that their souls are simply 0wnz0red, and they're no more capable of breaking a story on the corruption of the telecoms industry than Hilary Rosen is capable of writing an editorial on how music trading is sometimes good for artists.

But (and this is a big but), the blog world is not (yet) doing a good job covering this story either. Bruce's publication of the book is a good start, but there's a lot of followup work to be done: fact-checking, correcting mistakes, unearthing more evidence, summarizing the highlights, getting the word out. This is exactly the sort of thing that journalists claim to be good at, because they have the resources to do it. Perhaps bloggers don't, although my personal belief is that it's the kind of work that lends itself to the sort of distributed effort that's so effective in creating free software.

Word to PDF

Thanks for the great feedback from cinamod and cuenca on this topic. I'll try to respond.

I'm not sure whether it's better to try to create a batch renderer project now, or whether it's best to work on existing tools, such as the renderer in AbiWord. If the latter is really, really good, then it can be used as a batch renderer, and we're done.

Even if everybody's needs are being well met by the existing projects, in retrospect I think there would have been significant advantages to have done the batch renderer first. As cuenca points out, it's a considerably simpler problem because you don't have to design your data structures for incremental update and so on. So I think there would have been high-quality rendering much earlier than we're seeing now with the GUI-focussed work.

In any case, for people contemplating new projects to work with complex file formats, I think the advice is sound: do the batch processor first, then adapt it to work interactively. ImageMagick and netpbm happened before Gimp, and for a good reason.

A regression suite is absolutely an important part of such a project. Even better, it should be possible to use such a suite with other programs that process Word files, such as GUI editors.

I'm not enthusiastic about transcoding into another existing document format such as TeX. This path makes it easy to get basic formatting right, but probably much harder to get it really good. The idea of TeX code to match Word's formatting quirks makes me cringe.

AlanShutko: It's not surprising that Word's layout has changed over the years. In fact, it's fair to say that interchange and compatibility in the Word universe only work well if everybody is using the same version. I'm sure that the fact that this fuels upgrading is merely a coincidence :)

Even so, that doesn't make the problem impossible, just harder. I believe that Word documents self-identify the version of Word that generated them. Therefore, in theory at least, it should be possible to create a pixel-perfect rendering of the document as seen by the writer. SMB has many implementation variances, but that doesn't stop Samba from being viable. The goal, as usual, should be "least surprise".

Of course the rendering depends on the font metrics. Is there anyone who believes it shouldn't? Depending on the printer is a misfeature, of course, but as I've argued above, a "best effort" is likely to make people happy.

Fear

Patriot II draft

How blogs are better than mainstream media

The Washington Post recently ran a "journalist checks out blogs, doesn't quite see what the big deal is all about" story. A lot of these have been appearing lately; this one seems entirely typical. I've been thinking about the differences between blogs and mainstream journalism for some time, so the appearance of this story in a highly regarded newspaper, and Dave Winer's criticism of the piece, inspired me to speak to the issue.

The main theme of the piece, as usual, is that blogs are an interesting phenomenon, but cannot take the place of professional news organizations. The typical blogger, according to the piece, posts mostly opinion and links to news stories from the mainstream media, as opposed to real reporting.

This is basically true, I think, but rather misses the point. Blogs are incredibly diverse, with a wide distribution of things like writing quality, fairness, objectivity, originality, passion, and so on. The average blog, frankly, scores pretty low on all these scales. But I tend not to read too many of those. I seek out the exceptional blogs, the ones that inform and delight me, move me with their words, bring stories to life, make me think. Even though these are a small fraction of all blogs written, I'm able to find quite a few of them.

By contrast, mainstream media tends to be uniformly mediocre. The actual difference in quality between a top newspaper and an average one is small. In fact, thanks to wire services, they tend to run most of the same content. In computers and software, aside from a handful of good technology reporters such as John Markoff and Dan Gillmor, there is almost no good reporting.

I don't read blogs the same way I read the paper, and that difference, I think, captures how blogs can be so much better. My "toolkit" consists of three essential elements: blogs, critical reading, and Google. In combination, they give me a reading diet that is, on most topics, vastly superior to what I'd get from reading the mainstream media.

To me, critical reading has two major pieces. First, trying to separate the wheat from the chaff. This is especially hard on the Internet (and in blogspace), because there is a lot of chaff out there. Second, reading multiple different views on a story, and trying to determine the truth from the bits for which there is consensus, and also to understand the real disagreements at the root of the differing views.

Synthesizing an understanding from multiple views is important because I don't have to depend on the objectivity of the writer. It is, of course, very important to judge how credible the writer is, what their biases are, and to what extent they let that distort the story. This isn't easy, and it's possible to get wrong. Even so, I find that I get a much clearer picture after reading two or more passionate stories from different sides, than one objective, dispassionate story.

Objectivity, while a noble goal, comes at a price. In the context of the media business, it usually guarantees that the reporter doesn't know much about the subject at hand. This, in turn, is most clearly detectable as a high rate of technical errors (Dave Winer points out some in the article under discussion), and the more worrisome, but less quantifiable, lack of insight. Ignorance about a topic also makes journalists more vulnerable to manipulation, at worst simply parroting press releases and "backgrounders". More typical is the way the mainstream papers accepted the SF police's estimate of 55,000 at the Jan 18 marches, even though the actual number was about triple that.

And on a lot of topics, learning about an issue leads one almost inevitably to take a side. Take the management of DNS for example. Of the people who know what's going on, those who do not have an interest in the status quo are almost all outraged. It's hard to find somebody who's both knowledgeable and objective, so insisting on the latter serves the story poorly.

The importance of Google

If you are going to read critically and sift through various viewpoints, the key questions are "what are other people saying about this?" and "how do these viewpoints differ?". As mentioned above, it's not trivial to find good alternate sources. But it's a skill one can learn, and there are tools that can help. Among the most important of these is Google. On any given topic, construct a nice search query, pass it to Google, and in a hundred milliseconds or so you'll be presented with lots of good links to choose from. Not all will be relevant or well-written, but you only have to sift through a dozen or two before coming up with a winner, and you can tell quite a bit from the results page, without even having to visit the link.

I'll give a couple of examples of how Google can provide more information than mainstream press articles. First was Nicholas Kristof's July 2, 2002 editorial in the New York Times entitled "Anthrax? The F.B.I. yawns". This editorial referred to a mysterious "Mr. Z". For whatever reasons (fear of libel suits, perhaps?), the New York Times did not see fit to print the name of this individual, so people reading the papers were in the dark for a while. Googling, of course, revealed the name readily.

A more mundane example is this Washington Post story on a terrorist scare. A kid on the plane asked a flight attendant to pass to the pilot a napkin inscribed "Fast, Neat, Average". This is an Air Force Academy catchphrase, the standard response to an "O-96" dining hall feedback form, and, according to USAF folklore, also used in Vietnam as a challenge-response. Cadets and graduates sometimes write the phrase on a napkin in the hope that the pilot is USAF-trained. In this case, the kid turned out to be a neighbor of an AFA cadet, without much good sense about how cryptic notes might get interpreted. In any case, the Washington Post article carefully omits the response ("Friendly, Good, Good"), even though it's easy enough to find through Google, among other places in a speech by President George H. W. Bush.

Other newspapers do worse. The Washington Times manages to misquote the three-word phrase. The AP wire story, as published by the Seattle Post-Intelligencer, CNN, and other papers, doesn't even bother with the O-96 dining hall part of the story.

Why isn't there any coverage of Teletruth?

The systemic corruption of the telecom industry is one of the most important stories since Enron, but you won't find it in your newspaper. Why not? Bruce Kushnick has written a book detailing the crimes of the telecom corporations, but nobody in the mainstream press is following up on it. A Google News search returns exactly one result for either "Teletruth" or "Bruce Kushnick", and that appears to be a press release.

I'm having real trouble understanding why this story isn't getting any coverage in the mainstream press. I'm having even more trouble reconciling this fact with the ideals of objectivity as professed by journalists. If you're a working editor or journalist, especially in the tech sector, did your publication make a decision not to run the story? Why? I'd really appreciate more insight. Even if Bruce Kushnick is a complete nut (which I doubt), it seems as relevant as the Raelians.

I consider it quite plausible, even likely, that this is a huge story, but for whatever reason, readers of newspapers are completely in the dark about it. Critical readers of blogs, though, aren't.

Conclusion

Just about every time I've had the opportunity to check a mainstream news story, I've found it riddled with errors. Every time I've been interviewed by the mainstream press, the resulting story significantly distorted what I was trying to say, and from what I read in other blogs, this experience is very common. Even on the off chance that a tech story is factually correct, I don't learn much from it. There are important voices missing from mainstream media, especially those critical of big companies, or, more importantly, those providing a credible alternative.

By contrast, the best of the blogs I read are passionate, well-informed, topical, and insightful. They don't make a lot of stupid factual errors, but those that slip through are corrected quickly. The best blogs are partial but fair, and up-front about their biases, as opposed to pretending to be totally objective.

It's not just technology reporting, either, although that's obviously close to the hearts of the early blogging community. I think the flaws of mainstream reporting, and the potential of blogging to address those flaws, generalize to many other areas of interest. I'm sure, though, that newspapers are a very good information source for sports gamblers, and will continue to be important in that role for quite some time.

It takes more time and effort to get one's information through critical reading of blogs than it does to read the paper, but the results are well worth it. To paraphrase Thomas Jefferson, were it left to me to decide whether we should have newspapers without blogs, or blogs without newspapers, I should not hesitate a moment to prefer the latter.

Enfilade theory

I had a truly great irc discussion with tor today. Among other things, he brought up Enfilade theory, which is one of the things the Xanadu folk came up with. It's easy to be put off by the presentation, including the strange terminology and all the boasting about how secret it all used to be, but I think at heart it's a useful tool for talking and reasoning about computations over trees.

As far as I can tell, the "wid" and "disp" properties are very similar to synthesized and inherited attributes in attribute grammars. The canonical example of "wid" is the number of bytes contained in the node and its descendants. Given two subtrees and their byte counts, it's easy to compute the byte count of their combination - just add them. Byte counting is simple, but there are lots more properties that can be computed in this manner. "Widiativeness" describes the generalization. One key insight is that all widiative properties are also associative.

Whatever properties are chosen, an enfilade is basically just a tree with the enfilade properties stored in the interior nodes. The tree engine has to update these values whenever the tree gets updated, but the cost is only proportional to the depth of the tree - O(log n) when the tree is balanced.
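
As a concrete sketch, here's a tiny tree keeping a byte-count "wid" at every node; changing a leaf only touches its ancestors, which is where the O(log n) comes from when the tree is balanced. (This is just an illustration of the idea, not Xanadu's or Fitz's actual data structures.)

```python
# A minimal "wid" enfilade: byte counts cached at interior nodes, with
# updates propagated up the spine in O(depth).

class Node:
    def __init__(self, children=None, nbytes=0):
        self.parent = None
        self.children = children or []
        for c in self.children:
            c.parent = self
        # wid: the leaf's own byte count, or the sum over children.
        self.wid = nbytes if not self.children else sum(c.wid for c in self.children)

    def set_bytes(self, nbytes):
        delta = nbytes - self.wid
        node = self
        while node is not None:      # only the ancestors need fixing up
            node.wid += delta
            node = node.parent

a, b, c = Node(nbytes=5), Node(nbytes=7), Node(nbytes=3)
root = Node([Node([a, b]), c])
print(root.wid)      # 15
a.set_bytes(10)
print(root.wid)      # 20
```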

While "wid" properties propagate up, "disp" properties propagate down. In Fitz, the graphics state is essentially a disp property. You have interior nodes that set graphics state parameters, and affect all child nodes. When there are multiple nodes on a path from the root to a leaf node that set the same parameter, the deepest takes precedence. Similarly, clip paths intersect (which happens to be commutative), and coordinate transform matrices multiply.

Bounding boxes are a "wid" property, with the combining rule being rectangle union. In the PDF 1.4 data model, RGBA renderings of groups (whether nested as a Form XObject or q/Q pair) are almost wid, the exception being non-Normal blending modes. For these, the RGBA rendering of an object may depend on the colors that were rendered beneath it. This has implications for caching (I won't go into details here, because they're probably only interesting to one or two other people).

Another "wid" property is the tree representation for Athshe I've been thinking about for a while. You want to efficiently represent a tree with arbitrary structure, meaning it may or may not be balanced, and nodes may have ridiculously small or large fanout. The solution is to store a serialization of the tree in a B-tree, using open and close parentheses in addition to leaf symbols. There is not necessarily any relationship between the tree structure and the b-tree structure; the latter is always balanced with a tight distribution of fanouts. I wrote about storing some summary information about parenthesis balancing in B-tree nodes.

It makes sense to describe this summary info in enfilade terminology. My proposal is an enfilade that stores a "wid" property at each B-tree node. The "wid" property is a tuple of two integers. For open paren, it's (1, 0), for close paren it's (-1, -1), and for leaf nodes, it's (0, 0). The combining operator is defined as combine((a, b), (c, d)) = (a + c, min(b, a + d)). When this info is stored at each B-tree node, something magic happens: it's possible to do all the standard node navigation methods (parent, first child, prev and next sibling) using only O(log n) accesses to B-tree nodes (proof of this claim is left as an exercise to the reader).
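
Here's that combining rule as runnable code, folding over a flat token sequence rather than a real B-tree (a B-tree node would just cache the combine() of its children). The second component tracks the minimum running balance, which is the summary that makes the navigation methods possible.

```python
# The parenthesis-balance enfilade from the paragraph above, as a sketch.
# Each token maps to a (delta, min_balance) pair; combine() is the associative
# operator that a B-tree node would store for its whole subtree.

def token_wid(tok):
    return {"(": (1, 0), ")": (-1, -1)}.get(tok, (0, 0))   # leaves are (0, 0)

def combine(x, y):
    (a, b), (c, d) = x, y
    return (a + c, min(b, a + d))

def fold(tokens):
    acc = (0, 0)
    for t in tokens:
        acc = combine(acc, token_wid(t))
    return acc

print(fold("(x(y)z)"))   # (0, 0): balanced, never dips below zero
print(fold("(x))"))      # (-1, -1): unbalanced
```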

I hope I've presented a wide range of issues for which enfilade theory is a useful reasoning tool. I think it's also useful to criticize bad designs. My favorite example is (of course) from the W3C.

The application of CSS stylesheets to XML trees is almost a disp property. However, the adjacent sibling selector in CSS2 breaks disp-ness. When I was working on implementing CSS, I looked through a lot of stylesheets, including searching all the XUL glorp in Mozilla at the time. I found a couple of uses of the adjacent sibling selector, but all could be factored out. In other words, the actual stylesheet had the disp property even though the adjacent sibling primitive lacked it. Again, this hurts performance and makes implementations far more complex than they should be. Perhaps if the designers of CSS2 had been aware of enfilade theory, they could have avoided this mistake.

(At dinner tonight, Heather asked me what I was thinking about. I responded, "enfilade theory". She asked me what that meant, and I promised her that after reading tonight's blog, she'd know more about it than she ever wanted to. Hopefully, I've made good on my promise :)

librsvg and batch formatting

It warms my heart to see librsvg thriving under the care of Dominic Lachowicz. I'm more of a starter than a finisher, so for me it's a sign of success when a project of mine makes the transition to a new maintainer.

There's one lesson from Gill and librsvg that I think generalizes to other projects. Looking back, I really wish I had done librsvg (a batch renderer) first, then tackled Gill (a full interactive editor). It would have been possible to attain useful results and a good quality codebase much more quickly if I had done that.

In particular, I think there's a huge role for a batch Microsoft Word formatting engine. There are a bunch of free word processors that can read Word documents, but given their broad focus and the difficulty of interactive GUI apps in general, I think it'll be a long time before any of them can open tough Word documents and format them perfectly. But for a batch formatter (with, say, PDF output), I think it's a very reachable goal. Aside from its obvious practical utility and value as a code resource, it would also be a great tool to test GUI word processors against.

Crowd estimation

Since I wrote about estimating the Jan 18 peace march in San Francisco, I was interested to see Lisa Rein's link to a Forum program on the topic. Among the guests was Farouk El-Baz, who is the author of the paper on the Million Man March estimates. In any case, the consensus now seems to be 150,000 to 200,000 people. The original SF police estimate, widely reported, of 55,000 was a gross underestimate. I'm gratified to learn that my own numbers are closer to the mark.

Gems

I ordered an assortment of gems from Pehnec Gems a couple of weeks ago, and have been enjoying them with Alan and Max since they came. Small very sparkly objects give rise to a surprisingly strong emotional response. The cubic zirconia are quite beautiful, very similar to diamonds but a bit more colorful, and I'm also fond of the sapphire and emerald (real but lab-grown).

Batteries

Slashdot linked my ThinkPad battery page a few days ago. Still no response from IBM, which is really not the behavior one expects from a reputable company.

A grab bag of responses today, plus some actual technical content.

Venezuela

Thanks to guerby for his response to my entry on Venezuela. The situation there is clearly very complex, and I absolutely agree that trying to become informed by reading one blog is unwise. It's cool that he's delved into the issues without being partisan to one side or the other; that seems to be rare.

Indeed, one of the great strengths of the blog format is the ability to respond; to provide even more context for readers. The newspaper I get sucks, but there's precious little I can do about it.

Trolls

I agree with djm that a "don't feed the trolls" policy is probably the wisest. I usually read the recentlog with a threshold of 3, so I don't tend to even notice troll posts unless someone else points to them.

Crowd estimation

jfleck's story about estimating the Rose Parade crowd sounds quite a bit like this one. One clarification: my Market Street numbers are based on a per-person area of four feet by four feet, or 16 square feet. Based on this, I am quite confident that my figure of 80,000 is a lower bound on the total who participated in the march. Now that I have some idea how the police come up with their crowd estimates (basically, guess), I see no reason to prefer their numbers over any others.

The war

I'd like to thank Zaitcev for his thoughtful criticism of my opposition to the war. He's made me think, which is a good thing no matter what you believe.

I agree with his point that Islamic fundamentalism is a powerful and destructive force, especially when used as the justification for dictatorships. Coexistence between the Moslem sector of the world and the West is clearly going to be one of the biggest challenges in the coming decades.

But I think that even if one agrees with the fundamental premise that military action is the best way to respond, there is plenty to criticize in the US administration's war plans. For one, from everything that I see, Iraq isn't the most virulent source of Islamic fundamentalism, not even close (it doesn't even show up on this map). Second, a pre-emptive attack based on no hard evidence, or possibly on lack of compliance with UN resolutions, is virtually guaranteed to fuel hatred of the US in the Muslim world, not to mention strong anti-American feelings throughout the world. No need to speculate; it's starting now just based on the rhetoric of war, not (yet) thousands of people dying.

Finally, even if the warmongers are dead right, starting a war with potential consequences of this magnitude demands very careful debate and deliberation, at least in a free society.

The Onion's take would be funny if it weren't so darned close to the truth.

Free(?) fonts

Bitstream has announced that they're donating freely redistributable fonts. It's always nice to see more font choices. Now seems to be a good time to remind people, though, that the URW fonts that ship with every Linux distribution were purchased by Artifex and released under the GPL. I'm not sure whether the license Bitstream chooses will be Debian-free or not, especially given that they haven't published the text of it yet.

Distributed, web-based trust metric

Inspired by discussions with Kevin Burton, I've been thinking a bit recently about using Web infrastructure to make a distributed trust metric. I think it's reasonable, if suboptimal.

The basic idea is that each node in the trust graph maps to a URL, typically a blog. From that base URL, there are two important files that get served: a list of outedges (most simply, a text file containing URLs, one per line), and a list of ratings. In the blog context specifically, this could be as simple as a 1..10 number and URL for each rating. But, while the outedges are other nodes in the trust graph, the ratings could be anything: blogs, individual postings, books, songs, movies, whatever.

So, assuming that these files are up on the Web, you make the trust metric client crawl the Web, starting from the node belonging to the client (every client is its own seed), then evaluate the trust metric on the subgraph retrieved by crawling. The results are merely an approximation to the trust metric results that you'd get by evaluating the global graph, but the more widely you crawl, the better the approximation gets.

A simple heuristic for deciding which nodes to crawl is to breadth-first search up to a certain distance, say 4 hops away (from what I can tell, this is exactly what Friendster uses to evaluate one's "personal network"). But a little thought reveals a better approach: choose sites that stand to contribute the largest confidence value to the trust metric. This biases towards nodes that might be more distant, but are highly trusted, and against successors of nodes with huge outdegree. Both seem like good moves.
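
Here's a rough sketch of what such a client could look like, under a pile of assumptions: the outedges file lives at a well-known name next to the base URL, confidence decays by a fixed factor per hop and is split across a node's outedges, and the frontier is expanded best-first by that contribution. None of this is a protocol, and the ratings file would be fetched the same way; it's just enough to show that the crawler itself is small.

```python
# A sketch of a confidence-driven crawl over blog trust graphs. The file name,
# the decay factor, and the scoring rule are all illustrative assumptions,
# not a fixed protocol.
import heapq
import urllib.request

OUTEDGES_FILE = "outedges.txt"     # hypothetical well-known name
DECAY = 0.5                        # confidence lost per hop (assumed)

def fetch_outedges(base_url):
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + "/" + OUTEDGES_FILE,
                                    timeout=10) as f:
            return [line.strip() for line in f.read().decode().splitlines()
                    if line.strip()]
    except OSError:
        return []

def crawl(seed, max_nodes=100):
    confidence = {seed: 1.0}
    frontier = [(-1.0, seed)]              # max-heap via negated confidence
    graph = {}
    while frontier and len(graph) < max_nodes:
        neg_conf, url = heapq.heappop(frontier)
        if url in graph:
            continue
        graph[url] = fetch_outedges(url)
        # Split this node's contribution across its outedges: distant but
        # highly trusted nodes win; successors of huge-outdegree nodes lose.
        share = -neg_conf * DECAY / max(len(graph[url]), 1)
        for succ in graph[url]:
            if share > confidence.get(succ, 0.0):
                confidence[succ] = share
                heapq.heappush(frontier, (-share, succ))
    return graph, confidence
```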

What are the applications? The most obvious is to evaluate "which blogs are worth reading", which is similar to the diary rankings on Advogato. Perhaps more interesting is using it to authenticate backlinks. If B comments on something in A's blog, then A's blog engine goes out and crawls B's blog, determines that B's rating is good enough, then posts the link to B's entry on the publicly rendered version of the page.

I see some significant drawbacks to using the Web infrastructure, largely because of the lack of any really good mechanism for doing change notification. There's a particularly unpleasant tradeoff between bandwidth usage, latency, and size of the horizon. However, with enough spare bandwidth sloshing around it might just work. Further, in the Web Way of doing things, you solve those problems at the lower level. It may well be that this is a more reliable path to a good design than trying to design a monolithic P2P protocol that gets all the engineering details right.

I'm not planning on implementing this crawling/tmetric client any time soon, but would be happy to help out someone else. For one, it would be very, very easy to make Advogato export the relevant graph data.

Mainstream media sucks

You'd think that Saturday's peace marches would be considered fairly important, given that we are on the brink of war. But the Contra Costa Times (the Sunday paper we get) saw fit to run a one-column story below the fold, with the headline "Thousands in S.F. rally against war", as opposed to a 5-column above the fold report on a ball game, headlined "Ground Zero".

Most media reports of the march accept the police estimate of 55,000, while the organizers estimate 200,000. In this age of helicopters and instant high-powered image processing, the magnitude of this discrepancy is surprising.

It turns out that crowd estimation is quite a tricky business. There's an excellent paper on the efforts to count the number of marchers at the Million Man March in Washington in October 1995. I ran a few sets of numbers myself, and came up with estimates ranging from a minimum of 80,000 to a maximum of 180,000 (the number of people who can fit in Civic Center Plaza). The estimate I have the most confidence in is the number who filled Market Street from Embarcadero to Civic Center. Given four feet square per person (which I consider very conservative), 2 miles by 120 feet, that works out to about 80,000.
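
For the record, the arithmetic behind that figure:

```python
# Market Street estimate: 2 miles of street filled, 120 feet wide, one person
# per 4 ft x 4 ft square (the conservative density assumed above).
length_ft = 2 * 5280
width_ft = 120
sq_ft_per_person = 4 * 4
print(length_ft * width_ft // sq_ft_per_person)   # 79200, i.e. about 80,000
```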

The energy at the march was great. Heather and I are happy that we could be a part of it. What really struck me was the diversity of the crowd. While there were a few lefty radicals in evidence, mostly we saw ordinary folks of all ages, largely white but with plenty of other races represented. The message is clear. Lots of Americans don't want us to go to war. One of my favorite signs read simply, "Why?"

I'm sure the story about the Raiders was really good.

Mainstream media sucks more

From Dave Winer, I came across a blog on the situation in Venezuela. It's beautifully written, and conveys events with clarity, compelling narrative, and passion. These qualities are not highly respected in mainstream journalism, particularly the latter. The presumption is that journalists should be impartial and objective, so actually having an opinion on something is frowned upon, much less expressing it publicly. Thus, the New York Times, for which Francisco was a beat reporter, brought up "conflict of interest" concerns, which Francisco felt could best be resolved by resigning.

Meanwhile, wire service reports are, as usual, bland, if factually correct. There's no way anyone without a personal interest in Venezuela is going to learn what's really going on from reading about it in the papers. They'd probably get a pretty good idea, though, that the instability there is responsible for an increase in gas prices here in the US.

So here's a clearcut case, I think, where blogs are simply better than mainstream journalism. I wish Francisco and all the people of Venezuela the best through the present crisis.

New laptop

Heather needs a laptop - the Thinkpad 600 is starting to go flaky. We settled on a 14" iBook. A big reason is the battery. Heather often goes to cafes to write, and often finds it hard to plug in, so reasonable battery life is very important. Even going by the specs, the iBook is about twice as good as IBM's R series, and given IBM's past performance, the latter could well be a lot worse after a few months of real use.

It's not a perfect machine. To me, the biggest drawback is the low resolution of the screen (91 dpi). The CPU is slow by modern standards (800 MHz), but not having to burn battery to fuel a multi-GHz chip is a Good Thing. Running Safari, it ought to be plenty zippy, certainly a dramatic upgrade over RH 8 Mozilla on the TP600.

With Jaguar pre-loaded, it'll take considerably less time to get up and running than a PC-class notebook, and this way we won't be forced to buy a Windows license.
