Older blog entries for raph (starting at number 328)

21 Mar 2003 (updated 21 Mar 2003 at 04:35 UTC) »
Arrested for peaceful protest

I attended the noon rally at UC Berkeley, followed by a peaceful protest in Sproul Hall, and am proud to report that I have a misdemeanor arrest on my record as a result. It was a disturbing show of force by the University and police.

None of the protestors were in the least bit violent, and we weren't even keeping people from their business in Sproul Hall (we were in the front foyer; people could still access all offices through the north and south entrances). Several speakers talked of treating the police officers with respect (one young woman has many police officers in her family), and were roundly applauded. The nonviolent legacy of Martin Luther King was invoked repeatedly.

Even so, for whatever reason the University saw fit to arrest 117 of us anyway. The protesters were peaceful, but the police pinched people, pried them away, and carried them out. I would find this quite understandable if we were violent (as were some of the protests in San Francisco), or if we were causing disruption any more serious than the lines during the busy season for financial aid, but for police to forcibly haul people away from a public building at a university feels very wrong. I wonder if the decision to make mass arrests might have been partially motivated by a desire to create more publicity for the anti-war cause. It seems more likely to be malice or stupidity, though, given the arrogant and patronizing attitude of Vice Chancellor Horace Mitchell, who briefly addressed us before the police began their action.

The SF Chronicle has a brief story on the event, and UC Berkeley has a press release. I haven't yet managed to find myself in any photos or video footage, but if you see me, let me know :). There are some other arrest photos I found. I managed to record some audio from a message left on Heather's cell phone.

For coverage of the protests in San Francisco, see Kevin Burton's report and photos, and Lisa Rein's incredibly moving blog coverage.

As I posted yesterday, I am taking two days off just to learn what I can about the war, meditate, and resist in whatever way I can. Tomorrow's entry will return to the normal format of mind-numbingly detailed writing about technical things I find interesting. However, I'll probably start up a personal blog so I can write about religion, politics, and other issues without having to worry about whether they're on-topic here.

Kids

We haven't really talked to the kids about the war yet. Alan wrote a blog entry last night. He typed the first three words himself :) Even more exciting, he got an inexpensive used digital camera for his birthday. I'm hoping that he'll want to post a few of his pictures as well.

Max loves playing with digital images on the computer even more than Alan does - he's had a great time exploring the zooming and contrast controls in iPhoto. Oh, and he can peel carrots by himself now too.

19 Mar 2003 (updated 20 Mar 2003 at 01:39 UTC) »
War

I have given notice that I will be taking two personal days off from work as soon as war begins, and I'll handle my free software community contacts the same way. War looks imminent, if indeed it hasn't started already. I'm prepared to march in San Francisco, and just need to coordinate with my family.

I'm on #war-news on irc.freenode.org, sifting through the reports.

UTC 2204: War has begun.

UTC 2218: There will be a potluck, meeting for worship, and vigil at the Berkeley Friends Church this evening, around 7:00 to 7:30. I expect to be there with the family. Update UTC 0001: This is a "called meeting" of the Quakers, not a general-purpose peace vigil. Non-Quakers are welcome to attend, but do keep in mind that it is a silent meeting. There is a potluck at 6:30, and the meeting begins at 7:30.

UTC 2229: An anti-war protester has died in a fall off the Golden Gate Bridge. NPR confirms air-strikes against surface-to-surface artillery inside the Iraqi border.

UTC 2243: Firefight begins.

UTC 2251: Iraqi helicopters fire on Kurdish village.

It doesn't appear to be full-scale conflict yet, but it's certainly imminent.

UTC 2339: It appears that I may have jumped the gun a bit in asserting that the war has begun in earnest. The bombing in the no-fly zone is actually not that new - similar bombing has been going on for a while.

Indeed, there may actually be hope that the invasion can be stopped. A story in the Independent Online quotes an anonymous State Department official as being willing to make a deal for Saddam's exile. However unlikely this may be, it would save untold lives, be considered a victory for America and Bush, and probably save the lives of Saddam Hussein and his family.

In any case, I continue to pray for this to play out with minimal loss of life.

UTC 2353: Kevin Burton has set up a "chump" to archive the URLs we're posting on #war-news.

UTC 0137: I've been listening to NPR and reading stories on the Net for about 3.5 hours, and am tiring of it. Whether in Berkeley or at home, I look forward to spending the evening with my family.

Modular factoring

I haven't gotten much response to my last post on factoring codebases into smaller modules, but I have thought about the problem a bit more.

The first item is the desire to have a common runtime discipline that spans more than one module. The main problem here is that the C language doesn't nail down particular runtime decisions. In our case, the main things we need are memory allocation (one thing standard C malloc/free doesn't give us is a way to constrain total memory usage - for example, so that an allocation near capacity causes items to be evicted from caches), exceptions, and extremely basic types such as string (C strings are inadequate because in many cases we do need embedded 0's), list, dict (hash table), and atom. A great many languages supply these as part of the language itself or as part of the standard runtime, but C is not among them.
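
To make the memory-constraint point concrete, here is a minimal sketch of what such a capped allocation context might look like in C. The names and the evict callback are illustrative only, not our actual interface.

    #include <stdlib.h>

    /* Hypothetical allocation context with a hard cap on total usage.
     * When an allocation would exceed the cap, the client's evict
     * callback is asked to free something (e.g. flush a cache entry)
     * and the check is retried. */
    typedef struct alloc_ctx alloc_ctx;
    struct alloc_ctx {
        size_t used;                /* bytes currently allocated */
        size_t cap;                 /* hard limit on total usage */
        int (*evict)(alloc_ctx *ctx, size_t needed);  /* nonzero if it freed anything */
    };

    void ctx_free(alloc_ctx *ctx, void *p, size_t size)
    {
        free(p);
        ctx->used -= size;          /* callers track sizes, for simplicity */
    }

    void *ctx_malloc(alloc_ctx *ctx, size_t size)
    {
        void *p;

        while (ctx->used + size > ctx->cap) {
            if (ctx->evict == NULL || !ctx->evict(ctx, size))
                return NULL;        /* nothing left to evict; raise an exception here */
        }
        p = malloc(size);
        if (p != NULL)
            ctx->used += size;
        return p;
    }

In the real thing the failure path would raise an exception rather than return NULL, and caches would register themselves with the context, but even a toy version like this shows why plain malloc/free isn't enough.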

Of course, the fact that C doesn't nail down the runtime is in many ways a feature, not a bug. Different applications have different runtime needs, and a single general-purpose runtime is not always optimum. Perhaps more importantly, these richer runtimes tend not to be compatible with each other. In the case of Fitz, we need to bind it into Ghostscript (written in C with its own wonky runtime), Python test frameworks, and hopefully other applications written in a variety of high level languages.

In any case, with regard to the specific question of whether we're going to split our repository and tarballs into lots of small modules or one big one, for now I've decided to go for the latter, but with clear separation of the modules into subdirectories. That should preserve our ability to easily split into separate modules should that turn out to be a clear win, while making life easier for the hapless person just trying to compile the tarballs and get the software to run.
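
For concreteness, the kind of source layout I have in mind looks roughly like the following; the directory names are placeholders, not a final decision:

    fitz/                 one repository, one tarball
        magma/            shared runtime: allocation, exceptions, dynamic objects
        stream/           filters for PS/PDF (compressed images and the like)
        tree/             the Fitz display tree and rendering engine
        pdf/              syntactic PDF parsing, PDF-to-Fitz tree building
        ...

Each subdirectory keeps its own namespace and could in principle be split out into a standalone module later, but there is only one build and one tarball for the person compiling it.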

BitTorrent

BitTorrent absolutely rocks. Basically, it gives you a way to host large downloads (either large files, large numbers of downloaders, or both) without chewing up too much of your own bandwidth. Rather, downloaders share blocks with each other.

I think this has killer potential for Linux distributions and the like. I know most servers hosting RH 8.0 were seriously overloaded when that came out. I think that BitTorrent could be a far more effective way to get the ISOs out than standard HTTP/FTP. Of course, Red Hat probably won't push this, because much of their business model is founded on the relative slowness and inconvenience of public FTP servers as opposed to their pay service.

There's also a lot of potential in wiring BitTorrent into package downloaders such as apt and rpm. Some of the folks on #p2p-hackers think that WebRaid might be a better solution, but in any case I can see BT working well.

We're going to try distributing Ghostscript using BitTorrent, and see how it works.

These are legitimate (and very important) uses of BitTorrent, but it's most likely that the next big jump in popularity will come from other quarters. BitTorrent excels at serving up gigabyte-scale files with good performance and robustness, with minimal bandwidth and infrastructure needs. It shouldn't take a genius to figure out what this will get used for. The exciting (and scary) part is that Bram might soon find himself with millions of users.

Fitz

Tor and I have been working a bit more on the Fitz design. We have to nail down a number of open decisions about coding style and the like. We don't want to cut a lot of code, and then find we need to redo big chunks. Our current draft is on the Wiki under CodingStyle. Many of the decisions are somewhat arbitrary, but even so aesthetics are important. We want to be able to look at the code with pride.

One of the most difficult issues is how to split up the code into modules. What level of granularity is best? The Ghostscript codebase tends to be fairly monolithic, and a large part of our goal is to refactor it into independent modules.

Clearly, a full featured PDF app will use all the modules, but it's also easy to imagine more lightweight clients that just use some of them. Perhaps an instructive example is a PDF validity checking tool (known as "preflight" in the graphic arts world). Such a tool has to parse PDF files and process the PDF streams, but need not actually construct a display tree for rendering.

One obvious approach is to make a giant hairball that contains everything. Clients just link in the library, and use what they need. There's little added complexity in the build and packaging processes, and there's no chance that the individual pieces will get out of sync with each other. However, it's not very elegant.

Another approach is to split everything into the smallest sensible modules. An immediate problem is that many of the modules will want to share infrastructure, particularly having to do with the runtime. For example, one of the things we're hammering out is a "dynamic object" protocol incorporating strings, lists, dicts (hashtables), names (atoms), and numbers. These kinds of objects show up all the time in PostScript and PDF documents, and are a handy way to pass parameters around. If we parse such an object out of a PDF file, and want to pass it as a parameter to the filter library, it would be really nice for the type to match.

So, in the "many small libs" scenario, I think there would be one base library ("magma") containing shared runtime infrastructure: at first, just memory allocation, exception handling, and dynamic objects, but possibly also loading of dynamic plug-ins and maybe threading support. All the other modules will allocate their memory and throw exceptions in a magma context, and pass around magma dynamic objects as needed.
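
To make the dynamic-object part of that concrete, here is a minimal sketch of what such a type might look like in C; the mg_ prefix and the exact fields are hypothetical, just to ground the discussion.

    #include <stddef.h>

    /* Hypothetical "magma" dynamic object: the handful of types that show
     * up constantly in PostScript/PDF parameter passing. Strings carry an
     * explicit length because they may contain embedded 0 bytes. */
    typedef enum {
        MG_NULL, MG_NUMBER, MG_STRING, MG_NAME, MG_LIST, MG_DICT
    } mg_kind;

    typedef struct mg_obj mg_obj;

    struct mg_obj {
        mg_kind kind;
        union {
            double number;
            struct { unsigned char *data; size_t len; } string;
            const char *name;                       /* interned atom */
            struct { mg_obj **items; int len; } list;
            struct { mg_obj **keys, **vals; int len; } dict;
        } u;
    };

Since everything above magma traffics in the same mg_obj, a dictionary parsed out of a PDF file can be handed directly to the filter library as its parameter set, which is exactly the type-matching property described above.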

The filter library would be the first such other module. It's small, very well defined, and will probably be quite stable once it's done. The Fitz tree and rendering engine would probably be the biggest, and see intense development over a period of time. Other obvious modules include a low-level (syntactic) PDF parser, and a higher level module that traverses PDF pages and builds Fitz display trees. We'd also need a module for font discovery (unfortunately, quite platform specific), and, eventually, one for text layout as well.

The problem is that support for packaging and versioning of libraries is generally pretty painful. There are lots of opportunities for these libraries to get out of sync, and many more testing permutations, especially if people are trying to use different versions of the same libs at the same time. Also, I worry that the fine-grained factorization might be confusing to users ('how come, in order to display a JPEG image, i use mg_ functions to create the JPEG parameter dictionary, pass that into an sr_ function to create the JPEG decode filter, and plumb the result of that into an fz_ function to draw it?') There are also some fairly difficult decisions about where certain logic should live. A good example is PDF functions. There's a good argument to put them in Fitz, but it's easy to imagine it in the PDF semantics module as well.

A related question is whether language bindings should be shipped as part of a library, or as a separate module. My experience has been that separate language bindings are often very painful to use, because of subtle version mismatches. The bindings tend to lag a bit, and they're much pickier about which exact lib version they're linked against than your average app.

There are other intermediate stages between the two extremes, but it's not yet clear to me whether any are clearly better. One such possibility is to have a single common namespace, but a bunch of smaller lib files so you only link the pieces you need. In the other direction, we could keep the source highly modular, with separated namespaces as above, but mash them all together into a single library as part of the build process (in fact, we'd probably want to do this anyway for Windows targets).

I know a lot of other projects struggle with the same issues. For example, 'ldd nautilus' spits out no less than 57 libraries on my system. Of these, glib corresponds fairly closely to the magma layer above, and is used by many (but not all) of the other libs. Perhaps coincidentally, many users find that building and installing Gnome apps is difficult.

At the other extreme, I've noticed that media player tarballs tend to include codecs and suchlike in the source distributions, often tweaked and customized. Mplayer-0.90pre8 has 11 subdirectories with 'lib' in the name. The advantage is that building mplayer is fairly easy, and that (barring a goof-up by the producer of the tarball) versions of the libraries always match the expectation of the clients. The disadvantage, of course, is that mplayer's libmpeg2 is not shared with transcode's or LiViD's. Also, it's harder to do something like install a new codec on your system that will just work with all the players.

Perhaps, over time, it will become less painful to distribute code as a set of interdependent libraries. In the meantime, we have to strike the right balance between keeping our codebases and development processes modular, and keeping life pleasant for users. I'm not sure of the best way to do it, so I'd appreciate hearing the experiences of others.

Autopackage, namespaces, and DNS

I read about autopackage in LWN recently. It seems like a useful project, and I wish it well. I've certainly run into my own share of pain trying to install VoIP software and the like recently.

I'm very happy to see thought going into the question of what packages should look like. I've always felt that Linux package formats have been somewhat ad hoc and given over to the "scripting mentality", and that most distros sidestep the fundamental problem of resolving dependencies and versions by trying to create a snapshot of packages that just happen to work together. Over the long term, I'd love to see this replaced with something more systematic.

A good test of agility for package frameworks is whether they work on systems other than Unix. One of the most interesting things I've seen in this space is CLR Assemblies. From what I've seen, these really do try to be systematic and general, but of course are bound to the CLR runtime.

Indeed, one of the reasons that Java is so disappointing as a desktop platform is that they had the opportunity to really address the packaging problem, but blew it. The reality of Java packages is quite a mess: classpaths, .class files, jar files, war files, and of course "Web start" in a futile attempt to paper over the whole mess.

There is one aspect to autopackage's design that immediately struck me: its use of a DNS-rooted namespace. In fact, DNS is becoming the de-facto root for all kinds of namespaces, of which of course the Web is one of the biggest. This would be very cool if it weren't for the fact that the management of DNS is so corrupt. Even so, it basically works.

One of the discussions I had with John Gilmore at CodeCon was about what a next-generation DNS replacement should look like. I do believe that it's possible to fix many of the political problems of current DNS with better technology. Specifically, the single trust root of the existing DNS is just too tempting a target for parasites like the ICANN leadership. A better system would have distributed trust.

But I don't envy the person who tries to replace DNS with something better. One of the thorniest questions is what the policy for name disputes should be. I'm partial to pure first-come, first-served, largely because it's the only policy simple enough for people to understand, but I think it would encounter a lot of resistance in the real world. In particular, there's nothing to prevent squatters from bulk-registering all the words and trademarks in the world.

But what is a better policy? You can't really talk about a name service being secure unless you've specified a formal policy. It's a thorny problem. I sketched one possibility in my FC '00 submission, and am writing up an expanded version of that as a chapter in my thesis. It is in many ways an appealing design, but even I don't have confidence it's what the world should adopt.

So hopefully, we'll have smart people continue to put some thought into what kind of name service we really want. DNS is a very impressive accomplishment, and of course hugely useful, but eventually we're going to want something better.

Bayes and scoring

There's a lot of talk of Bayesian spam filtering these days, including an implementation in the latest SpamAssassin beta. Indeed, Bayes is cool, but did you know that it's actually equivalent to systems that assign a score to each word (or other feature) and add them up?

Paul Graham popularized Bayesian statistics in his Plan for Spam. He analyzes word frequencies in a corpus of spam, and of non-spam, so each word gets a probability that it's spam. For example, "viagra" might be assigned a probability of 0.99 spam, and "eigenvector" 0.01 or so.

Then, when a mail comes in, you look at the 10 words with the most extreme probabilities (words common to both spam and non-spam don't tell you much and so won't be counted). Bayesian statistics will give you a probability that the email is spam or not, assuming that the probabilities of the individual words are independent (not a really valid assumption, but perhaps close enough).

The combining formula for two probabilities is ab / (ab + (1 - a) (1 - b)). But use the transform f(x) = log(x) - log(1 - x), and the equivalent combining rule is just f(a) + f(b). Do the math!
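
A few lines of C make the equivalence concrete; the probabilities are the two from the example above, and the function names are mine.

    #include <math.h>
    #include <stdio.h>

    /* Bayesian combination of two independent spam probabilities. */
    static double combine(double a, double b)
    {
        return a * b / (a * b + (1 - a) * (1 - b));
    }

    /* The log-odds "score" of a single probability: f(x) = log(x) - log(1 - x). */
    static double f(double x)
    {
        return log(x) - log(1 - x);
    }

    int main(void)
    {
        double a = 0.99;   /* "viagra"      */
        double b = 0.01;   /* "eigenvector" */
        double score = f(a) + f(b);

        printf("Bayes combination: %f\n", combine(a, b));      /* 0.5: the evidence cancels */
        printf("sum of scores:     %f -> %f\n", score,
               1.0 / (1.0 + exp(-score)));                      /* same 0.5, recovered from the sum */
        return 0;
    }

The inverse of f is the logistic function, so summing the per-word scores and mapping the total back through 1/(1 + e^-s) reproduces the Bayesian combination exactly, for any number of words.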

So you don't really need Bayes to do this computation. Perhaps it's most useful to think of the Bayesian math as giving a sound probabilistic interpretation of score addition, which seems fairly ad hoc at first sight.

Doing this kind of combination entirely in linear space is interesting to me, because it seems much easier to combine with other techniques. After all, eigenvector-based trust metrics are based directly on linear algebra. I haven't wrapped my head completely around how I'd meld these two ideas together, but it certainly is intriguing.

Sleep study

I had my annual physical yesterday (with a new doctor), and again the topic of doing a home sleep study came up. I'm pretty sure I have sleep apnea, but when I got a sleep study done last year, it was inconclusive. It showed only snoring, no actual apnea, but it also showed no REM sleep, which is strange. I'd really like to be able to take measurements over a longer period of time.

Primarily, I want to measure 4 things: EEG (for determining sleep stage), sound (snoring), airflow, and pulse-oximeter (for determining whether breathing is supplying adequate oxygen).

I've done plain sound measurements already, using the mic input of my laptop. EEGs are more challenging. Pro equipment costs a small fortune - even on eBay, there doesn't seem to be a good supply. I see a DIY project called OpenEEG, which might work. Obviously, I need electrodes and pre-amps, but it seems to me an off-the-shelf A/D card (maybe a LabJack) might save me some time, and be more versatile for picking up the other inputs.

Most of the people doing home EEG seem to be into biofeedback, but from what I can tell the requirements are fairly similar.

I'd like to hook up with others who might be interested in building a home sleep study, or who can give me tips on finding the sweet spot between spending too much time and too much money. If I'm successful, I definitely want to post my recipes and software, as it's very likely to be useful for others.

Well, CodeCon is over. I think my talk went pretty well. At least, I got some good questions after the talk, which is always encouraging.

Talk of war

chalst: I'm basically in agreement with cmm here. The free software community has some advantages over the unwashed masses; we can mostly read and write, even sometimes think, and we're very comfortable with challenging the conventional wisdom. But I don't think there's anything that gives us any special insight or privilege compared to other thoughtful people.

Ordinarily, I would consider discussions of politics to be off-topic for this site, but this war threatens to affect us so deeply that I think it deserves some attention from everybody. It's a scary thought, but if it goes badly, it could change some priorities; we could be worrying more about how to treat radiation burns than whether it should be "Linux" or "GNU/Linux".

That said, given the focus here, I'd like to see mostly posts that bring insight, or have some special relevance for free software people. There's an awful lot of stuff written on the Net about the war, and frankly, most of it is dreck. That includes knee-jerk anti-Bush flaming just as much as knee-jerk pro-war (or "anti-peace", as I prefer to call it :) sentiment. I much prefer things that make me think. John Perry Barlow's Sympathy for the Devil is one recent such piece.

I pray that we can avert a large-scale conflagration in which many people die, and hatred of America rises to a fever pitch. I think the uncertainty about it is really hard on people - a lot of people around me seem down, and a friend of mine has observed a trend of "shabbiness".

CSS

sdodji: have you looked at the RCSS codebase at all? It uses some clever algorithms to efficiently do the CSS selector processing. It wasn't written with the Simple API for CSS in mind, but you might find some of it useful in any case. You're welcome to use the code any way you see fit, and if you want me to explain some of the more rocket-scientific aspects, just ask.

Work

A lot of cool things are happening. For one, rillian is getting good results out of the jbig2 code. It actually renders nontrivial PDF files now, although it needs some cleanup to make the error handling more robust, etc. It sounds like we'll have real users soon.

I'm also very, very excited to be working with tor on the design of Fitz and related things. I think the first chunk of released code will be a library of filters for PS/PDF (mostly used for compressed images). This will give us a chance to gain some valuable experience with the new runtime discipline in the context of a well-defined problem domain.

Conscious design of runtimes is fun, but challenging. Our main goals are ease of integration with diverse codebases, performance, and robustness. I've been carefully studying the Ghostscript stream implementation, and have found a number of small bugs, areas where performance can be improved, and ways in which we can better tolerate exceptional and corner cases. I think the new code will be altogether simpler as well.

So we're really trying to do things right. One of the elements going into the runtime is an interface for atoms (in the Lisp sense; they're called "names" in PostScript/PDF lingo). These need to be very fast, have an easy interface, and not leak (I found it interesting to learn that Java interned strings did leak until the JVM 1.3 and weak references). After some discussion, I think we've arrived at a good answer.
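
Without spoiling what we settled on, here is a minimal sketch of the general shape of the problem, assuming atoms that compare by pointer and a table owned by a context so that everything is freed together (one way to avoid the Java-style leak); the names are hypothetical.

    #include <stdlib.h>
    #include <string.h>

    /* Minimal atom (name) interning table: each distinct string is stored
     * once, atoms compare with ==, and freeing the table frees every atom,
     * so nothing outlives the context that owns it. */
    typedef struct atom_node {
        struct atom_node *next;
        char name[1];                     /* string data follows the header */
    } atom_node;

    typedef struct {
        atom_node *buckets[256];          /* zero-initialize before use */
    } atom_table;

    static unsigned atom_hash(const char *s)
    {
        unsigned h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h & 255;
    }

    const char *atom_intern(atom_table *t, const char *name)
    {
        unsigned h = atom_hash(name);
        atom_node *n;

        for (n = t->buckets[h]; n != NULL; n = n->next)
            if (strcmp(n->name, name) == 0)
                return n->name;           /* already interned: same pointer */
        n = malloc(sizeof(atom_node) + strlen(name));
        if (n == NULL)
            return NULL;                  /* real code would raise an exception */
        strcpy(n->name, name);
        n->next = t->buckets[h];
        t->buckets[h] = n;
        return n->name;
    }

    void atom_table_free(atom_table *t)
    {
        int i;
        for (i = 0; i < 256; i++)
            while (t->buckets[i] != NULL) {
                atom_node *n = t->buckets[i];
                t->buckets[i] = n->next;
                free(n);
            }
    }

Comparing two interned names is then a single pointer comparison, which is what makes name lookups fast.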

Tor and I are mostly using irc to communicate, and it's working well. We had another wide-ranging discussion today, including careful analysis of Quartz Extreme and general design questions about how to get inter-app transparency working well in both software-only and hardware-accelerated environments.

These are exciting times! I'm happy to be alive.

An interim entry from the floor of CodeCon, thanks to wireless networking provided by Up Networks.

Alan's blog

Last night, Alan wrote the first entry in his new blog. I typed most of it, but he's rapidly getting better at keying.

I'm hoping that this blog will motivate his writing. I'm sure he'll appreciate feedback (for now, just send it to me).

My Codecon slides

I'm putting up a draft of my presentation. Some of it might be difficult to follow without the narration, but you might find it interesting nonetheless.

Crowd counting

I've been following the various crowd estimates for the peace marches and demonstrations in San Francisco. Traditionally, it's very much an inexact science, and estimates vary widely.

For the last march, the SF Chronicle did something very cool: they took high-resolution timestamped aerial photographs, measured them, and posted them to the Web. Surprisingly, this count (65,000 at the 1:45pm snapshot) is considerably smaller than the consensus estimate (200,000, including people who left the march earlier or joined later).

In the grand scheme of things, the exact number marching is not that important. The Jan 18 one was an amazing expression from the people, and my friends who were at the Feb 16 one tell me that it was even more intense. It's not just San Franciscans, and it's not just Americans. People from all over the world have expressed themselves.

Even so, the wide range of estimates, and the variations in the reporting, illustrate the impact of viewpoint on what should be, after all, a fairly easily quantifiable, objective truth. We are being asked to evaluate the risks of going to war against the risks of not going to war, based on data that's at least an order of magnitude fuzzier than the simple question of how many people were on the streets of San Francisco. This is not easy.

I am not impressed with the International Answer people's response: `"Oh my word. Come on, that's ridiculous," said Bill Hackwell, spokesman.' It's possible he was simply quoted out of context, but I'm curious to know exactly what he thought was ridiculous.

I am passionately anti-war, even more passionately anti this war, but most deeply pro-truth. The Chronicle showed how seat of the pants guesstimating can be replaced, using a bit of technology, with hard data. I think this is progress, and fervently hope that we see more of it.

Codecon, day 1

I just got back from the first day of CodeCon and the Google-sponsored speaker reception afterwards. I was expecting it to be intense, but misunderestimated exactly how so. I met a lot of people, including old friends, more than a few cypherpunks, people I know online but met for the first time in person, and people I've been wanting to meet for a while. There are lots more people I didn't get a chance to really talk to; hopefully Monday.

Google is snatching up lots of smart people now. Spencer Kimball and Peter Mattis, of Gimp fame, are reunited once again (in fact, for almost a year, but I only just learned this). We had a very nice talk. They're both passionate about their work for Google. There's a reason why Google is able to provide such an amazingly valuable service, and it has a lot to do with the caliber of people working for them. I also enjoyed talking with Nelson Minar.

I also got to meet Larry Page, but felt like I kinda flubbed it. I also managed to just about lose my temper with John Gilmore arguing about what properties a next-generation DNS should have. This caught me off guard - I'm generally pretty levelheaded. I did apologize, and afterwards John said it was the best discussion about DNS he'd had in a while, so I guess not all is lost.

Vipul, of Vipul's Razor and now CloudMark, is very cool. I was struck by his depth of thinking, and his efforts to balance the technology, the social good (including free software releases), and the business. We talked about some of my more speculative ideas about how to use trust to defeat spam, and we really connected. He seemed to immediately understand the goals of my research, and I appreciated his perspective on deploying real systems for paying customers. I hope we get to work together.

Of the CodeCon talks, my favorite was the panel on version control, with Larry McVoy (BitKeeper), Greg Stein (Subversion), and Jonathan Shapiro (OpenCM). The conference organizers were nervous that it would degenerate into a licensing flamewar, but they needn't have worried. It was obvious that the panelists have a tremendous amount of respect for each other's work, and that the differences between these projects largely reflect differing goals.

A common theme was how difficult it is to get configuration management right. Everybody seriously underestimated how much time it would take to get a usable system going. Also, while there was definite agreement that CVS is broken and not easily fixable, there wasn't a clear consensus that most people have a strong motivation to migrate from CVS to any of these new systems. CVS actually works reasonably well for most open-source projects, where you don't typically have lots of people pounding concurrently on one file. That kind of concurrent load is very common with paying customers, though, and BitKeeper handles it well. Of course, any modern configuration management tool (with atomic transactions, robust tracking of changes, etc.) will be able to do a much better job than CVS, but that's not saying much.

I haven't decided whether the Web-based infrastructure of Subversion (particularly WebDAV as the client/server protocol) is a good thing or a bad thing. I think it depends a lot on what kind of user we're talking about. Windows and Mac can mount WebDAV right onto the desktop, which means that unsophisticated users can do version controlled operations just by clicking and dragging. For some applications, this is a huge win, because you can do things like back out unintentionally bungled changes, roll the clock backwards to get a consistent snapshot at some particular time, and so on. These are real problems that users have, and which the stock filesystem based implementation of folders doesn't solve.

For free software programmers, I don't see this as such a big win. Regarding integration with existing tools, people don't mount WebDAV folders from an Emacs mode, but there are Emacs modes for CVS. Then you have to deal with cruft like HTTP authentication (most Subversion deployments seem to use HTTP basic auth over SSL, which I guess is workable, but doesn't strike me as exactly the right way to do this).

In any case, I'm really glad that good work is happening in this space, and I'm hopeful that a really viable alternative to CVS will emerge. Subversion could well be it, but that's not a given, and in the long run, one of the other projects could turn out to be more robust, scalable, and overall a better match for the needs of free software developers.

Oh, and while I generally respect Larry's right to license BitKeeper however he wants, I did not at all get a warm and fuzzy feeling about it. In fact, it feels to me that his "free use" licensing terms are in fairly direct conflict with the spirit of the free software community. I am definitely not tempted to use it for Ghostscript or related projects. But if you're looking at BitKeeper as an alternative to Perforce or some other proprietary CM system, take a look; there's a good chance it'll do what you want.

Booger

Joey DeVilla's favorite amalgam of "Google" and "Blogger" is "Booger". Yeah!

There is one thing, I think, that Google and a blog hosting engine inside the same trust boundary can do that would be somewhat difficult otherwise: making backlinks work really well, based on both linguistic analysis for relevance and, of course, PageRank. It's possible to use a trust metric to automate links between blogs in a more distributed context, but so far nobody's been smart enough and motivated enough to actually try to build it. It's probably a lot more likely to happen in a centralized, infrastructure-rich setting.

Off-topic

Here are two interesting and related interviews. The first describes how child psychiatry has a history of being science-resistant, but advances in the field are overcoming this. The second describes some of the cutting-edge research being done at the NIMH, and the palpable enthusiasm of Dr. Manji in being part of the community. I've long been fascinated by the signalling and computation that goes on in networks of cells, and found my interest rekindled by this interview.

This essay by Kanan Makiya is interesting. He's far from a disinterested party, of course, but I certainly agree that these kinds of discussions should be taking place out in the open. Real democracy is messy and unpredictable. Perhaps it's even true that, as the State Department under Colin Powell and the CIA believe, "it could have a destabilising influence on the region."
