Based on the feedback from my last post, as well as a little digging on my own, I decided to try the Plantronics CT10. I'll let you know how I like it.
Ghostscript
The 7.20 release process is well underway. rillian put together a release candidate tarball. Feel free to try it. So far, there has been a fair amount of feedback on gs-devel, largely reports of build problems.
The other major user-visible thing that's happening is that we're trying to tighten up security. In particular, -dSAFER now blocks reading of arbitrary files, in addition to writing. Unfortunately, this breaks gv on PDF files, because gv writes out a temporary PS file as a wrapper for the PDF.
I'm preparing a patch now for gv, but it's still tricky getting the fixes into the hands of users, largely because the last release of gv appears to be 5 years old.
Security vs convenience is a delicate tradeoff. I'm trying to do the right thing, but I still have the feeling that it will cause difficulty for users.
Trust metrics and Slashdot
I've heard from a couple of people that the Slashdot crew made the assertion at a panel at SXSW that "Advogato doesn't scale". This angers me. There are nontrivial algorithms at the core of Advogato, so it's not obvious that it does scale. I put quite a bit of work into optimizing these algorithms. Lately, I've been playing with the trust metric code itself, as well as the Advogato implementation. There's no question that the website has performance issues, but most of those are due to the simpleminded arrangement of account profiles as XML files in the filesystem. On a cold cache, crawling through all 10k of these XML files takes about a minute. Once that's done, though, the actual computation of the network flow takes somewhere around 60 milliseconds on my laptop (you have to do three of these to run the Advogato trust metric).
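For readers unfamiliar with what "computation of the network flow" means here: the trust metric is built on a standard maximum-flow computation over the certification graph. The sketch below is a textbook Edmonds-Karp max-flow on a tiny hypothetical certification graph (the node names and capacities are made up for illustration; this is not Advogato's actual code, which also assigns per-node capacities from distance to the seed):

```python
from collections import deque, defaultdict

def max_flow(graph, source, sink):
    """Edmonds-Karp max flow. `graph` maps node -> {neighbor: capacity}."""
    # Build residual capacities, including zero-capacity reverse edges.
    residual = defaultdict(dict)
    for u, edges in graph.items():
        for v, cap in edges.items():
            residual[u][v] = cap
            residual[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left; flow is maximal
        # Walk back from the sink to find the bottleneck, then augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Tiny hypothetical certification graph: the seed certifies two
# accounts, which in turn certify others; capacities model how much
# trust each edge can carry.
graph = {
    "seed": {"alice": 3, "bob": 2},
    "alice": {"carol": 2, "bob": 1},
    "bob": {"carol": 1},
}
print(max_flow(graph, "seed", "carol"))  # -> 3
```

Edmonds-Karp runs in O(V * E^2), which on a graph of ~10k nodes with modest out-degree is entirely consistent with the tens-of-milliseconds timings above once the graph is in memory.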
So it's clear that Advogato isn't anywhere near its scaling limits. If we needed to scale it a lot bigger, the obvious next step is to move to an efficient representation of the trust graph. bzip2 on graph.dot gets it down to 134k, so there's obviously a lot of room for improvement.
So I'm upset that misinformation went out to a fairly large number of people. In order to spread the word, I hereby challenge Slashdot to a public duel. In return for their analysis of the scaling limits of Advogato and of the underlying trust metric, I promise to do an analysis of why the Slashdot moderation system sucks. I will cover both a mathematical analysis, pointing out its failure to be even modestly attack resistant, and some discussion of why this leads to lower quality results in practice.
I hope they take me up on this, as I believe such an analysis would be useful and interesting. If not, I hope they have the courtesy to retract their assertion about Advogato's scaling.
Trust metrics and LNUX
Slashdot is not the only LNUX property that has rubbed me the wrong way. The "peer rating" page on SourceForge has, since its inception, contained this text:
The SourceForge Peer Rating system is based on concepts from Advogato. The system has been re-implemented and expanded in a few ways.
The Peer Rating box shows all rating averages (and response levels) for each individual criteria. Due to the math and processing required to do otherwise, these numbers incorporate responses from both "trusted" and "non-trusted" users.
The actual system they implemented is a simple popularity poll. There's nothing wrong with this. I can certainly appreciate that they've been busy with other things, and trust research is obviously not a core focus of SourceForge. But I really wish they wouldn't pretend that it's something it's not.
I came to the realization a couple of weeks ago that this text is a big part of the reason why I feel such Schadenfreude over the woes of LNUX. Keep in mind, I think SourceForge has been a wonderful service, and I think the people who work on it are great (I know several personally). But I think the corporate structure at LNUX has hurt the goals of free software quality and intellectual inquiry. I have nothing against them trying to make a living doing a proprietary PHP candy shell over legacy free software tools, but I don't think their interests intersect much with mine - which is advancing the state of free software by doing high quality work.
Again, if they revised that text, I wouldn't be quite so happy when I see the red next to their symbol on the LWN stocks page.
Trust metrics and the future
I firmly believe that attack-resistant trust metrics based on peer certification have the potential to solve a lot of important problems. Obviously, I'm biased, because they are, after all, the focus of my thesis. It's certainly possible that, in practice, they won't work out nearly as well as I envision.
Even so, even the most critical person would have to admit there's a possibility that trust metrics can be the basis for robust, spam-free messaging systems and for authorization in ad-hoc wireless networks. They might also make metadata in general trustworthy; without that, the real-world implementation of the idealistic "semantic web" concept will surely degenerate into yet another tool for spammers and related slimeballs.
"More research is needed." However, I'm not being paid to do this research, so my own contributions are strictly for fun. Further, centralized systems are much more likely to be profitable to corporations than democratic peer systems (cough, VeriSign, cough). Thus, the research is going to be driven by the currently tiny core of academic and free-software researchers that actually understand and appreciate the issues.
I am hopeful that interest will pick up, especially now that I realize that Google's PageRank algorithm is fundamentally an attack-resistant trust metric. I think everybody will concede that Google is waaaaay better than their competition. If attack-resistant trust metrics work so well for evaluating Web search relevance, then why not in other domains?
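To make the PageRank connection concrete, here is a minimal power-iteration sketch of the algorithm (a textbook formulation on a made-up three-page web, not Google's actual implementation). The attack-resistance intuition is the same as with the flow computation: a page's score comes from the scores of the pages linking to it, so a spammer can't manufacture rank out of nothing:

```python
def pagerank(links, damping=0.85, iterations=100):
    """Power-iteration PageRank. `links` maps page -> list of outlinks."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share of rank...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outlinks in links.items():
            if outlinks:
                # ...and passes the rest along its outgoing links.
                share = damping * rank[p] / len(outlinks)
                for q in outlinks:
                    new_rank[q] += share
            else:
                # Dangling page: spread its rank uniformly.
                for q in pages:
                    new_rank[q] += damping * rank[p] / len(pages)
        rank = new_rank
    return rank

# Hypothetical 3-page web: both "b" and "c" link to "a", so "a"
# ends up with the highest rank.
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
```

Note that the total rank stays at 1.0 across iterations; pages can only redistribute it, which is exactly what makes inflating your own score from a cluster of fake pages hard.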
This is, I believe, a message that is ripe for more widespread dissemination. That's the main reason why I'm posting this entry, rather than nursing my perceived slights in private.
Call your congress-critters
When the DMCA was being proposed, I thought it was a terrible idea, but I didn't let my congress-critters know. It passed. There is some relation between these two facts.
The CBDTPA is, like the DMCA, a terrible idea. Not only does it have the potential to seriously impact the legal development of free software, there is also no Village People song to which the acronym can be sung. Thus, this time I decided I would at least do something, so I heeded this EFF alert and gave my two Senators and one Representative a call. It was a surprisingly easy and pleasant process. In all three cases, a staffer picked up the phone, and seemed genuinely eager to listen to concerns from their constituents. Not only that, they seemed at least moderately familiar with the issue, indicating that it's been getting some response.
You can make the response even bigger. Do it.