I've been having a lot of fun discussing with Bram the ideas mentioned yesterday. Thinking about formal math in the context of the Web is a lot of fun.
The Fitz Wiki actually has some interesting content now, although of course it's still very new.
Bram brought up an intriguing idea: an online repository for theorems and proofs, in which the proofs are in a formal language and can be checked automatically.
This is in many ways an easier problem than automated theorem proving, an area that's soaked up a lot of research attention without all that much to show for it. Here, we optimize for proofs written by people.
Rigorously formal proofs have a reputation for being tedious and verbose. I'm not sure they have to be, but even so I think it makes sense to store a low-level proof format in which all steps are spelled out. People developing proofs might work in a higher-level language, and have it compiled down. An appealing consequence is that people can experiment with higher-level proof systems.
There are still a lot of challenges, though. For one, mathematicians are not able to agree on a set of fundamental axioms that are "right". In fact, the debates over various axioms run almost as hot as Perl vs Python.
For two, there are many different ways of encoding the same idea into mathematical logic. Plain-old unsigned integers, for example, readily admit Peano encodings, Church encodings, and so on. None of these logic-based encodings are at all efficient if you're dealing with big numbers, so you probably want another encoding as a sequence of binary digits. But if you prove "2+2=4" in one encoding, how can you use the result in another?
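To make the encoding problem concrete, here's a small sketch of two of those encodings of the naturals in Python. The names (peano_add, church, and so on) are mine, just for illustration; the point is that "2+2=4" is a different computation in each world, and reusing a result across encodings needs a verified translation.

```python
# Peano: a number is either 'Z' (zero) or ('S', n) (successor of n).
Z = 'Z'
def S(n): return ('S', n)

def peano_add(a, b):
    return b if a == 'Z' else S(peano_add(a[1], b))

def peano_to_int(n):
    count = 0
    while n != 'Z':
        count, n = count + 1, n[1]
    return count

# Church: the number n is a function applying f to x a total of n times.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def church_add(a, b):
    return lambda f: lambda x: a(f)(b(f)(x))

def church_to_int(n):
    return n(lambda k: k + 1)(0)

# "2+2=4" holds in both encodings...
assert peano_to_int(peano_add(S(S(Z)), S(S(Z)))) == 4
assert church_to_int(church_add(church(2), church(2))) == 4
# ...but the two facts live in different representations; carrying the
# Peano result over to the Church world requires proving the two
# to_int translations agree on all inputs.
```

Neither representation is remotely practical for big numbers, which is exactly why a binary-digit encoding also has to exist alongside them.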
I think this question is very analogous to runtime incompatibilities between various programming languages, or even within a single programming language. In C, you can have many different implementations of a simple list.
One of the best-developed formal proof repositories is Metamath. It deals with the runtime incompatibility problems by just choosing one. Overall, proofs don't look that scary, but I still think there are language design issues that get in the way of larger-scale use. The syntax is twisted a bit to look like math. For example, '+' looks like an infix operator. To me, it makes much more sense to write +(x, y) in the low level language, and possibly compile infix notation down to the lower level.
It's pretty easy to prototype something like this. You want search and directory capabilities to get a handle on the potentially huge number of theorems, but that's all pretty well understood in the Web world. My guess is that we'll start to see more experimentation in this area, and possibly a useful proof repository within a few years.
The next step is to populate the Wiki. I'll try to write a little something every day, just like I do here.
There's something fishy going on with the Anthrax investigation. All the evidence points to the culprit being someone hooked into the US defense establishment. At the same time, the FBI's investigation has been laughably incompetent.
Then, most remarkably, the New York Times published an editorial (reg required) talking about an individual who many believe is the culprit, but who has barely been investigated. The editorial itself doesn't name the name, but a few minutes searching on Google reveals it to be almost certainly one Steven J. Hatfill.
If this is true, it's a pretty big story. For one, it drives home the need for transparency and accountability in our government. The defense establishment has been largely exempt, because we have generally trusted that they need their secrecy to protect us from the bad guys. But if they are the bad guys, everything changes.
This story has been reported very differently in the US and overseas media. Aside from this one editorial in the NYTimes, I've been able to find almost no coverage in the US mainstream press. If this is "objective journalism", fuck it, I'd much rather have Pravda. Fortunately, we have a vital independent press on the Web. I've been tracking this story through rotten.com.
I don't know that Hatfill did it, but it seems like by far the best lead the FBI has. For them not to be pursuing him vigorously, and for the US mainstream media not to be reporting the story, verges on the criminal.
I got my first color output today. Wanna see a sample? (note: 1.4M)
Between Antony and tor, the discussion is really heating up. I should set up a real mailing list, so it gets properly archived and searchable and so on.
AaronSw pointed out that searching on "advogato" in google yields a "Google is hiring" adword. Considering the otherwise absolute radio silence from them, it's a very cool bit of feedback.
Woohoo! I got my first Well Tempered Screening output this morning. If you like, you can take a look at before and after. If you want to print them, try: pngtopnm golfer.png | pnmtops -equalpixel -dpi 300 | lpr
WTS solves three problems. It replaces the (possibly objectionable) patterning artifacts with a finely dispersed noise. It renders angles extremely precisely, where Ghostscript's existing halftones, even in "AccurateScreens" mode, are much less accurate than Adobe's. Finally, Ghostscript uses a "tile cache". As the cell grows in size, it can overflow the tile cache. In that case, performance can degrade horribly. By contrast, WTS thresholds on the fly.
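The "threshold on the fly" point deserves a sketch. This is not the WTS code, just the general technique it shares with ordered dithering: each output pixel compares its gray level against a tiled threshold array, so there's no rendered tile to cache and no cell-size blowup. The Bayer matrix here is a crude stand-in for the carefully constructed, angle-accurate arrays WTS actually uses.

```python
# A 4x4 Bayer matrix as a stand-in threshold array.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def halftone(gray, thresh=BAYER4):
    """gray: 2D list of levels in 0..255; returns 2D list of 0/1 bits.
    The threshold array tiles modularly, so memory is constant no
    matter how large the screen cell is."""
    n = len(thresh)
    scale = 256 // (n * n)  # map threshold cells 0..15 onto 0..240
    return [
        [1 if gray[y][x] > thresh[y % n][x % n] * scale else 0
         for x in range(len(gray[y]))]
        for y in range(len(gray))
    ]

flat = [[128] * 8 for _ in range(8)]  # a flat 50% gray patch
bits = halftone(flat)
# Roughly half the bits come out on for 50% gray.
```

The real work in WTS is in the construction of the threshold array itself (the precise angles, the dispersed noise); the per-pixel inner loop stays this simple.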
There's more work to do, but I'm quite pleased.
Name services are an important battleground for trust. The question, "who deserves the right to name X?" is a deep one, and the answer is far more social than technical.
Much of the original motivation for my thesis work was a desire to build a better name system. It's a hard problem, though. More research is needed, or so I'm fond of saying.
What if you used the eigenvector metadata engine? Assertions would be of the form "name X belongs to public key 0x12345". When more than one such assertion exists for any given name, the one with the higher confidence value wins.
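A minimal sketch of the resolution rule, leaving the hard part (the eigenvector computation that produces the confidence values) as a black box. The names, keys, and numbers here are all made up for illustration.

```python
def resolve(name, assertions):
    """assertions: (name, pubkey, confidence) triples, with confidences
    as scored by one community's trust computation. The assertion with
    the highest confidence wins; ties and empty results mean the name
    doesn't usefully resolve in this community."""
    candidates = [(conf, key) for (n, key, conf) in assertions if n == name]
    if not candidates:
        return None
    conf, key = max(candidates)
    return key

community_a = [
    ('nike.com', '0x12345', 0.92),    # widely endorsed binding
    ('nike.com', '0xdeadbeef', 0.15), # a squatter's competing claim
]
winner = resolve('nike.com', community_a)  # the well-endorsed key
```

A different community, feeding different trust links into the engine, would compute different confidences for the same two assertions, which is exactly where the balkanization risk below comes from.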
Obviously, this has the risk of "balkanizing" the namespace. Certain "controversial" names would resolve differently from community to community. While this isn't a serious problem for RealNames-like keyword searching, it is a problem if you're trying to publish stable URLs.
But here's the big question: even though the system is capable of balkanization, would it be a problem in reality? It's not clear to me. People can make local decisions to reduce the ambiguity in resolving names. For one, "registrants" can choose names less likely to be challenged. For two, when choosing the trust links for your community, you can emphasize people who take the game seriously and try to bind names to the rightful owner.
If the controversy can't be resolved, then the name is simply not useful as a global resource. Is this so bad? There are certainly a ton of names owned by squatters now that can't be used, so it's not clear that the proposal would be any worse.
I'm not convinced that this is the best naming scheme. I personally favor first-come, first-served naming policies, and my PhD research goes into some detail about how to implement those. But I do think it's worth thinking about alternatives. What we have now isn't very good, and it isn't going to get any better by itself.
MuPdf and Fitz
I took a look at tor's MuPdf. It's very unfinished, but it's still very impressive. The unhinted, antialiased text is very pretty, and performance seems to be excellent. In many ways, it's a model for what I want Fitz to be. In many other ways, Ghostscript is the model, because it's already solved the problem of rendering the full PDF imaging model.
Antony Courtney has posted an architectural goals document for Fitz, with edits from me. There's a lot more I want to write, but I don't have big gobs of time right now. Also, it's obviously getting near time to set up the public infrastructure for the project: CVS and mailing lists as bare minimum, but I also want a Wiki.
Gene Kan died last week, probably a suicide. Sad.
Developer release 7.21 is out. We're going to merge in the DeviceN code next, so this is probably an "island of stability".
Deep thinking vs not
By contrast, here we see "oopster", which is a specialization of Jxta to move around the OpenOffice binaries (which needs this kind of thing because of bloat, but that's another story). They're just beginning to think about the problems that Bram has already solved.
In the thinking world, one of the great applications of BitTorrent will probably be ISO's of free software distributions.
sej: What you are proposing (adding transparency to PostScript) is adding nonstandard extensions to PostScript. It's possible, but I'm not sure what the real advantage is over PDF.
Jeff Darcy has written an interesting piece on protocols that contain reimplementations of TCP. Looking through the archives, Jeff has a lot of good stuff on protocols. Designing good protocols is an esoteric art, and should probably get more attention. It's nice to see writing in the open, even though Jeff works for a proprietary company.
Roughly, Jeff is to protocols as David McCusker is to trees. I think we should do a Vulcan mind-meld of the two, because I think an efficient protocol for remote access to trees is interesting. It's a central component of the Athshe vaporware project, but I freely admit that there are many aspects I don't know how to do right. Maybe in the absence of mind-meld technology, we can just bounce ideas off each other.
Dirt and noise followup
I wrote a few days ago about "dirt and noise". One of the things I was thinking about is the demonstration in Wolfram's book that the simple "rule 110" automaton can run a universal Turing machine. The straightforward implication is that, even for such ridiculously simple automata, if you know the initial state, you cannot predict whether a certain pattern (corresponding to the halting of the Turing machine) will ever occur. Simple universal machines are not new, but Matthew Cook's proof that rule 110 qualifies certainly pushes the simplicity envelope.
But here's a question. What if your cellular automaton didn't proceed purely mathematically, but had occasional bit errors? Does this make the halting problem go away? If so, does predicting the presence or absence of patterns become tractable?
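The setup is easy to play with. Here's a sketch of one rule 110 step, plus a noisy variant that flips each cell with small probability; whether that noise makes long-run pattern prediction tractable is exactly the open question. The wraparound boundary and the flip probability are my choices, not anything from Wolfram or Cook.

```python
import random

RULE = 110  # bit i of 110 gives the new cell for neighborhood value i

def step(cells):
    """One synchronous update of rule 110 on a ring of 0/1 cells."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def noisy_step(cells, p, rng):
    """Same update, but each output bit is flipped with probability p."""
    return [c ^ (rng.random() < p) for c in step(cells)]

row = [0] * 15 + [1]       # the classic single-cell seed
for _ in range(5):
    row = step(row)        # deterministic evolution

rng = random.Random(42)
noisy = noisy_step(row, p=0.01, rng=rng)
```

In the deterministic version, Cook's result says some reachability questions about patterns are undecidable; the noisy version smears the state into a probability distribution, which is what makes it plausible that prediction becomes a (merely) statistical problem.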
Incidentally, the best review I've seen so far is this one, posted on the MAA website.
The book may be more interesting as art or literature than as science. Certainly, the argument for deep equivalence in all fields of human intellectual endeavor is at the heart of Magister Ludi (Das Glasperlenspiel), by Hermann Hesse.
tromey wrote more about his feelings upon receiving emails saying "automake sucks".
Think about the analogue in the real world. Let's say you spent years refining an artwork, and many copies were distributed. Then somebody comes up to you with a copy in hand, says, "you know this artwork? it sucks," and rips it in half. I am a peaceful person, but I would be quite understanding if you just hit him.
The principle of it is the same on the net, but of course there are a lot more assholes within easy reach of you here. Scale matters. In a smaller community, it would be a lot harder to get away with behavior like that, at least without being called on it.
I get some "thanks" emails, a number of impolite emails asking for favors, and a sprinkling of hostile ones. I really appreciate the former, and know I should send out more myself. The hostile ones I happily ignore. As for the ones in the middle, I somehow never seem to have time to respond. Ah well.
Last time I got a "you suck" message (it was here, three weeks ago), it was very easy to ignore, because the person who wrote it had already firmly established his credentials as someone whose opinion I cared nought about.
Thanks, Tom, for your work on automake. I use it, and it really does make life easier. The fact that it's actively maintained is quite a luxury. You're one of a handful of people actually doing a usable automated build tool, and I appreciate that.
That said, now I'll talk about how automake sucks :) No, actually I'll respond to tromey's thoughtful comments on what a systematically designed build tool should look like.
I agree with the four bulleted points (integrate configuration with build, single invocation, tools as objects, caching), although maybe I don't quite understand the implications of treating tools as objects. Does this simply mean that when you upgrade the C compiler, you rebuild everything?
Derived object caching is particularly interesting, not just because it makes builds faster, but because (if done right) it should make switching between variant builds much more pleasant. I want to be able to build with a -pg option and then have a binary I can profile. Even better, --with-checker (and other tools requiring compile-time support).
I've also been refining my thoughts since my rebar entry to the Software Carpentry competition. At the time, I proposed an XML syntax, but I don't believe in that any more. I see makefiles as an integral part of the program. They are just as worthy of human readability as the program code. But that's just a syntactic difference.
More fundamentally, I believe that the build tool should be a library combined with a simple lazy functional programming language. This is a somewhat quirky choice for a programming language, but I think well justified for this application. Laziness lets the implementation find and exploit parallelism without explicit direction from the programmer.
Other approaches, such as cons, provide a library over some available scripting language. That's also a reasonable approach, and keeps you from having to invent a new language. But I still favor the new language. For one, scripting languages tend to be pretty big, because they're trying to support general purpose computing. In this context, the language could be tiny. Second, I think the lazy approach has distinct technical advantages.
One such I'll try to illustrate here. A build script in the current rebar prototype looks something like this:
    hello_o = compile/C ('hello.c');
    foo_o = compile/C ('foo.c');
    hello = link/C ([hello_o, foo_o]);
You could implement this sequentially, but because compile is functional (side effects are not permitted), the implementation is free to run the first two lines in parallel. In fact, the current prototype does so. But of course all serious build systems have some way to -j2.
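The same idea can be sketched in Python, standing in for the lazy evaluator: because the build steps are pure functions, independent ones can be forced concurrently without the script saying so. compile_c and link_c here are fake stand-ins for the hypothetical compile/C and link/C primitives, not real tools.

```python
from concurrent.futures import ThreadPoolExecutor

def compile_c(src):
    """Pretend compiler: pure function from source name to object code."""
    return f"<object code for {src}>"

def link_c(objs):
    """Pretend linker: pure function from object list to binary."""
    return "<linked binary: " + ", ".join(objs) + ">"

with ThreadPoolExecutor() as pool:
    # hello_o and foo_o have no data dependency on each other, so the
    # runtime is free to evaluate them in parallel; link_c forces both.
    hello_o = pool.submit(compile_c, 'hello.c')
    foo_o = pool.submit(compile_c, 'foo.c')
    hello = link_c([hello_o.result(), foo_o.result()])
```

In a real lazy evaluator the futures would be implicit: the dependency graph alone determines what can run concurrently, which is the "parallelism without explicit direction" claim above.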
More exciting, if an error happens (let's say foo.c doesn't exist), you can easily trace that to the exact line in the build file. I consider providing very high quality error messages to be vitally important. This is certainly an area where auto* falls down. With auto*, by the time you trip over the error, the build script has been macro-substituted and otherwise mangled beyond recognition.
In fact, I'd go so far as to say that a build system should be a tool for diagnosing build problems, with the happy side effect of actually building the target if there aren't any.
A fundamental question: should building a project require a separately downloaded build tool, or should a distribution ship with enough of the build tool included to successfully run on target systems?
I lean toward the former. You can't ship a distribution without making some assumptions about available tools. autoconf assumes working sh and make. This is a reasonable assumption on Unix systems, but not on others.
Even so, I think it makes sense for code distributions to ship with a copy of the build tool (a good reason to keep it small and simple!) so that it will run on some subset of legacy systems. One of the advantages of doing it as a functional language is that you can have an implementation optimized for size and portability, rather than performance and features. In particular, the "tiny" flavor might eschew sophisticated dependency analysis.
Drawing the lines
I think it's very difficult to figure out where to draw the lines between a build tool and the rest of the system. In particular, the overlap between the build tool and the package system is considerable.
You can think of /usr/local/ as a very ad-hoc package database. The "make install" and "make uninstall" targets of makefiles do the basic package install and uninstall. /usr/local/lib/pkgconfig/ is one effort to collect metadata about installed packages. Distributions aren't very good at the basic function of finding libraries, and they also aren't very good at maintaining a separate development sandbox, especially one that can be manipulated without root privs. That's a good reason for /usr/local to continue.
The other way works too. Your build tool could build a .deb or .rpm, and you could rely on the package system to do it right. In fact, this approach is already common on Debian for kernel installs (make-kpkg). For one, it makes hardly any sense to install a kernel as non-root.
In the long term, to do things really well will probably require parallel evolution of the package system and the build tools. That's a bit daunting; it's likely to take a lot of time.
I care deeply enough about a good build system that I've put a lot of thought into it and have built a prototype. Tom, if I get the sense that your new approach is good, I will help you with it. I'll start just by dumping some of my thoughts (this can happen in my diary).
By the way, building Ghostscript is the acid test of a build tool. If a build tool can handle Ghostscript, it can handle anything.
gary just merged the various mod_virgule trees, and the site is now running the new code. One visible consequence is the (n new) notation for article comments. Less visibly, we can now proceed with courage, knowing that adding new features isn't going to cause pain for a future merge.
There are some performance issues that crept back in. Please be patient.
rillian has been hacking jbig2dec with help from me, and the progress is encouraging. It now decodes 042_10.jb2, with symbol dictionary and text region. My guess is that it will be decoding CVision streams very soon, as that's basically all the functionality they use. Before too long, we'll be inviting people to bring test images to throw at the code.
The main challenge it faces, I think, is that there isn't a single well-designed tool, but a hodgepodge of different tools with overlapping functionality. Further, the boundaries between these tools are more or less set in stone for historical reasons.
Thus, I feel that it's possible to do a lot better than auto*. On the other hand, people aren't doing it, so from a practical point of view the question is moot.
I'm also very glad that you're thinking about how to move forward. Obviously, you have lots of practical experience that the "redo it from scratch" crowd lacks.
Another factor in your email volume is that automake has a lot of users. This speaks well to it being a useful tool, I think.
Bottom line: keep up the good work, and ignore the whiners.
Yesterday I blogged a feeling of frustration about the propagation of my trust ideas into the world. I got a great response back from Paul Crowley, indicating that he is reading and understanding what I'm doing. Thanks, that's what I wanted. Tonight, I'll write a bit more about my feelings. Feel free to skip such self-indulgent drivel, and/or to downgrade my rating.
Jeff Darcy called me on my obnoxious tone in a posting of mine from about a month ago. He's absolutely right. It's unreasonable of me to demand or expect that other people are listening to what I say. Not only that, but doing so I'd be setting myself up for disappointment big time.
Even so, I do have some feelings which I think are valid. Given the choice, I prefer two-way interactions. This blog offers many rich opportunities for such interactions, so much so that I can afford to ignore people who ignore me.
But Slashdot and SourceForge are not just blissfully ignoring me. The /. crew said disparaging and wrong things about the Advogato trust metric at a panel. SourceForge misrepresents their stupid popularity poll as being in some way an "expanded" version of the concepts from Advogato. Both of these things hurt my feelings, and also hurt the diffusion of my ideas. Now that I think about it, it's likely that the latter is the reason why I'm finding it difficult to get over the former.
I've gotten no ping replies from Googlers. They're probably all too busy computing their post-IPO net worths. Not that it matters anyway; there's no possibility of real communication, because they're all bricked up behind massive NDA walls.
Prompted by AaronSw, I found that there's a fair amount of literature on PageRank. Most of it is from "search engine optimizers", and much of it is of very low intellectual quality. The page I linked yesterday is the best of the lot; in particular, it has good references. I sent email to the author, Markus Sobek, and got a very nice reply. See, I'm not just whining about lack of exposure, I'm actually trying to do something about it :)
Anyway, I thought some today about a particular problem: what to do if you have a known attacker (someone who's trying to acquire a high PageRank for a site which doesn't deserve it). Pre-Google, you had lots of people doing this with huge keyword lists. Now it's harder, but of course it can be done. These days, when it is done, it's usually over highly-connected nets of interlinked pages, rather than individual nodes.
So let's assume you have a list of cheater nodes. How can you tweak PR to counteract the deliberate manipulation? One particular goal is to reduce the rank of all the interlinked pages, without having to carefully enumerate all the nodes. So the simplest technique, of assigning a PageRank of zero to all cheaters after running the algorithm, doesn't work well.
On the other hand, removing the cheater nodes before running PageRank is likely to work well. I'd guess this is what Google actually does.
Even so, I think there is a better solution, which is to run PageRank over the whole graph (including cheaters), but to zero out the rank of all cheaters after each iteration. Why is this better? Because adding more successors dilutes the flow of rank. Keeping the cheater node in the graph, but zeroing the rank, gives you the most dilution. This is particularly effective, I think, against these interlinked page attacks. You can express this nicely in linear algebra, and I'd expect you'd be able to analyze the improvement quantitatively, using the attack-resistance tools I'm developing in my thesis.
You might even be able to take this a step further, flowing some kind of negative rank backwards through the graph. The assumption is that, if a node links to a cheater node, its other links are also suspect, and should therefore count for less in the rank computation. Let me propose a more concrete algorithm. Flip all edges in the graph, and do a PageRank with your cheater set as the seed (E(u) in the terminology of the PageRank paper). This gives you a R(u) rank for each node in the graph. Normalize so that the cheater nodes all have unity rank.
Now, in your second pass, do the usual PageRank, but after each iteration, multiply all ranks by 1-R(u) as obtained in the first pass. This will zero out the cheater nodes, and also reduce the juice for the nodes that link them.
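The zero-each-iteration idea fits in a few lines of power iteration. The toy graph, damping factor, and iteration count below are mine for illustration; this is the simple one-pass variant, without the second reverse-flow pass.

```python
DAMPING = 0.85

def pagerank(links, cheaters=frozenset(), iters=50):
    """links: {node: [successor, ...]}. Cheater ranks are zeroed after
    every iteration, so a link into a cheater still dilutes its source's
    outgoing rank but the cheater never accumulates any."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - DAMPING) / len(nodes) for n in nodes}
        for n, succs in links.items():
            if succs:
                share = DAMPING * rank[n] / len(succs)
                for s in succs:
                    new[s] += share
        for c in cheaters:
            new[c] = 0.0  # the key step: zero the rank, keep the node
        rank = new
    return rank

graph = {
    'a': ['b', 'spam'],  # a's link to spam dilutes what b receives
    'b': ['a'],
    'spam': ['spam2'],
    'spam2': ['spam'],   # an interlinked pair of cheater pages
}
honest = pagerank(graph)
cleaned = pagerank(graph, cheaters={'spam', 'spam2'})
# The spam pair accumulates rank in the honest run but not the cleaned one,
# and 'a' still pays the dilution cost for linking to it.
```

Note the total rank no longer sums to one after zeroing; whether to renormalize, and how that interacts with the 1-R(u) damping from the reverse pass, is part of what the linear-algebra analysis would have to pin down.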
Should I patent this idea? I'm not sure.
tor asked how Fitz was coming along. Well, right now the Ghostscript 8.0 release stuff is taking priority. But I do expect to be able to devote huge chunks of time to Fitz after the 8.0 release. Right now, the ideas are simmering in my head.
raph: I did take a long hard look at ghostscript, and initially I wanted to use it as the graphics engine for my toolkit. However, I felt like I was stuck in a maze of twisty little passages, all different. Also, performance was a bit of an issue. I want analytical (as opposed to supersampled) anti-aliasing of text and graphics. I need unhinted font rendering, since I plan on using my toolkit as part of a typesetting system, where hinting, snapping to integer pixels and suchlike is a big no-no.
Yes, these are all very good things to be concerned about. Ghostscript's code complexity is a real problem, and I think it affects our commercial sales as well as free users. Analytical anti-aliasing is cool, but 4x supersampling is not that bad. Ghostscript's implementation is quite a bit faster than I expected; I thought Libart would blow it away, but it holds its own.
Unhinted font rendering is very dear to my heart, and I agree very desirable when doing typesetting. It turns out that integer-fitting is just a decision taken by Ghostscript, not a deep architectural principle. I have a draft of a patch to make GS do unhinted (and subpixel positioned) text rendering, but haven't had the time yet to polish it for release. It's on my list for the 8.0 release though (and is therefore slowing down Fitz :), because I think it will be such a nice improvement for users.
Another issue which I didn't see you mention is images. Ghostscript currently breaks them down into lots of tiny rectangles (or parallelograms, in the case of general affine transforms). This isn't all that bad a choice when printing to high resolution, but definitely limits interactive screen update performance. Performance also becomes a problem with interpolated images.
The PDF viewer was really more a test program for my drawing layer, but it grew out of proportion and all the other PDF viewers seemed lacking in one respect or another so I decided to keep going.
I agree; the Ghostscript graphics renderer is poorly tuned for interactive editing. Also, PDF viewers do suck, even though I think Ghostscript handles the rendering parts pretty well.
Fitz will address a lot of the issues you raise, but it's still pretty much vaporware. I'm looking forward to ramping the project up soon, at which point it might make sense to join forces. We should probably talk about our respective goals.