Older blog entries for raph (starting at number 270)

Metamath and HOL

I got a nice email from Michael Norrish, suggesting that maybe HOL is not as difficult as I suspected. The core of HOL Light is only about 1000 lines of ML. That's an appealing level of simplicity. Of course, ML is particularly well suited for doing HOL-style logic, but that hardly counts as a strike against it!

Without meaning any disrespect for other projects such as Mizar, QED, Mathweb, NuPrl, etc., for the stuff I'm interested in, HOL and Metamath are definitely the two systems of interest. They're both simple, they're both founded on mature work in mathematical logic, and there's enough mathematics done in both to make comparisons useful.

Unfortunately, the two systems are different enough to pose major compatibility problems. At heart, functions are different beasts. In ZF(C) set theory, they're essentially untyped, and allowed to be partial. They can be hugely infinite, but can't take themselves as arguments. Thus, even an expression as simple and familiar as lambda x.x can't be directly expressed as a ZF set: its graph would need a pair for every set, so its domain would be the universal class V, which is itself too big to be a set.

HOL functions, on the other hand, are typed, but total within that type. Thus, the function lambda x. 1/x is problematic. Harrison's construction of the reals in HOL arbitrarily takes 1/0 = 0. The trick of polymorphic types lets you apply a function to itself, with just enough limitations to avoid the paradoxes. So lambda x.x is back in the club.

I get the feeling that there's been work on connecting the two flavors of formal system, but it seems technical and difficult to me. This is disappointing.

In any case, I feel this interest in logic burning itself out for now. I'm sure I'll come back to it later; it's been an interest of mine for a long time. I still feel that a Web-centered distributed proof repository could be a very interesting thing, and that the tools I'm comfortable with could be applied well. But what I do not need right now is another big project.

Ghostscript

The big DeviceN merge happened, and it's not too bad. There are still some regressions, but they're settling down. A big chunk of my recent work has been putting together a framework to test whether 6-color DeviceN rendering is consistent with plain CMYK. I'm becoming an even firmer believer in testing, and I think that graphics rendering systems are particularly suitable for automated testing.
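
I don't know the final shape the framework will take, but the flavor of such a consistency check is easy to sketch. A minimal sketch, assuming the same test job has already been rendered once through the DeviceN path and once as plain CMYK into the two (hypothetical) TIFF files named below:

from PIL import Image

def max_channel_diff(path_a, path_b):
    # Compare two renderings channel by channel; any large deviation
    # means the DeviceN machinery changed the composite output.
    a = Image.open(path_a).convert("CMYK")
    b = Image.open(path_b).convert("CMYK")
    assert a.size == b.size, "renderings disagree on geometry"
    worst = 0
    for pa, pb in zip(a.getdata(), b.getdata()):
        worst = max(worst, max(abs(ca - cb) for ca, cb in zip(pa, pb)))
    return worst

# Insist the two paths agree to within one code value per channel.
assert max_channel_diff("devicen_render.tif", "cmyk_render.tif") <= 1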

Fitz

I'm craving a chunk of time to sit down and write Fitz design ideas, but with the Ghostscript 8.0 release process taking all my time, it's not been easy. Even so, some aspects of the design seem to be coming into relatively clear focus, even as details on things like font and text rendering seem elusive. I'm particularly excited about the design of the runtime discipline. In particular, it should be a lot easier to understand and integrate with other systems than the current Ghostscript runtime discipline. I don't see a lot of work on conscious design for runtimes; more commonly, designs inherit runtimes from somewhere else (language, library, predecessor). As such, I'm hoping that the runtime document will make interesting reading even for people not in the market for a graphics rendering library.

Quote-plus

jeremyw is correct, of course. Simply saying "permission to quote is granted" or something similar is almost as concise, and far clearer to most people.

Metamath

The second big problem with Metamath is the fact that it nails down specific constructions. It's important to note, however, that this is a characteristic of the set.mm database, not the Metamath framework itself.

The solution to this problem, I think, is to use "theory interpretation." Instead of saying, for example, "0 is the empty set, and x + 1 is x U {x}", you say, "for all sets N and operations +, -, *, / satisfying the usual axioms of arithmetic, ...". Then, if you ever need to show that such a thing exists, for example to use arithmetic as a temporary while proving something else, you can use the standard construction, or any other construction for that matter.
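
As a sketch in ordinary logical notation rather than Metamath's ASCII, a theory-interpreted theorem phi would take the form

\forall N \, \forall{+} \, \forall{\cdot} \; ( \mathrm{PA}(N, {+}, {\cdot}) \rightarrow \varphi(N, {+}, {\cdot}) )

where PA is a stand-in predicate (the name is mine, for illustration) bundling up the arithmetic axioms. To apply the theorem, you discharge the hypothesis by exhibiting any model, such as the standard finite-ordinal construction.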

The question is how much more awkward this will make proofs. Metamath's choice of a definite construction is good for overall simplicity. I guess there's really only one way to find out.

Quote-plus

It is considered good etiquette not to quote email without permission. However, these days, emails are often part of a broader discussion spanning blogs, web fora, and so on. It's increasingly easy to run afoul of this etiquette rule. Conversely, it seems awkward to say something like "permission granted to quote" when I respond to, say, a blog entry in an email.

Thus, I propose the two-character string "+ as a shorthand indicating that permission to quote, with attribution, is granted. Permission is also granted to integrate the email into any copylefted documentation (most copylefts do not require attribution).

Analogously, "- is a request that the email not be quoted, perhaps because it contains confidential or sensitive information. Lastly, when someone is known to be familiar with the convention, "? asks whether quoting is permitted.

Spread the meme! "+

Metamath

Bram pointed me to this thesis on implementing real numbers within HOL. I heartily recommend this thesis to people following this thread (if any). It's very interesting to compare to Metamath's construction of the reals. Unfortunately, these constructions are not compatible. One significant difference is that Metamath seems to support partial functions, so division is partial and 1/0 is simply undefined, while HOL wants functions to be total within their type, so 1/0 must have some value (Harrison chooses 0 to simplify the details). As such, proofs from one probably can't be easily ported to the other without serious handwork.

I feel I understand Metamath reasonably well now. It has some issues, but its overwhelming strength is that it's simple. For example, I believe that a fully functional proof verifier could be done in about 300 lines of Python. I wonder how many lines of Python a corresponding verifier for HOL would be; I'd guess around an order of magnitude larger. That kind of difference has profound implications. Norm Megill is certainly to be commended for the "simplicity engineering" he's put into Metamath.
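
To give a flavor of why so few lines should suffice, here is a toy sketch of my own (not Metamath's actual code) of the stack discipline at the heart of proof checking; a real verifier would add parsing, scoping, disjoint-variable checks, and compressed proofs:

def apply_subst(expr, subst):
    # expr is a list of symbols; variables expand to lists of symbols.
    out = []
    for tok in expr:
        out.extend(subst.get(tok, [tok]))
    return out

def verify(proof, hypotheses, assertions):
    # proof: list of labels, each naming a hypothesis or a prior assertion.
    # hypotheses: label -> statement (a list of symbols) assumed in context.
    # assertions: label -> (floating_hyps, essential_hyps, conclusion), where
    # floating_hyps is a list of (typecode, variable) pairs like ('wff', 'ph').
    stack = []
    for label in proof:
        if label in hypotheses:
            stack.append(hypotheses[label])
            continue
        f_hyps, e_hyps, conclusion = assertions[label]
        n = len(f_hyps) + len(e_hyps)
        assert len(stack) >= n, "stack underflow"
        args = stack[len(stack) - n:]
        del stack[len(stack) - n:]
        # Bind each floating hypothesis's variable to a stack entry...
        subst = {}
        for (typecode, var), entry in zip(f_hyps, args):
            assert entry[0] == typecode, "type mismatch"
            subst[var] = entry[1:]
        # ...then check the essential hypotheses under that substitution.
        for e_hyp, entry in zip(e_hyps, args[len(f_hyps):]):
            assert apply_subst(e_hyp, subst) == entry, "hypothesis mismatch"
        stack.append(apply_subst(conclusion, subst))
    return stack  # a valid proof of T leaves exactly [T] here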

For the purpose of doing Web-distributed proofs, Metamath has a few shortcomings. I think they can be fixed, especially given the underlying simplicity. I'll talk about these problems and possible fixes over the next few days.

Definitions in Metamath have two closely related problems. Definitions are introduced exactly the same way as axioms. As such, it's far from obvious when a definition is "safe". For example, you could add definitions for the untyped lambda calculus, which would introduce the Russell set paradox. The second problem is that there is a single namespace for newly defined constants. You wouldn't be able to combine proofs from two different sources if they defined the same constant two different ways.

Here's my proposal to fix these problems. Choose a highly restricted subclass of definitions that is clearly safe. For example, you could say that any definition of the form "newconst x y z = A" or "newconst x y z <-> phi", with newconst not appearing in A or phi, is acceptable. I propose to introduce new syntax that clearly identifies such definitions. You could use existing syntax, so that such definitions become axioms, but can be checked easily, or you could have other syntax that sets the new constant apart from its "macro expansion". That's a style preference.
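
If memory serves, many of set.mm's existing definitions already fit these schemas. For instance, the definition of the existential quantifier has the second form, with the new constant E. appearing only on the left:

df-ex $a |- ( E. x ph <-> -. A. x -. ph ) $.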

Now let's talk about namespaces. I have a strong preference for using hashes as global names, because (assuming the hash function is strong), you don't get collisions. As such, it should be possible to mix together arbitrary proofs without danger. Here's an outline proposal.

Take the definition axiom, and replace the newly defined constant with some token, say $_. Hash the result. That is the "global name". When you're developing proofs, you'll probably want a (more or less) human-readable "pet name", but this is actually irrelevant for verification. Here's an example in Metamath notation.

Here's Metamath's definition of the empty set:

$( Declare the symbol for the empty or null set. $)
$c (/) $. $( null set $)

$( Extend class notation to include the empty set. $)
c0 $a class (/) $.

$( Designate x as a set variable for use within the null set definition. $)
$v x $.
$f set x $.

$( Define the empty set. $)
dfnul2 $a |- (/) = { x | -. x = x } $.

So here's what gets hashed:

$a class $_ $. $f set x $. $a |- $_ = { x | -. x = x } $.

Take the SHA-1 hash of this string. Then I propose that #274b1294a7d734a6e3badbf094190f46166159e4 can be used (as both a label and a constant, as these namespaces are independent) whenever the empty set is needed. A proof file would of course bind this string to a shorter name, such as (/). When one proof file imports another, such bindings would remain local to each file. (Currently, Metamath has only a file include facility similar to C's preprocessor #include, but an import facility with better namespace management would be quite a straightforward addition, especially considering that Metamath already has ${ $} scoping syntax.)

Obviously, there are some details to be worked out, particularly nailing down exactly what gets hashed, but I think the idea is sound.
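
As a hedged illustration of the mechanics only (the canonicalization below is made up, so its digest won't necessarily match the hash quoted above):

import hashlib

def global_name(definition_fragments):
    # Join the $_-form fragments, collapsing runs of whitespace; the precise
    # canonical form is exactly the detail that still needs nailing down.
    canonical = " ".join(" ".join(frag.split()) for frag in definition_fragments)
    return "#" + hashlib.sha1(canonical.encode("ascii")).hexdigest()

print(global_name([
    "$a class $_ $.",
    "$f set x $.",
    "$a |- $_ = { x | -. x = x } $.",
]))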

Schooling

Alan's Mindstorms arrived a couple of days ago. These promise to be quite fun (and of course educational :). So far, he's settling into first grade very easily. The half-homeschooling begins on Monday.

Even so, I get the sense that Max is going to be the one most into computers. He's learning the OS X interface impressively well. The last time we woke the computer, a folder was highlighted, and he said, "it's clicked." Then, when I ran Stuffit to unpack a game demo, he noted the icon and said, "it's squishing it." He's also the one who said, "I love Alan's 'puter".

Spam

Paul Graham has a new piece arguing that spam is different. He also includes my comment in response.

I have great hopes for word-based (or "Bayesian") scoring. Paul's arguments for why it will work seem convincing to me. In particular, keeping per-user word lists should help enormously with two big problems: the ability for spammers to "optimize" around a common scoring scheme; and differing opinions about what constitutes spam.
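
For concreteness, here is a toy sketch of the scoring step, with simplifications of my own: the per-user table p_spam maps each word to the estimated probability that a message containing it is spam, unseen words default to 0.4, and every token is combined (rather than only the most telling few):

import math

def spam_score(tokens, p_spam):
    # Combine per-word spam probabilities, assuming independence.
    probs = [p_spam.get(t, 0.4) for t in tokens]
    prod = math.prod(probs)
    inv = math.prod(1.0 - p for p in probs)
    return prod / (prod + inv)  # probability the message is spam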

I still think there may be a role for trust, but it's also possible that scoring by itself will work so well that adding trust isn't really necessary. In any case, I'll do my best to write up my ideas for using trust to beat spam.

Trust

Bram and I had a great discussion about anti-certs. He is exploring the space of adding anti-certs to the trust computation, but is finding it complex. Many of the proposals would seem to require vastly greater computation than the simple eigenvector and network flow models I've proposed.

The leading alternative to that seems to be a way to use anti-certs as input to a process which removes positive certs from the graph. If you disagree with the ratings of user X, it might be interesting to analyze the influence of X on ratings transmitted to you through the graph, then remove your local edges which carry the most influence from X. In general, this boils down to optimizing edge weights. Lastly, anti-certs don't have to be about individual users (nodes in the graph). They can be about ratings you disagree with. You don't really have to know where the bogus ratings come from, as long as you know how to tune your local edges to minimize them.
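
Here is a brute-force sketch of that edge-removal idea, under assumptions of my own: trust propagates PageRank-style from you as seed, and an edge's influence is estimated simply by deleting it and re-running the computation (far more work than the real thing should need):

def trust_vector(edges, seed, damping=0.85, iters=50):
    # edges: {node: list of nodes that node certifies}.
    nodes = set(edges) | {m for outs in edges.values() for m in outs}
    t = {n: 1.0 if n == seed else 0.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) * (1.0 if n == seed else 0.0) for n in nodes}
        for n, outs in edges.items():
            for m in outs:
                nxt[m] += damping * t[n] / len(outs)
        t = nxt
    return t

def most_influential_edge(edges, me, x):
    # Which single one of my cert edges carries the most trust to X?
    base = trust_vector(edges, me).get(x, 0.0)
    best = None
    for nbr in edges.get(me, []):
        pruned = dict(edges)
        pruned[me] = [n for n in edges[me] if n != nbr]
        drop = base - trust_vector(pruned, me).get(x, 0.0)
        if best is None or drop > best[1]:
            best = (nbr, drop)
    return best  # (neighbor to consider dropping, trust mass it carried)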

As always, a big part of the challenge is presenting a sensible UI. I've made the Advogato UI for certs as simple as I know how, and the user community is supposed to be sophisticated, yet it seems that many people can't manage to do certs and ratings the way they're supposed to. Bram's anti-cert UI would be straightforward to implement: in addition to "Master, Journeyer, and Apprentice", you'd just add one or more negative categories.

Metamath

I have more insight into the lambda definition from yesterday. I believe that the definition of (lambda x x) is correct - it should be equivalent to the identity relation defined in Metamath. However, it is quite definitely a class and not a set. As a result, trying to use it as a function argument just yields the empty set.

This is probably a reasonable way out of the logic thicket. The problem is that the untyped lambda calculus is subject to the Russell set paradox.

I was talking about this with Bram, and he called ZF set theory a "hack." I more or less agree, but playing with it in the context of Metamath has led me to appreciate how powerful a hack it is. With a small number of relatively simple axioms, it gets you a rich set of infinities, but avoids the particular ones that bring the whole formal system crumbling down. You get integers, reals, tuples, sequences (finite and infinite), transfinites, and functions from set to set. You don't get untyped lambda calculus. Overall, it's probably a good tradeoff.

Metamath

I believe that the formulation of lambda I was wondering about yesterday is:

$d y A $.
df-lambda $a |- ( lambda x A ) = { <. x , y >. | y = A } $.

This lets me prove such things as ( ( lambda x x ) ` y ) = y. I'm probably not at the full power of the lambda calculus, though, because here "y" is required to be a set, while a lambda term is a class. So it's not clear I can apply this lambda term to itself.

School

Tomorrow is Alan's first day of 1st grade. It promises to be exciting.

Metamath

I've been chewing over Metamath in my head some more. It's very appealing in lots of ways, especially its simplicity, but a few things need to be done differently, I think, to make it work as a distributed, web-based proof repository.

The biggest problem is namespaces. On the web, everybody has to be able to define their own local namespace, while also being able to refer to other things in the world. Metamath, by contrast, uses short, cryptic, and impermanent names for its theorems. On that last point, set.mm includes this note: "While this file is complete and correct, it may undergo revisions from time to time (including theorem name changes, which means any new theorems you add may not always remain compatible)." That won't do.

Actually, I'm pretty convinced by now that the best name for a theorem is the theorem itself. If the size of the name is a concern, a hash can be substituted. However, an advantage of using the whole theorem is that, if a correct proof cannot be found on the web, someone can probably prove it by hand.

The namespace of newly defined keywords is also problematic. In fact, Metamath does not automatically check whether definitions are ambiguous. Having a definition collision could lead to contradiction, which would be bad. For definitions that simply expand as macros, I suppose you could use the expansion (or a hash thereof) as the permanent name. However, not all definitions in Metamath have this form.

The second big problem, as I've mentioned earlier, is whether to keep theorems abstract, or pin them down to a single model. Metamath does the latter, which helps keep theorems simple, but it also makes them nonportable to other models. I think it's possible to do abstract theorems in Metamath (you quantify over all possible models that satisfy the axioms), but probably notationally very cumbersome. Even so, it's similar to the way a lot of mathematics is done: you see a lot of theorems over, say, fields, without pinning down which field is meant.

I'm still not sure whether formulae should be represented as strings of symbols (subject to implicit parsing through wff rules), or as trees. My gut feeling is the latter, because it sidesteps the issue of whether the parsing grammar is ambiguous. The former is a slightly simpler representation, though, and also closer to human-readable mathematics.

There are a few stylistic choices which are unusual, but probably reasonable. For one, the only primitive inference rules are modus ponens and generalization. Equality and substitution are handled through theorems just like everything else. For another, instead of classifying variables as "free" or "bound", Metamath just places distinctness constraints on variables. A lot of the primitive operations such as substitution are cleverly defined to place no more distinctness constraints than necessary.

By the way, does anyone have a formal spec for lambda? Here's what I have, but I don't know if it's right:

( lambda x A ) = { y | y = E. z <. z , [ x / z ] A >. }
y,z,A distinct

Project death

AaronSw notes the demise of Pepper, a shareware text editor. Apparently, the author is tired of it.

This highlights one of the major differences between proprietary and free software. People lose the motivation to work on projects all the time. But free software projects are like extremely hardy seeds; they can dry up and last a long time, and when a fertile environment comes along, sprout and flourish. When the author has particular rights over the code, a big risk is that it can really die.

The history of ncurses might be illuminating on this subject. Throughout its early history, its license wasn't quite free, and in fact the original author basically had veto rights. It would be really interesting for someone to do a critical history of this project. There was a lot of strife surrounding it. If you read "Homesteading the Noosphere," it might help to know that ncurses provided much of the background for the discussion.

Bytecode interpreters

One of the interesting threads (pphaneuf McCusker JDarcy) that's been bouncing around lately is the design of bytecodes for interpretation. It's a different problem than instruction set design for CPUs. There, RISC is a win because a stripped down instruction set can support higher clock speeds. But for a bytecode interpreter, you're going to have a certain amount of overhead to go through an iteration of the interpreter main loop, so you might as well make it do more.

But a lot of this argumentation leaves me unsatisfied. If you've got a bytecode interpreter in a system, it's a dead giveaway that you're optimizing for something other than raw performance, for example overall system complexity or portability. Both are worth optimizing for, of course.

The stack machine vs. memory transfer machine argument is interesting. I think it comes down to where you're willing to put your complexity. If the compiler from source language to bytecode must be simple, then stack machines are a win, in large part because they support arbitrary depths of expression nesting without any special case code. With memory transfer, you either have to deal with variable sized fields for representing the memory cell (and thus variable sized instructions), or you have to implement spilling when depth goes above (say) 256.
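
To make the contrast concrete, here is a toy pair of interpreters with invented instruction sets, running (a + b) * c both ways; the stack form needs no operand bookkeeping, while the register form must name its temporaries (and would need spill logic past a fixed cell count):

def run_stack(code, env):
    # Operands are implicit: instructions push and pop an operand stack.
    stack = []
    for op, *args in code:
        if op == "push":
            stack.append(env[args[0]])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_regs(code, env):
    # Operands are explicit cells: fewer dispatches, fatter instructions.
    regs = dict(env)
    for op, dst, x, y in code:
        if op == "add":
            regs[dst] = regs[x] + regs[y]
        elif op == "mul":
            regs[dst] = regs[x] * regs[y]
    return regs[dst]  # result of the last instruction

env = {"a": 2, "b": 3, "c": 4}
print(run_stack([("push", "a"), ("push", "b"), ("add",),
                 ("push", "c"), ("mul",)], env))          # 20
print(run_regs([("add", "r0", "a", "b"),
                ("mul", "r1", "r0", "c")], env))          # 20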

On the other hand, if you're trying to ship virtual machines in tiny embedded devices, then it makes a lot of sense to keep the VM as simple as possible, even at a cost of more sophistication in the compiler.

I note a number of interesting memory-transfer based virtual machines, including Parrot and MMIX. Virtual machine design is a delicate art. I'm happy to see it undergoing something of a renaissance.

Alan

School starts on Monday, and we're starting to gear up. For those of you joining late, we have a hybrid plan. He'll do half a day in public school (1st grade), and half a day at home. For the math-and-science part of the curriculum, one of the things I plan is Mindstorms. We ordered the starter set today. I'm pretty excited; it looks cool.

Ostensibly, Heather will cover language arts and I'll do the Spock stuff, but ironically he started on an illustrated story this evening with me. It's called "Cretaceous Park 1". Ok, so it's a bit derivative, but we're having fun. The words flow easily, and then he puts a great deal of effort and concentration into drawing and coloring the accompanying pictures. I'll probably scan it and put it on the net when he's done.

Max

Max's language development continues to be amazing to behold. He's able to communicate very clearly now. When Alan had a tantrum a few days ago, Max picked up the Nerf rocket launcher and said to us, "I'm going to shoot him." We burst out laughing, of course. Other samples include "I want to press the button on Alan's toy" and "I like your shirt. It has fire on it."
