mglazer: Hypothetical question -- Would you feel any regret for your anti-arab invective if you heard that it motivated a vicious hate crime against innocent arabs?

The month so far from my last diary entry has been the most stressful of a year with many stressful months: I feel relieved just to have enough space and emotional energy to be typing this diary entry. To give a taste of my month: on or around the 17th my wife and I were challenged by four thugs on the U1 Berlin underground line, one of whom kicked me in the eye, sending me to hospital for two days. That was the second most stressful event of this month... I'm much better now, and am satisfied to say the police have a very good chance of catching them: there are three eye-witnesses, and the police (who have been great; the Berlin police have a not entirely undeserved bad reputation in Germany) want me to come in and look at photos.

**Web based proof exchange systems**:

I am still very enthusiastic about raph's work here, although I am
disappointed he has used python for the implementation. Apart from
that, I think raph's writings show excellent taste and insight. What I
think he has got right:

- Metamath is probably the simplest system good enough to do the job. I don't think Hilbert systems are the best possible systems from a proof theoretic point of view, but the problem with the more sophisticated frameworks is that none of them are yet `the right thing';
- Definitions are the hardest thing to get right in a Hilbert system based framework, and I think here is the place where it is easiest to criticise metamath. Raph seems to have a pretty good grasp of the issues here, which are riddled with subtleties, and doesn't seem bound by doing things the way the experts have always done them, which ensures that his work will be interesting whether or not it is successful.

There have been lots of interesting comments appearing on recentlog, but I've not had a chance to write any responses. I'll try to do this in the next few weeks.

graydon:

Great post!

lkcl:

Nice series of articles on windows and its free rivals. Just a
little point: just because POSIX specifies a file system that doesn't
support interesting permissions models doesn't mean that linux and
FreeBSD can't: both of them have VFS layers that allow them to do just
this sort of thing.

Wishing all advogatans a *Happy New Year*!

dan: Alan Cox's diary, November the 27th, "strange lisp people
turn up from Oxford" -- that wouldn't be you by any chance, would it?

Can you put a price on heroism?

Yes! And you can manage it sensibly or wastefully too! I've been growing irritated with the Economist over recent years (since the departure of Rupert Pennant-Rea, I think...), and I hardly buy it anymore, but the occasional article like this one keeps me checking the website...

There's an article by Thierry Coquand, A new paradox in type theory, that gives a good explanation of Reynolds's argument that polymorphism is not set-theoretic. I think this goes some way towards making important ideas about the nature of the polymorphic lambda-calculus accessible purely through online references, though a lot of important references are from the 70s and 80s and would benefit from good online summaries. I'd be grateful to hear of more references. BTW mail sent to cas@janeway.inf.tu-dresden.de will get to me; I'm quite happy that my main email address has been lost from the internet, since I don't have to spend so much time deleting spam...

jameson: Were you in Thomas Streicher's group when you were at Darmstadt? Who are you working with at Boulder? Despite my reservations, Girard's book is already something of a classic: I don't think you would have any problems convincing the librarian to try to get a copy.

Postscript: The last `Communications of the ACM' issue has as its main focus an event that I think will prove to be of great importance to computer science: the ACM has rejected licensing of software engineers, essentially because code construction is not a normal engineering task. Interesting also in this light is Phil Greenspun's article reflecting on management at Ars Digita. Conclusions? I don't think software will *ever* be a `normal' engineering discipline, the way that civil engineering is today. I do think that the `Wild West' nature of the discipline will not last, however, because I think that in the next <n> decades it will become feasible to construct large systems that are proven to satisfy a formal specification.

Thanks to raph, Michael Norrish was able to get in touch with me. Here is his message (it took me rather a long time to post this, but better late than never) which clarifies the status of HOL:

Dear Charles, I got to your Advogato diary entries through those of Raph Levien. Just a few things about HOL:

- It's not based on System F, but on a polymorphic version of Church's simple type theory, making its type system even simpler than that of ML. This also means that it's easy to give it a set-theoretic semantics.
- HOL is a direct descendant of LCF, the project that invented ML. HOL's first developer, Mike Gordon, was one of the people who worked on LCF with Robin Milner in Edinburgh and Stanford. LF is a later, mainly unrelated, project.

Regards,

Michael.

raph: I wish it was normal for most university professors to apply the same intellectual energy and curiosity that you do to fields outside their main area of enquiry... To answer the questions you ask:

- A theory of functions doesn't have a consequence relation, but
Goedel showed, in his Dialectica paper, how it is possible for a
calculus of functions to have the same mathematical strength as a proof
system. This technique is called functional interpretation, and it is
a very important technique in proof theory. Under this system, the
pure predicate calculus (ie. FOL) can be captured by the simply-typed
lambda calculus, Peano Arithmetic by the extension of the simply-typed
lambda calculus by an iterator, and Z_2 by the polymorphic lambda
calculus.
Your question was: how is it that the polymorphic lambda calculus (ie. System F) can have the same strength as Z_2? It's hard to explain without reference to Girard's proof of strong normalisation for System F, but the essential idea is that (i) the impredicative quantification over types possessed by System F is powerful because a lambda-term possessing this form of type abstraction is abstracted over its *own* type; (ii) this quantification doesn't introduce vicious circles because we can stratify the terms of the calculus by their types, assigning each term a rank based on the nesting depth of its type quantifiers, and the type system forbids application of terms of a given rank to terms of equal or higher rank. In this scheme applications of Pi^1_k comprehension in Z_2 correspond to terms of rank k. This is a difficult idea: if you really want to understand it you should read Girard's book.

- In describing the above I avoided the details about classical vs. intuitionistic systems. With classical proof systems, the functional interpretation does not proceed as directly as for intuitionistic systems, and we need to do a bit of meta-mathematical massaging to get our correspondence. The intuitionistic logics, such as the calculus of constructions that lies behind Coq, that are in direct correspondence with lambda calculi of system F-like strength obviously do not have the principle of the excluded middle; rather less obviously it is not straightforward to add the principle without making the calculus inconsistent. So in fact classical vs. intuitionistic turns out to be an important distinction. Total vs. partial functions I think is not so important, since a good definitional scheme will allow you to move between the two representations.
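The `extension of the simply-typed lambda calculus by an iterator' mentioned above is essentially Goedel's System T. A minimal sketch in Python (the function names are mine, and Python's untyped setting is only an approximation of the typed calculus) of how iteration alone generates the functions Peano Arithmetic proves total:

```python
# Goedel's System T adds to the simply-typed lambda calculus an
# iterator over the naturals: iterate(f, z, n) computes f^n(z),
# i.e. f applied n times to the starting value z.
def iterate(f, z, n):
    """Primitive iteration: apply f to z exactly n times."""
    result = z
    for _ in range(n):
        result = f(result)
    return result

# The basic arithmetic operations all fall out of iteration alone:
def add(m, n):
    return iterate(lambda x: x + 1, m, n)   # successor, n times

def mul(m, n):
    return iterate(lambda x: add(x, m), 0, n)  # add m, n times

def exp(m, n):
    return iterate(lambda x: mul(x, m), 1, n)  # multiply by m, n times
```

Each new level of the arithmetic hierarchy is reached by iterating the level below it, which is the flavour of the correspondence with PA's provably total functions.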

raph: Two points:

- If you don't check that the translation is good,
then there is every chance that either:
- the proofs you carry out in your rich system do not correspond naturally to proofs in Z_2;
- your rich system does not share the same consequence relation as Z_2;

- I can't say what the best books are, since I picked up most of what I know from fairly obscure sources. The text I have read that gives the best feel of what life is like in a proof theory based on the lambda calculus is Jean-Yves Girard's `Proofs and Types' (with appendices by Paul Taylor and Yves Lafont). This text gives many of the most important results, in particular the proof of strong normalisation for System F and Martin-Loef's demonstration that this SN result is equivalent to the consistency of SOA, but be warned that this text is challenging, cavalier about definitions and has a very non-mainstream agenda. Also the book does not explain why polymorphism is not set-theoretic; for this I recommend John Reynolds's original paper.

raph: A few points about your translation:

- You don't need to encode first-order terms as singleton second-order variables when moving from quantification over functions to quantification over predicates. Instead you can replace each quantification over functions by two (or more) quantifiers, one over predicates that occurs in the same place as the original (together with an implication to effectively restrict the quantifier to total functions), and a transformation of each equation into a derived proposition with only first-order quantifiers.
- Secondly, I am not sure, but the way you word your diary entry suggests you may not be respecting the distinction between the set of total functions and the set of functions that SOA can prove total.
- Finally, you need to check your translation actually captures the properties you are after. I think the most natural way to do this is to have translations going in each direction (say phi, psi), with the property that for each proposition P of Z_2^{fun} P<=>psi(phi(P)) and for each proposition P of Z_2^{pred} P<=>phi(psi(P)). I'd also add an informal requirement: each of phi and psi can reasonably be considered `meaning conserving', ie. they establish the same proposition in a different form.

Given that you want a Z_2-like consequence relation, I don't think either formulation is `the right thing' as a proof exchange format: the theories should have a rich set of datatypes and quantifiers, split into first order data-types of countable cardinality (finite enumeration sets, finite lists and tuples of countables, integers, rationals, polynomials, algebraic numbers, finite binary trees, etc.), and second-order data-types of uncountable cardinality (eg. infinite streams of bits, infinite streams of integers, infinite binary trees, n-ary relations over finite datatypes, n-ary functions upon finite datatypes, etc.). The point is that one can find good translations from such richer theories into the simpler axiomatisations of Z_2 that metamathematicians use.

Bram: Quite so, I'm afraid I am guilty of being careless about the terminology complexity theorists use: I come from graph rewriting where I call this the graph matching problem. Sorry for wasting your time.

graydon: Sorry for the long time in responding to your last response to me. I keep meaning to reply, but each time I start composing a diary entry, I spend far too long thinking about raph's ideas about proofs and run out of time. Just briefly, I want to explain what the serious problems are with Hehner's article, and give a reasonable definition of syntactic metaprogramming. At least I can deal with one of your points about LISP macros: a macro system by definition gives a static code transformation. It is just that it makes sense for most LISP macro systems to carry out this transformation at run-time, as happens when there is no compile time (eg. interpreters), in the context of (eval ...) statements, and when you want staged computation (eg. in dynamic compilation). Insisting on pre-processing as your source transformation system causes all kinds of tiresome difficulties when you move away from the C batch-processing paradigm.
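To make the `static transformation applied at run-time' point concrete, here is a sketch in Python rather than Lisp (the `square` macro and all the names are my invention): the rewrite works purely on syntax, but we choose to apply it at run-time, just before compiling and evaluating the result, exactly the staging an interpreter uses.

```python
import ast
import copy

# A "macro" as a static source-to-source transformation: rewrite
# every call square(e) into the expression e * e.  The rewrite sees
# only syntax trees, never values, so it is static by nature --
# even though we run it at evaluation time below.
class ExpandSquare(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        if (isinstance(node.func, ast.Name)
                and node.func.id == "square" and len(node.args) == 1):
            arg = node.args[0]
            return ast.BinOp(left=arg, op=ast.Mult(),
                             right=copy.deepcopy(arg))
        return node

def expand_and_eval(source):
    # The transformation itself is purely syntactic (static)...
    tree = ast.fix_missing_locations(
        ExpandSquare().visit(ast.parse(source, mode="eval")))
    # ...but we apply it at run-time, then compile and evaluate,
    # as a Lisp interpreter would with its macros.
    return eval(compile(tree, "<macro>", "eval"))

result = expand_and_eval("square(3) + square(4)")  # 3*3 + 4*4
```

The same `ExpandSquare` pass could equally be run ahead of time over whole files, which is the pre-processing discipline the C batch-processing paradigm forces on you.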

Bram: The subgraph isomorphism problem (what I was calling the graph
matching problem above) in its general form is known to be
NP-complete; this is a standard result. See, eg.:

[ATCH99] Algorithms and Theory of Computation Handbook, Mikhail J. Atallah, ed., CRC Press LLC, 1999.
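For concreteness, a brute-force sketch in Python of the matching problem in question (the names are mine, and real graph-rewriting tools use much better algorithms than trying every injection):

```python
from itertools import permutations

# Subgraph matching by exhaustive search: look for an injective map
# from pattern nodes to target nodes under which every pattern edge
# lands on a target edge.  The search space is all injections, which
# is why the general problem is NP-complete.
def subgraph_match(pattern_edges, pattern_nodes, target_edges, target_nodes):
    """Return an embedding of the pattern in the target, or None."""
    target = set(target_edges)
    for image in permutations(target_nodes, len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, image))
        if all((mapping[u], mapping[v]) in target
               for (u, v) in pattern_edges):
            return mapping
    return None
```

For example, a directed triangle embeds into a square with one extra edge, but not into a bare path.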

raph: web-based proof assistants, some random comments:

- I think the OBJ3 suggestion is interesting, but I don't think it is something you need to get right straightaway: I would say it was a version 2 sort of issue. What it offers is a way of modularising axiomatic frameworks in a way that gives inheritance, so that the group axiomatisation can inherit from the monoid axiomatisation. Drawbacks: it involves category theory, nontrivial implementation, and you have to think carefully about namespace issues. Trying to figure out what basic axioms most people should use for the base theory is probably the best place to start.
- Since Nat <=> Nat x Nat, the following are equivalent:
- One place predicates over Nat;
- Sets of Nat;
- Sets of Nat x Nat;
- Two place relations over Nat.

- One thing I'd like to see in a web-based proof assistant is some support for different classes of proof, with documentation, so that a proof can have a quickly-checked for-machine proof, a well-documented research-paper-like proof (also machine checkable), and an elementary long-but-in-easy-steps didactic proof, complete with URLs for background reading.
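The equivalences in the second point above rest on the bijection Nat <=> Nat x Nat. One standard witness is the Cantor pairing function, sketched here in Python (the function names are mine):

```python
import math

# Cantor pairing: enumerate Nat x Nat along anti-diagonals, giving a
# bijection onto Nat.  pair(x, y) counts the pairs on all earlier
# diagonals (a triangular number) plus the position on the current one.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of pair: recover (x, y) from z."""
    # w is the index of the anti-diagonal containing z.
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2       # first value on that diagonal
    y = z - t
    x = w - y
    return x, y
```

With this in hand, a one-place predicate over Nat, applied to `pair(x, y)`, is a two-place relation over Nat, and conversely, which is what makes the four representations listed above interchangeable.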

raph:
I'm not surprised by the quality of the responses you have
received to your recent musings on web-based proof systems:
this is a difficult area, both technically and philosophically,
and as a perfect outsider you have talked about the topic intelligently and
persuasively. I think that there is every chance that your web-based
proof-exchange protocols will prove to be successful and important.

Various comments:

- Solovay's comment about strength of ZFC and HOL, while true, may be misleading. I am not familiar with HOL, but I believe that it is based on the Girard-Reynolds polymorphic lambda calculus: the embedding of such a calculus into ZFC is not straightforward; see John Reynolds's classic paper, `Polymorphism is not set-theoretic'.
- A poster suggested thinking about translations between proof representations using category theoretic formalisms. I think this is probably a good idea. The poster also mentioned topos theory in this connection: this is possibly more work than is necessary for your purposes.
- I suggested second-order arithmetic as a good basis formalism: I think it makes life easy if most people use a single formalism, and the extra strength of ZFC is almost never necessary (also one can just add axioms to SOA to obtain equivalent strength to ZFC).
