Older blog entries for chalst (starting at number 49)

1 Nov 2002 (updated 22 Feb 2005 at 08:17 UTC) »
raph: A few points about your translation:
  • You don't need to encode first-order terms as singleton second-order variables when moving from quantification over functions to quantification over predicates. Instead you can replace each quantifier over functions by a quantifier over predicates in the same position (together with an implication that effectively restricts the quantifier to total, functional relations), and transform each equation into a derived proposition with only additional first-order quantifiers.
  • Secondly, I am not sure, but the way you word your diary entry suggests you may not be respecting the distinction between the set of total functions and the set of functions that SOA can prove total.
  • Finally, you need to check that your translation actually captures the properties you are after. I think the most natural way to do this is to have translations going in each direction (say phi and psi), with the property that for each proposition P of Z_2^{fun}, P <=> psi(phi(P)), and for each proposition P of Z_2^{pred}, P <=> phi(psi(P)). I'd also add an informal requirement: each of phi and psi can reasonably be considered `meaning conserving', ie. they establish the same proposition in a different form.
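To make the first point concrete, here is a sketch in symbols (my notation, not from the original entry): a function quantifier becomes a predicate quantifier relativised to total, functional relations, and equations become atomic formulas of the relation:

```latex
\exists f.\,\varphi(f)
  \;\rightsquigarrow\;
\exists R.\,\bigl(\forall x\,\exists! y.\,R(x,y)\bigr) \wedge \varphi^{*}(R)
```

where \varphi^{*} replaces each atom f(t) = s by R(t,s), introducing fresh first-order quantifiers for nested applications: g(f(x)) = y becomes \exists z.\,R_f(x,z) \wedge R_g(z,y).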
The point I made in my last diary entry about the two formulations of Z_2 being `trivially' the same was not to say the translations don't have tricky details, but rather that there is no real conceptual distance between the two formulations: we can give translation functions that have only linear growth in the size of propositions, satisfy the above criteria, and induce exactly the same consequence relation on first-order terms. Such translations cannot be found between Z_2 and systems based on the polymorphic lambda-calculus, even though some of the latter class of systems have the same provability strength as the former. In other words, you have to choose between Z_2-like consequence relations and Coq-like consequence relations right at the start. I think that Z_2 is the right choice if you care about mathematics more than metamathematics, and about proving existing programs correct rather than automatic program extraction.

Given that you want a Z_2-like consequence relation, I don't think either formulation is `the right thing' as a proof exchange format: the theories should have a rich set of datatypes and quantifiers, split into first order data-types of countable cardinality (finite enumeration sets, finite lists and tuples of countables, integers, rationals, polynomials, algebraic numbers, finite binary trees, etc.), and second-order data-types of uncountable cardinality (eg. infinite streams of bits, infinite streams of integers, infinite binary trees, n-ary relations over finite datatypes, n-ary functions upon finite datatypes, etc.). The point is that one can find good translations from such richer theories into the simpler axiomatisations of Z_2 that metamathematicians use.

Bram: Quite so, I'm afraid I am guilty of being careless about the terminology complexity theorists use: I come from graph rewriting where I call this the graph matching problem. Sorry for wasting your time.

graydon: Sorry for taking so long to respond to your last reply. I keep meaning to answer, but each time I start composing a diary entry, I spend far too long thinking about raph's ideas about proofs and run out of time. Just briefly, I want to explain what the serious problems are with Hehner's article, and to give a reasonable definition of syntactic metaprogramming. At least I can deal with one of your points about LISP macros: a macro system by definition gives a static code transformation. It is just that it makes sense for most LISP macro systems to carry out this transformation at run-time, as happens when there is no compile time (eg. interpreters), in the context of (eval ...) statements, and when you want staged computation (eg. in dynamic compilation). Insisting on pre-processing as your source transformation system causes all kinds of tiresome difficulties when you move away from the C batch-processing paradigm.

25 Oct 2002 (updated 3 Aug 2004 at 18:11 UTC) »
Bram: The graph isomorphism problem in its general form is known to be NP-hard; this is a standard result. See, eg.:
[ATCH99] Algorithms and Theory of Computation Handbook, Mikhail J. Atallah, ed., CRC Press LLC, 1999.

raph: web-based proof assistants, some random comments:

  1. I think the OBJ3 suggestion is interesting, but I don't think it is something you need to get right straightaway: I would say it is a version 2 sort of issue. What it offers is a way of modularising axiomatic frameworks that gives inheritance, so that the group axiomatisation can inherit from the monoid axiomatisation. Drawbacks: it involves category theory, nontrivial implementation, and you have to think carefully about namespace issues. Trying to figure out what basic axioms most people should use for the base theory is probably the best place to start.
  2. Since Nat <=> Nat x Nat, the following are equivalent:
    • One place predicates over Nat;
    • Sets of Nat;
    • Sets of Nat x Nat;
    • Two place relations over Nat.
    Since functions Nat => Nat are a subspace of the two place relations over Nat, and two place relations are isomorphic to the functions Nat x Nat => {0,1}, moving between the two representations for Z_2 is pretty trivial (with the right definitional framework).
  3. One thing I'd like to see in a web-based proof assistant is some support for different classes of proof, with documentation, so that a proof can have a quickly-checked for-machine proof, a well-documented research-paper-like proof (also machine checkable), and an elementary long-but-in-easy-steps didactic proof, complete with URLs for background reading.
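The bijection Nat <=> Nat x Nat behind point 2 can be exhibited concretely; the Cantor pairing function is one standard choice (my example, not necessarily the encoding intended above):

```python
def pair(x: int, y: int) -> int:
    """Cantor pairing: a bijection from Nat x Nat to Nat."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of pair, recovering (x, y) from z."""
    # Find the largest w with w*(w+1)/2 <= z (fine for modest z;
    # huge inputs would want an integer square root).
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# Via the pairing, a two-place relation over Nat is the same data as a
# set of Nat: encode each pair (x, y) as the single number pair(x, y).
def relation_to_set(rel):
    return {pair(x, y) for (x, y) in rel}
```

With such a coding fixed once and for all, moving between sets of Nat, sets of Nat x Nat, and two-place relations is pure bookkeeping, which is the sense in which the two representations of Z_2 coincide.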

raph: I'm not surprised by the quality of the responses you have received to your recent musings on web-based proof systems: this is a difficult area, both technically and philosophically, and as a perfect outsider you have talked about the topic intelligently and persuasively. I think there is every chance that your web-based proof-exchange protocols will prove to be successful and important.

Various comments:

  • Solovay's comment about the strength of ZFC and HOL, while true, may be misleading. I am not familiar with HOL, but I believe that it is based on the Girard-Reynolds polymorphic lambda calculus: the embedding of such a calculus into ZFC is not straightforward; see John Reynolds's classic paper, `Polymorphism is not set-theoretic'.
  • A poster suggested thinking about translations between proof representations using category theoretic formalisms. I think this is probably a good idea. The poster also mentioned topos theory in this connection: this is possibly more work than is necessary for your purposes.
  • I suggested second-order arithmetic as a good basis formalism: I think it makes life easy if most people use a single formalism, and the extra strength of ZFC is almost never necessary (also one can just add axioms to SOA to obtain equivalent strength to ZFC).

advotake: A nice quick hack: use expect as an intelligent XML-RPC client to post Advogato diary entries.

redi's proposed hack: I like the idea. I think `redirect' is a better name, though...

graydon: I think the thing I have enjoyed most in our past communications is discovering just how differently it is possible for an intelligent and well-informed computer scientist to view the field of programming languages. From some of your past posts, in no particular order:

  • TCL: Actually Tcl is one of my favourite languages (in the guise of Don Libes's `expect'). I just don't think that string manipulation is `real' syntactic metaprogramming. To my taste, Alan Bawden's `Quasiquotation in LISP' article presents the definitive case against string manipulation as metaprogramming.
  • Hehner's introduction to Goedel's proof at his website is the work of an incompetent. Maybe he just had a bad day...
  • I've had a look at camlp4 (at last): it's a nice solution to a problem that doesn't arise in languages like LISP that don't get fixated on `source code' vs `object code'. Because in LISP-like languages (read <file>), (macro-expand <s-exp>) and (eval <s-exp> <environment>) are just normal programs, the semantics of the language need make no distinction between read time vs run time, or between interpreted code and compiled code. It confuses beginners, but once you've grokked the basic idea it makes life much simpler.
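The simplicity once you've grokked the basic idea can be gestured at even outside LISP: in Python (my illustration, nothing from the diary), parsing, tree transformation and evaluation are ordinary functions on ordinary values, a pale but runnable analogue of (read ...), (macro-expand ...) and (eval ...):

```python
import ast

# 'Read': turn source text into a syntax tree, an ordinary data structure.
tree = ast.parse("x * x + 1", mode="eval")

# 'Macro-expand': transform the tree like any other value. This toy
# transformer renames every occurrence of `x` to `y`.
class RenameX(ast.NodeTransformer):
    def visit_Name(self, node):
        if node.id == "x":
            return ast.copy_location(ast.Name(id="y", ctx=node.ctx), node)
        return node

expanded = ast.fix_missing_locations(RenameX().visit(tree))

# 'Eval': compile and run the transformed tree in an environment we choose.
result = eval(compile(expanded, "<ast>", "eval"), {"y": 4})
print(result)  # prints 17
```

In LISP all three stages operate on the very s-expressions the programmer writes, which is the real point; the Python version has to detour through a separate AST type.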

raph: I very much like the way your ideas are going with web-based proof assistants. Some thoughts:

  • On proofs and code - I recommend reading Euclid's Elements; there are a lot of parallels between what you say and the way he goes about constructing his arithmetic and geometry;
  • I think tactics were invented by the LCF-ers, not the HOL-ers. I suppose HOL is based on Edinburgh LCF (the chaps who invented ML in the 1970s);
  • I think the `right' logic for most purposes (ie. the axiomatic framework we might call the `east pole' on the `more mathematical' scale of Wiedijk's graph) is classical second-order arithmetic. It is probably the most intensively studied strong proof theory in mathematics: it pretty much immediately encodes all weaker systems, and most stronger systems can naturally be expressed in it.

Bram: You can protect against this kind of thing (buffer overflows, character set attacks, etc.) on either the server or client side by applying the `command' design pattern and then embedding the control language in your favourite tcl-alike.
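A minimal sketch of that command-pattern idea (all names here are mine, invented for illustration): the client may only select commands from a closed registry and passes arguments as data, so hostile strings never reach an interpreter:

```python
# Command pattern for an embedded control language: requests name a
# command from a closed registry; arguments travel as plain data and
# are validated at the boundary, never spliced into code.

class Command:
    def execute(self) -> str:
        raise NotImplementedError

class Greet(Command):
    def __init__(self, name: str):
        self.name = name[:40]          # validate/normalise at the boundary
    def execute(self) -> str:
        return f"hello, {self.name}"

class Quit(Command):
    def execute(self) -> str:
        return "bye"

REGISTRY = {"greet": Greet, "quit": Quit}   # the closed command language

def dispatch(verb: str, *args: str) -> str:
    cls = REGISTRY.get(verb)
    if cls is None:
        raise ValueError(f"unknown command: {verb!r}")
    return cls(*args).execute()
```

The registry of command objects is then the only surface the embedding language (your favourite tcl-alike) ever sees.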

http://www.advogato.org/person/Roger/: Is there a clean correspondence between SlideML and Slatex?

suse->debian: I've converted my laptop from a Suse 6.4 box to a stable debian woody. Pretty smooth, but a few lumps are proving difficult to iron out:

  • My X server crashes when I suspend. An ugly apmd vs. XFree86 vs. Trident vs. God knows what else conflict.
  • I can't get video output and laptop display from the same client plane.
  • Debconf has put an ugly network environment script in my init.d directory which I can't track down.
  • I have a funny DNS vs. wwwoffle conflict.
  • The usual round of trouble with .Xresources being handled differently by every other program...
Note how 60% of my troubles are caused by X misconfigurations...

reading: Wittgenstein's Lectures on the Foundations of Mathematics. I'm a glutton for punishment...

work: is going well |-D

Bram: It's a mistake to infer the existence of large cardinals from the consistency of large cardinal set theory. I agree that it is likely that large cardinal set theory is consistent, but the falseness of `consistency implies existence' follows from thinking about theories like PA+not(con(PA)): if PA is true then it is consistent, and so is PA+not(con(PA)) (by Goedel's second incompleteness theorem), but surely this theory isn't true!
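Spelled out (my reconstruction of the argument), the two steps run:

```latex
\begin{align*}
\mathrm{PA}\ \text{consistent}
  &\;\Longrightarrow\; \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
  && \text{(G\"odel's second incompleteness theorem)}\\
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
  &\;\Longrightarrow\; \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})\ \text{consistent}
  && \text{(a contradiction would yield } \mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA}))
\end{align*}
```

Yet PA + not(con(PA)) is false in the standard natural numbers, since PA is true there and hence consistent. So consistency of a theory cannot by itself license belief in its truth.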
Back from Dresden, where I took part in a two-week workshop on `Proof Theory and Computation' and gave a five-lecture course on `Harmonic type theory' (my course was more or less an attempt to give a philosophical foundation-for/critique-of the formulae-as-types correspondence). I found the other lectures very interesting, especially those of Francois Lamarche, who has made me take seriously the `Geometry of Computation' thesis (due to the linear logic crowd), which, roughly, is that computational activity is fundamentally geometric in nature, and that we should formulate our computational calculi in a way that respects the topological structure of computation.

Goingware's `Make a bonfire of your reputations': I liked reading this mini-essay very much, although I am not persuaded by it. Why not? Well, I think sometimes other people's opinions, though wrong, are threatening enough that you shouldn't challenge them. Two extreme cases make the point: at the height of Stalin's power, speaking out was quite simply suicide. `Making a bonfire of your reputations' and challenging cruelty and injustice was not a wise course of action for any Russian who put value on their life. Much better to keep silent, bide your time, and do what little things you can without endangering yourself.

A rather different example would be the position of a full-time Scientologist who has come eventually to the realisation that their `Church' is in fact a psycho-terrorist mafia. Here, the power that threatens those who speak out is not lethal, but rather the ability of the organisation to destroy its heretics economically and psychologically.

What's the conclusion? Well, I think to be truly happy with yourself you need to feel the power to do and say what you think is right regardless of whom you might annoy, but it really is the case that the other things in life you might lose by doing this (above: your life, your sanity and your security!) might be more important still.

World cup

The race goes not to the swift nor the battle to the strong, nor does food come to the wise nor wealth to the brilliant nor favor to the learned; but time and chance happen to them all (Ecclesiastes 9:11).

...but even so the three teams I'm supporting (in order, England, Germany and Senegal) are all through to the quarter final :>.

graydon: It's good to see you posting diary entries again!

I agree that Paul Graham's writings have an excessively partisan flavour, but to defend the superiority of LISP macros over its alternatives:

  • LISP macros are not preprocessor macros: there is flexibility over when macros are expanded, and indeed macro expansion can be done at run-time using `eval'.
  • I'm surprised to see you think TCL `satisfies the menu': I think string manipulation is a fundamentally flawed way of doing this kind of thing. IMO, Paul Graham is entitled to dismiss TCL as not having a real `syntactic metaprogramming system'.
  • We've talked about this before (and I still haven't looked further into camlp4 vs. MetaML, so I may be being unfair to camlp4): I don't think that the macro facilities offered in the ML world are as powerful as those in the LISP world. The whole issue of how to integrate a real macro system into a statically typed language is in need of further work.
Having said this, I think the Common LISP macro facilities look antiquated once you are used to what is on offer in the Scheme world. Common LISP was fixed just before a lot of advances were made in this technology: hygienic macros, syntactic closures, `Macros that work', first-class macros; this is stuff which can't be grafted onto the language using defmacro. The design issues around macros are tied up with the issues around module systems, and this is an area where Common LISP is just plain broken.
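To see what hygiene buys, here is the classic capture bug, mocked up with naive textual splicing in Python (a deliberately broken example of my own; defmacro-style expansion fails the same way when a macro's temporary collides with a caller's variable):

```python
# A naive textual `macro': swap(a, b) expands into code that uses a
# temporary named `tmp`. Nothing keeps `tmp` apart from the caller's
# names, so a caller who already owns a variable called `tmp` gets
# silently clobbered.

def expand_swap(a: str, b: str) -> str:
    return f"tmp = {a}\n{a} = {b}\n{b} = tmp"

env = {"x": 1, "tmp": 99}               # the caller's variables
exec(expand_swap("x", "tmp"), {}, env)  # expansion collides with caller's tmp

# A correct swap would leave x == 99 and tmp == 1; instead the
# expansion's first assignment destroys the caller's tmp:
captured = (env["x"], env["tmp"])       # (1, 1)
```

Hygienic expanders (syntax-rules and its descendants) automatically rename the macro's own `tmp`, and that renaming machinery is exactly what cannot be retrofitted through defmacro.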

raph (from 11/5/2002): I think it's a bit premature to say arc ... could well become the most compelling LISP dialect. I think the *discussion* around arc is going to be very interesting (it already is), but it is not clear to me from what he has written so far that Paul Graham has any ideas that will put arc substantially ahead of where Olin Shivers's scsh already is for the kind of tasks he has in mind. We'll see.

Grit: I think the ability to guess domain names isn't an important one; the informativeness of URLs is much more important. If musedoma do their job right, then one knows that sfmoma.museum is indeed what one is looking for.

The point I was making about persistence is that for a renewal policy to make sense it would have to apply to all/most domain names, which directly undermines the utility of a DNS.

shlomif: I'd say an important advantage of Java over C++ and (to a lesser extent) Perl is that it is pretty predictable: there are relatively few nasty snafus in writing code. Maybe `less insights' is the price, but I can see why people might use it to get the job done.

The `It's not real code' argument against Perl is lame, but IMO Perl doesn't scale well; I have to say I don't think of Perl as a `proper' programming language.

tk: But badvogato *is* a master, if not with that honour in the advogato trust metric...

Grit: The proposals I am familiar with for expanding the TLD domain name system are suggesting that many of the new TLDs will have different policies (ie. like .edu/.gov/.mil), so they are not proposing the .com/.org/.net free-for-all you show to be absurd. A renewal policy I think is an awful idea: the point about the DNS is precisely to have persistent points of reference to internet resources.

freetype: Are you familiar with the work on region management (ie. a compile-time alternative to garbage collection)?

lkcl: Congratulations on the new job!

tk: I don't read geek code. Well... OK then, `y!' means you're a chap?

shlomif: Check what version of ghostscript you are using: 5.50 had serious problems with generating and rendering PDF. You can check that the PDF is OK using acroread.

12 May 2002 (updated 12 May 2002 at 16:29 UTC) »
raph: I found Bertrand Meyer's article to be biased against Java: generics have been planned for Java since the first release (rumor has it that the first release of the Java SDK was almost cancelled because it didn't support them), and have been `backported' to all the old releases: one of the reasons for the delay was getting it right... The JVM is really not a bad target for most languages: the absence of longjumps (ie. continuations) in the JVM is a showstopper for Scheme, but otherwise the architecture doesn't fare too badly. The platform comparison you cite by John Gough talks about the overhead Java introduces by its need to box and unbox representations: that isn't the whole story, since a good JVM->native code compiler can eliminate most of this cost (I was not so impressed by this article: it didn't even mention the longjump issue).

My prediction is that there will be a big growth in domain specific languages for the .NET platform, and these will be popular with developers. I guess that the C++.NET language will be a failure for the reasons Joel Spolsky gives. I'll be interested to see if FORTRAN.NET takes off for scientific computing.

Lastly, maybe you will find Java/CNI interesting (the `Cygnus Native Interface' for Java compiled to UNIX/C using gcj).

Just certified Perrin to Journeyer for his work on Freeciv.

Postscript: Just upgraded my certification of tk to Journeyer based on his/her wide involvement in free software projects and interesting diary entries.
