1 Nov 2002 (updated 22 Feb 2005 at 08:17 UTC)
raph: A few points about your translation:
- You don't need to encode first-order terms as singleton
second-order variables when moving from quantification over functions
to quantification over predicates. Instead, you can replace each
quantification over functions by two (or more) quantifiers: one over
predicates, occurring in the same place as the original (together with
an implication that effectively restricts the quantifier to total
functions), plus a transformation of each equation into a derived
proposition with only first-order quantifiers.
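For concreteness, here is the shape of that replacement in notation of
my own choosing (Fun, Phi and the relation names are my labels, not
raph's):

```latex
% A function quantifier becomes a predicate quantifier guarded by
% a condition saying the predicate is the graph of a total function:
\[
\forall f.\, \Phi(f)
  \;\rightsquigarrow\;
\forall R.\, \bigl( \mathrm{Fun}(R) \rightarrow \Phi'(R) \bigr)
\]
\[
\mathrm{Fun}(R) \;\equiv\;
  \forall x\, \exists y.\, R(x,y)
  \;\wedge\;
  \forall x, y, z.\, \bigl( R(x,y) \wedge R(x,z) \rightarrow y = z \bigr)
\]
% and each equation mentioning f is unfolded through R, e.g.
\[
f(x) = y \;\rightsquigarrow\; R(x,y),
\qquad
g(f(x)) = y \;\rightsquigarrow\; \exists z.\, R(x,z) \wedge S(z,y)
\]
```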
- Secondly, I am not sure, but the
way you word your diary entry suggests you may not be respecting the
distinction between the set of total functions and the set of functions
that SOA can prove total.
- Finally, you need to check that your
translation actually captures the properties you are after. I think
the most natural way to do this is to have translations going in each
direction (say phi, psi), with the property that for each proposition
P of Z_2^{fun}, P<=>psi(phi(P)), and for each proposition P of
Z_2^{pred}, P<=>phi(psi(P)). I'd also add an informal requirement:
each of phi and psi can reasonably be considered `meaning conserving',
i.e. they establish the same proposition in a different form.
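Spelled out, the round-trip requirement on the pair (phi, psi) is:

```latex
\[
\varphi : Z_2^{\mathrm{fun}} \to Z_2^{\mathrm{pred}},
\qquad
\psi : Z_2^{\mathrm{pred}} \to Z_2^{\mathrm{fun}}
\]
\[
\text{for all } P \in Z_2^{\mathrm{fun}}:\;
  \vdash P \leftrightarrow \psi(\varphi(P)),
\qquad
\text{for all } P \in Z_2^{\mathrm{pred}}:\;
  \vdash P \leftrightarrow \varphi(\psi(P))
\]
```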
The point I made in my last diary entry about saying the two
formulations of Z_2 were `trivially' the same was not to say the
translations don't have tricky details, but rather to say there is no
real conceptual distance between the two formulations: we can give
translation functions that have only linear growth in the size of
propositions, satisfy the above criteria, and induce exactly the
same consequence relation on first-order terms. Such translations
cannot be found between Z_2 and systems based on the polymorphic
lambda-calculus, even though some of the latter class of systems have
the same provability strength as the former. In other words, you have
to choose between Z_2-like consequence relations and Coq-like
consequence relations right at the start. I think that Z_2 is the
right choice if you care about mathematics more than metamathematics
and proving existing programs correct rather than automatic program
extraction.
Given that you want a Z_2-like consequence relation, I don't think
either formulation is `the right thing' as a proof exchange format:
the theories should have a rich set of datatypes and quantifiers,
split into first-order datatypes of countable cardinality (finite
enumeration sets, finite lists and tuples of countables, integers,
rationals, polynomials, algebraic numbers, finite binary trees, etc.),
and second-order datatypes of uncountable cardinality (e.g. infinite
streams of bits, infinite streams of integers, infinite binary trees,
n-ary relations over finite datatypes, n-ary functions upon finite
datatypes, etc.). The point is that one can find good translations
from such richer theories into the simpler axiomatisations of Z_2 that
metamathematicians use.
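As a toy illustration of the split (names and representations entirely
mine): first-order data like the rationals admit finite, comparable
encodings, while a second-order object like an infinite bit-stream is
best modelled as a function we can only ever sample, never exhaust.

```python
from math import gcd
from typing import Callable

# First-order, countable: a rational as a normalised pair of integers.
def rational(num: int, den: int) -> tuple:
    """Toy encoding: reduce to lowest terms so equal rationals compare equal."""
    g = gcd(num, den)
    return (num // g, den // g)

# Second-order, uncountable: an infinite bit-stream as a map Nat -> {0, 1}.
BitStream = Callable[[int], int]

def alternating() -> BitStream:
    """The stream 0, 1, 0, 1, ... -- a single point of an uncountable type."""
    return lambda n: n % 2

def take(s: BitStream, k: int) -> list:
    """We can only observe finite prefixes of a stream."""
    return [s(i) for i in range(k)]

print(rational(6, 4))          # -> (3, 2)
print(take(alternating(), 4))  # -> [0, 1, 0, 1]
```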
Bram: Quite so, I'm afraid I am guilty of being
careless about the terminology complexity theorists use: I come from
graph rewriting, where I call this the graph matching problem. Sorry
for wasting your time.
graydon: Sorry for taking so long to respond to
your last response. I keep meaning to reply, but each time I
start composing a diary entry, I spend far too long thinking about
raph's ideas about proofs and run out of time. Just briefly, I want
to explain what the serious problems are with Hehner's article, and
give a reasonable definition of syntactic metaprogramming. At least I
can deal with one of your points
about LISP macros: a macro system by definition gives a static code
transformation. It is just that it makes sense for most LISP macro
systems to carry out this transformation at run-time, as happens when
there is no compile time (e.g. in interpreters), in the context of (eval
...) statements, and when you want staged computation (e.g. in dynamic
compilation). Insisting on pre-processing as your source
transformation system causes all kinds of tiresome difficulties when
you move away from the C batch-processing paradigm.
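Since I can't inline Lisp here, a Python toy makes the point (unless_
and expand_unless are invented names, not any real macro system): the
`macro' is a function from syntax to syntax, and it remains a static
transformation whether we run it ahead of time, as a pre-processor
would, or immediately before an eval, as an interpreter would.

```python
import ast

def expand_unless(source: str) -> str:
    """Toy macro expander: rewrite unless_(cond, body) into an
    equivalent conditional expression. The rewrite inspects only
    syntax, never run-time values -- that is what makes it a macro."""
    tree = ast.parse(source, mode="eval")

    class Unless(ast.NodeTransformer):
        def visit_Call(self, node):
            self.generic_visit(node)
            if isinstance(node.func, ast.Name) and node.func.id == "unless_":
                cond, body = node.args
                # unless_(c, b)  ==>  (b if not c else None)
                return ast.IfExp(
                    test=ast.UnaryOp(op=ast.Not(), operand=cond),
                    body=body,
                    orelse=ast.Constant(value=None),
                )
            return node

    return ast.unparse(ast.fix_missing_locations(Unless().visit(tree)))

# The same expansion can run at compile time or just before an eval:
expanded = expand_unless("unless_(x > 0, 'non-positive')")
print(eval(expanded, {"x": -1}))  # -> non-positive
print(eval(expanded, {"x": 5}))   # -> None
```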