I feel like about 150% of my usable time is spoken for these days. But I am getting through the work. Hopefully, the load will lighten a bit soon.

**Ghostscript**

I just sent a patch to code-review adding 0 and 45 degree angle cases to Well Tempered Screening. This is the last of the 8.0 release-critical features, so hopefully we'll be able to do a release soon. I expect to do a 7.32 beta tomorrow to let people experiment with WTS.

A nice bug image resulted from tonight's debugging. I don't really have an explanation for it. It's supposed to look like this.

**Proofs**

I think I can answer the question I posed last time about how to transform second order arithmetic with function applications into a "pure" axiomatization. I'll try to sketch it here.

Every production in the source language grammar has a translation into the target language. In particular, every (numeric) term in the source becomes a singleton set in the target. Variables and constants just become {x} or {c}. The interesting case is function application, which looks something like this ([term] means the translation of the source-language term into the target language):

[f(a)] = {y | \exists x. x \in [a] ^ pair(x, y) \in [f]}

We can define pair(x, y) as (x + y) * (x + y) + x. This is injective on the naturals: for a fixed sum s, the values run from s * s to s * s + s, and the next band starts at (s + 1) * (s + 1), which is strictly larger. There are other choices as well, but that's probably one of the simplest.
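To make the translation of function application concrete, here's a small Python sketch (the names and the finite search bound are my own, not from any actual proof tool): a translated term is a Python set of naturals, a function is represented by its graph encoded through a simple injective quadratic pairing, and application is the comprehension above.

```python
# Model: a translated term is a Python set of naturals (here, a singleton).
# A source-language function f is represented by its graph, encoded as a
# set of pair-codes pair(x, y), one for each mapping x -> y.

def pair(x, y):
    # A simple injective pairing on naturals: (x + y)^2 + x.
    # For a fixed sum s the values run from s*s to s*s + s, and the
    # next band starts at (s + 1)^2 > s*s + s, so no two collide.
    return (x + y) * (x + y) + x

# Sanity check: injective on a small grid.
codes = {pair(x, y) for x in range(50) for y in range(50)}
assert len(codes) == 50 * 50

def graph(f, domain):
    # Encode a Python function as the set [f] of pair-codes of its graph.
    return {pair(x, f(x)) for x in domain}

def apply_term(f_set, a_set):
    # [f(a)] = { y | \exists x. x \in [a] ^ pair(x, y) \in [f] }
    # (the quantifier over y is bounded here, just for the sketch)
    return {y for y in range(1000)
              for x in a_set if pair(x, y) in f_set}

succ = graph(lambda n: n + 1, range(100))
print(apply_term(succ, {41}))  # -> {42}
```

Note that the result comes out as a set; that it happens to be a singleton is exactly the side condition the translated proofs have to discharge.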

In order to translate proofs, you need to establish that every (translated) term really *is* a singleton, and that every function is total. I think these proofs can be generated straightforwardly, because their structure follows the parse of the source language expressions.

The above assumes we have a definitional framework for set comprehensions. I see two approaches. First, instead of having a comprehension axiom, you could just allow comprehensions to be written, as above, for set-valued terms. The fact that all terms are assumed to exist implies the usual comprehension axiom.

Another approach, more consistent with the usual theory, is to transform set-valued terms into "predicates with a hole" (here, [x := v] foo means the substitution of v for x in foo; I'll gloss over the technicalities of substitution, which can be quite tricky).

[{x | phi}] = [x := *hole*]phi

[a \in S] = \exists x. x \in [a] ^ [*hole* := x] [S]

[S0 = S1] = \forall x. [*hole* := x] [S0] <-> [*hole* := x] [S1]
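The hole-translation also has a direct computational reading, sketched here in Python (the names are mine, and the quantifiers are bounded to a finite universe, which the real translation of course doesn't need): a set term with its hole abstracted is just a one-place predicate.

```python
# A translated set term [{x | phi}] is phi with x abstracted over:
# in Python, a one-place function from a value to a bool.

def comprehension(phi):
    # [{x | phi}] = [x := *hole*] phi; lambda-abstraction plays the
    # role of the substitution into the hole.
    return phi

def member(a_set, S):
    # [a \in S] = \exists x. x \in [a] ^ [*hole* := x][S]
    return any(S(x) for x in a_set)

def set_eq(S0, S1, universe):
    # [S0 = S1] = \forall x. [*hole* := x][S0] <-> [*hole* := x][S1]
    # (the quantifier is restricted to a finite universe for this sketch)
    return all(S0(x) == S1(x) for x in universe)

evens = comprehension(lambda x: x % 2 == 0)
doubles = comprehension(lambda x: x in {2 * n for n in range(50)})
print(member({4}, evens))                  # -> True
print(set_eq(evens, doubles, range(100)))  # -> True
```

The point of the sketch is that equality between set terms becomes extensional equality of predicates, which is why no comprehension axiom is needed on the target side.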

I expect you rarely actually *do* the transformation into the target language; you'll usually do all the steps in the source language. But having the relationship formalized, I feel, is important. Among other things, it should let you freely intermix theorems over the same fundamental axioms, but with different definitional frameworks. This does *not* seem to be a feature of most proof systems, which in general treat definitions as new axioms, usually with some meta-level justification that they're conservative.

Why different definitional frameworks? Partial functions may be one good reason. In this world, terms are translated into sets which may be either a singleton or empty. Some things change; for example, \exists x. x \in *term* is no longer a theorem, as it is in the total world. Even so, when translated into the target language, exactly the same things are provable. So, you should be able to do stuff like build a function in the partial world, prove that it's total, and then use it in the total world with little hassle.
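The partial-function reading has a familiar computational analogue, sketched below in Python (names are my own choosing): a translated term is a set with at most one element, empty meaning "undefined", so application propagates undefinedness much like an option type.

```python
# In the partial world a translated term is a set with at most one
# element: empty means "undefined". Application propagates emptiness.

def apply_partial(f, a_set):
    # [f(a)]: empty if [a] is empty, or if f is undefined at the witness.
    return {f[x] for x in a_set if x in f}

# A partial function, given by an explicit finite graph (a dict).
pred = {n: n - 1 for n in range(1, 10)}   # predecessor, undefined at 0

three = apply_partial(pred, {4})
print(three)                       # -> {3}
print(apply_partial(pred, {0}))    # -> set(), i.e. undefined
print(apply_partial(pred, set()))  # -> set(): emptiness propagates

# "\exists x. x \in [t]" is now a genuine question, not a theorem:
def defined(t_set):
    return len(t_set) == 1

print(defined(three), defined(apply_partial(pred, {0})))  # -> True False
```

Proving a function total in this model amounts to showing defined() holds for every application over its intended domain, which is the bridge back to the total world.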

**Kids**

We went trick-or-treating tonight, and it was fun - this year had a more social flavor than previous years. We knew a bunch of the people we met, and really visited at two houses. Plus, none of the decorations or costumes scared either of the kids, which I think is a first.

Alan went as a lion-headed dragon, and Max as a "dino-store" (his "s" is still soft). Their cuteness was appreciated by all, and the candy haul was substantial.

**Googlism**

Raph Levien is generally quite pleased with his googlism results.