Older blog entries for chalst (starting at number 131)

7 Dec 2004 (updated 7 Dec 2004 at 07:31 UTC) »
Wikipedia
I've been spending a fair amount of time trying to improve the quality of the logic pages at Wikipedia. I've just posted a slightly polemical comment on one of the talk pages about the name "Curry-Howard isomorphism":
I've got a long past with this topic, so much so that I probably can't be objective about certain details: my doctorate was on the so-called classical Curry-Howard correspondence and begins with a rant about the misleading way the topic is made use of in many works in the field. While the centrepiece of the topic is Bill Howard's 1969 manuscript, which indeed presented an isomorphism between two formal systems, there are reasons to prefer not to call the topic the Curry-Howard isomorphism; these are mine:
  • Howard's manuscript was called The formulae-as-types notion of construction, so the terminology formulae-as-types correspondence is closest to the source;
  • The topic does not consist in the first place of showing an isomorphism; rather, it consists of first establishing an intuitive observation of a correspondence between certain notions of construction in type theory and certain patterns of inference in logic; the presentation of an isomorphism is then a virtuoso flourish that adds force to this observation. To insist on talk of isomorphism obscures the most important part, which is the intuitive and, I think, necessarily informal part;
  • There are many important aspects of the theory that cannot be put into an isomorphism, because what would have to be regarded as the logical counterpart of the type-theoretic construction makes no logical sense: the clearest development of this problem is in Zhaohui Luo's Computation and Reasoning, where he discusses the problems of applying the formulae-as-types correspondence to the Extended Calculus of Constructions (essentially, one cannot mix dependent sum with System F-style polymorphism and expect the result to be entirely logic-like);
  • There's something artificial about Howard's isomorphism: while the simply-typed lambda calculus is a calculus of an obviously fundamental nature, there's something forced about the logical end, namely the idea of labelling assumptions and implication-introduction rules. The labels are needed for the isomorphism to work, but before the work of Curry and Howard, no-one working with natural deduction thought to keep track of them; instead the convention was simply that an application of the implication-introduction rule discharges all open assumptions of the matching form.
As a last point, while I grant that 'Curry-Howard isomorphism' is the more widely used term, the people whose judgement I respect most mostly prefer 'formulae-as-types correspondence'; and at the risk of sounding elitist, I think we should prefer the better term even when it is less popular. My preferences for this page are:
  1. Formulae-as-types correspondence;
  2. Curry-Howard correspondence;
  3. Curry-Howard isomorphism;
with the latter only under sufferance; note also that the phrase 'Curry-Howard' slights other logicians who contributed fundamental insights; I would particularly single out William Tait in this respect.
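To give the flavour of the intuitive observation for anyone meeting it for the first time, here is the standard textbook illustration (my choice of example, not anything specific to Howard's manuscript): the identity term inhabits the type corresponding to a trivially valid implication, and typed function application reads as modus ponens.

```latex
\lambda x{:}A.\;x \;:\; A \to A
\quad\text{corresponds to the trivial proof of}\quad
A \supset A ,
\qquad\qquad
\frac{t : A \to B \qquad u : A}{t\,u \;:\; B}
\quad\text{(application read as modus ponens)} .
```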

ncm's testimonial
A couple of days ago, a rather stressy morning was brightened by noticing that ncm posted a fun and mostly generous Orkut testimonial. But really, he's got my LISP infatuation all wrong! I guess I'll post something in my diary saying why I got into scheme/LISP...

Planet LISP ...which reminds me, this should be my first diary entry syndicated to the multitudes who read Planet LISP (though thanks to a comment of dan's, maybe my last post counts as the first?). Well, hello there... A comment with my thoughts on Kent Pitman's idea of programming languages as political parties, and what Common LISP and Scheme can learn from each other, is planned for the not too distant future.

LtU on types A lot of energy is being expended by Lambda the Ultimate regulars on various issues around type systems: not in itself surprising, but what I find a bit odd is how little this broad-minded group (my dogmatic self excluded, of course) is making contact with the fundamental issues. A list of some recent threads:

  1. Why type systems are interesting
  2. Why type systems are interesting - part II
  3. Why type systems are interesting - part III: latent types
  4. Looking for Classic Types Thread
  5. What's a definition of "type" that beginners can understand?
  6. Definition of Type
26 Oct 2004 (updated 27 Oct 2004 at 08:30 UTC) »
CLRFI Snippets
From comp.lang.lisp:
  1. It was suggested that a CLRFI for a timer facility might be worth putting together (Pascal Bourguignon, <874qkyrcq7.fsf@thalassa.informatimago.com>);
  2. Paolo Amoroso first suggested the idea of a CLRFI process back in 1999 (<36da4b51.4026275@news.mclink.it>);
  3. Threading and even mmap have been suggested as possible subject matter for a CLRFI (see, e.g., Pascal Bourguignon, <871xg5v1ea.fsf@thalassa.informatimago.com>). FWIW, I don't think CL needs an mmap CLRFI! Though maybe a general UNIX/Linux system call interface would be worthwhile.

Wondering aloud...
What are "agents"? Wikipedia says software agents are:

a piece of autonomous, or semi-autonomous proactive and reactive, computer software.
That's a lot of complex, highly technical words to define what one would hope was a simple and fundamental concept. So a start might be to ask some more elementary questions:
  1. Are they a design pattern? If so, what are its characteristics?
  2. Are they a kind of process? If so, what kinds of observations distinguish agents from other processes?
  3. Or, cynically, is "agent" a buzzword that scientists use to get research proposals rubber-stamped by research funding councils, and that companies use to give the semblance of being up to date?
Maybe fxn has some thoughts...
Postscript: Indeed he does.
I wish I did that
John Quiggin posts some time management tips in which he reveals:
I discipline myself by setting word targets. I try to write 500 to 750 words of new material every day. 500 words a day might not sound much, but if you can manage it 5 days a week for 40 weeks a year, you've got 100 000 words, which is enough for half a dozen journal articles and a small book. So, that's my target. If I haven't written enough one day, I try to catch it up the next day and so on.
My publication list would be a mite more impressive if I had followed this discipline in the 5 years since finishing my doctorate...
8 Oct 2004 (updated 8 Oct 2004 at 12:09 UTC) »
Interim Reports
Two links:
  • Paul Graham posted his talk to this year's ILC, Some work on ARC, a progress report on the Arc language. I'm sorry to see he didn't consider Alan Bawden's approach to first-class macros, but I guess it wouldn't have fitted in with his overall approach. And I guess the value of this is that it does make clear what his approach is.
  • The proceedings of last month's Scheme workshop are available, including the interim report of the R6RS editors committee. It looks like Flatt's PLT/MzScheme module system is going to be the basis for evaluating module systems, and agreeing on a module system is the first task the R6RS committee have set themselves; for a partial introduction to the issues there's a summary of module systems by David Rush from 2001.
5 Oct 2004 (updated 22 Oct 2004 at 12:35 UTC) »
Some CLRFI stuff...
Some links:
  • Chris Double: "But if the CLRFI implementations could be shipped with the Common Lisp implementations themselves then there would be no porting necessary and that would be a great thing." -- There's a long lag with SRFIs between the standards appearing and their being incorporated into language implementations, but for library SRFIs the expectation is that the best will become standards and the others can be supported in an appropriate package system.
  • Dave Roberts gives a nice summary of why lightweight, open language processes are good.
  • I posted a story on LtU; hopefully it will attract interesting comments;
  • And for the sake of completeness, François-René Rideau doesn't like the CLRFI name...

And here are some things that I guess would have been less inflammatory than what I actually put in my last CLRFI wishlist...

  1. A good regexp library, perhaps modelled on Olin Shivers' SRE proposal, which I think is in tune with the CL approach (perhaps they should be renamed LREs?);
  2. XML reader/writer, perhaps along the lines of SSAX (as I think S-XML is);
  3. UNIX job control (so far missing from the SRFI process);
  4. An evaluation of approaches to type inferencing for Common LISP (to avoid misunderstandings, this is not the same as asking for CL to become statically typed; also missing from the SRFIs).

... and crhodes on me on CLRFIs
crhodes writes (this and all following quotes):

Well, the goal of evolving Common Lisp isn't, or shouldn't be, to turn it into something which looks and quacks just like the fashionable duck of the day.
Quite, but a CLRFI need not be judged by a timeless standard. If a CLRFI is very useful in the next five years, but after that is regarded as a dead-end, it may still be a worthwhile standardisation effort, if that effort isn't so great and the standard does create short-term benefits.
Scheme needed the SRFIs to allow people to decide on a shared vocabulary for completely ordinary programs
I won't argue with this, but scheme is getting benefits from the process that go beyond making up for its inhibited language standardisation process. I'd cite SRFI 40 as an example, but perhaps it won't appeal to all the hard-noses out there. Incidentally, the SRFI process has given rise to a new core standardisation process for scheme.
a brief perusal seems to indicate that most of the implemented SRFIs are essentially replicating the standard Common Lisp ``library''
FWIW, it's a requirement of the SRFI process that all SRFIs must be implemented before they can be finalised. What you say is more or less true of about a third of the final SRFIs, but I think that if you look more closely, many SRFIs that on the surface appear to be replicating CL are in fact doing stuff that goes beyond CLtL2.
Would adding these features instead encourage people from other communities to leap on the Common Lisp bandwagon? I think that's unlikely: why accept Common Lisp with Schemey bits tacked on if you already have Scheme?
Thinking of bandwagons is maybe not for the best: if I'm on a bandwagon, I'm on the scheme bandwagon, but I read and code CL, and I think highly of the language. I'd like to hope that CL standardisers spare a thought for the interests of the periphery of the CL universe as well as for the stalwarts.
  1. Continuations It's too late now, I suppose, to say I thought twice about putting first-class continuations on my list, but then later thought that if we were to document one-shot continuations, we may as well document multi-shot continuations as well...
    So a Scheme adept might bemoan the lack of full multi-shot re-entrant continuations, while ignoring (or being ignorant of) the fact that requiring such effectively kills off any hope for compilation to efficient code.
    Thus do myths arise... It is well established that writing good, correct, optimising compilers in the presence of continuations is not easy. The possibility of re-entrant continuations makes some useful optimisations unsound, but one may still apply those optimisations wherever one knows such jumps cannot occur, and most optimisations can be adjusted to cope with the presence of continuations. There is less justification for the claim that code with first-class continuations cannot be efficient than for the claim that code with implicit memory management cannot be efficient. (A tiny Scheme sketch of the one-shot/multi-shot distinction appears after this list.)

    there is no demand from people currently using Common Lisp for these features.
    Well, Jeffrey Mark Siskind, author of the Screamer constraints package for CL, left CL for scheme precisely because he wanted first-class continuations. There are advantages to having first-class language support for features you make heavy use of, a point that CLers are usually not slow to make when arguing with schemers. I'd identify this kind of problem domain (marrying CL with heavily backtracking, logic-programming-style constructs) as a key demand point for call/cc in CL.
    there are portable libraries for achieving single-shot continuations, or macros enabling a natural expression of continuation-passing style
    WRT single-shot continuations, standardising rival approaches to language constructs is one of the key things one would hope to achieve with the CLRFIs. Using CPS to model continuations has its drawbacks: it is difficult to optimise such code, and typically one cannot represent non-call/cc-based control constructs, such as dynamic-wind, in it.

    Call/cc potentially interacts with many Common LISP facilities; nailing these interactions down would be hard work, but it is just the kind of work that is the point of language standardisation.

    Just to be clear, the point of suggesting a call/cc CLRFI is not to suggest that call/cc should be implemented in all Common LISPs, only to point out that call/cc is in principle compatible with CL semantics and that CL as a whole may benefit from some implementations having support for continuations. Furthermore, a value of the SRFI process has been to document the difficulties that certain language constructs present for certain implementations: this kind of open standards process doesn't just map the road ahead for the core language, it also helps expose the existence of niche implementations and their attendant constituencies.
    Postscript: Just recalled a quote from Kent Pitman that nicely characterises the 'nice but too costly' school of thought on call/cc:

    > Combined with the fact that there seems to be no real need for
    > continuations in the "real world" [1], it's really hard to justify
    > the inclusion of such a feature.
    I don't know if I'd go so far as to say no 'need' in the sense of 'no use'. There is call for them, and they could be handy if we had them, but the price is too high. And if we offered call/cc in the way I propose, it probably would just annoy the Scheme community. Hmmm.... (sound of wheels turning)
  2. Hygienic macros By comparison with call/cc, having a standard library for hygienic macros in Common LISP does not really have a downside. There is a standard technology, SYNTAX-CASE, that is strictly more expressive than both the SYNTAX-RULES style macros of R5RS scheme and Common LISP's DEFMACRO (more expressive in the sense that it cannot be expressed by either, but can be used to define both).
    Hygenic macros in Common Lisp would be a solution to a non-problem: given the years of experience in programming in the language (not my years alone: other people's years too) it is clear that the lack of hygiene in the macro system is simply not an issue to the Lisp community.
    I'm aware of this but, judging from c.l.l., beginners often run into hygiene problems in their early attempts at macro writing, and the CL house style for writing semi-hygienic macros is more complex than the scheme approach (a small illustration of the capture problem follows at the end of this list). There is, as far as I can see, no downside to having hygienic macros available in CL, if it is achieved by means of SYNTAX-CASE.

    Postscript: There's a good, short introduction to the problem in a USENET message by Will Clinger, Message-ID: <fb74251e.0403161910.3820fddf@posting.google.com>.

  3. Scripting
    As for #! support, it's really not clear to me why that should need to be available in a standard fashion
    Are you doubting the value of #! invocations for scripting, or the value of having a CLRFI standard for dealing with them? WRT the first, they are convenient, and I use them heavily with scsh (though it may be worth noting that the Tcl community seems to deprecate them). WRT the second, the discussion leading to SRFI-22 brought out some non-obvious issues with existing implementations; this kind of issue-raising is a key advantage of these open standardisation processes, and I hope it proves so for Common LISP.
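Since the one-shot/multi-shot distinction keeps coming up above, here is a minimal Scheme sketch of the difference (the function names are mine, purely for illustration):

```scheme
;; One-shot, escape-only use of call/cc: RETURN is invoked at most once,
;; to abort the traversal early.  This is the pattern that one-shot
;; continuations, and most CPS-style macro packages, can already express.
(define (find-first pred lst)
  (call-with-current-continuation
    (lambda (return)
      (for-each (lambda (x) (if (pred x) (return x))) lst)
      #f)))

;; Multi-shot use: the same captured continuation is invoked again after
;; the call/cc expression has already returned once, so control re-enters
;; the body.  It is this re-entrancy that complicates stack-based
;; implementations.
(define (re-entered)
  (let ((k-cell #f) (log '()))
    (let ((v (call-with-current-continuation
               (lambda (k) (set! k-cell k) 0))))
      (set! log (cons v log))
      (if (< v 2)
          (k-cell (+ v 1))    ; jump back in with a new value
          (reverse log)))))   ; => (0 1 2)
```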

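And since the capture problem is easier to show than to describe, here is a minimal Common LISP sketch of the sort of thing that bites beginners, using the classic MY-OR example (the macro name is mine, purely for illustration):

```lisp
;; Naive version: the expansion's own binding of TMP can capture a TMP
;; appearing in the caller's code.
(defmacro my-or (a b)
  `(let ((tmp ,a))
     (if tmp tmp ,b)))

;; (let ((tmp 3)) (my-or nil tmp))  => NIL, not 3: the user's TMP was captured.

;; The usual CL house style: manufacture a fresh, uninterned name with GENSYM.
(defmacro my-or (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))

;; (let ((tmp 3)) (my-or nil tmp))  => 3, as expected.
```

A hygienic expander gets this right without any GENSYM bookkeeping, and SYNTAX-CASE would give CL the same guarantee while still permitting deliberate capture where it is wanted.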
CLRFI
crhodes wrote:
On this last, my view is that the Lisp community, whatever that is, certainly needs a process to evolve the language (preferably backwards-compatibly, but some things might need to be removed root-and-branch), it also needs people to do the work to produce high-quality standardization material: and it's more the lack of that that's lacking, and no amount of process will generate it. I'd love to be proved wrong.

I'm very glad to see that Common LISP has an open language standards process. Christophe is quite right to point out that having one doesn't mean good language evolution will happen: people have to write the standards. Today the SRFI process is in a very healthy state, producing lots of useful language standardisation for scheme, but it took a lot of time to get that way. Olin Shivers started the SRFI ball rolling with an excellent list library, but I don't think the whole process looked like it clearly had self-sustaining forward momentum until about SRFI-22 or so.

For what it's worth, here is an outsider's view of what Common LISP needs; for all I know there are standard solutions to the following problems:

  • Standards for international character sets, especially Unicode, and for localisation;
  • A hygienic macros library that integrates properly with CL's regular macro expansion (a solved problem, and I know there are toy CLs with hygienic macros, but do any of the major implementations support them?);
  • Standards for (i) one-shot continuations and (ii) multi-shot continuations;
  • Good C/POSIX FFI support, and support for invoking CL programs as #! scripts.
I was disappointed when poking around the clrfi site to see absolutely no mention of the SRFI site, process or history. Is this a sign that the CL/scheme wars are still not over? Of course, CLers could never, never learn anything of value from any experience that happened in the scheme world, no, why the very thought...

Monads
The thread Explaining Monads has just dropped off the LtU front page; I'll link to it so I have a permanent reference, and I'll just make a point: why, when people are trying to explain the Haskell IO monad, does the very fundamental categorical construct of an algebra (or, perhaps more relevantly, a coalgebra) never seem to surface? A little more formalism would eliminate whole clouds of demonstrably confusing hand-waving about how "Haskell programs talk in a side-effect-free way about side effects". The formalism is all the more worth carrying out since it is so essential to the motivation for monads in the first place.
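For reference, the construct I have in mind is the standard one: an (Eilenberg-Moore) algebra for a monad (T, eta, mu) is an object A together with a structure map alpha : TA -> A satisfying

```latex
\alpha \circ \eta_A \;=\; \mathrm{id}_A
\qquad\text{and}\qquad
\alpha \circ T\alpha \;=\; \alpha \circ \mu_A .
```

It is such structure maps, and not the monad laws alone, that say how "computations" of type TA actually get consumed.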

22 Sep 2004 (updated 5 Apr 2005 at 09:01 UTC) »
Self-Verifying Theories
I've been meaning to put something together about consistent theories that can prove their own consistency; a few weeks ago I got drawn into working on the logic pages at Wikipedia (and a few others), and have finally written something.

A bit of context: I learnt about such theories from a talk Dan Willard gave at the FOL75 conference last autumn, and it's taken me a while to figure out what to make of them. The most obvious consequence is that they show that the usually described dichotomy, between theories not strong enough to talk about themselves and theories that can't prove their own consistency, is a false one: but we knew that anyway, because of the existence of theories with axioms equivalent to claiming that they are inconsistent.

What follows is adapted from material at the Self-verifying theories and Consistency proof entries I created today.

Dan Willard showed a few years back that there is a family of consistent systems over the language of first-order arithmetic that are capable of proving their own consistency (in what follows, all formal theories are assumed to be first-order theories with decidable axiom schemes):

  • In outline, the key is to formalise enough of the Goedel machinery to talk about provability internally without being able to formalise diagonalisation.
  • Diagonalisation depends upon being able to prove multiplication total (and, in the earlier versions of the result, addition also); addition and multiplication are not function symbols of the language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these.
  • Then we can't prove:
    (All x,y)(Exists z) multiply(x,y,z).
    Now totality of multiplication is a Pi-0-2 sentence, and the theory is organised so that we can prove only the Pi-0-1 results we need; provability can be expressed in terms of a tableau search algorithm.
  • Consistency itself can then simply be added as a Pi-0-1 axiom (see the display after this list for the rough shape of such an axiom). The resulting system can be proven consistent by means of a relative consistency argument with respect to regular arithmetic.
  • As an aside: it is the Pi-0-2 sentences, such as the totality of multiplication, that are dangerous; it is safe to add any Pi-0-1 truth of arithmetic to the theory, so the self-verifying systems can be quite strong in a certain sense.
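To fix ideas, the consistency axiom in question has roughly the following shape (this is schematic, from memory; see Willard's papers for the precise formulation):

```latex
\mathrm{Con} \;\equiv\; \forall p\,.\; \neg\,\mathrm{Prf}\bigl(p,\ \ulcorner 0 = 1 \urcorner\bigr)
```

that is, no number p codes a (tableau) proof of a contradiction. Since the matrix is decidable and the prefix is a single block of universal quantifiers, this is a Pi-0-1 sentence, and so safe to add by the remark in the last bullet point.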
As a result we can divide theories into (at least) five kinds, depending on their completeness and on whether they can express (or easily be extended to express) their own provability predicate:
  1. Inconsistent theories, which have no models;
  2. Theories which cannot talk about their own provability relation, such as Tarski's axiomatisation of point and line geometry, and Presburger arithmetic. These theories are satisfactorily described by the model we obtain from the completeness theorem; we can say such systems are complete;
  3. Theories which can talk about their own consistency, and which include the negation of the sentence asserting their own consistency. Such theories are complete with respect to the model one obtains from the completeness theorem, but contain as a theorem the assertion that a contradiction is derivable, despite the fact that they are consistent;
  4. Essentially incomplete theories, that is, theories capable of expressing their own provability relation and of carrying out a diagonal argument: in other words, the theories that Goedel's incompleteness theorem treats;
  5. And lastly, the self-verifying systems described above.
Is that all? It's the taxonomy I understand so far, but I also had another revelation regarding the incompleteness theorems, through attending a short lecture course by Stephen Simpson on mass problems, a formalism in recursion theory aimed at expressing fine distinctions in degrees of solvability; they also happen to express the notion of maximal consistent extensions of arithmetic theories nicely. I plan to summarise this idea in the near future (no promises about how long exactly), but the point I want to make now is that the theory gives a nice twist on the usual hierarchy of provability strengths:
  • An immediate and well-known consequence of the incompleteness theorems is that one can organise formal theories in a partial order according to relative consistency (A is higher than B if A can prove B consistent); this partial order extends upwards endlessly;
  • A less well-known fact, rendered crystal-clear on Simpson's account, is that all essentially incomplete theories have the same potential to be extended to stronger theories; that is, among the essentially incomplete theories, it is always possible to extend weaker theories so that they fully describe the consequences of the stronger theory. One doesn't need the recursion theoretic machinery to see this; one can see this using the usual Goedelian machinery, but I find the expression using mass problems much more forceful.

Latex for Logicians
Via Richard Zach, Peter Smith's LaTeX for Logicians page, an annotated set of links to LaTeX resources for typesetting tree/tableau proofs, logic symbols, and class files for logic journals.

Postscript: A minor correction in the above thanks to Robert Solovay. I added a little bit more, but much more should be said.
Postscript #2: wo was there too...

15 Sep 2004 (updated 20 Oct 2008 at 12:37 UTC) »
Common LISP vs. scheme anecdote
Phil Greenspun has an anecdote about a large data-set computation he coded in MIT Scheme but could not coax into functioning; he received advice from Gerry Sussman that this sort of task could only be coded in C(?), before successfully recoding it in Common LISP. There's no date for the events; he wrote it up in 1998, and I'd guess it's from 1990 or so [via lemonodor]. And for the record, I should say I hope that these kinds of problems with scheme are being sorted out with the SRFI process...

While I'm on Common LISP & scheme, there's a 2001 article by Kent Pitman which outlines a genealogical argument that Common LISP and scheme have become different enough that they cannot both be called LISP. It's a nice piece, but I don't agree with the conclusion: in terms of language construction, there is more that unites these languages than separates them -- to use the metaphor of taxonomy, if they do not belong to the same genus, they still belong to the same family. And, indeed, one can expect that changes in programming language tool support will change the perceived distance between the languages.

Cole on Al-Qaeda
Juan Cole has a summary, September 11 and Its Aftermath, of Al-Qaeda's objectives in its terrorist campaign.

Recentlog
raph's personal news: Nasty situation -- you have my sympathies, and I hope the two of you can find a good arrangement with your children.
mslicker on Advogato Wiki: Interesting idea. I'm tempted to say that it couldn't make things worse. Project pages are near enough to wikis as it stands, just without the convenient syntax. On the other hand, I think that wikipedia has the tremendous advantage that the project is very good at forging common purpose. In principle we have that here at advogato; in practice there is no real shared activity here, so the divisions are what stand out most. I have some very speculative ideas about how to start remedying this, which I hope to post on soon.

Edit (2008-10-20): Fixed another hyperlink.

2 Sep 2004 (updated 15 Sep 2004 at 12:58 UTC) »
When tub thumping goes wrong
From the non-GOP reaction to Zell Miller's speech, it looks like one lunatic might have completely sabotaged what might otherwise have been a good convention for the Republicans. Bush has his work cut out for him now. No doubt berend will see the good side of Zell's piece...
Postscript: The above aside at berend was a bit catty: he's actually got one of the more readable diaries here and did not deserve it.

Conference tour
I'm in the North of England (nostalgia!) next week for the British Logic Colloquium in Leeds and the Advances in Modal Logic #5 in Manchester, where I will be giving a talk on the proof theory of modal logic. Oh, and I'm flying to/from Liverpool, just so it doesn't feel left out.

1 Sep 2004 (updated 1 Sep 2004 at 20:11 UTC) »
New Graham essay
Pointing to the new Paul Graham essay is the Advogato equivalent of "F1rst P0st!". This one's on "The Age of the Essay": I'm reminded of Orwell's comments on the age of the pamphlet.

