Older blog entries for chalst (starting at number 125)

22 Sep 2004 (updated 5 Apr 2005 at 09:01 UTC) »
Self-Verifying Theories
I've been meaning to put something together about consistent theories that can prove their own consistency; a few weeks ago I got drawn into working on the logic pages at Wikipedia (and a few others), and have finally written something.

A bit of context: I first heard about such theories from a talk Dan Willard gave at the FOL75 conference last autumn, and it's taken me a while to figure out what to make of them. The most obvious consequence is that they show that the usually described dichotomy, between theories not strong enough to talk about themselves and theories that cannot prove their own consistency, is a false one: but we knew that anyway, because of the existence of theories with axioms equivalent to claiming that they are inconsistent.

What follows is adapted from material at the Self-verifying theories and Consistency proof entries I created today.

Dan Willard showed a few years back that there is a family of consistent systems over the language of first-order arithmetic that are capable of proving their own consistency (in what follows, all formal theories are assumed to be first-order theories with decidable axiom schemes).

  • In outline, the key is to formalise enough of the Goedel machinery to talk about provability internally without being able to formalise diagonalisation.
  • Avoiding diagonalisation depends upon not being able to prove multiplication total (and, in earlier versions of the result, addition too); addition and multiplication are not function symbols of the language; instead subtraction and division are, with the addition and multiplication predicates being defined in terms of these.
  • Then we can't prove:
    (All x,y)(Exists z) multiply(x,y,z).
    Now totality of multiplication is a Pi-0-2 sentence, and the theory is organised so that we can prove only the Pi-0-1 results we need; provability can be expressed in terms of a tableau search algorithm.
  • Consistency can then simply be added as a Pi-0-1 axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to regular arithmetic.
  • As an aside: it is the Pi-0-2 sentences, such as the totality of multiplication, that are dangerous; it is safe to add any Pi-0-1 truth of arithmetic to the theory, so the self-verifying systems can be quite strong in a certain sense.
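To make the quantifier shapes explicit, here is a rough rendering in LaTeX notation (the predicate name for the tableau check is my own invention, not Willard's):

    \forall x\,\forall y\,\exists z\;\mathrm{multiply}(x,y,z)
        % Pi-0-2: totality of multiplication; not provable in these systems
    \forall p\;\lnot\mathrm{TableauRefutation}(p)
        % Pi-0-1: consistency (no p codes a refutation); safe to add as an axiom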
As a result we can divide theories into (at least) five kinds, depending on their completeness and on whether they can express (or easily be extended to express) their own provability predicate:
  1. Inconsistent theories, which have no models;
  2. Theories which cannot talk about their own provability relation, such as Tarski's axiomatisation of point and line geometry, and Presburger arithmetic. These theories are satisfactorily described by the model we obtain from the completeness theorem; we can say such systems are complete;
  3. Theories which can talk about their own consistency, and which include the negation of the sentence asserting their own consistency. Such theories are complete with respect to the model one obtains from the completeness theorem, but contain as a theorem the derivability of a contradiction, in contradiction to the fact that they are consistent;
  4. Essentially incomplete theories, that is, theories capable of expressing their own provability relation and of carrying out a diagonal argument. In other words, the theories that Goedel's incompleteness theorem treats;
  5. And lastly, the self-verifying systems described above.
Is that all? It's the taxonomy I understand so far, but I also had another revelation regarding the incompleteness theorems, through attending a short lecture course by Stephen Simpson on mass problems, a formalism in recursion theory aimed at expressing fine distinctions in degrees of solvability; they also happen to express the notion of maximal consistent extensions of arithmetic theories nicely. I plan to summarise this idea in the near future (no promises about how long exactly), but the point I want to make now is that the theory gives a nice twist on the usual hierarchy of provability strengths:
  • An immediate and well-known consequence of the incompleteness theorems is that one can organise formal theories in a partial order according to relative consistency (A is higher than B if A can prove B consistent); the partial order then extends upwards endlessly;
  • A less well-known fact, rendered crystal-clear on Simpson's account, is that all essentially incomplete theories have the same potential to be extended to stronger theories; that is, among the essentially incomplete theories, it is always possible to extend weaker theories so that they fully describe the consequences of the stronger theory. One doesn't need the recursion theoretic machinery to see this; one can see this using the usual Goedelian machinery, but I find the expression using mass problems much more forceful.
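For definiteness, the ordering in the first bullet, in notation of my own choosing:

    A \succeq B \quad\iff\quad A \vdash \mathrm{Con}(B)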

Latex for Logicians
Via Richard Zach, Peter Smith's LaTeX for Logicians page, an annotated set of links to LaTeX resources for typesetting tree/tableau proofs, logic symbols, and class files for logic journals.

Postscript: A minor correction in the above thanks to Robert Solovay. I added a little bit more, but much more should be said.
Postscript #2: wo was there too...

15 Sep 2004 (updated 20 Oct 2008 at 12:37 UTC) »
Common LISP vs. scheme anecdote
Phil Greenspun has an anecdote of a large data set computation that he coded in MIT Scheme but could not coax into functioning; Gerry Sussman advised him that this sort of task could only be coded in C(?), before he successfully recoded it in Common LISP. There's no date for the events: he wrote it up in 1998, and I'd guess it's from 1990ish [via lemonodor]. And for the record, I should say I hope that these kinds of problems with scheme are being sorted out with the SRFI process...

While I'm on CLisp & scheme, there's a 2001 article by Kent Pitman which outlines a genealogical argument that CLisp and scheme have become different enough that they cannot both be called LISP. It's a nice piece, but I don't agree with the conclusion: in terms of language construction, there is more that unites these languages than divides them -- to use the metaphor of taxonomy, if they do not belong to the same genus, they still belong to the same family. And, indeed, one can expect that changes in programming language tool support will change the perceived distance between the languages.

Cole on Al-Qaeda
Juan Cole has a summary, September 11 and Its Aftermath, of Al-Qaeda's objectives in its terrorist campaign.

Recentlog
raph's personal news: Nasty situation -- you have my sympathies, and I hope the two of you can find a good arrangement with your children.
mslicker on Advogato Wiki: Interesting idea. I'm tempted to say that it couldn't make things worse. Project pages are near enough wikis as it stands, without the convenient syntax. On the other hand, I think that wikipedia has the tremendous advantage that the project is very good at forging common purpose. In principle we have that here at advogato, in practice there is no real shared activity here, so the divisions are most prominent. I have some very speculative ideas of how to start remedying this, which I hope to post on soon.

Edit (2008-10-20): Fixed another hyperlink.

2 Sep 2004 (updated 15 Sep 2004 at 12:58 UTC) »
When tub thumping goes wrong
From the non-GOP reaction to Zell Miller's speech, it looks like one lunatic might have completely sabotaged what might otherwise have been a good convention for the Republicans. Bush has his work cut out for him now. No doubt berend will see the good side of Zell's piece...
Postscript: The above aside at berend was a bit catty: he's actually got one of the more readable diaries here and did not deserve it.

Conference tour
I'm in the North of England (nostalgia!) next week for the British Logic Colloquium in Leeds and the Advances in Modal Logic #5 in Manchester, where I will be giving a talk on the proof theory of modal logic. Oh, and I'm flying to/from Liverpool, just so it doesn't feel left out.

1 Sep 2004 (updated 1 Sep 2004 at 20:11 UTC) »
New Graham essay
Pointing to the new Paul Graham essay is the Advogato equivalent of "F1rst P0st!". This one's on "The Age of the Essay": I'm reminded of Orwell's comments on the age of the pamphlet.
4 Aug 2004 (updated 4 Aug 2004 at 16:24 UTC) »
Did Al-Qaeda Game Bush?
Juan Cole has published a thought-provoking item, Did al-Qaeda Game Bush into Iraq War?. Long excerpt:
Douglas Jehl of the New York Times explains how Ibn al-Shaykh Libi, a high al-Qaeda official of Libyan extraction, was captured in fall of 2001 and alleged to CIA interrogators that Iraq had provided al-Qaeda with training in chemical and biological weapons.

Later on, Abu Zubaydah and Khalid Shaykh Muhammad were captured in Pakistan. Abu Zubaydah was wounded in the course of being captured and was put on heavy duty pain killers, and was interrogated in part while under their influence. Both he and KSM maintained that Bin Laden had forbidden any operational cooperation with Iraq, because it was ruled by an infidel secular Arab socialist regime.

When the CIA came back to Libi with these statements of his colleagues, he folded and admitted he had lied.

What is going on here? It has been suggested that Libi told the CIA whatever they wanted to hear because they tortured him. But there is another possibility, which is that he deliberately misled them. Libi is also the source of a report in January 2002 that al-Qaeda had targeted the US naval base in Bahrain. That allegation was never confirmed, and it is possible that it was also a lie, intended to draw US resources away from Afghanistan, or to make the US cautious about using the base.

I think Bin Laden and his lieutenants wanted to provoke wars between the US and Muslim states. I think they knew that the 9/11 attacks would guarantee a US war on Afghanistan, and that they were confident they could draw the US into the country and defeat it, as they had the Soviets.

That they were trying to provoke a US/Afghanistan war and knew their actions would provoke one is suggested in several ways. First, they made no effort to have the hijackers on 9/11 employ aliases or cover their tracks. A toddler could have traced Nawaf al-Hazmi and Khalid al-Mihdar back to al-Qaeda camps in Afghanistan. They made their reservations under their own names! All of the hijackers had. Counter-terrorism chief Richard Clarke was astounded that these men had even been let on the planes under those names, many of which were well known to US intelligence. Likewise, Bin Laden hand-picked the Saudi "muscle" that he sent along at the last minute, from among young men personally loyal to him, and who would be known to be his men. September 11 was a way of waving a huge red flag from Afghanistan at the American bull.

Not sure I am convinced by Cole's analysis, but it does put another angle on the motivations for war.

NYRB on Darfur
John Ryle has an article online in the coming New York Review of Books, Disaster in Darfur. This is one of the best analyses of the political motivations behind what the US is doing diplomatically about the disaster: not that I've found much worth reading on the topic.

3 Aug 2004 (updated 4 Aug 2004 at 14:32 UTC) »
On the design of proof exchange systems
There's a post by Neil Tennant on The Role of Formalization on the FOM list, where he says:
> This reminded me
> of an anecdote of a student of axiomatic set theory who insisted in
> writing all of his proofs for homework exercises in an extremely
> rigorous fashion. All of his proofs were symbolic and with the
> appropriate software these proofs could probably have been checked by a
> computer. The grader was a bit confused as to what to do with these
> proofs since they were correct and very rigorous but it was not really
> what was expected from him. After a while, the professor and the grader
> recognized the fact that it was in some sense admirable that he wrote
> his proofs so formally, but requested him to start writing his proofs
> at a higher level of abstraction, which was more appropriate for
> human-readability.

Perhaps readability by human beings is best ensured not so much by "writing ... proofs at a higher level of abstraction", as by endeavouring always to use such formalisms as permit one to give perfectly formal proofs that are more nearly *homologous* to the patterns of informal reasoning that human mathematicians typically employ.

...

IMHO the formalization of mathematics, if it is to enable machine-checking as well as readability-by-human-beings, is going to need much more careful design of its basic rules of inference. We need to deal not only with connectives, quantifiers and the identity sign, but also directly with the various mathematical primitives. By "mathematical primitives" here I mean not just the epsilon in a Bourbaki-style axiomatization of a given branch of mathematics in ZFC. I mean, rather, the notions that are treated as linguistically primitive in the usual course of ordinary mathematical talk.

This is exactly right, and this is what Raph's proof exchange format has to get right if it is really to break out of the "formalised mathematics" ghetto.

Intelligence Post
Jean-Paul Ney [fr], a freelance French TV field journalist, has started a slashdot-alike (PHP Nuke based) intelligence/terrorism news portal, Intelligence Post, which I recommend. It has its teething problems (eg. its RSS aggregator's choices are all Linux/BSD sites, not really on target for most of its audience; a few bugs, eg. the number of comments doesn't display correctly), but the site does an excellent job of selecting and bringing together news articles within its remit: I would have missed two very interesting stories about Iran had I not found this site. There are a few folk here who I know will be interested in this site, hence the recommendation.

The two Iran items I mentioned above are:

  1. Iran's Nuclear Ambitions, and
  2. Israeli intelligence agents infiltrating Iran - report.

Felix on the Google IPO
Felix Salmon has a really excellent analysis of the Google IPO.

Recentlog
cTaylor: Your ideas for a grammar checking engine look nice, and you seem to be going about it in the right way. I wouldn't usually say something so trivial, but since you think you might be driven insane by your project...

crhodes: Protocol oriented programming I've never heard of before (I'm guessing you didn't invent it), but the way you put it fits in nicely with modern ideas in formal specifications. There's some interest in the idea that you can give description languages for protocols that are logic programming languages, such that the specification determines the execution (ie. the way PROLOG is supposed to work); a toy illustration of the flavour follows below.
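Here, far simpler than a real logic-programming treatment and with every name invented for the occasion, is a Python sketch of a toy handshake where the rule table is the entire program; the engine is generic, so the specification alone determines the execution:

    # Each rule maps (current state, incoming message) to (next state, reply).
    # All protocol knowledge lives in the table, none in the engine.
    SPEC = {
        ("closed", "syn"): ("syn_received", "syn-ack"),
        ("syn_received", "ack"): ("established", None),
        ("established", "fin"): ("closed", "ack"),
    }

    def run(spec, messages, state="closed"):
        """Execute a protocol directly from its specification."""
        for msg in messages:
            try:
                state, reply = spec[(state, msg)]
            except KeyError:
                raise ValueError("protocol violation: %r in state %r" % (msg, state))
            if reply is not None:
                print("->", reply)
        return state

    assert run(SPEC, ["syn", "ack", "fin"]) == "closed"

Changing the behaviour means editing the table, not the engine; that is the (weak) sense in which the specification is the execution.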

mmangino: DePaul University, eh? A doctorate-brother of mine, Corin Pitcher, works there. Maybe an interesting contact for you? He does both the difficult theory and the messy practice of computer science, and he has a background in mathematics.

26 Jul 2004 (updated 29 Sep 2009 at 12:06 UTC) »
News sources
berend wrote: That's why I love blogs. News can no longer be ignored or hidden.

That's a thought that unites Berend and Raph, despite the different types of news that makes them happy when it breaks out from behind the media skirts. Pessimistic thought: appearing in a blog doesn't mean a news story has made it to the wider social awareness. Blogs are still obscure, and precious few stories are successfully nurtured to the point where mainstream news media are forced to take note of them.

Optimistic note: there are now news sources for lots of the emerging topics I'm interested in, and I can now bring my expertise to ears that would not otherwise hear it. I'm now a contributing editor at Ehud Lamm's Lambda the Ultimate (though I've not yet made use of the privilege), and am a founding member of the team behind a soon-to-be-launched group weblog on the contact area between computer science and philosophy; here is the announcement.

And on the crucial question of the best news source for international issues, NYT vs. WaPo, Josh Marshall weighs in, saying:

Over time you get a good sense of which news outlets consistently generate new information and which don't. And by this measure -- on the issues I follow closely, which I'd say are foreign policy, defense policy, intelligence and national politics -- the Post consistently outclasses the Times, particularly on the first three topics. When it comes to who's generating fresh information rather than summarizing the story a few days later or relying on hand-fed stories, my experience putting together this site tells me I usually end up finding new information -- which stands up over time -- in the Post.

My own take: I've never liked the NYT, it always has a parochial feel when reporting foreign news (especially Germany: I don't recall ever reading an article on a story in Germany in which Nazis were not at least alluded to); the WaPo is infinitely better in this regard, and as Josh says, you have the sense that you are reading the stories by people who have done the spadework. I prefer the Financial Times to both, btw.

7 Jul 2004 (updated 7 Jul 2004 at 19:06 UTC) »
Liskov Substitution Property
graydon writes on the failure of object oriented programming languages to enforce the Liskov substitution property. I tend to make very limited use of OO in my programming, but I wonder how bad the lack here is. Oleg's example is easy to solve: what one would do is have each class inherit from a common abstract class (sketched below); my guess is that LSP violations hardly ever occur in code written by experienced OO programmers (an analogous point would be violations of the occurs-check in Prolog code: it happens rarely enough that Prolog implementors think it is not worth the overhead of defining unification to detect occurs-check violations).
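A minimal sketch of that fix in Python, using a set/bag example in the spirit of Oleg's (class and method names are mine): rather than making Set inherit from Bag, which breaks substitutability because Set.put discards duplicates, both inherit from an abstract class that promises only what both can honour.

    from abc import ABC, abstractmethod

    class Container(ABC):
        """The common abstract class: deliberately weak promises."""
        @abstractmethod
        def put(self, item): ...
        @abstractmethod
        def count(self, item): ...

    class Bag(Container):
        def __init__(self):
            self._items = []
        def put(self, item):
            self._items.append(item)
        def count(self, item):
            return self._items.count(item)

    class Set(Container):
        def __init__(self):
            self._items = set()
        def put(self, item):
            self._items.add(item)
        def count(self, item):
            return 1 if item in self._items else 0

    def add_twice(c, x):
        # Code written against Container may not assume duplicates are
        # kept, so neither subclass breaks its contract here.
        c.put(x)
        c.put(x)
        return c.count(x)

    assert add_twice(Bag(), "a") == 2
    assert add_twice(Set(), "a") == 1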

Graydon's point lends further evidence to my belief that language design is actually regressing: even better evidence for this is the corruption of `lexical' scoping in Python, a disease which I note has spread (hint: lexical scope is entirely determined by the syntactic structure of the source code). A minimal illustration follows.
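This is my reconstruction of the standard complaint, not graydon's example: assignment inside a nested function silently creates a fresh local rather than rebinding the variable that is lexically visible in the enclosing function.

    def make_counter():
        n = 0
        def bump():
            n = n + 1   # the assignment makes n local to bump, so the
            return n    # read of n on the same line is of an unbound local
        return bump

    try:
        make_counter()()
    except UnboundLocalError as e:
        print("scoping surprise:", e)

    # The usual workaround is to smuggle the state inside a mutable
    # object, eg. n = [0] and n[0] = n[0] + 1, which is exactly the
    # sort of contortion lexical scope is supposed to make unnecessary.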

Copenhagen Consensus
In a piece of underreported news, Bjorn Lomborg has announced he will step down as chair of the Danish Environmental Assessment Institute. John Quiggin has two pieces assessing Lomborg's major achievement in this time, the Copenhagen Consensus: part one criticising the composition of the panel, and part two criticising the idea of trade-offs implicit in the exercise. He's surely right on the first point, but on the second, I'm not sure that coming up with a cost-benefit ranking does presuppose the picture of trade-offs Quiggin criticises; instead it can tell you in what direction we should be pressuring for change.

On a general point, I think Quiggin is the best blogger on matters of environmental economics.

7 Jul 2004 (updated 7 Jul 2004 at 10:10 UTC) »
Congratulations fxn!
Congratulations on the birth of your daughter!

Server up
Has it really only been six weeks? Its absence is a reminder of how much I value this place: thanks to all who helped bring it back. Nice to see permalinks on recentlog.

Computer Science Weblogs
In the meantime, something has happened at Lambda the Ultimate: it was one of the sites slated to close down with the changes at Manila, and has moved to lambda-the-ultimate.org, on a new server. There's been a big improvement in response time, and the site has had a bit of a facelift.

23 May 2004 (updated 23 May 2004 at 15:49 UTC) »
Computation weblogs
An addition to my "Computer science weblogs", whose home has moved to www.linearity.org/cas/weblog/complogs.html, and whose name has changed to "Computation Weblogs":

Andrew Birkett's blog, subtitled "Thoughts of a software engineer", has a heavy programming-language design and implementation slant, and is generally excellent:

  • Optimization asks the question "Why do programs have to get more complicated when you try to speed them up?": why is it not possible to write code rather like *ML and XML transformers, so that one writes a simple program that is readable, maintainable and easy to reason about, plus transformations on this code that make it into efficient code?
  • Yet another compiler compiler and It's MetaTurtles, all the way down give a nice introduction to the idea of automating the process of compiler production by starting from formal specifications of the programming language.

    Nice, but I have to quibble: Andrew asserts of denotational semantics: Unfortunately, having all these mathematical objects and theorems floating around isn't getting you much closer to having a compiler for the language, which simply isn't true: a denotational semantics provides you with a recipe for automatically generating an interpreter, which can in principle be automatically transformed into a compiler by partial evaluation (see the sketch after this list). The real problem with denotational semantics is that it's proven difficult to provide them for non-toy languages.

  • Of limited scope and Predicate objects talk about some issues around variables whose meaningfulness varies through their lifetime.
  • Monads by stealth motivates a C++ abstraction that turns out to be the same thing as a paradigmatic monad.
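To substantiate the quibble in the second item, here is a Python sketch of reading an interpreter straight off the semantic equations for a toy expression language (all term constructors and names are invented for the example):

    def denote(term):
        """[[term]] : Env -> Number, transcribed from the semantic equations."""
        tag = term[0]
        if tag == "lit":    # [[n]]rho = n
            _, n = term
            return lambda env: n
        if tag == "var":    # [[x]]rho = rho(x)
            _, x = term
            return lambda env: env[x]
        if tag == "add":    # [[e1 + e2]]rho = [[e1]]rho + [[e2]]rho
            _, e1, e2 = term
            d1, d2 = denote(e1), denote(e2)
            return lambda env: d1(env) + d2(env)
        if tag == "let":    # [[let x = e1 in e2]]rho = [[e2]](rho[x := [[e1]]rho])
            _, x, e1, e2 = term
            d1, d2 = denote(e1), denote(e2)
            return lambda env: d2({**env, x: d1(env)})
        raise ValueError("unknown term: %r" % (term,))

    prog = ("let", "x", ("lit", 3), ("add", ("var", "x"), ("lit", 4)))
    assert denote(prog)({}) == 7

Specialising denote to a fixed term is the partial-evaluation step that, in principle, yields a compiler.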

Graydon's recent thoughts on pointers
I don't share graydon's dislike of garbage collection (in particular, functional programming becomes tiresome with explicit memory management), but I do think there is a lack of languages that allow garbage collection, allow user control without any enforced runtime, and allow one flexibility in choosing mixtures of the two (Scheme48/prescheme sort of allows this, but the runtime is not optional, and the mixing is not flexible or convenient). In particular I would like to see a language where one could, without undue pain, transform code written assuming garbage collection into code with explicit memory management. There's a lot of know-how about how to do this in the scheme community.

graydon wrote:

I believe there is some relationship between this approach and the "linear naming" research that chalst is doing.
There is definitely a similar way of thinking going on, although there are some important differences:
  • The implementation of "linear names" has always been as reversible pointers, ie. each pointer points to a back pointer; pointers are usually implemented as pairs of C pointers to data structures plus offsets to the backpointer, allowing several reversible pointers to point to the same data structure; thus names are not quite the same thing as ownership (a toy model follows this list);
  • There are no weak pointers in anything written or implemented. Alan Bawden and I thought of adding them (they would have uses), but nothing was ever done. Instead sharing is done explicitly, by having special data structures that allow the data structure to be connected to arbitrarily many users by means of sharing trees. This is related to the way proof nets are handled for linear logic, hence the "linear" in linear naming.
  • We are happy with cycles, but then we are not trying to eliminate garbage collection (yet); disconnected subgraphs are garbage. There is a resource management problem since data structures may span many hosts and may move unpredictably; there's a fairly easy solution, but possibly ideas such as yours could improve on that.
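A toy Python model of the reversible pointers in the first bullet (the real implementation uses pairs of C pointers plus offsets; nothing below is from actual code, and all names are mine):

    class Node:
        def __init__(self, payload):
            self.payload = payload
            self.backpointers = []   # one slot per incoming reversible pointer

    class ReversiblePointer:
        def __init__(self, source, target):
            self.source = source
            self.target = target
            self.slot = len(target.backpointers)  # plays the role of the offset
            target.backpointers.append(self)

        def follow(self):
            return self.target

        def follow_back(self):       # the direction a plain pointer lacks
            return self.source

    a, b = Node("a"), Node("b")
    p, q = ReversiblePointer(a, b), ReversiblePointer(a, b)
    assert b.backpointers[p.slot] is p   # several pointers, one shared target
    assert q.follow_back() is a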

A Security Kernel Based on the Lambda-Calculus
I've linked to Jonathan Rees' 1995 PhD thesis before, but am plugging it again, since it was mentioned on Lambda the Ultimate.
