Douglas Jehl of the New York Times explains how Ibn al-Shaykh Libi, a high al-Qaeda official of Libyan extraction, was captured in the fall of 2001 and told CIA interrogators that Iraq had provided al-Qaeda with training in chemical and biological weapons.

I'm not sure I am convinced by Cole's analysis, but it does put another angle on the motivations for war.
Later on, Abu Zubaydah and Khalid Shaykh Muhammad were captured in Pakistan. Abu Zubaydah was wounded in the course of his capture, put on heavy-duty painkillers, and interrogated in part while under their influence. Both he and KSM maintained that Bin Laden had forbidden any operational cooperation with Iraq, because it was ruled by an infidel secular Arab socialist regime.
When the CIA came back to Libi with these statements of his colleagues, he folded and admitted he had lied.
What is going on here? It has been suggested that Libi told the CIA whatever they wanted to hear because they tortured him. But there is another possibility, which is that he deliberately misled them. Libi is also the source of a report in January 2002 that al-Qaeda had targeted the US naval base in Bahrain. That allegation was never confirmed, and it is possible that it was also a lie, intended to draw US resources away from Afghanistan, or to make the US cautious about using the base.
I think Bin Laden and his lieutenants wanted to provoke wars between the US and Muslim states. I think they knew that the 9/11 attacks would guarantee a US war on Afghanistan, and that they were confident they could draw the US into the country and defeat it, as they had the Soviets.
That they were trying to provoke a US/Afghanistan war, and knew their actions would provoke one, is suggested in several ways. First, they made no effort to have the 9/11 hijackers employ aliases or cover their tracks. A toddler could have traced Nawaf al-Hazmi and Khalid al-Mihdhar back to al-Qaeda camps in Afghanistan: they made their reservations under their own names, as did all of the hijackers. Counter-terrorism chief Richard Clarke was astounded that these men had even been let on the planes under those names, many of which were well known to US intelligence. Likewise, Bin Laden hand-picked the Saudi "muscle" that he sent along at the last minute from among young men personally loyal to him, who would be known to be his men. September 11 was a way of waving a huge red flag from Afghanistan at the American bull.
NYRB on Darfur
John Ryle has an article online in the forthcoming New York Review of Books, Disaster in Darfur. This is one of the best analyses of the political motivations behind what the US is doing diplomatically about the disaster; not that I've found much worth reading on the topic.
> This reminded me of an anecdote of a student of axiomatic set theory
> who insisted on writing all of his proofs for homework exercises in an
> extremely rigorous fashion. All of his proofs were symbolic, and with
> the appropriate software these proofs could probably have been checked
> by a computer. The grader was a bit confused as to what to do with
> these proofs, since they were correct and very rigorous but not really
> what was expected from him. After a while, the professor and the grader
> recognized that it was in some sense admirable that he wrote his
> proofs so formally, but requested that he start writing his proofs at
> a higher level of abstraction, which was more appropriate for …

This is exactly right, and this is what Raph's proof exchange format has to get right if it is really to break out of the "formalised mathematics" ghetto.
Perhaps readability by human beings is best ensured not so much by "writing ... proofs at a higher level of abstraction", as by endeavouring always to use such formalisms as permit one to give perfectly formal proofs that are more nearly *homologous* to the patterns of informal reasoning that human mathematicians typically employ.
IMHO the formalization of mathematics, if it is to enable machine-checking as well as readability by human beings, is going to need much more careful design of its basic rules of inference. We need to deal not only with connectives, quantifiers and the identity sign, but also directly with the various mathematical primitives. By "mathematical primitives" here I mean not just the epsilon in a Bourbaki-style axiomatization of a given branch of mathematics in ZFC. I mean, rather, the notions that are treated as linguistically primitive in the usual course of ordinary mathematical talk.
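To make the idea of formal proofs "homologous to informal reasoning" concrete, here is a hedged sketch in Lean 4 (one possible formalism, not the only one): a `calc` proof whose formal steps mirror, one for one, the chain of equalities a mathematician would write on a blackboard.

```lean
-- A formal proof that reads like the informal argument:
-- "a + a equals b + a (since a = b), which equals b + b (again since a = b)."
example (a b : Nat) (h : a = b) : a + a = b + b :=
  calc a + a = b + a := by rw [h]
       _     = b + b := by rw [h]
```

The proof is fully machine-checkable, yet its visible structure is the informal one; nothing here descends to the level of raw ZFC epsilon-manipulation.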
Jean-Paul Ney [fr], a freelance French TV field journalist, has started a slashdot-alike (PHP Nuke based) intelligence/terrorism news portal, Intelligence Post, which I recommend. It has its teething problems (e.g. its RSS aggregator's choices are all Linux/BSD sites, not really on target for most of its audience, and there are a few bugs, e.g. the number of comments doesn't display correctly), but the site does an excellent job of selecting and bringing together news articles within its remit: I would have missed two very interesting stories about Iran had I not found it. There are a few folk here who I know will be interested in this site, hence the recommendation.
The two Iran items I mentioned above are:
Felix on the Google IPO
Felix Salmon has a really excellent analysis of the Google IPO.
cTaylor: Your ideas for a grammar checking engine look nice, and you seem to be going about it in the right way. I wouldn't usually say something so trivial, but since you think you might be driven insane by your project...
crhodes: I'd never heard of protocol-oriented programming before (I'm guessing you didn't invent it), but the way you put it fits in nicely with modern ideas in formal specifications. There's some interest in the idea that you can give description languages for protocols that are logic programming languages, such that the specification determines the execution (i.e. the way Prolog is supposed to work).
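A hypothetical toy sketch of the "specification determines the execution" idea (my own illustration, not crhodes's formulation, and in Python rather than a real logic programming language): the protocol is written down purely declaratively as transition rules, and a generic engine runs the description directly.

```python
# The protocol *specification*: a declarative set of transition rules,
# in the spirit of Horn clauses "step(State, Msg, NextState)".
# State and message names here are invented for illustration.
RULES = {
    ("closed", "syn"):       "syn_received",
    ("syn_received", "ack"): "established",
    ("established", "fin"):  "closed",
}

def run(rules, start, messages):
    """Execute the specification directly: each step is resolved by
    looking up the unique rule matching the current state and message."""
    state = start
    for msg in messages:
        key = (state, msg)
        if key not in rules:
            raise ValueError(f"no rule for {msg!r} in state {state!r}")
        state = rules[key]
    return state

# The run is entirely determined by the declarative rules above.
assert run(RULES, "closed", ["syn", "ack"]) == "established"
```

In a genuine logic programming setting the engine would be resolution over the clauses themselves; the point of the sketch is only that the same text serves as both specification and program.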
mmangino: DePaul University, eh? A doctorate-brother of mine, Corin Pitcher, works there. Maybe an interesting contact for you? He does both the difficult theory and the messy practice of computer science, and he has a background in mathematics.
That's a thought that unites Berend and Raph, despite the different types of news that make them happy when they break out from behind the media skirts. A pessimistic thought: appearing in a blog doesn't mean a news story has made it into wider social awareness. Blogs are still obscure, and precious few stories are successfully nurtured to the point where mainstream news media are forced to take note of them.
An optimistic note: there are now news sources for many of the emerging topics I'm interested in, and I can now bring my expertise to ears that would not otherwise hear it. I'm now a contributing editor at Ehud Lamm's Lambda the Ultimate (I've not yet made use of the privilege), and am a founding member of the team behind a soon-to-be-launched group weblog on the contact area between computer science and philosophy; here is the announcement.
And on the crucial question: which is the best news source for international issues: NYT vs. WaPo? Josh Marshall weighs in, saying:
Over time you get a good sense of which news outlets consistently generate new information and which don't. And by this measure -- on the issues I follow closely, which I'd say are foreign policy, defense policy, intelligence and national politics -- the Post consistently outclasses the Times, particularly on the first three topics. When it comes to who's generating fresh information rather than summarizing the story a few days later or relying on hand-fed stories, my experience putting together this site tells me I usually end up finding new information -- which stands up over time -- in the Post.
My own take: I've never liked the NYT; it always has a parochial feel when reporting foreign news (especially from Germany: I don't recall ever reading an article on a story in Germany in which Nazis were not at least alluded to). The WaPo is infinitely better in this regard, and as Josh says, you have the sense that you are reading stories by people who have done the spadework. I prefer the Financial Times to both, btw.
Graydon also says this issue lends further evidence to my belief that language design is actually regressing: even better evidence for this is the corruption of `lexical' scoping in Python, a disease which I note has spread (hint: lexical scope is entirely determined by the syntactic structure of the source code).
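For readers who haven't hit the wart: here is a minimal illustration (my own invented examples) of why Python's scoping falls short of the purely lexical rule just stated. An assignment anywhere in a function makes the name local to the whole function body, and (in the Python of this writing, which lacks a way to rebind an enclosing binding) closures cannot update the variables they close over.

```python
x = 10

def broken():
    # A purely lexical reading would resolve x to the module-level
    # binding above; instead the assignment below makes x local to
    # the *entire* function, so this line raises UnboundLocalError.
    print(x)
    x = 20

def make_counter():
    count = 0
    def bump():
        # The assignment creates a fresh local `count` rather than
        # updating the enclosing one, so the read on the right-hand
        # side also raises UnboundLocalError.
        count = count + 1
        return count
    return bump
```

In a language with honest lexical scope, both functions would have an unambiguous, syntactically determined meaning.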
In a piece of underreported news, Bjorn Lomborg has announced he will step down as chair of the Danish Environmental Assessment Institute. John Quiggin has two pieces assessing his major achievement in that time: part one criticising the composition of the panel, and part two criticising the idea of trade-offs implicit in the exercise. He's surely right on the first point, but on the second, I'm not sure that coming up with a cost-benefit ranking does presuppose the picture of trade-offs Quiggin criticises; instead, it can tell us in which direction we should be pressing for change.
On a general point, I think Quiggin is the best blogger on matters of environmental economics.
Has it really only been six weeks? Its absence is a reminder of how much I value this place: thanks to all who helped bring it back. Nice to see permalinks on the recentlog.
Computer Science Weblogs
In the meantime, something has happened at Lambda the Ultimate: it was one of the sites slated to close down with the changes at Manila, and has moved to lambda-the-ultimate.org, on a new server. There's been a big improvement in response time, and the site has had a bit of a facelift.
Andrew Birkett's blog, subtitled "Thoughts of a software engineer", has a heavy programming languages design and implementation slant, and is generally excellent:
Nice, but I have to quibble. Andrew asserts of denotational semantics that "having all these mathematical objects and theorems floating around isn't getting you much closer to having a compiler for the language", which simply isn't true: a denotational semantics provides you with a recipe for automatically generating an interpreter, which in principle can be automatically transformed into a compiler by partial evaluation. The real problem with denotational semantics is that it has proven difficult to provide them for non-toy languages.
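As a hedged illustration of the "recipe" (a hypothetical three-construct expression language of my own, not Andrew's example): each semantic equation transcribes directly into a clause of an interpreter that maps syntax to denotations, here functions from environments to values. Specializing that interpreter to a fixed program, so the syntax dispatch is done once rather than on every run, is exactly the job partial evaluation would do.

```python
# Semantic equations for the toy language:
#   [[n]]env      = n
#   [[x]]env      = env(x)
#   [[e1 + e2]]env = [[e1]]env + [[e2]]env
# Each equation becomes one branch of `denote`.

def denote(expr):
    """Map an expression to its denotation: a function Env -> Value."""
    kind = expr[0]
    if kind == "num":
        _, n = expr
        return lambda env: n
    if kind == "var":
        _, name = expr
        return lambda env: env[name]
    if kind == "add":
        _, e1, e2 = expr
        d1, d2 = denote(e1), denote(e2)
        return lambda env: d1(env) + d2(env)
    raise ValueError(f"unknown expression: {expr!r}")

prog = ("add", ("var", "x"), ("num", 1))
run = denote(prog)   # dispatch on syntax happens once, here ("staging")
assert run({"x": 41}) == 42
```

The closure `run` already contains no mention of the syntax tree; a partial evaluator pushed all the way would turn such residual closures into straight-line compiled code.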
Graydon's recent thoughts on pointers
I don't share graydon's dislike of garbage collection (in particular, functional programming becomes tiresome with explicit memory management), but I do think there is a lack of languages that allow garbage collection, allow user control without any enforced runtime, and allow one flexibility in choosing mixtures of the two (Scheme48/PreScheme sort of allows this, but the runtime is not optional, and the mixing is not flexible or convenient). In particular, I would like to see a language in which one could, without undue pain, transform code written assuming garbage collection into code with explicit memory management. There's a lot of know-how about how to do this in the Scheme community.
I believe there is some relationship between this approach and the "linear naming" research that chalst is doing. There is definitely a similar way of thinking going on, although there are some important differences:
A Security Kernel Based on the Lambda-Calculus
I've linked to Jonathan Rees' 1995 PhD thesis before, but am plugging it again, since it was mentioned on Lambda the Ultimate.
My own investment in the question: I'd rather like ZF to be consistent, but I wouldn't be upset if it weren't. You can avoid worries about strong axioms by using only geometric theories in your basis theory (or metatheory, if you prefer); these can be shown by simple proof-theoretic means (cut elimination) to be consistent, and IIRC they can capture all Pi^0_1 consequences of any consistent theory. In other words, they can be quite strong but are never too strong.
Carlin made her first strike against the system on May 1st, with an exercise of public nudity in the Tiergarten. She seems to have a talent for autonomiste activism. She's 15 weeks old tomorrow.
Postscript: Ah, I've just read fxn's entry: he beat me to the above by two days, but there's extra information in my entry. There's actually something of a literature of correct-seeming ZF inconsistency proofs; someone should write up their history.
> I think the idea of a "software component" is a wrong and damaging distraction from a fundamental fact: program text is the ultimate component technology.

I almost completely agree, but actually I think there's nothing wrong with a properly-thought-through VM model of exchangeable code. No doubt, though, improper application of the component mindset has very widespread and far-reaching consequences, almost all malignant.
BTW, I completely agree with what you said about language wars, though if we were forced (at gunpoint, naturally) to choose between C# and Java, I think Java has to be the choice. Mono, I think, is a very, very dangerous thing for GNOME.