An addition to my "Computer science weblogs", whose home has moved to www.linearity.org/cas/weblog/complogs.html, and whose name has changed to "Computation Weblogs":
Andrew Birkett's blog, subtitled "Thoughts of a software engineer", has a heavy programming languages design and implementation slant, and is generally excellent:
- Optimization asks why programs have to get more complicated when you try to speed them up, and whether it is possible to write code rather like *ML and XML transformers: one writes a simple program that is readable, maintainable and easy to reason about, plus transformations on that code that turn it into efficient code.
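A minimal sketch of the idea (my own illustration, not code from Andrew's post): keep the program as a simple, readable pipeline, and express the optimisation as a separate transformation on that pipeline rather than by rewriting the source.

```python
def run(pipeline, xs):
    """Interpret a pipeline of ('map', f) stages over a list -- the
    simple, easy-to-reason-about form of the program."""
    for tag, f in pipeline:
        assert tag == 'map'
        xs = [f(x) for x in xs]
    return xs

def fuse_maps(pipeline):
    """A separate optimisation pass: merge adjacent maps using the law
    map g . map f == map (g . f), so the list is traversed once
    instead of once per stage."""
    fused = []
    for stage in pipeline:
        if fused and fused[-1][0] == 'map' and stage[0] == 'map':
            f, g = fused[-1][1], stage[1]
            fused[-1] = ('map', lambda x, f=f, g=g: g(f(x)))
        else:
            fused.append(stage)
    return fused

clear = [('map', lambda x: x + 1), ('map', lambda x: x * 2)]
fast = fuse_maps(clear)
assert run(clear, [1, 2, 3]) == run(fast, [1, 2, 3]) == [4, 6, 8]
assert len(fast) == 1   # the fast version traverses the list only once
```

The point is that `clear` stays the version you read and reason about, while `fast` is derived from it mechanically, so correctness arguments about one carry over to the other.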
- Yet another compiler compiler and It's MetaTurtles, all the way down give a nice introduction to the idea of automating the process of compiler production by starting from formal specifications of the programming language.
Nice, but I have to quibble. Andrew asserts of denotational semantics: "Unfortunately, having all these mathematical objects and theorems floating around isn't getting you much closer to having a compiler for the language", which simply isn't true: a denotational semantics provides you with a recipe for automatically generating an interpreter, which in principle can be automatically transformed into a compiler by partial evaluation. The real problem with denotational semantics is that it has proven difficult to provide one for non-toy languages.
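To see why a denotational semantics is a recipe for an interpreter, here is a toy illustration (my own, for a tiny expression language): the semantic function maps each syntactic phrase to a mathematical object, here a function from environments to values, and transcribing the semantic equations *is* writing the interpreter.

```python
def meaning(expr):
    """[[expr]] : Env -> Value, defined by structural recursion on syntax,
    one clause per semantic equation."""
    tag = expr[0]
    if tag == 'num':            # [[n]] rho = n
        _, n = expr
        return lambda rho: n
    if tag == 'var':            # [[x]] rho = rho(x)
        _, x = expr
        return lambda rho: rho[x]
    if tag == 'add':            # [[e1 + e2]] rho = [[e1]] rho + [[e2]] rho
        _, e1, e2 = expr
        m1, m2 = meaning(e1), meaning(e2)
        return lambda rho: m1(rho) + m2(rho)
    if tag == 'let':            # [[let x = e1 in e2]] rho = [[e2]] (rho[x := [[e1]] rho])
        _, x, e1, e2 = expr
        m1, m2 = meaning(e1), meaning(e2)
        return lambda rho: m2({**rho, x: m1(rho)})
    raise ValueError(tag)

# let x = 3 in x + 4
prog = ('let', 'x', ('num', 3), ('add', ('var', 'x'), ('num', 4)))
assert meaning(prog)({}) == 7
```

The partial-evaluation step is the first Futamura projection: specialise an interpreter like this one to a fixed program, and the residual code is a compiled version of that program.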
- Of limited scope and Predicate objects talk about some issues around variables whose meaningfulness varies over their lifetime.
- Monads by stealth motivates a C++ abstraction that turns out to be the same thing as a paradigmatic monad.
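For readers who haven't met the pattern, here is the paradigmatic monad sketched in Python rather than C++ (my own illustration, not Andrew's code): Maybe, with `unit` injecting a value and `bind` threading it through functions that may fail, so the failure-plumbing is written once instead of at every call site.

```python
NOTHING = object()   # the failure value

def unit(x):
    """Inject a plain value into the monad."""
    return x

def bind(m, f):
    """Apply f to the value inside m, short-circuiting on failure."""
    return NOTHING if m is NOTHING else f(m)

def safe_div(x, y):
    """A partial operation, made total by returning NOTHING on error."""
    return NOTHING if y == 0 else unit(x / y)

# 12 / 3 / 2 succeeds; any division by zero aborts the whole chain.
assert bind(bind(unit(12), lambda a: safe_div(a, 3)),
            lambda b: safe_div(b, 2)) == 2.0
assert bind(bind(unit(12), lambda a: safe_div(a, 0)),
            lambda b: safe_div(b, 2)) is NOTHING
```

The C++ abstraction in Andrew's post arrives at the same `unit`/`bind` shape from engineering motivations, which is what makes it a monad "by stealth".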
Graydon's recent thoughts on pointers
I don't share graydon's dislike of garbage collection (in particular, functional programming becomes tiresome with explicit memory management), but I do think there is a lack of languages that allow garbage collection, allow user control without any enforced runtime, and allow one flexibility in choosing mixtures of the two (Scheme48/prescheme sort of allows this, but the runtime is not optional, and the mixing is not flexible or convenient). In particular I would like to see a language where one could, without undue pain, transform code written assuming garbage collection into code with explicit memory management. There's a lot of know-how about how to do this in the Scheme community.
I believe there is some relationship between this approach and the "linear naming" research that chalst is doing. There is definitely a similar way of thinking going on, although there are some important differences:
- The implementation of "linear names" has always been as reversible pointers, i.e. each pointer points to a back pointer; pointers are usually implemented as pairs of C pointers to data structures plus offsets to the backpointer, allowing several reversible pointers to point to the same data structure; thus names are not quite the same thing as ownership;
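A rough sketch of that representation (my own reconstruction in Python, not the actual C implementation): a reversible pointer is a pair of a target structure plus a slot offset into the target's backpointer table, so several reversible pointers can share one target while each can still be followed backwards to its source.

```python
class Node:
    """A data structure that records who points at it."""
    def __init__(self, payload):
        self.payload = payload
        self.backptrs = []          # slot i holds the source of pointer i

class RevPtr:
    """A reversible pointer: (target structure, offset of its backpointer)."""
    def __init__(self, source, target):
        self.target = target
        self.slot = len(target.backptrs)   # offset into the backpointer table
        target.backptrs.append(source)

    def forward(self):
        return self.target

    def backward(self):                     # follow the pointer in reverse
        return self.target.backptrs[self.slot]

a, b, shared = Node('a'), Node('b'), Node('shared')
p = RevPtr(a, shared)
q = RevPtr(b, shared)       # two reversible pointers into one structure
assert p.forward() is q.forward() is shared
assert p.backward() is a and q.backward() is b
```

The offset is what distinguishes this from simple ownership: the target doesn't have one owner, it has a table of reversible links, any of which can be traversed in either direction.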
- There are no weak pointers in anything written or implemented. Alan Bawden and I thought of adding them; they would have uses, but nothing was ever done. Instead sharing is done explicitly, by having special data structures that allow a data structure to be connected to arbitrarily many users by means of sharing trees. This is related to the way proof nets are handled for linear logic, hence the "linear" in linear naming.
- We are happy with cycles, but then we are not trying to eliminate garbage collection (yet); disconnected subgraphs are garbage. There is a resource management problem since data structures may span many hosts and may move unpredictably; there's a fairly easy solution, but possibly ideas such as yours could improve on that.
A Security Kernel Based on the Lambda-Calculus
I've linked to Jonathan Rees' 1995 PhD thesis before, but am plugging it again, since it was mentioned on Lambda the Ultimate.