Older blog entries for crhodes (starting at number 31)

Well, since I'm now officially a wonder twin (read Dan; he's a better writer and more interesting), I feel obliged to give a status report... instead of doing something useful with my time, I've been doing what might well turn out to be the most thankless port in SBCL's brief history. How many people do you know who will want to run a native-code Common Lisp compiler on the HPPA (aka "parisc") platform? Anyone?

So why, then? Well, to understand that, you need to understand a bit of history, and a bit of software engineering. The history first: CMUCL historically supported compilation on Alpha, HPPA, MIPS, RT, SPARC and x86 platforms (as well as the PowerPC, briefly); however, partly because of motivation and partly because of CMUCL's build process, CMUCL currently only supports SPARC and x86. SBCL's build process is such that, in contrast to CMUCL, building binaries is trivial, so, since the backends are as good as they ever were, it is much easier for SBCL to support more platforms, particularly since we can piggyback on Debian's "buildds". Then, once all the viable CMUCL backends are ported, we can perform some much-needed surgery.

One strange problem is now less strange, though it remains a problem. Delegation works!

On the other hand, no-one's bitten on the strange "jump far into weeds" problem. What have we discovered so far?

call_into_lisp, the function that ends up jumping into Lispland, does so by an indirection. Relevant code snippets:

X86:

        movl     8(%ebp),%eax   # lexenv?
        ...
        call    *CLOSURE_FUN_OFFSET(%eax)

PPC:

        lwz reg_CODE,SIMPLE_FUN_SELF_OFFSET(reg_LEXENV)
        addi reg_LIP,reg_CODE,6*4-FUN_POINTER_LOWTAG
        mtctr reg_LIP
        slwi reg_NARGS,reg_NL2,2
        bctr
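
To make the indirection concrete, here is a small Python sketch of the address arithmetic in the PPC snippet above. The constants are illustrative stand-ins, not the real values from SBCL's runtime headers, which differ between ports:

```python
# Illustrative constants only; the real values live in SBCL's runtime
# headers and vary between ports.
FUN_POINTER_LOWTAG = 5          # assumed 3-bit lowtag for function pointers
SIMPLE_FUN_CODE_OFFSET = 6 * 4  # the '6*4' in the PPC snippet: header words to skip

def entry_address(tagged_fun_ptr):
    """Mirror 'addi reg_LIP,reg_CODE,6*4-FUN_POINTER_LOWTAG': strip the
    pointer's lowtag, then skip past the simple-fun header to the
    first instruction of the code."""
    return tagged_fun_ptr + SIMPLE_FUN_CODE_OFFSET - FUN_POINTER_LOWTAG

print(hex(entry_address(0x1000005)))  # a made-up tagged function pointer
```

The point is that the jump target is computed from a tagged Lisp pointer at call time, so if that pointer (or the code it references) is in the wrong place, the call lands in the weeds.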

A working Lisp image has the top-level function being referenced from very close to the top of dynamic space. My broken image has the top-level function very far from the top of dynamic space. This would tend to indicate that the PURIFY stage (when Lisp data are collected and anything remaining is compacted) didn't work on the x86.

Here's where the fun begins: the changes involved didn't obviously touch the purify machinery at all. Investigations are ongoing, if hampered by the fact that I tried (three times, on three different architectures) to compile with the wrong patch installed. Hey ho.

SBCL 0.7.6 is out.

This was a more fraught release than previous ones, maybe because we're playing around with some low-level suboptimalities; obviously we want to fix them, but the time scale is quite challenging. The good news is that it seems that Dan's stack checking stuff (a) has landed and (b) is here to stay. It's a much better scheme architecturally, and it also means that I can read disassembly without having to filter out n calls to SB-KERNEL:%DETECT-STACK-EXHAUSTION...

The bad news? Well. F'rinstance, there's the vexing matter of floating point arithmetic. I was so pleased to have fixed SBCL's signal handling code on x86/Linux and PPC/Linux. Ha. One of the problems of testing on only one machine (per architecture/os combination) is that you don't necessarily catch all your assumptions. In this case, when I tried the nice shiny new code on Dan's iMac:

        * (/ 1.0 0.0)
        1.0

Ouch. So, I went back over to the Sourceforge compile farm machine (IBM RS6000, running Debian), and tried it there, just to check that I wasn't going completely mad:

        * (/ 1.0 0.0)
        Segmentation fault
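
For comparison, here is roughly the behaviour a working build should show at the prompt, sketched in Python (whose default, like a correctly trapping Lisp, is to raise on float division by zero rather than silently return an IEEE infinity, as the iMac transcript shows):

```python
# A rough analogue of the intended behaviour: division by zero should
# signal an error, not quietly produce a wrong answer or crash.
try:
    result = 1.0 / 0.0  # with IEEE traps masked, this would yield +inf
except ZeroDivisionError:
    result = "DIVISION-BY-ZERO signalled"
print(result)
```

The broken iMac build corresponds to the masked-trap case; the RS6000 build corresponds to a trap being delivered but the signal handler mishandling it.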

The temptation to weep and curse was fairly strong; however, before turning myself in for crimes against good programming I did note that the machine in question had just changed from running a 2.4.high kernel to a 2.2.low one; given my previous experience with signals on SPARC/Linux, I'm willing to believe that it's not my fault.

Still, on the plus side, we appear to have a fan, even if he hasn't actually used the system. Off to sing in Paris for a week, so my two outstanding strange problems are left in the capable hands of my co-maintainers. Phew.

So, obviously, my consciousness had registered that this weblogging phenomenon was taking off. From the header in Dan's to the fact that there are even mostly-Lisp weblogs, the evidence was fast becoming compelling.

However, when my significant other (one who has had a fair amount of exposure to technology over the years, bless her, but who let us say hasn't exactly enthused over it) announces that, encouraged by a Guardian competition, she has started her own weblog, I have to confess feeling that I am being left behind technologically by the great British public. Did we think that technology was going to be the great leveller, giving equality of opportunity? While this is probably still as untrue as it has been in the past, maybe, just maybe, expression of self has become more possible.

I seem to be sinking lower and lower.

As Dan mentions, we happy few in the Common Lisp world seem to be working at an absurdly low level. I mean, OK, we're compiler implementors, but the three previous sizeable improvements to SBCL from the pair of us seem to be better stack exhaustion detection, better floating point exception support, and correct undefined-function handling on the PowerPC platform. You will observe that those patches are mostly not touching any of the 100kloc of Lisp code in the implementation.

Maybe there is something to this CLIM thing after all.

My presentation yesterday went well (there was a decent audience; not too many of them fell asleep; a couple of questions at the end), but the star of the show for me was Gilbert Baumann's demonstration of the Closure web browser.

Closure was, in 1999, the first web browser to pass the W3C CSS1 compliance test suite. Since then, all sorts of nifty things have been implemented, including a CLIM frontend and the TeX line-breaking algorithm. Certainly, his demo (and Robert Strandh's introduction to CLIM) has given me ideas for killer apps...

So, that was one conference. It's somewhat entertaining on a number of levels; firstly, being in a room with lots of really clever people is a very good thing; secondly, watching those really clever people disagree violently with each other is amusing; thirdly, getting new ideas for my own research has to help with the impending third year of Ph.D. studies nightmare.

Should you, the dear reader, be interested in the nature of Dark Energy, a brief summary: Monday and Tuesday were devoted to experimental techniques and observational results. It saddened me slightly to see some of the theorists take time off during these sessions, because Physics has to be driven by experiment to work (otherwise it's simply Mathematics... oh, wait, what department am I in again?). Still, I learnt a fair bit about the Cosmic Microwave Background balloon experiments (MAXIMA and BOOMERanG), the Type Ia Supernovae observations, Weak Lensing, all apparently pointing towards the ‘Concordance Cosmology’ of (Ωm, ΩΛ) = (0.3, 0.7).

The last plenary session on Tuesday was devoted to the question “Is evidence for Dark Energy compelling?” Based on the previous paragraph, one would have to say ‘yes’, as the observations strongly point towards a non-zero Cosmological Constant. But wait! The CMB results depend on assuming only adiabatic perturbations; we don't have a model for the Type Ia supernovae, and there is the problem of the cosmic distance ladder; and weak lensing observations can easily be contaminated by strong lensing effects. Is it possible that systematic experimental effects can lead to a false concordance (or, more cynically, is it possible that experimentalists will choose the method of analysis that leads to an answer close to the one that they're expecting)? Sadly, the history of science points to a ‘yes’ answer to that question, too. Based on this, I skipped Tuesday afternoon's session to go shopping.

Wednesday to Friday were more theoretical days (well, the days themselves weren't theoretical, but the talks were on theoretical subjects), so I skipped fewer talks. Highlights: Gia Dvali, not so much for his talk's content as for the way he said it – he actually made an 09:00 start tolerable; Sacha Vilenkin, for the bravery in extolling the virtues of the anthropic principle to a mostly hostile audience; and, of course, having my own work presented (all the glory and none of the responsibility). Maybe a side note about the anthropic principle is in order: it comes in a number of flavours, ranging in character from “We're here” through “We're here because we're here” to “Everything in the Universe is your fault”. As presented by Vilenkin, it was a very reasonable argument, essentially saying that, given that we exist, we have a non-uniform prior probability on cosmological parameters, so we shouldn't use a uniform prior when we do Bayesian statistics. This seemed reasonable to me (maybe he shouldn't have said that the anthropic principle ‘predicted’ an ΩΛ of 0.7) but didn't meet with much approval among my peers. It's a shame, because the anthropic principle is a useful tool in the chest of a physicist (notably used by Fred Hoyle in the prediction of the resonance in Carbon-12, at just the right energy for the triple-α collision to work...).

The conclusion from the Colloque was really along the lines of “We have no real idea what Dark Energy is like or where it comes from. But that's not a problem, because it leaves us plenty of room for writing articles which everyone else can cite.” Though I did like the attitude of the final session chair: “If I could ask God one question, it would be ‘How many dimensions does the Universe have?’; hopefully He would answer with a number... a real number... if we're really lucky, an integer...”

And now, off to Bordeaux for Libre Software Meeting. I should stop writing this diary entry, and start writing my talk on “SBCL: The best thing since sliced bread?”

It's conference season!

First up, the Institut Astrophysique de Paris' Colloquium on the Nature of Dark Energy. So, in other words, work-related. Meeting with colleagues, discussion of our model, running some simulations... not very different from work in Cambridge, really, except for the backdrop of research talks all day long too. The corresponding shindig last year was quite fun, though the temptation to snooze during the afternoon sessions was fairly high.

Then off to the Libre Software Meeting in Bordeaux, where I get to give a presentation about SBCL. Fortunately, with a lot of help from wnewman, there are going to be some interesting things to talk about, both from the point of view of real users, and also from the point of view of pure computer science. The presentation slides will be available afterwards, for the eager hordes fascinated by lisp compiler technology (all two of you).

16 Jun 2002 (updated 16 Jun 2002 at 10:42 UTC) »
[17:41] <Krystof> It's compiling!
[17:41] <Krystof> make-target-2 is running!
[17:41] <Krystof> WOOOOHOOOOO!
[17:41] <wnewman> ship it. extensive testing is for weenies

Well, that might be a little exuberant; maybe an IRC excerpt isn't the best way of summarizing an achievement. So what have we done? Well, we now have a Common Lisp compiler, written in Common Lisp, that can be built from a mostly unrelated Lisp compiler.

To people in the C world, this may not seem unusual. After all, gcc is built initially by vendor C compilers, then by itself. The difference between compiling C and compiling Lisp (well, OK, a difference) is that the act of compilation changes the state of the compiler.

This therefore raises portability issues as soon as you try to model the act of compilation itself. For instance, CMUCL, in its build process, scribbles over its own internal data structures, which is fine, as long as you're not trying to change the representation of those data structures; if you are, you need to find some way of bootstrapping the changes.
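
As a toy illustration of why this matters, sketched in Python (every name here is invented for the sketch, not taken from either compiler): if "compiling" a definition mutates the compiler's environment, then an in-place build scribbles on the host's own state, while a cross-build confines the changes to a separate environment belonging to the target image.

```python
# Toy model of the bootstrap problem; all names are invented.
host_macros = {}    # the compiling Lisp's own state
target_macros = {}  # state belonging to the image being built

def compile_form(form, env):
    """'Compiling' a macro definition mutates the environment it is
    given -- this is the sense in which the act of compilation
    changes the state of the compiler."""
    name, expansion = form
    env[name] = expansion

# An in-place (CMUCL-style) build would pass host_macros here and
# scribble on itself; a cross-build (SBCL-style) keeps the effect
# confined to the target's environment:
compile_form(("when", "(if ...)"), target_macros)
print(sorted(host_macros), sorted(target_macros))
```

With the host's state untouched, you are free to change the representation of the target's data structures without the running compiler tripping over its own feet.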

As of this week, though, we can state with a small degree of confidence that one can make arbitrary (consistent) changes to the source of SBCL and not have to deal with the bootstrapping question. What does this buy us? Nothing, really, except a small degree of confidence that one can change various representations without having nasty surprises jump out at us. This is not an end-user-visible improvement, really. It does give the maintainers a warm, fuzzy feeling, though.

mwh: if you think valgrind doesn't like Python's memory allocator, well, try running it on CMUCL or SBCL...

Killing that process was fairly essential.

