Older blog entries for crhodes (starting at number 76)

... and we're back! Inevitably, by the law of Professor Murphy, just as I'm about to leave for the Libre Software Meeting; also, by Parkinson's corollary, I've forgotten all the bloggable stuff that I could write here. Go look at this (scroll down to the end) for the executive summary.

One thing: I attended (and gave a paper [slides] at) the 1st European Lisp/Scheme Workshop in Oslo. That was fun: meeting new people, putting names to IRC nicks, and so on.

And now, I'm really looking forward to going to the hothouse of Lisp programming that is Bordeaux: as well as a good list of attendees, the locals (Robert Strandh, Tim Moore, Iban Hatchondo, Matthieu Villeneuve) should provide good conversation too. Particularly looking forward to the lightning presentations; less looking forward to finishing my talk...

À bientôt!

Something else that happened in the dark ages of the diary: I went to scheme-uk again. This should not be taken as indicative of any paradigm shift towards the Dark Side; however, a chance to hear Shriram Krishnamurthi speak is not to be sniffed at.

So what did he speak about? Shriram actually gave two talks: the second, on the beginnings of functional GUI programming, was probably the more audience-pleasing (in that, well, it involved scheme: specifically DrScheme and FrTime). Neat, but time will tell whether it's a practical advance on the current callback-based paradigm.

The first talk was, serendipitously, closely related to work I've been doing: Shriram presented results of examining the coverage of deltas (individual CVS checkins, say) by test suites, coming up with the mildly surprising result that the coverage is strongly bimodal: most changes affect either very few tests (one or two) or the whole suite.

This is similar in method and scope to work that I'll be presenting in Oslo at the Lisp/Scheme Workshop (side note: only €25 for attendance) on classifying benchmarks by examining the changes in timings caused by source deltas. Shriram's talk also leads naturally to repeating his analysis using pfdietz's test suite, with the added advantage that it's cross-implementation, and so not contaminated by a biased worldview. On the other hand, the ANSI CL test suite isn't finished yet; this might cause a little difficulty in comparing results.

Still. Interesting stuff.

Long time no diary. Partly this was due to server troubles at advogato, and partly to natural laziness and indolence. Does this augur well or ill for a forthcoming academic research career? We shall see.

What have we learnt in the meantime, anyway? Well, those who follow this diary may have observed a certain bias towards technical minutiae of Common Lisp and its ANSI specification. We have an amusing unintended consequence for your delectation this month: on a strict reading of the specification, it is not possible to portably define a constant equal to a floating point zero.

How did they manage that? Well, the ANSI committee might have sometimes overlooked things, but they weren't completely nuts. So it's a combination of several things, all but one of them reasonable.

  • an implementation is free to evaluate DEFCONSTANT's value form at compile-time, run-time or both. This is reasonable, so that the compiler can do constant-folding.
  • the consequences are undefined if DEFCONSTANT attempts to redefine an already-bound constant to a value that is not EQL to its current value. In the vernacular, this means that it's OK to set a constant to a value that's the same as the one it's already got.
  • the file compiler dumps objects according to an abstract similarity relationship, such that when the loader (linker, to you static language unwashed masses) gets involved the resulting object is similar to the one that was dumped. A reasonable specification, given that the loader can theoretically be from a different lisp session altogether.
  • the result of TYPE-OF on an object cannot involve the MEMBER type specifier. Reasonable, because otherwise (type-of <x>) is trivially `(member ,<x>), which doesn't tell anyone anything.
  • similarity on numbers is defined as being of the same type and with the same mathematical value (under =).

Superficially, this last requirement looks reasonable, but there's a catch! 0.0 and -0.0 are necessarily of the same type, since the only type that can distinguish between them is a MEMBER type (or, equivalently, the EQL type); they are also, in IEEE parlance, representations of the same mathematical value: (= 0.0 -0.0) is true. Thus, technically, an implementation is free, on compiling and loading

(defconstant +foo+ -0.0)
to go as follows: first, evaluate the value form at compile-time, getting -0.0, and do the defconstant; +FOO+ now has the value -0.0. Then dump this code and, obeying the similarity rules, dump it such that the -0.0 is now 0.0. Now, on loading, we attempt to set +FOO+, currently with value -0.0, to the value 0.0, which is not EQL to -0.0; hence we get undefined consequences. No, I don't plan to exploit this loophole in SBCL; we get enough complaints about defconstant as it is.
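
The asymmetry at the heart of all this is easy to check at the REPL. A quick sketch, assuming an implementation (such as SBCL) that distinguishes the two IEEE zeros:

(= 0.0 -0.0)    ; => T    same mathematical value under =
(eql 0.0 -0.0)  ; => NIL  EQL can see the sign bit
(type-of -0.0)  ; => SINGLE-FLOAT, say; no TYPE-OF result separates the zeros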

The moral of this tale? Maybe this.

In other news: no noticeable progress on the pi calculus, no.

Differential calculus, check. Integral calculus, check. Tensor calculus: well, at least the University of Cambridge believes I can do this adequately, so check. Lambda calculus is not a problem; though I haven't had a complete exposure to β-reduction and whatnot, I can fake it plausibly enough; check.

I'd vaguely heard of the pi calculus before; I had ignored it as being something Computer Sciency that was more-or-less completely irrelevant to practical programming. I can't honestly say that yesterday's joint uk-lispers/scheme-uk meeting (or the ensuing discussion at the pub) opened my eyes or changed my mind about it, but it might be worth a second read anyway.

The context? Dan gave a talk about SBCL's threading implementation, based on Operating System threads (and not userland implementations); we talked, among other things, about clone(); futexes and furwocks; locking and safety in data structures; signal handling (with particular reference to Linux on the SPARC — you don't want to know); and the default value of dynamic (“special”) variables in new threads.

The pi calculus bit relates, if I understood things correctly, to what primitives the implementation threading model should expose. There are probably cool theoretical results, but the impression I got yesterday was that no-one yet had a real-world implementation of anything interesting. The idea is that the pi calculus gives you a set of ultraprimitives on top of which, given operating system support, one can build all the locking and synchronization operations one would like; the current situation is that we provide a set of primitives that are good for building some things and bad for others. Clearly I need to go away and read about things.
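
For concreteness, here's roughly the flavour of the primitives we do have, sketched loosely against SBCL's sb-thread interface (illustrative names and values only; the API is very much in flux):

(defvar *lock* (sb-thread:make-mutex :name "counter lock"))
(defvar *counter* 0)

(defun bump ()
  ;; WITH-MUTEX brackets the body with acquisition and release of the
  ;; (on Linux, futex-backed) lock.
  (sb-thread:with-mutex (*lock*)
    (incf *counter*)))

(let ((threads (loop repeat 4
                     collect (sb-thread:make-thread
                              (lambda () (dotimes (i 1000) (bump)))))))
  (mapc #'sb-thread:join-thread threads)
  *counter*) ; => 4000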

Ugly tiepography sighted.

In my defence, it's probably quite tricky to get pretty ties. To start with, the curves should have a variable thickness through the tie, so we're automatically dealing with filled polygons at the very least. In addition, there's a bunch of functionality hidden behind the ties (for instance, export to MIDI) which is rather better implemented. And this was the result of a relatively short hacking period.

On the other hand, it was probably harder than it should be; there is a relatively odd use of data structures (notes in gsharp are currently read-only, so any modification to one involves creating a near-copy) that threw me for a while. In addition, the system layout isn't ideal for incremental changes. Still, I'm relatively pleased with it, because we can probably take my patch forward.

On a different note, I verified today that sbcl builds under the latest release of clisp. The clisp developers broke the build with their previous release by sabotaging all attempts to use their pretty-printer; pleas to use sbcl compilation as a regression test before releasing have so far been ignored.

News from the trenches: more speed!

No-one doubts the utility of regression test suites for verifying that a given change doesn't cause more problems than it fixes (well, most people don't, but the cmucl team still seems to be in the stone age on this issue). Regression tests, and conformance tests as well (thank you, pfdietz), allow developers to manage the complexity explosion involved in interactions between areas of functionality.

While one of the guiding principles of sbcl is maintainability (considered at least by me as more important than speed), it is nonetheless important to know the performance impact of changes, even when such changes are motivated by correctness issues. In a similar way to regression tests, the complexity of a large system is such that an apparently innocuous change can have a large effect on the performance characteristics of a priori unrelated areas.

It isn't quite so important to know the performance delta of a change before making checkins, but it is important to be able to find and fix regressions before too long. The problem until now has been that there was no systematic collection of data, so it was quite hard to deal with performance issues objectively.

Now, with (other) developers having reasonably fast computers, it is possible to keep track of these things, given a decent benchmark suite. I'm not sure that Eric Marsden's cl-bench suite is “decent”, but it's the best we've got. So thank you to Andreas Fuchs, who is not only tracking CVS HEAD on Eric's benchmarks, but has also collected very interesting historical data (SBCL releases on that latter series of graphs are approximately monthly).

What's doubly interesting about all that historical data is that not only can we identify regressions, as I've indicated above, but we can also use statistical methods to estimate similarities in the codepaths covered by the individual benchmarks. Hierarchical clustering (on which more later, I hope), even on fairly naïve distance measures, reveals some interesting similarities. Graph layout in that plot courtesy of McCLIM's graph formatting protocol and a couple of home-rolled methods (which I have since improved to give a clearer dendrogram). Cool beans.
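
For the curious: the clustering itself needs nothing fancy. Here's a minimal sketch of single-linkage agglomerative clustering over vectors of benchmark timings; the function names and the plain Euclidean metric are mine for illustration, not necessarily what the real analysis uses:

(defun euclidean-distance (a b)
  ;; distance between two equal-length vectors of timings
  (sqrt (reduce #'+ (map 'list (lambda (x y) (expt (- x y) 2)) a b))))

(defun single-linkage (m1 m2)
  ;; distance between two clusters: the minimum pairwise member distance
  (loop for a in m1
        minimize (loop for b in m2 minimize (euclidean-distance a b))))

(defun cluster (vectors)
  ;; Repeatedly merge the two closest clusters until one remains; the
  ;; result is a dendrogram: a leaf vector, or a list (LEFT RIGHT).
  ;; Each working cluster is a pair (TREE . MEMBERS).
  (let ((clusters (mapcar (lambda (v) (cons v (list v))) vectors)))
    (loop while (cdr clusters)
          do (let ((best most-positive-double-float) best-i best-j)
               (loop for (ci . rest) on clusters
                     do (loop for cj in rest
                              for d = (single-linkage (cdr ci) (cdr cj))
                              when (< d best)
                                do (setf best d best-i ci best-j cj)))
               (setf clusters
                     (cons (cons (list (car best-i) (car best-j))
                                 (append (cdr best-i) (cdr best-j)))
                           (remove best-j (remove best-i clusters))))))
    (caar clusters)))

;; (cluster (list #(1.0 2.0) #(1.1 2.1) #(9.0 9.0)))
;; => ((#(1.0 2.0) #(1.1 2.1)) #(9.0 9.0))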

I'd forgotten how much work making binary releases was.

chandler suggested that we release binaries for as many platforms as possible for this month's sbcl release — particularly since last month's release didn't happen. So, log in remotely to five different hosts, start screen (after compiling it, in one case), run a compile (which in practice can take overnight on old and cranky hardware), swear at sourceforge's compile farm which has decided not to allow logins this morning, make binary distribution tarballs, take md5sums, scp all over the place, bzip2 (oh, yes, not all hosts have md5sum or bzip2 installed), sign md5 document, ftp to sourceforge, fight with the sourceforge web interface of death (you are in a maze of twisty web pages, all alike).

Still. Mostly done now. If your platform doesn't have a binary here, and you expected there to be one, shout. Or preferably build one yourself.

In the interests of more screenshots, here's an example of what one can do in a real Lisp application. Full disclosure: all of the layers of software above glibc (sbcl, McCLIM and gsharp itself) have version numbers below 1.0 — so don't be too surprised that it doesn't display the second beam of the semiquavers...

Consider the engraving of a piece of music such as Dvořák's Humoresque. You know the one: ‘dum-de dum-de dum-de dum-de dum-de dum-de dum-de dum-de dum-de dum-de dum-de dum-de daaaaah’. Well, typing this in linearly is an absolute pain; the engraver must change the input state between each note, or else modify each note after placing it.

What are computers for? Well, one would hope that at least one could use them to automate repetitive tasks. Emacs, for one, has keyboard macros for just this. So, first enter the notes; then C-x ( ] . C-f [ [ C-f C-x ) defines a keyboard macro for turning crotchets into the real rhythm, and C-x e executes it. Apply a liberal dash of barlines, and voilà! Kudos to Robert Strandh for a neat patch (and, let's face it, for writing most of gsharp and a fair bit of McCLIM in the first place). One could also imagine writing a specialized function to alter the input state in the desired fashion automatically; that wouldn't be hard either, I think, but this is already a good feature to have.
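
The transformation the macro performs is trivial to state in Lisp, too. A toy sketch, operating on a bare list of durations (as fractions of a whole note) rather than on gsharp's actual note objects:

(defun dot-pairs (durations)
  ;; turn each pair of equal durations (d d) into the dotted rhythm
  ;; (3d/2 d/2): ‘dum dum’ becomes ‘dum-de’, preserving the total.
  (loop for (a b) on durations by #'cddr
        collect (* a 3/2)
        when b collect (/ b 2)))

;; (dot-pairs '(1/4 1/4 1/4 1/4)) => (3/8 1/8 3/8 1/8)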

Some bugs are more fun than others.

<Kryztof>   could be a ppc backend bug                                21:16:47
<antifuchs> what's the difference that makes the ppc act up on that?  21:16:56

So apart from demonstrating my latent psychic abilities, this bug was responsible for quite some head-scratching:

(iter (for i in '(1 2 3)) (+ i 50))
  => NIL ; on x86 and sparc
  => ERROR "2 is not a LIST" ; on ppc

Eventually, though, I tracked it down (:TRACE-FILE as an option to COMPILE-FILE is a wonderful thing), and it was amusing. The distilled test case I came up with was as follows:

;; FOO returns seven values; BAR binds eight, so the eighth must be defaulted.
(defun foo () (values 1 2 3 4 5 6 7))
(defun bar (fn)
  (let (aa bb cc dd ee ff gg hh)
    (multiple-value-bind (a b c d e f g h)
        (funcall fn)
      (setq aa a)
      (setq bb b)
      (setq cc c)
      (setq dd d)
      (setq ee e)
      (setq ff f)
      (setq gg g)
      (setq hh h))
    (values aa bb cc dd ee ff gg hh)))
(assert (null (nth 7 (bar #'foo))))

So what's going on here? Well, the relevant piece of information is that since FOO returns only seven values, the 8th (or, counting from zero, the 7th) should default to NIL. So that's what the ASSERT is checking; what's going wrong? To answer that, it helps to know that Gary Byers based his PPC backend for CMUCL (from which SBCL liberally borrowed) on the already-existing SPARC backend. However, the SPARC instruction set architecture has branch delay slots, such that the instruction following a branch is executed whether or not the branch is taken; the PPC has no such thing. So the cleverly-optimized loop for defaulting unknown values in the SPARC backend, when transcribed onto the PPC, doesn't work nearly so well...
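
For reference, the defaulting rule that this loop implements is easy to see in isolation:

(multiple-value-bind (a b c) (values 1 2)
  (list a b c)) ; => (1 2 NIL)

The receiving end has to notice that only two values arrived and fill in the rest with NIL itself; it's exactly that fill-in code that the transcription broke.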

Latest new toy: gsharp. The first surprise is that it's a Lisp application that looks, well, nice. OK, nice-ish. It's version 0.2; it has time to improve.

The second surprise was that the current CVS is noticeably more responsive than the version I'd been reading and playing with. I like it when things like that happen.

So, in any case, there are plenty of things missing, but I think it has potential. Let's get hacking!

(This cringe-inducingly upbeat tone will not last, I promise. Normal behaviour will resume shortly)

