Older blog entries for crhodes (starting at number 164)

Before I forget: I was at what called itself the very first Emacs conference last weekend. The first thing to say is that I had a lot of fun!

I experimented with semi-live-tweeting the thing -- indeed, it was noticed by an external viewer that I was the only live-tweeter on #emacsconf using an emacs-based twitter client. I'm not sure about whether this is a good thing to do or not; certainly, in lectures that I'm giving, it's irritating (and distracting) if students are glued to their ludicrously expensive phones rather than to my perfectly-crafted narrative, but I accept that it happens – and if any of my undergraduate lecturers is reading this: I apologize profusely for reading the newspaper and trying to do the Inquisitor crossword during the Saturday lectures (but I wasn't the only one...). In this instance, I aimed not to create any disturbance, and mostly managed to tweet during breaks rather than during the actual talks.

The actual talks? This had something of the feel of a European Common Lisp Meeting (or its precursors) from about a decade ago: several times I heard the expression of surprise along the lines of “Who would have thought there were so many of us?” Particularly so many who were willing to turn up on the Saturday of an extended weekend to be cooped away from the sunshine all da... no, wait, being in the warm was a bonus. (It snowed on me at lunchtime). The actual talks were a good part of the draw: I had admired Sacha Chua's posts about using org-mode for, well, everything, back when I was picking it up for GTD (Gah, I am a long way off the wagon!) and John Wiegley is a name that has popped up in many a place that I have investigated (emacs obviously, but also Common Lisp and personal accounting), so the fact that they were doing a double-act for the keynote was a great draw (and it made a great start to the day).

The other talks were all interesting, though some were more relevant to me as an emacs outsider than others: highlights for me were the insanity of embedding a gtk-emacs inside another emacs (memories of McCLIM craziness) using Joakim Verona's oddly-named XWidgets; a call to arms on EmacsWiki from Nic Ferrier; Sam Aaron's emacs-based music/live coding system; and John Wiegley's rapid tour through emacs and emacs-lisp productivity enhancers (made me feel like a complete newbie). There was a lot about packaging systems for emacs lisp libraries and applications, about which I expressed a certain amount of skepticism, and Luke Gorrie gave a talk about SLIME, from (almost) the opposite perspective of the talk he gave at ECLM in 2005. (I have got this far without mentioning them, but I can't leave the subject without linking to Sacha Chua's talk sketchnotes; the keynote was excellent, and these notes are icing.)

The conversations between talks were good too; I ducked out of some lunch ones to visit Camden Lock market, but there was plenty of time to socialize. Possibly the weirdest moment for me came when Reuben Thomas showed up between Easter weekend services; I had last seen Reuben in a crazy performance of Handel's Giulio Cesare many, many years ago, and the context-switch that I think both of us had to go through to place the other was lengthy and extended. Small world.

I feel very lucky to have been able to participate. Thanks from me to the organizers, Aleksandar Simic for doing what seemed to be a lot of the heavy lifting and Forward for hosting. And I'll idle a bit more strongly on the emacs IRC channel and attempt to discover more of the London-based hacking community, time permitting...

I released SBCL 1.1.6 before the weekend, after the customary one-week period of code freeze and request for testing. Unfortunately, it turns out that the release contains a bug which affects a substantial number of quicklisp libraries: the compiler's transformation of svref was sped up, but unfortunately without sufficient generality, so code doing svref on a symbol-macro fails to compile.

This bug has been fixed in the master branch; this post is somewhat in the vein of a public service announcement: don't use SBCL 1.1.6 if your code or anything it depends on does svref on a symbol-macro. (If it does, a quick workaround is to replace the svrefs with arefs). As well as the public service announcement, though, I have a question for the wider community: how can we encourage testing the code and clearly communicating the results just before the release happens rather than just after? It's somewhat frustrating to have a week-long code freeze, then bug reports of serious issues a few hours after the release is made... and unfortunately answers of the form “just test everything yourself during the freeze period” aren't currently practical. Maybe someday.
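
For concreteness, here is a minimal sketch (with made-up names, not from any actual bug report) of the kind of code that is affected, and of the aref workaround:

(defclass box ()
  ((vec :initform (make-array 3 :initial-element nil) :accessor box-vec)))

(defun affected (b)
  (with-slots (vec) b   ; WITH-SLOTS makes VEC a symbol-macro
    (svref vec 0)))     ; SVREF on a symbol-macro: fails to compile in 1.1.6

(defun unaffected (b)
  (with-slots (vec) b
    (aref vec 0)))      ; replacing SVREF with AREF avoids the broken transform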

25 Nov 2012 (updated 11 Jan 2014 at 13:55 UTC)

In the last episode, we discovered some things about the implementation of discriminating functions in SBCL's CLOS, and also I discovered that I had actually documented some of it many years ago in the SBCL Internals manual: as well as the chapter on discriminating functions, there's some interesting stuff about how to make slot-value tolerably efficient. And so I sent Faré the functions for precompiling generic functions, and congratulated myself on a job well done.

And inevitably I got a reply, by return: “it looks like this doesn't work with eql-specializers.” Also “our generic functions have 417 eql-specializers between them.” Ah.

Faré is quite right: the code in the previous diary entry will signal an error on finding a method with eql specializers – and even if it were fixed to not have that problem, the underlying (cacheing) mechanism for efficient method calls, based on looking at the identity of the class-of each argument, fails to work when there are applicable methods with eql specializers (because the set of applicable methods for arguments of a class will vary depending on whether the argument matches an eql-specializer or not).
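
A tiny (made-up) example shows why: both of the arguments below are direct instances of symbol, yet the applicable methods differ, so a cache keyed on class-of alone cannot give the right answer.

(defgeneric frob (x)
  (:method ((x (eql 'a))) :eql-case)
  (:method ((x symbol)) :symbol-case))

(frob 'a) ; => :EQL-CASE
(frob 'b) ; => :SYMBOL-CASE, though (eq (class-of 'a) (class-of 'b)) is true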

So, it is perhaps not surprising that there is an alternative optimized discriminating function mechanism which comes into play if a generic function has a substantial number of methods with eql-specializers: instead of a cacheing discriminating function, we generate and use a dispatching discriminating function, which generates a network of type tests to distinguish between all the possible cases of applicable methods, based on the actual types of the arguments – and importantly in this context, those types can include eql types.
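
To give a flavour (and only a flavour: SBCL generates and compiles this sort of thing itself, rather than anyone writing it by hand), a dispatching discriminating function for a hypothetical generic function with two eql-specialized methods and a symbol default boils down to something shaped like this:

(defun hand-rolled-dispatch (y z)
  ;; each branch stands in for a call to the corresponding effective method
  (typecase y
    ((eql defmethod)  (list 'emf-for-eql-defmethod y z))
    ((eql defgeneric) (list 'emf-for-eql-defgeneric y z))
    (symbol           (list 'emf-for-symbol y z))
    (t                (error "no applicable method for ~S" y))))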

The same sort of support exists in SBCL for constructing a dispatching discriminating function in advance as I presented for constructing a cacheing discriminating function. Something like the following:

(defun precompile-dispatching-gf (gf)
  (let* ((lock (sb-pcl::gf-lock gf)))
    (setf (sb-pcl::gf-precompute-dfun-and-emf-p (sb-pcl::gf-arg-info gf)) t)
    (multiple-value-bind (dfun cache info)
        (sb-pcl::make-final-dispatch-dfun gf)
      (sb-thread::call-with-recursive-system-lock ; or -SPINLOCK
       (lambda () 
         (sb-pcl::set-dfun gf dfun cache info)
         (sb-mop:set-funcallable-instance-function gf dfun))
       lock))))

(loop for x being the external-symbols of :cl
      initially (progn (fmakunbound 'bar) (fmakunbound 'baz)
                       (defgeneric bar (y z)) (defgeneric baz (y z)))
      do (eval `(defmethod bar ((y (eql ',x)) z) (list y z)))
         (eval `(defmethod baz ((y (eql ',x)) z) (list y z))))

(precompile-dispatching-gf #'bar)

(time (bar 'defmethod 3)) ; 6,250 processor cycles
(time (baz 'defmethod 3)) ; 1,098,642,785 processor cycles

I suppose that would make a noticeable difference to application startup times. Next up, unless I've made further oversights which need correction: automating this process, and some incidental thoughts.

A long, long time ago (but in this galaxy), I used lilypond. Back in the days when I did relatively frequent consort singing of secular music from the Renaissance, a number of factors (the relative scarcity and cost of professional editions; the ambition to perform somewhat obscure works; and the need to make typesetting music indistinguishable from five-dimensional General Relativity, at least to a casual eye) meant that lilypond was a natural fit. It was far from perfect, and its difficult workflow informed some later work, but I used it and it was good.

Time passes... and now I have a four-year-old daughter who is interested in making music, or at least in banging notes on the piano. There are probably many kinds of teaching materials available; the one that we picked up (Dogs and Birds, somewhat based on Kodály's teaching methods), has pictures of animals inside the noteheads, to help build associations between the notation and the keys. And it was good.

But, of course, the child is not satisfied with the somewhat artificial ‘melodies’ in the teaching book; she wants as well to play tunes that she knows, for example ones that they are learning at school for the upcoming extravaganza that is the Christmas Performance. Fair enough; but she is still very attached to the animal noteheads, so what to do?

Some time later, after a bit of work with image manipulation tools (GIMP and inkscape, I thank you), I have 14 Encapsulated Postscript files, one white and one black for each notename, which are tolerably close to the ones she knows. After that, it is a simple matter to convince lilypond to use those noteheads. “How simple?” I hear you cry... approximately this simple:


#(set-global-staff-size 36)

#(define black-mapping
   (list
    (cons (ly:make-pitch 0 0 NATURAL) "c-black.eps")
    (cons (ly:make-pitch 0 1 NATURAL) "d-black.eps")
    (cons (ly:make-pitch 0 2 NATURAL) "e-black.eps")
    (cons (ly:make-pitch 0 3 NATURAL) "f-black.eps")
    (cons (ly:make-pitch 0 4 NATURAL) "g-black.eps")
    (cons (ly:make-pitch 0 5 NATURAL) "a-black.eps")
    (cons (ly:make-pitch 0 6 NATURAL) "b-black.eps")))

#(define white-mapping
   (list
    (cons (ly:make-pitch 0 0 NATURAL) "c-white.eps")
    (cons (ly:make-pitch 0 1 NATURAL) "d-white.eps")
    (cons (ly:make-pitch 0 2 NATURAL) "e-white.eps")
    (cons (ly:make-pitch 0 3 NATURAL) "f-white.eps")
    (cons (ly:make-pitch 0 4 NATURAL) "g-white.eps")
    (cons (ly:make-pitch 0 5 NATURAL) "a-white.eps")
    (cons (ly:make-pitch 0 6 NATURAL) "b-white.eps")))

#(define (notename-equals? p1 p2)
   (= (ly:pitch-notename p1) (ly:pitch-notename p2)))

#(define (notehead-text grob)
   (let* ((pitch (ly:event-property (event-cause grob) 'pitch))
          (duration (ly:event-property (event-cause grob) 'duration))
          (mapping (if (ly:duration<? duration (ly:make-duration 1 0))
                       black-mapping white-mapping))
          (epsname (cdr (assoc pitch mapping notename-equals?))))
     (markup #:general-align Y CENTER (#:epsfile X 2 epsname))))

notepics = {
  \override NoteHead #'stencil = #ly:text-interface::print
  \override NoteHead #'text = #notehead-text
}

{ \notepics e'4 e' e'2 | e'4 e' e'2 | e'4 g' c' d' | e'1 | 
  f'4 f' f' f' | f' e' e' e' | e' d' d' e' | d'2 g' |
  e'4 e' e'2 | e'4 e' e'2 | e'4 g' c' d' | e'1 |
  f'4 f' f' f' | f' e' e' e' | g' g' f' d' | c'1 }

{ \notepics \relative c' { 
  g'4 g g d | e e d2 | b'4 b a a | g2. d4 | 
  g g g d | e e d2 | b'4 b a a | g2. d4 | 
  g g g d | g g g2 | g4 g g g | g g g g |
  g g g d | e e d2 | b'4 b a a | g1 | } }

For posterity, and to postpone my deletion from planet.lisp a few more months... I was asked by Faré about whether it is possible to speed up the first calls to some user-defined generic functions in SBCL.

To understand that request, first we need to understand the implementation strategy for the discriminating function in SBCL's metaobject protocol. The discriminating function is responsible for computing the set of methods which are applicable given the arguments to the generic function, determining the effective method (the actual code to be run given the generic function's method combination), and then running that effective method.

One possible strategy for implementation is to do nothing at all in advance – simply at each call of the generic function, call compute-applicable-methods to find the ordered set of applicable methods, apply method combination to that set to generate an effective method form, compile it, and then call the resulting function with the argument list and the methods list. This is about as slow as it sounds: while it is correct, since it is typical for generic functions to be called more than once with the same classes of arguments, there is wasted effort in this strategy from repeating the same computations over and over again.

Fortunately, it is possible to do better. Quite apart from some special cases (slot accessors, predicates, single methods – documented in the SBCL Internals manual), if the generic function has the same methods, the result of compute-applicable-methods on arguments of the same classes will be the same (hence compute-applicable-methods-using-classes), and if it also has an unchanged method combination the effective method form and function will be unchanged. This suggests cacheing the result of computing the effective method, attempting a lookup based on the classes of the arguments before the slow path, and invalidating the cache if things change (e.g. methods being added to or removed from the generic function, or a change in the class hierarchy).
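
As a toy sketch of that idea (emphatically not SBCL's actual machinery, and with no cache invalidation at all), one could memoize the applicable-method computation on the argument classes, falling back to the slow path whenever the class-based answer is not valid:

(defun make-caching-cam (gf)
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (let ((classes (mapcar #'class-of args)))
        (multiple-value-bind (methods foundp) (gethash classes cache)
          (if foundp
              methods
              (multiple-value-bind (methods validp)
                  (sb-mop:compute-applicable-methods-using-classes gf classes)
                (if validp
                    (setf (gethash classes cache) methods)
                    ;; e.g. EQL specializers: the class-only answer is not valid
                    (compute-applicable-methods gf args)))))))))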

All the above is in “Efficient Method Dispatch in PCL” (Kiczales & Rodriguez, 1990); while there are more details in SBCL's implementation (including an alternate strategy for cacheing based on type dispatch rather than class hash codes) that paper is recommended reading for anyone wanting to understand how to get tolerable function call speed in generic-function-based environments. But there remains the problem of the initial state of the discriminating function: what should it be? The natural choice is of an empty cache, so that the cache gets filled based on what effective methods are actually used, but that means that there can be a substantial amount of work to do at startup to warm up all the caches for all the generic functions in a system. In fact, in the days when I used to develop and deploy CLIM-based applications, this was so noticeable that the deployment script I used started the application up, exited it and scrubbed the application state before dumping the memory image, precisely so that the generic function caches had relevant entries, meaning that application startup was much faster. How much faster? To get some kind of sense, let's look at an example:


(defgeneric foo (x)
  (:method ((x string)) (concatenate 'string "x" x))
  (:method ((x integer)) (1+ x))
  (:method ((x pathname)) (pathname-type x))
  (:method ((x generic-function)) (sb-mop:generic-function-name x)))

(time (foo 3))     ; 239,230 processor cycles
(time (foo 3))     ; 223,855 processor cycles
(time (foo 3))     ; 73,925 processor cycles
(time (foo 3))     ; 4,100 processor cycles
(time (foo #'foo)) ; 176,225 processor cycles
(time (foo #'foo)) ; 5,340 processor cycles
(time (foo "x"))   ; 103,090 processor cycles
(time (foo "x"))   ; 9,645 processor cycles
(time (foo #p""))  ; 136,780 processor cycles
(time (foo #p""))  ; 6,560 processor cycles

Eyeballing these (and knowing something more about the details of the implementation, which always helps), we can estimate the overhead as being about 400,000 processor cycles plus about 100,000 per distinct concrete class passed as an argument: call it about 0.5ms total per generic function based on a 1GHz processor. It doesn't take too many generic functions with empty caches called in sequence (as in protocol-heavy frameworks like CLIM) to make this startup delay noticeable.
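
As a quick sanity check on that figure:

;; ~400,000 cycles of base overhead plus ~100,000 per distinct argument
;; class, at 10^9 cycles per second:
(/ (+ 400000 100000) 1e9) ; => 5e-4, i.e. about 0.5ms for one argument class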

What if it isn't possible to run the application beforehand? Well, it is possible to fill caches by hand. Here's one way to do it:


(defun precompile-gf (gf)
  (let* ((lock (sb-pcl::gf-lock gf)))
    (setf (sb-pcl::gf-precompute-dfun-and-emf-p (sb-pcl::gf-arg-info gf)) t)
    (let ((classes-list (mapcar (lambda (x) (sb-mop:method-specializers x))
                                (sb-mop:generic-function-methods gf))))
      (multiple-value-bind (dfun cache info)
          (sb-pcl::make-final-dfun-internal gf classes-list)
        (sb-thread::call-with-recursive-system-lock ; or -SPINLOCK
         (lambda () 
           (sb-pcl::set-dfun gf dfun cache info)
           (sb-mop:set-funcallable-instance-function gf dfun))
         lock)))))

This computes the set of classes directly named in the method specializers, and pre-fills the cache with entries corresponding to direct instances of those classes: as long as no subsequent changes occur, every call involving direct instances will be a cache hit and there will be no expensive recomputations:


(fmakunbound 'foo)
(defgeneric foo (x)
  (:method ((x string)) (concatenate 'string "x" x))
  (:method ((x integer)) (1+ x))
  (:method ((x pathname)) (pathname-type x))
  (:method ((x generic-function)) (sb-mop:generic-function-name x)))
(precompile-gf #'foo)

(time (foo #p""))  ; 5,995 processor cycles
(time (foo #p""))  ; 5,940 processor cycles
(time (foo 3))     ; 173,955 processor cycles
(time (foo "x"))   ; 81,945 processor cycles
(time (foo #'foo)) ; 148,435 processor cycles

The “direct instances” restriction there is a strong one: if the hierarchy is based around protocol classes (used in specializers) and implementation classes (used as concrete classes for instantiation) then the initial cache filling will be useless (as in the case above: in SBCL, 3 is a direct instance of the (implementation-specific) FIXNUM class, not of INTEGER).
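
The mismatch is easy to see at the REPL:

(class-of 3)                             ; => the (implementation-specific) FIXNUM class
(eq (class-of 3) (find-class 'integer))  ; => NIL: the pre-filled INTEGER entry never hits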

What's the second try, then? If the class hierarchy is not too deep and the generic functions don't have too many multiple-argument specializers, then pre-filling caches with all possible class argument combinations might not be totally prohibitive:


(defun class-subclasses (class)
  (let ((ds (copy-list (sb-mop:class-direct-subclasses class))))
    (if (null ds)
        (list class)
        (append (list class) 
                (remove-duplicates (mapcan #'class-subclasses ds))))))

(defun class-product (list)
  (if (null list)
      (list nil)
      (let ((first (class-subclasses (car list)))
            (rest (class-product (cdr list)))
            result)
        (dolist (c first result)
          (setf result (nconc (mapcar (lambda (r) (cons c r)) rest) result))))))

(defun exhaustive-class-list (gf)
  (let ((specs (mapcar #'sb-mop:method-specializers 
                       (sb-mop:generic-function-methods gf))))
    (remove-duplicates (mapcan #'class-product specs) :test #'equal)))

(defun precompile-gf-harder (gf)
  (let* ((lock (sb-pcl::gf-lock gf)))
    (setf (sb-pcl::gf-precompute-dfun-and-emf-p (sb-pcl::gf-arg-info gf)) t)
    (let ((classes-list (exhaustive-class-list gf)))
      (multiple-value-bind (dfun cache info)
          (sb-pcl::make-final-dfun-internal gf classes-list)
        (sb-thread::call-with-recursive-system-lock ; or -SPINLOCK
         (lambda () 
           (sb-pcl::set-dfun gf dfun cache info)
           (sb-mop:set-funcallable-instance-function gf dfun))
         lock)))))

(fmakunbound 'foo)
(defgeneric foo (x)
  (:method ((x string)) (concatenate 'string "x" x))
  (:method ((x integer)) (1+ x))
  (:method ((x pathname)) (pathname-type x))
  (:method ((x generic-function)) (sb-mop:generic-function-name x)))
(precompile-gf-harder #'foo)

(time (foo 3))     ; 4,510 processor cycles
(time (foo #p""))  ; 6,060 processor cycles
(time (foo "x"))   ; 10,685 processor cycles
(time (foo #'foo)) ; 5,985 processor cycles

So, nirvana? Well, what happens if the class hierarchy is not so friendly, and the exhaustive classes list is too exhausting? Stay tuned for the next thrilling episode...

A most excellent day. I mistimed my morning, and had to miss breakfast in order to arrive in time to catch the ferry to Preko (a village on the island opposite Zadar) with a small but perfectly-formed band of intrepid explorers: Didier Verna, David Johnson-Davies, Alessio Stalla and Nils Bertschinger. Given the attractive description of the castle of St Michael as a "telecommunications station", and given my employment, how could I not attempt the walk? And so we walked through the sunshine to the top of the mountain, to see what we could see (the other side of the mountain, it turns out, but also more islands, more sea, more sun...). Some fun discussions, about the more esoteric parts of various lisp dialects, machine learning, method-combination-based hacks, the nature of objects, and so on (quick riddle: what are the possible fixed points of type-of?). Then, as we were sitting down to yet another Southern European lunch, what should happen but an extremely localized rain shower?! How odd.

Then it was back to reality; it was time to face the fact that I was going to have to leave the sunny, sleepy Croatian coast to return to rainy, cold, difficult real life. (It has its compensations...) A very good conference; a big thank you to Franjo Pehar and Damir Kero, and the University of Zadar, for being such excellent hosts; it's been a great three days.

Happy Protest Day.

Day two of ELS 2012 began with war stories from Ernst van Waning on his work as a consultant for SRI (his talk should of course not be confused with his employer's views; Pascal Costanza's opinions from yesterday were similarly disclaimed) on their AURA project. We had a confusing video-in-video demonstration, which is perhaps taking the conservative approach to the Demo Effect a little bit too far. A point that might be worth mentioning again is that Lisp is not immune to memory leaks; in the KM (warning: 1990s-compatible website) knowledge management software, many interned symbols get generated during the course of a query and do not get removed later. We've encountered this in SBCL in the past; for example, in the SBCL build itself, towards the end of the process, we compile and load PCL, then in a fresh bootstrap image just load the fasls before dumping the final image; this is so that many internal symbols created by reading the code don't end up in the final image. (Though something that has been suggested a few times is that packages should intern their symbols in a hash table which is weak: dear cleverweb, is there any way in which this can be detected? Think like pfdietz).
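
The mechanism itself is easy to demonstrate at the REPL: reading a query interns whatever symbols it mentions, and the package then keeps them reachable, so they are never collected (hypothetical symbol name):

(read-from-string "some-symbol-that-did-not-exist-before")
(find-symbol "SOME-SYMBOL-THAT-DID-NOT-EXIST-BEFORE") ; => the symbol, still interned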

After coffee, and surprisingly packaged filled croissants, Marco Benelli gave us his experiences of using Scheme (specifically Gambit-C) in industrial automation. There were some interesting constraints – the approach had to support uClinux as well as more full-fat distributions, and... exotic architectures like sh. After that came Gunnar Völkel, talking about Algorithm Engineering with Clojure ("Algorithm Engineering" here seemed to mean the cycle of algorithm design, analysis, implementation and evaluation); the start of the talk was about implementing tracing and profiling, using a name-based registry before function definition to specify interceptions which wrap function implementation, which worried me because it seemed like a description of what should already exist (the lack of documented support for advice/fwrappers in SBCL notwithstanding). After that, we had a description of their team's Experiment Definition Language, used to generate code in an Experiment Setup Language, which then performs a whole experimental run (of the order of weeks) for various different parameters. I'm not convinced about the composability of the interception design; one issue is that since it overrides the defining forms, it is automatically incompatible with any similar extension (just as SERIES is incompatible with SCREAMER and Lexicons: each of them wants to own defun) – another is that, because the interceptions modify the source code, there's no sensible ordering: one of the example functions was both traced and profiled, which means that either the profiling code becomes traced (where the user is probably not interested in the execution of the otherwise-invisible profiling code) or the tracing code becomes profiled (which detracts from the utility of the profiling data).

One more long lunch break later, and we're into the afternoon iteration: Irène Durand talking about enumerators, and Alessio Stalla about do+, both dealing with ways of structuring iteration. Irène's taxonomy of enumerators might bear closer attention, while Alessio's "I don't hate loop" polemic against the use of a code walker in an iteration construct (iterate) seemed to be a totally reasonable point. There was some interesting probing of the limitations as well as the extensibility of the design – Pascal Costanza brought up the fact that loop allows e.g. if foo collect bar into quux else nconc baz into quux (forgive the attack of metasyntosis), while this variety of accumulation function into the same accumulator appears not possible in the do+ design.

Lightning talks: a virtual filesystem based on queries; return-with-dynamic-extent; HIV as recursive immune system process; pathnames (gah! pathnames!); high-performance network appliances; interoperability choices; and homoiconicity (Didier gives highly-engaging lightning talks: "musical notation is Tuning-complete"). Then announcements (a big thanks to Marco Antoniotti, Franjo Pehar and his local team), dinner, bed.

ELS 2012 Liveblogging! Well, I'm constrained to one diary entry a day, so maybe it's a bit of a stretch to claim that I've joined the socially network world, but baby steps...

First impression: Zadar is really quite pretty. Shiny white stone, clean, old buildings, seafront. My first impression is probably coloured by the fact that when I left London, the weather was 4°C and pouring with rain – and I emerged from the plane into 25°C heat and a blue sky. I confess, I even had a little nap near the Sea Organ while waiting for the evening meeting and welcome reception; at that reception, let us just say that a good amount of Maraschino and another good amount of the local beer were consumed, both in good company. Also, it's asparagus season; yum.

This morning, after a generous welcome from Marco Antoniotti, this year's programme chair, Juanjo García Ripoll gave a very interesting overview of ECL and its history, and made some good points about its design philosophy. The key argument is probably that designing ECL for embeddability adds options, rather than being a limitation; he made a plausible case that those things which are currently lost compared with more traditional implementations – particularly image saving – are reimplementable, at least up to a point. Juanjo also listed a number of good improvements in ECL since the last time: Unicode support, multithreading, improved CLOS and MOP support, and plenty of other things.

After a good long coffee break, we had the first paper sessions: first, a presentation of Climb (no website yet, apparently), an image processing toolkit developed by Laurent Senta with Didier Verna; some interesting stuff in there, even if the dreaded Demo Effect came along. A particularly neat-looking demo of a (prototype) visual environment for chaining processing tasks; performance is a bit more of a hot topic (read: not yet implemented), both in terms of parallelizing individual operations and (I think) in terms of compiling networks of processing tasks to minimize redundant computation. After that, Giovanni Anzani gave an AutoCAD-based talk on calculating and visualizing optimal (for some value of "optimal"; sufficient for architecture, anyway) points of intersection of incommensurate measurements. Again, a pretty nifty demo, this time within AutoCAD using AutoLisp; somewhat surprisingly, it seems that there is no matrix manipulation library support within AutoLisp. (I think I need to read the paper for this work, to understand exactly what problem the presented method is aiming to solve.)

One lesson in Southern European lunchtimes later (even longer than academic lunchtimes!) we were into the second session, starting with Alessio Stalla talking about ABCL and its interoperability with Java. I got a shout-out, because in amongst the various integrations of ABCL with its JVM host was a note that the sequences in ABCL support the extensible sequence protocol that I proposed in 2007; the example given was of using Java java.util.List instances as Lisp sequences, directly. The demo effect struck again; instead of launching slime, a button in the modified Java web framework made the compiler enter an infinite compiling loop. Bad luck. (Demoing things is a particular nightmare, I know; the trick as far as I have managed to formalize it is to leave as little as possible to chance: this includes even typing, unless you're very confident: use short file or variable names, define key bindings or keyboard macros, or write scripts to do things for you.) Nils Bertschinger talked about probabilistic programming in Clojure: implementing Metropolis-Hastings sampling of program paths with given probabilities, and consequently allowing conditioning on some program choice points and Bayesian inference on the hidden parameters. It looks interesting, but the killer feature of Clojure (immutable data structures, for cheap undo) might also be the cause of a performance problem. Still, looks interesting. The demo worked.

Pascal Costanza rounded out the day's schedule with his discussion on reflection in Lisp and elsewhere, talking about fexprs, 3-lisp and macros through to metaobject protocols. Unfortunately, as a regular attendee at Lisp events, I've seen much of it before; it's still interesting, but maybe I need to get out more. To dinner!

I got interviewed a couple of weeks ago. (If you're reading this on Planet Lisp, you probably already know this). I had a quick update to one of the points I made, but failed to write it down anywhere and have since forgotten. So, instead, a somewhat delayed and probably more dull update. (What, more dull than the delayed response of someone to their own interview? Well, be ready to be amazed. But I can't help but feel that the additional information was that I had forgotten some whole class of Lisp programming that I actually do, which is a bit embarrassing in an interview in a series titled "Lisp Hackers"...).

Maybe the first thing was to dip my toes back into SBCL maintenance: some nice if minor fixes from me this cycle: one was a simultaneous bug fix and optimization to modular arithmetic, motivated by pfdietz's resurfacing and running of his random form tester, which inevitably revealed that we have been slack in the last five years or so (where does the time go?); the other was a fix to the powerpc implementation of ldb, which broke the build after the previous fix. All sorted out now, phew. (And there's lots of other stuff that's gone in this month, unlike the previous "month" which sort of rolled on for the best part of three months, so it's probably worth testing).

But onward, to my desire to learn a bit more about Emacs Lisp. I've used emacs for many a year – indeed, the interview reminded me that I learnt Lisp by being given a difficult problem to work on, instructions on how to start XEmacs, and time to read USENET – but have never considered myself a real Emacs User; my ~/.emacs is so tiny, I daren't show it to the world for fear that I will lose all my remaining Lisp Hacker credibility. Acting with the view that the best way to learn is to do, shortly after starting to use EMMS as a media player, I've implemented support for DISCNUMBER metadata (this matters if, like me, you have a large number of multiple-disc sets). I've also revived (again) SWANKR, putting it up on github also, since I have started getting patches; I look forward to exploring this "social coding" idea. And I have also written a hacky but just-about-usable interface to the BBC iPlayer (using the excellent but fairly user-hostile get_iplayer utility) – particularly pleasing to my family now that digital switchover has reached London and I no longer possess equipment capable of receiving the UK television signal.

I now return to the Teclo vortex. But I am going to Zadar for the European Lisp Symposium; I hope to see some old and new faces there!

As I said in my last entry, I was in Amsterdam for ECLM 2011, once again smoothly organized by Edi Weitz and Arthur Lemmens, but this time under the aegis of the Stichting Common Lisp Foundation (of which more a bit later). After leaving the comfortable café, where Luke and Tobias (along with a backpack's worth of computing equipment on its way to visit St Petersburg) eventually turned up, it was time to go for the Saturday evening dinner, held at Brasserie Harkema. In the olden days, when I had time to do a certain amount of public-facing Lisp development, I got used to receiving the adulation of a grateful public – this time, at the dinner, I happened to sit next to someone called Lars from Netfonds. “Hmm,” said something at the back of my mind, ”that rings a bell.” Lars who? Lars Magne Ingebrigtsen. My inner fanboy went a bit squeee – even to the point of explaining what gmane was to a third party in his presence. Still, it was nice to be able to say a heartfelt “thank you” in person to someone whose software has saved me time and a certain amount of embarrassment. Other topics of conversation at the dinner included a discussion with R. Matthew Emerson (of Clozure) about the social aspects of Free Lisp development, a topic on which I have written before; contrasting the attitudes and experiences of contributors and users (small and large) of Clozure CL and SBCL was interesting. It was also nice to be able to talk about Lisp-based music analysis, synthesis and generation programs; reminding myself that I do still know about that landscape enough to fill people in.

The meeting itself, as others have observed over the years, is only partly about the talks: a substantial part of the goodness is in the chats over coffee and lunch. Edi and I reminisced about meeting in the venue, Hotel Arena, at a precursor to ECLM (in autumn 2004, I think... I certainly remember being approximately penniless, just after starting my first job); other people present then (as well as Arthur) included Nick Levine, Luke Gorrie, Peter van Eynde, Jim Newton, Pascal Costanza, Marc Battyani, Nicholas Neuss... many of whom were around for the rematch; a total of 95 people registered for the meeting, and the hall (part disco, part church) for the talks felt pleasantly full.

Of the talks, I was most interested in the material of Jack Harper's talk, concerning some of the constraints involved in building a product for (human) fingerprinting, and asserting that using Lisp in this product was not a problem. (Favourite quote: “batteries are complicated things”). I was a little bit disappointed that few of the speakers actually interacted with any code at all (Luke may claim that writing his slides in Squeak Smalltalk counts, but I beg to differ); in fact, Paul Miller of Xanalys was the only one of the speakers spending substantial time demonstrating anything related to the subject of the talk – and that only because the canned demo movie refused to display on the projector. Luke's talk appeared to go down well; the obvious first question came and went, and there were some more interesting questions from the floor. Star of the show was Zach Beane's talk about quicklisp; I spend a lot of time presenting or watching presentations in each of my capacities, and it's nice to have a refreshingly different (and deadpan) delivery, with good use of slides to complement the spoken content. I hope that he's right that his personal scalability will not be taxed, and that volunteers will find ways to assist in the project by taking ownership of particular tasks.

While Hans Hübner may have attempted to be controversial in his opinion slot about style guides for CL, the real controversy for me was Dave Cooper's announcement of the Stichting Common Lisp Foundation. Now, the Foundation has clearly done one thing that is helpful: provided legal and financial infrastructure so that the financial risk of hosting an ECLM is not borne entirely by two individuals; the corporate entity can potentially, after acquiring a buffer, provide the seed funding needed and, if necessary, absorb small ECLM losses (not that I believe there has been one, but hypothetically) through other fund-raising activities. On the other hand, when I asked the question as to how the Stichting CL Foundation would aim to distinguish itself from the ALU, the response from Dave Cooper was that the only difference would be that the foundation would focus on CL, where the ALU's remit extends to all members of the Lisp family. Such a narrowing of focus is, I think, potentially beneficial – indeed, when going through my email archives to look for the date of the 2004 meeting, I found a lucid rationale from Dan Barlow explaining that he had chosen to make CLiki's focus specifically DFSG-free Unix Lisp software in order to promote a sense of cohesion (rather than being motivated primarily by a strongly-held belief about the inherent superiority of DFSG-licensed software). But I don't think that the ALU's only weakness is that it spreads its Lisp net too wide: I think it has lost track of what it as an entity wants to do beyond perform a similar function for the ILC as Stichting has performed for the ECLM; Nick Levine, in his talk about how to find Lisp resources, observed that the ALU has a valuable piece of real estate – the lisp.org domain – which does not seem to be used to grow or meet the needs of the Lisp community, whether Common Lisp specifically or Lisp more generally. I found it a little sad that, Edi and Arthur aside, the overlap between the ALU board and Stichting CL Foundation directors is 100%.

After the longer talks came the lightning ones, and I took the opportunity to repeat my talk and demo about swankr, my implementation of the SLIME backend for R, from the European Lisp Symposium in April. Erik Huelsmann announced ABCL 1.0, a far better milestone to announce at the ECLM rather than my sneaky announcement of SBCL 0.9 (six years ago!? Doesn't time fly! Also, what ugly slides...). And after some more lightning (and less-lightning) talks, it was time to wrap up with drinks, dinner, and good conversation.
