Older blog entries for crhodes (starting at number 189)

25 Apr 2014 (updated 6 Oct 2014 at 21:14 UTC) »

just like old times

Sometimes, it feels like old times, rolling back the years, and so on. One of the things I did while avoiding work on my PhD (on observational tests of exotic cosmologies inspired by then-fashionable high-energy physics theories, since you ask) was to assist in resurrecting old and new CMUCL backends for SBCL, and to do some additional backend hacking. I learnt a lot! (To be fair, I learnt a lot about General Relativity too). And I even wrote about some of the funny bits when porting, such as getting to a working REPL with a completely broken integer multiply on SPARC, or highly broken bignums when working on a proper 64-bit backend for alpha.

On the #sbcl IRC channel over the last few days, the ARM porting crew (not me! I've just been kibitzing) has made substantial progress, to the point that it's now possible to boot a mostly-working REPL. Some of the highlights of the non-workingness:

  (truncate (expt 2 32) 10)
21074691228794313799883120286105
6

(fib 50)
6277101738309684039858724727853387073209631205623600462197

(expt 2.0 3)
2.0

(sb-ext:run-program "uname" '() :search t :output *standard-output*)
Linux
Memory fault at 3c

Meanwhile, given that most ARM cores don't have an integer division instruction, it's a nice bit of synchronicity that one of this year's Summer of Code projects for SBCL is to improve division by constant integers; I'm very much looking forward to getting involved with that, and hopefully also bringing some of the next generation of SBCL hackers into the community. Maybe my writing a while back about a self-sustaining system wasn't totally specious.
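For the record, the usual trick is to turn division by a constant into multiplication by a precomputed fixed-point reciprocal followed by a shift; here is a minimal sketch of the idea for unsigned operands (nothing to do with SBCL's actual implementation, and make-constant-divider is just an illustrative name):

  (defun make-constant-divider (divisor &optional (width 32))
  ;; (floor x divisor) = (ash (* x m) (- k)) for all 0 <= x < 2^width
  (let* ((k (+ width (integer-length divisor)))
         (m (ceiling (expt 2 k) divisor)))
    (lambda (x) (ash (* x m) (- k)))))

;; (funcall (make-constant-divider 10) 1234) => 123

In a compiler, the multiply and shift map onto a handful of machine instructions (with extra care needed for signed operands and for keeping intermediate results within a machine word), which is the point of the exercise.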

Syndicated 2014-04-25 16:43:10 (Updated 2014-10-06 20:48:14) from notes

http content negotiation and generalized specializers

I promised a non-trivial example of a use for generalized specializers a while ago. Here it is: automatic handling of HTTP (RFC 2616) Content-Type negotiation in computed responses.

In RESTful services and things of that ilk, a client indicates that it wants to apply a verb (GET, for example) to a particular resource (named by a URN, or possibly identified by a URI). This resource is a conceptual object; it will have zero or more concrete manifestations, and content negotiation provides a way for the client to indicate which of those manifestations it would prefer to receive.

That's all a bit abstract. To take a concrete example, consider the woefully incomplete list of books in my living room at openlibrary. A user operating a graphical web browser to access that resource is presumed to want to retrieve HTML and associated resources, in order to view a shiny representation of the information associated with that resource (a “web page”, if you will). But the human-oriented representation of the information is not the only possible one, and it is common practice in some circles to provide machine-readable representations as well as human-oriented ones, at the same URL; for example, try:

  curl -H 'Accept: application/json' https://openlibrary.org/people/csrhodes/lists

and observe the difference between that and visiting the same URL in a graphical browser.

How does the web server know which representation to send? Well, the example has given away the punchline (if the links above to RFC sections haven't already). The graphical web browser will send an Accept header indicating that it prefers to receive objects with presentational content types – text/html, image/* and so on; the browser I have to hand sends text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 as its Accept header, meaning “please give me text/html or application/xhtml+xml, or failing that application/xml, or failing that anything else”. If the server has more than one representation for a given resource, it can use this client-supplied information to provide the best possible content; if the client has particular requirements – for example, attempting to find machine-readable content for further processing – it can declare this by specifying particular acceptable content-types in its Accept header.

For a resource for which more than one representation exists, then, the server must dispatch between them based on the client Accept header. And this is exactly a non-standard dispatch of the kind I've been discussing. Consider a resource http://foo.example/ which is implemented by sending the return value of the generic function foo back to the client:

  (defgeneric foo (request)
  (:generic-function-class accept-generic-function))

The default behaviour is somewhat a matter of taste, but one reasonable choice is that if no content-type matches we should use the HTTP 406 (Not Acceptable) status code to indicate that the responses we could generate are not acceptable to the client:

  (defmethod foo ((request t))
  (http:406 request))

Maybe we have a couple of presentational representations for the resource:

  (defmethod foo ((request (accept "text/plain")))
  "Foo")

(defmethod foo ((request (accept "text/html")))
  "<!DOCTYPE html>
<html>
 <head><title>Foo</title></head>
 <body><p>Foo</p></body>
</html>")

And we might have some machine-readable representations:

  (defmethod foo ((request (accept "text/turtle")))
  "@prefix foo: <http://example.org/ns#> .
@prefix : <http://other.example.org/ns#> .
foo:bar foo: : .")

(defmethod foo ((request (accept "application/rdf+xml")))
  "<?xml version=\"1.0\"?>
<rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\"
         xmlns:foo=\"http://example.org/ns#\">
  <rdf:Description about=\"http://example.org/ns#bar\">
    <foo:>
      <rdf:Description about=\"http://other.example.org/ns#\"/>
    </foo:>
  </rdf:Description>
</rdf:RDF>")

(I apologize to any fans of RDF/XML if I have mangled that).

Now a graphical web browser sending an accept header of text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 as above will cause the server to send the HTML version, as that is the most specific method applicable to that accept string. Given this, it is perfectly possible to construct specialized clients with alternative preferences expressed in the accept header. A terminal-based client might prioritize text/plain over text/html (though in fact neither w3m nor lynx does that, at least in the versions I have installed). A client for the Semantic Web might instead accept data in serialized RDF, preferring more modern serializations, by sending an accept string of text/turtle,application/rdf+xml;q=0.9. Each of these clients would be served the resource in its preferred format.

The case of serving one of a set of alternative files hosted on the filesystem in response to a request with an arbitrary accept string is different; in this case, it doesn't make sense to do the dispatch through methods. If we were to try to do so, it would look something like

  (defmethod filesystem-handler (url (content-type (accept "text/html")))
  (let ((prospect (pathname-for-url url "html")))
    (if (probe-file prospect)
        (serve-file prospect "text/html")
        (call-next-method))))

but we would need to define one such method per possible mime-type we might want to serve: doable, but unnecessary compared with the alternative strategy of finding all content-types servable for a given url, then choosing the one with the highest score:

  (defun filesystem-handler (url accept)
  (do* ((prospects (files-for url) (cdr prospects))
        (mime-types (mapcar #'mime-type prospects) (cdr mime-types))
        result mime-type (q 0))
       ((null prospects) (serve-file result mime-type))
     (when (> (q (car mime-types) accept) q)
       (setf result (car prospects) 
             mime-type (car mime-types) 
             q (q (car mime-types) accept)))))

(the set of files on the filesystem effectively already defines a set of methods for a given url; it doesn't make sense to try to mirror that as a set of reified methods on a generic function. Also, I've written this out using do* largely to keep the do*-is-not-that-bad society alive.)
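For what it's worth, here is a rough, self-contained sketch (not from any real implementation, and glossing over plenty of corner cases in real Accept headers) of what the q helper used above might look like: it scores a concrete content-type against an accept string by finding the most specific matching media-range and returning that range's q-value.

  (defun split-on (char string)
  ;; split STRING at CHAR, trimming spaces from each piece
  (loop for start = 0 then (1+ end)
        for end = (position char string :start start)
        collect (string-trim " " (subseq string start end))
        while end))

(defun q (mime-type accept)
  ;; exact match beats type/*, which beats */*; q defaults to 1,
  ;; and 0 means "not acceptable at all".
  (let ((best-q 0) (best-specificity -1))
    (dolist (entry (split-on #\, accept) best-q)
      (destructuring-bind (range &rest params) (split-on #\; entry)
        (let ((q (or (loop for p in params
                           when (and (>= (length p) 2)
                                     (string-equal "q=" p :end2 2))
                             return (with-input-from-string (s p :start 2)
                                      (read s)))
                     1))
              (specificity
               (cond ((string-equal range mime-type) 2)
                     ((string= range "*/*") 0)
                     ((and (> (length range) 2)
                           (string= "/*" range :start2 (- (length range) 2))
                           (string-equal range mime-type
                                         :end1 (1- (length range))
                                         :end2 (min (length mime-type)
                                                    (1- (length range)))))
                      1))))
          (when (and specificity (> specificity best-specificity))
            (setf best-specificity specificity best-q q)))))))

;; (q "text/plain" "text/html,application/xml;q=0.9,*/*;q=0.8") => 0.8

A dispatch implementation would use much the same scoring to order applicable methods by the client's preferences.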

Anyway. There's an interesting detail I've elided so far; not only do response-generating functions have to generate the content they wish to send in the response; they also have to indicate what content-type they are actually sending. Our accept-generic-function already handles dispatching on content-type; can it also take responsibility for setting the content-type of the response?

Why yes! The way to do this is using a method combination; it might look something like this:

  (defvar *actual-content-type*)

(defgeneric handle-content-type (request))

(define-method-combination content-negotiation/or ()
  ((around (:around))
   (primary () :required t))
  (:arguments request)
  (labels ((transform/1 (method)
             `(make-method
               (progn
                 (let ((s (car (sb-mop:method-specializers ,method))))
                   (when (typep s 'accept-specializer)
                     (setf *actual-content-type* (media-type s))))
                 (call-method ,method))))
           (transform (primaries)
             (mapcar #'(lambda (x) `(call-method ,(transform/1 x)))
                     primaries))
           (wrap (form)
             `(let ((*actual-content-type*))
                (multiple-value-prog1
                    ,form
                  (handle-content-type ,request)))))
    (let ((form (if (rest primary)
                    `(or ,@(transform primary))
                    `(call-method ,(transform/1 (car primary))))))
      (if around
          (wrap `(call-method ,(first around)
                              (,@(rest around) (make-method ,form))))
          (wrap form)))))

This behaves just like the or built-in method-combination, except that when calling a primary method whose specializer for the first argument is one of our accept-specializers, the content-type of the specializer is stored in a special variable; the last thing the effective method does is to call the new handle-content-type generic function, passing it the original generic function's first argument.

Now let's redefine our foo generic function to have the new method combination, and a method on handle-content-type:

  (defgeneric foo (request)
  (:generic-function-class accept-generic-function)
  (:method-combination content-negotiation/or))

(defmethod handle-content-type ((x string))
  (format t "Content-Type: ~A~%" *actual-content-type*))

and now, finally, we can try it all out:

  SPECIALIZABLE> (foo "text/plain,text/html;q=0.9,*/*;q=0.8")
Content-Type: text/plain
"Foo"

SPECIALIZABLE> (foo "text/turtle,application/rdf+xml;q=0.9")
Content-Type: text/turtle
"@prefix foo: <http://example.org/ns#> .
@prefix : <http://other.example.org/ns#> .
foo:bar foo: : ."

SPECIALIZABLE> (foo "audio/mp3")
406

OK, but by what magic do these accept-specializer objects exist and function? I wrote a paper about that, with Jan Moringen and David Lichteblau: as part of my ongoing open access experimentation, the version we submitted to the European Lisp Symposium is viewable at Goldsmiths' e-prints repository or on arXiv. The ELS Chairs have just announced a deadline extension, so there's still time (until March 23) for anyone to submit technical papers or abstracts for tutorials and demonstration sessions: please do, and I hope to see many of my readers there.

Syndicated 2014-03-13 12:22:26 from notes

sbcl release management in the air

Just because I'm attending mobile world congress doesn't mean that everything else stops. It's the end of the month, so it must be time to release sbcl. This month's is a little unsatisfying, because we've only achieved one of the two things I'd hoped for: we've been cautious and conservative after last month's landing of the new register allocator, but we haven't sorted out what's causing the less active architectures to fail to build. There are some workarounds that have been mooted; for one reason and another no-one has had the time to check whether they actually work, and progress on identifying the root cause of the build failure on sparc is only partial.

Nevertheless, minor architectures have been broken before, and no-one particularly benefits from not releasing, so 1.1.16 it is. Actually making the release is a little more challenging than usual: I aim to release by the end of the month, and in practice that means it must be done today, 28th February. However, this is the day that I am returning from Barcelona, so I am less in control of laptop power and connectivity than usual for a release day. And to add to the challenge, I am trying this time to address the valid complaints that the binaries built on my laptop don't actually run on released versions of Linux, thanks to the change in the semantics of memcpy (glibc changed the implementation in its version 2.14 to exploit the freedom given to return undefined results if the source and destination overlap) and the behaviour of the linker and versioned symbols.

So over breakfast I dusted off my squeeze chroot (that doesn't sound dodgy at all), and tried to work out how to get a runnable version of SBCL in there at all (answer: build using clisp and link to the chroot's libc). At lunchtime, I used the café's wireless to check for any last-minute changes, and in the airport I found a power socket, so I started the release build. Of course it didn't finish before the gate opened, and in any case I wasn't going to chance uploading sbcl binaries over the free airport wifi (15 minutes, woo!)

I've performed some release stunts before. SBCL 0.9 was released by William Harold Newman and simultaneously announced by me at the first European Common Lisp Meeting in Amsterdam in 2005. Last year, I released SBCL 1.1.8 “live” as part of a lightning talk at the European Lisp Symposium in Madrid. But I think this is the first time that an SBCL release was even partially effected from the air. Next stop, space?

Syndicated 2014-02-28 22:10:21 from notes

25 Feb 2014 (updated 6 Oct 2014 at 21:14 UTC) »

sbcl summer of code 2014 participation

I received notification yesterday that SBCL was accepted as a “mentoring organization” to Google's Summer of Code 2014 programme. What this means is that eligible students can think about applying to be mentored by members of the SBCL team to work on a development project – and to be paid a stipend for that work.

The next steps for students are:

  • get involved with the community: join #lisp and #sbcl on the freenode IRC network; read the sbcl-devel mailing list (e.g. at gmane.lisp.steel-bank.devel); have a go at some of the bugs marked as ‘easy’.
  • look at the existing list of project ideas, and see if any of those ideas would form a good basis for a project that would sustain your interest for three months or so.
  • read Paul Khuong's essay on getting started with SBCL from last year: it is still absolutely applicable to this year's programme, or indeed to getting into SBCL hacking independently.

The period for student applications to be mentored as part of SBCL's participation is from 10th March until 21st March 2014; for best results, get engaged with the community and potential mentors well before the cutoff point.

Last year two projects were funded, and both completed successfully, even if merging them into the mainline is/was slower than one might like. I'm looking forward to seeing what this year brings!

Syndicated 2014-02-25 12:01:20 (Updated 2014-10-06 20:48:14) from notes

3 Feb 2014 (updated 5 Mar 2014 at 15:13 UTC) »

generic function precompilation

Once upon a time, I wrote two blog entries about precompiling the dispatch for generic functions in order to improve the responsiveness at application startup, and obviously planned a third but didn't get around to writing it. Some time later, I asked the lazyweb about it, and unfortunately this time didn't get anything much back. On further reflection, I think I was going to discuss a couple of additional technical points.

First of all, there's the question about having this kind of precompilation on all the time; wouldn't it be nice if generic functions were always at their most efficient, ready to be called? Well, yes, but what we actually want is for functions to be at their most efficient when we want to call them, and we don't particularly care at other times. Why draw that distinction? Well, when we write code that uses generic functions, it is usually of the form:

  (defgeneric foo (x y))
(defmethod foo ((x t) (y t)) ...)
(defmethod foo ((x integer) (y integer)) ...)
(defmethod foo ((x rational) (y float)) ...)

a sequence of method definitions; sometimes contiguous, as above; often distributed among various files. Loading such code will execute the defmethod forms in series. And each of those executions will call add-method, changing the generic function's methods, so any aggressive dispatch precompilation scheme will kick in at every defmethod. That would be bad enough, being O(N) in the number of methods, but it's actually worse than that: the amount of work involved in precompilation will tend to be O(N) in the number of methods, or in the number of concrete classes applicable to those methods, so there's O(N²) or worse behaviour from having precompilation active all the time. SBCL is routinely mocked for having slow-loading FASL files (originally, FASL probably stood for FASt Load; we don't advertise that much); expensive and poorly-scaling computations at load-time probably won't win us any extra friends. (Though having written that, I now wonder if recording changed generic functions using the dependent-update protocol, and hooking load or asdf:load-system to precompile after loading a whole file or system, might be a tasteful compromise).
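That compromise might look roughly like the following sketch; gf-change-recorder, watch-generic-function and precompile-dispatch are illustrative names rather than existing SBCL functionality, but add-dependent and update-dependent are the standard MOP dependent-maintenance protocol:

  (defvar *changed-generic-functions* '())

(defclass gf-change-recorder () ())

(defmethod sb-mop:update-dependent
    ((gf generic-function) (recorder gf-change-recorder) &rest initargs)
  ;; called (among other times) whenever a method is added to or
  ;; removed from GF
  (declare (ignore initargs))
  (pushnew gf *changed-generic-functions*))

(defun watch-generic-function (gf)
  (sb-mop:add-dependent gf (make-instance 'gf-change-recorder)))

(defun precompile-changed-generic-functions ()
  ;; e.g. run this once after asdf:load-system returns
  (dolist (gf *changed-generic-functions*)
    (precompile-dispatch gf))        ; hypothetical dispatch precompiler
  (setf *changed-generic-functions* '()))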

Secondly, there's the detail of exactly how to pre-fill the cache for a given generic function (given that its specializers are mostly or entirely classes). The simplest strategy, of filling the cache with exactly those classes used as specializers, fails as soon as abstract or protocol classes are used to organize the implementation of the generic function. The next simplest, which is to fill the cache with all possible subclasses of specializers at all argument positions, well, just reading that should set alarm bells ringing – it might not be too problematic for generic functions with one argument, but the cross-product effect of multiple arguments will probably cause the implementation to collapse under its own weight into a virtual black hole. It might be that the best approach is to allow the user to specify an exact set of classes: and to allow the user to introspect a running system to find out which class combinations have actually been seen and hence cached in a given session.
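To put a number on that cross-product effect, here's a toy count of the candidate cache entries for a given set of specializing classes (a sketch only; classes reachable through multiple inheritance paths get counted more than once):

  (defun all-subclasses (class)
  (cons class (mapcan #'all-subclasses
                      (sb-mop:class-direct-subclasses class))))

(defun candidate-combination-count (specializer-classes)
  ;; one candidate cache entry per element of the cross product
  (reduce #'* specializer-classes
          :key (lambda (class) (length (all-subclasses class)))
          :initial-value 1))

Even modest class hierarchies at two or three argument positions multiply out alarmingly quickly.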

In fact, SBCL itself depends on a particular, specific version of this. The generic function print-object is used for printing almost everything; even without user specialization, it is called whenever a structure-object or standard-object needs to be output to any stream. Users can write methods on it, though, and that then invalidates any previous precomputed dispatch information. But there are times when it's really important that print-object just work, without any kind of extra computation to go on: in particular, when the debugger needs to inform the user that the Lisp is running out of storage space (heap or stack), it's pretty important that that reporting can happen without using more heap or stack than necessary. So for print-object, we never reset the cache state to empty; it is always prefilled with entries for control-stack-exhausted, binding-stack-exhausted, alien-stack-exhausted, heap-exhausted-error and restart for its first argument. (This strategy only works if there are no specializations of the second argument to print-object anywhere in the system; there's no good reason for a user to try to specialize the second argument in any case, so we emit a warning if they do that and hope that they read and learn from it.)

Syndicated 2014-02-03 09:56:36 (Updated 2014-03-05 14:36:05) from notes

extended specializer responses

I got a few responses to my call for use cases for generalized specializers; I'll try to summarize them in this post. Before I do, it's perhaps worth noting that the work on generalized specializers that Jim Newton and I did was written up at about the same time as Charlotte Herzeel, Jorge Vallejos, Theo D'Hondt and Pascal Costanza's work on filtered dispatch, and so our Custom Specializers in Object-Oriented Lisp paper doesn't refer to it. I'll talk more about filtered dispatch in a later post, when I tie all these threads together, but for those who aren't aware of it it's worth looking at, along with ContextL for dispatch that varies based on dynamic program state.

Meanwhile, some other ideas for extended specializers: James Anderson suggested duck typing, and I think working along similar lines Patrick Stein thought of specializers that could dispatch on the dimensions of array arguments (or length of sequence arguments). That feels to me as though there should be a fairly compelling use case, but the toy example of

  (defmethod cross-product ((x (length 3)) (y (length 3)))
  ...)

doesn't generalize. I suppose there might be an example of selecting particular numerical algorithms based on overall matrix dimensions or properties, something like

  (defmethod invert ((x (total-size> 1000000000)))
  ... invert big matrix ...)
(defmethod invert ((x (total-size<= 1000000000)))
  ... invert small matrix ...)

but it doesn't feel concrete yet.

On the other hand, just about everyone's first response to the question (Jan Moringen, and the crowded wisdom of #lisp IRC) was pattern specializers, usually through regular expressions or optima in particular; one example concrete use case given was to dispatch among handlers for urls. This is a very interesting case; I first considered this in the early days of extended specializers, and indeed mop-27.impure.lisp from the SBCL test suite captures one other use case, in a (toy) simplifier for arithmetic expressions, redolent of implementations of symbolic differentiation from a bygone age. At the time, I took that example no further, both because I didn't have a real use case, and also because I didn't fancy writing a full-strength pattern matching engine; now that optima exists, it's probably worth revisiting the example.

In particular, it's worth considering how to handle capturing of pattern variables. To continue with the toy simplifier, the existing dispatch and specializer implementation would allow one to write

  (defmethod simplify1 ((p (pattern (+ x 0))))
  (cadr p))

but what would be really neat and convenient would be to be able to write

  (defmethod simplify1 ((p (pattern (+ x 0))))
  x)

and in order to do that, we need to intercede in the generation of the actual method function in order to add bindings for pattern variables (which, helpfully, optima can tell us about).
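Concretely, for the second simplify1 definition above, the method function we would like make-method-lambda to arrange for might look roughly like this (purely illustrative, using optima's list and constant patterns):

  (lambda (p)
  ;; dispatch has already established that P matches the pattern;
  ;; the MATCH here just re-establishes the binding of X for the body.
  (optima:match p
    ((list '+ x 0) x)))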

The awkwardness in terms of the protocol – beyond the problems with metaprogramming at compile-time in the first place – is that make-method-lambda operates in ignorance of the specializers that will be applied to the method. In fact, this was the root cause of a recently-reported SBCL bug: SBCL itself needs to communicate the name and method lambda list from defmethod to make-method-lambda, and has no other way of doing that than special variables, which weren't being cleared to defend against nested calls to make-method-lambda through code-walking and macroexpansion.

But make-method-lambda is the function which generates the method body; it is exactly the function that we need to extend or override in order to do the desired automatic matching. So how can we expose this to the metaprogrammer? We can't change the signature of make-method-lambda; it might seem obvious to extend it by adding &optional arguments, but that changes the signature of the generic function and would break backwards compatibility, as methods must accept the same number of optional arguments as their generic function does; similarly, adding keyword arguments to the generic function doesn't work.

We could export and document the special variables that we ourselves use to propagate the information; in some ways that's cleanest, and has the virtue of at least having been tested in the wild. On the other hand, it feels unusual, at least in the context of the metaobject protocol; usually, everything of interest is an argument to a protocol function. So, we could define a new protocol function, and have make-method-lambda default to trampolining to it (say, make-method-lambda-with-specializer-specifiers; the basic idea, though not the horrible name, is due to Jan Moringen). But we'd need to be careful in doing that; there is some implementation-specific logic deep in PCL that performs extra optimizations to methods (the fast-method calling convention) if the system detects that only the standardized methods on make-method-lambda are applicable to a particular call; if we add any extra metaobject protocol functions in this area, we'll need to make sure that we correctly update any internal logic based on detecting the absence of metaprogrammer overrides or extensions...

Syndicated 2014-01-28 17:03:09 from notes

more prototypes with multiple dispatch

I received a couple of points in feedback regarding prototypes with multiple dispatch. Firstly, from Pascal Costanza, the observation that there's already a prototype-based dispatch system present in the quasi-standard CL MOP; the key is to view all the prototype objects as singletons, and to use introspection to manage delegation and the class-prototype as the actual object. Thus (any mistakes are mine, not Pascal's):

  (defun clone (o)
  (let* ((scs (sb-mop:class-direct-superclasses (class-of o)))
         (new (make-instance 'standard-class :direct-superclasses scs)))
    (sb-mop:finalize-inheritance new)
    (sb-mop:class-prototype new)))

(defun add-delegation (o d)
  (let ((scs (sb-mop:class-direct-superclasses (class-of o))))
    (reinitialize-instance (class-of o) 
                           :direct-superclasses (list* (class-of d) scs))))

I don't think that this works in quite the same way as a traditional prototype-based system, mind you: here, a cloned object inherits all the behaviours provided by delegation, but not the behaviours from the cloned object itself. It's pretty close, though, and might help those to whom all this is new (but who are intimately familiar with the CLOS MOP – hm, actually, the intersection of those two sets might be a little small) get a bit of a handle on this.

Secondly, Lee Salzman was kind enough to get in touch, and we had a conversation about his thesis and his reflections on that work. I'll paraphrase, so again any misrepresentation is mine, but regarding the question at the end of my previous post about the semantics of redefinition, Lee was quite clear that the expectation was that previously-cloned objects should retain their previous behaviour, even if the original object is later updated (in prototypes and multiple dispatch terms, even if the original method is redefined). In other words, the ??? is indeed expected by prototype-oriented programmers to be (FOO[1] /B/).

Lee also said that, in his subsequent experience, he has come to believe that class-based systems are a necessity – particularly in image-based systems (where there is a persistent world). He's happy with prototypes as a mechanism in a world with static programme descriptions, where you can just edit the source code, then compile and run the programme, but believes that directly modifying running programmes makes classes or some equivalent abstraction necessary. I'm not sure I completely understand this point (though I'm fairly convinced by now that I don't much like the semantics of method redefinition in a world with long-lived objects), but I'm still waiting for a good example to highlight the advantages of prototype-oriented programmes.

Meanwhile, what does this imply for the implementation of prototypes with multiple dispatch within the extended CLOS metaobject protocol? Defining a new method specialized on an object must override any existing behaviour for that object, but must leave unchanged the previous behaviour for any clones; that therefore implies that the generic function cannot discard the old method, as it must remain applicable to any clones. The consequences of this include: two prototype specializers, specialized on the same object, must not be considered to be the same specializers (otherwise the standard semantics of method definition would replace the old method with the new); and that the specializers must be annotated with the method that they belong to (so that it is possible to compute for a given specializer and argument whether that argument is acceptable to the specializer, remembering that the acceptability of an object to a prototype specializer in this world is primarily a function of the object itself). One piece of good news: the CLOS MOP already provides a mechanism for associating specializers with methods, through the specializer-direct-methods generic function, so that's at least some of the battle.
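In code, the first of those consequences might be sketched like this (illustrative names only, not the reference implementation):

  (defclass prototype-specializer (sb-mop:specializer)
  ((object :initarg :object :reader specializer-object)))

(defun make-prototype-specializer (object)
  ;; deliberately no interning (unlike EQL specializers): two
  ;; specializers on the same object are distinct, so adding a new
  ;; method never replaces an old one, and specializer-direct-methods
  ;; can then recover the single method each specializer belongs to.
  (make-instance 'prototype-specializer :object object))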

All of this merits a fuller description, a reference implementation, and some example code using it. I'm working on the first two, aiming to top up my all-important publications list by presenting at some conferences (and also helping to justify my attendance); I'd love to hear about ideas for the third...

Syndicated 2014-01-24 14:41:58 from notes

prototypes with multiple dispatch

On New Year's Day (did I have nothing better to do?) I asked the wider world about real-life uses of non-standard method selection, and I hinted that I had a real-life example of my own. This post does not discuss that example.

Instead, I'm going to discuss my limited understanding of another non-standard (at least, non-CLOS-standard) method selection system, and demonstrate what its use looks like in my current development world. I'm talking about a prototype-based object system: specifically, a mostly-faithful reimplementation of the Prototypes with Multiple Dispatch [1,2] system found in Slate and described by Lee Salzman and Jonathan Aldrich. I'll try to introduce the semantics as I understand them to an audience used to class-based multiple dispatch, but I'm left with some questions that I don't know how to answer, not being used to programming in this way myself.

So, first, what's going on? Well, the proper answer might be to read the linked papers, which have a relatively formal specification of the semantics, but the high-level idea could perhaps be summarised as finding what happens when you try to support prototype-based object systems (where the normal way to instantiate an object is to copy another one; where objects implement some of their functionality by delegating to other objects; and where single-dispatch methods are stored in the object itself) and multiple dispatch systems (where methods do not have a privileged receiver but dispatch on all of their arguments) simultaneously. The authors found that they could implement some reasonable semantics, and perform dispatch reasonably efficiently, by storing some dispatch metainformation within the objects themselves. The example code, involving the interactions between fish, healthy sharks and dying sharks, can be translated into my extended-specializer CL as:

  (defpvar /root/ (make-instance 'prototype-object :delegations nil))
(defpvar /animal/ (clone /root/))
(defpvar /fish/ (clone /root/))
(defpvar /shark/ (clone /root/))
(defpvar /healthy-shark/ (clone /root/))
(defpvar /dying-shark/ (clone /root/))
(add-delegation /fish/ /animal/)
(add-delegation /shark/ /animal/)
(add-delegation /shark/ /healthy-shark/)
(defgeneric encounter (x y)
  (:generic-function-class prototype-generic-function))
(defmethod encounter ((x /fish/) (y /healthy-shark/))
  (format t "~&~A swims away~%" x))
(defmethod encounter ((x /fish/) (y /animal/))
  x)
(defgeneric fight (x y)
  (:generic-function-class prototype-generic-function))
(defmethod fight ((x /healthy-shark/) (y /shark/))
  (remove-delegation x)
  (add-delegation x /dying-shark/)
  x)
(defmethod encounter ((x /healthy-shark/) (y /fish/))
  (format t "~&~A swallows ~A~%" x y))
(defmethod encounter ((x /healthy-shark/) (y /shark/))
  (format t "~&~A fights ~A~%" x y)
  (fight x y))

(compare figures 4 and 7 of [1]; defpvar is secretly just defvar with some extra debugging information so I don't go crazy trying to understand what a particular #<PROTOTYPE-OBJECT ...> actually is.)

Running some of the above code with

  (encounter (clone /shark/) (clone /shark/))

prints

  #<PROTOTYPE-OBJECT [/HEALTHY-SHARK/, /ANIMAL/] {10079A8713}> fights
#<PROTOTYPE-OBJECT [/HEALTHY-SHARK/, /ANIMAL/] {10079A8903}>

and returns

  #<PROTOTYPE-OBJECT [/DYING-SHARK/, /ANIMAL/] {10079A8713}>

(and I'm confident that that's what is meant to happen, though I don't really understand why in this model sharks aren't fish).

The first question I have, then, is another lazyweb question: are there larger programs written in this style that demonstrate the advantages of prototypes with multiple dispatch (specifically over classes with multiple dispatch; i.e. over regular CLOS). I know of Sheeple, another lisp implementation of prototype dispatch, probably different in subtle or not-so-subtle ways from this one; what I'm after, though, probably doesn't exist: if there's no easy way of using prototype dispatch in Lisp, it won't be used to solve problems, and some other way will be used instead (let's call that the computer programmer's weak version of the Sapir-Whorf hypothesis). What's the canonical example of a problem where prototype-based object systems shine?

The second question I have is more technical, and more directly related to the expected semantics. In particular, I don't know what would be expected in the presence of method redefinition, or even if method redefinition is a concept that can make sense in this world. Consider

  (defpvar /a/ (clone /root/))
(defgeneric foo (x)
  (:generic-function-class prototype-generic-function))
(defmethod foo ((x /a/)) `(foo[1] ,x))
(defpvar /b/ (clone /a/))
(foo /a/) ; => (FOO[1] /A/)
(foo /b/) ; => (FOO[1] /B/)
(defmethod foo ((x /a/)) `(foo[2] ,x))
(foo /a/) ; => (FOO[2] /A/)
(foo /b/) ; => ???

What should that last form return? Arguments from the prototype-oriented world would, I suspect, lead to the desired return value being (FOO[1] /B/), as the redefinition of the method on foo specialized to /a/ is completely irrelevant to /b/. This is my reading of the semantics described in [1], for what it's worth. Arguments from the world of generic functions and expected behaviour would probably argue for (FOO[2] /B/), because redefining a method is an action on a generic function, not an action on a set of application objects. And the argument of my implementation in practice (at present, subject to change) is to signal no-applicable-method, because method redefinition is the successive removal of the old method and addition of the new one, and removal of the old method of the generic function affects dispatch on all the objects, whereas adding the new one affects dispatch on just /a/.

Syndicated 2014-01-18 21:20:32 (Updated 2014-01-19 08:56:20) from notes

why investigate databases of audio?

At the launch meeting for Transforming Musicology (I should perhaps say that technically this was in the pub after the launch meeting), Laurence Dreyfus asked me why I was experimenting with a collection of recordings when planning my Josquin/Gombert case study, as opposed to the perhaps more natural approach of investigating the motivic or thematic similarity of music starting with the notated works (manuscripts, contemporary or modern editions).

By this stage in the evening, I was not feeling at my best – a fact unrelated to the venue of the conversation, I hasten to add – and I wouldn't be surprised if my answer was not a model of clarity. Here's another go, then, beginning with an anecdote.

When I was a PhD student, a long, long time ago in a discipline far, far away, the research group I was in had a weekly paper-reading club, where students and postdocs, and the occasional permanent member of staff, would gather together over lunch and try to work and think through an article in the field. Usually, the paper selected was roughly contemporary, maybe published in the last few years, and therefore (because of the publishing norms in Applied Mathematics and Theoretical Physics) the paper would be available on the arXiv, and so could be acquired, and printed, without leaving one's desk. (Picking up the printed copy could double up as one's daily exercise quota.) One week, however, some imp put down a paper that predated the arXiv's rise to prominence. I have two clear memories, which should nevertheless be treated as unreliable: one of myself, spending about an hour of elapsed time in the mathematical library, registering, finding the journal, finding the volume and photocopying the article; the other of the lunch meeting, discovering that those of us who had seen copies of the article (let's not go as far as having read it) were definitively in the minority.

There are several lessons to be drawn from this, beyond further evidence of the laziness and hubris of the typical PhD student. The availability of information, in whatever form, makes it more likely to be useful and more likely to be used. It's timely to reflect on the case of Aaron Swartz, who confronted this issue and faced disproportionate consequences; I'm pleased to say that I am involved in my institution's working group on Open Access, as we try to find a way of simultaneously satisfying all the requirements imposed on us as well as extending our ability to offer a transformative experience.

How does this affect musicology, and our particular attempt to give it the transformative experience? Well, one issue with music in general is its lack of availability.

That is of course a ridiculous statement, designed to be provocative: in this era of downloadable music, streaming music, ripped music, music in your pocket and on your phone, music has never been more available. But the music available in all those forms is audio; the story for other forms of musical data (manuscript notation, chord sequences, performing editions, even lyrics) is far less good. Sometimes that is for copyright reasons, as rights-holders attempt to limit the distribution of their artifacts; sometimes that is for economic reasons, as the resources to digitize and encode public-domain sources of notation are unavailable. But the net effect is that while musical audio is limited and expensive, it is often not possible to purchase large amounts of musical data in other forms at any price.

So, in my still-hypothetical case study of Josquin/Gombert attribution, I could use notation as my primary source of data if I had encodings of sources or editions of the pieces I'm interested in – the needles, if you will; if I had those, I could verify that some computational algorithm could identify the association between sections of the chansons, Lugebat and Credo. That encoding task would be tedious, but achievable in – let's say – a week or so for the three pieces. The problem with that is that that is only half the battle; it is the construction of the needle, but for a method like this to be properly tested it's not fair just to let it loose on the needle; a search system has to be given the chance to fail by hiding the needle in a suitable haystack. And constructing a haystack – encoding a few hundreds of other Renaissance vocal works – is the thing that makes it less than practical to work on search within databases of notation, at least for this long-tail repertoire.

And that's why at the moment I investigate databases of musical audio: I can get the data, and in principle at least so can other people. If you can do it better, or just differently, why not consider applying for one of the Transforming Musicology mini-projects? There will be a one-day exploratory event at Lancaster University on the 12th February to explore possible project ideas and collaborations, and having as wide a pool of interested parties as possible is crucial for the production and execution of interesting ideas. Please come!

Syndicated 2014-01-13 20:35:49 from notes

11 Jan 2014 (updated 13 Jan 2014 at 21:11 UTC) »

slime has moved to github

This is probably not news to anyone reading this, but: SLIME has moved its primary source repository to github.

One of my projects, swankr (essentially, a SLIME for R, reusing most of the existing communication protocol and implementing a SWANK backend in R, hence the name), obviously depends on SLIME: if nothing else, the existing instructions for getting swankr up and running included a suggestion to get the SLIME sources from common-lisp.net CVS, which as Zach says is a problem that it's nice no longer to have. (Incidentally, Zach is running a survey – direct link – to assess possible effects of changes on the SLIME userbase.)

So, time to update the instructions, and in the process also

  • hoover up any obvious patches from my inbox;
  • improve the startup configuration so that the user need do no explicit configuration beyond a suitable entry in slime-lisp-implementations;
  • update the BUGS.org and TODO.org files for the current reality.

That's all safely checked in and pushed to the many various locations which people might think of as the primary swankr repository. But those org-mode files are exported to HTML to produce the minimal swankr web presence, and the script that does that was last run with org-mode version 7.x, while now, in the future, org-mode is up to version 8 (and this time the change in major version number is definitely indicative of non-backwards-compatible API changes).

This has already bitten me. The principles of reproducible research are good, and org-mode offers many of the features that help: easily-edited source format, integration with a polyglossia of programming languages, high-quality PDF output (and adequate-quality HTML output, from the same sources) – I've written some technical reports in this style, and been generally happy with it. But, with the changes from org-mode 7 to org-mode 8, in order to reproduce my existing documents I needed to make not only (minor) source document changes, but also (quite major) changes to the bits of elisp to generate exported documents in house style. Reproducible research is difficult; I suppose it's obvious that exact reproduction depends on exact software versions, platform configurations and so on – see this paper, for example – but how many authors of supposedly reproducible research are actually listing all the programs, up to exact versions of dynamically-linked libraries they link to, used in the preparation and treatment of the data and document?

In any case, the changes needed for swankr are pretty minor, though I was briefly confused when I tried org-html-export-as-html first (and no output was produced, because that's the new name for exporting to an HTML buffer rather than to an HTML file). The swankr website should now reflect the current state of affairs.

Syndicated 2014-01-11 20:24:36 (Updated 2014-01-13 20:35:49) from notes

180 older entries...
