Older blog entries for crhodes (starting at number 205)

a year in review

A brief retrospective, partly brought to you by grep:

  • CATS credits earnt: 30 (15 + 7.5 + 7.5) at level 7
  • crosswords solved: >=40
  • words blogged: 75k
  • words blogged, excluding crosswords: 50k
  • SBCL releases made: 12 (latest today!)
  • functioning ARM boards sitting on my desk: 3 (number doing anything actually useful, beyond SBCL builds: 0 so far, working on it)
  • emacs packages worked on: 2 (iplayer, squeeze)
  • public engagement events co-organized: 1
  • ontological inconsistencies resolved: 1
  • R packages made: 1 (University PR departments offended: not enough)

Blogging’s been a broad success; slightly tailed off of late, what with

  • Crawl Dungeon Sprint wins: 2 (The Pits, Thunderdome: both GrFi)

so there’s an obvious New Year’s Resolution right there.

Happy New Year!

Syndicated 2014-12-31 22:27:37 from notes

ref2014 data in R package form

As threatened, one last post about REF2014. Actually this post is not so much about REF2014 any more; it’s mostly about R. I’ve finally found the time to package up the REF2014 data as a convenient R package, along with accompanying documentation and a “vignette”. Some quick observations about the process:

  • it’s not desperately easy to find authoritative documentation on best practices, or even appropriate first steps. There’s authoritative documentation on the format and contents of an R package, but that documentation suffers from describing what not to do without describing what to do instead (example); there’s Hadley Wickham’s suggestions for best practices, but not quite enough of those suggestions were applicable to this case (data-only, converted from upstream formats). Putting it all together took more brain cycles than I care to admit.

  • Sweave is a bit clunky. It does the job but I can’t help but compare it to org-mode and markdown (R has knitr for markdown vignettes, but I couldn’t get rmarkdown working in the budgeted time, so fell back to Sweave).

  • it’s quite nice to be forced to document all the columns of a data frame. I say “forced”; it’s not that strong, but having R CMD check tell me about all the bad things I’ve done is a decent motivator.

I’m not sure what the licence of the REF2014 data is (or even if that is a question that makes sense). It appears I’m not the only one who is unsure; the Web and Internet Science crowd at Southampton have put up an RDF version of the REF2014 data, and Christopher Gutteridge doesn’t know about licensing either. Meanwhile, in the general belief that having this dataset more available is likely to be positive, all things considered, have at it (and tell me where the packaging has gone wrong...).

Syndicated 2014-12-28 22:44:45 (Updated 2014-12-28 22:55:18) from notes

research excellence framework: more ranks to choose from

Apparently the Pro-Warden of Research and Enterprise at my institution discovered the ‘GPA intensity’ measure for ranking Universities: or at least saw fit this morning to circulate to all staff a message saying

This league table both confirms and affirms Goldsmiths as a research-intensive university within an inclusive research culture – something we can all be proud of.

which is at least sort-of true – as I suggested last night, I think an intensity measure is just about on the side of the angels. One flaw in the REF process is the element of choice in who to submit, though there are problems in just disallowing that choice, including the coercion of staff onto teaching-only contracts; maybe the right answer is to count all employees of Universities, and not distinguish between research-active and teaching-only (or indeed between academic and administrator): that at least would give a visibility of the actual density (no sniggering at the back, please) of research in an institution.

As should be obvious from yesterday’s post, I don’t think that the league table generated by GPA intensity, or indeed the league table by unweighted GPA, or indeed any other league table has the power to confirm anything in particular. I think it’s generally true that Goldsmiths is an institution where a lot of research goes on, and also that it has a generally open and inclusive culture: certainly I have neither experienced nor heard anything like the nastiness at Queen Mary, let alone the tragedy of Stefan Grimm at Imperial. But I don’t like the thoughtless use of data to support even narratives that I might personally believe, so let’s challenge the idea that GPA intensity rank is a meaningful measure.

What does GPA intensity actually mean? It means the GPA that would have been achieved by the institution if all eligible staff were submitted to the REF, rather than just those who actually were, and that all the staff who weren’t originally submitted received “unclassified” (i.e. 0*) assessments for all their contributions. That’s probably a bit too strong an interpretation; it’s easy to see how institutions would prefer not to submit staff with lower-scoring outputs, even if their scoring would be highly unlikely to be 0*, whether they are optimizing for league tables or QR funding. So intensity-corrected GPA probably overestimates the “rank” of institutions whose strategy was biased towards submitting a higher proportion of their staff (and conversely underestimates the “rank” of institutions whose strategy was to submit fewer of their staff).
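To spell out the arithmetic (my reading of the measure, not any official definition): the intensity-corrected GPA is just the submitted GPA scaled by the fraction of eligible staff actually submitted, as if everyone left out had scored zero. As a one-line sketch:

  ;; a sketch of the arithmetic, not anybody's official methodology
  (defun intensity-corrected-gpa (gpa submitted-fte eligible-fte)
    (* gpa (/ submitted-fte eligible-fte)))

  ;; (intensity-corrected-gpa 3.2 50 80) => 2.0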

It probably is fair to guess, though, that the unsubmitted staff don’t have many 4* “world leading” outputs (whatever that means), honourable exceptions aside. So the correction for intensity to compare institutions by percentages of outputs assessed as 4* is probably reasonable. Here it is:

4* outputs with/without intensity

Another case where an intensity correction is probably reasonable is when comparing institutions by some combination of their 4* and 3* scores overall (i.e. including Impact and Environment): this time, because the 4* and (probably to a lesser extent) 3* scores will probably be one of the inputs to QR funding (if that carries on at all), and the intensity correction will scale the funding from a per-submitted-capita to a per-eligible-capita basis: it will measure the average QR funding received per member of staff. As I say, we don’t know about QR funding in the future; using the current weighting (3×4*+1×3*), the picture looks like this:

4*/3* 3:1 overall with/without intensity

But wait, does it make sense to rank these at all? Surely what matters here is some kind of absolute scale: small differences between per-capita QR funding at different institutions will be hardly noticeable, and even moderate ones are unlikely to be game-changers. (Of course, institutions may choose not to distribute any QR funding they receive evenly; such institutions could be regarded as having a less inclusive research culture...). So, if instead of plotting QR ranks, we plot absolute values of the QR-related “score”, what does the picture look like?

4*/3* 3:1 overall values with/without intensity

This picture might be reassuringly familiar to the UK-based research academic: there’s a reasonable clump of the names that one might expect to be characterized as “research-intensive” institutions, between around 89 and 122 on the right-hand (intensity-corrected) scale; some are above that clump (the “winners”: UCL, Cambridge, Kings London) and some a bit below (SOAS). Of course, since the members of that clump are broadly predictable just from reputation, one might ask whether the immense cost of the REF process delivers any particular benefit. (One might ask. It’s not clear that one would ever get an answer).

I should stop playing with slopegraphs and resume work on packaging up the data so that other people can also write about REF2014 instead of enjoying their “holidays”.

Syndicated 2014-12-23 15:13:49 from notes

research excellence framework and public relations

Last week saw the publication of the results of the “Research Excellence Framework” – a title which does almost nothing to describe what it actually is. To those reading not from academia, or from those portions of academia where this monstrosity has not penetrated, the REF involved university departments gathering sets of up to four publications from a subset of their research-active staff who were employed by them on 1st December 2013; writing submission documents to explain why those publications should be considered world-leading or at least internationally excellent; gathering and documenting “Impact” case studies, and writing about the general environment for performing research. These submissions then went to assessment panels, who did some unknowable things with them before producing assessments of the submissions: for each of the three measures (outputs, impact and environment), the percentages of that submission judged to be at particular levels, measured from 4* (world-leading) down to 0* (unclassified), and these assessments will affect in some as yet undetermined way Quality-Related funding for Higher Education Institutions (if QR funding continues at all). Those data are now published, along with, for each department, the count of full-time-equivalent research staff considered as part of the submission, and (inexplicably, about eight hours later) figures from a different higher-education agency estimating the number of staff who would have been eligible to be considered as part of each submission.

If your first reaction to all of this is “wow, that’s a well-thought-through system that will in no way become subject to Goodhart’s law”, well, I envy you your worldview. Gaming the REF starts early: since it is the employment status on 1st December 2013 that matters for the purpose of this exercise, rather than the location at which any given piece of research was done (or the address for correspondence given on the published research), economic forces quite naturally set up a transfer window, where researchers likely to score highly in the REF if included in a submission are able to name high prices for moving in the six months leading up to the census date – and there’s then much less flexibility in the academic labour market for a good year or two afterwards. Other employment distortions happen too; for example, there’s an incentive to approach researchers based outside the UK, and offer to pay them a fractional wage (say 0.2FTE) in exchange for them offering up four of their publications, assumed to be high-scoring, for submission to the UK REF. Given the way the QR funding is distributed, this is in effect a subsidy to departments with well-connected heads, and to already-successful overseas researchers.

But aside from the distortions that arise from producing the submissions, and the simultaneously unclear and opaque way that judgment on those submissions is made, at least the result of all of this is a table of numbers, which don’t lie, right? Right.

No.

I happened to be in a different country on REF results day, doing different work. So my main view of what was going on was following the #REF2014 twitter stream. And maybe I envy myself my previous worldview, because I was not prepared for the deluge of mendaciousness. I don’t know what I was expecting – a discussion between academics, maybe, or a quick dissection of the figures, and some links to news articles and blog posts. What I actually got was corporate account after account claiming victory in the REF, using various ways of weighting and ordering the measures. Particularly egregious examples included:

  • failing to distinguish between case studies and overall research, typically seen in “x% of our research has world-class impact” or similar: @unisouthampton @cardiffphilos @ITSLeeds @UWEGradSchool @LancasterManage

  • talking about “research power”, which multiplies a GPA-type score by the FTE quantity of staff submitted to the assessment. By introducing this, larger institutions can successfully confound a notional measure of quality with a measure of quantity to produce something essentially meaningless (except that it will likely be some similar formula which determines QR funding – but most University costs are per-capita costs anyway, so this still doesn’t make much sense): @LawLeicester @CityUniHealth @UniofReading @EconomicsatYork @UoNresearch @ScienceLeeds

  • simple gibberish: @CovUniResearch, and the Guardian gets a special prize for attempting to use the REF to compare apples to oranges.

It was also disappointing to see corporate twitter accounts attempting to find measures to optimize their ranking positions after the fact; John O’Leary has observed that at least four institutions have claimed overall “victory”, as if one institution’s research could defeat another’s. As well as overall victory, though, a seemingly-infinite number of departmental accounts claimed victory, or a top-ten position, or a top-twenty (or 16th, or 22nd, or 66th) as if that was meaningful. Do we really believe that the panels, in all their wisdom, can assess the difference between “internationally excellent” and “world-leading” to a sufficient accuracy that differences in GPA scores in the third significant figure are meaningful? In other words, where are the error bars on these measurements?

To finish up: I spent too long today downloading the HEFCE and HESA data, cleaning it up, swearing at UCL’s acquisition of the Institute of Education, and messing around with ggplot. I hope to publish the cleaned-up data files for others to play with in the holidays; in the meantime, I leave you with this illustration of the game-playing: since there are multiple outcomes from one set of REF measurements, different institutions will have used different strategies for choosing which of their staff to submit and which not: to attempt to optimize their Grade Point Average, their (hoped-for) QR funding, or their likelihood of a good headline on 18th December 2014. We can measure some of that by looking at the difference between GPA scores, and GPA scores scaled by the proportion of eligible staff who were actually submitted. To illustrate that, I’ve made a Tufte-style slopegraph; the only concession to modernity is that the steeper the slope, the darker the ink of the line. (Modernity in character encoding – sorry, Glyndŵr University – and font-antialiasing is completely unaddressed).

GPA with/without intensity

You can decide whether either of the GPA measures is more or less meaningful; I have no particular axe to grind (though I suspect that my institution might, and on balance I think they are on the side of the angels who know how to dance slightly better on the head of a pin). One message from this graph that everyone should be able to agree on is that it illustrates different strategies being employed – if there were uniformity across the sector, the lines would be generally horizontal. (And the real tragedy of all this is that clever, dedicated people at institutions of research and learning spent actual time and energy thinking about REF submission strategies, instead of doing something interesting and potentially useful).

In some sense, I hope not to come back to this subject. But it’s the holidays, and I need something that seems enough like work avoidance that it will distract me from preparing my lectures for next term...

Syndicated 2014-12-22 23:14:03 (Updated 2014-12-22 23:17:31) from notes


hearing wagner: data preparations

Last week’s activity – in between the paperwork, the teaching, the paperwork, the paperwork, the teaching and the paperwork – was mostly taken up in preparations for the Hearing Wagner event, part of the AHRC’s Being Human festival.

Being a part of the Being Human festival gave us the opportunity to work to collect data that we wouldn’t otherwise have had access to: because of the fortuitous timing of the Mariinsky Theatre’s production of the Ring at the Birmingham Hippodrome between 5th and 9th November, we were able to convince funders to allow us to offer free tickets to Birmingham Conservatoire students, in exchange for being wired up to equipment measuring their electrodermal activity, blood flow, and hand motion.

Why collect these data? Well, one of the themes of the Transforming Musicology project as a whole is to examine the perception of leitmotive, particularly Wagner’s use of them in the Ring, and the idea behind gathering these data is to have ecologically-valid (in as much as that is possible when there’s a device strapped to you) measurements of participants’ physical responses to the performance, where those physical responses are believed to correlate with emotional arousal. Using those measurements, we can then go looking for signals of responses to leitmotives, or to other musical or production cues: as well as the students attending the performance, some of the research team were present backstage, noting down the times of events in the staging of (subjective) particular significance – lighting changes, for example.

And then all of these data come back to base, and we have to go through the process of looking for signal. And before we can do anything else, we have to make sure that all of our data are aligned to a reference timeline. For each of the operas, we ended up with around 2GB of data files: up to 10 sets of data from the individual participants, sampled at 120Hz or so; times of page turns in the vocal score, noted by a musicologist member of the research team (a coarse approximation to the sound experienced by the participants); timestamped performance annotations, generated by a second musicologist and dramaturge. How to get all of this onto a common timeline?

Well, in the best of all possible worlds, all of the clocks in the system would have been synchronized by ntp, and that synchronization would have been stable and constant throughout the process. In this case, the Panglossians would have been disappointed: in fact none of the various devices was sufficiently stably synchronized with any of the others to be able to get away with no alignment.

Fortunately, the experimental design was carried out by people with a healthy amount of paranoia: the participants were twice asked to clap in unison: once in the backstage area, where there was only speed-of-sound latency to the listeners (effectively negligible), and once when seated in the auditorium, where there was additional latency from the audio feed from the auditorium to backstage. Those claps gave us enough information, on the rather strong assumption that they were actually simultaneous, to tie everything together: the first clap could be found on each individual measuring device by looking at the accelerometer data for the signature, which establishes a common timeline for the measurement data and the musicologists; the second clap gives a measure for the additional latency introduced by the audio feed. Since the participants’ claps weren’t actually simultaneous – despite the participants being music students, and the clap being conducted – we have a small error, but it’s likely to be no more than about one second.
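In outline, the alignment itself is then simple arithmetic, once the clap has been located in each stream. A minimal sketch, with an entirely hypothetical data layout (a list of timestamp/value pairs) rather than the project’s actual code:

  ;; shift one device's samples onto the reference timeline, given the
  ;; time of the backstage clap on the device's own clock and on the
  ;; reference clock; SAMPLES is a list of (timestamp . value) conses
  (defun align-to-reference (samples device-clap-time reference-clap-time)
    (let ((offset (- reference-clap-time device-clap-time)))
      (mapcar (lambda (sample)
                (cons (+ (car sample) offset) (cdr sample)))
              samples)))

The residual difference at the second clap, after applying that offset, is then the estimate of the audio-feed latency.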

And this week? This week we’ll actually be looking for interesting signal; there’s reason to believe that electrodermal activity (basically, the change in skin conductance due to sweat) is indicative of emotional arousal, and quite a sensitive measure of music-induced emotion. This is by its nature an exploratory study: at least to start with, we’re looking at particular points of interest (specified by musicologists, in advance) for any correlation with biosignal response – and we’ll be presenting initial results about anything we find at the Hearing Wagner event in Birmingham this weekend. The clock is ticking...

edit: see also a similar post on the project blog

Syndicated 2014-11-17 10:58:17 (Updated 2014-11-17 12:49:11) from notes

reproducible builds - a month ahead of schedule

I think this might be my last blog entry on the subject of building SBCL for a while.

One of the premises behind SBCL as a separate entity from CMUCL, its parent, was to make the result of its build be independent of the compiler used to build it. To a world where separate compilation is the norm, the very idea that building some software should persistently modify the state of the compiler probably seems bizarre, but the Lisp world evolved in that way and Lisp environments (at least those written in themselves) developed build recipes where the steps to construct a new Lisp system from an old one and the source code would depend critically on internal details of both the old and the new one: substantial amounts of introspection on the build host were used to bootstrap the target, so if the details revealed by introspection were no longer valid for the new system, there would need to be some patching in the middle of the build process. (How would you know whether that was necessary? Typically, because the build would fail with a more-or-less – usually more – cryptic error.)

Enter SBCL, whose strategy is essentially to use the source files first to build an SBCL!Compiler running in a host Common Lisp implementation, and then to use that SBCL!Compiler to compile the source files again to produce the target system. This requires some contortions in the source files: we must write enough of the system in portable Common Lisp so that an arbitrary host can execute SBCL!Compiler to compile SBCL-flavoured sources (including the standard headache-inducing (defun car (list) (car list)) and similar, which works because SBCL!Compiler knows how to compile calls to car).

How much is “enough” of the system? Well, one answer might be when the build output actually works, at least to the point of running and executing some Lisp code. We got there about twelve years ago, when OpenMCL (as it was then called) compiled SBCL. And yet... how do we know there aren't odd differences that depend on the host compiler lurking, which will not obviously affect normal operation but will cause hard-to-debug trouble later? (In fact there were plenty of those, popping up at inopportune moments).

I’ve been working intermittently on dealing with this, by attempting to make the Common Lisp code that SBCL!Compiler is written in sufficiently portable that executing it on different implementations generates bitwise-identical output. Because then, and only then, can we be confident that we are not depending in some unforeseen way on a particular implementation-specific detail; if output files are different, it might be a harmless divergence, for example a difference in ordering of steps where neither depends on the other, or it might in fact indicate a leak from the host environment into the target. Before this latest attack on the problem, I last worked on it seriously in 2009, getting most of the way there but with some problems remaining, as measured by the number of output files (out of some 330 or so) whose contents differed depending on which host Common Lisp implementation SBCL!Compiler was running on.
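The measurement itself is nothing sophisticated: compare corresponding output files octet by octet and count the ones that differ. A minimal sketch of that check (not the actual build machinery):

  ;; are two output files bitwise identical?
  (defun files-identical-p (a b)
    (with-open-file (in-a a :element-type '(unsigned-byte 8))
      (with-open-file (in-b b :element-type '(unsigned-byte 8))
        (and (eql (file-length in-a) (file-length in-b))
             (loop for octet = (read-byte in-a nil nil)
                   while octet
                   always (eql octet (read-byte in-b nil nil)))))))

  ;; e.g. (files-identical-p "obj/from-ccl/foo.lisp-obj"
  ;;                         "obj/from-clisp/foo.lisp-obj")
  ;; where the pathnames are purely illustrative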

Over the last month, then, I have been slowly solving these problems, one by one. This has involved refining what is probably my second most useless skill, working out what SBCL fasl files are doing by looking at their contents in a text editor, and from that intuiting the differences in the implementations that give rise to the differences in the output files. The final pieces of the puzzle fell into place earlier this week, and the triumphant commit announces that as of Wednesday all 335 target source files get compiled identically by SBCL!Compiler, whether that is running under Clozure Common Lisp (32- or 64-bit versions), CLISP, or a different version of SBCL itself.

Oh but wait. There is another component to the build: as well as SBCL!Compiler, we have SBCL!Loader, which is responsible for taking those 335 output files and constructing from them a Lisp image file which the platform executable can use to start a Lisp session. (SBCL!Loader is maybe better known as “genesis”; but it is to load what SBCL!Compiler is to compile-file). And it was slightly disheartening to find that despite having 335 identical output files, the resulting cold-sbcl.core file differed between builds on different host compilers, even after I had remembered to discount the build fingerprint constructed to be different for every build.

Fortunately, the actual problem that needed fixing was relatively small: a call to maphash, which (understandably) makes no guarantees about ordering, was used to affect the Lisp image data directly. I then spent a certain amount of time being thoroughly confused, having managed to construct for myself a Lisp image where the following forms executed with ... odd results:

  (loop for x being the external-symbols of "CL" count 1)
; => 1032
(length (delete-duplicates (loop for x being the external-symbols of "CL" collect x)))
; => 978

(shades of times gone by). Eventually I realised that

  (unless (member (package-name package) '("COMMON-LISP" "KEYWORD" :test #'string=))
  ...)

was not the same as

  (unless (member (package-name package) '("COMMON-LISP" "KEYWORD") :test #'string=)
  ...)

and all was well again, and as of this commit the cold-sbcl.core output file is identical no matter the build host.
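The sharp-eyed will have spotted the problem: in the first form the misplaced parenthesis puts :test #'string= inside the quoted list, so member falls back to its default eql test and never matches a package-name string at all. At the REPL (a sketch, not the build code itself):

  ;; buggy: :TEST and #'STRING= are just elements of the quoted list,
  ;; so MEMBER uses EQL and a fresh string never matches
  (member (copy-seq "KEYWORD") '("COMMON-LISP" "KEYWORD" :test #'string=))
  ;; => NIL

  ;; intended: STRING= is actually used as the test
  (member (copy-seq "KEYWORD") '("COMMON-LISP" "KEYWORD") :test #'string=)
  ;; => ("KEYWORD")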

It might be interesting to survey the various implementation-specific behaviours that we have run into during the process of making this build completely repeatable. The following is a probably non-exhaustive list – it has been twelve years, after all – but maybe is some food for thought, or (if you have a particularly demonic turn of mind) an ingredients list for a maximally-irritating CL implementation...

  • Perhaps most obviously, various constants are implementation-defined. The ones which caused the most trouble were undoubtedly most-positive-fixnum and most-negative-fixnum – particularly since they could end up being used in ways where their presence wasn’t obvious. For example, (deftype fd () `(integer 0 ,most-positive-fixnum)) has, in the SBCL build process, a subtly different meaning from (deftype fd () '(and fixnum unsigned-byte)) – in the second case, the fd type will have the intended meaning in the target system, using the target’s fixnum range, while in the first case we have no way of intercepting or translating the host’s value of most-positive-fixnum. Special mentions go to array-dimension-limit, which caused Bill Newman to be cross on the Internet, and to internal-time-units-per-second; I ended up tracking down one difference in output machine code from a leak of the host’s value of that constant into target code.
  • Similarly, random and sxhash quite justifiably differ between implementations. The practical upshot of that is that these functions can’t be used to implement a cache in SBCL!Compiler, because the access patterns and hence the patterns of cache hits and misses will be different depending on the host implementation.
  • As I’ve already mentioned, maphash does not iterate over hash-table contents in a specified order, and in fact that order need not be deterministic; similarly, with-package-iterator can generate symbols in arbitrary orders, and set operations (intersection, set-difference and friends) will return the set as a list whose elements are in an arbitrary order. Incautious use of these functions tended to give rise to harmless but sometimes hard-to-diagnose differences in output; the solution was typically to sort the iteration output before operating on any of it, to introduce determinism (see the sketch after this list)...
  • ... but it was possible to get that wrong in a harder-to-detect way, because sort is not specified to be stable. In some implementations, it actually is a stable sort in some conditions, but for cases where it’s important to preserve an already-existing partial order, stable-sort is the tool for the job.
  • The language specification explicitly says that the initial contents of uninitialized arrays are undefined. In most implementations, at most times, executing (make-array 8 :element-type '(unsigned-byte 8)) will give a zero-filled array, but there are circumstances in some implementations where the returned array will have arbitrary data.
  • Not only are some constants implementation-defined, but so also are the effects of normal operation on some variables. *gensym-counter* is affected by macroexpansion if the macro function calls gensym, and implementations are permitted to macroexpand macros an arbitrary number of times. That means that our use of gensym needs to be immune to whatever the host implementation’s macroexpansion and evaluation strategy is.
  • The object returned by byte to represent a bitfield with size and position is implementation-defined. Implementations (variously) return bitmasks, conses, structures, vectors; host return values of byte must not be used during the execution of SBCL!Compiler. More subtly, the various boole-related constants (boole-and and friends) also need special treatment; at one point, their host values were used when SBCL!Compiler compiled the boole function itself, and it so happens that CLISP and SBCL both represent the constants as integers between 0 and 15... but with a different mapping between operation and integer.
  • my last blog entry talked about constant coalescing, and about printing of (quote foo). In fact printing in general has been a pain, and there are still significant differences in interpretation or at least in implementation of pretty-printing: to the extent that at one point we had to minimize printing at all in order for the build to complete under some implementations.
  • there are a number of things which are implementation-defined but have caused a certain amount of difficulty. Floating point in general is a problem, not completely solved (SBCL will not build correctly if its host doesn’t have distinct single- and double-float types that are at least approximately IEEE754-compliant). Some implementations lack denormalized numbers; some do not expose signed zeros to the user; and some implementations compute (log 2d0 10d0) more accurately than others, including SBCL itself, do. The behaviour of the host implementation on legal but dubious code is also potentially tricky: SBCL’s build treats full warnings as worthy of stopping, but some hosts emit full warnings for constructs that are tricky to write in other ways: for example, to write portable code to handle multiple kinds of string, one might write (typecase string (simple-base-string ...) ((simple-array character (*)) ...) (string ...)), but some implementations emit full warnings if a clause in a typecase is completely shadowed by other clauses, and if base-char and character are identical in that implementation the typecase above will signal just such a warning.
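As promised above, here is the shape of the determinism-by-sorting idea for hash-table iteration – a sketch, not code from the SBCL tree, and one that assumes the keys are symbols (or other string designators):

  ;; iterate a hash-table in a deterministic order by sorting its keys first
  (defun map-sorted-hash (function table)
    (let ((keys (loop for key being the hash-keys of table collect key)))
      (dolist (key (sort keys #'string<))
        (funcall function key (gethash key table)))))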

There were probably other, more minor differences between implementations, but the above list gives a flavour of the things that needed doing in order to get to this point, where we have some assurance that our code is behaving as intended. And all of this is a month ahead of my self-imposed deadline of SBCL’s 15th birthday: SBCL was announced to the world on December 14th, 1999. (I’m hoping to be able to put on an sbcl15 workshop in conjunction with the European Lisp Symposium around April 20th/21st/22nd – if that sounds interesting, please pencil the dates in the diary and let me know...)

Syndicated 2014-11-08 22:38:14 (Updated 2014-11-10 17:00:04) from notes

still working on reproducible builds

It’s been nearly fifteen years, and SBCL still can’t be reliably built by other Lisp compilers.

Of course, other peoples’ definition of “reliably” might differ. We did achieve successful building under unrelated Lisp compilers twelve years ago; there were a couple of nasty bugs along the way, found both before and after that triumphant announcement, but at least with a set of compilers whose interpretation of the standard was sufficiently similar to SBCL’s own, and with certain non-mandated but expected features (such as the type (array (unsigned-byte 8) (*)) being distinct from simple-vector, and single-float being distinct from double-float), SBCL achieved its aim of being buildable on a system without an SBCL binary installed (indeed, using CLISP or XCL as a build host, SBCL could in theory be bootstrapped starting with only gcc).

For true “reliability”, though, we should not be depending on any particular implementation-defined features other than ones we actually require – or if we are, then the presence or absence of them should not cause a visible difference in the resulting SBCL. The most common kind of leak from the host lisp to the SBCL binary was the host’s value of most-positive-fixnum influencing the target, causing problems from documentation errors all the way up to type errors in the assembler. Those leaks were mostly plugged a while ago, though they do recur every so often; there are other problems, and over the last week I spent some time tracking down three of them.

The first: if you’ve ever done (apropos "PRINT") or something similar at the SBCL prompt, you might wonder at the existence of functions named something like SB-VM::|CACHED-FUN--PINSRB[(EXT-2BYTE-XMM-REG/MEM ((PREFIX (QUOTE (102))) (OP1 (QUOTE (58))) (OP2 (QUOTE (32))) (IMM NIL TYPE (QUOTE IMM-BYTE))) (QUOTE (NAME TAB REG , REG/MEM ...)))]-EXT-2BYTE-XMM-REG/MEM-PRINTER|.

What is going on there? Well, these functions are a part of the disassembler machinery; they are responsible for taking a certain amount of the machine code stream and generating a printed representation of the corresponding assembly: in this case, for the PINSRB instruction. Ah, but (in most instruction sets) related instructions share a fair amount of structure, and decoding and printing a PINSRD instruction is basically the same as for PINSRB, with just one #x20 changed to a #x22 – in both cases we want the name of the instruction, then a tab, then the destination register, a comma, the source, another comma, and the offset in the destination register. So SBCL arranges to reuse the PINSRB instruction printer for PINSRD; it maintains a cache of printer functions, looked up by printer specification, and reuses them when appropriate. So far, so normal; the ugly name above is the generated name for such a function, constructed by interning a printed, string representation of some useful information.

Hm, but wait. See those (QUOTE (58)) fragments inside the name? They result from printing the list (quote (58)). Is there a consensus on how to print that list? Note that *print-pretty* is bound to nil for this printing; prior experience has shown that there are strong divergences between implementations, as well as long-standing individual bugs, in pretty-printer support. So, what happens if I do (write-to-string '(quote foo) :pretty nil)?

  • SBCL: "(QUOTE FOO)", unconditionally
  • CCL: "'FOO" by default; "(QUOTE FOO)" if ccl:*print-abbreviate-quote* is set to nil
  • CLISP: "'FOO", unconditionally (I read the .d code with comments in half-German to establish this)

So, if SBCL was compiled using CLISP, the name of the same function in the final image would be SB-VM::|CACHED-FUN--PINSRB[(EXT-2BYTE-XMM-REG/MEM ((PREFIX '(102)) (OP1 '(58)) (OP2 '(32)) (IMM NIL TYPE 'IMM-BYTE)) '(NAME TAB REG , REG/MEM ...))]-EXT-2BYTE-XMM-REG/MEM-PRINTER|. Which is shorter, and maybe marginally easier to read, but importantly for my purposes is not bitwise-identical.

Thus, here we have a difference between host Common Lisp compilers which leaks over into the final image, and it must be eliminated. Fortunately, this was fairly straightforward to eliminate; those names are never in fact used to find the function object, so generating a unique name for functions based on a counter makes the generated object file bitwise identical, no matter how the implementation prints two-element lists beginning with quote.
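The shape of that fix, roughly (a sketch, not SBCL’s actual code), is to derive the name from a counter, so that the host’s printer never gets a say in the matter:

  (defvar *cached-printer-counter* 0)

  ;; name cached printer functions from a counter rather than from the
  ;; printed representation of the printer specification
  (defun make-cached-printer-name ()
    (intern (format nil "CACHED-FUN-~D" (incf *cached-printer-counter*))))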

The second host leak is also related to quote, and to our old friend backquote – though not related in any way to the new implementation. Consider this apparently innocuous fragment, which is a simplified version of some code to implement the :type option to defstruct:

  (macrolet ((def (name type n)
             `(progn
                (declaim (inline ,name (setf ,name)))
                (defun ,name (thing)
                  (declare (type simple-vector thing))
                  (the ,type (elt thing ,n)))
                (defun (setf ,name) (value thing)
                  (declare (type simple-vector thing))
                  (declare (type ,type value))
                  (setf (elt thing ,n) value)))))
  (def foo fixnum 0)
  (def bar string 1))

What’s the problem here? Well, the functions are declaimed to be inline, so SBCL records their source code. Their source code is generated by a macroexpander, and so is made up of conses that are generated programmatically (as opposed to freshly consed by the reader). That source code is then stored as a literal object in an object file, which means in practice that instructions for reconstructing a similar object are dumped, to be executed when the object file is processed by load.

Backquote is a reader macro that expands into code that, when evaluated, generates list structure with appropriate evaluation and splicing of unquoted fragments. What does this mean in practice? Well, one reasonable implementation of reading `(type ,type value) might be:

  (cons 'type (cons type '(value)))

and indeed you might (no guarantees) see something like that if you do

  (macroexpand '`(type ,type value))

in the implementation of your choice. Similarly, reading `(setf (elt thing ,n) value) will eventually generate code like

  (cons 'setf (cons (cons 'elt (list 'thing n)) '(value)))

Now, what is “similar”? In this context, it has a technical definition: it relates two objects in possibly-unrelated Lisp images, such that they can be considered to be equivalent despite the fact that they can’t be compared:

similar adj. (of two objects) defined to be equivalent under the similarity relationship.

similarity n. a two-place conceptual equivalence predicate, which is independent of the Lisp image so that two objects in different Lisp images can be understood to be equivalent under this predicate. See Section 3.2.4 (Literal Objects in Compiled Files).

Following that link, we discover that similarity for conses is defined in the obvious way:

Two conses, S and C, are similar if the car of S is similar to the car of C, and the cdr of S is similar to the cdr of C.

and also that implementations have some obligations:

Objects containing circular references can be externalizable objects. The file compiler is required to preserve eqlness of substructures within a file.

and some freedom:

With the exception of symbols and packages, any two literal objects in code being processed by the file compiler may be coalesced if and only if they are similar [...]

Put this all together, and what do we have? That def macro above generates code with similar literal objects: there are two instances of '(value) in it. A host compiler may, or may not, choose to coalesce those two literal '(value)s into a single literal object; if it does, the inline expansion of foo (and bar) will have a circular reference, which must be preserved, showing up as a difference in the object files produced during the SBCL build. The fix? It’s ugly, but portable: since we can’t stop an aggressive compiler from coalescing constants which are similar but not identical, we must make sure that any similar substructure is in fact identical:

  (macrolet ((def (name type n)
             (let ((value '(value)))
               `(progn
                  (declaim (inline ,name (setf ,name)))
                  (defun ,name (thing)
                    (declare (type simple-vector thing))
                    (the ,type (elt thing ,n)))
                  (defun (setf ,name) (value thing)
                    (declare (type simple-vector thing))
                    (declare (type ,type . ,value))
                    (setf (elt thing ,n) . ,value)))))
  (def foo fixnum 0)
  (def bar string 1))

Having dealt with a problem with quote, and a problem with backquote, what might the Universe serve up for my third problem? Naturally, it would be a problem with a code walker. This code walker is somewhat naïve, assuming as it does that its body is made up of forms or tags; it is the assemble macro, which is used implicitly in the definition of VOPs (reusable assembly units); for example, like

  (assemble ()
  (move ptr object)
  (zeroize count)
  (inst cmp ptr nil-value)
  (inst jmp :e DONE)
 LOOP
  (loadw ptr ptr cons-cdr-slot list-pointer-lowtag)
  (inst add count (fixnumize 1))
  (inst cmp ptr nil-value)
  (inst jmp :e DONE)
  (%test-lowtag ptr LOOP nil list-pointer-lowtag)
  (error-call vop 'object-not-list-error ptr)
 DONE))

which generates code to compute the length of a list. The expander for assemble scans its body for any atoms, and generates binding forms for those atoms to labels:

  (let ((new-labels (append labels
                          (set-difference visible-labels inherited-labels))))
  ...
  `(let (,@(mapcar (lambda (name) `(,name (gen-label))) new-labels))
     ...))

The problem with this, from a reproducibility point of view, is that set-difference (and the other set-related functions: union, intersection, set-exclusive-or and their n-destructive variants) do not return the sets with a specified order – which is fine when the objects are truly treated as sets, but in this case the LOOP and DONE label objects ended up in different stack locations depending on the order of their binding. Consequently the machine code for the function emitting code for computing a list’s length – though not the machine code emitted by that function – would vary depending on the host’s implementation of set-difference. The fix here was to sort the result of the set operations, knowing that all the labels would be symbols and that they could be treated as string designators.
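The fix is then roughly of this shape (a sketch, not the actual commit):

  ;; symbols are string designators, so STRING< imposes a deterministic
  ;; order on the result of the set operation; copy before sorting,
  ;; since SET-DIFFERENCE may share structure and SORT is destructive
  (defun deterministic-set-difference (list1 list2)
    (sort (copy-list (set-difference list1 list2)) #'string<))

  ;; (deterministic-set-difference '(loop done exit) '(exit))
  ;; => (DONE LOOP), whatever order the host's SET-DIFFERENCE produces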

And after all this? We’re still not quite there: there are three to four files (out of 330 or so) which are not bitwise-identical for differing host compilers. I hope to be able to rectify this situation in time for SBCL’s 15th birthday...

Syndicated 2014-10-14 06:51:19 from notes

interesting pretty-printer bug

One of SBCL’s Google Summer of Code students, Krzysztof Drewniak (no relation), just got to merge in his development efforts, giving SBCL a far more complete set of Unicode operations.

Given that this was the merge of three months’ out-of-tree work, it’s not entirely surprising that there were some hiccups, and indeed we spent some time diagnosing and fixing a 1000-fold slowdown in char-downcase. Touch wood, all seems mostly well, except that Jan Moringen reported that, when building without the :sb-unicode feature (and hence having a Lisp with 8-bit characters), one of the printer consistency tests was resulting in an error.

Tracking this down was fun; it in fact had nothing in particular to do with the commit that first showed the symptom, but had been lying latent for a while and had simply never shown up in automated testing. I’ve expressed my admiration for the Common Lisp standard before, and I’ll do it again: both as a user of the language and as an implementor, I think the Common Lisp standard is a well-executed document. But that doesn’t stop it from having problems, and this is a neat one:

When a line break is inserted by any type of conditional newline, any blanks that immediately precede the conditional newline are omitted from the output and indentation is introduced at the beginning of the next line.

(from pprint-newline)

For the graphic standard characters, the character itself is always used for printing in #\ notation---even if the character also has a name[5].

(from CLHS 22.1.3.2)

Space is defined to be graphic.

(from CLHS glossary entry for ‘graphic’)

What do these three requirements together imply? Imagine printing the list (#\a #\b #\c #\Space #\d #\e #\f) with a right-margin of 17:

  (write-to-string '(#\a #\b #\c #\Space #\d #\e #\f) :pretty t :right-margin 17)
; => "(#\\a #\\b #\\c #\\
; #\\d #\\e #\\f)"

The #\Space character is defined to be graphic; therefore, it must print as #\ rather than #\Space; if it happens to be printed just before a conditional newline (such as, for example, generated by using pprint-fill to print a list), the pretty-printer will helpfully remove the space character that has just been printed before inserting the newline. This means that a #\Space character, printed at or near the right margin, will be read back as a #\Newline character.
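Here is the failure in miniature; what you see will depend on the implementation and version at hand, but in an affected build the round trip quietly fails:

  ;; the #\Space just before the conditional newline is swallowed, so
  ;; the printed form reads back with a #\Newline in its place
  (let* ((chars '(#\a #\b #\c #\Space #\d #\e #\f))
         (printed (write-to-string chars :pretty t :right-margin 17)))
    (equal chars (read-from-string printed)))
  ;; => NIL in an affected build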

It’s interesting to see what other implementations do. CLISP 2.49 in its default mode always prints #\Space; in -ansi mode it prints #\ but preserves the space even before a conditional newline. CCL 1.10 similarly preserves the space; there’s an explicit check in output-line-and-setup-for-next for an “escaped” space (and a comment that acknowledges that this is a heuristic that can be wrong in the other direction). I’m not sure what the best fix for this is; it’s fairly clear that the requirements on the printer aren’t totally consistent. For SBCL, I have merged a one-line change that makes the printer print using character names even for graphic characters, if the *print-readably* printer control variable is true; it may not be ideal that print/read round-tripping was broken in the normal case, but in the case where it’s explicitly been asked for it is clearly wrong.

Syndicated 2014-10-06 20:48:14 from notes

settings for gnome-shell extension

A long, long time ago, I configured my window manager. What can I say? I was a student, with too much free time; obviously hours (days, really) spent learning some configuration file format and tweaking some aspect of behaviour would be repaid many times over the course of a working life. I suppose one thing it led to was my current career, so it’s probably not all a loss.

However, the direct productivity benefits almost certainly were a chimera; unfortunately, systems (hardware and software) changed too often for the productivity benefit (if any) to amortize the fixed set-up time, and so as a reaction I stopped configuring my window manager. In fact, I went the other way, becoming extremely conservative about upgrades of anything at all; I had made my peace with GNOME 2, accepting that there was maybe enough configurability and robustness in the 2.8 era or so for me not to have to change too much, and then not changing anything.

Along comes GNOME 3, and there are howls all over the Internet about the lack of friendly behaviour – “friendly” to people like me, who like lots of terminals and lots of editor buffers; there wasn’t much of an outcry from other people with more “normal” computing needs; who knows why? In any case, I stuck with GNOME 2 for a long time, eventually succumbing at the point of an inadvisable apt-get upgrade and not quite hitting the big red ABORT button in time.

So, GNOME 3. I found that I shared a certain amount of frustration with the vocal crowd: dynamic, vertically-arranged workspaces didn’t fit my model; I felt that clicking icons should generate new instances of applications rather than switch to existing instances, and so on. But, in the same timeframe, I adopted a more emacs-centric workflow, and the improvements of the emacs daemon meant that I was less dependent on particular behaviours of window manager and shell, so I gave it another try, and, with the right extensions, it stuck.

The right extensions? “What are those?” I hear you cry. Well, in common with illustrious Debian Project Leaders past, I found that a tiling extension made many of the old focus issues less pressing. My laptop is big enough, and I have enough (fixed) workspaces, that dividing up each screen between applications mostly works. I also have a bottom panel, customized to a height of 0 pixels, purely to give me the fixed number of workspaces; the overview shows them in a vertical arrangement, but the actual workspace arrangement is the 2x4 that I’m used to.

One issue with a tiled window arrangement, though, is an obvious cue to which window has focus. I have also removed all window decorations, so the titlebar or border don’t help with this; instead, a further extension to shade inactive windows helps to minimize visual distraction. And that’s where the technical part of this blog entry starts...

One of the things I do for my employer is deliver a module on Perception and Multimedia Computing. In the course of developing that module, I learnt a lot about how we see what we see, and also how digital displays work. And one of the things I learnt to be more conscious about was attention: in particular, how my attention can be drawn (for example, I can not concentrate on anything where there are animated displays, such as are often present in semi-public spaces, such as bars or London City airport.)

The shade inactive windows extension adds a brightness-reducing effect to windows without focus. So, that was definitely useful, but after a while I noticed that emacs windows with some text in error-face (bold, bright red) in them were still diverting my attention, even when they were unfocussed and therefore substantially dimmed.

So I worked on a patch to the extension to add a saturation-reducing effect in addition to reducing the brightness. And that was all very well – a classic example of taking code that almost does what you want it to do, and then maintenance-programming it into what you really want it to do – until I also found that the hard-wired time over which the effect took hold (300ms) was a bit too long for my taste, and I started investigating what it would take to make these things configurable.

Some time later, after exploring the world with my most finely-crafted google queries, I came to the conclusion that there was in fact no documentation for this at all. The tutorials that I found were clearly out-dated, and there were answers to questions on various forums whose applicability was unclear. This is an attempt to document the approach that worked for me; I make no claims that this is ‘good’ or even acceptable, but maybe there’s some chance that it will amortize the cost of the time I spent flailing about over other people wanting to customize their GNOME shell.

The first thing that something, anything with a preference needs, is a schema for that preference. In this instance, we’re starting with the shade-inactive-windows in the hepaajan.iki.fi namespace, so our schema will have a path that begins “/fi/iki/hepaajan/shade-inactive-windows”, and we're defining preferences, so let’s add “/preferences” to that.

  <?xml version="1.0" encoding="UTF-8"?>
<schemalist>
  <schema path="/fi/iki/hepaajan/shade-inactive-windows/preferences/"

a schema also needs an id, which should probably resemble the path

            id="fi.iki.hepaajan.shade-inactive-windows.preferences"

except that there are different conventions for hierarchy (. vs /).

            gettext-domain="gsettings-desktop-schemas">

and then here’s a cargo-culted gettext thing, which is probably relevant if the rest of the schema will ever be translated into any non-English language.

In this instance, I am interested in a preference that can be used to change the time over which the shading of inactive windows happens. It’s probably easiest to define this as an integer (the "i" here; other GVariant types are possible):

      <key type="i" name="shade-time">

which corresponds to the number of milliseconds

        <summary>Time in milliseconds over which shading occurs</summary>
      <description>
        The time over which the shading effect is applied, in milliseconds.
      </description>

which we will constrain to be between 0 and 1000, so that the actual time is between 0s and 1s, with a default of 0.3s:

        <range min="0" max="1000"/>
      <default>300</default>

and then there's some XML noise

      </key>
  </schema>
</schemalist>

and that completes our schema. For reasons that will become obvious later, we need to store that schema in a directory data/glib-2.0/schemas relative to our base extension directory; giving it a name that corresponds to the schema id (so fi.iki.hepaajan.shade-inactive-windows.preferences.gschema.xml in this case) is wise, but probably not essential. In order for the schema to be useful later, we also need to compile it: that’s as simple as executing glib-compile-schemas . within the schemas directory, which should produce a gschemas.compiled file in the same directory.

Then, we also need to adapt the extension in question to lookup a preference value when needed, rather than hard-coding the default value. I have no mental model of the namespacing, or other aspects of the environment, applied to GNOME shell extensions’ javascript code, so being simultaneously conservative and a javascript novice I added a top-level variable unlikely to collide with anything:

  var ShadeInactiveWindowsSettings = {};
function init() {

The extension previously didn’t need to do anything on init(); now, however, we need to initialize the settings object, including looking up our schema to discover what settings there are. But where is our schema? Well, if we’re running this extension in-place, or as part of a user installation, we want to look in data/glib-2.0/schemas/ relative to our own path; if we have performed a global installation, the schema will presumably be in a path that is already searched for by the default schema finding methods. So...

      var schemaDir = ExtensionUtils.getCurrentExtension().dir.get_child('data').get_child('glib-2.0').get_child('schemas');
    var schemaSource = Gio.SettingsSchemaSource.get_default();

    if(schemaDir.query_exists(null)) {
        schemaSource = Gio.SettingsSchemaSource.new_from_directory(schemaDir.get_path(), schemaSource, false);
    }

... we distinguish between those two cases by checking to see if we can find a data/glib-2.0/schemas/ directory relative to the extension’s own directory; if we can, we prepend that directory to the schema source search path. Then, we lookup our schema using the id we gave it, and initialize a new imports.gi.Gio.Settings object with that schema.

      var schemaObj = schemaSource.lookup('fi.iki.hepaajan.shade-inactive-windows.preferences', true);
    if(!schemaObj) {
        throw new Error('failure to look up schema');
    }
    ShadeInactiveWindowsSettings = new Gio.Settings({ settings_schema: schemaObj });
}

Then, whenever we use the shade time in the extension, we must make sure to look it up afresh:

  var shade_time = ShadeInactiveWindowsSettings.get_int('shade-time') / 1000;

in order that any changes made by the user take effect immediately. And that’s it. There’s an additional minor wrinkle, in that altering that configuration variable is not totally straightforward; dconf and gsettings also need to be told where to look for their schema; that’s done using the XDG_DATA_DIRS environment variable. For example, once the extension is installed locally, you should be able to run

  XDG_DATA_DIRS=$HOME/.local/gnome-shell/extensions/shade-inactive-windows@hepaajan.iki.fi/data:$XDG_DATA_DIRS dconf-editor

and then navigate to the fi/iki/hepaajan/shade-inactive-windows/preferences schema and alter the shade-time preference entry.

Hooray! After doing all of that, we have wrestled things into being configurable: we can use the normal user preferences user interface to change the time over which the shading animation happens. I’m not going to confess how many times I had to restart my gnome session, insert logging code, look at log files that are only readable by root, and otherwise leave my environment; I will merely note that we are quite a long way away from the “scriptable user interface” – and that if you want to do something similar (not identical, but similar) in an all-emacs world, it might be as simple as evaluating these forms in your *scratch* buffer...

  (set-face-attribute 'default nil :background "#eeeeee")
(defvar my/current-buffer-background-overlay nil)

(defun my/lighten-current-buffer-background ()
  (unless (window-minibuffer-p (selected-window))
    (unless my/current-buffer-background-overlay
      (setq my/current-buffer-background-overlay (make-overlay 1 1))
      (overlay-put my/current-buffer-background-overlay
       'face '(:background "white")))
    (overlay-put my/current-buffer-background-overlay 'window
                 (selected-window))
    (move-overlay my/current-buffer-background-overlay
                  (point-min) (point-max))))
(defun my/unlighten-current-buffer-background ()
  (when my/current-buffer-background-overlay
    (delete-overlay my/current-buffer-background-overlay)))

(add-hook 'pre-command-hook #'my/unlighten-current-buffer-background) 
(add-hook 'post-command-hook #'my/lighten-current-buffer-background)

Syndicated 2014-10-06 19:55:13 from notes

code walking for pipe sequencing

Since it seems still topical to talk about Lisp and code-transformation macros, here’s another worked example – this time inspired by the enthusiasm for the R magrittr package.

The basic idea behind the magrittr package is, as Hadley said at EARL2014, to convert from a form of code where arguments to the same function are far apart to one where they’re essentially close together; the example he presented was converting

  arrange(
    summarise(
      group_by(
        filter(babynames, name == "Hadley"),
        year),
      total = sum(n)),
    desc(year))

to

  b0 <- babynames
b1 <- filter(b0, name == "Hadley")
b2 <- group_by(b1, year)
b3 <- summarise(b2, total = sum(n))
b4 <- arrange(b3, desc(year))

only without the danger of mistyping one of the variable names along the way and failing to perform the computation that was intended.

R, as I have said before, is a Lisp-1 with weird syntax and wacky evaluation semantics. One of the things that ordinary user code can do is inspect the syntactic form of its arguments, before evaluating them. This means that when looking at a fragment of code such as

  foo(bar(2,3), 4)

where a call-by-value language would first evaluate bar(2,3), then call foo with two arguments (the value resulting from the evaluation, and 4), R instead uses a form of call-by-need evaluation, and also provides operators for inspecting the promise directly. This means R users can do such horrible things as

  foo <- function(x) {
    tmp <- substitute(x)
    sgn <- 1
    while(class(tmp) == "(") {
        tmp <- tmp[[2]]
        sgn <- sgn * -1
    }
    sgn * eval.parent(tmp)
}
foo(3) # 3
foo((3)) # -3
foo(((3))) # 3
foo((((3)))) # -3 (isn’t this awesome?  I did say “wacky”)
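
The nearest Common Lisp analogue of this kind of pre-evaluation inspection is rather tamer: a macro receives its argument forms unevaluated, and can dispatch on their syntactic shape before deciding what code to emit. A toy sketch, with a hypothetical foo that dispatches on whether its argument is written as a progn form:

;; a macro sees the unevaluated argument form, so it can inspect its
;; syntactic shape before deciding what code to emit
(defmacro foo (x)
  (if (and (consp x) (eql (car x) 'progn))
      `(- ,x)   ; argument written as (progn ...): negate it
      x))       ; plain argument: leave it alone

(foo 3)         ; => 3
(foo (progn 3)) ; => -3

The difference, of course, is that the macro does its inspection at macroexpansion time, whereas R’s substitute lets an ordinary function inspect its still-unevaluated promises at run time.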

In the case of magrittr, the package authors have taken advantage of this to invent some new syntax; the pipe operator %>% is charged with inserting its first argument (its left-hand side, in normal operation) as the first argument to the call of its second argument (right-hand side). Hadley’s example is

  babynames %>%
  filter(name == "Hadley") %>%
  group_by(year) %>%
  summarise(total = sum(n)) %>%
  arrange(desc(year))

and this is effective because the data flow in this case really is a pipeline: there's a dataset, which needs filtering, then grouping, then summarization, then sorting, and each operation works on the result of the previous. This already needs to inspect the syntactic form of the argument; an additional feature is recognizing the presence of .s in the call, and placing the left-hand side value in that argument position instead of as the first argument if it is present.

In Common Lisp, there are some piping or chaining operators out there (e.g. one, two, three (search for ablock), four, and probably many others), and they do well enough. However! They mostly suffer from problems similar to ones we’ve seen before: doing code transformations with not quite enough understanding of the semantics of the code that they’re transforming; again, that’s fine for normal use, but for didactic purposes let’s pretend that we really care about this.

The -> macro from http://stackoverflow.com/a/11080068 is basically the same as the magrittr %>% operator: it converts symbols in the pipeline to function calls, and places the result of the previous evaluation as the first argument of the current operator, except if a $ is present in the arguments, in which case it replaces that. (This version doesn’t support more than one $ in the argument list; it would be a little bit of a pain to support that, needing a temporary name, but it’s straightforward in principle).
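
In use – a sketch of the behaviour just described, not of the Stack Overflow macro’s actual expansion – the intention is along these lines:

;; the previous result becomes the first argument of the next call,
;; unless a $ marks where it should go instead
(-> 5 (+ 1) (expt 2))   ; behaves like (expt (+ 5 1) 2)    => 36
(-> 5 (+ 1) (/ 2 $))    ; behaves like (/ 2 (+ 5 1))       => 1/3
(-> "x" string-upcase)  ; behaves like (string-upcase "x") => "X"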

Since the -> macro does its job, a code-walker implementation isn’t strictly necessary: pure syntactic manipulation is good enough, and if it’s used with just the code it expects, it will do it well. It is of course possible to express what it does using a code-walker; we’ll fix the multiple-$ ‘bug’ along the way, by explicitly introducing bindings rather than replacements of symbols:

  (defmacro -> (form &body body)
  (labels ((find-$ (form env)
             (sb-walker:walk-form form env
              (lambda (f c e)
                (cond
                  ((eql f '$) (return-from find-$ t))
                  ((eql f form) f)
                  (t (values f t)))))
             nil)
           (walker (form context env)
             (cond
               ((symbolp form) (list form))
               ((atom form) form)
               (t (if (find-$ form env)
                      (values `(setq $ ,form) t)
                      (values `(setq $ ,(list* (car form) '$ (cdr form))) t))))))
    `(let (($ ,form))
       ,@(mapcar (lambda (f) (sb-walker:walk-form f nil #'walker)) body))))

How to understand this implementation? Well, clearly, we need to understand what sb-walker:walk-form does. Broadly, it calls the walker function (its third argument) on successive evaluated subforms of the original form (and on variable names set by setq); the primary return value is used as the interim result of the walk, subject to further walking (macroexpansion and walking of its subforms) except if the second return value from the walker function is t.
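
A minimal sketch of that protocol in isolation (nothing to do with -> yet; the walker here just traces the walk and leaves everything alone):

;; the walker function sees each evaluated subform, together with a
;; context and the lexical environment at that point; returning the
;; subform unchanged, with no second value, lets the walk carry on
;; into its subforms
(sb-walker:walk-form
 '(let ((x (+ 1 2))) (* x x))
 nil
 (lambda (form context env)
   (declare (ignore env))
   (format t "~S ~S~%" context form)
   form))

Returning (values new-form t) instead splices new-form into the result and stops the walker descending into it, which is exactly what the walker local function in the macro above does with its rewritten setq forms.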

Now, let’s start with the find-$ local function: its job is to walk a form, returning t if it finds a $ variable to be evaluated at toplevel and nil otherwise. It does that by returning t if the form it’s given is $; otherwise, if the form it’s given is the original form, we need to walk its subforms, so it returns f unchanged; otherwise, it returns its form argument f with a secondary value of t to inhibit further walking. This operation is slightly at odds with the use of a code walker: we are explicitly not taking advantage of the fact that it understands the semantics of the code it’s walking. This might explain why the find-$ function itself looks a bit weird.

The walker local function is responsible for most of the code transformation: the expansion binds $ to the value of the first form, then repeatedly sets $ to the value of each successive form, rewritten to interpolate a $ in the first argument position if there isn’t one in the form already (as reported by find-$). If any of the forms is a symbol, it gets listified and subsequently re-walked. Thus

  (macroexpand-1 '(-> "THREE" string-downcase (char 0)))
; => (LET (($ "THREE"))
;      (SETQ $ (STRING-DOWNCASE $))
;      (SETQ $ (CHAR $ 0))),
;    T

So far, so good. Now, what could we do with a code-walker that we couldn’t do without one? Well, the above implementation of -> supports chaining simple function calls, so one answer is "chaining things that aren’t just function calls". Another refinement is to support eliding the insertion of $ when there are any uses of $ in the form, not just as a bare argument. Looking at the second one first, since it’s less controversial:

  (defmacro -> (form &body body)
  (labels ((find-$ (form env)
             (sb-walker:walk-form form env
              (lambda (f c e)
                (cond
                  ((and (eql f '$) (eql c :eval))
                   (return-from find-$ t))
                  (t f))))
             nil)
           (walker (form context env)
             (cond
               ((symbolp form) (list form))
               ((atom form) form)
               (t (if (find-$ form env)
                      (values `(setq $ ,form) t)
                      (values `(setq $ ,(list* (car form) '$ (cdr form))) t))))))
    `(let (($ ,form))
       ,@(mapcar (lambda (f) (sb-walker:walk-form f nil #'walker)) body))))

The only thing that’s changed here is the definition of find-$, and in fact it’s a little simpler: the task is now to walk the entire form and find uses of $ in an evaluated position, no matter how deep in the evaluation. Because this is a code-walker, this will correctly handle macros, backquotes, quoted symbols, and so on, and this allows code of the form

  (macroexpand-1 '(-> "THREE" string-downcase (char 0) char-code (complex (1+ $) (1- $))))
; => (LET (($ "THREE"))
;      (SETQ $ (STRING-DOWNCASE $))
;      (SETQ $ (CHAR-CODE $))
;      (SETQ $ (COMPLEX (1+ $) (1- $)))),
;    T

which, as far as I can tell, is not supported in magrittr: doing 3 %>% complex(.+1,.-1) is met with the error that “object '.' not found”. Supporting this might, of course, not be a good idea, but at least the code walker shows that it’s possible.

What if we wanted to augment -> to handle binding forms, or special forms in general? This is probably beyond the call of duty, but let’s just briefly imagine that we wanted to be able to support binding special variables around the individual calls in the chain; for example, we want

  (-> 3 (let ((*random-state* (make-random-state))) rnorm) mean)

to expand to something with the effect of

  (let (($ 3))
  (setq $ (let ((*random-state* (make-random-state))) (rnorm $)))
  (setq $ (mean $)))

and let us also say, to make it interesting, that uses of $ in the bindings clauses of the let should not count towards inhibiting the insertion of $ in the first argument position of the first form in the body of the let, so

  (-> 3 (let ((y (1+ $))) (atan y)))

should expand to the equivalent of

  (let (($ 3)) (setq $ (let ((y (1+ $))) (atan $ y))))

So our code walker needs to walk the bindings of the let, merely collecting information into the walker’s lexical environment, then walk the body performing the same rewrite as before. CHALLENGE ACCEPTED:

  (defmacro -> (&body forms)
  (let ((rewrite t))
    (declare (special rewrite))
    (labels ((find-$ (form env)
               (sb-walker:walk-form form env
                (lambda (f c e)
                  (cond
                    ((and (eql f '$) (eql c :eval))
                     (return-from find-$ t))
                    (t f))))
               nil)
             (walker (form context env)
               (declare (ignore context))
               (typecase form
                 (symbol (if rewrite (list form) form))
                 (atom form)
                 ((cons (member with-rewriting without-rewriting))
                  (let ((rewrite (eql (car form) 'with-rewriting)))
                    (declare (special rewrite))
                    (values (sb-walker:walk-form (cadr form) env #'walker) t)))
                 ((cons (member let let*))
                  (unless rewrite
                    (return-from walker form))
                  (let* ((body (member 'declare (cddr form)
                                       :key (lambda (x) (when (consp x) (car x))) :test-not #'eql))
                         (declares (ldiff (cddr form) body))
                         (rewritten (sb-walker:walk-form
                                     `(without-rewriting
                                          (,(car form) ,(cadr form)
                                            ,@declares
                                            (with-rewriting
                                                ,@body)))
                                     env #'walker)))
                    (values rewritten t)))
                 (t
                  (unless rewrite
                    (return-from walker form))
                  (if (find-$ form env)
                      (values `(setq $ ,form) t)
                      (values `(setq $ ,(list* (car form) '$ (cdr form))) t))))))
      `(let (($ ,(car forms)))
         ,@(mapcar (lambda (f) (sb-walker:walk-form f nil #'walker)) (cdr forms))))))

Here, find-$ is unchanged from the previous version; all the new functionality is in walker. How does it work? The default branch of the walker function is also unchanged; what has changed is handling of let and let* forms. The main trick is to communicate information between successive calls to the walker function, and turn the rewriting on and off appropriately: we wrap parts of the form in new pseudo-special operators with-rewriting and without-rewriting, which is basically a tacky and restricted implementation of compiler-let – if we needed to, we could do a proper one with macrolet. Within the scope of a without-rewriting, walker doesn’t do anything special, but merely return the form it was given, except if the form it’s given is a with-rewriting form. This is a nice illustration, incidentally, of the idea that lexical scope in the code translates nicely to dynamic scope in the compiler; I can’t remember where I read that first (but it’s certainly not a new idea).
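
If we did want the macrolet version, it might look something like this sketch (hypothetical names, not the pseudo-operators used above): a local macro carries the compile-time flag, and other macros read it back through their macroexpansion environment.

;; sketch of a compiler-let-alike built from macrolet (hypothetical
;; names, for illustration only)
(defmacro with-rewrite-flag (value &body body)
  `(macrolet ((rewrite-flag () ,value))
     ,@body))

(defmacro rewriting-p (&environment env)
  ;; expands to the flag established by the innermost enclosing
  ;; WITH-REWRITE-FLAG; outside one it is left as (REWRITE-FLAG)
  ;; and would fail at run time
  (macroexpand '(rewrite-flag) env))

(with-rewrite-flag t
  (rewriting-p))   ; => T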

And now

  (macroexpand '(-> 3 (let ((*random-state* (make-random-state))) rnorm) mean))
; => (LET (($ 3))
;      (LET ((*RANDOM-STATE* (MAKE-RANDOM-STATE)))
;        (SETQ $ (RNORM $)))
;      (SETQ $ (MEAN $))),
;    T
(macroexpand '(-> 3 (let ((y (1+ $))) (atan y))))
; => (LET (($ 3))
;      (LET ((Y (1+ $)))
;        (SETQ $ (ATAN $ Y)))),
;    T

Just to be clear: this post isn’t advocating a smarter pipe operator; I don’t have a clear enough view, but I doubt that the benefits of the smartness outweigh the complexity. It is demonstrating what can be done, in a reasonably controlled way, using a code-walker: ascribing semantics to fragments of Common Lisp code, and combining those fragments in a particular way, and of course it’s another example of sb-walker:walk-form in use.

Finally, if something like this does in fact get used, people sometimes get tripped up by the package system: the special bits of syntax are symbols, and importing or package-qualifying -> without doing the corresponding thing to $ would lead to cryptic errors, wrong results and/or confusion. One possibility to handle that is to invent a bit more reader syntax:

  (set-macro-character #\¦
 (defun pipe-reader (stream char)
   (let ((*readtable* (copy-readtable)))
     (set-macro-character #\·
      (lambda (stream char)
        (declare (ignore stream char))
        '$) t)
   (cons '-> (read-delimited-list char stream t)))) nil)
¦"THREE" string-downcase (find-if #'alpha-char-p ·) char-code¦

If this is the exported syntax, it has the advantage that the interface can only be misused intentionally: the actual macro and its anaphoric symbol are both hidden from the programmer; and the syntax is reasonably easy to type – on my keyboard ¦ is AltGr+| and · is AltGr+. – and moderately mnemonic from shell pipes and function notation respectively. It also has all the usual disadvantages of reader-based interfaces, such as poor composability, somewhat mitigated if pipe-reader is part of the macro’s exported interface.

Syndicated 2014-09-25 14:01:45 from notes
