Older blog entries for crhodes (starting at number 157)

ELS 2012 Liveblogging! Well, I'm constrained to one diary entry a day, so maybe it's a bit of a stretch to claim that I've joined the socially networked world, but baby steps...

First impression: Zadar is really quite pretty. Shiny white stone, clean, old buildings, seafront. That impression is probably coloured by the fact that when I left from London, the weather was 4°C and pouring with rain – and I emerged from the plane into 25°C heat and a blue sky. I confess, I even had a little nap near the Sea Organ while waiting for the evening meeting and welcome reception; at that reception, let us just say that a good amount of Maraschino and another good amount of the local beer were consumed, both in good company. Also, it's asparagus season; yum.

This morning, after a generous welcome from Marco Antoniotti, this year's programme chair, Juanjo García Ripoll gave a very interesting overview of ECL and its history, and made some good points about its design philosophy. The key argument is probably that designing ECL for embeddability adds options, rather than being a limitation; he made a plausible case that those things which are currently lost compared with more traditional implementations – particularly image saving – are reimplementable, at least up to a point. Juanjo also listed a number of good improvements in ECL since the last time: Unicode support, multithreading, improved CLOS and MOP support, and plenty of other things.

After a good long coffee break, we had the first paper sessions: first, a presentation of Climb (no website yet, apparently), an image processing toolkit developed by Laurent Senta with Didier Verna; some interesting stuff in there, even if the dreaded Demo Effect came along. There was a particularly neat-looking demo of a (prototype) visual environment for chaining processing tasks; performance is a bit more of a hot topic (read: not yet implemented), both in terms of parallelizing individual operations and (I think) in terms of compiling networks of processing tasks to minimize redundant computation. After that, Giovanni Anzani gave an AutoCAD-based talk on calculating and visualizing optimal (for some value of "optimal"; sufficient for architecture, anyway) points of intersection of incommensurate measurements. Again, a pretty nifty demo, this time within AutoCAD using AutoLisp; somewhat surprisingly, it seems that there is no matrix manipulation library support within AutoLisp. (I think I need to read the paper for this work, to understand exactly what problem the presented method is aiming to solve.)

One lesson in Southern European lunchtimes later (even longer than academic lunchtimes!) we were into the second session, starting with Alessio Stalla talking about ABCL and its interoperability with Java. I got a shout-out, because in amongst the various integrations of ABCL with its JVM host was a note that the sequences in ABCL support the extensible sequence protocol that I proposed in 2007; the example given was of using instances of the Java java.util.List class as Lisp sequences, directly. The demo effect struck again; instead of launching slime, the button in the modified Java web framework sent the compiler into an infinite compiling loop. Bad luck. (Demoing things is a particular nightmare, I know; the trick, as far as I have managed to formalize it, is to leave as little as possible to chance: this includes even typing, unless you're very confident: use short file or variable names, define key bindings or keyboard macros, or write scripts to do things for you.) Nils Bertschinger talked about probabilistic programming in Clojure: implementing Metropolis-Hastings sampling of program paths with given probabilities, and consequently allowing conditioning on some program choice points and Bayesian inference on the hidden parameters. It looks interesting, but the killer feature of Clojure (immutable data structures, for cheap undo) might also be the cause of a performance problem. Still, it looks promising – and the demo worked.

Pascal Costanza rounded out the day's schedule with his discussion on reflection in Lisp and elsewhere, talking about fexprs, 3-lisp and macros through to metaobject protocols. Unfortunately, as a regular attendee at Lisp events, I've seen much of it before; it's still interesting, but maybe I need to get out more. To dinner!

I got interviewed a couple of weeks ago. (If you're reading this on Planet Lisp, you probably already know this.) I had a quick update to one of the points I made, but failed to write it down anywhere and have since forgotten it. So, instead, a somewhat delayed and probably more dull update. (What, more dull than the delayed response of someone to their own interview? Well, be ready to be amazed. But I can't help feeling that the forgotten addition was some whole class of Lisp programming that I actually do, which is a bit embarrassing in an interview in a series titled "Lisp Hackers"...)

Maybe the first thing to report is that I dipped my toes back into SBCL maintenance: some nice, if minor, fixes from me this cycle. One was a simultaneous bug fix and optimization to modular arithmetic, motivated by pfdietz's resurfacing and running of his random form tester, which inevitably revealed that we have been slack in the last five years or so (where does the time go?); the other was a fix to the PowerPC implementation of ldb, which the previous fix had broken (taking the build with it). All sorted out now, phew. (And there's lots of other stuff that's gone in this month, unlike the previous "month", which rolled on for the best part of three months, so it's probably worth testing.)

But onward, to my desire to learn a bit more about Emacs Lisp. I've used emacs for many a year – indeed, the interview reminded me that I learnt Lisp by being given a difficult problem to work on, instructions on how to start XEmacs, and time to read USENET – but have never considered myself a real Emacs User; my ~/.emacs is so tiny, I daren't show it to the world for fear that I will lose all my remaining Lisp Hacker credibility. Acting on the view that the best way to learn is to do, shortly after starting to use EMMS as a media player I implemented support for DISCNUMBER metadata (this matters if, like me, you have a large number of multiple-disc sets). I've also revived (again) SWANKR, putting it up on github since I have started getting patches; I look forward to exploring this "social coding" idea. And I have also written a hacky but just-about-usable interface to the BBC iPlayer (using the excellent but fairly user-hostile get_iplayer utility) – particularly pleasing to my family now that digital switchover has reached London and I no longer possess equipment capable of receiving the UK television signal.

I now return to the Teclo vortex. But I am going to Zadar for the European Lisp Symposium; I hope to see some old and new faces there!

As I said in my last entry, I was in Amsterdam for ECLM 2011, once again smoothly organized by Edi Weitz and Arthur Lemmens, but this time under the aegis of the Stichting Common Lisp Foundation (of which more a bit later). After leaving the comfortable café, where Luke and Tobias (along with a backpack's worth of computing equipment on its way to visit St Petersburg) eventually turned up, it was time to go for the Saturday evening dinner, held at Brasserie Harkema. In the olden days, when I had time to do a certain amount of public-facing Lisp development, I got used to receiving the adulation of a grateful public – this time, at the dinner, I happened to sit next to someone called Lars from Netfonds. “Hmm,” said something at the back of my mind, “that rings a bell.” Lars who? Lars Magne Ingebrigtsen. My inner fanboy went a bit squeee – even to the point of explaining what gmane was to a third party in his presence. Still, it was nice to be able to say a heartfelt “thank you” in person to someone whose software has saved me time and a certain amount of embarrassment. Other topics of conversation at the dinner included a discussion with R. Matthew Emerson (of Clozure) about the social aspects of Free Lisp development, a topic on which I have written before; contrasting the attitudes and experiences of contributors and users (small and large) of Clozure CL and SBCL was interesting. It was also nice to be able to talk about Lisp-based music analysis, synthesis and generation programs; reminding myself that I still know that landscape well enough to fill people in.

The meeting itself, as others have observed over the years, is only partly about the talks: a substantial part of the goodness is in the chats over coffee and lunch. Edi and I reminisced about meeting in the venue, Hotel Arena, at a precursor to ECLM (in autumn 2004, I think... I certainly remember being approximately penniless, just after starting my first job); other people present then (as well as Arthur) included Nick Levine, Luke Gorrie, Peter van Eynde, Jim Newton, Pascal Costanza, Marc Battyani, Nicholas Neuss... many of whom were around for the rematch; a total of 95 people registered for the meeting, and the hall (part disco, part church) for the talks felt pleasantly full.

Of the talks, I was most interested in the material of Jack Harper's talk, concerning some of the constraints involved in building a product for (human) fingerprinting, and asserting that using Lisp in this product was not a problem. (Favourite quote: “batteries are complicated things”). I was a little bit disappointed that few of the speakers actually interacted with any code at all (Luke may claim that writing his slides in Squeak Smalltalk counts, but I beg to differ); in fact, Paul Miller of Xanalys was the only one of the speakers spending substantial time demonstrating anything related to the subject of the talk – and that only because the canned demo movie refused to display on the projector. Luke's talk appeared to go down well; the obvious first question came and went, and there were some more interesting questions from the floor. Star of the show was Zach Beane's talk about quicklisp; I spend a lot of time presenting or watching presentations in each of my capacities, and it's nice to have a refreshingly different (and deadpan) delivery, with good use of slides to complement the spoken content. I hope that he's right that his personal scalability will not be taxed, and that volunteers will find ways to assist in the project by taking ownership of particular tasks.

While Hans Hübner may have attempted to be controversial in his opinion slot about style guides for CL, the real controversy for me was Dave Cooper's announcement of the Stichting Common Lisp Foundation. Now, the Foundation has clearly done one thing that is helpful: provided legal and financial infrastructure so that the financial risk of hosting an ECLM is not borne entirely by two individuals; the corporate entity can potentially, after acquiring a buffer, provide the seed funding needed and, if necessary, absorb small ECLM losses (not that I believe there has been one, but hypothetically) through other fund-raising activities. On the other hand, when I asked the question as to how the Stichting CL Foundation would aim to distinguish itself from the ALU, the response from Dave Cooper was that the only difference would be that the foundation would focus on CL, where the ALU's remit extends to all members of the Lisp family. Such a narrowing of focus is, I think, potentially beneficial – indeed, when going through my email archives to look for the date of the 2004 meeting, I found a lucid rationale from Dan Barlow explaining that he had chosen to make CLiki's focus specifically DFSG-free Unix Lisp software in order to promote a sense of cohesion (rather than being motivated primarily by a strongly-held belief about the inherent superiority of DFSG-licensed software). But I don't think that the ALU's only weakness is that it spreads its Lisp net too wide: I think it has lost track of what it as an entity wants to do beyond perform a similar function for the ILC as Stichting has performed for the ECLM; Nick Levine, in his talk about how to find Lisp resources, observed that the ALU has a valuable piece of real estate – the lisp.org domain – which does not seem to be used to grow or meet the needs of the Lisp community, whether Common Lisp specifically or Lisp more generally. I found it a little sad that, Edi and Arthur aside, the overlap between the ALU board and Stichting CL Foundation directors is 100%.

After the longer talks came the lightning ones, and I took the opportunity to repeat my talk and demo about swankr, my implementation of the SLIME backend for R, from the European Lisp Symposium in April. Erik Huelsmann announced ABCL 1.0, a far better milestone to announce at the ECLM than my sneaky announcement of SBCL 0.9 (six years ago!? Doesn't time fly! Also, what ugly slides...). And after some more lightning (and less-lightning) talks, it was time to wrap up with drinks, dinner, and good conversation.

I'm in Amsterdam for the European Common Lisp Meeting, 2011 vintage. Still wearing my two hats, as “academic” and “entrepreneur” – and, somewhat to my surprise, still enjoying it. Though I do have a fairly nasty cold, possibly a result of too many late nights (business), early mornings (children), and interaction with disease-ridden individuals (students).

I'm sitting in the Café de Jaren, a haunt which I think is popular with students – but today looks just plain popular. They seem very accommodating, with newspapers to wade through (admittedly, I brought my own), free wifi, and tasty soup and sandwiches. I've been here before; in fact, getting on for a decade ago, my wife and I mislaid a copy of Asterix and the Somethings (dunno which) in Dutch. It's a pleasure to sit here, waiting for my colleagues to show up so that I can inspect Luke's presentation for blatant falsehoods (er, off-message content). Looking forward to this evening's brasserie outing and of course the talks tomorrow – it'll be particularly interesting to see how Jack Harper's presentation compares with our Teclo experience – and of course it'll be good to catch up with old friends, some of them in the flesh for the first time...

Hey, what happened to that resolution to blog weekly about being entrepreneurial? Well, it's been a long few months: course mostly delivered; PhD student approximately completed (well done, Ben); plenty of extra time to actually be entrepreneurial. Before I sink back down into the mire of too much to do and not enough time, an update!

I went to the 4th European Lisp Symposium, held at the Technical University of Hamburg-Harburg. It was great. Compared with last year, when I was Programme Chair and a volcanic eruption closed most of European airspace, leading to scrambles to find alternative keynote speakers and general stress about whether there were going to be any attendees at all, this was a breeze. Sure, I participated by reviewing a few contributions, but the event itself snuck up on me – I found myself on the Monday remembering that straight after my teaching duties on Wednesday, I needed to dash to the airport to catch a plane. Very pleasant; thanks to Didier Verna and Ralf Möller for making things so smooth that I could just turn up and assume that the event would be running perfectly – I know how much work it takes to get to that point.

It was good to catch up there with some of the wider Lisp world; there were about 60 attendees, including a solid transatlantic contingent. I couldn't quite allow myself to relax completely, and so ended up giving a lightning talk about R – a useful warmup for my slightly more substantial talk (slides; audio recording appears to have failed) at the Zürich Stuff'n'Lisp User Group. The cuteness of adding two lattice objects together (in graphical presentation form) to get a new graph combining the two originals seems never to get old, though since it's in fact six months old I did take the time this morning to commit and push the accumulated fixes to my public swankr git repository.

Right. Back to work work work – er, fun hacking.

What I got for Christmas: sufficiently advanced Intel graphics drivers for 855GM, in Linux 2.6.37-rc7. No more missing mouse cursor on boot (and, icing on the Christmas cake, working video playback!) Thank you to those who worked on this, particularly since I couldn't actually work out how or where to submit useful bug reports (and so resorted to my usual strategy when dealing with laptop-related issues, which is to contact mjg59 by whatever means available and follow his suggestions as precisely as possible).

2 Dec 2010 (updated 2 Dec 2010 at 17:05 UTC)

As the train I'm on ambles its unheated way through the unseasonably wintry English countryside, it's time for another “weekly” exciting entrepreneurial update. Actually I should be properly working, not just talking about working, but there's a file I need for that elsewhere, and Three's mobile Internet coverage evaporates about 3 minutes outside Waterloo station – if only there were a company dedicated to bettering mobile data infrastructure... So, here I am, with means, motive and opportunity to write a diary entry.

Since I last wrote, I have fought with R's handling of categorical variables in linear models; the eventual outcome was a score draw. The notion of a contrast is a useful one: very often, when we have a heap of conditions under which we observe some value, what we're interested in is not so much the predicted value given some condition, but the difference between the value under one condition and the value under some other. The canonical example is probably the difference between a group receiving a trial treatment and a group receiving a control or placebo; correspondingly, the default contrast for unordered categorical variables in R is the treatment contrast, contr.treatment.
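
For concreteness, here is a toy example of my own (nothing to do with the real dataset) showing the default treatment contrast at work: each reported coefficient is the difference between a level's mean and the baseline level's mean.

  ## toy data: three groups, with "placebo" as the baseline level
  set.seed(1)
  d <- data.frame(g = factor(rep(c("placebo", "drugA", "drugB"), each = 10),
                             levels = c("placebo", "drugA", "drugB")),
                  y = rnorm(30))
  m <- lm(y ~ g, data = d)
  coef(m)  # (Intercept) = placebo mean; gdrugA, gdrugB = differences from it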

In my particular case, I wanted to know the difference between each particular level and the average response – none of the categories I had in my system should have been privileged over any of the others, and there wasn't anything like a “control” group, so comparing against the overall average is a reasonable thing to want to do, and indeed it is supported in R through the use of the sum contrast, contr.sum. However, this reveals a slight technical problem: an overall average plus a difference for each level of a categorical variable makes one more parameter than the (effective) number of levels; just as with simultaneous equations, this is a Bad Thing. (Technically, the system becomes underdetermined.) So, in solving the system, one of the differences is jettisoned; my problem was that I wanted to visualise that information for all the differences, whether or not the last one was technically redundant – particularly since I wanted to offer a guideline as to which differences were most strongly different from the average, and I would be out of luck if the most unusual one happened to be the one jettisoned. Obviously I could trivially compute the last difference, simply from the constraint that all the differences must sum to zero (and in fact dummy.coef does that for me); but what about its standard error?
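
Continuing the toy example above (again purely illustrative), the sum contrast reports one fewer effect than there are levels, but the omitted effect is just the negative of the sum of the others, and dummy.coef recovers all of them by level name:

  ## refit with sum contrasts: coef() reports effects for all but the last level
  m2 <- lm(y ~ g, data = d, contrasts = list(g = "contr.sum"))
  effs <- coef(m2)[-1]   # g1, g2: first two levels' differences from the average
  c(effs, -sum(effs))    # the jettisoned third difference, reconstructed
  dummy.coef(m2)$g       # the same three differences, labelled by level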

Enter se.contrast. This function allows the user to construct an arbitrary contrast, expressed most simply as a vector of contributions to that contrast, and to ask an aov object for the standard error of that contrast. Some experimentation later, for a model m fitted to len observations, a particular factor variable f, and a function class.ind to construct a matrix of class indicator values (i.e. given a vector of observations, construct a matrix x where x[i,j] is 1 if observation i came from condition j, and zero otherwise), I think that:


  # refit the model as an aov object, so that se.contrast can be applied
  anova <- aov(m)
  # indicator matrix: ci[i,j] is 1 if observation i is in level j of factor f
  ci <- class.ind(data[[f]])
  # drop any empty levels
  ci <- ci[,colSums(ci) != 0]
  # one column per level: each observation in level j gets weight 1/n_j, and
  # every observation gets -1/len, i.e. "this level's mean minus the grand mean"
  contrasts <- (diag(len) - (1/len)*matrix(rep(1,len*len), nrow=len)) %*%
    ci %*% diag(1/colSums(ci))
  ses <- se.contrast(anova, contrasts)
gives me a vector ses of the standard errors corresponding to the sum contrasts in my system, including the degenerate one. (As seems to be standard in this kind of endeavour, the effort per net line of code is huge; please do not think that I wrote these five lines of code off the top of my head. Thanks to denizens of the r-help mailing list and in particular to Greg Snow for his answer to my question about this).

So, this looks like total victory! Why have I described this as only a score draw? Well, because while the above recipe works for a single factor variable, in the case I am actually dealing with I have all sorts of interaction terms between factors, and between factors and numerical variables, and again I want to display and examine all the contrasts, not just some subset of them chosen so that the system of equations to solve is nondegenerate. This looked sufficiently challenging, and the analysis to be done looked sufficiently peripheral to the current business focus, that it's been shelved, maybe for a rematch in the new year.

My weekly diary schedule has already slipped! In my defence, last week was exceptional, because the major activity was neither entrepreneurial nor academic, but practical and logistical: moving house. A lengthy rant about the insanity of the English conveyancing system is probably not of great interest to readers of this diary, so I will save the accumulated feelings of helplessness and insecurity for some other outlet.

Meanwhile, back to work. It's the start of teaching next week; fortunately, I am teaching largely the same material as last year, so now is the time that I can reap the benefit of the preparation time I spent on the course over the last two years. Inevitably, there will be new things to include and outdated material to remove or update, but by and large I should be able to deliver the same content.

This is a relief, because of course this year I only have one fifth of my time on academic-related activities. This means that various things have to be sacrificed or delegated, not least some of my extra-curricular activities such as being release manager of SBCL – so I'm very glad that Juho Snellman has volunteered to step in and do that for the next while. (He suffered the by-now traditional baptism of fire, dealing with amusing regressions late in the 1.0.42.x series, and released version 1.0.43 today; we'll see how his coefficient of grumpiness evolves over the next few months).

In the land of industry, what I've mostly been doing is drawing graphs. As the screenshot in my previous entry suggests, I'm using R for data processing and visualisation; I have datasets with large numbers of variables, and the facilities for visualising those quickly and compactly with the lattice package (implementing Becker and Cleveland's trellis paradigm) are very convenient. By and large, progressing from prototype visualisation to presentation- or publication-quality visualisation is also straightforward, but I spent so long figuring out one thing I needed to do this week that I'll document it here for posterity: that thing was to construct a graph using lattice with an axis break. It's not that it's absurdly difficult – there are plenty of hookable or parameterisable functions in the lattice graph-drawing implementation; the difficult part is finding out which functions to override, which hooks to use, and which traps to avoid.

The problem as I have found it is that when drawing a lattice plot, for these purposes, things happen in an inconvenient order. First, the axes, tickmarks and labels are drawn, using the axis function provided to the lattice call (or axis.default by default); then the data are plotted using the panel function. So far, that would be fine; one could even hackily draw over the axis in the panel function to implement the axis break, at least if one remembers to turn clipping off with clip=list(panel="off") in par.settings. Except that the axis function doesn't actually draw the axis lines; instead, there's a non-overridable bit of plot.trellis which draws the box around the plot, effectively being the axis lines – and that happens after everything else.

So, piling hack upon hack: there's no way of not drawing the box. There is, however, a way of drawing the box with a line thickness of zero: pass axis.line=list(lwd=0) in par.settings as well. Ah, but then the tick marks have zero thickness too. Oh, but we can override that setting of axis.line$lwd within our custom axis function. (Each of these realisations took a certain amount of time, experimentation, and code reading to come to pass...). What it boils down to, in the end, is a call like


xyplot(gmeans.zoo,
       screens=1, col=c(2,3,4), lwd=2, lty=3, more=TRUE,
       ylim=c(0.5,1.7), scales=list(
                          x=list(at=dates, labels=date.labels),
                           y=list(at=c(0.5,1.0,1.5,2.0),
                             labels=c("- 50%", "± 0%", "+ 50%", "+ 100%"))),
       key=list(lines=list(col=c(2,3,4)),
         text=list(lab=c("5m", "500k", "galileo"))),
       xlab="Date",
       par.settings = list(axis.line=list(lwd=0),
         clip=list(panel="off", strip="off")),
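       # custom axis function: restore a nonzero tick line width, draw ticks
       # and labels outside the panel, then draw the frame by hand, leaving a
       # gap at the break position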
       axis=function(side, scales, components, ...) {
         print(scales)
         lims <- current.panel.limits()
         trellis.par.set(axis.line=list(lwd=0.5))
         panel.axis(side=side, outside=TRUE, at=scales$at,
                    labels=scales$labels,
                    draw.labels=side %in% c("bottom", "left"), rot=0)
         panel.lines(lims$xlim[[1]], lims$ylim, col=1, lwd=1)
         panel.lines(lims$xlim[[2]], lims$ylim, col=1, lwd=1)
         panel.lines(c(lims$xlim[[1]], as.Date("2010-09-11")+0.45),
                     lims$ylim[[1]], col=1, lwd=1)
         panel.lines(c(lims$xlim[[2]], as.Date("2010-09-11")+0.55),
                     lims$ylim[[1]], col=1, lwd=1)
         panel.lines(c(lims$xlim[[1]], as.Date("2010-09-11")+0.45),
                     lims$ylim[[2]], col=1, lwd=1)
         panel.lines(c(lims$xlim[[2]], as.Date("2010-09-11")+0.55),
                     lims$ylim[[2]], col=1, lwd=1)
       },
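       # panel function: plot the series and a dotted reference line at y = 1,
       # white out a narrow band at the break position, draw the diagonal
       # break marks, and add percentage labels next to the series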
       panel=function(x,y,...) {
         xs <- current.panel.limits()$xlim
         ys <- current.panel.limits()$ylim
         panel.xyplot(x,y,...)
         panel.polygon(as.Date("2010-09-11")+c(0.4,0.6,0.6,0.4),
                       c(ys[1]+0.05,ys[1]+0.05,ys[2]-0.05,ys[2]-0.05),
                       border="white", col="white", alpha=1)
         panel.lines(xs,1,col=1,lty=3)
         panel.lines(as.Date("2010-09-11")+c(0.5,0.6),
                     c(ys[1]-0.025,ys[1]+0.025), col="black")
         panel.lines(as.Date("2010-09-11")+c(0.4,0.5),
                     c(ys[2]-0.025,ys[2]+0.025), col="black")
         panel.lines(as.Date("2010-09-11")+c(0.5,0.6),
                     c(ys[2]-0.025,ys[2]+0.025), col="black")
         panel.lines(as.Date("2010-09-11")+c(0.4,0.5),
                     c(ys[1]-0.025,ys[1]+0.025), col=1, lwd=1)
         panel.text(as.Date("2010-09-20"),
                    t(gmeans.zoo[1,])+c(0.01,0,-0.01),
                    sprintf("%2.0f%%", round(100*t(gmeans.zoo[1,]-1))),
                    pos=4)
       })

allows me to draw the kind of picture we show to interested parties.

New laptop video (intel 855GM) drivers holding up pleasantly well. One crash so far, from attempting to play a video; I haven't tried to reproduce it, since that's not something I do very often in any case. (Said video was of my own extremely minor contribution to UK televisual arts programming, so maybe my video drivers were wisely preventing me from narcissistic overexposure...)

Meanwhile, back in the land of sort-of-industrial research and development, I've been using (and simultaneously learning) R for analysing the meaty chunks of data that my colleagues are generating. A couple of my academic labmates were already R users, unashamedly using R for their data treatment needs, even when their data came from Lisp programs. Shocking, I know. So, when substantial datasets started landing in my lap a few months ago (not just from a group of people in mobile broadband; energy use data and computer usage metrics also crossed my desk) I decided that the time was ripe to learn some new tools and techniques.

Initial reactions: mostly positive. To help put my observations into context: I've dabbled in MATLAB before, and it's never quite stuck. The everything-is-a-matrix aspect was painful; graphical output was OK but nothing to write home about; and the environment support was pretty painful. (GNU Octave suffers from all of these problems too, and then some; it does have the significant advantage, from my point of view, that at least there isn't the attempted lock-in of purchasing a bare proprietary wrapper over BLAS and LAPACK, with the option to buy yet more functionality wrapping BLAS and LAPACK in slightly different ways.) I've also used (a long, long time ago now) IDL, again a vector-oriented language; my memory of it is mostly faded, but I remember being satisfied with its graphing facilities and much more satisfied with an Emacs mode than with its default user interface. This was 1998; I dare say things would be much the same today...

By contrast, R has data types that are mostly comfortable to my inner Lisp programmer. Yes, number crunching is best done through vectors (matrices being a thin wrapper around vectors rather than a distinct language data type), but lists of vectors are fine, and are used to collect data into data.frames. It has a lightweight object system, with single dispatch on the class of the first argument; mind you, classes of objects are a pretty mutable concept in R, settable at arbitrary points in program execution (there's mostly no relationship between object class and object contents). Speaking of settable, there's the `<-` operator both for assignment and for mutation, like Common Lisp's setf. “Mutation” there might actually not be quite the right word; the evaluation semantics are mostly lexical binding and call-by-value, with the interpreter attempting to perform copy-on-write and deforestation optimizations. Speaking of interpreters, there's a reified environment and call stack at all stages of program execution, which makes the language mildly tricky to compile, particularly since environments are just about the only thing in the language which can be mutated; this aspect of the language has recently been the subject of discussion in the R community (and, would you believe it, one of the camps is advocating a rewrite of the engine using Common Lisp both as a model and as an implementation language, mentioning SBCL by name... sadly without increasing my citation count. Oh well.)
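
A couple of toy snippets of my own (not from any real code of mine) illustrating those points: an S3 class is just an attribute, settable whenever you like; dispatch is on the class of the first argument; and arguments behave as if passed by value.

  ## class is just an attribute, assignable at any point in execution
  x <- list(hz = 440)
  class(x) <- "note"                    # x is now a "note"
  print.note <- function(n, ...) cat("a note at", n$hz, "Hz\n")
  print(x)                              # dispatches on class(x) -> print.note

  ## call-by-value semantics: the callee appears to get its own copy
  zero.first <- function(v) { v[1] <- 0; v }
  a <- c(1, 2, 3)
  zero.first(a)                         # returns 0 2 3 ...
  a                                     # ... but a is still 1 2 3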

In any case, once I saw the reified environments, and also spotted parse (analogue to read) and eval, I started wondering whether the comint-based Emacs mode for R, Emacs Speaks Statistics, could be enhanced or replaced with some SLIME-like functionality. I was particularly interested in whether there was enough introspective capability to replace the default debugger, which I was not having much success in using at all. So I started digging into the documentation, finding first try, then tryCatch (OK, that's like handler-case, so far so normal), and then on the same help page found withCallingHandlers and withRestarts, direct analogues of handler-bind and restart-case. At that point it seemed logical, instead of trying to write some SLIME-like functionality for ESS, to simply write an R backend for SLIME. With some careful (ahem) adaptation of a stub backend Helmut Eller wrote for Ruby, I got cracking, and swankr now supports a slime REPL, slime scratch buffers, SLDB, the default inspector, and presentations.
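
To illustrate the analogy with a toy sketch of my own (not swankr code): withRestarts establishes a named restart where the problem occurs, much as restart-case would, and a calling handler established further out can invoke it, much as a handler-bind handler would.

  ## a condition is signalled at the point of failure, with a restart on offer
  parse.num <- function(s) {
    withRestarts(
      {
        n <- suppressWarnings(as.numeric(s))
        if (is.na(n)) signalCondition(simpleCondition(paste("bad number:", s)))
        else n
      },
      use.zero = function() 0)          # the restart: recover by returning 0
  }

  ## an outer calling handler decides which recovery to take, unwinding the
  ## stack only when it invokes the restart
  withCallingHandlers(
    sapply(c("1", "oops", "3"), parse.num),
    condition = function(cond) invokeRestart("use.zero"))
  ## => 1 0 3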

Here's a teaser screenshot, demonstrating functionality not yet merged: image output for lattice objects, presented in the SLIME REPL, and usable in subsequent input. There's plenty in swankr that doesn't work, and plenty more that's buggy, but it might be good enough for the enthusiastic R/Lisp/Emacs crossover community to give it a try.

Lattice Presentations

(No, my screen is not that tall.)

In my new life as an itinerant entrepreneur, spending significant amounts of time both working from strange places and travelling on trains, my productivity depends at least in part on having a good development laptop. In my other current life lecturing in Creative Computing, I also need to be able to display reliably to a data projector. Neither of these was a problem until relatively recently.

For the last four years or so, after obtaining a recommendation from one of the masters of Linux and laptops, I've used an IBM (now Lenovo) X40: light, fairly rugged (it has survived at least one drop), with an almost full-sized keyboard, and – importantly – it fits into more or less any carrying equipment. However, the relative instability (which I am not the only one to experience) brought about by the move in Linux and X.org to kernel mode-setting (KMS) was beginning to get worrying. The X40 has an Intel 855GM graphics card, and since Intel is participating heavily in KMS support and development, new things get turned on early; I follow squeeze (Debian testing), which gives me some of the thrills and spills of being on the leading edge of consumer Linux development; most pertinently for the start of the new academic year, I've been suffering from odd display corruption on the VGA output.

Trying out various combinations of X.org and kernel versions to find a set that works for me, I have experienced:

  • intermittent (about once a day) X server crashes, leaving the hardware in a sufficiently inconsistent state that the X server refuses to start again (linux v2.6.33ish, xserver-xorg-video-intel 2.9.1)
  • failure to start the X server on boot at all (linux v2.6.33ish, xserver-xorg-video-intel 2.12.0+legacy1-1)
  • missing mouse cursor on X server start (linux v2.6.35, xserver-xorg-video-intel 2.9.1)
  • substantial (~0.5s) latency about once every 10 seconds, with kslowd or kworkqueue processes taking about 20% of the CPU (linux v2.6.35 and v2.6.36-rc3, xserver-xorg-video-intel 2.9.1)

The good news is that a large number of Intel-graphics-related fixes appeared in Linus' master branch yesterday, and at least some of these problems are fixed; with a module parameter poll=0 for the drm_kms_helper module, the 0.5s latencies are gone (put options drm_kms_helper poll=0 in a file in /etc/modprobe.d). The VGA corruption I was experiencing seems to have been fixed somewhere between Linux versions 2.6.33 and 2.6.35; I have hopes, too, that the X server crashes might be substantially less frequent (but I haven't been running a single kernel long enough to check yet). The one remaining issue that is definitely still present is the missing mouse cursor when X first starts; a suspend/resume cycle works around this fairly reliably for me. Thank you to all those who've worked on fixing these problems.

I'm particularly glad that all of this is just about sufficiently fixed, because the alternative would have been to get a new laptop, and as well as my natural disinclination to purchase new stuff when not strictly necessary (and let's face it, the initial phases of a startup are not usually those where money is abundant), it seems to be largely impossible to get a new one with the form factor of the X40: with the growing prevalence of widescreens, just about every modern laptop is substantially bigger than this one, and thus would not fit in some of my carrying equipment. I will just have to preserve this one as long as possible.
