Older blog entries for danstowell (starting at number 20)

Notes on how we ran the SuperCollider Symposium 2012

I've just uploaded my notes on how we ran the SuperCollider Symposium 2012 (10-page PDF). sc2012 was a great event and it was a privilege to work with so many great people in putting it together. I hope these notes are useful to future organisers, providing some detailed facts and figures to give you some idea of how we did it.

The document includes details of timing, budgeting and promotional aspects. I also include some notes about outreach, which I think is important to keep in mind. It's important for community-driven projects to bring existing people together and to attract new people - and for something like SuperCollider, which doesn't have any institution funding it and pushing it forwards, these international gatherings of users are vital both for keeping existing users engaged and for drawing in new ones. Happily, both of these aims can be achieved by putting on diverse shows featuring some of the best SuperCollider artists in the world :)

Shout outs to all the other organisers, who put loads of their own time and enthusiasm in (see the "credits" page), and hi to everyone else I met at the symposium.

(If you weren't there, see also Steve's great photos of sc2012.)

Syndicated 2012-05-02 15:43:54 (Updated 2012-05-02 15:44:09) from Dan Stowell

Why power to the people

What should you strive for?

  • Equal spread of power among all people.

Why? Three reasons, of which the third is the most important:

  1. Morality: Equal power per person is fair.
  2. Efficiency: Equal power is the most efficient way to make use of our combined human capacities.
  3. Instability: Power begets power, which means that it tends to "clump" - equal spread of power is not a stable state. Thus we have to continually work towards it, rather than achieve it and then relax.

Syndicated 2012-05-01 09:05:50 from Dan Stowell

How I made a nice map handout from OpenStreetMap

OpenStreetMap is a nice community-edited map of everything - and you can grab their data at any time. So in theory it should be the ideal thing to choose when you want to make a little map for an open-source conference or something like that.

For our event this year I made these nice map handouts. It took a while! Quite tricky for a first-timer. But the results are pretty vector PDF maps, with my own custom fonts, colour choices etc.

For anyone who fancies having a go, here's what I did:

  1. I followed the TileMill "30 minute tutorial" to install and set up TileMill on my Ubuntu laptop. It takes longer than 30 minutes - it's still a little bit tricky and there's a bit of a wait while it downloads a lump of data too.
  2. I started a new map project based on the example. I wanted to tweak it a bit - they use a CSS-like stylesheet language ("MSS") to specify what maps are supposed to look like, and it's nice that you can edit the stylesheets and see the changes immediately. However, I found it tricky to work out what to edit to have the effect I wanted. Here's what I managed to do:

    • I changed the font choice to match the visual style of our website. That bit is easy - find where there are some fonts specified, and put your preferred font at the FRONT of all the lists (see the sketch just after this list).
    • I wanted to direct people to specific buildings, but the default style doesn't show building names. However, I noticed that it does show names for cemeteries... in labels.mss on line 306 there was

          #area_label[type='cemetery'][zoom>=10] {
      

      and I could add buildings to that:

          #area_label[type='building'][zoom>=10], 
          #area_label[type='cemetery'][zoom>=10] {
      
    • The underground train line was being painted on top of the buildings, which looks confusing and silly. To fix this I had to rearrange the layers listed in the Layers panel - drag the "buildings" layer higher up the list, above the "roads" ones.

  3. When I'd got the map looking pretty good, I exported it as an SVG image.
  4. Then I quit TileMill and started up Inkscape, a really nice vector graphics program. I loaded the SVG that I saved in the previous step.
  5. I edited the image to highlight specific items:
    • The neatest way to do this is to select all and put it all into a layer, then select the items you want to highlight and move them to a new layer above. Once they're in a separate layer, it's easier to use Inkscape's selection tools to select all these items and perform tweaks like thickening the line-style or darkening the fill colour.
    • Selecting a "word" on the map is not so easy because each letter is a separate "object", and so is the shadow underneath. If there's a single word or street-name you're working on, it's handy to select all the letters and group them into a group (Ctrl+G), so you can treat them as a single unit.
    • You can also add extra annotations of your own, of course. I had to add tube-station icons manually, cos I couldn't find any way of getting TileMill to show those "point-of-interest"-type icons. I think there's supposed to be a way to do it, but I couldn't work it out.
  6. The next job is to clip the map image - the map includes various objects trailing off to different distances, so it's not a neat rectangle. In Inkscape you can do a neat clipping like this:
    • Select all the map objects. If you've been doing as I described you'll need to use "Select all in all layers" (Ctrl+Shift+A).
    • Group them together (Ctrl+G).
    • Now use the rectangle tool to draw a rectangle which matches the clipping area you want to use.
    • Select the two items - the rectangle and the map-item-group - then right-click and choose "Set clip". Inkscape unites the two objects, using the rectangle to create a clipped version of the other.
  7. Now with your neatly-cropped rectangle map, you can draw things round the outside (e.g. put a title on).
  8. If you ever need to edit inside the map, Inkscape has an option for that - right-click and choose "Enter group" and you go inside the group, where you can edit things without disturbing the neat clipping etc.
  9. Once you're finished, you can export the final image as a PDF or suchlike.
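By the way, here's roughly what the font tweak in step 2 looks like. This is a hypothetical sketch - the variable name and fallback fonts below are made up, and the details depend on which TileMill style you started from - but the idea is just to put your font at the front of each list:

    /* e.g. wherever the stylesheet defines its font lists,
       prepend your preferred font so it takes priority: */
    @sans: "MyWebsiteFont Regular", "DejaVu Sans Book", "Arial Unicode MS Regular";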

Syndicated 2012-04-22 10:11:21 (Updated 2012-04-22 10:25:33) from Dan Stowell

Implementing the GM-PHD filter

I'm implementing the GM-PHD filter. (The what? The Gaussian mixture Probability Hypothesis Density filter, which is for tracking multiple objects.) I'm implementing it in Python, which is nice, but I'm not completely sure it's working as intended yet.

Here's a screenshot of progress so far. Look at the first four plots in this picture, which are:

  1. The true trajectory of two simulated objects moving in 1D over time.
  2. Observations received, with "clutter" and occasional missed detections.
  3. The "intensity" calculated by the GM-PHD filter. This is the core state variable of the filter's model.
  4. Filtered trajectories output from the PHD filter.

So what do you think? Good results?

Not sure. It's clearly got rid of lots of the clutter - good. In fact it's got rid of the majority of the noise, hooray hooray. But the clutter right close to the targets is still there; it seems a bit mucky, in a way that suggests it's not going to be easy to clear up.

And there's also a significant "cold start" problem - it takes up to about 20 frames for the filter to be convinced that there's anything there at all. That's no real surprise, since there's an underlying "birth" model which says that a track could spring into being at any point, but there's no model for "pre-existing" targets. Nothing in the PHD and GM-PHD papers I've read even mentions this, let alone accounts for it - I'm pretty sure that we'll either need to initialise the state to account for it, or always do some kind of "warmup" before getting any results out of the filter. That's not great, especially when we might be tracking things that only have a short lifespan themselves.
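For what it's worth, here's a minimal Python sketch of the kind of initialisation I mean - all the names and numbers are my own invention, not from the GM-PHD papers. The idea is to seed the filter's intensity with one broad Gaussian component covering the state space, so pre-existing targets aren't ruled out at frame one:

    import numpy as np

    # Hypothetical sketch: one wide component representing targets that may
    # already exist before the first frame (1D position + velocity state).
    class GaussianComponent(object):
        def __init__(self, weight, mean, cov):
            self.weight = weight  # expected number of targets this component represents
            self.mean = np.asarray(mean, dtype=float)
            self.cov = np.asarray(cov, dtype=float)

    def initial_intensity(expected_targets, posmin, posmax):
        mean = [(posmin + posmax) / 2.0, 0.0]      # centred in the range, zero velocity
        posvar = ((posmax - posmin) / 2.0) ** 2    # broad enough to cover the whole range
        cov = np.diag([posvar, 1.0])
        return [GaussianComponent(expected_targets, mean, cov)]

    components = initial_intensity(expected_targets=2, posmin=0.0, posmax=100.0)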

One thing: this is a one-dimensional problem I'm testing it on. PHD filters are usually used for 2D or 3D problems - and maybe there needs to be enough entropy in the representation for clutter to be distinguished more clearly from signals. That would be a shame, since I'd like to use it on 1D things like spectrogram frames.

More tests needed. Any thoughts gratefully received too.

Syndicated 2012-03-29 17:00:13 (Updated 2012-03-29 17:03:11) from Dan Stowell

All Tomorrow's Parties: Jeff Mangum

Just back from a fab All Tomorrow's Parties, this one curated by Jeff Mangum. As well as the bands, he curated quite an educational TV channel throughout the event - we got to learn about Chomsky, Zizek, the Bali islanders, oh and Monty Python on endless loop.

Some of the things I saw:

  • Elephant 6 Holiday Surprise - the best thing about that was the ending, when they played a Sun Ra song and then started to process off the stage, led by the sousaphone player and the saw player (the saw player sticking his saw in the sousaphone and banging it!) - they led us outside singing the Sun Ra refrain, "This here, our invitation, we invite you, to our space world"...
  • Charlemagne Palestine played a wine glass nicely, but then when he settled into his long two-note piano tranceout it got really boring.
  • Joanna Newsom - quite amazing to see her play. That surprised me, I know her music but seeing her playing live, the intricacy of the harp and her twisty twindy vocals is kinda mesmerising. It's less interesting when she's playing the piano.
  • Matana Roberts and Seb Rochford did some delightful delicate free-jazz together. It's amazing watching Seb Rochford play, even when he isn't actually playing.
  • Jon Spencer Blues Explosion - amen to that.

That was all on the first day, fantastically enough. The best things about day two were:

  • Cream tea in town, with wortleberry jam, yum.
  • Flumes in the Butlin's swimming pool. The "space bowl" flume was brilliant. Word to the wise, if you're ever there...

Musically there wasn't much I planned to see on the second day. Two bands pretty new to me that I was looking forward to were Demdike Stare and Yamantaka // Sonic Titan. Both were a little bit underwhelming - Demdike Stare is atmospheric and has good video, but I'm not sure it built up to much. Yamantaka were pretty good, especially their song "Queens", and they had some great costumery, with one of the singers looking like some big hair-creature out of a Studio Ghibli film.

Sunday we had a lovely beef roast, though I cocked up the gravy so we had none. Then music. The Magic Band were a massive disappointment, not a credit to Beefheart's legacy IMHO, just some noodley noodle. However, they were bad enough that we went next door for Olivia Tremor Control who were fantastic. Their mixture of straight indie-pop and "musique concrete"-like sonic experimentation is just brill, neither of the two components losing out to the other.

Sun Ra Arkestra were also great fun, some great jazzing. A bit more straightforward jazz than I might have expected, but with a notable appearance of a lovely electronic wind instrument, a buzzy little device played really well by the lead sax bloke.

Later on we joined a queue that had already been queueing for an hour to see Jeff Mangum. It was quite a pleasant queue and the ale people were delivering ale, so we didn't mind queueing for another three quarters of an hour (while Jeff played inside) and eventually went in to catch the last three tracks of his set, including "Two-headed boy", which most of the crowd sang along to. Lovely atmosphere in there. Though apparently the real closing event was a secret gig later that night where Jeff plus the Elephant 6 crew, Sun Ra Arkestra and assorted others had a big old jam session...

Syndicated 2012-03-12 12:29:03 from Dan Stowell

I have switched to Bing for search

I have switched my browser's search engine from Google to Bing. I never thought it would come to this!

Years ago I migrated away from Microsoft, disliking what they were doing with their dominance. It feels odd to be deliberately turning to Microsoft now, for a very similar reason.

Google has unified what it does with your personal data, meaning that your emails, video views, web searches etc can all be smushed together for analytic/advertising purposes. I always resented Google's move into the "social" web - the best things they make are NONsocial, tools that I use as tools - web search being the main example. Google Scholar was a very important tool in my PhD thesis. Gmail is the best email interface I've used.

I don't want these tools mixed up with the social sharey web, and it made me uncomfortable when Google "+1" buttons appeared in all the search results. This change in what they do with my personal data makes it even worse. My distaste isn't really about worries over what they'll do - but relying on just one company for so many essential tools is a growing problem, definitely unhealthy, and I just want some of my web activity to be completely asocial and not built into the personal profile Google is building of me.

If you've not used Bing search before (I hadn't really), you might find it a bit funny how many of the Google search options are closely mirrored in the Bing interface - kinda comedy, but hopefully it'll make the transition easier. So far, the two things I really miss in Bing are filtering by recency (e.g. results from the past week) and scholar search. There's this thing called Microsoft Academic Search but it doesn't have as much content (I searched for "beatboxing" and most of my own research ain't in there - bah!).

But if I want to reduce my dependence on Google, I can't get rid of Youtube - that's where all the videos are - nor Gmail - that would be a massive wrench, changing email address. And I can't stop people giving me Google Maps links. So, even though search is what made Google what it is, weirdly it's the one thing of theirs I can cut out.

The nice thing about my having deliberately dropped Microsoft is that I don't depend on them for any service or system, and they don't have any data about me. So their Bing search can be exactly what I want it to be - a neutral, unpersonalised web search tool.

Syndicated 2012-03-04 08:12:17 (Updated 2012-03-04 08:15:50) from Dan Stowell

Simple PHP content negotiation for HTML vs TTL

I'm writing a simple PHP web application. Normally it outputs HTML, but for linked data purposes I'd also like to be able to output RDF in Turtle format, from the SAME URL. This is achieved using something called content negotiation.

Often you can configure your webserver to do the right thing, but in this case I don't have access to the Apache config. I haven't been able to find a simple bit of PHP code for the content negotiation (or at least, not one that behaves correctly) so here's my attempt.

Note that this is NOT a complete, flexible content-negotiation implementation. It only handles the case where I can output HTML or TTL and nothing else:

    // Content negotiation, here only choosing whether to output HTML or TTL
    preg_match_all('|([\w+*.-]+/[\w+*.-]+)(;q=([\d.]+))?|', $_SERVER['HTTP_ACCEPT'], $acceptables, PREG_SET_ORDER);
    $accept_html = 0;
    $accept_ttl  = 0;
    foreach($acceptables as $accarray){
            // q-value defaults to 1 if the client doesn't supply one
            $acclev = isset($accarray[3]) ? floatval($accarray[3]) : 1.0;
            switch($accarray[1]){
                    case 'text/html':
                    case 'application/xhtml+xml': // was 'html/xml', which isn't a real media type
                            $accept_html = max($accept_html, $acclev);
                            break;
                    case 'text/rdf+n3':
                    case 'application/turtle':
                    case 'application/rdf+n3':
                    case 'text/turtle':
                            $accept_ttl  = max($accept_ttl , $acclev);
                            break;
                    case '*/*':
                            $accept_html = max($accept_html, $acclev);
                            $accept_ttl  = max($accept_ttl , $acclev);
                            break;
            }
    }

    $negotiatesttl = $accept_ttl > $accept_html; // only output TTL if it's requested more strongly than HTML

    if($negotiatesttl){
        header('Content-Type: text/turtle');
        // output ttl
    }else{
        header('Content-Type: text/html; charset=utf-8');
        // output html
    }
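For example, a typical browser sends something like Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 - that gives $accept_html = 1 and $accept_ttl = 0.8 (via the */*), so it gets HTML; whereas a linked-data client sending Accept: text/turtle gets $accept_ttl = 1 and $accept_html = 0, so it gets Turtle.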

Here's hoping it works.

Syndicated 2012-02-28 10:26:44 (Updated 2012-02-28 10:29:53) from Dan Stowell

Roast chicken thighs with spring onion salsa and coconut rice

Chicken thighs - this recipe makes them lovely and sticky, with a great accompaniment. It's rare that I cook chicken thighs in a way that I like, so I'm particularly impressed by this one - we liked it a lot. Takes 1 hour, serves 2.

  • 3 chicken thighs
  • 1/2 cup white wine or pink wine
  • rice
  • 1 lemon
  • 2x2x2cm coconut block (approx)
  • 3 spring onions
  • 2 large tomatoes
  • 1 red chilli

Preheat the oven to 220°C. Put a tablespoon or two of oil in a roasting tin, and rub the chicken thighs in the oil to get it all over, then leave the chicken thighs skin-up. Put this in the oven. Cook for 45 minutes, turning the temperature down to 190°C after the first 15 minutes and pouring the wine over the thighs. Baste the chicken occasionally with the juices in the pan. After the full 45 minutes just turn the oven off and leave the chicken inside to rest.

Meanwhile, prepare the rice. Put the rice in a pan which has a tight-fitting lid, add the zest of 1/2 the lemon. Chop the coconut block finely and add it to the pan too. Put the pan on the heat, add just enough boiling water to cover plus a bit more, and put the lid on. Bring it all to the boil, stir, and then turn the heat right down to its lowest setting, to sit gently cooking with the lid on for 30 minutes. You can probably even turn the heat off, in the second half, to prevent burning/sticking.

Once the rice is underway, make the salsa. Rinse the spring onions, tomatoes and chilli. Chop the spring onions and tomatoes into small dice. Remove the seeds from the chilli, and chop the flesh finely. Put all of this into a bowl, and juice the lemon, then add the lemon juice to the bowl and stir all around. Let this sit and soak while the other things cook, so the lemon juice has a chance to soften things.

Syndicated 2012-02-28 03:00:57 from Dan Stowell

Perceptually-modelled audio analysis

This week I went to a research workshop in Plymouth called Making Sense of Sounds. It was all based around an EU project which aims to improve the state of the art in auditory models (i.e. models of what happens in between our ear and our consciousness, to turn a physical sound into an auditory perception) and also to use them to help computers and machines understand sound.

I won't blog the whole thing, just a few notes here. There was a lot of research on the streaming paradigm, and it's quite amazing how it's still possible to discover new facts about human hearing using such a simple sound. Basically, the sound is usually something like "bip boop bip, bip boop bip, bip boop bip", and the clever bit is that we can either hear this as a single stream or as two segregated streams (a bip stream and a boop stream), depending on the relative pitches and durations. It's an example of "bistable perception", just like famous optical illusions such as the Necker cube or the faces/vase thing. With modern EEG and fMRI brain scanning, this streaming paradigm reveals some interesting facts about how we hear sounds - for example, it seems that our auditory system does entertain both "versions" for a while, but this resolves to just one interpretation somewhere below conscious perception.


I was interested by Maria Chait's talk on change detection, and in conversation afterwards she pointed us to some recent research - see this 2010 paper by Scholl et al - which shows that humans have neurons which are able to detect note offsets, even though it's very well established that in behaviour we're very bad at noticing them - i.e. we often can't tell what happened when a sound stops, but it's usually pretty noticeable when a sound starts!

Those findings aren't completely incompatible, of course. It's plausible that in human evolution, sudden sounds were more important than sudden silences, even though both are informative.


Maneesh Sahani talked about two of his students' work. The one that was new to me was Phillip Herrmann's thesis on pitch perception, which took a really interesting approach - rather than using a spectral or autocorrelation method, they started from a generative model in which we assume there is some pitch rate generating an impulse train, some impulse response convolved with it, and also some Gaussian noise etc; this then goes into an auditory model before arriving at a representation about which we have to make inferences. They then did inference, applying this model to audio signals. The point is not whether this is an appropriate model for most sounds, just whether this assumption gets you far enough to do pitch perception in similar ways as humans do (with some of the attendant peculiarities).
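To make that concrete, here's a small Python sketch of the generative assumption as I understood it - the sample rate, impulse response and noise level are my own arbitrary choices, not from the thesis:

    import numpy as np

    # Generative model sketch: impulse train at the pitch rate, convolved
    # with an impulse response, plus Gaussian noise. Parameters are guesses.
    def generate_pitch_signal(f0, impulse_response, duration, sr=16000, noise_std=0.01):
        n = int(duration * sr)
        impulses = np.zeros(n)
        period = int(round(sr / f0))
        impulses[::period] = 1.0  # impulse train at the pitch rate
        clean = np.convolve(impulses, impulse_response)[:n]
        return clean + np.random.normal(0.0, noise_std, n)

    ir = np.exp(-40.0 * np.linspace(0.0, 1.0, 200))  # arbitrary decaying impulse response
    x = generate_pitch_signal(220.0, ir, duration=0.5)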

One particularly nice experiment they came up with is another kind of "bistable perception" experiment, where you have a train of impulses separated by 2 ms, and every second impulse is optionally attenuated by some amount. So if there's no attenuation, you have a 2 ms impulse train; if there's full attenuation, you have a 4 ms impulse train; somewhere in between, you're somewhere in between. If you play these sounds to humans, they report ambiguous pitch perception, sometimes detecting the higher octave, sometimes the lower, and this Herrmann/Sahani model apparently replicates the human data pretty well, in a way that is not reflected in autocorrelation models.
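That stimulus is simple to synthesise, by the way - here's a quick Python sketch (the sample rate and duration are my own choices):

    import numpy as np

    # Impulses every 2 ms, with every second impulse scaled by `atten`:
    # atten=1 gives a 2 ms train, atten=0 gives a 4 ms train, and values
    # in between give the ambiguous stimulus described above.
    def ambiguous_train(atten, duration=0.5, sr=44100, period_ms=2.0):
        n = int(duration * sr)
        step = int(round(sr * period_ms / 1000.0))
        x = np.zeros(n)
        x[::step] = 1.0
        x[step::2 * step] *= atten
        return x

    x = ambiguous_train(atten=0.5)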

Oh, also, over a diverse dataset, they apparently found a really clear square-root correlation between fundamental frequency and spectral centroid. (In other parts of the literature, it's not clear whether or not the two are correlated.) I'd like to see the data for this one - as I mentioned to Maneesh, there might be reasons to expect some data to do this by design (e.g. professional singers' voices). The point for Herrmann/Sahani is to see if the correlation exists in the data that might have "trained" our perception, so I'm not sure if things like professional singers should be included or not.

Maneesh Sahani also said at the start of his talk that Helmholtz (in the 19th century) came up with this idea of "perception as inference" - but then the electrical/computational signal-processing paradigm came along and everyone treated perception as processing. The modern Bayesian tendency, and its use to model perception, is a return to this "perception as inference". Is there anything that wasn't originally invented by Helmholtz?


Also Tom Walters' demo of his AIMC real-time perceptual model in C++ was nice, and it's code I'd like to make use of some time.

My own contribution, a poster about using chirplets to analyse birdsong, led to some interesting conversations. At least one person was sure I should be using auditory models instead of chirplets - which, given the context, I should have expected :)

Syndicated 2012-02-23 12:59:01 (Updated 2012-02-23 13:02:59) from Dan Stowell

Geomob: mapping mapping mapping

Geomob was interesting tonight. A couple of notes (for my own purposes really):

The Domesdaymap - taking the Domesday Book and putting it into a usable, searchable map - was great. The amazing thing about it is that, despite being one of the most important European surveys of pre-modern times, it wasn't turned into open data until one person discovered an academic's Access database and decided to make it into a usable service with an API and a CC licence. Good work!

Nestoria talked about their switch from Google Maps to OpenStreetMap, a tale which has been admirably blogged elsewhere and made a big splash. Apparently they use and really like a client-side map display library called Leaflet. They decided not to make their own tiles in the end, but despite that they said that TileMill for making yr own maps was fab, and everyone could and should use it for making all sorts of maps. Also, MapBox has some beautiful map renderings to look at.

"Mental Maps": two design students did some work warping OpenStreetMap data to fit people's mental maps of places. They applied it to the tube map too, and made a really lovely print of the result.

MapQuest gave some interesting detail about their server setup. Interesting for map/data/sysadmin nerds I mean, of course. They use a very homogeneous cluster system: each node is capable of rendering tiles, or pre-rendering routing, or whatever, and they allocate jobs according to demand using a "distributed queue" system; standard CDNs aren't so useful because with OpenStreetMap you can't be sure in advance how long the tiles should be cached; oh, and MapQuest uses different rendering "styles" for US, UK, and mainland Europe (and so on), because people in those countries have different expectations about how the map should look.

Syndicated 2012-02-16 17:52:13 from Dan Stowell

