Older blog entries for danstowell (starting at number 18)

How I made a nice map handout from OpenStreetMap

OpenStreetMap is a nice community-edited map of everything - and you can grab their data at any time. So in theory it should be the ideal thing to choose when you want to make a little map for an open-source conference or something like that.

For our event this year I made these map handouts. It took a while - quite tricky for a first-timer - but the results are nice vector PDF maps, with my own custom fonts, colour choices etc.

For anyone who fancies having a go, here's what I did:

  1. I followed the TileMill "30 minute tutorial" to install and set up TileMill on my Ubuntu laptop. It takes longer than 30 minutes - it's still a little bit tricky and there's a bit of a wait while it downloads a lump of data too.
  2. I started a new map project based on the example. I wanted to tweak it a bit - they use a CSS-like stylesheet language ("MSS") to specify what maps are supposed to look like, and it's nice that you can edit the stylesheets and see the changes immediately. However, I found it tricky to work out what to edit to have the effect I wanted. Here's what I managed to do:

    • I changed the font choice to match the visual style of our website. That bit is easy - find where there are some fonts specified, and put your preferred font at the FRONT of all the lists.
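
      For example, the stylesheets contain font lists looking something like the line below (a sketch - the exact property and font names vary by project, and "Gill Sans" here just stands in for your preferred font, which must be installed on your system):

          text-face-name: "Gill Sans", "DejaVu Sans Book", "unifont Medium";
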
    • I wanted to direct people to specific buildings, but the default style doesn't show building names. However, I noticed that it does show names for cemeteries... in labels.mss on line 306 there was

          #area_label[type='cemetery'][zoom>=10] {
      

      and I can add buildings to that:

          #area_label[type='building'][zoom>=10], 
          #area_label[type='cemetery'][zoom>=10] {
      
    • The underground train line was being painted on top of the buildings, which looks confusing and silly. To fix this I had to rearrange the layers listed in the Layers panel - drag the "buildings" layer higher up the list, above the "roads" ones.

  3. When I'd got the map looking pretty good, I exported it as an SVG image.
  4. Then I quit TileMill and started up Inkscape, a really nice vector graphics program, and loaded the SVG that I saved in the previous step.
  5. I edited the image to highlight specific items:
    • The neatest way to do this is to select all and put it all into a layer, then select the items you want to highlight and move them to a new layer above. Once they're in a separate layer, it's easier to use Inkscape's selection tools to select all these items and perform tweaks like thickening the line-style or darkening the fill colour.
    • Selecting a "word" on the map is not so easy because each letter is a separate "object", and so is the shadow underneath. If there's a single word or street-name you're working on, it's handy to select all the letters and group them into a group (Ctrl+G), so you can treat them as a single unit.
    • You can also add extra annotations of your own, of course. I had to add tube-station icons manually, cos I couldn't find any way of getting TileMill to show those "point-of-interest"-type icons. I think there's supposed to be a way to do it, but I couldn't work it out.
  6. The next job is to clip the map image - the map includes various objects trailing off to different distances, so it's not a neat rectangle. In Inkscape you can do a neat clipping like this:
    • Select all the map objects. If you've been doing as I described you'll need to use "Select all in all layers" (Ctrl+Shift+A).
    • Group them together (Ctrl+G).
    • Now use the rectangle tool to draw a rectangle which matches the clipping area you want to use.
    • Select the two items - the rectangle and the map-item-group - then right-click and choose "Set clip". Inkscape unites the two objects, using the rectangle to create a clipped version of the other.
  7. Now with your neatly-cropped rectangle map, you can draw things round the outside (e.g. put a title on).
  8. If you ever need to edit inside the map, Inkscape has an option for that - right-click and choose "Enter group" and you go inside the group, where you can edit things without disturbing the neat clipping etc.
  9. Once you're finished, you can export the final image as a PDF or suchlike.

Syndicated 2012-04-22 10:11:21 (Updated 2012-04-22 10:25:33) from Dan Stowell

Implementing the GM-PHD filter

I'm implementing the GM-PHD filter. (The what? The Gaussian mixture Probability Hypothesis Density filter, which is for tracking multiple objects.) I'm implementing it in Python, which is nice, but I'm not completely sure it's working as intended yet.
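
To give a flavour of the implementation, here's a simplified sketch of the core data structure (not my actual code - the names are mine): the filter's "intensity" is just a list of weighted Gaussian components.

    import numpy as np

    class GmphdComponent:
        """One weighted Gaussian in the PHD intensity (1D in my tests)."""
        def __init__(self, weight, mean, cov):
            self.weight = float(weight)  # expected number of targets this component explains
            self.mean = np.array(mean, dtype=float)
            self.cov = np.array(cov, dtype=float)

    # A handy property of the PHD: integrating the intensity gives the
    # expected number of targets - for a Gaussian mixture, the sum of weights:
    def expected_num_targets(intensity):
        return sum(comp.weight for comp in intensity)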

Here's a screenshot of progress so far. Look at the first four plots in this picture, which are:

  1. The true trajectory of two simulated objects moving in 1D over time.
  2. Observations received, with "clutter" and occasional missed detections.
  3. The "intensity" calculated by the GM-PHD filter. This is the core state variable of the filter's model.
  4. Filtered trajectories output from the PHD filter.

So what do you think? Good results?

Not sure. It's clearly got rid of lots of the clutter - good. In fact it's got rid of the majority of the noise, hooray hooray. But the clutter right close to the targets is still there, and it seems a bit mucky, in a kind of way that suggests it's not going to be easy to clear up.

And there's also a significant "cold start" problem - it takes up to about 20 frames for the filter to be convinced that there's anything there at all. That's no real surprise, since there's an underlying "birth" model which says that a target can spring into being at any point, but there's no model for "pre-existing" targets. There's nothing in the PHD and GM-PHD papers I've read that even mentions this, let alone accounts for it - I'm pretty sure we'll either need to initialise the state to account for this, or always do some kind of "warmup" before getting any results out of the filter. That's not great, especially when we might be tracking things that only have a short lifespan themselves.
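
If initialising the state turns out to be the answer, I imagine it'd be something like this (reusing the hypothetical GmphdComponent sketch above; the weights and covariances are made-up guesses that would need tuning):

    # Seed the intensity with broad, low-weight components covering the
    # observation space, so pre-existing targets aren't assumed away.
    initial_intensity = [GmphdComponent(weight=0.1, mean=[x], cov=[[400.0]])
                         for x in np.linspace(0.0, 100.0, 5)]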

One thing: this is a one-dimensional problem I'm testing it on. PHD filters are usually used for 2D or 3D problems - and maybe there needs to be enough entropy in the representation for clutter to be distinguished more clearly from signals. That would be a shame, since I'd like to use it on 1D things like spectrogram frames.

More tests needed. Any thoughts gratefully received, too.

Syndicated 2012-03-29 17:00:13 (Updated 2012-03-29 17:03:11) from Dan Stowell

All Tomorrow's Parties: Jeff Mangum

Just back from a fab All Tomorrow's Parties, this one curated by Jeff Mangum. As well as the bands, he curated quite an educational TV channel throughout the event - we got to learn about Chomsky, Zizek, the Bali islanders, oh and Monty Python on endless loop.

Some of the things I saw:

  • Elephant 6 Holiday Surprise - best thing about that was the ending, when they played a Sun Ra song and then started to process off the stage, led by the sousaphone player and the saw player (the saw player sticking his saw in the sousaphone and banging it!) - they led us outside singing the Sun Ra refrain, "This here, our invitation, we invite you, to our space world"...
  • Charlemagne Palestine played a wine glass nicely, but then when he settled into his long two-note piano tranceout it got really boring.
  • Joanna Newsom - quite amazing to see her play. That surprised me, I know her music but seeing her playing live, the intricacy of the harp and her twisty twindy vocals is kinda mesmerising. It's less interesting when she's playing the piano.
  • Matana Roberts and Seb Rochford did some delightful delicate free-jazz together. It's amazing watching Seb Rochford play, even when he isn't actually playing.
  • John Spencer Blues Explosion - amen to that.

That was all on the first day, fantastically enough. The best things about day two were:

  • Cream tea in town, with wortleberry jam, yum.
  • Flumes in the Butlin's swimming pool. The "space bowl" flume was brilliant. Word to the wise, if you're ever there...

Musically there wasn't much I planned to see on the second day. Two bands that were pretty new to me but that I was looking forward to were Demdike Stare and Yamantaka // Sonic Titan. Both of them were a little bit underwhelming - Demdike Stare is atmospheric and has good video, but I'm not sure it built up to much. Yamantaka were pretty good, especially their song "Queens", and they had some great costumery, with one of the singers looking like some big hair-creature out of a Studio Ghibli film.

Sunday we had a lovely beef roast, though I cocked up the gravy so we had none. Then music. The Magic Band were a massive disappointment, not a credit to Beefheart's legacy IMHO, just some noodley noodle. However, they were bad enough that we went next door for Olivia Tremor Control who were fantastic. Their mixture of straight indie-pop and "musique concrete"-like sonic experimentation is just brill, neither of the two components losing out to the other.

Sun Ra Arkestra were also great fun, some great jazzing. A bit more straightforward jazz than I might have expected, but with a notable appearance of a lovely electrical wind instrument, a buzzy little device played really well by the lead sax bloke.

Later on we joined a queue that had already been queueing for an hour to see Jeff Mangum. It was quite a pleasant queue and the ale people were delivering ale, so we didn't mind queueing for another three quarters of an hour (while Jeff played inside) and eventually went in to catch the last three tracks of his set, including "Two-headed boy" for which most of the crowd sang along. Lovely atmosphere in there. Though apparently the real closing event was a secret gig later that night where Jeff plus Elephant 6 crew, Sun Ra Arkestra and assorted others had a big old jam session...

Syndicated 2012-03-12 12:29:03 from Dan Stowell

I have switched to Bing for search

I have switched my browser's search engine from Google to Bing. I never thought it would come to this!

Years ago I migrated away from Microsoft, disliking what they were doing with their dominance. It feels odd to be deliberately turning to Microsoft now, for a very similar reason.

Google has unified what it does with your personal data, meaning that your emails, video views, web searches etc can all be merged together for analytic/advertising purposes. I always resented Google's move into the "social" web - the best things that they make are NONsocial, tools that I use as tools - the web search being the main example. Google Scholar was a very important tool in my PhD thesis. Gmail is the best email interface I've used.

I don't want these tools mixed up with the social sharey web, and it made me uncomfortable when Google "+1" buttons appeared in all the search results. This change in what they do with my personal data makes it even worse. My distaste isn't really about worries over what they'll do - but relying on just one company for so many essential tools is a growing problem, definitely unhealthy, and I just want some of my web activity to be completely asocial and not built into the personal profile Google is building of me.

If you've not used Bing search before (I hadn't really), you might find it a bit funny how many of the Google search options are closely mirrored in the Bing interface - kinda comedy, but hopefully it'll make the transition easier. So far, the two things I really miss in Bing search are filtering to recent results (e.g. from the past week) and scholar search. There's this thing called Microsoft Academic Search but it doesn't have as much content (I searched for "beatboxing" and most of my own research ain't in there - bah!).

But if I want to reduce my dependence on Google, I can't get rid of YouTube - that's where all the videos are - nor Gmail, since changing email address would be a massive wrench. And I can't stop people giving me Google Maps links. So, even though search is what made Google what it is, weirdly it's the one thing of theirs I can cut out.

The nice thing about my having deliberately dropped Microsoft is that I don't depend on them for any service or system, and they don't have any data about me. So their Bing search can be exactly what I want it to be - a neutral, unpersonalised web search tool.

Syndicated 2012-03-04 08:12:17 (Updated 2012-03-04 08:15:50) from Dan Stowell

Simple PHP content negotiation for HTML vs TTL

I'm writing a simple PHP web application. Normally it outputs HTML, but for linked data purposes I'd also like to be able to output RDF in Turtle format, from the SAME URL. This is achieved using something called content negotiation.

Often you can configure your webserver to do the right thing, but in this case I don't have access to the Apache config. I haven't been able to find a simple bit of PHP code for the content negotiation (or at least, not one that behaves correctly) so here's my attempt.

Note that this is NOT complete, flexible content negotiation - it only handles the case where I can output HTML or TTL and nothing else:

// Content negotiation: here we only choose whether to output HTML or TTL.
// Parse the Accept header into (mimetype, q-value) pairs:
preg_match_all('|([\w+*]+/[\w+*]+)(;\s*q=([\d.]+))?|', $_SERVER['HTTP_ACCEPT'], $acceptables, PREG_SET_ORDER);
$accept_html = 0;
$accept_ttl  = 0;
foreach($acceptables as $accarray){
    // A type listed without a q-value gets the default quality of 1
    $acclev = isset($accarray[3]) ? floatval($accarray[3]) : 1.0;
    switch($accarray[1]){
        case 'text/html':
        case 'application/xhtml+xml':
            $accept_html = max($accept_html, $acclev);
            break;
        case 'text/rdf+n3':
        case 'application/turtle':
        case 'application/rdf+n3':
        case 'text/turtle':
            $accept_ttl  = max($accept_ttl , $acclev);
            break;
        case '*/*':
            $accept_html = max($accept_html, $acclev);
            $accept_ttl  = max($accept_ttl , $acclev);
            break;
    }
}

$negotiatesttl = $accept_ttl > $accept_html; // only output ttl if it's rated higher than html

if($negotiatesttl){
    header('Content-Type: text/turtle');
    // output ttl
}else{
    header('Content-Type: text/html');
    // output html
}
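
One quick way to test it is to request the same URL with different Accept headers - a sketch (the URL is a placeholder):

    $url = 'http://example.org/myapp/'; // placeholder - wherever this script lives
    foreach(array('text/html', 'text/turtle') as $mime){
        $context = stream_context_create(array('http' => array('header' => "Accept: $mime\r\n")));
        echo "--- Asking for $mime ---\n";
        echo file_get_contents($url, false, $context);
    }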

Here's hoping it works.

Syndicated 2012-02-28 10:26:44 (Updated 2012-02-28 10:29:53) from Dan Stowell

Roast chicken thighs with spring onion salsa and coconut rice

Chicken thighs - this recipe makes them lovely and sticky, with a great accompaniment. It's rare that I cook chicken thighs in a way that I like, so I'm particularly impressed by this one - we liked it a lot. Takes 1 hour, serves 2.

  • 3 chicken thighs
  • 1/2 cup white wine or pink wine
  • rice
  • 1 lemon
  • 2x2x2cm coconut block (approx)
  • 3 spring onions
  • 2 large tomatoes
  • 1 red chilli

Preheat the oven to 220 C. Put a tablespoon or two of oil in a roasting tin, and rub the chicken thighs in the oil to get it all over, then leave the chicken thighs skin-up. Put this in the oven. Cook it for 45 minutes, turning the temperature down to 190 C after the first 15 minutes and pouring the wine over them. Baste the chicken occasionally with the juices in the pan. After the full 45 minutes just turn the oven off and leave the chicken inside to rest.

Meanwhile, prepare the rice. Put the rice in a pan which has a tight-fitting lid, add the zest of 1/2 the lemon. Chop the coconut block finely and add it to the pan too. Put the pan on the heat, add just enough boiling water to cover plus a bit more, and put the lid on. Bring it all to the boil, stir, and then turn the heat right down to its lowest setting, to sit gently cooking with the lid on for 30 minutes. You can probably even turn the heat off, in the second half, to prevent burning/sticking.

Once the rice is underway, make the salsa. Rinse the spring onions, tomatoes and chilli. Chop the spring onions and tomatoes into small dice. Remove the seeds from the chilli, and chop the flesh finely. Put all of this into a bowl, and juice the lemon, then add the lemon juice to the bowl and stir all around. Let this sit and soak while the other things cook, so the lemon juice has a chance to soften things.

Syndicated 2012-02-28 03:00:57 from Dan Stowell

Perceptually-modelled audio analysis

This week I went to a research workshop in Plymouth called Making Sense of Sounds. It was all based around an EU project which aims to improve the state of the art in auditory models (i.e. models of what happens in between our ear and our consciousness, to turn a physical sound into an auditory perception) and also use them to help computers and machines to understand sound.

I won't blog the whole thing but just a few notes here. There was a lot of research on the streaming paradigm, and it's quite amazing how it's still possible to discover new facts about human hearing using such a simple sound. Basically, the sound is usually something like "bip boop bip, bip boop bip, bip boop bip", and the clever bit is that we can either hear this as a single stream or as two segregated streams (a bip stream and a boop stream), depending on the relative pitches and durations. It's an example of "bistable perception", just like famous optical illusions such as the Necker cube or the faces/vase thing. With modern EEG and fMRI brain scanning, this streaming paradigm reveals some interesting facts about how we hear sounds - for example, it seems that our auditory system does entertain both "versions" at some point, but this resolves to just one choice at some point below conscious perception.


I was interested by Maria Chait's talk on change detection, and in conversation afterwards she pointed us to some recent research - see this 2010 paper by Scholl et al - which shows that humans have neurons which are able to detect note offsets, even though it's very well established that in behaviour we're very bad at noticing them - i.e. we often can't tell what happened when a sound stops, but it's usually pretty noticeable when a sound starts!

Those findings aren't completely incompatible, of course. It's plausible that in human evolution, sudden sounds were more important than sudden silences, even though both are informative.


Maneesh Sahani talked about two of his students' work. The one that was new to me was Phillip Herrmann's thesis on pitch perception, and it was a really interesting approach - rather than using a spectral or autocorrelation method, they started from a generative model in which we assume there is some pitch rate generating an impulse train, with some impulse response convolved with it and some Gaussian noise added, and then this goes into an auditory model before arriving at a representation about which we have to make inferences. They then did inference by applying this model to audio signals. The point is not whether this is an appropriate model for most sounds, just whether this assumption gets you far enough to do pitch perception in similar ways to humans (with some of the attendant peculiarities).

One particularly nice experiment they came up with is another kind of "bistable perception" experiment where you have a train of impulses separated by 2ms, and every second impulse is optionally attenuated by some amount. So if there's no attenuation, you have a 2ms impulse train; if there's full attenuation, you have a 4ms impulse train; somewhere in between, you're somewhere in between. If you play these sounds to humans, they report ambiguous pitch perception, sometimes detecting the higher octave, sometimes the lower, and this Herrmann/Sahani model apparently replicates the human data pretty well, in a way that is not reflected in autocorrelation models.
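
Out of curiosity, here's what I understand the stimulus to be, as a quick numpy sketch (the sample rate and attenuation value are my own choices):

    import numpy as np

    fs = 44100                       # sample rate, Hz
    atten = 0.5                      # 1.0 gives a pure 2ms train; 0.0 a pure 4ms train
    period = int(round(fs * 0.002))  # 2ms between successive impulses

    sig = np.zeros(fs)               # one second of signal
    sig[0::2*period] = 1.0           # impulses at 0ms, 4ms, 8ms, ...
    sig[period::2*period] = atten    # the in-between impulses at 2ms, 6ms, ...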

Oh, also, over a diverse dataset, they apparently found a really clear square-root correlation between fundamental frequency and spectral centroid. (In other parts of the literature, it's not clear whether or not the two are correlated.) I'd like to see the data for this one - as I mentioned to Maneesh, there might be reasons to expect some data to do this by design (e.g. professional singers' voices). The point for Herrmann/Sahani is to see if the correlation exists in the data that might have "trained" our perception, so I'm not sure if things like professional singers should be included or not.

Maneesh Sahani also said at the start of his talk that Helmholtz (in the 19th century) came up with this idea of "perception as inference" - but then the electrical/computational signal-processing paradigm came along and everyone treated perception as processing. The modern Bayesian tendency, and its use to model perception, is a return to this "perception as inference". Is there anything that wasn't originally invented by Helmholtz?


Also Tom Walters' demo of his AIMC real-time perceptual model in C++ was nice, and it's code I'd like to make use of some time.

My own contribution, a poster about using chirplets to analyse birdsong, led to some interesting conversations. At least one person was sure I should be using auditory models instead of chirplets - which, given the context, I should have expected :)

Syndicated 2012-02-23 12:59:01 (Updated 2012-02-23 13:02:59) from Dan Stowell

Geomob: mapping mapping mapping

Geomob was interesting tonight. A couple of notes (for my own purposes really):

The Domesdaymap, which takes the Domesday Book and puts it into a usable, searchable map, was great. The amazing thing about it is that, despite being one of the most important European surveys of pre-modern times, it wasn't turned into open data until one person discovered an academic's Access database and decided to make it into a usable service with an API and a CC licence. Good work!

Nestoria talked about their switch from Google Maps to OpenStreetMap, a tale which has been admirably blogged elsewhere and made a big splash. Apparently they use and really like a rendering engine (client-side) called Leaflet. They decided not to make their own tiles in the end, but despite that they said that TileMill for making yr own maps was fab, and everyone could and should use it for making all sorts of maps. Also, MapBox has some beautiful map renderings to look at.

"Mental Maps": two design students did some work warping OpenStreetMap data to fit people's mental maps of places. They applied it to the tube map too, and made a really lovely print of the result.

MapQuest gave some interesting detail about their server setup. Interesting for map/data/sysadmin nerds I mean, of course. They use a very homogeneous cluster system: each node is capable of rendering tiles, or pre-rendering routing, or whatever, and they allocate jobs according to demand using a "distributed queue" system; standard CDNs aren't so useful because with OpenStreetMap you can't be sure in advance how long the tiles should be cached; oh, and MapQuest uses different rendering "styles" for US, UK, and mainland Europe (and so on), because people in those countries have different expectations about how the map should look.

Syndicated 2012-02-16 17:52:13 from Dan Stowell

MCLD vs Kiti le Step, out now

Chordpunch has put online a video and free download of my performance last year with Kiti le Step. Check it out, here's the video:

Syndicated 2012-02-09 07:03:24 from Dan Stowell

What is a musical work? What is a performance of it?

Yesterday I went to a philosophy talk by Margaret Moore, on timbre and the ontology of music. I'd better say up front that I'm not a philosopher and I don't know the literature she was referring to. But I found it a frustrating talk - she was considering a position she calls "timbral sonicism" attributed to Julian Dodd, and asserting what she held to be problems with adding timbre (as well as pitch and duration) into the account of what a musical work can be, in terms of it being a normative description which a particular performance might or might not match.

I thought her argument had a couple of weird components in it: the dodgy assertion that there can never be a synthesiser whose sound was indistinguishable from that of a real instrument (unless the synth actually was functionally equivalent), and the requirement that a performance would have to match all dimensions of timbre (rather than just, say, the brightness dimension) in a performance before Dodd's inclusion of timbre as normative could make sense. But those problems are irrelevant for me because this "timbral sonicist" view is part of the "aesthetic empiricist" approach in which you have to claim that our evaluation of a music performance must only be done in terms of the sonic content of that performance. This is so clearly misguided that I don't see the point talking about it: this is the main reason I was frustrated. Music performances are so many and varied, and many other criteria come into our assessments - not only assessments of whether it was a good performance, but more importantly of whether it was indeed a performance of a particular work. We judge based on our own background and cultural expectations, we judge based upon what we see, on what we believe (e.g. whether the performers are humans or holograms).

But there are some interesting things in this philosophical consideration of the ontology of music, and it led me to think, so let me address one issue in my own way (with an uninformed disregard for any literature on the topic!):

This question is one that was floating about: what is a musical work? And, more pertinently, how do we judge whether a particular performance is indeed an instantiation of a particular musical work?

For me there are two really important components to answering this:

  1. The concept of "a musical work" only has meaning in some musical traditions, e.g. Western classical or Western pop. In other traditions (e.g. free improv, raga, and I think gamelan) the abstract structures that give form to a musical act have different granularities, and are brought to bear in different combinations.

  2. As Moore said, a musical work can be described as an abstract "sound structure" or a "normative type". The latter is Moore's preferred term, and I think she draws some distinction between the two, though I can't be sure what the exact differences are. I think the idea of a musical work as a normative type is a useful one, and it reminds me strongly of the idea of an abstract class or abstract type in object-oriented programming: a composer might specify a particular series of notes, for example, and not bother to specify every note's timbre, or not bother to specify which instrument must be used, so we consider it an incomplete specification. The specification is fuzzy as well as incomplete: a composer might specify "getting faster" but not exactly how much.
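
To push the programming analogy, here's a toy sketch (the names and fields are mine, purely illustrative):

    class MusicalWork:
        """A bundle of normative constraints; whatever is not specified
        is left free for the performer, like an unimplemented method."""
        def __init__(self, notes, tempo_hint=None, instrument=None, timbre=None):
            self.notes = notes            # specified: the note sequence
            self.tempo_hint = tempo_hint  # fuzzily specified, e.g. "getting faster"
            self.instrument = instrument  # often left unspecified (None)
            self.timbre = timbre          # ditto

    work = MusicalWork(notes=["C4", "E4", "G4"], tempo_hint="getting faster")
    # A performance supplying its own instrument and timbre can still
    # count as an instantiation of this work.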

So in my way of thinking, putting these two points together, a musical work is not special: other abstract things that can be instantiated in a performance (genres, cliches, keys) are the same kind of normative type, and they don't have to sit in a hierarchical relationship to each other. Musical works don't have special status in general, but are a bundle of normative constraints which have a particular granularity that we are used to in Western music.

To say a musical performance is an instance of a particular musical work, then, we check if the constraints are satisfied. We'd need to allow for errors (a few constraints not met, a few constraints sort-of-met) - our tolerance depends on our expectations (maybe we tolerate timbre deviations more readily than pitch deviations, in a particular tradition; maybe we tolerate wider deviations in a school band than a professional orchestra). Criteria should also depend on context in the form of the background corpus - are enough constraints met that we can positively say this is a performance of work A and not of another work B?

But again, to describe it as work A vs work B is only really relevant in the Western idea of a "musical work", in which the piece (e.g. the sequence of notes) is so tightly specified that it's generally only ever a realisation of one work. In other situations, a performer might simultaneously be performing two traditional Irish tunes, woven in and out of each other, and that's the way these tunes are expected to be treated: the result is not a bastardised new work but a simultaneous realisation of two known normative types.

I must also state explicitly that I don't believe for a second that such normative types must only ever include acoustic or psychoacoustic properties (which is the line Moore was sticking to in her talk - whether to criticise it from within, or whether she believes it, I don't know). In some traditions it may be explicit or implicit that a work can only be played on a piano and not on a synthesiser: that's a constraint about the means of production, not about the sound that is produced. Our choice of how strongly to attend to that part of the specification affects our judgment of whether a particular performance counts as an instantiation of a particular work. But there is no a priori way to know what balance of judgments is correct: constraints are always fuzzy (was that definitely a C#, or was it slightly flat?) and pretty much any normative description of musical structure is under-specified.

In this view, pitch, timbre, rhythm, duration, instrumentation, lyrics, and potentially other stuff such as the performer's clothing all have the same status: they are examples of things that in the Western tradition are specified to a greater or lesser extent at the level of a "musical work". (Note that there's not much limit to what might be specified: in raga, the time of day is specified, though that idea might be a surprise to many Western listeners.) And musical works have the same status as genres, cliches, motifs etc, as bundles of constraints which I hope fit Moore's term "normative types". These constraints are brought to bear in what a performer chooses to do in a given performance, and also brought to bear by observers in deciding if it really was "a good/faithful rendition of the piece" or "a trad jazz show".

So is there a use for this? I can't speak for the philosophers, but in Music Information Retrieval I'm reminded of the task of "cover song identification", i.e. determining automatically if a recording is an instantiation of a particular piece (which might be represented as score, or might be represented as a reference recording). All too often, this task is reduced depressingly quickly to the question of whether the melody or chord sequence matches sufficiently. This is an impoverished idea of the "cover song" and fails badly for many widespread genres - an obvious one is hip-hop, but also much club music.

If it were possible, I'd like to imagine a system which does something like "cover song identification" by identifying from a wide number of potential dimensions the specific constraints that a musical work represents, over and above the constraints of any assumed background such as genre or common corpus of known works. It would then use these constraints to identify matching instances. In order to do this usefully, it would need to identify enough constraints that distinguish a work from other candidate works, but would need to leave enough dimensions free (or loosely specified) to allow interpretative variation. What can be held fixed, and what can be allowed to vary, clearly depends on musical tradition, so the context for such an inference would need to be aware not just of a corpus of musical work but probably some cultural parameters that couldn't be inferred directly from audio, no matter how much audio is available.

Syndicated 2012-02-09 04:36:07 (Updated 2012-02-09 07:00:37) from Dan Stowell
