Older blog entries for danstowell (starting at number 94)

Paris climate agreements (COP 21), sustainable energy and Britain

I'm happy that the Paris climate-change discussions seem to have had a positive outcome. Some telling quotes about it, with links to articles covering the Paris outcomes in more detail:

"This is an exciting moment in history. The debate is over and the vision of the future is low carbon." (New Scientist)

"By comparison to what it could have been, it’s a miracle. By comparison to what it should have been, it’s a disaster." (George Monbiot in The Guardian)

"The climate deal is at once both historic, important – and inadequate." (Simon Lewis in The Conversation)

and here's an analysis by CarbonBrief

An interesting aspect is the way countries have made commitments, and the agreement reifies a specific global target, while acknowledging that the countries' current commitments cannot actually meet that goal. Countries have to get together again in a few years to check on progress and hopefully to extend the ambition of their commitments, so that they eventually meet the overall target. That might sound like a cop-out but actually it strikes me as good politics/psychology. (However, I'm no expert. At least one observer, James Hansen, thinks it's all hot air without serious action on carbon taxation.)

I'd like to read about the UK's role in the negotiations, especially because the mind boggles at how they could have had much to say about reducing climate change while the current government has deliberately derailed the UK's burgeoning renewable energy industries. (The same goes for community energy schemes.) To be clear, the problem with what they did is not the fact of reducing subsidies - those were already scheduled to be reduced gradually - but changing the plan and cutting them suddenly, thus creating business uncertainty in the sector and making it a risky one for investors in the medium term.

Renewable energy technologies are getting close to parity with fossil fuel generation, i.e. reaching a tipping point where people start to invest in them for simple financial reasons rather than altruism, and that could be the start of a really big acceleration. According to Simon Lewis (see above) the Paris agreement will help to accelerate the technologies' maturity, efficiency and profitability. I'd like to see British engineering play its part in this, and if the current UK government could only see which way the wind is blowing (ha!) and help British engineering to do this, that would be just great.

If you're interested in the technology/engineering/IT side of all this, here are two excellent things to read, which give lots of really concrete ideas:

Syndicated 2015-12-13 15:28:09 (Updated 2015-12-13 15:29:25) from Dan Stowell

Tracking fast frequency modulation (FM) in audio signals - a list of methods

When you work with birdsong you encounter a lot of rapid frequency modulation (FM), much more than in speech or music. This is because songbirds have evolved specifically to be good at it: as producers they have muscles specially adapted for rapid FM (Goller and Riede 2012), and as listeners they're perceptually and behaviourally sensitive to well-executed FM (e.g. Vehrencamp et al 2013).

Standard methods for analysing sound signals - spectrograms (or Fourier transforms in general) or filterbanks - assume that the signal is locally stationary, which means that when you consider a single small window of time, the statistics of the process are unchanging across the length of the window. For music and speech we can use a window size such as 10 milliseconds, and the signal evolves slowly enough that our assumption is OK. For birdsong, it often isn't, and you can see the consequences when a nice sharp chirp comes out in a spectrogram as a blurry smudge across many pixels of the image.
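
To make the stationarity point concrete, here's a minimal sketch in Python (using numpy and scipy; an illustration I'm adding here, not code from any of the papers mentioned). A 50 ms chirp sweeping from 2 kHz to 8 kHz moves through a lot of frequency within a single analysis window, and the longer the window, the worse the smear:

    import numpy as np
    from scipy.signal import spectrogram

    sr = 44100
    t = np.arange(0, 0.05, 1 / sr)                      # 50 ms of samples
    x = np.sin(2 * np.pi * (2000 * t + 60000 * t**2))   # chirp: 2 kHz -> 8 kHz

    for nperseg in (128, 1024):                         # ~3 ms vs ~23 ms windows
        f, frames, S = spectrogram(x, fs=sr, nperseg=nperseg, noverlap=nperseg // 2)
        mid = S[:, S.shape[1] // 2]                     # spectrum of the middle frame
        spread = np.sum(mid > 0.5 * mid.max()) * (f[1] - f[0])
        print(f"{1000 * nperseg / sr:5.1f} ms window: energy smeared over ~{spread:.0f} Hz")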

So, to analyse birdsong, we'd like to analyse our signal using representations that account for nonstationarity. Lots of these representations exist. How can we choose?

If you're impatient, just scroll down to the Conclusions at the bottom of this blog. But to start off, let's state the requirements. We'd like to take an audio signal and convert it into a representation that:

  • Characterises FM compactly - i.e. FM signals as well as fixed-pitch signals have most of their energy represented in a similar small number of coefficients;
  • Handles multiple overlapping sources - since we often deal with recordings having multiple birds;
  • Copes with discontinuity of the frequency tracks - since not only do songbirds make fast brief audio gestures, but also, unlike us they have two sets of vocal folds which they can alternate between - so if a signal is a collage of chirpy fragments rather than a continuously-evolving pitch, we want to be able to reflect that;
  • Ideally is fairly efficient to calculate - simply because we often want to apply calculations at big data scales;
  • Does the transformation need to be invertible? (i.e. do we need a direct method to resynthesise a signal, if all we know is the transformed representation?) Depends. If we're interested in modifying and resynthesising the sounds then yes. But I'm primarily interested in extracting useful information, for which purposes, no.

Last year we published an empirical comparison of four FM methods (Stowell and Plumbley 2014). The big surprise from that was that the dumbest method was the best-performing for our purposes. But I've encountered a few different methods, including a couple that I learnt about very recently, so here's a list of methods for reference. This list is not exhaustive - my aim is to list an example of each paradigm, and only for those paradigms that might be particularly relevant to audio, in particular bird sounds.

  • Let's start with the stupid method: take a spectrogram, then at each time-point find which frequency has the most energy. From this list of peaks, draw a straight line from each peak to the one immediately following. That set of discontinuous straight lines is your representation. It's a bit chirplet-like in that it expresses each moment as a frequency and a rate-of-change of frequency, but any signal processing researcher will tell you not to do this: in principle it's not very robust, and it's not even guaranteed to find peaks that correspond to the actual fundamental frequency. In our 2014 paper we tested this as a baseline method, and... it turned out to be surprisingly robust and useful for classification! It's also extremely fast to compute, which is handy for big-data analysis. However, note that it doesn't work with polyphonic (multi-source) audio at all, and I don't expect it to make any sense for analysing a sound scene in detail. (There's a minimal sketch of this method just after this list.)
  • Chirplets. STFT analysis assumes the signal is composed of little packets, and each packet contains a sine-wave with a fixed frequency. Chirplet analysis generalises that to assume that each packet is a sine-wave with parametrically varying frequency (you can choose linearly-varying, quadratically-varying, etc). See chirplets on wikipedia for a quick intro. There are different ways to turn the concept of a chirplet into an analysis method. Here are some applied to birds:
  • Filter diagonalisation method - an interesting method from quantum mechanics, FDM models a chunk of signal as a sum of purely exponentially decaying sinusoids. Our PhD student Luwei Yang recently applied this to tracking vibrato in string instruments. I think this is the first use of FDM for audio. It's not been explored much - I believe it satisfies most of the requirements I stated above, but I've no idea of its behaviour in practice.
  • Subspace-based methods such as ESPRIT. See for example this ESPRIT paper by Badeau et al. These are a class of sinusoidal tracking techniques: they analyse a signal by making use of an assumed continuity from one frame to the next. Unfortunately, that's a problem for birdsong analysis. Roland Badeau tested a birdsong recording for me and found that the very fast FM was a fatal problem for this type of method: it simply needs to be able to rely on relatively smooth continuity of the pitch tracks in order to give strong tracking results.
  • Fan chirp transform (Weruaga and Kepesi 2007) - when you take the FFT of a signal, we might say you analyse it as a series of "horizontal lines" in the time-frequency plane. The fan chirp transform tilts all these lines at the same time: imagine the lines, instead of being horizontal, all converge on a single vanishing point in the distance. So it should be particularly good for analysing harmonic signals that involve pitch modulation. Note that the angles are all locked together, so it's best for monophonic-but-harmonic signals, not polyphonic signals. My PhD student Veronica Morfi, before she joined us, extended the fan-chirp model to non-linear FM curves too: Morfi et al 2015.
  • Spectral reassignment methods. When you take the FFT of a signal, note that you analyse it as a series of equally-spaced packets on the frequency axis. The clever idea in spectral reassignment is to say: the packets weren't actually sitting on that grid, but we analysed them with the FFT anyway, so let's take the results and move each of those grid-points to the irregular location that best matches the evidence. You can extend this idea to allow each packet to be chirpy rather than fixed-frequency, so there you have it: run a simple FFT on a frame of audio, and then magically transform the results into a more-detailed version in which each bin can have its own AM and FM. This is good because it makes sense for polyphonic audio. (There's a quick off-the-shelf example after this list.)
    • A particular example of this is the distribution derivative method (code available here). I worked with Sasho Musevic a couple of years ago, who did his PhD on this method, and we found that it yielded rich, informative output for multiple birdsong tracking (Stowell et al 2013). Definitely promising. (In my later paper where I compared different FM methods, this gave a strong performance again. The main drawback, in that context, was simply that it took longer to compute than the methods I was comparing it against.) Also you have to make some peak-picking decisions, but that's doable. This summer, I did some work with Jordi Bonada and we saw the distribution derivative method getting very good, precise results on a dataset of chaffinch recordings.
  • There's lots of work on multi-pitch trackers, and this list would be incomplete if I didn't mention that general idea. Why not just apply a multi-pitch tracker to birdsong audio and then use the pitch curves that come out of it? Well, as with the ESPRIT method I mentioned above, the methods developed for speech and music tend to build upon assumptions such as relatively long, smooth curves, often with hard limits on the depth of FM that can exist.
  • How about feature learning? Rather than design a feature transform, we could simply feed a learning algorithm with a large amount of birdsong data and get it to learn what FM patterns exist in the audio. That's what we did last year in this paper on large-scale birdsong classification - that was based on spectrogram patches, but it definitely detected characteristic FM patterns. That representation didn't explicitly recover pitch tracks or individual chirplets, but there may be ways to develop things in that direction. In particular, there's quite a bit of effort in deep learning on "end-to-end" learning which asks the learning algorithm to find its own transformation from the raw audio data. The transformations learnt by such systems might themselves be useful representations for other tasks.
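
As promised above, here's a minimal sketch of the "stupid method" (Python with numpy/scipy; an illustration of the idea, not the exact code from our 2014 paper):

    import numpy as np
    from scipy.signal import spectrogram

    def peak_track(x, sr, nperseg=256):
        """Crude FM representation: the strongest frequency in each frame,
        joined up into (frequency, frequency-slope) pairs - chirplet-ish."""
        f, t, S = spectrogram(x, fs=sr, nperseg=nperseg, noverlap=nperseg // 2)
        peak_freqs = f[np.argmax(S, axis=0)]         # loudest bin per frame
        slopes = np.diff(peak_freqs) / np.diff(t)    # slope in Hz per second
        return peak_freqs, slopes

In a real pipeline you'd also want to gate out low-energy frames, so that silence doesn't contribute junk peaks.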

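If you want to try a reassignment-type analysis off the shelf, recent versions of librosa include a reassigned spectrogram. (Note: this is plain time-frequency reassignment, a close relative of, but not the same thing as, the distribution derivative method.)

    import numpy as np
    import librosa

    sr = 22050
    y = librosa.chirp(fmin=2000, fmax=8000, sr=sr, duration=0.05)   # a fast chirp
    freqs, times, mags = librosa.reassigned_spectrogram(y=y, sr=sr, n_fft=256)
    strong = mags > np.median(mags)            # keep only the energetic bins
    print(np.round(freqs[strong][:10]))        # refined per-bin frequencies (Hz)
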
Conclusions

So.... It's too soon to have conclusions about the best signal representations for FM in birdsong. But out of this list, the distribution derivative method is the main "off-the-shelf" tool that I'd suggest for high-resolution bird FM analysis (code available here), while feature-learning and filter diagonalisation are the approaches that I'd like to see more research on.

At the same time, I should also emphasise that machine learning methods don't need a nice clean understandable representation as their input. Even if a spectrogram turns birdsong into a blur when you look at it, that doesn't necessarily mean it shouldn't be used as the input to a classifier. Machine learning often has different requirements than the human eye.

(You might think I'm ignoring the famous rule "garbage in, garbage out" when I say a classifier might work fine with blurry data - well, yes and no. A spectrogram contains a lot of high-dimensional information, so it's rich enough that the crucial information can still be embedded in there. Even the "stupid method" I mentioned, which throws away so much information, preserves something of the important aspects of the sound signal. And modern classifiers work well with rich, high-dimensional data.)

But if you're trying to do something specific such as clearly characterise the rates of FM used by a particular bird species, a good representation will help you a lot.

Syndicated 2015-12-08 09:34:47 (Updated 2015-12-08 09:49:35) from Dan Stowell

Dry-fried paneer

This is my approximation of the lovely dry-fried paneer served at Tayyabs, the famous Punjabi Indian place in East London. These amounts are for 1 as a main, or more as a starter. Takes about ten minutes:

  • 200g paneer, cut into bite-size cubes
  • 1 tbsp curry powder
  • 1 tsp ground cumin
  • 1/2 an onion, sliced finely
  • 1 red chilli, sliced
  • 1/2 tsp cumin seeds (optional)
  • a squeeze of lemon juice
  • 1 tsp garam masala (optional)
  • A few chives (optional)

First put the cubed paneer into a bowl, add the curry powder and cumin and toss to get an even coating.

Get a frying pan nice and hot, with about 1 tbsp of veg oil in it. Add the onion and chilli (and cumin seeds if using). You want the onion to end up crispy, so slice it finely and keep it separated (no big lumps), get the oil hot, and give the onion plenty of space in the pan. Fry it hot for about 4 minutes.

Add the paneer to the pan, along with any spice left in the bowl. Shuffle it all around - it's time to get the paneer browning too. It'll take maybe another 4 minutes, not too long. Stir it now and again: it'll get nice and brown on the sides. There's no need for a very even colour all over, but do turn it all around a couple of times.

Near the end, e.g. with 30 seconds to go, add the squeeze of lemon juice to the pan, and stir around. You might also like to sprinkle some garam masala into the pan too.

Serve the paneer with chives sprinkled over the top. It's good with some bread (e.g. naan or roti) and salad, or maybe alongside other Indian dishes.

Syndicated 2015-11-26 14:27:06 from Dan Stowell

Reading list: excellent papers for birdsong and machine learning

I'm happy to say I'm now supervising two PhD students, Pablo and Veronica. Veronica is working on my project all about birdsong and machine learning - so I've got some notes here about recommended reading for someone starting on this topic. It's a niche topic but it's fascinating: sound in general is fascinating, and birdsong in particular is full of many mysteries, and it's amazing to explore these mysteries through the craft of trying to get machines to understand things on our behalf.

If you're thinking of starting in this area, you need to get acquainted with: (a) birds and bird sounds; (b) sound/audio and signal processing; (c) machine learning methods. You don't need to be expert in all of those - a little naivete can go a long way!

But here are some recommended reads. I don't want to give a big exhaustive bibliography of everything that's relevant. Instead, some choice reading that I have selected because I think it satisfies all of these criteria: each paper is readable, is relevant, and is representative of a different idea/method that I think you should know. They're all journal papers, which is good because they're quite short and focused, but if you want a more complete intro I'll mention some textbooks at the end.

  • Briggs et al (2012) "Acoustic classification of multiple simultaneous bird species: A multi-instance multi-label approach"

    • This paper describes quite a complex method but it has various interesting aspects, such as how they detect individual bird sounds and how they modify the classifier so that it handles multiple simultaneous birds. To my mind this is one of the first papers that really gave the task of bird sound classification a thorough treatment using modern machine learning.
  • Lasseck (2014) "Large-scale identification of birds in audio recordings: Notes on the winning solution of the LifeCLEF 2014 Bird Task"

    • A clear description of one of the modern cross-correlation classifiers. Many people in the past have tried to identify bird sounds by template cross-correlation - basically, taking known examples and trying to detect whether the shape matches well. The simple approach to cross-correlation fails in various situations, such as organic variation of the sound. The modern approach, introduced to bird classification by Gabor Fodor in 2013 and developed further by Lasseck and others, still uses cross-correlation, but not to guess the answer directly: it uses it to generate new features that get fed into a classifier. At the time of writing (2015), this is the type of classifier that tends to win bird classification contests. (There's a small sketch of the feature-generation idea after this list.)
  • Wang (2003), "An industrial strength audio search algorithm"

    • This paper tells you how the well-known "Shazam" music recognition system works. It uses a clever idea about what is informative and invariant about a music recording. The method is not appropriate for natural sounds but it's interesting and elegant.

      Bonus question: Take some time to think about why this method is not appropriate for natural sounds, and whether you could modify it so that it is.

  • Stowell and Plumbley (2014), "Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning"

    • This is our paper about large-scale bird species classification. In particular, a "feature-learning" method which seems to work well. There are some analogies between our feature-learning method and deep learning, and also between our method and template cross-correlation. These analogies are useful to think about.
  • Lots of powerful machine learning right now uses deep learning. There's lots to read on the topic. Here's a blog post that I think gives a good introduction to deep learning. Also, for this article DO read the comments! The comments contain useful discussion from some experts such as Yoshua Bengio. Then after that, this recent Nature paper is a good introduction to deep learning from some leading experts, which goes into more detail while staying at the conceptual level. When you come to apply deep learning in practice, the book "Neural Networks: Tricks of the Trade" is full of good practical advice about training and experimental setup, and you'll probably get a lot out of the tutorials for the tool you use (for example I used Theano's deep learning tutorials).

    • I would strongly recommend NOT diving in with deep learning until you have spent at least a couple of months reading around different methods. The reason for this is that there's a lot of "craft" to deep learning, and a lot of current-best-practice that changes literally month by month, and anyone who gets started could easily spend three years tweaking parameters.
  • Theunissen and Shaevitz (2006), "Auditory processing of vocal sounds in birds"

    • This one is not computer science, it's neuroscience - it tells you how birds recognise sounds!

      A question for you: should machines listen to bird sounds in the same way that birds listen to bird sounds?

  • O'Grady and Pearlmutter (2006), "Convolutive non-negative matrix factorisation with a sparseness constraint"

    • An example of analysing a spectrogram using "non-negative matrix factorisation" (NMF), an interesting and popular technique for identifying repeated components in a spectrogram. NMF is not widely used for bird sound, but it certainly could be useful - maybe for feature learning, or for decoding, who knows. It's a tool that anyone analysing audio spectrograms should be aware of. (There's a small sketch of NMF after this list.)
  • Kershenbaum et al (2014), "Acoustic sequences in non-human animals: a tutorial review and prospectus"

    • A good overview from a zoologist's perspective on animal sound considered as sequences of units. Note, while you read this, that sequences-of-units is not the only way to think about these things. It's common to analyse animal vocalisations as if they were items from an alphabet "A B A BBBB B A B C", but that way of thinking ignores the continuous (as opposed to discrete) variation of the units, as well as any ambiguity in what constitutes a unit. (Ambiguity is not just failure to understand: it's used constructively by humans, and probably by animals too!)
  • Benetos et al (2013), "Automatic music transcription: challenges and future directions"

    • This is a good overview of methods used for music transcription. In some ways it's a similar task to identifying all the bird sounds in a recording, but there are some really significant differences (e.g. the existence of tempo and rhythmic structure, the fact that musical instruments usually synchronise in pitch and timing whereas animal sounds usually do not). A big difference from "speech recognition" research is that speech recognition generally starts from the idea of there just being one voice. The field of music transcription has spent more time addressing problems of polyphony.
  • Domingos (2012), "A few useful things to know about machine learning"

    • Lots of sensible, clearly-written advice for anyone getting involved in machine learning.
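
Two quick illustrative sketches to go with the list above (both mine, added for this post; neither is the respective authors' actual code). First, the Lasseck-style template cross-correlation idea - note that the correlation scores are not the answer, they're features to feed into a classifier:

    import numpy as np
    from scipy.signal import correlate2d

    def template_features(spec, templates):
        """spec: (freq, time) magnitude spectrogram of a recording.
        templates: list of smaller spectrogram patches of known bird sounds.
        Returns one peak-correlation score per template - classifier inputs,
        not final decisions. (Real systems normalise the correlations.)"""
        return np.array([correlate2d(spec, tmpl, mode="valid").max()
                         for tmpl in templates])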

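Second, the NMF idea, in its plainest form via scikit-learn (the O'Grady and Pearlmutter paper adds convolutive structure and a sparseness constraint on top of this):

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    S = np.abs(rng.standard_normal((257, 400)))   # stand-in for a magnitude spectrogram

    model = NMF(n_components=8, init="nndsvd", max_iter=400)
    W = model.fit_transform(S)     # (257, 8): spectral templates
    H = model.components_          # (8, 400): when each template is active
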
Textbooks:

  • "Machine learning: a probabilistic perspective" by Murphy
  • "Nature's Music: the Science of Birdsong" by Marler and Slabbekoorn - a great comprehensive textbook about bird vocalisations.

Syndicated 2015-11-13 03:18:31 (Updated 2015-11-13 03:27:52) from Dan Stowell

Emoji understanding fail

I'm having problems understanding people. More specifically, I'm having problems now that people are using emoji in their messages. Is it just me?

OK so here's what just happened. I saw this tweet which has some text and then 3 emoji. Looking at the emoji I think to myself,

"Right, so that's: a hand, a beige square (is the icon missing?), and an evil scary face. Hmm, what does he mean by that?"

I know that I can mouseover the images to see text telling me what the actual icons are meant to be. So I mouseover the three images in turn and I get:

  • "Clapping hands sign"
  • "(white skin)"
  • "Grinning face with smiling eyes"

So it turns out I've completely misunderstood the emotion that was supposed to be on that face icon. Note that you probably see a different image than I do anyway, since different systems show different images for each glyph.

Clapping hands, OK fine, I can deal with that. Clapping hands and grinning face must mean that he's happy about the thing.

But "(white skin)"? WTF?

Is it just me? How do you manage to interpret these things?
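
A technical aside for anyone else puzzled by this: the beige square is apparently a separate "skin tone modifier" codepoint, which is meant to combine with the emoji before it, and gets rendered as a lone square by systems that don't support it. Here's a minimal Python sketch - the exact codepoints are my guess at what was in the tweet, and you need a Python build with Unicode 8.0 data for the names to resolve:

    import unicodedata

    seq = "\U0001F44F\U0001F3FB\U0001F601"   # clap + skin-tone modifier + grin (guessed)
    for ch in seq:
        print(f"U+{ord(ch):05X}", unicodedata.name(ch, "(unknown in this Unicode data)"))
    # U+1F44F CLAPPING HANDS SIGN
    # U+1F3FB EMOJI MODIFIER FITZPATRICK TYPE-1-2   <- the "beige square"
    # U+1F601 GRINNING FACE WITH SMILING EYES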

Syndicated 2015-11-10 05:16:04 (Updated 2015-11-10 05:18:12) from Dan Stowell

PhD opportunity! Study machine learning and bird sounds with me

I have a fully-funded PhD position available to study machine learning and bird sounds with me!

For full details please see the PhD advertisement on jobs.ac.uk. Application deadline Monday 12th January 2015.

Please do email me if you're interested or if you have any questions.

– and is there anyone you know who might be interested? Send them the link!

Syndicated 2014-11-06 11:00:42 from Dan Stowell

Carpenters Estate - Is it viable or not?

Newham Council has handled the current Carpenters Estate protest shockingly badly. Issuing a press release describing the protesting mothers as "agitators and hangers-on" is just idiotically bad handling.

BUT they have also described Carpenters Estate as not "viable", and many commentators (such as Zoe Williams, Russell Brand) have lampooned them for it. After all, they can see the protesting mothers occupying a perfectly decent-looking little home. How can it be not "viable"?

Are they judging viability compared against the market rate for selling off the land? That's what Zoe Williams says, and that's what I assumed too from some conversations. But that's not it at all.

Newham's current problem with the Carpenters Estate is basically caused by the two different types of housing stock on the estate:

  • They have some tall old tower blocks which housed many hundreds of people, but they can't renovate them to a basic decent standard - the council can't afford to do it themselves and the leaseholders couldn't afford to shoulder the costs. (Council reports have calculated that the renovation cost per flat would be more than the value of the flat itself - which means the private leaseholders simply wouldn't be able to get a mortgage for the renovations.)
  • All the little two-storey houses next to the tower blocks are basically viable, at least in the sense that they should be easy to refurbish. However, the council can't just leave people in those houses if it intends to demolish the tower blocks. I'm no expert in demolition, but I assume there's no way to demolish the 23-storey block next door while keeping the surrounding houses safe to live in, and that's why Doran Walk is also slated for demolition.

So "not viable" means they can't find any way to refurbish those tower blocks to basic living standards - especially not in the face of the Tory cuts to council budgets - and that affects the whole estate as well as just the tower blocks. This appears to be the fundamental reason they're "decanting" people, in order to demolish and redevelop the whole place. (Discussed eg in minutes from 2012.) It's also the reason they have a big PR problem right now, because those two-storey houses appear "viable" and perfectly decent homes, yet they do indeed have a reason to get everyone out of them!

After the UCL plan for Carpenters Estate fell through it's understandable that they're still casting around for development plans, and we might charitably assume the development plans would be required to include plenty of social housing and affordable housing. You can see from the council minutes that they do take this stuff seriously when they approve/reject plans.

(Could the council simply build a whole new estate there, develop a plan itself, without casting around for partners? Well yes, it's what councils used to do before the 1980s. It's not their habit these days, and there may be financial constraints that make it implausible, but in principle I guess it must be an option. Either way, that doesn't really affect the question of viability, which is about the current un-demolished estate.)

But the lack of a plan has meant that there's no obvious "story" of what's supposed to be happening with the estate, which just leaves space for people to draw their own conclusions. I don't think anyone's deliberately misrepresenting what the council means when they talk about viability. I think the council failed badly in some of its early communication, and that led to misunderstandings that fed too easily into a narrative of bureaucratic excuses.

Syndicated 2014-10-01 16:15:10 (Updated 2014-10-01 16:35:26) from Dan Stowell

Carpenters Estate, Stratford - some background

"A group of local mothers are squatting next to London’s Olympic Park to tell the government we need social housing, not social cleansing" as featured in the Guardian and on Russell Brand's Youtube channel. The estate is Carpenters Estate, Stratford.

"Carpenters Estate," I thought to myself, "that rings a bell..."

It turns out Carpenters Estate is the one that UCL had proposed in 2011 to redevelop into a new university campus. The Greater Carpenters Neighbourhood "has been earmarked for redevelopment since 2010". "All proposals will take into account existing commitments made by the Council to those people affected by the re-housing programme." However, locals raised concerns, as did UCL's own Bartlett School (architecture/planning school) students and staff. (There's a full report here written by Bartlett students.) In mid-2013 negotiations broke down between Newham and UCL and the idea was ditched.

It seems that the council, the locals and others have been stuck in disagreement about the future of the estate for a while. At first the council promised to re-house people without breaking up the community too much; then it realised it didn't know how to do that, and eventually it came to the point where it's just gradually "decanting" people from the area and hoping that other things such as "affordable housing" (a shadow of a substitute for social housing) will mop things up. I can see how they got here, and I can see why they can't find a good resolution to all this. But the Focus E15 mothers campaign makes a really good point: irrespective of the high land prices (which probably mean Newham Council receives some tempting offers), the one thing East London needs is social housing, to prevent low-income groups and long-time locals from being forced out of London by gentrification.

The gentrification was already well underway before the London Olympic bid was won, but the win added entirely predictable extra heat to the housing market around there. One part of the Olympic plan included plenty of "affordable housing" on the site afterwards - in August 2012, housing charity Shelter said it was good that "almost half" of the new homes built in the Athlete's Village would be "affordable housing". Oh, but then they calculated that it wouldn't be that affordable after all, since the rules had been relaxed so that prices could go as high as 80% of market rate. (80% of bonkers is still crazy.)

Oh, and it wasn't "almost half" (even though in the Olympic bid they had said it would be 50% of 9,000 homes); by this point the target had been scaled back officially to about 40%. In November 2012 Boris Johnson insisted "that more than a third of the 7,000 new homes in the Olympic Park would be affordable". The Mayor said: "There’s no point in doing this unless you can accommodate all income groups."

Oh but then in January 2014 Boris Johnson announced that they were changing their mind, and instead of 40% affordable housing, it's now going to be 30%. "Fewer homes will be built overall, and a smaller than promised percentage of those would be affordable." ("The dream of affordable housing is fading," said Nicky Gavron.) The new target contravenes the House of Lords Select Committee on Olympic Legacy report 2013-14 which said "It is important that a fair proportion, at least [...] 35%, of this housing is affordable for, and accessible to, local residents". Boris Johnson said it was a "price well worth paying" as a trade off for more economic activity. Strange assertion to make, since East London has bucketloads of economic activity and a crisis in social and affordable housing!

P.S. and guess why they decided not to build as many homes as they had planned? It's to make room for a cultural centre codenamed Olympicopolis. (Compare against this 2010 map of planned housing in the park.) Plans for this are led by... UCL! Hello again UCL, welcome back into the story. I love UCL as much as anyone - I worked there for years - but we need to fix the housing crisis a billion times more than we need to solve UCL's real-estate issues.

Syndicated 2014-09-27 09:04:18 (Updated 2014-09-28 07:47:12) from Dan Stowell

ArcTanGent 2014 festival

I'll admit it: I wasn't sure I could tolerate 48 hours of nothing but post-rock. Lots of great stuff in that scene - but all at once? Wouldn't it wear a bit thin? Well no, ArcTanGent festival was chuffing fab. My top three awesome stickers are awarded to:

  • Bear Makes Ninja - wow like math-rock with great indie-rock vocals and harmonies, and some blinding drumming which isn't obvious in that video I linked but you should really see.

  • AK/DK - a twopiece, and both of them play synths and effects and vocals and drums, shifting roles as they go to make great electro stuff totally live. Fun and danceable as hell.

  • Cleft - another twopiece, drums and guitar, using a loopstation to fill it out and make mathy tuneful stuff. Oh and great crowd interaction - this might violate postrock ethics but I do like a band that talks to the crowd. This crowd was pretty dedicated, they were actually singing along with the zany time-signature riffs.

Unfortunately we missed Rumour Cubes while putting our tent up in the rain, so I'll never know if they would have earnt a top awesome sticker. But loads of other stuff was also great: Jamie Lenman (from heavy to tuneful, like early Nirvana), Sleep Beggar (heavy angry hip-hop and chuffing rocking), Luo (ensemble postrock with some delicious intricate drum breaks), Year Of No Light (dark slow heavy doomy, like a black hole), Alarmist (another dose of good ensemble postrock), and Human Pyramids (sort of like a school orchestra playing postrock compositions... in a good way).

Almost all of these things I've mentioned were non-headline acts, and most of them were amazed to be in a tent with so many people digging their shit, since they were used to being the niche odd-time-signature weirdos at normal festivals :)

By way of contrast, I found a couple of the big names a bit boring, to be honest, but I'll spare you that since overall the weekend was great, with so much good stuff. Mono was a nice headliner to end with - enveloping, orchestral and often low-key. We were actually not "at" the main stage but sitting on a bench 50m or so up the slope; lots of people were doing as we did, letting the sound wash its way up the hill as we took in the night.

I didn't join in the silent disco in the middle of the night, but it had a lovely effect: hundreds of people with headphones sang along to some indie rock classics, and from afar you could hear nothing except their perfectly-timed amateur indie choir. It sounded great.

Syndicated 2014-08-31 11:57:25 (Updated 2014-08-31 17:09:21) from Dan Stowell

Jabberwocky, ATP, and London

Wow. The Jabberwocky festival, organised by the people who did many amazing All Tomorrow's Parties festivals, collapsed three days before it was due to happen, this weekend. The 405 has a great article about the whole sorry mess.

We've been to loads of ATPs and I was thinking about going to Jabberwocky. Really tempted by the great lineup and handily in London (where I live). But the venue? The Excel Centre? A convention-centre box? I couldn't picture it being fun. The promoters tried to insist that it was a great idea for a venue, but it seems I was probably like a lot of people thinking "nah". (Look at the reasons they give, crap reasons. No-one ever complained at ATP about the bar queues or the wifi coverage. The only thing I complained about was that the go-karting track was shut!) I've seen a lot of those bands before, too, it's classic ATP roster, so if the place isn't a place I want to go to then there's just not enough draw.

That 405 article mentions an early "leak" of plans that they were aiming to hold it in the Olympic Park. Now that would have been a place to hold it. Apparently the Olympic Park claimed ignorance, saying they never received a booking, but that sounds like PR-speak hinting that they were in initial discussions which never went further. I would imagine that the Olympic Park demanded a much higher price than Excel, since they have quite a lot of prestige and political muscle - or maybe it was just an issue of technical requirements or the like. But the Jabberwocky organisers clearly decided that they'd got the other things in place (lineup etc), so they'd press ahead with London in some other mega-venue, and hoped that the magic they once wove at Pontins or Butlins would happen in the Excel.

This weekend there will be lots of great Jabberwocky fall-out gigs across London - and I'm sorry I won't be in London to catch any of them! It's a very strange outcome: about 75% of the festival will still happen, but converted from a monolithic festival into one of those urban multi-venue ones. The sickening thing is that even though the organisers clearly cocked some stuff up royally, I still feel terrible for them having to go bust and get no benefit from the neat little urban fallout festival they've accidentally organised. If ATP had decided to run it that way from the start, I would very likely have signed up for it, and dragged my mates down to London!

Syndicated 2014-08-14 04:50:27 from Dan Stowell
