Older blog entries for jamesh (starting at number 283)

django-openid-auth

Last week, we released the source code to django-openid-auth.  This is a small library that can add OpenID based authentication to Django applications.  It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you’ve already used the code.

Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django’s django.contrib.auth authentication system.  As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
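
In a typical application, those two files are settings.py and urls.py.  A minimal sketch of the edits (README.txt has the authoritative instructions):

# settings.py (fragment)
INSTALLED_APPS += ('django_openid_auth',)
AUTHENTICATION_BACKENDS = (
    'django_openid_auth.auth.OpenIDBackend',
    'django.contrib.auth.backends.ModelBackend',
)
LOGIN_URL = '/openid/login/'
LOGIN_REDIRECT_URL = '/'

# urls.py (fragment)
from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^openid/', include('django_openid_auth.urls')),
    # ... your application's other URL patterns ...
)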

The library has a number of useful features:

  • As well as the standard method of prompting the user for an identity URL, you can configure a fixed OpenID server URL.  This is useful for deployments where OpenID is being used for single sign on, and you always want users to log in using a particular OpenID provider.  Rather than asking the user for their identity URL, they are sent directly to the provider.
  • It can be configured to automatically create accounts when new identity URLs are seen.
  • User names, full names and email addresses can be set on accounts based on data sent via the OpenID Simple Registration extension.
  • Support for Launchpad’s Teams OpenID extension, which lets you query membership of Launchpad teams when authenticating against Launchpad’s OpenID provider.  Team memberships are mapped to Django group membership.

While the code can be used for generic OpenID login, we’ve mostly been using it for single sign on.  The hope is that it will help members of the Ubuntu and Launchpad communities reuse our authentication system in a secure fashion.
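
For the single sign on case, the configuration looks something like this (the URLs and team names below are only examples; README.txt documents the full set of settings):

# settings.py (fragment): single sign on against a fixed provider
OPENID_SSO_SERVER_URL = 'https://login.launchpad.net/'
# Create accounts for identity URLs we haven't seen before, filling in
# user details from the Simple Registration response.
OPENID_CREATE_USERS = True
OPENID_UPDATE_DETAILS_FROM_SREG = True
# Map Launchpad team memberships to Django groups.
OPENID_LAUNCHPAD_TEAMS_MAPPING = {
    'example-lp-team': 'example-django-group',
}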

The source code can be downloaded using the following Bazaar command:

bzr branch lp:django-openid-auth

Documentation on how to integrate the library is available in the README.txt file.  The library includes some code written by Simon Willison for django-openid, and uses the same licensing terms (2 clause BSD) as that project.

Syndicated 2009-04-14 08:25:56 from James Henstridge

Sansa Fuze

On my way back from Canada a few weeks ago, I picked up a SanDisk Sansa Fuze media player.  Overall, I like it.  It supports Vorbis and FLAC audio out of the box, has a decent amount of on-board storage (8GB), and can be expanded with a MicroSDHC card.  About the only thing I don’t like is the proprietary dock connector used for data transfer and charging: the choice of accessories for it is underwhelming, and a standard mini-USB connector would have meant fewer cables to carry around.

The first thing I tried was to copy some music to the device using Rhythmbox.  This appeared to work, but took longer than expected.  When I tried to play the music, it was listed as having an unknown artist and album name.  Looking at the player’s filesystem, the reason for this was obvious: Rhythmbox had transcoded the music to MP3 and lost the tags.  Copying the ogg files directly worked a lot better: it was quicker and preserved the metadata.

Of course, getting Rhythmbox to do the right thing would be preferable to telling people not to use it.  Rhythmbox depends on information about the device provided by HAL, so I had a look at the relevant FDI files.  There was one section for Sansa Clip and Fuze players which didn’t list Vorbis support, and another section for “Sansa Clip version II”.  The second section was a much better match for the capabilities of my device.  As all Clip and Fuze devices support the extra formats when running the latest firmware, I merged the two sections (hal bug 20616, ubuntu bug 345249).  With the updated FDI file in place, copying music with Rhythmbox worked as expected.
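
For the curious, the relevant part of the fdi data just lists the formats the player can play.  An illustrative fragment (not the exact hal-info diff; the product ID below is a placeholder):

<deviceinfo version="0.2">
  <device>
    <match key="usb.vendor_id" int="0x0781">
      <match key="usb.product_id" int="0x0000"> <!-- placeholder -->
        <append key="portable_audio_player.output_formats"
                type="strlist">application/ogg</append>
        <append key="portable_audio_player.output_formats"
                type="strlist">audio/x-flac</append>
      </match>
    </match>
  </device>
</deviceinfo>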

The one downside to this change is that if you have a device with old firmware, Rhythmbox will no longer transcode music to a format the device can play.  There doesn’t seem to be any obvious way to tell if a device has a new enough firmware via USB IDs or similar, so I’m not sure how to handle it automatically.  That said, it is pretty easy to upgrade the firmware following the instructions from their forum, so it is probably best to just do that.

Syndicated 2009-03-24 10:21:05 from James Henstridge

PulseAudio

It seems to be fashionable to blog about experiences with PulseAudio, so I thought I’d join in.

I’ve actually had some good experiences with PulseAudio, seeing some tangible benefits over the ALSA setup I was using before.  I’ve got a cheapish surround sound speaker set connected to my desktop.  While it gives pretty good sound when all the speakers are used together, it sounds like crap if only the front left/right speakers are used.

ALSA supported multi-channel audio with the motherboard’s sound card well enough, but apps producing stereo sound would only play out of the front two speakers.  There are some howtos on the internet for setting up a separate ALSA device that routes stereo audio to all the speakers in the right way, but that requires knowing in advance what sort of audio an application is going to generate: something like Totem could produce mono, stereo or surround output depending on the file I want to play.  This was more effort than I was usually willing to put in, so I ended up flicking a switch on the amplifier to duplicate the front left/right channels to the rear.

With PulseAudio, I just had to edit the /etc/pulse/daemon.conf file and set default-sample-channels to 6, and it took care of converting mono and stereo output from apps to play on all the speakers while still letting apps producing surround output play as expected.  This means I automatically get the best result without any special effort on my part.
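
For reference, the whole change is a single line:

; /etc/pulse/daemon.conf (fragment)
; Remix everything to six channels (5.1): mono and stereo streams get
; upmixed, while surround streams pass through unchanged.
default-sample-channels = 6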

I’m not too worried that I had to tell PulseAudio how many speakers I had, since it is possible to plug in a number of speaker configurations and I don’t think the card is capable of sensing what has been attached (the manual documents manually selecting the speaker configuration in the Windows driver).  It might be nice if there was a way to configure this through the GUI though.

I’m looking forward to trying the “flat volume” feature in future versions of PulseAudio, as it should get the best quality out of the sound hardware (if I understand things right, 50% volume with current PulseAudio releases means you only get 15 bits of quantisation on a 16-bit sound card).  I just hope that it manages to cope with the mixers my sound card exports: one two-channel mixer for the front speakers, one two-channel mixer for the rear two speakers and two single channel mixers for the center and LFE channels.

Syndicated 2009-02-25 12:24:58 from James Henstridge

In Montreal

I’m in Montreal through to the end of next week.  The sub-zero temperatures are quite a change from Perth, where it got up to 39°C on the day I left.

The last time I was here was for Ubuntu Below Zero, so it is interesting seeing the same city covered in snow.

Syndicated 2009-02-24 19:12:16 from James Henstridge

In Hobart

Today was the first day of the mini-conferences that lead up to linux.conf.au later on this week.  I arrived yesterday after an eventful flight from Perth.

I was originally meant to fly out to Melbourne on the red eye leaving on Friday at 11:35pm, but just before I checked in they announced that the flight had been delayed until 4:00am the following day.  As I hadn’t had a chance to check in, I was able to get a pair of taxi vouchers to get home and back.  I only got about 2 hours of sleep though, as they said they would turn off the baggage processing system at 3am.  When I got back to the airport, I could see all the people who had stayed at the terminal spread out with airplane blankets.  A little before the 4:00am deadline, another announcement was made saying the plane would now be leaving at 5:00am.  Apparently they had needed to fly a replacement component in from over east to fix a problem found during maintenance.  Still, it seems it wasn’t the most delayed Qantas flight for that weekend and it did arrive in one piece.

As I had planned to spend a day in Melbourne visiting relatives, it didn’t cause any problems with the flight on to Hobart.  I had been invited to the “Ghosts” dinner, which was to start about an hour after my flight landed, so it was a bit of a rush to get to the university accommodation and then walk down the hill to the restaurant.

The dinner was pretty good, with organisers from all the previous LCA conferences plus the people organising the 2010 conference.  Unfortunately, I was the only one from the 2003 organisers able to attend.  It sounds like the 2010 organisers have things in hand, and the location should be great.

Syndicated 2009-01-19 13:01:21 from James Henstridge

Getting “bzr send” to work with GMail

One of the nice features of Bazaar is the ability to send a bundle of changes to someone via email.  If you use a supported mail client, it will even open the composer with the changes attached.  If your client isn’t supported, then it’ll let you compose a message in your editor and then send it to an SMTP server.

GMail is not a supported mail client, but there are a few workarounds listed on the wiki.  Those really come down to using an alternative mail client (either the editor or Mutt) and sending the mails through the GMail SMTP server.  Neither solution really appealed to me.  There doesn’t seem to be a programmatic way of opening up GMail’s compose window and adding an attachment (not too surprising for a web app).

What is possible though is connecting via IMAP and adding messages to the drafts folder (assuming IMAP support is enabled).  So I wrote a small plugin to do just that.  It can be installed with the following command:

bzr branch lp:~jamesh/+junk/bzr-imapclient ~/.bazaar/plugins/imapclient

Then configure the IMAP server, username and mailbox according to the instructions in the README file.  After that, you can use “bzr send” as normal and complete and send the draft at your leisure.

One nice thing about the plugin implementation is that it didn’t need any GMail specific features: it should be useful for anyone who has their drafts folder stored on an IMAP server and uses an unsupported mail client.
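
At its core, the technique is just IMAP’s APPEND command.  A minimal sketch (the function and argument names here are mine, not the plugin’s API):

import imaplib
import time

def save_draft(host, user, password, message, mailbox='Drafts'):
    # Connect over SSL and append the full RFC 2822 message text to the
    # drafts mailbox, flagged as a draft.
    imap = imaplib.IMAP4_SSL(host)
    try:
        imap.login(user, password)
        imap.append(mailbox, r'(\Draft)',
                    imaplib.Time2Internaldate(time.time()), message)
    finally:
        imap.logout()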

The main area where this could be improved would be to open up the compose screen in the web browser.  However, this would require knowing the internal message ID for the new message, which I can’t see how to access via IMAP.

Syndicated 2009-01-16 09:19:31 from James Henstridge

Using Twisted Deferred objects with gio

The gio library provides both synchronous and asynchronous interfaces for performing IO.  Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.

For C programs this is unavoidable, but for Python we should be able to do better.  And if you’re doing asynchronous event driven code in Python, it makes sense to look at Twisted.  In particular, Twisted’s Deferred objects can be quite helpful.

Deferred

The Twisted documentation describes deferred objects as “a callback which will be put off until later”.  The deferred will eventually be passed the result of some operation, or information about how it failed.

From the consumer side, you can register one or more callbacks that will be run:

def callback(result):
    # do stuff
    return result

deferred.addCallback(callback)

The first callback will be called with the original result, while subsequent callbacks will be passed the return value of the previous callback (this is why the above example returns its argument). If the operation fails, one or more errbacks (error callbacks) will be called:

def errback(failure):
    # do stuff
    return failure

deferred.addErrback(errback)

If the operation associated with the deferred has already completed (or already failed) when the callback/errback is added, then it will be called immediately, so there is no need to check whether the operation is complete beforehand.

Using Deferred objects with gio

We can easily use gio’s asynchronous API to implement a new API based on deferred objects.  For example:

import sys  # used by the print_contents example below

import gio
from twisted.internet import defer

def file_read_deferred(file, io_priority=0, cancellable=None):
    d = defer.Deferred()
    def callback(file, async_result):
        try:
            in_stream = file.read_finish(async_result)
        except gio.Error:
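            # With no argument, errback() wraps the exception currently
            # being handled in a Failure object.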
            d.errback()
        else:
            d.callback(in_stream)
    file.read_async(callback, io_priority, cancellable)
    return d

def input_stream_read_deferred(in_stream, count, io_priority=0,
                               cancellable=None):
    d = defer.Deferred()
    def callback(in_stream, async_result):
        try:
            bytes = in_stream.read_finish(async_result)
        except gio.Error:
            d.errback()
        else:
            d.callback(bytes)
    # the argument order seems a bit weird here ...
    in_stream.read_async(count, callback, io_priority, cancellable)
    return d

This is a fairly simple transformation, so you might ask what this buys us. We’ve gone from an interface where you pass a callback to the method to one where you pass a callback to the result of the method. The answer is in the tools that Twisted provides for working with deferred objects.

The inlineCallbacks decorator

You’ve probably seen code examples that use Python’s generators to implement simple co-routines. Twisted’s inlineCallbacks decorator basically implements this for generators that yield deferred objects. It uses the enhanced generators feature from Python 2.5 (PEP 342) to pass the deferred result or failure back to the generator. Using it, we can write code like this:

@defer.inlineCallbacks
def print_contents(file, cancellable=None):
    in_stream = yield file_read_deferred(file, cancellable=cancellable)
    bytes = yield input_stream_read_deferred(
        in_stream, 4096, cancellable=cancellable)
    while bytes:
        # Do something with the data.  For this example, just print to stdout.
        sys.stdout.write(bytes)
        bytes = yield input_stream_read_deferred(
            in_stream, 4096, cancellable=cancellable)

Other than the use of the yield keyword, the above code looks quite similar to the equivalent synchronous implementation.  The only thing that would improve matters would be if these were real methods rather than helper functions.

Furthermore, the inlineCallbacks decorator causes the function to return a deferred that will fire when the function body finally completes or fails. This makes it possible to use the function from within other asynchronous code in a similar fashion. And once you’re using deferred results, you can mix in the gio calls with other Twisted asynchronous calls where it makes sense.
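
For completeness, here is a sketch of how you might drive print_contents() using Twisted’s GLib-based reactor, so that gio’s callbacks and Twisted’s callbacks share a single main loop (the file name is just an example):

from twisted.internet import glib2reactor
glib2reactor.install()  # must run before the reactor is first imported
from twisted.internet import reactor

import gio

def finish(result):
    # Stop the main loop whether print_contents() succeeded or failed.
    reactor.stop()
    return result

d = print_contents(gio.File("/etc/hostname"))
d.addBoth(finish)
reactor.run()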

Syndicated 2009-01-06 01:18:53 from James Henstridge

Red Bull Air Race

Yesterday, I went to see the finals of the Red Bull Air Race here in Perth.  This was my first time watching the event, since I was overseas when it was held the previous two years.

The weather was good, and gave me a good opportunity to play with my camera a bit.  Sometimes you can miss out on the action by trying to take photos, but in this case the camera made it a lot easier to see the planes from the shore.

As the overall winner was decided by points scored over the full series, the series winner didn’t need to win the Perth race.  That is how it turned out, with Hannes Arch coming third in Perth but winning the series.

Hannes Arch passing through one of the gates in front of the WACA

The Perth final ended up being between two English pilots: Nigel Lamb and Paul Bonhomme, with Bonhomme winning the race.

Paul Bonhomme passing through the finishing gate

After the race, there was a display by the RAAF Roulettes formation flying team, and a fly over by an F/A-18 Hornet.

F/A-18 Hornet

Overall, it was a pretty good day.  I don’t know if I’d have been up for watching the entire two days’ worth, but the finals were entertaining.  Hopefully I’ll be around for next year’s race.

Syndicated 2008-11-03 07:42:56 from James Henstridge

Re: Continuing to Not Quite Get It at Google…

David: taking a quick look at Google’s documentation, it sure looks like OpenID to me.  The main items of note are:

  1. It documents the use of OpenID 2.0’s directed identity mode.  Yes, this is “a departure from the process outlined in OpenID 1.0”, but that could be considered true of all new features found in 2.0.  Google certainly isn’t the first to implement this feature:
    • Yahoo’s OpenID page recommends users enter “yahoo.com” in the identity box on web sites, which will initiate a directed identity authentication request.
    • We’ve been using directed identity with Launchpad to implement single sign on for various Canonical/Ubuntu sites.

    Given that Google account holders identify themselves by email address, users aren’t likely to know a URL to enter, so this kind of makes sense.

  2. The identity URLs returned by the OpenID provider do not directly reveal information about the user, containing a long random string to differentiate between users.  If the relying party wants any user details, they must request them via the standard OpenID Attribute Exchange protocol.
  3. They are performing access control based on the OpenID realm of the relying party.  I can understand doing this in the short term, as it gives them a way to handle a migration should they make an incompatible change during the beta.  If they continue to restrict access after the beta, you might have a valid concern.

It looks like there would be no problem talking to their provider using existing off the shelf OpenID libraries (like the ones from JanRain).
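
For instance, with the JanRain Python library, a directed identity request is started the same way as any other: you just begin discovery from the OP server URL rather than a user-supplied identity URL.  A sketch (the realm and return-to URLs are examples):

from openid.consumer import consumer
from openid.store.memstore import MemoryStore

session = {}              # normally the web framework's session dict
store = MemoryStore()     # use a persistent store in a real deployment

c = consumer.Consumer(session, store)
auth_request = c.begin('https://www.google.com/accounts/o8/id')
redirect_url = auth_request.redirectURL(
    realm='http://example.com/',
    return_to='http://example.com/openid/complete')
# Redirect the user's browser to redirect_url to start authentication.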

If you have an existing site using OpenID for login, chances are that after registering the realm with Google you’d be able to log in by entering Google’s OP server URL.  At that point, it’d be fairly trivial to add another button to the login page – sites seem pretty happy to plaster provider-specific radio buttons and entry boxes all over the page already …

Syndicated 2008-10-30 08:30:02 from James Henstridge

Streaming Vorbis files from Ubuntu to a PS3

One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer.  Unfortunately, the set of codecs is pretty limited, which is a problem since most of my music is encoded as Vorbis.  MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.

Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3.  So I put together a package of a subversion snapshot in my PPA which should work on Intrepid.

With the newer package, it was pretty easy to get things working:

  1. Install the mediatomb-daemon package
  2. Edit the /etc/mediatomb/config.xml file and make the following changes (a sketch of the resulting fragments appears after this list):
    • Change the <protocolInfo/> line to set extend=“yes”.
    • In the <extension-mimetype> section, uncomment the line to map “avi” to “video/divx”.  This will get a lot of videos to play without problems.
    • In the <mimetype-upnpclass> section, add a line to map “application/ogg” to “object.item.audioItem.musicTrack”.  This is needed for the Vorbis files to be recognised as music.
    • In the <mimetype-contenttype> section, add a line to map “audio/L16” to “pcm”.
    • On the <transcoding> element, change the enabled attribute to “yes”.
    • Add the settings from here to the <transcoding> section.
  3. Edit the /etc/default/mediatomb script and set INTERFACE to the network interface you want to advertise on.
  4. Restart the mediatomb daemon.
  5. Go to the web UI (try opening /var/lib/mediatomb/mediatomb.html in a web browser), and add the directories you want to export.
  6. Test things on the PS3.
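
For reference, here is a sketch of the relevant config.xml fragments after those edits (abbreviated, with the transcoding profiles themselves left to the settings linked from step 2):

<!-- under <server>: -->
<protocolInfo extend="yes"/>

<!-- under <import><mappings>: -->
<extension-mimetype>
  <map from="avi" to="video/divx"/>
</extension-mimetype>
<mimetype-upnpclass>
  <map from="application/ogg" to="object.item.audioItem.musicTrack"/>
</mimetype-upnpclass>
<mimetype-contenttype>
  <treat mimetype="audio/L16" as="pcm"/>
</mimetype-contenttype>

<!-- top level: -->
<transcoding enabled="yes">
  <!-- profiles from the linked settings go here -->
</transcoding>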

Things aren’t perfect though.  As MediaTomb is simply piping the transcoded audio to the PS3, it doesn’t implement seeking on such files, and it seems that the PS3 won’t even let you pause a stream that doesn’t allow seeking.  With a less generalised transcoding backend, supporting seeking in an uncompressed PCM stream should be easy, since byte offsets map directly to sample numbers.
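
A quick sketch of that arithmetic (illustrative, not MediaTomb code):

def pcm_byte_offset(seconds, rate=44100, channels=2, bytes_per_sample=2):
    # Raw PCM (e.g. audio/L16) stores one sample per channel per frame,
    # so playback time maps linearly to a byte offset in the stream.
    frame_size = channels * bytes_per_sample
    return int(seconds * rate) * frame_size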

The other problem I found was that none of the recent music I’d ripped showed up.  It seems that they’d been ripped with the .oga file extension rather than .ogg.  This change appears to have been made in bug 543306, but the reasoning seems suspect: the guidelines from Xiph indicate that the files generated by this encoding profile should continue to use the .ogg file extension.

I tried adding some extra mappings to the MediaTomb configuration file to recognise the files without success, but eventually decided to just rename them and fix the encoding profile locally.
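
The rename itself is a shell one-liner (run in each affected directory):

for f in *.oga; do mv "$f" "${f%.oga}.ogg"; done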

A Perfect Media Server

While MediaTomb mostly works for me, it doesn’t do everything I’d like.  A few of the things I’d like out of a media server include:

  1. No need to configure things via a web UI.  In fact, I could do without a web UI altogether – something nicely integrated into the desktop would be nice.
  2. No need to set model specific settings in the configuration file.  Ideally it would know how to talk to common players by default.
  3. Supports transcoding and seeking within transcoded files.  Preferably knows what needs transcoding for common players.
  4. Picks up new files in real time.  So something inotify based rather than periodic reindexing.
  5. A virtual folder tree for music based on artist/album metadata. A plain folder tree for other media would be fine.
  6. Cached video thumbnails would be nice too.  The build of MediaTomb in my PPA includes support for thumbnails (needs to be enabled in the config file), but they aren’t cached so are slow to appear.

Perhaps Zeeshan’s media server will be worth trying out at some point.

Syndicated 2008-10-30 02:35:40 from James Henstridge
