14 Feb 2008 elanthis   » (Journeyer)

OpenID Not The Best Bet

So, I’ve tried toying around with OpenID a bit, and I’ve come back feeling a little unimpressed.

There are two problems. First, it is still a super pain in the ass to set up an OpenID server. None of the servers I could find could be installed by just unpacking a tarball and running a config script - they all required source modifications, and even then didn’t really work. There are toolkits for building OpenID servers, but no ready-to-run servers.

The biggest problem though is just the user-experience as a whole. Having to type in anything at all is still kind of clunky. I want single sign-on - if I am online, any site I go to should be able to verify I am the entity I was last time (with the ability to easily allow/deny sites from doing so). I shouldn’t need to type anything in. The amount of information available to the system should be more than enough for any site, be it a simple blog comment form, a forum, or an online store.

I’m all for having a server to centralize this, but I don’t think the technology should be built around users interacting with this server. The server should be a storage medium at most, not the actual UI. Instead, I am imagining a browser extension (which should be possible for Gecko, IE, WebKit, and Opera) that exposes a new JavaScript object, something like window.AuthService. This object allows the site to query information about the current user, including name, email, contact information, etc. It will also be able to retrieve a user ID (which would probably be an email address, or something else guaranteed to be unique per-user) as well as a site token. This token would be a completely unique and cryptographically strong random identifier that is associated with the user ID and the site domain. In particular, each user/site combo gets a different token.

So, I connect to google.com, and it wants to know who I am. It queries window.AuthService.userId and window.AuthService.siteToken and gets ‘sean@mojodo’ and ‘F583AC9…4AC’ back. It then uses these to log me in, or to create a new account (which in many cases could be completely silent).
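The silent login-or-register flow on the site’s side might look something like this sketch (the `accounts` map stands in for a site’s user database; all the names here are mine, not part of the proposal):

```javascript
// Hypothetical site-side sketch of the flow described above.
function loginOrRegister(auth, accounts) {
  const { userId, siteToken } = auth;
  const existing = accounts.get(userId);
  if (!existing) {
    // First visit: silently create an account keyed by the user ID,
    // remembering the per-site token for later verification.
    accounts.set(userId, { siteToken });
    return 'created';
  }
  // Returning visit: the token must match the one seen at signup.
  return existing.siteToken === siteToken ? 'logged-in' : 'rejected';
}

const accounts = new Map();
const auth = { userId: 'sean@mojodo', siteToken: 'EXAMPLE-TOKEN' };
loginOrRegister(auth, accounts); // first call creates the account
loginOrRegister(auth, accounts); // second call logs the user in
```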

The first time a site attempts to access the AuthService object, the browser can display a popup (or one of those notice bars that are becoming popular) informing the user that the site wishes to identify him, and allow him to accept the authorization (possibly selecting between multiple profiles), permanently accept it, deny it, or select the access level (id and token only; id, token, and contact info; etc.).
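The access levels the popup might offer could be sketched as a simple filter over the user’s profile. The level names and filtering logic here are my own illustration, not a spec:

```javascript
// Hypothetical access levels for the browser's authorization prompt.
const AccessLevel = { DENY: 0, ID_AND_TOKEN: 1, FULL_CONTACT: 2 };

// Return only the profile fields the granted level allows a site to see.
function visibleProfile(profile, level) {
  if (level === AccessLevel.DENY) return null;
  const out = { userId: profile.userId, siteToken: profile.siteToken };
  if (level >= AccessLevel.FULL_CONTACT) {
    out.name = profile.name;
    out.email = profile.email;
  }
  return out;
}

const profile = { userId: 'sean@mojodo', siteToken: 'EXAMPLE-TOKEN',
                  name: 'Sean', email: 'sean@mojodo' };
visibleProfile(profile, AccessLevel.ID_AND_TOKEN); // no name/email exposed
```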

The central server comes into play as something the browser can be configured to grab identities from (it could easily just be an LDAP server). Browsers on public terminals could be configured to ask for the server login information when the user first accepts an AuthService request (and not store this information past the end of the browser session). This lets users keep their authentication information somewhere central, but keeps the UI solely in the browser, allowing for a far better user experience.

Obviously a lot of details need to be worked out: the exact interface (would it be better to use HTTP headers rather than, or in addition to, JavaScript?), the UI, and so on.
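If the interface were HTTP headers instead of (or alongside) JavaScript, a site might read identity from the request like this. The header names are entirely hypothetical - the post only raises the question:

```javascript
// Extract the identity from hypothetical AuthService request headers.
// Returns null if the browser sent no (or incomplete) identity headers.
function identityFromHeaders(headers) {
  const userId = headers['x-authservice-userid'];
  const siteToken = headers['x-authservice-sitetoken'];
  return userId && siteToken ? { userId, siteToken } : null;
}
```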

I’ve considered writing a Mozilla extension for this (as well as extensions for the WHAT-WG Connection class and Server-Sent DOM Events), but writing extensions for Mozilla is such a byzantine process, and the documentation on registering new objects in windows - and doing so securely; the docs just say it’s insecure if not done right and warn you not to do it, which is fucking useless compared to just explaining how to do it correctly and securely - is so sparse that I haven’t been able to get anywhere on any of those ideas.

In the end, though, I think OpenID is pretty much dead technology. At most it might become very slightly popular with blogs for posting comments, but its usefulness pretty much ends there. The UI sucks, and user control and information handling are too lacking.

Syndicated 2008-02-14 19:11:50 from Sean Middleditch
