Older blog entries for ade (starting at number 78)

Beyond the NASCAR

People in the identity world worry a lot about the NASCAR problem. 

They worry that showing a large set of buttons will hurt conversion rates (because of the paradox of choice) and confuse users who don't remember which IDP (identity provider) they used on a particular site+device combination. 


I don't think that's a big problem nowadays. 

That's because we're down to a fairly small set of viable IDPs. Most of the others are either dead (MyOpenID), new (Amazon Login) or only useful in specific niches (GitHub, Instagram, Tumblr, LinkedIn, etc.).

If we look at Stack Overflow's data we see that 5 IDPs account for 98.6% of visitors, yet everybody has to deal with the cognitive load of choosing between 12 buttons and 1 form field. 

Reducing the set of buttons to 3 would still give users a choice whilst reducing cognitive load. Cutting down to just these 3 IDPs would cover the vast majority of potential users (in Stack Overflow's case 92.02%) and greatly simplify the experience. 

However, if you prioritise providing access for 100% of your users over providing the best possible experience for the majority, then several alternative strategies are available to you. If your goal is optimising the percentage of users who sign in, and making sure those people get the best possible experience, then here's what I suggest:

  • Use Google+ and Facebook buttons (there are also going to be scenarios where Weibo/Renren/VKontakte are appropriate additions). 
  • Then use checkSessionState() and FB.getLoginStatus() to find out if the user is already signed in to Google or Facebook. The mobile SDKs have equivalent APIs.
  • Then suggest whichever account the user is already using by putting that button first and/or making it bigger. 
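The prioritisation step boils down to reordering the buttons based on those status checks. A minimal, hypothetical server-side sketch (the real checks happen via the client-side APIs named above; the function name and the boolean inputs are illustrative):

```python
def order_signin_buttons(google_signed_in: bool, facebook_signed_in: bool) -> list[str]:
    """Return sign-in buttons ordered so that the IDP the user is
    already signed in to comes first (and would be rendered larger).
    Defaults to Google first when neither, or both, are signed in."""
    if facebook_signed_in and not google_signed_in:
        return ["facebook", "google"]
    return ["google", "facebook"]
```

The same idea extends naturally to Weibo/Renren/VKontakte buttons in markets where those are the dominant IDPs.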

We've even published a guide to handling the scenario where the user is signed-in to both IDPs and you can automatically bypass the sign-in screen. 

However there's still the situation where a user prefers different IDPs on different machines: for instance if they work in a company that blocks Facebook at the firewall or they prefer Google+ on Android but Facebook on iOS. For those users a naive NASCAR implementation leaves them with one account on your service per IDP. 

The easiest solution is to ask for the user's email address and use that to correlate all the accounts they use to log in to your service. That way the user never has to worry about creating duplicate accounts. Of course this does restrict you to IDPs who can offer a verified email address. 
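The correlation itself is simple: a verified email address becomes the join key for every (IDP, user) pair. A sketch, with hypothetical names and an in-memory dict standing in for your account store:

```python
# Maps a verified email address to an internal account id.
accounts_by_email: dict[str, str] = {}

def resolve_account(idp: str, idp_user_id: str, verified_email: str) -> str:
    """Return the internal account for this sign-in, creating one on
    first contact. Any IDP identity presenting the same verified email
    maps onto the same account, so no duplicates are ever created."""
    if verified_email not in accounts_by_email:
        accounts_by_email[verified_email] = f"acct:{len(accounts_by_email) + 1}"
    return accounts_by_email[verified_email]
```

The crucial caveat is in the function's precondition: the email must be verified by the IDP, otherwise anyone could claim someone else's account.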

The only IDP this excludes is Twitter. If you are using Twitter as an IDP then you'll have to capture (and verify) the user's email address in a post-registration step. 


Sometimes you have to do things the hard way. Usually it's because you have large numbers of accounts with unverified email addresses (for instance if you used a standard OpenID IDP or used Twitter without capturing and verifying email addresses) or you're migrating users from one IDP to another. 

In that case you have to provide a 'connect flow.' This is where the user signs-in to your service with one IDP and you ask them to 'connect' with additional IDPs. Afterwards you know that the same person owns that set of accounts even if they have different email addresses associated with them. 
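Mechanically, a connect flow is just recording that one signed-in person owns several IDP identities, even when their email addresses differ. A minimal sketch (names and the in-memory dict are hypothetical):

```python
# Maps an (idp, idp_user_id) pair to an internal account id.
linked_accounts: dict[tuple[str, str], str] = {}

def account_for(idp: str, idp_user_id: str):
    """Look up the internal account for an IDP identity, if any."""
    return linked_accounts.get((idp, idp_user_id))

def connect(account_id: str, idp: str, idp_user_id: str) -> None:
    """After the signed-in user completes the 'connect' step with an
    additional IDP, record that the same person owns that identity."""
    linked_accounts[(idp, idp_user_id)] = account_id
```

After the user has connected, signing in with any of the linked IDPs resolves to the same account, regardless of email addresses.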


The heuristics above mean that the NASCAR anti-pattern doesn't have to harm conversion rates or UX.

If you'd like to learn more about this stuff I'll be attending Over The Air 2013 where I'll be walking people through examples of these heuristics in production and talking about the multi-device multi-platform post-NASCAR future of identity. Join me. 

Syndicated 2013-09-25 21:54:00 (Updated 2013-09-25 21:54:09) from Ade Oshineye

Why doesn't this blog allow comments?

Should your blog have comments? That's one of the perennial questions that every blogger faces. Are comments a way to bring in vital feedback from the-people-formerly-known-as-the-audience or are they merely a mechanism for enabling strangers to spew hatred and bile on a page with your name attached?

Historically my position has been "comments are bad, run away." My reasons included:
  • I really don't want to deal with spam. The only thing worse than having spam on your blog is using moderation systems that mean I have to read every spammy comment in order for you to get a better experience. I like you but I don't like you that much.
  • Google+ is a better conversational network than my blog. Every post on my blog ends up on my blog's Google+ Page as well.
  • Google+ also has the advantage that there can be multiple conversations by completely disjoint communities about the same blog post.
  • Google+ emphasises Real Names and Serial Identity. This means I can look at people's activity stream to see what they've been posting about, commenting upon and sharing. Of course, just because you're using your real name doesn't mean that you won't say or do things that I find objectionable but which your community finds laudable.
  • I agree strongly with Derek Powazek that your right to free speech stops where my territory starts.
  • I think it's a terrible idea to put everybody who has an opinion on a topic into the same room. That invariably leads to name-calling because they have so little common ground or shared vocabulary. For every person who understands the topic and wants to discuss nuances there'll be 10 people who would like a clearer explanation of the fundamentals. 
All of the above are good and sound reasons for disabling comments. So why have I just enabled comments on this blog?

The main reason is that I have new technology and I want to see if, just this once, technology can solve a social problem. The secondary reason is that I'm interested in aggregating the conversations around my blog posts. My hope is that this aggregation will help me discover people who are saying interesting and insightful things about what I've written.

I could be wrong but I live in hope.


Syndicated 2013-09-17 23:12:00 (Updated 2013-09-17 23:17:54) from Ade Oshineye

In Praise Of Shadows

I bought In Praise Of Shadows by Junichiro Tanizaki in a Dutch museum. It's an admirable corrective for anyone who feels that their taste has been overwhelmed by any particular aesthetic.


"a man who has a family and lives in the city cannot turn his back on the necessities of modern life" p3

"I always think how different everything would be if we in the Orient had developed our own science" p13

"how much better our own photographic technology might have suited our complexion, our facial features, our climate, our land" p16-17

"Of course this 'sheen of antiquity' of which we hear so much is in fact the glow of grime. In both Chinese and Japanese the words denoting this glow describe a polish that comes of being touched over and over again,  a sheen produced by long years of handling--which is to say grime." p20

"elegance is frigid" p20

"Sometimes a superb piece of black lacquerware, decorated perhaps with flecks of silver and gold -- a box or a desk or a set of shelves -- will seem to me unsettlingly garish and altogether vulgar. But render pitch black the void in which they stand, and light them not with the rays of the sun or electricity but rather a single lantern or candle: suddenly those garish objects turn somber, refined, dignified. Artisans of old, when they finished their works in lacquer and decorated them in sparkling patterns, must surely have had in mind dark rooms and sought to turn to good effect what feeble light there was." p23

"The quality that we call beauty, however, must always grow from the realities of life, and our ancestors, forced to live in dark rooms, presently came to discover beauty in shadows, ultimately to guide shadows towards beauty's end." p29

"For the painting here is nothing more than another delicate surface upon which the faint, frail light can play; it performs precisely the same function as the sand-textured wall." p32

"This was the genius of our ancestors, that by cutting off the light from this empty space they imparted to the world of shadows that formed there a quality of mystery and depth superior to that of any wall painting or ornament. The technique seems simple, but was by no means simply achieved." p33

"And there may be some who argue that if beauty has to hide its weak points in the dark it is not beauty at all" p46

"we find beauty not in the thing itself but in the pattern of shadows, the light and the darkness, that one thing against another creates." p46

"A phosphorescent jewel gives off its glow and color in the dark and loses its beauty in the light of day. Were it not for shadows, there would be no beauty." p46

"It struck me that old people everywhere have much the same complaints. The older we get the more we seem to think that everything was better in the past. Old people a century ago wanted to go back two centuries, and two centuries ago they wished it were three centuries earlier. Never has there been an age that people have been satisfied with." p59

"I would call back at least for literature this world of shadows we are losing. In the mansion called literature I would have the eaves deep and the walls dark, I would push back into the shadows the things that come forward too clearly, I would strip away the useless decoration. I do not ask this be done everywhere, but perhaps we may be allowed at least one mansion where we can turn off the electric lights and see what it is like without them." p64

Syndicated 2013-06-19 09:07:00 (Updated 2013-06-19 09:07:49) from Ade Oshineye

8 Apr 2013 (updated 18 Sep 2013 at 10:15 UTC) »

Open always wins?

"Open" is one of my tribe's worship words. It is a word that is beyond criticism or analysis, except from professional trolls.

So what does it mean when people say "open always wins?" It means that because TCP/IP, HTML and Apache won, open systems, open standards and open source will always win, given a long enough timeline. If the open solution isn't winning yet then we just have to wait.

This may seem like a strawman but Chris Saad bluntly stated "Whether it’s a year, a decade or a century, Open. Always. Wins."

I disagree. Mere openness isn't enough. Just because your product or service is open doesn't mean it's destined to win. Plenty of open solutions have 'lost' but we tiptoe past that particular graveyard. We either pretend that we don't remember its denizens or that they're merely sleeping.

Whilst I have a religious belief in openness and standards I can see the difference between what I want to be true ("open always wins" and "next year will the year of Linux on the desktop") and what is actually true. I want open systems to win but I'm also aware that this isn't guaranteed.

In fact when open solutions win it's because they:
  • have superior User Experience
  • have superior Developer Experience
  • give each user/developer/company more value than the equivalent closed solution
  • create a larger (and thus more valuable) market/network than the equivalent closed solution
  • co-opt the existing closed solutions
  • do something that no closed solution can match
  • commodify existing closed solutions thus rendering them unprofitable

Despite this I'm always surprised by the number of people who believe that openness is a sufficient condition for success. I'd even go so far as to suggest that if the only quality a solution has is its openness then that's a good indicator it's going to fail.


Syndicated 2013-04-08 19:09:00 (Updated 2013-09-18 09:45:12) from Ade Oshineye

Speakerconf 2013

What is Speakerconf? Speakerconf is a small (roughly 16 attendees) invite-only conference where everybody who attends gives a presentation about a topic that's currently on their mind.

Speakerconf 2013 was educational, fun and humbling--all at the same time. It featured a wide range of speakers talking about a wide range of topics. Everything from UX to constraint programming to microservices to model checking to tail-call optimisation in Java 8 got covered.

The breadth and sophistication of the talks meant that in every session at least some of us were completely befuddled whilst others were making connections across disciplines that don't normally share the same conference, let alone the same room. For every talk about computation tree logic that went completely over my head there was a moment when I got to introduce people to the ideas in composing contracts or As We May Think. Many of the other attendees told me they had a similar experience. This resulted in an environment that was unusually conducive to respectful and enlightening conversation.

If you get invited to attend a Speakerconf then I strongly recommend you accept.


Syndicated 2013-03-31 07:38:00 (Updated 2013-03-31 07:40:11) from Ade Oshineye

20 Mar 2013 (updated 18 Sep 2013 at 10:15 UTC) »

Why do we bother with APIs?

Sometimes people wonder why we bother building APIs since it seems they can end up being used in ways that compete with our own products.

There are idealistic reasons for building APIs, as outlined by Jonathan Rosenberg, but there are also commercial benefits even if you don't share that philosophy. The main one is that APIs reduce the friction involved in making your services more valuable. They make it easier for other people to add data to your services. 

They also attract more users to your services by effectively advertising them on other people's sites. As well as increasing your visibility, APIs also ensure that users are more likely to try your services since the risk of lock-in is reduced. If you have at least a CRUD API, potential users know that there will be a mechanism for extracting their data if something better comes along or if your services change in ways they don't like.

The other benefit of APIs is that they lower the cost of experimentation and increase the set of potential experimenters. These experiments can serve your users in two ways. Firstly they can handle niche use cases without cluttering the user interface of the application. Secondly some of these niche use cases may turn out, after a period of refinement, to be useful for mainstream users or for attracting completely new sets of users.

Another thing we've learned the hard way is that if you don't give people an API, or you give them an insufficient API, they'll resort to screen-scraping and hacking in order to unlock the value in your product. This can create dependencies on things that were never meant to be stable or it can lead to the emergence of widely-used but unofficial APIs.

That behaviour can harm your product, your developers and your users. For example it can lead to a mismatch in expectations when some developers believe they're using an official API with established deprecation and change management policies. You also have to ensure that the APIs you create don't damage the product, for instance, by making it very easy to spam or game your system.

Providing an API, no matter how good, is just the start. The next challenge is to make something valuable enough that developers will use it in the absence of some extrinsic compulsion.

Firstly this involves making something that's easy to experiment with. So it should be easy to copy-paste a personalised URL into a browser and see a pretty-printed dataset.

Then you have to offer a path from there. The path starts with letting people play even if they don't understand your service all the way to the point where they understand your abstractions and the specifications you're using.

People should be able to go from playing in the browser to playing at a terminal with curl/wget to playing with an OAuth-enabled HTTP client to playing with your specialized wrapper libraries for your API to building businesses upon your platform.
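The first rung of that ladder is cheap to provide. A sketch of the kind of output users should see at every stage, whether in a browser or at a terminal (the endpoint in the comment is made up):

```python
import json

# Hypothetical personalised URL a user might paste into a browser:
#   https://api.example.com/v1/me/items?key=YOUR_KEY
# Pretty-printing the response body is a one-liner server-side:

def pretty(raw: bytes) -> str:
    """Render an API response as indented, sorted JSON so it is as
    readable at a terminal (via curl/wget) as it is in the browser."""
    return json.dumps(json.loads(raw), indent=2, sort_keys=True)
```

A response that is readable without any tooling lets people play before they understand your abstractions, which is exactly the point of the ladder.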

But you can't just stop there. If you want to go from merely offering an API (typically a set of CRUD operations on your product's datasets) to building a viable platform you need to solve some difficult problems:

  • how does your platform, as opposed to your product, generate revenue or value for you?
  • how does your platform generate revenue or value for those who build upon it?
  • how do you respond to and/or incorporate the innovations that will be built upon your platform?
  • how do you nudge developers into creating more value than they capture from your users and your platform?
  • what happens to this surplus value? Is it being re-invested in the platform or siphoned off?
Even if you solve all these problems you don't have any guarantees of long-term success. The transition from API to platform to ecosystem is difficult and most APIs don't make it. However APIs can still help developers create new possibilities along the way.

Syndicated 2013-03-20 15:20:00 (Updated 2013-09-18 09:48:03) from Ade Oshineye

19 Mar 2013 (updated 18 Sep 2013 at 10:15 UTC) »

What do you mean 'we'?


"The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it."

The web that Anil Dash wrote about wasn't lost. It was rejected. 

Dash himself rejects it when he uses a commenting system that only allows Facebook users to comment. Daniel Tunkelang rejected it when he abandoned his blog in favour of a network that gives him higher levels of engagement. I reject it when I use Instagram to take and share photos just because it's more convenient than the alternatives. 

My initial response to Anil Dash's The Web We Lost was a mixture of amusement at his rose-tinted nostalgia, annoyance at his revisionist history and bemusement at his usage of Facebook comments. As time has gone on I've realised that Dash is not a hypocritical finger-wagging reactionary but just another sensible person making sensible decisions about the networks that will generate the most engagement for his content. Of course these sensible decisions happen to clash with his stated beliefs.

The mainstream of humanity actively rejected the web-that-was rather than accidentally let it slip away. They rejected it for much the same reasons they rejected the prospect of running their own power generator. It turns out that using a central power grid gives you a better quality service for less effort which frees you to focus on the things you really care about. Humanity rejected a vision of the web where everybody runs their own websites because it turned out that most people don't care as much about maintaining infrastructure as the geeks who formed the majority of the web's users 10 to 20 years ago.  That's why every time I see someone, for instance Clay Shirky, who has been cheerfully running a compromised blogging engine on his own domain for years I shudder at the idea that we once thought self-hosting was going to be the norm.

Felix Salmon's article was one of the first responses that acknowledged this problem. It made me realise why Dash's article reminded me so much of the distress of the privileged. That's because the 'we' who lost something is the set of middle-aged geeks who miss the way things used to be and want to roll back time to a world where only geeks could harness the power of the web. Like scribes bemoaning the advent of universal literacy the comments section of Dash's post is full of people saying how much better things were when communication tools were difficult to use and restricted to a sophisticated elite.

This makes me sad. The dream of the early web was that by removing the Gutenbourgeois as gatekeepers we would create the possibility for new voices to be heard. Wish granted. 

Unfortunately the technocratic response to these new voices was to dismiss them as an Eternal September of clueless newbies. It's as if the web was better before all these 'other' people turned up and started making choices 'we' don't like. It's as if all those developers choosing to build upon technologies with clear value propositions (build upon this platform and you'll get users and paying customers) and good DX were wrong. It's as if the billions of non-geeks were either ignorant, misled or suffering from false consciousness when they chose closed systems with great UX.

Robin Sloan has a refreshing perspective on this issue. He writes, on Medium, that we've reached a point where our taste has outpaced our skill. Our taste means we demand that an acceptable website must have lots of qualities that are beyond the skill of the average individual. By framing the issue in terms of taste and skill he shows why the pendulum is unlikely to swing back. Running a sufficiently high quality web site, as opposed to a web presence, is so hard that the amateur web looks like a wasteland of dead blogs, unmaintained websites and broken links. Again and again and again sensible people choose better UX or a larger network over a more open, decentralised or federated service. But what if this flight to quality isn't a problem?

What if all those billions of people made intelligent decisions that made sense for them? What if the people saying that the past was better than the future are the ones who are wrong? What if we reject this mythical past in favour of a new future where we try to build new things that people use because they're better solutions not because they claim superior morality?

Appeals to a bygone era where the web was more open but less diverse aren't going to inspire the construction of a better future as history teaches us that "convenience wins, hubris loses." Instead those appeals sound like the beleaguered art critic moaning that "taking a picture feels like signing up to some mad collective self-delusion that we are all artists with an eye for beauty, when the tragicomic truth is that the sheer plenitude and repetition of modern amateur photography makes beauty glib." When Dash writes that there's "an entire generation of users who don't realize how much more innovative and meaningful their experience could be" but can't point to any examples it sounds like yet another hollow claim that things were better when we were young.

Maybe things really were better when we were young but I've learned to distrust appeals to bygone golden ages. Instead I want to hear people talking about vibrant futures. I want to see people working on new ideas that may not work out but which open up new possibilities. I want to see new people making new things. I want to see people making new things with all the uncertainty and doubt that brings.

This is why I'm increasingly hopeful about efforts like IndieWebCamp and ParallelFlickr. These are people building things that are useful primarily for themselves and possibly for others. That's how we'll invent a new and better web.



Syndicated 2013-03-19 18:39:00 (Updated 2013-09-18 10:08:08) from Ade Oshineye

1 Mar 2013 (updated 18 Sep 2013 at 10:15 UTC) »

A world of social login


We've known for years that passwords are bad.

They're bad for users because people tend to reuse the same weak password across multiple sites, which means they're only as safe as the least secure site they use. They're bad for developers because the sign-up process loses a large portion of potential users. Passwords also force every developer to jump through all the steps required for a world-class identity system:

  • multi-factor authentication
  • the forgot password dance
  • a salted and hashed password database
  • etc.
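The third item alone is easy to get wrong. A minimal sketch of a salted, iterated password hash using Python's standard library (the iteration count is illustrative, not a recommendation):

```python
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # illustrative; tune to your hardware budget

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair: a fresh random salt per user,
    and a slow iterated hash of the password under that salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Every developer who rolls their own sign-in has to build and maintain something like this, which is part of why delegating authentication is attractive.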

Despite all this, passwords and the password anti-pattern are still prevalent.

Social login isn't a panacea but in the long run the only viable solution is delegating authentication to a small set of high quality identity providers. It has to be a small set to avoid the damage to conversion rates caused by the NASCAR problem. They will be high quality since the market is so competitive that low quality providers (where quality is a measure of the experience/value provided to users, developers and publishers) will find it hard to acquire and retain users. The market will be competitive simply because various entities have realised that social login is the backbone of any successful ecosystem so they're making the necessary investments.

This is sub-optimal but the OpenID dream (where every user runs their own server and their own OpenID endpoint) ran aground on the twin rocks of user apathy and security. Even if the dream had survived that, it still didn't have a good answer for the major publishers who wanted to know what they would be getting in return for the extra effort of supporting OpenID. If you think OpenID Attribute Exchange and PAPE are solutions then you may be wearing the complicator's gloves.

The only questions left are:
  • who will be these identity providers?
  • what will be their business models?
  • how will we assess and choose between them?
  • how will we keep them honest?
  • how much control will they give users?
  • will they help developers build better and more valuable services as time passes?
  • will they become gatekeepers that constrain future innovations?

This moves us to a world where users authorise developers rather than particular apps or web sites. As a result once you give a developer access to your information you give all of their services and apps access to your information. Technologies like OAuth2's bearer tokens mean that developers can easily pass access to a user's information back and forth between their mobile apps and their back-end systems.

In this new world developers will have to deal with multiple competing identity providers who each impose their own constraints and policies in order to protect their users. As a result developers will have to start thinking in a more sophisticated way about the way they propagate identity between their different systems, track the provenance of user data and honour the conflicting policies imposed by multiple identity providers. They'll also need more nuanced terminology. It won't be enough to think solely in such crude terms as "public" versus "private". Developers will also have to be aware of the subtle distinctions between "obscure" versus "secret" and "public" versus "publicised".

In return we get a world of social login where you bring your identity, your interests and your community to every app, service and device rather than just the ones built by identity providers with unified privacy policies.

Syndicated 2013-03-01 16:12:00 (Updated 2013-09-18 09:50:15) from Ade Oshineye

The Google+ Sharelink Endpoint: doing it right

If your site has a Google+ sharing feature that uses this URL: https://plusone.google.com/_/+1/confirm?url= then you're doing it wrong. You're using unsupported and undocumented functionality. Don't.

You should be using a sharing URL that looks like this: https://plus.google.com/share?url=

That's our official sharelink endpoint. It is supported, monitored and maintained. The URL you're using right now is an internal part of our +1 button's JavaScript API, so it's subject to change because we don't expect anyone else to be depending on it.

The documentation for the sharelink endpoint is at https://developers.google.com/+/plugins/share/#sharelink-endpoint and it even offers a set of standard graphics that you should use for consistency with the rest of the web.
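Constructing the sharelink correctly is just a matter of percent-encoding the page URL onto the official endpoint. A sketch:

```python
from urllib.parse import urlencode

def share_link(page_url: str) -> str:
    """Build a Google+ sharelink using the supported endpoint,
    with the target URL properly percent-encoded."""
    return "https://plus.google.com/share?" + urlencode({"url": page_url})
```

Encoding the parameter (rather than concatenating the raw URL) matters for pages whose URLs contain query strings, spaces or fragments of their own.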

In short: don't be like the guy in the photo below.

Helvetica heretic

Syndicated 2013-02-22 13:58:00 (Updated 2013-02-22 13:58:27) from Ade Oshineye

30 Dec 2012 (updated 18 Sep 2013 at 10:15 UTC) »

The other side of creating more value than you capture



+Tim O'Reilly likes to talk about "creating more value than you capture." The obvious logical alternative to this is "capture more value than you create."

However I suspect that this is a false dichotomy.  I think we've missed something. It's possible for a vendor to create more value than they capture and yet, by building a new network, ensure that the surplus value eventually flows back to them. This ends up primarily magnifying the value of their network rather than the wider web. 

For instance a post on Tumblr is easier to reblog, if you have a Tumblr account, than to re-post elsewhere. It's easier to 'follow' a tumblr than to subscribe to the RSS/Atom feed for that same tumblr. Tumblr's bookmarklet makes sharing to your tumblr easier than cutting and pasting it elsewhere.  By building proprietary solutions that have a better user experience than the open solutions, Tumblr created a situation where sensible people act in ways that keep them inside the Tumblr network. This is more like a gated community than a walled garden precisely because the members of this network made an informed choice and are happy with the consequences of being inside it.

The hidden assumption in Tim O'Reilly's thinking was that the network that would primarily benefit from all this surplus value was the web. But it turns out that large social networks and large blogging networks and other sites that host large numbers of activity streams are the primary beneficiaries. We can see this in Tim O'Reilly's examples from his original presentation which primarily focusses on Twitter and the benefits for Twitter users. At the time making a distinction between Twitter and the wider web would have seemed nonsensical. 

Today we realise that creators go where they can reap value and that's increasingly in networks (like Twitter, Tumblr, LinkedIn's Influencer platform, Google+, Medium, Instagram, etc) which can help them easily discover a community that cares deeply about their creations. We also realise that the necessary walls (which take the form of privacy controls and API restrictions) between these networks and the wider web means the networks can benefit from Reed's Law and grow their value in proportion to the number of such communities that are formed rather than just the number of users. 

Every creator and their audience forms a new community where value can be created and captured. Some of these communities may even be generative. They may be creating more value than they capture. So where is that value flowing?

If you own a network where people create more value than they capture then most of that surplus value flows to you rather than to the wider web. The challenge for the owners of these networks is to invest that surplus value back into the wider web in the hope that they'll reap even more surplus value in the future. The challenge for those who believe in the virtues of the wider web is to show these network owners how they can contribute to and benefit from the wider web.

Syndicated 2012-12-30 15:41:00 (Updated 2013-09-18 09:54:18) from Ade Oshineye

