Older blog entries for dan (starting at number 160)

Sharpening the sawfish

My son is two weeks old today. I don’t usually go a bundle on putting personal info on the public web – I keep that for Facebook, where they at least pretend to keep it private for me – but I mention this to explain why I’m using my laptop a lot more than my desktop lately.

The problem with my laptop is the mouse pointer. It’s one of those pointing stick devices technically known (apparently) as an isometric joystick and more commonly known as a nipple, and when the room is warm the little rubber cap gets slippery very quickly. So I decided to invest a little time in a few keyboard shortcuts.

As an Emacs user I know I’m supposed to like tiling window managers, but I don’t. My editor windows are windows onto text files that may be any size and shape but in which it’s a fairly safe bet (see “locality of reference”) that the spot I want to edit next is usually spatially close to the spot I’m currently looking at. The other ‘windows’ on my screen are things like web browsers and GUI programs where there’s no such guarantee, and the only way to make them work is to allow them to take the size and shape that their authors wanted them to have. So after a brief experiment with awesome I punted it and went looking for a programmable window manager that was designed for overlapping windows.

And ended up back with Sawfish, which I used to use back when it was fashionable. Sawfish customization is a two-phase process: first you write commands in Lisp, then you use the sawfish-ui program to assign them to keystrokes. A bit like Emacs, really, and perhaps not surprisingly.

First I needed some shortcuts to focus particular windows (Emacs, Firefox, xterms). Happily, someone has done the work for this already: I just had to download the Gimme script and set up bindings for it.

Then I needed something to chuck windows around the screen. The requirement is pretty simple here: every window on my screen is aligned against an edge, so I just need commands to pan a window to each edge. Here is the finished script; the points I would like to draw attention to (there’s a short sketch of the mechanism after this list) are

  • I use focus-follows-mouse mode, or whatever it’s called these days. This means that if I move a window under the pointer I need to move the pointer too, otherwise the window loses focus. The warp-cursor-to-window function does this: I needed to calculate the pointer position relative to the window, which for some reason isn’t a builtin.
  • window-frame-dimensions is window-dimensions plus the decorations. We need these dimensions for throwing windows rightwards or downwards, otherwise they end up slightly offscreen.
  • define-command is the magic that makes our new functions show up in the sawfish-ui dialog. The "%f" sigil means to pass the current window into the function.
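
For illustration, here is roughly the shape of the thing – a minimal sketch assembled from the builtins mentioned above, not the finished script itself (the 10-pixel pointer offset is arbitrary):

;; pan a window to the right-hand screen edge, dragging the pointer along
(define (warp-cursor-to-window w)
  ;; park the pointer just inside the window's top-left corner, so
  ;; focus-follows-mouse keeps the window focused after it moves
  (let ((pos (window-position w)))
    (warp-cursor (+ (car pos) 10) (+ (cdr pos) 10))))

(define (throw-window-right w)
  ;; use frame dimensions (window plus decorations), otherwise the
  ;; window ends up slightly offscreen
  (let ((dims (window-frame-dimensions w)))
    (move-window-to w (- (screen-width) (car dims)) (cdr (window-position w)))
    (warp-cursor-to-window w)))

;; the "%f" spec passes the current window to the command
(define-command 'throw-window-right throw-window-right #:spec "%f")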

And that’s about it. Put the file somewhere that sawfish will find it – for me, ~/.sawfish/lisp seems to be a good place – add the lines

(require 'gimme)
(setq warp-to-window-enabled t)
(require 'throw-window)

to .sawfishrc, and then set up your keys in sawfish-ui. I assigned them to Windows-key shortcuts: at last, I have a use for the Windows key.

If you hadn’t spotted it in amongst all that, I have githubbed my dotfiles. More for my convenience than for your edification, but feel free to rummage. If you are one of the three other remaining XTerm users, have a look at the XTerm*VT100*translations in my .Xdefaults – I stole that “press Shift+RET to background the command” trick from Malcolm Beattie nearly twenty years ago and have been using it ever since.

Syndicated 2012-02-22 20:25:20 from diary at Telent Netowrks

ANN Twitling: a Twitter link digest tool

Problem: I can’t keep up with the Internet

I often check Twitter on my phone. When I see tweets with links in them I tend to skip over them intending to return later when I’m on a computer with a full-size screen, and then forget about them either because I find something else to look at or I can’t be bothered with scrolling all the way down again. And looking through old tweets is nearly as bad on the full-size twitter web site as it is in a mobile client.

Proposed solution: I need a computer program to read the Internet for me

Thus, Twitling: a small script built from Ruby, Sinatra, OmniAuth, the Twitter gem, and Typhoeus (to grab links in parallel). It reads one’s timeline and displays the resolved URL, the title and an excerpt from the text of each link that was posted. Source code is on Github.
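
The parallel grabbing is the interesting bit. Here’s a minimal sketch of that step – not twitling’s actual code, and the title-scraping regex is purely illustrative – queueing every link on one hydra and letting them all resolve concurrently:

require 'typhoeus'

# fetch every url in parallel, returning a url => page-title hash
def titles_for(urls)
  hydra = Typhoeus::Hydra.new
  titles = {}
  urls.each do |url|
    request = Typhoeus::Request.new(url, followlocation: true)
    request.on_complete do |response|
      # crude title extraction; fall back to the url itself
      titles[url] = response.body[%r{<title>(.*?)</title>}im, 1] || url
    end
    hydra.queue(request)
  end
  hydra.run   # blocks until every queued request has completed
  titles
end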

I haven’t really used it myself yet in anger: the first thing I notice while testing it is that there are a whole lot more links in my feed than I thought there were, and the original plan to produce a 24 hour digest might become very unwieldy.

Possible further development ideas include

  • speed it up, by prefetching, better caching, or fetching the links asynchronously and client-side
  • an “older” link at the bottom of the page
  • Atom/RSS output so it gets fed to me every so often and I don’t have to remember to check it
  • email output (for the same reason)
  • some css gradient fills just to make it look modern (hey, I already used text-shadow, what do you want, round button borders?)
  • your suggestion here: email dan@telent.net or open an issue on Github. Bug reports too.

Try not to break it, please.

Syndicated 2012-02-01 12:54:57 from diary at Telent Netowrks

backbone.js 1 – 0 jQuery

I’ve spent a few hours over the last couple of days figuring out how to use backbone.js, and so far I’m impressed by it: it solves a real problem elegantly and doesn’t seem to have an entire religion bolted onto the side of it.

5 minute summary: it introduces models (and collections of them) and views to client-side javascript, and connects them with a publish/subscribe event notifier system so that when you make changes to a model all the views of it update without your having to remember to do anything to them.

A Model is an object that knows how to update itself from a Rails-y “REST” server (scare quotes, because as we all know these days REST isn’t what you think it is), and publishes its attributes using the methods set and get.

	var m=find_me_a_model();
	var selected= (m.has('selected')) ? m.get('selected') : false;
	m.set({selected:  !selected});

Calling set will, if the value has changed, trigger a “change” event, whose handlers are called in all objects which have bound to it. These objects are usually Views.

A View is an object with a render method and an el attribute; calling the former creates a piece of DOM tree in the latter, which you can then attach to your document somewhere:

MyApp.Views.ThingView = Backbone.View.extend({
    tagName: "ul",   // backbone creates this.el as an (empty, detached) <ul>
    initialize: function() {
        this.model.bind("all", this.render, this);
        this.render();
    },
    // ... this is not working code - I missed out some important bits ...
    events: {
        "click li": "do_select"
    },
    do_select: function(e) { ... },
    render: function() {
        var ul = $(this.el);
        ul.html(some_html_for(this.model));
        return this;
    }
})

jQuery(document).ready(function() {
    // the view needs to be told which model to watch
    var myView = new MyApp.Views.ThingView({model: find_me_a_model()});
    $('#some_element').append(myView.render().el);
});

Collections are provided too. They come with a large number of iteration functions (map, filter, reduce, all that stuff) which makes them really rather useful, and you can build Views of them in much the same way as you can build views of models.
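
For instance (a made-up example, not lifted from the backbone docs – underscore’s iteration methods are proxied straight onto the collection):

var things = new Backbone.Collection([
    {name: "one", selected: true},
    {name: "two", selected: false}
]);
// filter comes from underscore; each element is a full Backbone.Model
var selected = things.filter(function (m) { return m.get("selected"); });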

(To complete the picture, there’s also a Router, which is an interface for monkeying around with the URL so you can build bookmarkable client-side apps. But I haven’t had to use that yet.)

Anyway. As you see in the example above, the view can also take a hash of events which is registered with jQuery using its delegate method. In this case we’re asking to have do_select called whenever a click event is received on any li element inside it. Great!

Not so great when it unexpectedly doesn’t work, though. Specifically, jQuery drag/drop events don’t work with jQuery’s delegate method, and there’s nothing in the documentation on either page to stop you wasting an afternoon finding this out. Way to go. For more details on just how much hysterical raisins mess is involved with jQuery event handlers, see the pages for on and live – look upon these works, ye mighty, and despair.

backbone.js is here. There’s a Ruby gem for using it with Rails: add rails-backbone to your Gemfile, and you get a handy set of generators which write Coffeescript for you. (A brief inspection of the result says that this is a good thing because there’s no way on earth I’d want to write that stuff myself. But I concede, significant whitespace is a valid personal preference, just not one of mine.)

Syndicated 2012-01-29 22:04:06 from diary at Telent Netowrks

Making reload! work in Pry with Rails 3.2

As of Pry 0.9.7.4 (the current version according to Bundler at the time I write this), the setup instructions for replacing irb with Pry when you run rails console no longer work fully in Rails 3.2. Specifically, the Rails team have changed the way they create irb commands like reload!: where in earlier versions they were added to Object, now they are added to IRB::ExtendCommandBundle to avoid polluting the global namespace. Here’s the github pull request where the change is described.

(The cynic here will say “Rails? Namespace pollution? Lost cause, mate”, but hey, let’s not be down on attempts to make it better)

IRB already knows how to look in IRB::ExtendCommandBundle; Pry doesn’t, so if you have installed Pry in the usually recommended way, by assigning IRB=Pry, you’ll get an error that Pry::ExtendCommandBundle doesn’t exist.

(I’ve seen ‘fixes’ for this bug that assign Pry::ExtendCommandBundle=Pry. This will make rails console start, but it still doesn’t make the commands accessible. Less than entirely useful, then.)

So, let’s make it. Here’s the relevant bit of my .pryrc: feel free to use as inspiration, but don’t blame me if copy/paste doesn’t work

if Kernel.const_defined?("Rails") then
  require File.join(Rails.root, "config", "environment")
  require 'rails/console/app'
  require 'rails/console/helpers'
  # re-export each Rails console command (reload! and friends) as a
  # Pry command, since Pry won't look in IRB::ExtendCommandBundle
  Pry::RailsCommands.instance_methods.each do |name|
    Pry::Commands.command name.to_s do
      Class.new.extend(Pry::RailsCommands).send(name)
    end
  end
end

If you are using a newer version of Pry than me – well, first off, they may have fixed this already and if so you can ignore this whole post. But if they haven’t, and if Pry::Commands.command is giving you trouble, note that the unreleased Pry 0.9.8 is set to include a new way of defining custom commands and you may need to rewrite this using the new Pry::Commands.block_command construct instead.

HTH

Syndicated 2012-01-23 14:08:41 from diary at Telent Netowrks

Micro setup for minitest in rails

I think this is the bare minimum setup for being able to write Minitest::Spec tests in a Rails 3.1 app, and certainly a lot simpler than all that faffage with minitest-rails and stuff

  • add the line require 'minitest/spec' somewhere near the top of test/test_helper.rb
  • write tests that look something like this:
    require 'test_helper'
    require 'edition'

    describe Edition do
      it "ex nihilo nihil fit" do
        nil.must_be_nil
      end
    end
    
  • we don’t create generators, but really, why do you need a generator to add one line of code? To disable the builtin Test::Unit generators – which you may as well, because in this context they’re useless – add
    config.generators do |g|
      g.test_framework nil
    end

inside YourApp::Application in config/application.rb

This is all pretty vanilla – it doesn’t do spork or any of the faster-testing-through-not-loading-the-framework stuff, but with those three simple steps you can run rake test:units just as you would with the default Test::Unit stuff. test:foo for other values of foo appears also to work, but I don’t have any integration tests in this project yet so don’t take my word for it.

Double Trouble

I can see no way in minitest to create a partial mock: viz. a real object with some but not all methods mocked. In fact I can see no documented way in minitest to do any kind of mocking at all. As the newly fashionable Double Ruby (a.k.a. rr) library scores highly on both these counts, I decided to use that too.

This is a simple matter of adding “rr” to the Gemfile, then amending test/test_helper.rb to include the lines

require 'rr'
and
class MiniTest::Unit::TestCase
  include RR::Adapters::MiniTest
end
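
After which something like this works – assuming the Edition model from earlier, with a hypothetical price method standing in for whatever you want to fake:

require 'test_helper'

describe Edition do
  it "mocks one method and leaves the rest alone" do
    edition = Edition.new
    # rr replaces just this one method on this one instance;
    # everything else on edition keeps its real behaviour
    stub(edition).price { 42 }
    edition.price.must_equal 42
  end
end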

Syndicated 2012-01-21 18:27:16 from diary at Telent Netowrks

Objections on Rails

It seems to be pick-on-rails week/month/quarter again. I’m not sure that 3 months and two projects really should qualify me to have an opinion on it, but this is the internet, so here we go anyway.

Recent influences which have provoked these thoughts:

  • Steve Klabnik’s blog, in which he identifies the problem as ActiveRecord and/or ActionController and/or ActionView and/or the whole concept of MVC. MVC. Webdevs, you keep using that word. I do not think it means what you think it means.
  • The veneer of delegating-domain-objects I created in my current $dayjob project, inspired by the writings above, and about which this post was originally going to be until I realised just how much I was writing merely explaining the problem. Look out for Part Two.

I’m not going to lay into the V and C of Rails here: to be honest, although they seem rather unpretty (my controller’s instance variables appear magically in the view? ew) they’re perfectly up to the task as long as you don’t try to do the heavy lifting in either of those layers. Which is a bad idea anyway. And actually, after writing an entire site including the layouts in Erector (if you don’t know Erector, think Markaby), there is a certain pleasure in being able to write the mostly static HTML bits in … HTML.

No, instead of generalizing that MVC is the problem, I am going to confine myself to the ActiveRecord arena and generalize that M, specifically M built on ORM, is the problem. The two problems.

Here is the first problem. Object-orientated modelling is about defining the responsibilities (or behaviours, if you prefer) of the objects in your system. Object-relational modelling is about specifying their attributes. Except in the trivially simple cases (“an Elf is responsible for knowing what its name is”) the two are not the same: you define an Elf with a method which tells him to don his pointy shoes, not with direct access to his feet so you can do it yourself. So that’s the first problem: the objects you end up with when you design with ActiveRecord have accessors where their instance variables would be in a sensible universe.

(Compounding this they also have all the AR methods like #find which in no way correspond to domain requirements, but really I think that’s a secondary issue: forcing you to inherit baggage from your ancestors is one thing, but this pattern actively encourages you to create more of it yourself. This is the kind of thing that drives Philip Larkin to poetry)

Here is the second problem. We’re wimping out on relations. For the benefit of readers who equate RDBMS with SQL with punishment visited on us by our forebears in 1960s mainframe data processing departments, I’m going to expound briefly on the nature of relational algebra: why it’s cool, and what the “object-relational impedance mismatch” really is. It’s not about the difference between String and VARCHAR.

Digression: a brief guide to relational algebra

A relation is a collection of tuples. A tuple is a collection of named attributes.

(You can map these to SQL database terminology if you put your tuples in a grid with one column per attribute name. Then attribute=column, row=tuple, and relation=table. Approximately, at least)

An operation takes relation(s) as arguments and returns a relation as result. Operators are things like

  • select (a.k.a. restrict), which selects tuples from a relation according to some criteria and forms a new relation containing those selected. If you view the relation as a grid, this operation makes the grid shorter
  • project, which selects attributes from each tuple by name (make the grid narrower)
  • rename, which renames one or more attributes in each tuple (change the column titles)
  • set difference and set intersection
  • some kind of join: for example the cross join, which takes two relations A (m rows tall) and B (n rows tall) and returns an m*n row relation R in which for each row Ai in A there are n rows each consisting of all attributes in Ai plus all attributes in some row Bj in B. Usually followed by some kind of selection which picks out the rows where primary and foreign key values match, otherwise usually done accidentally.

Here’s an example to illustrate for SQL folk: when you write

select a,b,c from foo f join bar b on b.foo_id=f.id where a>1

this is mathematically a cross join of foo with bar, followed by a selection of the rows where b.foo_id=f.id, followed by a projection down to attributes a,b,c, followed by a selection of rows where a>1.
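
Or, in the algebra’s own notation (σ for select, π for project, × for cross join), reading inside out:

\[ \sigma_{a>1}(\pi_{a,b,c}(\sigma_{\mathrm{foo\_id}=\mathrm{id}}(\mathrm{foo} \times \mathrm{bar}))) \]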

Now here’s the important bit:

the tuple isn’t in itself a representation of some real-world object: it’s an assertion that some object with the given attributes exists.

Why is this important? It makes a difference when we look at operations that throw away data. If Santa has a relation with rows representing two elves with the same name but different shoe sizes, and he projects this relation to remove shoe_size, he doesn’t say “oh shit, we can’t differentiate those two elves any more, how do we know which is which?”, because he doesn’t have records of two elves and has never had records of two elves – he has two assertions that at least one elf of that name exists. There might be one or two or n different elves with that name and we’ve thrown away the information that previously let us deduce there were at least two of them, but we haven’t broken our database – we’ve just deleted data from it. Relational systems fundamentally don’t and can’t have object identity, because they don’t contain objects. They record facts about objects that have an existence of their own. If you delete some of those facts your database is not screwed. You might be screwed, if you needed to know those facts, but your convention that a relation row uniquely identifies a real-world object is your convention, not the database’s rule.

(Aside: the relational algebra says we can’t have two identical rows: SQL says we can. I say it makes no difference either way because both rows represent the same truth and you have to violate the abstraction using internal row identifiers to differentiate between them)

Back in the room

The reason I’ve expended all those words explaining the relational model instead of just saying “ActiveRecord has poor support for sticking arbitrary bits of SQL into the code” is to impress on you that it’s a beautiful, valuable, and legitimate way to look at the data. And that by imposing the requirement that the resulting relation has to be turned back into an object, we limit ourselves. Consider

  • As a present fulfillment agent, Santa wants a list of delivery postcodes so that he can put them in his satnav. Do you (a) select all the children and iterate over them, or (b) select distinct postcode from children where nice (he does the coal lumps in a separate pass)?
  • As a financial controller, Mrs Claus wants to know the total cost of presents in each of 2011, 2010 and 2009, broken down by year and by country of recipient, so that she can submit her tax returns on time.

We wave #select, #map and #inject around on our in-memory Ruby arrays like a Timelord looking for something to use his sonic screwdriver on. But when it comes to doing the same thing to our persistent data – performing set operations on collections instead of iterating over them like some kind of VB programmer – why do we get a sense of shame from “going behind” the object layer and “dropping into” SQL? It’s not an efficiency hack, we’re using the relational model how it was intended.

And although we can do this in Rails (in fairness, it gets a lot easier now we have Arel and Sequel), I think we need a little bit of infrastructure support (for example, conventions for putting relations into views, or for adding presenters/decorators to them) to legitimise it.
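
To make that concrete, here’s Mrs Claus’s report as a relation-returning query in Sequel – a sketch only, with the table and column names invented for the purpose:

require 'sequel'

DB = Sequel.connect(ENV['DATABASE_URL'])

# group, aggregate and project in the database; what comes back is a
# relation of plain tuples, not a pile of half-populated Present objects
report = DB[:presents].
  where(year: 2009..2011).
  group(:year, :country).
  select { [year, country, sum(cost).as(:total_cost)] }

report.each { |row| p row }   # => {:year=>2009, :country=>"UK", :total_cost=>...}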

Wrapping up

Summary: (1) our ORM-derived objects expose their internal state, and this is bad. (2) we don’t have good conventions for looking at our state except by bundling up small parcels of it and creating objects from them, and this is limiting us because sometimes we want to see a summary composed of parts of several objects. Summary of the summary: (1) exposing state is bad; (2) we can’t see all the state in the combinations we’d like.

Yes, I realise the apparent contradiction here, and no, I’m not sure how it resolves. I think there’s a distinction to be drawn between the parts of the system that allow mutation according to business requirements, and the “reporting” parts that just let us view information in different ways. I also think we’re putting behaviour in the wrong places, but that’s a topic for Part Three.

If you have read all the way to the end, Part Two, “Objective in Rails”, will be a run through of my current progress (with code! honest!) for coping with the first problem and parts of the second. Part Three will be a probably-quite-handwavey look at DCI and how it might provide a way of looking at things which makes both problems go away.

Syndicated 2012-01-05 18:01:28 from diary at Telent Netowrks

In Soviet Russia, ActiveRecord mocks YOU!

A week ago I attended the Ru3y Manor conference, which was Really Cool. Educational, entertaining, excellent value for money.

One of the talks was by Tom Stuart on Rails vs object-oriented design which could be summarised as a run through the SOLID principles and a description of how well (or how badly) the affordances in Rails encourage adherence to each principle.

ActiveRecord came in for some stick. The primary offence is against the Single Responsibility Principle, which says that a class should have only one reason to change – or in the vernacular, should do only one thing. This is because AR is both an implementation of a persistence pattern and (usually, in most projects) a place to dump all the business logic and often a lot of the presentation logic as well.

Divesting the presentation logic is usually pretty simple. Decorators (Tom plugged the Draper gem, which I haven’t yet tried but looks pretty cool in the screencast) seem well-equipped to fix that.

But I wish he’d said more about persistence, because it’s a mess. And the root cause of the mess is, I conjecture, that an AR object is actually two things (although only one at a time). First, it reifies a database row – it provides a convenient set of OO-ey accessors to some tuples in a relational database, allowing mutation of the underlying attributes and following of relations. Second, it provides a container for some data that might some day appear in some database – or on the other hand, might not even be valid. I refer of course to the unsaved objects. They might not pass validation, the result of putting them in associations is ambiguous, they don’t have IDs … really, they’re not actually the same thing as a real AR::Model object. But because saving is expensive (network round trips to the database, disk writes, etc) people use them e.g. when writing tests and then get surprised when they don’t honour the same contract that real saved db-backed AR objects do. So, the clear answer there is “don’t do that then”.

Ideally, I think, there would be a separate layer for business functionality which uses the AR stuff just for talkum-to-database and can have that dependency neatly replaced by e.g. a Hash when all you want to do is test your business methods. I suggest this is the way to go because my experiences with testing AR-based classes have not been uniformly painless: when I want to test object A and mock B, and each time I run the test I find a new internal ActiveRecord method on B that needs stubbing, someone somewhere is Doing Something Wrong. Me, most likely. But what? I should be using Plain Old Ruby Objects which might delegate some stuff to the AR instances: then I should decide whether all those CRUD pages should be using my objects or the AR backing, then I should decide how to represent associations (as objects or arrays of objects or using some kind of lazy on-demand reference to avoid loading the entire object graph on each request, and will there need to be a consistent syntax for searching or will I just end up with a large number of methods orders_in_last_week, orders_in_last_month, open_orders each of which does some query or other and then wraps each returned AR object in the appropriate domain object) and whether the semantic distinction between an “aggregation” relation and a “references” relation (an Order has many OrderLines, but a Country doesn’t have many People – people can emigrate) has practical relevance. The length of the preceding sentence suggests that there’s a fair amount to consider. I don’t know of any good discussion of this in Ruby, and the prospect of wading through all the Java/.NET limitations-imposed-by-insufficiently-expressive-languages shit to find it in “enterprise” languages is not one I look forward to. Surely someone must have answers already?
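
To sketch the layer I have in mind – names invented, nothing authoritative about it – a plain object owns the business logic and delegates persistence to whatever it is given: an AR::Model in production, any old duck-typed stand-in under test:

require 'forwardable'
require 'ostruct'

class Customer
  extend Forwardable
  def_delegators :@record, :name, :orders   # persistence stays in @record

  def initialize(record)
    @record = record
  end

  def good_customer?   # business logic lives here, not on the AR class
    orders.count > 10
  end
end

Customer.new(CustomerRecord.find(1))                     # production: AR-backed
Customer.new(OpenStruct.new(name: "bob", orders: []))    # test: no database, nothing to stub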

There’s other stuff. Saving objects is expensive. Saving objects on every single update is expensive and wasteful when there’s probably another update imminent, so there’s some kind of case to be made for inventing a “to be saved” queue of AR objects which is eventually flushed by saving them once each at most. The flush method could be called from some suitable post-request method in the controller, or wherever the analogous “all done now” point is in a non-Web application. That would probably be a fairly easy task, although it would be no help for the initial object creation, because until we have an id field – and we need to ask the database to get a legitimate value for it – the behaviour of associations is officially anybody’s guess.
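
The queue itself is barely worth the name – something like this sketch (invented and untested, in the spirit of the preceding paragraph):

require 'set'

class SaveQueue
  def initialize
    @pending = Set.new       # a Set, so each object is queued at most once
  end

  def mark(record)
    @pending << record
  end

  def flush!                 # call this from the end-of-request hook
    @pending.each(&:save!)
    @pending.clear
  end
end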

Rant over, and I apologise for the length but I am running out of time in which to make it shorter. In happier news: Pry – a replacement ruby toplevel that does useful stuff and that can be invoked from inside code. It’s like what Ruby developers would come up with after seeing SLIME.

Syndicated 2011-11-09 11:20:41 from diary at Telent Netowrks

Inanely great

A lot has been written – and I expect a lot more is yet to be written – about the attention to detail and unique grasp of design aesthetic that Steve Jobs exerted on Apple product development. A reasonable observation and not a new one. But the implication that goes with it, which I find curious, is that those slacker open source/free software people who are threatening to eat his lunch with Android or (perhaps less convincingly) with Ubuntu have no hope of ever replicating this setup because, as they’re volunteer-based, they have to spend too much time being nice to their contributors.

Ignoring the quibble that Android’s not actually a very good exemplar of open source development style (development directions are quite obviously set by Google, and at the time I write this there have been two major releases since they even pushed any open source stuff out at all), this argument falls down because it’s simply not true. Free software projects can be very good indeed at maintaining exacting standards in areas that they care about, and not apparently caring too much whose toes they tread on in the process – it’s just that the areas they care about are much more related to code quality and maintainability than typography and exact shades of yellow.

Taking the Linux kernel for an example, the particular story that prompted this observation was the Broadcom wireless drivers contribution, but I could add to that: Reiserfs, nvidia ethernet, Intel ethernet drivers, Android wake locks, and a zillion other less high-profile cases where badly coded patches have not been accepted, even when the rejection is due to something as trivial as whitespace[*]. (OK, maybe I was wrong to say they don’t care about typography ;-) So, the social/organisational structures exist for an open source project to be quite incredibly demanding of high standards and yet remain successful – the question of why they don’t extend these standards to external factors and “UX” probably has to remain open. And don’t tell me it’s because they don’t appreciate good design when it is on offer, because the number of Macs I see at conferences invalidates that hypothesis straight off.

[*] I am reasonably sure this is not an exaggeration, although I can no longer find the mail from when it happened to me so I may be misremembering.

Syndicated 2011-10-25 09:35:17 from diary at Telent Netowrks

Pluto

My previous entry was not just a retro whinge about today’s centralised and balkanised Internet, but also a run up to a description of how things could be different. My efforts on and off over the last few weeks to make that difference have recently been blocked by too-much-$DAYJOB, so maybe this is a good time to stop coding and talk about it a bit.

When I was first playing around with the idea of a distributed social network my focus was on duplicating the interesting bits of Facebook, and one of the reasons I concluded it wasn’t really ever worth pursuing was that Facebook already exists and nobody (to a first approximation) needs an empty duplicate of it. If you want a network where you can tell your friends what you had for breakfast and post cat videos, you want it to be the network that your friends are on.

But in the course of thinking about how to implement it and reading about Atompub, I realised that it showed the way to something subtly different. And when I thought about that a bit more I realised I’d reinvented the blog aggregator. Um. But this is the threaded blog aggregator, which is better.

The Embrace

Well, the logic is unassailable: there are already lots of people on the internet publishing their thoughts using Atom (or its gelatinous structural isomorph RSS): what we need is an app that sucks in all their posts, sorts them into categories (which we are calling “channels”), and allows the user to post their own articles (either ab initio or in reply to those they read) into the same channels. You can notify the people you’re replying to by sending them a copy of your reply (as an Atom POST to their published feed url, falling back to Trackback or Pingback or Slingback or Stickleback or whatever if that doesn’t work), and you can incorporate their replies to your articles in the same way when they come in. Stick a UI on the front that presents a trn-style threaded view of all unread articles by all authors in the channel, et voilà, you’ve just created a conversational view of stuff that’s out there already. And by and large it’s much better stuff than “paste this as your status and tag three people”.
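
The notify-the-author step is plain Atompub (RFC 5023): POST the reply, as an Atom entry, to the other author’s collection. A bare-bones sketch, with discovery, authentication and error handling all waved away:

require 'net/http'
require 'uri'

# push reply_entry_xml (an Atom <entry> document) to someone's feed url
def notify(feed_url, reply_entry_xml)
  uri = URI.parse(feed_url)
  http = Net::HTTP.new(uri.host, uri.port)
  response = http.post(uri.path, reply_entry_xml,
                       'Content-Type' => 'application/atom+xml;type=entry')
  response.code == '201'   # Atompub answers 201 Created on success
end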

The Extension

How do we turn that into a distributed resilient blah system like Usenet?

The key bit of NNPP was that each node answers proxy requests on behalf of its neighbours, for articles it’s loaded from its neighbours. So, if one of your usual feed sources is offline, you can fetch their articles from someone else who reads them. Combine that with PubSubHubbub and add some yet-to-be-decided peer-to-peer negotiation protocol so that a group of nodes can decide between themselves which will be the hub and which will subscribe to it.

This does make the issue of identity a bit more pressing: what’s to stop node B altering articles published by A, or even introducing entirely new ones that purport to come from A? Crypto, that’s what. I don’t give a stuff whether the name you go by is what your government calls you, but I do want to know whether, when someone with your moniker is claiming to have written article N, it is the same someone who previously wrote articles 1,2,3,… N-1. So, you get a PGP key (or some other asymmetric peer-to-peer public-key encryption system that doesn’t depend on a centralised certification authority). Then if the key associated with your feed changes without prior notification, my client shows me a big red warning that says you probably aren’t who you say you are. Key management by key continuity a.k.a “what ssh does”. Perhaps once you’ve been posting stuff I like for a while I’ll sign your key as well, and other people - at least, other people who like what I post – will be more likely to trust you as a result.

(NNPP also contains an outline sketch of a DNS protocol replacement. I presently think this is an optional extra, but that depends on how offensive you plan to be to deep-pocket corporates who will complain to your naming authority)

Spam? No magic solutions, I’m afraid, but the “trusted introducer” thing goes some way. If people that you don’t already read send you articles that aren’t signed by keys you have a trust relationship with, they pile up in your “slush pile” (the analogue of the G+ Incoming feed) until you decide to look at them – you might decide to apply spam filtering tools of the same kind as we use for email, or you might just decide to junk it sight unread.

The End

It’s called Pluto. Because Planet is “a feed aggregator application designed to collect posts from the weblogs of members of an Internet community and display them on a single page” (thank you, Wikipedia) and Pluto is a dwarf planet. Sometime soon, I hope, there will be code on Github.

Catchy summary points:

  • we care about content and conversation – I’m happy to let Facebook and Twitter corner the market in ephemera: this is for keepers
  • protocols not platforms – we interoperate on equal terms with anything that speaks Atompub (and intend to provide adaptors for RSS or Facebook or scraped content or even an email-to-pluto gateway) - all the other authentication and distribution stuff is strictly opt-in

Syndicated 2011-09-26 20:19:16 from diary at Telent Netowrks

Social notworking

After a bit over a month using Google Plus (with admittedly decreasing enthusiasm over the course of that time) I have no firm conclusions about what it’s good for, except that it’s incredibly good at reminding me how much I miss Usenet.

I could compare it with the other networks that people consider it “competition” for: it doesn’t replace Facebook – for me anyway – because the whole world isn’t on it, and that means I can’t use it to stay in touch with friends and family. It doesn’t replace Twitter, as the lack of a message length limit means it’s useless for epigrams (which I like) and not much cop for status updates either (which I can live without) – though it does work as “source of interesting links”, which in my opinion is the third arm of Twitter utility. And Google will, probably, be disappointed to learn that it doesn’t replace LinkedIn because, despite the best efforts of the Real Names policy enforcers, it still isn’t quite boring enough. Yet, anyway.

But that’s enough about Google+, what about Usenet?

  • The unit of discussion was an article. Not a two-line throwaway comment or a single bit of “me likes this” information. When you read something on Usenet that you felt strongly enough about to reply to, you hit ‘r’, you got the scary warning about “hundreds if not thousands of dollars”, and it dumped you in a full screen text editor where you could compose your pearl of wisdom. Sure, so you could alternatively compose your “ME TOO!”, but it wasn’t a teeny text widget which practically demands the latter response: the affordances were there for writing something with meat.
  • It was decentralised. No capricious site owner could take your comment down because someone might find it offensive, or ban all discussion of certain topics, or refuse to allow you to post links to other places, or even that he was going to pull the plug completely and delete all your words. You might be reading this and thinking Godfrey vs Demon and you’d be entirely correct that it wasn’t completely uncensored in practice – nor, I contend, should it have been – but there was at least a bit more effort involved in getting a post removed than clicking the ‘I am offended by this picture of a breast-feeding woman’ button, and that made potential complainants think a bit more carefully about whether it was worth it
  • It had user interfaces that didn’t get in the way. Really. I could sit in front of my computer for hours pressing only the space bar (maybe alternating with the ‘n’ key in less interesting groups) and it would keep the content coming. (And I did. I would blame my degree class on Usenet, if it weren’t that the time I spent fiddling with Linux was in itself sufficient to leave approximately 0 time for studying. But I digress.)

The reasons it’s dead are well-rehearsed, and boil down to this: it couldn’t cope with universal access. It was built back in the days when people had access through their institutions or employers, and for the most part knew they could lose it by acting like jerks - or at least by acting like jerks consistently enough and outrageously enough. Come the personal internet revolution – the Endless September – it had no protection against or meaningful sanctions for spammers and trolls, and so blogs/web forums sucked away most of the people who wanted to just talk, leaving behind people who were by and large too much concerned with the minutiae of meta and much less enthused about the actual posting of content.

But it did do stuff that nobody else has replicated since.

Other people:

Syndicated 2011-09-18 20:36:00 from diary at Telent Netowrks
