Older blog entries for robertc (starting at number 47)

Why oh why oh why are there adverts in movie theatres? Surely we pay enough that we don't have to be inundated with advertising. Even the toilets have advertising. Now, at Greater Union they are advertising the ability to buy advertising space on staff clothing and soft drink cups.

Not to mention cable TV - what happened to 'paying for content'? Fewer adverts than free-to-air, but still more than a DVD.


17 Feb 2006 (updated 18 Feb 2006 at 00:10 UTC) »

In a discussion about VCSs on #bzr, I proposed the following axes along which a VCS's underpinnings can vary:

* core data for a commit: snapshot | changeset | 'semantic patch'

* commit identifiers: content hash | namespace + sequence | pseudo-random | human assigned

* file location mapping: Unique ids | history analysis | None

We can then categorise systems (I apologise in advance for any errors in classification, please feel free to correct me :):

CVS: snapshot | namespace + sequence | none

git: snapshot | content hash | none

svn: snapshot | namespace + sequence | history analysis

darcs: semantic patch | human assigned | history analysis

Arch: changeset | namespace + sequence | unique ids

bzr: snapshot | pseudo-random | unique ids

monotone: snapshot | content hash | history analysis


One can also consider what a semantic patch | pseudo-random | unique-ids system would be like :)
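For concreteness, the classification above can be restated as data (the dictionary and helper name here are my own, purely for illustration), which makes the space of designs easy to query:

```python
# The three axes from the classification above, restated as tuples:
# (core commit data, commit identifier scheme, file location mapping).

SYSTEMS = {
    "CVS":      ("snapshot",       "namespace + sequence", "none"),
    "git":      ("snapshot",       "content hash",         "none"),
    "svn":      ("snapshot",       "namespace + sequence", "history analysis"),
    "darcs":    ("semantic patch", "human assigned",       "history analysis"),
    "Arch":     ("changeset",      "namespace + sequence", "unique ids"),
    "bzr":      ("snapshot",       "pseudo-random",        "unique ids"),
    "monotone": ("snapshot",       "content hash",         "history analysis"),
}

def systems_with_core(kind):
    # Which systems store the given kind of core commit data?
    return sorted(name for name, (core, _, _) in SYSTEMS.items() if core == kind)
```

Querying it shows, for example, that snapshots dominate as the core commit data, while the other two axes vary freely.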

Update: This has spawned the following wiki page

Conflating revisions... I think that this is unneeded, and in fact probably harmful.

It's not needed because (generally) if the amount of history is large enough to be of concern with today's systems, it's either a very old project (e.g. coreutils, with 20K revisions on its mainline, which is trivially handleable today), or a very active project where the bulk of history is, well, recent. The cases where you have lots of old history and a slow rate of growth are exactly the cases where our technology will outpace the project's accumulation of history.

It's harmful because it violates one of the basic things users expect from a VCS tool - the ability to get their code back exactly as it was.

10 Sep 2005 (updated 10 Sep 2005 at 18:50 UTC) »

There's an interesting rant on why JUnit wasn't suitable for this guy Mike's tests; the primary reason, as I understand it, is that he creates a large number of dynamically generated tests and runs out of memory.

While agreeing with him - if you aren't writing unit tests, xUnit may not be the right tool - I think he would have been better served with something along the lines of (in Python):

import unittest

class DynamicTests(unittest.TestCase):
    expectedCount = 100

    def countTestCases(self):
        # Tell the framework how many tests this one instance represents.
        return self.expectedCount

    def runTest(self):
        for i in range(self.expectedCount):
            # run the i'th dynamically generated check here
            pass

The point being that the xUnit framework imposes no requirement that each test _be_ its own instance - it uses a composite pattern, and any element can be a test, a container for tests, or both. If you need to write many related tests that (for example) store no local state or require the same temporary resources, then leverage the nature of the composite.
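As a concrete sketch of that composite (the class and the checks here are my own invention, not from the rant), a single TestCase instance can stand in for many checks, and the suite treats it uniformly with any other element:

```python
import unittest

class PrimeChecks(unittest.TestCase):
    """One instance standing in for many related checks."""
    candidates = [2, 3, 5, 7, 11]

    def countTestCases(self):
        # Report how many tests this composite element represents.
        return len(self.candidates)

    def runTest(self):
        # One pass over all the dynamically generated checks, no per-test state.
        for n in self.candidates:
            self.assertTrue(all(n % d for d in range(2, n)))

suite = unittest.TestSuite()
suite.addTest(PrimeChecks())  # any element with run()/countTestCases() fits
```

The suite neither knows nor cares that one of its members represents five checks rather than one - that's the composite doing its job.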

Had fun today configuring a sid chroot in hoary:

1) cdebootstrap doesn't set up signed package verification correctly, so it fails.

2) in the failed chroot, configure apt with a Debian sid mirror, and install cdebootstrap

3) now cdebootstrap sid sid

4) move the sid-bootstrapped sid chroot back out, and then remove the temporary borked one.

This will impact sarge users too - but is it really a bug?

James, when you compared configs/aliases, did you know about the 'config-manager' package? It's in Debian, and installs a command 'cm' which reads bazaar-format configs, either from a file or piped in on standard input.

Config-manager supports configuration creation and introspection for tla, CVS and Subversion. I'm part way through a rewrite of config-manager in Python. The Python version supports checkout and update for baz and bzr.

fuzzyBSc - I really suggest not adding arbitrary methods to HTTP. It's standard practice for firewalls to take a deny-unknown approach to handling unknown methods.

I think a more useful way of looking at HTTP methods and urls is from a message passing point of view:

GET = pass a query message to the object at the URL with parameters as per the URL.

POST = pass a mutating message to the object at the URL with parameters as per the URL/uploaded-message.

This gives you as much flexibility as changing the HTTP method might, but makes the semantics of your request clear to all parties, which will aid debugging and reduce negative interactions with intercepting or mandatory proxies.
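A minimal sketch of that message-passing reading (all the names here - `Counter`, `query`, `mutate`, `dispatch` - are hypothetical, invented for illustration): the URL names an object, and the method picks which kind of message to pass it.

```python
# Toy dispatcher: the URL selects an object; GET passes it a read-only
# query message, POST a mutating one. Hypothetical names throughout.

class Counter:
    def __init__(self):
        self.value = 0

    def query(self, **params):
        # GET: answer a question, change nothing.
        return {"value": self.value}

    def mutate(self, **params):
        # POST: apply a state change described by the parameters.
        self.value += int(params.get("by", 1))
        return {"value": self.value}

objects = {"/counter": Counter()}

def dispatch(method, url, **params):
    obj = objects[url]
    if method == "GET":
        return obj.query(**params)
    if method == "POST":
        return obj.mutate(**params)
    raise ValueError("unknown method: %s" % method)
```

New behaviour goes into new objects and parameters, not new HTTP methods, so every proxy and firewall on the path still understands the traffic.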

Stewart - Dell hardware may suck ... but what about Samsung hardware with Dell service? That's what my Dell X1 is, and I love it.

28 Aug 2005 (updated 28 Aug 2005 at 22:37 UTC) »

I went to the NSW annual CodeCon on the weekend, which was fantastic. While there we talked about everything from making an open source project flourish through to what makes a good sample rate converter - and how to prove it.

I had been planning to give a rant^Wtalk about how folk should extend unittest in Python, versus how they are extending it, and I ended up putting my money where my mouth was. So KFish and I hammered out a minimalistic protocol for reporting test activity over a pipe, and I implemented subunit during the CodeCon. Peter Miller is planning on adding support for it to Aegis's current test-suite support.
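To give a flavour of what such a pipe protocol can look like - this is an illustrative line syntax of my own, not necessarily subunit's exact grammar - one side prints one event per line as tests run, and the other side reads them back:

```python
# Illustrative reader for a subunit-style line protocol: "test: NAME"
# announces a test, "success: NAME" / "failure: NAME" / "error: NAME"
# reports its outcome. (Hypothetical grammar; subunit's real syntax
# may differ.)

def parse_stream(lines):
    outcomes = {}
    for line in lines:
        keyword, sep, name = line.partition(": ")
        if sep and keyword in ("success", "failure", "error"):
            outcomes[name.strip()] = keyword
    return outcomes
```

Because the stream is plain lines over a pipe, the producer can be written in any language - which is exactly what makes it attractive for a tool like Aegis that drives test suites written by other people.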

Luis: it is a concern. I wasn't trying to say it's not a concern - of course it's something to think about. I'm saying that the net effect is easier returning of changes, rather than a net effect of easier-forked and never-merged code.

I completely agree that it is a social phenomenon that we're dealing with. I don't agree that the tools are the fundamental problem - way back when folk used patch to send everything around, forking and merging were common. These days it's become a hidden thing: month-long forks in a home dir while a feature is coded, and then finally committed. But I think the basic process has stayed the same: someone works on some code and collaborates via patch and diff with peers until it's ready, when someone with 'commit access' puts it in the mainline.
