Older blog entries for robertc (starting at number 136)

LCA2010 Wednesday Keynote


Another must-grab-the-video talk: Mako’s keynote. Antifeatures – principles vs pragmatism do come together. The principled side – RMS and the FSF – says it’s important to control one’s technology because it’s important to control one’s life. The pragmatic side – quality, no vendor lock-in and so on. It’s a false dichotomy: freedom imparts pragmatic benefits even though it doesn’t intrinsically impart quality or good design: 95% of projects have fewer than 5 contributors, the median number of contributors is 1, and such small collaborations are no different from closed source ones.

Definition of antifeatures: functionality built to make a product do something its user does not want it to do. Great example of phone books: spammers pay for access to the lists, and so we have to pay *not to be listed*, even though collecting and printing our numbers is the harder part in the first place. Mako makes a lovely analogy to the mafia there. Similarly, Sony has in the past charged 50 dollars not to install trialware on Windows laptops.

Cameras: Canon disabled RAW saving on some of their cameras; CHDK, an open source add-on for those cameras, restores RAW output. Panasonic are locking down their cameras to reject third-party batteries.

The TiVo is an example of how focusing on licensing can miss the big picture: a free software stack, but users are still locked into a subscription that provides an ongoing revenue stream.

Dongles! Mako claimed there wasn’t a Facebook appreciation group for dongles… there is.

GitHub: users pay for the billing model – a lot of code exists there just to figure out how many projects are in a repository, so that they can charge on that basis.

DRM is the ‘mother of all antifeatures’ – 10K people writing DRM code that no users want!

Syndicated 2010-01-19 21:02:44 from Code happens

LCA 2010 Tuesday


Gabriella Coleman’s keynote was really good; grab it from the videos once they come online.

WETA run Ubuntu on their render farm: 3700 machines, 35000 cores, 7kW per ‘cold’ rack and 22kW per ‘hot’ rack (hot racks are rendering, cold racks are storage). Wow. Another talk well worth watching if you are at all interested in the issues of running large numbers of very active machines in a small space.

And a classic thing from the Samba4 talk at the start of the afternoon: MS Active Directory domain controllers do no validation of updates from other domain controllers – classic crunchy-outside, soft-inside security. (Discovered when Samba4 borked AD while testing read/write replica mode.)

Blu-ray on Linux is getting there; however, one sad thing is that the Blu-ray standard has no requirement that vendors make players able to play unencrypted content – and there are some hints that licences may in fact require them not to play unencrypted content.

Peter Chubb’s talk on Articulate was excellent for music geeks: MIDI that sounds like music, generated from LilyPond.

Ben Balbo talked about ‘Roll your own Dropbox’. Ben works at a multimedia agency, but the staff work locally and don’t use the file server… they use instant messenger to send files around! They tried using Subversion – too hard. Dropbox looked good, but at three to seven hundred dollars a month it was too pricey given an existing 1.4TB of spare capacity.

He then considered svn + cron, but deleted directories cause havoc and something automatic was wanted… so git + cron instead. The key trick was having a work area with absolutely no metadata in it. Conflicts are dealt with by renaming to filename.conflict.DATESTAMP.HOSTNAME.origextension.

It doesn’t trigger off inotify, there’s no status bar widget, it’s single user only and so on at the moment, but it was written to meet the office’s needs, so it is sufficient. Interestingly, he hadn’t looked at e.g. iFolder.

Syndicated 2010-01-19 04:39:05 from Code happens

Announcing testrepository


For a while now I’ve been using subunit as part of my regular development workflow. I would pipe test results to a file, use subunit to report on failures from that file, and be able to inspect all the failures at my leisure without rerunning tests or copy and pasting from far back in my history.

However this is a bit ad hoc, and it’s not trivial to get good pipelines together – while it’s not hard, it’s not obvious either. And commands like tee are less readily available for Windows users.

So during my holidays I started a small project to automate this workflow. I didn’t get all that much done, due to a combination of travel and coming down with a nasty bug near the end of my holidays – which I’m now recovering from. Yay health returning + medicines. If only we had medichines :)

However, I managed to get a reasonable first release out the door this evening. Grab it from launchpad or pypi.

Testrepository has a few dependencies – all listed in INSTALL.txt. Folk on Ubuntu Lucid should be able to just apt-get them all (sudo apt-get install subunit will be enough to run testrepository). If you’re not on Lucid you can grab the debs manually, or use the subunit PPA (sudo add-apt-repository ppa:subunit), though I’ve noticed just today that the karmic subunit build there only works with Python 2.5, not the default of 2.6 – I will fix that at some point.

Using Testrepository is easy if you are developing Python code:

$ testr init
$ python -m subunit.run project.tests.test_suite | testr load
id: 0 tests: 114

This will report any failures that occur. To see them again:

$ testr last
id: 0 tests: 114

The actual subunit streams are stored in .testrepository in sequentially numbered files (for now at least), so it’s very easy to get at them (for instance, subunit-stats < .testrepository/12).

If you are not using Python, you can still use subunit easily if you are using shunit, ‘check’ or CppUnit: subunit ships with bindings for shunit and CppUnit, and check uses libsubunit with the CK_SUBUNIT output mode. TAP users can use tap2subunit to get a subunit stream from a TAP-based testsuite.
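For example, with a hypothetical TAP-emitting test script called ./tests.t, loading its results into the repository is just:

$ ./tests.t | tap2subunit | testr load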

It’s still early days, but I’m finding this much nicer than the ad hoc subunit management I was doing before.

Syndicated 2010-01-10 12:27:50 from Code happens

More evolution-reliability-speed


Evolution recently moved to a sqlite summary database rather than a custom summary db implementation. It’s great to see such reuse of code.

However, it’s not really a complete transition yet, as I had cause to find out today. I’ve blogged before about performance with the sqlite summary database. Today I was greeted with a crash-on-startup bug, which happily has a patch upstream already. Before I looked in the bug tracker, though, I did some house cleaning.

I started with a 900MB folders.db. Doing a vacuum on the db dropped that to 300MB. Vacuuming doesn’t appear to be something that Evolution does itself; Firefox too appears to lack an automatic vacuum. sqlite is an embedded database, and it’s wonderful at being that, but it’s not as install-and-forget as (say) PostgreSQL, which does autovacuum. So an additional tip is to vacuum your folders, e.g. with http://www.gnome.org/~sragavan/evolution-rebuild-summarydb, a helper script that will run vacuum on all your account summary db’s. Note that it *does not rebuild*, it solely vacuums, and as such does not add or delete (modulo bugs in sqlite) data in the summary database.
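To vacuum a single summary database by hand (with Evolution shut down first), the sqlite3 shell does the same job:

sqlite3 folders.db
vacuum;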

After the housecleaning, I checked that the sqlite database was in good condition:

sqlite3 folders.db
pragma integrity_check;

This returned a number of indexing issues, so I reindexed:

reindex;

Evolution now starts up – and crashes – in a fraction of a second: a big improvement. Finally, I started looking at the Evolution code, as I was now fairly confident it was a bug – it was in a sqlite callback function, and the column the function extracts data from (flags) is missing a NOT NULL constraint, but the code doesn't check for NULL – boom. From there to finding the bug report and existing patch was trivial.

And this is where my comment on reliability turns up: Evolution doesn't anticipate NULL flag values in its code, so why does it insert them into the database at all? I suspect it's due to some aspect of the incremental conversion to using sqlite summaries. More concerning for me is the possibility that there are many other such crash bugs lurking in the new sqlite-based code.

There are possibly some clues as to the excessive table scans done by Evolution in the use of a flags bitset rather than separate columns, but I haven't looked closely enough to really say.

Syndicated 2010-01-03 21:52:48 from Code happens

bzr selftest uses testtools


the emperor has new clothes

bzr has just changed the base class for its test suite from ‘unittest.TestCase’ to ‘testtools.TestCase’. This change has cleaned up a bunch of test logic, deleted a significant amount of code (much of which was redundant with Python unittest) and added some useful and important features.

bzr has only been able to make this change because testtools expanded its mission from a simple ‘aggregation of proven unittest extensions’ to one that also includes new extensions that *make unittest more extensible*. My deepest thanks to Jonathan for permitting me to use testtools as the vehicle for these extension-enabling extensions (and for his patience in reviewing said changes!).

The change was pretty easy: the bulk of the changes were in bzrlib.tests and bzrlib.tests.test_selftest. I chose to clean up an ugly API at the same time, which added a little scattershot change across a number of tests. And there are more changes that can be done to take better advantage of testtools – the deleting and cleaning up of code isn’t complete. Even so, it’s a pretty clear win:

18 files changed, 228 insertions(+), 496 deletions(-)

What went?

bzr had its own implementation of TestCase.run. This function is the main workhorse of Python’s unittest module, and yet sadly it has to be replaced to change the exceptions that can be raised (to signal new outcomes) or to improve on test cleanup. Testtools provides an API for registering new exception types and handlers for them. Like Python 2.7, testtools also provides the TestCase.addCleanup API, and these two things combined mean that bzr no longer needs to reimplement the run method.
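For illustration, a minimal sketch of the addCleanup style (the test itself is invented, not taken from bzr):

import os
import tempfile

import testtools


class TestScratchArea(testtools.TestCase):

    def test_uses_temporary_directory(self):
        path = tempfile.mkdtemp()
        # Cleanups run after the test, most recently added first, whether
        # the test passes or fails - no custom run() or tearDown() needed.
        self.addCleanup(os.rmdir, path)
        self.assertTrue(os.path.isdir(path))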

For expected failures, bzr uses a helper method, TestCase.expectFailure, which performs an assertion and converts the test into an expected failure when that assertion fails. This was another feature testtools already provides, and so that code got deleted too.
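Its shape, with a made-up frobnicate() standing in for real code under test:

import testtools


def frobnicate(text):
    # Stand-in for the real code under test; deliberately lossy.
    return text.encode('ascii', 'replace').decode('ascii')


class TestFrobnicate(testtools.TestCase):

    def test_unicode_round_trip(self):
        # If the assertion fails, the test is recorded as an expected
        # failure; if it unexpectedly passes, that is reported instead.
        self.expectFailure("frobnicate() mangles non-ASCII text",
            self.assertEqual, u'caf\xe9', frobnicate(u'caf\xe9'))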

All the custom code for skipping and expected failures got deleted, and the other outcomes bzr uses became extensions (as per the run discussion above).

bzr test cases generate a log (because bzr generates a log), and previously the TestResult in bzrlib inspected each test that had been executed to extract that log. This was made simpler by using the details API that testtools provides (see testtools.TestCase.addDetail), which permits tests to attach arbitrary data in a semi-structured fashion. This is supported by subunit, and a long-standing bug with bzr selftest --parallel was fixed as a result – logs from tests run in other processes are now carried across the process barrier intact and are presented cleanly.

Some other minor cleanups were in the unittest compatibility code, where bzr would degrade gracefully when run with plain unittest runners; testtools provides such logic comprehensively, so all that got deleted too.

What’s new?

I think the most significant new facility testtools offers bzrlib is assertThat. This assertion is inspired by the very nice assertThat in JUnit (which has changed substantially since Python’s unittest was written based on it). This assertion separates the two concerns of ‘raise an exception’ and ‘decide whether an exception should be raised’. The separation allows better reuse of custom checking code, because it permits composition in a cleaner way than extra assertion methods do. Testtools does not include many matchers as yet, but matchers are easy to write, and if one were to write a small adapter to the hamcrest library there are a bunch of ready-made matchers there (though they have a very Java feel – such as is not meaning is – which is why testtools did not use that library).
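A minimal sketch using one of the matchers testtools does ship (Equals); describe_branch() here is a made-up stand-in for real code under test:

import testtools
from testtools.matchers import Equals


def describe_branch():
    # Stand-in for the real code under test.
    return 'lp:testtools'


class TestAssertThat(testtools.TestCase):

    def test_branch_description(self):
        # Equals decides whether there is a mismatch; assertThat raises a
        # test failure (with the mismatch description) only if there is one.
        self.assertThat(describe_branch(), Equals('lp:testtools'))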

Secondly, the addDetail API referenced above, in combination with testtools.TestCase.addOnException, will permit capturing the entire working area when a test fails – something developers currently have to fiddle about with breakpoints to achieve. This hasn’t been done yet, but it is a straightforward patch that I hope to do in the new year.
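A rough sketch of how that could look (this is not the actual bzr patch – the detail name and the directory-listing capture are invented for illustration):

import os

import testtools
from testtools import content


class TestWithStateCapture(testtools.TestCase):

    def setUp(self):
        super(TestWithStateCapture, self).setUp()
        self.addOnException(self._attach_working_area)

    def _attach_working_area(self, exc_info):
        # Runs only when the test raises: attach a listing of the current
        # directory so it shows up alongside the test's other details.
        listing = '\n'.join(sorted(os.listdir('.')))
        self.addDetail('working-area', content.Content(
            content.ContentType('text', 'plain', {'charset': 'utf8'}),
            lambda: [listing.encode('utf8')]))

    def test_something_in_the_working_area(self):
        self.assertTrue(os.path.exists('.'))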

Lastly, testtools offers testtools.TestCase.getUniqueInteger and testtools.TestCase.getUniqueString, which are not as yet used in bzr tests, but we may start using them soon.
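In use they look like this (a trivial, invented test):

import testtools


class TestUniqueValues(testtools.TestCase):

    def test_distinct_test_data(self):
        # Handy when only distinctness matters, not the actual value:
        # each call returns something not returned before in this test.
        first = self.getUniqueString()
        second = self.getUniqueString()
        self.assertNotEqual(first, second)
        self.assertNotEqual(self.getUniqueInteger(), self.getUniqueInteger())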

Beyond that, the other features of testtools are already present in bzrlib, and we simply need to find and delete more duplicated code.

Syndicated 2009-12-23 08:41:16 from Code happens

Various releases


Recently I’ve been working on the Python unittest API in my spare time, with a long term goal of making it possible to safely and sensibly glue many different plugins together into the core.

Two important components of that goal are being able to extend the data included in a test result, and being able to change how a test is run (such as adding new exceptions that should be treated as specific outcomes – python unittest uses exceptions to signal outcomes).

In testtools 0.9.2 we have an answer to both those issues. I’m really happy with the API for including data in outcomes, ‘TestCase.addDetail’. The API for extending outcomes works, but only addresses part of that issue for now.

Subunit 0.0.4, which is available for older Ubuntu releases in the Subunit releases PPA now, and is mostly built on Debian (so it will propagate through to Lucid in due course), has support for the addDetail API. Subunit now depends on testtools, reducing the non-protocol-related code and generally making things simpler.

Using those two together, bzr’s parallelised test suite has been improved as well, allowing it to include the log file for tests run in separate processes (previously it was silently discarded). The branch to do this will be merged soon; it’s just waiting on some sysadmin love to get these new versions into its merge-test environment. This change also provides complete capture of the log when users want to supply a subunit log containing failed tests. The Python code to do this is pretty simple:

def setUp(self):
    super(TestCase, self).setUp()
    self.addDetail("log", content.Content(content.ContentType("text", "plain",
        {"charset": "utf8"}), lambda:[self._get_log(keep_log_file=True)]))

I’ve made a couple of point releases to python-junitxml recently, fixing some minor bugs. I need to figure out how to add the extra data that addDetail permits to the XML output. I suspect it’s a strict superset, and so I’ll have to filter stuff down. If anyone knows of similar extensions done to JUnit’s XML format before, please leave a comment :)

Syndicated 2009-12-20 12:33:08 from Code happens

Debianising with bzr-builddeb


bzr-builddeb is very nice, but it can be very tricky to get started with. I recently did a fresh Debianisation of a project whose upstream is in bzr, and I thought I’d record the recipe that makes it work (at least until the various bugs that make this hard are fixed).

Assuming that the upstream uses bzr, it goes like this:

  1. Start with a branch that is close to the code you want to Debianise. E.g. if the release was made from trunk, 3 commits back: bzr branch trunk -r -3 debian
  2. Debianise as normal: put the tarball with the right name in the parent dir, add a debian directory and fiddle until you build a package you’re happy with. Don’t commit while doing this.
  3. Build a source package – debuild -S, or bzr builddeb -S.
  4. Revert your changes – bzr revert.
  5. Import the dsc – bzr import-dsc ../*.dsc
  6. Now, you may find that some dot files, such as .bzrignore, have been discarded inappropriately (there is a bug open on this). If that happened, keep going. Otherwise you’re done: you can now use merge-upstream on future upstream releases, and debcommit etc.
  7. bzr uncommit
  8. bzr revert .bzrignore (and any other files that you want to get back)
  9. debcommit
  10. All done – as in point 6, you can now use merge-upstream and debcommit for future work. The whole sequence, put together, is sketched below.
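Put together, and assuming an upstream branch called trunk (directory names are just examples), the whole dance looks something like:

$ bzr branch trunk -r -3 debian
$ cd debian
  (add the debian/ directory; fiddle until the package builds cleanly)
$ bzr builddeb -S
$ bzr revert
$ bzr import-dsc ../*.dsc
$ bzr uncommit                 # only if dot files such as .bzrignore were lost
$ bzr revert .bzrignore
$ debcommit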

Hope-this-helps

Syndicated 2009-12-19 04:34:16 from Code happens

Why upstreams should do distribution packaging


Software comes in many shapes and styles. One of the problems the author of software faces is distributing it to their users.

As distributors we should not discourage upstreams that wish to generate binary packages themselves; rather we should cooperate with them, and ideally they will end up maintaining their stable release packages in our distributions. Currently the Debian and Ubuntu communities have a tendency to actively discourage this, by objecting when an upstream software author includes a debian/ directory in their shipped code. I don’t know whether Red Hat or SUSE have similar concerns, but for the dpkg toolchain the presence of an upstream debian directory can cause toolchain issues.

In this blog post, I hope to make a case that we should consider the toolchain issues bugs rather than just-the-way-it-is, or even features.

To start at the beginning, consider the difficulty of installing software: the harder a piece of software is to install, the more important having it has to be before a user will jump through the hoops to install it.

Thus projects which care about users will make it easy to install – and there is a spectrum of ease. At one end,

checkout from version control, install various build dependencies like autoconf, gcc and so on

through to

download and run this installer

Now, where some software authors get lucky is when someone else makes it easy to install their software by making binary packages, so that users can simply do

apt-get install product

Now, some platforms like Mac OS X and Microsoft Windows really do need an installer, but in the Unix world we generally have packaging systems that can track interdependencies between libraries, download needed dependencies automatically, perform uninstalls and so on. Binary packaging in a Linux distribution has numerous benefits, including better management of security updates (because a binary package can sensibly use shared libraries that are not part of the LSB).

So given the above, it’s no surprise to me to see the following sort of discussion on #ubuntu-motu:

  1. upstream> Hi, I want to package product.
  2. developer> Hi, you should start by reading the packaging guide
  3. (upstream is understandably daunted – the packaging guide is a substantial amount of information, but only a small fraction is needed to package any one product.)

or (less usefully)

  1. upstream> Hi, I want to package product.
  2. developer> If you want to contribute, you should start with existing bugs
  3. upstream> But I want to package product.

Another conversation, which I think is very closely related is

  1. developer> Argh, product has a debian dir, why do they do this to me?!

The reasons for this should be pretty obvious at this point:

  • Folk want to make their product easy to install and are not themselves DD’s, DM’s or MOTU’s.
  • So they package it privately – such as in a PPA, or their own archive.
  • When they package it, they naturally put the packaging rules in their source tree.

Now, why should we encourage this, rather than ask the upstream to delete their debian directory?

Because it lets us, distributors, share the packaging effort with the upstream.

Upstreams that are making packages will likely be doing this for betas, or even daily builds. As such they will find issues related to new binaries, libraries and so on well in advance of their actual release. And if we are building on their efforts, rather than discarding them, we can spend less time repeating what they did and more time packaging other things.

We can also encourage the upstream to become a maintainer in the distro and do their own uploads: many upstreams will come to this on their own, but by working with them as they take their early steps we can make this more likely and an easier path.

Syndicated 2009-12-18 11:23:56 from Code happens

Government data – please do it right


The Australian Government 2.0 Taskforce has an initiative to make data available for public remixing and use: after all, it’s public property anyway, right? They have even run a mashup competition.

Notably missing from the excellent collection of data that has been opened is the NSW Transport and Infrastructure dataset for public transport in NSW. There is a similar dataset for the Northern Territory in the mashup transport section.

The NT dataset is under the fantastic CC-BY licence. You can write an iPhone app with this, a journey planner that you can cart with you while disconnected, a ‘find the closest bus I can walk to’ tool, or – well, let the imagination run wild.

The NSW dataset is under a heavily restrictive licence. It’s so restrictive I’m not sure it’s feasible to write an open source tool using its data.

The meta-issue is that the NSW T&I department wants control over the applications built with this data. This has a tremendous chilling effect on potential uses of the data: the department will have to approve, with a long lead time, every use of the data, and gets to tell the ‘application developer’ what changes to make to their application.

I strongly doubt that a simple remixing of the data (e.g. with weather reports, to prefer buses on very wet days) would be permitted, as it would allow other users to just read the remix and get the original data /without entering into a licence agreement/.

I’m sure there is some unstated risk of openness, or benefit of control, that is shaping this problematic approach. Whatever the cause, it’s not open at all.

Given that the overall approach is fundamentally flawed, a blow-by-blow analysis of the custom licence isn’t particularly useful; however, I thought I would pick out some highlights to save folk the trouble ;)

  1. The dataset is behind a username/password wall [that you cannot share with others].
  2. Licensees may not be private – everyone must know you’re using the data.
  3. You must link to the 131500.com.au website.
  4. You may not charge users for an app that has to be redeveloped if the dataset changes shape.
  5. Any application written to use the dataset must be given to the department 30 days before release to the public.
  6. The department gets to ‘suggest changes’ to any announcement related to the developer’s app, the licence agreement or the dataset.
  7. The dataset is embargoed – you cannot share it with others.
  8. The use of the dataset has to be logged and reported.
  9. There is a restraint of use in there as well – related to Inappropriate and Offensive Material. It wouldn’t affect me, but sheesh, given all the other restraints it’s hardly needed.

There are more gems in the details, but in short:

The department will control what, where, when and how: what data is accessed, the application’s functionality and appearance, and how it is used. Hell, the 30-day requirement alone makes for slow delivery of whatever someone wants to build.

I really hope this can be improved on.

Syndicated 2009-10-19 05:26:58 from Code happens

Subunit-0.0.3


Subunit 0.0.3 should be a great little release. It’s not ready yet, but some key things have been done.

Firstly, it’s been relicensed under BSD/Apache version 2. This makes using Subunit with other test frameworks much easier, as those frameworks tend to be under permissive licences such as the LGPL, BSD or Apache. Thanks go out to the contributors to Subunit who made this process very painless.

Secondly, the C client code is getting a few small touch ups, probably not enough to reach complete feature parity with the Python reporter.

Thirdly, the CppUnit patch that Subunit has carried for ages has been turned into a small library built by Subunit, so you’ll be able to just install that into an existing CppUnit environment without rebuilding CppUnit.

Lastly, but most importantly, it will have what is hopefully the last major protocol change (still backwards compatible!) needed for 1.0 – the ability to attach fairly arbitrary debug data to an outcome (things like ‘stdout’, ‘stderr’, ‘a log file X’ and so forth). This will be used via an experimental object protocol – the one I proposed on the Testing In Python list.

I should get the protocol changes done on the flight to Montreal tomorrow, which would be a great way for me to get my mind fully focused on testing for the sprint next week.

Syndicated 2009-10-03 13:53:33 from Code happens
