Older blog entries for robertc (starting at number 127)

Subunit-0.0.3


Subunit 0.0.3 should be a great little release. It’s not ready yet, but some key things have been done.

Firstly, it’s been relicensed under BSD/Apache version 2. This makes using Subunit with other test frameworks much easier, as those frameworks tend to use permissive licenses such as the LGPL, BSD or Apache. Thanks go out to the contributors to Subunit who made this process very painless.

Secondly, the C client code is getting a few small touch ups, probably not enough to reach complete feature parity with the Python reporter.

Thirdly, the CPPUnit patch that Subunit has carried for ages has been turned into a small library built by Subunit, so you’ll be able to just install that into an existing CPPUnit environment without rebuilding CPPUnit.

Lastly, but most importantly, it will hopefully have the last major protocol change (still backwards compatible!) needed for 1.0 – the ability to attach fairly arbitrary debug data to an outcome (things like ‘stdout’, ‘stderr’, ‘a log file X’ and so forth). This will be used via an experimental object protocol – the one I proposed on the Testing In Python list.
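To make that concrete, here is a rough sketch of the shape I have in mind – the names here (Content, ContentType, the details dict) are illustrative, not the finished protocol:

    # Illustrative sketch only: attach named, typed debug data to an outcome.
    # A content object carries a MIME-style type plus a way to get its bytes,
    # so reporters can handle data they don't understand without tying the
    # object protocol to any particular wire format.
    class ContentType:
        def __init__(self, primary, sub, parameters=None):
            self.primary = primary            # e.g. 'text'
            self.sub = sub                    # e.g. 'plain'
            self.parameters = parameters or {}

    class Content:
        def __init__(self, content_type, get_bytes):
            self.content_type = content_type
            self.iter_bytes = get_bytes       # callable returning an iterable of bytes

    def text_content(text):
        return Content(ContentType('text', 'plain', {'charset': 'utf8'}),
                       lambda: [text.encode('utf8')])

    # An outcome would then carry a dict of named content objects, e.g.:
    #   result.addError(test, details={'stdout': text_content(captured_stdout),
    #                                  'log': text_content(log_text)})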

I should get the protocol changes done on the flight to Montreal tomorrow, which would be a great way for me to get my mind fully focused on testing for the sprint next week.

Syndicated 2009-10-03 13:53:33 from Code happens

Python unittest API : Time to fix it


So, for ages now I’ve been saying that unittest is, at its core, pretty sound. I incited a talk to this effect.

I have a vision; I dream of a python testing library that:

  1. Is in the python core
  2. Is simple
  3. Is extensible
  4. Has tests take care of testing
  5. Has results take care of reporting
  6. Aids communication from test to test reader

Hopefully those are pretty modest and agreeable things to want.

However, we don’t have this: nose is lovely but not in the core [and has a moderately complex API]. py.test is also not in the core, and has previously tripped my too-much-magic alerts; I must admit to not having checked whether this is fixed yet. unittest itself is in the core, but it has some cruft we should clean up and, more importantly, is not extensible enough, which leads to extensions such as the zope testrunner having to muddy the waters between testing and reporting.

The point “Aids communication from test to test reader” is worth expanding on: automated testing is something that doesn’t need observation…until the unexpected happens. At that point some poor schmuck such as you or I ends up trying to guess what went wrong. The more data that we gather and communicate about the event, the greater the chance it can be corrected without needing a repeat run under a debugger, or worse, single stepping through the code.

There is a problem with the ‘assertFoo’ methods in unittest, something I’m not going to cram into this blog post. I will say that if you find the tendency of such methods to crawl to the base class frustrating, you should look at hamcrest – it and similar things have been very successful in the Java unit testing world; we can learn from them.
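For a flavour of what matchers buy you – this is a toy sketch of the pattern, not PyHamcrest’s actual API – the assertion logic lives in small composable objects rather than in an ever-growing set of assertFoo methods on a base class:

    # Toy sketch of the matcher pattern (illustrative; not PyHamcrest's API).
    class Equals:
        def __init__(self, expected):
            self.expected = expected
        def match(self, actual):
            if actual != self.expected:
                return 'expected %r, got %r' % (self.expected, actual)
            return None  # None means 'matched'

    def assert_that(actual, matcher):
        mismatch = matcher.match(actual)
        if mismatch is not None:
            raise AssertionError(mismatch)

    # Usage: assert_that(2 + 2, Equals(4))
    # Anyone can ship new matchers without touching the TestCase hierarchy.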

Going back to my vision, we need to make unittest more powerfully extensible to allow projects like nose to do all the cool things they want to while still being unittest compatible. I don’t mean that nose can’t run unittest tests; I mean that unittest can’t run nose tests: nose has had to expand the contract, not simply add implementations that do more.

To that end I have a number of bugs which I need to file. Solving them piecemeal will create a fractured API – particularly if this is done over more than one release. So I am planning on prototyping in other projects, discussing like mad on the testing-in-python list, and when it all starts to come together writing up a PEP.

The bugs I have are:

  1. streams nicely: countTestCases must die/be made optional. This function is inherently incompatible with generative tests or anything beyond the simplest lightweight environments.
  2. no way to wrap code around a single test. This would permit profiling, debugging, tracing, and I’m sure other things, more cleanly. (At the moment, one must ‘turn on’ the profiler in startTest and turn it off in stopTest; this is much more awkward than simply being in the call stack – see the sketch after this list.) Some care will be needed here, particularly for generative tests.
  3. code that isn’t part of the implementation in the core needs to be able to work with the reporting code; allowing an optionally wider API permits extensions to be debuggable. This needs thought: do we allow direct access to TestResults? Do we come up with some added level of indirection and ‘events’? I don’t know.
  4. More data than just the backtrace needs to be included when an outcome is reported. I’ve started a discussion on the testing-in-python list about this. I’m proposing that we use a dict of named content objects, and use the HTTP content-type abstraction to make the content objects introspectable and reliably handleable without tying the unittest object protocol to any given wire format – loose coupling is good!
  5. The way we signal outcomes between TestCase and TestResult – the addFailure etc. methods – is concerning: there are many grades of outcome that users of the framework may usefully wish to represent; in fact there are more than we probably want to put in the core. Finding a way to decouple the intent of a particular outcome from how it’s signalled would allow users more control while still being able to use the core framework. One particular issue in this area is that it’s possible with the current API to have a single test object succeed multiple times, or fail (addFailure) then succeed (addSuccess). This causes no end of confusion, as test counts can mismatch failure counts, and so on.
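
To illustrate the second bug, here is roughly what I mean by wrapping code around a single test – a sketch only (the run_test hook is made up, not current unittest API): the wrapper is simply in the call stack for that one test, rather than being switched on in startTest and off in stopTest.

    # Sketch only: 'run_test' is a hypothetical extension point, not unittest API.
    import cProfile

    def profiled(run_test):
        def wrapper(test, result):
            profiler = cProfile.Profile()
            profiler.enable()
            try:
                run_test(test, result)
            finally:
                profiler.disable()
                profiler.dump_stats('%s.prof' % test.id())
        return wrapper

    # A runner that exposed such a hook could be handed a stack of wrappers
    # (profiling, tracing, pdb-on-failure) and compose them around each test.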

I’ve got some ideas about these bugs, but I’m approaching a kiloword already, and I hope this post has enough to provoke some serious thought about how we can fix these 5 bugs, compatibly, and end up with a significantly better unittest module. We’ll have been successful if projects like Trial, nose and the zope testrunner are able to remove all their code that duplicates standard library functionality or otherwise works around these bugs, and can instead focus on adding the specific test support needed by their environments (in the Trial and zope cases), or on UI and plug-n-play (for nose).

Syndicated 2009-09-22 14:48:18 from Code happens

Packaging backlog


Got some of my packaging backlog sorted out:

  • bicyclerepairman updated for the vim policy (which means it works again!)
  • python-testtools (a simple migration of the package to Debian)
  • subunit 0.0.2 released upstream and packaged for Debian.
  • testresources 0.2 -> Debian.

And a small memo-to-self: on all new machines, echo "filetype plugin on" >> ~/.vimrc

Syndicated 2009-09-20 07:23:21 from Code happens

Back from hiatus


Well, the new blog seems to be up and running – and gathering modest numbers of comments already. Woo.

I’ve a bunch of mail about test suite performance to gather and refine into a follow up post, but that can wait a day or two.

In bzr we suffer from a long test suite, which we let grow while we had some other very pressing performance concerns. 2.0 fixes these concerns, and we’re finding the odd moment to address our development environment a little now.

One of the things I want to do is to radically reduce the cost of testing inside bzr; code coverage is a great way to get a broad picture of what is tested. Rather than reinvent the wheel (and I’ve written one to throw away, so far) – are there tools out there that can:

  • build a per-test coverage map
  • do it quickly
  • don’t include setUp/tearDown/cleanUp code paths in the map
  • report on the difference between two such maps (at the suite level)

The third point is possibly contentious, so I should expand on it. While code that is executed by code within the test’s run() method is – from the outside – all part-of-the-test, it’s not (by definition) the focus of the test. And I find focused tests substantially easier to analyse failures in, because they tend to check preconditions, poke at object state, etc.

As I want this coverage map to help preserve coverage as we refactor the test suite, I don’t want to include accidental coverage in the map.
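I have a rough idea of what such a tool could look like. A minimal sketch using sys.settrace (ignoring the setUp/tearDown exclusion for brevity – a real version would trace only the test method itself, and would need to be much faster):

    # Rough sketch, not a finished tool: build a per-test coverage map with
    # sys.settrace, then compare two maps at the suite level.
    import sys
    import unittest

    class CoverageMapResult(unittest.TestResult):
        def __init__(self):
            super().__init__()
            self.coverage_map = {}   # test id -> set of (filename, lineno)
            self._current = None

        def _trace(self, frame, event, arg):
            if event == 'line' and self._current is not None:
                self._current.add((frame.f_code.co_filename, frame.f_lineno))
            return self._trace

        def startTest(self, test):
            super().startTest(test)
            self._current = self.coverage_map.setdefault(test.id(), set())
            sys.settrace(self._trace)

        def stopTest(self, test):
            sys.settrace(None)
            self._current = None
            super().stopTest(test)

    def suite_lines(coverage_map):
        # Union of all per-test lines: the suite-level coverage.
        lines = set()
        for per_test in coverage_map.values():
            lines |= per_test
        return lines

    def coverage_lost(before, after):
        # Lines the old suite covered that the refactored suite no longer does.
        return suite_lines(before) - suite_lines(after)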

Syndicated 2009-09-15 23:28:26 from Code happens

New blog location


My blog has moved: http://rbtcollins.wordpress.com/. If you’re syndicating me, please update to this location; if you don’t, that’s fine – advogato will be syndicating the blog indefinitely, but doesn’t support comments. Mega thanks to Jeff for doing an export from advogato for me :)

Syndicated 2009-09-15 11:54:00 from Code happens

31 Aug 2009


Hi Rich! Re hour+long unit tests

I agree that you need a comprehensive test suite, and that it should test all the dark and hidden corners of your code base.

But time is not free! A long test suite inhibits:

  • cycle time – the fastest you can release a hot fix to a customer
  • developer productivity – you can’t forget about a patch until it’s passed the regression test suite
  • community involvement – if it takes an hour to run the test suite, an opportunistic developer that wanted to tweak something in your code will have walked away long ago

Note that these points are orthogonal to whether a developer’s edit-test cycle runs some or all tests, or whether you use a CI tool, or a test-commit tool, or some other workflow.

All that said though, I’m extremely interested in *why* any given test suite takes hours: does it need to? What is it doing? Can you decrease the time by 90% and the coverage by only 2%?

I got another response back, which talks about keeping the working set of tests at about 5 minutes long and splitting the rest off (via declared metadata on each test) into ‘run after commit or during CI’. This has merits for reducing the burden on a developer in their test-commit cycle, but as I claim above, I believe there is still an overhead from those other tests that are pending execution at some later time.

From a LEAN perspective, the cycle time is very important. Another important thing is handoffs. Each time we hand over something (e.g. a code change that I *think* works because it passed my local tests), there is a cost. Handing over to a machine to do CI is just as expensive as handing to a colleague. Add that contributors sending in patches from the internet may not hang around to find out that their patch *fails* in your CI build, and you can see why I think CI tools are an adjunct to keeping a clean trunk, rather than a key tool. The key tool is to not commit regressions :)

Oh, and I certainly accept that test suites should be comprehensive… I just don’t accept that more time == more coverage, or that there isn’t a trade-off between comprehensiveness and timeliness.

Syndicated 2009-08-30 12:28:04 from Code happens

30 Aug 2009


Made some time to hack… the results:

config-manager 0.4 released, re-uploaded to Debian (it was removed due to some confusion a while back). This most notably drops the hard dependency on pybaz and adds specific-revision support for bzr.

subunit snapshot packaging sorted out to work better with subunit from Ubuntu/Debian. This latest snapshot has nested progress and subunit2gtk included.

PQM got a bit of a cleanup:

  • The status region shown during merges is ~ twice as tall now.
  • If the precommit_hook outputs subunit, it will be picked up automatically and shown in the status region.
  • All deprecation warnings in python2.6 are cleaned up.
  • Pending bugfixes were merged from Tim Cole and Daniel Watkins – thanks guys!

Syndicated 2009-08-29 07:42:09 from Code happens

27 Aug 2009


Does your test suite take too long (e.g. 5 minutes)? Or did it, and you solved it? Or it doesn’t, but it’s getting worse?

Tell me more, I’d like to know :-)

Syndicated 2009-08-26 14:47:03 from Code happens

16 Aug 2009


Hudson seemed quite nice when I was looking at how Drizzle uses it.

I proposed it to the squid project to replace a collection of cron scripts – and we’re now in the final stages of deployment: Hudson is running, doing test builds. We are now polishing the deployment, tweaking where and how often reports are made, and adding more coverage to the build farm.

I thought I’d post a few observations about the friction involved in getting it up and running. Understand that I think it’s a lovely product – these blemishes are really quite minor.

Installation of the master machine on Ubuntu – add an apt repository, apt-get update, apt-get install.

Installation of the master machine on CentOS 5.2 (now 5.3): make a user by hand, download a .war file, create an init.d script by copy-and-adjusting an example off the web. Plenty of room to improve here.

Installing slave machines: make a user by hand, add an rc.local entry to run java on slave.jar as that new user. This could be more polished.

Installing a FreeBSD 6.4 slave: manually download various java sources to my laptop, scp them up to the FreeBSD machine, build *java* overnight, then make a user, add an rc.local entry etc. _Painful_.

The next thing we noticed was that the model in Hudson doesn’t really expose platforms – but we want to test on a broad array of architectures, vendors and releases. i386-Ubuntu-intrepid building doesn’t imply that i386-Debian-lenny will build. We started putting tags on the slaves that let us say ‘this build is for amd64-CentOS-5.2’, so that if we have multiple machines for a platform we’ll have some redundancy, and so that it’s easy to get a sense of what’s failing.

This had some trouble – it’s very manual, and as it’s manually entered data, it can get out of date quite easily.

So over the weekend I set out to make a plugin, and ran into some yak shaving.

Hudson plugins use maven2 to build and deploy. So I added the maven2 plugin to my eclipse (after updating eclipse to get the shiniest bzr-eclipse), and found the first bug – issue 1580: maven2 and team plugins in eclipse 3.5 don’t play all that nicely together.

Push.

Removing bzr-eclipse temporarily allowed eclipse’s maven plugin to work, but for some reason many dependencies were not found, and various discussions found on the net suggest manually adding them to the CLASSPATH for the project – but not how to identify which ones they were.

Pop.

So, I switched to netbeans – a 200MB download, as Ubuntu only has 6.5 in the archive. netbeans has the ability to treat a maven2 project as a directly editable project. I have to say that it works beautifully.

Push.

I made a new plugin and looked around for an appropriate interface (DynamicLabellers, designed for exactly our intended use).

Sadly, in my test environment, it didn’t work – the master didn’t call into the plugin at all, and no node labels were attached.

Push.

Grab the source for hudson itself, and find the trick-for-newcomers here – do a full build outside netbeans, then in netbeans open main/war as a project, and main/core as well, not just the top-level pom.xml. To run, with main/war selected in the project list, hit the debug button. However, changes made to the main/core sources are not deployed until you build them (F11) – the debug environment looks nearly identical to a real environment.

Pop.

There is a buglet in the DynamicLabeller support in Hudson, where inconsistent code between the general slave support and the ‘master node’ (Hudson.java) causes different behaviour with dynamic labels: specifically, the master node will never get dynamic labels. So I fixed this, cleaned up the code to remove the duplication as much as possible (there are comments in the code base that different synchronisation styles are needed for some reason) and submitted it upstream.

Pop.

I’ll make the plugin for squid pretty some evening this week, and we should be able to start asking for volunteers for the squid build farm.

Yay!

Syndicated 2009-08-16 01:18:13 from Code happens

06 Aug 2009


0800 Friday morning, machine is slow… why?

Random disk I/O: evolution doing a table scan again, and popularity-contest fighting with it by reading in many, many inodes.

     9729 be/6 nobody    183.44 K/s    0.00 B/s  0.00 %  0.00 % perl -w /usr/sbin/popularity-contest

Time for pop-con to go – while I like giving statistics about use, this isn’t the first time it’s chosen to get in my way.

Syndicated 2009-08-06 02:03:57 from Code happens
