23 May 2003 graydon » (Master)

testing tools

movement asks if free software testing tools are "wild and wooly". I would say some, but not all. I think the unit testing tools are as good as they need to be; unit testing is more about practice than technology anyway. there's a *Unit library for just about everything now, which is nice.

high-level supervisory and tracking stuff -- say aegis or bugzilla -- is pretty good. maybe if I'm lucky someday I'll be able to say that about monotone too. I think we all know that test cases are the core of the "science" that makes up computer science: the muck you have to slog through to make your software strong against reality.

the mid-level supervisory stuff, however, is weak. dejagnu is Not My Idea of Good (sorry rob), and most people seem content to cobble together shell or python scripts once they're fed up with expect. testing tools that do automatic breadth or boundary calculations, that estimate and control test-run time, or that do much in the way of environment control and capture seem disappointingly rare.
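just to make the sort of mid-level harness I have in mind concrete, here's a rough python sketch -- the names and structure are mine, not any existing tool's: run each test in a scrubbed environment and a fresh scratch directory, capture its output, and keep an eye on how long it takes.

    import subprocess
    import tempfile
    import time

    def run_test(script, timeout=60):
        """run one test script under controlled conditions and capture everything."""
        # scrub the environment down to a known baseline and give the test a
        # private home and working directory, so runs are reproducible
        env = {"PATH": "/usr/bin:/bin",
               "HOME": tempfile.mkdtemp(prefix="test-home-")}
        workdir = tempfile.mkdtemp(prefix="test-work-")
        start = time.time()
        try:
            proc = subprocess.run(["sh", script], cwd=workdir, env=env,
                                  capture_output=True, text=True,
                                  timeout=timeout)  # crude control of test-run time
            passed, out, err = proc.returncode == 0, proc.stdout, proc.stderr
        except subprocess.TimeoutExpired:
            passed, out, err = False, "", "timed out after %ds" % timeout
        return {"test": script, "passed": passed,
                "seconds": time.time() - start,
                "stdout": out, "stderr": err}

something this small is easy to write, which is rather the point; what's missing is the supervisory layer above it that budgets time across a whole suite and reports what environment the tests actually ran under.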

one thing which bothers me about test infrastructure is that the posix 1003.3 output format doesn't require (or doesn't often seem to be interpreted as requiring) unique identifiers for each assertion tested. as a result, people wind up measuring not the set of tests passed or failed, but the number. this is wrong in an important way: if I make some fixes and go from 438 failures to 245, it is not at all clear whether I am "193 tests better off" or whether I've fixed 438 and regressed on 245. constructing a QA system on this sort of measurement will be broken by design. it's a bit frustrating. the proper regression relationship is subset, and for that tests need identity.
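to illustrate what identity buys you (the test names here are invented): once each test has a unique identifier, comparing two runs is set arithmetic rather than counting, and "no regression" is just a subset check.

    # failing tests from two runs, identified by name rather than counted
    before = {"io-error-on-full-disk", "merge-rename-conflict", "utf8-filenames"}
    after  = {"merge-rename-conflict", "symlink-handling"}

    fixed     = before - after   # failed before, pass now
    regressed = after - before   # passed before, fail now
    still_bad = before & after   # failing in both runs

    print(len(fixed), "fixed,", len(regressed), "regressed,", len(still_bad), "still failing")

    # the proper regression relationship: the new failure set must be a
    # subset of the old one, otherwise something genuinely got worse
    if after <= before:
        print("no new failures")
    else:
        print("regressed on:", sorted(after - before))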

option processing

I read in innumeracy that people generally misperceive coincidence, and that the expected frequency of coincidences (amongst all possible things that may happen to you every day) is pretty high. that may be, but it's still creepy. one day, I'm sitting on a go bus sketching out a design for a C++ option processing library. next day, it gets mentioned here. then I open up gnus and boost announces a formal review period for an option processing library. jeepers. I can feel the axons of the pan-network subcortex creeping into my head just thinking about it.
