9 Jan 2005 titus   » (Journeyer)

Testing is addictive

After my many travails with PBP/maxq/sgmllib/HTMLParser/htmllib, I finally sat down to work on my actual Web application, Cartwheel.

Cartwheel is a bioinformatics system that lets biologists upload sequences, analyze them, and export their analyses to a GUI, FamilyRelations II. Cartwheel itself is written entirely in Python, and FRII is written in C++ using FLTK -- a fine combination so far. I use a simple XML-RPC API to export the data, and most of the internal communication between Cartwheel components goes through PostgreSQL. The system as a whole has been used by a few hundred people to do bioinformatics work, and in general it's fairly robust. It's been around for several years, and I'm pretty much the only steady developer.
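For the curious, the XML-RPC side really is as simple as it sounds. A FRII-style client call looks something like the sketch below -- the URL and method name are invented for illustration, not the actual Cartwheel API:

    # Minimal XML-RPC client sketch.  The server URL and the
    # get_analysis_results method are hypothetical, not the real
    # Cartwheel API.
    import xmlrpclib

    server = xmlrpclib.ServerProxy("http://cartwheel.example.org/xmlrpc")

    # e.g. fetch the analysis results for a given sequence set:
    results = server.get_analysis_results(42)
    for row in results:
        print row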

Normally I test Cartwheel's Web interface by roaming around in it with a Web browser and paying special attention to things I've changed. Until recently, I had no automated way to test it, and so in general I've been assuming it's mostly ok if no users yell at me after I post an update. (Known as the "Microsoft test method"... ;)

The mass of little bug reports recently reached a critical point, so I started fixing them today. One of the problems had to do with some naively implemented search code in the Web interface, so I set out to pin down the problem by changing the names of a bunch of the form variables. (This also fixed the bug, which tells you something about the code...) I had to edit files all over the place and quickly lost track of what code still needed to be patched.

So, I backed out all of my changes and used maxq to record a Web session that ran through all of the places where the search functionality was used. I saved the resulting PBP scripts and broke them into setup, test, and teardown scripts. I then went through the code base, made my changes, re-ran the tests, and fixed the bugs I'd overlooked until they all passed.
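The real tests are PBP scripts recorded by maxq, not hand-written Python, but each one boils down to a check along these lines (sketched here with plain urllib; the URL and form variable name are invented):

    # Rough Python equivalent of one recorded search test.
    # The URL and the 'search_term' field are made up for illustration;
    # the actual tests are recorded PBP scripts.
    import urllib

    params = urllib.urlencode({'search_term': 'runt'})
    page = urllib.urlopen("http://cartwheel.example.org/search?" + params).read()

    assert 'runt' in page, "search page didn't return the expected hit"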

Another cool thing is that with the scripts separated into setup, test, and teardown categories, I can also test my database export and import code quite easily:

    setup-test-db
    run-all-tests

    export-db
    clear-db
    import-db

    run-all-tests

With only a few assumptions about what the setup script does to the DB (basically, that it's complete), this will tell me if my import/export scripts are catching everything.
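The whole cycle is also easy to wrap in a little Python driver if I get tired of typing the commands by hand; a rough sketch, assuming each script is on the PATH and exits nonzero on failure:

    # Tiny driver for the export/import round-trip test (a sketch, not
    # part of Cartwheel; assumes the scripts above are on the PATH).
    import os, sys

    def run(cmd):
        # os.system returns a nonzero status if the command failed
        if os.system(cmd) != 0:
            sys.exit("FAILED: %s" % cmd)

    for cmd in ('setup-test-db', 'run-all-tests',
                'export-db', 'clear-db', 'import-db',
                'run-all-tests'):
        run(cmd)

    print 'export/import round trip OK'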

Overall, generating the tests this way probably took about three times as long as just fixing the bug, but now that I have a (simple) framework set up to do it, it should go faster next time... One thing is for sure: writing PBP tests without maxq would be painful!

A while back I asked about other Web testing tools, and John J. Lee recently responded with this link: http://wwwsearch.sourceforge.net/bits/GeneralFAQ.html. The Zope link is buggered, but overall I get the impression that there simply aren't many general Web testing tools for Python.

A few other Web testing link collections are on java-source.net and c2.com. JJL also pointed me towards opensourcetesting.org.

I'm interested in finding out about others; please let me know if you find any.

--titus
