The Software
Carpentry project has re-started its competition to design a new
testing tool,
with a twist. Instead of submitting design proposals, people are being
asked to audition for the design team by participating in a two-month
discussion of requirements and options.
The Software
Carpentry project is pleased to announce the re-start of work
toward an Open Source regression testing tool.
Details can be found at:
http://www.software-carpentry.com/sc_test/
Participants in the design competition earlier this year are asked
to note that there have been some procedural changes this time around.
Instead of submitting completed design proposals, people are being asked
to participate in two months of discussion about requirements and
strategies. At the end of this "audition" period, the judges will
select a 4-person team to produce a full design.
If you would like to participate, please join the discussion list by
sending mail to sc-discuss-subscribe@software-carpentry.com (or to
sc-discuss-digest-subscribe@software-carpentry.com for the digestified
version), or contact Greg Wilson at info@software-carpentry.com, or on
+1 (416) 504 2325 ext. 229.
We look forward to hearing from you!
Greg Wilson
Software Carpentry Project Coordinator
I saw this announced on LWN, and that got me thinking "Oh yeah,
whatever became of the winners of the other categories?" So I poked
around the Software Carpentry site for a while, and found that the
winners of the categories had been announced some time ago. However, the pages
for the individual categories still haven't been updated to reflect the
second-round results, let alone have any news about how development
towards implementation is progressing for those tools, now that the
designs have been selected. As far as I can tell, the prizes haven't
been awarded yet either. My concerns are reflected in the closing
paragraphs of this message to sc-discuss, which appears to have gone unanswered
for two weeks.
I was excited about the promise of this project, but if the website
is any indicator, things just ain't happening. So it's a bit hard to
get excited for this next round...
For Mojo Nation we're using a very lightweight testing process that
works quite nicely. It uses introspection to find all the testing
functions written into the code, and calls them all, with each either
passing or failing. If everything passes, you just keep coding.
Removing the need for humans to look at test results produces a
fundamental improvement in regression testing: you actually start
running the tests.
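The process described above can be sketched in a few lines of Python.
This is not the Mojo Nation code itself, just an illustration of the
idea under the assumption that test functions are marked by a "test_"
name prefix; the function names here are made up for the example.

```python
import inspect

# Hypothetical code-under-test with its self-tests written alongside it.
def add(a, b):
    return a + b

def test_add_ints():
    assert add(2, 3) == 5

def test_add_strings():
    assert add("a", "b") == "ab"

def run_all_tests(namespace):
    """Use introspection to find every test_* function and call it.

    Each test either passes (returns normally) or fails (raises
    AssertionError); no human has to eyeball the results.
    """
    passed, failed = [], []
    for name, obj in sorted(namespace.items()):
        if name.startswith("test_") and inspect.isfunction(obj):
            try:
                obj()
                passed.append(name)
            except AssertionError:
                failed.append(name)
    return passed, failed

passed, failed = run_all_tests(globals())
print(f"{len(passed)} passed, {len(failed)} failed")
```

Because discovery is automatic, adding a test is just writing one more
function; there is no registry to update, so the tests keep getting run.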