Older blog entries for pfdietz (starting at number 11)

Bill Clementson has posted on Concurrent Programming, and how a language that can effectively support concurrency may be able to displace current widely-used languages in which that support is flawed.

A language he focused on was Erlang, but I was reminded of another language, Cilk. Cilk is a variant of C with features for spawning parallel computations (Cilk programs become equivalent serial C programs when the new Cilk keywords are removed). Its implementation has an effective and highly cool 'work stealing' scheduler for allocating large numbers of fine-grained tasks among a smaller set of worker threads.

I don't know if Cilk would be suitable for something like a web server, but it has done well on highly parallel computational tasks. A team using Cilk won the ICFP'98 Programming Contest, and Cilkchess, a chess program written in Cilk, came in 4th (out of 30 programs) in the 1999 World Computer Chess Championship.

The call for papers for the International Lisp Conference 2005 is up. Extended abstracts (1-2 pages) are due by March 15, 2005. I think I'll submit a paper on experience with the gcl test suite.

James Andrews has recently done an experiment with 'Coverage-checked Random Unit Testing' (CRUT) (see also here). He took two libraries of basic data structures (hash tables, trees, heaps, and so on) from SourceForge and wrote harnesses to make random function calls. The resulting logs were checked by an automated log-checking system driven by an algebraic description of the expected results.
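The flavor of the technique is easy to sketch in Lisp (my illustration only -- Andrews' harness drove C libraries and used a separate algebraic specification language): drive a hash table with random operations, checking each lookup against a trivial alist model that plays the role of the specification.

    (defun random-hash-test (&optional (n 1000))
      (let ((table (make-hash-table))
            (model '()))                         ; alist acting as the oracle
        (dotimes (i n t)
          (let ((key (random 20)))
            (ecase (random 3)
              (0 (let ((val (random 100)))       ; random insert
                   (setf (gethash key table) val)
                   (setf model (acons key val (remove key model :key #'car)))))
              (1 (remhash key table)             ; random delete
                 (setf model (remove key model :key #'car)))
              (2 (let ((pair (assoc key model))) ; lookup, checked against model
                   (unless (eql (gethash key table) (cdr pair))
                     (error "Mismatch on key ~A at step ~A" key i)))))))))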

He found 6 major bugs (with no workarounds) in the two libraries, and 12 additional bugs (either with workarounds or of lesser severity). The random test generators achieved better than 90% statement coverage (as measured by gcov), which is even more impressive when one realizes that most of the uncovered lines were dead or untestable.

I'm not terribly surprised by these results -- it's easier to achieve high coverage with unit tests, and there are fewer low-probability hoops for the random inputs to jump through -- but I'm troubled that free software packages labeled 'production/stable' had so many serious defects.

In a compiled dynamic language like Lisp, any information you have at compile time about the types of forms can be used to generate better code. Methods can be dispatched statically, type checks can be folded away, and, in some cases, objects of dynamic extent can be unboxed and stack- or register-allocated (this being particularly important for integer and float values).

The importance of this is illustrated by a recent change to GNU Common Lisp. Camm Maguire has recently added a type propagator to the gcl compiler. It determines types for expressions in binding forms such as (LET ...) and, if a variable being bound isn't also assigned to, adds a declaration for that variable. The immediate motivation was to allow temporary variables in Lisp macro expansions to be well-typed, but the optimization applies to all local variables.
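To illustrate with a made-up example (this shows the shape of the transformation, not Camm's actual code): if X and Y are known fixnums and TMP is never assigned to, the compiler can act as though the programmer had declared TMP's type by hand.

    ;; Before: TMP carries no type information of its own.
    (defun f (x y)
      (declare (fixnum x y))
      (let ((tmp (+ x y)))
        (* tmp tmp)))

    ;; After propagation, the effect is as if one had written:
    (defun f (x y)
      (declare (fixnum x y))
      (let ((tmp (+ x y)))
        (declare (type integer tmp))  ; type inferred from the init form
        (* tmp tmp)))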

How much did it buy? Camm reports ansi-tests is running 27% faster in gcl, with a 41% improvement in garbage collection time. Not bad!

I've been bashing on this version with the random tester, but no problems have popped up yet.

Re 'Text Files Defended'

Another entry in Brian Marick's weblog:

(Not just to pick on Lisp and Smalltalk, two fine languages: how long did it take for a regex class to make it into Java? And a regex class is one significant affordance notch below regexps built into the language.)

Brian should take a look at CL-PPCRE, Edi Weitz's regex package for Common Lisp. The great thing about Lisp, of course, is that the user can build the feature into the language, not just introduce a class -- and you get performance that's as good as or better than Perl's regular expression engine.
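A taste of the API, assuming the CL-PPCRE system has been loaded:

    (cl-ppcre:scan "\\d+" "abc 123 def")
    ;; first two values => 4 and 7, the bounds of the match

    (cl-ppcre:all-matches-as-strings "\\d+" "a 1 b 22 c 333")
    ;; => ("1" "22" "333")

    ;; CREATE-SCANNER compiles the regex into a closure, which is where
    ;; the 'build it into the language' performance comes from:
    (defparameter *digits* (cl-ppcre:create-scanner "\\d+"))
    (cl-ppcre:scan *digits* "no digits here")
    ;; => NIL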

3 Dec 2004 (updated 3 Dec 2004 at 20:00 UTC)

I came across the web log for testing guru Brian Marick, where he mentions lisp several times. Among other things, he ported CMUCL to a Gould processor in 1984.

I wonder what he'd think of the random tester?

21 Oct 2004 (updated 31 Oct 2004 at 15:11 UTC)

Dave Roberts (Finding Lisp) recently posted about the gcl ansi-tests' Common Lisp random tester and random testing in general.

I use random testing not only for testing CL compilers, but also for more focused testing of specific language features. For example, I randomly check SUBTYPEP (which caused crhodes some grief), various sequence functions, settings of control variables for printing/reading, and various numeric functions.
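For SUBTYPEP, one invariant worth checking is consistency with TYPEP. A stripped-down version of that check (the real tester generates far richer type specifiers than this):

    (defun check-subtypep (t1 t2 values)
      ;; If SUBTYPEP claims T1 is a subtype of T2, then no value of
      ;; type T1 may fail to be of type T2.
      (multiple-value-bind (subtype-p valid-p) (subtypep t1 t2)
        (when (and subtype-p valid-p)
          (dolist (v values t)
            (when (and (typep v t1) (not (typep v t2)))
              (error "SUBTYPEP claims ~S <= ~S, but ~S is a counterexample"
                     t1 t2 v))))))

    ;; e.g. (check-subtypep '(integer 0 10) '(integer 0 *) '(-3 0 5 10 17 x))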

The idea of using randomized inputs to test compilers has been tried before, beyond McKeeman's work (described in an older entry below). Don Slutz at Microsoft wrote a random test generator for SQL database systems. B. A. Wichmann did random stress testing of compilers for various languages. And Ariel Faigon describes random testing of compiler backends targeting National Semiconductor's 32000 series microprocessors.

Hmm. Where does the time go?

The ANSI CL test suite is now at over 18,000 tests. Right now I'm testing interactions between various kinds of unusual vectors and the built-in string/sequence functions. This is mostly to support crhodes in his effort to add Unicode support to SBCL, but it's already found at least one five-year-old bug in CLISP (REVERSE didn't work on displaced vectors).
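The failing case was of roughly this shape (the suite's actual test differs in its details):

    ;; A vector displaced into the middle of another; REVERSE must
    ;; operate on just the five displaced elements.
    (let* ((base (make-array 10 :initial-contents '(0 1 2 3 4 5 6 7 8 9)))
           (vec (make-array 5 :displaced-to base
                              :displaced-index-offset 2)))
      (reverse vec))
    ;; => #(6 5 4 3 2)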

After that I'm going back to finishing the FORMAT tests. I'm still aiming for about 20K tests for the 'finished' test suite.

Common Lisp testing is coming along nicely. The gcl ansi test suite is now over 14,000 tests. Most recently, I've nearly finished testing chapter 12 of the Common Lisp specification, 'Numbers'.

To go beyond simple 'does function FOO do X when called on arguments Y' kinds of tests, I've also started experimenting with a random tester. I've taken the approach of William McKeeman (favorite quote: "Excellent testing can make you unpopular with almost everyone.") I generate random lisp forms that are guaranteed to be well-defined when a set of free variables have integer values. The forms are wrapped in two lambda expressions, one with type declarations for the free variables and one in which optimizations are inhibited. The functions are compiled and their outputs compared on a set of random inputs.
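In skeletal form, the comparison step looks something like this (the real tester's form generator and declaration handling are far more elaborate):

    (defun check-form (form vars input-sets)
      ;; Compile FORM two ways and compare the results on each input list.
      (let ((fast (compile nil `(lambda ,vars
                                  (declare (type integer ,@vars)
                                           (optimize (speed 3) (safety 0)))
                                  ,form)))
            (slow (compile nil `(lambda ,vars
                                  (declare (optimize (speed 0) (safety 3)
                                                     (debug 3)))
                                  ,form))))
        (dolist (inputs input-sets t)
          (let ((v1 (apply fast inputs))
                (v2 (apply slow inputs)))
            (unless (equal v1 v2)
              (error "Mismatch on ~S: ~S vs. ~S" inputs v1 v2))))))

    ;; e.g. (check-form '(logand x (+ y 7)) '(x y) '((1 2) (-5 1000) (0 0)))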

This approach is very simpleminded (and a lot easier to do in Lisp than in C, as McKeeman did), but it immediately found new bugs in every Lisp implementation on which I tried it. The beauty of the approach is that it exploits increasingly cheap CPU cycles to save increasingly expensive programmer and tester time.

BTW, DEC (now HP) has a patent on this method of compiler testing. However, the patent assumes the compiler is a separate program, so lisp is apparently off the hook (but IANAL).

5 Jun 2003 (updated 6 Jun 2003 at 00:45 UTC)

I've concluded the Dylan superclass linearization algorithm is preferable to the Common Lisp (CLOS) algorithm.

A refresher: in a hierarchy with multiple inheritance, some means must be found to determine which superclass a class should inherit entities from. One approach is to have an algorithm for totally ordering (linearizing) the superclasses ('computing the class precedence list (CPL)', as one might say in CL), and inheriting each entity from the first superclass in the list in which the entity occurs. Algorithms are allowed to reject certain hierarchies if they cannot be linearized in some acceptable way.

The CL and Dylan algorithms are similar, except that the Dylan algorithm is slightly more strict in the hierarchies it accepts. In return, Dylan's orderings have the property of monotonicity: if C1 is a subclass of C2, then the linearized superclasses of C2 occur in the same order in the CPL of C1.

Why do I care about this? I was writing CL compliance tests for the class precedence orderings specified in the ANSI CL spec. To do this, I wrote a generic function with the LIST method combination, and added a method for each standardized class that returns that class's name. This should yield the names of the standardized superclasses of the class of a value, in the order they appear in the CPL.
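Concretely, the idiom looks like this, shown for a handful of built-in classes rather than the full standardized set:

    (defgeneric cpl-names (x)
      (:method-combination list))

    ;; One method per class, each returning the class's name.
    (defmethod cpl-names list ((x integer)) 'integer)
    (defmethod cpl-names list ((x real))    'real)
    (defmethod cpl-names list ((x number))  'number)
    (defmethod cpl-names list ((x t))       't)

    (cpl-names 3)   ; => (INTEGER REAL NUMBER T), most specific first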

BUT... in general there's no guarantee that an object of type C (where C is a class) is a direct instance of C. It could be an instance of a subclass of C, and that subclass could have a different CPL. Without monotonicity, I cannot assume that the superclasses of C still occur in the specified order.

This reconfirms my belief that writing a test suite tests not only an implementation, but also a specification. Untestable requirements are bugs.

cmm: yes, that's the algorithm.

