Still adding to the expression parser. There's a wealth of special-purpose functions and the like in the expression language that I had forgotten about, so I'm currently in a cycle of "try to read a definition file with some traceback enabled; wait for an error; read the expression and identify the unhandled function; modify the expression parser", then all over again. Today I added handling of continuation lines (which I'd forgotten about), @hasmod, @itemhasmod and @indexedvalue. The code snippet for the last of those can be found at the end of the post. It's wrapped inside a somewhat uncomfortably huge COND; I have been pondering ways of making it slimmer, but I can't think of any clean way to do it, due to the need to communicate global state. There's some macrology that could take care of the test, the incrementing of "where are we" past the function name and the matching of the parentheses around the argument(s), but on the whole I'm not sure it is worth it.
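For the curious, the macrology would look something like the sketch below. All the names here are illustrative stand-ins (the real parser's IS-PREFIX and paren matching live elsewhere); it just shows the test, the START increment and the paren matching folded into one macro.

```lisp
;; A sketch of the macrology idea, with hypothetical helpers; this is
;; not the parser's actual code.
(defun is-prefix (prefix str start)
  "True if STR contains PREFIX starting at position START."
  (let ((stop (+ start (length prefix))))
    (and (<= stop (length str))
         (string= prefix str :start2 start :end2 stop))))

(defun find-matching-paren (str open)
  "Index of the close paren matching the open paren at position OPEN."
  (loop with depth = 0
        for i from open below (length str)
        do (case (char str i)
             (#\( (incf depth))
             (#\) (decf depth)))
        when (zerop depth) return i))

(defmacro at-function ((name str start end) &body body)
  "If STR at START begins with NAME, advance START past NAME, bind END
to the position of the matching close paren, then run BODY."
  `(when (is-prefix ,name ,str ,start)
     (incf ,start (length ,name))
     (let ((,end (find-matching-paren ,str ,start)))
       ,@body)))
```

Used like this:

```lisp
(let ((start 0))
  (at-function ("@max" "@max(1,2,3)" start end)
    (subseq "@max(1,2,3)" (1+ start) end)))
;; => "1,2,3"
```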
There are some helper functions used in the code snippet: FIND-PAREN is an FLET-introduced local that modifies END (a LET-bound indicator of end-of-token), and TAIL-CALL is another FLET-introduced function that intelligently skips past the already-parsed bits of the current input string and tail-calls TOKENIZE on an as-needed basis.
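To give a feel for the shape of those two helpers, here is a heavily stripped-down TOKENIZE with analogous locals. This is a guess at the structure, not the real parser: the real TAIL-CALL is smarter about when to recurse, and this toy version just splits a string into parenthesised groups and single characters.

```lisp
(defun tokenize (str &optional (start 0))
  "Minimal sketch: split STR into parenthesised groups and single
characters, to show the shape of FIND-PAREN and TAIL-CALL."
  (if (>= start (length str))
      '()
      (let ((end start))
        (flet ((find-paren ()
                 ;; Move END to the close paren matching the next open
                 ;; paren (modifies the LET-bound END).
                 (loop with depth = 0
                       for i from start below (length str)
                       do (case (char str i)
                            (#\( (incf depth))
                            (#\) (when (zerop (decf depth))
                                   (setf end i)
                                   (return))))))
               (tail-call (token)
                 ;; Skip past the parsed bits and continue tokenizing.
                 (cons token (tokenize str (1+ end)))))
          (if (char= (char str start) #\()
              (progn (find-paren)
                     (tail-call (subseq str (1+ start) end)))
              (tail-call (char str start)))))))
```

So `(tokenize "(ab)c(d)")` gives `("ab" #\c "d")`.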
I am slightly naughty in that I handle all these unary things as part of the tokenizing rather than doing it cleanly in the building of parse trees, but that's because it's actually easier to read (and write) this way; there are too many ways they handle their arguments (among the things needing to be handled are @if(test then expression else expression) and @max(expression, ..., expression)). Easier just to do that in the tokenizing step, all in all.
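To illustrate why a single generic argument-handling rule doesn't fit: @if separates its arguments with keywords while @max uses commas (which may themselves nest inside parentheses). These two splitters are hypothetical, written just to show the difference, not lifted from the parser:

```lisp
;; Hypothetical splitters showing why each @-function gets its own
;; COND branch: @if splits on keywords, @max on top-level commas.
(defun split-if-args (s)
  "Split \"test then expr else expr\" on the THEN and ELSE keywords."
  (let ((then (search " then " s))
        (else (search " else " s)))
    (list (subseq s 0 then)
          (subseq s (+ then 6) else)
          (subseq s (+ else 6)))))

(defun split-comma-args (s)
  "Split S on top-level commas, respecting nested parentheses."
  (loop with depth = 0 and from = 0
        for i from 0 below (length s)
        for c = (char s i)
        do (case c
             (#\( (incf depth))
             (#\) (decf depth)))
        when (and (char= c #\,) (zerop depth))
          collect (subseq s from i) into parts
          and do (setf from (1+ i))
        finally (return (nconc parts (list (subseq s from))))))
```

So `(split-if-args "STR>10 then 1 else 0")` gives `("STR>10" "1" "0")`, while `(split-comma-args "1,@max(2,3),4")` gives `("1" "@max(2,3)" "4")` without splitting inside the nested @max.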
((is-prefix "@indexedvalue" str start)
 (incf start 13)
 (find-paren)
 (let* ((substr (subseq str (1+ start) (1- end)))
        (val (split-multi substr))
        (ix (parse (car (tokenize val))))
        (vals (mapcar (lambda (s) (parse (tokenize s))) (cdr val))))
   (tail-call (make-instance 'ixval :ix ix :vals vals))))
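For completeness, a guess at what the IXVAL node might look like. The slot names follow the :ix/:vals initargs in the snippet above; the evaluation rule (pick the IX'th value, zero-based here) is my assumption about @indexedvalue's semantics, not taken from the actual parser.

```lisp
(defclass ixval ()
  ((ix   :initarg :ix   :reader ixval-ix)
   (vals :initarg :vals :reader ixval-vals)))

(defun eval-ixval (node)
  "Return the value selected by the node's index (assumed zero-based)."
  (nth (ixval-ix node) (ixval-vals node)))
```

For example, `(eval-ixval (make-instance 'ixval :ix 1 :vals '(10 20 30)))` gives 20.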