Older blog entries for ingvar (starting at number 266)

29 Jan 2008 (updated 29 Jan 2008 at 06:57 UTC) »

Ooops! I am, as they say, an idiot.

Yesterday's tar ball is a wee bit lacking in essential files, as it were (the slight restructuring saw some functions being broken out into helpers.lisp and I wasn't packaging imports.lisp at all).

In further not-really-news, clast and Zaitcev have been discussing FizzBuzz (basically: count up from 1; for each n == 0 MOD 3, say Fizz; for each n == 0 MOD 5, say Buzz; and if both, FizzBuzz). Dividing by 3 and by 5 is essentially trivial in the decimal system. I first encountered this as a drinking game, with the addition that any 3 in the decimal expansion was also a Fizz and any 5 in the decimal expansion was a Buzz (and, again, if it's both a Fizz and a Buzz, it's a FizzBuzz).

This variety is slightly harder, in that you end up with 8 Fizz and 2 FizzBuzz between 29 and 40. Surprisingly hard, once you get into drinking game territory, around 2 AM.
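That count is easy to machine-check. A quick sketch of the drinking-game rules, in Python for brevity (not the Lisp the rest of this diary uses):

```python
def fizzbuzz_drinking(n):
    """Drinking-game FizzBuzz: divisibility by 3/5 counts, and so does
    a 3 or 5 anywhere in the decimal expansion of n."""
    digits = str(n)
    fizz = n % 3 == 0 or "3" in digits
    buzz = n % 5 == 0 or "5" in digits
    if fizz and buzz:
        return "FizzBuzz"
    if fizz:
        return "Fizz"
    if buzz:
        return "Buzz"
    return str(n)

# The numbers strictly between 29 and 40:
results = [fizzbuzz_drinking(n) for n in range(30, 40)]
# → 8 "Fizz" (31-34, 36-39) and 2 "FizzBuzz" (30 and 35)
```

Which confirms the count above: every number in the thirties contains a 3, so the only non-Fizz outcomes are the two FizzBuzzes at 30 and 35.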

It has been longer than usual since the last post. January has so far been hectic (including, but not limited to, two long-running faults coming to a head at the same time as I was away for Cisco Networkers).

There is a new version of my image library out. It is, as these things go, not nearly as ambitious as what other people write, but as-is, it provides assorted drawing operations on a canvas, with back-end export to (at the moment) X11 or GIF files (via CLX and Skippy, respectively). In its latest incarnation, it also has the ability to load GIF files (no X11 image loading, yet).

There is, as usual, far too much WIP to even consider mumbling about it, most waiting for one thing or another before I forge ahead and finish off some more.

I have recently seen a more sophisticated version of "there are no libraries for Common Lisp" (there are quite a few). This more sophisticated version does have something going for it, at that. Paraphrased, it is: "There are very few Common Lisp libraries that have a bus number higher than one."

For those of you who haven't heard of the term "bus number" as it applies to projects (IT projects, I think, specifically), it is the number of key people that need to be hit by a bus to make the project grind to a halt.

An inactive project obviously has a bus number of zero (it can't really progress any slower). Most projects that are driven by a single person have a bus number of one. Most of the libraries I am responsible for are in this category (though I shroud myself in the comfortable illusion that at least GENHASH is sufficiently well documented that most anyone with a few spare hours should be able to whip up a library with the same API).

To an extent, I think there's something to that view (that is, "language X is riskier than language Y, because a% of LangX libraries have a bus number <= 1, whereas only b% (where b << a) of LangY libraries do"), but at the same time, I do wonder if it's such a massive risk. After all, most CL libraries are distributed as source, and that means "developer killed by bus" is no worse than "no more updates". Any and all bugs can (and if the library is "big" and "important" enough, will) be fixed, one way or another. Maybe with some initial forking before things consolidate again.

It would be interesting to know to what extent this is just a perceived problem for Common Lisp contra other languages. At least I know there are now several libraries with multiple committers, so with any luck, this problem will solve itself, given time.

5 Dec 2007 (updated 5 Dec 2007 at 16:52 UTC) »

A delayed "is Emacs large?" essaylet has, after languishing unloved and being ignored for quite a while, finally been finished off.

The quick answer is "maybe", it's all a bit relative. With a default Debian install, GNU Emacs 21.4 has almost half the files of a similar Vim installation, but consumes more disk space.

I found it quite interesting that the difference between an "only compiled emacs-lisp" emacs and a vim installation was fairly small, since "emacs is so large!" has been a common complaint for at least as long as I have used emacs (about 18 years now). Curiously, it seems quite a few who champion vi-like editors over emacs are indeed vim users. At least now there are some not-entirely-soft numbers showing that while emacs isn't small, it's not exceedingly large, either.

Hah. Managed to find a timesink extraordinaire, Project Euler. I have, so far, solved just over 20 problems, out of the 166 listed.

Others have posted about how they use emacs when developing Common Lisp code. I, too, use emacs (it has been my primary editor since 1989 or so) and one thing I do find exceedingly useful is only tangentially related to development at all.

It is a well-known fact that many emacs users accumulate tweaks, config changes and small pieces of utility code. It got to the point where the sheer size of my .emacs file was annoying and hard to manage. So I decided to do something about it.

These days, my .emacs file consists of 7 statements. First, I change the auto-save prefix; then I extend the load-path (where emacs finds its libraries) to include $HOME/src/elisp (where I keep my private libraries); I then load one of those libraries and call a function from it. There are two additional forms, to allow the auto-management of the customize subsystem.
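Something along those lines could look like the sketch below. This is a hedged reconstruction, not the actual file: the paths, the custom-file location and the einit entry-point name are all assumptions.

```elisp
;; Hypothetical minimal .emacs in the spirit described above.
;; Change where auto-save bookkeeping goes:
(setq auto-save-list-file-prefix "~/.emacs.d/auto-save-list/.saves-")
;; Make private libraries findable:
(add-to-list 'load-path (expand-file-name "~/src/elisp"))
;; Load one of them and call a function from it (name invented):
(load "einit")
(einit-run)
;; The two customize-management forms: keep customize's output out of
;; .emacs itself, in its own file.
(setq custom-file "~/.emacs-custom.el")
(load custom-file t)
```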

The library loaded is einit.el, a small library that allows you to split your configuration into multiple files, with a load-order imposed on them. It will load files from (by default) $HOME/.emacsdir/ and it takes some care to only load files that are named ei-something (the convention I use is eiNNsubsystem.el). Using this, it is quite easy to find the configuration for specific subsystems, keep a load-order for subsystems that depend on each other and (if needed) disable the setup for a specific system without any code modification (simply rename the file that loads the system, or delete it if you are sure you won't want it, ever again).
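The core of that idea fits in a few lines. A hedged sketch, not the real einit.el (the function name is invented): load every matching file from the directory in name order, so the NN in eiNNsubsystem.el imposes the load-order.

```elisp
;; Sketch of an einit-style loader: directory-files returns entries
;; sorted by name unless told otherwise, so numeric prefixes on the
;; ei* files determine the order in which they are loaded.
(defun my-einit-load-all (&optional dir)
  (dolist (file (directory-files (or dir "~/.emacsdir/") t "^ei.*\\.el$"))
    (load file)))
```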

Ugh, there's a horrible misfeature in the latest version of SLIME. I am, of course, talking about slime-highlight-edits-mode, an abomination that colourises your edit buffer. I honestly don't know how people can stand colour changes in their edit buffers (I have less of a problem with it in an interaction buffer, or when reading mail/news). But any highlighting in a buffer where I constantly need to look at more than just "the next thing along, text-reading forward" is distracting, and, when I don't know how to switch it off, somewhat stressful.

I suppose there are enough people that want that sort of thing, though, because such features do seem to pop up with distressing regularity.

Not that the feature it intends to provide is bad (it is, effectively, highlighting forms that have yet to be compiled). I would, however, much prefer a key combo I can press to see if the current top-level form has been evaluated or compiled since it was last edited. Notice that I did say "evaluated or compiled", not just compiled; I am more usually bitten by "forgot to press C-x e" and going "butbutbut, that should work! Oh, it's only in the editor, not in the image, duh" than by having done that and getting bitten because things haven't been compiled. I am sure that I have been bitten by lack of compilation, but that has mostly been "this is way slower than it should've been, it's exactly the same speed as the naive proof-of-concept code! Oh... wait... it's interpreted" rather than "it doesn't work!".

As luck has it, it is indeed activated from slime-mode-hook, so deleting it out of there fixes it for this emacs process, and there might be a permanent fix in my slime-stuff file.
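For anyone else bitten by it, the fix amounts to something like the following (exact symbol names may vary between SLIME versions, so treat this as a sketch):

```elisp
;; Stop slime-mode-hook from enabling the highlighting in new buffers...
(remove-hook 'slime-mode-hook 'slime-highlight-edits-mode)
;; ...and switch it off in a buffer where it is already active.
(slime-highlight-edits-mode -1)
```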

27 Oct 2007 (updated 27 Oct 2007 at 12:49 UTC) »

Still adding to the expression parser; there is a wealth of special-purpose functions and the like in the expression language that I had forgotten about. This means I'm currently in a cycle of "try to read the definition file, with some traceback enabled; wait for an error; read the expression and identify the non-handled function; modify the expression parser", and then all over again. Today I have added handling of continuation lines (which I'd forgotten about), @hasmod, @itemhasmod and @indexedvalue. The code snippet for the last of those can be found at the end of the post. It's wrapped inside a somewhat uncomfortably huge COND; I have been pondering ways of making it slimmer, but I can't think of any clean way to do it, due to the need to communicate global state. There is some macrology that could take care of the test, increment the "where are we" pointer past the function name and then match the parentheses around the argument(s), but on the whole, I'm not sure it is worth it.

There are some helper functions used in the code snippet: FIND-PAREN is an FLET-introduced local that modifies END (a LET-bound indicator of end-of-token), and TAIL-CALL is another FLET-introduced function that intelligently skips past the parsed bits of the current input string and tail-calls TOKENIZE on an as-needed basis.

I am slightly naughty in that I handle all unary things as part of the tokenizing, rather than doing it cleanly in the building of parse trees, but that's because it's actually easier to read (and write) this way; there are too many ways they handle their arguments (among the things that need to be handled are @if(test then expression else expression) and @max(expression, ..., expression)). Easier just to do that in the tokenizing step, all in all.


((is-prefix "@indexedvalue" str start)
 ;; Skip the 13 characters of "@indexedvalue", then find the
 ;; matching close-paren (FIND-PAREN sets END).
 (incf start 13)
 (find-paren)
 (let* ((substr (subseq str (1+ start) (1- end)))  ; text between the parens
	(val (split-multi substr))                 ; split into argument strings
	(ix (parse (car (tokenize val))))          ; first argument: the index
	(vals (mapcar (lambda (s) (parse (tokenize s)))
		      (cdr val))))                 ; the rest: the values
   (tail-call (make-instance 'ixval :ix ix :vals vals))))
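On the "some macrology could slim the COND" thought: another option is a dispatch table keyed on the @-function name, so the prefix test, the skip past the name and the paren matching are written once. A sketch in Python rather than Lisp, with invented names, and with a naive comma split standing in for what split-multi does properly:

```python
# Map each @-function name to its handler; the dispatcher performs the
# shared work (prefix match, skipping the name, finding the parens).
def handle_max(args):
    return ("max", args)

def handle_if(args):
    return ("if", args)

HANDLERS = {
    "@max": handle_max,
    "@if": handle_if,
}

def dispatch(expr):
    """Find which @-function starts expr and hand the text between the
    outer parentheses to its handler as a list of raw argument strings.
    (Naive split: real code must respect nested parens and commas.)"""
    for name, handler in HANDLERS.items():
        if expr.startswith(name + "("):
            inner = expr[len(name) + 1 : expr.rindex(")")]
            return handler(inner.split(","))
    raise ValueError("no handler for " + expr)
```

For example, `dispatch("@max(1,2,3)")` yields `("max", ["1", "2", "3"])`. Whether this is actually slimmer than the COND depends on how much per-function argument handling still has to live in each handler, which is presumably why it may not be worth it.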

Slight hitch with the GCA-reading code, I wasn't tokenizing and parsing some formulas that needed handling (specifically, I didn't handle formulas in attributes).

This, of course, necessitated re-writing the tokenizer code. It also entailed re-writing the expression parser (most of the file-reading code was, however, unaffected). I also needed to re-write some of the helper functions, since they weren't written with short strings in mind (effectively, I ended up trying to read off the end of a string, with the resulting errors; on the whole, bounds-checking of arrays is a boon, at least during development). That, in turn, cascaded into minor re-writes in the file-reading code.

Almost there again, though. At the moment, I just need to make the tokenizer/expression parser handle conditionals properly. They're ALMOST properly handled, but instead of a "conditional expression" object having parsed expressions for the test, the then-expression and the else-expression, it only has tokenized lists, so there's an additional parsing step needed. I just didn't have time to get that done before needing to head to work this morning, but with any luck it should be done by tonight.

After that, it's time to extend the scaffolding for individual character sheets, then on to reading saved character sheets and code to write them back to file. After that, it's "just" a GUI that needs building.

The GCA-reading code can now (almost successfully) read the two example definition files I have at hand.

There are still a few problems; attributes aren't currently defaulting as they should (Boooo!). This is because it requires a tokenizer/parsing step on top of the rather raw and cheery "chunk things into objects" that is currently done, and it seems to be a bit more involved than the defaulting needed for skills. So, I get to rewrite my tokenizer, and then I need to rework the expression parser slightly (currently, it expects a statement represented in tokenized form as a list of expressions, with individual operands/operators expressed as strings; I will need to add handling for parsed objects too, since I will need to handle conditional evaluation).
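That reworked parser contract (token lists whose elements may be raw strings or already-parsed objects) can be illustrated in miniature. A Python sketch with invented names, not the actual code:

```python
class Node:
    """Base class for already-parsed expression objects."""

class Num(Node):
    """A parsed numeric literal."""
    def __init__(self, value):
        self.value = value

def parse_operand(token):
    """Parse one operand from a token list. Raw string tokens get
    converted; objects that an earlier pass (e.g. conditional
    handling) already parsed are passed through untouched."""
    if isinstance(token, Node):
        return token
    return Num(int(token))

# A mixed token list: a raw string alongside an already-parsed object.
tokens = ["42", Num(7)]
parsed = [parse_operand(t) for t in tokens]
```

The pass-through branch is the whole point: the expression parser no longer needs to care at which stage a given operand was parsed.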

Other than that, all I can say is that the data file format is nastier than I thought, but with some external guidance and a lot of special-purpose parsing code it is coming along (I had initially intended to use a parser generator, but there are so many subtly different syntaxes used through the files that that fell flat).

Once the expression parsing is sorted, it's time to start working on importing/exporting character sheets, possibly printing character sheets, and a non-CLI user interface.
