Older blog entries for ingvar (starting at number 262)

Hah. Managed to find a timesink extraordinaire, Project Euler. I have, so far, solved just over 20 problems, out of the 166 listed.

Others have posted about how they use emacs when developing Common Lisp code. I, too, use emacs (it has been my primary editor since 1989 or so) and one thing I find exceedingly useful is only tangentially related to development at all.

It is a well-known fact that many emacs users accumulate tweaks, config changes and small pieces of utility code. It got to the point where I found my .emacs file annoyingly huge and hard to manage. So I decided to do something about it.

These days, my .emacs file consists of 7 statements. First, I change the auto-save prefix, then I extend the load-path (where emacs finds its libraries) to include $HOME/src/elisp (where I keep my private libraries); I then load one of those libraries and call a function from it. There are two additional forms, to allow the auto-management of the customize subsystem.
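A minimal .emacs along these lines might look like the following sketch (the auto-save path, the einit-load-all entry point and the custom-file location are all illustrative, not the real contents of my file):

```elisp
;; Keep auto-save files in one place (path is illustrative).
(setq auto-save-file-name-transforms '((".*" "~/.emacs-autosaves/" t)))

;; Find private libraries in $HOME/src/elisp.
(add-to-list 'load-path (expand-file-name "~/src/elisp"))

;; Load the configuration-splitting library and run it
;; (einit-load-all is a hypothetical name for its entry point).
(load "einit")
(einit-load-all)

;; Two forms letting the customize subsystem manage its own file.
(setq custom-file (expand-file-name "~/.emacs-custom.el"))
(when (file-exists-p custom-file)
  (load custom-file))
```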

The library loaded is einit.el, a small library that allows you to split your configuration into multiple files, with a load-order imposed on them. It loads files from (by default) $HOME/.emacsdir/ and it takes some care to only load files that are named ei-something (the convention I use is eiNNsubsystem.el). Using this, it is quite easy to find the configuration for a specific subsystem, keep a load-order for subsystems that depend on each other and (if needed) disable the setup for a specific system without any code modification (simply rename the file that loads the system, or delete it if you are sure you won't ever want it again).
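The core of such a loader is small; a sketch (einit's real implementation may well differ, and the function name here is made up):

```elisp
(defun my-load-config-dir (&optional dir)
  "Load every ei-prefixed library from DIR (default ~/.emacsdir/).
Sorting the file names gives the eiNN prefix its load-order
semantics: ei10foo.el loads before ei20bar.el."
  (let ((dir (or dir (expand-file-name "~/.emacsdir/"))))
    (when (file-directory-p dir)
      (dolist (file (sort (directory-files dir t "^ei.*\\.el$") #'string<))
        (load file)))))
```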

Ugh, there's a horrible misfeature in the latest version of SLIME. I am, of course, talking about slime-highlight-edits-mode, an abomination that colourises your edit buffer. I honestly don't know how people can stand colour changes in their edit buffers (I have less of a problem with it in an interaction buffer or when reading mail/news). But any highlighting in a buffer where I constantly need to look at more than just "the next thing along, text-reading forward" is distracting and, when I don't know how to switch it off, somewhat stressful.

I suppose there are enough people that want that sort of thing, though, because such features do seem to pop up with distressing regularity.

Not that the feature it intends to provide is bad (it is, effectively, highlighting forms that have yet to be compiled). I would, however, much prefer a key combo I can press to see if the current top-level form has been evaluated or compiled since it was last edited. Notice that I did say "evaluated or compiled", not just compiled; I am more often bitten by "forgot to press C-x e", going "butbutbut, that should work! oh, it's only in the editor, not in the image, duh", than by things not having been compiled. I am sure that I've been bitten by lack of compilation too, but that has mostly been "this is way slower than it should be, it's exactly the same speed as the naive proof-of-concept code! oh... wait... it's interpreted" rather than "it doesn't work!".

As luck has it, it is indeed activated from slime-mode-hook, so deleting it out of there fixes it for this emacs process, and there might be a permanent fix in my slime-stuff file.
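Assuming the hook function carries the same name as the mode (I have not checked what exactly SLIME adds to the hook), the permanent fix would be a one-liner along these lines:

```elisp
;; Stop SLIME from turning on edit highlighting in new lisp buffers.
(remove-hook 'slime-mode-hook 'slime-highlight-edits-mode)
```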

27 Oct 2007 (updated 27 Oct 2007 at 12:49 UTC) »

Still adding to the expression parser; there's a wealth of special-purpose functions and the like in the expression language that I had forgotten about. This means I'm currently in a cycle of "try to read the definition file, with some traceback enabled; wait for an error; read the expression and identify the non-handled function; modify the expression parser", and then all over again. Today I have added handling of continuation lines (that I'd forgotten about), @hasmod, @itemhasmod and @indexedvalue. The code snippet for @indexedvalue can be found at the end of the post. It's wrapped inside a somewhat uncomfortably huge COND; I have been pondering ways of making it slimmer, but I can't think of any clean way to do it, due to the need to communicate global state. There's some macrology that could take care of the test, incrementing the "where are we" index past the function name and then matching the parentheses around the argument(s), but on the whole I'm not sure it is worth it.

There are some helper functions used in the code snippet: FIND-PAREN is an FLET-introduced local that modifies END (a LET-bound indicator of end-of-token); TAIL-CALL is another FLET-introduced function that intelligently skips past the parsed bits of the current input string and tail-calls TOKENIZE on an as-needed basis.

I am slightly naughty in that I handle all unary things as part of the tokenizing, rather than doing it cleanly in the building of parse trees, but that's because it's actually easier to read (and write) this way; there are too many ways they handle their arguments (among the things that need handling are @if(test then expression else expression) and @max(expression, ..., expression)). Easier just to do that in the tokenizing step, all in all.


((is-prefix "@indexedvalue" str start)
 ;; Skip past the function name ("@indexedvalue" is 13 characters)
 ;; and match the argument parentheses, which sets END.
 (incf start 13)
 (find-paren)
 ;; Everything between the parentheses is a comma-separated list:
 ;; the index expression first, then the candidate values.
 (let* ((substr (subseq str (1+ start) (1- end)))
        (val (split-multi substr))
        (ix (parse (car (tokenize val))))
        (vals (mapcar (lambda (s) (parse (tokenize s)))
                      (cdr val))))
   ;; Build the indexed-value node and continue tokenizing the rest.
   (tail-call (make-instance 'ixval :ix ix :vals vals))))
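The COND-slimming macrology pondered above might look something like this (the macro name is hypothetical; it only factors out the prefix test, the START increment and the parenthesis matching, and it leans on the same IS-PREFIX and FIND-PAREN helpers):

```lisp
(defmacro with-prefixed-call ((prefix str start) &body body)
  "If STR carries PREFIX at position START, step START past the
prefix, match the argument parentheses via FIND-PAREN (setting END),
then run BODY. The prefix length is computed at macro-expansion time."
  `(when (is-prefix ,prefix ,str ,start)
     (incf ,start ,(length prefix))
     (find-paren)
     ,@body))
```

It would shrink each COND clause by a couple of lines, but since FIND-PAREN and END are lexically captured from the surrounding FLET/LET, it is debatable whether the indirection is worth it.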

Slight hitch with the GCA-reading code: I wasn't tokenizing and parsing some formulas that needed handling (specifically, I didn't handle formulas in attributes).

This, of course, necessitated re-writing the tokenizer code. It also entailed re-writing the expression parser (most of the file-reading code was, however, unaffected). I also needed to re-write some of the helper functions, since they weren't written with short strings in mind (effectively, I ended up trying to read off the end of a string, with resulting errors following; on the whole, bounds-checking of arrays is a boon, at least during development). That, in turn, cascaded to minor re-writes in the file-reading code.

Almost there again, though. At the moment, I just need to make the tokenizer/expression parser handle conditionals properly. They're ALMOST properly handled, but instead of a "conditional expression" object having parsed expressions for the test, the then-expression and the else-expression, it only has tokenized lists, so there's an additional parsing step needed. I just didn't have time to get that done before needing to head to work this morning, but with any luck it should be done by tonight.
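That remaining step is mechanical once the token lists are in place; a sketch, assuming a conditional-expression class with TEST/THEN/ELSE slots (slot and function names here are illustrative, apart from PARSE, which appears in the snippet above):

```lisp
(defun finish-conditional (cond-expr)
  "Replace the tokenized TEST, THEN and ELSE slots of COND-EXPR
with parsed expression trees, so later conditional evaluation can
walk only the branch that the test selects."
  (with-slots (test then else) cond-expr
    (setf test (parse test)
          then (parse then)
          else (parse else)))
  cond-expr)
```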

After that, it's time to extend the scaffolding for individual character sheets, then on to reading saved character sheets and code to write them back to file. After that, it's "just" a GUI that needs building.

The GCA-reading code can now (almost successfully) read the two example definition files I have at hand.

There are still a few problems; attributes aren't, currently, defaulting as they should (Boooo!). This is because it requires a tokenizer/parsing step on top of the rather raw and cheery "chunk things into objects" that is currently done, and it seems it's a bit more involved than the defaulting needed for skills. So, I get to rewrite my tokenizer and then I need to rework the expression parser slightly (currently, it expects a statement represented in tokenised form as a list of expressions, with individual operands/operators expressed as strings; I will need to add handling for parsed objects, too, since I will need to handle conditional evaluation).

Other than that, all I can say is that the data file format is nastier than I thought, but it is yielding to some external guidance and a lot of special-purpose parsing code (I had, initially, intended to use a parser generator, but there are so many subtly different syntaxes used through the files that it fell flat).

Once the expression parsing is sorted, it's time to start working on importing/exporting character sheets, possibly printing character sheets, and a non-CLI user interface.

Still having strange locking problems with SBCL, so the more graphical programming is shelved until things are sorted (yes, I've tried upgrading to 1.0.10, though it's the Debian-built version; I may try building from source and do some mucking around once the new box is up and working as it should, so I have a spare play environment or three).

Currently mucking around with code to read GCA definition files. It's progressing at a pace of some sort (next up to tackle is "Skills", then I need some extra parsing to happen, so I can get proper defaults). Once I can read the data definition files, it's time for some sort of UI and deciding how to handle character sheets (and, if needed, how to parse character sheets).

26 Sep 2007 (updated 26 Sep 2007 at 18:12 UTC) »

Hit an interesting problem late yesterday (it re-appeared this morning). It's semi-reproducible, but I have no obvious small test case as of yet.

In short, I seem to be hitting the SBCL interpreter lock when processing X11 events while I have SLIME/SWANK running. Not (yet) tested with a free-standing SBCL. Somewhat unknown version of SBCL (essentially what's current in Debian/Unstable) and SLIME (again, whatever's current in Debian/Unstable).

It's kind-of cool, though. I start the code running and it works for a while, then it starts signalling a WARNING and a traceback, and after a while everything wedges. I suspect I'll be having a fun rest-of-week, trying to diagnose this or call "failure" and submit a proper bug report with a small test case.

Update: It happens in free-standing SBCL too, so it's definitely not related to SWANK/SLIME. Next, try to make the test case smaller.

Speaking of Snooper (my attempt to see how frequently "unused" IPs are scanned), I am wondering if there's any other graphing I should look at doing. I've considered plotting ICMP scans (I've exclusively seen ICMP ECHO, so there'd not be much difference between looking at just pings and all ICMP). I've also considered plotting the activity of the top-N sources (probably the top 2-3 hosts).

I'm also considering doing something to graph time-trends in destination TCP port popularity, though that may have to be on a longer time-base (say weekly top-5, with all ports encountered during the report period graphed in the same colour and a key).

Ooops. Seems as if the upgrade the other week broke Snooper. I've got a week's loss of graphing because (as usual with an SBCL upgrade) the FASL format changed. I've zapped the offending FASL and will consider re-building the scans (though it would require some careful manual prodding, since part of the code is heavily reliant on "now" as a time concept).

Ah, well, at least it's fixed now.
