Older blog entries for ingvar (starting at number 191)

Things seem to have settled. If you fancy playing with pre-release code, the Python server and client code is available, and so is the Common Lisp client library. The latter is ASDF-packaged and will be wanting a serving of trivial-sockets on the side.

So, I've been prodding code. As a matter of fact, I have (mostly) written a client library for creek, in Common Lisp. It was, not overly surprisingly, quicker to write than the Python library. Part of that is down to "I am more used to it", but part is the ability to skip verbiage.

Most of the protocol glue is handled like so:

(defmacro defprotofun (namedes (&rest args) &optional doc)
  ;; NAMEDES is either a symbol (used as both the lisp function name
  ;; and the protocol command) or a list (FUNCTION-NAME PROTOCOL-NAME).
  (let ((fname (if (symbolp namedes) namedes (car namedes)))
        (pname (if (symbolp namedes) namedes (cadr namedes))))
    `(defun ,fname ,args
       ,@(if doc (list doc))
       ;; Build the protocol call, ship it to the server, then read
       ;; and parse the response.
       (send (list ',pname ,@args))
       (parse (creek-read *server-stream*)))))

(defprotofun take (item))
(defprotofun (list-maps listmaps) () "Return all maps defined on the server")

This is because the protocol is designed to be human-friendly and quite close to what you'd do in code, so the "trivial" functions are (essentially) "build a protocol string, ship it to the server, read the response, parse the response" (object creation and object caching are part of the parsing stage).
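The Python side has no macros, but roughly the same glue can be expressed as a closure factory. This is only a sketch: the transport names (`send`, `creek_read`, `parse`, `server_stream`) are stubbed-out assumptions, not taken from the real client library.

```python
# Stand-ins for the real network layer (hypothetical names).
sent = []
def send(msg):
    sent.append(msg)           # would write a protocol line to the socket
def creek_read(stream):
    return "ok"                # would read one protocol response
def parse(reply):
    return reply               # would build/cache objects from the reply
server_stream = None

def defprotofun(proto_name, doc=None):
    """Build a 'trivial' protocol function: ship the command plus its
    arguments to the server, then read and parse the reply."""
    def protofun(*args):
        send([proto_name] + list(args))
        return parse(creek_read(server_stream))
    protofun.__doc__ = doc
    return protofun

take = defprotofun("take")
list_maps = defprotofun("listmaps", "Return all maps defined on the server")
```

The factory buys the same "one line per protocol command" brevity, though without the macro's ability to pick the argument list apart at compile time.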

After I wrote the first two glue functions, I noticed that they were, effectively, identical and thought "Ah, macro time!". I then realised that a few of the protocol commands were somewhat unlispish, so the macro got modified to accommodate a (possible) split between "lisp function name" and "protocol call". Then I added the option to add doc-strings.

As soon as the server is slightly more ready, a bit more tested with both client libraries, and test-installed, there will be tarballs (one for the Python stuff and a signed package suitable for asdf-install).

Apparently, I've been a bit lax in making known what things are and have thus, inadvisedly, caused confusion.

The latest hack is "creek", a client/server workflow thingie back-ending onto a MySQL database (changing it from MySQL to something else that is (a) supported by Python and (b) has DictCursors should be fairly simple; I am, as far as I know, staying WELL within standard SQL syntax).

Main reason why naming it might've slipped my mind is that I've been doodling on it back and forth for the last three years (most coding has been done within the last three months, with some coding done late 2003 and early 2004, most notably the reader for the network protocol).

So, creek is nearing enough maturity to be released on an unsuspecting world. I've been thinking on (a) what more I want and (b) what's the minimal extras needed.

  • License (provisionally, MIT-licensed)
  • Client library for C
  • Client library for Lisp
  • Client library for Perl
  • Client library for Python (DONE, I think)
  • Better documentation
  • Built-in support for submaps
  • Map visualization
  • ACL "editor"
  • ACL "dumper"

Combined with this are some slight worries regarding packaging. I was hoping that the Python "distutils" package would allow me to auto-build tar files in the form and shape I wanted, but it seems as if the "sdist" option only picks up the source and leaves other things out. I guess I could do ad-hoc packaging (after all, that's almost what I do when I release CL packages; build-asdf-package is a bit too ad-hoc and requires at least one manual step. I've been considering routes around that, though, introspecting into the ASDF data, but so far nothing stable enough has gelled).
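For what it's worth, distutils will pick up extra files if a MANIFEST.in template sits next to setup.py; something along these lines (the file names here are placeholders, not creek's actual layout) tells sdist to include more than just the source:

```
include README
include creek.sql
recursive-include doc *.txt
```

The `include` and `recursive-include` directives are standard distutils template commands, so this may be enough to avoid fully ad-hoc packaging on the Python side.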

Harking back to submaps... At the moment, it's sort-of doable (set up a split/join, with one split transit heading straight to the join and the other pointing at "the submap"; have the submap end up with a destructor; AND run a separate daemon to go through the item inventory and push through any item that resides only in a join). This is all cumbersome and ugly, but...

To fix it would require two new state types (submap-split and submap-destruct) and possibly a third (submap-join). It would also need another database table (essentially a "submap stack"). But, darn, I can see it coming in handy.
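A minimal sketch of what that extra table might look like, assuming SQLite for illustration and an entirely made-up schema (item id, the join state to return to, and a depth column so the table behaves as a per-item stack):

```python
import sqlite3

# Hypothetical "submap stack" table: for each item that descends into a
# submap, remember which submap-join state it should pop back out to.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE submap_stack (
        item_id    INTEGER NOT NULL,
        join_state INTEGER NOT NULL,
        depth      INTEGER NOT NULL
    )""")

def push_submap(item_id, join_state):
    # Entering a submap via a submap-split state: push the return point.
    (depth,) = conn.execute(
        "SELECT COALESCE(MAX(depth), -1) + 1 FROM submap_stack"
        " WHERE item_id = ?", (item_id,)).fetchone()
    conn.execute("INSERT INTO submap_stack VALUES (?, ?, ?)",
                 (item_id, join_state, depth))

def pop_submap(item_id):
    # Hitting a submap-destruct state: pop and return the join to resume at.
    row = conn.execute(
        "SELECT join_state, depth FROM submap_stack WHERE item_id = ?"
        " ORDER BY depth DESC LIMIT 1", (item_id,)).fetchone()
    if row is None:
        return None
    conn.execute("DELETE FROM submap_stack WHERE item_id = ? AND depth = ?",
                 (item_id, row[1]))
    return row[0]
```

Nested submaps then come for free: each descent pushes another row, each destruct pops the most recent one.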

Ah, well. Another day.

Back on the air, after many tribulations and revelations about the down-side of modern motherboards ("fails to boot off SATA" and "not enough PATA" is, in essence, the problem).

Old source packages should be back in their usual place. Old essays likewise. The only essay that didn't get enough air-time was Document management and workflow (thus an extra plug).

Still off the air. The anticipated 12 hours are now closer to three weeks. Ah, well.

Unanticipated kernel issues means that most (if not all) links pointed at in earlier diary entries are off-line. I'm hoping to have things back in the air within the next 12 hours.

I've also, hopefully, learnt that I should double-check that what I think is available at boot is, in fact, available at boot. :(

14 Jun 2006 (updated 14 Jun 2006 at 13:55 UTC) »

The python hacking seems to be progressing apace. Server-side is completed. The network protocol layer is done and tested (telnet is sometimes a useful test client). Implementing the client library is progressing, though it isn't quite finished (it can, currently, authenticate to the server and disconnect; actually doing something useful isn't happening yet).

However, I did find some useful displacement activity (namely, writing a little piece on document management and workflow).

At some point, I should, probably, try getting all these little pieces together and see if there's ripples to be made.

Sometimes, I miss "proper macros". A lot. I'm trying to get an old stinker of a project done and dusted (it's essentially a re-write, in another language, of an older project, but back-ending onto a database, with only a client/server approach as opposed to the old "same, almost, API; either a local library modifying the back-end directly, or clients/server via network"). OK, I am moving the project from C to Python (notice that "database backend"? that'd be why).

But as part of the rewrite, I want to have some sensible exception/error handling, and to my mind that means handling different types of errors differently (it's basically a workflow system, and map errors, user-related errors and item-related errors are all fundamentally different). But, alas, this means I end up writing several lines of almost-duplicate code (class declarations differing only in class name; __init__ declaration identical; __init__ body differing in one string).

It feels horribly inefficient and should, probably, have been compressible with either a code generator (free-standing from the rest of the code, requiring a separate phase to run) or a sufficiently flexible macro system.
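For what it's worth, the duplication can be squeezed down somewhat even without macros, by letting a shared base class carry the identical __init__ and parameterizing the one differing string as a class attribute. The class names here are guesses for illustration, not the real ones:

```python
class CreekError(Exception):
    """Hypothetical shared base: __init__ lives here once, and each
    subclass only supplies the string that differs."""
    kind = "generic"

    def __init__(self, detail):
        super().__init__("%s error: %s" % (self.kind, detail))

# Each error family is now a two-line class declaration.
class MapError(CreekError):
    kind = "map"

class UserError(CreekError):
    kind = "user"

class ItemError(CreekError):
    kind = "item"
```

The catch sites can still distinguish the families (`except MapError:` and so on), which was the point of having separate classes in the first place.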

But! I shall persevere! It is good for the soul (hah, I'm not even fooling myself) to experience pain. I guess I could throw the pre-existing code away and rewrite everything from scratch (OK, not the db schema, I'm sufficiently happy with that). But that'd be, like, a pain too.

Seems I didn't brave Foyle's, after all. Lots of stuff, mainly boiling down to "so tired, so incredibly tired, there's a nice, warm cozy home right here, not moving at all".

With a bit of luck... Actually, with a bit of even more luck, I shall do some hardware disassembly and get a better gfx card in new-head (the one out of old-head).

