i finally packaged up colobus for release to the world. (that's the name of the nntp server for ezmlm mailing list archives.)
as an excuse to do a little programming, i wrote a simple clock program for x. it's just a text display (with a drop shadow) in a shaped window (so it's transparent) that raises itself whenever its visibility changes (which probably isn't very friendly, but whatever).
you can grab it here.
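the raise-on-visibility trick boils down to something like this. (a minimal sketch, not the actual source -- the shape and drop-shadow work is left out, and it just uses a plain rectangular window.)

    /* cc -o raise raise.c -lX11 */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "can't open display\n");
            return 1;
        }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 100, 30, 0,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));

        /* ask for visibility events; the real clock also sets a shape
         * mask here (XShapeCombineMask) so only the text shows. */
        XSelectInput(dpy, win, VisibilityChangeMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == VisibilityNotify &&
                ev.xvisibility.state != VisibilityUnobscured)
                XRaiseWindow(dpy, win);   /* climb back on top */
        }
    }

(the unfriendly part: two windows doing this will fight each other forever.)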
the nntp server that runs off ezmlm mail archives is almost done. i just need to fix up the overview database updating, and test it with some more clients.
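(for reference, the overview data is just the standard tab-separated xover format, one line per article. a sketch with made-up field values:)

    /* article number plus the rfc 2980 xover fields: subject, from,
     * date, message-id, references, byte count, line count. */
    #include <stdio.h>

    int main(void)
    {
        printf("%lu\t%s\t%s\t%s\t%s\t%s\t%lu\t%lu\n",
               1234UL,                            /* article number */
               "re: colobus released",            /* subject (made up) */
               "someone@example.com",             /* from */
               "23 Apr 2001 12:34:56 -0000",      /* date */
               "<987654321.12345@example.com>",   /* message-id */
               "<987654000.12340@example.com>",   /* references */
               2048UL, 42UL);                     /* bytes, lines */
        return 0;
    }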
so, i went down the route of writing a news server that uses ezmlm archives for the spool. i have a basically functional news server working (including posting).
the big chunks remaining are dealing with message-ids properly (the current code basically punts -- i need to build a two-way database of message-ids to message paths), and adding xover support (which will be easy -- making it efficient will be slightly harder).
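the two-way database doesn't need to be fancy. here's a sketch of the idea using gdbm (just a stand-in -- any dbm-ish library would do, and the message-id and path are made up):

    #include <gdbm.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* wrap a NUL-terminated string in a gdbm datum */
    static datum mkdatum(const char *s)
    {
        datum d;
        d.dptr  = (char *) s;
        d.dsize = strlen(s) + 1;
        return d;
    }

    int main(void)
    {
        /* one database per direction: message-id -> path, path -> message-id */
        GDBM_FILE by_id   = gdbm_open("msgid.db", 0, GDBM_WRCREAT, 0644, NULL);
        GDBM_FILE by_path = gdbm_open("paths.db", 0, GDBM_WRCREAT, 0644, NULL);
        if (!by_id || !by_path)
            return 1;

        const char *msgid = "<987654321.12345@example.com>";
        const char *path  = "archive/12/34";

        gdbm_store(by_id,   mkdatum(msgid), mkdatum(path),  GDBM_REPLACE);
        gdbm_store(by_path, mkdatum(path),  mkdatum(msgid), GDBM_REPLACE);

        datum found = gdbm_fetch(by_id, mkdatum(msgid));
        if (found.dptr) {
            printf("%s -> %s\n", msgid, found.dptr);
            free(found.dptr);   /* gdbm_fetch mallocs the result */
        }

        gdbm_close(by_id);
        gdbm_close(by_path);
        return 0;
    }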
while getting php.net's news server set up on a new machine, i've come to the conclusion that i should write a new storage driver similar to tradspool, but with hashing of the individual articles at the newsgroup level.
since we're not expiring anything from the server, tradspool doesn't really cut it (over a hundred thousand files in one directory just isn't cool, even if your filesystem is), and timehash doesn't cut it either, because i'd like to keep the newsgroups separated (without having to set up a separate timehash spool for each group).
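the layout i have in mind looks something like this. (a sketch of the path scheme only -- the fan-out here is made up for illustration, and a real inn storage driver obviously involves more than building paths.)

    #include <stdio.h>

    /* fan articles out under the group's directory so no single
     * directory ends up with a hundred thousand files in it. */
    static void spool_path(char *buf, size_t len,
                           const char *group, unsigned long artnum)
    {
        snprintf(buf, len, "%s/%03lu/%lu",
                 group, (artnum / 1000) % 1000, artnum);
    }

    int main(void)
    {
        char path[256];
        spool_path(path, sizeof path, "php.general", 123456);
        printf("%s\n", path);   /* php.general/123/123456 */
        return 0;
    }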
what would be really sexy, though, would be to have inn just slave off the ezmlm archives, and avoid the duplication altogether. hmm.
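(the ezmlm archive layout is what makes that tempting: if i have the scheme right, message n of a list lives at archive/<n/100>/<n%100>, zero-padded. a sketch, with a made-up list directory:)

    #include <stdio.h>

    /* map an ezmlm message number to its path in the archive */
    static void ezmlm_path(char *buf, size_t len,
                           const char *listdir, unsigned long msgnum)
    {
        snprintf(buf, len, "%s/archive/%lu/%02lu",
                 listdir, msgnum / 100, msgnum % 100);
    }

    int main(void)
    {
        char path[256];
        ezmlm_path(path, sizeof path, "/var/qmail/lists/php-general", 1234);
        printf("%s\n", path);   /* /var/qmail/lists/php-general/archive/12/34 */
        return 0;
    }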
repeat after me: the article title length does not matter. putting long, unbreakable strings of text (like, say, a url) into the article lead is, however, a different matter.
as mike750 pointed out, the latest pam packages in debian's unstable are hosed. luckily, i noticed it before logging out of the two machines i had upgraded, and was able to copy a fixed pam_unix.so into place. (basically, "apt-get source libpam0g; cd pam-0.72; ./debian/rules setup; perl -pi -e 's/sgaddset/sigaddset/' …; ./debian/rules build; sudo cp …".)
this is the second time i've gotten screwed by pam breaking on an upgrade. the last time i had to fix it by abusing nfs. apparently the maintainer doesn't do a lot of pre-release testing, which is rather unfortunate for a part of the system that can make it very difficult to log back in when it falls over.
other than that, debian is the bee's knees, of course.
i just have to wonder if it hurts to be involved in a heated debate. it actually sounds rather refreshing, although perhaps not this time of year. maybe a couple of months ago, when it was a little colder.
i find it very disappointing that the gnu file utilities can't handle urls (speaking webdav under the hood, perhaps), and that gnu tar doesn't just automatically detect the compression type instead of making people remember whether bzip2 is 'y' or 'I' or 'j' or whatever this week.
sure, you can fill in the gaps with things like curl and wget and cadaver and remembering the magic bzip2 flag, but it just seems like someone isn't pushing the command-line interface forward when cp still doesn't understand urls.
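the detection tar could be doing is trivial -- the first few bytes of the file give the game away. a sketch:

    #include <stdio.h>
    #include <string.h>

    /* sniff the compression type from the magic bytes */
    static const char *sniff(const char *file)
    {
        unsigned char magic[3];
        FILE *fp = fopen(file, "rb");
        if (!fp || fread(magic, 1, 3, fp) != 3) {
            if (fp)
                fclose(fp);
            return "unreadable";
        }
        fclose(fp);

        if (magic[0] == 0x1f && magic[1] == 0x8b) return "gzip";
        if (magic[0] == 0x1f && magic[1] == 0x9d) return "compress";
        if (memcmp(magic, "BZh", 3) == 0)         return "bzip2";
        return "plain";
    }

    int main(int argc, char **argv)
    {
        int i;
        for (i = 1; i < argc; i++)
            printf("%s: %s\n", argv[i], sniff(argv[i]));
        return 0;
    }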