Older blog entries for lkcl (starting at number 285)

8 Jun 2006 (updated 9 Jun 2006 at 16:37 UTC) »

... good, isn't it? :) try putting superkaramba running "kroller.sez" in front of people (make sure you use KDE 3.5.1 or above).

after significant periods of time using kroller (the original, by caspian) or kroller.sez, people _do_ miss it when it's gone - like the Mac rollerbar, it's just incredibly in-your-face and obvious as to what you're supposed to do.

click da icon, da program run. very visual. very satisfying. very pretty. very quick.

the advantage of using kroller.sez is that it reads the user's KDE system menu options and presents those to them.

none of this crap obscure text file editing shit.

social networking site framework

well... i am pleased to say that i have a decent application framework that can switch between AJAX and "plain" modes - using the same content, and giving exactly the same look irrespective of the mode. this is crucial to getting google adsense to work, because adsense cannot work correctly - at all - with AJAX sites. you just can't do it! the javascript that you must insert into the page doesn't even _work_.

i've had to "templatise" - putting things into python, with a function which constructs the AJAX or "plain" version on demand - for the following things:

  • all hrefs
  • all form actions and all form submit buttons
  • all "subsections" of web pages (think "child windows" in a GUI app)

the hrefs i handle in AJAX mode by calling a javascript function ajax_dlink('thedivid', 'thehref... &source=thedivid'), and in "plain" mode by adding "&source=thedivid&__mainpage__=index" to the end of the href.

in both cases, the "source=" argument is stripped out _before_ the actual page is constructed; the rest of the URL is then stored in a dictionary, using "thedivid" as the key into the dictionary.
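the two modes boil down to a pair of helpers. this is only a sketch of the idea - make_href(), record_source() and the child_window_state dict are my names for illustration, not the framework's actual code:

```python
child_window_state = {}  # keyed by div id, holds the last URL per "child window"

def make_href(mode, divid, href):
    """Render an href for AJAX mode or "plain" mode from the same content."""
    if mode == "ajax":
        # AJAX mode: hand the link to a javascript helper along with the
        # target div id, so the response can be substituted into the DOM.
        return "javascript:ajax_dlink('%s', '%s&source=%s')" % (divid, href, divid)
    # plain mode: tag the link with the source div and the main page name.
    return "%s&source=%s&__mainpage__=index" % (href, divid)

def record_source(url):
    """Strip the source= argument out before the page is constructed,
    and remember the rest of the URL under that div id."""
    base, _, query = url.partition("?")
    source = None
    kept = []
    for p in [p for p in query.split("&") if p]:
        if p.startswith("source="):
            source = p[len("source="):]
        else:
            kept.append(p)
    stripped = base + ("?" + "&".join(kept) if kept else "")
    if source:
        child_window_state[source] = stripped
    return stripped
```

the point is that page content never needs to know which mode it is in: only the link-construction helper does.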

later on, in the case of "plain" mode, whenever a "child window" - or subsection page - named "thedivid" is encountered, this URL is retrieved, the "loading" of that URL is simulated - its python function called - and the content is "substituted" into the parent's web page as the inner HTML of the relevant <div/> tag!

in the AJAX case, what i do is add an extra AJAX call to the main index page, which causes the browser to request that "child" page and substitute it into the DOM at the browser end.

the result is that even when someone causes a "refresh" on the page, the "constructed" page is recreated (because the history of all "subsections" is stored in that dictionary).

i suppose i could do it on the server, in the same way as the "plain" mode - but.... naaaah :)

forms, forms, forms. these didn't turn out to be as tricky as i thought, once i decided that every submit should, instead of returning content, do a redirect to the page containing the content.

this gives me a place to "catch" things and decide where the actual destination should be - and also to strip out any further bits which should go into that dictionary of child-window state info.
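the submit-then-redirect pattern can be sketched like this - handle_submit() and its state dictionary are invented names for illustration, not the framework's code:

```python
child_window_state = {}  # hypothetical dict of child-window state, keyed by div id

def handle_submit(form, destination):
    """Process a form post, stash any child-window info, and answer with a
    redirect to the page that will actually render the content."""
    # pull out the bits that describe child-window state rather than content
    source = form.pop("source", None)
    if source is not None:
        child_window_state[source] = destination
    # a real mod_python handler would raise the redirect via apache's API;
    # here we just return the status line and Location header.
    return ("302 Found", {"Location": destination})
```

because the browser then GETs the destination page, a refresh simply re-renders that page rather than re-posting the form.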

i'm surprised that it all seems to be hanging together as nicely as it does, and i look forward to experimenting to create an automatic "frames" mode at some point, just to see if i can.

8 Jun 2006 (updated 8 Jun 2006 at 12:40 UTC) »

yes it can happen, chicago: people who have Certified you, and who are "closer" by degrees to the top-level seeds than you are, can, in your absence, also Certify other people.

then, the "max flow" algorithm "diverts" flow through those people, leaving not enough to "flow" through your "node".

what is supposed to happen is that as the number of people in the database increases, the "maximum capacity" of each "degree" is supposed to increase as well, so that more "flow" can reach more people.

that, however, requires a recompile of the C source code, mod_virgule.c.

mod_virgule has not been actively maintained since it was written, nearly six years ago.

p.s. i understand the theory, design and implementation behind the mod_virgule code very well, having done a complete rewrite into something called xmlvl (xml virgule language - before zope, dtml and xslt were well-known) and also a port of the trust metric algorithm to python.

p.p.s. lack of disk space on the advogato server(s) has in the past resulted in truncation of people's profiles. i've lost nearly a hundred Certs from other people at least once.


the trust metric algorithm uses a "maximum flow" algorithm to ascertain whether users are "certified" - and the closer you are linked to one of the "top" seeds - raph, alan, miguel and one other - the more likely that you are to receive some "flow".

so it's quite simple: Chicago isn't connected closely enough to one of the top-level seeds.

that's the way the algorithm works.
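for the curious, the core of a "maximum flow" computation is small enough to sketch. this is a toy Edmonds-Karp implementation of my own, not mod_virgule's actual code - but it shows how flow "diverted" through better-connected nodes can leave none spare for yours:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """capacity: dict of dicts, capacity[u][v] = cert/edge capacity."""
    # build the residual graph, including zero-capacity reverse edges
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path that still has spare capacity
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow  # no spare path left: this node gets no more flow
        # find the bottleneck and push flow along the path
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

with small per-degree capacities, adding more certified people near the seeds genuinely can leave zero flow for someone further out.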


sqlobject views

slight flaw in that code example - i had to use this:

class SiteSearch(sqlobject.SQLObject):
    class sqlmeta:
        #lazyUpdate = True
        cacheValues = False
        _cacheValue = False
    count = sqlobject.IntCol(default=None)

    def dropTable(cls, ifExists=False):
        if ifExists and not cls._connection.tableExists(cls.sqlmeta.table):
            return
        sql = "DROP VIEW %s" % (cls.sqlmeta.table,)
        cls._connection.query(sql)
    # sqlobject calls dropTable/createTable as classmethods
    dropTable = classmethod(dropTable)

    def createTable(cls, ifNotExists=False, createJoinTables=True,
                    createIndexes=True, applyConstraints=True,
                    connection=None):
        conn = connection or cls._connection
        if ifNotExists and conn.tableExists(cls.sqlmeta.table):
            return
        sql = cls.createTableSQL()
    createTable = classmethod(createTable)



i've just encountered the most _horrendous_ sql query i've ever had to design - it even beats the multi-alias-join thing that turns a sparse-entry recordset into a variable-width 2D table (for a demographic search).

the reason why i've had to use VIEWs is that the query has a COUNT column in it, and so requires a GROUP BY. the GROUP BY makes it impossible to do sensible multi-alias-joins, and not even a HAVING clause will do the trick.

so i had to first create the VIEW, then do a multi-alias-join multiple times on the VIEW.
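the post doesn't include the actual schema, so here's a minimal reconstruction of the pattern using sqlite3: the COUNT/GROUP BY gets buried in a VIEW, and the VIEW is then multi-alias-joined like an ordinary table. table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tagging (obj_id INTEGER, tag TEXT);
    INSERT INTO tagging VALUES (1, 'red'), (1, 'blue'), (2, 'red');

    -- the COUNT forces a GROUP BY, so bury it in a VIEW first...
    CREATE VIEW tag_counts AS
        SELECT tag, COUNT(*) AS n FROM tagging GROUP BY tag;
""")
# ...then the view can be multi-alias-joined like any other table.
rows = conn.execute("""
    SELECT t1.tag, t1.n, t2.tag, t2.n
      FROM tag_counts AS t1, tag_counts AS t2
     WHERE t1.tag = 'red' AND t2.tag = 'blue'
""").fetchall()
```

each alias of the view behaves as a pre-aggregated column source, which is exactly what the GROUP BY forbids in a single flat query.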



my project, the social networking one, has ground to a halt - from, believe it or not, mental overexertion :) an ordinary social net site: fine, no problem. lots of people posting, some chat stuff, blah blaahhhh. boring.

the easy stuff we managed in a reasonable amount of time (5 weeks). user-login, forum, chat, profile, picture uploading.

now it comes to the hard part: tagging, making the tags useful, and then search on the tags.

with a fourth normalised form database (the ultimate object-orientated design) it's all gone slightly crinkly. getting results out of such a database has to be done with JOINs on aliases of the same object table: obj_table AS column1, obj_table AS column2.

now imagine putting "please count the number of times a tag has been put onto any object" into 4th normalised form. i can't quite get my head round it. i will - eventually... just not... this... month!
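my guess at the shape of the problem, sketched with sqlite3: an object table holding both tags and the things they are applied to, plus a link table recording each tagging. the schema here is hypothetical, not richard's actual design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- everything is an object: tags and photos alike
    CREATE TABLE obj (id INTEGER PRIMARY KEY, type TEXT, value TEXT);
    -- a link row records "this tag was put onto that object"
    CREATE TABLE link (tag_id INTEGER, target_id INTEGER);
    INSERT INTO obj VALUES (1,'tag','sunset'), (2,'tag','beach'),
                           (3,'photo','p1'), (4,'photo','p2');
    INSERT INTO link VALUES (1,3), (1,4), (2,4);
""")
# "count the number of times a tag has been put onto any object"
counts = conn.execute("""
    SELECT obj.value, COUNT(link.target_id) AS uses
      FROM obj LEFT JOIN link ON link.tag_id = obj.id
     WHERE obj.type = 'tag'
     GROUP BY obj.id
     ORDER BY uses DESC
""").fetchall()
```

the LEFT JOIN keeps never-used tags in the result (with a count of zero), which a plain inner join would silently drop.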

i fully expect to be successful - i will just have to warp my tiny brain and wrap it around the problem several times before it all fits into place.

in the meantime i'm bouncing off the walls cos i really want to get this project completed!

does anyone know how to make this work?

it's a more advanced version of /sbin/udevsynthesize, which on debian will trigger SIX HUNDRED events - and that takes forever.

i'm working on depinit, and so have split udevsynthesize down into separate scripts - one for "essential" devices, one for networking, one for block devices, and another for non-essential tty devices.

each time the word "add" is shoved into /sys/class/something/something/uevent, udevd picks it up and shoves a symlink into /dev/.udev/queue/<thedevice>.

when the scripts and modprobes for that device are finally run, the symlink is removed (by udevd).

so, in shellscript language, in my (four) udevsynth scripts named udevsynth-tty, udevsynth-essential, udevsynth-block and udevsynth-net, i'm trying to ONLY "watch" and "wait" for those files (symlinks) that i did an "add" on, and to ignore all other symlinks.

the first problem that i encountered was that i need to call readlink on each of the things in the queue directory.

it's all got quite hairy!

#!/bin/bash -e
# (bash rather than sh: the script uses the "function" keyword and "==")

function get_queue() {
    list="`/usr/bin/find /dev/.udev/queue -ignore_readdir_race -type l -print0`"
    if [ "y$list" == "y" ] ; then
        # queue directory is empty - nothing left to wait for
        queue=1
        return
    fi
    queue="`/usr/bin/find /dev/.udev/queue -ignore_readdir_race -type l -print0 | xargs -0 -n1 readlink`"
}

function check_links() {
    # succeeds (returns 0) while $file is still in the queue
    get_queue
    if [ "y$queue" == "y1" ] ; then
        return 1
    fi
    echo "$queue" | grep -qF "$file"
}

# file_list (the raw uevent files) is set up earlier in the script.
# file_list2 is a list of /sys/class/*/(*/)uevent and has to be
# dirname-stripped to work with the find/xargs/grep trick, above.

file_list2="$first $default $last"

for file in $file_list; do
    [ "$file" ] || continue
    echo 'add' > "$file" || true
done

sleep 1

for f in $file_list2; do
    [ "$f" ] || continue
    file=`dirname $f`
    # wait until udevd has removed this device's symlink from the queue
    while check_links ; do
        sleep 1
    done
done

the social net site i'm working on is slowly getting there. my friend richard has done a total redesign of the underlying (4th normalised form) database. it's a _truly_ object-orientated database - totally abusing postgresql to have complete flexibility over data. i dread to think what kinds of search queries will find horrendous bugs in postgresql.

i've done a sparse-array -> 2d-array query before now, for demographic searches, using mysql, and mysql completely xxxxed up. in order to get the search to return correct results, i had to ask the customer to apply boolean logic to their searches, so that instead of NOT a AND NOT b AND NOT c they would write NOT (a OR b OR c).

the customer was naturally totally unimpressed.

4th normalised form means that you have an entry with pretty much nothing but an ID and a "type" field; you then have to do a sequence of JOINs - attributes AS attributes_table_NNN - to add more and more attributes, effectively constructing the rows of the table dynamically.

here's the bit where mysql went wrong: when i also did SELECT ... WHERE attributes_table_1.value = 'hello' AND attributes_table_2.value = 5 etc. etc., mysql returned completely the wrong results.
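the two search forms are logically equivalent (de morgan's law), so a correct engine must return identical rows for both. a quick check with sqlite3, on an invented attributes table of the attributes_table_NNN shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE attributes (obj_id INTEGER, name TEXT, value TEXT);
    INSERT INTO attributes VALUES
        (1,'greeting','hello'), (1,'level','5'),
        (2,'greeting','goodbye'), (2,'level','5');
""")
# the same multi-alias self-join, with the condition swapped in
query = """
    SELECT a1.obj_id
      FROM attributes AS a1, attributes AS a2
     WHERE a1.obj_id = a2.obj_id
       AND a1.name = 'greeting' AND a2.name = 'level'
       AND %s
"""
direct = conn.execute(
    query % "NOT a1.value = 'hello' AND NOT a2.value = '4'").fetchall()
demorgan = conn.execute(
    query % "NOT (a1.value = 'hello' OR a2.value = '4')").fetchall()
```

if the two result sets ever differ, the engine - not the query - is at fault, which is exactly the kind of bug described above.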


i have more faith in postgresql.

once we've got this code underway, and are happy with it, i think it should be a pretty easy task to convince richard to free-software-license it. it's _really_ useful code if you're into object-orientated insane levels of flexibility.

it's not in the slightest bit obvious that there's a database underneath it.

i continue to be impressed with mod_python+formencode+htmltmpl. i think i will try simpletal, because i really could do with the benefit of having the tags "buried" in the html rather than huuuge TMPL_XXX words - but, butbutbut... simpletal had _better_ have compiled templates, otherwise it's out the door even before it's begun.

28 Apr 2006 (updated 28 Apr 2006 at 22:05 UTC) »

love it. my cute fujitsu laptop is running very successfully with it. i am so lucky.

i have a battery life, from a default debian/unstable system, of 6 hours.

i have a boot time of 20 seconds, which means i am not interested in doing stuff like "suspend". okay, i am - i would love to be able to save the state of whatever my current development setup is. more specifically: i no longer feel the need to worry about the THREE MINUTE startup time (of my acer c112), and no longer feel obliged to keep this machine permanently switched on.

see, on my c112, i have STACKS of extra services installed - and i have to edit the /etc/init.d/XXXXX scripts to put "exit 0" at the top to disable them - or worse, remove the symlinks from /etc/rc*.d.

with depinit, i can have /etc/depinit/default/depend specify the minimum startup requirements ("normal" user mode with apache, ssh, x-windows and postgresql - all of which start up pretty much simultaneously).

then i can have some additional dummy service... err... for example: i occasionally have a requirement to use my laptop to pxe/netboot other systems: i can call that... oh... pxeserv. so i create /etc/depinit/pxeserv/depend containing the words dhcpserv, nfsserv, atftpd etc. - the dependent services i will need.

depctl -s pxeserv will then run those required services - when i NEED them, not when initscripts says i have to have them.

new web site

i'm absolutely delighted with the combination of ajax, mod_python, formbuilder, htmltmpl and sqlobject - it's like... a breath of fresh air that leaves you free to think "what" rather than "how".

the site is (yet another) social network site - but it's something that _i_ have, _i_ am doing, _i_ enjoy - and, here's the important bit: because i will know how it works, inside-out, i can extend it with ease.

there's nothing worse than having to work with somebody else's code and going "ugh" and "i would have definitely done that differently in order to do xyz in the future".

in other words, i want nobody to blame for screwing up but me :)

profiles, photos, messages, forum. the forum i love: because i already had messages, adding "forum" took about five hours - to complete functionality: create forums, view forums, view messages, post messages.

also i decided to put in a little trick: some javascript that refreshes the latest views - so if posts arrive in the meantime (including yours) you get to see them automatically.

here's the neat bit: it's done by refreshing _only_ the list of 10 last messages, via ajax techniques. it's a perfect candidate for doing real-time irc.
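the server side of that trick can be sketched as a function that renders only the fragment for the newest messages - the page's javascript polls it and swaps the result into the forum's div. the names, markup and message store here are my own illustration, not the site's code:

```python
# hypothetical in-memory message store: (id, author, text) tuples
messages = [(i, "user%d" % (i % 3), "post number %d" % i) for i in range(25)]

def last_messages_fragment(n=10):
    """Return just the HTML fragment for the newest n messages - the AJAX
    poll swaps this into the forum's div, so the rest of the page never
    reloads."""
    newest = messages[-n:]
    items = ["<li>%s: %s</li>" % (author, text) for _, author, text in newest]
    return "<ul>\n%s\n</ul>" % "\n".join(items)
```

because only this fragment crosses the wire on each poll, the same endpoint could drive a real-time irc-style view with very little extra work.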

you know - having discovered this trick, i can't imagine ever writing a web site that doesn't use ajax, ever again.


as part of (yet another) rather silly exchange, in which some rather silly person (yet again) asked me to "get a life", i decided to post a warning message, referencing a couple of my poems which i believed were appropriate.

i then forgot about it.

part of the poems site keeps a record of the number of views on your work: out of my 50 or so poems online on this site, the views vary from 3 to 20 - averaging around 12 or so. not any more!! i took a look and went "wtf??? that's a mistake: 116 views? 83 views??? where did _that_ come from?"


some people _were_ paying attention. this is good, cos that guy really _was_ being very silly. it's so unnecessary to imagine that difficult technical and strategic decisions can be dodged by attempting to insult people. unless he apologises, i do sincerely hope that nobody allows him to make important technical or strategic decisions: his judgement is clearly clouded by personal hate.

23 Apr 2006 (updated 28 Apr 2006 at 00:04 UTC) »

_excellent_, muhahahah. i decided to try depinit on my shiny new fujitsu p1510 with debian/unstable.

the boot time is... 20 seconds, including starting postgres, apache2, sshd and xorg.

