Older blog entries for jtauber (starting at number 200)

Voronoi Diagrams

Back in Creating Gradients Programmatically in Python I presented some code for writing out PNGs according to some rgb-function of x and y.

The relevant write_png(filename, width, height, rgb_func) is here:

http://jtauber.com/2008/11/png.py

This enables one to quickly generate PNGs based on all sorts of functions of position.
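For instance, a simple two-axis gradient can be expressed as such a function (a sketch: it assumes, as described above, that the rgb-function receives pixel coordinates and returns an (r, g, b) tuple of values 0–255):

```python
def gradient(x, y):
    # red increases left to right, blue top to bottom (for a 256x256 image)
    return x % 256, 0, y % 256

# write_png("gradient.png", 256, 256, gradient)
```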

For example, here's a Voronoi diagram:

Take any metric space and pick a finite set of points in that space to designate as "control points" (the black dots in the above example). Now colour every other point in the space based on which control point is closest. In other words, two points are the same colour if and only if they have the same "closest control point".

Here's how to generate such a diagram using write_png...

First of all, here's a function that given an (x, y) coordinate and a list of control points, returns a pair of:

  • which control point is the closest to (x, y)
  • what the distance was

def closest(x, y, control_points):
    # returns (index of the nearest control point, squared distance to it);
    # the squared distance is enough since we only ever compare distances
    closest_index = None
    closest_distance = None
    for i, pt in enumerate(control_points):
        px, py = pt
        d = (px - x) ** 2 + (py - y) ** 2
        if d == 0:
            return i, 0
        if closest_distance is None or d < closest_distance:
            closest_index = i
            closest_distance = d
    return closest_index, closest_distance
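As a quick sanity check (restating closest so the snippet stands alone; note the second value returned is the squared distance):

```python
def closest(x, y, control_points):
    # same logic as the function above
    best_i, best_d = None, None
    for i, (px, py) in enumerate(control_points):
        d = (px - x) ** 2 + (py - y) ** 2
        if d == 0:
            return i, 0
        if best_d is None or d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

control_points = [(0, 0), (10, 0), (0, 10)]
print(closest(1, 1, control_points))   # (0, 2) -- nearest (0, 0), squared distance 2
print(closest(10, 0, control_points))  # (1, 0) -- exactly on the second point
```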

Now we can use this and write_png to generate a PNG Voronoi diagram for a given set of control points:

def voronoi(filename, size, control_points):
    def f(x, y):
        c, d = closest(x, y, control_points)
        # draw the control points themselves in black
        # (d is a squared distance, so this is a dot of radius sqrt(5))
        if d < 5:
            return 0, 0, 0
        px, py = control_points[c]
        # just one way to generate a colour: scale the control point's
        # coordinates into the 0..255 range (// keeps the arithmetic
        # integral under Python 3 as well)
        m = px * 255 // size
        n = py * 255 // size
        return m, 255 - ((m + n) // 2), n
    write_png(filename, size, size, f)

Of course, this is just a brute-force way of establishing the Voronoi diagram, but for generating example images a few hundred pixels by a few hundred pixels, it's Good Enough.

Note that this choice of colouring, based on the coordinates of the control point, is just one of many possibilities. You could also colour based on the control point's index (i.e. c). The current approach has the disadvantage that two control points very close to one another can be given almost indistinguishable colours.
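A sketch of that index-based alternative: look the index up in a small fixed palette (the palette values here are made up, and the % wrap means colours repeat once there are more control points than palette entries):

```python
PALETTE = [
    (230, 25, 75), (60, 180, 75), (0, 130, 200), (255, 225, 25),
    (145, 30, 180), (70, 240, 240), (240, 50, 230), (128, 128, 128),
]

def index_colour(c):
    # ignore the control point's position entirely; just use its index
    return PALETTE[c % len(PALETTE)]
```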

The example diagram was just randomly generated with the following code:

import random
from voronoi import voronoi
space_size = 200
num_control = 8
control_points = []
for i in range(num_control):
    x = random.randrange(space_size)
    y = random.randrange(space_size)
    control_points.append((x, y))
voronoi("voronoi.png", space_size, control_points)

You can read more about Voronoi diagrams on Wikipedia.

Syndicated 2008-11-07 20:33:58 (Updated 2008-11-07 20:57:34) from James Tauber's Blog

Song Project: RH Piano Riff

Most of my pop song ideas begin with either a chord progression voiced a particular way on piano or some bass line. The song we'll be talking about here falls into the first category.

I remember when I first started composing in high school, I did a lot of songs that were just permutations of I, IV, V and vi chords (so in C, that would be C, F, G and Am).

I remember one instrumental I wrote in Year 10 (called "Mystical Movements in Green")—which my drama class choreographed a dance to—used the chord progression vi IV I V and in particular was voiced with the vi and I in the second inversion. I always liked the way it sounded.

A couple of weeks ago, I was improvising on my digital piano and took a liking to the following variation:

    III . vi . IV . I V

with the vi and I again in the second inversion. I was playing in F at the time with a driving 3+3+2 rhythm in the right hand, so the resultant riff was:

[score image]

which sounds something like this:


[audio embed: download if the embed doesn't work]

This will form the basis for the song.

All material for this project is made available under a Creative Commons BY-NC-SA license so you are free to redistribute and remix with attribution but under the same license and not commercially.

Syndicated 2008-11-06 08:30:24 (Updated 2008-11-06 08:59:24) from James Tauber

Song Project

One series I thought would be fun to kick off this month is to talk about music composition and record producing and engineering by working through a song. I've chosen a song I just started working on last month and the idea is I'll go through the process from initial idea to final produced song over a series of blog entries.

All material for this project is made available under a Creative Commons BY-NC-SA license so you are free to redistribute and remix with attribution but under the same license and not commercially.

I'll get started with the music in a separate post right away.

Syndicated 2008-11-06 08:04:55 (Updated 2008-11-06 08:19:49) from James Tauber

Atom, Google Reader and Duplicates on Planets

For a while I've wondered why posts syndicated across multiple planets don't get picked up by Google Reader as duplicates (and automatically marked read once I've read them the first time around).

I wasn't sure whether the problem was:

  • the source feeds
  • the planet feeds
  • Google Reader itself

so I decided to investigate further with my own feed as the source and the three planets my site is syndicated to (that I know of).

Let's take my post Cleese and Disk Images.

My feed gives both an id and a link for both the feed itself and each individual entry. That makes it possible, at least, for planets and readers to do the Right Thing. So I don't think the problem is my feed.

On the Official Planet Python:

  • there is an RSS 1.0 feed
  • the rdf:about is not the same as the id in my feed but is the link URL (subtly different, but the planet is doing the wrong thing, IMHO)
  • there is no authorship or source information

On both the Unofficial Planet Python and on Sam Ruby's Planet Intertwingly:

  • there is an Atom feed
  • the entry Cleese and Disk Images has the same id as in my feed and a distinct link.
  • the source element gives appropriate information about my blog and the original feed
  • the author (which I only give on the source feed, not each entry) is not inherited onto the entry in the planet feed, only included in the source
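Schematically, an entry in a planet feed that does the Right Thing looks something like this (all ids, URLs and names below are placeholders, not the actual feed contents):

```xml
<entry>
  <!-- same id as the entry in the source feed -->
  <id>tag:example.com,2008:/blog/original-entry</id>
  <!-- the planet's own link for the entry may differ -->
  <link rel="alternate" href="http://planet.example.org/entry"/>
  <title>Example Entry</title>
  <source>
    <!-- identifies the original feed, including its author -->
    <id>tag:example.com,2008:/blog/feed</id>
    <title>Example Source Blog</title>
    <author><name>Example Author</name></author>
  </source>
</entry>
```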

Note that the handling of the author by the latter two feeds is correct per the Atom RFC, although I have noticed that Safari's feed reader gets this wrong and, despite the author in the source element, uses the inherited author from the planet feed itself.

But, in short, the Atom-feed-based Planets do the Right Thing, although IMHO the RSS-1.0-based Official Planet Python does not. That may not be the Planet's fault. The RSS 1.0 Spec (or any RSS for that matter) may not make the distinction between id and link.

So given that my feed and two of the planet feeds do the right thing, I guess that places the blame with Google Reader.

Why does Google Reader not honour the entry id and automatically mark duplicates as read once you've read the entry the first time? That's my pony request for Google Reader.

And by the way, the same thing applies to feeds themselves, not just entries. Feedburner, for example, does the right thing and passes through the id of a source Atom feed into its own Atom feed version. However, if you subscribe to both the source and the Feedburner version of a feed, Google Reader doesn't identify them as the same feed. Of course, if either is RSS, I'd assume all bets are off.

So, in summary, Atom supports doing the Right Thing. The Atom-based Planets do the Right Thing. Google Reader doesn't take advantage of this.

Syndicated 2008-11-06 06:04:34 (Updated 2008-11-06 06:04:35) from James Tauber

Dear America

I'm glad you have elected someone you think brings hope and change. I hope he turns out to be one of the truly great presidents.

However, after the last eight years, you need a strong dose of fiscal conservatism. I hope your choice turns out to be the right one for that. I am not yet convinced.

Syndicated 2008-11-05 19:55:43 (Updated 2008-11-05 19:55:44) from James Tauber

Cleese and Disk Images

Previously I talked about setting up a toolchain to compile i386-elf binaries for hobby OS writing on Mac OS X.

The next step in getting Cleese development working on Mac OS X was working out how to build disk images for VMware Fusion for a "hello world" kernel. I got about half way over the weekend, but Brian Rosner worked out the rest Monday night.

VMware can mount a normal disk image as a floppy but can't do the same for hard drives. Turns out, though, you can create floppy images larger than 1.44MB (although I don't know if there's an upper limit).

Here's the make target Brian came up with:

cleese.img: KERNEL.BIN
        hdiutil create -size 5M -fs "MS-DOS" -layout NONE cleese
        mv cleese.dmg cleese.img
        mkdir -p mnt
        mount_msdos -o nosync `hdid -nomount cleese.img` ./mnt
        cp -r boot KERNEL.BIN ./mnt
        umount -f ./mnt
        rm -r ./mnt

This creates a 5MB disk image, mounts it and copies the "boot" directory from GRUB and our kernel KERNEL.BIN on to the image.

This image isn't bootable by VMware yet. You need to boot off another floppy that has GRUB and is bootable, but this is a one-off operation. You can easily create a bootable GRUB disk with just

cat boot/grub/stage1 boot/grub/stage2 > grub.img

Once you've booted to the GRUB command line, you can switch to cleese.img as your floppy and type

setup (fd0)

and that will copy GRUB onto the boot sector. From that point on, cleese.img is all you need.

To avoid having to do that step every time KERNEL.BIN updates, I wrote an additional make target that just updates KERNEL.BIN on an existing image.

update-image:
        mkdir -p mnt
        mount_msdos -o nosync `hdid -nomount cleese.img` ./mnt
        cp KERNEL.BIN ./mnt
        umount -f ./mnt
        rm -r ./mnt

As a quick guide to what that's doing:

  • the -p option to mkdir just stops it complaining if mnt already exists
  • hdid -nomount cleese.img binds the disk image to a /dev and returns the device path
  • that device path is then used as an argument to mount_msdos (hence the backticks) which mounts that device as ./mnt
  • the file(s) are copied on, the image unmounted and the mount point deleted

I'm not sure why the -o nosync is needed. Maybe it isn't.

In the original target, the -layout NONE option to hdiutil ensures no partition map is created for the drive.

Syndicated 2008-11-05 09:26:14 (Updated 2008-11-05 09:49:40) from James Tauber

Daylight Saving Time

Yesterday I was asked at work what the origins of daylight saving time were. People who know me know I can never just say "I don't know" to a question like that—I had to go do some research.

The short answer is "war and golf" but here is a longer version, gleaned from various articles online and a little prior knowledge on the topic.

While Benjamin Franklin is sometimes credited with the idea of setting clocks differently in the summer, his idea was well before its time as there wasn't a notion of standard time in his day. The notion that clocks would be set according to the "real time" (i.e. based on the Sun) of some other location has its origin with the railroad system. In November 1840, the Great Western Railway in England adopted London Time for all their schedules. The US and Canada followed suit with their own Standard Time in November 1883.

While Standard Time was initially for the railroads, it began to be adopted across the board, eventually being enacted into law in the US by the Standard Time Act of 1918.

An Englishman, William Willett, made the observation, a century after Ben Franklin had done the same, that people were wasting the early hours of the day in summer by sleeping in. He was also an avid golfer who was frustrated at dusk cutting short his game. So he started campaigning for clocks to be advanced during the summer months. The idea was ridiculed and he died in 1915 without seeing it adopted.

In April 1916, however, Germany started advancing the clock an hour to reduce electricity usage and hence fuel consumption during the war. Many European countries immediately followed suit and Britain started in May 1916. When the US joined the war, they too adopted this daylight saving measure.

US Congress repealed the law in 1919, but Woodrow Wilson (incidentally also an avid golfer) vetoed the repeal. Congress overrode the veto and so daylight saving stopped, although was adopted locally in some places.

In World War II, it was reintroduced, this time all year around. The US had daylight saving from February 1942 to September 1945. After the war, it went back to being a local issue.

It was a controversial issue through the early 1960s, but the confusion caused by so many local differences resulted in the US passing the Uniform Time Act in 1966, which reintroduced daylight saving across the country unless overridden by state law.

My own home state of Western Australia is currently in a three-year trial of daylight saving and will hold a vote next year as to whether to keep it.

Syndicated 2008-11-04 08:21:40 (Updated 2008-11-04 18:46:31) from James Tauber

Python's re.DEBUG Flag

Eric Holscher points out a Python gem I never knew about. If you pass in the number 128 (or, as I have a preference for flags in hex, 0x80) as the second arg to re.compile, it prints out the parse tree of the regex:

>>> import re
>>> pattern = re.compile("a+b*\s\w?", 0x80)
max_repeat 1 65535
  literal 97
max_repeat 0 65535
  literal 98
in
  category category_space
max_repeat 0 1
  in
    category category_word

While re.compile is documented as having the signature

compile(pattern[, flags])

the particular flag 0x80 is not documented as far as I can tell.

I thought I'd dig in further.

Firstly, note that re appears to cache patterns: if you repeat the same re.compile, it returns the same object and doesn't print the parse tree again. There is a re.purge function for purging this cache but, while it is mentioned in help(re), it is not in the main documentation.
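The caching is easy to observe (checked against modern CPython; the cache itself is an implementation detail, so don't rely on this identity behaviour):

```python
import re

a = re.compile(r"a+b*")
b = re.compile(r"a+b*")
print(a is b)   # True -- the second call returned the cached pattern

re.purge()      # empty the cache
c = re.compile(r"a+b*")
print(c is a)   # False -- a fresh pattern object was compiled
```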

Secondly, note that the flag 0x80 is actually defined as DEBUG in the re module, so a more robust form would be:

re.compile(pattern, re.DEBUG)
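And re.DEBUG really is just that number (in current CPython it is a member of re.RegexFlag, but it still compares equal to the plain integer):

```python
import re

print(hex(re.DEBUG))    # 0x80
print(re.DEBUG == 128)  # True
```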

A source code comment for DEBUG and another undocumented flag TEMPLATE (which supposedly disables backtracking) mentions:

# sre extensions (experimental, don't rely on these)

which explains why they aren't documented.

In the Python source code, there is also a Scanner class defined with the comment "experimental stuff (see python-dev discussions for details)".

A quick search of the python-dev mailing list found nothing. Perhaps a Python core developer could fill us in.

Syndicated 2008-11-03 11:47:43 (Updated 2008-11-03 11:49:27) from James Tauber

Cleese and a New Toolchain

Back in July 2003, I had an idea to "make the Python interpreter a micro-kernel and boot directly to the Python prompt". Thus started Cleese, which I worked on with Dave Long. We made a good deal of progress and I learned a tremendous amount.

In February 2007, I moved Cleese from SourceForge to Google Code Project Hosting in the hope of restarting work on it. In between 2003 and 2007 I'd become a switcher and so I needed to work out how to do on OS X what I'd been doing with a very strange hybrid of Windows command line and Cygwin before. Alas I never got around to that part.

Then about a week ago, inspired by Brian Rosner's interest in the project, I decided to give it another go. I also decided to use it as an opportunity to finally learn Git.

First goal: build a "hello world" kernel (no Python yet). Fortunately I had one from the initial stages of Cleese 2003, but it wouldn't build. In particular ld was barfing on the -T option used to specify a linking script (which OS X's ld doesn't support).

After asking some questions on the #osdev channel on freenode, I discovered I'd need a completely new gcc and binutils toolchain to support i386-elf. This didn't turn out to be difficult at all, though.

Here were my steps:

export PREFIX=/Users/jtauber/Projects/cleese/toolchain
export TARGET=i386-elf

cd ~/Projects/cleese
curl -O http://ftp.gnu.org/gnu/binutils/binutils-2.19.tar.gz
mkdir toolchain
tar xvzf binutils-2.19.tar.gz
cd binutils-2.19
./configure --prefix=$PREFIX --target=$TARGET --disable-nls
make
make install
cd ..
curl -O http://ftp.gnu.org/gnu/gcc/gcc-4.2.4/gcc-core-4.2.4.tar.bz2
bunzip2 gcc-core-4.2.4.tar.bz2
tar xvf gcc-core-4.2.4.tar
cd gcc-4.2.4
./configure --prefix=$PREFIX --target=$TARGET --disable-nls --enable-languages=c --without-headers
make all-gcc
make install-gcc

Now my "hello world" kernel builds. Next goal: working out how to programmatically build disk images for VMware Fusion (or, failing that, QEMU).

Syndicated 2008-11-03 03:43:26 (Updated 2008-11-03 03:44:59) from James Tauber

2 Nov 2008 (updated 2 Nov 2008 at 12:06 UTC)

Cell naming

My previous post introduced my adventures into C. elegans.

I've gone ahead and implemented my own little cell lineage browser using django-mptt. Once I've added more functionality, I'll put it online.

But for now, I'm intrigued by the naming of cells in the lineage. In particular, the majority of cells are named by appending either 'a' or 'p' to the parent cell's name. What do 'a' and 'p' stand for?

As an example:

P0 -> P1' -> P2' -> C

but then

  • C divides into Ca, Cp
  • Ca divides into Caa and Cap; Cp divides into Cpa and Cpp

Caa, Cpa then have a slightly different progression than Cap and Cpp:

  • Caa and Cpa respectively split into Caaa, Caap and Cpaa and Cpap
  • these then split into Caaaa, Caaap, Caapa, Caapp, Cpaaa, Cpaap, Cpapa and Cpapp
  • these then split into the 16 you'd expect, except that Cpapp splits into what are called Cpappd and hyp11; Caapp splits into Caappd and PVR; and Caapa splits into Caapap and DVC.

Cap and Cpp progress as follows:

  • they split into Capa, Capp, Cppa, Cppp as you'd expect
  • these split into Capaa, Capap, Cappa, Cappp, Cppaa, Cppap, Cpppa, Cpppp as you'd expect
  • these then split into Capaaa, Capaap, Capapa, Capapp, Cappaa, Cappap, Capppa, Capppp, Cppaaa, Cppaap, Cppapa, Cppapp, Cpppaa, Cpppap, Cppppa, Cppppp
  • and finally the 32 you would expect except Cppppp splits into what are called Cpppppd and Cpppppv

This is just the C lineage, which accounts for less than 10% of the cells. But I'd love to know what the 'a' and 'p' stand for; what the 'd' and 'v' stand for; and why hyp11, PVR and DVC get such distinct names.
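The regular part of the scheme is easy to model: each division appends a letter ('a' or 'p' in the cases above) to the parent's name. A minimal sketch of that regular case (ignoring the exceptional divisions like Cpappd and hyp11):

```python
def divisions(root, generations):
    # each generation, every cell splits into an 'a' and a 'p' daughter
    cells = [root]
    for _ in range(generations):
        cells = [name + suffix for name in cells for suffix in "ap"]
    return cells

print(divisions("C", 2))       # ['Caa', 'Cap', 'Cpa', 'Cpp']
print(len(divisions("C", 5)))  # 32
```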

UPDATE: I added a "cell type" field to my browser and it revealed a couple of useful things: the "leaf nodes" (i.e. final cells) from Cap and Cpp are all marked as of cell type "muscle". The leaf nodes from Cpa (including hyp11) are all marked cell type "hypodermis". The leaf nodes from Caa are a little more interesting: The Caaa... leaf nodes are all "hypodermis". The leaf nodes from Caap are the most interesting, though. Caappd is "hypodermis", Caapap is marked as dying, and PVR and DVC are neurons.

UPDATE 2: Just as a point of comparison, there is another founder cell D whose descendants are a lot cleaner. D results in 20 cells, all of type "muscle". All are named with a/p. The only reason it's not a power of 2 is that the two cells D{a|p}pp split into 4 whereas the others at that level split into only 2.

UPDATE 3: Based on http://en.wikipedia.org/wiki/Anatomical_terms_of_location I'm now convinced a, p, d, and v refer to anterior, posterior, dorsal and ventral respectively.

Syndicated 2008-11-02 05:39:49 (Updated 2008-11-02 06:51:50) from James Tauber
