I got replies from Michael Meeks and Maciej Stachowiak on
the gnome-components list. Maciej pointed out the existence
of a "private" flag that can be passed by the client when
instantiating a bonobo object, which will ensure that a new
object is created. The drawback is that the client would
have to know to pass this parameter, and so would be somewhat
inconvenienced. A more reasonable solution, he continued, would be to write
a "proxy" factory which uses the "private" flag to instantiate
gnome-vim instances. This means that each client would get
a unique instance, but wouldn't have to do anything special.
Michael encouraged me to submit a bug report about this,
so that a new feature can be added to bonobo-activation
after Gnome 2.0 is out, which will handle my case more cleanly.
PeaceWorks has some contracts to supply public access
workstations in Kitchener/Waterloo and Cambridge. The platform
was (unfortunately) chosen to be Win 2000, which was beyond our control
-- I think Linux would have been well-suited for this task.
I have been working on a useful Linux-based backup solution
for these workstations, which will likely be released as
free software within the next few months.
The idea is to be able to back up the complete state of
each workstation at any point, and to be able to use these
backups to restore a machine to a known state. This way,
a backup can be made after the machine is installed and
configured, and the machine can always easily be restored to
this clean, working state. If more "standard" software is
installed, another backup can be made.
There is at least one
hardware solution which offers similar
functionality by mirroring partitions, at a cost of
about CDN $110-115 per machine. However, this doesn't allow
for backups to be taken off-site.
The key to our (cheaper) solution is that the contract calls
for each workstation to be equipped with a CD burner.
Using Python, dialog and cdrtools
(mkisofs and cdrecord),
I've created a bootable CD-ROM which will back up or restore a raw partition
using one or more CD-R or CD-RW discs. The system can be used in "normal"
or "expert" mode. In "normal" mode the program holds the user's hand.
All settings (i.e. the partition to back up, CD burner speed, etc.) are
read from a config file; we wouldn't want to scare anyone off by
presenting them with a list of /dev/hda1, /dev/hda2, etc.
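To give a feel for "normal" mode, the config file might look something like the following. These key names and the filename are purely illustrative; I'm sketching the idea, not the actual file format:

```
# backup.conf -- hypothetical example, not the real format
partition = /dev/hda1    # the partition to back up or restore
burner    = /dev/hdc     # the CD burner device
speed     = 8            # burn speed passed to cdrecord
compress  = yes          # compress the data on-the-fly
```

With everything pinned down here, "normal" mode never has to ask the user a device-name question.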
There have been some interesting technical challenges to get this
system working. The first problem was where to get the temporary space
for creating the CD images. Mkisofs won't read from a raw device, and
won't accept a file on stdin. One solution is to copy a chunk of the
partition into a temporary file, which mkisofs can then use to produce the
filesystem. However, this would require a special 700 MB partition
for temporary space on the disk, which wastes space and means the backup
solution would only work on pre-prepared systems.
Instead, I took a look at the mkisofs source. For each file to be added to
the ISO filesystem, mkisofs first needs to determine its size (normally
via stat()) and later reads the entire file. I hacked mkisofs so
that, under certain "magic" circumstances, it instead invokes a script to
obtain the file size and the file data. The script can use "dd" to
read a chunk of the partition. With this piece of the puzzle in place,
it's possible to burn the CD on-the-fly with no temporary space whatsoever.
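The helper's two jobs -- report a chunk's size, then stream its bytes -- can be sketched in Python as follows. The chunking scheme and function names here are my own invention for illustration; the actual hack invokes a shell script built around dd:

```python
CHUNK = 700 * 1024 * 1024   # roughly one CD-R's worth of raw data


def chunk_size(total_size, index, chunk=CHUNK):
    """Size in bytes of chunk `index` of a partition `total_size` bytes
    long -- what the hacked mkisofs asks for in place of stat()."""
    start = index * chunk
    return max(0, min(chunk, total_size - start))


def emit_chunk(path, index, out, chunk=CHUNK):
    """Stream chunk `index` of the raw device at `path` to `out` --
    the moral equivalent of dd if=path bs=1M skip=... count=..."""
    with open(path, "rb") as dev:
        dev.seek(index * chunk)
        remaining = chunk
        while remaining > 0:
            buf = dev.read(min(1024 * 1024, remaining))
            if not buf:
                break               # ran off the end of the device
            out.write(buf)
            remaining -= len(buf)
```

Because mkisofs only ever asks for the size and then reads the data sequentially, this is all the protocol the helper needs to speak.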
The next challenge is to compress the data on-the-fly as well.
The first difficulty lies in the fact
that you can't predict how large the compressed output will be unless you
compress it twice. The second difficulty is figuring out how to compress only
part of a file, and knowing how much input was processed, so that it's
possible to later start at that point to compress the next chunk.
The first difficulty
is solved by always assuming the compressed output will be the size of a
full CD. If it turns out to be less, the output can be padded to this size.
To solve the second problem, I wrote a small C program which uses zlib
to compress a data stream. Zlib has an interface with which you can
"feed" it buffers full of data, which it will then compress into your
output buffer. When the size of the output gets close to a full CD,
I pretend that the end of input has been reached. Then I pad the output
to the full size.
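The same feed-and-cap idea can be sketched with Python's zlib bindings, which wrap the same library my C program uses. This is a rough equivalent, not the actual program; the function name, buffer sizes, and the sync-flush bookkeeping are my own choices:

```python
import zlib

CD_SIZE = 650 * 1024 * 1024   # capacity of one disc (illustrative)


def compress_one_disc(infile, outfile, limit=CD_SIZE, bufsize=64 * 1024):
    """Compress from infile until the output nears `limit`, then finish
    the stream and pad it with zeros to exactly `limit` bytes.  Returns
    how many input bytes were consumed, so the next disc can resume
    compressing from that point."""
    comp = zlib.compressobj()
    written = 0
    consumed = 0
    # Sync-flush after each buffer so `written` tracks the real output
    # size; leave headroom for one more (possibly incompressible) buffer.
    while written < limit - (bufsize + 4096):
        data = infile.read(bufsize)
        if not data:
            break                              # true end of input
        consumed += len(data)
        out = comp.compress(data) + comp.flush(zlib.Z_SYNC_FLUSH)
        outfile.write(out)
        written += len(out)
    out = comp.flush()                         # pretend end-of-input reached
    outfile.write(out)
    written += len(out)
    outfile.write(b"\0" * (limit - written))   # pad out to a full disc
    return consumed
```

A restore can decompress each disc's stream and simply ignore the zero padding after the end of the zlib stream.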
In summary, this project has presented some surprisingly interesting
problems. Without the enormous base of free software with which to build
this solution, including the creation of a bootable CD, CD-burner drivers,
on-the-fly generation of ISO filesystems, streaming compression and a
rich enough environment from which to execute all this code, development
costs would surely have been in the tens of thousands, if not more.
I cobbled this together in 30-35 hours.
Standing on the shoulders of giants.