Older blog entries for jdhildeb (starting at number 7)

As a Canadian expecting to have to buy a newer car this year, I've been doing some research into what kind of car to look for. Both gas/electric hybrids and vehicles using alternative fuels (such as compressed natural gas) are very clean, but given how little we use the car, I decided it isn't worth spending that much money on one.

Instead, I've been trying to find out whether I should favour a diesel vehicle. Diesel engines certainly use less fuel, but do they produce fewer emissions? And are their emissions, overall, more or less harmful than those from gasoline engines?

First, I found out that both the Canadian and US governments are aiming to clean up diesel emissions, which would be done by installing particulate filters on diesel vehicles. However, these filters will only be introduced once ultra-low sulphur diesel fuel becomes widespread, because this is a prerequisite for the filters to work effectively. Government legislation comes into effect (in Canada) in 2006 which requires refiners and importers to reduce sulphur content to ultra-low levels, and the first vehicles with "advanced emission control technologies" (particulate filters) will not be available until at least 2007.

Second, I was curious to know how current diesel emissions compare to gasoline emissions. The EPA produces a listing of vehicles rated by emissions on a scale of 1-10 (10 being the "cleanest"). I noted that there are only gasoline vehicles in the 7-10 range; the cleanest diesel (the New Volkswagen Beetle) is rated only 6 out of 10.

My conclusion is that at least in the short term (the next 3-5 years), gasoline is favourable over diesel in terms of emissions. Even after 2006, when ultra-low sulphur becomes the norm, an existing diesel vehicle would need to be retrofitted with a particulate filter in order to reduce emissions. And as this will be very new technology, it will likely be expensive to have this done.

12 Dec 2003 (updated 12 Dec 2003 at 21:05 UTC) »

Never really got into blogging, but after learning of Ettore's passing today I was moved to read some of his friends' memories about him. I like the way people use blogs to connect with each other, and get to know each other (I also find it cool the way Planet Gnome aggregates lots of different blogs into one using RSS).

I never met Ettore myself, but I did trade a few emails with him when working on gnome-vim. I remember reading his blog back a couple of years ago when I was getting to know the names of people in the gnome community, and feeling like I got to know him a little bit.

18 May 2002 (updated 18 May 2002 at 20:51 UTC) »

Gnome-vim

I got replies from Michael Meeks and Maciej Stachowiak on the gnome-components list. Maciej pointed out the existence of a "private" flag that can be passed by the client when instantiating a bonobo object, which will ensure that a new object is created. The drawback is that the client would have to know to pass this parameter, and so would be somewhat gnome-vim specific.

A more reasonable solution, he continued, would be to write a "proxy" factory which uses the "private" flag to instantiate gnome-vim instances. This means that each client would get a unique instance, but wouldn't have to do anything special.
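The proxy-factory idea can be sketched in miniature. The following toy Python model is purely illustrative (all names are my own inventions; the real mechanism is bonobo-activation/oaf and CORBA, not this): a factory component internally passes the "private" flag, so every client gets its own instance without doing anything special.

```python
class Activator:
    """Toy stand-in for bonobo-activation. Activating a plain component
    returns a shared singleton; a factory component is asked to produce
    a fresh object on every activation."""

    def __init__(self):
        self._entries = {}   # name -> (is_factory, constructor)
        self._shared = {}    # cached singleton instances

    def register(self, name, ctor, factory=False):
        self._entries[name] = (factory, ctor)

    def activate(self, name, private=False):
        is_factory, ctor = self._entries[name]
        if is_factory or private:   # the "private" flag forces a new object
            return ctor()
        return self._shared.setdefault(name, ctor())


class VimControl:
    """Stand-in for the gnome-vim control."""


act = Activator()
act.register("GnomeVim:control", VimControl)

# The proxy factory: a normal activation of the proxy becomes a "private"
# activation of the real control, so each client gets its own instance
# without ever knowing about the flag.
act.register("GnomeVim:proxy",
             lambda: act.activate("GnomeVim:control", private=True),
             factory=True)
```

Activating "GnomeVim:control" directly twice yields the same shared object, while activating "GnomeVim:proxy" twice yields two distinct instances, which is exactly the behaviour gnome-vim needs.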

Michael encouraged me to submit a bug report about this, so that a new feature can be added to bonobo-activation after Gnome 2.0 is out, which will handle my case more cleanly.

CDBackup

PeaceWorks has some contracts to supply public access workstations in Kitchener/Waterloo and Cambridge. The platform was (unfortunately) chosen to be Win 2000, which was beyond our control -- I think Linux would have been well-suited for this task. I have been working on a useful Linux-based backup solution for these workstations, which will likely be released as free software within the next few months.

The idea is to be able to back up the complete state of each workstation at any point, and to be able to use these backups to restore a machine to a known state. This way, a backup can be made after the machine is installed and configured, and the machine can always easily be restored to this clean, working state. If more "standard" software is installed, another backup can be made.

There is at least one hardware solution which offers similar functionality by mirroring partitions, at a cost of about CDN $110-115 per machine. However, this doesn't allow for backups to be taken off-site.

The key to our (cheaper) solution is that the contract calls for each workstation to be equipped with a CD burner. Using Mindi, Python, dialog and cdrtools (mkisofs and cdrecord), I've created a bootable CDROM which will back up or restore a raw partition using one or more CD-Rs or CD-RWs. The system can be used in "normal" or "expert" mode. In "normal" mode the program holds the user's hand: all settings (e.g. the partition to back up, CD burner speed) are read from a config file, since we wouldn't want to scare anyone off by presenting them with a list of /dev/hda1, /dev/hda2, etc.

There have been some interesting technical challenges in getting this system working. The first problem was where to get the temporary space for creating the CD images. Mkisofs won't read from a raw device, and won't accept a file on stdin. One solution is to copy a chunk of the partition into a temporary file, which mkisofs can then use to produce the filesystem. However, this would require a special 700 MB partition for temporary space on the disk, which wastes space and means the backup solution would only work on pre-prepared systems.

Instead, I took a look at the mkisofs source. For each file to be added to the ISO filesystem, mkisofs first needs to determine its size (normally via stat()) and later reads the entire file. I hacked mkisofs so that, under certain "magic" circumstances, it instead invokes a script to obtain the file size and the file data. The script can use "dd" to read a chunk of the partition. With this piece of the puzzle in place, it's possible to burn the CD on-the-fly with no temporary space whatsoever.
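The dd-style chunk reading can be sketched as follows. This is a hypothetical Python illustration of the idea only (the real implementation is a shell script hooked into a patched mkisofs, and the chunk size here is an assumed constant):

```python
import os

CHUNK_SIZE = 650 * 1024 * 1024  # roughly one CD-R worth of data (assumed)

def read_chunk(device_path, chunk_index, chunk_size=CHUNK_SIZE, bufsize=1 << 20):
    """Yield one chunk of a raw device in buffer-sized pieces, the way
    `dd bs=1M skip=... count=...` would: seek to the chunk's offset and
    read at most chunk_size bytes."""
    with open(device_path, "rb") as dev:
        dev.seek(chunk_index * chunk_size)
        remaining = chunk_size
        while remaining > 0:
            buf = dev.read(min(bufsize, remaining))
            if not buf:          # end of device: short final chunk
                break
            remaining -= len(buf)
            yield buf
```

Because the data is produced incrementally, the hacked mkisofs can consume it and hand it straight to cdrecord with no intermediate file.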

The next challenge is to compress the data on-the-fly as well. The first difficulty lies in the fact that you can't predict how large the compressed output will be unless you compress it twice. The second difficulty is figuring out how to compress only part of a file, and knowing how much input was processed, so that it's possible to later start at that point to compress the next chunk.

The first difficulty is solved by always assuming the compressed output will be the size of a full CD. If it turns out to be less, the output can be padded to this size. To solve the second problem, I wrote a small C program which uses zlib to compress a data stream. Zlib has an interface with which you can "feed" it buffers full of data, which it will then compress into your output buffer. When the size of the output gets close to a full CD, I pretend that the end of input has been reached. Then I pad the output to the full size.
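A minimal sketch of this chunked compression, in Python's zlib binding rather than the C program described above (the function name and slack margin are my own inventions):

```python
import zlib

def compress_chunk(chunks, max_output, slack=1 << 16):
    """Feed input buffers to zlib until the compressed output gets within
    `slack` bytes of `max_output`, then pretend the input has ended:
    flush the stream, pad with zeros to exactly max_output bytes, and
    report how many input bytes were consumed so the next chunk of the
    partition can be compressed starting from that point."""
    comp = zlib.compressobj()
    out = bytearray()
    consumed = 0
    for buf in chunks:
        out += comp.compress(buf)
        consumed += len(buf)
        if len(out) >= max_output - slack:
            break                      # "end of input" reached early
    out += comp.flush()                # finalize a complete zlib stream
    assert len(out) <= max_output, "slack margin was too small"
    out += b"\0" * (max_output - len(out))   # pad to a full "CD"
    return bytes(out), consumed
```

On restore, a decompressor simply stops at the end of the zlib stream and ignores the zero padding, and the `consumed` count tells the backup tool where in the partition the next CD's data begins.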

In summary, this project has presented some surprisingly interesting problems. Without the enormous base of free software to build on (bootable CD creation, CD-burner drivers, on-the-fly generation of ISO filesystems, streaming compression, and a rich enough environment in which to run all this code), development costs would surely have been in the tens of thousands of dollars, if not more. As it was, I cobbled this together in 30-35 hours.

Standing on the shoulders of giants, indeed. :)

13 May 2002 (updated 13 May 2002 at 09:37 UTC) »
Water

walters: One thing that can be done to improve access to clean water is to support (financially or otherwise) organizations which are working on this problem.

The Mennonite Central Committee (MCC) is an effective organization working on this issue (and many others) in developing countries. One aspect of how MCC works that I particularly appreciate is that they don't go into a country and set up their own program; rather, they coordinate and cooperate with people and groups already in the country to achieve common goals. MCC also carries out development programs in North America (it's not as if so-called "first world" countries don't have development problems of their own). Search google for mcc water.

I work for PeaceWorks Computer Consulting, which supports several non-governmental organizations (NGOs), including MCC. In supporting these organizations we make use of and contribute to free software as much as we can, which makes this job doubly-satisfying.

Bash

Bash programmable completion is very cool, and it's included in the bash from debian/unstable. I'm addicted.

Well, I finally submitted my Linuxtag paper last week. It was a bit rushed in the end, as I was working on it while my parents were here visiting in Germany. I was only able to cover WebKit and MiddleKit in the paper, but I hope to also introduce PSP and show gtalvola's XML-RPC sample in the talk.

Finally sat down again to work on gnome-vim yesterday. The idea I'm working on is to add bonobo support to vim itself. Each instance of the vim control would need to be in a separate process, though, which is different from all the tutorials and samples I've seen so far.

I played around with a bonobo sample app from djcb's bonobo tutorial.

I diddled with the sample a bit. I removed the factory and start an instance of the control in main(). I changed the .oaf file to reflect this. Oaf was happy enough, and when I started the container the control came up without problems. But when I tried to start a second instance the first control disappeared from the container, and I had two empty container apps running. So it appears that oaf wanted to start the second instance of the control from the same process, which won't work for what I'm trying to do.

Now I'm struggling with how to write a factory which spawns processes and returns their object references to oaf. One possibility would be to do what oaf does when bootstrapping a factory: it gets the IOR passed back to it from the child process via a pipe.
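That bootstrap pattern can be sketched like this in Python. Everything here is a stand-in, including the fake IOR string and function name (the real code would be C talking to oaf); it just shows the mechanics of a parent recovering a child's object reference over a pipe:

```python
import os
import subprocess
import sys

def spawn_and_get_ior():
    """Bootstrap sketch: the parent opens a pipe, spawns the component as
    a child process with the pipe's write end, and reads back the object
    reference (IOR) the child publishes. A real factory would then hand
    this IOR to oaf."""
    r, w = os.pipe()
    # Stand-in for the component process: it "publishes" a fake IOR on
    # the inherited file descriptor and exits.
    child_code = ("import os, sys; "
                  "os.write(int(sys.argv[1]), b'IOR:0123fake')")
    child = subprocess.Popen(
        [sys.executable, "-c", child_code, str(w)],
        pass_fds=(w,),          # let the child inherit the write end
    )
    os.close(w)                 # parent keeps only the read end
    ior = b""
    while True:
        buf = os.read(r, 4096)
        if not buf:             # child closed its end: IOR is complete
            break
        ior += buf
    os.close(r)
    child.wait()
    return ior.decode()
```

Reading until EOF means the parent can't confuse a partial write with the full reference, which is presumably why oaf uses the same trick when bootstrapping factories.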

I'm pretty much out of time to work on this for the weekend, but I've joined the gnome-components-list and asked for feedback, to see if my plan makes any sense at all. :)

Had a new idea for gnome-vim yesterday. Presently communication between my component code and the vim instance is limited to sending keystrokes and using the vim client-server functionality to (for example) evaluate a vim expression and receive the result. Quite limited, but, well, gnome-vim started out as a quick hack, because I wanted to use vim with Evo.

Several people have asked why I don't make use of the vim-gtk code in gnome-vim. My response has been:

  • the vim-gtk toolbars and menus wouldn't "just work" as a bonobo control, because they're just straight gtk. They'd have to be rewritten to use bonobo so that the menus and toolbars can be merged with the container's UI.
  • gnome-vim must support multiple instances running simultaneously, and needs to maintain communication with each running vim instance.

However, thinking more about this second point, it seems that this requirement is just an artifact of the structure gnome-vim inherited from the gtkhtml control (on which I based my initial code).

Instead of writing a component factory, I could simply supply an executable which implements the interface, and oaf should (I think) fire up an instance of the process for each required component. It would be an out-of-process control, but each process would be running vim directly (with CORBA interface implementations and bonobo support added).

This would allow a much tighter integration with vim, which would make it possible to support fancier interfaces. I envision gnome-vim being used with Anjuta: when the user begins to type a function name, gnome-vim pops up a menu with possible completions.

I'd like to move to this structure down the road, but in the meantime I have a maintenance version to release.

Looking forward to being back in Germany again. Starting April 25, I'll be spending two months there with my fiancée Katharina, during which time I'll give a talk on Webware at Linuxtag 2002.

I was at Linuxtag 2001 in Stuttgart, but this will be my first presentation. I'm both nervous and excited about it. Although I've been a free software user, advocate and contributor for several years, it's only recently that I've stopped just "lurking" and started to make more connections with other developers, and I'm looking forward to doing more of this in Karlsruhe.

I'll be sure to bring my GPG fingerprint along. :)
