Older blog entries for kwa (starting at number 4)

24 May 2006 (updated 24 May 2006 at 08:35 UTC) »

Holy crap!!! My extremely half-assed Summer of Code proposal was accepted?!? I have to say that I'm a little surprised. I submitted a proposal to the Beagle project entitled Networked Searches in Beagle. Here are the details:

Summary

I propose to extend Beagle to use Avahi to advertise Beagle running on a workstation, and to allow other computers to search its indexes in a fashion similar to music sharing in Rhythmbox.

Deliverables

  • Extension of Beagle's XML-RPC system for secure communication on a local area network.
  • Automagic Beagle service discovery via Avahi.
  • Integration of Remote Searches into Holmes.

Timeline

  • June 26, 2006:
    Avahi service discovery implementation complete. Design completed for remote message passing interface, and partial implementation.
  • August 21, 2006:
    Message passing interface completed. Holmes extended to perform remote queries and fetch data from remote hosts.

Authentication and Setup

Authentication would consist of a cryptographic challenge/response system using passphrases and keys. Once two nodes have been paired, they will automatically talk to one another whenever they are on the same network. A "Sharing" or "Network" tab could be added to beagle-settings, allowing the user to enable searching on the local network and to share local indexes with other computers.
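
To make that a bit more concrete, here is a rough sketch of the kind of challenge/response exchange I have in mind, written as Python pseudocode rather than the actual Beagle/Mono code; the function names and the use of HMAC-SHA256 are just illustrative assumptions, not anything Beagle does today.

```python
# Rough sketch of passphrase-based challenge/response pairing.
# Nothing here is real Beagle code; names and primitives are illustrative.
import hashlib
import hmac
import os

def make_challenge():
    """Sharing host: generate a random nonce and send it to the peer."""
    return os.urandom(16)

def respond(passphrase, challenge):
    """Peer: prove knowledge of the passphrase without transmitting it."""
    return hmac.new(passphrase.encode(), challenge, hashlib.sha256).digest()

def verify(passphrase, challenge, response):
    """Sharing host: recompute the HMAC and compare in constant time."""
    expected = hmac.new(passphrase.encode(), challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Initial pairing: both sides know the passphrase the user typed in.
challenge = make_challenge()
response = respond("our shared passphrase", challenge)
assert verify("our shared passphrase", challenge, response)
```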

When beagled finds another host sharing its indexes, the user would be notified and given the opportunity to perform an initial authentication. If automatic notification proves too annoying, the user could instead be given a way to browse for other computers to pair with. Beagle daemons on remote machines will communicate via XML-RPC.
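
Conceptually, the discovery side would look something like the sketch below. I'm using the python-zeroconf package purely for illustration; beagled would really use Avahi's own bindings, and the "_beagle._tcp" service type, hostname and port are assumptions on my part.

```python
# Illustrative only: advertise a beagled instance and watch for others.
import socket
from zeroconf import ServiceBrowser, ServiceInfo, Zeroconf

SERVICE_TYPE = "_beagle._tcp.local."   # hypothetical service type

class BeagleHostListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            # Notify the user and offer to pair with this host.
            print("Found shared Beagle index:", name, "port", info.port)

    def remove_service(self, zc, type_, name):
        print("Shared index went away:", name)

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()

# Advertise our own shared index...
our_service = ServiceInfo(
    SERVICE_TYPE, "mybox." + SERVICE_TYPE,
    addresses=[socket.inet_aton("192.168.1.10")],
    port=4000, properties={})
zc.register_service(our_service)

# ...and browse for other hosts sharing theirs.
browser = ServiceBrowser(zc, SERVICE_TYPE, BeagleHostListener())
```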

Searching

When a search is performed in beagle-search, the query is sent to beagled, which forwards it to any hosts on the network that have previously been authenticated and are currently sharing their indexes. Any nodes that require no authentication will be searched automatically.

For live queries, any nodes that leave the network or become unavailable will have their hits removed from the results. Any new node that becomes available during a live query will be sent the query, and its results will be reported. Remote Beagle daemons will treat these live queries just like local live queries, sending new results to the node that made the initial query until the remote live query ends or the initiating node becomes unavailable.
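
As a sketch of the bookkeeping this implies (illustrative names only, not real Beagle internals), beagled could track each live query's remote hits per host, adding and dropping hosts as they appear and vanish:

```python
# Illustrative bookkeeping for one live query fanned out to paired hosts.
class LiveRemoteQuery:
    def __init__(self, query_text):
        self.query_text = query_text
        self.hits_by_host = {}                 # host -> list of remote hits

    def host_appeared(self, host, send_query):
        """A paired host became available mid-query: forward the live query."""
        self.hits_by_host[host] = send_query(host, self.query_text)

    def host_vanished(self, host):
        """The host left the network: drop its hits from the result set."""
        self.hits_by_host.pop(host, None)

    def all_hits(self):
        return [hit for hits in self.hits_by_host.values() for hit in hits]
```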

User Interface

Any remote hits will be displayed in such a way as to indicate that they come from a remote machine. They can also be hidden by a disclosure triangle, and will only be shown if results are available.

Double-clicking a hit from another computer might be tricky. One option is to use HTTP to download the file from one host to the other. The file could either be copied to a /tmp directory, showing a progress dialog until it is copied and then opened, or fetched with GnomeVFS. It would also be useful if the user could drag and drop any remote results onto a Nautilus window or the desktop, which would copy them to the local disk.
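
One way the download path could look, assuming the remote beagled exposed its files over HTTP on some port (the URL scheme, port and helper names below are all made up for illustration):

```python
# Hypothetical: fetch a remote hit into /tmp and open it with the default app.
import shutil
import subprocess
import tempfile
import urllib.parse
import urllib.request

def open_remote_hit(host, remote_path, port=4000):
    url = "http://%s:%d/fetch?path=%s" % (host, port, urllib.parse.quote(remote_path))
    suffix = "-" + remote_path.rsplit("/", 1)[-1]
    with urllib.request.urlopen(url) as resp, \
         tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        shutil.copyfileobj(resp, tmp)          # a progress dialog would go here
        local_path = tmp.name
    subprocess.call(["xdg-open", local_path])  # hand the copy to the desktop
```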

Non-file Data

The original web services implementation for Beagle only handled file queries. It would be interesting to find a way to handle remote chat logs, web pages, and emails as well.

Me

I am finishing my third year of undergraduate studies at California State University, Sacramento, where I am majoring in Computer Science. I've been using GNU/Linux on the Desktop since 1999, and am passionate about free software. See my resume for more information.

As for my .NET experience, I have been using Mono since its 1.0 beta release, and have been loving it ever since. I used Mono to develop several in-house Gtk# applications at work, and a charity raffle system for a large annual charity event in California.

As a casual GNOME contributor and a Beagle user since 0.0.[89]-ish, I would love to have the opportunity to hack on Beagle full time this summer. Beagle is one of the most interesting components of the GNOME Desktop, and I would love nothing more than to help make it more useful for the community.

Congratulations to all of you other students out there who were also accepted, and thank you so much, Google, for working so hard to make this happen. And most of all, thank you to Joe Shaw and the other Beagle mentors for giving my proposal a high ranking.

2 Sep 2005 (updated 31 May 2006 at 02:03 UTC) »

Oops

24 Aug 2005 (updated 25 Aug 2005 at 07:36 UTC) »

Various things
This makes me sick to my stomach.

Found this very amusing quiz on p.g.o. I did recognize a few, but only got 6 out of 10 correct. Apparently, I should "avoid a career in either law enforcement or IT recruitment."

And finally, Chris DiBona has posted some awesome photos taken at Foo Camp with a high-speed camera. They feature attendees, including GNOME hacker Nat Friedman, popping balloons.

Google Talk
While I am extremely pleased that a corporation as powerful as Google is supporting a free, open standard like Jabber, I'm sort of annoyed that their new Jabber service doesn't allow traffic from other Jabber servers. I have had a Jabber ID (kwa@jabber.org) for quite some time, and was hoping that this announcement would inspire a good number of my friends who use Gmail to switch to Jabber. It seems that this will change; the sooner, the better.

Photos
I finally got around to posting the pictures from our road trip to Vancouver a couple of months ago. The latest version of F-Spot is broken in the Flickr department, so I used it to generate a static HTML web gallery instead. There's nothing about it on Bugzilla or the mailing list, so I'll have to look into it.

Alright, I'm pissed! I just started to upload some files, mostly Python scripts and various other things, so anyone who is interested can download and use them. Since CSUS provides web hosting for its students, I decided to host the files there. So far, everything is fine.

About half an hour ago, I was chatting with a friend of mine, Jason Ralphs. I tried to show him the blogrip.py script I had just uploaded, and we both got 404 errors. Startled and confused, I went to upload the files again, thinking I had simply forgotten to do so. But of course, they were still there.

Unfortunately, my university uses IIS on a Windows server to serve its web pages. After some experimentation, I realized that IIS will only serve files whose extensions match a MIME type registered on the server itself. So when you try to download a file like a Python script, a C header file, or a tarball, it just pretends that the file isn't there. BRILLIANT!!! I'm sending an angry email right now. Of course, it won't solve anything, but I'm just curious to see what those helpdesk monkeys have to say.
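
For what it's worth, a quick probe like this (the base URL is a placeholder, not my real hosting path) is enough to show which extensions the server silently 404s:

```python
# Probe a few file extensions to see which ones IIS refuses to serve.
import urllib.error
import urllib.request

BASE = "http://www.example.edu/~kwa/files/"    # placeholder URL

for name in ("blogrip.py", "notes.txt", "stuff.tar.gz", "foo.h"):
    try:
        with urllib.request.urlopen(BASE + name) as resp:
            print(name, "->", resp.status)
    except urllib.error.HTTPError as err:
        print(name, "->", err.code)            # unmapped extensions come back 404
```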

19 Aug 2005 (updated 19 Aug 2005 at 20:30 UTC) »

First post to my new advogato web log.

Well, I'm about to start class at CSUS in just over a week. I'm looking forward to taking some more computer science courses after a year-long hiatus. I've spent the last year finishing my lower-division general education before transferring to the university.

Edd Dumbill released monopod-0.4 recently, and I don't like the FIFO "Recent podcasts" playlist it creates on my iPod. When I find the time, I'm going to modify it to create a playlist for each podcast feed, so that...

  1. The latest shows from each feed are easier to find, and ...

  2. I don't just end up with nothing but 10 episodes of Science Friday.

I must listen to podcasts much differently than Edd does. I am subscribed to over 20 feeds with weekly or fortnightly radio shows, and I sometimes listen to 4 or 5 in one day. Some of the shows I almost never listen to, but I like to keep them around just in case I get stuck somewhere with nothing to do. If I only have 10 podcasts, then I'll run out of things to listen to.

I think there should be a GConf key, and eventually a preference setting, to choose the preferred way to synchronize with your iPod. The possible modes of operation (sketched below) would be:

  1. "Recent N podcasts" playlist
  2. One Playlist for each feed
  3. both simultanously
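
Here is a rough sketch of the switch I have in mind, in Python pseudocode rather than monopod's actual C#; the GConf key name and the mode strings are made up:

```python
# Illustrative: build iPod playlists according to a hypothetical
# /apps/monopod/ipod_sync_mode GConf key ("recent", "per-feed" or "both").
def build_playlists(episodes_by_feed, mode, recent_n=10):
    playlists = {}
    if mode in ("recent", "both"):
        all_eps = [ep for eps in episodes_by_feed.values() for ep in eps]
        recent = sorted(all_eps, key=lambda ep: ep["date"], reverse=True)[:recent_n]
        playlists["Recent podcasts"] = recent
    if mode in ("per-feed", "both"):
        for feed, eps in episodes_by_feed.items():
            playlists[feed] = sorted(eps, key=lambda ep: ep["date"], reverse=True)
    return playlists
```
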
We'll see what Edd thinks.
