Older blog entries for louie (starting at number 643)

Notes on Diaspora Talk

Diaspora came to lunch at Mozilla today. Some notes.

  • They gave me a nice shoutout. ;)
  • They’re doing pair-programming and test-driven development this summer, which I think is great. Sounds like they’re getting some great guidance from Pivotal Labs.
  • Very explicitly trying to focus on things everyone can use, rather than something for geeks.
  • Are trying to do micro-networks, rather than ‘everyone on the same plane’; I’m curious to see where that goes.
  • They’ll have single-user ‘seeds’ or multi-user ‘pods’ for servers. Push-driven, like email.
  • They’re doing Rails and Mongo, and even Websockets; they’re having problems between Websockets and Rails.
  • Planning on doing a code release by ‘flipping the github bit’ on September 15th, but will continue hacking after that, since they have a ‘low burn rate.’
  • Seem a bit worried about the community management problem once they go live.
  • Have a slide describing feature set for beta; focus on easy group management for you and close friends; private broadcasting to those friends; full data exportability.
  • Longer term: work with ostatus/other stuff to work closely with other distributed technologies; plugin and application infrastructure; build community and focus on design.
  • Anonymity is not currently a design goal, but still thinking about basic crypto and heavy focus on privacy (including privacy expectations-setting.)
  • In the move to california, they appear to have abandoned arepas for tacos. Bad news.
  • I admit I’m troubled that there is lots of talk about technology, and not much talk about UI/HCI/design, but in response to my question they say they mostly didn’t talk about it because it is still very much in flux. They’re talking with others about the problem, which is good to hear.
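The push-driven, pod-to-pod model they describe can be sketched in a few lines. This is purely illustrative: the payload shape, the endpoint URL, and the `buildStatusUpdate` helper below are invented for the example and are not Diaspora's actual protocol.

```javascript
// Hypothetical sketch of the push model: a seed or pod pushes an update out
// to its subscribers, much like email delivery. The message format here is
// invented for illustration, not taken from Diaspora's real wire protocol.
function buildStatusUpdate(author, text) {
  return JSON.stringify({ type: "status", author: author, text: text });
}

// In a browser, pushing over a WebSocket might look like:
//   const ws = new WebSocket("wss://pod.example/receive"); // hypothetical endpoint
//   ws.onopen = () => ws.send(buildStatusUpdate("alice@pod.example", "hello"));
console.log(buildStatusUpdate("alice@pod.example", "hello"));
```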

Anyway, interesting stuff- I continue to wish them well.

Syndicated 2010-08-20 20:24:10 from Luis Villa » Blog Posts

A must-read on google

In the same vein as my earlier commentaries on Google comes this piece by James Grimmelmann. He doesn’t comment on the actual substance of the net neutrality announcement. Instead he focuses on process, and his description of how Google does things seems so dead on to me that I think I’ll be citing it repeatedly in the future. I won’t quote; it is worth reading the whole, fairly brief thing.

The one thing I’d add to what James says is that Google’s process actually usually works quite well; for every Wave, Buzz, and Verizon deal, there are several things that work well. When it works poorly, we should generally allow the market to discipline them, as it has with Wave. The reason the net neutrality issue is so important is that it could represent a new barrier to entry, making those market mechanisms less effective and leaving us more at Google’s mercy when their processes go bad in the way James describes.

Syndicated 2010-08-12 19:26:27 from Luis Villa » Blog Posts

The Libre Web Application Stack (A Code Story)

[This was originally published at autonomo.us- comments over there.]

[Disclaimer: since my last post at autonomo.us, I have become an employee of the Mozilla Corporation. I don't feel this has tainted my views, but feel free to weigh that information as part of your analysis of this article. Relatedly, I do not speak for that employer when writing here; my words and ideas here are my own.]

tl;dr version: We shouldn’t throw the baby out with the bathwater. The bathwater is the ethical problems with software hosted on someone else’s server; the baby is the strength of the libre web application development stack- which may actually be the best platform for building autonomy-preserving user-oriented software.

Put yourself in 1995. I’m going to tell the you of 1995 that in 2010, there will be a software platform with the following properties:

  • The widget toolkit for this platform will have two independent, competitive, Libre implementations; one of those implementations will be the most widely used consumer-facing piece of free software ever, and the other one will have the support of the second and third largest software companies on earth.
  • Tens of millions of dollars a year in engineering time will go into improving those free implementations of the standard; in fact, Microsoft will spend lots of money and PR time saying ‘we’ve caught up to the libre implementations.’
  • This widget toolkit will have no vendor gatekeepers or tolls; in fact, all the biggest software vendors on earth will have attempted to disclaim patent claims against it, making it not only no-cost but also perhaps the safest free software toolkit available from an IP perspective. New additions to the toolkit will be free by default; there will be some exceptions to this but those exceptions will be bitterly contested, sometimes with huge proprietary software companies weighing in on the side of free implementations.
  • The major implementations of the widget toolkits will be extensible using a simple scripting language, making them much easier for users to customize than most (non-emacs) toolkits.
  • Free software written for this platform can be made trivially available not just to users of free operating systems but also to people on other platforms, potentially helping grow the community of free software developers and users.
  • This widget toolkit will be available on every computing platform on earth; not just PCs but also many phones and soon on TVs. If your hardware supports this widget toolkit, it immediately has enough applications that it is considered commercially viable, leading to an unprecedented blossoming of operating systems platforms (unlike the situation in 1995, where new platforms have no apps and so find it impossible to compete with Win3.1/95.)
  • The logic and data storage systems backing this widget framework will also be varied- there will be proprietary implementations, but there will be dozens if not hundreds of free frameworks as well, in essentially every programming language you can imagine. It is taken for granted that these frameworks run on Linux first. Among many others, the White House will use such a free framework to deliver software.
  • At the same time, because of elegant design, it is easy to swap out backends, so if you want to write backends in different ways, you can do that; in fact, people are taking advantage of this every day to write completely new frameworks with a variety of different properties.
  • A function called ‘view source’ is viewed as a key competitive differentiator in this platform; it is actually hard to close the source describing widget layout and behavior (though easy to close the backend.)
  • Millions of high school kids are taught (at least part of) this framework; millions more will have taught it to themselves using view source!
  • This platform will have been so successful that virtually every single first-world computer user will use it (in some way) every day.
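The extensibility bullet above can be made concrete in a few lines of the scripting language in question (JavaScript, in 2010 terms). This is just a sketch of my own; `renderGreeting` is an invented helper, not part of any real library, and the markup it builds is exactly what ‘view source’ would show.

```javascript
// A minimal sketch of the scripting point: customizing the widget toolkit's
// output is a few lines of code, and the result is inspectable by anyone.
// renderGreeting is an illustrative helper invented for this example.
function renderGreeting(name) {
  return '<p class="greeting">Hello, ' + name + '!</p>';
}

console.log(renderGreeting("world"));
// In a browser, attaching it to the page would be one more line:
//   document.body.innerHTML += renderGreeting("world");
```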

At this point, 1995-you says ‘This sounds too good to be true. There must be a catch.’1

There are two major catches, the second a consequence of the first.2

The first catch is that the default data storage architecture is distributed, and the default licensing is effectively permissive, making it expected (though not mandatory) that users are separated from both the code they run and their data. In other words, developers who use this stack are perhaps more free than they’ve ever been in the history of computing (and not surprisingly they’ve adopted it in droves)- but users often have even less control than they had when they were using traditional proprietary desktop applications.

The you from 1995 would probably think that freedom-lovers would have reacted to this unfreedom by rewriting the thin layer of proprietary code sandwiched between gigantic gobs of free code, and/or by working to make the platform more amenable to local or distributed use. You might expect that they would even have embraced and extended the platform to make it even better for freedom.

Instead, the second catch is that fans of freedom have largely thrown the baby out with the bathwater, ignoring (or at best failing to embrace) this rich, free platform. Instead, in a story straight out of the Innovator’s Dilemma, they’ve continued to focus on traditional widget toolkits and interfaces which lack all the benefits I’ve just listed out.

It should be obvious by this point that the platform I’m talking about is HTML and the many, many application frameworks that can be used to generate it. This platform- what I’ll call the libre web application stack- is not perfect, but it has the potential to be hugely freedom-enhancing, and free software advocates should carefully consider it when thinking about developing their next app, or when planning to extend traditional software platforms to make them more free. Just because most web apps are not free does not mean the stack itself should be disregarded.

  1. This is true whether 1995-you is a free software fan or not; if you’re a free software fan, you’re particularly excited, but if you’re a die-hard silicon valley capitalist you should also be pretty excited by the potential of this platform.
  2. There are plenty of minor catches too; performance isn’t great; the widget toolkit does not have the vast variety of widgets that more mature toolkits have. But these have been improving consistently for years, and (more importantly) they’ve been improving much faster than traditional toolkits have improved.

Syndicated 2010-08-09 13:00:57 from Luis Villa » Blog Posts

Notes on Eugene Bestor’s ‘Backwoods Utopias’

A few months ago I finished reading Eugene Bestor’s ‘Backwoods Utopias’, a book on the Utopian social-communitarian movements of the pre-Civil War US. Some belated notes on the book’s themes follow.

The average high school US history textbook gives a thumbnail sketch of these movements, but for those who didn’t get that or don’t remember it, the gist is that, from very shortly after Europeans reached North America until right around the Civil War, groups of people regularly launched themselves into the North American wilderness, trying to found new communities organized around communitarian and egalitarian principles. They met with some success, but eventually the movements petered out, with none of them truly surviving into the modern age.

Owen by BinaryApe, used under CC-BY

The tie from this book to my own interests should be clear, but if not, I should make it explicit: free and open source software often thinks of itself as being sui generis, but in fact it is part of a history (in this country) of retreat from established economic structures with the intent of creating parallel systems that would eventually compete with or replace those established structures with something simultaneously individually empowering and socially just. (See also.) I’m both personally and professionally curious about gleaning lessons from such past experiments- so I picked up the book. If any of this blog’s readers have suggestions either for more histories of this movement, or for histories of other similar movements (watch this space for a post on the local food movement soon), please do let me know by email or in comments.

Unfortunately, Bestor’s intended follow-up book (covering the 1840s to the end of the movement) was never completed, which limits the lessons that can be drawn about the decline of the movement.  Nevertheless, some observations and themes from the book:

  • The movement had a broad spectrum of motivations and philosophies- some were heavily religious, while others were overtly anti-religious; some had (or were intended to have) quite complex governance systems, while others were nearly anarchist, and indeed Marx condemned them in strong terms because (to over-simplify) they were not dedicated to fighting the good fight in the cities. Interestingly, while the community focus of these groups was typically very strong, in modern terms we might also call them libertarian (or what Erik Olin Wright calls ‘interstitial’ revolutionaries): they all believed that they had the right and the ability to make a better world by striking off on their own, rather than working within or against established structures.
  • Religion was initially a major motivating force; this faded over time, but Bestor does not make it clear why later groups tended to be non-religious. Interestingly, American critics of later movements like Owenism apparently tended to focus on this non-religious aspect, rather than the practical/anti-capitalist issues modern critics might focus on.
  • As with every movement, looking at who left is often as important as understanding who stayed. In particular, Bestor mentions that when pragmatists became frustrated and left New Harmony (perhaps the highest profile of the various communities), those left behind were a combination of those too lazy to leave and those too fanatic to leave. This was a huge problem for the morale of the pragmatists who remained, who resented the free-riders and were driven nuts by the fanatics- and so the cycle repeated.
  • Relatedly, Bestor argues that the repeated talk of ‘everyone will live in our miraculous new society any day now’ meant that many newcomers were not prepared for the long haul; that may have disillusioned some people and contributed to a sense of lack of momentum. To paraphrase Bestor, ‘a new society cannot be built on excuses.’
  • When the movement started, it was actually pretty easy to get a community going- lots of land was effectively empty, and the median community size in the US was in the low hundreds, making it quite easy to form a community that had all the ‘comforts’ (such as they were) of traditionally organized communities. As time progressed, two things began to work against this: first, more and more ‘normal’ landowners migrated to the midwest, causing land to become more scarce, and second, even the smallest villages became larger as the country’s overall population grew. This meant that finding enough space for a ‘basic’ community became a much more capital intensive process over time. Not coincidentally, later communities tended to have wealthy patrons- with all the plusses and minuses that brings.
  • As economic complexity increased (more machinery, more specialists) it became harder to create a self-sustaining village, especially if your human capital stocks were limited to ‘believers.’ For example, when the movement started in the late 1600s/early 1700s, having a self-sustaining community required very little specialization, while by the mid-1800s, it was understood that you needed machinists and manufacturers who would trade with other areas. Bestor says that New Harmony was bitten by this, as the land they bought for the town had the hardware for extensive wool manufacture, but lacked the people familiar with the machines, killing an expected source of financial sustainability.
  • Over time, some of the social goals of early communitarians became more broadly accepted or supplied by other organizations. For example, public education was a significant goal of New Harmony, but over the course of the 1800s, that became more common in non-utopian communities. New Harmony also had a concept of mandatory social insurance; unions started providing similar services in the late 1800s. This again made recruitment harder.
  • As for most world-changers, the gap between theory and practice was often large. Robert Owen, the wealthy patron of New Harmony, created an elaborate philosophical scheme intended to encompass everything from the individual to the nation-state, but he was bad at creating practical schemes, which led to constant reorganizations at New Harmony. This may reflect the extreme difficulty of organizing a full society; capitalism has the advantage of being simple and direct in general scheme relative to a centrally planned society like Owen’s.

I’ll refrain from drawing any direct conclusions for free and open source software here, in part because many of them will be obvious to many of my readers, and also because my reading of the book (especially several months after the fact) is inevitably heavily biased by my own thinking about social movements like this one, so I’m not sure whether any ‘lessons’ would reflect actual history or just my interpretation (compounded with Bestor’s.) With or without direct applicability, though, the book was an interesting read for a history nut, and left me with a lot of food for thought.

Syndicated 2010-08-06 03:30:56 from Luis Villa » Blog Posts

A Pre-GUADEC Request

I’m preparing for my GUADEC keynote and have a request for material that would be useful. Specifically, does anyone have a good group picture from the first GUADEC? This is the best I’ve found so far, but I seem to recall there were better. Please comment or email me (luis at this domain) if you’ve got one. Thanks.

Syndicated 2010-07-18 11:35:57 from Luis Villa » Blog Posts

100 Words for my Friends Taking the Bar Exam

Things I did to myself before the bar exam:

  • Did only a fraction of the recommended practice essays.
  • Generally felt drastically underprepared.

Things that happened to me during the bar exam:

  • Day before the exam, while studying poolside at the hotel: got a sunburn.
  • First day of exam: stung by a bee.
  • Last day of exam: computer crash, requiring me to handwrite the last section of the exam. Haven’t hand-written for three hours straight since college.

Result? Passed, and not only that, got invited to be an exam grader.

So: moral of the story: don’t panic. You’ll pass.

Syndicated 2010-07-15 14:49:38 from Luis Villa » Blog Posts

Some Followup Thoughts on Bilski

Some Third-Party Thoughts

A friend summarized Bilski this way:

Shorter #Bilski: Federal Circuit, your rule was too straightforward and didn’t add enough uncertainty to an already volatile field.

I don’t think that was actually the court’s intent, but certainly that will be the short-term outcome. Long-term the court and the PTO will have to find new rules. Patently-o has some thoughts on how that process might play out, and the PTO has issued the following guidance to patent examiners on the topic. The PTO memo, while preliminary, is a great simple summary of the ruling, and contains the following critical passage:

If a claimed method does not meet the machine-or-transformation test, the examiner should reject the claim under section 101 unless there is a clear indication that the method is not directed to an abstract idea.

In other words, the PTO has reverted to the pre-business-methods ‘machine-or-transformation’ test as a default, with the burden of proof shifted to the patent filer to show a ‘clear indication’ that their non-machine, non-transformation method is not an ‘abstract idea.’ It will be interesting to see in coming months what the PTO accepts as a ‘clear indication’; I would expect that this won’t be a high bar to clear, but it will probably cut out some of the most egregious applications.

For an optimistic take on the whole thing, check out Rob Tiller’s piece at opensource.com.

Comments on the Concurrences

Yesterday’s train ride focused on the majority opinion. However, as I noted then, the voting patterns here are complex; complex enough that there is some important law to be found in the two concurrences. The patently-o post I linked above makes a particularly astute observation in this regard. So today’s train ride I’ll try to read and share some thoughts on the concurrences, particularly the ‘swing’ concurrence from Breyer and Scalia.

The first thing to note is that the Breyer/Scalia concurrence opens with strong support of Stevens’ opinion that business method patents are not patentable, but this part is… signed only by Breyer. So it does not tell us much. The rest of it focuses on four (really three) points which Breyer and Scalia feel the entire court agrees on. If you read only one part of the opinion, read this part- it is short, sweet, and to the point, and because at least five (possibly nine) members of the court agree here, it will likely be the jumping off point for the next round of patentability litigation. These points are:

  1. There are many things which are unpatentable. This seems uncontroversial (the court was quite explicit about it in 1989’s Bonito Boats case), but after the Federal Circuit’s expansion of patentability through the 80s and 90s, it was perhaps not as clear as it should have been. This concurrence makes it very clear (once again) that there is a line, even as it simultaneously announces that no one knows where the line is. It could also be interpreted as a subtle hint to the Federal Circuit that they should set about finding that line. (Gottschalk v. Benson, which held that algorithms are unpatentable, is cited approvingly here; as I mentioned yesterday, Gottschalk and Flook may have been given some second wind by Bilski- possibly the best thing that anti-software patent crusaders can salvage from this.)
  2. Transformation of a thing to a different state is a “very good clue” (point two), but not the only clue (point three), as to whether or not non-machine things are patentable. The Federal Circuit’s Bilski ruling had essentially declared this ‘machine or transformation’ test to be the only test, which was what made business methods unpatentable under that ruling. Again, Flook is cited approvingly (when saying that it is a strong test) but unfortunately Gottschalk is cited to show that it is not the only test- which is exactly the loophole that State Street (the case that allowed business methods) drove through.
  3. The ‘useful, concrete, and tangible result’ test that the Federal Circuit put forth in State Street- i.e., the case that allowed business patents- is not a good test, sometimes producing patents that range from ‘the somewhat ridiculous to the truly absurd.’ In other words, something can be ‘useful, concrete, and tangible’ but still not be patentable. This last point was highlighted by Patently-O yesterday as being fairly important.

If you’d told anti-software patent/anti-business-method patent folks on Sunday that the court’s Monday ruling would have five (or maybe nine) justices agreeing that the ‘useful, concrete, tangible result’ rule was bogus, they’d have been pleased. Of course, they’d have expected the court to enunciate a new, replacement rule- which has not happened. It is that gap which has caused so much consternation, not just for patent critics but also for patent supporters.

It will be up to the Federal Circuit to try and find a new rule, somewhere between ‘machine or transformation’ and ‘useful, concrete, tangible’- and this almost certainly means that we’ll be back at the Supreme Court arguing similar issues within a few years, asking the court to ratify- or reject- the next Federal Circuit attempt.

In trying to figure out what Scalia actually agreed to, I’ve now read sections II.B.2 and II.C.2 (which Scalia did not sign on to) a couple of times. They are, like much of the decision, a little rambly; long on vague assertions about the current state of things (lots of talk about the ‘Information Age’) and not very strong on details or particular policy conclusions. If I had to guess (and I should stress that this is just a guess) Scalia is really reacting to the mechanisms used to reach these vague conclusions, which tend to be very divorced from the actual statutory text that the main body of the decision relies on. So probably not worth reading much into that.

The Stevens concurrence… that will have to wait for another train ride. Suffice it to say for now that it is a thoroughly researched treatment of a difficult question. It is certainly not perfect, but is the kind of dedicated textual and historical reading that many members of the court pay lip service to but do not consistently practice.

Syndicated 2010-07-01 01:16:07 from Luis Villa » Blog Posts

First thoughts on Bilski

Some very preliminary thoughts on Bilski, written in the course of one train-ride to work. This does not represent the viewpoint of my employer and should not be taken as legal advice; merely observations on one ruling.

  • In the lower court (Federal Circuit) ruling on this case, the Federal Circuit was very aggressive in trying to limit business method patents by applying an old rule very, very broadly. The Supreme Court here reached the same conclusion about the specific patent at issue (holding it not patentable) but chastised the Federal Circuit for their aggressiveness in going from step 1 (invalidate this particular patent) to step 2 (invalidate all business method patents). At the highest level, this is not good for opponents of software patents- this is the most change-averse patent opinion the Supreme Court has issued in recent years, and it will leave the Federal Circuit very reluctant to broadly attack entire classes of patents in the near future. But the court did not completely bar such attempts, and it also strengthened some older anti-software-patent rulings, so it is not a complete loss for opponents of business method and software patents.
  • This was a very splintered decision- while every justice agreed in the outcome, no part of the opinion got more than five votes, and many parts got only four. This probably explains why it took so long, and why Stevens was not (as widely anticipated) the author of the majority opinion- one or more justices probably were swinging between the two opinions until very late in the process. The addition of the probably pro-business Justice Kagan to replace the (effectively) pro-technology Justice Stevens could make future cases along these lines more conservative. And the court itself basically admits in their first section that this is hard; saying of the Federal Circuit’s ruling in the case that “Students of patent law would be well advised to study these scholarly opinions.”
  • The court punts on the most difficult questions, quite explicitly: “This [Information] Age puts the possibility of innovation in the hands of more people and raises new difficulties for the patent law. With ever more people trying to innovate and thus seeking patent protections for their inventions, the patent law faces a great challenge in striking the balance between protecting inventors and not granting monopolies over procedures that others would discover by independent, creative application of general principles. Nothing in this opinion should be read to take a position on where that balance ought to be struck.” [emphasis mine.] Unfortunately, this buys into the rhetoric that all inventors are patenters, but otherwise makes it explicit that the court is staying out of the deeper policy question to the greatest extent it can.
  • The core of the decision is to set up a conflicting pair of tests: business methods can in some circumstances be ‘processes’, which are patentable, but they may also be abstract ideas, which are not patentable. (The lower court had said that business methods are never processes and therefore a court did not need to ask ‘is this an idea?’ before ruling that it was unpatentable.) So future seekers of business method patents (and presumably software patents as well) will have to thread the needle, showing that they are a process (probably not difficult after this ruling) but also that they are not an abstract idea (may be hard, not clear yet.)
  • Needless to say, this kind of gap is the kind of thing that sophisticated lawyers love to drive trucks through, and which will continue to create lots of uncertainty for small innovators for whom even the threat of a patent suit is enough to stop innovation.
  • When deciding that the patent is an idea, and hence unpatentable, the court has kind things to say about Benson and Flook, two older cases which spoke against patenting algorithms but which were then sort of ignored. This may signal to lower courts that they should take these cases more seriously when looking at software and business method patents, which would be a good thing for anyone seriously interested in the quality of software patents, and not the worst possible outcome for those who believe that all software patents should be banned- these cases could become potent weapons against some of the most outrageous patents on algorithms.
  • At the same time, the court also speaks well of Diehr, another older case. This case has generally been interpreted to stand for the idea that a combination of software with hardware (originally, use of software to control a rubber curing machine) is patentable, but the court here seems to read it more broadly, arguing that Diehr should be interpreted to mean that algorithms combined with any new processes (whether mechanical or otherwise) might still be patentable.
  • The court specifically tells the Federal Circuit that the method of restriction it had been using is barred or weakened (not great for those who dislike software patents) but also specifically says that the Court can and should explore new methods of limitation as long as they are consistent with the text of the patent act; seemingly implicitly stating that the pre-Bilski situation (where business method patents ran rampant) was untenable. This suggests to me that we’ll see a period of several years of experimentation in the Federal Circuit, where the Federal Circuit attempts to find new ways to limit business method patents on something other than a case-by-case ‘I know it is an idea when I see it’ rule of thumb.
  • The court specifically says that they did not want to create uncertainty for software patents, citing the pro-software-patent amicus briefs, but then goes ahead to create such uncertainty by allowing the Federal Circuit to find new, narrower tests. However, these two sections of the majority holding got only four votes; Scalia did not join this part of the otherwise majority opinion- presumably because it seems to give the Federal Circuit very wide interpretive powers.
  • The Stevens opinion, at a glance (remember, brief train ride) would have been much more amenable to the anti-software-patent crowd, but I imagine that it is exactly this quality that made it the minority opinion.

I’m afraid that at the end of this brief train ride, my only firm conclusion can be that the real winners here are patent lawyers- this decision creates no new certainties, only uncertainties, which will encourage patenters to spend more money patenting things, and the rest of us to waste time and energy worrying about the problem- time and energy that should have been spent on innovating. But this is a long, multi-layered ruling, and will require a lot of time for the full implications to be truly understood, so take this one-train-ride blog post with a large grain of salt :) Hopefully more writing tonight/tomorrow.

Syndicated 2010-06-28 16:07:24 from Luis Villa » Blog Posts

working it

It is extremely satisfying when you can see your work turn directly into a working product. I just played with last night’s test version of firefox, and as per roc’s blog post, it indeed contains the video support whose licensing I (and others here) were working on last week. In an ideal world, lawyers should play a very small role in product development, and in this case we were probably involved more than anyone wanted us to be. But that wasn’t to be, so I am proud I helped get it done, and done right, and that all firefox users will benefit from it in the future.

Syndicated 2010-06-09 17:00:52 from Luis Villa's Internet Home » Blog Posts

some data points on facebook

My boss has written a blog post that tries to bring together some recent data points from across the privacy spectrum; it is worth a read. I’ve been noting a few (much smaller, more trivial) things myself over the past few days that suggest to me that privacy concerns in general, but facebook-related privacy concerns in particular, may be reaching a bit of a critical mass.

No Facebook by avlxyz used under CC-BY-SA.

Some anecdata:

These are just anecdotes, and not real data, but to me this feels vaguely different from the ‘rebellion’ in 2006. At that time I said ‘people adjust and things blow over sometimes.’ This one feels different, though that is just a vague feeling; it may stem as much from my own facebook fatigue as from any concrete reality. It will be interesting to watch, at any rate.

Syndicated 2010-05-14 19:16:53 from Luis Villa's Internet Home » Blog Posts
