Older blog entries for mbrubeck (starting at number 119)

Headless Web Workers: Does the web need background apps?

At my last job, I created several web applications designed to replace built-in apps on mobile phones. While modern browsers and HTML5 made this incredibly easy in many ways, we still ended up writing native (i.e. non-web) code for most of our applications. There were a few different areas where the browser alone didn't meet our needs, but one that I found surprisingly common was background processing.

Consider the following mobile applications:

  • Calendar or clock with alarms.
  • E-book reader that syncs content from a server.
  • IM or email client that notifies the user of new messages.
  • Shopping list that pops up whenever you are near the store.

Ideally, each of these apps will perform some actions even when the user does not have it open. (Background processing is not strictly necessary for the e-reader, but it would be useful to ensure the library is up-to-date even when opened in a place with no network connection.)

You can't do this with a web app. Web Workers don't solve the problem, because they run only while the web page is open. What we need are headless web workers.1

The API

Headless workers could use almost the same API as Web Workers. Instead of responding to messages from a web page, they would listen to events from the host system (browser or OS). These events might include time intervals, power-on/resume, changes in network connection, geographic locations, or "push" notifications from a remote server.

The event-driven architecture of JavaScript in the browser allows the host system a high level of discretion over resource consumption. There's no special code needed to suspend processes and later restore their state, because JavaScript workers are naturally inactive between events. The host can provide limits on CPU or memory usage per event, with a separate message to notify processes whose handlers were aborted. And it can limit the number of concurrent processes by choosing when to dispatch events to listeners. Some listeners could even be disabled completely at times (like if the device is busy or the battery is low), and notified later of the events they missed.
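To make this concrete, here is a purely hypothetical sketch of what such a worker might look like. Nothing in it is a real API: the onsystemevent handler, the event names, and the helper functions are simply one way the proposal could be spelled.

// Hypothetical sketch only; none of these names exist in any browser today.
// A headless worker sits idle between events. The host (browser or OS)
// decides when to wake it, and can cap CPU time per event or defer delivery.

// ereader-sync.js, registered once by the installing page
onsystemevent = function (e) {
  switch (e.type) {
    case "interval":          // periodic wake-up at a host-chosen cadence
    case "network-online":    // connectivity restored after being offline
      fetchNewBooks();        // keep the local library current for offline reading
      break;
    case "location":          // the user is near a saved place
      remindAboutShoppingList();
      break;
  }
};

function fetchNewBooks() { /* fetch from the sync server, store locally */ }
function remindAboutShoppingList() { /* hand off to the host's notification system */ }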

This is almost a return to the old days of cooperative multitasking. Mobile computing is definitely driving everyone towards higher-level process control in the OS, and different assumptions for applications. It's not surprising that my whole proposal resembles Android and iPhone 4.0 multitasking in several ways, since I've been doing development on Android for the last 18 months and encountering many of the same issues.

The UI

Headless workers do need some way to interact with the user. They could display standard system notifications (via Growl on the Mac, libnotify on Ubuntu, the status bar in Android, etc.) using W3C Web Notifications, which already have an experimental implementation in Chrome.
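In today's terms that might look roughly like the sketch below. The current spec differs from the 2010 Chrome prototype (which exposed a webkitNotifications object), and whether a headless worker could call it directly is exactly the kind of detail a standard would have to pin down.

// Rough sketch using the Web Notifications API as it exists today. Assumes the
// user already granted notification permission when installing the worker.
new Notification("Shopping list", {
  body: "You're near the grocery store.",
  icon: "cart.png"          // hypothetical icon bundled with the worker
});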

Users also need to know which sites have background tasks installed. Headless workers could be represented by icons in a standard location (perhaps a toolbar in desktop browsers, or the home screen on a mobile device). The icons could display ambient status; clicking one would reveal a menu with options to configure or remove it.

Questions

This proposal might be hard to standardize, especially where it's tied to specific OS capabilities. For now I'm just curious: would it be useful? You can write a native app or a browser extension to solve this problem today. But would it be worthwhile to have a standard, cross-platform way to do it? Has anyone else run into problems that this approach could solve?

  1. Because all web standards should have names that sound like Harry Potter creatures.

Syndicated 2010-04-22 07:00:00 from Matt Brubeck

Read Later: a Mobile Firefox extension

Hello, Planet Mozilla! I'm Matt Brubeck, the newest member of the Mobile Firefox (Fennec) front-end team. I'm working remotely in Seattle, but you can find me in #mobile during the North American day, or follow me on Buzz/Twitter/etc.

To help myself learn Fennec and XUL, I wrote a simple extension called Read Later. Like Marco Arment's Instapaper service it stores a list of web pages so you can return to them later. Unlike Instapaper, my extension does not save pages to a remote server. Instead, it uses your mobile device's storage, so you can view saved pages offline. I use code from Arc90's Readability bookmarklet to extract the main content from the page, save it, and present it in a simple mobile-friendly layout.

One thing the extension can't do (which Instapaper and other services can) is synchronize saved pages between computers. This would be a great feature for a Mobile Firefox add-on, but writing my own sync service is a bit more work than I want to put into this little side project. A future version may use Weave to sync saved pages, as long as the size of the data is not a problem.

If you are using a recent Fennec 1.1 build, try out Read Later and let me know what you think. And if you're a developer, you can look at the source code to see how a simple Fennec extension works.

Syndicated 2010-04-18 07:00:00 from Matt Brubeck

Discovering Urbit: Functional programming from scratch

C. Guy Yarvin is a “good friend” of Mencius Moldbug, a pseudonymous blogger known for iconoclastic novella-length essays on politics and history (and occasionally computer science). Guy recently published under his own name a novel project in language and systems design. His own writing about his work is entertaining but verbose (as Moldbug's readers might expect), so I will attempt to summarize it here.

Nock, Urbit, Watt

First there is Nock, “a tool for defining higher-level languages – comparable to the lambda calculus, but meant as foundational system software rather than foundational metamathematics.” Its primitives include positive integers with equality and increment operators, cons cells with car/cdr/cadr/etc., and a macro for convenient branching. Nock uses trees of integers to represent both code and data.
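As a toy illustration (the code and names here are mine, not from the spec), such nouns can be modeled as nested pairs, with the car/cdr/cadr family generalized into a single tree-addressing rule: axis 1 is the whole noun, axis 2n is the head of axis n, and axis 2n+1 is its tail.

// Toy model of Nock-style nouns: an atom is a number, a cell is a two-element
// array [head, tail]. Cells nest rightward, so [1, [2, 3]] is the tree (1 2 3).
function isCell(noun) { return Array.isArray(noun); }

// Axis (tree address) lookup: axis 1 is the noun itself, even axes descend
// into the head of the enclosing cell, odd axes into its tail.
function at(axis, noun) {
  if (axis === 1) return noun;
  var parent = at(Math.floor(axis / 2), noun);   // walk to the enclosing cell
  if (!isCell(parent)) throw new Error("axis " + axis + " goes into an atom");
  return axis % 2 === 0 ? parent[0] : parent[1];
}

// at(2, [4, [5, 6]]) === 4          (car)
// at(3, [4, [5, 6]]) is [5, 6]      (cdr)
// at(6, [4, [5, 6]]) === 5          (cadr)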

Next, Guy provides the rationale for Nock. In short, he asks how a planet-wide computing infrastructure (OS, networking, and languages) would look if designed from first principles for robustness and interoperability. The answer he proposes is Urbit: a URI-like namespace distributed globally via content-centric networking, with a feudal structure for top-level names and cryptographic identities. Urbit is a static functional namespace: it is both referentially transparent and monotonic (a name, once bound to a value, cannot be un- or re-bound).

Why does this require a new formal logic and a new programming language? In Urbit, all data and code are distributed via the global namespace. For interoperability, the code must have a standard format. Nock's minimal spec is meant to be an unambiguous, unchanging, totally standardized basis for computation in Urbit. Above it will be Watt, a self-hosting language that compiles to Nock. Urbit itself will be implemented in Watt, so Nock and Watt are designed to treat data as code using metacircular evaluation.

The code

A prototype implementation of Watt is on GitHub. It is not yet self-hosting; the current compiler is written in C. Watt is a functional language with static types called “molds” and a mechanism for explicit lazy evaluation. (I was surprised to find I had accidentally created an incompatible lazy dialect of Nock – despite its goal of unambiguous semantics – just by implementing it in Haskell.)

The code is not fully documented, but the repository contains draft specs for both Watt and Urbit. Beware: the syntax and terminology are a bit unconventional. Guy has offered a few exercises to help get started with Nock and Watt:

  • The Nock challenge: Write a decrement operator in Nock, and an interpreter that can evaluate it.
  • Basic Watt: Write an integer square root function in Watt.
  • Advanced Watt: How would you write a function that tests whether molds A and B are orthogonal (no noun is in both A and B)? Or compatible (any noun in A is also in B)? Are these functions NP-complete? If so, how might one work around this in practice?

If you want to learn more, start with these problems. You can email your solutions to Guy.

Will it work?

I find Urbit intellectually appealing; it is a simple and clean architecture that could potentially replace a lot of complex system software. But can we get there from here?

Guy imagines Urbit as the product of an ages-old Martian civilization:

Since Earth code is fifty years old, and Martian code is fifty million years old, Martian code has been evolving into a big ball of mud for a million times longer than Earth software. (And two million times longer than Windows.) …

Therefore, at some point in Martian history, some abject fsck of a Martian code-monkey must have said: fsck this entire fscking ball of mud. For lo, its defects cannot be summarized; for they exceed the global supply of bullet points; for numerous as the fishes in the sea, like the fishes in the sea they fsck, making more little fscking fishes. For lo, it is fscked, and a big ball of mud. And there is only one thing to do with it: obliterate the trunk, fire the developers, and hire a whole new fscking army of Martian code-monkeys to rewrite the entire fscking thing.

… This is the crucial inference we can draw about Mars: since the Martians had 50 million years to try, in the end they must have succeeded. The result: Martian code, as we know it today. Not enormous and horrible – tiny and diamond-perfect. Moreover, because it is tiny and diamond-perfect, it is perfectly stable and never changes or decays. It neither is a big ball of mud, nor tends to become one. It has achieved its final, permanent and excellent state.

Do Earthlings have the will to throw out the whole ball of mud and start from scratch? I doubt it. We can build Urbit but no one will come, unless it solves some problem radically better than current software. Moldbug thinks feudalism will produce better online reputation, but feudal reputation does not require feudal identity; it is not that much harder to build Moldbug's reputation system on Earth than on Mars. I still have not figured out the killer app that will get early adopters to switch to Urbit.

Syndicated 2010-03-12 08:00:00 from Matt Brubeck

The network is the human being

Nathanael Boehm wrote a nice essay last month called The Future of Employment?, about a disconnect between workers' and employers' views of social networks. (This post is based partly on my comment in the ensuing Hacker News thread.) Boehm wrote:

When I need help with a challenge at work or need to run some ideas past people I don’t turn to my co-workers, I look to my network of colleagues beyond the walls of my workplace. Whilst my co-workers might be competent at their job they can’t hope to compete with the hundreds of people I have access to through my social networks...

The late Sun Microsystems taught us that the network is the computer. It's true: we still use non-networked computers for specialized tasks, but nobody wants one on their desk – it's just so useless compared to one that talks to the entire world. Boehm could have titled his essay The Network is the Employee. There are still tasks that people do in isolation, but the ability to contact a network of peers and experts makes the difference in my job, and many others.

Alone together

The lone computer programmer in a small business has thousands of colleagues on Stack Overflow, Reddit, and so on. It's a chaotic and messy way to find answers, but it's better than the days when your only choice was to call tech support – or smack the box with your fist, whichever seemed more useful. I can't begin to list all the problems I've solved and things I've learned by Googling for others with relevant experiences, and getting help from a different expert for every problem.

Decades before the web, computer geeks had virtual communities on mailing lists, Usenet, and IRC. Now any job in the world has an online forum. Even the night clerk at the gas station has Not Always Right.

Teaching has long been a solitary profession. Despite working in a crowded classroom, teachers are isolated; they rarely have colleagues observing or participating directly in their work. This has such an impact that teacher education sometimes includes training in meditation or reflection, to compensate for lack of external feedback. So I'm really curious what happens when teachers start to work together remotely the way programmers do.

You will be assimilated

Boehm's essay reminded me of a vague sci-fi-like idea I've been kicking around: the first group minds will evolve from the intersection of Mechanical Turk, virtual assistants, social networking, and augmented reality.

Starting around the 1990s, it was possible to instantly "know" any fact that was published online. Since then, we've increased the amount of content online, our tools for searching it, and ways of connecting to the network. Today we have instant access to almost any published knowledge, anywhere.

The number of people on the net has grown too, and the number of ways to find and talk to them. Most of us can contact dozens of friends at any given moment, plus friends-of-friends, co-workers, fellow members of communities like Hacker News or MetaFilter, and also complete strangers. Along with raw facts, we now have access to vast amounts of human judgement, experience, and skill.

One result of this is the "virtual assistant," who provides a service that was once available only to high-powered executives. The new personal assistant can work remotely (often overseas), spread costs by serving many masters, and leverage the internet superpowers listed above. Today their services are targeted at small business owners and the Tim Ferriss crowd, but I'm sure someone soon will start marketing virtual personal assistance to all sorts of other creative workers, teachers, even stay-at-home parents.

So, how long before I can simply touch a button to let a remote assistant see what I'm seeing in real-time and help me make transportation plans, translate foreign signs and speech, look up emails related to whatever I'm doing or thinking, or even advise me on what to say? Some of these queries will go to my circle of friends, others to the general public, and some to a personal assistant who is paid well to keep up with my specific needs. And that assistant of course will sub-contract out portions of each job as needed to computer programs, legions of cheap anonymous Turkers, or to his or her own network of assistants. At that point, I'm augmenting my own perception, memory, and judgement with a whole network of brains that I carry around ready to engage with any situation I meet.

If nothing else, I hope someone writes a good sci-fi thriller story in which a rogue virtual assistant subtly manipulates the actions of unknowing clients, leading them to some unseen end.

Syndicated 2010-03-01 08:00:00 from Matt Brubeck

Finding SI unit domain names with Node.js

I'm working on some ideas for finance or news software that deliberately updates infrequently, so it doesn't reward me for checking or reloading it constantly. I came up with the name "microhertz" to describe the idea. (1 microhertz ≈ once every eleven and a half days.)

As usual when I think of a project name, I did some DNS searches. Unfortunately "microhertz.com" is not available (but "microhertz.org" is). Then I went off on a tangent and got curious about which other SI units are available as domain names.

This was the perfect opportunity to try node.js so I could use its asynchronous DNS library to run dozens of lookups in parallel. I grabbed a list of units and prefixes from NIST and wrote the following script:

var dns = require("dns"), sys = require('sys');

// SI prefixes and unit names, taken from the NIST tables.
var prefixes = ["yotta", "zetta", "exa", "peta", "tera", "giga", "mega",
  "kilo", "hecto", "deka", "deci", "centi", "milli", "micro", "nano",
  "pico", "femto", "atto", "zepto", "yocto"];

var units = ["meter", "gram", "second", "ampere", "kelvin", "mole",
  "candela", "radian", "steradian", "hertz", "newton", "pascal", "joule",
  "watt", "coulomb", "volt", "farad", "ohm", "siemens", "weber", "henry",
  "lumen", "lux", "becquerel", "gray", "sievert", "katal"];

// Kick off every prefix+unit lookup at once; Node's async DNS resolver
// runs them all in parallel.
for (var i=0; i<prefixes.length; i++) {
  for (var j=0; j<units.length; j++) {
    checkAvailable(prefixes[i] + units[j] + ".com", sys.puts);
  }
}

// Call back with the name only if the lookup fails with NXDOMAIN, i.e. the
// domain does not resolve and is probably unregistered.
function checkAvailable(name, callback) {
  var resolution = dns.resolve4(name);
  resolution.addErrback(function(e) {
    if (e.errno == dns.NXDOMAIN) callback(name);
  });
}
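Node's dns module has changed a great deal since this was written. On a current release, a rough equivalent of checkAvailable using the promise-based API would look something like the sketch below; as with the original, a failed A-record lookup only suggests that a name is unregistered.

// Sketch for modern Node (dns.promises); mirrors checkAvailable above.
var resolve4 = require("dns").promises.resolve4;

function checkAvailable(name, callback) {
  resolve4(name).catch(function (e) {
    // ENOTFOUND is the NXDOMAIN case: the name does not resolve at all.
    if (e.code === "ENOTFOUND") callback(name);
  });
}

checkAvailable("microhertz.org", console.log);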

Out of 540 possible .com names, I found 376 that are available (and 10 more that produced temporary DNS errors, which I haven't investigated). Here are a few interesting ones, with some commentary:

  • exasecond.com – 32 billion years
  • petasecond.com – 32 million years
  • petawatt.com – can be produced for femtoseconds by powerful lasers
  • terapascal.com
  • gigakelvin.com – possible temperature of picosecond flashes in sonoluminescence
  • giganewton.com – 225 million pounds force
  • gigafarad.com
  • kilosecond.com – 16 minutes 40 seconds
  • kilokelvin.com – 1340 degrees Fahrenheit
  • centiohm.com
  • millifarad.com
  • microkelvin.com
  • picohertz.com – once every 31,689 years
  • picojoule.com
  • femtogram.com – mass of a single virus
  • yoctogram.com – a hydrogen atom weighs 1.66 yoctograms
  • zeptomole.com – 602 molecules

To get the complete list, just copy the script above to a file, and run it like this: node listnames.js

Along the way I discovered that the API documentation for Node's dns module was out-of-date. This is fixed in my GitHub fork, and I've sent a pull request to the author Ryan Dahl.

Syndicated 2010-01-13 08:00:00 from Matt Brubeck

Weekend hack: outline grep

I keep almost all of my notes and to-do lists in plain text files, so I can edit and search them with Vim, grep, and other standard Unix tools. I often indent lines in these files to create a simple outline structure, and use the autoindent and foldmethod=indent options to make Vim into a simple outliner.

To get useful output when searching through these outline-structured files, I wrote a simple grep replacement. Given a text file with a Python-style indentation structure, ogrep searches the file for a regular expression. It prints matching lines, with their "parent" lines as context. For example, if input.txt looks like this:

2009-01-01
  New Year's Day!
    No work today.
    Visit with family.
2009-01-02
  Grocery store and library.
2009-01-03
  Stay home.
2009-01-04
  Back to work.
    Remember to set an alarm.

then ogrep work input.txt will produce the following output:

2009-01-01
  New Year's Day!
    No work today.
2009-01-04
  Back to work...
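The real implementation is the literate Haskell linked below, but the core idea is tiny. Here is a rough JavaScript sketch of it (the function and variable names are mine):

// Print each line matching a pattern, preceded by its chain of less-indented
// "parent" lines; each parent is printed at most once.
var fs = require("fs");

function ogrep(pattern, filename) {
  var re = new RegExp(pattern);
  var parents = [];   // stack of {indent, line} enclosing the current line
  var printed = [];   // lines already written out

  fs.readFileSync(filename, "utf8").split("\n").forEach(function (line) {
    if (line.trim() === "") return;
    var indent = line.length - line.replace(/^\s+/, "").length;
    while (parents.length && parents[parents.length - 1].indent >= indent) {
      parents.pop();
    }
    var entry = { indent: indent, line: line };
    if (re.test(line)) {
      parents.forEach(function (p) {
        if (printed.indexOf(p) === -1) { console.log(p.line); printed.push(p); }
      });
      console.log(line);
      printed.push(entry);
    }
    parents.push(entry);
  });
}

ogrep(process.argv[2], process.argv[3]);   // e.g. node ogrep.js work input.txt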

You can download ogrep from the outline-grep repository on GitHub, or just read the literate Haskell file. The code is almost trivial (40 lines of code, plus imports and comments); I'm publishing it just in case anyone else has a use for it, and because some of my friends were curious about how I'm using Haskell. I've now written a few "real-world" Haskell programs (compleat was the first). I'm finding Haskell very well suited to such programs, though this particular one would be equally easy in a language like Perl, Python, or Ruby.

This is a one-off tool to fill a gap in my workflow; there are no configuration options or useful error messages. It would be fairly easy to extend it, though. For example, it might be handy to have an option to include children (as well as parents) of matching lines. I recently realized that ogrep often works for searching through source code too, which might generate some more unexpected use cases.

Syndicated 2010-01-12 08:00:00 from Matt Brubeck

Android 2.0 ships with V8 JavaScript engine

Google has not yet released most of the Android 2.0 source code, but they did publish source for a very small number of components, including a WebKit snapshot. I was very excited to see that the snapshot includes Google's V8 virtual machine! (Previous Android releases used Safari's JavaScriptCore/"SquirrelFish Extreme" VM.) But without the rest of the source tree, there was no way to build and run this on a real Android phone. The SDK includes a binary image that runs only in the qemu-based emulator.

Today I got to try out a Motorola Droid. Here is how its browser compares to Android 1.6 on my HTC Dream (Android Dev Phone / T-Mobile G1) in the V8 Benchmark Suite (version 5):

Test          Dream              Droid              Change
Richards      13.5               15.6               16%
DeltaBlue     5.23               12.9               147%
Crypto        13.2               10.9               -17%
RayTrace      10.9               80.1               635%
EarleyBoyer   23.5               74.7               218%
RegExp        did not complete   16.5
Splay         did not complete   did not complete
Some tests (Richards, Crypto) see little or no improvement, while others (DeltaBlue, RayTrace, EarleyBoyer) are dramatically faster. Just for comparison, let's run the same benchmark on Safari 4 (JavaScriptCore) and a Chromium 4 nightly build (V8) on a Mac Pro:

Test          Safari   Chromium   Change
Richards      4103     4640       13%
DeltaBlue     3171     4418       39%
Crypto        3331     3643       9%
RayTrace      3509     6662       90%
EarleyBoyer   4737     7643       61%
RegExp        1268     1187       -6%
Splay         1198     7290       509%

The precise ratios are different, but the same tests that showed the most improvement from Android 1.6 to 2.0 also show the most improvement from Safari to Chrome. Based on this plus the source code snapshot, I'm pretty sure that Android 2.0 is indeed using V8.

This is exciting news. It makes Droid the first shipping product I know that uses V8 on an ARM processor, although V8 has included an ARM JIT compiler for some time now. For mobile web developers (like me), it means we're one step closer to having desktop-quality rich web applications on low-power handheld devices.

Final thought: Although the Motorola Droid is still 100 times slower than Chromium on a Mac Pro, it's actually faster at some benchmarks than IE8 on a low-end Windows machine, or Firefox 2 on hardware from just a few years ago.

Syndicated 2009-11-06 08:00:00 from Matt Brubeck

Compleat: Programmable Completion for Everyone

Compleat is an easy, declarative way to add smart tab-completion for any command-line program. For a quick description, see the README. For more explanation and a brief tutorial, keep reading...

Background

I'm one of those programmers who loves to carefully tailor my development environment. I do nearly all of my work at the shell or in a text editor, and I've spent a dozen years learning and customizing them to work more quickly and easily.

Most experienced shell users know about programmable completion, which provides smart tab-completion for supported programs like ssh and git. (If you are not familiar with it, you really should install and enable bash-completion, or the equivalent package for your chosen shell.) You can also add your own completions for programs that aren't supported—but in my experience, most users never bother.

When I worked at Amazon, everyone used Zsh (which has a very powerful but especially baroque completion system) and shared the completion scripts they wrote for our myriad internal tools. Now that I'm in a startup with few other command line die-hards, I'm on my own when it comes to extending my shell.

So I read the fine manual and started writing my own completions. Over on GitHub you can see the script I made for three commands from the Google Android SDK. It's 200 lines of shell code, fairly straightforward if you happen to be familiar with the Bash completion API. But as I cranked out more and more case statements, I felt there must be a better way...

The Idea

It's not hard to describe the usage of a typical command-line program. There's even a semi-standard format for it, used in man pages and generated by libraries like GNU AutoOpt. Here's one for android, one of the SDK commands supported by my script:

   android [--silent | --verbose]
   ( list [avd|target]
   | create avd ( --target <target> | --name <name> | --skin <name>
                 | --path <file> | --sdcard <file> | --force ) ...
   | move avd (--name <avd> | --rename <new> | --path <file>) ...
   | (delete|update) avd --name <avd>
   | create project ( (--package|--name|--activity|--path) <val>
                     | --target <target> ) ...
   | update project ((--name|--path) <val> | --target <target>) ...
   | update adb )

My idea: What if you could teach the shell to complete a program's arguments just by writing a usage description like this one?

The Solution

With Compleat, you can add completion for any command just by writing a usage description and saving it in a configuration folder. The ten-line description of the android command above generates the same results as my 76-line bash function, and it's so much easier to write and understand!

The syntax should be familiar to long-time Unix users. Optional arguments are enclosed in square brackets; alternate choices are separated by vertical pipes. An ellipsis following an item means it may be repeated, and parentheses group several items into one. Words in angle brackets are parameters for the user to fill in.

Let's look at some more features of the usage format. For programs with complicated arguments, it can be useful to break them down further. You can place alternate usages on their own lines separated by semicolons, like this:

android <opts> list [avd|target];
android <opts> move avd (--name <avd>|--rename <new>|--path <file>)...;
android <opts> (delete|update) avd --name <avd>;

...and so on. Rather than repeat the common options on every line, I used a parameter <opts>. I can define that parameter using the same usage syntax.

  opts = [ --silent | --verbose ];

For parameters whose values are not fixed but can be computed by another program, we use a ! symbol followed by a shell command to generate completions, like this:

avd = ! android list avd | grep 'Name:' | cut -f2 -d: ;
target = ! android list target | grep '^id:'| cut -f2 -d' ' ;

And any parameter without a definition will just use the shell's built-in completion rules, which suggest matching filenames by default.

The README file has more details of the usage syntax, and instructions for installing the software. Give it a try, and please send in any usage files that you want to share! (Questions, bug reports, or patches are also welcome.)

Future Work

For the next release of Compleat, I would like to make installation easier by providing better packaging and pre-compiled binaries; support zsh and other non-bash shells; and write better documentation.

In the long term, I'm thinking about replacing the usage file interpreter with a compiler. The compiler would translate the usage file into shell code, or perhaps another language like C or Haskell. This would potentially improve performance (although speed isn't an issue right now on my development box), and would also make it easy for usage files to include logic written in the target language.

Final Thoughts

Recently I realized that parts of my work are so specialized that my parents and non-programmer friends will probably never really get them. For example, Compleat is a program to generate programs to help you... run programs? Sigh. Well, maybe someone out there will appreciate it.

Compleat was my weekends/evenings/bus-rides project for the last few weeks (as you can see in the GitHub punch card), and my most fun side-project in quite a while. It's the first "real" program I've written in Haskell, though I've been experimenting with the language for a while. Now that I'm comfortable with it, I find that Haskell's particular combination of features works just right to enable quick exploratory programming, while giving a high level of confidence in the behavior of the resulting program. Compleat 1.0 is only 160 lines of Haskell, excluding comments and imports. Every module was completely rewritten at least once as I tried and compared different approaches. This is much less daunting when the code in question is only a couple dozen lines. I don't think this particular program would have been quite as easy to write—at least for me—in any of the other platforms I know (including Ruby, Python, Scheme, and C).

I had the idea for Compleat more than a year ago, but at the time I did not know how to implement it easily. I quickly realized that what I wanted to write was a specialized parser generator, and a domain-specific language to go with it. Unfortunately I never took a compiler-design class in school, and had forgotten most of what I learned in my programming languages course. So I began studying parsing algorithms and language implementation, with Compleat as my ultimate goal.

My good friend Josh and his Gazelle parser generator helped inspire me and point me toward other existing work. Compleat actually contains three parsers. The usage file parser and the input line tokenizer are built on the excellent Parsec library. The usage file is then translated into a parser that's built with my own simple set of parser combinators, which were inspired both by Parsec and by the original Monadic Parser Combinators paper by Graham Hutton and Erik Meijer. The simple evaluator for the usage DSL applies what I learned from Jonathan Tang's Write Yourself a Scheme in 48 Hours. And of course Real World Haskell was an essential resource for both the nuts and bolts and the design philosophy of Haskell.

So besides producing a tool that will be useful to me and hopefully others, I also filled in a gap in my CS education, learned some great new languages and tools, and kindled an interest in several new (to me) research areas. It has also renewed my belief in the importance of "academic" knowledge to real engineering problems. (I've already come across at least one problem in my day job that I was able to solve faster by implementing a simple parser than I would have a year ago by fumbling with regexes.) And I'll be even happier if this inspires some friends or strangers to take a closer look at Haskell, Parsec, or any problem they've thought about and didn't know enough to solve. Yet.

Syndicated 2009-10-30 07:00:00 from Matt Brubeck

Colophon

Colophon

This site is powered by Jekyll and based on styles and markup by Tom Preston-Werner. Comments are run by Disqus and hosting is by Dreamhost.

Before starting this site, I had an Advogato diary for writing about software. I also have a personal journal (mostly interesting to my friends and family).

Syndicated 2009-10-28 07:00:00 from Matt Brubeck

11 Mar 2009 (updated 5 May 2010 at 22:24 UTC)

position:fixed in Android Webkit

Good news for mobile web developers! In the latest development build of Android ("cupcake"), WebKit supports iPhone touch events and CSS3 animations/transforms. This means that Richard Herrera's iPhone fixed positioning hack will soon work on Android too.
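For anyone curious, the hack boils down to something like the sketch below. This is my paraphrase of the general technique, not Herrera's actual code: since these browsers don't honor position:fixed, you keep an absolutely-positioned toolbar pinned by translating it to the current scroll offset whenever the viewport moves.

// Paraphrase of the technique; the element id and details are hypothetical.
// Assumes a toolbar that is position:absolute at the top of the document.
var toolbar = document.getElementById("toolbar");

function repin() {
  toolbar.style.webkitTransform = "translateY(" + window.pageYOffset + "px)";
}

window.addEventListener("scroll", repin, false);
document.addEventListener("touchmove", repin, false);  // touch gestures may not fire scroll events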

The WebKit CSS Animation demos also work, but lack of hardware acceleration in Android makes them painfully slow compared to the iPhone.
