27 Jan 2003 josef
Hammered web servers
While there are lots of load-balancing tools around, most of them concentrate on clustering at the TCP level. This distributes the load between machines, but leaves one problem unsolved: the network throughput bottleneck.
During the weekend, my DSL line got hammered to death, so it is clearly no longer suitable for hosting free software projects on the web server at home once they become popular. Load balancing at the application protocol level can help here, though: every page or file to be downloaded can be replaced by a script (say, PHP) that sends back HTTP redirection headers pointing at distributed locations of the resource. One copy could then be placed on the SF download servers, another on some download mirror site.
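The redirect trick above is simple enough to sketch in a few lines. Here is a minimal illustration in Python (the mirror hostnames are made up for the example): the script cycles through a list of mirrors and emits a 302 redirect pointing the client at the next one, so each download request is pushed off to a different external host.

```python
# Sketch of application-level load balancing via HTTP redirects.
# The mirror URLs below are hypothetical placeholders.
import itertools

MIRRORS = [
    "http://download1.example.org",
    "http://download2.example.org",
]

# Round-robin iterator over the mirror list.
_next_mirror = itertools.cycle(MIRRORS)

def redirect_headers(path):
    """Return raw HTTP redirect headers sending the client to the next mirror."""
    location = next(_next_mirror) + path
    return "HTTP/1.1 302 Found\r\nLocation: %s\r\n\r\n" % location
```

A real PHP version would just call `header("Location: ...")` with the chosen mirror URL; the round-robin state could live in a file or database so it survives between requests.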
Apache's mod_backhand would fit (it understands HTTP pipelines), but the authors claim it cannot solve the bottleneck problem. My idea, however, is that it could still work if external resources (extra-cluster, not intra-cluster) are used.
On a side note, I now have a self-written Debian package autobuilder in place for i386, hurd-i386, arm and ppc. Except for the Hurd, they even run in parallel; in the far future the Hurd will run on L4 and can thus be fired up in user space.
Now, all I need are those pills that wake up sleeping package sponsors.