Older blog entries for Bram (starting at number 126)

The deadline for CodeCon 2005 submissions is on December 15, less than two weeks away. Time to get those submissions in.

I've been posting blog entries exclusively to the livejournal version lately, because it's more featureful. Check there for the latest entries.

20 Nov 2004 (updated 20 Nov 2004 at 22:49 UTC) »
Boom-boom

The last variant on boom-boom I gave had fairly slow, plodding play. I've since come up with the following game, which is by far the best board game I've ever invented:

Play is done on an othello board, with both players having one color and moving alternately. On a turn, a player may either place a new piece in an empty square, or 'explode' a piece they already have. Placed pieces may not touch an opponent's piece on either a side or corner.

To explode a piece, the player places one of their pieces in each of the eight spaces touching the piece they exploded. If a piece is already there, it's left alone. A piece may not be exploded if it's already completely surrounded. After a piece is exploded, all opponent pieces which border on the exploded piece (either edges or corners) are flipped over, then all opponent pieces which border on those are flipped over, and so on, until there aren't any places where two pieces of opposing sides border each other.

The first player to be unable to move on their turn loses.
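For anyone who wants to implement it, here's a rough sketch of the explode move in Python. The board encoding is mine, and seeding the flip cascade from both the exploded square and the newly placed pieces is just one reading of the rules above:

    OFFSETS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

    def neighbors(board, square):
        r, c = square
        return [(r + dr, c + dc) for dr, dc in OFFSETS if (r + dr, c + dc) in board]

    def explode(board, square, me):
        # board maps (row, col) -> 'X', 'O', or None
        empties = [n for n in neighbors(board, square) if board[n] is None]
        assert board[square] == me and empties, "need your own, not fully surrounded, piece"
        for n in empties:
            board[n] = me
        # Flip opponent pieces bordering the explosion, then pieces bordering
        # those, and so on, until no two opposing pieces touch.
        frontier = [square] + empties
        while frontier:
            for n in neighbors(board, frontier.pop()):
                if board[n] is not None and board[n] != me:
                    board[n] = me
                    frontier.append(n)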

I've play-tested this game and it went over very well. There are plenty of simple tactics for someone just starting out, and lots of deeper and more interesting tactics as you master those. Kids love flipping over so many pieces, and find the rules easy to follow. An 8x8 board is a little small for this game; perhaps it would be good to use that as a beginner board and a larger board for tournament play.

Anyone who wants to follow up to this post should do so via livejournal comments rather than posting to advogato, because I find advogato blog conversations clunky and probably won't follow up to responses there.


8 Nov 2004 (updated 8 Nov 2004 at 23:15 UTC) »
Car Emissions

There are two concepts in car pollution which people generally get mixed up. Some exhaust gases are simply stinky and noxious, most notably particulate carbon and carbon monoxide. Those do direct damage to the people near them and to crops grown nearby, so they're clearly bad, and there isn't much direct economic disincentive for any one person to make their car produce less of them.

The other troublesome kind of exhaust is greenhouse gases, mostly carbon dioxide. The amount of damage caused by these is much less clear, and there's a straightforward economic disincentive to produce them, because they correspond pretty much directly to the amount of gas your car consumes. Carbon dioxide also happens to be produced in mass quantities by respiration.

If you really want to know how clean a car is, look it up on the EPA web site. There are some surprises, for example the honda civic hybrid with a manual transmission has mediocre pollution ratings.

Erasure Codes

People keep asking me about using erasure/rateless/error correcting codes in BitTorrent. It isn't done because, quite simply, it wouldn't help.

One possible benefit of erasure codes is that when sending data to a peer there are so many potential pieces that you can send any random one you have and it won't be a duplicate. The problem is that the peer may already have gotten that same piece from another peer, so that benefit is destroyed, and on top of that the overhead of communicating and remembering which peer has what is increased tremendously.

Possible benefit number two is that erasure codes increase the chances that your peers won't already have the pieces which you've downloaded. But simply downloading pieces which fewer of your peers have first handles that problem quite nicely, so a vastly more complicated solution is unwarranted.
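For reference, 'downloading the pieces which fewer of your peers have first' is rarest-first selection, which comes out to roughly this (a sketch with illustrative names, not BitTorrent's actual code):

    def pick_rarest(my_pieces, peer_bitfields, num_pieces):
        # peer_bitfields holds one sequence of booleans per connected peer
        counts = [sum(bf[i] for bf in peer_bitfields) for i in range(num_pieces)]
        wanted = [i for i in range(num_pieces) if i not in my_pieces and counts[i] > 0]
        return min(wanted, key=counts.__getitem__) if wanted else None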

Possible benefit number three is that if there's no seed left erasure codes increase the chances that the entire file will be recoverable. In practice, when a file becomes unrecoverable it's because there was only one seed and several downloaders started from scratch, then the seed disappeared after uploading less than the total length of the file. Erasure codes obviously would not help out in that case.

There are other possible benefits and corresponding rebuttals, but they get more complicated. The short of it all is that the possible benefits of erasure codes can be had with much more straightforward and already implemented techniques, and the implementation difficulties of such codes are quite onerous.

While I'm pissing on everyone's parade, I should probably mention another scenario in which everyone wants to use erasure codes and it's a bad idea: off-site backup. If you store everything straightforwardly on each of three backup sites, and each site has two nines (99%) uptime (if it doesn't you shouldn't be using it for backup), then the overall reliability will be six nines (99.9999%). Engineering for more than six nines is nothing but intellectual masturbation, because unforeseeable problems completely dominate failure at that point. Therefore one-of-three gets great reliability with unreliable backup sites in exchange for having to store three times the amount of data you're backing up.

With erasure codes, you could make it so that each backup site only had to store half as much stuff, but that two of them would still need to be up to recover data. If you then have four backup sites, there's a savings of 1/3 of the storage versus the much more straightforward approach. This is a pretty small reduction given that the price of mass storage is very small and plummeting rapidly. It also comes at great expense: you have to deal with four backup sites instead of three, and the software is much more complicated. In systems like this, the recovery software not working is a significant part of the overall chance of the system failing. Also, any economic benefit of savings on disk space must be weighed against the costs of the software system which runs it. Given the ludicrous prices of backup systems these days, a much simpler albeit slightly less efficient one would probably be a great value.
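For the skeptical, here's the arithmetic behind both schemes, assuming site failures are independent:

    from math import comb

    p_up = 0.99

    # Full replica on each of three sites: lost only if all three are down at once.
    replicas = 1 - (1 - p_up) ** 3
    print(replicas)      # 0.999999 -- six nines

    # 2-of-4 erasure coding: recoverable whenever at least two of four sites are up.
    erasure = sum(comb(4, k) * p_up**k * (1 - p_up)**(4 - k) for k in range(2, 5))
    print(erasure)       # ~0.999996 -- actually slightly worse

    # Storage: 3x the data for replication, 4 * 0.5 = 2x for 2-of-4, saving 1/3.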

ECC of course has some great uses, for example data transmission over noisy media and storing data on media which can get physically corrupted, and recent developments in it are very exciting, but it's very important to only use sophisticated tools when clearly warranted.


Electoral College Effects

Here is an interesting question - Given the electoral college system, does it favor the large or the small states? The very short answer is that it ridiculously favors the small states, because they're flat-out given disproportionately large representation. And if that weren't the case then switching to a simple nationwide majority would be uncontroversial, following the trend of a simple majority deciding within each state.

But if we pretend that this is an issue of real interest, what are the effects? Well, that depends. If we had a country of two states, one larger than the other, then the chances of one's vote mattering in the smaller state would be just about nil. In a country of three states, two of which were just slightly larger than half the size of the largest, the small states would be hugely disproportionately represented.

In practice there are enough states that statistical effects overwhelm the weirdnesses of specific enumerable outcomes. We can adopt a much more simplistic model of there being many small states, and compare two of them, one roughly double the size of the other. If we assume that the race is a dead heat across the entire country (a completely unrealistic assumption, as I'll get to in a minute), the chances of a voter swinging the half-size state are approximately 1.4 times those of the double-size state (because a standard deviation is proportional to the square root of the number of voters), and the chances of the half-size state swinging the overall election are about half, so the chances of a single vote from the smaller state swinging the election are about a third less.
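Here's that arithmetic as a toy model, with illustrative numbers:

    from math import sqrt

    N = 1_000_000                  # size of the larger state, for illustration
    swing = lambda n: 1 / sqrt(n)  # per-vote chance of an exact tie, up to a constant

    big = swing(N) * 1.0           # full weight in the overall election
    small = swing(N / 2) * 0.5     # ~1.4x the tie chance, half the weight
    print(small / big)             # ~0.71 -- about a third less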

But we don't have a homogeneous voting population, and very few races are dead heats across the entire country. In practice state lines gerrymander quite heavily against New York and California, whose votes in recent elections have been such a foregone conclusion that nobody bothers campaigning in them. With the two coasts being very populous and the economic centers of the country, and getting more so, this effect is likely to become even more pronounced in the future.

And then there's the question of, in a close race, which states do you give out more candy to? The ones which are close races, obviously, and the ones which you can more likely affect the outcomes of. Small states are much easier to win by buying off votes, because a much smaller number of votes can change their outcome. The likelihood of their swinging the overall election is negated here because we don't go into elections blind - campaigns poll to find out what states are up in the air, and ignore the ones which aren't.

Which states are close races varies from election to election, so there's a random crapshoot which decides who gets the most resources each time. The result is inevitable arbitrary disparities, with the only consistent thing being their arbitrariness, and a strong incentive for local officials to make their states be close, or at least appear to be close.

If this all sounds stupid and unpleasant, it's because it is. The only clear effect of a truly voter-weighted electoral college would be that New York and California would (still) be gerrymandered against. All the other effects are random and generally bad for everybody.

Unfortunately the chances of the electoral college getting fixed via an orderly political process are just about nil. Fixing it would require a constitutional amendment, which would have to be ratified by 3/4 of the states, and most of them are, unsurprisingly, small. The smallest states get several times their proportional say in the electoral college, and many times their proportional say in the senate, with a flat two senators from every state, so any constitutional amendment cleaning up the mess would be dead in the water.

The rules favoring small states, by the way, were set up at the time of the formation of the United States to get the south to join. Back then, the rules were even worse because slaves counted towards representation (at a three-fifths rate), even though they couldn't vote. It took the civil war, caused by the political imbalances favoring the less economically productive parts of the country and a separation along neat geographic lines, for the mess to get cleaned up. Kind of like the situation today, except that it hasn't gotten to the point of internal warfare, at least not yet.

On that note, I feel obligated, this being election day, to encourage everyone to vote. Unless of course you'll be voting with a diebold machine, or your registration got mysteriously lost, as mine did. [Update - I showed up to vote and they did manage to find my registration in some obscure place, but my wife, whose registration was sent in at the same time and on the same day, had to cast a provisional ballot.]


Trikke

I had some interesting ideas for a human-powered vehicle in which the rider stands upright and propels forward with a side-to-side motion, using the same general principle as pumping on a skateboard or streetboard. Then I read about the Trikke and realized that it's been invented already, so I could simply buy one. After an hour of practice I can do laps on a Trikke 8 without having to push at all. It's great fun.

On the subject of interesting vehicles, I'd be remiss not to mention the handcycle.

Graph Isomorphism

After much cogitation, I think I've figured out some examples of graph isomorphism problems which are almost tricky.

Take a graph for which, for every pair of nodes (X, Y), the entire graph can be mapped onto itself in an isomorphism such that X maps onto Y (that is, a vertex-transitive graph). There are many examples of these, such as hypercubes. Then 'split' each node to make a new graph, such that for each node X in the old graph, there are two nodes in the new graph Y and Y', with Y connected to Y'. For each pair of nodes X and Z connected in the original graph and corresponding to Y, Y', W, and W' in the new graph, either connect Y to W and Y' to W', or 'cross' it by connecting Y to W' and Y' to W.
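Here's a sketch of the construction, with a graph encoded as a dict from node to set of neighbors and the crossings given as a set of two-element frozensets; the encoding is mine, chosen for brevity:

    def split(graph, crossings):
        # Each node X becomes a connected pair (X, 0) and (X, 1).
        new = {(x, s): {(x, 1 - s)} for x in graph for s in (0, 1)}
        for x in graph:
            for z in graph[x]:
                crossed = frozenset((x, z)) in crossings
                for s in (0, 1):
                    t = 1 - s if crossed else s
                    new[(x, s)].add((z, t))
                    new[(z, t)].add((x, s))
        return new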

The potentially tricky problem is to determine whether a graph created with one set of crossings is isomorphic to one with a different set of crossings.

One way to differentiate nodes in graphs of this form is to, for each node, make a list of how many other nodes are a minimum distance of one hop away, then two, then three, etc. If there is a loop in the original graph then the numbers will be affected by whether that loop contains an odd or even number of crossings. Proper selection of the graph and crossings can make every short loop have an even number of crossings, thus foiling this approach.
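That differentiation test is just a breadth-first search from each node; a quick sketch:

    from collections import deque

    def distance_profile(graph, start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        profile = {}
        for d in dist.values():
            profile[d] = profile.get(d, 0) + 1
        return profile   # nodes with different profiles can't correspond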


CodeCon 2005 Call For Papers

The CodeCon 2005 Call for Papers is now up. Please forward to anyone who might be interested.

CodeCon 4.0
February 2005
San Francisco CA, USA
www.codecon.org

Call For Papers

CodeCon is the premier showcase of cutting edge software development. It is an excellent opportunity for programmers to demonstrate their work and keep abreast of what's going on in their community.

All presentations must include working demonstrations, ideally accompanied by source code. Presentations must be given by one of the active developers of the code in question. We emphasize that demonstrations be of *working* code.

We hereby solicit papers and demonstrations.

* Papers and proposals due: December 15, 2004
* Authors notified: January 1, 2005

Possible topics include, but are by no means restricted to:

* community-based web sites - forums, weblogs, personals
* development tools - languages, debuggers, version control
* file sharing systems - swarming distribution, distributed search
* security products - mail encryption, intrusion detection, firewalls

Presentations will be 45 minutes long, with 15 minutes allocated for Q&A. Overruns will be truncated.

Submission details:

Submissions are being accepted immediately. Acceptance dates are November 15 and December 15. After the first acceptance date, submissions will be either accepted, rejected, or deferred to the second acceptance date.

The conference language is English.

Ideally, demonstrations should be usable by attendees with 802.11b connected devices either via a web interface, or locally on Windows, UNIX-like, or MacOS platforms. Cross-platform applications are most desirable.

Our venue will be 21+.

To submit, send mail to submissions2005@codecon.org including the following information:

* Project name
* url of project home page
* tagline - one sentence or less summing up what the project does
* names of presenter(s) and urls of their home pages, if they have any
* one-paragraph bios of presenters, optional, under 100 words each
* project history, under 150 words
* what will be done in the project demo, under 200 words
* slides to be shown during the presentation, if applicable
* future plans

General Chairs: Jonathan Moore, Len Sassaman
Program Chair: Bram Cohen

Program Committee:

* Jeremy Bornstein, AtomShockwave Corp., USA
* Bram Cohen, BitTorrent, USA
* Jered Floyd, Permabit, USA
* Ian Goldberg, Zero-Knowledge Systems
* Dan Kaminsky, Avaya, USA
* Klaus Kursawe, Katholieke Universiteit Leuven, BE
* Ben Laurie, A.L. Digital Ltd., UK
* Jonathan Moore, Mosuki, USA
* Len Sassaman, Nomen Abditum Services, USA

Sponsorship:

If your organization is interested in sponsoring CodeCon, we would love to hear from you. In particular, we are looking for sponsors for social meals and parties on any of the three days of the conference, as well as sponsors of the conference as a whole and donors of door prizes. If you might be interested in sponsoring any of these aspects, please contact the conference organizers at codecon-admin@codecon.org.

Press policy:

CodeCon provides a limited number of passes to bona fide press. Requests for complimentary press passes will be evaluated individually. Everyone is welcome to pay the low registration fee to attend without an official press credential.

Questions:

If you have questions about CodeCon, or would like to contact the organizers, please mail codecon-admin@codecon.org. Please note this address is only for questions and administrative requests, and not for workshop presentation submissions.


Motorized vehicles

I recently had the opportunity to try out Trevor Blackwell's segway and eunicycle. They're both a lot of fun, although the eunicycle requires actual practice to ride, unlike the segway, which you basically just step on and ride around.

I think that appropriate modifications to the eunicycle could make it even easier to ride than the segway. While this is extremely counterintuitive, it makes a lot of sense from a physics standpoint. A car is completely based on static balance, meaning that if it's switched off it doesn't fall over. A bicycle has some dynamic balance, because it would fall to the right or left if the rider stopped balancing. A unicycle is completely based on dynamic balance, which makes it much more difficult to ride, but also makes it far more controllable and maneuverable once you've learned to ride it. The way one shifts one's weight on a unicycle, and by extension a eunicycle, is ironically more intuitive for humans than the way it works on a bicycle, because it's the same as the way we do it with the form of locomotion we inexplicably use all the time, which is bipedalism. Bipedalism is completely based on dynamic balance, and requires constant corrections even when just standing still.

In order to make a eunicycle easier to ride, it must be made self-balancing. On a unicycle, and the eunicycle as it is today, you balance by waving your hands around, which is both difficult to do and very limited in the amount of force it can exert, which makes going at very high speed inherently dangerous since you can't maneuver. A more stable vehicle can be constructed as follows: On the bottom, there's a wheel like in the existing eunicycle. Above that, there's a horizontal flywheel attached to a high-torque motor which is used for balancing. Above that, and attached to the wheel via structural components which go around the flywheel, is a platform which the rider stands on, and in front of the rider there are handlebars of the same design as the segway for the rider to hold onto.

If the rider is moving forwards and leans right, the eunicycle turns the flywheel to the left, thereby turning the rider and wheel to the right and keeping the rider from falling over (although generally this will result in the vehicle being angled slightly more forward, so it will then accelerate to keep from falling forwards). Likewise, if the rider is going forward and leans left the flywheel is used to turn the rider and wheel to the left. If the rider is going backwards then the wheel is turned left to compensate for leaning right and right to compensate for leaning left.

A weird problem is that if you keep turning in the same direction for a while the flywheel might build up considerable angular momentum, eventually getting to the point where it can't turn any faster and hence the steering bottoms out. I'm not sure if this would be a real problem in practice; there are several ways the effect could be damped or avoided if it is.

In principle this sort of vehicle should be able to go faster than a motorcycle, since it has only one wheel and hence half the friction, although in practice the weight of the flywheel and stability issues might limit its speed.

If you really wanted the ultimate high-speed vehicle, it would probably be a glycerine jet-propelled unicycle with an electronic stabilization system, although that would be incapable of idling and would be in some ways closer to a jet pack than a land vehicle.


As an experiment I'm dual-posting both here and on livejournal, so that people can leave comments. If you'd like to leave comments on this entry, go here.

Airport 'Security'

I'm apparently on the always-harass list. After getting searched before going onto airplanes so many times, I've learned a bit about the procedures involved. When flying on Alaska/Horizon, if you're marked to be searched your boarding pass says SSSS in the lower-right corner. Conveniently, you get your boarding pass before going through security. This is presumably so that any would-be hijacker can see that they're going to be searched thoroughly and drop off all their weapons before trying to get on the plane, to avoid all that trouble that catching and detaining them would cause.

My last flight I happened to be sitting next to a pilot deadheading back. He confirmed that security searches pilots too. Whoever designed current airport security procedures might have a reasonable excuse for this ineptitude though. For example, they might still be in kindergarten.

The SSSS mark has a 2d bar code above it. I wonder if anyone has ever collected a bunch of those and decoded them.

SHA-1

There's a rumor of a sha-1 break, related to a confirmed break of sha-0. Fortunately the breaks are unlikely to lead to the construction of pre-images, so there's no need to panic just yet, even if the rumors prove true.

Which leads to the question, what should we do? First of all, there's no need to change algorithms just yet, although anyone designing a protocol might want to hold off a few months unless time is of the essence (and in software everything is always late, so it rarely is). With a hash length of 160 bits, and thus birthday attacks in 80 bits, sha-1 is due to expire in roughly two decades no matter what, so we should seriously consider using a hash function with a longer output. Contrast AES-128, which with a key and block size of 128 bits could easily survive past when Moore's law runs out of gas.

Whatever happens, it would be a disaster for a multiplicity of hash functions to win out, since that would result in profound incompatibility. So the choice of successor to sha-1 shouldn't be taken lightly.

The clear political favorite is sha-256, which is unrelated to sha-1 in structure and hence not susceptible to the most recent attacks, and also has a 256 bit output to match aes's 128 bit key length. My only real complaint about sha-256 is that it's a different (and much less studied) cryptographic primitive than aes, thus providing two points of attack on our cryptographic systems rather than one. I would much prefer a hash function which is based on aes, such as whirlpool. Unfortunately whirlpool has a 512 bit output and about the performance of sha-512, at least on 32-bit systems. If there were a standardized aes-based hash which had an output of 256 bits and about the performance of sha-256 I'd recommend it unhesitatingly, but with the way things are now I'm torn.

If I had to put even money on it I'd bet on sha-256 winning, since that one already has considerable political oomph behind it, no one has complained about potential weaknesses in it, and everybody agrees on the importance of a single standard.

Python Exception Assertions

I would like to be able to state in my Python code 'assert that exception spam would get caught somewhere in the current call stack by something other than a catch of general exception'. There are many bugs which this would catch easily which are very difficult to search for comprehensively using test code, and have caused me great pain in the past and probably will continue to do so in the future.
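To make the bug class concrete, here's a small runnable illustration (of the bug, not of the proposed feature): the inner handler names the wrong exception, and a generic catch further out silently masks the mistake:

    class SpamError(Exception): pass
    class EggsError(Exception): pass

    def worker():
        raise SpamError("oops")

    def caller():
        try:
            worker()   # the proposed assertion would fire here
        except EggsError:              # meant to catch SpamError; never fires
            print("handled properly")  # never runs

    try:
        caller()
    except Exception:
        print("swallowed by a generic catch; nobody notices the real bug")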

I mentioned this to Guido and he pointed out that the expressions stating what is to be caught aren't evaluated until an exception is actually thrown. This interesting piece of trivia indicates that the expressions would have to be evaluated at the time the assertion is made, which could be a little slow, but that doesn't invalidate the utility of the functionality I want.

Guido indicated that, although he doesn't disagree with the functionality I want, it would require a significant change to the Python VM which he doesn't expect anyone to be interested in doing. So if you want a Python core project which is technically challenging and, in my humble opinion, clearly very beneficial, I suggest adding this functionality. If you do, I will be eternally grateful.

Hex

After playing around with the hex code I gave in my last entry, I think I can finally explain some opening theory.

The first question is, if your first piece is in the center, why is it weak to put your second piece near it? The answer is somewhat indirect. If there were other pieces randomly scattered around the board, a piece close to the central one would be very powerful, but at the beginning of the game there are very few pieces around, so you have to think specifically about the pieces in play. When your opponent responds to your strong central move, he will do so in a place away from where you moved, and since your board coverage is poor from being so focused on one point, his move will be relatively strong. By spreading out your moves you prevent that.

So the second centralized move isn't weak, it's just that the responses it allows are strong. A very surprising thing about this observation is that it's made straightforwardly by increasing the ply of look-ahead; in fact 2 ply sees it just fine. I always assumed that the beginning of a hex game is very strategic and that increasing ply look-ahead wouldn't improve play any, since that's reserved for 'tactical' situations. Apparently that guess was completely wrong, but I still think that increasing ply doesn't improve play unless your board evaluation function is halfway decent, which mine wasn't until now.

The second question is, why do the best hex players now play pieces so far away from the center for their opening move? The answer is related to the answer to the first question - if you place your first piece in the center, then any strong place to put your second piece will probably be close to it, which will be weak for the reason I gave above. I believe this would be seen by a four-ply look-ahead. Again, my guess that increasing the ply wouldn't improve opening play was completely off base.
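For concreteness, 'ply' here is the depth of a look-ahead search over the board evaluation function, along the lines of this minimal negamax sketch (the board interface and evaluate() are hypothetical):

    def negamax(board, ply, evaluate):
        # Score the position for the side to move, looking ahead `ply` moves;
        # having no legal moves scores as a loss.
        if ply == 0:
            return evaluate(board)
        best = float('-inf')
        for move in board.moves():
            board.apply(move)
            best = max(best, -negamax(board, ply - 1, evaluate))
            board.undo(move)
        return best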

So now at least I have an explanation of why early hex moves are the way they are, although I suspect I'll have a lot of difficulty applying this knowledge to make my play better.

Hex is the only game I know of where hypermodern theory has become the dominant one.

A good place to play hex is kurnik.

Boom-Boom

I came up with a new and improved board shape for playing boom-boom on. Start with a checkerboard of odd side length, with the corners colored black and alternating squares colored white. Make the white squares part of the 'real' board, and connect each of them to the four (or in the case of edges, two) other white squares they share a corner with. Next connect each of the edge pieces with the two edge pieces closest to it, so for example (6, 1) connects to (8, 1) and (4, 1). (My terminology counts the corner as (1, 1) not (0, 0).) Then remove (2, 1) and (1, 2) and replace them with a single piece at (1, 1) which is connected to (4, 1), (3, 2), (2, 3) and (1, 4).
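Here's the construction as code, a sketch which assumes the corner treatment is applied symmetrically at all four corners (only one is spelled out above) and that the board is at least 5x5. Coordinates are 1-indexed, matching the terminology above:

    def build_board(n):
        # n odd, corners black, so the 'real' squares have row + col odd
        white = {(r, c) for r in range(1, n + 1) for c in range(1, n + 1)
                 if (r + c) % 2 == 1}
        adj = {p: set() for p in white}

        def link(a, b):
            if a in adj and b in adj:
                adj[a].add(b)
                adj[b].add(a)

        for r, c in white:
            for dr, dc in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                link((r, c), (r + dr, c + dc))   # corner-sharing neighbors
            if r in (1, n):
                link((r, c), (r, c + 2))         # wrap along top/bottom edge
            if c in (1, n):
                link((r, c), (r + 2, c))         # wrap along left/right edge

        # Merge the two squares nearest each corner into one corner node.
        for cr, cc in ((1, 1), (1, n), (n, 1), (n, n)):
            dr, dc = (1 if cr == 1 else -1), (1 if cc == 1 else -1)
            a, b = (cr + dr, cc), (cr, cc + dc)  # e.g. (2, 1) and (1, 2)
            merged = (adj[a] | adj[b]) - {a, b}
            for p in (a, b):
                for q in adj.pop(p):
                    adj[q].discard(p)
            adj[(cr, cc)] = merged
            for q in merged:
                adj[q].add((cr, cc))
        return adj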

This board is probably most interesting to play go on, since it makes each node have exactly four neighbors, thus having the 'borderless' property of circular go boards without being so difficult to print.

Boom-Boom

One of the big problems with boom-boom is that after each explosion new pieces wind up being placed in fairly 'random' places, which makes the play extremely tactical, with not much advance planning possible. This can be eliminated by making it so that when a piece explodes a piece remains on the original exploding spot. Although that results in more pieces for the side which caused the explosion, it doesn't really incentivize pointless exploding because the new group tends to get captured as a unit. But you do wind up with a question of where excess pieces propagate to, so that a single explosion doesn't propagate forever. This also leads to the funky-shaped board becoming pointless, since it was designed to make explosions happen more quickly.

A few more logical steps along those lines results in the following game, which can be easily played using a standard othello set.

Both players alternate placing pieces of their color. On a player's turn they have the option, instead of placing a new piece, of capturing a piece of the opponent's which borders on one of their own (for the purposes of this game 'bordering on' means sharing an edge or corner). When a piece is captured all of the pieces of the same color which border it are captured recursively, thus a very large group can be captured at once. The game ends when one player occupies the entire board.

Note that if white has pieces on (3, 3) and (3, 5) and black has pieces on (3, 2), (3, 4) and (3, 6), then white capturing (3, 4) does not result in either (3, 2) or (3, 6) getting captured, although it would if there were black pieces on (4, 3) and (4, 5).
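In code the capture rule is a simple flood fill. This sketch assumes captured pieces are removed from the board; flipping them to the capturer's color would be the other possible reading of the rules:

    def capture(board, square, me):
        # board maps (row, col) -> 'X', 'O', or None
        them = board[square]
        assert them is not None and them != me
        stack = [square]
        while stack:
            r, c = stack.pop()
            if board.get((r, c)) == them:
                board[(r, c)] = None
                # recurse into all edge- and corner-bordering squares
                stack.extend((r + dr, c + dc)
                             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                             if dr or dc)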

I'm not sure off the top of my head how to approach even very simple endgames of this game. Further experimentation is required.

Poker

It turns out that my simplified poker was a bit too simplified. Chris Chang pointed out that since all-in bluffs are guaranteed to fail half the time, raising one is strictly superior to going all in. He suggests adding more cards to get rid of that problem. With the way people play these days that game would be just about as interesting as no limit hold 'em. (See that link for a good explanation of why no limit isn't really 'the cadillac of poker' and pot limit is better.)

Here's an interesting question: if you have a positive expectation bet, how much should you bet each successive round to make your money grow as quickly as possible? If you bet nothing you obviously get nothing, but if you bet everything you wind up with zero and lose out on the ability to make any money in later rounds. It turns out this is known as the Kelly Criterion.
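For the record, the formula itself is tiny; here's a sketch for a bet paying b-to-1 which is won with probability p:

    def kelly_fraction(p, b):
        # The growth-optimal fraction of bankroll to wager is p - (1 - p) / b;
        # bet nothing when this comes out negative.
        return max(0.0, p - (1 - p) / b)

    print(kelly_fraction(0.6, 1.0))   # even-money bet won 60% of the time: bet 20%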

There's computer security snake oil even in poker. This 'audit' just about made my head explode. An audit is when you go over the internals of a system and verify that it works correctly. Running statistical tests on the output isn't an audit, it's just bullshit.

While trying to grok the rules to Razz I came up with the following game, which I think has most of the strategic elements of Razz but has a lot of the weirdness removed. I say 'I think' because Razz has a whole lot of weirdness and not having played it I have a hard time figuring out what that weirdness causes in practice.

Each player gets two pocket cards to start and community cards are dealt out as in hold 'em. The difference is in how hands are scored. Unlike in other forms of poker, the hands really are seven-carded. The hand with the highest card wins, with a tie-break of the second-highest card, then third, etc. Duplicate cards are completely ignored. For example, if player X has AQ and player Y has K3 and on the board is A5538 then X's full hand is AQ853 and Y's full hand is AK853, so Y wins. If two hands are identical except that one has an extra low card, then the one with more cards wins. For example, if player X has AQ and Y has A2 and on the board is Q9884 then X's hand is AQ984 and Y's hand is AQ9842, and since Y has the extra 2 Y wins. Having two extra cards of course also wins, and having a triplicate of a card causes both duplicates to be thrown out. As in Razz, flushes and straights don't count for anything.
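The scoring rule comes out to very little code. This sketch encodes ranks as ints with ace high (14) and lets Python's tuple comparison do the card-by-card tie-break, including the extra-low-card rule (a longer hand beats its own prefix):

    def hand_key(pocket, board):
        # keep one copy of each rank, sorted high to low
        return tuple(sorted(set(pocket) | set(board), reverse=True))

    # First example above: X holds AQ, Y holds K3, the board is A5538.
    x = hand_key([14, 12], [14, 5, 5, 3, 8])   # (14, 12, 8, 5, 3) = AQ853
    y = hand_key([13, 3],  [14, 5, 5, 3, 8])   # (14, 13, 8, 5, 3) = AK853
    assert y > x                               # Y wins on the second card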

I think there are a few things which make this an interesting game. A hand which contains two lower cards than another one pre-flop has only a low chance, I think about 7%, of winning, so trying to intimidate people by playing every hand will rapidly get you soaked. Also, the knowledge gained after each face-up card is dealt is directly related to the knowledge gained previously, increasing the chances of reasoning out what cards a player has based on their behavior. The chances of any particular card turning up on the board are reasonably high, so even if you start with a pocket ace there's almost a one in three chance that you'll find yourself playing garbage and having to figure out whether to bluff or fold.

Alarm Clock Display

I have a digital alarm clock with a seven-segment display which I've long noticed 'warbles' in funny ways when I brush my teeth with an electric toothbrush, due to the vibration it imparts on my head. I always attributed the weird ways different segments warble to odd cognitive phenomena, but the other night I noticed that it's always the segments on the top and right edges which warble as one group while the others warble as another, so now I wonder if there's something funny about the timing of it, for example maybe those three segments are offset by 1/120 of a second from the other four as they cycle on and off.

Monitors also warble when you bite into crunchy potato chips. It's highly amusing.
