17 Jul 2002 Bram

Distributed Proofs

As Raph mentioned, we've discussed how to build a distributed proof system. I believe it has the potential to be not just an interesting experiment, but a tool that changes how working mathematicians go about publishing their work.

A friend of mine described the way you write a formal math paper: you take a proof, remove all the insight, and write that up. Mathematicians are rightfully paranoid about human error even with extensive peer review, and everything is designed around avoiding it. A practical formal proof checker would completely automate that onerous task, and let math papers be insightful, perhaps even poetic, commentary on the concepts involved in a proof, with a link to the completely formal, machine-checkable version.
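To make "completely formal, machine-checkable version" concrete, here is the flavor of thing a checker would consume; the syntax is that of the Lean proof assistant, chosen purely for illustration, and the statement is deliberately trivial:

    -- A trivial machine-checkable statement and its proof.
    -- The checker verifies this mechanically; no human has to trust the steps.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

A real paper's formal appendix would be thousands of lines of this, which is exactly why it should be checked by machines rather than proofread by referees.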

Which is all a great vision, but to implement it I first have to understand how the universal algorithm works.

TCP

Jeff Darcy has another entry about TCP. I feel obligated to defend TCP, since it's what BitTorrent uses, to great effect. Granted, BitTorrent does high-throughput, high-latency bulk data transfers, which is exactly what TCP was designed for.

Jeff points out a very interesting article, and uses it as an example of problems in TCP. My reading of the article is very different. It's actually talking about the difficulties in congestion control and how TCP is handling them.

In a nutshell, the current implementation of TCP (Reno) is a bit brutish when it comes to backoff, so multiple transfers sharing the same net link have to leave a fair amount of bandwidth unused as a safety margin so they don't step on each other's toes. A newer implementation (Vegas) is much less brutish, so it wastes much less capacity as a safety margin. The problem is that when Reno and Vegas are mixed, Vegas gets a much smaller share of the transfer rate, because it's more polite. How to phase out Reno and phase in Vegas is a difficult problem, possibly an impossible one.
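To make the contrast concrete, here is a toy sketch of the two window-update rules (Python, simplified down to the congestion-avoidance phase; alpha and beta are the usual Vegas thresholds in segments, and none of this is real kernel code):

    def reno_update(cwnd, loss_detected):
        # Reno: keep probing upward until packets get dropped, then halve.
        if loss_detected:
            return max(cwnd / 2.0, 1.0)
        return cwnd + 1.0 / cwnd  # roughly one extra segment per round trip

    def vegas_update(cwnd, base_rtt, current_rtt, alpha=1, beta=3):
        # Vegas: estimate how many segments are sitting in queues and
        # back off as soon as queuing delay starts to appear.
        expected = cwnd / base_rtt              # rate if queues were empty
        actual = cwnd / current_rtt             # rate actually measured
        queued = (expected - actual) * base_rtt
        if queued < alpha:
            return cwnd + 1                     # link looks idle, speed up
        if queued > beta:
            return cwnd - 1                     # queues building, ease off
        return cwnd

The asymmetry is visible right in the code: Reno only slows down after it has filled the queues and forced a drop, while Vegas yields as soon as the round-trip time starts to climb, which is exactly why Vegas loses out when the two share a link.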

Writing your own congestion control won't solve these issues. If you even manage to write something less brutish than Reno (doubtful; a lot of work has gone into it), it will have the same meekness problem as Vegas. Research into making congestion control gentle but firm is cutting-edge stuff, and if you're really interested in it you should be working on TCP itself, not wasting time on some even more experimental new protocol.

So if you want congestion control, TCP is the only way to go. If you want both congestion control and low latency, I recommend using more than one connection, with the second one probably based on UDP.
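As a rough sketch of that split (Python, with a made-up peer address and message; this is not what BitTorrent does on the wire):

    import socket

    PEER = ('peer.example.org', 6881)  # hypothetical peer

    # Bulk data goes over TCP, which supplies the congestion control.
    bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    bulk.connect(PEER)
    bulk.sendall(b'X' * 16384)  # a big block of payload

    # Small latency-sensitive messages go over a separate UDP socket,
    # so they never queue up behind the bulk stream.
    control = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    control.sendto(b'urgent: have piece 42', PEER)

The UDP side gets no congestion control of its own, which is fine as long as it carries only a trickle of traffic next to the TCP stream.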

I have to tease Jeff a bit for having suggested that I fix a transfer rate problem I had by turning off Nagle. I instead implemented pipelining, and the problem immediately disappeared. If a monkey is having trouble learning a trick, brain surgery is usually the last resort.
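For the curious, the pipelining fix amounts to keeping several requests outstanding at once instead of strictly alternating request and response, so the pipe never drains while waiting on a reply. A minimal sketch, with a hypothetical read_reply helper and no resemblance to BitTorrent's actual wire format:

    PIPELINE_DEPTH = 5  # how many requests to keep in flight

    def pipelined_fetch(sock, requests, read_reply):
        # Strict alternation would be: send one, wait, send the next.
        # Instead, stay PIPELINE_DEPTH requests ahead of the replies.
        replies = []
        in_flight = 0
        for req in requests:
            sock.sendall(req)
            in_flight += 1
            if in_flight >= PIPELINE_DEPTH:
                replies.append(read_reply(sock))
                in_flight -= 1
        while in_flight:
            replies.append(read_reply(sock))
            in_flight -= 1
        return replies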

Cellular Automata

I initially thought that any linear cellular automaton whose output is 'messy' must be universal, but given that Presburger arithmetic is decidable despite looking plenty complicated, I'm not so sure; messy-looking behavior doesn't guarantee undecidability, let alone universality.
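For anyone who wants to see what 'messy' means here, a throwaway sketch of elementary rule 30 (reading 'linear' as cells arranged on a line) does the job; its output looks random, yet that appearance alone proves nothing about universality:

    RULE = 30    # Wolfram's rule number
    WIDTH = 63   # cells on the line, wrapped into a ring

    def step(cells):
        # Each new cell is looked up from the rule table using the
        # 3-bit neighborhood (left, center, right) as an index.
        out = []
        for i in range(len(cells)):
            idx = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % len(cells)]
            out.append((RULE >> idx) & 1)
        return out

    row = [0] * WIDTH
    row[WIDTH // 2] = 1          # single live cell in the middle
    for _ in range(20):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)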
