I hate being sick. Happily, I'm mostly better now.
Despite ignoring my email to him a few months back about making a math encyclopedia, Raph seems to be more interested in one now. Raph, for the ultimate reference to Categories try Categories for the Working Mathematician by Saunders Mac Lane. This book tells you the really important stuff and uses lots of examples from all over mathematics. You might also be interested in Toposes, which are basically a category-theoretic response to foundationalism in mathematics, trying to express Set Theory categorically. Here's a five minute introduction: Topos Theory in a Nutshell.
Went to the BADMath day yesterday. Lots of fun with really good talks. There was one particular talk by Herbert Wilf, about finding closed-form solutions to summations, which is worth mentioning here. The system is nice, fast, and works well. Positive results have one-line proofs (checkable by eye, or by your favourite symbolic algebra package), but for negative answers you have to trust the programme. He mentioned the Encyclopedia of Integer Sequences. Even though many people use it, they don't seem to realise that it is a conjecture generator: you give it a sequence and it conjectures that it is a particular known sequence (actually it usually gives a number of conjectures).
The Math Encyclopedia
Which takes us back to the Dictionary/Encyclopedia of Math. QED is probably the pin-up for this kind of project. Everyone interested in such a system should visit this site. It seems to have been abandoned a number of years ago, but obviously some people put some serious thought into what such a system would entail. As far as I know it was still in the planning stages when it died/went into hibernation.
Paul pointed out to me that the system is so big, and will require so much computation, that it will be difficult or impossible (or at least not optimal) to host it at a single location (or set of mirrors). Robustness is also an issue, so having the data and computation spread out seems like an excellent idea.
Back to this again soon.
Which logic to choose?
What about the basic logic? There is some discussion on the QED site about which logic should be chosen as the basis on which to build the proofs. Which logic and axioms should we choose?
Do you like to use the 'Axiom of Choice' or its negation? Do you like the 'law of the excluded middle'? What do you like to base your proofs on anyway - ZFC, Peano arithmetic, Toposes? Rather than fight about which is best, it seems much easier to simply allow people to choose whichever one they like best at the time.
This immediately raises the question of what we then mean by a "True Statement" and a "Correct Proof". Some (very easy to understand) statements which are true in ZFC (the Zermelo-Fraenkel axioms plus the Axiom of Choice) are not true in 'ZFnoC' (ZF with the negation of Choice).
Many people here seem to like thinking about trust metrics, so to put it in those terms, what we want is multiple (and customisable) metrics which can be selected by the user. For example, if I want to know if a particular theorem is true under the assumptions of ZFC, I'm simply searching for a proof of this theorem which is trusted by the axiom-system object 'ZFC'. You might also want to search for all theorems which use the following ideas/definitions (either directly or indirectly) and are True in both ZFC and ZFnoC.
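To make the idea concrete, here is a minimal sketch in Python of what such a customisable metric query might look like. Everything in it (the theorem names, the tagging scheme, the `trusted_under` function) is invented for illustration; no such encyclopedia API exists.

```python
# Hypothetical sketch: each theorem is tagged with the axiom systems
# whose checkers have certified a proof of it. All names below are
# invented for illustration.

THEOREMS = {
    "every vector space has a basis": {"ZFC"},
    "Banach-Tarski paradox": {"ZFC"},
    "2 + 2 = 4": {"ZFC", "ZFnoC", "Peano"},
}

def trusted_under(required_systems):
    """Return the theorems certified under every listed axiom system."""
    required = set(required_systems)
    return [name for name, systems in THEOREMS.items()
            if required <= systems]  # <= is the subset test for sets

# Theorems trusted by the 'ZFC' axiom-system object alone:
print(trusted_under(["ZFC"]))
# Theorems true in both ZFC and ZFnoC:
print(trusted_under(["ZFC", "ZFnoC"]))
```

The point of the sketch is only that "True" becomes a parameterised query rather than an absolute label: the user picks the axiom systems, and the metric filters accordingly.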
Blind and not-so-blind faith
Now to the distributed nature of an encyclopedia. We're interested in verifying proofs of statements which we have found. How can we do this? For a fully machine-checked proof, you need to trust the machine and programme which are telling you that a statement is true.
However much dancing you do up high, at some point down at the bottom you have to place some blind faith in the person who wrote and checked the original algorithm, in the fact that the compiler was correct, that it wasn't corrupted en route to the executing machine, that the machine executed it correctly, and that the answer was communicated to you correctly. Of course, some of these can be trusted more than others.
To minimise these uncertainties, you can encrypt and error-check when transporting information, and you might use a 'proven' virtual machine running on a proven operating system running on a respected piece of hardware. You might also check the programme yourself, compile it yourself (did you check the compiler yourself?), and run it on your own 'trusted' machine.
If you're willing to accept a tiny bit more uncertainty, however, you might as well trust that a crypto-signed message coming from the AMS server, saying that a statement is true or a proof is correct, is sufficiently trustworthy. If you're not quite that easily convinced then you might want to ask several of your most trusted sites to verify the statement, and then verify it yourself with a programme which you trust for some reason; but where did you get the programme from? Who or what checked it, and how much do you trust them?
The point of all this paranoia is not to be paranoid, but to recognise that at some level you will have to have some blind faith. How blind it is is up to you. No matter how strict or lax you want to be at any time, it seems sensible to quantify how much you trust the statement/proof in front of you (or the sources which are certifying it).
So we come again to trust metrics. This time we're interested in how much we trust a proof of a mathematical statement, or a proof/certificate saying that a particular piece of code has been proven safe.
These issues are not new (for paranoid computer users), but we now have the possibility of being much more sure than ever before (and sure of how sure we are).
For example, your reasoning for your trust in a proof might go as follows.
There are ten major steps in this proof. I have personally checked that if each of these is correct then the proof is correct (how tired am I today?). My friend checked them too, as did various other mathematicians. I have a programme X which has been certified correct by source Y. Programme X says that these ten statements will together give me the theorem. Source Y consists of a certificate signed by the AMS server on a particular date, purporting to run a well accepted and hand-checked proof/programme-checker.
For each of the ten major sub-results used in the ten steps, I look them up in the encyclopedia (and download certificates for them). If I can't find a certificate I trust enough, then I run my own proof-checker over the proof; hopefully this gives me a higher degree of trust.
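That lookup-then-fallback step can be sketched as follows. The encyclopedia, the trust threshold, and the local proof-checker are all hypothetical stand-ins, just to pin down the control flow.

```python
def verify_subresult(name, encyclopedia, local_checker, threshold=0.95):
    """Look up a certificate for a sub-result; if none is trusted
    enough, fall back to running my own proof-checker.
    (Toy model: trust levels are numbers in [0, 1].)"""
    cert_trust = encyclopedia.get(name)  # trust level, or None if absent
    if cert_trust is not None and cert_trust >= threshold:
        return cert_trust
    # No sufficiently trusted certificate: check the proof locally.
    return local_checker(name)

# Toy stand-ins for the encyclopedia and my own checker:
encyclopedia = {"lemma A": 0.99, "lemma B": 0.60}
my_checker = lambda name: 0.97  # how much I trust my own checker's verdict

print(verify_subresult("lemma A", encyclopedia, my_checker))  # certificate used
print(verify_subresult("lemma B", encyclopedia, my_checker))  # falls back locally
```

The design choice worth noticing is that the threshold belongs to the user, not the encyclopedia: two readers of the same proof can legitimately end up with different verification paths.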
Combining all these degrees of trust is not as simple as it might seem at first. If you only trust a particular proof-checker W to be 99% reliable, then however many different certificates you collect that all use this programme, you are still only 99% sure. If on the other hand you also trust proof-checker Z to be 99% reliable (and sufficiently independent of W) then you might end up being 100(1-(1-99/100)^2) = 99.99 percent sure that the statement is correct.
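A quick sketch of that calculation, under the (strong) assumption that the checkers' failures really are independent events:

```python
def combined_confidence(checker_confidences):
    """Probability that not every checker is wrong, assuming the
    checkers fail independently of one another."""
    p_all_wrong = 1.0
    for p in checker_confidences:
        p_all_wrong *= (1.0 - p)
    return 1.0 - p_all_wrong

# Any number of certificates from the same 99%-reliable checker W
# still only buys you 99% confidence:
print(combined_confidence([0.99]))
# Two independent 99%-reliable checkers W and Z: 1 - (1 - 0.99)^2,
# i.e. about 99.99% confidence.
print(combined_confidence([0.99, 0.99]))
```

The interesting (and hard) part in practice is the independence assumption: two checkers built on the same kernel, compiler, or hardware share failure modes, so multiplying their error probabilities overstates your confidence.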