
Certs as an anti-spam system

Let us say we wish to build an anti-spam system based on certifications. To make this work, we will need some method of authentication. (We can punt and make a public key be part of an identity.) We also need some method of distributing all the certs widely, one which resists attacks that selectively remove certs and anti-certs and which can handle the likely humongous size of the resulting database.

The above problems seem like work, but are much less of a concern than the problem of not having a cert graph evaluation algorithm which we find acceptable. As we can see from the Advogato example, even punting on all the engineering problems by centralizing everything still leaves an open problem as to whether a certification system can work well. Advogato is encouraging, but still immature.

I am now convinced that if we can come up with a certification evaluation primitive which meets the right criteria we will have a spam-fighting solution which will work well, even if deployed on a global scale. I am further convinced that such a primitive exists, although I haven't figured out exactly how to do it yet.

Here are the criteria -

  1. There is a permissible spam factor, s. Say that a spammer has compromised k typical people in a well-connected graph. The spammer then proceeds to repeatedly send pieces of spam from either compromised people or fake people certified by compromised people. After each piece of spam is sent out, the recipient anti-certs the sender. The spammer never bothers sending mail which will be rejected because of the existing anti-certs. The anti-spam criterion states that the attacker should only be able to get about k*s pieces of spam accepted this way before almost any mail they try to send out will get rejected by the trust system.

  2. Let us say an attacker has k people compromised and wishes to use this power to censor others. Because the anti-spam criterion forces anti-certs to carry real weight, the attacker must be able to censor at least k/s typical peers. The censorship criterion says they must not be able to censor any more than that.
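
Here's a minimal sketch of the two bounds in Python, assuming a hypothetical tally of what the attacker has achieved; the names and the slack factor are illustrative, not part of any real system:

    def meets_anti_spam_criterion(spam_accepted, k, s, slack=2.0):
        # The attacker should get only about k*s spams through before
        # nearly everything they send is rejected; slack covers "about".
        return spam_accepted <= slack * k * s

    def meets_censorship_criterion(peers_censored, k, s):
        # The same k compromised nodes should be able to censor at most
        # about k/s typical peers.
        return peers_censored <= k / s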

Once we find a primitive which meets the above criteria, then we will be able to claim with a straight face to have plans for a global certification-based anti-spam system which works on paper. Then we can start worrying about deployment problems.

Partial busy-work functions

I had an interesting thought while pondering circuit complexity today.

We have busy-work functions, which do little more than require that some precisely tunable amount of time be spent computing them. For example, computing the secure hash of a sequence of x null characters. I wish to have a busy-work function, b, which meets some additional criteria.
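
For concreteness, here's a minimal sketch of the null-character example in Python; the choice of SHA-1 and the chunking are my own, not anything specified above:

    import hashlib

    def busy_work(x, chunk=1 << 20):
        # SHA-1 over x null bytes, fed in chunks so memory stays bounded.
        # Runtime scales roughly linearly in x, which is what makes the
        # amount of busy work precisely tunable.
        h = hashlib.sha1()
        zeros = b"\x00" * chunk
        while x > 0:
            n = min(x, chunk)
            h.update(zeros if n == chunk else zeros[:n])
            x -= n
        return h.digest()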

There are to be a fixed number of partial results we can compute, each of which takes almost exactly k/2 time, where k is the total amount of busy work for the complete function. Each pair of partial results may be synergistic, meaning that given both of them you can compute b very quickly. Pairs of partial results which are not synergistic should have the property that even if you know both of them, it still takes an additional k/2 time to compute b.

We wish to have a specific set of synergy relationships between partial functions. For example, we may want four partial functions P, Q, R, and S such that (P, Q), (Q, R), (P, R) and (R, S) are synergistic but all other pairs are not. My question is, for any given set of synergy relationships, is there always a b and related partial functions which have that property? Can you think of a way of putting together standard (or even not so standard) cryptographic primitives to make an implementation?
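
To pin down what's being asked for, here's that example restated as data plus the property it should induce; this is a specification in Python, not a construction (the construction is exactly the open part):

    # The desired synergy relation over partial functions P, Q, R, S.
    SYNERGY = {frozenset("PQ"), frozenset("QR"),
               frozenset("PR"), frozenset("RS")}

    def finishes_quickly(known_partials):
        # b should be quickly computable from a set of partial results
        # if and only if that set contains a synergistic pair.
        held = set(known_partials)
        return any(pair <= held for pair in SYNERGY)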

My intuition very strongly says that this is possible, but I don't immediately see a way of doing it.

We had a p2p-hackers meeting today, spent a whole bunch of it talking about Codeville. It was fun.

On the way home I realized that the way Codeville resolutions were being determined could be vastly simplified. After two hours of coding I now have the newly rewritten and retested version up on the Codeville page. Ninety thorny lines which it took me a week of being half-dead to write have now been replaced with twenty lines I hacked out in two hours. Boy do I feel dumb.

InstallShield

InstallShield is still a 16-bit application. This means that on long-running 32-bit Windows machines (like mine), by the time you try to install anything which uses InstallShield, the 16-bit VM will inevitably have long since crashed and the installer will simply not work.

Those of you using InstallShield should be aware that a very large fraction of all your end user installs are failing due to a problem which can't be fixed without InstallShield completely rewriting their software or you switching to a different installer. I recommend Nullsoft Installer.

Negative Certs

In any trust metric in which you have negative certs there's an attack in which an attacker creates many fake identities and certifies them. The fake identities may get negative certs attached, but it will still be possible to make more. Therefore, any trust metric which includes negative certs must back-propagate negativity in its evaluation function to be effective. I just realized this yesterday and haven't put any real thought into how back-propagation should be done. Clearly more thought is necessary.

Simple Trust Metric Evaluation

Here's a trust metric evaluation algorithm which fixes the problem of very low confidence levels that Advogato has right now.

First we must select a paranoia level. This is the assumed probability that any given peer is compromised. A paranoia level of zero is extremely trusting, while a paranoia level of one never believes anything.

To determine Alice's rating of someone's diary, do multiple runs of the following:

  1. Decide whether to remove each node from the graph with probability equal to the paranoia level.

  2. Trim all dead ends (that is, nodes which have no path to a node which gives a rating) from the graph.

  3. Starting at Alice, do a random walk along trusted edges through non-removed nodes until you hit a node which gives a rating, then stop. It's possible that there is no such path, in which case this run is counted as having failed.

The confidence value is now the fraction of all runs which succeeded. The diary rating can be computed from the distribution of run results, either by taking the median or the mean.
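
Here's a minimal Monte Carlo sketch of those three steps in Python. The graph representation, the ratings dict, and the choice to never remove Alice herself are all my assumptions; this is illustrative, not Advogato's actual code:

    import random

    def evaluate(graph, ratings, alice, paranoia, runs=1000):
        # graph maps each node to the list of nodes it certifies;
        # ratings maps rater -> diary rating.
        results = []
        for _ in range(runs):
            # 1. Remove each node with probability = paranoia level
            #    (keeping Alice, who presumably trusts herself).
            alive = {n for n in graph
                     if n == alice or random.random() >= paranoia}
            # 2. Trim dead ends: keep only nodes with a live path
            #    to some node which gives a rating.
            useful = {n for n in alive if n in ratings}
            changed = True
            while changed:
                changed = False
                for n in alive - useful:
                    if any(m in useful for m in graph[n]):
                        useful.add(n)
                        changed = True
            if alice not in useful:
                continue  # this run failed: no path to any rating
            # 3. Random walk from Alice along trusted edges until
            #    we hit a node which gives a rating.
            node = alice
            while node not in ratings:
                node = random.choice([m for m in graph[node] if m in useful])
            results.append(ratings[node])
        confidence = len(results) / runs
        rating = sorted(results)[len(results) // 2] if results else None
        return confidence, rating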

Computation can be greatly sped up by using linear equations to figure out exact probabilities for each run's set of removed nodes. Disturbingly, it's still randomized, but it approximates the exact value reasonably quickly except in the case where confidence is near zero, in which case the rating doesn't really matter.
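
A sketch of that speedup, under the same assumptions as the code above: for a fixed set of removed nodes, each non-rater's chance of hitting each rater is the average of its trusted neighbors' chances, a linear system we can approximate by fixed-point iteration instead of walking randomly:

    def exact_hit_probabilities(graph, ratings, useful, iterations=200):
        # prob[n] maps each rater to the chance a walk from n ends there.
        prob = {n: ({n: 1.0} if n in ratings else {}) for n in useful}
        for _ in range(iterations):
            for n in useful:
                if n in ratings:
                    continue
                nbrs = [m for m in graph[n] if m in useful]
                mixed = {}
                for m in nbrs:
                    for rater, p in prob[m].items():
                        mixed[rater] = mixed.get(rater, 0.0) + p / len(nbrs)
                prob[n] = mixed
        return prob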

Raph pointed out that this approach involves combining three already known techniques -

  • random node removal

  • trimming dead ends

  • random walk

The above can be used in just about any combination, but using all three has the best properties.

Looking back on Raph's diary I see that I meant to post about this on the 19th of December, after discussing it with him after dinner. I'm a bit behind on my blogging.

Slashdot Trust Metric

Slashdot's comment rating system appears reasonable at first blush, but is deeply flawed in implementation. Happily, there are straightforward ways it could be vastly improved.

First the good part. Moderator points are a good approach, and the strength of them as a technique is largely responsible for Slashdot's comment system achieving its current level of usability, which to its credit is considerably better than Usenet. (I do not mean that as an insult.) In particular, the policy of never giving a particular person more than a few moderator points at a time strongly limits the amount of damage an attacker could do and also allows the size of the community to be very precisely tweaked.

The problems with slashdot aren't so much with what should be there as what shouldn't. The current meta-moderation interface is completely unnecessary. Whenever anyone rates a post up or down, other people have the opportunity to use their moderation points to change that rating, which implicitly meta-moderates the first change. Analysis of that data could be used as the exclusive source of meta-moderation with good results.

Categories for reason of moderation are also pointless. They complicate the UI without adding any new functionality.

Without any data on how it's currently done, I can't comment on specific formulas for deciding when to give people moderation points. Clearly making posts which get modded up should increase the likelihood/frequency of getting moderation points, and the same goes for positive meta-moderation. Coming up with specific formulas seems to be an interesting and soluble problem, but real-world data is necessary.
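
To make the shape of such a formula concrete, here's a toy sketch; every weight in it is invented for illustration, since no real data is available:

    def mod_point_chance(modded_up, modded_down, meta_agree, meta_reverse):
        # Hypothetical chance per day of receiving moderator points.
        # Posting well and moderating in ways later upheld both help;
        # getting modded down or reversed hurts more than upmods help.
        posting = modded_up - 2 * modded_down
        moderating = meta_agree - 3 * meta_reverse
        score = 0.05 + 0.01 * posting + 0.02 * moderating
        return max(0.0, min(1.0, score))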

Perfect Graphs

There's now an algorithm for testing whether a graph is Berge in polynomial time. (Read the links for an explanation of the Berge property; it's straightforward but a bit long to repeat here.) Although I haven't read through the papers yet (which I plan to do in my copious free time), I'm under the impression that the algorithm, while distinctly non-trivial, only employs elementary techniques. It's always nice when new results can be understood without extensive background reading.

Mitch Kapor at CodeCon

We've got Mitch Kapor as our special guest speaker for the benefit dinner at CodeCon!

CodeCon pre-registration closes in just a few hours, and it will cost more at the door, so if you haven't yet, go sign up now.

13 Feb 2003 (updated 14 Feb 2003 at 06:59 UTC)
CodeCon

Pre-registration for CodeCon is closing on the 15th. After that you'll have to register at the door, which costs more.

We've got a really great program this year, including a presentation on Advogato. Also notable is a panel on version control, including representatives from BitKeeper, Subversion, and OpenCM.

CodeCon is also one of the best places to meet all the local Bay Area open source hackers. I hope to see everyone there.

I've created a Codeville tutorial which demonstrates its cool features quite succinctly.

Secret Project Unveiled

My secret project is now ready to be seen. It's Codeville, a version control system.

One thing no longer nagging at my brain, a million left to go.

A public service announcement

Space travel is dangerous. Please take careful consideration before boarding any space vehicles.

Abstract Games Magazine is having a game design competition with the theme of simultaneous play. I came up with the following a little while ago, but it turns out I missed the submission deadline, so I'm posting it here.

The game is called Straights and Queers (this potentially uncomfortable metaphor is by far the easiest way of understanding the rules, so please bear with me). It's played on the following board -

       __
    __/  \__
 __/  \__/  \__
/  \__/  \__/  \
\__/  \__/  \__/
/  \__/  \__/  \
\__/  \__/  \__/
/  \__/  \__/  \
\__/  \__/  \__/
   \__/  \__/
      \__/

There are two players, the straights and the queers. Each player has two pieces, one male and one female.

On each turn, both players move any pieces they have on the board and place any pieces which aren't on the board, which happens at the beginning of the game or when a piece has been captured. Both players write down their moves without seeing the opponent's moves, then reveal them simultaneously.

Pieces move to any adjacent hexagon. They cannot move to a hexagon which another piece is currently on, even if that piece belongs to the same side. Pieces can be placed on any empty hexagon, but may not be placed onto a hexagon currently containing a piece, even one belonging to the same side. A player may not move both of their pieces onto the same hexagon. When pieces move, they must move to a different spot; they cannot remain in the same place.

If two pieces wind up on the same spot, then a capture happens. If the two pieces are of the same gender, then the queer captures; otherwise the straight captures (this rule is sketched as code below). Additionally, in the rare case where a piece winds up in a corner with all three adjacent spots occupied, and hence has no legal move on the next turn, it is captured. Captured pieces are returned to the side they belong to, to be placed on the next move.

The first player to perform ten captures wins.
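
Here's the capture rule as a small Python sketch, with a hypothetical piece encoding of (side, gender) tuples; since a player may never stack their own pieces, collisions are always straight versus queer:

    def resolve_capture(piece_a, piece_b):
        # Two pieces on one hexagon: with the same gender the queer
        # captures; with different genders the straight captures.
        side_a, gender_a = piece_a
        side_b, gender_b = piece_b
        winner = "queer" if gender_a == gender_b else "straight"
        return piece_a if side_a == winner else piece_b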

That's all the rules. I think this game is made interesting by the simultaneous play despite the extraordinarily small board. Perhaps I went too far on board smallness, rendering the game brute forceable. However, there are classic games involving some chance and secret information which have extremely simple positions, such as Yahtzee and Texas hold'em poker. Texas hold'em turns out to be completely out of brute force range, but Yahtzee is eminently brute forceable in the solo case of optimizing average score, and on paper looks just barely solvable for the two-player case of trying to optimize the chance of winning.

I'm curious to see if the entrants into the simultaneous play competition generally have very limited board positions. I'd also like to know if anyone has actually set about brute forcing Yahtzee. If anyone knows of such efforts please tell me.

The CodeCon 2003 program is now announced, and registration is open.

CodeCon 2003 will be February 22-24, noon-6pm at Club NV in San Francisco, California.

All presentations will be given by one of the active developers, and accompanied by a functional demo.

CodeCon presentations will include:

  • Advogato - Good metadata, even when under attack, based on a trust metric

  • Alluvium - p2p media streaming for low-bandwidth broadcasters

  • Bayonne - Telephony application services for freely licensed operating systems

  • Cryptopy - pure Python crypto

  • DeepGreen - Agent Oriented investment analysis designed to be self-funding

  • GNU radio - Hacking the RF Spectrum with Free Software and Hardware

  • HOTorNOT - People submit their picture for others to rate from 1 to 10

  • Hydan - Steganographically conceal a message into an executable application

  • Khashmir - A distributed hash table library upon which applications can be built

  • Mixminion - A next-generation anonymous remailer

  • Neurogrid - Decentralized Fuzzy Meta-Data Search

  • OpenRatings - An open source professor ratings engine

  • Paketto Keiretsu - Interesting and Useful Techniques for TCP/IP Networking

  • YouServ - A communal web-hosting system for the masses

  • A panel on version control
