There is a single statement which many people want to evaluate the truth of, but there is no single authority on.
Everyone has a belief level, in the range 0 to 1. They believe the statement if their belief level is greater than 0. People manually configure friends, giving them each a distrust level, ranging from 0 (believe everything they say) to 1 (don't trust at all), and also have a personal belief level, which is based on evidence they have gathered directly.
Each person then takes their own belief level, and each friend's confidence level minus their distrust of that friend, and takes the greatest of these as their own level of confidence. This is repeated until levels of confidence stabilize.
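The update rule above is easy to sketch in code. Here is a minimal Python version under my own reading of the description (the names `propagate_confidence`, `personal_belief`, and `friends` are mine, not from Bram's code):

```python
def propagate_confidence(personal_belief, friends, max_rounds=100):
    """personal_belief: {person: belief level in [0, 1]}
    friends: {person: {friend: distrust level in [0, 1]}}
    Every friend must also appear as a key in personal_belief.
    Returns each person's stabilized confidence level."""
    confidence = dict(personal_belief)
    for _ in range(max_rounds):
        new = {}
        for person, belief in personal_belief.items():
            # Own evidence, plus each friend's confidence discounted by distrust.
            candidates = [belief]
            for friend, distrust in friends.get(person, {}).items():
                candidates.append(confidence[friend] - distrust)
            new[person] = max(candidates)
        if new == confidence:  # levels of confidence have stabilized
            break
        confidence = new
    return confidence
```

For example, if alice has direct evidence giving her belief 0.9, and bob has none but distrusts alice at level 0.2, bob ends up with confidence 0.7; adding more friends can never push a level above the best discounted source, which is the "no self-sustaining rumor" property.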
While extremely simple, this technique has a number of nice features -
It scales to a huge degree
There is no single point of failure - fooling a single person might fool some of their friends as well, but won't fool everyone.
There is no self-sustaining rumor problem - levels of belief don't arbitrarily rise just because people add more friends.
I have created some code which demonstrates how this algorithm works.
The one that Bram has presented here is his own variant. You can see some notes from a presentation that Raph and I gave at the first O'Reilly p2p conference on my web page. These slides may not make any sense without the accompanying patter -- sorry.
Amber Wilcox-O'Hearn also deserves credit for helping me flesh out the idea.
See the story where I have offered a reward for this.
I think this is something that is screaming out to be built, but ease of use and decentralisation seem hard to fit together. I would love to see an example in use.
I've spent time thinking about this too; I wanted it for irc++, an attempt at a massively scalable, robust IRC/chat system.
Some challenges include:
I don't think anyone has solved all of these. I have a few links on the irc++ page mentioned above, but there's no code yet.
eikeon and I have been planning on doing this sort of thing in Redfoot. With Redfoot, you run an RDF statement server which, as well as a customizable GUI, has a query interface so that the server can be queried remotely by other servers.
RDF, at its core, allows statements to be made about statements and so it is very easy to associate a level of trust about any statement. Furthermore, it would be easy in RDF to make a statement about how much you distrust another Redfoot. Redfoot servers are uniquely identified by a URI.
We might try implementing something like the system outlined above soon.
Ultimately, we might want to move to a system where you can state trust of a particular Redfoot server for statements made about a particular thing. For example, I trust statements that aceponkus makes about eikeon but not statements aceponkus makes about anything else.
The idea would be that particular nodes would become known for the trustworthiness of their statements on a particular topic. For example, my mum might gain trust with regard to statements about me.
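A topic-scoped trust table like that could start out as nothing more than a mapping keyed by (server, topic). A small Python sketch of the lookup, where all of the URIs and names are illustrative and not part of Redfoot:

```python
# Topic-scoped trust: trust a server's statements only about particular
# subjects. Keyed by (server URI, topic); None as the topic means
# "any subject at all".
topic_trust = {
    ("http://example.org/aceponkus", "eikeon"): 0.8,  # trusted about eikeon
    ("http://example.org/mum", "me"): 0.9,            # mum, trusted about me
}

def trust_for(server, topic):
    # Fall back from a topic-specific entry to a topic-independent one,
    # and finally to "no trust at all".
    return topic_trust.get((server, topic),
                           topic_trust.get((server, None), 0.0))
```

So statements aceponkus makes about eikeon are weighted at 0.8, while statements aceponkus makes about anything else get the default of zero.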
I've been contemplating using an LDAP server as a central repository of account information and trustmetrics. I'm running a site using mod_virgule and it seems like it would not be too hard to have mod_virgule authenticate against an LDAP service instead of the local XML database. With a central repository, you'd only have to set up your account info once and all mod_virgule sites could authenticate against the same info. There would have to be a mechanism for each site to add different varieties of trust metrics (I believe lkcl did some work on this in his version of mod_virgule).
Once you got things going, you could even patch slash, scoop, or other similar packages to authenticate against the same LDAP server(s). It would certainly save a lot of time wasted setting up duplicate accounts on lots of different sites.
The idea of LDAP resonates with me because I've been thinking for a while about the relationship between LDAP and RDF. DSML (I edited version 1 and chair the version 2 committee) is about representing LDAP queries and data in XML, so it possibly has a role here too.
long time no speak, neh?
yes, you're right. i split out the authorisation from the authentication credentials. to the extent where:
i get a little confused about the difference between authorisation and authentication: people familiar with Kerberos will know what i am referring to, however. type in username firstname.lastname@example.org, password bloggs, and you actually get logged in as user fred on server.subdomain.myfavouritedomain.com
"email@example.com, bloggs" is authorisation credentials; fred on server.subdomain.myfavouritedomain.com is authentication information.
something like that :)
yes, i worked on this eight months ago. i added HTTP-based client code to mod_virgule, which became xvl, at http://virgule.sourceforge.net.
This client-side HTTP call was used to obtain any remote user-profiles from another site, where the name needed to be of the form firstname.lastname@example.org or email@example.com etc.
It's a little clunky, and i soon disabled it, having only one site to test it on (duur :) and got on with other things. I investigated SOAP etc and keynote because of it, because i wanted to do secure, well-defined distributed trust metric systems, not just local ones.
in particular, i investigated keynote as a means to digitally sign user's profiles and certifications. the issue of validating a user profile, with its certifications, becomes very important once you start distributing the profiles.
creating a node graph from a distributed database is also made tricky by the fact that certifications may be revoked and/or created during the process of obtaining all the nodes! additionally, some sites - imagine that you are scanning 100 sites, obtaining 10,000 users or maybe 100 times that number - may temporarily be unavailable.
what do you do? wait for them to come back up? use the last cached entry? none of these is entirely satisfactory, so to "cap" this, when a calculation is performed, it must be digitally signed and made publicly available! otherwise, no-one is going to be able to a) reproduce the results or b) trust the results!
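The "sign the calculation and publish it" step can be sketched with just the standard library. HMAC here is only a stand-in for a real public-key signature scheme (keynote or similar would be used in practice), and the key and function names are all illustrative:

```python
import hashlib
import hmac
import json

# Hypothetical per-site signing key; a real system would use an
# asymmetric keypair so others can verify without the secret.
SECRET = b"site-signing-key"

def publish_result(cert_snapshot, result):
    # Canonicalize the exact certification snapshot used, so anyone
    # holding the same snapshot can reproduce the calculation.
    payload = json.dumps({"certs": cert_snapshot, "result": result},
                         sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": tag}

def verify_result(published):
    expected = hmac.new(SECRET, published["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, published["signature"])
```

The point is that the published blob pins down both the inputs (the certification snapshot, cached entries and all) and the output, so a disputed result can be checked against exactly the data it was computed from.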
also, one other important thing i added to advanced-mod_virgule: the means to select your own seeds. why is this important? well, you may not wish to trust the top-level seeds. your friends may decide that your favourite seeds are better for their purposes than anyone else's top-level seeds.
BUT! that doesn't mean that the entire SITE uses your favourite trusted seeds, and specifically the administrator of the site uses THEIR trusted seeds to make all the important decisions for their site (such as who gets to post what on the front page).
so, a little abstraction, there, allowing some quite empowering concepts, such as filtering of viewing Articles and News entries by your favourite trusted seeds, not the site's default ones.
you get the idea :)
btw, for readers of this reply, i'm assuming that you've read raph's paper on trust metrics that describes the concept of top-level trusted seeds!
----- Luke Kenneth Casson Leighton <firstname.lastname@example.org> -----
"i want a world of dreams, run by near-sighted visionaries"
"good. that's them sorted out. now, on _this_ world..."
Hi lkcl :)
You have not mentioned another thing about advanced advogato - the introduction of interest certificates.
With them you could state how interested you are in something or someone - a project or an article.
It could help you keep track of articles that interest you (I have tried to do this in my diary, but lost interest :( ), projects, and people (without any need to trust them).
Another thing about OWN SEEDs - you could see any list of articles on the first page (it would be your own personal page). Also there would be no need for diaries - they would be the same articles, but seen only by the ones who want to see them.
Why am I mentioning all this? Just to ask: Why Do You Need Trust Metrics At All?!!
What do they actually get you?!!
All Advogato has got from this is the filtering out of one-day-talkers.
Still, it will not save you from a Say-Good-Bye-SPAM attack, or just a DoS of filling up space with diaries.
It will not save you from boring postings (like this one ;-)). It all rests on the notion of some unspoken rules, but any person could spoil the whole picture by overriding them (that was the death of USENET). It will happen once the site becomes popular enough.
Also, a person could change over time... you just couldn't re-check all of the certificates you have given away.
As I tried to say in the Dynamic/Static article - you need some other types of rules:
Another thing - your trust model is like the money model of the real world. If a person has $$$, he is trusted by business. But money is nothing until you can spend or earn it. You could make something, or, better, exchange something to get it.
And one last thing - you need government, law and an army to make money work - which is impossible in the Inet world :)
Absolute trust is useless. You only need trust in order to do something.
I'll think about implementing this in future-IRC :)
Could something be done by storing simple ASCII tables with each user, depicting their opinion of themselves and others? If a good structure could be thought of, this could be stored for each user on each system (an easy-to-read GeekCode: headings and values).
If each participating site did this and everyone used the same format and the data was all public, then a simple script could be run through to determine any sort of trust metric required.
USERID andrewmuck
INTERESTS topic, topic, topic
TRUSTSCOMMENTS site:user <0-9>, site:user <0-9>, site:user <0-9>
TRUSTSMONEY site:user <$nnnn>, site:user <$nnnn>
TRUSTSIDENTITY site:user, site:user
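A format like that really is simple-script territory. A Python sketch of a parser, where the one-heading-per-line layout and the sample values are my guess at what andrewmuck intends:

```python
def parse_trust_record(text):
    """Parse one user's ASCII trust table.

    Assumed layout: one 'HEADING value, value, ...' per line,
    e.g. 'TRUSTSCOMMENTS site:user <0-9>, site:user <0-9>'."""
    record = {}
    for line in text.strip().splitlines():
        heading, _, rest = line.strip().partition(" ")
        record[heading] = [v.strip() for v in rest.split(",")] if rest else []
    return record

# Illustrative record; site/user names and scores are made up.
sample = """\
USERID andrewmuck
INTERESTS irc, trust, ldap
TRUSTSCOMMENTS advogato:raph 9, advogato:lkcl 7
"""
```

With all records public and in one agreed format, a metric script just fetches each site's files, runs `parse_trust_record` over them, and feeds the resulting graph into whatever trust calculation is wanted.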
ironic! i decided to port raph's trust metric code to python over the weekend.
the example code has a simple file format for certifications... :)