Distributed Trust Metrics
Posted 27 Sep 2000 at 03:47 UTC by lkcl
This paper describes a protocol to distribute the
process of using Trust Metrics. It specifically does not
cover the process of distributed trust metric calculations,
to which an entire area of research is devoted. The protocol
must include a means to obtain and update the Certificates,
which may be distributed, that are used in Trust Metric
Calculations, and to do so in a provably secure manner.
Trust Metrics provide a basis upon which responsibility - also
known as Access Control - can be hierarchically allocated.
As an example, Advogato.org is a
self-regulating site in which four top-level users [seeds],
raph, miguel, federico and alan express their opinion of
other people with a level of "Trust". This next level then
express their opinion, etc. The actual amount of
"Trust" that an individual can pass on is automatically
limited by the level of "Trust" that they themselves have received.
Depending on the actual level of "Trust" an individual
receives, they are given capabilities - for example, the
right to post Articles on the front page. Conversely,
if an individual behaves irresponsibly, their peers can
express their disapproval by revoking the Trust Certificate.
If enough people do this, they will lose the right to
abuse the trust their own peers gave them.
This powerful mechanism is crying out for an extension to
other areas and other sites, and a mechanism to allow,
for example, lkcl@samba.org to certify foo@baz.com as
a 100% reliable Pizza Deliverator. This, of course,
assumes that both samba.org and baz.com have a Pizza
Delivery Trust Metric (yes, samba's developers have
received pizza vouchers and even pizzas by airmail in the past,
so this is a relevant example).
This paper, therefore, describes a protocol based on
the experiences of extending mod_virgule,
Advogato.org's engine, with
peer-to-peer communication of the Trust Certificates.
The protocol used is therefore http itself, which has
the added benefit of allowing https and even digital
signing, with very little extra coding overhead. This
also means that implementers wishing to produce alternative
implementations may leverage existing html and xml parsing libraries.
2: Get Remote Profile
The simplest means to obtain the Trust Metrics is to
add an html form method. For example, when the appropriate GET
request is issued for a user's profile,
the following response is received:
<info givenname="Luke" surname="Leighton"/>
<cert subj="test" subj-type="acct" level="Journeyer"
<cert subj="lkcl" subj-type="acct" level="Journeyer"
<cert issuer="lkcl" issuer-type="acct" level="Journeyer"
<cert issuer="test" issuer-type="acct" level="Novice"
Comparing this to the original file, acct/lkcl/profile.xml on
the local disk, note that only the givenname, surname and
certificates are allowed to be published. The original profile.xml
contains the password in cleartext, a cookie, the user's email
address etc. which should not be made public.
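As a sketch of the client side, here is how such a response might be fetched and parsed (Python; the endpoint URL is an assumption, since the article omits the actual request):

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_profile(xml_text):
    """Extract the public fields from a profile response."""
    doc = ET.fromstring(xml_text)
    info = doc.find("info")
    certs = [dict(c.attrib) for c in doc.findall("cert")]
    return (dict(info.attrib) if info is not None else {}), certs

def get_remote_profile(server, user):
    # Hypothetical endpoint; the article does not give the exact URL.
    url = "http://%s/person/%s/remote-profile.xml" % (server, user)
    with urllib.request.urlopen(url) as resp:
        return parse_profile(resp.read())
```

Only the published fields (names and certs) come back; the password, cookie and email address never leave the server.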
3: Set Remote Certificate
This is the tricky one I haven't thought through, yet (help!)
An insecure method is to simply allow registration of any
certificates from any user. However, a user [e.g. lkcl@samba.org]
may not exist on baz.com, and how do you get baz.com to trust such a
user's certificates?
mod_virgule itself only allows logged-in users to register
certificates. Access control to a site running mod_virgule is
done by issuing a cookie that contains the username and an
expiration date. To obtain the cookie, the correct plaintext
password and username must be entered in a dialog box.
How, exactly, can this mechanism be leveraged by a remote site?
CHAP (challenge response)
One method of authentication to use is a challenge-response system.
The client sends the Certificate modification request with an http POST,
along with a client challenge.
The client then uses an http GET to obtain a server challenge in return.
The client then performs some calculation based on the client and
server challenge, and uses an http POST to send the results to the server.
The server performs the same calculation, and the connection is assumed
to be secure if the calculations match.
As additional security, the
response should contain a signature based on the original Certificate
modification request, in order that the server may verify that the
request was in fact sent by the user [i.e. that the communications
system has not been compromised].
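A minimal sketch of this exchange's computation (Python; the exact calculation is unspecified in the article, so an HMAC over the two challenges and the request body is assumed):

```python
import hashlib
import hmac
import os

def make_challenge():
    # Each side contributes a random challenge.
    return os.urandom(16)

def chap_response(secret, client_chal, server_chal, request_body):
    # Both sides compute the same digest from the two challenges and the
    # shared secret; mixing in the request body lets the server also
    # verify the Certificate modification request itself (the
    # "additional security" signature mentioned above).
    mac = hmac.new(secret, client_chal + server_chal + request_body,
                   hashlib.md5)
    return mac.hexdigest()

def verify(secret, client_chal, server_chal, request_body, response):
    expected = chap_response(secret, client_chal, server_chal, request_body)
    return hmac.compare_digest(expected, response)
```

If the two independently computed digests match, the server knows the client holds the shared secret and that the request body was not altered in transit.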
Secure Channel
If one server is compromised, it is possible to
poison all other servers, creating false Certifications
and even deleting some. In a Trust Metric environment,
this is completely unacceptable. The whole basis
of Trust Metrics is that an individual must be the only
person that issues or revokes Certifications.
The number of potential communications channels
is N * (N-1), with each individual server needing to set
up a secure channel with N-1 other servers. This could
quickly become unmanageable.
A provably secure channel is established between servers. It is
assumed that this channel is never compromised. Any Certification
modification requests sent over this channel are assumed to be
trustworthy, i.e. it is assumed that the user will have logged in
as, say, lkcl@samba.org. There are two problems with this, as
described above.
Client-initiated Digital Signatures
The user's clear-text password, cookie or PGP key is used to generate
a signature that is sent along with the Certification modification
request. The server opens a second connection back to the
server [or, it contacts a PGP server and obtains the user's public
key, which could potentially be the client's server]. The
signature on the Certificate modification request is verified, and
on that basis, accepted or rejected.
An advance on this system would be to further digitally sign the
request with the remote server's PGP key, but that is to be somewhat
discouraged on the basis that it requires server administration
and server trust. The whole point of this exercise is that the
security of User's Certifications is inviolate.
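A sketch of the basic scheme (Python; an HMAC with a password-derived secret stands in for the PGP signature described above, and fetch_user_secret is a hypothetical lookup against the user's home server or a keyserver):

```python
import hashlib
import hmac

def sign_request(user_secret, request_body):
    # The client signs the Certification modification request with key
    # material only the user holds (password-derived here; the text also
    # suggests the user's PGP key).
    return hmac.new(user_secret, request_body, hashlib.sha1).hexdigest()

def server_accepts(fetch_user_secret, user, request_body, signature):
    # The receiving server obtains the user's key material [e.g. from
    # the user's home server, or a PGP keyserver] and checks that the
    # signature matches the request, accepting or rejecting on that basis.
    secret = fetch_user_secret(user)
    expected = sign_request(secret, request_body)
    return hmac.compare_digest(expected, signature)
```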
Pre-existing mechanisms
What are they? How do they work? How would they be leveraged,
particularly in an environment where users create their own accounts?
3.1: Preferred Method
A combination of Digital Signatures plus CHAP-based authentication
looks like the best way to go, as it is necessary not only to
ensure that the user's Certifications are inviolate, but also that
the remote server establishes that the user has been locally authenticated.
Ok, i am confused, even over the issues involved. What now? Help!
RFC2617 "Digest authentication" already defines a "CHAP-like" authentication protocol.
Of course it has its major drawback in the fact that no browser to my knowledge has implemented it. I believe wget supports it. Curl
doesn't. There is a mod_digest for Apache, implementing the server side of it.
thank you. taking a look, it's good... it...
could... be... incorporated. reason i say this is that it requires an
httpd.conf parameter, AuthDigestFile, to be specified. and mod_virgule
has its own authentication mechanism that doesn't tie in with this.
*thinks*. what is the purpose of the rfc2617 HTTP chal-resp?
Like Basic Access Authentication, the Digest scheme is based on a
simple challenge-response paradigm. The Digest scheme challenges
using a nonce value. A valid response contains a checksum (by
default, the MD5 checksum) of the username, the password, the given
nonce value, the HTTP method, and the requested URI. In this way, the
password is never sent in the clear. Just as with the Basic scheme,
the username and password must be prearranged in some fashion not
addressed by this document.
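The quoted scheme can be sketched as follows (Python; this is the RFC 2617 calculation without the optional qop extension):

```python
import hashlib

def md5hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    # RFC 2617 (no qop): the password never crosses the wire, only this
    # checksum of username, realm, password, nonce, method and URI.
    ha1 = md5hex("%s:%s:%s" % (username, realm, password))
    ha2 = md5hex("%s:%s" % (method, uri))
    return md5hex("%s:%s:%s" % (ha1, nonce, ha2))
```

The server, holding the same shared secret, recomputes the digest and compares.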
ok, that means that the user, who must have a shared secret [same
password on local and remote server], is guaranteed to be making the
request of the URL, and the method (GET or POST) is also guaranteed.
whereas, the most important thing to actually be verified is the
Certification Update (the data in the POST method). so, whilst the
algorithm is suitable (use of md5), the way in which it is used in
rfc2617 is not.
*thinks*... i think that this basically tells me, regardless of the
method used, that it is going to be necessary to create a "remote
account". _some_ sort of registration with a remote server is going to
It seems to me that from the POV of a server receiving a trust certificate the question is to what extent it trusts the sender. Whether
the sender is a human user or another server is a separate issue (after all, computers are run by humans, and can be compromised
in many ways). This leads to the possibility of a distributed web of trust in which each server must decide how much it trusts each
source of information about the trustworthiness of the entities (computers and people) that it deals with. In a word, meta-trust.
I've been trying to figure out a system of trust which does not depend on either centralised servers or "Sun Gods" who are automatically
trusted (as in the Advogato metric). The big headache is dealing with people who create more than one ID. If we allow many IDs per
person then in theory a single user can create arbitrarily many mutually trusting IDs and thereby outvote the many users with single IDs.
Any attempt to get away from this seems to require Sun Gods. (BTW the term "Sun God" comes from Bruce Sterling's book
"Distraction", in which distributed trust metrics play a significant role. Read it for inspiration, if nothing else).
On the other hand enforcing one ID per user has practical problems (how many email addresses do you have?) and ideological problems
(e.g. someone might want a professional ID which is trusted on professional issues and a private ID which gets them trusted in the local
S&M dungeon, and have no way to connect the two).
If reputation servers and trust metrics are ever going to move beyond the toy stage then this problem will have to be addressed. If we can
crack it then I think that this stuff is going to be important.
There seem to me to be two separate trust issues with a distributed system.
In the first, the issue is that if I get a statement saying "lkcl@samba trusts foo@baz" I need to know that lkcl really authorized such a
statement. The public key cryptosystem is already doing much the same thing using digital signatures.
This way, a signed statement that "lkcl@samba trusts foo@baz" could be circulated even by a compromised server, through USEnet, or
on bathroom walls. :-)
If exchange of certificates is a rare event, crypto verification wouldn't be a horrendous burden.
Of course, this shifts responsibility onto the security and reliability of the public key repositories. At least this is a problem that has been
given a lot of careful thought and analysis.
The second, though, is the redistribution of recalculated trust metrics by a server. Here there may be lots and lots of traffic, and the need
for automation is clear. Here, in my opinion, the question isn't one of verifying a secure server-to-server connection, but in trying to
establish whether or not the remote server is correctly computing and redistributing the metric updates. That is, if I'm at server baz, how
do I know samba hasn't been cracked and is sending me garbage certs through a secure channel? There is no way to do this.
Here are three possibilities I can see:
1. Create a "meta-trust" network between servers which is charged with trying to figure out which ones are compromised. This doesn't
have to be a completely different thing. It can be seen as an insertion in the trust web of two extra steps: the server "pseudo-user" trusts
all its users. It does the calculations, and then tells its neighbors that it trusts foo@baz. On the remote server, the server user trusts (or
doesn't trust) the samba server user, and relays those new certs to its users. This adds a great degree of flexibility: if I read a trustworthy
primary cert on some bathroom wall that my server is cracked, I can personally lower my trust of "certserveruser@myserver" and so limit
exposure. It is at the expense, however, of significantly enlarging the trust metric universe.
2. Redistribution under end-user control. Every so often, the user logs in, finds recalced certs, and decides to sign them all and distribute
them. This basically eliminates all "secondary" cert exchanges.
3. Replicated update algorithms. In order to redistribute recalc'ed certs, a server must ship some code to N other servers, get them to
execute the code and update certs, and then include their digest signatures on the redistribution file. If the choice of the other servers is
made in a distributed, random way, the number of servers a cracker would have to compromise to pollute the network would be immense. Of
course, this would be much more compute-intensive than a more trusting scheme. (This idea in general would take a lot more
development to flesh out in detail.)
here, in my opinion, the question isn't one of verifying a secure
server-to-server connection, but in
trying to establish whether or not the remote server is
correctly computing and redistributing the metric updates. That is, if
server baz, how do I know samba hasn't been cracked and is
sending me garbage certs through a secure channel?
if it is presumed that all certs are digitally signed, by adding a
suitable digital signature to the user's profile, then i believe that
this solves the problem.
the algorithm for calculations of the trust metrics is, "suck over all
digitally signed user profiles from remote servers, creating
links-to-links-to-links etc, and then perform the trust metric
calculation locally" [i published the algorithm in my diary entry, last
this then becomes that if you don't trust your own web server to perform
the calculations, then well at the very least you're not messing things
up for anyone _else_!
the only other issue becomes, then, how do you guarantee that a
digitally signed Certification will _definitely_ be picked up by any
given server? this can,
methinks, be done by not only signing the entire user-profile but also
by signing each individual certification.
------ PGP SIGNATURE THING --------
<info givenname="Luke" surname="Leighton"/>
<cert subj="test" level="Journeyer" sig="abcdef0123456789"/>
<cert subj="lkcl" level="Journeyer" sig="abcdef0123456789"/>
<cert issuer="lkcl" level="Journeyer" sig="abcdef0123456789"/>
<cert issuer="test" level="Novice" sig="abcdef0123456789"/>
------- PGP SIGNATURE --------
blah blah blah
------- END PGP SIGNATURE --------
the way that this works is that:
- each cert has a signature based on the user's password or pgp key,
and the data signed is simply the username, level and possibly the date.
- every "cert in" must be matched up with the "cert out"
that is contained in the user's profile.
- assuming that the digital signature on a given user's profile can
be verified, this means that every cert-out issued by that user must be
valid [assuming that the local server has not been compromised]. this
validity then follows over from the validated cert-outs to the cert-ins,
because the sig="....." field must match.
what does this gain you?
- the profiles of the seed users on your local server can be
validated [against a trusted 3rd party, whatever]. any certifications
that they issue are therefore valid.
- you know that there
is then a chain of signatures from those certifications to the people
they certified.
- the people that they certified, you can then validate
[against a trusted 3rd party].
in other words, you can at the very least exclude people whose
signatures do not match, or those people whose servers cannot be
contacted, and most importantly you can tell people that the
calculation excludes server xyz and person abc.
meta-trust, posted 28 Sep 2000 at 05:29 UTC by lkcl »
meta-trust i believe can be performed by using exactly the same
principles - digitally-signed trust metrics. in this instance, the
server itself gets its own digital key, which can be used to sign
things, and the signature can be independently verified.
however, it is not exactly clear to me that there is a definite benefit
in this. to what purpose could trusting the server be put? i cannot
think of any clear purpose. what would it mean that you trust
the server? to do what? sign user profiles [including
certifications]? it is more important that the user profiles are signed
by a user-confirmable digital signature, such that signing by a server is redundant.
the only thing i can think of that might be relevant is inter-server
communication, for some sort of administrative purposes.
meta-certs, posted 28 Sep 2000 at 05:44 UTC by lkcl »
dear mr pauljohnson,
I've been trying to figure out a system of trust which does not depend
on either centralised servers or "Sun Gods" who are
automatically trusted (as in the Advogato metric). The big
headache is dealing with people who create more than one ID. If we allow
many IDs per person then in theory a single user can create
arbitrarily many mutually trusting IDs and thereby outvote the many
users with single IDs. Any attempt to get away from this
seems to require Sun Gods. (BTW the term "Sun God" comes from Bruce
Sterling's book "Distraction", in which distributed trust
metrics play a significant role. Read it for inspiration, if nothing else).
i have in fact read distraction, and recommended that other people do
so, and in fact this effort is aimed towards developing something that
would be suitable for use in fulfilling bruce sterling's vision. now,
why do i forget what a "sun god" is? please explain! i must have read
it too fast, and not enough times :)
if you assume that users have a single PGP key, then their identity
exists synonymously with the key.
it would be very interesting to have "levels" of trust, whereby it is
possible to request a Trust Metric Calculation that says, "only include
people in the graph who have digital signatures using a PGP key".
the reason for this is that some people may wish to use passwords,
whilst others may not wish to get involved with PGP keys, and accept
the risk, offset against the convenience.
PGP keys, posted 28 Sep 2000 at 06:00 UTC by lkcl »
- do you store a user's private key on advogato.org??? urrr... :)
- do you set up a protocol whereby a user is presented with, or
emailed, their profile, and asked to sign it?? urr... :)
- or, do you get people to run distro-trusty-mod_virgule on their own
local server, and then get issued with requests for their user's profile
to their _own_ local server? does it make sense, then, to have a
cacheing system where the user's digitally-signed profile can be cached,
and the digital signature [or cache date] verified...
- or, do you create a local user private key on the site to
which you log in, e.g. advogato.org, and the private key on the site is
encrypted with the user's own, personal private key? the
decrypted form of this key is destroyed when the user logs out. in
this, it becomes necessary to trust the site to some extent that it does
not get compromised during the time in which the site's private key is
decrypted. the decryption process need only take place when a user
wishes to issue or revoke one or more certifications.
fascinating, huh? *grin*.
lkcl wrote: to what purpose could trusting the server be put? i
cannot think of any clear purpose. what would it mean that you trust the server? to do what?
I suspect you're thinking of trust in the classical computer security sense. This is slightly different: it's trust in a person to do something.
Roughly, to say you trust someone is to make a prediction about their future behaviour. If I trust you enough to loan you $100 then it's
because I predict that you will pay it back. More formally, the probability that you won't repay multiplied by the value of the loan is the
"cost" to me of trusting you that much. If I am to extend that much trust it must be because I foresee a return which exceeds that cost.
Commercially this is done by interest payments (which is why poor risks pay higher interests) and this process is well understood.
Socially it is done by a network of favours and "reputation". Reputation is the community consensus about how much someone can be
trusted. Conversely, being trusted is valuable because if someone does you a favour then the cost of the risk for them is lower, so they
are more likely to do so. Ideally all this turns into a virtuous circle where everybody helps everybody else and looks after common
resources because anyone who doesn't gets frozen out.
The problem with informal methods of doing this is that they rely on the capacity of the human brain. Humans seem able to keep track of
around 200 other people. Once you get above this size of community the system breaks down. The idea behind a reputation server is to
augment limited human capacity with a computerised system based on inputs from the entire community. The problem then is: when the
server tells you that Joe Bloggs can be trusted, how much do you trust the server?
I can't quote the exact passage, but the hero and one of the other characters are discussing trust servers, and the problem that they tend
to produce a few people who are very highly trusted, but for a newcomer to gain that level of trust was very hard unless you could
persuade one of these people to vouch for you. They used the term "sun god" to describe these people. It was only in passing, so I'm
not surprised you missed it. I picked it up because I was already interested in the topic.
Reply to lkcl, posted 28 Sep 2000 at 17:44 UTC by billgr »
this then becomes that if you don't trust your own web server to perform the calculations, then well at the very least you're not messing
things up for anyone _else_!
Right. :-) This would be option 2 in my numbering: do away with automated trust metric updates altogether and put them all under control
of individual users.
the only other issue becomes, then, how do you guarantee that a digitally signed Certification will _definitely_ be picked up by any
given server? this can, methinks, be done by not only signing the entire user-profile but also by signing each individual certification.
Good point. Since the signed updates are secured cryptographically, they could be sent out on an agreed upon channel, say
alt.sex.certs.updates :-) Writing scripts to yank traffic one is interested in off this wire is fairly straightforward, and then recalc/republishing
happens under the control of the individual. The problem is this might end up generating immense amounts of traffic. Not sure, just a
guess. More explicit modelling would be good to do. Also, if people didn't update their certs, the distributed nature of the updates might
get clogged. Or it might get routed around. Some kind of simulation would be nice, to model what might happen.
In regards to the details, I wasn't sure what you meant by this: "every "cert in" must be matched up with the "cert out" that is contained in
the user's profile."
On the second post, you asked to what purpose could trusting the server be put?
Basically, you are trusting the server not to be compromised. That is, to compute metric updates correctly. If you have reason to believe
that it is not, then you can lower your cert of the server. (I thought PaulJohnson's comments on this were excellent.)
In thinking about it, it seems to me the key question to answer is whether a distributed calculation of a trust metric can rely on individual
initiation of local calculations, or whether it must be automated. If individual recalc of local certs is fine, then there's no need for secure
channels, trust in servers, signed recalcs, or anything. If it is not, and the system relies on automated recalc, then things get interesting.
:-) This seems like a question in which a lot of headway could be made by simulation: what are the break points where the distributed
update of the trust metric will fail? That is, suppose we have a model where there are N individuals on M servers, with average number of
certs-per-user C. (Interesting characterization of the certs would be what fraction are to users on the local server, if certs to remote users
are clustered or widely dispersed, etc.) Our Fearless Leader has done some good early work on figuring out what abuse-resistant
thresholds there are and their relatedness to degrees of connectedness among users. Assume that the network satisfies those
conditions, what kind of distribution on metric recalc frequencies is needed to maintain the security of the network? Is it enough for just a
few key people to update often? Or do a few infrequent updaters cause ripple effects that destabilize the network? I don't know. It sounds
like an interesting problem.
Back to the certification of the server thing, my third suggested option can be seen as a way for servers to maintain a high degree of trust,
so that their insertion into the network has negligible effect (that is, they can fit in the first option and not be noticed if they can prove they
are behaving well). Essentially, a server would distribute a cert update file containing its old and new states, and signatures from other
servers to the effect that the update was done correctly. Internal keys can be chosen to make it extremely difficult to choose prior states
to get some hoped-for result in the new state (like a cert that trustedperson@samba certifies me@here as Master). A compromised
server, then, would just put out meaningless certs that would look basically like a DoS attack. Since an open distributed network would
have to face that anyway, we could look at that as progress. :-)
Writing scripts to yank traffic one is interested in off this wire is
straightforward, and then recalc/republishing happens under
the control of the individual. The problem is this
might end up generating immense amounts of traffic.
there are ways round this:
- 1. ask a different server - one that locally contains the majority
of the profiles, or has better bandwidth. e.g. use
http://advogato.org/tmetric/ instead of http://localhost/tmetric/
- 2. cacheing of profiles. every profile has a creation date, a
last-modified date [and an additional possibility: update-frequency].
you do the math :)
- 3. a request can be tempered in two, maybe three ways. first,
limit the graphs' population to N-degrees of separation. second, limit
the certifications to specific levels [e.g "i am only interested in OS
Masters."]. third, simply say, if number of remote profiles downloaded
exceeds 10,000, stop!!!
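A sketch of such a tempered crawl (Python; fetch_certs is a hypothetical callback that downloads a user's certs-out):

```python
from collections import deque

def crawl(seed, fetch_certs, max_depth=3, max_profiles=10000,
          levels=None):
    # fetch_certs(user) -> list of (subject, level) certifications-out.
    # Tempering: stop at N degrees of separation, optionally follow only
    # certain certification levels, and bail out after max_profiles
    # profile downloads.
    seen = {seed}
    queue = deque([(seed, 0)])
    graph = {}
    while queue and len(graph) < max_profiles:
        user, depth = queue.popleft()
        certs = fetch_certs(user)
        graph[user] = certs
        if depth >= max_depth:
            continue
        for subj, level in certs:
            if levels and level not in levels:
                continue
            if subj not in seen:
                seen.add(subj)
                queue.append((subj, depth + 1))
    return graph
```

The returned graph of links-to-links can then be fed into the trust metric calculation locally.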
In thinking about it, it seems to me the key question to answer is
whether a distributed calculation of a trust
metric can rely on individual initiation of local
calculations, or whether it must be automated. If individual recalc
of local certs is fine, then there's no need for secure
channels, trust in servers, signed recalcs, or anything. If it is
not, and the system relies on automated recalc, then things
get interesting. :-)
hey bill, perhaps i should mention. in the experimental mod_virgule,
there are default seeds, from which some automated recalcs are
performed, BUT, _anyone_ can over-ride this and put in any seeds they
wish, in a manual calculation. think of it like a search-engine, in a way.
so, let's say that you do a search with mbp@samba.org as the seed.
mbp has issued a certification saying that lkcl@samba.org has "Really
Cool Taste In Music". mbp@samba.org has also certified
CoolDanceTrack.mpeg as "Really Cool", whilst lkcl@samba.org has
certified FunkyGrooves.mpeg as "Insanely Cool".
the response to a tmetric/ calculation for the "Music" category will
come up with FunkyGrooves.mpeg at the top with a rating of "Insanely
Cool", and CoolDanceTrack.mpeg below it as "Really Cool".
now, imagine that you do not trust the server that produced
these calculations for you. the digital signatures will allow you to
analyse mbp@samba.org's profile and lkcl@samba.org's profile, and
manually do a PGP key-signature verification to ensure that the profiles
have not been tampered with.
the only thing that this procedure does not prevent is a site "lying"
by deliberately excluding arbitrary users from the trust metric
calculations procedure, skewing the results by exclusion rather than
inaccuracies. however, there is nothing to stop you going to a second,
third... etc site and making exactly the same request. if the
results differ, you bitch about it.
distribute a cert update file containing its old and new states, and
signatures from other servers to the effect that
the update was done correctly. Internal keys can be chosen
to make it extremely difficult to choose prior states to...
hey, you know, i _really_ like this. it's a bit like random
inspections. "perform this trust metric calculation. you have fifteen
seconds to comply :) :)"
servers could potentially pass on user-requests to other servers, ask
them to perform the same calculation. if one of the servers gets it
wrong, they bitch about it.
tmetric-blame: i like it :)
suffers from problem of consistency, though: this is distributed
metrics, we're talking. there's nothing to stop someone from removing a
cert during a calculation...
Assume that the network satisfies those conditions, what kind of
metric recalc frequencies is needed to maintain the security
of the network?
hey again bill,
well the mod_virgule system has a cache of commonly-used trust metric
calculations. having noticed that this is there, i extended it so that
if the calculation has already been performed in the last 30 seconds, it
is not done again. if this turns out to be too network-heavy, then more
advanced mechanisms can be developed.
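A sketch of that cache (Python; the 30-second window is from the text, everything else is assumed):

```python
import time

class TMetricCache:
    # Results of a trust metric calculation are reused for a short
    # window (30 seconds in the experimental mod_virgule) rather than
    # being recomputed on every request.
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.entries = {}   # key -> (timestamp, result)

    def get(self, key, compute):
        now = time.time()
        hit = self.entries.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        result = compute()
        self.entries[key] = (now, result)
        return result
```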
what i really need is gnupg-lib, not a gpg binary.
more by email..., posted 2 Oct 2000 at 21:41 UTC by billgr »
Glad I could spark some ideas. If you want to follow up on anything, feel free to email me.
My initial response to the question of "why trust the server" was focussed on honesty: does the server honestly process trust metrics.
The idea of sample tests is a partial solution, but would not really solve the problem of a server which is generally honest but gives a high
rating to a small number of people (e.g. the owner and his immediate family). If a sufficiently small fraction of certificates are dishonest
then it could be a very long time before this is noticed.
However there is another aspect: is the server using the right trust metric? A distributed system has the potential for different servers
computing trust in different ways. Indeed, this might well be an important capability, allowing new metrics to be tested and introduced
gradually. The only requirement is that they can exchange common certificates. Indeed, the servers will compute trust metrics for each
other as well as for humans.
So to get around the possibility of a selectively dishonest server which generates a small number of dishonest certificates, and the
possibility of a weak server using a poor metric honestly, a user will take a sounding on another person from several servers and deduce
their own trust rating from that. Part of this process will be a weighting based on how much the user trusts the servers, which will in turn
be based on how much the various servers trust each other.
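A minimal sketch of such a sounding (Python; a simple trust-weighted average is assumed, since the exact weighting is not specified):

```python
def sounding(ratings, server_trust):
    # ratings: {server: that server's numeric rating of the person}
    # server_trust: {server: how much the user trusts that server, 0..1}
    # Returns a trust-weighted average, or None if no trusted server
    # offered a rating.
    total = sum(server_trust.get(s, 0.0) for s in ratings)
    if total == 0:
        return None
    weighted = sum(rating * server_trust.get(s, 0.0)
                   for s, rating in ratings.items())
    return weighted / total
```

A selectively dishonest or weak server then contributes little, because its weight shrinks as the other servers (and the user) lower their trust in it.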
Of course the people running the servers want their servers to be trusted and valued, so there is an incentive to come up with ratings
which match everybody else. But there is also a premium on ratings which are more accurate than the competition. So the system as a
whole should converge on the most accurate view possible of the various players. Does this make sense?
Incidentally, I see this work as being more significant than just rating postings on Advogato or musical tastes or whatever. In the long run
this is going to be a way to manage valuable information as well. Know any good IT consultants? If so then you have valuable knowledge
that others could use, but we have no way to get it or to measure its accuracy once we have it, so it's largely useless to us. Trust metrics
offer a way to break this problem.
does the server honestly process
trust metrics. If a
sufficiently small fraction of certificates are dishonest
then it could be a very long time before this is noticed.
remember, the certificates are signed, and the process of calculating
the trust metrics is deterministic. if one certificate is out-of-date,
excluded etc, then ... hmm ... what you really need is a hash of all the
certifications used as input, to *guarantee* that you actually used the
same inter-connected graph.
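A sketch of such an input hash (Python; SHA-1 and the field ordering are assumptions):

```python
import hashlib

def input_hash(certs):
    # Canonical hash over the full set of certifications fed into a
    # trust metric calculation, so two servers can check they used the
    # same inter-connected graph. Sorting makes the hash independent of
    # the order in which the certs were collected.
    lines = sorted("%s|%s|%s|%s" % (c["issuer"], c["subj"],
                                    c["level"], c["sig"])
                   for c in certs)
    return hashlib.sha1("\n".join(lines).encode()).hexdigest()
```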
ok, nice one. thanks.
However there is another aspect: is the server using the
right trust metric?
yes, this is going to require a hash on the certification, and the hash
to be placed in the user's profile, associated with the certifications
that user issues. in this way, you guarantee that two different users
are actually using the same descriptions and meanings for what might be
the same "words" in a certification!
i am not going to worry about the social aspects of two groups or even
two users assigning different meanings to the same Certification type.
see what happened on advogato, cf. discussions about whether even the
advogato cert type was for
open source merit or open source skill, or something like that.
So to get around the possibility of a selectively dishonest
server which generates a small number of dishonest
certificates, and the possibility of a weak server using a
poor metric honestly, a user will take a sounding on another
person from several servers and deduce their own trust
rating from that.
exactly [hey, someone else groks this, this is cool!]. the concept of
"dishonest" may in fact be accidental: a server is off-line for a
this make sense?
yes, it does, and i am pleased to see that someone else gets it :)
Incidentally, I see this work as being more significant than
just rating postings on Advogato or musical tastes or
whatever. In the long run this is going to be a way to
manage valuable information as well.
now, consider the real-time field environment applications, as well,
where you *need* to know _right_ now, because your life depends on it,
whether a source of information has been compromised, or its source, or
its source, etc. but you have _some_ other information that gives a
clue. are _you_ going to be the one to make the decision to evaluate
hundreds or thousands of information sources, most of which don't apply
to your life-threatening situation, right now?