More thoughts on moderation and certification (part 1)
Posted 19 Apr 2000 at 18:52 UTC by Raphael
Several discussion sites and news sites are useful to open
source developers: Advogato, Kuro5hin, Technocrat.net, Slashdot and others. Each of them has
its strengths and weaknesses. After comparing the moderation
processes used by these sites (and reading the previous articles
posted here), I identified a few problems with Advogato. In this
article, I explain what these problems are and I suggest some
solutions for making Advogato even better.
This article is more or less a followup to my first article on moderation (which
referred to Slashdot) and some of my comments in raph's first "Meta"
article about the trust metric. This article has grown and
changed as the weeks passed. I have now decided to split it in two
parts: the first one dealing with certifications, and
the second one dealing with moderation and content
filtering. I will post the second part soon, probably next
week. In retrospect, I think that I should have posted my ideas
earlier as several smaller articles, but it is too late now.
Anyway, I am probably taking this too seriously...
Trust metric and rating of skills should not be mixed
The certification system on Advogato is not used in a consistent
way by all users. Although it is defined as a "trust metric", it is
mostly used as a rating system for the skills or behavior of other
users. This drift is caused by the fact that the certification guidelines use terms such as
"Master" and "Apprentice", which are related to the skills of the user
and not related to how well she is known or trusted.
As a result of the inconsistency between the naming of the
certification system (trust metric) and how it should be used (skill
levels), this system is used in different ways by the users. Some of
them rate the others based only on their skills or reputation,
regardless of whether or not these accounts can be trusted for
representing who they claim to be. A good example of that is the
number of "Master" ratings received within a few hours by the recently
created accounts esr and BrucePerens, without any public
confirmation that these accounts were real or fake (e.g. a link to
their Advogato account from their home page, a public e-mail message
or news article, etc.). On the other hand, some users rate the others
based on how well they know them or trust them. This can be seen when
some people join Advogato and certify the ones working on the same
project(s) with a "Master" rating as long as they are sure that these
accounts really belong to the right users.
I suppose that most of us are somewhere between these two examples
and try to use the certification system as a combination of both. But
alas, these different interpretations lead to the fact that the
current certifications are not a good measure of either trust or skill.
In order to measure trust, Advogato needs to change its
certification system so that it uses other terms for describing how
much an account is trusted. As I will explain below, this can coexist
with a rating system describing how the skills or contributions of
each user are perceived by the others. The trust levels could be as follows:
- Level 4: "I have received a direct confirmation
from this person (face-to-face meeting, phone call, reply to a private
e-mail) and now I
am sure that this account really belongs to her. I am also convinced
that this person will not harm Advogato." This is the highest
certification level and it will probably be exceptional because it
requires a direct exchange between the two users.
- Level 3: "The well-known home page of that person
has a link to her Advogato account, or she posted a link to her
Advogato account in a mailing list or newsgroup (using her well-known
e-mail address)." This is almost as good as a direct confirmation,
although the home page or e-mail account could have been subverted in
some way (especially if a public message is forged by an imposter in a
forum that is seldom visited by the real user.)
- Level 2: "I have heard about this person and the
information given on their account's page (on Advogato), or the
information given by other users, leads me to believe that they are
really who they claim to be." This will probably be the most common certification level.
- Level 1: "The information given on this account's
page leads me to believe that they are really who they claim to be,
although I do not know much about this person." This can be used by
someone who wants to let new people in and give them a minimal access
to the site, if nobody else has certified them yet.
- Level 0: "I do not know this person and there is
not enough information available for me to trust this account; or I
know this person but I do not trust her for using Advogato correctly."
The two highest levels (4 and 3) should be close to each other (if a number
is associated to them for calculating the flows in the graph) because
there is a very low probability of fraud.
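As a rough sketch, the five proposed levels could be modeled as an ordered enumeration. The names below are placeholders that I am inventing purely for illustration; as I say further down, good single-word names are still wanted:

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """The five proposed trust levels; the names are placeholders only."""
    UNKNOWN = 0    # no information, or not trusted to use the site correctly
    PLAUSIBLE = 1  # account page looks genuine, but the person is unknown
    REPUTED = 2    # heard of this person; account information is consistent
    CONFIRMED = 3  # link from a well-known home page or mailing-list post
    PERSONAL = 4   # direct confirmation (meeting, phone call, e-mail reply)

# Levels 3 and 4 should map to numbers that are close to each other,
# since both imply a very low probability of fraud.
print(TrustLevel.PERSONAL > TrustLevel.REPUTED)  # -> True
```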
I tried to find some good names for describing these levels, but I
could not find a consistent set of single words. Suggestions are
welcome... I would like the highest level (4) to sound very personal
and level 1 to have a meaning of: "I'm not sure but that account does
not look bad." For level 0, I would prefer something like "Unknown"
instead of "Untrusted" because it should be neutral and not negative.
Calculating the trust/skill levels
The system for evaluating the trust level of each user can coexist
with another one for evaluating their skills, contributions, or
whatever criteria we like to use for qualifying someone as a "Master"
(or "Mistress") or as an "Apprentice". This means that someone who
wants to certify another account could select the trust level and the
skill level in two separate lists. However, these two ratings should
not be calculated in the same way.
The current system for calculating the trust metric on Advogato is good and
would probably work very well if the names of the levels had been
based exclusively on trust/knowledge. Although there has been some discussion about the selection of "seeds"
for the graph, I do not think that it would be easy or even useful to
have many individual trust webs. So let's keep the current system for the trust metric.
But the evaluation of skills should be done in a different way. The trust
metric gives you the maximum certification level that can reach your
node in the graph. This is appropriate for deciding if an account can
be trusted, but this is not good for defining what the Advogato
community thinks about the skills of that user, because the criteria
for these ratings are much more subjective. Instead of using a method
that selects the maximum value reached by each node after multiple
passes, it would be better to use a weighted average of the ratings given by its peers.
A weighted average has the advantage of being simpler to evaluate,
because it is based on a single formula that involves only a node and
its peers without requiring multiple passes. The weights would be a
function of the trust metric only and the method could look like this:
- First step: use the network flow algorithm to compute the trust
metric. This requires one pass for each level and gives a trust level
Tn for each node N.
- Second step: for each node N, use this simple formula for the skill
level: multiply the skill rating Spn given by each of its peers P by a
weight Wpn, add these numbers, and divide the result by the sum of the
weights. Each weight Wpn would be derived from Tp and the trust
certification Tpn (more about this below).
In other words, the skill level Sn of each node N is given by:
Sn = sum_for_each_P (Spn * Wpn) / sum_for_each_P (Wpn)
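The two steps above can be sketched as follows (a minimal illustration, not Advogato code; the dictionaries and the equal weights in the example are my own assumptions):

```python
def skill_level(ratings, weights):
    """Weighted average of skill ratings for one node N.

    ratings: {peer P: skill rating Spn given by P}
    weights: {peer P: weight Wpn of P's rating}
    Returns 0.0 when no rating carries any weight.
    """
    total = sum(weights.get(p, 0.0) for p in ratings)
    if total == 0.0:
        return 0.0
    return sum(s * weights.get(p, 0.0) for p, s in ratings.items()) / total

# Two peers rate N as Master (3) and twenty rate her as Apprentice (1);
# with equal weights the result stays close to Apprentice, not Master.
ratings = {"master%d" % i: 3 for i in range(2)}
ratings.update({"apprentice%d" % i: 1 for i in range(20)})
weights = {p: 1.0 for p in ratings}
print(round(skill_level(ratings, weights), 2))  # -> 1.18
```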
It should be noted that the skill level of one account has no
influence on the ratings it gives to other accounts (only the trust
level counts). This means that a trusted Apprentice has as much
influence as a trusted Master for rating another user. But an
untrusted Master would have very little influence. Also, since there
is no "flow" as in the trust graph, any user can rate as many other
users as she wants without risking that some ratings become ineffective.
Using a weighted average would also reduce the need for "negative
certifications" that some people would like to have on Advogato.
Since every rating counts as long as it has a non-zero weight, it is
possible to lower someone's skill level by giving a lower rating to
that person. Of course, the effect depends on your own weight
compared to the weight of all other users who have certified that person.
At first I thought that each weight Wpn should be equal to Tp
(as generated in the first step). But this implies that P
would rate all other nodes with the same weight, regardless of whether
she knows (has certified) these accounts or not. So I prefer the
following formula:
Wpn = Tp * (c + Tpn)
or the very similar formula:
Wpn = Tp * (1 + Tpn / c)
where Tpn is the trust certification that P gives to N, Tp is the
trust level of P, and c is a positive constant that is higher than any
trust level. The reason why c should be higher is that the trust level
of P should be the dominant factor. Otherwise, some users would be tempted
to raise the certification that they give to another account in order
to have more weight for increasing or decreasing that account's skill
rating. This would ruin the purpose of the trust metric. Adding a
constant in the formula should reduce this effect. It is also
interesting to note that with this formula, an account that is not
trusted at all (Tp = 0) will get a weight of 0 for all of its ratings.
This is probably a good thing. On the other hand, a trusted account
can give a skill rating to another account without certifying it
first. This could also be useful.
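As a rough numeric check of the first formula (the value c = 5 is just one arbitrary choice above the maximum trust level of 4):

```python
def weight(Tp, Tpn, c=5):
    """W_pn = T_p * (c + T_pn); c must exceed the highest trust level (4)."""
    return Tp * (c + Tpn)

# An untrusted rater (Tp = 0) gets weight 0, whatever she certifies:
print(weight(0, 4))  # -> 0
# A trusted rater can rate without certifying first (Tpn = 0):
print(weight(3, 0))  # -> 15
# Raising Tpn from 0 to 4 multiplies the weight by only 9/5, so the
# rater's own trust level Tp remains the dominant factor:
print(weight(3, 4))  # -> 27
```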
Transition between the current system and the new one
If this new system is ever going to be implemented on Advogato,
there should be a way to move the existing certifications to the new
system. However, I do not think that these certifications should be
used directly as a measure of trust, because the majority of users
have probably used them with a "skills" meaning.
A better solution would be to convert the existing certifications
into initial trust levels, giving at least level 2 to the accounts
certified as Journeyer or Master. Another solution would be to
initialize the trust network with level 1 certifications only, but I
assume that if one rates another account as Journeyer or Master, then
one has probably heard about that person.
Once all accounts have been converted, then everybody can spend
some time raising their certifications for the people they know.
There will probably be a period during which the trust levels and
skill levels of all accounts will change frequently (up or down), but
most of them should stay close to their initial levels.
Aging of certifications and ratings
It would not make sense for a level 1 certification to stay in the
system forever. If, after a couple of months, the user who certified
an account has not got enough information to raise this certification
to level 2 or more, then the certification should probably be removed.
If this is not done automatically, then it would be nice to have at
least a reminder displayed on the user's page, telling her that some
certifications should be re-evaluated.
The certifications at level 3 or 4 should be allowed to stay
forever (until the user changes them). However, there could be an
exception for level 4: if a user certified someone at level 4 and that
other user has certified the first one at a level that is less than 3,
then the level 4 should probably be lowered. It would be unusual for
someone to claim that they talked to someone else, while that other
person claims that they have never heard of her. Again, this could be
done automatically or by adding a warning on the user's page.
For the skill ratings, it would be interesting to have the weights
decreasing slowly over time unless the ratings are refreshed. People
change, and it is good to re-evaluate their contributions from time to
time. If the time since the last evaluation is greater than 3 months
(for example), then the weight
could be decreased by 5% every month. After two years, the weight for
this rating would reach 0 or some minimal value that would make it
irrelevant in comparison with the other ratings. Re-submitting the
rating is all it would take to get back 100% of its weight for the
next couple of months.
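A minimal sketch of this aging rule, reading "5% every month" as 5% of the original weight (so that the weight really does reach 0 after roughly two years, as stated above; the linear decay is my interpretation):

```python
def aged_weight(base, months, grace=3, decay=0.05):
    """Full weight during the grace period, then minus 5% of the original
    weight per month; reaches 0 after grace + 20 months."""
    overdue = max(0, months - grace)
    return base * max(0.0, 1.0 - decay * overdue)

print(aged_weight(10.0, 2))   # -> 10.0 (still within the grace period)
print(aged_weight(10.0, 13))  # -> 5.0  (ten months of decay)
print(aged_weight(10.0, 24))  # -> 0.0  (two years: fully aged out)
```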
This aging system does not have to be implemented at the same time
as the transition between the current system and the new one. But the
new one should at least store a time stamp for each certification, so
that the aging feature can be added later if necessary.
I hope that you are not bored by this long article. If you were
bored, I hope that you stopped reading before reaching this part.
This first part has presented a way to de-couple the trust metric
from the skill ratings. This solution preserves the resistance
against attacks that is provided by raph's
method, while allowing more freedom
for the skill ratings. Although the solution seems good to me, I may
have forgotten some obvious things, so feel free to criticize it.
The second part (to be posted soon) will deal with moderation and
content filtering.
 Now I know that these accounts really belong
to Eric Raymond and Bruce Perens, respectively, because I received
their confirmation by e-mail while I was preparing this article. The
day after, Bruce posted an article and some comments on Advogato,
confirming that his account was active.
 I wrote "reply to a private e-mail" because
an unsolicited e-mail message could have been forged by someone else.
In that case, a lower certification level should be used. A digitally
signed message (with GPG) could
probably be trusted at the highest level even if it was posted in a
public forum instead of being part of a private e-mail exchange
between the two users. But only if that user's public key can be (and
has been) checked from a reliable source.
 I am lazy so I will use the word "skills" for
describing the reputation of someone in the Advogato community. This
can be seen as a combination of their coding or documentation skills,
contributions to the open source efforts, ability to write good
articles or comments in this forum, and many other things.
 I use "sum_for_each_P" because it is not easy
to include a Greek sigma sign in an HTML page without using images or
making some unsafe assumptions about the fonts that could be installed
in your browser.
 This implies that the skill ratings could be
vulnerable to "attacks" by a group of trusted users, but this does not
matter because the trust metric should be resistant to attacks and
these users have to be trusted first. If they are trusted members of
the community, then I don't think that a coordinated action should be
considered as an "attack".
 The trust levels really need names instead of
numbers. By the way, aren't you tired of reading all these footnotes?
This shows that this article started as a brain dump that was not edited down as much as it should have been.
Let me first state that I like the idea put forth in this article. The
"trust" portion of the rating system is much like Phil Zimmermann's
"web of trust" model, which works reasonably well.
In my view, the problem with the current trust metric is the
restriction of capacities on nodes. As I read the description of the
trust metric, a restriction on a node's capacity in effect places a
limit on the number of people that person can then certify. For
example, consider a user 5 steps from the seed. The capacity of
his node is 4. When the transformation is done to allow the
Ford-Fulkerson algorithm to be applied, the capacity of the edge x-
to x+ becomes 3, and from x- to the supersink is 1. Now, the
capacities of the lines from x+ to all the nodes that have been
certified by x are infinite, but only three units of flow will leave x+
due to conservation. How this flow is allocated amongst the
possible flowlines is indeterminate in absence of a more
complete graph, but the fact is that if someone at this level certifies
more than three people, someone isn't going to get any flow.
This may be a feature, not a bug, but in either case it
doesn't really make sense from my point of view.
I think the current system would work better if the capacity on each
node were infinite, but with the restriction that each flowline from
another user be restricted to having a capacity of 1. This would
require that the capacity of the flowline to the supersink be infinite
to handle the overflow.
If this were the case, the flow out of each node would only be
restricted by the flow into the node. To rephrase this in context: a
person's ability to certify would be dependent on the number of
people who certified him/her. This
makes good sense. It also preserves the attack-resistance of the
original metric, since we can assume that few "trusted" people
would certify an attacker, thus limiting the attacker's ability to alter
the ratings of "trusted" users.
For the model described above, this could be modified by the
trust level. A "level 4" trusted user would have an infinite capacity
node, where a "level 0" trusted user would have a capacity of 1 (or zero).
This would give added security, since the number of people you
could then certify would be based on your trust level.
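The capacity argument can be made concrete with a small max-flow sketch (Edmonds-Karp on unit-capacity certification edges; the tiny graph below and the use of Python are invented for illustration, this is not Advogato's actual code):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict-of-dicts of integer capacities."""
    flow = 0
    while True:
        # breadth-first search for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # all residual capacities here are positive integers: augment one unit
        u = t
        while parent[u] is not None:
            p = parent[u]
            cap[p][u] -= 1
            cap.setdefault(u, {})
            cap[u][p] = cap[u].get(p, 0) + 1  # residual (reverse) edge
            u = p
        flow += 1

# Unit-capacity certification edges: the seed certifies a and b; a
# certifies x, y and z.  The flow reaching x, y and z together cannot
# exceed the single unit flowing *into* a, however many people a certifies.
cap = {"seed": {"a": 1, "b": 1}, "a": {"x": 1, "y": 1, "z": 1},
       "x": {"sink": 1}, "y": {"sink": 1}, "z": {"sink": 1}}
print(max_flow(cap, "seed", "sink"))  # -> 1
```

With unit edges, a node's ability to pass certification on is bounded by the number of people who certified it, which is exactly the behaviour described above.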
I am looking forward to reading part 2...
The distinction between "trust" and "skill" is one that has already been
brought up. The Trust Metric in its
current form is intended to gauge contribution to the free software
community. At least, that is how I understand it, and how it seems to
have been described in the various Meta discussions. "Trust Metric" is
probably a misnomer. It comes from PKI terminology. Semantics aside,
most folks here understand what is intended.
The current Trust Metric has to be at least partly a skill
metric. A person's contribution is not sheer volume, but the quality of
work also. Nothing big here. Separating the two ("trust" and
"skill/contribution") begs one large question: how do you trust
Think about trust. What am I trusting you for? I can't really
trust someone I don't know well. The free software community provides
many interrelationships and acquaintances, but not everyone gets
together for beer every week :-)
The problem here is that trust will be limited to a small subset. I
know certain people personally, and I trust them. But that is it. Only
they get my "trust" rating.
Imagine a new person joins. NewPerson pops up with a new diary entry
and all, and I read it. I notice that NewPerson wrote the utility
ReallyUseful. I know the code, and love the tool. I have all the
context I need to provide a "skill/contribution" certification to NewPerson.
But I've never met NewPerson, and know nothing about them personally. I
cannot provide any "trust" qualification, and so I leave that blank.
Certainly there may be other folks that know NewPerson. But the chances
are much smaller than someone knowing NewPerson's work. Already we have
"focused groups" on Advogato. There is the GIMP/GNOME people, the BSD
folks, the LinuxChix, etc. This is not bad. This is good. A grouping
can provide a stable certification graph that allows people unknown to
the other groups to be certified "correctly". This is a benefit of the current system.
The difference is that a member of a "focused group" can easily certify
a lone hacker whose work they are familiar with. They cannot provide a
"trust" rating. People joining will not be able to acquire a nice think
trust graph if they are not already well known in at least one "focused
Advogato is not about exclusion.
The other question (brought up best by
shaver) is "what are we trusting?" Are we
saying we trust a person with free software, with our kids, with $100?
The obvious answer is that we are trusting them with free software. In
other words, we trust they have the ability to provide quality
contributions to free software.
It's a skill metric.
"Web of Trust".com, posted 20 Apr 2000 at 02:07 UTC by lkcl »
there is a south-african company just been bought by verisign who
provide "web of trust" metrics. certification requires that you provide
certain bits of info, such as an email address for level 0; your name
and address for level 1; it goes up, from there. you are *not* allowed
to certify *other* people until you are at the top level (and for that,
you must have been certified by at least something like 30 other people,
at the top-minus-one level).
yes, i like this. i am happy to add in a cut/paste version of the
netflow code so that it can do "trust-known" as distinct from
"trust-skill". if you can come up with some lines of code at the right
point, and raph approves it, i'll do it.
btw, yes, i like the GPG thing. there should be other methods.
also btw, it could be possible to either:
- leave the current trust-metric as "trust-skill"
- work in two separate systems and phase the old one out
- trash the whole current metric-set to "default" and start again.
i'm in favour of the last one because it gives a window of opportunity
to bring in some things such as message-passing, etc.
jlbec, while you did make some good points, I think that a trust metric
should still exist separate from a skill metric. I think the major point
is that not all people who are trustworthy, as in with their open source
coding, not flaming advogato, and generally pushing open source, are
good coders. And not all good coders are trustworthy to push open source
and not flame advogato.
I think the current problem is that it is just assumed that if you're a
decent coder and on advogato, you're an open source advocate and
trustworthy. Say (Yes, this is far-fetched) a Microsoft, or better yet,
a Corel employee who codes PURELY closed source and thinks open source
completely sucks, were to join advogato. Maybe he wanted to keep a diary
here and perhaps see why we're all insane. Now, perhaps he made some
excellent closed source linux app which many people really want/need...
it's extremely complex and took a large portion of his time. I think we
would want him to have a low trust rating, but high skill rating.
Perhaps it isn't so far-fetched. We are getting more and more popularity
here, there will be some people who are not 100% open source advocates,
and I know there are people who I wouldn't trust posting articles here.
Just because the current system works when added to the fact that the
majority are open source advocates and the majority use their conscience
a lot does not mean it will always be that way, and I think separating
trust from skill will help to prepare the system for perhaps a day when
advogato will have all kinds of people, not necessarily open source
advocates, but people interested in open source.
Since anyone Apprentice on up has exactly the same privileges, the trust
metric is effectively binary: you either trust a person (in which case
you certify them) or you don't. How much you certify them by doesn't
really matter... so long as they get that first "Apprentice" standing.
The distinction between "trust" and "skill" is one that has already
been brought up. The Trust Metric in its
current form is intended to gauge contribution to the free software
community. At least, that is how I understand it, and how it seems to
have been described in the various Meta discussions. "Trust Metric" is
probably a misnomer. It comes from PKI terminology. Semantics aside,
most folks here understand what is intended.
This is exactly the problem that I brought up in the first
paragraphs of my article, in the section titled "Trust metric and
rating of skills should not be mixed". I will try to explain it in a different way:
- On the one side, we have the page describing the trust metric. It
relies only on "trust" concepts and "security against attackers".
There is nothing relating to "free software", "skills" or any other
concepts, except that the names of the certification levels (Master,
Journeyer, Apprentice) are briefly mentioned in the introduction. But
most of the document is about security and trust, and deciding if an
account is valid or not. This is really defined as a trust
metric and the security against attackers makes sense in that context.
- On the other side, we have the page describing the certification guidelines. In
this page, the context is totally different: it explains that the
certification levels should be used for rating the skills and the
contributions of a person to the free software community. It does not
mention anything related to trust, security and validity of the
accounts. You can then consider that this page defines a skill metric.
Alas, the two concepts do not mix well. That's why I said that
"these different interpretations lead to the fact that the current
certifications are not a good measure of either trust or skill".
Among other things, using this system as a skill metric implies
that it loses part of its resistance against attacks: if many users
certify others based on their reputation (skill metric) without
checking if the accounts really belong to whom they claim to be (trust
metric), then the number of "confused nodes" can increase quickly.
Note that the system would still be resistant against massive attacks
(it is designed to do that well) but it can fail against fake
accounts or abusive users.
Also, the resulting certification level for each user is defined by
the maximum level that can reach that node after three passes on the
graph (one for each level). This is good for measuring trust, because
once a sufficient number of users declare an account as valid, that
account should be accepted even if some other users did not certify it
to the same level. But this is not a good way to measure the skills
or the reputation of that person in the free software community. If 2
users rate someone as a Master and 20 users rate the same person as an
Apprentice (i.e. they take the time to certify that account), then
giving a Master rating to that account would be too much. That's why
I think that the current system is well designed for a trust metric,
but is not appropriate for a skill metric. And since we need both, I
proposed to use two systems for evaluating "trust" and "skill" separately.
A note about why we need both:
- We need a trust metric in order to be resistant to attacks (script
kiddies) and to a lesser extent to be sure that the person posting an
article is really who they claim to be.
- We need a skill rating because many users want it. And if we do
not have a system that allows it, then those users would use the trust
certifications for rating the skills, which is not a good idea.
Separating the two ("trust" and "skill/contribution") begs
one large question: how do you trust someone?
The answer is simple: you only trust the people you know, or the
people who gave you enough information to be able to decide that they
are really who they claim to be. As I wrote in my article, the
highest certification level (level 4, lacking a better name) would be
exceptional. This is only for the people you know personally. Most
of the certifications would be at level 2 or 3, and there is nothing
wrong with that.
If someone joins this site and claims to have written this or that
great software package, then you can give her a level 2 certification
even if you don't know her personally. Later, after reading her
articles or checking her contributions, you can raise your
certification to level 3. And if you ever talk directly to her and
you have the feeling that you can trust that person, then (and only
then) you can give a level 4 certification. But this would be the
exception and not the rule, because you do not know so many people personally.
And this leads me to the problem described by BrentN:
In my view, the problem with the current trust metric is
the restriction of capacities on nodes.
Well, I consider this as a feature: you cannot trust everybody in
the universe, so it is not a bad idea to encourage people to certify
the ones they really know instead of trying to certify
everybody. This also makes the system more resistant to attacks,
because if some nodes had an infinite capacity, then an attacker
who manages to get one such node would be able to create and
certify as many fake accounts as she wants.
However, the situation is different when you think about rating the
skills or contributions of other users: you are allowed to have an
opinion on anybody (because the skill ratings are just that, an
opinion) so it makes sense to allow every user to rate as many other
users as she wants. That's why the two systems should use a different
method for evaluating the trust or skill level of each node.
Since anyone Apprentice on up has exactly the same
privileges, the trust metric is effectively binary [...]
This does not have to be that way. Several people said that
letting an Observer post some diary entries could lead to some abuses.
So it could be interesting to require at least a minimal certification
before you can do anything, and then have another threshold before you
can post articles or comments. We could have the following set of
privileges, depending on the trust level:
- level 0: cannot post anything until someone certifies that
account (the user can only create the account, update her personal info
and wait for certifications.)
- level 1: can post diary entries.
- level 2: can post diary entries and replies to the articles.
- level 3 and 4: can post new articles on the front page.
Maybe the levels 2, 3 and 4 should have the same privileges so that
the system would not be too elitist. At least levels 3 and 4 must
have the same, because there would not be many accounts reaching level 4.
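The graded privileges sketched above could be expressed as a simple lookup table (hypothetical; the action names are invented for illustration):

```python
# Hypothetical mapping from trust level to site privileges.
PRIVILEGES = {
    0: frozenset(),                               # can only update own info
    1: frozenset({"diary"}),
    2: frozenset({"diary", "reply"}),
    3: frozenset({"diary", "reply", "article"}),
    4: frozenset({"diary", "reply", "article"}),  # same as level 3
}

def can(trust_level, action):
    """True if an account at this trust level may perform the action."""
    return action in PRIVILEGES[trust_level]

print(can(2, "article"))  # -> False
print(can(3, "article"))  # -> True
```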
Capacity != flow!, posted 20 Apr 2000 at 19:42 UTC by BrentN »
Raphael, you state that
This also makes the system more resistant to attacks, because if
some nodes had an infinite capacity, then an attacker who
manages to get one such node would be able to create and certify
as many fake accounts as she wants.
This is not quite true! Just because you have the capacity
doesn't mean that you will have that amount of flow! The amount of
flow *out* of your node is limited to the flow *into* your node. This
is why I suggest restricting the flow to a single unit per certification.
This way if only one person certifies you, you can only certify one
person. If you think about it carefully, if raph's assumption that
"attacker" nodes are not connected to the "trusted" nodes holds,
then my suggestion is secure as well.
Read Section 5.1 of Chartrand & Oellermann's Applied and
Algorithmic Graph Theory... you'll see what I mean.
If Advogato had started out with a trust metric with labels from 0 to 4 instead of Apprentice,
Journeyer and Master, I doubt that as many people would have signed up. So there was
a bootstrapping problem.
There are enough people posting now that there's an information overload problem instead,
which is a much healthier situation.
I agree that separating out trust from skill makes some sense. You could go all the way
and have arbitrary attributes if you wanted, not just open source awareness. You could include
coding ability, C indenting style, IRC l33tne55, height, SAT scores (US only), writing and
documenting ability, sock colour (those of you who were waiting for that one would probably be
willing to say that they knew I was who I was, a high trust rating), but if we wanted all that
we'd be characters in a roleplaying game.
The current system seems buggy (why are two of the people whom I certified still
marked as observers, for example?) but it also more or less seems to work.
Given limited time to implement changes, I'd rather see a sorted project list with one-line
descriptions on the same page than certification changes.
From the dumb-looks-and-questions dept., I don't see what/how
Technocrat is doing to filter users or posts. Anyone?
i am hesitant to agree that advogato should shut the door on anyone
*until* they receive certification. avoid "elitism".
with the introduction of different types of certifications (i'm working
on providing the means to do that), it should easily be possible to
allow graded access to the site's features.
so far, i have the "skill"-metrics (Novice, Apprentice...) in an xml
file instead of hard-coded into the program. i am now working on
modifying all hard-coded references to req_get_tmetric_level() to refer
to "skill" as a parameter, again, instead of hard-coded. basically, if
this doesn't get accepted on advogato, it doesn't matter: the default
behaviour of this code will be the same as it is now.
It is my understanding that Bruce reserves the right to edit or even
delete submissions. I'm pretty sure this is retroactive in nature when
it comes to posts. Article submissions go into a queue.
Moving the (skill and/or trust) metric values to XML files is a good idea. That would make it easier to customize mod_virgule for different
type of sites. And we need to move the metric seed values out to the config file too! When you're hacking on mod_virgule you usually have
to set yourself as a seed value and it's real easy to forget to revert the change before preparing diffs.
done that already [seeds into the file]. haven't put the maths or the
weightings into the certs/somefile.xml, as i don't quite get that yet.
it also means that you can assess different schemes, on "live" data.
i also made it possible to enter "someoneelse" as the seed, and the
output will go into tmetric/someoneelse/somemetric-results.xml. this
means that you can make an assessment of other people from your *own* perspective.
Types of 'trust', posted 23 Apr 2000 at 20:08 UTC by lilo »
It seems to me that when we talk about 'trust' we are really talking
about several things. In the sense this article means it, it's 'trust
of identity.' That's an appropriate thing to measure.
But another appropriate measure would be 'trust to certify.' There are
issues there. Someone can be very good at what they do, but bad at
certifying. For example, a tendency to allow your feelings to get in
the way of your evaluations makes for bad certification.
If certifications are relative to 'trust to certify', that actually
makes the whole matter simpler. You certify whoever you want as you
think best, and one trust metric for people you haven't interacted with
is to look at how they certified the people you certified. The more
similarities, the more you trust their judgement. Then you look at the
exceptions as opportunities to reconsider your opinions.
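One way to sketch this "trust to certify" idea: compare a stranger's certifications with your own on the accounts you both certified (a toy illustration; the one-level tolerance and the example data are arbitrary choices of mine):

```python
def certification_agreement(mine, theirs):
    """Fraction of commonly-certified accounts on which two users agree
    (within one level): a rough proxy for trusting a stranger's judgement.

    mine, theirs: {account: certification level} dicts.
    """
    common = set(mine) & set(theirs)
    if not common:
        return 0.0  # no overlap: no evidence either way
    agree = sum(1 for a in common if abs(mine[a] - theirs[a]) <= 1)
    return agree / len(common)

mine = {"alice": 3, "bob": 1, "carol": 2}
theirs = {"alice": 3, "bob": 3, "carol": 2, "dave": 4}
# Agreement on alice and carol, disagreement on bob: 2 out of 3.
print(round(certification_agreement(mine, theirs), 2))  # -> 0.67
```

The disagreements (bob, here) are exactly the "exceptions" worth revisiting.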
Of course, this means that my view of Advogato certifications will be
different from yours--there would be no absolute certifications. I'm
not sure that would be a bad thing if certifications are a tool for judging
competence, since judging competence has to be done by individuals, who
then have to be happy with the results.
Raph: This being your area of interest, you may well be aware of this
project already, but I thought I'd post a link just in case you weren't.
They're doing research on how much trust people place in information
and advice they find online...
On 23 Apr 2000, lilo wrote:
It seems to me that when we talk about 'trust' we are
really talking about several things. In the sense this article means it,
it's 'trust of identity.' That's an appropriate thing to measure. But
another appropriate measure would be 'trust to certify.'
My article talks mostly about 'trust of identity', although some
indirect references to 'trust to certify' crept in somehow. For
example, I described the
level 4 certification as: "...I am sure that this account really belongs
to her. I am also convinced that this person will not harm Advogato."
Same for the description of level 0. (By the way, I still haven't seen
any suggestions for replacing these ugly level numbers by cool names.)
In this case, trusting one person for using Advogato correctly also
means that you trust that person for certifying others.
Maybe these two types of 'trust' should be clearly separated. Maybe
not. Having two separate certifications for 'identity' and 'ability to
certify others' could be nice, but I think that it is easier to mix both
of them in a single certification. Not necessarily better; just easier.