Peer-to-Peer Accountability Enforcement/mechanism/PCV

'''Personalized Credibility View''' (PCV) means that for each user A who wants an evaluation of user B who is previously unknown to them, A will see a rating for B that is individually calculated for A based on A's ratings of other users they ''do'' know and the ratings that any of those users have assigned to B.
  
(This requires [[../CW|credibility weighting]] in order to be possible.)

More formally:

* For each pair of users A and B, where A wants an evaluation of B's credibility, a '''PCV''' is calculated as follows:
** For every user C who has been given a positive rating by A (ArC) and who has rated B (CrB), the system sums all (ArC/10 x CrB/10) to produce a single rating of B for A -- representing, essentially, "what my friends think of this person".
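
As a concrete illustration, here is a minimal sketch of that calculation in Python, assuming ratings are kept as a nested map of {rater: {ratee: score}} with scores on a -10..10 scale; the function name and storage format are illustrative, not part of any InstaGov implementation:

<syntaxhighlight lang="python">
def pcv(ratings, a, b):
    """A's personalized credibility view of B: the sum of
    (ArC/10 * CrB/10) over every user C whom A has rated
    positively and who has rated B."""
    total = 0.0
    for c, a_rates_c in ratings.get(a, {}).items():
        if a_rates_c <= 0 or c == b:
            continue  # only A's positively-rated contacts count
        c_rates_b = ratings.get(c, {}).get(b)
        if c_rates_b is None:
            continue  # C has never rated B
        total += (a_rates_c / 10) * (c_rates_b / 10)
    return total

# A trusts C at 8/10; C rates B at -6/10 -> PCV = 0.8 * -0.6 = -0.48
ratings = {"A": {"C": 8}, "C": {"B": -6}}
print(pcv(ratings, "A", "B"))  # -0.48
</syntaxhighlight>
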
On Google+, we attempt a crude approximation of this system by having invite-only groups where individuals share links to problematic accounts. When we join such a community, we are essentially selecting a group of others to pre-emptively warn us of users we should avoid.
 
==Example==
Let's say we have users C and D, where D is a sockpuppet for C.
  
In an attempt to game the system by giving themselves more credibility, D (actually C using an alternate login) rates C highly and C rates D highly. If every user saw the same aggregate ratings for every other user, even if they were [[../CW|weighted by credibility]], then D's vote for C would help boost C's overall ratings (and their votes for each other would help boost the significance of their votes). If that wasn't enough, C could perhaps create a hundred or a thousand other sockpuppet accounts, and have them all rate C highly. This creates a situation where all that is needed in order to maintain or create a positive credibility rating is sufficient time -- which is to say, funding. Money would effectively buy credibility.
  
 
In a PCV system, though, no user's rating of any other user has global weight, and each individual user decides for themselves who has weight for them.
 
  
 
Naive user E might not have figured this out, but users A and B caught on to it earlier and downrated C, D, and any other sockpuppets they were able to identify.
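
As a toy demonstration of the difference (reusing the pcv() sketch above; all names and numbers are invented), a sockpuppet farm inflates a global average but leaves A's personalized view untouched, because A has rated none of the sockpuppets:

<syntaxhighlight lang="python">
# A trusts B at 9/10; B has rated C at -8/10.
ratings = {"A": {"B": 9}, "B": {"C": -8}}

# C's hundred sockpuppets all rate C at 10/10.
for i in range(100):
    ratings[f"S{i}"] = {"C": 10}

everyone_on_c = [r["C"] for r in ratings.values() if "C" in r]
print(sum(everyone_on_c) / len(everyone_on_c))  # global mean ~9.8: gamed
print(pcv(ratings, "A", "C"))                   # 0.9 * -0.8 = -0.72: not gamed
</syntaxhighlight>
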
==Wrinkles==
===novice judgment===
One tricky part is in how novices decide who to take seriously. The pattern I have followed -- and which I think is followed ''by the kind of people I like to read'' -- is to rate highly those whose output matches my values of (e.g.) honesty, compassion, consistency, and rationality. People who hew to those values are likely to notice and downrate others who do not -- such as sockpuppets and others with power-based agendas such as spammers and harassers, not to mention Nazis.
  
 
While this does leave users free to choose bad advice, it also offers a way out when they realize that the people they've chosen to trust for that advice are letting in people they do not like: re-evaluate your evaluators.
 
 
...which is why PCV is important. On a system with a single rating for each user, users have no recourse but to complain to management when that rating produces bad results.
 
  
===not enough friends===
A related situation is when user A doesn't have enough (or any) links to user B to get a reliable evaluation. Options for dealing with this include:
# Looking further afield, following the FOAF (friend-of-a-friend) chain -- i.e. have any of C's friends rated B? How about their friends? (A rough sketch of this follows the list.)
# Any given instance running InstaGov could designate a handful (as in {{l/wp|Dunbar's number}} or fewer) of default advisors (DAs). These DAs would automatically be assigned as "friends" for new users, but they would not have any other privileges; users would be free to adjust their ratings of DAs like they can for any other user.
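
Here is a rough sketch of option 1, again building on the pcv() example above: a breadth-first search outward from A, damping each extra hop. The damping factor and depth limit are invented knobs, not part of any spec:

<syntaxhighlight lang="python">
def pcv_foaf(ratings, a, b, max_hops=2, damping=0.5):
    """Breadth-first PCV: try A's own contacts first, then their
    contacts, and so on; returns (score, hops) for the first hop
    at which anyone has rated B, or (None, None)."""
    frontier = {a: 1.0}  # user -> trust weight accumulated along the chain
    seen = {a}
    for hop in range(1, max_hops + 1):
        score, found = 0.0, False
        next_frontier = {}
        for user, weight in frontier.items():
            for c, user_rates_c in ratings.get(user, {}).items():
                if user_rates_c <= 0:
                    continue  # only follow positive ratings outward
                c_rates_b = ratings.get(c, {}).get(b)
                if c_rates_b is not None:
                    score += weight * (user_rates_c / 10) * (c_rates_b / 10)
                    found = True
                if c not in seen:
                    seen.add(c)
                    next_frontier[c] = weight * (user_rates_c / 10) * damping
        if found:
            return score, hop
        frontier = next_frontier
    return None, None
</syntaxhighlight>

At hop 1 this reduces to the plain pcv() calculation. Option 2 (default advisors) needs no new machinery at all: it amounts to pre-populating each new user's ratings with positive entries for the DAs.
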
Side note: It might be a good idea to somehow visually indicate the reliability/strength of a PCV, based on the number of inputs and, negatively, on the length of any FOAF chains.
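
One invented way to combine those two signals into a single indicator (the constants are placeholders, not a proposal):

<syntaxhighlight lang="python">
def pcv_confidence(n_inputs, hops):
    """Toy 0..1 confidence indicator: more raters feeding the PCV
    raise it, longer FOAF chains lower it. Constants are arbitrary."""
    if n_inputs == 0 or hops == 0:
        return 0.0
    return min(1.0, n_inputs / 10) / hops
</syntaxhighlight>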
