Peer-to-Peer Accountability Enforcement/mechanism/PCV

From InstaGov
Revision as of 11:07, 24 October 2017 by Woozle (talk | contribs)

Personalized Credibility View (PCV) means that when user A wants an evaluation of a previously unknown user B, A sees a rating for B that is calculated individually for A, based on A's ratings of users they do know and the ratings any of those users have assigned to B.
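The calculation described above can be sketched as a one-hop weighted average. Everything in this sketch -- the function name `pcv_rating`, the score range, and the averaging formula -- is an illustrative assumption; the article does not pin down an exact formula.

```python
# A minimal PCV sketch, assuming ratings are scores in [-1.0, 1.0]
# and that A's view of B is a weighted average over the
# intermediaries A has personally rated.

def pcv_rating(viewer, target, ratings):
    """Return viewer's personalized rating of target, or None.

    `ratings` maps each rater to a dict of {ratee: score}.
    Only intermediaries the viewer rates positively contribute;
    each intermediary's opinion of the target is weighted by the
    viewer's trust in that intermediary.
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for intermediary, trust in ratings.get(viewer, {}).items():
        if trust <= 0:
            continue  # the viewer distrusts this intermediary
        opinion = ratings.get(intermediary, {}).get(target)
        if opinion is None:
            continue  # intermediary has no opinion of the target
        weighted_sum += trust * opinion
        total_weight += trust
    if total_weight == 0:
        return None  # no trusted path to the target
    return weighted_sum / total_weight
```

Because only the viewer's own trust assignments carry weight, no single rating has global effect: two viewers with different trust lists can see entirely different scores for the same target.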

Example

Let's say we have two users, C and D, where D is a sockpuppet for C.

In an attempt to game the system by giving themselves more credibility, D rates C highly and C rates D highly. If every user saw the same aggregate ratings for every other user, then D's vote for C would help boost C's overall rating. If that weren't enough, C could create a hundred or a thousand other sockpuppet accounts and have them all rate C highly. In such a system, all that is needed to create or maintain a positive credibility rating is sufficient time -- which is to say, funding. Money would effectively buy credibility.

In a PCV system, though, no user's rating of any other user has global weight, and each individual user decides for themselves who has weight for them.

Naive user E might not have figured this out, but users A and B caught on to it earlier and downrated C, D, and any other sockpuppets they were able to identify.
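The scenario above can be made concrete with a small numeric sketch contrasting a global aggregate rating with a personalized view. The scores, the trust weight, and the simple averaging used here are all invented for illustration.

```python
# Sketch of the C/D sockpuppet scenario: global aggregate vs. a
# personalized view. All user names and scores are illustrative.

ratings = {
    "A": {"C": -1.0, "D": -1.0},   # A identified the sockpuppets
    "B": {"C": -1.0, "D": -1.0},   # so did B
    "C": {"D": 1.0},               # C boosts their sockpuppet D
    "D": {"C": 1.0},               # and D boosts C back
}

# Global aggregate: every rating counts equally, so the sockpuppet
# vote partially offsets A's and B's downratings of C.
scores_for_c = [r["C"] for r in ratings.values() if "C" in r]
global_c = sum(scores_for_c) / len(scores_for_c)

# Personalized view for a user who trusts only A (weight 0.8):
# only A's opinion of C counts, and D's boost is ignored entirely.
trusted = {"A": 0.8}
personal_c = (sum(w * ratings[u]["C"] for u, w in trusted.items())
              / sum(trusted.values()))

print(round(global_c, 2))    # → -0.33 (sockpuppet dilutes the signal)
print(round(personal_c, 2))  # → -1.0  (sockpuppet has no weight)
```

Adding more sockpuppets drags the global aggregate further toward C's favor, but leaves the personalized score untouched, since none of the sockpuppets appear in the viewer's trust list.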

The Tricky Part

The tricky part is in how novices decide who to take seriously. The pattern I have followed -- and which I think is followed by the kind of people I like to read -- is to rate highly those whose output matches my values, such as honesty, compassion, consistency, and rationality. People who hew to those values are likely to notice and downrate others who do not -- such as sockpuppets and others with power-based agendas such as spammers and harassers, not to mention Nazis.

While this does leave users free to choose bad advice, it also offers a way out when they realize that the people they've chosen to trust for that advice are letting in people they do not like: re-evaluate your evaluators.

...which is why PCV is important. On a system with a single rating for each user, users have no recourse but to complain to management when that rating produces bad results.

On Google+, we attempt a crude approximation of this system by having invite-only groups where individuals share links to problematic accounts. When we join such a community, we are essentially selecting a group of others to pre-emptively warn us of other users we should avoid.