Peer-to-Peer Accountability Enforcement/mechanism

From InstaGov
 
Revision as of 12:06, 24 October 2017

Improved control over malusers requires both finer granularity in the blocking system and a more even-handed, less-centralized way of deciding who needs to be restricted.

The following mechanisms should address most or all of the above shortcomings. In particular, the process of allowing a given user A to avoid posts by user B based only on the ratings of other users explicitly trusted by A should help to overcome bias and snap judgements.

While a user could still construct their own personal "echo chamber" under this system, doing so would be notably harder than in existing systems, where each user has complete control over whom they block -- and it would become progressively more difficult the more people one "trusts". This should help to limit the damage done by attempts at epistemic closure.
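The filtering rule described above can be sketched in a few lines. This is an illustrative sketch only: the function name, data layout, and the -0.5 threshold are assumptions for the example, not part of the proposal.

```python
# Hypothetical sketch: user A avoids posts by user B only when the users
# A explicitly trusts have, on average, rated B below a threshold.
# Ratings from users A does not trust are ignored entirely.

def should_hide(viewer, author, trusts, ratings, threshold=-0.5):
    """Return True if `viewer` should avoid posts by `author`.

    trusts:  dict mapping user -> set of users they explicitly trust
    ratings: dict mapping (rater, ratee) -> rating in [-1.0, +1.0]
    """
    trusted_raters = trusts.get(viewer, set())
    scores = [ratings[(t, author)]
              for t in trusted_raters
              if (t, author) in ratings]
    if not scores:  # nobody the viewer trusts has rated the author: show by default
        return False
    return sum(scores) / len(scores) < threshold
```

Note how the averaging makes the echo-chamber point concrete: the more users one trusts, the more ratings must agree before anyone is hidden, so a lone hostile rating carries progressively less weight.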

The key elements of P2PAE are:

  • CR: crowdsourced ratings (as opposed to centralized or automated; most proposals include this)
  • RV: range voting (ratings are nonbinary and bipolar)
  • CW: credibility-weighting (all ratings are not created equal)
  • PCV: personalized credibility view (users don't all see the same numbers for other users)

Many refinements and elaborations are possible. Experimentation will determine what works best.
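One possible way the four elements could fit together is sketched below. Everything here is an assumption for illustration -- the function name, the recursion-depth cutoff, and the rule that only positively-credible raters count are choices made for the example, not details from the proposal.

```python
# Illustrative sketch combining the four P2PAE elements:
#   CR:  every score comes from other users' ratings (nothing centralized)
#   RV:  ratings are real-valued and bipolar, in [-1, +1]
#   CW:  each rating is weighted by the rater's own credibility
#   PCV: credibility is computed per viewer, so two viewers may see
#        different numbers for the same user

def credibility(viewer, target, ratings, depth=2):
    """Viewer-specific credibility of `target`, in [-1, +1].

    ratings: dict mapping (rater, ratee) -> rating in [-1, +1]
    """
    if (viewer, target) in ratings:   # the viewer's own rating wins (PCV)
        return ratings[(viewer, target)]
    if depth == 0:                    # cutoff keeps the recursion finite
        return 0.0
    weighted, total = 0.0, 0.0
    for (rater, ratee), score in ratings.items():
        if ratee != target or rater == viewer:
            continue
        # CW: weight this rating by the rater's credibility in the
        # viewer's eyes, computed one level shallower
        w = credibility(viewer, rater, ratings, depth - 1)
        if w > 0:                     # ignore raters the viewer finds non-credible
            weighted += w * score     # RV: bipolar scores, not up/down votes
            total += w
    return weighted / total if total else 0.0
```

With this shape, a viewer who has rated no one sees a neutral 0.0 for everyone, while a viewer who trusts a given rater inherits that rater's judgements at reduced weight -- one concrete reading of "all ratings are not created equal".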

Comparison to Traditional Moderation

This design should greatly increase a site's effectiveness in correctly assigning levels of trust to users, both initially and on an ongoing basis, widening the "admin bottleneck" often encountered when trying to get abusive users banned or restricted.

It also lets more users participate in the day-to-day running of the site, which helps build a genuine, functional, sustainable working community.

Other Reputation Systems

  • Reddit allows upvoting and downvoting (CR), but it's binary (not RV), rigidly egalitarian (no CW), and global (no PCV).
  • 2017-10-23 Improve Twitter’s Reputation by Giving Users One (https://medium.com/@MoreAndAgain/twitter-reputation-3697e585e323)
    • The article proposes a simple system where each user can assign a rating to any other user. This includes CR and RV, but not CW or PCV.
    • This was mentioned at https://mammouth.cafe/users/wion/statuses/934838 as an example of a system that won't work.

Miscellaneous

  • /refinements: some stuff that may be interfering with getting the basic concept across, and probably needs rewriting anyway