Peer-to-Peer Accountability Enforcement/mechanism
Improved control over malusers requires both finer granularity in the blocking system and a more even-handed, less-centralized way of deciding who needs to be restricted.
The following mechanisms should address most or all of the above shortcomings. In particular, basing user A's ability to avoid posts by user B solely on the ratings of other users whom A explicitly trusts should help to overcome bias and snap judgements.
While it would still be possible for a user to create their own personal "echo chamber" under this system, it would be notably harder than under existing systems, where any user has complete control over who they block -- and it would become progressively more difficult the more people one "trusts". This should help to limit the damage done by attempts at epistemic closure.
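As a rough illustration of that filtering rule, here is a minimal sketch in Python. The names, data layout, -10..+10 rating scale, and hiding threshold are all illustrative assumptions, not details fixed by the proposal; the sketch only shows user A's view of user B being derived solely from the ratings of users A explicitly trusts.

```python
from statistics import mean

# ratings[rater][ratee] -> a bipolar range vote, assumed here to run from -10 to +10
ratings = {
    "carol": {"bob": -7},
    "dave":  {"bob": +2},
    "erin":  {"bob": -9},
}

# Each user explicitly lists the raters they trust.
trusts = {
    "alice": ["carol", "dave"],  # alice has not trusted erin, so erin's vote is ignored
}

def personal_score(viewer, target):
    """Viewer's personalized score for target, using only trusted raters;
    returns None if none of them have rated target."""
    votes = [ratings[r][target]
             for r in trusts.get(viewer, [])
             if target in ratings.get(r, {})]
    return mean(votes) if votes else None

def hides(viewer, target, threshold=-5):
    """Hide target's posts from viewer only when viewer's trusted raters,
    taken together, rate target below the threshold."""
    score = personal_score(viewer, target)
    return score is not None and score < threshold

print(personal_score("alice", "bob"))  # -2.5 -- erin's -9 doesn't count for alice
print(hides("alice", "bob"))           # False -- alice's trusted raters aren't that negative
```

The more people one trusts, the more votes feed into that average, which is why building a pure echo chamber gets harder as the trust list grows.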
The key elements of P2PAE are:
- CR: crowdsourced ratings (as opposed to centralized or automated; most proposals include this)
- RV: range voting (ratings are nonbinary and bipolar)
- CW: credibility-weighting (all ratings are not created equal)
- PCV: personalized credibility view (users don't all see the same numbers for other users)
Many refinements and elaborations are possible. Experimentation will determine what works best.
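As one possible sketch of the four elements working together, the following Python fragment weights each rater's range vote by a viewer-specific credibility estimate. The data layout, the -10..+10 scale, and the one-hop credibility formula are assumptions made purely for illustration; the proposal deliberately leaves such details to experimentation.

```python
# CR: ratings come from users, not from admins or an algorithm.
# RV: each rating is a number on a bipolar scale, assumed here to be -10..+10.
ratings = {
    "carol": {"bob": -7, "dave": +8},
    "dave":  {"bob": +2},
}

# Explicit trust declarations, used as the starting point for credibility.
trusts = {"alice": ["carol"]}

def credibility(viewer, rater):
    """PCV: credibility is computed per viewer. Directly trusted raters get
    full weight; anyone else gets the average rating given to them by the
    viewer's trusted raters, rescaled from -10..+10 to 0..1."""
    if rater in trusts.get(viewer, []):
        return 1.0
    votes = [ratings[t][rater]
             for t in trusts.get(viewer, [])
             if rater in ratings.get(t, {})]
    if not votes:
        return 0.0
    return max(0.0, (sum(votes) / len(votes) + 10) / 20)

def weighted_score(viewer, target):
    """CW: each rater's range vote counts in proportion to the credibility
    the viewer assigns that rater, so two viewers can see different numbers
    for the same target."""
    total = weight = 0.0
    for rater, given in ratings.items():
        if target in given:
            w = credibility(viewer, rater)
            total += w * given[target]
            weight += w
    return total / weight if weight else None

# alice trusts carol directly; dave's credibility (for alice) comes from carol's +8 rating of dave.
print(round(weighted_score("alice", "bob"), 2))  # -2.74
```

A real implementation would also have to decide how far (if at all) credibility propagates beyond one hop of trust -- one of the things experimentation would need to settle.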
Comparison to Traditional Moderation
This design should greatly increase a site's effectiveness at assigning appropriate levels of trust to users, both initially and on an ongoing basis, widening the "admin bottleneck" often encountered when trying to get abusive users banned or restricted.
It also allows more users to participate in the day-to-day running of the site, which helps build a genuine, functional, sustainable working community.
Other Reputation Systems
- Reddit allows upvoting and downvoting (CR), but it's binary (not RV), rigidly egalitarian (no CW), and global (no PCV).
- 2017-10-23 Improve Twitter’s Reputation by Giving Users One
  - The article proposes a simple system where each user can assign a rating to any other user. This includes CR and RV, but not CW or PCV.
  - This was mentioned here as an example of a system that won't work.
- 2017-10-11 Colony Whitepaper Review: A framework for decentralized organizations
  - I am skeptical of this. It waves around a lot of ideas without getting specific about them.
  - These days I am also automatically skeptical of anything that mentions "blockchain" without an apology and a clear explanation.
  - It's listed here only because it involves a "reputation system".
- 2016-05-31 Periscope's new safety features will allow users to report, judge, and ban abusive comments (via)
  - The main difference between Periscope's implementation and mine is that they randomly choose people to decide, rather than letting users build up a trust network of users who vote when they feel like it. This could be less partial, but (like any randomly-chosen jury) it could also be stupider. It does not implement PCV or CW; it arguably includes a crude form of CR; I'm not sure about the rest.
  - It might also be a way to bridge the gap when a new network doesn't yet have enough "trust" connections to make practical use of P2PAE.
  - It may have practical applications beyond comment-policing.
Miscellaneous
- /refinements: some stuff that may be interfering with getting the basic concept across, and probably needs rewriting anyway