Peer-to-Peer Accountability Enforcement/mechanism

Improved control over malusers requires both finer granularity in the blocking system and a more even-handed, less-centralized way of deciding who needs to be restricted.

The following mechanisms should address most or all of the above shortcomings. In particular, allowing a given user A to avoid posts by user B based only on the ratings of other users explicitly trusted by A should help to overcome bias and snap judgements.

While it would still not be especially difficult for a user to create their own personal "echo chamber" under this system, it would be notably less easy than in existing systems, where any user has complete control over who they block -- and it would become progressively more difficult the more people one "trusts". This should help to limit the damage done by attempts at [[issuepedia:epistemic closure|epistemic closure]].
The key elements of P2PAE are:
* {{l/sub|CR}}: crowdsourced ratings (as opposed to centralized or automated; most proposals include this)
* {{l/sub|RV}}: range voting (ratings are nonbinary and bipolar)
* {{l/sub|CW}}: credibility-weighting (all ratings are not created equal)
* {{l/sub|PCV}}: personalized credibility view (users don't all see the same numbers for other users)

Many refinements and elaborations are possible. Experimentation will determine what works best.

==Crowdsourced Ratings==
This is a sort of minimum specification.
* Every user can rate every other user's credibility on a numeric scale (e.g. integers from -10 to +10).
* For each pair of users A and B, where A wants to know B's credibility, a '''Personalized Credibility Rating (PCR)''' is calculated as follows (see the sketch after this list):
** For every user C who has been given a positive rating by A (ArC) and who has rated B (CrB), the system sums all (ArC/10 × CrB/10) terms to produce a single number which represents, essentially, "what my friends think of this person".
** If the number of such ratings is too few, the resulting rating may be displayed in faint colors or with some other indication that it is tentative.
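
To make the arithmetic concrete, here is a minimal sketch of that summation in Python. The function name, the (rater, rated) dictionary layout, and the example numbers are illustrative assumptions, not part of the specification.

<syntaxhighlight lang="python">
# Hypothetical data layout: ratings[(rater, rated)] = integer in [-10, +10].
def pcr(viewer, target, ratings):
    """Viewer's personalized view of target: sum (ArC/10 * CrB/10) over every
    user C whom `viewer` rates positively and who has rated `target`."""
    terms = [
        (a_r_c / 10) * (ratings[(c, target)] / 10)
        for (rater, c), a_r_c in ratings.items()
        if rater == viewer and a_r_c > 0 and (c, target) in ratings
    ]
    # With too few contributing ratings, a real UI might flag the result as tentative.
    return sum(terms) if terms else None

# A trusts C (+8); C has rated B at -6, so A's personalized view of B is 0.8 * -0.6.
ratings = {("A", "C"): 8, ("C", "B"): -6}
print(pcr("A", "B", ratings))  # -> approximately -0.48
</syntaxhighlight>
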
==Personalized Credibility Rating (PCR)==
Personalized Credibility Rating, or PCR, is my term for a way of tracking each user's credibility in the eyes of others.

It is built on a user-voting basis like that used on sites such as [[wikipedia:Reddit|Reddit]], but with some refinements:
* it uses range (non-binary) voting
* summing is personalized for each user, rather than being global

Here's how it would work. (For the sake of brevity, I'll use "post" to mean anything posted by one user for others to see, regardless of whether it's a root-level post or a comment on a post.)

* Each post has its own PCR that is separate from the PCR of the user who posted it.
* The PCR for any given post will ''default'' to the poster's PCR.
* Any additional post-specific ratings will modify that post's PCR by some sort of weighted average.
** Experimentation will be needed to determine the best algorithm, but we could start by weighting the poster's PCR by how many individual ratings went into it (see the sketch below).
* Each user X only sees posts whose PCR exceeds X's Minimum Required Credibility (MRC -- see below).
* Each user X may adjust the [[#Frequency of Notification|frequency of notification]] at various levels above their MRC.
* Each user X starts with a default global MRC, but they may modify that default and may also set specific MRCs for individual threads or groups.
* All users can rate each post's credibility.
* Optional: ratings of users' posts could have some small, cumulative influence on their overall PCR. Perhaps the influence of any given user A's ratings of B's posts should be overridden whenever A revises their overall rating of B. Experimentation is needed.

"MRC" refers to a setting that each user can adjust to determine the strength of post-filtering. The lower it is, the more posts they'll see -- because they'll be allowing less-credible posts through. The higher it is, the higher-credibility a post has to have in order to be visible. (There will presumably be other user-options to determine how "hidden" posts are indicated -- e.g. a summary showing how many, a list of usernames, etc.)
 
 
 
==Personalized Credibility View (PCV)==
 
Given the symmetrical design of this system, each user could have a different credibility rating depending on who is asking.
 
 
 
For example: user A might be seen as highly credible to user B, who returns the favor – but maybe both A and B are actually trolls or sockpuppets, and user C has figured this out and downrated them both. In that case, user A's PCV would include a high credibility for user B, while user C's PCV would view users A and B as low-credibility.
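
A small worked version of that scenario, adapted so the pairwise calculation has intermediaries to work with: a sockpuppet S that A trusts, and an observer D that C trusts. All names and numbers are invented for illustration.

<syntaxhighlight lang="python">
# Worked example of the troll/sockpuppet scenario above (all ratings invented).
ratings = {
    ("A", "S"): 8, ("S", "B"): 9,    # A trusts S; S praises B
    ("C", "D"): 7, ("D", "B"): -8,   # C trusts D; D distrusts B
    ("C", "A"): -9, ("C", "B"): -9,  # C's own ratings of the suspected trolls
}

def pcr(viewer, target):
    terms = [(arc / 10) * (ratings[(c, target)] / 10)
             for (v, c), arc in ratings.items()
             if v == viewer and arc > 0 and (c, target) in ratings]
    return sum(terms) if terms else None

print(pcr("A", "B"))  # -> approximately 0.72: in A's personalized view, B looks credible
print(pcr("C", "B"))  # -> approximately -0.56: in C's view, B looks like a troll
</syntaxhighlight>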
 
 
 
This becomes an important concept with regard to policies that affect everyone's experience, such as what content should be publicly visible (i.e. visible to viewers who aren't logged in). In such cases, the PCV of the site manager becomes the deciding factor.
 
 
 
Note that this isn't the same as requiring the site manager to make trust decisions for each user. The distributed nature of the crowdsourced rating system means that managers can delegate those decisions to trusted others, who may in turn automatically act on advice from other users ''they'' trust, and so on.
 
 
 
==Comparison to Traditional Moderation==
This design should greatly increase the effectiveness of a site in correctly assigning levels of trust to users, both initially and on an ongoing basis, greatly widening the "admin bottleneck" often encountered when trying to get abusive users banned or restricted.

It also allows more users to actually participate in the day-to-day running of the site, which is a good thing in terms of building a genuine, functional, sustainable working community.
==Frequency of Notification==
Rather than simply blocking those whose PCR falls below a threshold, users can "distance" themselves from other users by selecting how often they wish to be notified of posts based on the poster's PCR (and possibly how often they wish for ''their'' posts to generate notifications to that other user).

For example, if I see user A with a high PCR and user B with a low PCR, I might want to be notified immediately whenever user A posts, but only every week regarding user B's posts. (Similarly, I might want to avoid notifying user B right away if I respond, in order to further reduce the amount of time I spend interacting with them.)

This would promote higher engagement between higher-value users without completely excluding others from the dialogue.

Note that this is different from the random-selective notification of social networks such as Facebook and Google+, where some posts are simply ignored. (The latter approach may require substantially less computing power; a data design for an FoN system has not yet been worked out.) Here, all posts are eventually included in a notification, but they may be time-delayed and grouped with other posts by the same user.
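
A small sketch of how a viewer's notification preferences might map a poster's PCR to a delivery delay, along the lines of the tiers described above. The thresholds, delays, and tier table are invented for illustration; the only requirement from the text above is that low-PCR posters are delayed and batched rather than silently dropped.

<syntaxhighlight lang="python">
from datetime import timedelta

# (minimum PCR, delay) pairs, checked in order -- per-viewer settings in a real system.
NOTIFY_TIERS = [
    (0.5, timedelta(0)),         # high-credibility posters: notify immediately
    (0.0, timedelta(days=1)),    # middling: fold into a daily digest
    (None, timedelta(weeks=1)),  # everyone else: weekly digest, never dropped
]

def notification_delay(poster_pcr):
    for threshold, delay in NOTIFY_TIERS:
        if threshold is None or poster_pcr >= threshold:
            return delay
    return NOTIFY_TIERS[-1][1]

print(notification_delay(0.72))   # -> 0:00:00 (immediate)
print(notification_delay(-0.56))  # -> 7 days, 0:00:00 (weekly digest)
</syntaxhighlight>
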
==Other Reputation Systems==
* {{l/wp|Reddit}} allows upvoting and downvoting (CR), but it's binary (not RV), rigidly egalitarian (no CW), and global (no PCV).
* '''2017-10-23''' [https://medium.com/@MoreAndAgain/twitter-reputation-3697e585e323 Improve Twitter’s Reputation by Giving Users One]
** The article proposes a simple system where each user can assign a rating to any other user. This includes CR and RV, but not CW or PCV.
** This was cited [https://mammouth.cafe/users/wion/statuses/934838 here] as an example of a system that won't work.

==Miscellaneous==
* [[/refinements]]: some stuff that may be interfering with getting the basic concept across, and probably needs rewriting anyway