Peer-to-Peer Accountability Enforcement

From InstaGov

Latest revision as of 12:25, 3 May 2021
==About==

It is easy for malicious, misinformed, and uncomprehending users to greatly reduce the efficacy of civil discussion. I'll refer to these collectively as "malusers" for now, although the majority are probably not deliberately or knowingly malicious.
[[Peer-to-Peer Accountability Enforcement]] is a methodology for sharply reducing the problem of posting content in bad faith (including both outright verbal abuse as well as abuses that are harder to spot, such as {{l/ip|sea-lioning}}) by allowing users to collectively delegate other trusted users to rate comments and commenters as to their credibility and appropriateness. It generally increases per-user accountability for abuse, but with the source of that accountability being other users rather than a central authority (with all the bottlenecking and [[power-concentration]] that implies).
==Pages==

<big>
* '''{{l/sub|purpose}}''' - this needs to be a bit more general
* '''{{l/sub|mechanism}}''' - the quasi-technical details
</big>
==Notes==

Things that credibility management ''should'' be able to defeat or at least control:

* [[sea-lioning]] (see {{issuepedia|sea-lioning}}): appears civil and polite on the surface, so may be difficult to judge without understanding the full context
* [[brigading]] -- though it may take a combination of credibility management and [[debate mapping]]:
** '''2015-07-17''' [http://freethoughtblogs.com/pharyngula/2015/07/17/there-are-good-reasons-ive-never-been-a-fan-of-reddit/ There are good reasons I’ve never been a fan of Reddit]
* [[evaporative cooling]]:
** '''2010-10-10''' [http://blog.bumblebeelabs.com/social-software-sundays-2-the-evaporative-cooling-effect/ Social Software Sundays #2 – The Evaporative Cooling Effect]
* [[click-farming]] ...except I'm not understanding the value of having fake followers:
** '''2015-04-20''' [http://www.newrepublic.com/article/121551/bot-bubble-click-farms-have-inflated-social-media-currency How Click Farms Have Inflated Social Media Currency]
*** private discussion [https://plus.google.com/u/0/104092656004159577193/posts/MGmWGw3vUBx here]
* [[online harassment]]
** '''2014-10-09''' [http://www.theatlantic.com/technology/archive/2014/10/the-unsafety-net-how-social-media-turned-against-women/381261/ The Unsafety Net: How Social Media Turned Against Women] ([https://plus.google.com/u/0/+CindyBrown/posts/8Ahnx7mVciy via])
  
Credibility management is beginning to look potentially useful for rating the subjective quality of aesthetic works. Some discussion of that application is here:

* '''2014-06-20''' [http://www.reddit.com/r/dredmorbius/comments/28jfk4/content_rating_moderation_and_ranking_systems/ Content rating, moderation, and ranking systems: some non-brief thoughts] (Edward Morbius)
** Related: '''2014-09-21''' [http://www.reddit.com/r/dredmorbius/comments/2h0h81 Specifying a Universal Online Media Payment Syndication System]
*** which was a sequel to: '''2014-01-08''' [http://www.reddit.com/r/dredmorbius/comments/1uotb3/a_modest_proposal_universal_online_media_payment/# A Modest Proposal: Universal Online Media Payment Syndication]
* '''2012-02-08''' [http://torrentfreak.com/tribler-makes-bittorrent-impossible-to-shut-down-120208/ Tribler Makes BitTorrent Impossible to Shut Down] ([http://www.smartplanet.com/blog/thinking-tech/piracy-now-unstoppable-new-file-sharing-network-cant-be-shut-down/ via]): "Where most torrent sites have a team of moderators to delete viruses, malware and fake files, Tribler '''uses crowd-sourcing to keep the network clean.''' Content is verified by user-generated “channels”, which can be “liked” by others. When more people like a channel, the associated torrents get a boost in the search results."
* '''2011-02-05''' [http://www.quora.com/What-is-Quoras-algorithm-formula-for-determining-the-ordering-ranking-of-answers-on-a-question What is Quora's algorithm/formula for determining the ordering/ranking of answers on a question?]: this is a similar concept on the surface, but lacks some important elements:
** no proxying/layering -- all ratings are direct
** no personalized credibility ratings (PCRs)
** minimal granularity, i.e. only two possible values (-1/+1) for each rating

Malusers generally fall into one or more of the following groups:

* [[issuepedia:discussion troll|troll]]s
* astroturfers
* propaganda victims

The involvement of individual malusers in a discussion frequently has the following adverse effects:

* throwing the conversation off-topic
* injecting false but believable information
* making false claims that require extensive research to refute
* failing to understand the arguments of others
 
 
Note that many malusers are unreasonable only on specific topics, and entirely reasonable on others.
 
 
 
While blocking and banning (currently supported by most social media venues) are generally effective ways of dealing with malusers when they are identified, those techniques have a number of shortcomings:
 
* it is too easy to block someone who is making valid arguments that you happen to disagree with
 
* blocking is fundamentally hierarchical:
 
** one person owns a thread or post, and has the sole authority to block individuals from commenting on it
 
** a group of admins have the sole authority to ban individuals from posting in that group; there is also typically a single owner or founder who can demote or block admins
 
* blocking is a very crude level of control:
 
** typically, the only way to block someone from posting on a given thread is a person-to-person block -- preventing the two of you from seeing ''anything'' said by the other
 
** blocking someone from posting in a group prevents them from participating in ''any'' discussions in that group, including topics on which they are more reasonable
 
* once blocked, there is no reliable process by which a reformed maluser can regain posting permission
 
 
 
A better solution is needed.
 
==Proposal==
 
Improved control over malusers requires both finer granularity in the blocking system and a more even-handed, less-centralized way of deciding who needs to be restricted.
 
 
 
The following mechanisms should address most or all of the above shortcomings. In particular, the process of allowing a given user A to avoid posts by user B based ''only'' on the ratings of ''other users explicitly trusted by A'' should help to overcome bias and snap judgements.
 
 
 
While it would still be possible for a user to create their own personal "echo chamber" under this system, it would be notably harder than in existing systems, where any user has complete control over who they block -- and it would become progressively more difficult the more people one "trusts". This should help to limit the damage done by attempts at [[issuepedia:epistemic closure|epistemic closure]].
 
===Crowdsourced Ratings===
 
This is a sort of minimum specification; many refinements and elaborations are possible. Experimentation will determine what works best.
 
 
 
* Every user can rate every other user's credibility on a numeric scale (e.g. integers from -10 to +10).
 
* For each pair of users A and B, where A wants to know B's credibility, a Personal Credibility Rating (PCR) is calculated as follows:
 
** For every user C who has been given a positive rating by A (ArC) and who has rated B (CrB), the system sums all (ArC/10 x CrB/10) to produce a single number which represents, essentially, "what my friends think of this person".
 
** If the number of such ratings is too few, the resulting rating may be displayed in faint colors or with some other indication that it is tentative.
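The rating step above can be sketched in code. In this Python sketch, the function name, the flat `(rater, ratee)` rating table, and the `min_ratings` cutoff for "too few ratings" are all illustrative assumptions, not part of any specified API:

```python
# Sketch of the Personal Credibility Rating (PCR) calculation.
# `ratings` maps (rater, ratee) pairs to integer ratings in [-10, +10].

def pcr(a, b, ratings, min_ratings=3):
    """Compute A's personalized credibility rating of B.

    Returns (score, tentative): the summed rating, plus a flag that is
    True when too few of A's trusted raters have rated B (the UI might
    then display the rating faintly).
    """
    total = 0.0
    count = 0
    for (rater, ratee), a_r_c in ratings.items():
        # Only users C whom A has rated *positively* contribute.
        if rater != a or a_r_c <= 0:
            continue
        c_r_b = ratings.get((ratee, b))
        if c_r_b is None:
            continue  # this trusted user has not rated B
        # "what my friends think of this person": ArC/10 x CrB/10
        total += (a_r_c / 10) * (c_r_b / 10)
        count += 1
    return total, count < min_ratings
```

For example, if A rates C at +10 and C rates B at +10, while A rates D at +5 and D rates B at -10, B's PCR for A is 1.0 - 0.5 = 0.5, from two contributing raters.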
 
===Personalized Minimum Credibility===
 
This is similar to the system which appears to be in use on sites such as [[wikipedia:Reddit|Reddit]], but with some refinements.
 
 
 
For the sake of brevity, we'll use the word "post" to refer to anything posted by one user for others to see, regardless of whether it's a root-level post or a comment on a post.
 
 
 
* Each post has its own PCR that is separate from the poster's PCR.
 
* The PCR for any given post will ''default'' to the poster's PCR.
 
* Any additional post-specific ratings will modify that post's PCR by some sort of weighted average.
 
** Experimentation will be needed to determine the best algorithm, but we could start by weighting the poster's PCR by how many individual ratings went into it.
 
* Each user X only sees posts whose PCR exceeds X's Minimum Required Credibility (MRC).
 
* Each user X may adjust the [[#Frequency of Notification]] at various levels above their MRC.
 
* Each user X starts with a default global MRC, but they may modify that default and they may also set specific MRCs for individual threads or groups.
 
* All users can rate each post's credibility.
 
* Optional: ratings of users' posts could have some small, cumulative influence on their overall PCR. Perhaps the influence of any given user A's ratings of B's posts should be overridden whenever they revise their overall rating of B. Experimentation needed.
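The post-PCR and MRC rules above can be sketched as follows. This assumes post-specific ratings use the same scale as the PCR itself, and uses the starting-point weighting suggested above (poster's PCR weighted by the number of ratings behind it); all names and that scale choice are illustrative assumptions:

```python
# Sketch of the post-level PCR and the Minimum Required Credibility
# (MRC) filter.

def post_pcr(poster_pcr, poster_rating_count, post_ratings):
    """A post's PCR defaults to the poster's PCR; any post-specific
    ratings shift it via a weighted average, with the poster's PCR
    weighted by how many individual ratings went into it."""
    if not post_ratings:
        return poster_pcr  # no post-specific ratings: use the default
    weight = max(poster_rating_count, 1)
    total = poster_pcr * weight + sum(post_ratings)
    return total / (weight + len(post_ratings))

def visible_posts(posts, mrc):
    """User X only sees posts whose PCR exceeds X's MRC."""
    return [p for p in posts if p["pcr"] > mrc]
```

So a post by a user whose PCR of 0.5 rests on 4 ratings, given one post-specific rating of 1.0, would carry a post-PCR of (0.5 x 4 + 1.0) / 5 = 0.6.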
 
 
 
===Frequency of Notification===
 
Rather than simply blocking those whose PCR falls below a threshold, users can "distance" themselves from other users by selecting how often they wish to be notified of posts based on the poster's PCR (and possibly how often they wish for ''their'' posts to generate notifications to that other user).
 
 
 
For example, if I see user A with a high PCR and user B with a low PCR, I might want to be notified immediately whenever user A posts, but only every week regarding user B's posts. (Similarly, I might want to avoid notifying user B right away if I respond, in order to further reduce the amount of time I spend interacting with them.)
 
 
 
This would promote higher engagement between higher-value users without completely excluding others from the dialogue.
 
 
 
Note that this is different from the random-selective notification of social networks such as Facebook and Google+, where some posts are simply never shown: here, all posts are eventually included in a notification, but may be time-delayed and grouped with other posts by the same user. (The random-selective approach may require substantially less computing power; a data design for an FoN system has not yet been worked out.)
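The delay tiers described above -- immediate notification for high-PCR posters, batched digests for lower ones, with every post eventually notified -- can be sketched like this. The specific thresholds and delays are illustrative assumptions, not a worked-out design:

```python
# Sketch of PCR-based notification scheduling: every post eventually
# produces a notification, but posts from low-PCR users are delayed
# (and can then be grouped into a digest).

from datetime import timedelta

DEFAULT_SCHEDULE = (
    (0.5, timedelta(0)),         # high PCR: notify immediately
    (0.0, timedelta(days=1)),    # middling PCR: daily digest
    (-1.0, timedelta(weeks=1)),  # low PCR: weekly digest
)

def notification_delay(pcr, schedule=DEFAULT_SCHEDULE):
    """Return how long to hold notifications about a poster with this
    PCR, using the first tier whose minimum the PCR meets."""
    for threshold, delay in schedule:
        if pcr >= threshold:
            return delay
    return schedule[-1][1]  # below every tier: slowest delay
```

A per-user schedule like this would replace the single block/allow bit with a dial, which is what lets "distancing" stay reversible.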
 
