Peer-to-Peer Accountability Enforcement

About

Peer-to-Peer Accountability Enforcement is a methodology for sharply reducing the problem of posting content in bad faith (including both outright verbal abuse and abuses that are harder to spot, such as sea-lioning) by allowing users to collectively delegate other trusted users to rate comments and commenters on their credibility and appropriateness. It generally increases per-user accountability for abuse, but with the source of that accountability being other users rather than a central authority (with all the bottlenecking and power-concentration that implies).
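
One way to picture the mechanism is a minimal sketch like the following, assuming a very simple model: users delegate rating authority to trusted raters, raters score commenters, and a commenter's credibility is the average of those scores weighted by how many users stand behind each rater. The names here (CommunityModel, delegate, rate, credibility, and the example users) are illustrative assumptions, not part of any InstaGov specification.

 from collections import defaultdict
 from dataclasses import dataclass, field
 
 
 @dataclass
 class CommunityModel:
     # user -> set of raters that user has delegated rating authority to
     delegations: dict = field(default_factory=lambda: defaultdict(set))
     # commenter -> list of (rater, score) pairs; score in [-1.0, +1.0]
     ratings: dict = field(default_factory=lambda: defaultdict(list))
 
     def delegate(self, user: str, rater: str) -> None:
         """A user marks another user as a trusted rater."""
         self.delegations[user].add(rater)
 
     def rate(self, rater: str, commenter: str, score: float) -> None:
         """A rater judges a commenter's credibility/appropriateness."""
         self.ratings[commenter].append((rater, score))
 
     def rater_weight(self, rater: str) -> int:
         """A rater's weight is the number of users who have delegated to them."""
         return sum(1 for trusted in self.delegations.values() if rater in trusted)
 
     def credibility(self, commenter: str) -> float:
         """Weighted average of ratings; 0.0 if nobody has rated the commenter."""
         weighted = [(self.rater_weight(r), s) for r, s in self.ratings[commenter]]
         total = sum(w for w, _ in weighted)
         if total == 0:
             return 0.0
         return sum(w * s for w, s in weighted) / total
 
 
 # Example: two users delegate to the same rater, who flags a bad-faith commenter.
 community = CommunityModel()
 community.delegate("alice", "trusted_rater")
 community.delegate("bob", "trusted_rater")
 community.rate("trusted_rater", "sea_lion_99", -0.8)
 print(community.credibility("sea_lion_99"))  # -0.8

The point of weighting raters by delegation count is that accountability still flows from the user base rather than from a central moderator; a rater nobody has delegated to carries no weight at all.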

Pages:

Related

  • /Periscope has implemented "safety features" which have a lot in common with this idea

Notes

Things that credibility management should be able to defeat or at least control:

Credibility management is beginning to look useful for rating the subjective quality of aesthetic works. Some discussion of that application is here: