==About==
 
[[Peer-to-Peer Accountability Enforcement]] is a methodology for sharply reducing the problem of posting content in bad faith (including both outright verbal abuse and abuses that are harder to spot, such as {{l/ip|sea-lioning}}). It works by allowing users to collectively delegate other trusted users to rate comments and commenters on their credibility and appropriateness. It generally increases per-user accountability for abuse, but the source of that accountability is other users rather than a central authority (with all the bottlenecking and [[power-concentration]] that a central authority implies).
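
A minimal sketch of how that delegation-and-rating loop might be modelled. The class, method names, score range, and averaging rule below are illustrative assumptions, not the project's actual design; the point is only that each user's view of a commenter's credibility is derived from the raters that user has chosen to trust.

<syntaxhighlight lang="python">
# Hypothetical sketch: users delegate rating authority to trusted raters,
# raters score individual commenters, and credibility is computed per-viewer
# from the viewer's own delegations (no central moderator).

from collections import defaultdict
from statistics import mean

class CredibilityLedger:
    def __init__(self):
        self.delegations = defaultdict(set)  # user -> raters that user trusts
        self.ratings = defaultdict(dict)     # commenter -> {rater: score}

    def delegate(self, user, rater):
        """user chooses to trust rater's judgements."""
        self.delegations[user].add(rater)

    def rate(self, rater, commenter, score):
        """rater scores commenter; assumed range -1.0 (abusive) to +1.0 (credible)."""
        self.ratings[commenter][rater] = score

    def credibility(self, viewer, commenter):
        """commenter's credibility as seen by viewer: mean of scores from the
        raters the viewer has delegated to, or None if there is no data yet."""
        scores = [s for r, s in self.ratings[commenter].items()
                  if r in self.delegations[viewer]]
        return mean(scores) if scores else None

# Example: Alice trusts Bob as a rater; Bob flags Carol's sea-lioning.
ledger = CredibilityLedger()
ledger.delegate("alice", "bob")
ledger.rate("bob", "carol", -0.8)
print(ledger.credibility("alice", "carol"))  # -0.8
</syntaxhighlight>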
 
==Pages==
<big>
* '''{{l/sub|purpose}}''' - this needs to be a bit more general
* '''{{l/sub|mechanism}}''' - the quasi-technical details
</big>
 
==Notes==
 
Things that credibility management ''should'' be able to defeat or at least control:
 
Credibility management is also beginning to look useful for rating the subjective quality of aesthetic works. Some discussion of that application is here: