Peer-to-Peer Accountability Enforcement
Purpose
It is easy for malicious, misinformed, and uncomprehending users to greatly reduce the efficacy of civil discussion. I'll refer to these collectively as "malusers" for now, although the majority are probably not deliberately or knowingly malicious.
Malusers generally fall into one or more of the following groups:
- trolls
- astroturfers
- propaganda victims
The involvement of individual malusers in a discussion frequently has the following adverse effects:
- throwing the conversation off-topic
- injecting false but believable information
- making false claims that require extensive research to refute
- failing to understand the arguments of others
It should be noted that many malusers are unreasonable only on specific topics, and entirely reasonable on others.
While blocking and banning (currently supported by most social media venues) are generally effective ways of dealing with malusers when they are identified, those techniques have a number of shortcomings:
- it is too easy to block someone who is making valid arguments that you happen to disagree with
- blocking is fundamentally hierarchical:
- one person owns a thread or post, and has the sole authority to block individuals from commenting on it
- a group of admins have the sole authority to ban individuals from posting in that group; there is also typically a single owner or founder who can demote or block admins
- blocking is a very crude level of control:
- typically, the only way to block someone from posting on a given thread is a person-to-person block -- preventing either party from seeing anything said by the other
- blocking someone from posting in a group prevents them from participating in any discussions in that group, including topics on which they are more reasonable
- once blocked, there is no reliable process by which a reformed maluser can regain posting permission
A better solution is needed.
Proposal
Improved control over malusers requires both finer granularity in the blocking system and a more even-handed, less-centralized way of deciding who needs to be restricted.
The following mechanisms should address most or all of the above shortcomings. In particular, the process of allowing a given user A to avoid posts by user B based only on the ratings of other users explicitly trusted by A should help to overcome bias and snap judgements.
While a user could still create their own personal "echo chamber" under this system, doing so would be notably harder than under existing systems, where any user has complete control over who they block -- and it would become progressively more difficult the more people one "trusts". This should help to limit the damage done by attempts at epistemic closure.
Crowdsourced Ratings
This is a sort of minimum specification; many refinements and elaborations are possible. Experimentation will determine what works best.
- Every user can rate every other user's credibility on a numeric scale (e.g. integers from -10 to +10).
- For each pair of users A and B, where A wants to know B's credibility, a Personal Credibility Rating (PCR) is calculated as follows:
- For every user C who has been given a positive rating by A (ArC) and who has rated B (CrB), the system sums all (ArC/10 x CrB/10) to produce a single number which represents, essentially, "what my friends think of this person" (a sketch of this calculation appears after this list).
- If the number of such ratings is too few, the resulting rating may be displayed in faint colors or with some other indication that it is tentative.
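To make the arithmetic concrete, here is a minimal sketch of the PCR calculation in Python. The function name, the data layout (a dictionary mapping (rater, ratee) pairs to integer ratings in -10..+10), and the cutoff of three contributing ratings for marking a result "tentative" are illustrative assumptions, not part of the specification.

# Minimal sketch of the PCR calculation described above.
# Assumed data layout: ratings maps (rater, ratee) -> integer in [-10, 10].
def personal_credibility_rating(a, b, ratings, min_ratings=3):
    """Return (pcr, tentative) for user A's view of user B."""
    total = 0.0
    count = 0
    for (rater, c), a_rates_c in ratings.items():
        if rater != a or a_rates_c <= 0:
            continue                  # only users A has rated positively count
        c_rates_b = ratings.get((c, b))
        if c_rates_b is None:
            continue                  # C has not rated B
        total += (a_rates_c / 10) * (c_rates_b / 10)
        count += 1
    # Too few contributing ratings => the UI can show the result faintly.
    return total, count < min_ratings

# Example: A trusts C at +8, C rates B at -5, so A sees B at 0.8 * -0.5 = -0.4.
print(personal_credibility_rating("A", "B", {("A", "C"): 8, ("C", "B"): -5}))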
Personalized Credibility Rating
Personalized Credibility Rating, or PCR, is my term for a way of tracking each user's credibility in the eyes of others.
It's built on a user-voting basis like those used on sites such as Reddit, but with some refinements:
- uses range (non-binary) voting
- summing is personalized for each user, rather than being global
Here's how it would work. (For the sake of brevity, I'll use "post" to mean anything posted by one user for others to see, regardless of whether it's a root-level post or a comment on a post.)
- Each post has its own PCR that is separate from the PCR of the user who posted it.
- The PCR for any given post will default to the poster's PCR.
- Any additional post-specific ratings will modify that post's PCR by some sort of weighted average.
- Experimentation will be needed to determine the best algorithm, but we could start by weighting the poster's PCR by how many individual ratings went into it (see the sketch after this list).
- Each user X only sees posts whose PCR exceeds X's Minimum Required Credibility (MRC -- see below).
- Each user X may adjust the Frequency of Notification (see below) at various levels above their MRC.
- Each user X starts with a default global MRC, but they may modify that default and they may also set specific MRCs for individual threads or groups.
- All users can rate each post's credibility.
- Optional: ratings of users' posts could have some small, cumulative influence on their overall PCR. Perhaps the influence of any given user A's ratings of B's posts should be overridden whenever A revises their overall rating of B. Experimentation needed.
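One possible weighted-average scheme is sketched below in Python, under the assumption that the poster's PCR counts as one "vote" per rating that went into it and each post-specific rating (already scaled to the -1..+1 range) counts as one additional vote; the function and parameter names are illustrative only.

# Sketch of one possible post-PCR weighting (an assumption, not a spec):
# the poster's PCR is weighted by the number of ratings behind it, and
# each post-specific rating counts as a single extra vote.
def post_pcr(poster_pcr, poster_rating_count, post_ratings):
    """Weighted average of the poster's PCR and post-specific ratings."""
    if poster_rating_count == 0 and not post_ratings:
        return 0.0                        # no information at all
    weighted_sum = poster_pcr * poster_rating_count + sum(post_ratings)
    return weighted_sum / (poster_rating_count + len(post_ratings))

# Example: a poster PCR of 0.6 built from 20 ratings, plus two post-specific
# ratings of -0.9 and -1.0, pulls the post's PCR down to about 0.46.
print(post_pcr(0.6, 20, [-0.9, -1.0]))    # ~0.459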
"MRC" refers to a setting that each user can adjust to determine the strength of post-filtering. The lower it is, the more posts they'll see -- because they'll be allowing less-credible posts through. The higher it is, the higher-credibility a post has to have in order to be visible. (There will presumably be other user-options to determine how "hidden" posts are indicated -- e.g. a summary showing how many, a list of usernames, etc.)
Frequency of Notification
Rather than simply blocking those whose PCR falls below a threshold, users can "distance" themselves from other users by selecting how often they wish to be notified of posts based on the poster's PCR (and possibly how often they wish their own posts to generate notifications to that other user).
For example, if I see user A with a high PCR and user B with a low PCR, I might want to be notified immediately whenever user A posts, but only every week regarding user B's posts. (Similarly, I might want to avoid notifying user B right away if I respond, in order to further reduce the amount of time I spend interacting with them.)
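Here is a minimal sketch of that idea in Python, assuming a few arbitrary PCR bands and delays (neither of which is specified above); every post still produces a notification eventually, just later and in a batch.

from datetime import timedelta

# Sketch of mapping a poster's PCR to a notification delay.
# The band boundaries and delay lengths are illustrative assumptions.
def notification_delay(poster_pcr):
    if poster_pcr >= 0.5:
        return timedelta(0)            # notify immediately
    if poster_pcr >= 0.0:
        return timedelta(days=1)       # fold into a daily digest
    return timedelta(weeks=1)          # fold into a weekly digest

print(notification_delay(0.8))    # 0:00:00 -- high-PCR poster, immediate
print(notification_delay(-0.4))   # 7 days, 0:00:00 -- low-PCR poster, weekly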
This would promote higher engagement between higher-value users without completely excluding others from the dialogue.
Note that this is different from the random-selective notification of social networks such as Facebook and Google+, where some posts are simply ignored: here, all posts are eventually included in a notification, but they may be time-delayed and grouped with other posts by the same user. (The random-selective system may require substantially less computing power; a data design for a FoN system has not yet been worked out.)
Notes
Credibility management is beginning to look potentially useful for rating the subjective quality of aesthetic works. Some discussion of that application is here:
- 2014-06-20 Content rating, moderation, and ranking systems: some non-brief thoughts (Edward Morbius).
- Related: 2014-09-21 Specifying a Universal Online Media Payment Syndication System
- which was a sequel to: 2014-01-08 A Modest Proposal: Universal Online Media Payment Syndication