Sunday, March 12, 2023

Self-Moderated Social Media

 #388: No Squaring Of The Moderation Circle Required

................................

What if we as a nation are on the brink of a major change in how we organize ourselves, socially and politically?  Something akin to the drafting of our nation’s constitution in 1787.


Of course I’m thinking about social media and the way we organize ourselves online.  At present, it looks likely that Twitter will grow more divisive and carry more misinformation, while Facebook remains notoriously hard to moderate.  Could there be an emerging way around these problems, one that takes social media to the next level?


Nowadays, we expect our social media feeds to show us whatever generates the most clicks, replies, and shares.  This transactional focus on what’s exciting underpins most platforms.  There’s nothing inherently wrong with that arrangement, except that we don’t yet know how to eliminate the accompanying problems of divisiveness, exploitation, and even poor taste that tend to hitch a free ride.


A recent article in the Washington Post pointed to one possible solution: a revamped algorithm that seeks out “bridges” between different points of view, and thus feeds us a more balanced, and let’s say healthy, take on any given subject.  Here's a fascinating, professionally written paper on how such a system might work.
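To make the idea concrete, here is a minimal sketch, in Python, of how a bridging-style ranking might behave.  The clustering, scoring rule, and numbers are my own illustrative assumptions, not the method from the article or the paper; the point is simply that a post liked across opposing camps can outrank a louder post liked by only one camp.

    from collections import Counter

    def bridging_score(likes_by_cluster: Counter) -> float:
        """Reward posts liked across viewpoint clusters; penalize one-sided applause."""
        total = sum(likes_by_cluster.values())
        if total == 0:
            return 0.0
        # Share of likes coming from the single most enthusiastic cluster.
        dominant_share = max(likes_by_cluster.values()) / total
        # A post liked by only one cluster scores near zero; a post liked
        # evenly across clusters keeps most of its total engagement.
        return total * (1.0 - dominant_share)

    # Post A is popular but one-sided; post B is smaller but bridges both camps.
    post_a = Counter({"cluster_left": 900, "cluster_right": 20})
    post_b = Counter({"cluster_left": 300, "cluster_right": 280})
    print(bridging_score(post_a))   # 20.0: high engagement, but no bridge
    print(bridging_score(post_b))   # 280.0: less engagement, real bridge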


A possible alternative, one that might reach the same goal by a much simpler, more direct path, is to enable and reward self-moderation on any given platform.  Ideally, multiple platforms would emerge, each offering a different style of moderation, and together they would accommodate every taste that lies within the bounds of an agreed-upon minimal decency.


This self-moderation would involve:

  * Well-defined standards set out by a given platform 

  * Specific rewards for identifying material that either fails to meet those standards, or comes close enough to deserve an official warning or commentary

  * These rewards would involve greater prominence on the platform, and if necessary, cash prizes

  * Rather than relying on thousands of hired moderators, the platform would draw in volunteers to flag content, with the posts representing the greatest potential harm prioritized for assessment by a few hundred expert moderators

  * Different flag colors could be used to predict a post’s eventual fate, with rewards scaled to each guess’s accuracy

  * The platform’s algorithm would then significantly depress the circulation of future posts from accounts that produce offending material, gradually ridding the platform of indecency (a rough sketch of this loop follows the list)
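To give a flavor of how these pieces could fit together, here is a minimal sketch, in Python, of the flag-and-reward loop.  The flag colors, point values, and reach penalties are placeholder assumptions of mine; the post doesn’t prescribe specific numbers.

    from dataclasses import dataclass

    SEVERITY = {"green": 0, "yellow": 1, "red": 2}   # fine / warning / removal

    @dataclass
    class Account:
        reputation: float = 0.0   # drives prominence and, if needed, cash prizes
        reach: float = 1.0        # multiplier applied to future circulation

    def resolve_flag(flagger: Account, author: Account,
                     guessed_color: str, expert_ruling: str) -> None:
        """Reward the volunteer by guess accuracy; depress an offender's reach."""
        error = abs(SEVERITY[guessed_color] - SEVERITY[expert_ruling])
        flagger.reputation += {0: 3.0, 1: 1.0, 2: 0.0}[error]   # exact > close > wrong
        if expert_ruling == "red":
            author.reach *= 0.5       # offending material halves future circulation
        elif expert_ruling == "yellow":
            author.reach *= 0.9       # borderline material is damped more gently

    # A volunteer correctly predicts that a post will be removed.
    volunteer, poster = Account(), Account()
    resolve_flag(volunteer, poster, guessed_color="red", expert_ruling="red")
    print(volunteer.reputation, poster.reach)   # 3.0 0.5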


Importantly, the vast majority of a platform’s content would be unaffected, apart from the increased prominence of rewarded users who demonstrably understand what constitutes indecency and its fainter shades of untoward content.