Saturday, November 27, 2021

Fixing Facebook II

#377: I Take A Second Swing

......................

Here's my first swing (#373).  After skeptical feedback, I've decided to address the issue a second time.  Helpfully, 538 posted a fix-Facebook article a few weeks ago, so here's my reaction (in green) to the various ideas that article discusses:

* Limit Re-shares

This would attack the 'viral' part of the problem.  But limiting shares would also impact positive (especially funny) posts, and let's not forget that social media, done right, serves a purpose in speaking truth to power.  So, though this might work, it's inelegant and could stifle the best reactive material.

* Curb ‘Bad Actors’

Let's assume that FB already algorithmically downgrades posts from those who peddle violent and false material.  If not, it should, though the effect will only be marginal.  The reason: the worst actors are adept at avoiding responsibility and will just post as someone else.

* More Prominent User Controls

I've been taking the year off from FB, so I'm not aware of any new controls.  But, since the worst offenders are the dumbbells who only want to cause trouble (plus the simpletons who welcome 'trouble'), our focus, unfortunately, will have to be on them.  Anyone who just wants to avoid bad content is obviously not the problem.

* Prioritize ‘Good For The World’ Content  

FB tried this out and found that it reduced time spent on the platform.  So, a non-starter, unless made law (something like this may actually happen, since the recent whistle-blowing RE: FB's integrity unit made the cover of Time magazine).

* Focus On User Interests Rather Than Friends’ Attention Grabbing

A possible first step, though if a user's interest lies in being a bad actor....

* Show Posts In Reverse Chronological Order

This is what I remember from the good old days when I had a dozen or two people on my feed.  It would take 10-20 minutes a day to keep abreast of family and friends.  But, again, this is Facebook heaven, with everyone being truthful and civic-minded.  The problem is that there's nothing to stop bad actors from setting viral wildfires that burn out of control before they're even identified.  The 538 article mentions that the FB algorithm has a hard time determining early on what's civic content, so throttling viral flare-ups can take time.


So, what's to be done?  In #373 I suggested that once a post reaches a threshold of engagement, a 1-to-10 scale would appear for users to rate the post's content.  Anything that received a less-than-passing grade would from then on carry a link explaining why the post was a bugger (in general terms, and eventually, in specific terms).  But, thanks to feedback I've received, I can say that that isn't enough.  It might be something FB would be willing to do, but it doesn't focus on the bad actors.  If a group of trouble-makers all re-shared the same post, their followers could all rate it a '10', and so avoid any repercussions until its viral nature broke through to the general public.  And how's that much different from where things stand now?
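For concreteness, here's a minimal Python sketch of that first-swing mechanic.  The threshold, passing grade, and link text are all assumptions of mine; the idea itself only specifies an engagement threshold, a 1-to-10 scale, and an explanatory link for failing posts.

```python
# A toy model of the first swing.  ENGAGEMENT_THRESHOLD, PASSING_GRADE,
# and the link text are illustrative assumptions, not anything FB uses.
from statistics import mean

ENGAGEMENT_THRESHOLD = 10_000  # engagement before the rating scale appears
PASSING_GRADE = 6.0            # averages below this attach an explanation

class Post:
    def __init__(self, content):
        self.content = content
        self.engagement = 0
        self.ratings = []            # user scores, 1 to 10
        self.explanation_link = None

    def scale_visible(self):
        # The 1-to-10 scale only shows once the post crosses the threshold.
        return self.engagement >= ENGAGEMENT_THRESHOLD

    def rate(self, score):
        if self.scale_visible() and 1 <= score <= 10:
            self.ratings.append(score)

    def update_link(self):
        # A less-than-passing average attaches a why-this-was-flagged link.
        if self.ratings and mean(self.ratings) < PASSING_GRADE:
            self.explanation_link = "/why-flagged"  # hypothetical URL
```

The sketch makes the weakness plain: if a network of trouble-makers all rate their own post a '10', the average stays passing and the link never appears.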

This brings me to a second swing: 

Use the 1-to-10 scale as a voluntary whistle-blowing mechanism to both limit bad actors and promote good critics.  Here's how it might work:

* The FB algorithm would tend to promote posts from those with critiquing skill, and demote posts from those whose critiquing is poor.

* A '5' rating on the 1-to-10 scale would be neutral.  A '6' or more would indicate a favorable review; a '4' or less, a negative one.  The closer the score is to a '10' or a '1', the more the user has at stake.

* This would incentivize users to search out and critique posts from obvious transgressors, downgrading, if not shutting down, the worst bad-actor networks--thanks to infiltrators and the curious seeking to build up their critiquing scores.  Likewise, posts that are particularly worthy and bring out our best (funny, interesting, or just fun) would be sought out and promoted.

* Yeah, but who decides what's good/bad?  Surely that's a tough call, no?  Right, but a post with enormous engagement might have only a small fraction of users rating it.  These would be volunteers taking a chance that the post was so obviously wrong-headed/worthy that they were willing to rate it poorly/positively.  Until FB moderators settled the matter, the post's averaged rating would show next to the 1-to-10 scale.  Once FB had a chance to examine the post, it would either agree with the user rating or toss the result, based on facts or objectionable content.  Ratings it agreed with would count towards a user's critiquing score (a rough sketch of this scoring follows this list).

* Not only would the most objectionable material likely disappear from FB, but those posting it would probably think twice before cranking out yet more bad-actor content, since their algorithmic score would suffer--especially for content that's in poor taste.

* Since FB could choose to put something like this idea into effect, it's worth pointing out that quite a bit would depend on how much a user's algorithmic score improved when successfully exercising critical skills.  If users could become quite influential thanks to their volunteering, the system might just work.
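To make the moving parts concrete, here's a minimal Python sketch of the second swing.  The exact arithmetic, and the idea that a wrong-side rating costs its full stake, are my assumptions; the mechanism above only says that stakes grow with distance from the neutral '5', that the averaged rating shows until moderators settle the matter, and that ratings FB agrees with count toward a critiquing score.

```python
# A toy model of the second-swing scoring.  The stake arithmetic and the
# penalty for wrong-side ratings are assumptions layered on the idea above.
from dataclasses import dataclass, field
from statistics import mean

NEUTRAL = 5  # '5' is neutral; '6' or more favorable, '4' or less negative

@dataclass
class Rating:
    user: str
    score: int  # 1 to 10

    @property
    def stake(self) -> int:
        # The closer to a '1' or a '10', the more the rater has at stake.
        return abs(self.score - NEUTRAL)

@dataclass
class Post:
    ratings: list = field(default_factory=list)

    def shown_average(self):
        # Displayed next to the 1-to-10 scale until moderators weigh in.
        return mean(r.score for r in self.ratings) if self.ratings else None

def settle(post, verdict_favorable, critiquing_scores):
    """Moderators agree with the crowd, disagree, or toss the result.

    verdict_favorable: True if FB judges the post worthy, False if it
    doesn't, None if the result is tossed (nobody's score moves).
    """
    if verdict_favorable is None:
        return
    for r in post.ratings:
        rated_favorable = r.score > NEUTRAL
        # Agreement earns the full stake; disagreement (an assumption)
        # costs it--which is what would demote poor critics in the feed.
        delta = r.stake if rated_favorable == verdict_favorable else -r.stake
        critiquing_scores[r.user] = critiquing_scores.get(r.user, 0) + delta
```

Tying the payoff to the stake captures the incentive: a timid '6' risks and earns little, while a confident '1' on an obvious transgressor's post earns four points of critiquing score once moderators confirm it.  The feed algorithm could then weight each user's posts by that score, promoting good critics and demoting poor ones.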
