The problem for Twitter and Facebook, among others, is that their platforms see hundreds of millions of posts per day, and sifting through that deluge to find and remove harmful content is difficult. The companies have come to rely on a mix of automated systems and user reports to surface questionable content for human moderators, but the process is imperfect and tweets can slip through the cracks. Even when tweets do get flagged, the moderators themselves can struggle to interpret the rules and decide what is acceptable, which has led to frequent frustration from users who say Twitter has refused to remove abusive content.
Creative solutions — Twitter has been experimenting with other ways to improve its process for combating harmful content, such as supplementing paid moderators with crowdsourcing. Birdwatch is a program it has been testing in which users themselves submit fact-checks that are appended to tweets on the platform. The program remains small, and critics have pointed out that Birdwatch participants regularly fail to offer citations that substantiate the corrections they make to tweets.
At least Twitter is trying to find solutions, though it shouldn’t get too much credit for trying to solve problems it created.
A bill introduced in Congress by Sen. Amy Klobuchar earlier this year would remove social media platforms' liability protections if they amplify public health misinformation. That could have some adverse effects, however, such as forcing platforms to become highly restrictive about what users are allowed to share. But there is some agreement that policymakers should provide guidelines on what types of content are allowable so that tech leaders aren't the ones carrying that burden.