Twitter is just now giving users a way to report tweets for misinformation
The process for reporting a tweet now includes an option to flag it as containing false information.
Twitter has begun testing an option that lets users report tweets that may contain misinformation. The new option lives inside the same flow as reporting a tweet for harassment or other harmful content. Select the dropdown menu at the top right of a tweet, choose “Report Tweet,” and a new option will prompt you to indicate whether the misleading tweet is political, health-related, or falls into another category.
Experimental — The new option is being tested in the U.S., South Korea, and Australia, and Twitter emphasizes that it won’t be able to take action on every report during the experiment. But the information it gathers could help it identify which types of misleading information have the potential to go viral, so it can catch falsehoods earlier.
Flagging a tweet under the health category includes an option for users to cite COVID-19-specific misinformation. Twitter could be responding to harsh criticism from the Biden administration, which has lambasted social media platforms for allowing COVID-19 misinformation to spread. The delta variant now spreading threatens to halt reopening plans and reverse progress made in combating the virus.
The problem for Twitter and Facebook, among others, is that their platforms see hundreds of millions of posts per day, and sifting through that deluge to find and remove harmful content is difficult. The companies have come to rely on a mix of automated systems and user reports to escalate questionable content to human moderators, but the process is imperfect and tweets can slip through the cracks. Even when tweets do get flagged, human moderators can have a hard time interpreting the rules to decide what is acceptable, which has led to frequent frustration from users who say Twitter has refused to remove abusive content.
Creative solutions — Twitter has been experimenting with other ways to improve its process for combating harmful content, such as supplementing paid moderators with crowdsourcing. Birdwatch is a program it’s been testing in which users themselves submit fact-checks that are appended to tweets on the platform. That program remains small, and critics have pointed out that Birdwatch participants regularly fail to offer citations that substantiate their corrections.
At least Twitter is trying to find solutions, though it shouldn’t get too much credit for trying to solve problems it created.
A bill introduced in Congress by Sen. Amy Klobuchar earlier this year would strip social media platforms of their liability protections if they amplify public health misinformation. That could have some adverse effects, however, like forcing platforms to become highly restrictive about what can be shared. But there is some agreement that policymakers should provide guidelines on what types of content are allowable so that tech leaders aren’t the ones carrying that burden.