Culture
Facebook let advertisers target users interested in pseudoscience even during the pandemic
78M
The number of users Facebook's ad portal listed under the "pseudoscience" interest category.
Facebook has long allowed advertisers to target users interested in "pseudoscience," despite claiming large-scale efforts to combat misinformation on the platform, an investigation from The Markup reveals. The pseudoscience targeting category — which has now been eliminated — contained 78 million users on Facebook's ad portal.
Facebook has pledged many times to fight the misinformation spreading on its massive social network — as recently as last week, when CEO Mark Zuckerberg published a public post on the topic. Zuckerberg says Facebook has taken down "hundreds of thousands of pieces of misinformation" related to COVID-19, such as claims that drinking bleach cures the virus or that physical distancing is ineffective.
And yet, as this report reveals, Facebook has been letting plenty of questionable content through its ad filters under other category names. The company removed the pseudoscience category only after The Markup reached out for comment.
This is not the first time Facebook has been caught applying double standards to what's allowed on its network. It likely won't be the last, either. It sounds obvious at this point, but it bears repeating: Facebook needs to overhaul its moderation policies if it truly cares about fighting misinformation.
Way too easy — Both Facebook and Instagram operate on a model that prioritizes ease of access. Anyone can log in and buy ads with just a few clicks. This makes sense for Facebook: easy buying means increased profits.
As part of its investigation, The Markup purchased ads on both Instagram and Facebook and targeted them at users flagged as interested in pseudoscience. Within minutes, the ads were approved and ready to reach millions.
Let’s pull out the receipts — Facebook has a storied history of allowing fringe content and conspiracy theories on its networks, despite insistent claims to the contrary.
For example, just last year, a report from The Guardian showed that Facebook ads could be purchased to target users interested in "vaccine controversies." Similarly, a 2017 ProPublica report revealed that terms such as "Jew hater" and "History of why Jews ruin the world" were purchasable ad categories, among others.
More recently, we’ve seen Facebook struggle to keep up with even the most basic ads touting false COVID-19 information. It’s increasingly obvious that Facebook’s solutions for content moderation don’t match up with its messaging about curbing misinformation.
The Markup located at least 67 Facebook groups created to spread coronavirus misinformation. Some had already received warnings from Facebook, so they simply changed their names to evade its filters. Facebook says it's in the process of reviewing these groups — but why should it take a third-party investigation for that to happen?
And the problem extends well beyond user-created groups. Nonprofit advocacy group Avaaz found that 104 pieces of coronavirus-related misinformation had been viewed more than 117 million times on Facebook. Consumer Reports managed to schedule seven paid ads pushing false claims about COVID-19, all of which Facebook approved.
Reactive, rather than proactive — What we’re seeing here is a classic tactic from Facebook: let almost everything slide and then deal with it later.
This reactive process is downright harmful. It lets misinformation spread across Facebook and Instagram; posts are allowed to do their damage before being taken down. If Facebook cared about its users as much as it claims, it would prevent this information from being promoted in the first place. Instead, the company prefers band-aids, like adding vague location data as a gesture toward transparency.
Beyond misinformation, Facebook is also letting users organize coordinated anti-quarantine protests disguised as grassroots campaigns. The company says the groups don't violate its policies.
The Markup's investigation is just the latest entry in a growing body of evidence against Facebook's claims of fighting misinformation. The platform's continued tolerance of conspiracy theories and misinformation reveals its failure to govern itself. And that will only get worse if something isn't done soon.