Lawmakers say Facebook’s deepfake policy doesn’t go far enough
The policy raises concerns that Facebook isn’t able to regulate itself.
It’s only been a day since Facebook announced it’s banning deepfakes, and lawmakers are already criticizing the new policy. Their criticism is simple: Facebook’s stance doesn’t go far enough to stop the spread of misinformation on the site.
Why won’t Facebook just take the videos down? — That’s the central question lawmakers and users alike are asking. Yes, videos that meet certain criteria will be taken down; for the rest, Facebook is falling back on its tried-and-true method of simply labeling them as false information. That doesn’t stop users from viewing the content and drawing their own conclusions.
Leaving a lot out — Comments from a hearing held by the House Energy & Commerce subcommittee show a marked lack of confidence in Facebook’s policy against deepfakes. The subcommittee’s chairwoman, Democrat Jan Schakowsky, cited evidence that big tech has failed to regulate itself. Others at the hearing pointed out that consumers are losing faith in their ability to tell which sources they can trust online.
Facebook says it understands the risks of misinformation — Facebook’s vice president of global policy management, Monika Bickert, said the company’s latest policy “is designed to prohibit the most sophisticated attempts to mislead people.” She also said enforcement has gotten better, even if it’s not perfect.
Which, of course, raises the question: if Facebook knows its policies could be better, why isn’t it doing more to actually improve them?