Culture
YouTube will no longer let advertisers target hateful terms like "white power"
The company previously allowed hate speech terms to be used for keyword-based ad targeting.
Google this week blocked a slew of hateful keywords from being used for advertisement targeting on YouTube. The move follows a report in The Markup which found that advertisers could search for terms like “white power” to find videos and channels for placement.
Tripping over itself — After The Markup reached out to Google, the company apologized and blocked 44 harmful terms from being searchable. “We fully acknowledge that the functionality for finding ad placements in Google Ads did not work as intended,” said a spokesperson. “These terms are offensive and should not have been searchable.” The Markup followed up with more harmful terms that it found, which Google quietly blocked without further response.
Google told The Verge that ads never actually ran against the search terms in question because of other enforcement layers that dictate the type of content allowed in ads. YouTube also regularly removes videos that contain hate speech, and bars users from monetizing their videos if they repeatedly brush up against its hate speech policies.
While Google did allow advertisers to search for keyword terms like “white lives matter” — a phrase that could reasonably be construed as a racist response to recent social justice movements — it blocked “Black Lives Matter” as a keyword search. YouTube last year created a $100 million fund to support Black content creators.
Google blocks advertising against a host of keywords in part because major brands often don’t want to be associated with anything controversial or harmful for fear of alienating customers. But placing a blanket ban on “Black Lives Matter” alongside one on “white power” highlights the mess that tech platforms find themselves trying to navigate.
Duct-tape — Google regularly touts its progress in cracking down on hate speech, but journalists and the public nonetheless keep surfacing questionable content with little effort. Internet platforms are engineered by small groups of people but designed to scale to billions of users, which makes it impossible for humans to review every piece of content and decide whether it’s safe. Artificial intelligence has been billed as the answer, but it’s far from perfect, as it fails to understand the complexities of human speech. There will likely always be harmful content that gets missed.
The Markup says in its report that the way Google blocked the new terms makes it difficult to hold the company accountable, because it has obscured whether or not the terms are actually blocked:
However, the way Google blocked those and other newly blocked words now make the responses in the code indistinguishable from the responses for gibberish. Because it’s now impossible to know for certain which terms are blocked, Google has shielded itself from future scrutiny of its keyword blocks on Google Ads.
YouTube says it blocked or removed more than 867 million ads last year for trying to evade its detection systems, and blocked or removed three billion ads in total for violating its content policies. But that doesn’t give a sense of how many harmful ads slipped through the cracks.