Report: Google and YouTube content moderators say job gave them PTSD
The consequences of watching the worst of the web.
More than a dozen Google and YouTube content moderators spoke to The Verge’s Casey Newton about the deleterious effects that screening the worst content on the web has had, and continues to have, on their lives. The short version? While some people may be more resilient than others to extreme violence, child abuse, and related horrors, almost everyone exposed to such material for long enough is eventually adversely affected.
Ultraviolence and other horrorshows — Aside from chronic anxiety, panic attacks, and PTSD, reports include one moderator collapsing on the job and another being “hospitalized for an acute vitamin deficiency.” So, basically, scurvy. Being put off your lunch occasionally is understandable when your job is keeping mass-shooting videos off the platform. But being malnourished suggests something far more malign.
Outsourced, out of mind — Like Facebook, Google leaves much of this essential moderation work to contractors. That ensures looking after moderators’ well-being isn’t the search giant’s problem. It also means most moderators not only earn about half of what full-time Googlers do, they also miss out on the support full-timers get, like access to mental health professionals and paid time off.
One of Google’s largest moderation facilities is run by Accenture in Austin, Texas, and employees there say they’re expected to vet 120 videos in five hours each day. When there’s a surge in content needing moderation, that quota can ratchet up even further, and early promises of a relaxed work environment have gradually evaporated as the company has made moves like banning smartphones.
Gray days — Better pay and access to mental health services may help moderators cope, but they’re not going to make the problems associated with content moderation go away. Moreover, content in need of moderation isn’t going anywhere. What’s needed are better methods to identify and automatically block certain types of content, along with stronger incentives for companies to invest in them.
Google is itself testing various techniques to make disturbing content more palatable, including blurring videos or displaying them in grayscale, but it’s too early to tell how effective these approaches are.
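For a rough sense of what that kind of preprocessing looks like in practice, here’s a minimal sketch in Python using OpenCV. To be clear, this is not Google’s actual tooling; the function name, file name, and blur strength are all illustrative assumptions.

```python
# A minimal sketch of grayscale + blur preprocessing for a review queue.
# NOT Google's pipeline; names and parameters here are assumptions.
import cv2

def soften_frame(frame, blur_kernel=(25, 25)):
    """Strip color and soften detail to reduce a frame's visual impact."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # drop color information
    return cv2.GaussianBlur(gray, blur_kernel, 0)   # blur fine detail

# Apply the filter frame-by-frame to a queued video (hypothetical file).
cap = cv2.VideoCapture("queued_video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    softened = soften_frame(frame)
    # ...show `softened` in the review UI instead of the raw frame
cap.release()
```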
More obvious solutions would be assessing would-be moderators more thoroughly, implementing early-warning systems to recognize burnout before it happens, and providing better support when it does. Then there’s the most potent but least likely to be implemented fix of all: investing in more people to do moderation work for fewer hours, so each moderator is subjected to less nightmare-inducing fare.