Coronavirus
YouTube's use of AI to police COVID-19 content means more flagged videos for creators
The change won’t affect current monetization policies, though.
YouTube has announced it will turn to machine learning as more of its content moderators self-isolate because of the coronavirus. The automated review process will likely remove more videos than human reviewers would, so YouTube won’t apply strikes to accounts except in extreme cases of clear violations. The change should prevent a moderation backlog and won’t affect monetization.
Last week, the company partially lifted its monetization ban on coronavirus videos for news organizations and a limited number of creators it deemed responsible. Previously, the streaming video platform had demonetized coronavirus-related content after some unscrupulous YouTubers tried to game the system by stuffing coronavirus mentions into video titles and tags to drive traffic to their channels.
How does this impact creators? — Those eligible for monetization will continue to earn money from their videos, and if a video is incorrectly flagged by the AI-powered moderation tools, they won’t get a punitive strike on their account. Normally, if a creator gets three strikes in a 90-day period, their channel is terminated. Waiving strikes for machine-reviewed videos with zero or low-level violations could keep high-volume creators, like daily vloggers, in the clear. Creators can still appeal video and account takedowns, but responses may take longer than usual given YouTube's curtailed, and no doubt strained, workforce.
How will this change YouTube? — Unreviewed videos won’t show up in search results or recommendations. The company is also being more cautious about how it promotes videos in general to avoid spreading misinformation. Frankly, an increase in circumspection from YouTube and other sources of information is welcome. It's a pity it took a pandemic to prompt it, though.