After sending content moderators home, YouTube doubled its video removals

The company said it had to "accept a lower level of accuracy" to protect YouTube users when it relied more heavily on algorithmic moderation.

YouTube opted to over-enforce its policies in order to prioritize safety on the platform.

Photo: Szabo Viktor/Unsplash

When YouTube sent content moderators home in March due to the COVID-19 pandemic, it dramatically expanded its use of automated filters — and that led to twice as many videos being taken down in the second quarter of 2020 as the first. The spike stems from YouTube's decision to "cast a wider net" for potentially violative videos in the absence of human moderators, and highlights the imperfect science of automatically policing content.

"When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement," the company wrote in a blog post accompanying its second quarter transparency report. "Because responsibility is our top priority, we chose the latter — using technology to help with some of the work normally done by reviewers."

YouTube removed more content last quarter in all but two categories: hateful videos and videos that encourage harmful or dangerous activities. But in the most sensitive content categories, including violent extremist content and content that could jeopardize child safety, YouTube saw a threefold increase in the number of videos it removed. YouTube explained that's because the company "accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible."

This means, of course, that YouTube removed plenty of videos that didn't actually violate its policies, leading to roughly double the number of appeals: from around 166,000 in the first quarter to 325,000 in the second. The number of videos reinstated after appeal also nearly quadrupled, from around 41,000 in the first quarter to 161,000 in the second.

YouTube's transparency report comes on the heels of a similar report from Facebook, which described markedly different results. Like YouTube, Facebook also opted to send its content moderators home in March. But unlike YouTube, which removed more content last quarter in almost every category, Facebook and Instagram saw steep declines — including in some of the most sensitive categories it polices.

On Instagram, for example, the company removed about half as much child sexual abuse material in the second quarter as it did in the first, while removals of suicide-related content fell by a whopping 79%. That's not because there was less of it. According to Facebook, moderators were unable to review this graphic imagery at home, and therefore couldn't log it in Facebook's automated systems, which is how the company searches the platform and removes exact matches that pop up in the future. That means much of the content that would normally be removed was left online.
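The exact-match mechanism Facebook describes can be sketched in a few lines. This is a minimal illustration, not Facebook's actual system: all class and function names here are hypothetical, and it uses a plain cryptographic hash, which catches only byte-for-byte copies (production systems such as Microsoft's PhotoDNA use perceptual hashing to also catch near-duplicates).

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Digest that identifies an exact copy of a piece of content."""
    return hashlib.sha256(content).hexdigest()

class MatchDatabase:
    """Hypothetical bank of fingerprints logged by human reviewers."""

    def __init__(self) -> None:
        self._known_violations: set[str] = set()

    def log_violation(self, content: bytes) -> None:
        # A human reviewer confirms the content violates policy,
        # so its fingerprint is stored for future automated matching.
        self._known_violations.add(fingerprint(content))

    def is_known_violation(self, content: bytes) -> bool:
        # New uploads are checked against the bank; exact copies are
        # removed automatically, with no further human review needed.
        return fingerprint(content) in self._known_violations
```

The dependency the article describes follows directly from this design: if reviewers can't call the equivalent of `log_violation`, the bank stops growing, and copies of new violating content sail past the automated check.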

A YouTube spokesperson said the company ran into the same problem. But it compensated for that issue by removing far more content overall than it otherwise would have. Anticipating that would lead to a spike in appeals — and it did — YouTube maintained a skeleton crew to process appeals in a timely fashion, eventually reinstating about half of the videos whose removals were appealed.

Facebook, by contrast, scaled back appeals, suspending them entirely in some sensitive content categories, like violent and graphic content. That led to a massive drop in appeals and the amount of content that was restored after removal in almost every category. "We couldn't always offer [appeals]," Facebook's vice president of integrity Guy Rosen said on a call with reporters earlier this month. "We still gave people an option to tell us that they disagreed with our decision on a piece of content and our teams looked at these signals in aggregate to find potential issues and restore content where appropriate."

The comparison between Facebook and YouTube isn't exact. For one thing, YouTube doesn't report as much granular information as Facebook does: While Facebook shares stats on the amount of child nudity and sexual exploitation content it removes, for example, YouTube shares information more broadly on child safety, a category that also includes risky challenges and videos that could broadly "endanger minors." For another, Facebook did see a much bigger jump in the amount of hate speech it removed last quarter, compared to YouTube.

And yet, the two reports still illustrate an important point about how the COVID-19 era has affected what people see — and don't — online. Facebook and YouTube often get lumped together as two social networks filled with the same filth, both using a combination of AI and low-wage contractors to rid their platforms of problematic posts. But over the last six months, the two companies have taken different approaches to the same problem, and those approaches have yielded dramatically different outcomes.

Where YouTube has risked silencing users who have done nothing wrong, Facebook has risked not silencing them fast enough in the name of maintaining accuracy. Neither approach is perfect. Both show just how far automated systems still have to go.


Is it legal to fire someone while they’re on parental leave?

Twitter is in chaos right now. But that’s still not a good reason to fire someone while they’re on parental leave.

Kayvon Beykpour was terminated during his parental leave.

Screenshot: Twitter

This week, Twitter fired its head of Consumer, Kayvon Beykpour, in the latest shakeup related to the Elon Musk deal.

According to Beykpour’s tweet, the senior executive was on paternity leave after welcoming a daughter last month. This brings up a lot of questions around the ethics — and legality — of firing someone while they’re on parental leave.

Michelle Ma

Michelle Ma (@himichellema) is a reporter at Protocol, where she writes about management, leadership and workplace issues in tech. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.


Crypto is crumbling, and DeFi hacks are getting worse

The amount of crypto stolen in the first quarter of 2022 has already surpassed criminal hackers’ 2021 haul. There aren’t any easy fixes.

The biggest hacks of 2022 were carried out by attackers spotting vulnerabilities in smart contracts and protocols, especially in cross-chain bridges and flash loan protocols.

Illustration: Christopher T. Fong/Protocol

Until recently, DeFi seemed to be on an exponential upward trajectory. With the collective value of crypto peaking near $3 trillion, hackers saw a big opportunity. The only thing that may slow them down is the precipitous drop in the value of the tokens they're going after.

DeFi hacks have been getting worse and worse, with no clear solutions in sight. According to a recent report by blockchain security firm PeckShield, the amount of money netted from DeFi hacks in the first four months of 2022, $1.57 billion, has already surpassed the amount netted in all of 2021, $1.55 billion. A report by Chainalysis found a similar trend, with the hacker haul in the first three months of 2022 exceeding a record set in the third quarter of 2021.

Lindsey Choo
Lindsey Choo is a San Francisco-based reporter covering fintech. She is a graduate of UC San Diego, where she double majored in communications and political science. She has previously covered healthcare issues for the Center for Healthy Aging and was a senior staff writer for The UCSD Guardian. She can be reached at lchoo@protocol.com.

Privacy by Design laws will kill your data pipelines

The legislation could make old data pipelines more trouble than they’re worth.

Data pipelines have become so unwieldy that companies might not even know if they are complying with regulations.

Image: Andriy Onufriyenko/Getty Images

A car is totaled when the cost to repair it exceeds its total value. By that logic, Privacy by Design legislation could soon be totaling data pipelines at some of the most powerful tech companies.

Those pipelines were developed well before the advent of more robust user privacy laws, such as the European Union's GDPR (2018) and the California Consumer Privacy Act (2020). Their foundational architectures were therefore designed without certain privacy-preserving principles in mind, including k-anonymity and differential privacy.
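To make the first of those principles concrete: a dataset is k-anonymous when every combination of quasi-identifiers (fields like ZIP code and age band that could re-identify someone when combined) appears in at least k records. A minimal, hypothetical check — not any company's actual tooling — might look like this:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every quasi-identifier combination occurs in
    at least k records, so no individual stands out in a group < k."""
    combos = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in combos.values())

# Illustrative records: two people share a quasi-identifier group,
# but one person is alone in theirs.
records = [
    {"zip": "94107", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "94107", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "10001", "age_band": "40-49", "diagnosis": "C"},
]
```

Here `is_k_anonymous(records, ["zip", "age_band"], 2)` is False, because the third record's quasi-identifier group contains only one person. Retrofitting checks like this onto a sprawling legacy pipeline — rather than designing them in — is exactly the cost the "totaled" analogy is pointing at.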

Hirsh Chitkara

Hirsh Chitkara (@HirshChitkara) is a reporter at Protocol focused on the intersection of politics, technology and society. Before joining Protocol, he helped write a daily newsletter at Insider that covered all things Big Tech. He's based in New York and can be reached at hchitkara@protocol.com.


Why AI-powered ransomware could be 'terrifying'

Hiring AI experts to automate ransomware could be the next step for well-funded ransomware groups that are seeking to scale up their attacks.

Ransomware gangs don’t have AI ransomware. At least not yet.

Photo: Max Duzij/Unsplash

In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn't been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

Kyle Alspach

Kyle Alspach (@KyleAlspach) is a senior reporter at Protocol, focused on cybersecurity. He has covered the tech industry since 2010 for outlets including VentureBeat, CRN and the Boston Globe. He lives in Portland, Oregon, and can be reached at kalspach@protocol.com.
