Power

After sending content moderators home, YouTube doubled its video removals

The company said it had to "accept a lower level of accuracy" to protect YouTube users when it relied more heavily on algorithmic moderation.

YouTube opted to over-enforce its policies in order to prioritize safety on the platform.

Photo: Szabo Viktor/Unsplash

When YouTube sent content moderators home in March due to the COVID-19 pandemic, it dramatically expanded its use of automated filters — and that led to twice as many videos being taken down in the second quarter of 2020 as the first. The spike stems from YouTube's decision to "cast a wider net" for potentially violative videos in the absence of human moderators, and highlights the imperfect science of automatically policing content.

"When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement," the company wrote in a blog post accompanying its second quarter transparency report. "Because responsibility is our top priority, we chose the latter — using technology to help with some of the work normally done by reviewers."

YouTube removed more content last quarter in all but two categories: hateful videos and videos that encourage harmful or dangerous activities. But in the most sensitive content categories, including violent extremist content and content that could jeopardize child safety, YouTube saw a threefold increase in the number of videos it removed. YouTube explained that's because the company "accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible."

This means, of course, that YouTube removed plenty of videos that didn't actually violate its policies, leading to roughly double the number of appeals, from around 166,000 in the first quarter to 325,000 in the second. The number of videos reinstated after appeal nearly quadrupled over the same period, from around 41,000 to 161,000.

YouTube's transparency report comes on the heels of a similar report from Facebook, which described markedly different results. Like YouTube, Facebook also opted to send its content moderators home in March. But unlike YouTube, which removed more content last quarter in almost every category, Facebook and Instagram saw steep declines — including in some of the most sensitive categories it polices.

On Instagram, for example, the company removed about half as much child sexual abuse material in the second quarter as it did in the first, while removals of suicide-related content fell by a whopping 79%. That's not because there was less of it: According to Facebook, moderators were unable to review this graphic imagery at home, and therefore couldn't log it in Facebook's automated systems, which the company relies on to find and remove exact matches that pop up in the future. That means much of the content that would normally be removed was left online.

A YouTube spokesperson said the company ran into the same problem. But it compensated by removing far more content overall than it otherwise would have. Anticipating that this would lead to a spike in appeals (and it did), YouTube maintained a skeleton crew to process appeals in a timely fashion, eventually reinstating about half of the videos that were appealed.

Facebook, by contrast, scaled back appeals, suspending them entirely in some sensitive content categories, like violent and graphic content. That led to a massive drop in appeals and the amount of content that was restored after removal in almost every category. "We couldn't always offer [appeals]," Facebook's vice president of integrity Guy Rosen said on a call with reporters earlier this month. "We still gave people an option to tell us that they disagreed with our decision on a piece of content and our teams looked at these signals in aggregate to find potential issues and restore content where appropriate."

The comparison between Facebook and YouTube isn't exact. For one thing, YouTube doesn't report as much granular information as Facebook does: While Facebook shares stats on the amount of child nudity and sexual exploitation content it removes, for example, YouTube shares information more broadly on child safety, a category that also includes risky challenges and videos that could broadly "endanger minors." For another, Facebook did see a much bigger jump in the amount of hate speech it removed last quarter, compared to YouTube.

And yet, the two reports still illustrate an important point about how the COVID-19 era has affected what people see, and don't see, online. Facebook and YouTube often get lumped together as two social networks filled with the same filth, both using a combination of AI and low-wage contractors to purge problematic posts from their platforms. But over the last six months, the two companies have taken different approaches to the same problem, and those approaches have yielded dramatically different outcomes.

Where YouTube has risked silencing users who have done nothing wrong, Facebook has risked acting too slowly against those who have, in the name of maintaining accuracy. Neither approach is perfect. Both show just how far automated systems still have to go.

Theranos’ investor pitches go on trial

Prosecutors in the Elizabeth Holmes fraud case are now highlighting allegations the company sought to mislead investors.

Photo: David Paul Morris/Bloomberg via Getty Images

The Theranos trial continued this week with testimony from Daniel Edlin, a former product manager at the blood-testing startup, and Shane Weber, a scientist from Pfizer. Their testimonies appeared to bolster the government's argument that Holmes intentionally defrauded investors and patients.

The fresh details about audacious, unproven claims regarding the viability of Theranos' blood tests, and about efforts to conceal errors when demonstrating testing equipment, added to the evidence against Holmes, who is accused of fraud in her role leading the company.

Aisha Counts

Aisha Counts (@aishacounts) is a reporting fellow at Protocol, based out of Los Angeles. Previously, she worked for Ernst & Young, where she researched and wrote about the future of work, emerging technologies and startups. She is a graduate of the University of Southern California, where she studied business and philosophy. She can be reached at acounts@protocol.com.

The way we work has fundamentally changed. COVID-19 upended business dealings and office work processes, putting into hyperdrive a move toward digital collaboration platforms that allow teams to streamline processes and communicate from anywhere. According to the International Data Corporation (IDC), revenue for worldwide collaboration applications increased 32.9% from 2019 to 2020, reaching $22.6 billion; the market is expected to reach $50.7 billion by 2025.

"While consumers and early adopter businesses had widely embraced collaborative applications prior to the pandemic, the market saw five years' worth of new users in the first six months of 2020," said Wayne Kurtzman, research director of social and collaboration at IDC. "This has cemented collaboration, at least to some extent, for every business, large and small."

Kate Silver

Kate Silver is an award-winning reporter and editor with 15-plus years of journalism experience. Based in Chicago, she specializes in feature and business reporting. Kate's reporting has appeared in the Washington Post, The Chicago Tribune, The Atlantic's CityLab, Atlas Obscura, The Telegraph and many other outlets.

Protocol | Policy

8 takeaways from states’ new filing against Google

New details have been unsealed in the states' antitrust suit against Google for anticompetitive behavior in the ads market.

Google is facing complaints by government competition enforcers on several fronts.

Photo: Drew Angerer/Getty Images

Up to 22%: That's the fee Google charges publishers for sales on its online ad exchanges, according to newly unredacted details in a complaint by several state attorneys general.

The figure is just one of many details that a court allowed the states to unveil Friday. Many had remained more or less secret inside Google and the online publishing industry, despite prior legal complaints and intense public interest.

Ben Brody

Ben Brody (@BenBrodyDC) is a senior reporter at Protocol focusing on how Congress, courts and agencies affect the online world we live in. He formerly covered tech policy and lobbying (including antitrust, Section 230 and privacy) at Bloomberg News, where he previously reported on the influence industry, government ethics and the 2016 presidential election. Before that, Ben covered business news at CNNMoney and AdAge, and all manner of stories in and around New York. He still loves appearing on the New York news radio he grew up with.

Protocol | Workplace

This tech founder uses a converted Sprinter van as an office on wheels

The CEO of productivity startup Rock likes to work on the road. Here's how he does it, starting with three different Wi-Fi hotspots.

Kenzo Fong, founder and CEO of the 20-person productivity software startup Rock, has been working out of his converted Mercedes-Benz Sprinter van since the pandemic began.

Photo: Kenzo Fong/Rock

Plenty of techies have started companies in garages. Try running a startup from a van.

In San Francisco, one software company founder has been using a converted Mercedes-Benz Sprinter van — picture an Amazon delivery vehicle — as a mobile office.

Allison Levitsky
Allison Levitsky is a reporter at Protocol covering workplace issues in tech. She previously covered big tech companies and the tech workforce for the Silicon Valley Business Journal. Allison grew up in the Bay Area and graduated from UC Berkeley.
Protocol | Policy

Most Americans want AI regulation — and they want it yesterday

In a poll, people said they wanted to see artificial intelligence technologies develop in the U.S. — alongside rules governing their use.

U.S. lawmakers have only just begun the long process of regulating the use of AI.

Photo: Louis Velazquez/Unsplash

Nearly two-thirds of Americans want the U.S. to regulate the development and use of artificial intelligence in the next year or sooner, with half saying that regulation should have begun yesterday, according to a Morning Consult poll. Another 13% say that regulation should start in the next year.

"You can thread this together," Austin Carson, founder of new nonprofit group SeedAI and former government relations lead for Nvidia, said in an email. "Half or more Americans want to address all of these things, split pretty evenly along ideological lines."

Ben Brody

