Power

After sending content moderators home, YouTube doubled its video removals

The company said it had to "accept a lower level of accuracy" to protect YouTube users when it relied more heavily on algorithmic moderation.

YouTube opted to over-enforce its policies in order to prioritize safety on the platform.

When YouTube sent content moderators home in March due to the COVID-19 pandemic, it dramatically expanded its use of automated filters — and that led to twice as many videos being taken down in the second quarter of 2020 as the first. The spike stems from YouTube's decision to "cast a wider net" for potentially violative videos in the absence of human moderators, and highlights the imperfect science of automatically policing content.

"When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement," the company wrote in a blog post accompanying its second quarter transparency report. "Because responsibility is our top priority, we chose the latter — using technology to help with some of the work normally done by reviewers."

YouTube removed more content last quarter in all but two categories: hateful videos and videos that encourage harmful or dangerous activities. But in the most sensitive content categories, including violent extremist content and content that could jeopardize child safety, YouTube saw a threefold increase in the number of videos it removed. YouTube explained that's because the company "accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible."
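In classification terms, "casting a wider net" means lowering the confidence threshold at which a video is removed automatically: recall goes up, precision goes down. The sketch below is a toy illustration of that tradeoff, not a description of YouTube's actual system; the scores, thresholds and the `auto_remove` helper are all hypothetical.

```python
# Toy illustration of the over- vs. under-enforcement tradeoff.
# A classifier scores each video on how likely it is to violate policy,
# and the platform automatically removes anything above a threshold.

def auto_remove(videos, threshold):
    """Return the videos whose violation score clears the threshold."""
    return [v for v in videos if v["score"] >= threshold]

# Hypothetical classifier output: a score plus the ground truth.
videos = [
    {"id": 1, "score": 0.95, "violative": True},
    {"id": 2, "score": 0.80, "violative": True},
    {"id": 3, "score": 0.70, "violative": False},  # borderline false positive
    {"id": 4, "score": 0.55, "violative": True},   # borderline true positive
    {"id": 5, "score": 0.30, "violative": False},
]

for threshold in (0.9, 0.5):  # strict review-first bar vs. the "wider net"
    removed = auto_remove(videos, threshold)
    caught = sum(v["violative"] for v in removed)
    print(f"threshold={threshold}: removed {len(removed)} videos, "
          f"{caught} violative, {len(removed) - caught} wrongful takedowns")
```

Dropping the threshold from 0.9 to 0.5 catches two more genuinely violative videos, but it also takes down one that broke no rules. At platform scale, that is exactly the pattern in YouTube's numbers: more removals, and more wrongful ones.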

This means, of course, that YouTube removed plenty of videos that didn't actually violate its policies, roughly doubling the number of appeals: from around 166,000 in the first quarter to 325,000 in the second. The number of videos reinstated after appeal nearly quadrupled over the same period, from around 41,000 to 161,000.

YouTube's transparency report comes on the heels of a similar report from Facebook, which described markedly different results. Like YouTube, Facebook also opted to send its content moderators home in March. But unlike YouTube, which removed more content last quarter in almost every category, Facebook and Instagram saw steep declines — including in some of the most sensitive categories it polices.

On Instagram, for example, the company removed about half as much child sexual abuse material in the second quarter as it did in the first, while removals of suicide-related content fell by a whopping 79%. That's not because there was less of it: According to Facebook, moderators were unable to review this graphic imagery at home, and therefore couldn't log it in Facebook's automated systems, which is how the company finds and removes exact matches of known violating content that pop up later. That means much of the content that would normally be removed was left online.
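The mechanism Facebook is describing is hash matching: once a human reviewer confirms a violation, the content's fingerprint is logged, and any later upload with the same fingerprint can be removed automatically. Here is a simplified sketch of the idea, assuming a plain cryptographic hash for exact matches (production systems typically add perceptual hashing to catch near-duplicates as well); the function names are illustrative.

```python
import hashlib

# Fingerprints of content a human reviewer has confirmed as violating.
known_violative_hashes = set()

def fingerprint(content: bytes) -> str:
    """Hash the raw bytes; identical uploads yield identical fingerprints."""
    return hashlib.sha256(content).hexdigest()

def log_violative(content: bytes) -> None:
    """The human-review step that stalled when moderators were sent home:
    without it, no new fingerprints enter the database."""
    known_violative_hashes.add(fingerprint(content))

def should_auto_remove(upload: bytes) -> bool:
    """Automatically catch any upload that exactly matches logged content."""
    return fingerprint(upload) in known_violative_hashes

# A reviewed clip gets logged; an identical re-upload is caught automatically,
# but novel violating content sails past until a human reviews it.
clip = b"raw bytes of a reviewed video"
log_violative(clip)
assert should_auto_remove(clip)
assert not should_auto_remove(b"raw bytes of a brand-new video")
```

With no new fingerprints being logged, the automated systems had nothing fresh to match against, which is why Facebook's removal numbers fell rather than rose.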

A YouTube spokesperson said the company ran into the same problem. But it compensated by removing far more content overall than it otherwise would have. Anticipating that the wider net would lead to a spike in appeals — and it did — YouTube maintained a skeleton crew to process them in a timely fashion, eventually reinstating about half of the videos whose removals were appealed.

Facebook, by contrast, scaled back appeals, suspending them entirely in some sensitive content categories, like violent and graphic content. That led to a massive drop in appeals and the amount of content that was restored after removal in almost every category. "We couldn't always offer [appeals]," Facebook's vice president of integrity Guy Rosen said on a call with reporters earlier this month. "We still gave people an option to tell us that they disagreed with our decision on a piece of content and our teams looked at these signals in aggregate to find potential issues and restore content where appropriate."

The comparison between Facebook and YouTube isn't exact. For one thing, YouTube doesn't report as much granular information as Facebook does: While Facebook shares stats on the amount of child nudity and sexual exploitation content it removes, for example, YouTube shares information more broadly on child safety, a category that also includes risky challenges and videos that could broadly "endanger minors." For another, Facebook did see a much bigger jump in the amount of hate speech it removed last quarter, compared to YouTube.

And yet, the two reports still illustrate an important point about how the COVID-19 era has affected what people see — and don't — online. Facebook and YouTube often get lumped together as two social networks filled with the same filth, both using a combination of AI and low-wage contractors to rid their platforms of problematic posts. But over the last six months, the two companies have taken different approaches to the same problem, and those approaches have yielded dramatically different outcomes.

Where YouTube has risked silencing users who have done nothing wrong, Facebook has risked leaving harmful content up in the name of maintaining accuracy. Neither approach is perfect. Both show just how far automated systems still have to go.

Fintech

Judge Zia Faruqui is trying to teach you crypto, one ‘SNL’ reference at a time

His decisions on major cryptocurrency cases have quoted "The Big Lebowski," "SNL," and "Dr. Strangelove." That’s because he wants you — yes, you — to read them.

The ways Zia Faruqui (right) has weighed in on cases that have come before him can give lawyers clues as to which legal frameworks will pass muster.

“Cryptocurrency and related software analytics tools are ‘The wave of the future, Dude. One hundred percent electronic.’”

That’s not a quote from "The Big Lebowski" — at least, not directly. It’s a quote from a Washington, D.C., district court memorandum opinion on the role cryptocurrency analytics tools can play in government investigations. The author is Magistrate Judge Zia Faruqui.

Enterprise

AWS CEO: The cloud isn’t just about technology

As AWS preps for its annual re:Invent conference, Adam Selipsky talks product strategy, support for hybrid environments, and the value of the cloud in uncertain economic times.

AWS is gearing up for re:Invent, its annual cloud computing conference where announcements this year are expected to focus on its end-to-end data strategy and delivering new industry-specific services.

It will be the second re:Invent with Adam Selipsky at the helm of the industry's largest cloud provider, following his return to AWS last year from data visualization company Tableau Software.

Enterprise

Why large enterprises struggle to find suitable platforms for MLops

As companies expand their use of AI beyond running just a few machine learning models, and as larger enterprises go from deploying hundreds of models to thousands or even millions, ML practitioners say they have yet to find what they need from prepackaged MLops systems.

On any given day, Lily AI runs hundreds of machine learning models using computer vision and natural language processing that are customized for its retail and ecommerce clients to make website product recommendations, forecast demand, and plan merchandising. But this spring when the company was in the market for a machine learning operations platform to manage its expanding model roster, it wasn’t easy to find a suitable off-the-shelf system that could handle such a large number of models in deployment while also meeting other criteria.

Some MLops platforms are not well suited to maintaining more than even 10 machine learning models when it comes to data tracking, user-interface navigation, and reporting capabilities, Matthew Nokleby, machine learning manager for Lily AI's product intelligence team, told Protocol earlier this year. "The duct tape starts to show," he said.
