Issie Lapowsky

Power

After sending content moderators home, YouTube doubled its video removals

The company said it had to "accept a lower level of accuracy" to protect YouTube users when it relied more heavily on algorithmic moderation.

YouTube opted to over-enforce its policies in order to prioritize safety on the platform.

Photo: Szabo Viktor/Unsplash

When YouTube sent content moderators home in March due to the COVID-19 pandemic, it dramatically expanded its use of automated filters. That led to twice as many videos being taken down in the second quarter of 2020 as in the first. The spike stems from YouTube's decision to "cast a wider net" for potentially violative videos in the absence of human moderators, and it highlights the imperfect science of automatically policing content.

"When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement," the company wrote in a blog post accompanying its second quarter transparency report. "Because responsibility is our top priority, we chose the latter — using technology to help with some of the work normally done by reviewers."

YouTube removed more content last quarter in all but two categories: hateful videos and videos that encourage harmful or dangerous activities. But in the most sensitive content categories, including violent extremist content and content that could jeopardize child safety, YouTube saw a threefold increase in the number of videos it removed. YouTube explained that's because the company "accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible."

This means, of course, that YouTube removed plenty of videos that didn't actually violate its policies, and appeals roughly doubled as a result, from around 166,000 in the first quarter to 325,000 in the second. The number of videos reinstated after appeal nearly quadrupled, from around 41,000 to 161,000 over the same period.

YouTube's transparency report comes on the heels of a similar report from Facebook, which described markedly different results. Like YouTube, Facebook also opted to send its content moderators home in March. But unlike YouTube, which removed more content last quarter in almost every category, Facebook and Instagram saw steep declines — including in some of the most sensitive categories it polices.

On Instagram, for example, the company removed about half as much child sexual abuse material in the second quarter as it did in the first, while removals of suicide-related content fell by a whopping 79%. That's not because there was less of it. According to Facebook, it's because moderators were unable to review this graphic imagery at home, and therefore couldn't log it in Facebook's automated systems, which the company uses to search the platform and remove exact matches that pop up later. That means much of the content that would normally be removed was left online.
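
That logging step matters because exact-match systems can only catch what a human has already confirmed. A rough sketch of the mechanism as the company describes it (function names are hypothetical, and Facebook's production systems rely on perceptual hashes such as its open-source PDQ, which catch near-copies rather than only byte-for-byte matches):

```python
# Rough sketch of exact-match removal: a human reviewer logs a confirmed
# violation, and identical future uploads are removed automatically.
# Names are hypothetical; real systems use perceptual hashing as well.

import hashlib

known_violating_hashes = set()

def fingerprint(data: bytes) -> str:
    """Identical bytes always produce an identical digest."""
    return hashlib.sha256(data).hexdigest()

def log_violating_content(data: bytes) -> None:
    """Called once a human reviewer confirms a violation."""
    known_violating_hashes.add(fingerprint(data))

def should_remove(upload: bytes) -> bool:
    """Automated check run against every new upload."""
    return fingerprint(upload) in known_violating_hashes

log_violating_content(b"reviewed violating media")
assert should_remove(b"reviewed violating media")       # exact re-upload caught
assert not should_remove(b"reviewed violating media!")  # any change slips past
```

With reviewers unable to do that first logging step from home, the database stopped growing, and the automated matching that depends on it had less to work with.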

A YouTube spokesperson said the company ran into the same problem. But it compensated by removing far more content overall than it otherwise would have. It anticipated a spike in appeals, and got one, so it maintained a skeleton crew to process appeals in a timely fashion, eventually reinstating about half of the videos whose removals were appealed.

Facebook, by contrast, scaled back appeals, suspending them entirely in some sensitive content categories, like violent and graphic content. That led to a massive drop in appeals and the amount of content that was restored after removal in almost every category. "We couldn't always offer [appeals]," Facebook's vice president of integrity Guy Rosen said on a call with reporters earlier this month. "We still gave people an option to tell us that they disagreed with our decision on a piece of content and our teams looked at these signals in aggregate to find potential issues and restore content where appropriate."

The comparison between Facebook and YouTube isn't exact. For one thing, YouTube doesn't report as much granular information as Facebook does: While Facebook shares stats on the amount of child nudity and sexual exploitation content it removes, for example, YouTube reports more broadly on child safety, a category that also includes risky challenges and videos that could broadly "endanger minors." For another, Facebook saw a much bigger jump than YouTube did in the amount of hate speech it removed last quarter.

And yet, the two reports still illustrate an important point about how the COVID-19 era has affected what people see, and don't see, online. Facebook and YouTube often get lumped together as two social networks filled with the same filth, both using a combination of AI and low-wage contractors to rid their platforms of problematic posts. But over the last six months, these two companies have taken two different approaches to the same problem, and those approaches have yielded dramatically different outcomes.

Where YouTube has risked silencing users who have done nothing wrong, Facebook has risked leaving harmful content up too long in the name of maintaining accuracy. Neither approach is perfect. Both show just how far automated systems still have to go.

Microsoft wants to replace artists with AI

Better Zoom calls, simpler email attachments, smart iPhone cases and other patents from Big Tech.

Turning your stories into images.

Image: USPTO/Microsoft

Hello and welcome to 2021! The Big Tech patent roundup is back, after a short vacation and … all the things … that happened between the start of the year and now. It seems the tradition of tech companies filing weird and wonderful patents has carried into the new year; there are some real gems from the last few weeks. Microsoft is trying to outsource all creative endeavors to AI; Apple wants to make seat belts less annoying; and Amazon wants to cut down on some of the recyclable waste that its own success has inevitably created.

And remember: The big tech companies file all kinds of crazy patents for things, and though most never amount to anything, some end up defining the future.

Mike Murphy

Mike Murphy ( @mcwm) is the director of special projects at Protocol, focusing on the industries being rapidly upended by technology and the companies disrupting incumbents. Previously, Mike was the technology editor at Quartz, where he frequently wrote on robotics, artificial intelligence, and consumer electronics.

Politics

Facebook’s Oversight Board won’t save it from the Trump ban backlash

The Board's decision on whether to reinstate Trump could set a new precedent for Facebook. But does the average user care what the Board has to say?

A person holds a sign during a Free Speech Rally against tech companies, on Jan. 20 in California.

Photo: Valerie Macon/Getty Images

Two weeks after Facebook suspended former President Donald Trump's account indefinitely, Facebook answered a chorus of calls and referred the case to its newly created Oversight Board for review. Now, the board has 90 days to make a call as to whether Trump stays or goes permanently. The board's decision — and more specifically, how and why it arrives at that decision — could have consequences not only for other global leaders on Facebook, but for the future of the Board itself.

Facebook created its Oversight Board for such a time as this — a time when it would face a controversial content moderation decision and might need a gut check. Or a fall guy. There could be no decision more controversial than the one Facebook made on Jan. 7, when it decided to muzzle one of the most powerful people in the world with weeks remaining in his presidency. It stands to reason, then, that Facebook would tap in its newly anointed refs on the Oversight Board both to earnestly review the call and to put a little distance between Facebook and the decision.

Issie Lapowsky
Issie Lapowsky (@issielapowsky) is a senior reporter at Protocol, covering the intersection of technology, politics, and national affairs. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University’s Center for Publishing on how tech giants have affected publishing. Email Issie.
Politics

This is the future of the FTC

President Joe Biden has named Becca Slaughter acting chair of the FTC. In conversation with Protocol, she laid out her priorities for the next four years.

FTC commissioner Becca Slaughter may be President Biden's pick for FTC chair.

Photo: David Becker/Getty Images

Becca Slaughter made a name for herself last year when, as a commissioner for the Federal Trade Commission, she breastfed her newborn baby during video testimony before the Senate, raising awareness about the plight of working parents during the pandemic.

But on Thursday, Slaughter's name began circulating for other reasons: She was just named as President Joe Biden's pick for acting chair of the FTC, an appointment that puts Slaughter at the head of antitrust investigations into tech giants, including Facebook.

Issie Lapowsky
Politics

The other reason Facebook silenced Trump? Republicans lost power.

Yes, the president's acts were unprecedented. But Facebook is also preparing for a new Washington, controlled by Democrats.

Mark Zuckerberg and Facebook's head of public policy Joel Kaplan have spent four years bending to conservatives' demands. Now, Facebook is bending in a new direction.

Photo: Samuel Corum/Getty Images

In his post announcing that President Trump would be blocked from posting on Facebook until at least Inauguration Day, Mark Zuckerberg wrote that the president's incitement of the violent mob that stormed the U.S. Capitol building Wednesday was "fundamentally different" than any of the offenses he's committed on Facebook before. "The risks of allowing the President to continue to use our service during this period are simply too great," he wrote on Thursday.

That may be true. But there's another reason why — after four years spent insisting that a tech company has no business shutting up the president of the United States, no matter how much he threatens to shoot protesters or engages in voter suppression — Zuckerberg finally had a change of heart: Republicans just lost power.

Issie Lapowsky
Power

Pressure mounts on tech giants to ban Trump, as rioters storm Capitol

Facebook, Twitter and YouTube removed a video in which Trump expressed love for the rioters, but none of the companies have banned him outright — yet.

Twitter locked President Trump's account.

Image: Twitter

Twitter, Facebook and YouTube took action against several of President Trump's posts Wednesday, labeling the posts, limiting reshares and removing a video in which Trump expressed his love for the rioters who stormed the U.S. Capitol building. The riot led to the evacuation of the Senate, the deployment of the National Guard and one person being shot and killed. Twitter locked President Trump's account, requiring him to remove three tweets and saying the account would remain locked for 12 hours after those tweets were removed; it also warned that any future violations would get him banned. Facebook locked his account for 24 hours, citing "two policy violations." These actions followed a day of calls from tech investors, academics and others to kick Trump off their platforms once and for all.

In an early tweet, University of Virginia law professor Danielle Citron implored Twitter CEO Jack Dorsey to take action. "As someone who has served on your Trust and Safety Board since its inception and counseled you since 2009, time is now to suspend President Trump's account," Citron wrote. "He has deliberately incited violence, causing mayhem with his lies and threats."

Issie Lapowsky