Power

How COVID-19 helped — and hurt — Facebook’s fight against bad content

The amount of child sexual abuse material Instagram caught and removed fell dramatically, while hate speech removals on Facebook and Instagram grew.

Facebook's data shows that the pandemic made content-moderation systems both better and worse.

Photo: Drew Angerer/Getty Images

When Facebook sent its content moderators home in March due to the COVID-19 pandemic, announcing it would rely on automation to do their jobs, at least temporarily, the company predicted the decision would have a major impact on its ability to find and remove content that violates its policies. Now, newly released data from Facebook shows just how big that impact was.

During the second quarter of 2020, the company removed less than half as much child sexual abuse material from Instagram as it did the quarter before — not because there was less of it, but because the company was less equipped to catch it. On both Facebook and Instagram, the amount of suicide and self-injury content removed dropped precipitously too: on Instagram, from 1.3 million pieces the previous quarter to just 275,000 this quarter.

But in other categories, like hate speech, Facebook's new reliance on automated systems actually led to a drastic increase in removals: from just 9.6 million pieces of hate speech removed from Facebook in the first quarter of 2020 to 22.5 million removed between April and June.

The drop in removals of child sexual abuse material from Instagram wasn't due to a decrease in the amount of it on the platform, and neither was the decrease in takedowns of suicide-related content. It was due to the limited number of human beings available to look at those posts, since, initially at least, they were all working from home. "The reason this content is challenging is because it's graphic content that, at home, is very hard for people to moderate," said Guy Rosen, Facebook's vice president of integrity. "We want to be very careful with the environment that people have in order to look at that content."

It's not that human reviewers are required to spot all child sexual abuse material; automated systems are already responsible for removing 97.5% of those types of posts on Facebook. But according to Facebook spokesperson Emily Cain, human reviewers are critical when it comes to "banking" child sexual abuse material: taking known images and logging them so that Facebook's AI systems can then find and remove them.

"Without humans banking this content then our machines can't find it at scale," Cain said. "And this compounds after a while, so our content-actioned numbers decreased."

"Overall, this pandemic and this situation really reinforced to us that it is always people and technology working together," Rosen said on a call with reporters Tuesday. "We always need people who look and measure and help tune our automation to ensure that we're always up to speed and always up to date with how content is evolving."

The decrease in content removal is a blow to Facebook's ongoing efforts to fight the spread of child sexual abuse material on the platform, at a time when the National Center for Missing and Exploited Children says it's seeing an exponential increase in reports of child exploitation. The company did manage to remove more child sexual abuse material from the Facebook app than it did the previous quarter, but overall, removals in that category in 2020 are down significantly from where they were at the end of last year.

During the COVID-19 crisis, Rosen said Facebook has developed a ranking system to prioritize the most critical content in these sensitive categories. That might include anything from a live video to a post in which someone indicates they plan to harm themselves imminently. This ranking system was already in the works before COVID-19, but Rosen said the company expedited its development in response to the crisis.

"This enables our teams to spend their time on the cases where we need their expertise the most, and it means there will be a shift towards more content being initially actioned by our automated systems," Rosen said.

As for the sharp increase in the amount of hate speech being removed from the platform, Rosen attributed that, too, to the ongoing development of Facebook's AI systems. Because hate speech is less graphic than, say, a video of child abuse, moderators are better able to handle that content remotely. As Facebook's chief technology officer Mike Schroepfer told Protocol in a tweet, "The more … sensitive and nuanced the content the more we need help from people."

Of course, the perennial question about hate speech, child sexual abuse material and other types of problematic content is not just how much Facebook is taking down and how fast, but how prevalent that content is to begin with. On the subject of hate speech, that's a question that Facebook hasn't been able to answer yet. Turns out, measuring prevalence requires a lot of human input, too.

Climate

New Jersey could become an ocean energy hub

A first-in-the-nation bill would support wave and tidal energy as a way to meet the Garden State's climate goals.

Technological challenges mean wave and tidal power remain generally more expensive than other renewables. But government support could help spur innovation that brings down costs.

Photo: Jeremy Bishop via Unsplash

Move over, solar and wind. There’s a new kid on the renewable energy block: waves and tides.

Harnessing the ocean’s power is still in its early stages, but the industry is poised for a big legislative boost, with the potential for real investment down the line.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Every day, millions of us press the “order” button on our favorite coffee store's mobile application: Our chosen brew will be on the counter when we arrive. It’s a personalized, seamless experience that we have all come to expect. What we don’t know is what’s happening behind the scenes. The mobile application is sourcing data from a database that stores information about each customer and their favorite coffee drinks. It is also leveraging event-streaming data in real time to ensure the ingredients for our personal coffee are in supply at our local store.

Applications like this power our daily lives, and if they can’t access massive amounts of data stored in a database as well as stream data “in motion” instantaneously, you — and millions of customers — won’t have these in-the-moment experiences.
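The two data paths described above — data "at rest" in a database and data "in motion" on an event stream — can be sketched in miniature. The names below are hypothetical, and the in-memory deque merely stands in for a real streaming platform such as Kafka.

```python
from collections import deque

# Data "at rest": stored customer preferences.
customer_db = {"lisa": {"favorite_drink": "oat-milk latte"}}
# Data "in motion": events a real system would publish to a stream.
inventory_events: deque = deque()

def place_order(customer_id: str) -> str:
    # Database lookup personalizes the order...
    drink = customer_db[customer_id]["favorite_drink"]
    # ...while an emitted event lets the store reserve ingredients in real time.
    inventory_events.append({"event": "ingredient_reserved", "drink": drink})
    return f"{drink} will be on the counter when you arrive"

print(place_order("lisa"))
print(inventory_events[-1])
```

If either path stalls — the lookup or the stream — the in-the-moment experience breaks down.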

Jennifer Goforth Gregory
Jennifer Goforth Gregory has worked in the B2B technology industry for over 20 years. As a freelance writer, she writes for top technology brands, including IBM, HPE, Adobe, AT&T, Verizon, Epson, Oracle, Intel and Square. She specializes in a wide range of technologies, including AI, IoT, cloud, cybersecurity and CX. Jennifer also wrote a bestselling book, The Freelance Content Marketing Writer, to help other writers launch high-earning freelance businesses.

Entertainment

Watch 'Stranger Things,' play Neon White and more weekend recs

Don’t know what to do this weekend? We’ve got you covered.

Here are our picks for your long weekend.

Image: Annapurna Interactive; Wizards of the Coast; Netflix

Kick off your long weekend with an extra-long two-part “Stranger Things” finale; a deep dive into deckbuilding games like Magic: The Gathering; and Neon White, which mashes up several genres, including the dating sim.

Nick Statt

Nick Statt is Protocol's video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at nstatt@protocol.com.

Fintech

Debt fueled crypto mining’s boom — and now, its bust

Leverage helped mining operations expand as they borrowed against their hardware or the crypto it generated.

Dropping crypto prices have upended the economics of mining.

Photo: Lars Hagberg/AFP via Getty Images

As bitcoin boomed, crypto mining seemed almost like printing money. But in reality, miners have always had to juggle the cost of hardware, electricity and operations against the tokens their work yielded. Often miners held onto their crypto, betting it would appreciate, or borrowed against it to buy more mining rigs. Now all those bills are coming due: The industry has accumulated as much as $4 billion in debt, according to some estimates.

The crypto boom encouraged excess. “The approach was get rich quick, build it big, build it fast, use leverage. Do it now,” said Andrew Webber, founder and CEO at crypto mining service provider Digital Power Optimization.
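The leverage dynamic can be made concrete with a toy margin calculation. All figures below are hypothetical, chosen only to illustrate the mechanism; none come from the reporting above.

```python
def monthly_margin(btc_mined: float, btc_price: float,
                   power_cost: float, debt_service: float) -> float:
    """Revenue from mined coins minus electricity and loan payments."""
    return btc_mined * btc_price - power_cost - debt_service

# At boom-era prices, the leveraged operation is comfortably profitable...
print(monthly_margin(btc_mined=2.0, btc_price=60_000,
                     power_cost=40_000, debt_service=30_000))  # 50000.0

# ...but with the same rigs and the same debt, falling prices push it underwater.
print(monthly_margin(btc_mined=2.0, btc_price=20_000,
                     power_cost=40_000, debt_service=30_000))  # -30000.0
```

Costs and debt service are fixed while revenue tracks the token price, which is why a price drop flips the whole equation at once.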

Tomio Geron

Tomio Geron (@tomiogeron) is a San Francisco-based reporter covering fintech. He was previously a reporter and editor at The Wall Street Journal, covering venture capital and startups. Before that, he worked as a staff writer at Forbes, covering social media and venture capital, and also edited the Midas List of top tech investors. He has also worked at newspapers covering crime, courts, health and other topics. He can be reached at tgeron@protocol.com or tgeron@protonmail.com.

Policy

How lax social media policies help fuel a prescription drug boom

Prescription drug ads are all over TikTok, Facebook and Instagram. As the potential harms become clear, why haven’t the companies updated their advertising policies?

Even as providers like Cerebral draw federal attention, Meta’s and TikTok’s advertising policies still allow telehealth providers to turbocharge their marketing efforts.

Illustration: Overearth/iStock/Getty Images Plus

In the United States, prescription drug advertisements are as commonplace as drive-thru lanes and Pete Davidson relationship updates. We’re told every day — often multiple times a day — to ask our doctor if some new medication is right for us. Saturday Night Live has for decades parodied the breathless parade of side effect warnings tacked onto drug commercials. Here in New York, even our subway swipes are subsidized by advertisements that deliver the good news: We can last longer in bed and keep our hair, if only we turn to the latest VC-backed telehealth service.

The U.S. is almost alone in embracing direct-to-consumer prescription drug advertisements. Nations as disparate as Saudi Arabia, France and China all find common ground in banning such ads. In fact, of all developed nations, only New Zealand joins the U.S. in giving pharmaceutical companies a direct line to consumers.

Hirsh Chitkara

Hirsh Chitkara (@HirshChitkara) is a reporter at Protocol focused on the intersection of politics, technology and society. Before joining Protocol, he helped write a daily newsletter at Insider that covered all things Big Tech. He's based in New York and can be reached at hchitkara@protocol.com.
