Facebook’s new content moderation report only proves the case of its moderators

The company's ability to spot violating posts is back to pre-pandemic levels, just as its moderators start heading back to the office.

Facebook's office in Menlo Park. Image: AFP/Getty Images

Facebook's decision to send content moderators home in March had devastating consequences for the company's ability to catch and remove posts containing the most harmful content. Now, as some moderators have returned to the office in recent months, things are getting back to normal — underlining the importance of having humans in the loop.

That's according to Facebook's third-quarter transparency report, published Thursday. It shows, for instance, that in the third quarter of this year, Instagram removed nearly twice as much child sexual abuse material and nearly five times as much suicide-related content as it did in the second quarter.

This dramatic shift underscores just how crucial this global army of moderators is to the way the world's largest social media platform operates.

"People are an important part of the equation for content enforcement," Guy Rosen, Facebook's vice president of integrity, said on a call with reporters Thursday. "These are incredibly important workers who do an incredibly important part of this job ... The reason we're bringing some workers back into offices is exactly to ensure that we can have that balance of both people and AI working on these areas."

Facebook's report comes just one day after dozens of moderators signed an open letter to Facebook CEO Mark Zuckerberg and other executives criticizing recent orders to return to the office despite a surge in COVID-19 cases and demanding that moderators be made full-time employees. "By outsourcing our jobs, Facebook implies that the 35,000 of us who work in moderation are somehow peripheral to social media," the letter read. "Yet we are so integral to Facebook's viability that we must risk our lives to come into work."

Rosen stressed that the majority of content moderators are still working from home, but said that those who have gone back to the office are doing so in spaces with reduced capacity, physical distancing, mandatory temperature checks and other safety precautions "to ensure that we're providing a safe workspace for them to do this incredibly important work to keep our community safe as well."

The moderators argue that's not enough and are pushing Facebook to guarantee things like "real healthcare," hazard pay and the ability to keep working from home if they live with at-risk individuals.

Facebook's executives credited many of this quarter's gains to the company's investment in automated systems, including its ability to proactively detect 95% of hate speech on the platform before users report it. When Facebook first began reporting this statistic in 2017, just 23.6% of hate speech was caught proactively. Facebook also reported the prevalence of hate speech — the percentage of content views that actually include hate speech — for the first time. It found a prevalence of 0.10% to 0.11%, meaning that for every 10,000 views on Facebook, about 10 or 11 include hate speech.
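To make those two metrics concrete, here is a minimal sketch of the underlying arithmetic in Python. The function names and sample figures are illustrative assumptions, not Facebook's actual methodology; the company estimates prevalence by sampling content views, and the details of that sampling are not public.

```python
# Illustrative definitions of the two headline metrics in Facebook's report.
# Function names and example numbers are hypothetical; only the metric
# definitions are drawn from the report itself.

def proactive_rate(actioned_before_user_report: int, total_actioned: int) -> float:
    """Share of actioned content the platform found before any user reported it."""
    return actioned_before_user_report / total_actioned


def prevalence(violating_views: int, total_views_sampled: int) -> float:
    """Share of sampled content views that contained violating content."""
    return violating_views / total_views_sampled


# A 95% proactive rate means 95 of every 100 actioned posts were flagged
# by automated systems before any user report:
print(proactive_rate(95, 100))  # 0.95

# A prevalence of 0.10%-0.11% works out to 10-11 hate speech views
# per 10,000 content views:
for rate in (0.0010, 0.0011):
    print(rate * 10_000)  # 10.0, then 11.0
```

Note that the two metrics answer different questions: the proactive rate describes how violating content was found once actioned, while prevalence describes how much violating content users actually saw.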

Despite these advances, Facebook's chief technology officer Mike Schroepfer acknowledged that automated filters will never replace the work of human moderators. "I don't see any short-term reduction or long-term reduction in the human involvement in this," he said on the call. "We get faster, more accurate, more powerful and then we can use our amazing staff we have to work on the more nuanced problems we have that really require human review."
