
How COVID-19 helped — and hurt — Facebook’s fight against bad content

The amount of child sexual abuse material Instagram caught and removed fell dramatically, while hate speech removals on Facebook and Instagram grew.


Facebook's data shows that the pandemic made content-moderation systems both better and worse.

Photo: Drew Angerer/Getty Images

When Facebook sent its content moderators home in March due to the COVID-19 pandemic, announcing that it would rely on automation to do their jobs, at least temporarily, the company predicted the decision would have a major impact on its ability to find and remove content that violates its policies. Now, thanks to newly released data from Facebook, we know just how big an impact it had.

During the second quarter of 2020, the company removed less than half as much child sexual abuse material from Instagram as it did the quarter before. That wasn't because there was less of it, but because the company was less equipped to catch it. On both Facebook and Instagram, the amount of suicide and self-injury content removed dropped precipitously, too: on Instagram, removals fell from 1.3 million pieces last quarter to just 275,000 this quarter.

But in other categories, like hate speech, Facebook's new reliance on automated systems actually led to a drastic increase in removals: from 9.6 million pieces of hate speech removed from Facebook in the first three months of 2020 to 22.5 million between April and June.

The drop in the removal of child sexual abuse material from Instagram wasn't due to a decrease in the amount of it on the platform. Neither was the decrease in takedowns of suicide-related content. Both were due to the limited number of human beings available to look at those posts, since, initially at least, they were all working from home. "The reason this content is challenging is because it's graphic content that, at home, is very hard for people to moderate," said Guy Rosen, Facebook's vice president of integrity. "We want to be very careful with the environment that people have in order to look at that content."

It's not that human reviewers are required to spot all child sexual abuse material; automated systems are already responsible for removing 97.5% of those posts on Facebook. But according to Facebook spokesperson Emily Cain, human reviewers are critical when it comes to "banking" child sexual abuse material: taking known images and logging them so that Facebook's AI systems can then find and remove them.
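In code terms, the banking loop Cain describes looks roughly like the sketch below. This is a hypothetical illustration, not Facebook's system: perceptual_hash, bank_image, and should_remove are invented names, and the exact-match hash here stands in for a real perceptual hash (Facebook has open-sourced one, PDQ, for this class of problem) that would also catch resized or re-encoded copies.

    import hashlib

    # Fingerprints of images that human reviewers have confirmed violate policy.
    banked_hashes = set()

    def perceptual_hash(image_bytes):
        # Stand-in fingerprint. A real system would use a perceptual hash
        # such as PDQ so that near-duplicate copies still match; an exact
        # hash like SHA-256 only catches byte-identical re-uploads.
        return hashlib.sha256(image_bytes).hexdigest()

    def bank_image(image_bytes):
        # Runs only after a human reviewer confirms a violation. This is
        # the step that stalled when moderators were sent home.
        banked_hashes.add(perceptual_hash(image_bytes))

    def should_remove(upload_bytes):
        # Automated check applied to every new upload, at scale.
        return perceptual_hash(upload_bytes) in banked_hashes

The dependency falls straight out of the structure: if bank_image stops being called, should_remove has fewer fingerprints to match against, and automated removals decay over time.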

"Without humans banking this content then our machines can't find it at scale," Cain said. "And this compounds after a while, so our content-actioned numbers decreased."

"Overall, this pandemic and this situation really reinforced to us that it is always people and technology working together," Rosen said on a call with reporters Tuesday. "We always need people who look and measure and help tune our automation to ensure that we're always up to speed and always up to date with how content is evolving."

The decrease in content removal is a blow to Facebook's ongoing efforts to fight the spread of child sexual abuse material on the platform, at a time when the National Center for Missing and Exploited Children says it is seeing an exponential increase in reports of child exploitation. The company did manage to remove more pieces of child sexual abuse material from the Facebook app than it did last quarter, but overall, 2020 removals in that category are down significantly from where they were at the end of last year.

Rosen said that during the COVID-19 crisis, Facebook developed a ranking system to prioritize the most critical content in these sensitive categories. That might include anything from a live video to a post in which someone indicates they plan to harm themselves imminently. The ranking system was already in the works before COVID-19, but Rosen said the company expedited its development in response to the crisis.
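Facebook hasn't published how that ranking works, but the general pattern is a priority queue: score each flagged post on severity signals and route the highest scores to human reviewers first. Here is a minimal sketch, with invented categories and weights:

    import heapq

    # Hypothetical severity weights; Facebook's real signals and scores
    # are not public.
    SEVERITY = {
        "imminent_self_harm": 100,
        "child_safety": 90,
        "graphic_violence": 60,
        "other": 10,
    }

    review_queue = []

    def enqueue(post_id, category, is_live_video=False):
        score = SEVERITY.get(category, SEVERITY["other"])
        if is_live_video:
            score += 50  # live content can escalate in real time
        # heapq is a min-heap, so negate the score to pop the highest first.
        heapq.heappush(review_queue, (-score, post_id))

    def next_for_human_review():
        _, post_id = heapq.heappop(review_queue)
        return post_id

    enqueue("post_1", "other")
    enqueue("post_2", "imminent_self_harm", is_live_video=True)
    assert next_for_human_review() == "post_2"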

"This enables our teams to spend their time on the cases where we need their expertise the most, and it means there will be a shift towards more content being initially actioned by our automated systems," Rosen said.

As for the sharp increase in the amount of hate speech being removed from the platform, Rosen attributed that, too, to the ongoing development of Facebook's AI systems. Because hate speech is less graphic than, say, a video of child abuse, moderators are better able to handle that content remotely. As Facebook's chief technology officer Mike Schroepfer told Protocol in a tweet, "The more … sensitive and nuanced the content the more we need help from people."

Of course, the perennial question about hate speech, child sexual abuse material and other types of problematic content is not just how much Facebook is taking down and how fast, but how prevalent that content is to begin with. On the subject of hate speech, that's a question that Facebook hasn't been able to answer yet. Turns out, measuring prevalence requires a lot of human input, too.
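Facebook measures prevalence as the estimated share of content views that included violating material, an estimate built by sampling views and having human reviewers label them, which is exactly the kind of human input that became scarce. A back-of-the-envelope sketch of that math, with invented numbers:

    import math

    def prevalence_estimate(labels, z=1.96):
        # labels: one bool per sampled content view, True if a human
        # reviewer judged the viewed content violating. Returns the point
        # estimate and a 95% normal-approximation confidence interval.
        n = len(labels)
        p = sum(labels) / n
        margin = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - margin), min(1.0, p + margin)

    # Invented example: 27 violating views in a sample of 10,000 gives a
    # prevalence estimate of about 0.27%, plus or minus roughly 0.1 points.
    print(prevalence_estimate([True] * 27 + [False] * 9973))

The margin of error shrinks only with the square root of the sample size, so fewer human labelers means noisier estimates, not just slower ones.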
