
Here’s what happened when Facebook stopped protecting users — on purpose

Internal documents reveal the impact of withholding certain integrity protections from a subset of Facebook users.

[Image: an illustration of the Facebook logo cracking. Frances Haugen said Facebook withholds certain protections from a subset of users to see how they'll react. Image: Protocol]

In her testimony before Congress last month, whistleblower Frances Haugen told lawmakers Facebook has conducted experiments where it withholds certain protections from a subset of users to see how they'll react. Facebook refers to this experimental group internally, she said, as "integrity holdouts."

"These are people who don't get protections from integrity systems to see what happens to them," Haugen said. "And those people who deal with a more toxic, painful version of Facebook use Facebook less."

Internal documents reveal a more complex story. According to one internal report from April 2019, Facebook has studied the impact of removing some protections against problematic content like clickbait and untrustworthy news for some users, but the results, at least in that report, were decidedly mixed.

The report showed that during one March 2019 test, when the company rolled back some protections from millions of users' News Feeds, their exposure to some of the worst forms of harmful content, like graphic violence, barely changed. As the report's author wrote, referring to the company's News Feed protections at the time, "We are likely having little (if any) impact on violence."

The report also suggested that far from using Facebook less, integrity holdouts actually commented more and had more sessions on the app. "Given that Integrity is changing a ranking algorithm that is optimized for engagement, it is not surprising that integrity has some negative engagement impact," the report read.

A spokesperson for Haugen said that other documents she collected showed that retention is stronger among regular users than holdouts, but those findings were not included in this report.

The report was included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Haugen's legal counsel. A consortium of news organizations, including Protocol, has reviewed the redacted versions received by Congress. They offer a glimpse into how Facebook has analyzed the efficacy of its user protections and weighed them against their impact on other company priorities like growth.

"Testing product and safety features is an important part of improving our platform and something that's standard in tech and many other industries," Facebook spokesperson Drew Pusateri told Protocol, noting that the holdout affected about 2% of Facebook users. "It helps us build the tools to reduce the prevalence of hate speech and other types of problematic content on our platform."

'High-harm spaces'

Facebook conducts holdout experiments for a range of business goals, not just its integrity work. Holdouts are effectively control groups that Facebook can compare to its larger pool of users. As former Facebook data scientist Sophie Zhang recently told Protocol, Facebook has also studied the impact of withholding ads from users. "The company wants to know the very long-term impacts of advertising on retention and usage for Facebook," Zhang said. "The argument was usually that we need to know what the impact of this is. We need to know if people like it or not. But this is also motivated by wanting to know the impact for growth."
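
The documents don't spell out how users end up in a holdout, but a common industry pattern, assumed here purely for illustration, is deterministic bucketing: a stable user ID is hashed together with an experiment name so that a fixed slice of users (Facebook cited about 2% for the integrity holdout) is consistently excluded across sessions. Below is a minimal Python sketch of that pattern; the function names, experiment label and bucket count are hypothetical, not drawn from the documents.

    import hashlib

    HOLDOUT_PERCENT = 2.0  # Facebook said the holdout covered about 2% of users
    NUM_BUCKETS = 10_000

    def bucket(user_id: str, experiment: str) -> int:
        # Hash the experiment name together with the user ID so assignments
        # are independent across experiments but stable for a given user.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % NUM_BUCKETS

    def in_integrity_holdout(user_id: str) -> bool:
        # Users landing in the lowest 2% of buckets are withheld from
        # integrity protections and serve as the experiment's control group.
        threshold = NUM_BUCKETS * HOLDOUT_PERCENT / 100  # 200 of 10,000 buckets
        return bucket(user_id, "integrity_holdout_2019") < threshold

    # Everyone else gets the normal, protected News Feed ranking.
    for uid in ("user_123", "user_456"):
        group = "holdout (no protections)" if in_integrity_holdout(uid) else "treatment"
        print(uid, "->", group)

Because assignment depends only on the user ID and the experiment name, a user stays in the same group for the life of the experiment, which is what makes long-run comparisons of engagement and retention between holdouts and regular users possible.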

By early 2019, it appears, the company had begun applying this approach to integrity protections. The report, published in late April 2019, detailed the initial findings from an experiment that tinkered with certain integrity protections in News Feed. Some of the findings were encouraging: The report showed, for instance, that regular users got substantially less clickbait and ad-farm content than the holdouts did, something the author notes is "not surprising," given that the company was demoting clickbait and ad farms "quite a bit."

The report showed that regular users' exposure to what the company considers "low quality news" was down about 18% compared to the holdouts. The company also found it was boosting content from major publishers more when integrity protections were in place. Holdouts, by contrast, were more likely to click the dislike button and report the posts they saw, and they were also more likely to see content from public pages than regular users were.

But the main takeaway from the report, the author wrote, was that the News Feed protections that were in place at the time weren't having an equally significant effect on more severe types of harm, like graphic violence. "I believe strongly that this needs to change," the author wrote.

During the experiment, the company continued to demote hate speech and graphic violence in at-risk countries, the report said. But for holdouts who weren't in at-risk countries, those demotions didn't exist. And yet, the report found no impact on regular users' exposure to violence compared to the holdouts.

"11% of users see content that has been marked as disturbing every day; 16% of users see content that is likely to be bullying; 39% of users see hateful content (i.e. borderline hate); 32% of users see borderline 3+ nudity content," the author wrote. "These are significant proportions of [daily active users] and we have effectively no ranking interventions in place to mitigate this." The author added, however, that those particular numbers "should be taken with a grain of salt," as measuring bad experiences on the platform was still a work in progress.

The report also made no secret of the negative impact of News Feed integrity protections on engagement. "By definition, Integrity is going to cause some engagement regression," the author wrote, noting that there are "tradeoffs between Integrity and Engagement."

Integrity efforts, the report found, were a blow to the company's "meaningful social interactions" metric, which emphasizes interactions between friends over public-facing content. One reason for that, the author proposed, was that holdouts commented more than regular users did. While regular users were more likely to like posts on Facebook compared to holdouts, the author wrote, it was "not enough to make up for the decline in comments." The report also showed that content views and time spent on the app were down slightly among regular users compared to holdouts.

The report's limitations

It would be easy to construe the findings from this report as a total contradiction of Haugen's claims and a condemnation of integrity work's impact on the worst types of content in general. But that would be a misread, said Sahar Massachi, a former member of Facebook's integrity team and co-founder of the new Integrity Institute think tank. It's important to note, he said, that the document appears to look only at the integrity protections that existed in News Feed ranking at the time, and doesn't account for other integrity interventions that other teams at Facebook might have been working on.

It also only looks at the integrity interventions that the News Feed team had already deployed, not the full range of possible interventions that may have been proposed but were shot down. "Their view on what 'integrity' covers is likely scoped to whichever team they're on," Massachi said of the report's author. "I read this as: Integrity interventions that were allowed to ship — in the scoped set that this person considered — didn't affect views of that kind of content."

The report itself isn't clear on exactly what protections were being withheld from the holdouts, but a comment posted along with the document suggests that the experiment affected protections related to clickbait, ad farms, engagement bait and news trustworthiness, among other things. Given that fact, it shouldn't be all that surprising that exposure to graphic violence wasn't impacted by the experiment.

But what the report does call attention to is that, at the time at least, Facebook's News Feed integrity protections weren't designed to capture more severe harms. The company had only begun demoting what it called "borderline" content, material that nearly violated its policies, a few months before the report was published, and the rollout of those demotions was slow.

"This document says: We should expand the definitions more," said one current Facebook employee who has worked on News Feed ranking and reviewed the report. And according to that employee, the message stuck. "This person's argument was successful in that the program was expanded in various dimensions."

The employee said, however, that some of those expansions were rolled back before Facebook published a public list of content it demotes.

"The story of integrity is you try to do the good thing and you go to the execs, and they shoot you down, and you come back with something more conservative, and you realize you didn't do anything, so you try again," the employee said. "What you're seeing [in this document] is that middle part."

Facebook wouldn't comment on whether the company changed its demotions before publishing its list, but Pusateri said the demotions included on that list are still in place today.

Both Zhang and Massachi, as well as Facebook's own public relations team, cautioned Protocol not to cast the fact that Facebook withheld these protections as a scandal in and of itself. Measuring the effectiveness of these interventions, they said, is critical to strengthening them. As Massachi put it: "In the vaccine trials, some people have to get the placebo."

[Editor's note: In the report excerpts, OCQ stands for "objective content quality," which refers to clickbait and ad-farm content. A high OCQ score means likely clickbait.]

