In her testimony before Congress last month, whistleblower Frances Haugen told lawmakers Facebook has conducted experiments where it withholds certain protections from a subset of users to see how they'll react. Facebook refers to this experimental group internally, she said, as "integrity holdouts."
"These are people who don't get protections from integrity systems to see what happens to them," Haugen said. "And those people who deal with a more toxic, painful version of Facebook use Facebook less."
Internal documents reveal a more complex story. According to one internal report from April 2019, Facebook has studied the impact of removing some protections against problematic content like clickbait and untrustworthy news for some users, but the results, at least in that report, were decidedly mixed.
The report showed that during one March 2019 test, when the company rolled back some protections from millions of users' News Feeds, their exposure to some of the worst forms of harmful content, like graphic violence, barely changed. As the report's author wrote, referring to the company's news feed protections at the time, "We are likely having little (if any) impact on violence."
The report also suggested that far from using Facebook less, integrity holdouts actually commented more and had more sessions on the app. "Given that Integrity is changing a ranking algorithm that is optimized for engagement, it is not surprising that integrity has some negative engagement impact," the report read.
A spokesperson for Haugen said that other documents she collected showed that retention is stronger among regular users than holdouts, but those findings were not included in this report.
The report was included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen's legal counsel. A consortium of news organizations, including Protocol, has reviewed the redacted versions received by Congress. They offer a glimpse at how Facebook has analyzed the efficacy of its user protections — and weighed them against their impact on other company priorities like growth.
"Testing product and safety features is an important part of improving our platform and something that's standard in tech and many other industries," Facebook spokesperson Drew Pusateri told Protocol, noting that the holdout affected about 2% of Facebook users. "It helps us build the tools to reduce the prevalence of hate speech and other types of problematic content on our platform."
'High-harm spaces'
Facebook conducts holdout experiments for a range of business goals, not just its integrity work. Holdouts are effectively control groups that Facebook can compare to its larger pool of users. As former Facebook data scientist Sophie Zhang recently told Protocol, Facebook has also studied the impact of withholding ads from users. "The company wants to know the very long-term impacts of advertising on retention and usage for Facebook," Zhang said. "The argument was usually that we need to know what the impact of this is. We need to know if people like it or not. But this is also motivated by wanting to know the impact for growth."
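Conceptually, a holdout works like the control arm of an A/B test: a small, stable slice of users is deterministically bucketed out of the treatment so their behavior can be compared with everyone else's over time. The sketch below is a minimal illustration of that idea, not Facebook's actual system; the function names, the use of a 2% fraction this way and the `integrity_multiplier` field are all assumptions for the sake of the example.

```python
# Illustrative sketch of a holdout experiment. A small, stable group of users
# is exempted from an intervention so its metrics can be compared against the
# rest of the user base. All names here are hypothetical.
import hashlib

HOLDOUT_FRACTION = 0.02  # roughly the ~2% holdout Facebook described

def in_holdout(user_id: str, experiment: str = "integrity_holdout") -> bool:
    """Deterministically bucket a user so group membership stays stable over time."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return bucket < HOLDOUT_FRACTION * 10_000

def rank_feed(posts, user_id: str):
    """Apply integrity demotions only to users outside the holdout."""
    if in_holdout(user_id):
        # Holdout: rank purely on predicted engagement, no integrity demotions.
        return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    # Everyone else: demote content flagged as clickbait, ad farms, etc.
    return sorted(
        posts,
        key=lambda p: p.engagement_score * p.integrity_multiplier,  # multiplier < 1 for flagged posts
        reverse=True,
    )
```

In a real experiment, the two groups' engagement and content-exposure metrics would then be compared over time, which is the kind of comparison the April 2019 report describes.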
By early 2019, it appears, the company had begun applying this approach to integrity protections. The report, published in late April 2019, detailed the initial findings from an experiment that tinkered with certain integrity protections in News Feed. Some of the findings were encouraging: The report showed, for instance, that regular users got substantially less clickbait and ad-farm content than the holdouts did, something the author notes is "not surprising," given that the company was demoting clickbait and ad farms "quite a bit."
The report showed that regular users' exposure to what the company considers "low quality news" was down about 18% compared to the holdouts. The company also found it was boosting content from major publishers more when integrity protections were in place. Holdouts, by contrast, were more likely to click the dislike button and report the posts they saw, and they were also more likely to see content from public pages than regular users were.
But the main takeaway from the report, the author wrote, was that the News Feed protections in place at the time weren't having a comparably significant effect on more severe types of harm, like graphic violence. "I believe strongly that this needs to change," the author wrote.
During the experiment, the company continued to demote hate speech and graphic violence in at-risk countries, the report said. But for holdouts who weren't in at-risk countries, those demotions didn't exist. And yet, the report found no impact on regular users' exposure to violence compared to the holdouts.
"11% of users see content that has been marked as disturbing every day; 16% of users see content that is likely to be bullying; 39% of users see hateful content (i.e. borderline hate); 32% of users see borderline 3+ nudity content," the author wrote. "These are significant proportions of [daily active users] and we have effectively no ranking interventions in place to mitigate this." The author added, however, that those particular numbers "should be taken with a grain of salt," as measuring bad experiences on the platform was still a work in progress.
The report also made no secret of the negative impact of News Feed integrity protections on engagement. "By definition, Integrity is going to cause some engagement regression," the author wrote, noting that there are "tradeoffs between Integrity and Engagement."
Integrity efforts, the report found, were a blow to the company's "meaningful social interactions" metric, which emphasizes interactions between friends over public-facing content. One reason for that, the author proposed, was that holdouts commented more than regular users did. While regular users were more likely to like posts on Facebook compared to holdouts, the author wrote, it was "not enough to make up for the decline in comments." The report also showed that content views and time spent on the app were down slightly among regular users compared to holdouts.
The report's limitations
It would be easy to construe the findings from this report as a total contradiction of Haugen's claims and a condemnation of integrity work's impact on the worst types of content in general. But that would be a misread, said Sahar Massachi, a former member of Facebook's integrity team and co-founder of the new Integrity Institute think tank. It's important to note, he said, that this document appears to look only at the integrity protections built into News Feed ranking at the time, and doesn't account for other integrity interventions that other teams at Facebook might have been working on.
It also only looks at the integrity interventions that the News Feed team had already deployed, not the full range of possible interventions that may have been proposed but were shot down. "Their view on what 'integrity' covers is likely scoped to whichever team they're on," Massachi said of the report's author. "I read this as: Integrity interventions that were allowed to ship — in the scoped set that this person considered — didn't affect views of that kind of content."
The report itself isn't clear on exactly what protections were being withheld from the holdouts, but a comment posted along with the document suggests that the experiment affected protections related to clickbait, ad farms, engagement bait and news trustworthiness, among other things. Given that fact, it shouldn't be all that surprising that exposure to graphic violence wasn't impacted by the experiment.
But what the report is calling attention to is the fact that, at the time at least, Facebook's integrity protections for News Feed weren't designed to capture more severe harms. The company had only begun demoting what it called "borderline" content that nearly violated its policies a few months before the report was published, and the rollout of those demotions was slow.
"This document says: We should expand the definitions more," said one current Facebook employee who has worked on News Feed ranking and reviewed the report. And according to that employee, the message stuck. "This person's argument was successful in that the program was expanded in various dimensions."
The employee said, however, that some of those expansions were rolled back before Facebook published a public list of content it demotes.
"The story of integrity is you try to do the good thing and you go to the execs, and they shoot you down, and you come back with something more conservative, and you realize you didn't do anything, so you try again," the employee said. "What you're seeing [in this document] is that middle part."
Facebook wouldn't comment on whether the company changed its demotions before publishing its list, but Pusateri said the demotions included on that list are still in place today.
Both Zhang and Massachi — as well as Facebook's own public relations team — cautioned Protocol against casting Facebook's decision to withhold these protections from some users as a scandal in and of itself. Measuring the effectiveness of these interventions, they said, is critical to strengthening them. As Massachi put it: "In the vaccine trials, some people have to get the placebo."
[Editor's note: Below, OCQ stands for "objective content quality," which refers to clickbait and ad-farm content. A high OCQ score means likely clickbait.]
[Embedded document: "A First Look at the Minimum Integrity Holdout," published by Protocol via Scribd]