On a call with reporters Thursday, Facebook's vice president of integrity, Guy Rosen, said that Facebook wants to "lead the industry in transparency." The call accompanied the release of Facebook's fourth-quarter content moderation report, which shares in granular detail the amount of content Facebook removed for various violations of its policies and why.
But what's become increasingly clear over the years that Facebook has published these reports is just how much the company leaves out. Also clear: Facebook hopes these reports will serve as a model for regulators to impose on the tech industry writ large.
Tech giants and the people who study them have begun to recognize that there's more to content moderation than the decisions companies make about taking posts down or leaving them up. Equally if not more important is the way companies' algorithms amplify content that violates their policies by recommending it to users or pushing it to the top of users' feeds. In the run-up to the U.S. election, for instance, Facebook's internal researchers found that the majority of political groups the platform was recommending to users were overrun with calls for violence, a realization that prompted Facebook to — temporarily, and then permanently — remove political groups from recommendations altogether.
But that sort of insight about how Facebook has actually promoted content that violates its own policies is nowhere to be found in the report by the company that says it strives to lead the industry on transparency.
Neither is information on some particularly dicey categories of violations, including incitements to violence. That, after all, is the policy that prompted Facebook to ban former President Donald Trump from the platform earlier this year, following a riot at the U.S. Capitol. And yet, Facebook's transparency report offers no indication of whether such incitements to violence were on the rise in 2020 or whether Facebook acted more aggressively to stop them.
On the call, Rosen called the reports a "multi-year journey" and said Facebook is working to expand them to include additional violation categories, like incitements to violence. The company is also working on ways to report the number of accounts, pages and groups it's taking action on, not just the posts themselves. "We don't have any immediate timeline for that, but it's absolutely on the list of things we want to get to," Rosen said.
When it comes to violations in groups and pages that appear in recommendations, Rosen added, "We don't have any numbers yet to share." At least, not publicly.
For now, the report and the accompanying blog post lean heavily on the strides Facebook has made in cracking down on hate speech and organized hate groups. And it has made strides. In the last quarter of 2019, Facebook removed just 1.6 million pieces of content from organized hate groups, compared to 6.4 million in the final quarter of 2020. That jump indicates just how much the company's policies evolved in 2020 when it comes to homegrown militias and hate groups like the Proud Boys, as well as violent conspiracy theories like QAnon. Facebook had resisted calls for years to ban those groups, but came around to the idea in the latter half of 2020 as the risk of election-related violence grew. Since then, Facebook has removed more than 3,000 pages and 10,000 groups associated with QAnon alone. Facebook also attributed gains it made in removing more hate speech in 2020 to advancements in its automated technology, particularly with regard to hate speech in Arabic, Spanish and Portuguese.
And yet, the offline violence Facebook has fueled in the U.S. over the last several months suggests that this evolution came far too late. The report also obscures the limits of Facebook's own definition of hate speech, which covers direct attacks on the basis of "race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease." The sort of politically motivated hate that seemed to fuel the Jan. 6 riot goes unaccounted for in this report — and in Facebook's policies.
That's not to say that Facebook's report only shows the good stuff. Despite the progress Facebook made on hate speech last year, at least on paper, the report suggests its content moderation systems suffered serious setbacks in other crucial categories on both Facebook and Instagram, most notably child sexual abuse material. In the fourth quarter, Facebook reported its lowest levels of enforcement against child exploitation since it began publishing these stats in 2018: it removed 5.4 million pieces of child nudity and exploitation content, down from 12.4 million in the third quarter, and that's not because the overall volume of that content dropped. Instead, Facebook attributed the decline in enforcement to a "technical issue" that arose in mid-November, which has since been fixed.
"When the error was discovered, we fixed it, and are in the process of going back to retroactively remove all of that content that was missed," Rosen said.
It is true that Facebook reports far more information than other tech companies do. For instance, it reports the "prevalence" of violating content on its platforms. That is, it reports not just the amount of content it takes down, but the amount of violating content users actually encounter. In the last quarter, for instance, as hate speech takedowns grew, the prevalence of hate speech on the platform dropped, with users seeing seven or eight pieces of hate speech for every 10,000 views of content. "I think you know we remain the only company to publish these numbers," Facebook's vice president of content policy, Monika Bickert, said on the call.
Facebook's vision of transparency — and all of the holes in that vision — is especially relevant now, as the company begins to push for light-handed regulation. In particular, Facebook has urged lawmakers interested in reforming Section 230 to adopt laws that require tech companies to be more transparent about content moderation. For Facebook, that would constitute a compromise that stops short of stripping away any of the protections Section 230 gives tech platforms. As it calls for more mandated transparency, Facebook is clearly setting up its own reports as an example.
"As we talk about putting in place regulations or reforming Section 230 in the U.S., we should be considering how to hold companies accountable to take action on harmful content," Bickert said. "We think the numbers we're providing today can help inform that conversation."
They very well might. And if they do, it will be all the more important for lawmakers to look critically at not just what those numbers reveal, but also what they hide.