Power

Facebook’s transparency report shows what Facebook wants you to see

For all of the granular detail about removing hate speech, Facebook leaves lots of questions unanswered.


Facebook's latest content moderation report reveals strides and setbacks.

Photo: Chesnot/Getty Images

On a call with reporters Thursday, Facebook's vice president of integrity, Guy Rosen, said that Facebook wants to "lead the industry in transparency." The call accompanied the release of Facebook's fourth-quarter content moderation report, which shares in granular detail the amount of content Facebook removed for various violations of its policies and why.

But what's become increasingly clear over the years that Facebook has published these reports is just how much the company leaves out. Also clear: Facebook hopes these reports will serve as a model for regulators to impose on the tech industry writ large.

Tech giants and the people who study them have begun to recognize that there's more to content moderation than the decisions tech companies make about taking posts down or leaving them up. Equally important, if not more so, is the way companies' algorithms amplify content that violates their policies by recommending it to users or pushing it to the top of their feeds. In the run-up to the U.S. election, for instance, Facebook's internal researchers found that a majority of the political groups the platform was recommending to users were overrun with calls for violence, a realization that prompted Facebook to remove political groups from recommendations altogether, first temporarily and then permanently.

But that sort of insight about how Facebook has actually promoted content that violates its own policies is nowhere to be found in the report by the company that says it strives to lead the industry on transparency.

Neither is information on some particularly dicey categories of violations, including incitements to violence. That, after all, is the policy that prompted Facebook to ban former President Donald Trump from the platform earlier this year, following a riot at the U.S. Capitol. And yet, Facebook's transparency report offers no indication of whether such incitements to violence were on the rise in 2020 or whether Facebook acted more aggressively to stop them.

On the call, Rosen called the reports a "multi-year journey" and said Facebook is working to expand them to include additional violation categories, like incitements to violence. The company is also working on ways to report the number of accounts, pages and groups it's taking action on, not just the posts themselves. "We don't have any immediate timeline for that, but it's absolutely on the list of things we want to get to," Rosen said.

When it comes to violations in groups and pages that appear in recommendations, Rosen added, "We don't have any numbers yet to share." At least, not publicly.

For now, the report and the accompanying blog post lean heavily on the strides Facebook has made in cracking down on hate speech and organized hate groups. And it has made strides. In the last quarter of 2019, Facebook removed just 1.6 million pieces of content tied to organized hate groups, compared to 6.4 million in the final quarter of 2020. The increase indicates just how much the company's policies evolved in 2020 with respect to homegrown militias and hate groups like the Proud Boys and violent conspiracy theories like QAnon. For years, Facebook resisted calls to ban those groups, but it came around in the latter half of 2020 as the risk of violence around the election grew. Since then, Facebook has removed more than 3,000 pages and 10,000 groups associated with QAnon alone. Facebook also attributed gains in removing more hate speech in 2020 to advances in automated detection, particularly for hate speech in Arabic, Spanish and Portuguese.

And yet, the offline violence Facebook has fueled over the last several months in the U.S. suggests that this evolution is coming far too late in Facebook's history. The report also obscures the limits of Facebook's own definition of hate speech, which refers to direct attacks on the basis of "race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease." The sort of politically motivated hate that seemed to fuel the Jan. 6 riot goes unaccounted for in this report, and in Facebook's policies.

That's not to say that Facebook's report only shows the good stuff. Despite the progress Facebook made on hate speech last year, at least on paper, the report suggests its content moderation systems suffered serious setbacks in other crucial categories on both Facebook and Instagram, most notably child sexual abuse material. Facebook reported its lowest levels of enforcement against child exploitation on the platform since it began publishing these stats in 2018. In the fourth quarter, it removed 5.4 million pieces of child nudity and exploitation content, compared to 12.4 million pieces in the third quarter, and that's not because the overall volume of that content dropped. Instead, Facebook attributed the decline in enforcement to a "technical issue" that arose in mid-November, which has since been fixed.

"When the error was discovered, we fixed it, and are in the process of going back to retroactively remove all of that content that was missed," Rosen said.

It is true that Facebook reports far more information than other tech companies do. For instance, it reports the "prevalence" of violating content on its platforms: not just the amount of content it takes down, but how much it misses. In the last quarter, as hate speech takedowns grew, the prevalence of hate speech on the platform dropped, with users seeing seven to eight hate-speech posts for every 10,000 views of content. "I think you know we remain the only company to publish these numbers," Facebook's vice president of content policy, Monika Bickert, said on the call.
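To make the metric concrete: prevalence is a view-weighted rate, not a raw count of posts. Below is a minimal sketch of that arithmetic in Python, using made-up numbers and a hypothetical helper name (prevalence_per_10k) rather than Facebook's actual sampling methodology; a prevalence of seven to eight per 10,000 views works out to roughly 0.07% to 0.08% of all content views.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    # Views of violating content per 10,000 total content views.
    return violating_views / total_views * 10_000

# Hypothetical example: 75 views of hate speech out of 100,000 total views
# gives a prevalence of 7.5 per 10,000, or 0.075% of all views.
print(prevalence_per_10k(75, 100_000))  # 7.5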

Facebook's vision of transparency, and all of the holes in that vision, is especially relevant now, as the company begins to push for light-handed regulation. In particular, Facebook has urged lawmakers interested in reforming Section 230 to adopt laws that require tech companies to be more transparent about content moderation. For Facebook, that would constitute a compromise that stops short of stripping away any of the protections Section 230 gives tech platforms. As it calls for more mandated transparency, Facebook is clearly setting up its own reports as an example.

"As we talk about putting in place regulations or reforming Section 230 in the U.S., we should be considering how to hold companies accountable to take action on harmful content," Bickert said. "We think the numbers we're providing today can help inform that conversation."

They very well might. And if they do, it will be all the more important for lawmakers to look critically at not just what those numbers reveal, but also what they hide.



