Power

Facebook’s transparency report shows what Facebook wants you to see

For all of the granular detail about removing hate speech, Facebook leaves lots of questions unanswered.


Facebook's latest content moderation report reveals strides and setbacks.

Photo: Chesnot/Getty Images

On a call with reporters Thursday, Facebook's vice president of integrity, Guy Rosen, said that Facebook wants to "lead the industry in transparency." The call accompanied the release of Facebook's fourth-quarter content moderation report, which shares in granular detail the amount of content Facebook removed for various violations of its policies and why.

But what's become increasingly clear over the years that Facebook has published these reports is just how much the company leaves out. Also clear: Facebook hopes these reports will serve as a model for regulators to impose on the tech industry writ large.

Tech giants and the people who study them have begun to recognize that there's more to content moderation than the decisions tech companies make about taking posts down or leaving them up. Equally, if not more, important is the way companies' algorithms amplify content that violates their policies by recommending it to users or pushing it to the top of users' feeds. In the run-up to the U.S. election, for instance, Facebook's internal researchers found that the majority of political groups they were recommending to users were overrun with calls for violence, a realization that prompted Facebook to — temporarily and then permanently — remove political groups from recommendations altogether.

But that sort of insight about how Facebook has actually promoted content that violates its own policies is nowhere to be found in the report by the company that says it strives to lead the industry on transparency.

Neither is information on some particularly dicey categories of violations, including incitements to violence. That, after all, is the policy that prompted Facebook to ban former President Donald Trump from the platform earlier this year, following a riot at the U.S. Capitol. And yet, Facebook's transparency report offers no indication of whether such incitements to violence were on the rise in 2020 or whether Facebook acted more aggressively to stop them.

On the call, Rosen called the reports a "multi-year journey" and said Facebook is working to expand them to include additional violation categories, like incitements to violence. The company is also working on ways to report the number of accounts, pages and groups it's taking action on, not just the posts themselves. "We don't have any immediate timeline for that, but it's absolutely on the list of things we want to get to," Rosen said.

When it comes to violations in groups and pages that appear in recommendations, Rosen added, "We don't have any numbers yet to share." At least, not publicly.

For now, the report and the accompanying blog post lean heavily on the strides Facebook has made in cracking down on hate speech and organized hate groups. And it has made strides. In the last quarter of 2019, Facebook removed just 1.6 million pieces of content from organized hate groups, compared to 6.4 million in the final quarter of 2020. The uptick indicates just how much the company's policies evolved in 2020 when it comes to homegrown militia and hate groups like the Proud Boys and violent conspiracy theories like QAnon. Facebook had resisted calls for years to ban those groups, but came around to the idea in the latter half of 2020 as the risk of violence around the election grew. Since then, Facebook has removed more than 3,000 pages and 10,000 groups associated with QAnon alone. Facebook also attributed gains it made in removing more hate speech in 2020 to advancements in automated technology, particularly with regard to hate speech in Arabic, Spanish and Portuguese.

And yet, the offline violence Facebook has fueled over the last several months in the U.S. suggests that this evolution is coming far too late in Facebook's history. The report also obscures the limits of Facebook's own definition of hate speech, which refers to direct attacks on the basis of "race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease." The sort of politically motivated hate that seemed to fuel the Jan. 6 riot goes unaccounted for in this report — and in Facebook's policies.

That's not to say that Facebook's report only shows the good stuff. Despite the progress Facebook made on hate speech last year, at least on paper, the report suggests its content moderation systems suffered serious setbacks in other crucial categories on both Facebook and Instagram, most notably child sexual abuse material. Last quarter, Facebook reported its lowest levels of enforcement against child exploitation on the platform since it began reporting these stats in 2018. In the fourth quarter, it removed 5.4 million pieces of child nudity and exploitation content, compared to 12.4 million pieces in the third quarter, and that's not because the overall volume of that content dropped. Instead, Facebook attributed the decline in enforcement to a "technical issue" that arose in mid-November, which has since been fixed.

"When the error was discovered, we fixed it, and are in the process of going back to retroactively remove all of that content that was missed," Rosen said.

It is true that Facebook reports far more information than other tech companies do. For instance, it reports the "prevalence" of violating content on its platforms — that is, not just the amount of content that it takes down, but the amount of content it missed. In the last quarter, for instance, as hate speech takedowns grew, the prevalence of hate speech on the platform dropped, with users seeing seven to eight hate-speech posts for every 10,000 views of content. "I think you know we remain the only company to publish these numbers," Facebook's vice president of content policy, Monika Bickert, said on the call.

Facebook's vision of transparency — and all of the holes contained in that vision — is especially relevant now, as the company begins to push for light-handed regulation. In particular, Facebook has urged lawmakers interested in reforming Section 230 to adopt laws that require tech companies to be more transparent about content moderation. For Facebook, that would constitute a compromise that stops short of stripping away any of the protections that Section 230 gives tech platforms. As it calls for more mandated transparency, Facebook is clearly setting up its reports as an example.

"As we talk about putting in place regulations or reforming Section 230 in the U.S., we should be considering how to hold companies accountable to take action on harmful content," Bickert said. "We think the numbers we're providing today can help inform that conversation."

They very well might. And if they do, it will be all the more important for lawmakers to look critically at not just what those numbers reveal, but also what they hide.

Hollywood averted its first streaming strike with an 11th-hour deal

IATSE's 60,000 members threatened to strike for better working conditions; at the core of the conflict was Hollywood's move to streaming.

60,000 Hollywood workers had been set to go on strike this week.

Photo: Myung J. Chun/Los Angeles Times via Getty Images

The union representing 60,000 studio workers struck an agreement with major studios and production companies.

A last-minute agreement between the International Alliance of Theatrical Stage Employees (IATSE) and the Alliance of Motion Picture and Television Producers (AMPTP) helped avert a strike that would have shut down Hollywood: The two sides agreed on a new contract that includes pay raises as well as improved break schedules, Deadline reported Saturday evening.

Janko Roettgers

Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety's first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.

The way we work has fundamentally changed. COVID-19 upended business dealings and office work processes, putting into hyperdrive a move toward digital collaboration platforms that allow teams to streamline processes and communicate from anywhere. According to the International Data Corporation (IDC), revenue for worldwide collaboration applications increased 32.9 percent from 2019 to 2020, reaching $22.6 billion; it's expected to become a $50.7 billion industry by 2025.

"While consumers and early adopter businesses had widely embraced collaborative applications prior to the pandemic, the market saw five years' worth of new users in the first six months of 2020," said Wayne Kurtzman, research director of social and collaboration at IDC. "This has cemented collaboration, at least to some extent, for every business, large and small."

Kate Silver

Kate Silver is an award-winning reporter and editor with 15-plus years of journalism experience. Based in Chicago, she specializes in feature and business reporting. Kate's reporting has appeared in the Washington Post, The Chicago Tribune, The Atlantic's CityLab, Atlas Obscura, The Telegraph and many other outlets.

Protocol | Workplace

Instacart workers are on strike. How far can it get them?

Instacart activists want a nationwide strike to start today, but many workers are too afraid of the company and feel they can't afford a day off of work.

Gig workers protest in front of an Amazon facility in 2020.

Photo: Michael Nagle/Bloomberg via Getty Images

Starting today, an Instacart organizing group is asking the app's gig workers to go on a nationwide strike to demand better payment structures, benefits and other changes to the way the company treats its workers — but if past strikes are any indication, most Instacart users probably won't even notice.

The majority of Instacart workers on forums like Reddit and Facebook appear either unaware of the planned strike or don't plan to participate because they are skeptical of its power, afraid of retaliation from the company or are too reliant on what they do make from the app to be able to afford to take even one day off of the platform. "Not unless someone is going to pay my bills," "It will never work, you will never be able to get every shopper to organize" and "Last time there was a 'strike' Instacart took away our quality bonus pay," are just a few of the comments Instacart shoppers have left in response to news of the strike.

Anna Kramer

Anna Kramer is a reporter at Protocol (Twitter: @anna_c_kramer, email: akramer@protocol.com), where she writes about labor and workplace issues. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied International Relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.

Protocol | China

WeChat promises to stop accessing users’ photo albums amid public outcry

A tech blogger claimed that popular Chinese apps snoop around users' photo libraries, provoking heightened public concerns over privacy.

A survey launched by Sina Tech shows 94% of some 30,000 responding users said they are not comfortable with apps reading their photo libraries just to allow them to share images faster in chats.

Photo: S3studio via Getty Images

A Chinese tech blogger dropped a bombshell last Friday, claiming on Chinese media that he found that several popular Chinese apps, including the Tencent-owned chat apps WeChat and QQ, as well as the Alibaba-owned ecommerce app Taobao, frequently access iPhone users' photo albums in the background even when those apps are not in use.

The original Weibo post from the tech blogger, who uses the handle @Hackl0us, provoked intense debates about user privacy on the Chinese internet and consequently prompted WeChat to announce that it would stop fetching users' photo album data in the background.

Shen Lu

Shen Lu is a reporter with Protocol | China. Her writing has appeared in Foreign Policy, The New York Times and POLITICO, among other publications. She can be reached at shenlu@protocol.com.

Protocol | Enterprise

As businesses struggle with data, enterprise tech is cleaning up

Enterprise tech's vision of "big data" largely fell flat inside silos. But now, an army of providers think they've figured out the problems. And customers and investors are taking note.

Corporate data tends to settle in silos, making it harder to understand the bigger picture. Enterprise tech vendors smell a lucrative opportunity.

Photo: Jim Witkowski/Unsplash

Data isn't the new oil; it's the new gold. And in any gold rush, the ones who make the most money in the long run are the tool makers and suppliers.

Enterprise tech vendors have long peddled a vision of corporate America centered around so-called "big data." But there was a big problem: Many of those projects failed to produce a return. An army of new providers think they've finally figured out the problem, and investors and customers are taking note.

Joe Williams

Joe Williams is a senior reporter at Protocol covering enterprise software, including industry giants like Salesforce, Microsoft, IBM and Oracle. He previously covered emerging technology for Business Insider. Joe can be reached at JWilliams@Protocol.com. To share information confidentially, he can also be contacted on a non-work device via Signal (+1-309-265-6120) or JPW53189@protonmail.com.
