Policy

Facebook is finally taking organized hate seriously

Last quarter, Facebook removed more organized hate content than foreign terrorist content for the first time in its history.

Tech spent years fighting foreign terrorists. Then came the Capitol riot.
Photo: Roberto Schmidt/Getty Images

Facebook has historically cracked down on foreign terrorist threats more aggressively than domestic ones, but new data released by the company Wednesday suggests that might be changing.

Between January and March of 2021, Facebook said it took down more content related to organized hate groups than it did content related to terrorist organizations — the first time that has happened since Facebook began reporting on content violations and enforcement in late 2017.

Overall, in the first quarter of 2021, Facebook removed 9.8 million pieces of organized hate content, up from 6.4 million in the last quarter of 2020. That's compared to 9 million pieces of terrorist content removed during the first quarter of this year, a slight increase from 8.6 million pieces of terrorist content removed in the last quarter of 2020. On Instagram, the company continued to remove more terrorist content than organized hate content, but the overall volume of content in both categories was significantly smaller than it was on Facebook.

While Facebook prohibits both organized hate and terrorism, enforcement against terrorist organizations has traditionally dwarfed enforcement against domestic hate groups. That's partly to do with governments, including the U.S. government, forcefully pushing social networks to banish foreign terrorist groups, like ISIS and Al Qaeda, while tiptoeing around domestic organizations such as the Oath Keepers or the Proud Boys. In the U.S., government officials have only begun prioritizing the threat posed by those homegrown groups since the Capitol attack.

The disparity in enforcement is also partly to do with the diffuse and, in some ways, disorganized nature of so-called organized hate groups. "Groups that are less organized and less structured and don't put out official propaganda in the same sort of way, you have to use a different tool kit in order to get at those kinds of entities," Brian Fishman, who leads Facebook's work fighting dangerous individuals and organizations, recently told Protocol.

In the run-up to the 2020 election, Facebook began ramping up its efforts to crack down on organized hate groups and militias, banning individuals and groups that had previously had free rein on the platform. Those policies were full of holes, and leaders of those militia groups have since been arrested for storming the Capitol on Jan. 6 as part of schemes that court records show were largely planned on Facebook-owned platforms.

Still, the policy updates, coupled with improved algorithmic detection, appear to have had an impact. "These are improvements in our technology that continue to improve how proactive we are in detecting more violating content," Guy Rosen, Facebook's vice president of integrity, said on a call with reporters.

The uptick in enforcement against organized hate groups tracks with an increase in automated enforcement against hate speech writ large. While Facebook removed slightly less hate speech in the first quarter of 2021 than it did in the last quarter of 2020, actual views of that content decreased. Facebook estimates that in the first quarter, users saw hate speech five or six times for every 10,000 views of content. That's down from about seven or eight views of hate speech per 10,000 views in the fourth quarter of 2020.

While Facebook's efforts to combat hate speech and hate groups may be progressing, its enforcement against other particularly sensitive categories of violating content suffered significant setbacks. According to Rosen, Facebook detected two technical issues that interfered with its detection of child sexual abuse material in the fourth quarter of 2020 and the first quarter of 2021. Enforcement against that type of content dropped from 12.4 million pieces of content in the third quarter of 2020 to less than half of that in the fourth quarter of 2020 and first quarter of 2021.

Rosen said the company is in the "process of addressing that and going back retroactively to remove and take action" on any violations Facebook might have missed.

In 2020, Facebook said it also saw a staggering increase in the number of content restrictions required by governments worldwide in order to comply with local laws. According to the report, those requests nearly doubled from 22,120 in the first half of 2020 to 42,606 in the second half, "driven mainly by increases in requests from the UK, Turkey and Brazil."

Climate

A pro-China disinformation campaign is targeting rare earth miners

It’s uncommon for disinformation campaigns to target private industry. But a new operation has cast doubt on miners looking to gain a foothold in the West, in an apparent attempt to protect China’s upper hand in a market that has become increasingly vital.

It is very uncommon for coordinated disinformation operations to target private industry, rather than governments or civil society, a cybersecurity expert says.

Photo: Goh Seng Chong/Bloomberg via Getty Images

Just when we thought the renewable energy supply chains couldn’t get more fraught, a sophisticated disinformation campaign has taken to social media to further complicate things.

Known as Dragonbridge, the campaign has existed for at least three years, but in the last few months it has shifted its focus to target several mining companies “with negative messaging in response to potential or planned rare earths production activities.” First uncovered by cybersecurity firm Mandiant, it peddles narratives in China’s interest via a network of thousands of fake social media accounts.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Some of the most astounding tech-enabled advances of the next decade, from cutting-edge medical research to urban traffic control and factory floor optimization, will be enabled by a device often smaller than a thumbnail: the memory chip.

While vast amounts of data are created, stored and processed every moment — by some estimates, 2.5 quintillion bytes daily — the insights in that data are unlocked by the memory chips that hold and transfer it. “Memory will propel the next 10 years into the most transformative years in human history,” said Sanjay Mehrotra, president and CEO of Micron Technology.

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.
Fintech

Ripple’s CEO threatens to leave the US if it loses SEC case

CEO Brad Garlinghouse said a few countries have reached out to Ripple about relocating.

"There's no doubt that if the SEC doesn't win their case against us that that is good for crypto in the United States,” Brad Garlinghouse told Protocol.

Photo: Stephen McCarthy/Sportsfile for Collision via Getty Images

Ripple CEO Brad Garlinghouse said the crypto company will move to another country if it loses in its legal battle with the SEC.

Garlinghouse said he’s confident that Ripple will prevail against the federal regulator, which accused the company of failing to register roughly $1.4 billion in XRP tokens as securities.

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.

Policy

The Supreme Court’s EPA ruling is bad news for tech regulation, too

The justices just gave themselves a lot of discretion to smack down agency rules.

The ruling could also endanger work on competition issues by the FTC and net neutrality by the FCC.

Photo: Geoff Livingston/Getty Images

The Supreme Court’s decision last week gutting the Environmental Protection Agency’s ability to regulate greenhouse gas emissions didn’t just signal the conservative justices’ dislike of the Clean Air Act at a moment of climate crisis. It also served as a warning for anyone who would like to see more regulation of Big Tech.

At the heart of Chief Justice John Roberts’ decision in West Virginia v. EPA was a codification of the “major questions doctrine,” which, he wrote, requires “clear congressional authorization” when agencies want to regulate areas of great “economic and political significance.”

Ben Brody

Ben Brody (@BenBrodyDC) is a senior reporter at Protocol focusing on how Congress, courts and agencies affect the online world we live in. He formerly covered tech policy and lobbying (including antitrust, Section 230 and privacy) at Bloomberg News, where he previously reported on the influence industry, government ethics and the 2016 presidential election. Before that, Ben covered business news at CNNMoney and AdAge, and all manner of stories in and around New York. He still loves appearing on the New York news radio he grew up with.

Enterprise

Microsoft and Google are still using emotion AI, but with limits

Microsoft said accessibility goals overrode problems with emotion recognition, and Google still offers off-the-shelf emotion recognition technology, amid growing concern over the controversial AI.

Emotion recognition is a well-established field of computer vision research; however, AI-based technologies used in an attempt to assess people’s emotional states have moved beyond the research phase.

Photo: Microsoft

Microsoft said last month it would no longer provide general use of an AI-based cloud software feature used to infer people’s emotions. However, despite its own admission that emotion recognition technology creates “risks,” it turns out the company will retain its emotion recognition capability in an app used by people with vision loss.

In fact, amid growing concerns over development and use of controversial emotion recognition in everyday software, both Microsoft and Google continue to incorporate the AI-based features in their products.

“The Seeing AI person channel enables you to recognize people and to get a description of them, including an estimate of their age and also their emotion,” said Saqib Shaikh, a software engineering manager and project lead for Seeing AI at Microsoft who helped build the app, in a tutorial about the product in a 2017 Microsoft video.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
