Policy

One year since Jan. 6, has anything really changed for tech?

Tech platforms have had a lot to make up for this last year. Did any of it matter?

Rioters scaling the U.S. Capitol walls during the insurrection

Photo: Blink O'faneye/Flickr

There was a brief window when it almost looked like tech platforms were going to emerge from the 2020 U.S. election unscathed. They’d spent years nursing their wounds from 2016 and building sturdy defenses against future attacks. So when Election Day came and went without obvious signs of foreign interference or outright civil war, tech leaders and even some in the tech press considered it a win.

“As soon as Biden was declared the winner, and you didn’t have mass protests in the streets, people sort of thought, ‘OK, we can finally turn the corner and not have to worry about this,’” said Katie Harbath, Facebook’s former public policy director.

One year ago today, it became clear those declarations of victory were as premature as former President Trump’s.

Much has been said and written about what tech platforms large and small failed to do in the weeks leading up to the Capitol riot. Just this week, for example, ProPublica and The Washington Post reported that after the election, Facebook rolled back protections against extremist groups right when the company arguably needed those protections most. Whether the riot would have happened — or happened like it did — if tech platforms had done things differently is and will forever be unknowable. An arguably better question is: What’s changed in a year and what impact, if any, have those changes had on the spread of election lies and domestic extremism?

“Ultimately what Jan. 6 and the last year has shown is that we can no longer think about these issues around election integrity and civic integrity as something that’s a finite period of time around Election Day,” Harbath said. “These companies need to think more about an always-on approach to this work.”

What changed?

The most immediate impact of the riot on tech platforms was that it revealed room for exceptions to even their most rigid rules. That Twitter and Facebook would ban a sitting U.S. president was all but unthinkable up until the moment it finally happened, a few weeks before Trump left office. After Jan. 6, those rules were rewritten in real time, and they remain fuzzy one year later. Facebook still hasn’t decided whether Trump will ever be allowed back when his two-year suspension is up.

But Trump’s suspension was still a watershed moment, indicating a new willingness among social media platforms to actually enforce their existing rules against high-profile violators. Up until that time, said Daniel Kreiss, a professor at the University of North Carolina’s Hussman School of Journalism and Media, platforms including Facebook and Twitter had rules on the books but often found ways to justify why Trump wasn’t running afoul of them.

“There was a lot of interpretive flexibility with their policies,” Kreiss said. “Since Jan. 6, the major platforms — I’m thinking particularly of Twitter and Facebook — have grown much more willing to enforce existing policies against powerful political figures.” Just this week, Twitter offered up another prominent example with the permanent suspension of Georgia Rep. Marjorie Taylor Greene.

Other work that began even before Jan. 6 took on new urgency after the riot. Before the election, Facebook had committed to temporarily stop recommending political and civic groups, after internal investigations found that the vast majority of the most active groups were cesspools of hate, misinformation and harassment. After the riot, that policy became permanent. Facebook also said late last January that it was considering reducing political content in the News Feed, a test that has only expanded since then.

The last year also saw tech platforms wrestle with what to do about posts and people who aren’t explicitly violating their rules, but are walking a fine line. Twitter and Facebook began to embrace a middle ground between completely removing posts or users and leaving them alone entirely by leaning in on warning labels and preventative prompts.

They also started taking a more expansive view of what constitutes harm, looking beyond “coordinated inauthentic behavior,” like Russian troll farms, and instead focusing more on networks of real users who are wreaking havoc without trying to mask their identities. In January of last year alone, Twitter permanently banned 70,000 QAnon-linked accounts under a relatively new policy forbidding “coordinated harmful activity.”

“Our approach both before and after January 6 has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to lead to offline harm,” spokesperson Trenton Kennedy told Protocol in a statement.

Facebook also wrestled with this question in an internal report on its role in the riot last year, first published by BuzzFeed News. “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” the authors of the report wrote. “What do we do when that authentic movement espouses hate or delegitimizes free elections?”

Those questions are still far from answered, said Kreiss. “Where’s the line between people saying in the wake of 2016 that Trump was only president because of Russian disinformation, and therefore it was an illegitimate election, and claims about non-existent voting fraud?” Kreiss said. “I can draw those lines, but platforms have struggled with it.”

In a statement, Facebook spokesperson Kevin McAlister told Protocol, “We have strong policies that we continue to enforce, including a ban on hate organizations and removing content that praises or supports them. We are in contact with law enforcement agencies, including those responsible for addressing threats of domestic terrorism.”

What didn’t?

The far bigger question looming over all of this is whether any of these tweaks and changes have had an impact on the larger problem of extremism in America — or whether it was naive to ever believe they could.

The great deplatforming of 2021 only prompted a “great scattering” of extremist groups to alternative platforms, according to one Atlantic Council report. “These findings portray a domestic extremist landscape that was battered by the blowback it faced after the Capitol riot, but not broken by it,” the report read.

Steve Bannon’s War Room channel may have gotten yanked from YouTube and his account may have been banned from Twitter, but his extremist views have continued unabated on his podcast and on his website, where he’s been able to rake in money from Google Ads. And Bannon’s not alone: A recent report by news rating firm NewsGuard found that 81% of the top websites spreading misinformation about the 2020 election last year are still up and running, many of them backed by ads from major brands.

Google noted that it did demonetize at least two of the sites mentioned in the report, Gateway Pundit and American Thinker, last year, and has taken ads off individual URLs cited in the report as well. “We take this very seriously and have strict policies prohibiting content that incites violence or undermines trust in elections across Google's products,” spokesperson Nicolas Lopez said in a statement, noting that the company has also removed tens of thousands of videos from YouTube for violating its election integrity policies.

Deplatforming can also create a measurable backlash effect, as those who have been unceremoniously excised from mainstream social media urge their supporters to follow them to whatever smaller platform will have them. One recent report on Parler activity leading up to the riot found that users who had been deplatformed elsewhere wore it like a badge of honor on Parler, which only mobilized them further. “Being ‘banned from Twitter’ is such a prominent theme among users in this subset that it raises troubling questions about the unintended consequences and efficacy of content moderation schemes on mainstream platforms,” the report, by the New America think tank, read.

“Did deplatforming really work or is it just accelerating this fractured news environment that we have where people are not sharing common areas where they’re getting their information?” Harbath asked. This fragmentation can also make it tougher to intervene in the less visible places where true believers are gathering.

There’s an upside to that, of course: Making this stuff harder to find is kind of the point. As Kreiss points out, deplatforming “reduces the visibility” of pernicious messages to the average person. Evidence overwhelmingly shows that the majority of people arrested in connection with the Capitol riot were ordinary people with no known ties to extremist groups.

Still, while tech giants have had plenty to make up for this last year, ultimately, there’s only so much they can change at a time when some estimates suggest about a quarter of Americans believe the 2020 election was stolen and some 21 million believe the use of force would be justified to restore Trump as president. And they believe that not just because of what they see on social media, but because of what the political elites and elected officials in their party are saying on a regular basis.

“The biggest thing that hasn’t changed is the trajectory of the growing extremism of one of the two major U.S. political parties,” Kreiss said. “Platforms are downstream of a lot of that, and until that changes, we’re not going to be able to create new policies out of that problem.”

Correction: This story was updated Jan. 6, 2022, to clarify that Facebook said on its late-January earnings call that it was considering reducing political content in the News Feed.

Climate

A pro-China disinformation campaign is targeting rare earth miners

It’s uncommon for disinformation operations to target private industry. But a new campaign has cast doubt on miners looking to gain a foothold in the West, in an apparent attempt to protect China’s upper hand in a market that has become increasingly vital.

It is very uncommon for coordinated disinformation operations to target private industry, rather than governments or civil society, a cybersecurity expert says.

Photo: Goh Seng Chong/Bloomberg via Getty Images

Just when we thought the renewable energy supply chains couldn’t get more fraught, a sophisticated disinformation campaign has taken to social media to further complicate things.

Known as Dragonbridge, the campaign has existed for at least three years, but in the last few months it has shifted its focus to target several mining companies “with negative messaging in response to potential or planned rare earths production activities.” Initially uncovered by cybersecurity firm Mandiant, it peddles narratives in China’s interest via a network of thousands of fake social media accounts.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Some of the most astounding tech-enabled advances of the next decade, from cutting-edge medical research to urban traffic control and factory floor optimization, will be enabled by a device often smaller than a thumbnail: the memory chip.

While vast amounts of data are created, stored and processed every moment — by some estimates, 2.5 quintillion bytes daily — the insights in that data are unlocked by the memory chips that hold and transfer it. “Memory will propel the next 10 years into the most transformative years in human history,” said Sanjay Mehrotra, president and CEO of Micron Technology.

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.
Fintech

Ripple’s CEO threatens to leave the US if it loses SEC case

CEO Brad Garlinghouse said a few countries have reached out to Ripple about relocating.

"There's no doubt that if the SEC doesn't win their case against us that that is good for crypto in the United States,” Brad Garlinghouse told Protocol.

Photo: Stephen McCarthy/Sportsfile for Collision via Getty Images

Ripple CEO Brad Garlinghouse said the crypto company will move to another country if it loses its legal battle with the SEC.

Garlinghouse said he’s confident that Ripple will prevail against the federal regulator, which accused the company of failing to register roughly $1.4 billion in XRP tokens as securities.

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.

Policy

The Supreme Court’s EPA ruling is bad news for tech regulation, too

The justices just gave themselves a lot of discretion to smack down agency rules.

The ruling could also endanger work on competition issues by the FTC and net neutrality by the FCC.

Photo: Geoff Livingston/Getty Images

The Supreme Court’s decision last week gutting the Environmental Protection Agency’s ability to regulate greenhouse gas emissions didn’t just signal the conservative justices’ dislike of the Clean Air Act at a moment of climate crisis. It also served as a warning for anyone who would like to see more regulation of Big Tech.

At the heart of Chief Justice John Roberts’ decision in West Virginia v. EPA was a codification of the “major questions doctrine,” which, he wrote, requires “clear congressional authorization” when agencies want to regulate in areas of great “economic and political significance.”

Ben Brody

Ben Brody (@BenBrodyDC) is a senior reporter at Protocol focusing on how Congress, courts and agencies affect the online world we live in. He formerly covered tech policy and lobbying (including antitrust, Section 230 and privacy) at Bloomberg News, where he previously reported on the influence industry, government ethics and the 2016 presidential election. Before that, Ben covered business news at CNNMoney and AdAge, and all manner of stories in and around New York. He still loves appearing on the New York news radio he grew up with.

Enterprise

Microsoft and Google are still using emotion AI, but with limits

Microsoft said accessibility goals overrode problems with emotion recognition, and Google offers off-the-shelf emotion recognition technology, amid growing concern over the controversial AI.

Emotion recognition is a well-established field of computer vision research; however, AI-based technologies used in an attempt to assess people’s emotional states have moved beyond the research phase.

Photo: Microsoft

Microsoft said last month it would no longer offer general access to an AI-based cloud software feature used to infer people’s emotions. But despite its own admission that emotion recognition technology creates “risks,” the company will retain that capability in an app used by people with vision loss.

In fact, amid growing concerns over development and use of controversial emotion recognition in everyday software, both Microsoft and Google continue to incorporate the AI-based features in their products.

“The Seeing AI person channel enables you to recognize people and to get a description of them, including an estimate of their age and also their emotion,” said Saqib Shaikh, a software engineering manager and Seeing AI project lead at Microsoft who helped build the app, in a 2017 Microsoft tutorial video about the product.
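For a sense of what “off-the-shelf” means in practice, Google’s Cloud Vision API returns emotion inferences as per-face likelihood buckets rather than calibrated scores. The sketch below is a generic illustration of that public API, not code from Seeing AI or any specific product mentioned above, and the image path is a hypothetical placeholder.

```python
# Minimal sketch: emotion "likelihoods" from Google Cloud Vision face detection.
# Assumes the google-cloud-vision package is installed and application
# credentials are configured; "face.jpg" is a hypothetical placeholder image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("face.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

for face in response.face_annotations:
    # Each emotion comes back as a coarse bucket (VERY_UNLIKELY ... VERY_LIKELY),
    # not a probability -- one reason critics call such inferences blunt.
    print("joy:", vision.Likelihood(face.joy_likelihood).name)
    print("sorrow:", vision.Likelihood(face.sorrow_likelihood).name)
    print("anger:", vision.Likelihood(face.anger_likelihood).name)
    print("surprise:", vision.Likelihood(face.surprise_likelihood).name)
```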

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
