Policy

One year since Jan. 6, has anything really changed for tech?

Tech platforms have had a lot to make up for this last year. Did any of it matter?

Rioters scaling the U.S. Capitol walls during the insurrection


Photo: Blink O'faneye/Flickr

There was a brief window when it almost looked like tech platforms were going to emerge from the 2020 U.S. election unscathed. They’d spent years nursing their wounds from 2016 and building sturdy defenses against future attacks. So when Election Day came and went without any obvious signs of foreign interference or outright civil war, tech leaders and even some in the tech press considered it a win.

“As soon as Biden was declared the winner, and you didn’t have mass protests in the streets, people sort of thought, ‘OK, we can finally turn the corner and not have to worry about this,’” said Katie Harbath, Facebook’s former public policy director.

One year ago today, it became clear those declarations of victory were as premature as former President Trump’s.

Much has been said and written about what tech platforms large and small failed to do in the weeks leading up to the Capitol riot. Just this week, for example, ProPublica and The Washington Post reported that after the election, Facebook rolled back protections against extremist groups right when the company arguably needed those protections most. Whether the riot would have happened — or happened like it did — if tech platforms had done things differently is and will forever be unknowable. An arguably better question is: What’s changed in a year and what impact, if any, have those changes had on the spread of election lies and domestic extremism?

“Ultimately what Jan. 6 and the last year has shown is that we can no longer think about these issues around election integrity and civic integrity as something that’s a finite period of time around Election Day,” Harbath said. “These companies need to think more about an always-on approach to this work.”

What changed?

The most immediate impact of the riot on tech platforms was that it revealed room for exceptions to even their most rigid rules. That Twitter and Facebook would ban a sitting U.S. president was all but unthinkable up until the moment it finally happened, a few weeks before Trump left office. After Jan. 6, those rules were being rewritten in real time, and remain fuzzy one year later. Facebook still hasn’t come to a conclusion about whether Trump will ever be allowed back when his two-year suspension is up.

But Trump’s suspension was still a watershed moment, indicating a new willingness among social media platforms to actually enforce their existing rules against high-profile violators. Up until that time, said Daniel Kreiss, a professor at the University of North Carolina’s Hussman School of Journalism and Media, platforms including Facebook and Twitter had rules on the books but often found ways to justify why Trump wasn’t running afoul of them.

“There was a lot of interpretive flexibility with their policies,” Kreiss said. “Since Jan. 6, the major platforms — I’m thinking particularly of Twitter and Facebook — have grown much more willing to enforce existing policies against powerful political figures.” Just this week, Twitter offered up another prominent example with the permanent suspension of Georgia Rep. Marjorie Taylor Greene.

Other work that began even before Jan. 6 took on new urgency after the riot. Before the election, Facebook had committed to temporarily stop recommending political and civic groups, after internal investigations found that the vast majority of the most active groups were cesspools of hate, misinformation and harassment. After the riot, that policy became permanent. Facebook also said late last January that it was considering reducing political content in the News Feed, a test that has only expanded since then.

The last year also saw tech platforms wrestle with what to do about posts and people who aren’t explicitly violating their rules, but are walking a fine line. Twitter and Facebook began to embrace a middle ground between completely removing posts or users and leaving them alone entirely by leaning in on warning labels and preventative prompts.

They also started taking a more expansive view of what constitutes harm, looking beyond “coordinated inauthentic behavior,” like Russian troll farms, and instead focusing more on networks of real users who are wreaking havoc without trying to mask their identities. In January of last year alone, Twitter permanently banned 70,000 QAnon-linked accounts under a relatively new policy forbidding “coordinated harmful activity.”

“Our approach both before and after January 6 has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to lead to offline harm,” spokesperson Trenton Kennedy told Protocol in a statement.

Facebook also wrestled with this question in an internal report on its role in the riot, first published last year by BuzzFeed News. “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” the authors of the report wrote. “What do we do when that authentic movement espouses hate or delegitimizes free elections?”

Those questions are still far from answered, said Kreiss. “Where’s the line between people saying in the wake of 2016 that Trump was only president because of Russian disinformation, and therefore it was an illegitimate election, and claims about non-existent voting fraud?” Kreiss said. “I can draw those lines, but platforms have struggled with it.”

In a statement, Facebook spokesperson Kevin McAlister told Protocol, “We have strong policies that we continue to enforce, including a ban on hate organizations and removing content that praises or supports them. We are in contact with law enforcement agencies, including those responsible for addressing threats of domestic terrorism.”

What didn’t?

The far bigger question looming over all of this is whether any of these tweaks and changes have had an impact on the larger problem of extremism in America — or whether it was naive to ever believe they could.

The great deplatforming of 2021 only prompted a “great scattering” of extremist groups to alternative platforms, according to one Atlantic Council report. “These findings portray a domestic extremist landscape that was battered by the blowback it faced after the Capitol riot, but not broken by it,” the report read.

Steve Bannon’s War Room channel may have gotten yanked from YouTube and his account may have been banned from Twitter, but his extremist views have continued unabated on his podcast and on his website, where he’s been able to rake in money from Google Ads. And Bannon’s not alone: A recent report by news rating firm NewsGuard found that 81% of the top websites spreading misinformation about the 2020 election last year are still up and running, many of them backed by ads from major brands.

Google noted the company did demonetize at least two of the sites mentioned in the report — Gateway Pundit and American Thinker — last year, and has taken ads off of individual URLs mentioned in the report as well. “We take this very seriously and have strict policies prohibiting content that incites violence or undermines trust in elections across Google's products,” spokesperson Nicolas Lopez said in a statement, noting that the company has also removed tens of thousands of videos from YouTube for violating its election integrity policies.

Deplatforming can also create a measurable backlash effect, as those who have been unceremoniously excised from mainstream social media urge their supporters to follow them to whatever smaller platform will have them. One recent report on Parler activity leading up to the riot found that users who had been deplatformed elsewhere wore it like a badge of honor on Parler, which only mobilized them further. “Being ‘banned from Twitter’ is such a prominent theme among users in this subset that it raises troubling questions about the unintended consequences and efficacy of content moderation schemes on mainstream platforms,” the report, by the New America think tank, read.

“Did deplatforming really work or is it just accelerating this fractured news environment that we have where people are not sharing common areas where they’re getting their information?” Harbath asked. This fragmentation can also make it tougher to intervene in the less visible places where true believers are gathering.

There’s an upside to that, of course: Making this stuff harder to find is kind of the point. As Kreiss points out, deplatforming “reduces the visibility” of pernicious messages to the average person. Evidence overwhelmingly shows that the majority of people who were arrested in connection with the Capitol riot were average people with no known connections to extremist groups.

Still, while tech giants have had plenty to make up for this last year, ultimately, there’s only so much they can change at a time when some estimates suggest about a quarter of Americans believe the 2020 election was stolen and some 21 million Americans believe use of force would be justified to restore Trump as president. And they believe that not just because of what they see on social media, but because of what the political elites and elected officials in their party are saying on a regular basis.

“The biggest thing that hasn’t changed is the trajectory of the growing extremism of one of the two major U.S. political parties,” Kreiss said. “Platforms are downstream of a lot of that, and until that changes, we’re not going to be able to create new policies out of that problem.”


Correction: This was updated Jan. 6, 2022 to clarify that Facebook was just considering reducing political content in the News Feed on its late-January earnings call.
