Policy

One year since Jan. 6, has anything really changed for tech?

Tech platforms have had a lot to make up for this last year. Did any of it matter?

Rioters scaling the U.S. Capitol walls during the insurrection

Photo: Blink O'faneye/Flickr

There was a brief window when it almost looked like tech platforms were going to emerge from the 2020 U.S. election unscathed. They’d spent years nursing their wounds from 2016 and building sturdy defenses against future attacks. So when Election Day came and went without any obvious signs of foreign interference or outright civil war, tech leaders and even some in the tech press considered it a win.

“As soon as Biden was declared the winner, and you didn’t have mass protests in the streets, people sort of thought, ‘OK, we can finally turn the corner and not have to worry about this,’” said Katie Harbath, Facebook’s former public policy director.

One year ago today, it became clear those declarations of victory were as premature as former President Trump’s.

Much has been said and written about what tech platforms large and small failed to do in the weeks leading up to the Capitol riot. Just this week, for example, ProPublica and The Washington Post reported that after the election, Facebook rolled back protections against extremist groups right when the company arguably needed those protections most. Whether the riot would have happened — or happened like it did — if tech platforms had done things differently is and will forever be unknowable. An arguably better question is: What’s changed in a year and what impact, if any, have those changes had on the spread of election lies and domestic extremism?

“Ultimately what Jan. 6 and the last year has shown is that we can no longer think about these issues around election integrity and civic integrity as something that’s a finite period of time around Election Day,” Harbath said. “These companies need to think more about an always-on approach to this work.”

What changed?

The most immediate impact of the riot on tech platforms was that it revealed room for exceptions to even their most rigid rules. That Twitter and Facebook would ban a sitting U.S. president was all but unthinkable up until the moment it finally happened, a few weeks before Trump left office. After Jan. 6, those rules were being rewritten in real time, and remain fuzzy one year later. Facebook still hasn’t come to a conclusion about whether Trump will ever be allowed back when his two-year suspension is up.

But Trump’s suspension was still a watershed moment, indicating a new willingness among social media platforms to actually enforce their existing rules against high-profile violators. Up until that time, said Daniel Kreiss, a professor at the University of North Carolina’s Hussman School of Journalism and Media, platforms including Facebook and Twitter had rules on the books but often found ways to justify why Trump wasn’t running afoul of them.

“There was a lot of interpretive flexibility with their policies,” Kreiss said. “Since Jan. 6, the major platforms — I’m thinking particularly of Twitter and Facebook — have grown much more willing to enforce existing policies against powerful political figures.” Just this week, Twitter offered up another prominent example with the permanent suspension of Georgia Rep. Marjorie Taylor Greene’s personal account.

Other work that began even before Jan. 6 took on new urgency after the riot. Before the election, Facebook had committed to temporarily stop recommending political and civic groups, after internal investigations found that the vast majority of the most active groups were cesspools of hate, misinformation and harassment. After the riot, that policy became permanent. Facebook also said late last January that it was considering reducing political content in the News Feed, a test that has only expanded since then.

The last year also saw tech platforms wrestle with what to do about posts and people who aren’t explicitly violating their rules, but are walking a fine line. Twitter and Facebook began to embrace a middle ground between completely removing posts or users and leaving them alone entirely by leaning in on warning labels and preventative prompts.

They also started taking a more expansive view of what constitutes harm, looking beyond “coordinated inauthentic behavior,” like Russian troll farms, and instead focusing more on networks of real users who are wreaking havoc without trying to mask their identities. In January of last year alone, Twitter permanently banned 70,000 QAnon-linked accounts under a relatively new policy forbidding “coordinated harmful activity.”

“Our approach both before and after January 6 has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to lead to offline harm,” spokesperson Trenton Kennedy told Protocol in a statement.

Facebook also wrestled with this question in an internal report on its role in the riot last year, first published by BuzzFeed News. “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” the authors of the report wrote. “What do we do when that authentic movement espouses hate or delegitimizes free elections?”

Those questions are still far from answered, said Kreiss. “Where’s the line between people saying in the wake of 2016 that Trump was only president because of Russian disinformation, and therefore it was an illegitimate election, and claims about non-existent voting fraud?” Kreiss said. “I can draw those lines, but platforms have struggled with it.”

In a statement, Facebook spokesperson Kevin McAlister told Protocol, “We have strong policies that we continue to enforce, including a ban on hate organizations and removing content that praises or supports them. We are in contact with law enforcement agencies, including those responsible for addressing threats of domestic terrorism.”

What didn’t?

The far bigger question looming over all of this is whether any of these tweaks and changes have had an impact on the larger problem of extremism in America — or whether it was naive to ever believe they could.

The great deplatforming of 2021 only prompted a “great scattering” of extremist groups to alternative platforms, according to one Atlantic Council report. “These findings portray a domestic extremist landscape that was battered by the blowback it faced after the Capitol riot, but not broken by it,” the report read.

Steve Bannon’s War Room channel may have gotten yanked from YouTube and his account may have been banned from Twitter, but his extremist views have continued unabated on his podcast and on his website, where he’s been able to rake in money from Google Ads. And Bannon’s not alone: A recent report by news rating firm NewsGuard found that 81% of the top websites spreading misinformation about the 2020 election last year are still up and running, many of them backed by ads from major brands.

Google noted the company did demonetize at least two of the sites mentioned in the report — Gateway Pundit and American Thinker — last year, and has taken ads off of individual URLs mentioned in the report as well. “We take this very seriously and have strict policies prohibiting content that incites violence or undermines trust in elections across Google's products,” spokesperson Nicolas Lopez said in a statement, noting that the company has also removed tens of thousands of videos from YouTube for violating its election integrity policies.

Deplatforming can also create a measurable backlash effect, as those who have been unceremoniously excised from mainstream social media urge their supporters to follow them to whatever smaller platform will have them. One recent report on Parler activity leading up to the riot found that users who had been deplatformed elsewhere wore it like a badge of honor on Parler, which only mobilized them further. “Being ‘banned from Twitter’ is such a prominent theme among users in this subset that it raises troubling questions about the unintended consequences and efficacy of content moderation schemes on mainstream platforms,” the report, by the New America think tank, read.

“Did deplatforming really work or is it just accelerating this fractured news environment that we have where people are not sharing common areas where they’re getting their information?” Harbath asked. This fragmentation can also make it tougher to intervene in the less visible places where true believers are gathering.

There’s an upside to that, of course: Making this stuff harder to find is kind of the point. As Kreiss points out, deplatforming “reduces the visibility” of pernicious messages to the average person. Evidence overwhelmingly shows that the majority of people who were arrested in connection with the Capitol riot were average people with no known connections to extremist groups.

Still, while tech giants have had plenty to make up for this last year, ultimately, there’s only so much they can change at a time when some estimates suggest about a quarter of Americans believe the 2020 election was stolen and some 21 million Americans believe use of force would be justified to restore Trump as president. And they believe that not just because of what they see on social media, but because of what the political elites and elected officials in their party are saying on a regular basis.

“The biggest thing that hasn’t changed is the trajectory of the growing extremism of one of the two major U.S. political parties,” Kreiss said. “Platforms are downstream of a lot of that, and until that changes, we’re not going to be able to create new policies out of that problem.”

Correction: This story was updated Jan. 6, 2022, to clarify that on its late-January earnings call, Facebook said it was only considering reducing political content in the News Feed.

China

Why does China's '996' overtime culture persist?

A Tencent worker’s open criticism shows why this work schedule is hard to change in Chinese tech.

Excessive overtime is one of the hardships Chinese workers across sectors are grappling with.

Photo: VCG/VCG via Getty Images

Workers were skeptical when Chinese Big Tech called off its notorious and prevalent overtime policy: “996,” a 12-hour, six-day work schedule. They were right to be: A recent incident at gaming and social media giant Tencent proves that a deep-rooted overtime culture is hard to change, new policy or not.

Defiant Tencent worker Zhang Yifei, who openly challenged the company’s overtime culture, reignited wide discussion of the touchy topic this week. What triggered Zhang’s criticism, according to his own account, was his team’s positive attitude toward overtime. His team, which falls under WeCom — a business communication and office collaboration tool similar to Slack — announced its in-house Breakthrough Awards. The judges’ comments highly praised one winner for logging “over 20 hours of intense work nonstop” to help meet the deadline for launching a marketing page.

Shen Lu

Shen Lu covers China's tech industry.

Entertainment

Spoiler alert: We’re already in the beta-metaverse

300 million people use metaverse-like platforms — Fortnite, Roblox and Minecraft — every month. That equals the total user base of the internet in 1999.

A lot of us are using platforms that can be considered metaverse prototypes.

Illustration: Christopher T. Fong/Protocol

What does it take to build the metaverse? What building blocks do we need, how can companies ensure that the metaverse is going to be inclusive, and how do we know that we have arrived in the 'verse?

This week, we convened a panel of experts for Protocol Entertainment’s first virtual live event, including Epic Games Unreal Engine VP and GM Marc Petit, Oasis Consortium co-founder and President Tiffany Xingyu Wang and Emerge co-founder and CEO Sly Lee.

Janko Roettgers

Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety's first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.

Can Matt Mullenweg save the internet?

He's turning Automattic into a different kind of tech giant. But can he take on the trillion-dollar walled gardens and give the internet back to the people?

Matt Mullenweg, CEO of Automattic and founder of WordPress, poses for Protocol at his home in Houston, Texas.
Photo: Arturo Olmos for Protocol

In the early days of the pandemic, Matt Mullenweg didn't move to a compound in Hawaii, bug out to a bunker in New Zealand or head to Miami and start shilling for crypto. No, in the early days of the pandemic, Mullenweg bought an RV. He drove it all over the country, bouncing between Houston and San Francisco and Jackson Hole with plenty of stops in national parks. In between, he started doing some tinkering.

The tinkering is a part-time gig: Most of Mullenweg’s time is spent as CEO of Automattic, one of the web’s largest platforms. It’s best known as the company that runs WordPress.com, the hosted version of the blogging platform that powers about 43% of the websites on the internet. Since WordPress is open-source software, no company technically owns it, but Automattic provides tools and services and oversees most of the WordPress-powered internet. It’s also the owner of the booming ecommerce platform WooCommerce, the journaling app Day One, the analytics tool Parse.ly and the podcast app Pocket Casts. Oh, and Tumblr. And Simplenote. And many others. That makes Mullenweg one of the most powerful CEOs in tech, and one of the most important voices in the debate over the future of the internet.

David Pierce

David Pierce (@pierce) is Protocol's editorial director. Prior to joining Protocol, he was a columnist at The Wall Street Journal, a senior writer with Wired, and deputy editor at The Verge. He owns all the phones.

Enterprise

Lyin’ AI: OpenAI launches new language model despite toxic tendencies

Research company OpenAI says this year’s language model is less toxic than GPT-3. But the new default, InstructGPT, still has tendencies to make discriminatory comments and generate false information.

Illustration: Pixabay; Protocol

OpenAI knows its text generators have had their fair share of problems. Now the research company has shifted to a new deep-learning model that it says produces “fewer toxic outputs” than GPT-3, its flawed but widely used system.

Starting Thursday, a new model called InstructGPT will be the default technology served up through OpenAI’s API, which delivers foundational AI into all sorts of chatbots, automatic writing tools and other text-based applications. Consider the new system, which has been in beta testing for the past year, to be a work in progress toward an automatic text generator that OpenAI hopes is closer to what humans actually want.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.

Policy

AI bias is rampant. Bug bounties could help catch it.

A Q&A with cybersecurity guru Camille François about her new research on bug bounties, and the hope that they could help rein in AI harms.

Developers of harmful AI need to know "how ugly their baby is," Camille François said.

Illustration: clarote

The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s.

Back then, some companies found they could actually make themselves safer by incentivizing the work of independent “white hat” security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That’s how the practice of bug bounties became a cornerstone of cybersecurity today.

Ben Brody

Ben Brody (@BenBrodyDC) is a senior reporter at Protocol focusing on how Congress, courts and agencies affect the online world we live in. He formerly covered tech policy and lobbying (including antitrust, Section 230 and privacy) at Bloomberg News, where he previously reported on the influence industry, government ethics and the 2016 presidential election. Before that, Ben covered business news at CNNMoney and AdAge, and all manner of stories in and around New York. He still loves appearing on the New York news radio he grew up with.
