Tech spent years fighting foreign terrorists. Then came the Capitol riot.

"Nobody's going to have a hearing if a platform takes down 1,000 ISIS accounts. But they might have a hearing if you take down 1,000 QAnon accounts."

Photo: Roberto Schmidt/Getty Images

On a Friday in August 2017 — years before a mob of armed and very-online extremists took over the U.S. Capitol — a young Black woman who worked at Facebook walked up to the microphone to ask Mark Zuckerberg a question during a weekly companywide question-and-answer session.

Zuckerberg had just finished speaking to the staff about the white supremacist violence in Charlottesville, Virginia, the weekend before — and what a difficult week it had been for the world. He was answering questions on a range of topics, but the employee wanted to know: Why had he waited so long to say something?

The so-called Unite the Right rally in Charlottesville had been planned in plain sight for the better part of a month on Facebook. Facebook took the event down only a day before it began, citing its ties to hate groups and the threat of physical harm. That turned out to be more than a threat. The extremist violence in Charlottesville left three people dead and dozens more injured. Then-Attorney General Jeff Sessions later called it an act of "domestic terrorism."

Zuckerberg had already posted a contrite, cautious message about the rally on Facebook earlier that week, saying the company would monitor for any further threats of violence. But his in-person response to the employee's question that day struck some on the staff as dismissive. "He said in front of the entire company, both in person and watching virtually, that things happen all over the world: Is he supposed to comment on everything?" one former employee recalled.

"It was something like: He can't be giving an opinion on everything that happens in the world every Friday," another former employee remembered.

Facebook's chief operating officer and resident tactician, Sheryl Sandberg, quickly swooped in, thanking the employee for her question and rerouting the conversation to talk about Facebook's charitable donations and how Sandberg herself thinks about what to comment on publicly. A Facebook spokesperson confirmed the details of this account, but said it lacked context, including that Zuckerberg did admit he should have said something sooner.

Still, to the people who spoke with Protocol, Zuckerberg's unscripted remarks that day underscored something some employees already feared: that the company had yet to take the threat posed by domestic extremists in the U.S. as seriously as it was taking the threat from foreign extremists linked to ISIS and al-Qaeda. "There wasn't a patent condemnation such that we would have expected had this been a foreign extremist group," a third former employee said.

At the time, tech giants were already hard at work figuring out how to crack down on the global terrorist networks that filled their sites with beheading videos and used social media to openly recruit new adherents. Just a few months before Charlottesville, Facebook, YouTube, Twitter and Microsoft had announced a novel plan to share intel on known terrorist content so they could automatically remove posts that had appeared elsewhere on the web.

Despite the heavy-handed approach to international jihadism, tech giants have applied a notably lighter touch to the same sort of xenophobic, racist, conspiratorial ideologies that are homegrown in the U.S. and held largely by white Westerners. Instead, they've drawn drifting lines in the sand, banning explicit calls for violence, but often waiting to address the deranged beliefs underlying that violence until something has gone terribly wrong.

But the Capitol riot on Jan. 6 and the spiraling conspiracies that led to it have forced a reckoning many years in the making on how both Big Tech and the U.S. government approach domestic extremists and their growing power. In the weeks since the riot, the Department of Homeland Security has issued a terrorism advisory bulletin, warning of the increased threat of "domestic violent extremists" who "may be emboldened" by the riot. The head of the intelligence community has promised to track domestic extremist groups like QAnon. And attorney general nominee Merrick Garland, who prosecuted the 1995 Oklahoma City bombing case, said during his recent confirmation hearing that investigating domestic terrorism will be his "first priority."

Tech companies have followed suit, cracking down in ways they never have before on the people and organizations that worked to motivate and glorify that extremist behavior, including former President Donald Trump himself.

But the question now is the same as it was when that employee confronted Zuckerberg three years ago: What took so long?

Interviews with more than a dozen people who have worked on these issues at Facebook, Twitter and Google or inside the government shed light on how tech giants' defenses against violent extremism have evolved over the last decade and why their work on domestic threats lagged behind their work on foreign ones.

Some of it has to do with the sociopolitical dynamics of the War on Terror, which prioritized violent Islamism above all else.

Some of it has to do with the technical advancements that have been made in just the last four years.

And yes, some of it has to do with Trump.

The room full of lawyers

Nearly a decade before Q was a glimmer in some 4channer's eye, the tech industry was facing a different scourge: the proliferation of child sexual abuse material. In 2008, Microsoft called on a Dartmouth computer science professor named Hany Farid to help the company figure out a way to do something about it. Farid, now a professor at the University of California, Berkeley, traveled to Washington to meet with representatives from the tech industry to discuss a possible solution.

"I go down to D.C. to talk to them, and it's exactly what you think it is: a room full of lawyers, not engineers, from the tech industry, talking about how they can't solve the problem," Farid recalled.

To Farid, the fact that he was one of the only computer scientists at the meeting sent a message about how the industry thought about the problem, and still thinks about other content moderation problems — not as a technical challenge that rewarded speed and innovation, but as a legal liability that had to be handled cautiously.

At the time, tech companies were already attaching unique fingerprints to copyrighted material so they could remove anything that risked violating the Digital Millennium Copyright Act. Farid didn't see any reason why companies couldn't apply the same technology to automatically remove child abuse material that had been previously reported to the National Center for Missing & Exploited Children's tipline. That might not catch every piece of child abuse imagery on the internet, but it would make a dent. In partnership with Microsoft, he spent the next year developing a tool called PhotoDNA that Microsoft deployed across its products in 2009.
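
To make the mechanics concrete, here is a minimal sketch of that hash-and-match pipeline, written in Python. PhotoDNA itself is a proprietary perceptual hash designed to survive resizing and re-encoding; this sketch substitutes an exact SHA-256 digest, which only catches byte-identical copies, and every name in it (KNOWN_BAD_HASHES, scan_upload) is a hypothetical placeholder rather than anything Microsoft or NCMEC actually ships.

```python
import hashlib

# In practice this set would be populated from a vetted source such as the
# NCMEC tipline; the digest below is a placeholder.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(image_bytes: bytes) -> str:
    """Fingerprint an uploaded image (exact-hash stand-in for a perceptual hash)."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a previously reported image."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

The gap between the exact hash above and a robust perceptual hash is exactly why Farid's work mattered: the real tool has to recognize an image even after it has been cropped, resized or recompressed.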

But social networks dragged their feet. Facebook became the first company outside of Microsoft to announce it had adopted PhotoDNA in 2011. Twitter took it up in 2013. (Google had begun using its own hashing system in 2008.) "Everybody came to this reluctantly," Farid said. "They knew if you came for [child sexual abuse material], you're now going to come for the other stuff."

That turned out to be the right assumption. By 2014, that "other stuff" included a string of ghastly beheading videos, slickly filmed and distributed by ISIS's loud and proud propaganda arm. One of those videos in particular, which documented the beheading of journalist James Foley, horrified Americans as it filled their Twitter feeds and multiplied on YouTube. "It was the first instance of an execution video going really viral," said Nu Wexler, who was working on Twitter's policy communications team at the time. "It was one of the early turning points where the platforms realized they needed to work together."

As the Foley video made the rounds, Twitter and YouTube scrambled to form an informal alliance, where each platform swapped links on the videos it was finding and taking down. But at the time, that work was happening through user reports and manual searches. "A running theme for a number of services was that we had a very manual, very reactive response to the threat that ISIS posed to our services, coupled with the speed of ISIS' territorial organizational expansion and at the same time the response from industry and from government being relatively siloed," said Nick Pickles, Twitter's senior director of public policy strategy.

The Foley video and several other videos that appeared on the platform in quick succession only underscored how insufficient that approach was. "The way that content was distributed by ISIS online represented a manifestation of their online and physical threat in a way which led to a far more focused policy conversation and urgency to address their exploitation of digital services," Pickles said.

But there was also trepidation among some in the industry. "I heard a tech executive say to me, and I wanted to punch him in the face: 'One person's terrorist is another person's freedom fighter,'" Farid remembered. "I'm like, there's a fucking video of a guy getting his head cut off and then run over. Do you want to talk about that being a 'freedom fighter'?"

To Farid, the choice for tech companies was simple: automatically filter out the mass quantities of obviously abhorrent content using hashing technology like PhotoDNA and worry about the gray areas later.

There's a fucking video of a guy getting his head cut off and then run over. Do you want to talk about that being a 'freedom fighter'?

Inside YouTube, one former employee who has worked on policy issues for a number of tech giants said people were beginning to discuss doing just that. But questions about the slippery slope slowed them down. "You start doing it for this, then everybody's going to ask you to do it for everything else. Where do you draw the line there? What is OK and what's not?" the former employee said, recalling those discussions. "A lot of these conversations were happening very robustly internally."

Those conversations were also happening with the U.S. government at a time when tech giants were very much trying to distance themselves from the feds in the wake of the Edward Snowden disclosures. "At first when we were meeting with social media companies to address the ISIS threat, there was some reluctance to feel like tech companies were part of the solution," said Ryan Greer, who worked on violent extremism issues in both the State Department and the Department of Homeland Security under President Obama. "It had to be a little bit shamed out of them."

The Madison Valleywood Project

Then, in 2015, ISIS-inspired attackers shot up a theater in Paris and an office in San Bernardino. The next year, they carried out a series of bombings in Brussels and drove cargo trucks through crowds in Berlin and Nice. The rash of terrorist attacks in the U.S. and Europe changed the stakes for the tech industry. Suddenly, governments in those countries began forcefully pushing tech companies to prevent their platforms from becoming instruments of radicalization.

"You had multiple Western governments speaking with one voice and linking arms on this to put pressure to bear on these companies," said the former YouTube employee.

In the U.S., the Obama administration gave this pressure campaign a codename: The Madison Valleywood Project, which was designed to get Madison Avenue advertisers, Silicon Valley technologists and Hollywood filmmakers to work with the government in the fight against ISIS. In February 2016, Obama invited representatives from all of those industries – Google, Facebook and Twitter among them – to a day-long summit at the White House that was laser-focused on ISIS. The day's opening speaker, former Assistant Attorney General John Carlin, applauded Facebook, Twitter and YouTube's nascent counterterrorism efforts but urged them to do more. "We anticipate — and indeed hope — that after today you will continue to meet without the government, to continue to develop on your own efforts, building on the connections you make today," Carlin said, according to a copy of the speech obtained by the Electronic Privacy Information Center.

"The ISIS threat really captivated both U.S. and international media at the time," said Greer, who now works as national security director for the Anti-Defamation League. "There was a constant drumbeat of questions: What are you doing about ISIS? What are you doing about ISIS?"

The mounting pressure seemed to have an impact. Just weeks before the White House summit, in early 2016, Twitter became the first tech company to publicly enumerate the terrorist accounts it had removed. The communications team opted to publish the blog post laying out the stats without attaching the author's name, Wexler said, because they were fearful of directing more death threats to executives who were already being bombarded with them.

Over the course of the next year and a half, tech executives continued to hold meetings with the U.K.'s home secretary, the United Nations Counter-Terrorism Executive Directorate and the EU Internet Forum.


Twitter's Nick Pickles (second from left) and Facebook's Brian Fishman (third from left) attend the G7 Interior Ministerial Meeting in Ischia, Italy in October of 2017. Photo: Vincenzo Rando/Getty Images


By June 2017, Microsoft, Facebook, Google and Twitter emerged with a plan to share hashed terrorist images and videos through a new group called the Global Internet Forum to Counter Terrorism, or GIFCT. That group has since grown to include smaller companies, including Snap, Pinterest, Mailchimp and Discord, and is led by Obama's former director of the National Counterterrorism Center, Nick Rasmussen.

Meanwhile, Google's internal idea lab, Jigsaw, which had been studying radicalization online for years, began running a novel pilot, known as the Redirect Method, designed to stop people from getting pulled in by ISIS through search. Working with outside groups, Jigsaw began sponsoring Google search ads in 2016 that would run whenever users searched for terms that risked sending them down an ISIS rabbit hole. Those search ads, inspired by Jigsaw's interviews with actual ISIS defectors, would link to Arabic- and English-language YouTube videos that aimed to counter ISIS propaganda. In 2017, even as Google and YouTube worked on ways to remove ISIS content algorithmically, YouTube deployed the Redirect Method to searches inside its own platform to help counter propaganda its automated filters had not yet found.
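
As a rough illustration of the idea, the sketch below maps risky queries to a curated counter-narrative playlist. The terms, URLs and function name are invented placeholders under stated assumptions, not Jigsaw's actual lists or code.

```python
# Hypothetical keyword list and playlist; the real Redirect Method curates
# both with researchers, outside groups and interviews with defectors.
RISKY_TERMS = {"example recruitment slogan", "example propaganda phrase"}

COUNTER_NARRATIVE_PLAYLIST = [
    "https://video.example/defector-interview",
    "https://video.example/cleric-rebuttal",
]

def redirect_ads_for(query: str) -> list[str]:
    """Return counter-narrative links when a search matches a risky term."""
    q = query.lower()
    if any(term in q for term in RISKY_TERMS):
        return COUNTER_NARRATIVE_PLAYLIST
    return []
```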

Facebook, meanwhile, hired an expert on jihadi terrorism, Brian Fishman, to head up its work on counterterrorism and dangerous organizations in April 2016. At the time, the list of dangerous organizations consisted mainly of foreign terrorist organizations, as well as well-known hate groups like the Ku Klux Klan and the neo-Nazi group Blood & Honour. These organizations were banned from the platform, as was any praise of them. But Fishman's hiring was a clear signal that cracking down on ISIS and al-Qaeda had become a priority for Facebook.

There was a constant drumbeat of questions: What are you doing about ISIS? What are you doing about ISIS?

After Fishman came on board, Facebook started using an approach similar to the intelligence community's strategy for going after ISIS, relying not just on user reports and automated takedowns of known terrorist content, but also on artificial intelligence and off-platform information to chase down whole networks of accounts.

ISIS's overt and corporate branding gave tech platforms a clear focal point to start with. "Some groups like the ISISes of the world and the al-Qaedas of the world are very focused on protecting their brand," Fishman said. "They retain tight control over the release of information." ISIS had media outlets, television stations, slogans and soundtracks. That meant platforms could begin sniffing out accounts that used that common branding without having to look exclusively at the content itself.
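
A toy example of what that kind of branding-based detection might look like follows; the markers and threshold are entirely hypothetical placeholders, not any platform's real rules.

```python
# Hypothetical brand markers (outlet names, slogans, recurring watermark text)
# and an arbitrary review threshold.
BRAND_MARKERS = {"example media outlet", "example slogan", "example watermark"}

def branding_score(profile_text: str, recent_posts: list[str]) -> int:
    """Count distinct brand markers across an account's profile and posts."""
    corpus = " ".join([profile_text, *recent_posts]).lower()
    return sum(marker in corpus for marker in BRAND_MARKERS)

def flag_for_review(profile_text: str, recent_posts: list[str], threshold: int = 2) -> bool:
    """Queue the account for human review once enough markers appear."""
    return branding_score(profile_text, recent_posts) >= threshold
```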

"I look back on that threat, and I recognize now in hindsight there were attributes of it that made it easier to go after than the types of domestic terrorism and extremism we're grappling with today," said Yasmin Green, director of research and development at Google's Jigsaw. "There was one organization, basically, and you had to make public your allegiance to it. Obviously, all of those things made it possible for law enforcement and the platforms to model and pursue it."


Jigsaw's Yasmin Green has recently focused her violent extremism research on white supremacists and conspiracy theorists. Photo: Craig Barritt/Getty Images


Around that time, Twitter also began investing in monitoring what Pickles calls "behavioral signals," not just tweets. "If you focus on behavioral signals, you can take action before they distribute content," Pickles said. "The switch to behavior meant that we could take action much faster at a much greater scale, rather than waiting."
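
Twitter has not published its models, but a deliberately simplistic sketch of the behavioral-signal idea might look like the following; every feature, weight and threshold here is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical behavioral features; real systems draw on far richer signals.
    account_age_days: int
    follows_in_first_hour: int
    identical_posts_last_day: int
    interactions_with_suspended_accounts: int

def risk_score(s: AccountSignals) -> float:
    """Combine behavioral signals into a single review score (toy weights)."""
    score = 0.0
    if s.account_age_days < 2:
        score += 1.0
    if s.follows_in_first_hour > 50:
        score += 1.0
    score += 0.5 * min(s.identical_posts_last_day, 10)
    score += 0.75 * s.interactions_with_suspended_accounts
    return score

def needs_review(s: AccountSignals, threshold: float = 2.0) -> bool:
    """Flag the account based on behavior alone, before its content spreads."""
    return risk_score(s) >= threshold
```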

The development of automated filters across the industry was almost stunningly successful. Within a year, Facebook, Twitter and YouTube went from manually removing foreign terrorist content that had been reported by users to automatically taking down the vast majority of foreign terrorists' posts before anyone flagged them. Today, according to Facebook's transparency reports, 99.8% of terrorist content is removed before a single person has even reported it.

"If you actually look a bit farther back, you understand just how much has moved in this arena," Green said. "That always makes me feel a little bit optimistic."

The domestic dilemma

It didn't hurt that both the United States and the United Nations keep lists of designated international terrorist organizations. To the "rooms full of lawyers" that help make these decisions, those lists kept things clean: Use them as a guide and bring the hammer down on any organization that appears on them, and tech executives could be fairly confident they wouldn't face much second-guessing from the powers that be. "If a terrorist group is put on a watch list or terrorist list or viewed by the international community, by the UN, as a terrorist group, then that gives Facebook everything they need to have a very strong policy," said Yael Eisenstat, a former CIA officer who led election integrity efforts for Facebook's political ads in 2018.

The same can't be said for domestic extremists. In the United States, there's not an analogous list of domestic terrorist organizations for companies to work from. That doesn't mean acts of domestic terrorism go unpunished. It just means that people are prosecuted for the underlying crimes they commit, not for being part of a domestic terrorist organization. That also means that individual extremists who commit the crimes are the ones who face the punishment, not the groups they represent. "You have to have a violent crime committed in pursuit of an ideology," former FBI Acting Director Andrew McCabe said in a recent podcast. "We hesitate to call domestic terrorists 'terrorists' until after something has happened."

Nobody's going to have a hearing if a platform takes down 1,000 ISIS accounts. But they might have a hearing if you take down 1,000 QAnon accounts.

This gap in the legal system means tech companies write their own rules around what sorts of objectionable ideologies and groups ought to be forbidden on their platforms and often only take action once the risk of violence is imminent. "If something was illegal it was going to be handled. If something was not, then it became a political conversation," Eisenstat said of her time at Facebook.

Even in the best of times, it's an uncomfortable balancing act for companies that purport to prioritize free speech above all else. But it's particularly fraught when the person condoning or even espousing extremist views is the president of the United States. "Nobody's going to have a hearing if a platform takes down 1,000 ISIS accounts. But they might have a hearing if you take down 1,000 QAnon accounts," said Wexler, who worked in policy communications for Facebook, Google and Twitter during the Trump administration.

There never was a Madison Valleywood moment in the U.S. related to the rising hate crimes and domestic extremist events that marked the Trump presidency. Not after Charlottesville. Not after Pittsburgh. Not after El Paso. The former president did have what might be construed as the opposite of a Madison Valleywood moment when he held an event at the White House in 2019 where far-right conspiracy theorists and provocateurs discussed social media censorship. But this time, Facebook, Google and Twitter weren't invited.


President Trump's Social Media Summit in 2019 focused on alleged social media censorship of conservatives. Photo: Jabin Botsford/Getty Images


"Platforms write their own rules, but governments signal which types of content they find objectionable, creating a permission structure for the companies to step up enforcement," Wexler said. "President Trump's comments after Charlottesville and his tacit support of the Proud Boys sent a deliberate message to tech companies: If you crack down on white nationalists' accounts, we'll accuse you of political bias and make your CEOs testify before Congress."

During the Trump years, tech companies repeatedly courted favor with the president and his party. In 2019, Google CEO Sundar Pichai met directly with Trump to discuss what Trump later characterized as "political fairness." Internally, Google employees told NBC News that the company had rolled back diversity training programs because the company "doesn't want to be seen as anti-conservative." (Google denied the accusation to NBC.) On Election Day in 2020, YouTube allowed the Trump campaign to book an ad on its homepage for the entire day, and it was the only one of the top three social platforms with no explicit policy against attempts to delegitimize the election results. After the election, videos claiming Trump won went viral, but YouTube did not begin removing videos alleging widespread fraud or errors until Dec. 9.

Google and YouTube declined to make any executives available for comment. In a statement, YouTube spokesperson Farshad Shadloo said: "Our Community Guidelines prohibit hate speech, gratuitous violence, incitement to violence, and other forms of intimidation. Content that promotes terrorism or violent extremism does not have a home on YouTube." Shadloo said YouTube's policies focus on content violations and not speakers or groups, unless those speakers or groups are included on a government foreign terrorist organization list.

At times, tech giants bent their own policies or adopted entirely new ones to accommodate President Trump and his most conspiratorial supporters on the far right. In January 2018, Twitter, for one, published its "world leaders policy" for the first time, seemingly seeking to explain why President Trump wasn't punished for threatening violence when he tweeted that his nuclear button was "much bigger & more powerful" than Kim Jong Un's. Later that year, after Facebook, Apple and YouTube all shut down accounts and pages linked to Infowars' Alex Jones, Twitter CEO Jack Dorsey booked an interview with conservative kingmaker Sean Hannity, where he defended Twitter's decision not to do the same. Just a few weeks later, the company would reverse course after Jones livestreamed a tirade against a CNN reporter on Twitter — while standing outside of Dorsey's own congressional hearing.


Twitter defended its decision not to remove accounts tied to Infowars' Alex Jones. Weeks later, Twitter reversed that decision, following CEO Jack Dorsey's testimony before Congress in September of 2018. Photo: Tom Williams/CQ Roll Call


Jones was a lightning rod for Facebook, too. As BuzzFeed recently reported, Facebook decided in 2019 to do more than just ban Jones' pages. The company wanted to designate him as a dangerous individual, a label that also ordinarily forbids other Facebook users from praising or expressing support for those individuals. But according to BuzzFeed, Facebook altered its own rules at Zuckerberg's behest, creating a third lane for Jones that would allow his supporters' accounts to go untouched. And when President Trump threatened to shoot looters in the aftermath of George Floyd's killing, Facebook staffers reportedly called the White House themselves, urging the president to delete or tweak his post. When he didn't, Zuckerberg told his staff the post didn't violate Facebook's policies against incitement to violence anyway.

"I would argue the looter-shooter post was more violating than [the post] on Jan. 6," Eisenstat said, referring to the Facebook video that ended up getting the former president kicked off of Facebook indefinitely in his last weeks in office. In the video, Trump told the Capitol rioters he loved them and that they were very special, while repeating baseless claims of election fraud.

For Facebook at least, this instinct to accommodate the party in power wasn't unique to the U.S., said tech entrepreneur Shahed Amanullah, who worked with Facebook on a series of global hackathons through his company, Affinis Labs. The goal of the hackathons, Amanullah said, was to fight all forms of hate and extremism online, and the events had been successful in countries like Indonesia and the Philippines. But when he brought the program to India, Amanullah said he received pressure from Facebook India's policy team to focus the event specifically on terrorism coming out of the majority-Muslim region of Kashmir.

The woman leading Facebook India's policy team at the time, Ankhi Das, was a vocal supporter of Indian Prime Minister Narendra Modi, and, according to The Wall Street Journal, had a pattern of allowing anti-Muslim hate speech to go unchecked on the platform. "I said there's no way I'm ever going to accept a directive like that," Amanullah recalled.

Though he was supposed to run seven more hackathons in the country, Amanullah cut ties. "That was the last time we ever worked with Facebook," he said.

A Facebook spokesperson told Protocol, "We've found nothing to suggest this is true. We've looked into it on our end, spoken to people who were present at the hackathon and have no reason to believe that anyone was pressured to shift the focus of the hack." Das did not respond to Protocol's request for comment.

To Amanullah, the experience working with Facebook in India signaled that the company was giving in to the Indian government and giving Islamist extremism an inordinate amount of attention compared to other threats. "If you want to talk about hate," he said, "you have to talk about all kinds of hate."

The reckoning

Looking back in the wake of the Capitol riot, it's easy to view Charlottesville as a warning shot that went unheard, or at least insufficiently answered, by tech giants. And in many ways it was. But inside the companies, things were also changing, albeit far more slowly than almost anyone believes they should have.

For Fishman, who had focused almost entirely on jihadism on Facebook until that point, the Unite the Right rally was a turning point. "Charlottesville was a moment when extremist groups on the American far right clearly were trying to overcome the historical fractioning of that movement and express themselves in more powerful ways," he said. "It absolutely was something we tracked and realized we needed to invest more resources into."


The Unite the Right rally in Charlottesville, which had been planned on Facebook, left three dead and dozens injured. Photo: Zach D Roberts/Getty Images


Immediately after the rally, Facebook banned eight far-right and white nationalist pages associated with it. That's in addition to hate groups like the National Socialist Movement, the KKK, The Daily Stormer and Identity Evropa, which were already designated as dangerous organizations and forbidden from the platform long before the rally. A few months later, in 2018, Fishman's team changed its name from the counterterrorism team to the dangerous organizations team, a signal that the company would double down on enforcement against hate groups, too. The team working primarily on violent extremism of all stripes at Facebook eventually grew to 350 people.

In late 2017, Twitter broadened its policy on violent extremism to prohibit all violent extremist groups, not just designated foreign terrorist organizations. And in 2019, YouTube promised it would limit the spread of misinformation by keeping conspiracy theory videos out of recommendations, a promise it's had some success in keeping.

Even so, these companies repeatedly bungled the definition of what constitutes hate and violent extremism. It was, after all, a year after Charlottesville when Zuckerberg boldly defended Facebook's policy of allowing Holocaust denial in an interview with Recode. It wasn't until 2020 that Zuckerberg changed his mind about that, citing "data showing an increase in anti-Semitic violence." His post never fully acknowledged the role Facebook's earlier policies might have played in stoking that violence.

There were also legitimate reasons why identifying and removing domestic hate groups in the U.S. was harder than taking down networks of accounts tied to ISIS and al-Qaeda. For one thing, domestic groups and the people associated with them have traditionally been far more diffuse, making it tougher to draw neat lines around them or use clues in their branding to find and remove whole networks. "Groups that are less organized and less structured and don't put out official propaganda in the same sort of way, you have to use a different tool kit in order to get at those kinds of entities," Fishman said. "Chasing the networks becomes even more important than chasing known pieces of content that you can gather using vendors."
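
One way to picture "chasing the networks" is a simple graph expansion from known accounts outward through interaction edges. The sketch below is a generic breadth-first traversal, not Facebook's actual method; the edge types, hop limit and data shapes are assumptions for illustration.

```python
from collections import deque

def expand_network(seeds: set[str], edges: dict[str, set[str]], max_hops: int = 2) -> set[str]:
    """Return accounts within max_hops of any seed account, as candidates for review."""
    seen = set(seeds)
    frontier = deque((account, 0) for account in seeds)
    while frontier:
        account, hops = frontier.popleft()
        if hops == max_hops:
            continue
        # Edges here might represent co-admin relationships, reshares or group
        # invites; a real system would weigh these signals very differently.
        for neighbor in edges.get(account, set()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return seen - seeds  # exclude the known accounts themselves
```

The point of a sketch like this is the shift in unit of analysis: instead of asking whether a single post matches known propaganda, the system asks which accounts sit close to a known cluster and deserve a closer look.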

Then, there's the fact that tech companies defined hate in limited ways. Facebook, Twitter and YouTube have all introduced hate speech policies that generally prohibit direct attacks on the basis of specific categories like race or sexual orientation. But what to do with a new conspiracy theory like QAnon that hinges on some imagined belief in a cabal of Satan-worshipping Democratic pedophiles? Or a group of self-proclaimed "Western chauvinists" like the Proud Boys cloaking themselves in the illusion that white pride doesn't necessarily require racial animus? Or the #StoptheSteal groups, which were based on a lie, propagated by the former president of the United States, that the election had been stolen? These movements were shot through with hate and violence, but initially, they didn't fit neatly into any of the companies' definitions. And those companies, operating in a fraught political environment, were in turn slow to admit, at least publicly, that their definitions needed to change.

It's also true that it's philosophically trickier for American companies to ban people who are not faraway terrorists, but other Americans, whom the U.S. government couldn't punish for their beliefs even if it wanted to. For all of the bad-faith concerns about censorship, it's also possible to argue with a straight face that platforms shouldn't have even more power to police speech than Congress has. "I find that to be kind of troubling at a time when so many people are talking about tech platforms having too much power, we look to them to do what governments would be unable to do," said Matt Perault, director of Duke University's Center on Science & Technology Policy and Facebook's former director of public policy.

All of those considerations complicated tech companies' efforts to respond to domestic threats that could be harmful but haven't yet caused harm. "One of the biggest challenges around designing a policy and enforcement framework in these areas is when you have domestic [actors] who are part of this conversation, who are not promoting violence," Pickles said, after recently admitting that Twitter was too slow to act on QAnon. It wasn't until after the Capitol attack that Twitter began permanently banning accounts for primarily sharing QAnon content.

It's not always easy to predict which extreme beliefs will turn violent either, said Green, who has spent the last few years focusing on both violent white supremacists and conspiracy theorists at Jigsaw. In late 2019, Green traveled to Tennessee and Alabama to meet with people who believe in a range of conspiracy theories, from flat earthers to Sandy Hook deniers. "I went into the field with a strong hypothesis that I know which conspiracy theories are violent and which aren't," Green said. But the research surprised her: Some conspiracy theories she had assumed were innocuous, like flat earth, turned out to have far more militant followers than ones she had pegged as violent, like the belief in white genocide.

"We spoke to flat earthers who could tell you which NASA scientists are propagating a world view and what they would do to them if they could," Green said.

Even more challenging: Of the 77 conspiracy theorists Green's team interviewed, there wasn't a single person who believed in only one conspiracy. That makes mapping out the scope of the threat much more complex than fixating on a single group.

The runup to the 2020 election did accelerate some of this work, in part because the real-world threat was accelerating, too. After extremists associated with the far-right, anti-government boogaloo movement carried out a series of killings in 2020, Facebook designated the boogaloos as a dangerous organization in June. "We saw something that we thought looked like a real network, a real entity, not just a bunch of guys using shared imagery in order to project frustration at the government," Fishman said.

The reason we put policies [in place] like those we built over 2020 was because of concern about something like Jan. 6.

As the blurred lines between militia groups and conspiracy theorists became obvious throughout summer 2020, Facebook also barred hundreds of militarized groups like the Oath Keepers. And, after initially attempting to merely limit the spread of QAnon, it wound up banning all accounts, pages and groups "representing QAnon" in October and launching its own version of the Redirect Method to point people searching QAnon-related terms toward more credible sources. Meanwhile, Facebook's automated filters against hate speech also made significant strides, with the company removing more than double the amount of hate speech in the second quarter of 2020 as it did in the first.

YouTube cracked down on QAnon too, prohibiting "content that targets an individual or group with conspiracy theories that have been used to justify real-world violence." And it kicked out white supremacists like David Duke and Richard Spencer, who had managed to evade the company's hate speech policies for years. Twitter, in addition to banning the Oath Keepers, began aggressively labeling President Trump's tweets over the summer for glorifying violence and, after the election, for spreading misinformation about voter fraud. Twitter also launched a new policy prohibiting "coordinated harmful activity" in June 2020, a policy meant to rope in dangerous mass movements and conspiracy theorists who weren't always explicitly violent.

Dorsey and Zuckerberg, at least, paid for these decisions in hours spent flagellating themselves before incensed Republicans in Congress. (YouTube CEO Susan Wojcicki has, for the most part, avoided their ire.) But to anyone who was watching closely, it was clear from the frequent announcements and policy changes coming from tech companies that these global behemoths were preparing for an election like the ones they'd seen in other countries — countries where there is no such thing as a peaceful transfer of power. "The reason we put policies [in place] like those we built over 2020 was because of concern about something like Jan. 6," Fishman said.

What comes next?

Then came Jan. 6, and none of those policies were enough. Like the string of ISIS attacks before it, the raid inside the U.S. Capitol showed just how permeable both the country's digital and physical defenses still were against the evolving threat of domestic extremists. "It's not just organizations," Fishman said. "It's not just structured ideologies, even. It includes folks that are haphazard adherents to various conspiracy theories. It extends from people engaged primarily in the political process to folks that are explicitly rejecting political resolution of disputes, and all of that was represented on the mall on Jan. 6."

Indeed, a recent George Washington University study found that the majority of rioters who have been charged so far for their involvement had no known ties to domestic violent extremist organizations. They were, instead, what the researchers call "inspired believers."

"The challenge is now to figure out what you do with all of those different pieces," Fishman said.



The Jan. 6 riot inside the U.S. Capitol forced a reckoning on Big Tech's approach to domestic extremists. Photo: Roberto Schmidt/Getty Images


It's no coincidence that tech companies' increased willingness to crack down on far-right fringe movements comes as Democrats have taken control of both the White House and Congress. The same political pragmatism that stalled efforts to address domestic extremism during the Trump years will undoubtedly animate efforts to fight domestic extremism in the Biden years. But having government officials in place who actually recognize domestic extremism as a threat and prioritize it could help Big Tech's efforts, too. "Insight from government about what's happening off of Twitter is a really important part of making sure our policies keep pace with what's happening," Pickles said.

That doesn't mean tech companies will march in lockstep with the Biden administration when it comes to identifying domestic extremists — or that they should. After all, the United States has a long and checkered history of casting civil rights leaders as domestic extremists and using that to justify all manner of surveillance and violence. Just last year, President Trump traveled to Kenosha, Wisconsin, and accused Black Lives Matter protesters of being domestic terrorists, too.

"I think the government has a really important role in helping educate civil society, including industry. And there are things the government can do and validate that we don't know," Fishman said. "But there are reasons Facebook designates organizations under our dangerous organizations policy ourselves."

As the Department of Homeland Security and other agencies issue warnings about domestic violent extremism, Fishman said it will be critical for those agencies to share more context about those threats so platforms can make informed decisions. "Show us the footnotes," he said. "This is going to be one of the things that I think the government needs to do more of."

Of course, there's a lot the industry could do more of too. For one thing, critics like Farid say it needs to acknowledge and take responsibility for the way its algorithms often amplify violent extremism through recommendations. "Why are they so good at targeting you with content that's consistent with your prior engagement, but somehow when it comes to harm, they become bumbling idiots?" asked Farid, who remains dubious of Big Tech's efforts to control violent extremists. "You can't have it both ways."

Facebook, for one, recently said it would stop recommending political and civic groups to its users, after reportedly finding that the vast majority of them included hate, misinformation or calls to violence leading up to the 2020 election. Facebook is also now experimenting with limiting the amount of political content in some users' news feeds.

"I do think things are trending up," said Greer, whose work at the Anti-Defamation League involves pushing companies on issues of violent extremism. "There are a lot of people working to solve these challenges. We are making some headway."

But more cross-industry collaboration on this front is crucial, particularly as violent extremists move from the big three platforms to smaller sites with less formidable defenses. Today, the GIFCT database includes roughly 300,000 unique hashed images and videos that all of its members can tap into. But the majority of those images and videos are tied to organizations on the United Nations' designated entities list, which consists almost entirely of Islamist extremist groups.

Last month, the organization's executive director, Nick Rasmussen, acknowledged this shortcoming in a letter to the GIFCT community, asking for concrete proposals for expanding the hash database. "While GIFCT member companies are able to tag or 'hash' content associated with U.N.-designated terrorist groups like ISIS and Al-Qaeda, GIFCT does not currently have a framework in place for responding to terrorist and violent extremist content from all parts of the ideological spectrum, representing a critical gap in our collective efforts to prevent terrorists and violent extremists from exploiting digital platforms," Rasmussen wrote. The organization recently brought on Erin Saltman, a leading expert in far-right extremism and Facebook's former head of the dangerous organizations team in Europe, Africa and the Middle East, to be its director of programming.

Tackling this problem will require tech companies to take a broader view of what constitutes harm, looking not just at violent threats, but other common language that links violent movements. In her research on conspiracy theorists, Green said one commonality she's found that separates passive adherents from violent ones is their sense of urgency. She cited the Pittsburgh synagogue shooter's final post, which read, "Screw your optics, I'm going in," as evidence. "He felt that it was so urgent," Green said. "Same with Pizzagate. Same with QAnon. Same with Stop the Steal. That's why that conspiracy theory turned violent. It wasn't just the very large scale threat of an election being stolen, but also that you have to go now and take back your rights."

On March 25, Zuckerberg, Dorsey and Pichai will testify before Congress for the first time since the Capitol attack. The three men will undoubtedly come armed with talking points about how their policies have evolved and how they're committed to doing better. But that conversation will likely only be the first of many as the threat from violent extremists here in the U.S. continues to grow.

"This isn't over," Fishman said of the aftermath of Jan. 6. "This is going to be with us for a long time."
