Nobody is coming to help Big Tech prevent online radicalization but itself

There’s no will in Congress or the C-suites of social media giants for a new approach, but smaller platforms would have room to step up — if they decided to.

A memorial for victims of the Buffalo, New York shooting.


Photo: Kent Nishimura / Los Angeles Times via Getty Images

The shooting in Buffalo, New York, that killed 10 people over the weekend has put the spotlight back on social media companies. Some of the attack was livestreamed, beginning on Amazon-owned Twitch, and the alleged shooter appears to have written about how his racist motivations arose from misinformation on smaller or fringe sites including 4chan.

In response, policymakers are directing their anger at tech platforms, with New York Governor Kathy Hochul calling for the companies to be “more vigilant in monitoring” and for “a legal responsibility to ensure that such hate cannot populate these sites.”

It’s certainly not the first time tech has been the focus of blame for tragedies and vile attacks. Democrats know they have no hope of passing gun reform despite being in charge in Washington, and Republicans are trying to dodge the fact that some conservatives have brought talking points the shooter embraced into the mainstream.

The response seems to highlight the prior actions by big social media sites after similar killings — and the ongoing reluctance of smaller companies like Discord to take the same steps.

Is Congress going to do anything?

No. It’s hopelessly gridlocked and ignorant on tech issues. The work lawmakers have done on tech is more focused on antitrust and privacy, and the problem of online hate is particularly complicated because lawmakers have to make sure not to trample on free speech rights. In addition, the apparent consensus on content policies is an illusion — with Democrats focused on what posts get left up and Republicans focused on what content the platforms take down. Finally, there is, by the standards of Congress, scant time to do anything and a list of top-tier issues that’s getting longer despite the dwindling days.

What about individual states?

Hochul seemed to hint that Albany could take the reins if Washington won’t. “We will continue to work at the federal, state and local level with our community partners to help identify these messages as soon as they arise on social media,” she said. At least one state senate bill that’s headed to the floor would force social media companies to maintain reporting mechanisms for hate speech, suggesting at least limited appetite for action on the state level even if the big players already provide such mechanisms. Federal law appears to keep states out of bigger questions about content moderation, though, and actual rules that would impose content requirements on companies would likely raise major constitutional issues.

Will the big social media sites do anything?

Facebook and Twitter could try to move more quickly to respond to these incidents, but they may have already taken most of the major steps they’re willing to take.

The presence of the stream on Twitch — and links to the video that have popped up repeatedly on Facebook and Twitter — showed how easily bad actors can go live for a wide audience. It also reinforced how easily supporters can keep graphic video circulating on huge international platforms, even though Twitch said it was able to shut down the stream within two minutes. According to a New York Times report, Facebook was slow to flag some versions of the videos as violating its terms, and Twitter at first suggested it might limit the reach of the videos through warnings rather than takedowns.

Powerful social media sites with extensive reach may try to apply policies faster once violence erupts, yet there are technical limitations to what they can do, as the Times reported. The platforms have already invested in monitoring for extremist streams and in taking down copies that gain traction, particularly after a similar attack on two mosques in Christchurch, New Zealand, in 2019. Twitch, Facebook and Twitter already participate in an anti-terrorism industry group that, among other activities, shares digital signatures of violent content to help identify it faster. All three also joined a post-Christchurch commitment to do more to stop terrorist content from being uploaded and to take it down quickly, including by examining their algorithms.

But the platforms also risk political backlash if they come down too hard on views expressed in writings that appear linked to the suspected shooter. Rising stars in the Republican Party and conservative media personalities have also flirted with the so-called “great replacement theory,” the racist notion that says people of color, Jews and immigrants present a demographic and political threat to white people. It’s gained traction as the right also echoes misinformation about illegal votes and President Biden’s victory in the 2020 election. Facebook in particular has at times shied away from pushing back on misinformation that’s popular on the right.

And with Elon Musk still saying he’s planning to buy Twitter, the trend toward fewer speech restrictions continues.

What about smaller platforms?

They’ve done much less to tackle these issues, even as they seem to have played a big role in the shooter’s views. A 180-page document that appears to have been written by the person who has been arrested for the killings describes its author learning about the great replacement theory on 4chan, the meme site with few rules that’s become a haven for hate speech.

The author may also have maintained a presence on subreddits devoted to guns and tactical gear and pursued similar interests on Discord, according to NBC News. He also allegedly laid out his plans for the attack on a private Discord server months in advance, Bloomberg reported.

Those services have in many cases thrived by positioning themselves as freewheeling alternatives to larger social media sites, and have shown little interest in moderating to the extent that more mainstream sites do. Still, the companies could theoretically decide to take some of the same steps on information-sharing about extremist content and investments in monitoring that bigger sites have.

Reddit, which mostly relies on community moderation of individual channels, said in a statement it would continue enforcing sitewide policies banning “content that encourages, glorifies, incites or calls for violence or physical harm against an individual or group of people.” It does not formally belong to either of the anti-terrorism groups that bigger players have joined, but it does participate in the sharing of unique digital “hashes” identifying extremist content by one of the groups, the Global Internet Forum to Counter Terrorism. A spokesperson said Reddit is also “coordinating with GIFCT in the Buffalo response.”
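At its core, the hash-sharing arrangement described above works by matching fingerprints of already-flagged media against new uploads. Here is a minimal illustrative sketch of that idea; the function names and hash values are hypothetical, and real systems such as GIFCT's use perceptual hashes (which survive re-encoding and cropping) rather than the exact SHA-256 matching shown here:

```python
import hashlib

# A shared "hash list" of fingerprints for known violating media,
# as it might be distributed among member platforms (hypothetical value:
# this is simply the SHA-256 digest of the bytes b"test").
SHARED_HASH_LIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute an exact-match fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Return True if the upload's fingerprint appears on the shared list."""
    return fingerprint(upload) in SHARED_HASH_LIST
```

The practical appeal for smaller platforms is that matching against a shared list requires far less investment than building detection from scratch, which is why participation in hash-sharing is often the first moderation step such companies take.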

Discord also said that it is part of the GIFCT and has worked with other anti-terrorist groups, including Tech Against Terrorism. 4chan did not respond to Protocol.

This story was updated May 19, 2022, to clarify Reddit's and Discord's participation with anti-terrorism groups and to add a Reddit spokesperson's comment.

