The shooting in Buffalo, New York, that killed 10 people over the weekend has put the spotlight back on social media companies. Part of the attack was livestreamed, beginning on Amazon-owned Twitch, and the alleged shooter appears to have written that his racist motivations grew out of misinformation on smaller or fringe sites, including 4chan.
In response, policymakers are directing their anger at tech platforms, with New York Governor Kathy Hochul calling for the companies to be “more vigilant in monitoring” and for “a legal responsibility to ensure that such hate cannot populate these sites.”
It’s certainly not the first time tech has taken the blame for tragedies and vile attacks, and the focus on platforms is politically convenient: Democrats know they have no hope of passing gun reform despite controlling Washington, and Republicans are trying to dodge the fact that some conservatives have pushed talking points the shooter embraced into the mainstream.
The response also draws attention to the steps big social media sites have already taken after similar killings, and to the ongoing reluctance of smaller companies like Discord to take the same ones.
Is Congress going to do anything?
No. It’s hopelessly gridlocked and ignorant on tech issues. The work lawmakers have done on tech has focused more on antitrust and privacy, and the problem of online hate is particularly complicated because lawmakers have to avoid trampling free speech rights. In addition, the apparent consensus on content policies is an illusion: Democrats are focused on what posts get left up, while Republicans are focused on what content the platforms take down. Finally, there is, by the standards of Congress, scant time to do anything, and the list of top-tier issues keeps getting longer even as the days dwindle.
What about individual states?
Hochul seemed to hint that Albany could take the reins if Washington won’t. “We will continue to work at the federal, state and local level with our community partners to help identify these messages as soon as they arise on social media,” she said. At least one state Senate bill headed to the floor would force social media companies to maintain reporting mechanisms for hate speech, suggesting some appetite for action at the state level, even though the big players already provide such mechanisms. Federal law, chiefly Section 230, appears to keep states out of bigger questions about content moderation, though, and rules that actually imposed content requirements on companies would likely raise major First Amendment issues.
Will the big social media sites do anything?
Facebook and Twitter could try to move more quickly to respond to these incidents, but they may have already taken most of the major steps they’re willing to take.
The presence of the stream on Twitch — and links to the video that have popped up repeatedly on Facebook and Twitter — showed how easily bad actors can go live for a wide audience. It also reinforced how readily supporters can keep graphic video circulating on huge international platforms, even though Twitch said it shut down the stream within two minutes. According to a New York Times report, Facebook was slow to flag some versions of the video as violating its terms, and Twitter at first suggested it might limit the videos’ reach through warnings rather than takedowns.
Powerful social media sites with extensive reach may try to apply their policies faster once violence erupts, but there are technical limits to what they can do, as the Times reported. The platforms have already invested in monitoring for extremist streams and in taking down those that gain traction, particularly after a similar attack on two mosques in Christchurch, New Zealand, in 2019. Twitch, Facebook and Twitter all already participate in an anti-terrorism industry group that, among other activities, shares digital signatures of violent content to help identify it faster. All three also joined a post-Christchurch commitment to do more to stop terrorist content from being uploaded and to take it down when it appears, including by examining their algorithms.
But the platforms also risk political backlash if they come down too hard on views expressed in writings that appear linked to the suspected shooter. Rising stars in the Republican Party and conservative media personalities have flirted with the so-called “great replacement theory,” the racist notion that people of color, Jews and immigrants pose a demographic and political threat to white people. The theory has gained traction as the right also echoes misinformation about illegal votes and President Biden’s victory in the 2020 election. Facebook in particular has at times shied away from pushing back on misinformation that’s popular on the right.
And with Elon Musk still saying he plans to buy Twitter and loosen its speech rules, the trend is toward fewer restrictions, not more.
What about smaller platforms?
They’ve done much less to tackle these issues, even though they seem to have played a big role in shaping the shooter’s views. A 180-page document apparently written by the suspect arrested for the killings describes its author learning about the great replacement theory on 4chan, the lightly moderated meme site that has become a haven for hate speech.
The author may have maintained a presence on subreddits devoted to guns and tactical gear and pursued similar interests on Discord, according to NBC News. He also allegedly laid out his plans for the attack on a private Discord server months in advance, Bloomberg reported.
Those services have in many cases thrived by positioning themselves as freewheeling alternatives to larger social media sites, and they have shown little interest in moderating to the extent that more mainstream sites do. Still, the companies could in theory adopt some of the same measures bigger sites have: sharing information about extremist content and investing in monitoring.
Reddit, which mostly relies on community moderation of individual channels, said in a statement that it would continue enforcing sitewide policies banning “content that encourages, glorifies, incites or calls for violence or physical harm against an individual or group of people.” It does not formally belong to either of the anti-terrorism groups that bigger players have joined, but it does take part in the hash-sharing program run by one of them, the Global Internet Forum to Counter Terrorism, which circulates unique digital “hashes” that identify extremist content. A spokesperson said Reddit is also “coordinating with GIFCT in the Buffalo response.”
Discord also said it is part of the GIFCT and has worked with other anti-terrorism groups, including Tech Against Terrorism. 4chan did not respond to Protocol’s request for comment.
This story was updated May 19, 2022, to clarify Reddit’s and Discord’s participation in anti-terrorism groups and to add a Reddit spokesperson’s comment.