Tech platforms' patchwork approach to content moderation has made them a hotbed for hate speech that can turn deadly, as it did this weekend in Buffalo. The alleged shooter who killed 10 in a historically Black neighborhood used Discord to plan his rampage for months and livestreamed it on Twitch.
The attack mirrors what happened in Christchurch, New Zealand, when a white supremacist murdered 51 people at two mosques in 2019. He viewed the killings as a meme. To disseminate that meme, he turned to the same place more than 1 billion other users do: Facebook. This pattern is destined to repeat itself as long as tech companies continue to play defense instead of offense against online hate and fail to work together.
“They’ll take the hits when someone uses their platform to post terrorism and they will take the bad op-eds about them, because they know that we've been here before and we'll be here again,” Jon Lewis, a research fellow at The George Washington University’s Program on Extremism, told Protocol.
The Buffalo attack showed that tech’s best defenses against online hate aren’t sophisticated enough to fight the algorithms designed by those same companies to promote content. Twitch was applauded for how quickly it moved to take down the shooter’s livestream, but the video was still screen-recorded and posted across other social media platforms, where it was viewed millions of times.
“No matter how good your content moderation is, it's hard to keep up with something in real time that is spreading like wildfire across a range of platforms,” Lewis said.
There’s also no financial incentive for platforms to work together on the issue, said Soraya Chemaly, former executive director of the Representation Project, who noted that the competitive nature of the industry prevents full cooperation.
Some of the companies do cooperate, to an extent. Twitch, Facebook and Twitter are members of an anti-terrorism industry group that shares digital signatures of violent content to identify and take it down more quickly. There are some ways Twitch and other platforms can prevent a livestream from beginning in the first place, such as making it more difficult to start accounts and requiring a user to have a certain number of followers to be able to go live.
But Lewis said major platforms are still playing catch-up when it comes to understanding just how white supremacists are using their sites and exploiting tools to ensure hate goes viral. Large platforms like Facebook took some steps toward detecting and fighting online terrorism after the attack in New Zealand. But many companies’ efforts to tackle terrorism have been too little, too late.
Andre Oboler, the founder and CEO of the Online Hate Prevention Institute in Australia, said the gunman in the Buffalo shooting was also prepared for Twitch to act quickly on his livestream. Of the nearly two dozen accounts that viewed the livestream on Twitch, one was the shooter’s own desktop browser. The gunman set up his computer so that the recording was sent to Discord for others to watch. That has allowed the video to continue circulating on Discord and other platforms.
“Other people were watching the copy of the video on the Discord,” Oboler said. “And at least two of them went and recorded the Discord. So what we're seeing is a copy of a copy.”
Oboler said legislative action is essential to preventing the spread of online hate. Even if one platform quickly addresses violence on its platform, another may be slow to act or not act at all. He pointed to Gab, an alt-right site that hosts neo-Nazi groups, and 4chan, a popular site among the far right and alt-right, as likely landing spots if, say, Twitch came up with policies making it harder for white nationalists to share hate on its platform.
“There are some platforms that are deliberately hostile,” he added. “When you get a platform that's deliberately gone out to try and attract that audience as they've been thrown off other platforms, they're getting their business by tailoring it to that audience. They’ve got no incentive unless what they’re doing is illegal.”
In the aftermath of the Christchurch shooting, for example, Australia passed a law that can fine social media platforms and jail their executives if they don’t quickly remove violent content. “That [law] does provide a model and it includes a liability on a platform,” Oboler said. Yet despite the need for policymakers to act, it’s unlikely that Congress will step in anytime soon.
Matthew Williams, a professor of criminology at Cardiff University and the head of HateLab, which provides data on hate speech and crime, said there’s no single solution to stopping a violent livestream in the future — unless everyone shuts off the internet, which isn’t going to happen. Platforms can introduce new anti-hate policies, as Twitch did in 2018 and again in 2020, and AI can be used to identify and take down content that has already been broadcast elsewhere. But governments, law enforcement and platforms essentially need to band together — and that partnership is unlikely.
“Each actor is currently underperforming,” Williams told Protocol. “The legislation lacks teeth, law enforcement have their hands tied behind their backs and lack resources, the platforms lack the will (hate, after all, is profitable), and users are far too often bystanders who simply scroll on by.”