Policy

'We’ll be here again': How tech companies try and fail to prevent terrorism

Social media platforms are playing defense to stop mass shootings. Without cooperation and legislation, it’s not working.

Law enforcement officials are seen at the scene of a mass shooting at Tops Friendly Market at Jefferson Avenue and Riley Street on Sunday, May 15, 2022 in Buffalo, NY. The fatal shooting of 10 people at a grocery store in a historically Black neighborhood of Buffalo by a young white gunman is being investigated as a hate crime and an act of racially motivated violent extremism, according to federal officials.


Photo: Kent Nishimura / Los Angeles Times via Getty Images

Tech platforms' patchwork approach to content moderation has made them a hotbed for hate speech that can turn deadly, as it did this weekend in Buffalo. The alleged shooter, who killed 10 people in a historically Black neighborhood, used Discord to plan his rampage for months and livestreamed it on Twitch.

The attack mirrors what happened in Christchurch, New Zealand, where a white supremacist murdered 51 people at two mosques in 2019. He viewed the killings as a meme. To disseminate that meme, he turned to the same place more than 1 billion other users do: Facebook. This pattern is destined to repeat itself as long as tech companies continue to play defense instead of offense against online hate and fail to work together.

“They’ll take the hits when someone uses their platform to post terrorism and they will take the bad op-eds about them, because they know that we've been here before and we'll be here again,” Jon Lewis, a research fellow at The George Washington University’s Program on Extremism, told Protocol.

The Buffalo attack showed that tech’s best defenses against online hate aren’t sophisticated enough to fight the algorithms designed by those same companies to promote content. Twitch was applauded for how quickly it moved to take down the shooter’s livestream, but the video was still screen-recorded and posted across other social media platforms, where it was viewed millions of times.

“No matter how good your content moderation is, it's hard to keep up with something in real time that is spreading like wildfire across a range of platforms,” Lewis said.

There’s also no financial incentive for platforms to work together on the issue, said Soraya Chemaly, former executive director of the Representation Project, who noted that the competitive nature of the industry prevents full cooperation.

Some of the companies do cooperate, to an extent. Twitch, Facebook and Twitter are members of an anti-terrorism industry group that shares digital signatures of violent content so members can identify and take it down more quickly. Twitch and other platforms can also make it harder for a livestream to begin in the first place, by making accounts more difficult to create or requiring a minimum follower count before a user can go live.
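In rough terms, hash sharing works like this: when one platform removes a piece of violent content, it computes a digital fingerprint of the file and publishes that fingerprint to a shared database, which other member platforms check uploads against. Here is a minimal sketch of the idea in Python, with a placeholder hash set standing in for the shared database rather than any group's real API:

```python
import hashlib

# Placeholder for the shared industry database of fingerprints of known
# violent content; real databases hold large numbers of such hashes.
SHARED_HASH_DB: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute a SHA-256 digest identifying an exact copy of a file."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches known violent content."""
    return fingerprint(file_bytes) in SHARED_HASH_DB

# A platform that removes a video can publish its fingerprint so other
# member platforms catch identical re-uploads immediately.
SHARED_HASH_DB.add(fingerprint(b"example video bytes"))
assert should_block(b"example video bytes")
```

The weakness, which matters below: a cryptographic digest matches only byte-identical copies, so a screen recording of the same footage produces a completely different hash and slips past this check.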

But Lewis said major platforms are still playing catch-up when it comes to understanding just how white supremacists are using their sites and exploiting tools to ensure hate goes viral. Large platforms like Facebook took some steps toward detecting and fighting online terrorism after the attack in New Zealand. But many companies’ efforts to tackle terrorism have been too little, too late.

Andre Oboler, the founder and CEO of the Online Hate Prevention Institute in Australia, said the gunman in the Buffalo shooting was also prepared for Twitch to act quickly on his livestream. Of the nearly two dozen accounts that viewed the livestream on Twitch, one was the shooter’s own desktop browser. The gunman set up his computer so that the recording was sent to Discord for others to watch, which has allowed the video to continue to circulate on Discord and other platforms.

“Other people were watching the copy of the video on the Discord,” Oboler said. “And at least two of them went and recorded the Discord. So what we're seeing is a copy of a copy.”
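Copies of copies are exactly what exact-match fingerprints miss, which is why detection systems also lean on perceptual hashing: fingerprints of what a frame looks like rather than its exact bytes, so re-encodes and screen captures still land near the original. Below is a rough sketch using the open-source imagehash library, with hypothetical file names; production systems are considerably more elaborate:

```python
from PIL import Image
import imagehash

# Frames whose perceptual hashes differ by at most this many bits are
# treated as the same content despite re-encoding or screen capture.
MATCH_THRESHOLD = 8

def matches_known_content(frame_path: str,
                          known_hashes: list[imagehash.ImageHash]) -> bool:
    """Return True if a frame perceptually matches any known violent frame."""
    candidate = imagehash.phash(Image.open(frame_path))
    return any(candidate - known <= MATCH_THRESHOLD for known in known_hashes)

# Hypothetical usage: hash frames from a video human moderators removed,
# then screen frames of new uploads against those hashes.
known = [imagehash.phash(Image.open("removed_stream_frame.png"))]
print(matches_known_content("new_upload_frame.png", known))
```

Even then, determined uploaders can crop, mirror or re-film footage to push a frame past the distance threshold, one reason videos like these keep resurfacing.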

Oboler said legislative action is essential to preventing the spread of online hate. Even if one platform quickly addresses violence on its platform, another may be slow to act or not act at all. He pointed to Gab, an alt-right site that hosts neo-Nazi groups, and 4chan, a site popular with the far right and alt-right, as likely landing spots if, say, Twitch came up with policies making it harder for white nationalists to share hate on its platform.

“There are some platforms that are deliberately hostile,” he added. “When you get a platform that's deliberately gone out to try and attract that audience as they've been thrown off other platforms, they're getting their business by tailoring it to that audience. They’ve got no incentive unless what they’re doing is illegal.”

In the aftermath of the Christchurch shooting, for example, Australia passed a law that fines social media platforms and can jail their executives if they fail to quickly remove violent content. “That [law] does provide a model and it includes a liability on a platform,” Oboler said. Yet despite the need for policymakers to act, it’s unlikely that Congress will step in anytime soon.

Matthew Williams, a professor of criminology at Cardiff University and the head of HateLab, which provides data on hate speech and crime, said there’s no single solution to stopping a violent livestream in the future — unless everyone shuts off the internet, which isn’t going to happen. Platforms can introduce new anti-hate policies, as Twitch did in 2018 and again in 2020, and AI can be used to identify and take down content that has already been broadcast elsewhere. But governments, law enforcement and platforms essentially need to band together — and that partnership is unlikely.

“Each actor is currently underperforming,” Williams told Protocol. “The legislation lacks teeth, law enforcement have their hands tied behind their backs and lack resources, the platforms lack the will (hate, after all, is profitable), and users are far too often bystanders who simply scroll on by.”
