Can Twitter warnings actually curb hate speech? A new study says yes.

Researchers found that warning Twitter users that someone they follow has been suspended — and they could be next — cuts down on hate speech.

A sign on a fence that reads: "WARNING: DO NOT FEED THE TROLLS"

A new study from NYU adds to the evidence that giving users warnings about hate speech can actually cut down their use of hate speech by 10-20%.

Image: Gwengoat via Getty Images

Twitter in May said it would begin prompting users who are about to tweet something nasty to either revise or delete the message before sending. The decision, the company said at the time, was based on a successful test of the messages in the run-up to the 2020 election.

Now, a new study — this one from researchers at New York University — adds to the evidence that giving users warnings about hate speech can actually cut down their use of hate speech by 10-20%. And those warnings can change users' behavior even when users aren't in the heat of the moment and about to tweet something regrettable.

The researchers at NYU's Center for Social Media and Politics developed their experiment last summer, in response to what was beginning to look like a mass migration of Twitter users to more extreme platforms like Parler. "We wanted to find a way that would basically prevent them from migrating to these platforms, but at the same time, that would result in the reduction of hate speech," said Mustafa Mikdat Yildirim, a Ph.D. candidate in NYU's department of politics and the lead researcher on the report.

So, last July, as racial justice protests were swelling, anti-Asian sentiment was filling social media and conservatives like Sen. Ted Cruz were threatening to abandon Twitter, the NYU researchers began monitoring a subset of 600,000 tweets and scanning for users who they thought might soon be suspended for hate speech. Eventually, the researchers whittled their list down to users who did get suspended and also met certain other criteria, including having more than 50 followers and having at least seven followers who had also used hateful language in their tweets.
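The filtering step described above can be sketched in a few lines of code. This is a minimal illustration, not the study's actual pipeline; the record fields and the `meets_criteria` helper are hypothetical.

```python
# Hypothetical sketch of the study's filtering criteria: keep only
# suspended users with more than 50 followers, at least 7 of whom
# had themselves used hateful language.

def meets_criteria(user):
    """Return True if a suspended user qualifies for the study."""
    hateful_followers = [f for f in user["followers"] if f["used_hate_speech"]]
    return (
        user["suspended"]
        and len(user["followers"]) > 50
        and len(hateful_followers) >= 7
    )

candidates = [
    {"name": "a", "suspended": True,
     "followers": [{"used_hate_speech": True}] * 7
                + [{"used_hate_speech": False}] * 50},
    {"name": "b", "suspended": True,
     "followers": [{"used_hate_speech": False}] * 60},
]

qualified = [u["name"] for u in candidates if meets_criteria(u)]
print(qualified)  # ['a'] -- "b" has enough followers but no hateful ones
```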

Then the researchers trained their attention on the people who followed those suspended accounts. They wanted to know whether warning these people that someone they followed had been suspended for hate speech — and that they could be next — would change the way those people behaved.

The researchers ended up with a list of 27 suspended users with 4,327 followers among them, and divided the followers up into six experimental groups and one control. The researchers then set up their own Twitter accounts with names like @hate_suspension and @expert_on_hate and began publicly tweeting directly at the users in all six groups with one of six different warning messages. They wanted to see which approach, if any, was most effective.

Two of the groups got messages designed to remind people of what they could lose if they used hate speech. Another two received tweets that emphasized "legitimacy," which more or less meant respectfulness. The last two groups got messages that framed the sender as an expert to lend credibility to the message. The messages came in two different flavors — high intensity and low intensity. The control group, meanwhile, received no warning at all.
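The design above amounts to a two-by-three factorial (frame × intensity) plus a no-message control. A rough sketch of that random assignment, with illustrative condition labels that are not the study's actual names:

```python
import random

# Three message frames, each at two intensities, plus a control
# group that receives no warning -- seven conditions in total.
FRAMES = ["loss", "legitimacy", "expert"]
INTENSITIES = ["low", "high"]
CONDITIONS = [(f, i) for f in FRAMES for i in INTENSITIES] + [("control", None)]

def assign_groups(followers, seed=0):
    """Randomly split followers roughly evenly across the seven conditions."""
    rng = random.Random(seed)
    shuffled = followers[:]
    rng.shuffle(shuffled)
    groups = {cond: [] for cond in CONDITIONS}
    for idx, follower in enumerate(shuffled):
        groups[CONDITIONS[idx % len(CONDITIONS)]].append(follower)
    return groups

# The study worked with 4,327 followers of the suspended accounts.
groups = assign_groups([f"user_{n}" for n in range(4327)])
print({cond: len(members) for cond, members in groups.items()})
```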

A table showing a selection of sample tweets sent from the researchers' accounts to users selected for participation in the study, including language such as, "The user @account you follow was suspended, and I suspect this was because of hateful language. If you continue to use hate speech, you might get suspended."

Image: NYU Center for Social Media and Politics

The researchers found that just one warning reduced the use of hateful language by 10% a week after the experiment. For the most effective message — which was also the most politely worded — the change was more like 15-20% a week later.

The fact that all of the messages had similar degrees of impact suggested to the researchers that simply receiving a warning may have had more of an impact than what the particular warning said. "Knowing that someone else sees their hate speech [...] may make people think once more about the language that they used," Yildirim said.

The NYU researchers' findings build on Twitter's own results from last year's experiment. The company found that when users were prompted to revise or delete a harmful tweet before sending it, a whopping 34% of them actually did. And afterward, Twitter said, those users sent 11% fewer offensive replies than they had before.

"Our teams are reviewing the report and its findings," a Twitter spokesperson said of the NYU research. "Broadly, over the past year, we've taken an iterative approach to our work, from encouraging people to more thoughtfully consider sharing content to taking several measures to slow down the spread of misinformation. We'll continue that iterative approach and look forward to building on our efforts with a variety of third-party partners on this critical work."

The NYU report suggests that an even more proactive intervention — warning users even when they're not on the cusp of saying something rotten — could have a significant effect too. And yet, the researchers aren't urging Twitter to adopt their method wholesale.

The NYU researchers found no evidence that their warnings, which came from accounts with fewer than 100 followers, prompted people to send even more hateful tweets out of spite. But they acknowledge that the response might be different if Twitter itself were sending the message. "We don't really know whether people would actually come back at Twitter with some type of backlash," Yildirim said.

It would also be tricky for Twitter to automatically send these warnings to users who follow accounts that have been suspended. Some suspensions, after all, happen by mistake and then get reversed.

Yildirim said it will be important for Twitter to test this type of system itself and be transparent about its findings. The fact is, no one is quite as equipped as Twitter to implement such a widespread intervention on the platform. Civil society groups could take up the task, but they don't have all of Twitter's data or its technical resources at their disposal. Yildirim has been encouraged by Twitter's transparency with its experiments in creating healthier conversations so far.

Twitter is not the only company experimenting with warnings and other kinds of "friction." Facebook has also been ramping up its use of information labels and interstitials, but Facebook, Yildirim said, is far harder to study.

All of these companies are leaning into this strategy as a way to avoid having to take more drastic action, like removing content or suspending entire accounts. The NYU approach offers yet another option, using those suspensions as a cautionary example for the users who are left.
