Policy

Can Twitter warnings actually curb hate speech? A new study says yes.

Researchers found that warning Twitter users that someone they follow has been suspended — and they could be next — cuts down on hate speech.

A sign on a fence that reads: "WARNING: DO NOT FEED THE TROLLS"

A new study from NYU adds to the evidence that giving users warnings about hate speech can actually cut down their use of hate speech by 10-20%.

Image: Gwengoat via Getty Images

Twitter in May said it would begin prompting users who are about to tweet something nasty to either revise or delete the message before sending. The decision, the company said at the time, was based on a successful test of the messages in the run-up to the 2020 election.

Now, a new study — this one from researchers at New York University — adds to the evidence that giving users warnings about hate speech can actually cut down their use of hate speech by 10-20%. And those warnings can change users' behavior even when users aren't in the heat of the moment and about to tweet something regrettable.

The researchers at NYU's Center for Social Media and Politics developed their experiment last summer, in response to what was beginning to look like a mass migration of Twitter users to more extreme platforms like Parler. "We wanted to find a way that would basically prevent them from migrating to these platforms, but at the same time, that would result in the reduction of hate speech," said Mustafa Mikdat Yildirim, a Ph.D. candidate in NYU's department of politics and the lead researcher on the report.

So, last July, as racial justice protests were swelling, anti-Asian sentiment was filling social media and conservatives like Sen. Ted Cruz were threatening to abandon Twitter, the NYU researchers began monitoring a subset of 600,000 tweets, scanning for users they thought might soon be suspended for hate speech. Eventually, the researchers whittled their list down to users who did get suspended and also met certain other criteria, including having more than 50 followers and having at least seven followers who had also used hateful language in their tweets.

Then the researchers trained their attention on the people who followed those suspended accounts. They wanted to know whether warning these people that someone they followed had been suspended for hate speech — and that they could be next — would change the way those people behaved.

The researchers ended up with a list of 27 suspended users with 4,327 followers among them, and divided the followers up into six experimental groups and one control. The researchers then set up their own Twitter accounts with names like @hate_suspension and @expert_on_hate and began publicly tweeting directly at the users in all six groups with one of six different warning messages. They wanted to see which approach, if any, was most effective.

Two of the groups got messages designed to remind people of what they could lose if they used hate speech. Another two received tweets that emphasized "legitimacy," which more or less meant respectfulness. The last two groups got messages that framed the sender as an expert to lend credibility to the message. The messages came in two different flavors — high intensity and low intensity. The control group, meanwhile, received no warning at all.

A table showing a selection of sample tweets sent from the researchers' accounts to users selected for participation in the study, including language such as, "The user @account you follow was suspended, and I suspect this was because of hateful language. If you continue to use hate speech, you might get suspended." Image: NYU Center for Social Media and Politics

The researchers found that just one warning reduced the use of hateful language by 10% a week after the experiment. For the most effective message — which was also the most politely worded — the change was more like 15-20% a week later.

The fact that all of the messages had similar degrees of impact suggested to the researchers that simply receiving a warning may have had more of an impact than what the particular warning said. "Knowing that someone else sees their hate speech [...] may make people think once more about the language that they used," Yildirim said.

The NYU researchers' findings build on Twitter's own results from last year's experiment. The company found that when users were prompted to revise or delete a harmful tweet before sending it, a whopping 34% of them actually did. And in the future, Twitter said, those users sent 11% fewer offensive replies than they'd sent before.

"Our teams are reviewing the report and its findings," a Twitter spokesperson said of the NYU research. "Broadly, over the past year, we've taken an iterative approach to our work, from encouraging people to more thoughtfully consider sharing content to taking several measures to slow down the spread of misinformation. We'll continue that iterative approach and look forward to building on our efforts with a variety of third-party partners on this critical work."

The NYU report suggests that an even more proactive intervention — warning users even when they're not on the cusp of saying something rotten — could have a significant effect too. And yet, the researchers aren't urging Twitter to adopt their method wholesale.

The NYU researchers saw no evidence that their warnings, which came from accounts with fewer than 100 followers, prompted people to send even more hateful tweets out of spite. But they acknowledge that the outcome might be different if Twitter itself were sending the messages. "We don't really know whether people would actually come back at Twitter with some type of backlash," Yildirim said.

It would also be tricky for Twitter to automatically send these warnings to users who follow accounts that have been suspended. Some suspensions, after all, happen by mistake and then get reversed.

Yildirim said it will be important for Twitter to test this type of system itself and be transparent about its findings. The fact is, no one is quite as equipped as Twitter to implement such a widespread intervention on the platform. Civil society groups could take up the task, but they don't have all of Twitter's data or its technical resources at their disposal. Yildirim has been encouraged by Twitter's transparency with its experiments in creating healthier conversations so far.

Twitter is not the only company experimenting with warnings and other kinds of "friction." Facebook has also been ramping up its use of information labels and interstitials, but Facebook, Yildirim said, is far harder to study.

All of these companies are leaning into this strategy as a way to avoid having to take more drastic action, like removing content or suspending entire accounts. The NYU approach offers yet another option, using those suspensions as a cautionary example for the users who are left.
