Policy

Can Twitter warnings actually curb hate speech? A new study says yes.

Researchers found that warning Twitter users that someone they follow has been suspended — and they could be next — cuts down on hate speech.

Image: A sign on a fence reads "WARNING: DO NOT FEED THE TROLLS." (Gwengoat via Getty Images)

Twitter in May said it would begin prompting users who are about to tweet something nasty to either revise or delete the message before sending. The decision, the company said at the time, was based on a successful test of the messages in the run-up to the 2020 election.

Now, a new study — this one from researchers at New York University — adds to the evidence that giving users warnings about hate speech can actually cut down their use of hate speech by 10-20%. And those warnings can change users' behavior even when users aren't in the heat of the moment and about to tweet something regrettable.

The researchers at NYU's Center for Social Media and Politics developed their experiment last summer, in response to what was beginning to look like a mass migration of Twitter users to more extreme platforms like Parler. "We wanted to find a way that would basically prevent them from migrating to these platforms, but at the same time, that would result in the reduction of hate speech," said Mustafa Mikdat Yildirim, a Ph.D. candidate in NYU's department of politics and the lead researcher on the report.

So, last July, as racial justice protests were swelling, anti-Asian sentiment was filling social media and conservatives like Sen. Ted Cruz were threatening to abandon Twitter, the NYU researchers began monitoring a subset of 600,000 tweets and scanning for users who they thought might soon be suspended for hate speech. Eventually, the researchers whittled their list down to users who did get suspended and also met certain other criteria, including having more than 50 followers and having at least seven followers who had also used hateful language in their tweets.
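Those inclusion rules amount to a simple filter over the monitored accounts. Here's a minimal sketch in Python of how such a filter might look; the field names and data structure are invented for illustration, not the researchers' actual pipeline:

```python
# Hypothetical sketch of the study's inclusion criteria. Field names
# are invented for illustration; this is not the researchers' code.
from dataclasses import dataclass

@dataclass
class CandidateUser:
    handle: str
    was_suspended: bool     # suspended for hate speech during monitoring
    follower_count: int     # total followers
    hateful_followers: int  # followers who had also used hateful language

def meets_study_criteria(user: CandidateUser) -> bool:
    """Apply the inclusion rules described in the article."""
    return (
        user.was_suspended
        and user.follower_count > 50
        and user.hateful_followers >= 7
    )

candidates = [
    CandidateUser("user_a", True, 120, 9),
    CandidateUser("user_b", True, 30, 8),     # too few followers
    CandidateUser("user_c", False, 500, 12),  # never suspended
]
selected = [u for u in candidates if meets_study_criteria(u)]
print([u.handle for u in selected])  # ['user_a']
```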

Then the researchers trained their attention on the people who followed those suspended accounts. They wanted to know whether warning these people that someone they followed had been suspended for hate speech — and that they could be next — would change the way those people behaved.

The researchers ended up with a list of 27 suspended users with 4,327 followers among them, and divided the followers up into six experimental groups and one control. The researchers then set up their own Twitter accounts with names like @hate_suspension and @expert_on_hate and began publicly tweeting directly at the users in all six groups with one of six different warning messages. They wanted to see which approach, if any, was most effective.

Two of the groups got messages designed to remind people of what they could lose if they used hate speech. Another two received tweets that emphasized "legitimacy," which more or less meant respectfulness. The last two groups got messages that framed the sender as an expert to lend credibility to the message. The messages came in two different flavors — high intensity and low intensity. The control group, meanwhile, received no warning at all.
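Concretely, that's a two-by-three factorial design plus a control: three framings (loss, legitimacy, expertise), each at two intensities. Here is a rough Python sketch of how the seven conditions might be built and randomly assigned; the condition labels and the assignment logic are ours, not the study's:

```python
# Rough sketch of the six treatment conditions plus control described
# above. Condition labels and assignment logic are illustrative only.
import itertools
import random

FRAMINGS = ["loss", "legitimacy", "expertise"]
INTENSITIES = ["low", "high"]

# Six treatments (framing x intensity) plus a no-message control.
CONDITIONS = [f"{f}_{i}" for f, i in itertools.product(FRAMINGS, INTENSITIES)]
CONDITIONS.append("control")

def assign_conditions(followers: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly assign each follower to one of the seven conditions,
    keeping group sizes roughly even via round-robin after a shuffle."""
    rng = random.Random(seed)
    shuffled = followers[:]
    rng.shuffle(shuffled)
    return {h: CONDITIONS[i % len(CONDITIONS)] for i, h in enumerate(shuffled)}

# Example: the study's 4,327 followers split across the seven groups.
assignment = assign_conditions([f"follower_{n}" for n in range(4327)])
```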

A table showing a selection of sample tweets sent from the researchers' accounts to users selected for participation in the study, including language such as: "The user @account you follow was suspended, and I suspect this was because of hateful language. If you continue to use hate speech, you might get suspended." Image: NYU Center for Social Media and Politics

The researchers found that just one warning reduced the use of hateful language by 10% a week after the experiment. For the most effective message — which was also the most politely worded — the change was more like 15-20% a week later.
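Those figures read most naturally as relative changes in the rate of hateful language compared with the control group, rather than absolute point drops. A trivial worked example, with made-up numbers:

```python
# Illustrative arithmetic only; the rates below are made up.
def relative_reduction(treated_rate: float, control_rate: float) -> float:
    """Percent reduction in hateful-tweet rate relative to control."""
    return 100 * (control_rate - treated_rate) / control_rate

# If treated users tweet hateful content at 0.045 per tweet vs. 0.050
# for control, that's a 10% relative reduction.
print(f"{relative_reduction(0.045, 0.050):.0f}%")  # 10%
```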

The fact that all of the messages worked to similar degrees suggested to the researchers that simply receiving a warning may have mattered more than what the particular warning said. "Knowing that someone else sees their hate speech [...] may make people think once more about the language that they used," Yildirim said.

The NYU researchers' findings build on Twitter's own results from last year's experiment. The company found that when users were prompted to revise or delete a harmful tweet before sending it, a whopping 34% of them actually did. And afterward, Twitter said, those users sent 11% fewer offensive replies than they had before.

"Our teams are reviewing the report and its findings," a Twitter spokesperson said of the NYU research. "Broadly, over the past year, we've taken an iterative approach to our work, from encouraging people to more thoughtfully consider sharing content to taking several measures to slow down the spread of misinformation. We'll continue that iterative approach and look forward to building on our efforts with a variety of third-party partners on this critical work."

The NYU report suggests that an even more proactive intervention — warning users even when they're not on the cusp of saying something rotten — could have a significant effect too. And yet, the researchers aren't urging Twitter to adopt their method wholesale.

The NYU researchers saw no evidence that their warnings, which came from accounts with fewer than 100 followers, prompted people to send even more hateful tweets out of spite. But they acknowledge that it might be different if Twitter itself were sending the message. "We don't really know whether people would actually come back at Twitter with some type of backlash," Yildirim said.

It would also be tricky for Twitter to automatically send these warnings to users who follow accounts that have been suspended. Some suspensions, after all, happen by mistake and then get reversed.

Yildirim said it will be important for Twitter to test this type of system itself and be transparent about its findings. The fact is, no one is quite as equipped as Twitter to implement such a widespread intervention on the platform. Civil society groups could take up the task, but they don't have all of Twitter's data or its technical resources at their disposal. Yildirim has been encouraged by Twitter's transparency with its experiments in creating healthier conversations so far.

Twitter is not the only company experimenting with warnings and other kinds of "friction." Facebook has also been ramping up its use of information labels and interstitials, though that platform, Yildirim said, is far harder to study.

All of these companies are leaning into this strategy as a way to avoid having to take more drastic action, like removing content or suspending entire accounts. The NYU approach offers yet another option, using those suspensions as a cautionary example for the users who are left.
