As the Russian invasion of Ukraine continues, social media platforms are fighting an avalanche of misinformation and propaganda with a common weapon: information labels.
But do those labels actually stop people from spreading questionable content? Not exactly, according to a newly released study that analyzed how labels affected former President Trump’s ability to spread election lies on Twitter in 2020.
Researchers for the German Marshall Fund analyzed replies and engagement data for 1,241 tweets Trump wrote between October 2020 and January 2021. They found that labels alone didn’t change Twitter users’ likelihood of sharing or engaging with the tweets. That doesn’t mean, however, that the labels were totally ineffective. When a label included a particularly strong rebuttal of a tweet containing false information, the researchers found, the tweet drew fewer user interactions and less toxicity in the replies and comments.
This suggests that while labels on their own may not stop the spread of misinformation, labels designed in a particular way might. “As policymakers and platforms consider fact checks and warning labels as a strategy for fighting misinformation, it is imperative that we have more research to understand the impact of labels and label design on user engagement,” Ellen P. Goodman, a law professor at Rutgers, wrote in a statement. Goodman co-authored the study with Orestis Papakyriakopoulos, a postdoctoral research associate at Princeton.
“This new empirical research shows that the impacts can differ depending on label design, ranging from no impact at all to reductions in engagement and even toxicity of replies,” Goodman said.
Twitter did not immediately respond to a request for comment. But the company has shared its own data about labeling in the past, yielding somewhat different results. Shortly after the election, the company reported that it had observed a 29% decrease in quote tweets of labeled tweets between Oct. 27 and Nov. 11 of 2020. “These enforcement actions remain part of our continued strategy to add context and limit the spread of misleading information about election processes around the world on Twitter,” company executives wrote at the time.
Twitter has also pointed to data showing that prompts asking users to reconsider harmful tweets before sending them actually work. In one test, 34% of users opted to delete or revise their tweet after receiving such a prompt.
Labels have remained one of the primary tools tech companies have reached for in the days since Russia invaded Ukraine. Twitter and Facebook have both said they will label all posts linking to Russian state media and demote those posts in users’ feeds. The question is whether simply seeing a label on a tweet or post will be enough to stop users from engaging with it. The new study suggests that the most effective way to actually stop a lie from spreading is to directly call it out as one.