Twitter’s own research shows that it’s a megaphone for the right. But it’s complicated.
Twitter shared research today showing that its algorithms amplify right-leaning political content more than left-leaning content.
Twitter is publicly sharing research findings today that show that the platform's algorithms amplify tweets from right-wing politicians and content from right-leaning news outlets more than people and content from the political left.
The research did not determine whether the algorithms behind Twitter's Home feed are biased toward conservative political content; the findings show bias in amplification, not its cause. Rumman Chowdhury, the head of Twitter's machine learning, ethics, transparency and accountability team, called it "the what, not the why" in an interview with Protocol.
"We can see that it is happening. We are not entirely sure why it is happening. To be clear, some of it could be user-driven, people's actions on the platform, we are not sure what it is. It's just important that we share this information," Chowdhury said. The META team plans to conduct what she called a "root-cause analysis" to try to discover the "why," and that analysis will likely include creating testable hypotheses about how people use the platform that could help show whether it's the way users interact with Twitter or the algorithm itself that is causing this uneven amplification.
Twitter didn't define for itself which news outlets and politicians are "right-leaning" or belong to right-wing political parties; instead, it used definitions from researchers outside the company. The study looked at millions of tweets from politicians across seven countries and hundreds of millions of tweets containing links to news outlets' content, rather than tweets from the outlets themselves.
Chowdhury emphasized that Twitter doesn't yet know what causes certain content to be amplified. "When algorithms get put out into the world, what happens when people interact with it, we can't model for that. We can't model for how individuals or groups of people will use Twitter, what will happen in the world in a way that will impact how people use Twitter," she said. Twitter's algorithms cannot simply be opened up and examined for bias, and the Home feed isn't run by a single algorithm; multiple algorithms work together as a system, creating "system-level complexity."
She speculated that if the analysis doesn't find that the algorithms themselves are designed in a way that creates bias, other explanations for the outsized amplification of conservative content could be that some right-wing politicians are better at selling their ideology online, or that the right is more successful at mobilizing support for its ideology. "The purpose is not to dump the responsibility on users here. There is a lot here for us to think about, how to give people more meaningful choice, more meaningful control over their input [to the algorithms], as well as the output that's going on."
Twitter's META team — made up of some of the most renowned technology and algorithm ethicists and critics — promised to share its research with the public when the unit was expanded as one of Twitter's 2021 priorities. But actually sharing that research with the world is difficult; Chowdhury and Lindsay McCallum, the Twitter spokesperson who works closely with the META team, described an intense internal debate over the wording of each line in today's statement. While they insisted there is broad internal support for META's transparency, Chowdhury and McCallum worry about accidentally miscommunicating the complex science behind the research. They know that news articles — like this one — are easily misinterpreted, and that the final takeaway for most people is a diluted version of the research.
Facebook has come under massive fire for failing to share the findings of its own social media research, revealed over the last month in a series of leaks to The Wall Street Journal. The company has seemed especially frustrated with how media and politicians have interpreted the leaked findings, and has tried to emphasize the limitations of its researchers' work. While Chowdhury and her team didn't mention Facebook, they have gone to great lengths to interpret the newly published paper themselves.
Chowdhury also hinted at a coming announcement: Twitter has created a way to share the data behind its work for third-party validation of its scientific papers. Today's announcement doesn't specify what that will look like, but Twitter said it has finalized a partnership that will both protect users' privacy and give researchers the chance to replicate the study conducted here. "Anybody who makes algorithms and relies on them has the same questions we have," Chowdhury said.