Opinion

To combat disinformation, centralize moderation

There's more to content moderation than deplatforming.

[Image: Twitter screen with a "disputed" popup. Credit: Twitter]

Yonatan Lupu is an associate professor of political science and international affairs at George Washington University. Nicolás Velasquez Hernandez is a lecturer at the Elliott School of International Affairs and a postdoctoral researcher at GW's Institute for Data, Democracy and Politics.

Florida Gov. Ron DeSantis' signing of a bill that penalizes social media companies for deplatforming politicians was yet another salvo in an escalating struggle over the growth and spread of digital disinformation, malicious content and extremist ideology. While Big Tech, world leaders and policymakers — along with many of us in the research community — all recognize the importance of mitigating online and offline harm, agreement on how best to do that remains elusive.

Big tech companies have approached the problem in different ways and with varying degrees of success. Facebook, for example, has had considerable success in containing malicious content by blocking links that lead to domains known for disinformation and hateful content, and by removing from its search index keywords linked to hate and supremacist movements. Additionally, Facebook and Twitter have both deplatformed producers and purveyors of malicious content and disinformation, including, famously, a former U.S. president.
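For a sense of what link blocking looks like mechanically, here is a minimal sketch; the domain names and blocklist are purely hypothetical, and real systems weigh many more signals than a bare domain match:

```python
# Illustrative sketch only: a simplified version of link blocking against a
# domain blocklist. The domains and the blocklist itself are hypothetical.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-disinfo.site", "example-hate.net"}  # hypothetical

def is_blocked(url: str) -> bool:
    """Return True if the link points to a domain on the blocklist."""
    host = urlparse(url).hostname or ""
    # Match the domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://example-disinfo.site/article/123"))  # True
print(is_blocked("https://news.example.com/story"))            # False
```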

But these "gatekeeper powers" often put Big Tech squarely in the crosshairs of U.S. politicians like DeSantis and other critics, who argue the platforms are censoring the American people. (Legal scholars have argued otherwise, noting that the right of private companies to remove malicious persons or content from their platforms is itself protected under the First Amendment.)

Although studies have shown that deplatforming, removing content and counter-messaging can effectively slow the spread of misinformation or extremist content, these tactics also come at a cost. Deplatforming is likely to keep raising the ire of critics who accuse companies of censorship or political favoritism. Likewise, counter-messaging can be resource-intensive and even counterproductive: Conspiracy theorists, for example, often treat counter-messaging as further confirmation of their beliefs. Moreover, these methods do not truly contain the growth and spread of malicious content or extremism.

To make matters worse, individuals and groups have become increasingly savvy at subverting the moderation efforts of any single platform, and our research shows how malicious content can quickly and easily move between platforms. In fact, by mapping this network of hate communities across multiple platforms, our research team can see how groups exploit the multiverse of online hate. When a platform removes them, extremists often simply regroup on less-moderated platforms like Gab or Telegram and then find ways to reenter the platform from which they were initially removed. This points to a key challenge: Mainstream companies have made great strides in moderating content on their own platforms, but they cannot control the spread of malicious content on unmoderated platforms, which often seeps back onto their own sites.

Likewise, when we investigate how extremist groups operate online, we see hidden, mathematical patterns in how they grow and evolve. The growth patterns of early online support for the U.S.-based extremist group known as the Boogaloos, for example, mirrored those for the terrorist organization ISIS; both movements' growth over time can be explained by a single shockwave mathematical equation. Though ideologically, culturally and geographically distinct, these two groups nevertheless show a remarkable likeness in their digital evolution and "collective chemistry." By understanding how these groups assemble and combine into communities, we can effectively nudge that chemistry in ways that slow their growth or even prevent them from forming in the first place.
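As a rough illustration only (the precise equation is beyond the scope of this piece, and the form below is a generic stand-in rather than our actual model), a shockwave-style growth curve is one in which support builds slowly at first and then surges toward a critical point:

```latex
% Hypothetical illustration only -- not the actual fitted equation.
% N(t) is cumulative online support for a movement, t_c is a critical
% onset time, and A and beta are fitted constants.
\[
  N(t) \;\approx\; A\,(t_c - t)^{-\beta}, \qquad t < t_c,
\]
% Support grows gradually at first and then surges as t nears t_c,
% the shockwave-like burst of growth described above.
```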

These types of system-level insights provide a deeper level of understanding as to how malicious online content spreads, persists and grows. They also point the way forward for social media companies to identify new strategies beyond content removal and counter-messaging to better slow the spread of malicious content, especially during high-stakes moments like a pandemic or social unrest.

For example, our research suggests that platforms could slow the growth of hate communities by intentionally introducing non-malicious, mainstream content onto their pages and crowding out malicious users. They could also modify their platforms to lengthen the paths malicious content would need to travel between hate communities (including those on other platforms) and mainstream groups, thereby slowing its spread and increasing the chance of detection by moderators. Even simple tactics like capping the number of users on extremist pages could be highly effective. One advantage of tactics like these is that their subtlety makes them less likely to draw backlash.
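To make the path-lengthening idea concrete, here is a minimal sketch using the open-source networkx library; the community names and link structure are hypothetical, and the real cross-platform network is far larger and messier:

```python
# Illustrative sketch only (not the authors' model): it shows how removing a
# single "bridge" link between a fringe community and a mainstream community
# lengthens the shortest path content must travel to reach mainstream users.
import networkx as nx

# Hypothetical cross-platform network: nodes are communities, edges are
# observed sharing or membership links between them.
G = nx.Graph()
G.add_edges_from([
    ("fringe_group", "bridge_page"),      # lightly moderated community
    ("bridge_page", "mainstream_page"),   # crossover community
    ("fringe_group", "other_fringe"),
    ("other_fringe", "niche_forum"),
    ("niche_forum", "mainstream_page"),
])

before = nx.shortest_path_length(G, "fringe_group", "mainstream_page")

# Moderation intervention: sever the direct bridge between the crossover
# community and the mainstream community, rather than deleting either one.
G.remove_edge("bridge_page", "mainstream_page")
after = nx.shortest_path_length(G, "fringe_group", "mainstream_page")

print(f"hops before intervention: {before}")  # 2
print(f"hops after intervention:  {after}")   # 3 (via other_fringe -> niche_forum)
```

The point is not the specific numbers but the design choice: severing or rerouting bridge links slows transmission and buys moderators time to detect content, without deleting entire communities outright.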

Although companies hoping to protect their secret sauce from competitors might resist working together, it's clear that treating their individual platforms like semi-fortified islands is a limited solution. For example, when individual platforms remove malicious content, they are understandably reluctant to disclose details about what they removed, but finding ways to confidentially share such information with one another could greatly reduce the time and resources spent on duplicated efforts. It could also prevent the reemergence of that malicious content elsewhere. Along similar lines, if mainstream platforms can find ways to share information with each other about users and content migrating to them from unmoderated platforms, they could more quickly sever the connections between mainstream social media and the dark web.

It is asking a lot of huge, profit-driven corporations to cooperate with their direct competitors, but doing so is vital. Examples of interplatform coordination to reduce malicious content — such as the Global Alliance for Responsible Media — are encouraging. Through the Alliance, platforms like Facebook and YouTube are working to harmonize best practices and share data to clamp down on hate speech. Another example is the information-sharing platform run by the Global Internet Forum to Counter Terrorism, which allows platforms to identify certain types of malicious content.
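For readers unfamiliar with how such sharing can work without exposing user data or proprietary systems, here is a simplified sketch of hash-based sharing; the SHA-256 digests and in-memory set are simplifying assumptions for illustration, and GIFCT's actual database relies on perceptual hashing and dedicated infrastructure:

```python
# Illustrative sketch only: platforms exchange fingerprints of removed
# content rather than the content itself, so duplicates can be caught
# without re-sharing or re-reviewing the underlying material.
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a fingerprint of a removed item that can be shared safely."""
    return hashlib.sha256(content).hexdigest()

# Platform A removes an item and contributes only its hash to a shared pool.
shared_hashes = set()
removed_item = b"<binary payload of removed propaganda video>"
shared_hashes.add(fingerprint(removed_item))

# Platform B checks new uploads against the shared pool before they spread.
new_upload = b"<binary payload of removed propaganda video>"
if fingerprint(new_upload) in shared_hashes:
    print("Match found: flag for review instead of duplicating the analysis.")
```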

In addition to interplatform collaboration, big tech companies would also benefit from greater collaboration with academic researchers, government agencies or other private entities. New perspectives and ways of thinking will ultimately lead to more effective strategies.

Given the sheer effort Big Tech companies expend to connect all of us, they should remember that they don't have to go it alone.
