The decisions Facebook and Twitter have made this week to stop the spread of election misinformation would have been unimaginable just four years ago. On Twitter, President Trump's feed has become an infinite scroll of warning labels. On Facebook, one of the fastest growing groups in the company's history, called Stop the Steal, got shut down in a matter of days. Twitter slowed recommendations of tweets that had been labeled as election misinformation, and Facebook is reportedly cooking up plans to do the same.
The companies' swift and aggressive action led some to speculate that tech giants — with the exception of YouTube — had succeeded in averting the mess they made in 2016.
But have they really? Is it worth cheering Facebook for shutting down a group over the risk of offline violence when armed protesters have already gathered at the gates and new groups are popping up in its place? Is it really enough for Twitter to label Donald Trump Jr.'s tweet as "disputed" when what he's calling for in that tweet is "total war"? As debunked rumors about dead people voting and watermarks on ballots continue to grow days after the election, is it really fair to say tech companies have fought the good fight against misinformation and won?
To put it in language we're all well-familiar with by now: It's too early to call.
"I really hate a lot of the reporting that's going on that's like they passed or they didn't pass," Alex Stamos, director of the Stanford Internet Observatory, said on a call with reporters Friday organized by the Election Integrity Partnership. "It's not that simple … In some cases they did really well. In some cases there's lots of room for improvement. And I don't think we can just dilute that down."
First: Credit where it's due. Recent research from Harvard has shown that President Trump and other elite conservatives are the biggest spreaders of misinformation about voter fraud, and this week has certainly proved the case. Both online and off, President Trump has continuously claimed without evidence that the election is being stolen. Facebook and Twitter have taken aggressive action on those claims, stringently policing the accounts of Trump and his inner circle.
"Facebook and Twitter are saying to Donald Trump, 'This disinformation on the platform won't fly, and we're going to enforce our policies,'" said Daniel Kreiss, a professor of media and communications at the University of North Carolina, who specializes in tech and policy. "From an elite perspective, I think they sent a message that they play a democratic role and will protect free and fair elections. That's an unmitigated good."
Trump's efforts to undermine the election results were something the tech platforms foresaw and had solid plans in place for, said Emerson Brooking, a resident fellow at the Atlantic Council's Digital Forensic Research Lab. "They prepared a lot for a contested result and contestation coming from the President of the United States, and I think they've done as reasonable a job as you could," Brooking said on the Election Integrity Partnership call.
But their record has been mixed when it comes to the broader landscape of misinformation and disinformation. The EIP tracked the spread of the #Sharpiegate conspiracy, for one, which falsely claimed that Arizona ballots would be rejected if they were filled out with Sharpie. The researchers found that tweets related to Sharpiegate started off being shared Wednesday by unverified accounts with relatively small audiences and rapidly ballooned as they were picked up by larger accounts over the course of the day — even after Arizona election officials debunked the rumor on Twitter.
Misinformation that begins on Facebook and Twitter has been spreading quickly to smaller platforms with less robust monitoring. One example: Twitter has repeatedly labeled tweets from the account @PhillyGOP for spreading false information. But according to Renee DiResta, a technical research manager at the Stanford Internet Observatory, posts that were identical to @PhillyGOP's would nearly simultaneously turn up on platforms like Parler, which bills itself as a "free speech social network." "They're just watching the Twitter account, and then they want to be the first person on Parler to spread the news," DiResta said on one of the EIP's press calls.
There's little Facebook and Twitter can do once posts have left their sites, but the researchers say there's a lot more they could be doing to at least prevent repeat offenders from continuing to violate their rules and amass a larger audience in the process. "The fact that you have the same accounts, who violate these rules over and over again, that don't get punished is going to be something the platforms have to address," Stamos said. "If you can keep on getting punished, but in doing so, you increase your follower count, and there's no risk of your entire account being taken away, it's completely logical to do this."
The question of what to do with repeat offenders is particularly fraught when it comes to elected officials, like newly elected congresswoman and QAnon supporter Marjorie Taylor Greene, whose unsubstantiated tweets about the election being stolen were repeatedly masked under warning labels this week. And yet, she shows no signs of stopping. "These platforms have to consider permanently removing U.S. officials from platforms if they use them so destructively," Brooking said. "That will be a big challenge in the weeks ahead."
Facebook and Twitter, it's clear, are trying. Their websites are wallpapered with warning labels. But what good is wallpaper if you peel it back and find the walls are rotting?
By almost any measure, misinformation and disinformation on social media have been as prolific as ever this week. Much of that informational pollution came from the president himself and the media channels that exist to support him. Tech platforms were never going to be able to fully defend themselves against such an onslaught. But it's still unclear how formidable the defenses they do have in place have been. Did labels actually slow the spread of these rumors? What kind of life did these posts take on once they reached other platforms? What connection did all the online chatter have to offline action? Answering those questions will take more considered research than can be completed in a few days.
What is clear is that, at the moment, the country is teetering on the edge of instability, with hordes of angry and often armed voters taking to the streets and intimidating election officials. In some cases, they're doing it because of the things they saw online, which they've been conditioned over time — by conservative media and by tech companies' lax policies or lack of enforcement — to believe are true. Do those same companies now deserve pats on the back for finally, suddenly telling them they're not? That, it seems, would be a premature declaration of victory.