When they testify before the Senate Judiciary Committee on Tuesday, Mark Zuckerberg and Jack Dorsey will undoubtedly try to convince lawmakers that their companies took unprecedented actions this year to protect the 2020 election.
If lawmakers actually do their job this time, they could get answers about whether any of those actions worked.
Yes, the last Senate hearing featuring Zuckerberg, Dorsey and Sundar Pichai (who will not attend Tuesday's hearing) was an unmitigated partisan disaster, and there's no guarantee this one will be any different. But with the election behind us and attempts to undermine it still very much ongoing, members of the committee have a chance to redeem themselves by getting to the bottom of important questions about how these platforms have dealt with those attempts from President Trump on down.
The hearing was initially scheduled after Facebook and Twitter limited the spread of a viral New York Post story about President-elect Biden's son Hunter in October. Now, Republicans seem even more primed to cry censorship than when the hearing was first announced, given the huge volume of warning labels the two companies have since slapped on President Trump's own posts. That means if anyone is going to press the companies on whether their strategies were effective, it will likely be the Democrats.
Perhaps the most important question Zuckerberg and Dorsey could answer is whether warning labels actually stopped or slowed the spread of misinformation. That's nearly impossible for researchers outside of the companies to figure out. "As external researchers to those platforms, it's difficult to measure the effects of their interventions because we don't know what those interventions are or when they happen, or what combination of interventions are happening," Kate Starbird, an associate professor at the University of Washington, said on a recent call with the disinformation research group the Election Integrity Partnership.
Twitter hinted at some of these answers in a blog post last week, in which executives said tweets that included warning labels saw a 29% decrease in quote retweets. But that figure didn't distinguish between the subtle labels that appeared below some tweets and the more forceful ones that required users to click through an interstitial before they could view the tweet at all.
Twitter also touted its "pre-bunk" notifications, which appeared at the top of users' feeds and informed them that voting by mail was safe and that election results might be delayed. Those prompts were viewed by 389 million people, according to Twitter, but that number says very little about the impact those prompts had on those people.
So far, Facebook hasn't shared any such numbers illustrating its labels' effectiveness. "We saw the same posts on Twitter and Facebook receive pretty different treatments," said Jessica González, co-CEO of the advocacy group Free Press. "Facebook had a more general message, which was almost the same as the message they put on any post people posted that had anything to do with the election. I'm worried about the milquetoast nature."
González said lawmakers should use this opportunity to press both companies on whether and how they're studying those qualitative questions about their warning labels and what results, if any, they've found so far.
Erin Shields, national field organizer at MediaJustice, which is part of a group called the Disinfo Defense League, said Zuckerberg and Dorsey need to answer questions about their treatment of repeat offenders. This is a concern other disinformation researchers at the Election Integrity Partnership have recently raised as well, regarding a slew of far-right personalities who have repeatedly spread voting misinformation. Twitter recently permanently suspended an account belonging to Steve Bannon over a video in which he argued that Dr. Anthony Fauci and FBI Director Christopher Wray should be beheaded. Facebook took down the video, but left Bannon's account untouched.
"At what point do those rule violators get suspended?" Shields said. "Regular, everyday people get booted off the platform and get their accounts suspended for much less. It's interesting to see how much grace these platforms are giving political actors who are consistently violating their policies."
One related question from Shields: How much of the violative content consisted of live videos, like Bannon's, or memes? And how much longer did it take the platforms to act on those posts than on posts containing only text? The answer, Shields argues, could say a lot about how porous these platforms' defenses are when it comes to video and imagery.
"We know they have some ability to check words, but what are they doing about memes and graphics and, in particular, live video where disinformation and misinformation is being shared with no pushback from the platforms?" Shields said.
This question is what makes YouTube CEO Susan Wojcicki's absence from the hearing so conspicuous. YouTube took by far the most hands-off approach to election misinformation, allowing videos falsely declaring President Trump's victory to remain on the platform and rack up views. YouTube added subtle warning labels to some videos and removed their ability to run ads, but was far less proactive than either Facebook or Twitter in directly contradicting misinformation within warning labels.
YouTube has pushed back on some of the criticism it's faced, stating that 88% of the top 10 results in the U.S. for election-related searches come from authoritative sources. But that stat elides the fact that people often encounter YouTube videos on other websites or social media platforms, without ever touching YouTube search. Given how much coordination there was between tech platforms and federal agencies leading up to Election Day, it's unclear why YouTube took such a markedly different approach. "YouTube has been let off the hook here," Shields said. Without Wojcicki there, the Senate will have to save those questions for another day.