In the months since the attack on the U.S. Capitol, Facebook has talked at length about how it plans to keep Groups and News Feed from turning into toxic swirls of political vitriol. But new evidence presented by the Department of Justice in a case against one of the rioters raises questions about the role Facebook Messenger played leading up to the riot and how the company polices that platform.
On Wednesday, the DOJ released a slew of private Facebook messages sent and received by Kelly Meggs, the Florida leader of the militia group Oath Keepers, as part of a case in which Meggs is accused of conspiring to stop Congress from certifying the election.
The messages begin on Nov. 9, shortly after the election, and grow increasingly detailed in the days leading up to the riot. In them, Meggs describes having formed an "alliance" with the Proud Boys and the Three Percenters and responds to one of former President Trump's tweets, in which Trump suggested that Jan. 6 "will be wild." "He called us all to the Capitol and wants us to make it wild !!! Sir Yes Sir !!!" one of Meggs' Facebook messages reads.
Meggs also outlines a plan to "come in behind antifa and beat the hell out of them" and later describes Jan. 6 as "when we are all in DC to insurrection [sic]."
The explicit calls for violence would appear to be clear violations of Facebook's terms of service. Months before the election, Facebook had also announced a ban on militia groups, including the Oath Keepers. But both Meggs and his messages, it appears, escaped Facebook's notice.
A Facebook spokesperson said Meggs' account was disabled "some time ago," though the DOJ's evidence includes messages sent as recently as Jan. 4.
Facebook Messenger has always been a tricky product for the company to moderate. On one hand, users and civil liberties advocates recoil at the idea of Facebook reading private messages. On the other, if Facebook didn't moderate Messenger at all, it could become a free-for-all (some might argue it already is).
In 2018, CEO Mark Zuckerberg talked in an interview about how Facebook uses automated systems to actively moderate private messages, describing a case in Myanmar where users were sending Muslim users messages in Facebook Messenger claiming a Buddhist uprising was coming, then sending similar messages about a Muslim uprising to Buddhist users. "That's the kind of thing where I think it is clear that people were trying to use our tools in order to incite real harm," Zuckerberg said. "Now, in that case, our systems detect that that's going on. We stop those messages from going through. But this is certainly something that we're paying a lot of attention to."
Just a year after that interview, though, Zuckerberg announced that the company would transition Messenger to end-to-end encryption, making Messenger content invisible even to Facebook. The announcement quickly set off alarms about how Facebook would find, for instance, child sexual abuse material, which it currently reports en masse to the National Center for Missing and Exploited Children. Facebook has said that the project, which is still ongoing, will take years.
But the DOJ's findings highlight the shortcomings of Facebook's moderation of even unencrypted and highly explicit messages on Messenger. (Facebook isn't the only messaging platform mentioned in the disclosures; Meggs' Signal messages are part of the filing, too.) The disclosures come just a day after a report from the group Avaaz showed how hundreds of militia and extremist groups continued to grow on Facebook despite the platform's policies.
The DOJ's evidence also shines a spotlight on the limits of Facebook's crackdown on militia groups like the Oath Keepers. While the company did ban Pages, Groups and Instagram accounts linked to militia movements, it didn't ban the individual profiles of Facebook users who identify with those groups, unless they were the admins of those Pages or Groups or they had repeatedly shared content supporting a banned militia group. Facebook said it has so far disabled more than 25,000 individual users' accounts under that policy. Facebook did not, however, ban praise of those militia groups outright, as it has done with foreign extremist organizations like ISIS and Al Qaeda. In its initial blog post, Facebook said it would "allow people to post content that supports these movements and groups, so long as they do not otherwise violate our content policies."
Brian Fishman, the head of counterterrorism and dangerous organizations policy at Facebook, recently explained how the structure of domestic and foreign extremist groups differ during an interview with Protocol, and why he feels that necessitates a different approach from Facebook. "It's not just organizations," Fishman said, referring to the threat of domestic extremism. "It's not just structured ideologies, even. It includes folks that are haphazard adherents to various conspiracy theories. It extends from people engaged primarily in the political process to folks that are explicitly rejecting political resolution of disputes, and all of that was represented on the mall on Jan. 6."
On Thursday, members of Congress will have a chance to question Zuckerberg about this and other issues when he appears alongside Twitter CEO Jack Dorsey and Alphabet CEO Sundar Pichai at a hearing of the House Energy and Commerce Committee. According to his prepared remarks, Zuckerberg plans to tell the committee, "The Capitol attack was a horrific assault on our values and our democracy, and Facebook is committed to assisting law enforcement in bringing the insurrectionists to justice."