Facebook knows more rules are coming on harmful content, so it's doing what it can to shape them.
In a 22-page white paper released Monday in tandem with Mark Zuckerberg's meeting with EU regulators, Facebook's VP of content policy Monika Bickert wrestles with big questions about speech online and lays out a policy roadmap for regulating content — a roadmap that borrows from the company's existing procedures, including transparency reports and channels for users to report content. Except now, Facebook is encouraging governments to adopt similar policies as regulations, along with a new liability model, as it pushes for global standards for managing harmful content. It's a shift that would make it easier for the company to navigate a world of fractured national requirements.
It's "more of a summation of the thinking and conversations about this topic that have been happening for several years than anything else," said Renée DiResta, research manager at the Stanford Internet Observatory.
"I don't see much that's new," she added.
As suggested by the paper's timing, it's targeted at the very regulators Zuckerberg is attempting to charm right now.
"It's important to understand that this white paper is talking about regulation outside the U.S.," said St. John's Law professor Kate Klonick.
Several countries, including Germany and Australia, have enacted laws with stiff penalties for companies that fail to remove certain content within certain timeframes, drawing complaints from companies including Facebook.
Here are four key highlights from the white paper — and what they mean:
'Preserving free expression'
"In the United States, for example, the First Amendment protects a citizen's ability to engage in dialogue online without government interference except in the narrowest of circumstances. Citizens in other countries often have different expectations about freedom of expression, and governments have different expectations about platform accountability. Unfortunately, some of the laws passed so far do not always strike the appropriate balance between speech and harm, unintentionally pushing platforms to err too much on the side of removing content." (Page 4)
The paper opens with an extended meditation on free expression and acknowledges that private platforms like Facebook are "increasingly" the ones making determinations about what speech is allowable online. But the section above flags that Facebook is most concerned about regulation outside the U.S., while throwing shade at some established laws.
Without explicitly addressing them, the paper is "clearly a response" to two laws, Klonick said: the German Network Enforcement Act (or NetzDG) and the United Kingdom's still-in-progress Online Harms regulations.
NetzDG is aimed at limiting the spread of hate speech, which is illegal in Germany. It was passed by Germany's Bundestag in 2017, and Facebook has already faced fines for allegedly violating it. The law requires removal of material that breaks German hate speech law within 24 hours and sets penalties of up to 50 million euros for violations.
However, human rights activists have criticized the law as over-broad and a potential infringement on freedom of expression. Civil liberties advocates have raised similar censorship concerns about the proposed U.K. regulation.
'Systems and procedures' vs. 'performance targets'
"By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies' efforts." (Page 9)
When it comes to regulatory models, Facebook argues in favor of an approach that requires various "systems and procedures." If that sounds familiar, it's because they are already built into Facebook's current operations.
The company also makes a case against establishing "performance targets" for companies, such as fines if companies don't keep harmful content below a certain threshold. That approach could create "perverse incentives," the company argues, and lead to procedures that do more to juice the required metrics than actually stop the spread of harmful content.
For example, the company argues that a 24-hour removal requirement like Germany's could discourage companies from proactively searching for and removing offending content that falls outside that window, since doing so would make their reported numbers look worse.
'Harmful content'
"Among types of content most likely to be outlawed, such as content supporting terrorism and content related to the sexual exploitation of children, there is significant variance in laws and enforcement. In some jurisdictions, for instance, images of child sexual exploitation are lawful if they are computer generated. In others, praise of terror groups is permitted, and lists of proscribed terror groups vary widely by country and region. Laws vary even more in areas such as hate speech, harassment, or misinformation. Many governments are grappling with how to approach the spread of misinformation, but few have outlawed it." (Page 17)
One big issue Facebook has come up against is that laws already on the books aren't very specific about what makes content harmful or what kind of remedy companies should deliver. For example, Australia passed a law in the aftermath of the Christchurch shooting that called on social media companies to remove "abhorrent violent material," defined as videos of rapes, murders, and terrorist attacks, but the timeline for that removal, "expeditiously," is open to a lot of interpretation.
Ultimately, rather than talk about specific content that should be regulated, Facebook argues that governments should consider a few issues before deciding how to approach such content. Among them, that their rules could "be enforced practically, at scale, with limited context about the speaker and content" and take into account the nature of the content (private vs. public, permanent vs. ephemeral) while providing "flexibility so that platforms can adapt policies to emerging language trends."
What's next?
"Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications. It should aim to increase interoperability among regulators and regulations. However, governments should not impose their standards onto other countries' citizens through the courts or any other means." (Page 19)
The paper builds on a point CEO Mark Zuckerberg made in a Washington Post op-ed last year, when he argued "we need a more standardized approach" to combating harmful content like hate and terrorist speech online. And while the policy content may not be a revelation, its release makes clear the direction Facebook hopes larger global conversations about content moderation will go: toward an integrated approach that global platforms will inevitably help drive.
Some in the tech industry seem on board. For example, Twitter Director of Public Policy Strategy Nick Pickles thanked Facebook for the report — naturally, via tweet — calling it "an important contribution to the debate on tech regulation."
However, the European lawmakers Facebook was likely trying to woo seem less enthused.
"It's not for us to adapt to those companies, but for them to adapt to us," said Europe's commissioner for internal markets Thierry Breton, according to a POLITICO report.