Politics

Facebook said what about content regulations?

A new white paper proposes "a way forward" on content rules.

Facebook released a 22-page white paper in tandem with Mark Zuckerberg's meeting with EU regulators.

Photo: AFP via Getty Images

Facebook knows more rules are coming on harmful content, so it's doing what it can to shape them.

In a 22-page white paper released Monday in tandem with Mark Zuckerberg's meeting with EU regulators, Facebook's VP of content policy Monika Bickert wrestles with big questions about speech online and lays out a policy roadmap for regulating content — a roadmap that borrows from the company's existing procedures, including transparency reports and channels for users to report content. Except now, Facebook is encouraging governments to adopt similar policies as regulations, along with a new liability model, as it pushes for global standards for managing harmful content. It's a shift that would make it easier for the company to navigate a world of fractured national requirements.

It's "more of a summation of the thinking and conversations about this topic that have been happening for several years than anything else," said Renée DiResta, research manager at the Stanford Internet Observatory.

"I don't see much that's new," she added.

As suggested by the paper's timing, it's targeted at the very regulators Zuckerberg is attempting to charm right now.

"It's important to understand that this white paper is talking about regulation outside the U.S.," said St. John's Law professor Kate Klonick.

Several countries, including Germany and Australia, have enacted laws with stiff penalties for companies that fail to remove certain content within certain timeframes, drawing complaints from companies including Facebook.

Here are four key highlights from the white paper — and what they mean:

'Preserving free expression'

"In the United States, for example, the First Amendment protects a citizen's ability to engage in dialogue online without government interference except in the narrowest of circumstances. Citizens in other countries often have different expectations about freedom of expression, and governments have different expectations about platform accountability. Unfortunately, some of the laws passed so far do not always strike the appropriate balance between speech and harm, unintentionally pushing platforms to err too much on the side of removing content." (Page 4)

The paper opens with an extended meditation on free expression and acknowledges that private platforms like Facebook are "increasingly" the ones making determinations about what speech is allowable online. But the section above flags that Facebook is most concerned about regulation outside the U.S., while throwing shade at some laws already on the books.

Without explicitly addressing them, the paper is "clearly a response" to two laws, Klonick said: the German Network Enforcement Act (or NetzDG) and the United Kingdom's still-in-progress Online Harms regulations.

NetzDG is aimed at limiting the spread of hate speech, which is illegal in Germany. It was passed by Germany's Bundestag in 2017 — and Facebook has already faced fines for allegedly violating it. The law includes a 24-hour removal requirement for material that breaks German hate speech law and penalties of up to 50 million euros for violations.

However, human rights activists have criticized that law as overbroad and a potential infringement on freedom of expression. Civil liberties advocates have raised similar censorship concerns about the proposed U.K. regulations.

'Systems and procedures' vs. 'performance targets'

"By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies' efforts." (Page 9)

When it comes to regulatory models, Facebook argues in favor of an approach that requires various "systems and procedures." If that sounds familiar, it's because those systems and procedures are already built into Facebook's current operations.

The company also makes a case against establishing "performance targets" for companies, such as fines if companies don't keep harmful content below a certain threshold. That approach could create "perverse incentives," the company argues, and lead to procedures that do more to juice the required metrics than actually stop the spread of harmful content.

For example, the company argues that a 24-hour removal requirement like Germany's could discourage companies from proactively searching for and removing offending content that falls outside that window, since doing so would make their compliance numbers look worse in reporting.

'Harmful content'

"Among types of content most likely to be outlawed, such as content supporting terrorism and content related to the sexual exploitation of children, there is significant variance in laws and enforcement. In some jurisdictions, for instance, images of child sexual exploitation are lawful if they are computer generated. In others, praise of terror groups is permitted, and lists of proscribed terror groups vary widely by country and region. Laws vary even more in areas such as hate speech, harassment, or misinformation. Many governments are grappling with how to approach the spread of misinformation, but few have outlawed it." (Page 17)

One big issue Facebook has come up against is that laws already on the books aren't very specific about what makes content harmful or what kind of remedy companies should deliver. For example, Australia passed a law in the aftermath of the Christchurch shooting that called on social media companies to remove "abhorrent violent material," defined as videos of rapes, murders, and terrorist attacks — but the required timeline for removal, "expeditiously," is open to a lot of interpretation.

Ultimately, rather than specifying what content should be regulated, Facebook argues that governments should weigh a few questions before deciding how to approach such content: whether rules can "be enforced practically, at scale, with limited context about the speaker and content," whether they take into account the nature of the content (private vs. public, permanent vs. ephemeral), and whether they provide "flexibility so that platforms can adapt policies to emerging language trends."

What's next?

"Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications. It should aim to increase interoperability among regulators and regulations. However, governments should not impose their standards onto other countries' citizens through the courts or any other means." (Page 19)

The paper builds on a point CEO Mark Zuckerberg made in a Washington Post op-ed last year, when he argued "we need a more standardized approach" to combating harmful content like hate and terrorist speech online. And while the policy content may not be a revelation, its release makes clear the direction Facebook hopes larger global conversations about content moderation will go: toward an integrated system that respects the global scale of the internet — a conversation that platforms like Facebook will inevitably help drive.

Some in the tech industry seem on board. For example, Twitter Director of Public Policy Strategy Nick Pickles thanked Facebook for the report — naturally, via tweet — calling it "an important contribution to the debate on tech regulation."

However, the European lawmakers Facebook was likely trying to woo seem less enthused.

"It's not for us to adapt to those companies, but for them to adapt to us," said Europe's commissioner for internal markets Thierry Breton, according to a POLITICO report.