The Justice Department wants to chip away at Section 230. Here's what you need to know.

The DOJ's proposal goes after "bad Samaritans," encryption and the moderation of "otherwise objectionable" content.

William Barr

The Department of Justice is calling on Congress to rewrite Section 230 of the Communications Decency Act.

Photo: Alex Wong/Getty Images

The internet's favorite law is under attack.

The latest blow to Section 230 of the Communications Decency Act came Wednesday, when the Department of Justice issued a list of proposals asking Congress to amend the law in ways that would make it easier to hold tech companies like Facebook and Twitter legally liable for harmful content that appears on their platforms.

The department's final proposal to Congress comes on the heels of a presidential executive order last month and a newly introduced bill from Missouri Sen. Josh Hawley, both of which seek to chip away at companies' Section 230 protection in cases where those companies are accused of political bias.

Unlike those measures, which have been roundly criticized even by Section 230's critics, the DOJ's proposal focuses more squarely on stripping immunity from "bad Samaritans." It would also punish companies whose encryption leaves them, in essence, blind to bad behavior on their services.

What does that mean? How would it affect the majority of tech companies? And what will happen next? Here's what you need to know:

What is a 'bad Samaritan'?

Some parts of the DOJ's proposal would affect only a small subset of tech companies, while others would have more sweeping ramifications.

The proposal first seeks to address the problem of companies claiming Section 230 immunity even as they actively facilitate or solicit illegal content. That includes, for instance, websites that exist for the sole purpose of hosting revenge porn or illegal gun sales. This, the DOJ argues, was never the intention of Section 230, which was written to enable "good Samaritans" to block bad behavior while protecting them when they miss things or when they take down content they shouldn't have.

The bad Samaritan exception has been promoted by some of Section 230's most prominent reformers. In a 2017 paper, Section 230 scholar and Boston University professor Danielle Citron and Benjamin Wittes, a senior fellow at the Brookings Institution, argued that "bad Samaritans" should be denied immunity and that companies should have to demonstrate they used a "reasonable standard of care" in removing violative content.

The DOJ's proposal would strip immunity from these bad Samaritans who intentionally seek out illicit content. That, of course, applies to a smaller subset of bad actors.

But the proposal would also create a carveout from immunity for instances in which companies fail to take reasonable steps against particularly egregious content, like child sexual abuse material, terrorist content and cyberstalking, all of which take place on mainstream platforms like Facebook.

It would also strip immunity from companies in specific cases where they had "actual knowledge or notice" of content that violated federal law and failed to remove it. This, too, could be a threat to any platform that hosts third-party content but hasn't taken action on every post that gets flagged as illicit. If enacted, this would likely lead tech platforms to take down far more content once it's been reported.

What else would the proposal do?

Two other big changes that would affect lots of companies have to do with narrowing the scope of content that tech platforms can take down without legal repercussions and increasing transparency around their decisions.

First, the DOJ is proposing that Congress define what it means to moderate content in "good faith." If the DOJ gets its way, that definition would require companies to moderate content "in accordance with plain and particular terms of service and accompanied by a reasonable explanation." This line item aims to address concerns that tech platforms are enforcing their terms unevenly and without much visibility into how the decisions get made.

Second, the DOJ also wants Congress to rewrite the part of Section 230 that enables tech companies to block "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable" content. The DOJ would cut "otherwise objectionable" from that line and replace it with "unlawful" and "promotes terrorism." This subtle change would drastically limit the range of content that tech companies could filter without opening themselves up to liability. For instance, it could make it harder for, say, Facebook to defend its decision to take down misinformation about voting and elections, because it doesn't neatly fit into one of those other categories.

What about encryption?

The DOJ has signaled its disdain for encryption since long before current Attorney General William Barr came to office. It's fought high-profile battles against Apple in hopes of gaining access to encrypted devices used in terrorist plots. Now, the DOJ is asking Congress to strip companies of Section 230 immunity if they "purposefully blind themselves and law enforcement" to illicit material. This would mean that in order to have immunity, companies would have to ensure that law enforcement has access to potential evidence in a "comprehensible, readable and usable format." Such a requirement would throw a wrench in Facebook's plans to encrypt messaging across its entire family of apps.

Of note: When the DOJ broke out its "key takeaways" from the 28-page proposal on its website, it left this part out completely.

Who supports this?

The DOJ's proposal received widespread praise from Republicans, who have advocated for changes to Section 230 for years. Many Republicans, including Sens. Josh Hawley and Marsha Blackburn, have called for these changes due to what they perceive to be the suppression of conservative voices on tech platforms — despite the fact that conservatives often have the top-performing posts on sites like Facebook.

"These changes will modernize oversight of the Internet economy and hold giants like Google, Facebook, and Twitter accountable when they overstep as the online speech police," Blackburn said in a statement. "No longer will we let Big Tech hide behind these liability protections as a pretense to bully competitors or to suppress free speech."

The proposal also garnered support from online safety advocates, including Gretchen Peters, executive director of the Center on Illicit Network and Organized Crime. "I really believe there's a genuine effort to understand and respond to a public safety and consumer safety concern," Peters said. She added, however, that the parts of the proposal that deal with free speech, not criminal activity, are likely to "collapse under their own weight."

Who's against it?

The tech industry was quick to oppose the DOJ's plan, particularly the fact that it was so closely timed with Hawley's Senate bill.

"This is a coordinated attack by the administration against tech businesses to sidestep the First Amendment," Carl Szabo, vice president and general counsel at the tech advocacy group NetChoice, said in a statement. "The administration is weaponizing the Department of Justice and Congress to control online speech, not stop bad actors."

In a call with reporters Wednesday, following a Wall Street Journal report that suggested the DOJ's recommendations were imminent, Facebook's Vice President of Global Affairs Nick Clegg said, "Changing significantly or eliminating the balance of responsibility and provisions about liability in Section 230 would, in our view, in the end mean less speech of all kinds appearing online."

Eric Goldman, a Section 230 expert and professor at Santa Clara University School of Law, said he objects not only to the substance of the proposal but also to the process behind it. Goldman pointed out that while Section 230 bars the average user from successfully suing a tech platform, the DOJ already has the ability to bring enforcement actions for criminal activity that takes place online. "They're the one entity in the world that is categorically free to pursue cases without encountering Section 230," Goldman said.

Goldman said the sheer number of carveouts contained in the proposal makes "Swiss cheese" of Section 230. "It would ultimately really eliminate the immunity altogether," he said.

What happens next?

Reforming Section 230 is Congress' job; for the DOJ's recommendations to proceed, a lawmaker will have to introduce a bill that includes them.

While there's significant bipartisan interest in modifying the controversial statute in some way, it's unlikely that the reforms sanctioned by the DOJ could make it through the Democratic House and Republican Senate. Most Section 230 proposals that have been introduced in Congress largely fall along party lines. That means the new legislation introduced by Hawley and four other Republican senators on Wednesday is unlikely to move this year, either.

But the DOJ's proposal represents a new front in the battle over Section 230. As Goldman points out, it is the DOJ's job to "tell us where it can't enforce the law properly." And it's Congress' job to do something about it.

Additional reporting by Emily Birnbaum.