Politics

Here’s what’s on the Department of Justice’s Section 230 wish list

The proposal submitted to Congress would limit Section 230 protections and create new carve-outs for "bad Samaritans."

Attorney General William Barr and President Donald Trump


Photo: Drew Angerer/Getty Images

The Department of Justice fired a warning shot against Section 230 on Wednesday, sending Congress proposed legislation that would limit platforms' protections under the law and create a carve-out for so-called "bad Samaritans" who purposely promote, solicit or facilitate criminal activity.

Both the White House and Attorney General William Barr have argued that Section 230 is overly broad, protecting predators on the internet and enabling alleged censorship of conservatives. Some of their Republican colleagues in Congress have mounted much the same argument, introducing legislation that would curb Section 230 protections. But, with this proposal, the DOJ is asking them to do more.

The proposal was almost immediately met with confusion by even some Section 230 experts. "I think this proposal is a mess, and I don't know if that's on purpose," said Blake Reid, a professor of tech policy at Colorado Law. "That is not a great way to run the railroad for a statute that governs a huge swath of activity that happens online."

"Why are they doing this now when we know that Congress doesn't have the time to make any progress on this?" asked Eric Goldman, a professor at Santa Clara University School of Law. "There's only one reason why they wanted to bring this out in October before an election."

Barring some act by Congress — which is in the final days of the session and currently consumed with a Supreme Court vacancy, an election and a pandemic — the proposal is mostly a rough sketch of what the DOJ wants out of Section 230 reform. Here are some of its biggest asks.

Strict adherence to terms of service

Conservatives have been claiming for years, without much evidence, that tech platforms enforce their terms of service unevenly based on the political leanings of their users. The DOJ's proposal attempts to get at that supposed problem by specifying that platforms can't be held liable for filtering content that violates their terms of service. That, Reid argues, is a "bank shot" way of saying that when platforms don't precisely follow their terms of service, they can be liable. That could have major implications for how tech platforms choose to moderate, Reid says.

"For some platforms, they'll look at this and say, 'We're going to moderate as little as possible because we don't want to inadvertently moderate something that's inconsistent with our terms of service,'" Reid said. "Another effect could be platforms draw their terms of service incredibly broadly, like, 'Under our terms of service, we can take down any of your content for any reason. Tough cookies.'"

A new definition of 'good faith'

Another way of getting at that same issue is creating a new definition for "good faith." Section 230 currently protects content moderation decisions that are made "in good faith" but doesn't define that term. The DOJ would create a definition that gives platforms precious little room for error. It would require them to publish their terms of service, restrict content only as it relates to their terms of service, give users "timely notice" about their content being taken down and, most importantly, not screw up. The new definition states that if platforms restrict one piece of content based on their terms of service but don't restrict another similar piece of content, they won't be acting "in good faith."

"It tries to cram into the term 'good faith' all these restrictions in how you set your terms of service up," Reid said.

Limits on content filtering

While the first part of Section 230 protects platforms from liability for what other people publish, the second part is the one that's most often overlooked. That's the part that protects platforms when they take content down. As it's currently written, Section 230 says platforms won't be held liable for good faith efforts to restrict content that is "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable." The inclusion of "otherwise objectionable" is what has given platforms a lot of flexibility in designing their content moderation policies. But the DOJ's proposal would strike "otherwise objectionable" from the law and add some new categories, including content that promotes terrorism, violent extremism or self-harm.

Such a change wouldn't automatically make it illegal for platforms to take down otherwise objectionable content. But it would mean that doing so would open those platforms up to potential lawsuits that Section 230 currently shields them from. "Current good-faith moderation efforts that remove things like misinformation, platform manipulation and cyberbullying would all result in lawsuits under this proposal," Elizabeth Banker, deputy general counsel to the lobbying group the Internet Association, wrote in a statement.

Carve-outs for 'bad Samaritans'

Section 230 was written to protect "good Samaritan blocking and screening of offensive material." It was designed, in other words, to protect platforms that, despite putting in a genuine good faith effort to keep bad stuff off the internet, sometimes mess up. But over the years, it's also protected plenty of "bad Samaritans," like revenge porn operators or online auctions that deal in illicit goods. Section 230 scholars, including Danielle Citron, a professor of law at Boston University, have been arguing for years that Section 230 ought to be amended to deal with these so-called bad Samaritans.

The DOJ's proposal would create a carve-out in Section 230 for platforms that purposely promote, solicit or facilitate material that they know or believe might violate federal criminal law. The DOJ can already go after platforms for actively participating in criminal activity (that's how the FBI ended up taking down Backpage.com), but this proposal would broaden that power by giving victims the ability to sue as well. That aspect of the proposal was celebrated by Section 230 reformers like Carrie Goldberg, a lawyer who has represented victims of online harassment and cyberstalking in cases against companies like Grindr. "The bad Samaritan carve-out is excellent, and I'm thrilled the DOJ listened to us victim advocates," Goldberg said.

It seems wholly unlikely that Congress will move forward with these recommendations before the end of the session. But if the DOJ remains under Republican control, these issues aren't going to go away anytime soon.

Fintech

Judge Zia Faruqui is trying to teach you crypto, one ‘SNL’ reference at a time

His decisions on major cryptocurrency cases have quoted "The Big Lebowski," "SNL," and "Dr. Strangelove." That’s because he wants you — yes, you — to read them.

The ways Zia Faruqui (right) has weighed in on cases that have come before him can give lawyers clues as to what legal frameworks will pass muster.

Photo: Carolyn Van Houten/The Washington Post via Getty Images

“Cryptocurrency and related software analytics tools are ‘The wave of the future, Dude. One hundred percent electronic.’”

That’s not a quote from "The Big Lebowski" — at least, not directly. It’s a quote from a Washington, D.C., district court memorandum opinion on the role cryptocurrency analytics tools can play in government investigations. The author is Magistrate Judge Zia Faruqui.

Veronica Irwin

Veronica Irwin (@vronirwin) is a San Francisco-based reporter at Protocol covering fintech. Previously she was at the San Francisco Examiner, covering tech from a hyper-local angle. Before that, her byline was featured in SF Weekly, The Nation, Techworker, Ms. Magazine and The Frisc.

Enterprise

AWS CEO: The cloud isn’t just about technology

As AWS preps for its annual re:Invent conference, Adam Selipsky talks product strategy, support for hybrid environments, and the value of the cloud in uncertain economic times.

Photo: Noah Berger/Getty Images for Amazon Web Services

AWS is gearing up for re:Invent, its annual cloud computing conference where announcements this year are expected to focus on its end-to-end data strategy and delivering new industry-specific services.

It will be the second re:Invent with CEO Adam Selipsky at the helm of the industry's largest cloud provider since he returned to AWS last year from data visualization company Tableau Software.

Donna Goodison

Donna Goodison (@dgoodison) is Protocol's senior reporter focusing on enterprise infrastructure technology, from the 'Big 3' cloud computing providers to data centers. She previously covered the public cloud at CRN after 15 years as a business reporter for the Boston Herald. Based in Massachusetts, she also has worked as a Boston Globe freelancer, business reporter at the Boston Business Journal and real estate reporter at Banker & Tradesman after toiling at weekly newspapers.

Image: Protocol

We launched Protocol in February 2020 to cover the evolving power center of tech. It is with deep sadness that just under three years later, we are winding down the publication.

As of today, we will not publish any more stories. All of our newsletters, apart from our flagship, Source Code, will no longer be sent. Source Code will be published and sent for the next few weeks, but it will also close down in December.

Bennett Richardson

Bennett Richardson (@bennettrich) is the president of Protocol. Prior to joining Protocol in 2019, Bennett was executive director of global strategic partnerships at POLITICO, where he led strategic growth efforts including POLITICO's European expansion in Brussels and POLITICO's creative agency POLITICO Focus during his six years with the company. Prior to POLITICO, Bennett was co-founder and CMO of Hinge, the mobile dating company recently acquired by Match Group. Bennett began his career in digital and social brand marketing, working with major brands across tech, energy, and health care at leading marketing and communications agencies including Edelman and GMMB. Bennett is originally from Portland, Maine, and received his bachelor's degree from Colgate University.

Enterprise

Why large enterprises struggle to find suitable platforms for MLops

As companies expand their use of AI beyond running just a few machine learning models, and as larger enterprises go from deploying hundreds of models to thousands and even millions of models, ML practitioners say that they have yet to find what they need from prepackaged MLops systems.


Photo: artpartner-images via Getty Images

On any given day, Lily AI runs hundreds of machine learning models using computer vision and natural language processing that are customized for its retail and ecommerce clients to make website product recommendations, forecast demand, and plan merchandising. But this spring when the company was in the market for a machine learning operations platform to manage its expanding model roster, it wasn’t easy to find a suitable off-the-shelf system that could handle such a large number of models in deployment while also meeting other criteria.

Some MLops platforms are not well-suited to maintaining more than even 10 machine learning models when it comes to keeping track of data, navigating their user interfaces, or reporting capabilities, Matthew Nokleby, machine learning manager for Lily AI's product intelligence team, told Protocol earlier this year. "The duct tape starts to show," he said.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
