
Reddit worries it’s going to be crushed in the fight against Big Tech

Reddit's general counsel says attacks on Section 230 ignore that there are different kinds of moderation.


Benjamin Lee, Reddit's general counsel, gets emotional about the potential loss of Section 230 of the Communications Decency Act. He paused for a long time before answering a question about whether he's frustrated by the state of play around the internet's favorite law, which Joe Biden and President Trump both want to repeal. "I apologize, I'm trying to hold back my full emotional weight of how I feel about this, on a certain level," he told Protocol.

Much of the debate over Section 230 has revolved around the missteps of Facebook, Twitter and Google, which mediate the vast majority of online conversation and regularly draw ire from lawmakers on both sides of the aisle over alleged censorship or the mishandling of viral misinformation. On Wednesday, Mark Zuckerberg, Jack Dorsey and Sundar Pichai will testify before a Republican-led Senate panel about the future of 230 — and whether their actions have proven that it's time to change the law.

But smaller social media networks like Reddit rely on Section 230, too, and they're afraid that their business models will become collateral damage as the government tries to rein in the major platforms. Reddit will be watching from the sidelines during the hearing.

Protocol spoke with Lee about what's missing from the Section 230 debate, what reforming the law could mean for competition and why he's optimistic about where we go from here.

This interview has been edited and condensed for clarity.

What do you think is currently missing from the Section 230 debate?

What's missing is how critical Section 230 is to allowing competition against Big Tech, to encouraging platforms like Reddit to moderate in good faith and ultimately to fulfilling the promise and potential of the open internet. Section 230 was drafted in this really elegant way to protect not just providers of these services, but users of these services as well.

Section 230 reads, "No provider or user of an interactive computer service shall be treated as the publisher or speaker [of any information provided by another information content provider]," so Section 230 protects the decisions of our users, Reddit users, as much as it protects Reddit's own decisions. It protects the decisions of our volunteer moderators, it protects the decisions of our users every time they vote on content, and it protects their everyday decisions to curate content for their communities and protect those communities from unwanted content. That's something we really wanted to emphasize in our FCC comment: how much we work in partnership with our communities of users. We wanted to drive that point home by filing our FCC comment in partnership with one of our communities [r/LGBT].

You filed that comment with the r/LGBT community. What does 230 mean to that group?

Reddit has a unique, layered approach to content moderation that resembles our own democracy. We have site-wide content rules that apply to everyone, much akin to our federal laws. But just as the United States is a union of state governments, Reddit is a network of communities. Each community has its own volunteer moderation team.

Reddit basically allows every member of a community to vote on content. Voting is important to our democracy; it's just as important to the moderation of the content shown to a Reddit community. So we work in partnership with every user on Reddit to protect all of the communities from unwanted content, including protecting communities that are coming together, much like the community highlighted in the FCC comment. These are communities that come together to provide a place, a safe place for authentic conversation. Section 230 basically allows us to do this.

Let's say Section 230 is repealed tomorrow. What would that mean immediately for Reddit?

It would be pretty bad. It's sometimes hard for me to even fully grasp the implications of it because Section 230 was devised at a time when most of the platforms that provided places for people to come together resembled Reddit more than the platforms we see today. Those platforms were put into this dilemma due to existing law that predates 230 — and that law basically rewarded the platforms that did not look for bad content.

If you actually took proactive measures, as we and our communities do, then you were held fully liable for that content. That was the law, and that would become the law again if 230 were repealed.

There's some disagreement among experts about this: For the most part, do you think changing the law would result in platforms working harder to remove horrible stuff from their platforms, or would it make the platforms wary of removing any content?

I think the irony is that they're both correct, in the sense that the law creates a perverse incentive pushing you to the far extremes: Either you allow everything, even the worst types of content on the internet, because that's the best way to avoid liability, or you avoid liability by restricting content so much that people aren't allowed to say anything actually meaningful or authentic.

Critics have pointed out that 230 allows platforms to be exempt from state and local criminal laws, even when they're knowingly facilitating illegal activity. Is there room for reform there?

I think the debate over the intersection of federal and state law around 230 is a complicated one, and I think there's an opportunity there for some sophisticated thinking about how best to alleviate some of that tension. Section 230 was never devised to protect providers who facilitate criminal conduct; that was never the intent of Section 230, and by and large it doesn't do that. In fact, I think a lot of the appeals to modify 230 in that context unfortunately end up having the exact opposite consequences, primarily because most of what I've seen in this context is so focused on targeting limits to 230 that are premised on companies that moderate very differently than we do.

They're focused on these giant, centrally moderated corporations that have an industrialized model of content moderation. So unfortunately, the sorts of targeted limits they're looking at for Section 230 ironically end up benefiting these largest companies by placing significant burden and cost on smaller companies like us.

Wednesday's congressional hearing will revolve around lawmakers' gripes with Facebook, Twitter and Google. Both sides will argue this is proof it's time to reform 230. Is it frustrating for Reddit to potentially have to face legislative repercussions for the actions of Big Tech?

It's frightening as well as sad. It's just fundamentally unfortunate, because it fails to recognize and appreciate that there are many different approaches to moderation, such as Reddit's community-based approach. I apologize, I'm trying to hold back my full emotional weight of how I feel about this, on a certain level. I think that right now, the world really needs different approaches to content moderation. Reddit is one of those approaches; we are trying really hard with a different approach, one that focuses on the community and our communities of users. We need more communities that create belonging.

And these proposals, to a large extent, are designed in a way that has large, unintended consequences for these alternative models, and they have a real possibility of fundamentally destroying these other approaches. A good example in my mind is the PACT Act, which is so focused on putting process burdens on large, centrally moderated platforms that it ignores the implications for different content moderation approaches. It basically assumes, implicitly, that all moderation decisions are being made by the service provider itself. It doesn't even contemplate that important decisions might be made by users, let alone a situation like ours, where over 99% of the content moderation decisions on Reddit are made by users.

Because of the process penalties it imposes on providers, it creates a perverse incentive: Users would be better off reporting to Reddit rather than to their own communities. So even something as well-intentioned as the PACT Act would end up undermining, and in the long run fundamentally destroying, our layered moderation approach. The PACT Act would end up turning us into Facebook.

This hearing will prominently feature Republican lawmakers claiming the major platforms are biased against them. What's your reaction to those anti-conservative bias allegations?

It does frustrate me because I think there's a serious conversation to be had about, for example, algorithmic bias. What can we do better to make what appear to be facially neutral algorithms less biased? That is a reasonable discussion and a reasonable technical discussion. But I feel like it's a nuanced discussion that is lost in this debate right now.

I think there are some hard questions that we need to answer with regards to how best to architect these centrally moderated approaches, but I'm not certain that these debates are the most nuanced way in which to come to a clear direction with regards to how to deal with these sorts of issues.

After facing years of criticism over the proliferation of hate speech, Reddit recently expanded its rules against hateful content in a major way. Hasn't that effort made your model more "centralized"? How does Section 230 play into that?

Section 230 allowed us to do exactly what we're doing right now with regards to hate. Our campaign against hateful content is built around our partnership with our communities. We literally co-wrote our hate policies in consultation with the communities. The tools we built to help the communities, and to help us hunt down hateful content, were all facilitated by Section 230.

Are there any proposals you've seen that you would get behind?

We're always open to approaches that require providers to be thoughtful about issues such as transparency. We're also open to evaluating different ways of approaching 230 that are more mindful of community-based approaches to content moderation and that don't end up undermining such alternatives to today's dominant social networks.

Personally, I've seen bits and pieces of this in a variety of different proposals. For example, the transparency pieces I've seen — there are aspects of those that I think are quite realistic and reasonable.

What is Reddit's position on the EARN IT Act?

I mean, child sexual abuse material is already illegal. A provider that mishandles CSAM [child sexual abuse material] is not protected under Section 230. Reddit takes CSAM very, very seriously. And frankly, I think most providers do as well.

I think that if we as a society want to take the issue of CSAM more seriously, there are a variety of other proposals that more realistically address the issue of CSAM [and] provide the enforcement resources. We've highlighted these different materials. EARN IT doesn't feel like a serious attempt to actually address CSAM. It feels like there are a lot of other issues built into EARN IT beyond addressing child safety, and it raises a lot of the issues we've been talking about with regards to Section 230.

Reddit has a reputation for hosting the kind of hate speech and bigoted rhetoric that Democrats have said they're concerned about. If it's not Section 230, do you think any legislative interventions are needed to have the government step in on this?

That's a hard question. I feel that Reddit has struggled to balance fighting hate online against freedom of expression, just as much as our whole country has struggled with that balance. I feel like especially this year, we've tried to take aggressive steps in that area that move the ball forward with regards to how we want to see our communities evolve.

As to what Congress can do, I feel like that's a harder question. Congress always has to balance what it's capable of doing with regards to hate against the current interpretation of freedom of expression under the First Amendment, so the balance there literally goes to the relationship between Congress, the Supreme Court and the rule of law. It's a very, very difficult topic.

What's your response to those who say Reddit's decentralized content moderation model is what has enabled hate to spread in the past — that it put so much in the hands of community moderators who set their own guidelines?

We've learned a lot about how to empower our communities in a way to allow them to grow in a positive direction. But we've also learned a lot about how communities can be weaponized against each other and can be used to undermine the sort of belonging that is fundamental to our mission. And through that, we've — with the support of 230 behind us — been able to empower the right sorts of behaviors within these communities. Fundamentally, it's because 230 protects our ability to try these different approaches.

Realistically, what do you predict will happen to 230 over the next four years — under either administration?

I know some academics have been somewhat vocal about their pessimism about the fate of 230. I am more hopeful. Section 230 is unique in the entire world. What would be super unfortunate is if we end up throwing out 230 in an effort to punish the largest internet players for their perceived or real abuse of their dominance.

Unraveling 230 would basically further ensure that dominance, while undermining the ability of smaller companies like Reddit to challenge that dominance with alternative models of innovation.

You say repealing Section 230 would harm competition because of the cost, right? The major tech companies could afford the barrage of lawsuits it might unleash, whereas smaller companies, maybe like Reddit, could not?

No question, part of it is the significant burden and cost placed on smaller competitors. A startup that's trying to innovate in this space won't survive a wave of lawsuits. So that's definitely part of it. But there's also this other part, which is the notion that when [lawmakers] focus on these centrally moderated models, they [unintentionally] create incentives that make other platforms start to resemble these centrally moderated models. And they end up cementing that as the approach, which I don't think is a consequence that's intended or that, in the long term, we as a society want.

It's funny — I do feel that these discussions are far more nuanced once you're one-on-one with any of the lawmakers and policymakers. So my hope is that eventually this discussion becomes more nuanced in the public sphere than it is right now.
