Benjamin Lee, Reddit's general counsel, gets emotional about the potential loss of Section 230 of the Communications Decency Act. He paused for a long time before answering a question about whether he's frustrated by the state of play around the internet's favorite law, which both President Trump and Joe Biden want to repeal. "I apologize, I'm trying to hold back my full emotional weight of how I feel about this, on a certain level," he told Protocol.
Much of the conversation around Section 230 has revolved around the missteps of Facebook, Twitter and Google, which mediate the vast majority of online conversation and regularly draw ire from lawmakers on both sides of the aisle over alleged censorship or mishandling viral misinformation. On Wednesday, Mark Zuckerberg, Jack Dorsey and Sundar Pichai will testify before a Republican-led Senate panel about the future of 230 — and whether their actions have proven that it's time to change the law.
But smaller social media networks like Reddit rely on Section 230, too, and they're afraid that their business models will become collateral damage as the government tries to rein in the major platforms. Reddit will be watching from the sidelines during the hearing.
Protocol spoke with Lee about what's missing from the Section 230 debate, what reforming the law could mean for competition and why he's optimistic about where we go from here.
This interview has been edited and condensed for clarity.
What do you think is currently missing from the Section 230 debate?
What's missing currently is how important and critical Section 230 is to allowing competition against Big Tech, and encouraging platforms like Reddit to moderate in good faith and ultimately fulfill the promise and potential of the open internet. Section 230 was drafted in this really elegant way to protect not just providers of these services, but users of these services as well.
Section 230 reads, "No provider or user of an interactive computer service shall be treated as the publisher or speaker," so Section 230 protects the decisions of our users, Reddit users, as much as it protects Reddit's own decisions. It protects the decisions of our volunteer moderators, it protects the decisions of our users every time they vote on content, it protects their everyday decisions to curate content for our communities and shield their communities from unwanted content. That's something we really wanted to emphasize in the context of our FCC comment: how much we work in partnership with our communities of users. We wanted to drive that point home by filing our FCC comment similarly in partnership with one of our communities [r/LGBT].
You filed that comment with the r/LGBT community. What does 230 mean to that group?
Reddit has a unique, layered approach to content moderation that resembles our own democracy. We have site-wide content rules that apply to everyone, much akin to our federal laws. But just as we are a United States, a group of state governments, Reddit is a network of communities. Each community has its own volunteer moderation team.
Reddit basically allows every member of a community to vote on content. Voting is important to our democracy; it's just as important to moderation of the content shown to a Reddit community. So we work in partnership with every user on Reddit to protect all of the communities from unwanted content, including protecting communities who are coming together, much like the community highlighted in the FCC comments. These are communities that come together to provide a place, a safe place for authentic conversation. Section 230 basically allows us to do this.
Let's say Section 230 is repealed tomorrow. What would that mean immediately for Reddit?
It would be pretty bad. It's sometimes hard for me to even fully grasp the implications of it because Section 230 was devised at a time when most of the platforms that provided places for people to come together resembled Reddit more than the platforms we see today. Those platforms were put into this dilemma due to existing law that predates 230 — and that law basically rewarded the platforms that did not look for bad content.
If you actually took proactive measures like we did, and if your community took proactive measures like they do, then you were held fully liable for that content. That was the law, and that would become the law if 230 were repealed.
There's some disagreement among experts about this: For the most part, do you think changing the law would result in platforms working harder to remove horrible stuff from their platforms, or would it make the platforms wary of removing any content?
I think the irony of it is they're both correct in the sense the law creates a perverse incentive that pushes you to the far extremes, where on the one hand you either allow everything, you allow the worst type of content on the internet and you have to because that's the best way to avoid liability, or you avoid liability by restricting the amount of content so much that people aren't allowed to say anything actually meaningful or authentic.
Critics have pointed out that 230 allows platforms to be exempt from state and local criminal laws, even when they're knowingly facilitating illegal activity. Is there room for reform there?
I think the debate regarding the intersection of federal and state law regarding 230 is a complicated one. And I think that there's an opportunity there for some sophisticated thinking regarding how best to alleviate some of the tension there. Section 230 was never devised to protect providers from facilitating criminal conduct; that was never the intent of Section 230, and by and large it doesn't do that. In fact, I think that a lot of the appeals to modify 230 in that context unfortunately end up having the exact opposite consequences, primarily because most of at least what I've seen in this context is so focused on targeting limits to 230 that are premised on companies that moderate very differently than how we moderate.
They're focused on these giant, centrally moderated corporations that have an industrialized model of content moderation. So unfortunately, the sorts of targeted limits they're looking at for Section 230 ironically end up benefiting these largest companies by placing significant burden and cost on smaller companies like us.
Wednesday's congressional hearing will revolve around lawmakers' gripes with Facebook, Twitter and Google. Both sides will argue this is proof it's time to reform 230. Is it frustrating for Reddit to potentially have to face legislative repercussions for the actions of Big Tech?
It's frightening as well as sad. It's just fundamentally unfortunate, as it fails to recognize and appreciate that there are many different approaches to moderation, such as Reddit's approach to community moderation. I apologize, I'm trying to hold back my full emotional weight of how I feel about this, on a certain level. I think that right now, the world really needs a different approach to content moderation. Reddit is one of those approaches; we are trying hard to pursue a different approach, one that focuses on the community and our communities of users. We need more communities that create belonging.
And these proposals, to a large extent, are really designed in a way that has large, unintended consequences for these alternative models, and they have a real possibility of fundamentally destroying these other approaches. A good example in my mind is the PACT Act, which is so focused on putting process burdens on centrally moderated, large platforms that it ignores the implications for different content moderation approaches. It basically implicitly assumes all moderation decisions are being made by the service provider itself. It doesn't even contemplate that important decisions might be made by users. In our situation, over 99% of the content moderation decisions on Reddit are made by users.
So because of the process penalties it imposes on providers, it creates a perverse incentive: users are better off reporting to Reddit rather than to their own communities. So even something that seems well-intentioned like the PACT Act would end up undermining and, in the long run, fundamentally destroying our layered moderation approach. The PACT Act would end up turning us into Facebook.
This hearing will prominently feature Republican lawmakers claiming the major platforms are biased against them. What's your reaction to those anti-conservative bias allegations?
It does frustrate me because I think there's a serious conversation to be had about, for example, algorithmic bias. What can we do better to make what appear to be facially neutral algorithms less biased? That is a reasonable discussion and a reasonable technical discussion. But I feel like it's a nuanced discussion that is lost in this debate right now.
I think there are some hard questions that we need to answer with regards to how best to architect these centrally moderated approaches, but I'm not certain that these debates are the most nuanced way in which to come to a clear direction with regards to how to deal with these sorts of issues.
After facing years of criticism over the proliferation of hate speech, Reddit recently made a major expansion of its rules against hateful content. Hasn't that effort made your model more "centralized"? How does Section 230 play into that?
Section 230 allowed us to do exactly what we're doing right now with regards to hate. Our campaign against hateful content is all built around our partnership with our communities. We literally co-wrote our hate policies in consultation with the communities. The tools that we built to help the community and help us hunt down hateful content were all facilitated through Section 230.
Are there any proposals you've seen that you would get behind?
We're always open to approaches that require providers to be thoughtful about issues such as transparency. We're also open to evaluating different ways of approaching 230 that are more mindful of community-based approaches to content moderation and that do not end up undermining such alternatives to the dominant social networks today.
Personally I've seen bits and pieces of this in a variety of the different proposals. For example, the transparency pieces I've seen — there are aspects of those that I think are quite realistic and reasonable.
What is Reddit's position on the EARN IT Act?
I mean, child sexual abuse material is already illegal. A provider that mishandles CSAM [child sexual abuse material] is not protected under Section 230. Reddit takes CSAM very, very seriously. And frankly, I think most providers do as well.
I think that if we as a society want to take the issue of CSAM more seriously, there are a variety of other proposals that more realistically address the issue of CSAM [and] provide the enforcement resources. We've highlighted these different materials. EARN IT doesn't feel like a serious attempt to actually address CSAM. It feels like there are a lot of other issues built into EARN IT, other than addressing child safety, and it raises a lot of the issues we've been talking about with regards to Section 230.
Reddit has a reputation of hosting hate speech and bigoted rhetoric that Democrats have said they're concerned about. If it's not Section 230, do you think there are any legislative interventions needed to have the government step in on this?
That's a hard question. I feel that Reddit has struggled with the balance between fighting hate online and balancing it against freedom of expression, just as much as our whole country has tried to struggle with that balance. I feel like especially this year, we've tried to take aggressive steps in that area that move the ball forward with regards to how we want to see our communities evolve.
As to what Congress can do, I feel like that's a harder question. Congress has to always balance what they're capable of doing with regards to hate with the current interpretation of freedom of expression under the First Amendment, so their balance there is one that kind of goes to the balance literally between Congress and the Supreme Court and the rule of law. It's a very, very difficult topic.
What's your response to those who say Reddit's decentralized content moderation model is what has enabled hate to spread in the past — that it put so much in the hands of community moderators who set their own guidelines?
We've learned a lot about how to empower our communities in a way to allow them to grow in a positive direction. But we've also learned a lot about how communities can be weaponized against each other and can be used to undermine the sort of belonging that is fundamental to our mission. And through that, we've — with the support of 230 behind us — been able to empower the right sorts of behaviors within these communities. Fundamentally, it's because 230 protects our ability to try these different approaches.
Realistically, what do you predict will happen to 230 over the next four years — under either administration?
I know some academics have been somewhat vocal about their pessimism about the fate of 230. I am more hopeful. Section 230 is unique in the entire world. What would be super unfortunate is if we end up throwing out 230 in an effort to punish the largest internet players for their perceived or real abuse of their dominance.
Unraveling 230 would basically further ensure that dominance, while undermining the ability of smaller companies like Reddit to challenge that dominance with alternative models of innovation.
You say repealing Section 230 would harm competition because of the cost, right? The major tech companies could afford the barrage of lawsuits it might unleash whereas smaller companies, maybe like Reddit, could not?
No question that part of it is the significant burden and cost placed on smaller competitors. A startup that's trying to innovate in this space won't survive a swath of lawsuits. So that's definitely part of it. But there's also this other part, which is the notion that when [lawmakers] focus on these centrally moderated models, they [unintentionally] create incentives that make other platforms start to resemble those centrally moderated models. And they end up cementing that as the approach, which I don't think is a consequence that is either intended or that, in the long term, we as a society want.
It's funny — I do feel that these discussions are far more nuanced once you're one-on-one with any of the lawmakers and policymakers. So my hope is that eventually this discussion becomes more nuanced in the public sphere than it currently is.
Emily Birnbaum (@birnbaum_e) is a tech policy reporter with Protocol. Her coverage focuses on the U.S. government's attempts to regulate one of the most powerful industries in the world, with a focus on antitrust, privacy and politics. Previously, she worked as a tech policy reporter with The Hill after spending several months as a breaking news reporter. She is a Bethesda, Maryland native and proud Kenyon College alumna.