Despite the current debate over harmful content online and Section 230 of the Communications Decency Act, the truth is that Section 230 makes our internet a better place. Section 230 is often blamed for all the bad content and illegal activity on the internet, but under the law, any activity that is criminal offline is criminal online. In fact, Section 230 shields no criminal from the enforcement of state, local and federal laws, whether the crime is committed on or off the internet.
Take the horrific example of Backpage.com, an online platform that enabled sex trafficking. In 2018, the federal government seized the website, shut it down and indicted its owners. The federal government swooped in and enforced federal criminal law. Section 230 was irrelevant in that case because the law provides no protection for platforms that contribute to criminal wrongdoing. Nor does the law offer any protection for child exploitation or copyright violations.
Similarly, Section 230 offered no protection to Silk Road, an anything-goes online marketplace where users could buy guns and drugs and even contract for murder. The government shut the website down and prosecuted its owner because, again, Section 230 does not shield platforms from federal criminal law.
These stories made headlines. Yet critics today incorrectly claim that Section 230 protects bad online platforms from prosecution for major crimes. Platforms that host horrendous content like child pornography or sex-trafficking ads can and should be dealt with under the law. When law enforcement fails to adequately police this content, the discussion should be about how to ensure that it does.
In a recent piece in Protocol, James Steyer and Bruce Reed claimed that Section 230 enables online harm. They argued that Section 230 protects bad platforms when they host content that exploits children. Thankfully, Steyer and Reed are entirely incorrect on this point. Section 230 offers no protection for such exploitation and, in fact, makes clear to judges that they should not grant immunity in such cases.
Steyer and Reed do point to some kinds of content that platforms could not be held liable for hosting under Section 230, but we should not conflate content that is merely inappropriate for children with content that exploits them, like child pornography. If online platforms carry content that some consider inappropriate for children, then a discussion about how media of all types can do more to protect kids is worth having. But upending the internet as we know it by eliminating Section 230, as Steyer and Reed suggest, is not the nuanced approach such an important discussion deserves.
Indeed, it is because of Section 230 that online platforms can moderate and make themselves more appropriate for children. From YouTube Kids to mandatory tagging and screening of child-directed content, YouTube refines its content moderation guidelines every day, working to keep children safe online.
Without Section 230, YouTube would have no shield behind which to build these family-friendly practices. Section 230 spares platforms like YouTube from the "moderator's dilemma" that courts created before its passage: either refuse to moderate content entirely, in order to keep the conduit-style liability protection courts had granted hands-off distributors, or stop hosting user-created content altogether.
Groups like Common Sense Media, founded by Steyer, also rely on Section 230 to protect the public. Common Sense gives parents useful resources for judging whether movies and shows are appropriate for their kids, and it depends on parents and kids adding their own reviews and ratings. Without Section 230, it would risk liability for any of those reviews found to be defamatory.
The world is a better place when Common Sense Media can host reviews that empower parents, and that world is best realized when Section 230 remains the law of the land.
Steyer and Reed also mistakenly justify their drastic changes by citing the only amendment to Section 230, a 2018 law known as FOSTA, the Fight Online Sex Trafficking Act. FOSTA's supporters have offered little evidence of its efficacy. Their main argument, that the law enabled the FBI to take down Backpage, is false: The FBI took down the site before FOSTA became law. FOSTA's author has also claimed that the law "eliminated 90 percent of sex-trafficking ads."
But these arguments have been decisively debunked by The Washington Post, and the law's track record is deeply troubling. Many women's advocates warn that FOSTA has put women in greater danger, and news reports from San Francisco show that after its passage, law enforcement saw a surge in street-based sex work and sex trafficking.
FOSTA shows that blaming Section 230 for law enforcement's failure to stop criminals is not only a factual error: It also risks harming victims and decimating the widespread benefits of free speech online.
Since the inception of the internet, and even before the enactment of Section 230, platforms that took an "anything goes" approach to user-generated content have been immune from liability for what their users post. Bookstores, for example, are not legally liable for law-breaking content in the books they sell; the author is. The same has been true for internet platforms.
But without Section 230, as was the case before its passage, a platform that engages in the kind of content moderation needed to protect children online loses that immunity and suddenly becomes liable for every user's post. That creates a clear disincentive for sites to moderate content at all.
When Congress enacted Section 230, it empowered platforms to remove "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" content without becoming liable for everything on their sites. Just as a Good Samaritan is not liable for harm caused while trying to save someone on the street, platforms are not liable for harm caused when they seek to clean up their corners of the internet.
In fact, the only websites with little to lose from a repeal of Section 230 are the likes of 8chan and other content cesspools of the internet that decline to moderate at all. Websites that actually focus on family-friendly content, like YouTube, KidsReads or Common Sense Media, would likely not exist as they do today. If Section 230 were repealed, what parents fear most would come to fruition: more hate speech, violence, conspiracy videos and other harmful content online.
Steyer and Reed are right to draw attention to the corners of the internet where children are at risk, and we must find ways to ensure law enforcement has the tools it needs to tackle child exploitation. But by targeting Section 230 rather than increased funding for enforcement against child exploitation, they are advocating a policy that would badly underserve their goal of protecting children.
Section 230 is what has helped the internet become safer for children as it has grown and matured. As we search for ways to protect children from harm, we should all examine the true impacts of our policy prescriptions before presenting them as the golden ticket to a family-friendly future.
- Why Section 230 hurts kids, and what to do about it (Protocol)
- Why Joe Biden and Donald Trump are both wrong about Section 230
- Section 230 under siege: A comprehensive guide (Protocol)
- The insurrection scrambled Section 230 plans (Protocol)
- Thanks to FOSTA, a lawsuit against Twitter can proceed (Protocol)