Mark Zuckerberg, are your views on freedom of expression hypocritical? Sundar Pichai, are you ready for collective responsibility for online harm? Jack Dorsey, should revenge porn sites really have the same legal protections as Twitter?
Those are the kinds of hard questions that top experts on Section 230 of the Communications Decency Act think could stop the CEOs of Facebook, Google and Twitter in their tracks on Wednesday, when they're due to testify before the U.S. Senate Committee on Commerce, Science and Transportation about how the law has enabled "bad behavior" by Big Tech.
In the past, when Zuckerberg, Pichai and Dorsey have appeared before Congress, they've faced a deluge of questions from lawmakers about how their companies favor or suppress various viewpoints, often built on cherry-picked examples of controversial content that was either taken down or left online. With Election Day just one week away and tensions over tech platforms' treatment of political discourse at an all-time high, Wednesday's hearing will surely feature plenty of that.
But this is the first Congressional hearing featuring these CEOs to focus on Section 230, and, if lawmakers are willing to look beyond partisan quarrels, it could give them the opportunity to deepen their understanding of how the law really ought to be updated. Protocol asked some of the top experts on Section 230 the toughest questions they'd ask Zuckerberg, Pichai and Dorsey. Here's what they had to say:
There's bipartisan support for the PACT Act, which would mean that you couldn't use Section 230 as a defense if you leave content up after a judge orders you to remove it. Do you support this reform?
— Matt Perault, former Facebook director of public policy and current director of Duke University's Center on Science and Technology Policy
This bipartisan bill, sponsored by Sens. Brian Schatz and John Thune, would make relatively light-touch changes to Section 230, including requiring platforms to explain their moderation policies, issue quarterly reports on moderation decisions and take down content deemed illegal in court within 24 hours. Facebook, Google and Twitter already comply with many of the provisions in the bill, but the Internet Association, which represents all three companies, has expressed concerns about it. Pinning these powerful CEOs down on their personal feelings about the legislation would be a meaningful contribution to the debate.
Let's say Congress repeals Section 230 tomorrow. How does that change your content moderation practices?
—Jeff Kosseff, assistant professor of cybersecurity law at the United States Naval Academy's Cyber Science Department
Because Section 230 protects companies from liability for filtering out offensive or objectionable content, one concern is that repealing the law would lead tech companies to stop filtering content entirely. Kosseff posits the opposite: that companies would filter even more to limit their liability for whatever might be left up. How the CEOs answer could be telling.
How should the platforms address false statements and disinformation camouflaged as opinion? A statement that "I believe all Blacks are lazy" is not on its face an assertion of fact, but could be considered hate speech. What safeguards can ensure that any restrictions levied against such speech will be employed in the interest of public safety, and not merely to stifle a viewpoint with which a platform simply disagrees?
— Lateef Mtima, professor of law at Howard University
Tech platforms are under increasingly intense pressure to crack down on hate speech against minority groups, particularly as research shows that Facebook, Twitter and Google have fanned the flames of racism in the U.S. and abroad. The platforms have recently taken action against speech that promotes real-world violence, but they're still working out how aggressively they should act against bigoted opinions. "There's not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech," a Facebook executive wrote in 2017. This is an area where the platforms' stances are changing quickly, and it will be important to hear the executives' thoughts on it now.
In the physical world, collective responsibility is a familiar concept: A person can be partly responsible for harm even if he did not intend for it to happen and was not its direct cause. Do you believe that tech companies should continue to be granted a special exemption from the rules of collective responsibility? Why?
— Mary Anne Franks, professor of law at University of Miami School of Law and president of the Cyber Civil Rights Initiative
There's an ongoing debate over why tech platforms aren't subject to the same liability that brick-and-mortar businesses face in the offline world. Steering the discussion toward the actual harms that tech platforms facilitate, rather than baseless accusations of political bias, would be one way to make the conversation more substantive.
Would you support an amendment to Section 230 that excludes from protection any interactive computer service provider that manifests deliberate indifference to harmful content? Why or why not?
— Franks
Though they often fail, Facebook, Google and Twitter arguably at least attempt to make their platforms safe for users. But Section 230 doesn't just protect companies that are trying to do the right thing and sometimes get it wrong; it also shields companies that either invite or completely ignore bad behavior. Tech companies spend so much time answering for their own misdeeds that they rarely get asked how the law ought to handle explicitly bad actors.
Narrowing Section 230 immunity doesn't mean platforms will automatically be held liable. Victims still must prove their case. If they have a credible claim they've been harmed at the hands of platforms, why should victims be denied an opportunity for justice?
— Neil Fried, founder of DigitalFrontiers Advocacy, former chief counsel of the House Energy and Commerce Committee and SVP of the Motion Picture Association
Twitter, Facebook and Google have argued that reforming Section 230 could unleash a barrage of frivolous lawsuits against any company with an online footprint. But Section 230 has also been a major obstacle in court for very real victims of crimes facilitated by tech platforms, including genocide and online impersonation. Most judges throw out cases against the platforms immediately because Section 230 makes them so difficult to try. Section 230 reformers want to make it easier for victims to sue major online platforms for those harms. Tech giants have fought these cases vigorously in court but have rarely addressed them publicly.
Should a business that is knowingly facilitating an illegal activity be exempt from state and local criminal laws?
— Rick Lane, former 21st Century Fox SVP currently advising victims' advocacy groups on Section 230
Section 230 defenders often point out that the law doesn't protect companies from being charged with federal crimes. The subtext: If the feds are so concerned about criminal activity happening online, they should enforce the law themselves. But the counter-argument boils down to a lack of resources at the federal level. Opening platforms up to state and local criminal liability would essentially expand the number of cops on the beat. It could also invite more activist enforcement from politically appointed attorneys general.
How consistent are your defenses of 230 with the rest of your views around maintaining freedom of expression and preventing a chilling effect? Those values seem to vanish into the ether when it comes to removing NDAs that keep employees from exercising that same freedom of expression. Where is the fear of a chilling effect when company whistleblowers are intimidated, retaliated against, then fired without recourse?
— Ifeoma Ozoma, First Draft board member, former public policy and social impact manager at Pinterest
The tech executives will likely argue that reforming Section 230 could limit free expression online, potentially forcing the companies to more aggressively remove content posted by their billions of users. But their companies have been accused of silencing criticism by maintaining restrictive NDAs and firing employees who speak out. It could be revealing to hear Pichai and Zuckerberg in particular talk about their recent employee unrest and how they plan to navigate future internal dissent.
Your services enable users to treat each other awfully. However, people also treat each other awfully in the offline world. What specific steps does/will your service take to reduce the quantum of awful behavior on your service so that it is lower than the offline baseline of awfulness?
— Eric Goldman, professor at Santa Clara University School of Law
This question feels tailor-made for Dorsey, who has spoken at length about creating "healthier" conversations on Twitter. Tech CEOs are used to being grilled about all the ways they punish people for the bad things they do online, but there's often less of a focus on whether anything can be done to discourage people from doing so many bad things online in the first place.