Politics

The hardest questions tech CEOs could be asked at the Section 230 hearing

There will be plenty of political point-scoring on Wednesday. But here's what senators should actually ask if they're serious about fixing the internet's favorite law.

Mark Zuckerberg, Sundar Pichai and Jack Dorsey are all set to testify before the Senate on issues related to Section 230 of the Communications Decency Act.

Photo: Graeme Jennings-Pool/Getty Images

Mark Zuckerberg, are your views on freedom of expression hypocritical? Sundar Pichai, are you ready for collective responsibility for online harm? Jack Dorsey, should revenge porn sites really have the same legal protections as Twitter?

Those are the kinds of hard questions that top experts on Section 230 of the Communications Decency Act think could stop the CEOs of Facebook, Google and Twitter in their tracks on Wednesday, when they're due to testify before the U.S. Senate Committee on Commerce, Science and Transportation about how the law has enabled "bad behavior" by Big Tech.

In the past, when Zuckerberg, Pichai and Dorsey have appeared before Congress, they've been faced with a deluge of questions from lawmakers about how their companies favor or suppress various viewpoints, using cherry-picked examples of controversial content that was either taken down or left online. With Election Day just one week away and tensions about tech platforms' treatment of political discourse at an all-time high, Wednesday's hearing will surely feature plenty of that.

But this is the first Congressional hearing featuring these CEOs to focus on Section 230, and it could give lawmakers the chance to sharpen their understanding of how the law really ought to be updated. For those willing to look beyond partisan quarrels, Protocol asked some of the top experts on Section 230 the toughest questions they'd put to Zuckerberg, Pichai and Dorsey. Here's what they had to say:

There's bipartisan support for the PACT Act, which would mean that you couldn't use Section 230 as a defense if you leave content up after a judge orders you to remove it. Do you support this reform?

Matt Perault, former Facebook director of public policy and current director of Duke University's Center on Science and Technology Policy

This bipartisan bill, sponsored by Sens. Brian Schatz and John Thune, would make relatively light-touch changes to Section 230, including requiring platforms to explain their moderation policies, issue quarterly reports on moderation decisions and take down content deemed illegal in court within 24 hours. Facebook, Google and Twitter already comply with many of the provisions in the bill, but the Internet Association, which represents all three companies, has expressed concerns about it. Pinning these powerful CEOs down on their personal feelings about the legislation would be a meaningful contribution to the debate.

Let's say Congress repeals Section 230 tomorrow. How does that change your content moderation practices?

Jeff Kosseff, assistant professor of cybersecurity law at the United States Naval Academy's Cyber Science Department

Because Section 230 protects companies from liability for filtering out offensive or objectionable content, one concern is that removing that protection would lead tech companies to stop filtering content entirely. Kosseff posits the opposite: that companies would filter even more to limit their liability for whatever might be left up. What the CEOs say in response could be telling.

How should the platforms address false statements and disinformation camouflaged as opinion? A statement that "I believe all Blacks are lazy" is not on its face an assertion of fact, but could be considered hate speech. What safeguards can ensure that any restrictions levied against such speech will be employed in the interest of public safety, and not merely to stifle a viewpoint with which a platform simply disagrees?

Lateef Mtima, professor of law at Howard University

Tech platforms are under increasingly intense pressure to crack down on hate speech against minority groups, particularly as research shows that Facebook, Twitter and Google have fanned the flames of racism in the U.S. and abroad. The platforms have recently taken action against speech that promotes real-world violence, but they're still working out how aggressively they should act against bigoted opinions. "There's not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech," a Facebook executive wrote in 2017. This is an area where the platforms' stances are changing quickly, and it will be important to hear the executives' thoughts on it now.

In the physical world, collective responsibility is a familiar concept: A person can be partly responsible for harm even if he did not intend for it to happen and was not its direct cause. Do you believe that tech companies should continue to be granted a special exemption from the rules of collective responsibility? Why?

Mary Anne Franks, professor of law at University of Miami School of Law and president of the Cyber Civil Rights Initiative

There's an ongoing debate over why tech platforms aren't subject to the same liability that brick-and-mortar businesses face in the offline world. Steering the hearing toward the actual harms that tech platforms facilitate, rather than baseless accusations of political bias, would make for a more substantive conversation.

Would you support an amendment to Section 230 that excludes from protection any interactive computer service provider that manifests deliberate indifference to harmful content? Why or why not?

Franks

Though they often fail, Facebook, Google and Twitter arguably at least attempt to make their platforms safe for users. But Section 230 doesn't just protect companies that are trying to do the right thing and sometimes get it wrong; it also shields companies that either invite or completely ignore bad behavior. Tech companies spend so much time answering for their own misdeeds that they rarely get asked how the law ought to handle explicitly bad actors.

Narrowing Section 230 immunity doesn't mean platforms will automatically be held liable. Victims still must prove their case. If they have a credible claim they've been harmed at the hands of platforms, why should victims be denied an opportunity for justice?

Neil Fried, founder of DigitalFrontiers Advocacy, former chief counsel of the House Energy and Commerce Committee and SVP of the Motion Picture Association

Twitter, Facebook and Google have argued that reforming Section 230 could unleash a barrage of frivolous lawsuits against any company with an online footprint. But Section 230 has also been a major obstacle in court for very real victims of crimes facilitated by tech platforms, including genocide and online impersonation. Most judges throw out cases against the platforms immediately because Section 230 makes them so difficult to try. Section 230 reformers want to make it easier for victims to sue major online platforms for those harms. Tech giants have fought these cases vigorously in court but have rarely addressed them publicly.

Should a business that is knowingly facilitating an illegal activity be exempt from state and local criminal laws?

Rick Lane, former 21st Century Fox SVP currently advising victims' advocacy groups on Section 230

Section 230 defenders often point out that the law doesn't protect companies from being charged with federal crimes. The subtext: If the feds are so concerned about criminal activity happening online, they should enforce the law themselves. But the counter-argument boils down to a lack of resources at the federal level. Opening platforms up to state and local criminal liability would essentially expand the number of cops on the beat. It could also invite more activist enforcement from politically appointed attorneys general.

How consistent are your defenses of 230 with the rest of your views around maintaining freedom of expression and preventing a chilling effect? Those values seem to vanish into the ether when it comes to removing NDAs that keep employees from exercising that same freedom of expression. Where is the fear of a chilling effect when company whistleblowers are intimidated, retaliated against, then fired without recourse?

Ifeoma Ozoma, First Draft board member, former public policy and social impact manager at Pinterest

The tech executives will likely argue that reforming Section 230 could limit free expression online, potentially forcing the companies to more aggressively remove content posted by their billions of users. But their companies have been accused of silencing criticism by maintaining restrictive NDAs and firing employees who speak out. It could be revealing to hear Pichai and Zuckerberg in particular talk about their recent employee unrest and how they plan to navigate future internal dissent.

Your services enable users to treat each other awfully. However, people also treat each other awfully in the offline world. What specific steps does/will your service take to reduce the quantum of awful behavior on your service so that it is lower than the offline baseline of awfulness?

Eric Goldman, professor at Santa Clara University School of Law

This question feels tailor-made for Dorsey, who has spoken at length about creating "healthier" conversations on Twitter. Tech CEOs are used to being grilled about all the ways they punish people for bad behavior online, but there's far less focus on whether anything can be done to discourage that behavior in the first place.
