Politics

The hardest questions tech CEOs could be asked at the Section 230 hearing

There will be plenty of political point-scoring on Wednesday. But here's what senators should actually ask if they're serious about fixing the internet's favorite law.


Mark Zuckerberg, Sundar Pichai and Jack Dorsey are all set to testify before the Senate on issues related to Section 230 of the Communications Decency Act.

Photo: Graeme Jennings-Pool/Getty Images

Mark Zuckerberg, are your views on freedom of expression hypocritical? Sundar Pichai, are you ready for collective responsibility for online harm? Jack Dorsey, should revenge porn sites really have the same legal protections as Twitter?

Those are the kinds of hard questions that top experts on Section 230 of the Communications Decency Act think could stop the CEOs of Facebook, Google and Twitter in their tracks on Wednesday, when they're due to testify before the U.S. Senate Committee on Commerce, Science and Transportation about how the law has enabled "bad behavior" by Big Tech.

In the past, when Zuckerberg, Pichai and Dorsey have appeared before Congress, they've been faced with a deluge of questions from lawmakers about how their companies favor or suppress various viewpoints, using cherry-picked examples of controversial content that was either taken down or left online. With Election Day just one week away and tensions about tech platforms' treatment of political discourse at an all-time high, Wednesday's hearing will surely feature plenty of that.

But this is the first congressional hearing featuring these CEOs to focus on Section 230, and it could give lawmakers a chance to deepen their understanding of how the law really ought to be updated. For senators willing to look beyond partisan quarrels, Protocol asked some of the top experts on Section 230 for the toughest questions they'd put to Zuckerberg, Pichai and Dorsey. Here's what they had to say:

There's bipartisan support for the PACT Act, which would mean that you couldn't use Section 230 as a defense if you leave content up after a judge orders you to remove it. Do you support this reform?

— Matt Perault, former Facebook director of public policy and current director of Duke University's Center on Science and Technology Policy

This bipartisan bill, sponsored by Sens. Brian Schatz and John Thune, would make relatively light-touch changes to Section 230, including requiring platforms to explain their moderation policies, issue quarterly reports on moderation decisions and take down content deemed illegal in court within 24 hours. Facebook, Google and Twitter already comply with many of the provisions in the bill, but the Internet Association, which represents all three companies, has expressed concerns about it. Pinning these powerful CEOs down on their personal feelings about the legislation would be a meaningful contribution to the debate.

Let's say Congress repeals Section 230 tomorrow. How does that change your content moderation practices?

— Jeff Kosseff, assistant professor of cybersecurity law at the United States Naval Academy's Cyber Science Department

Because Section 230 protects companies from liability for filtering out offensive or objectionable content, one concern is that removing its protections would lead tech companies to stop filtering altogether. Kosseff posits the opposite: that companies would filter even more aggressively to limit their liability for whatever they leave up. What the CEOs say in response could be telling.

How should the platforms address false statements and disinformation camouflaged as opinion? A statement that "I believe all Blacks are lazy" is not on its face an assertion of fact, but could be considered hate speech. What safeguards can ensure that any restrictions levied against such speech will be employed in the interest of public safety, and not merely to stifle a viewpoint with which a platform simply disagrees?

— Lateef Mtima, professor of law at Howard University

Tech platforms are under increasingly intense pressure to crack down on hate speech against minority groups, particularly as research shows that Facebook, Twitter and Google have fanned the flames of racism in the U.S. and abroad. The platforms have recently taken action against speech that promotes real-world violence, but they're still working out how aggressively they should act against bigoted opinions. "There's not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech," a Facebook executive wrote in 2017. This is an area where the platforms' stances are changing quickly, and it will be important to hear the executives' thoughts on it now.

In the physical world, collective responsibility is a familiar concept: A person can be partly responsible for harm even if he did not intend for it to happen and was not its direct cause. Do you believe that tech companies should continue to be granted a special exemption from the rules of collective responsibility? Why?

— Mary Anne Franks, professor of law at the University of Miami School of Law and president of the Cyber Civil Rights Initiative

There's an ongoing debate over why tech platforms aren't subject to the same liability that brick-and-mortar businesses face in the offline world. Steering the conversation toward the actual harms that tech platforms facilitate, rather than baseless accusations of political bias, would be one way to make the hearing more substantive.

Would you support an amendment to Section 230 that excludes from protection any interactive computer service provider that manifests deliberate indifference to harmful content? Why or why not?

— Franks

Though they often fail, Facebook, Google and Twitter arguably at least attempt to make their platforms safe for users. But Section 230 doesn't just protect companies that are trying to do the right thing and sometimes get it wrong; it also shields companies that either invite or completely ignore bad behavior. Tech companies spend so much time answering for their own misdeeds that they rarely get asked how the law ought to handle explicitly bad actors.

Narrowing Section 230 immunity doesn't mean platforms will automatically be held liable. Victims still must prove their case. If they have a credible claim they've been harmed at the hands of platforms, why should victims be denied an opportunity for justice?

— Neil Fried, founder of DigitalFrontiers Advocacy, former chief counsel of the House Energy and Commerce Committee and SVP of the Motion Picture Association

Twitter, Facebook and Google have argued that reforming Section 230 could unleash a barrage of frivolous lawsuits against any company with an online footprint. But Section 230 has also been a major obstacle in court for very real victims of crimes facilitated by tech platforms, including genocide and online impersonation. Most judges throw out cases against the platforms immediately because Section 230 makes them so difficult to try. Section 230 reformers want to make it easier for victims to sue major online platforms for those harms. Tech giants have fought these cases vigorously in court but have rarely addressed them publicly.

Should a business that is knowingly facilitating an illegal activity be exempt from state and local criminal laws?

— Rick Lane, former 21st Century Fox SVP, now advising victims' advocacy groups on Section 230

Section 230 defenders often point out that the law doesn't protect companies from being charged with federal crimes. The subtext: If the feds are so concerned about criminal activity happening online, they should enforce the law themselves. But the counter-argument boils down to a lack of resources at the federal level. Opening platforms up to state and local criminal liability would essentially expand the number of cops on the beat. It could also invite more activist enforcement from politically appointed attorneys general.

How consistent are your defenses of 230 with the rest of your views around maintaining freedom of expression and preventing a chilling effect? Those values seem to vanish into the ether when it comes to removing NDAs that keep employees from exercising that same freedom of expression. Where is the fear of a chilling effect when company whistleblowers are intimidated, retaliated against, then fired without recourse?

— Ifeoma Ozoma, First Draft board member, former public policy and social impact manager at Pinterest

The tech executives will likely argue that reforming Section 230 could limit free expression online, potentially forcing the companies to more aggressively remove content posted by their billions of users. But their companies have been accused of silencing criticism by maintaining restrictive NDAs and firing employees who speak out. It could be revealing to hear Pichai and Zuckerberg in particular talk about their recent employee unrest and how they plan to navigate future internal dissent.

Your services enable users to treat each other awfully. However, people also treat each other awfully in the offline world. What specific steps does/will your service take to reduce the quantum of awful behavior on your service so that it is lower than the offline baseline of awfulness?

— Eric Goldman, professor at Santa Clara University School of Law

This question feels tailor-made for Dorsey, who has spoken at length about creating "healthier" conversations on Twitter. Tech CEOs are used to being grilled about all the ways they punish people for the bad things they do online, but there's often less of a focus on whether anything can be done to discourage people from doing so many bad things online in the first place.

Protocol | China

China’s era of Big Tech overwork has ended

Tech companies fear public outcry as much as they do regulatory crackdowns.

Chinese tech workers are fed up. Companies fear political and public backlashes.

Photo: Susan Fisher Plotner/Getty Images

Two years after Chinese tech workers started a decentralized online protest against the industry's grueling overtime culture, and one year after the plight of delivery workers came under the national spotlight, a chorus of Chinese tech giants has finally made high-profile moves to end the punishing work schedules that many believe have fueled the country's spectacular tech boom — and that many others have criticized as exploitative and cruel.

Over the past two months, at least four Chinese tech giants have announced plans to cancel mandatory overtime; some of the changes are companywide, and others are specific to business units. ByteDance, Kuaishou and Meituan's group-buying platform announced the end of a policy called "Big/Small Week," under which a six-day workweek alternates with a standard five-day week. In early June, a game studio owned by Tencent rolled out a policy mandating that employees punch out at 6 p.m. every Wednesday and take the weekends off.

Shen Lu

Shen Lu is a reporter with Protocol | China. She has spent six years covering China from inside and outside its borders. Previously, she was a fellow at Asia Society's ChinaFile and a Beijing-based producer for CNN. Her writing has appeared in Foreign Policy, The New York Times and POLITICO, among other publications. Shen Lu is a founding member of Chinese Storytellers, a community serving and elevating Chinese professionals in the global media industry.

Over the last year, financial institutions have experienced unprecedented demand from their customers for exposure to cryptocurrency, and we've seen an inflow of institutional dollars driving bitcoin and other cryptocurrencies to record prices. Some banks have already launched cryptocurrency programs, but many more are evaluating the market.

That's why we've created the Crypto Maturity Model: an iterative roadmap for cryptocurrency product rollout, enabling financial institutions to evaluate market opportunities while addressing compliance requirements.

Caitlin Barnett, Chainalysis
Caitlin’s legal and compliance experience encompasses both cryptocurrency and traditional finance. As Director of Regulation and Compliance at Chainalysis, she helps leading financial institutions strategize and build compliance programs in order to adopt cryptocurrencies and offer new products to their customers. In addition, Caitlin helps facilitate dialogue with regulators and the industry on key cryptocurrency policy issues.
Power

Brownsville, we have a problem

The money and will of Elon Musk are reshaping a tiny Texas city. Its residents are divided on his vision for SpaceX, but their opinion may not matter at all.

When Musk chose Cameron County, he changed its future irrevocably.

Photo: Verónica G. Cárdenas for Protocol

In Boca Chica, Texas, the coastal prairie stretches to the horizon on either side of the Gulf of Mexico, an endless sandbar topped with floating greenery, wheeling gulls and whipping gusts of wind.

Far above the sea on a foggy March day, the camera feed on the Starship jerked and then froze on an image of orange flames shooting into the gray. From the ground below, onlookers strained to see through the opaque sky. After a moment of quiet, jagged edges of steel started to rain from the clouds, battering the ground near the oceanside launch pad, ripping through the dunes, sinking deep into the sand and flats.

Anna Kramer

Anna Kramer is a reporter at Protocol (Twitter: @anna_c_kramer, email: akramer@protocol.com), where she writes about labor and workplace issues. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied International Relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.

People

Facebook’s push to protect young users is a peek at the future of social

More options, more proactive protections, fewer one-size-fits-all answers for being a person on the internet.

Social media companies are racing to find ways to protect underage people on their apps.

Image: Alexander Shatov/Unsplash

Social media companies used to see themselves as open squares, places where everyone could be together in beautiful, skipping-arm-in-arm harmony. But that's not the vision anymore.

Now, Facebook and others are going private. They're trying to rebuild around small groups and messaging. They're also trying to figure out how to build platforms that work for everyone, that don't try to apply the same set of rules to billions of people around the world, that bring everyone together but on each user's terms. It's tricky.

David Pierce

David Pierce (@pierce) is Protocol's editor at large. Prior to joining Protocol, he was a columnist at The Wall Street Journal, a senior writer with Wired, and deputy editor at The Verge. He owns all the phones.

Power

Who owns that hot startup? These insiders want to clear it up.

Cap tables are fundamental to startups. So 10 law firms and startup software vendors are teaming up to standardize what they tell you about investors' stakes.

Cap tables describe the ownership of shares in a startup, but they aren't standardized.

Illustration: Protocol

Behind every startup, there's a cap table. Startups have to start keeping track of who owns what, from the moment they're created, to fundraising from venture capitalists, to an eventual IPO or acquisition.

"Everything that happens that is a sexy thing that's important to the tech world, it really is something having to do with the cap table," said David Wang, chief innovation officer at the Wilson Sonsini Goodrich & Rosati law firm.

Biz Carson

Biz Carson (@bizcarson) is a San Francisco-based reporter at Protocol, covering Silicon Valley with a focus on startups and venture capital. Previously, she reported for Forbes and was co-editor of the Forbes Next Billion-Dollar Startups list. Before that, she worked for Business Insider, Gigaom and Wired, and started her career as a newspaper designer for Gannett.
