What went wrong with free speech online — and how Big Tech can fix it

Jillian C. York on free speech, how platforms should manage moderation and free expression, and her new book.

Jillian C. York's new book explores the history of online expression from YouTube to Facebook and from Europe to Myanmar.

Image: Jillian C. York

Jillian C. York has been thinking about online expression longer than most. As director for international freedom of expression at the Electronic Frontier Foundation, she's a longtime writer and advocate when it comes to preserving freedom of speech and expression on social platforms, and the consequences — particularly outside the U.S. — when that freedom is taken away.

In York's most recent book, "Silicon Values: The Future of Free Speech Under Surveillance Capitalism," she explores the history of online expression from YouTube to Facebook and from Europe to Myanmar. She depicts a complicated puzzle without a lot of easy answers, and worries that because things are complicated, too many companies have decided to do nothing. Or, even worse, to let governments around the world tell them what to do.

York joined the Source Code podcast to discuss the themes of her book, plus what platforms got right about the pandemic, why global companies need a more global perspective on expression and what other platforms can learn from Reddit.

You can listen to the entire conversation with Jillian C. York on this episode of the Source Code podcast. The excerpts below have been edited for length and clarity.

In the course of reading your book, I had this same question and answered it like 50 different ways. Basically: Is it totally crazy for tech companies to say, "We operate in these countries, different countries have different rules and laws and norms, and we exist to operate by those"? China is the most obvious example, because what China wants is very different from what people in San Francisco mostly want. But they're like, "Well, those are the rules, we have to play by the rules."

It's a tricky one. Let's break it down. On the one hand, it's not unreasonable to adhere to the rules of a democratic nation. So if we're talking about Germany, as much as I don't agree with all of the laws here around hate speech, they were for the most part decided in a democratic fashion. This is an electoral democracy.

When we're talking about China or Saudi Arabia or Turkey, we're talking about countries that do not have the same level of democracy. And so a U.S. company going ahead and saying, "We're going to side with the government?" I think that's really the important part here: These governments are not representative of the people. And so what you have is American capitalism siding with power, with governments that were not chosen. These are not governments working for the people.

At this stage in the internet, it's kind of too late to put it back in the box. People around the world know more than they did a generation ago. They know what the other options are, immigration is on the rise in every direction. It's not just people coming to the U.S. I'm an immigrant to Germany! And we've got a global playing field when it comes to all sorts of things, including the job market. And so to say that it's OK to censor certain information in another country is essentially putting people at a disadvantage.

I'm obviously making a kind of a capitalist argument here. But that's the one that I think might end up winning. And so we have to think about it from that perspective, as well as from a perspective of cultural knowledge.

Would that question be different if networks like YouTube and Facebook were deliberately walled off in some of these places? If Facebook Saudi Arabia was its own thing within the borders of Saudi Arabia and didn't touch the rest of the world, would we be having a different conversation about it?

Yes. And I think we'd also be having a different conversation about it if these companies hadn't started off from a very different perspective.

This debate has been going on for more than a decade. When I was talking to Facebook back around 2010, Google had just pulled out of China. This was an era in which Yahoo had handed over information about a Chinese dissident to the Chinese government, landing him in prison. So this was a very different era: These companies were making their decisions based on user safety, and on free expression to some degree, but not from an absolutist perspective.

And then the technology starts to grow and change. We have the introduction of SSL, which changed how censorship worked. Before SSL, a government could make a more granular decision about what to block, so it could block a specific YouTube video. SSL comes along and makes it harder for a government entity to identify the specific piece of content. And so the decision becomes much more binary. At that point, the companies start doing things differently, based on profit.

If we had these companies in individual countries — if their servers were based there, all of that — it would be much different. I do want to also note that it's not an easy decision for them, because they are risking their employees on the ground in a lot of these cases. But my argument is, and always has been, you shouldn't put offices in a country if that country is going to threaten your employees' lives.

I don't know if he originated this, but Siva Vaidhyanathan likes to say that "the problem with Facebook is Facebook." Is this just a totally intractable problem, where we're either going to have to tear down and rebuild the structures of these giant companies and the technology that powers the internet, or just learn to live with the side effects?

I'm of two minds here, because on the one hand, I don't think we can easily put it back in the box. I also think that that argument does a disservice to people in other parts of the world who may not have access to the broader internet. And again, that's the fault of these companies. They've chosen to go in and give free access to their platforms, and maybe throw in some Wikipedia on the side. So I do worry about the privilege in that argument.

But at the same time, I don't think that we should view these platforms as inevitable. I do think that we can create a better world. But it's going to come from a number of different sectors. It has to come from entrepreneurship, it also has to come from political science and sociology. And so we have to have a united force in designing what we want the next iteration of the internet to look like.

There's a really interesting transition in that, which you talked about a couple of times. Knitting communities, you point out —

Ravelry!

Ravelry! These same rules don't have to apply to Ravelry, or JDate, or some of these other things. The rules are different when you're huge. And I feel like Facebook has spent the last decade resolutely denying the fact that it's huge, at least from a policy perspective. The part of me that's sympathetic to that is the part of me that realizes that figuring out when you're huge, and what to do about it, is a really challenging thing.

It's definitely tricky. But with Facebook in particular, I don't think they've even tried. Part of the issue with Facebook is that a lot of the top-level executives have been there since the inception of the company, or since the very early years. They're out of touch with society. They're very privileged. They're living in their walled gardens, literally. And then the hires that they have made over the years have come primarily from government, like Nick Clegg, and law enforcement, like Monika Bickert. And so it's a very, very narrow view of the world.

With some of the other platforms, I think it's a little bit different. I mean, we've seen Twitter rethink itself, we've seen Reddit really rethink itself. As these platforms grow, they do have to go back and say, does this still make sense for 2021? And if I look at Facebook's rules, they've been just piling rule after rule on top of each other without really doing some kind of assessment or audit of what makes sense in this current era. And I think that's what they have to go back and do. And we are seeing other companies do that, which is great.

If I'm Ravelry, should I be thinking about that stuff right now? Do they need to be having deep conversations about their place in the expression world?

Nah. Ravelry, they're there for one reason. They're a knitting platform, and they've decided that political speech just isn't that important to their bottom line and to their M.O. If Ravelry started saying "you can't say anything but knitting," they might lose some users, right? Like, if you can't even just share what happened that day. But I think the thing that's so interesting about the political speech example, the fact that they chose to just be like "you know what, we're not even just going to ban conversation about Trump or U.S. politics, we're just gonna say no politics at all," it was a really stark reminder for me that political speech is not the most important speech.

A lot of these platforms, and U.S. culture in general, treat it that way. But it's not the be-all and end-all; there are a lot of other types of expression that are absolutely vital. Cultural and artistic expression, for example. And so I think with Ravelry, it's fine for them to do that.

I think the problem is when you try to be a platform for everything — and in Twitter and Facebook's case in particular, when you decide that you're going to be a conduit for the expression of elected and other public officials — that's where I think it gets really tricky. Because if you're going to be the place where politicians are talking to each other, there is some degree of transparency needed there. It's kind of a weird thing to just say, "no, we're not going to allow political expression," or "we're going to kick off this politician, but not this one."

These companies talk pretty freely about how much of their audience is outside of the U.S. It seems like you could make a pretty simple capitalistic case that they should really stop paying so much attention to people in the United States. Why hasn't that happened? What would it take to actually make them think more proactively about global issues?

The cynical answer to that is that only certain countries and regions are profitable. They do pay a lot of attention to Saudi Arabia, to Turkey, because those are big markets for them. They don't pay a lot of attention to Myanmar, because it's a poor country. It's that simple.

I also think that we have to bring this back to universal human rights frameworks and values. And so there is some stuff that the majority of the world is OK with having taken down. The problem is, you have to do it really carefully. So you can't just apply automation to hate speech and hope that it works, and hope that it doesn't catch counter-speech and satire and human rights documentation in the mix, because that is what's going to happen if we're not being cautious and gentle in our approach to this.

That said, I used to be more of an absolutist. I've come around to the fact that we can't just allow hate to flourish on these platforms. But Article 19 doesn't allow that, either. There are restrictions that most of the world has signed onto and accepted; the U.S. (and Japan, also, for some reason) is really the big outlier on all of this. If we were using these frameworks, they would not allow for a lot of the takedowns Facebook makes. We would have nudity, we would have some documentation of violence, we would have counter-speech against violence. But the way that these platforms do it is the lazy way. They apply the laziest, cheapest method. And that's really why we're in this mess.

I'm gonna make the bull case for why AI is going to solve all of our problems. And I want you to tell me why I'm wrong. Because I also think I'm wrong, but I'm gonna make the bull case anyway.

Ooh, all right.

The bull case says that AI doesn't solve all problems right now. But eventually, machine translation is going to get good enough that we can reasonably understand things that are going on everywhere. And machines are truly the only way this can happen at scale. The goal is to do it quickly and proactively, so that we're not relying on people to report these things. The only solution, then, is to turn to sufficiently intelligent machines (which we don't have yet, but will someday) that can solve these problems. At the limit of scale, that's the only way this will ever actually work. So even if it will never be perfect, that's where we should be investing.

What a robotic lack of imagination that is! OK, so I speak Arabic badly. And I've been relying on Google Translate to fill in the blanks for me for many, many years, and I don't think Arabic translation has improved one iota.

Now, automation is good for some things. What it's good at is stuff that can be put in box A or box B. And what I would like to see happen is an increased use of automation for that, but with the decision of what happens after it's placed in boxes A or B in the hands of the user instead of in the hands of the centralized platform. So when it comes to nudity, you can use automation to detect nudity, but give me the choice whether to flip that switch on or off. If it's Saudi Arabia, maybe the government gets the choice. If it's a parent, maybe they get the choice. But there's so many different ways we can do this. And instead, these companies are demanding that we trust them with automation. And even the best automation, I don't trust Mark Zuckerberg with it. Why should I? He's given me zero reason to.

If I were to put it at the broadest possible level, your solution to a lot of these problems is just resources. More people, more diverse groups, more money, just paying more attention to these issues in more places, and in more cases. Is that a fair characterization?

Yeah! Stop spending the money on acquisitions and start spending it on genuine inclusivity. They speak about diversity, but diversity to them is, to put it bluntly, people of different races in the U.S. And that's important, don't get me wrong, but inclusivity is different. Inclusivity is bringing those people, both in the U.S. and abroad, up to the highest levels of governance.

If you look at the executive team of Facebook, they're almost entirely white. Many of them are men. And those who are not men, not white, are still Ivy League-educated. It's a very elite bubble. And so inclusivity is different from diversity in the sense that it's really looking at the broad intersectional backgrounds of different people, and bringing them not just into the room for consultations, but bringing them to the table, bringing them to the boardroom. It's not Sheryl Sandberg "Lean In." It's really more like, "Let's lean out and look at who's missing."

Correction: An earlier version of this story misspelled Monika Bickert's name. This story was updated on April 14, 2021.
