What went wrong with free speech online — and how Big Tech can fix it

Jillian C. York on free speech, how platforms should manage moderation and free expression, and her new book.

Jillian C. York's new book explores the history of online expression from YouTube to Facebook and from Europe to Myanmar.

Image: Jillian C. York

Jillian C. York has been thinking about online expression longer than most. As director for international freedom of expression at the Electronic Frontier Foundation, she's a longtime writer and advocate focused on preserving freedom of speech and expression on social platforms, and on the consequences — particularly outside the U.S. — when that freedom is taken away.

In York's most recent book, "Silicon Values: The Future of Free Speech Under Surveillance Capitalism," she explores the history of online expression from YouTube to Facebook and from Europe to Myanmar. She depicts a complicated puzzle without a lot of easy answers, and worries that because things are complicated, too many companies have decided to do nothing. Or, even worse, to let governments around the world tell them what to do.

York joined the Source Code podcast to discuss the themes of her book, plus what platforms got right about the pandemic, why global companies need a more global perspective on expression and what other platforms can learn from Reddit.

You can listen to the entire conversation with Jillian C. York on this episode of the Source Code podcast. The below excerpts have been edited for length and clarity.

In the course of reading your book, I had this same question and answered it like 50 different ways. Basically: Is it totally crazy for tech companies to say, "We operate in these countries, different countries have different rules and laws and norms, and we exist to operate by those"? China is the most obvious example, because what China wants is very different from what people in San Francisco mostly want. But they're like, "Well, those are the rules, we have to play by the rules."

It's a tricky one. Let's break it down. On the one hand, it's not unreasonable to adhere to the rules of a democratic nation. So if we're talking about Germany, as much as I don't agree with all of the laws here around hate speech, they were for the most part decided in a democratic fashion. This is an electoral democracy.

When we're talking about China or Saudi Arabia or Turkey, we're talking about countries that do not have the same level of democracy. And so a U.S. company going ahead and saying, "We're going to side with the government?" I think that's really the important part here: These governments are not representative of the people. And so what you have is American capitalism siding with power, with governments that were not chosen. These are not governments vying for the people.

At this stage in the internet, it's kind of too late to put it back in the box. People around the world know more than they did a generation ago. They know what the other options are, immigration is on the rise in every direction. It's not just people coming to the U.S. I'm an immigrant to Germany! And we've got a global playing field when it comes to all sorts of things, including the job market. And so to say that it's OK to censor certain information in another country is essentially putting people at a disadvantage.

I'm obviously making a kind of a capitalist argument here. But that's the one that I think might end up winning. And so we have to think about it from that perspective, as well as from a perspective of cultural knowledge.

Would that question be different if networks like YouTube and Facebook were deliberately walled off in some of these places? If Facebook Saudi Arabia was its own thing within the borders of Saudi Arabia and didn't touch the rest of the world, would we be having a different conversation about it?

Yes. And I think we'd also be having a different conversation about it if these companies hadn't started off from a very different perspective.

This debate has been going on for more than a decade. When I was talking to Facebook back around 2010, Google had just pulled out of China. This was an era when Yahoo had handed over information about a Chinese dissident to the Chinese government, putting him in prison. So it was a very different era: These companies were making their decisions based on user safety, and on free expression to some degree, but not from an absolutist perspective.

And then the technology starts to grow and change. We have the introduction of SSL, which changed the way censorship works. Before that, a government could make a more granular decision about what to block, so it could block a specific YouTube video. SSL comes along and makes it harder for a government entity to identify the specific piece of content. And so the decision becomes much more binary: block the whole platform or nothing. At that point, the companies start doing things differently, based on profit.

If we had these companies in individual countries — if their servers were based there, all of that — it would be much different. I do want to also note that it's not an easy decision for them, because they are risking their employees on the ground in a lot of these cases. But my argument is, and always has been, you shouldn't put offices in a country if that country is going to threaten your employees' lives.

I don't know if he originated this, but Siva Vaidhyanathan likes to say that "the problem with Facebook is Facebook." Is this just a totally intractable problem, where we're either going to have to tear down and rebuild the structures of these giant companies and the technology that powers the internet, or just learn to live with the side effects?

I'm of two minds here, because on the one hand, I don't think we can easily put it back in the box. I also think that that argument does a disservice to people in other parts of the world who may not have access to the broader internet. And again, that's the fault of these companies. They've chosen to go in and give free access to their platforms, and maybe throw in some Wikipedia on the side. So I do worry about the privilege in that argument.

But at the same time, I don't think that we should view these platforms as inevitable. I do think that we can create a better world. But it's going to come from a number of different sectors. It has to come from entrepreneurship, it also has to come from political science and sociology. And so we have to have a united force in designing what we want the next iteration of the internet to look like.

There's a really interesting transition in that, which you talked about a couple of times. Knitting communities, you point out —

Ravelry!

Ravelry! These same rules don't have to apply to Ravelry, or JDate, or some of these other things. The rules are different when you're huge. And I feel like Facebook has spent the last decade resolutely denying the fact that it's huge, at least from a policy perspective. The part of me that's sympathetic to that is the part of me that realizes that figuring out when you're huge, and what to do about it, is a really challenging thing.

It's definitely tricky. But with Facebook in particular, I don't think they've even tried. Part of the issue with Facebook is that a lot of the top-level executives have been there since the inception of the company, or since the very early years. They're out of touch with society. They're very privileged. They're living in their walled gardens, literally. And then the hires that they have made over the years have come primarily from government, like Nick Clegg, and law enforcement, like Monika Bickert. And so it's a very, very narrow view of the world.

With some of the other platforms, I think it's a little bit different. I mean, we've seen Twitter rethink itself, we've seen Reddit really rethink itself. As these platforms grow, they do have to go back and say, does this still make sense for 2021? And if I look at Facebook's rules, they've been just piling rule after rule on top of each other without really doing some kind of assessment or audit of what makes sense in this current era. And I think that's what they have to go back and do. And we are seeing other companies do that, which is great.

If I'm Ravelry, should I be thinking about that stuff right now? Do they need to be having deep conversations about their place in the expression world?

Nah. Ravelry, they're there for one reason. They're a knitting platform, and they've decided that political speech just isn't that important to their bottom line and to their M.O. If Ravelry started saying "you can't say anything but knitting," they might lose some users, right? Like, if you can't even just share what happened that day. But I think the thing that's so interesting about the political speech example, the fact that they chose to just be like "you know what, we're not even just going to ban conversation about Trump or U.S. politics, we're just gonna say no politics at all," it was a really stark reminder for me that political speech is not the most important speech.

A lot of these platforms, and U.S. culture in general, treats it that way. But it's not the be-all end-all, there's a lot of other types of expression that are absolutely vital. Cultural artistic expression, for example. And so I think with Ravelry, it's fine for them to do that.

I think the problem is when you try to be a platform for everything — and in Twitter and Facebook's case in particular, when you decide that you're going to be a conduit for the expression of elected and other public officials — that's where I think it gets really tricky. Because if you're going to be the place where politicians are talking to each other, there is some degree of transparency needed there. It's kind of a weird thing to just say, "no, we're not going to allow political expression," or "we're going to kick off this politician, but not this one."

These companies talk pretty freely about how much of their audience is outside of the U.S. It seems like you could make a pretty simple capitalistic case that they should really stop paying so much attention to people in the United States. Why hasn't that happened? What would it take to actually make them think more proactively about global issues?

The cynical answer to that is that only certain countries and regions are profitable. They do pay a lot of attention to Saudi Arabia, to Turkey, because those are big markets for them. They don't pay a lot of attention in Myanmar, because it's a poor country. It's that simple.

I also think that we have to bring this back to universal human rights frameworks and values. And so there is some stuff that the majority of the world is OK with having taken down. The problem is, you have to do it really carefully. So you can't just apply automation to hate speech and hope that it works, and hope that it doesn't catch counter speech and satire and human rights documentation in the mix, because that is what's going to happen if we're not being cautious and gentle in our approach to this.

That said, I used to be more of an absolutist. I've come around to the fact that we can't just allow hate to flourish on these platforms. But Article 19 doesn't allow that, either. There are restrictions that most of the world has signed onto and accepted; the U.S. (and Japan, also, for some reason) is really the big outlier on all of this. If we were using these frameworks, a lot of the stuff that Facebook takes down would be allowed. We would have nudity, we would have some documentation of violence, we would have counter-speech against violence. But the way that these platforms do it is the lazy way. They apply the laziest, cheapest method. And that's really why we're in this mess.

I'm gonna make the bull case for why AI is going to solve all of our problems. And I want you to tell me why I'm wrong. Because I also think I'm wrong, but I'm gonna make the bull case anyway.

Ooh, all right.

The bull case says that AI doesn't solve all problems right now. But eventually, machine translation is going to get good enough that we can reasonably understand things that are going on everywhere. And machines are truly the only way that this can happen at scale. The goal is to do it quickly and proactively, so that people are not relied on to report on these things. The only solution, then, is to turn to sufficiently intelligent machines — which we don't have yet but we'll have someday — that can solve these problems. At the limit of scale, that's the only way this will ever actually work. So even if it will never be perfect, that's where we should be investing.

What a robotic lack of imagination that is! OK, so I speak Arabic badly. And I've been relying on Google Translate to fill in the blanks for me for many, many years, and I don't think Arabic translation has improved one iota.

Now, automation is good for some things. What it's good at is stuff that can be put in box A or box B. And what I would like to see happen is an increased use of automation for that, but with the decision of what happens after it's placed in boxes A or B in the hands of the user instead of in the hands of the centralized platform. So when it comes to nudity, you can use automation to detect nudity, but give me the choice whether to flip that switch on or off. If it's Saudi Arabia, maybe the government gets the choice. If it's a parent, maybe they get the choice. But there's so many different ways we can do this. And instead, these companies are demanding that we trust them with automation. And even the best automation, I don't trust Mark Zuckerberg with it. Why should I? He's given me zero reason to.

If I were to put it at the broadest possible level, your solution to a lot of these problems is just resources. More people, more diverse groups, more money, just paying more attention to these issues in more places, and in more cases. Is that a fair characterization?

Yeah! Stop spending the money on acquisitions and start spending it on genuine inclusivity. They speak about diversity, but diversity to them is, to put it bluntly, people of different races in the U.S. And that's important, don't get me wrong, but inclusivity is different. Inclusivity is bringing those people, both in the U.S. and abroad, up to the highest levels of governance.

If you look at the executive team of Facebook, they're almost entirely white. Many of them are men. And those who are not men, not white, are still Ivy League-educated. It's a very elite bubble. And so inclusivity is different from diversity in the sense that it's really looking at the broad intersectional backgrounds of different people, and bringing them not just into the room for consultations, but bringing them to the table, bringing them to the boardroom. It's not Sheryl Sandberg "Lean In." It's really more like, "Let's lean out and look at who's missing."

Correction: An earlier version of this story misspelled Monika Bickert's name. This story was updated on April 14, 2021.
