Policy

Reid Hoffman on Facebook, anti-vaxxers and LinkedIn leaving China

"Western society and China try to put you in the crossfire."

Reid Hoffman

Reid Hoffman spoke with Protocol about blitzscaling and companies' responsibility.

Photo: David Paul Morris/Bloomberg via Getty Images

LinkedIn entered China in 2014 because, its co-founder Reid Hoffman said, the company wanted to build economic opportunity for people there. Of course, it couldn't have hurt that the country of 1.4 billion was also a major economic opportunity for LinkedIn — or for any company willing to adhere to the government's censorship restrictions. For a while at least, LinkedIn was willing.

But that all changed last month, when LinkedIn pulled the platform out of China amid new speech and data protection requirements. The company plans to replace it with a stripped-down job board.

"What we've discovered over the last few years was, because of the kind of conflicts between societies, Western society and China try to put you in the crossfire," Hoffman told Protocol. "Between them, you end up in a lot of controversy when you're trying to navigate this line."

Hoffman spoke with Protocol as part of the Knight Foundation's symposium on lessons learned during the first internet age, and had a lot to say about how companies should weigh the trade-offs when entering global markets with increasingly stringent laws governing speech. The early Facebook investor and author of several books on how to scale fast also had some advice for Facebook and thoughts on how platforms can harness their users' most sinful impulses for good.

This interview has been edited and condensed for clarity.

You wrote the book "Blitzscaling," which is all about how to grow really, really, really fast into global markets. That really has been a hallmark of some of the most successful tech companies that we think of today. But looking back, what we've also seen these last few years — and particularly these last few months, with the disclosures of Frances Haugen around the internal conflicts between growth and user protections — I wonder whether you believe, still, that all this relentless growth is ultimately beneficial for society.

One of the things I've said about Facebook is it doesn't resemble Big Tobacco, because even as [big as] Facebook is, it has a bunch of positive effects: people connecting with their families, sharing information that isn't just misinformation — anti-vaxx kinds of things, but also a bunch of really good things, unlike, you know, smoking cigarettes.

The thing that sets the blitzscaling clock is competition. And so the fact that we have global competition, and the ones that actually move fast to get to scale are the ones that actually end up setting the norms and setting the rules, is still a truth of reality. I don't think there's any way we're going to get to a global let's-slow-down-business kind of clock, because of the nature of how we build our economies.

Now, I did [write] one of my chapters in "Blitzscaling" on responsible blitzscaling, because part of what I wanted to do was to get people to say: You can still totally emphasize the speed that you need in order to compete and think responsibly. You can think about the big risks. Are you breaking a system? Are you causing real harm to people? As you scale very quickly, you become a multi-threaded organization, and you should have one or more of those threads in your organization — people whose job it is to be predicting where you might be causing something to happen that's really bad, and then thinking about, in advance, possibly preventing it, possibly thinking about how to recover very quickly from it, that kind of thing. I think that's part of what I will be adding to the blitzscaling playbook.

How early on do you think companies need to be integrating that kind of thinking into their work? And when we look back at the most successful companies of this first internet age, do you think those people were around the table at the right time?

We're all learning. I mean, the reason I wrote "Blitzscaling" is this: Why does half of the NASDAQ emerge from within a 30-mile radius of Palo Alto? It's because of this collective learning about the speed to scale. China obviously also does this in pretty astounding ways. I wanted to share it with the rest of the world, because the more entrepreneurial and diverse the ecosystems you have in other places, the better it is for creating economies and entrepreneurship in all kinds of places.

Once you [operate] at a global scale, all of a sudden, the rules change. I had some early visibility into that with PayPal, because we grew really fast. Oh boy. We're gonna have embezzlers here, and we're gonna have thieves. When you get to 100 million people in your system, you have all sorts. So how do you create the rules in a way that is still very beneficial for individuals, very beneficial for society and obviates harms?

You wrote about how at LinkedIn, when you opened up your publishing platform, you kept it really tight to about 150 really famous, really established people — Richard Branson, Arianna Huffington — people like that, and you wanted them to model good behavior before you really opened it up. When I was reading that I thought: Well, this sort of feels like Reid taking the opposite of his advice — taking a beat and seeing how it plays out before growing as fast as you possibly could. So tell me a little bit about how you arrived at that decision, given how laser-focused you are on growth?

One of the advantages was that we'd already set up a network, so we had the network to cushion us. If we'd been doing it at the very beginning, we might not have. That beat was given to us.

When I was writing the essay, I was thinking about how the engineering mindset tends to be: We just need to set up a reputation system, we need to set up an algorithm that will do this. And, by the way, those are important things to think about. But then what I realized was actually, in fact, leadership and what counts as leadership is one of the [most important] things — enabling leadership that heads in the direction of the virtues that we want. If we had done that more than designing this kind of completely flat ecosystem, we might have actually even been able to do stuff earlier. The pattern for doing that was enabling that leadership.

That being said, we could also do it slowly and measured, because we already had our network. If we'd been in the first 1,000 people or 100,000 people in the network, we might have been recruiting those people as fast as possible.

You're talking about creating leadership among your users, not just within your organization. I wonder whether you think that means that the next internet age is going to be defined less by this sort of rose-colored idea that social media is this democratizing force and it's a power equalizer and all voices are equal.

My ideal hope would be we figure out how to essentially have our cake and eat it too. It's really important. The openness of the internet and the [social] media platforms is what really enabled the strong resurgence of the Black Lives Matter movement. Those things are super important for our social progress. We want to keep all that, but we want to ramp down the massive spreading of misinformation, like all the vaccination stuff that says, "Oh, vaccines: bad." It's like, look. If you don't get vaccinated, you're threatening the health of the people around you.

How do we have both, is the challenge. The rose-colored glasses of, "It will just all work out. It doesn't require leadership. Doesn't require intervention. Doesn't require a new invention" is, I think, naive. On the other hand, you want to keep the optimism. You want to keep the: "We've figured out a way to solve this problem." It doesn't mean that there are zero instances of bad things. Just like in society, we try to evolve so we have less crime, less violence, etc. Doesn't mean we get to zero. We just try to get to much less.

You did speak out critically about what you heard from Frances Haugen's disclosures, and you've called on the company to build more transparency, and I hear you have also heard from Mark Zuckerberg about the comments you made. If you were to name about three things that the company could do today to begin working on these issues, what would they be?

It's really good that Facebook actually is, in fact, doing that internal research and analyzing it. Now that it's come out in a kind of historic whistleblowing fashion, which is good, it behooves the company to be much more transparent about, "OK, what did we learn [from] this stuff? What did we do?"

You might say, "Well, we weren't quite certain of the results, so we commissioned new studies." Great. What are the new studies? What does that show? Oh, we're experimenting with the following things in order to make it better. Or, we realized that we could make these trade-offs and would have this beneficial impact within these groups. So I'd say that's one: more transparency.

The second thing is this notion that we are just neutral is, I think, not correct. That doesn't mean that you can't seek objective truth or you can't seek a playing field for lots of different voices. But I think Renée DiResta said it very well: "Freedom of speech is not freedom of reach." People can say the things they need to say, but how much does that end up being in megaphones? And where is the place where that megaphone is playing out in a good way? And how do we make that megaphone, broadly speaking, healthier?

One of the illusions that a lot of these tech companies, including Facebook, talk about is: Well, you don't want us to be editors. It's like, well, you're already editing. We know you're editing. You're editing porn and child porn and terrorism content and other stuff. You're just editing things that everyone agrees with. The real question is: Where should we set the line? Maybe we'll have "No Science Misinformation Month" and see how that goes.

I think the third one is to be clear about: What do you think you're building towards? I want to be building towards the thing where people who are saying things that are true over time are getting more amplified. How do you understand truth? It isn't that truth is easy, but like, there's a reason why we made a bunch of scientific progress. There's a reason why one of the things that has been great about journalism is fact-checking. How does that get reflected more into the common knowledge and the common discourse? That would be the society that I would want to see. And that's obviously the kind of thing that the various great folks at LinkedIn are working on.

I want to talk also about growth, not just in the context of how to grow your audience, but how you decide which global markets to enter. For a long time, LinkedIn was one of the only big social media platforms that was operating in China, which meant obviously complying with increasingly stringent government requests around speech. And the company has since announced — and so have a number of companies just recently — it's pulling out of China, and will replace LinkedIn with a different version that's more stripped down and runs into fewer speech issues. Take us back to 2014. What was the ambition entering China knowing the obstacles that you would likely face there?

Every company should have a defined moral compass that isn't just shareholder value or profitability. It's: What's the world that you're seeking to become? And how are you investing in that world? And for LinkedIn, that's economic opportunity.

So when we're looking at China, we said: Well, there's a whole bunch of people here. There's people who are poor, there's people who are in more fortunate circumstances — how do we enable that as much as we can? So we think we can navigate these speech issues, because while, yes, you can't say "Tiananmen Square" or something else within it, we think that we could still enable everyone who's doing the work, including people who say, "I'm an activist." Maybe I can't put Tiananmen Square in my CV, but I could say I'm an activist for human rights and freedom of speech. Those aren't in a censored category. I still can assemble, find other people to work with, collaborate on projects and discover that zone of stuff, even though you couldn't post about Tiananmen Square in something that's visible within China.

These things are always complex trade-offs. We thought that would be an OK trade-off. Even though as Americans with belief in freedom of speech and truth, we were like: That's a cost, but that's OK, too, to get to this global reach.

What we've discovered over the last few years was, because of the kind of conflicts between societies, Western society and China try to put you in the crossfire. Between them, you end up in a lot of controversy when you're trying to navigate this line. What we're about is the economic stuff. So LinkedIn says: All right, we'll go specifically to job seeking and talent finding. We won't try to do the more broad stuff there because it's under the crossfire, and it's crossfire from both governments and crossfire from activist groups on both sides. We're like: Look, our thing is economic empowerment. We can continue to do that. Even though we'd love to see this other stuff happen. Maybe that's sometime in the future, or maybe never. Let's focus on what our mission is, which is economic empowerment.

If the last internet age has been about all of these companies expanding into these global markets, it seems like the next one is going to be a lot more about these companies navigating the restrictions these governments want to place on them. Having been through the experience you went through in China, what advice do you have for other companies that are right now navigating a lot of really stringent laws around the world?

I have a couple different worries about it. I actually think it's generally good for the internet not to be that fragmented. One of the things that leads to prosperity and peace and understanding around the world is some mutual understanding and mutual trade and mutual interaction. I think that's a good thing to maintain.

One of the challenges is the unexpected effects of regulation. You say, "Well, I'm Country X, and I'm going to set up this regulatory regime." Well, all right, the big companies that can afford the additional tax of your unique regulatory regime persist, and startups, wherever they are, don't. And you disadvantage your own country's companies.

Obviously, collective governance is a challenging thing. We haven't even really managed collective governance about things like climate change, things which are existential threats and risks. So how do you get there?

Ultimately, some small number of countries probably will need to say, "Hey, look, here's our minimum baseline. Let's all roughly agree to this in terms of what the controls are around data protections or other kinds of things." And then that's the thing that we're all going to standardize on at least, and you could do it dynamically.

One of the mistakes about regulation is specifying the technology rather than the outcome. Can you imagine people saying: You should use databases that have the following five features, which happen to be [in] Oracle databases? You have to say: These are the outcomes. This is the governance function that should be executed upon.

I think, if anything, the next five to 10 years are probably going to be [continually] more chaotic than not, because working through this stuff is hard work and requires knowledge of how to do technology strategy, which the vast majority of governments do not [have].

Since it is Election Day, and you're somebody who's been deeply involved in politics, how has your thinking evolved on the role technologists can play in improving elections since you first got involved many years ago?

We want to have a place where we're coming more collectively to truth. As a technologist, you say: We will create an AI, and the AI will know truth. Well, what's the training set for your AI? Other people will contest that — and by the way, just because someone else contests it or just because someone else says it's political doesn't mean it is. If I were going out and yelling, every time someone said something, "That's because the Martians made you say it," it doesn't mean it's because the Martians made you say it. So just saying it's false or biased or political doesn't mean that it is.

In medicine, when it gets to vaccines, it's very simple. It's not that there's zero risk in vaccines ever. But what's the risk of the vaccine versus the risk of no vaccine? You look at those two things together. That's a truth-based, scientific way of looking at it. And that's the thing that you want us to cohere around and get to more depth on, so I would hope for more of that.

I think the important thing for all the technology companies to be talking about is: Here's what we think is a good future that we would like to work towards. It could be: We think these kinds of things could be good virtues. What do you guys think?

I think you have to have dialogue at a society level about what your intentions are. Because simply being quiet allows for this melee where some people think you're trying to destroy democracy and other people think you're conspiring with the alt-right.

Speak to your intent.
