Reid Hoffman on Facebook, anti-vaxxers and LinkedIn leaving China

"Western society and China try to put you in the crossfire."

Reid Hoffman

Reid Hoffman spoke with Protocol about blitzscaling and companies' responsibility.

Photo: David Paul Morris/Bloomberg via Getty Images

LinkedIn entered China in 2014, because, its co-founder Reid Hoffman said, the company wanted to build economic opportunity for people there. Of course, it couldn't have hurt that the country of 1.4 billion was also a major economic opportunity for LinkedIn — or for any company willing to adhere to the government's censorship restrictions. For a while at least, LinkedIn was willing.

But that all changed last month, when LinkedIn pulled the platform out of China amid new speech and data protection requirements. The company plans to replace it with a stripped-down job board.

"What we've discovered over the last few years was, because of the kind of conflicts between societies, Western society and China try to put you in the crossfire," Hoffman told Protocol. "Between them, you end up in a lot of controversy when you're trying to navigate this line."

Hoffman spoke with Protocol as part of the Knight Foundation's symposium on lessons learned during the first internet age, and had a lot to say about how companies should weigh the trade-offs when entering global markets with increasingly stringent laws governing speech. The early Facebook investor and author of several books on how to scale fast also had some advice for Facebook and thoughts on how platforms can harness their users' most sinful impulses for good.

This interview has been edited and condensed for clarity.

You wrote the book "Blitzscaling," which is all about how to grow really, really, really fast into global markets. That really has been a hallmark of some of the most successful tech companies that we think of today. But looking back, what we've also seen these last few years — and particularly these last few months, with the disclosures of Frances Haugen around the internal conflicts between growth and user protections — I wonder whether you believe, still, that all this relentless growth is ultimately beneficial for society.

One of the things I've said about Facebook is it doesn't resemble Big Tobacco, because even as [big as] Facebook is, it has a bunch of positive effects: people connecting with their families, sharing information that isn't just misinformation — anti-vaxx kinds of things, but also a bunch of really good things, unlike, you know, smoking cigarettes.

The thing that sets the blitzscaling clock is competition. And so the fact that we have global competition, and the ones that actually move fast to get to scale are the ones that actually end up setting the norms and setting the rules, is still a truth of reality. I don't think there's any way we're going to get to a global let's-slow-down-business kind of clock, because of the nature of how we build our economies.

Now, I did [write] one of my chapters in "Blitzscaling" on responsible blitzscaling, because part of what I wanted to do was to get people to say: You can still totally emphasize the speed that you need in order to compete and think responsibly. You can think about the big risks. Are you breaking a system? Are you causing real harm to people? As you scale very quickly, you become a multi-threaded organization, and you should have one or more of those threads in your organization — people whose job it is to be predicting where you might be causing something to happen that's really bad, and then thinking about, in advance, possibly preventing it, possibly thinking about how to recover very quickly from it, that kind of thing. I think that's part of what I will be adding to the blitzscaling playbook.

How early on do you think companies need to be integrating that kind of thinking into their work? And do you think that when we look back at the most successful companies of this first internet age, were those people around the table at the right time?

We're all learning. I mean, why I wrote "Blitzscaling" is because: Why does half of the NASDAQ emerge from a 30-mile radius from Palo Alto? It's because of this collective learning about the speed to scale. China obviously also does this in pretty astounding ways. I wanted to share it with the rest of the world, because the more entrepreneurial and more diverse entrepreneurial ecosystem of jobs you have in other places, it's good for creating economies and entrepreneurship in all kinds of places.

Once you [operate] at a global scale, all of a sudden, the rules change. I had some early visibility into that with PayPal, because we grew really fast. Oh boy. We're gonna have embezzlers here, and we're gonna have thieves. When you get to 100 million people in your system, you have all sorts. So how do you create the rules in a way that is still very beneficial for individuals, very beneficial for society and obviates harms?

You wrote about how at LinkedIn, when you opened up your publishing platform, you kept it really tight to about 150 really famous, really established people — Richard Branson, Arianna Huffington — people like that, and you wanted them to model good behavior before you really opened it up. When I was reading that I thought: Well, this sort of feels like Reid taking the opposite of his advice — taking a beat and seeing how it plays out before growing as fast as you possibly could. So tell me a little bit about how you arrived at that decision, given how laser-focused you are on growth?

One of the advantages was that we'd already set up a network. So we had the network to cushion us. If we'd been doing it at the very beginning, we might not have. That beat was given to us.

When I was writing the essay, I was thinking about how the engineering mindset tends to be: We just need to set up a reputation system, we need to set up an algorithm that will do this. And, by the way, those are important things to think about. But then what I realized was actually, in fact, leadership and what counts as leadership is one of the [most important] things — enabling leadership that heads in the direction of the virtues that we want. If we had done that more than designing this kind of completely flat ecosystem, we might have actually been able to even do stuff earlier. The pattern for doing that was enabling that leadership.

That being said, we could also do it slowly and measured, because we already had our network. If we'd been in the first 1,000 people or 100,000 people in the network, we might have been recruiting those people as fast as possible.

You're talking about creating leadership among your users, not just within your organization. I wonder whether you think that means that the next internet age is going to be defined less by this sort of rose-colored idea that social media is this democratizing force and it's a power equalizer and all voices are equal.

My ideal hope would be we figure out how to essentially have our cake and eat it too. It's really important. The openness of the internet and the [social] media platforms is what really enabled the strong resurgence of the Black Lives Matter movement. Those things are super important for our social progress. We want to keep all that, but we want to ramp down the massive spreading of misinformation, like all the vaccination stuff that says, "Oh, vaccines: bad." It's like, look. If you don't get vaccinated, you're threatening the health of the people around you.

How do we have both, is the challenge. The rose-colored glasses of, "It will just all work out. It doesn't require leadership. Doesn't require intervention. Doesn't require a new invention" is, I think, naive. On the other hand, you want to keep the optimism. You want to keep the: "We've figured out a way to solve this problem." It doesn't mean that there's zero instances of bad things. Just like in society, we try to evolve so we have less crime, less violence, etc. Doesn't mean we get to zero. We just try to get to much less.

You did speak out critically about what you heard from Frances Haugen's disclosures, and you've called on the company to build more transparency, and I hear you have also heard from Mark Zuckerberg about the comments you made. If you were to name about three things that the company could do today to begin working on these issues, what would they be?

It's really good that Facebook actually is, in fact, doing that internal research, and is analyzing. Now that it's come out in a kind of historic whistleblowing fashion, which is good, it behooves the company to be much more transparent about, "OK, what did we learn [from] this stuff? What did we do?"

You might say, "Well, we weren't quite certain of the results, so we commissioned new studies." Great. What are the new studies? What does that show? Oh, we're experimenting with the following things in order to make it better. Or, we realized that we could make these trade-offs and would have this beneficial impact within these groups. So I'd say that's one: more transparency.

The second thing is this notion that we are just neutral is, I think, not correct. That doesn't mean that you can't seek objective truth or you can't seek a playing field for lots of different voices. But I think Renée DiResta said it very well: "Freedom of speech is not freedom of reach." People can say the things they need to say, but how much does that end up being in megaphones? And where is the place where that megaphone is playing out in a good way? And how do we make that megaphone, broadly speaking, healthier?

One of the illusions that a lot of these tech companies, including Facebook, talk about is: Well, you don't want us to be editors. It's like, well, you're already editing. We know you're editing. You're editing porn and child porn and terrorism content and other stuff. You're just editing things that everyone agrees with. The real question is: Where should we set the line? Maybe we'll have "No Science Misinformation Month" and see how that goes.

I think the third one is to be clear about: What do you think you're building towards? I want to be building towards the thing where people who are saying things that are true over time are getting more amplified. How do you understand truth? It isn't that truth is easy, but like, there's a reason why we made a bunch of scientific progress. There's a reason why one of the things that has been great about journalism is fact-checking. How does that get reflected more into the common knowledge and the common discourse? That would be the society that I would want to see. And that's obviously the kind of thing that the various great folks at LinkedIn are working on.

I want to talk also about growth, not just in the context of how to grow your audience, but how you decide which global markets to enter. For a long time, LinkedIn was one of the only big social media platforms that was operating in China, which meant obviously complying with increasingly stringent government requests around speech. And the company has since announced — and so have a number of companies just recently — it's pulling out of China, and will replace LinkedIn with a different version that's more stripped down and runs into fewer speech issues. Take us back to 2014. What was the ambition entering China knowing the obstacles that you would likely face there?

Every company should have a defined moral compass that isn't just shareholder value or profitability. It's: What's the world that you're seeking to become? And how are you investing in that world? And for LinkedIn, that's economic opportunity.

So when we're looking at China, we said: Well, there's a whole bunch of people here. There's people who are poor, there's people who are in more fortunate circumstances — how do we enable that as much as we can? So we think we can navigate these speech issues, because while, yes, you can't say "Tiananmen Square" or something else within it, we think that we could still enable everyone who's doing the work, including people who say, "I'm an activist." Maybe I can't put Tiananmen Square in my CV, but I could say I'm an activist for human rights and freedom of speech. Those aren't in a censored category. I still can assemble, find other people to work with, collaborate on projects and discover that zone of stuff, even though you couldn't post about Tiananmen Square in something that's visible within China.

These things are always complex trade-offs. We thought that would be an OK trade-off. Even though as Americans with belief in freedom of speech and truth, we were like: That's a cost, but that's OK, too, to get to this global reach.

What we've discovered over the last few years was, because of the kind of conflicts between societies, Western society and China try to put you in the crossfire. Between them, you end up in a lot of controversy when you're trying to navigate this line. What we're about is the economic stuff. So LinkedIn says: All right, we'll go specifically to job seeking and talent finding. We won't try to do the more broad stuff there because it's under the crossfire, and it's crossfire from both governments and from activist groups on both sides. We're like: Look, our thing is economic empowerment. We can continue to do that. Even though we'd love to see this other stuff happen. Maybe that's sometime in the future, or maybe never. Let's focus on what our mission is, which is economic empowerment.

If the last internet age has been about all of these companies expanding into these global markets, it seems like the next one is going to be a lot more about these companies navigating the restrictions these governments want to place on them. Having been through the experience you went through in China, what advice do you have for other companies that are right now navigating a lot of really stringent laws around the world?

I have a couple different worries about it. I actually think it's generally good for the internet not to be that fragmented. One of the things that leads to prosperity and peace and understanding around the world is some mutual understanding and mutual trade and mutual interaction. I think that's a good thing to maintain.

One of the challenges is the unexpected thing about regulation. You say, "Well, I'm Country X, and I'm going to set up this regulatory regime." Well, all right, the big companies that can afford the additional tax of your unique regulatory regime persist and startups, wherever they are, don't. And you disadvantage your own country's companies.

Obviously, collective governance is a challenging thing. We haven't even really managed collective governance about things like climate change, things which are existential threats and risks. So how do you get there?

Ultimately, some small number of countries probably will need to say, "Hey, look, here's our minimum baseline. Let's all roughly agree to this in terms of what the controls are around data protections or other kinds of things." And then that's the thing that we're all going to standardize on at least, and you could do it dynamically.

One of the mistakes about regulation: can you imagine people saying, "You should use databases that have the following five features that happen to be [in] Oracle databases"? Instead, you have to say: These are the outcomes. This is the governance function that should be executed upon.

If anything, I think the next five to 10 years, whatever, are probably going to be [continually] more chaotic than not, because working through this stuff is hard work and requires knowledge of how to do technology strategy, which the vast majority of governments do not [have].

Since it is Election Day, and you're somebody who's been deeply involved in politics, how has your thinking evolved on the role technologists can play in improving elections since you first got involved many years ago?

We want to have a place where we're coming more collectively to truth. As a technologist, you say we will create an AI, and AI will know truth. Well, what's your training set of AI? Other people will contest that — and by the way, just because someone else contests it or just because someone else says it's political doesn't mean it is. If I was going out and yelling, every time someone said something, "That's because the Martians made you say it," it doesn't mean it's because the Martians made you say it. So just saying it's false or biased or political doesn't mean that it is.

In medicine, when it gets to vaccines, it's very simple. It's not that there's zero risk in vaccines ever. But what's the risk of the vaccine versus the risk of no vaccine? And you look at those two things together. That's what's called a truth and scientific basis way of looking at it. And that's the thing that you want us to cohere and to get to more depth on, so I would hope for more of that.

I think the important thing for all the technology companies to be talking about is: Here's what we think is a good future that we would like to work towards. It could be: We think these kinds of things could be good virtues. What do you guys think?

I think you have to have dialogue at a society level about what your intentions are. Because simply being quiet allows for this melee of some people thinking that you're trying to destroy democracy, while other people think that you're conspiring with the alt-right.

Speak to your intent.
