VMware CEO Pat Gelsinger on Kubernetes hype, controlling open source and freeing your CPUs

Kubernetes is an important part of VMware's product strategy, but even its CEO thinks some backers are a little too enthusiastic. Plus, why Google's open-source approach will "backfire," and the long-term impact of chip changes in the data center.

VMware CEO Pat Gelsinger says we're in a Kubernetes hype cycle.

Photo: VMware

VMware CEO Pat Gelsinger has worked for two organizations over a 40-year career in technology and somehow managed to be at the center of the action the entire time.

Gelsinger still keeps a framed mockup of Intel's groundbreaking 486 processor in his office. It's a memory of one of his primary accomplishments as chief architect at Intel in the late 1980s, at a time when the PC was just starting to break through as the dominant computing platform. Three decades later, the world is driven by lightweight mobile devices and massive data centers that take on much of the heavy lifting, and he's been heavily involved in trying to help enterprise companies orchestrate that transition.

VMware's virtualization software allowed companies building web applications amid the rubble of the dot-com era to wring nearly twice as much performance from their hardware investments as was possible just a few years earlier. Virtual machines also became a core building block of cloud computing, which at one point looked like an existential threat to VMware's data center business.

Gelsinger became CEO of VMware in 2012 after three years as president and COO of its parent company, EMC. In the time since, he has averted a downward spiral that some enterprise companies no longer at the cutting edge have suffered. A shrewd partnership with AWS gave existing VMware customers an easy route to cloud services where it made sense, while more recent bets on multicloud and hybrid cloud technology, as well as new AI products in collaboration with Nvidia announced this week, could help clients navigate a new era.

I spoke with Gelsinger earlier this week following the VMworld 2020 conference to discuss whether anybody is actually using Kubernetes, the role of open-source software in the company's product strategy, and the future of Arm in the data center.

Disclosure: My wife works as a contractor for VMware.

This interview has been edited for clarity and brevity.

Describe your current customer base: How many of those customers have yet to adopt the cloud for any of their workloads? Do you think they'll ever move older workloads that they're maybe a little scared to touch?

It's a little bit of a hard question to answer since essentially everybody on the planet runs VMware. But we think of our customers as the enterprise customers, and then the commercial customers.

We'll say that most of the 20,000 [enterprise customers] have some level of workloads running in the cloud. It could be like [Capital One] that has gone aggressively to that end, and others who are very minimally [in the cloud]. We had JPMorgan at VMworld, they are at a few percent and they're in no hurry. And then you also have clearly more regulated customers who are never going to go.

What I'd say is generally most customers are running some [cloud workloads], most customers plan to run more going forward, and I've said I expect this to get to 50/50, or a little bit more than 50% of the workloads running in the cloud.

I've called it the three laws of the hybrid cloud: the laws of physics, the laws of economics and the laws of the land.

So laws of the land would clearly be the public sector: German workloads are never going to leave German-controlled government data centers. Just ain't gonna happen. Health care, financial services, telco — in their core network centers — these are going to be spaces that are going to be [on premises] a long time, mostly for regulation reasons.

Your second would be economics. We had Comcast at VMworld, which said, "Hey, I started running a whole lot of workloads in the public cloud, and it got to be pretty expensive." They're finding that balance point. On average, a per-unit workload coming from the public cloud [to] on premises is on the order of 30% cheaper.

The third is what I call the laws of physics, where we expect … 5G and edge computing to bring a burst of infrastructure at the edge.

When I predicted the crossover point to 50/50, I think four years ago now, I predicted this [would happen in] 2023. COVID might accelerate that a little bit to make it 2022, late '22, but I think you hit that 50/50 point of cloud versus on premises, and I expect the edge to start to moderate that as more and more infrastructure gets built at the edge.

VMware has embraced Kubernetes quite enthusiastically. There are still obviously a lot of use cases for it, but I'm also hearing a backlash against it — not exactly against the product itself, but against how complex it is, and whether it's actually the best answer for companies other than the JPMorgans of the world, the huge multinational companies.

I think you're definitely in a little bit of [a hype] cycle with Kubernetes. What we laid out with our Project Pacific [is] hitting that nail on the head.

If you can run vSphere — and millions of people can run vSphere — you can now run Kubernetes. And that's why [Tuesday's] announcement and the delivery of Tanzu as part of vSphere is such a powerful example: we're making it easy to embrace Kubernetes. So we think that today's activities will actually be a meaningful accelerant not just for VMware, but for the industry's ability to adopt it.

There's a lot of power in an industry standard like this, because today when people go to run a workload on Azure, do they say "are you Kubernetes compliant in your control plane, Azure?" No, they're just saying, "What are some of the services, which ones are available, how do I start doing it?"

Essentially, you're building more proprietary [infrastructure] into your cloud silo. But as we enable more and more native Kubernetes capability, you're actually reducing those unique bindings to proprietary services and APIs, because everybody says, "that's how I want to manage my multicloud environment, including my private clouds going forward: through a common Kubernetes API and control structure."

So that's why things like our Tanzu Mission Control, a multicloud control plane, are saying this will plug into Azure, Amazon, Google, as well as your own private cloud infrastructure. So you really do have more of that multicloud common management, shared services, which also enables people to develop separately from [operations].
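
To make the "common Kubernetes API" idea concrete, here is a minimal sketch using the official Kubernetes Python client: the same few lines of code can query clusters running on any cloud or on premises, with only the kubeconfig context changing. The context names below are hypothetical, not VMware's.

```python
# pip install kubernetes
from kubernetes import client, config

# Hypothetical kubeconfig contexts, one per cluster on each cloud.
CONTEXTS = ["aws-prod", "azure-prod", "gcp-analytics", "onprem-vsphere"]

for ctx in CONTEXTS:
    # Load credentials for this cluster from the local kubeconfig.
    config.load_kube_config(context=ctx)
    v1 = client.CoreV1Api()
    # The identical API call, regardless of which cloud hosts the cluster.
    nodes = v1.list_node()
    print(f"{ctx}: {len(nodes.items)} nodes")
```

Nothing in the loop body depends on the underlying provider, which is the portability argument in a nutshell: the binding is to the Kubernetes API, not to any one cloud's control plane.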

I don't want to turn a talented developer into a beeper-carrying operations person. That's a bad outcome. So we also have to make this very easy for people to do that so they can have real developers do Kubernetes and containers, separate from the operational requirements of being able to do that in the multicloud way.

That sort of addresses the "maybe it's a little too hard to use" thing, but just in terms of the overall question "why do I need this in my business?" I still feel like that's a question that a lot of people are struggling with.

I love the Fiserv example that we talked about at [the SpringOne conference]: a Kubernetes, [a] container, [a] Spring application. They, in 28 days, stood up the small and medium business loan system for all of the stimulus loan capabilities, with millions of loans going through that system. They did 400-plus production code releases over a 45-day period. This isn't like, you know, some cool SaaS company, this is Fiserv, serving the middle of America's banking system for small-business loans.

And when you say, "hey, how did they do that?" Well, we brought them to an opinionated Kubernetes environment built on a common Tanzu infrastructure, containerized applications that you're able to push to production in an operational readiness state.

Also, we hear, and maybe you're not hearing this as much, that there's an increasing backlash against the public cloud expenses that CIOs are seeing. They went into it [saying] "well, this is pretty cool. My developers like it. This is pretty fast. Oh, now I'm scaling these applications. These are becoming pretty expensive. Wow, this is a third of my IT budget. How did that happen? I thought this was cheap and easy."

And no, it was easy and not cheap, in a number of cases. So now this ability to create multicloud, to bring things back on premises, to be able to leverage different clouds for different purposes in a more thoughtful way, we're definitely seeing CIOs resonate with that strategy.

Multicloud is another thing I hear a lot about, but it's a little unclear how many people are actually doing it. Can you give me some sense of how many people are actually running production across multiple clouds or using multiple cloud vendors in a way that didn't involve mergers and acquisitions?

I'd say, for the most part, this is something that is easy for people to talk about, and really hasn't been done on any serious scale. Now, [at VMworld 2020] we did talk about Deutsche Boerse as an example, they had an explicit multicloud strategy. But I'd say that's more the exception than it is the rule at this point in time.

For the most part, when people talk about multicloud today, it's, "I'm running my end-user stuff on Azure, I'm running some of my analytics stuff on Google, I'm using Amazon for my new microservices development environment."

When you listen to the Deutsche Boerse discussion, they would say: Rather than being locked in on an individual cloud, I wanted the vast majority of my development to be on a common VMware environment that's running on Amazon, Azure and Google. So they architected their app strategy very explicitly to not make it dependent on any one cloud, and the large majority of their apps were able to literally be moved from one cloud to another, as well as back on premises.

How important is open source to the future of your business? How do you think about whether or not VMware needs to own and control its own open-source destiny and projects if other companies, as we've seen with Google this year, look to retain control of some of the key projects that they once viewed as destined for a community governance structure?

I think most of those examples, like the Google one, they're just going to backfire on them. Because when you say, "I want open source but I want control," that's somewhat anathema to the open-source community. Why am I participating in your open-source project if I don't get to influence it?

We think for the most part people aren't going to buy that approach. So we're approaching it from much more of an open-handed view, where we're being very explicit. [On Tuesday] we announced Antrea, which is a major open-source networking [project] for Kubernetes. We're the No. 1 contributor to Knative, which will be another aspect of functions-as-a-service for Kubernetes.

We announced the acquisition of SaltStack, and SaltStack has a very vibrant Salt community; one of the largest shares of contributions to the Salt project comes from outside the company.

So we're pretty binary on this. If it's open source, we're going to be very explicit about how we're participating [in] true open source, where we don't control; you influence by contribution.

But we're also very explicit [about] this is where our closed-source products play. This is how we're augmenting our closed source with open source. And I think our reputation in the open-source community is rising very nicely.

Let's talk about the Nvidia deal a little bit. Why is it the time now for data center customers to start considering the types of future AI-driven architectures that you talked about? What is the tipping point for that?

Let's come at it from: What are the problems we're solving? One problem is if I want to use AI today, I'm taking my data sets to the cloud. Sometimes these data sets are the most proprietary governed thing that I have. They're often big and heavy, so I have enormous networking costs, latency, etc., to move them to the cloud. If I keep them on premises, I have to stand up dedicated infrastructure that's uniquely bound to those [data sets].

And what I want to do is have less of my folks working on dedicated infrastructure and more working on my multicloud and my private cloud strategy. And my application developers, they want to be able to develop apps using AI that run everywhere — at the core, in the cloud and on the edge.

Those are the problems that we're solving, because we're saying with this VMware Tanzu layer, we're going to take care of making those AI hardware resources available in a consistent way, irrespective of where you are. So the richest set of AI resources in the world is now becoming standardized on the VMware infrastructure.

Nvidia made another very interesting deal earlier this month to acquire Arm. I have no expectation that you remember this, but many years ago I interviewed you about the history of x86 and whether or not there was a chance that it could be displaced. And at the time, you said that "the burden of compatibility is there, but the value of compatibility overwhelms the cost it brings with it."

This was a long time ago. But what chance do you think Arm servers have to disrupt the x86 world now? Does that value of compatibility still hold true?

That was at least like 15 years ago?

I think it was 13.

So, hey, I've been proven pretty darn right for a long period of time. So what would change that equation?

For one, you now have cloud guys who are at scale who could say, "I'm moving the compatibility burden into my cloud abstraction layer." They are now at such a massive scale that they could say … let's just make it up [that] the Arm server CPU is 30% more dollar-cost power efficient than their x86 equivalent. They are running at such scale, hundreds of thousands of sockets, that they could say, even if there are certain applications that I can't move there, if I get some reasonable applications, I've moved that burden of compatibility to the app layer out of the infra layer. I could flip that cycle of compatibility costs. And of course, once you flip it, it can grow over time.
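
A toy illustration of the economics he is describing: a scheduler inside the cloud abstraction layer routes architecture-portable workloads to cheaper Arm capacity and keeps legacy binaries on x86, so every ported application captures the saving without forcing the whole fleet to move. The 30% cost gap is Gelsinger's made-up number from the quote; the workload fields and prices below are invented for illustration.

```python
X86_COST = 1.00  # normalized dollar cost per CPU-hour on x86
ARM_COST = 0.70  # Gelsinger's hypothetical: Arm 30% more cost-efficient

def place(workload: dict) -> str:
    """Pick the cheapest node pool the workload can actually run on."""
    if "arm64" in workload["supported_archs"]:
        # The app layer already paid the compatibility cost (multi-arch build).
        return "arm64-pool"
    # Legacy binary: the ISA still constrains placement.
    return "x86-pool"

fleet = [
    {"name": "web-frontend", "supported_archs": {"amd64", "arm64"}, "cpu_hours": 1000},
    {"name": "legacy-erp", "supported_archs": {"amd64"}, "cpu_hours": 1000},
]

for w in fleet:
    pool = place(w)
    rate = ARM_COST if pool == "arm64-pool" else X86_COST
    print(f"{w['name']}: {pool}, cost {w['cpu_hours'] * rate:.0f}")
```

This is the flip he describes: once enough applications are architecture-agnostic, the compatibility burden lives in the app layer, not the infrastructure, and the savings compound as more workloads become portable.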

The second example I would point to would be some of the geopolitical challenges. China can bring both the scale of its market and geopolitical pressure to bear on its local suppliers, such that they could just say, "I could care less about how it's going to take you three years to solve those compatibility things. Fix it." They can induce dislocative characteristics into the market, and the Chinese market is big enough now.

The third example would be the one that we just announced. I'm not trying to solve the big compatibility issue, I'm only moving a very, very targeted piece of the workload. Now, guess what: I/O and security in the network, load balancing, firewalls, all that kind of stuff, that's pretty meaningful work; 25% to 30% of all CPU MIPS are being consumed in those workloads.

That's pretty interesting when you look at that data center scale.

So what you'll have now is CPUs, DPUs, and GPUs. GPUs have already won the AI workload, DPUs will now potentially win the I/O and network security workload, which says the CPU burden … all of a sudden I gave you a third of your CPUs for free, right, since I've moved those other workloads into these additional areas.
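
The arithmetic behind "a third of your CPUs for free" holds up as rough math, assuming a DPU absorbs the whole 25% to 30% I/O and security share he cites; real-world offload fractions will vary. A quick back-of-the-envelope check:

```python
# If io_share of all CPU cycles go to I/O, networking and security, and a
# DPU takes over that work, those cycles come back to applications.
for io_share in (0.25, 0.30):
    freed = io_share              # fraction of total cycles returned
    app_before = 1.0 - io_share   # cycles applications get today
    gain = freed / app_before     # relative growth in usable app capacity
    print(f"I/O share {io_share:.0%}: frees {freed:.0%} of cycles, "
          f"~{gain:.0%} more application capacity")
```

At a 30% offload, application capacity grows by roughly 43% relative to what the CPUs deliver today, which is the "third of your CPUs for free" framing from a slightly different angle.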

So those would be some of the characteristics that are sort of changing, but even if all of these effects that I described come into play in a very, very aggressive way, this is a long, long tail.
