
VMware CEO Pat Gelsinger on Kubernetes hype, controlling open-source and freeing your CPUs

Kubernetes is an important part of VMware's product strategy, but even its CEO thinks some backers are a little too enthusiastic. Plus, why Google's open-source approach will "backfire," and the long-term impact of chip changes in the data center.

VMware CEO Pat Gelsinger says we're in a Kubernetes hype cycle. Photo: VMware

VMware CEO Pat Gelsinger has worked for two organizations over a 40-year career in technology and somehow managed to be at the center of the action the entire time.

Gelsinger still keeps a framed mockup of Intel's ground-breaking 486 PC processor in his office. It's a memory of one of his primary accomplishments as chief architect for Intel in the late 1980s, at a time when the PC was just starting to break through as the dominant computing platform. Three decades later, the world is driven by lightweight mobile devices and massive data centers that take on much of the heavy lifting, and he's been heavily involved in trying to help enterprise companies orchestrate that transition.

VMware's virtualization software allowed companies building web applications amid the rubble of the dot-com era to wring nearly twice as much performance from their hardware investments as was possible just a few years earlier. Virtual machines also became a core building block of cloud computing, which at one point looked like an existential threat to VMware's data center business.

Gelsinger became CEO of VMware in 2012 after three years as president and COO of its parent company, EMC. In the time since, he has averted a downward spiral that some enterprise companies no longer at the cutting edge have suffered. A shrewd partnership with AWS gave existing VMware customers an easy route to cloud services where it made sense, while more recent bets on multicloud and hybrid cloud technology, as well as new AI products in collaboration with Nvidia announced this week, could help clients navigate a new era.

I spoke with Gelsinger earlier this week following the VMworld 2020 conference to discuss whether anybody is actually using Kubernetes, the role of open-source software in the company's product strategy, and the future of Arm in the data center.

Disclosure: My wife works as a contractor for VMware.

This interview has been edited for clarity and brevity.

Describe your current customer base: How many of those customers have yet to adopt the cloud for any of their workloads? Do you think they'll ever move older workloads that they're maybe a little scared to touch?

It's a little bit of a hard question to answer since essentially everybody on the planet runs VMware. But we think of our customers as the enterprise customers, and then the commercial customers.

We'll say that most of the 20,000 [enterprise customers] have some level of workloads running in the cloud. It could be like [Capital One] that has gone aggressively to that end, and others who are very minimally [in the cloud]. We had JPMorgan at VMworld, they are at a few percent and they're in no hurry. And then you also have clearly more regulated customers who are never going to go.

What I'd say is generally most customers are running some [cloud workloads], most customers plan to run more going forward, and I've said I expect this to get to 50/50, or a little bit more than 50% of the workloads running in the cloud.

I've called it the three laws of the hybrid cloud: the laws of physics, the laws of economics and laws of the land.

So laws of the land would clearly be the public sector: German workloads are never going to leave German-controlled government data centers. Just ain't gonna happen. Health care, financial services, telco — in their core network centers — these are going to be spaces that are going to be [on premises] a long time, mostly for regulation reasons.

Your second would be economics. We had Comcast at VMworld, and they said, "Hey, I started running a whole lot of workloads in the public cloud, and it got to be pretty expensive." They're finding that balance point. On average, a per-unit workload coming from the public cloud [to] on premises is on the order of 30% cheaper.

The third is what I call the laws of physics, where we expect … 5G and edge computing to bring a burst of infrastructure at the edge.

When I predicted the crossover point to 50/50, I think four years ago now, I predicted this [would happen in] 2023. COVID might accelerate that a little bit to make it 2022, late '22, but I think you hit that 50/50 point of cloud versus on premises, and I expect the edge to start to moderate that as more and more infrastructure gets built at the edge.

VMware has embraced Kubernetes quite enthusiastically. There are still obviously a lot of use cases for it, but I'm also hearing a backlash against it — not exactly the product itself, but how complex it is, and whether or not it's actually the best answer for companies other than the JPMorgans of the world, or the huge multinational companies.

I think you're definitely in a little bit of [a hype] cycle with Kubernetes. What we laid out with our Project Pacific [is] hitting that nail on the head.

If you can run vSphere — and millions of people can run vSphere — you can now run Kubernetes. And that's why [Tuesday's] announcement and the delivery of Tanzu as part of vSphere is such a powerful example, because we're making it easy to embrace Kubernetes. So we think that today's activities will actually be a meaningful accelerant not just for VMware, but for the industry to be able to adopt it.

There's a lot of power in an industry standard like this, because today when people go to run a workload on Azure, do they say "are you Kubernetes compliant in your control plane, Azure?" No, they're just saying, "What are some of the services, which ones are available, how do I start doing it?"

Essentially, you're building more proprietary [infrastructure] into your cloud silo. But as we enable more and more native Kubernetes capability, you're actually reducing those unique bindings to proprietary services and APIs, because everybody says, "that's how I want to manage my multicloud environment, including my private clouds going forward: through a common Kubernetes API and control structure."

So that's why things like our Tanzu Mission Control, a multicloud control plane, are saying this will plug into Azure, Amazon, Google, as well as your own private cloud infrastructure. So you really do have more of that multicloud common management, shared services, which also enables people to develop separately from [operations].
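To make the "common Kubernetes API" point concrete, here is a minimal sketch (an editorial illustration, not a VMware product example) that uses the standard Python Kubernetes client to ask the same question of clusters running on different clouds. The kubeconfig context names are hypothetical placeholders; the only assumption is that each cluster, wherever it runs, exposes the ordinary Kubernetes API.

# Minimal sketch: one Kubernetes API across several clouds.
# The context names below are hypothetical placeholders for
# clusters running on AWS, Azure and on premises.
from kubernetes import client, config

CONTEXTS = ["aws-cluster", "azure-cluster", "onprem-cluster"]

def list_deployments(context: str) -> None:
    # Load credentials for this cluster from the local kubeconfig.
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{context}: {len(deployments.items)} deployments")
    for d in deployments.items:
        print(f"  {d.metadata.namespace}/{d.metadata.name}")

if __name__ == "__main__":
    # The loop body is identical no matter which cloud, or which
    # private data center, hosts the cluster.
    for ctx in CONTEXTS:
        list_deployments(ctx)

Nothing in the script is cloud-specific, which is the argument Gelsinger is making: the more of the stack that is expressed through the common Kubernetes API, the fewer bindings there are to any one provider's proprietary services.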

I don't want to turn a talented developer into a beeper-carrying operations person. That's a bad outcome. So we also have to make this very easy for people to do that so they can have real developers do Kubernetes and containers, separate from the operational requirements of being able to do that in the multicloud way.

That sort of addresses the "maybe it's a little too hard to use" thing, but just in terms of the overall question "why do I need this in my business?" I still feel like that's a question that a lot of people are struggling with.

I love the FiServ example that we talked about at [the SpringOne conference]: a Kubernetes, [a] container, [a] Spring application. They, in 28 days, stood up the small and medium business loan system for all of the stimulus loan capabilities, with millions of loans going through that system. They did 400-plus production code releases over a 45-day period. This isn't like, you know, some cool SaaS company, this is FiServ, serving the middle of America's banking system for small-business loans.

And when you say, "hey, how did they do that?" Well, we brought them to an opinionated Kubernetes environment built on a common Tanzu infrastructure, with containerized applications that you're able to push to production in an operationally ready state.

Also, and maybe you're not hearing this as much, we hear there's an increasing backlash against the public cloud expenses that CIOs are seeing. They went into it [saying] "well, this is pretty cool. My developers like it. This is pretty fast. Oh, now I'm scaling these applications. These are becoming pretty expensive. Wow, this is a third of my IT budget. How did that happen? I thought this was cheap and easy."

And no, it was easy and not cheap, in a number of cases. So now this ability to create multicloud, to bring things back on premises, to be able to leverage different clouds for different purposes in a more thoughtful way, we're definitely seeing CIOs resonate with that strategy.

Multicloud is another thing I hear a lot about, but it's a little unclear how many people are actually doing it. Can you give me some sense of how many people are actually running production across multiple clouds or using multiple cloud vendors in a way that didn't involve mergers and acquisitions?

I'd say, for the most part, this is something that is easy for people to talk about, and really hasn't been done on any serious scale. Now, [at VMworld 2020] we did talk about Deutsche Boerse as an example, they had an explicit multicloud strategy. But I'd say that's more the exception than it is the rule at this point in time.

For the most part, when people talk about multicloud today, it's, "I'm running my end-user stuff on Azure, I'm running some of my analytics stuff on Google, I'm using Amazon for my new microservices development environment."

When you listen to the Deutsche Boerse discussion, they would say: some things are still locked in on an individual cloud, but I wanted the vast majority of my development to be on a common VMware environment that's running on Amazon, Azure and Google. So they architected their app strategy very explicitly to not make it dependent on any one cloud, and the large majority of their apps were able to literally be moved from one cloud to another, as well as back on premises.

How important is open source to the future of your business? How do you think about whether or not VMware needs to own and control its own open-source destiny and projects if other companies, as we've seen with Google this year, look to retain control of some of the key projects that they once viewed as destined for a community governance structure?

I think most of those examples, like the Google one, are just going to backfire on them. Because when you say, "I want open source but I want control," that's somewhat of an anathema to the open-source community. Why am I participating in your open-source project if I don't get to influence it?

We think for the most part people aren't going to buy that approach. So we're approaching it from much more of an open-handed view, where we're being very explicit. [On Tuesday] we announced Antrea, which is a major open-source networking [project] for Kubernetes. We're the No. 1 contributor to Knative, which will be another aspect of functions-as-a-service for Kubernetes.

We announced the acquisition of SaltStack, and SaltStack has a very vibrant Salt community. It has one of the largest numbers of contributions coming from outside the company to the Salt project.

So we're pretty binary on this. If it's open source, we're going to be very explicit about how we're participating [in] true open source, where we don't control; you influence by contribution.

But we're also very explicit [about] this is where our closed-source products play. This is how we're augmenting our closed source with open source. And I think our reputation in the open-source community is rising very nicely.

Let's talk about the Nvidia deal a little bit. Why is it the time now for data center customers to start considering the types of future AI-driven architectures that you talked about? What is the tipping point for that?

Let's come at it from: What are the problems we're solving? One problem is if I want to use AI today, I'm taking my data sets to the cloud. Sometimes these data sets are the most proprietary governed thing that I have. They're often big and heavy, so I have enormous networking costs, latency, etc., to move them to the cloud. If I keep them on premises, I have to stand up dedicated infrastructure that's uniquely bound to those [data sets].

And what I want to do is have less of my folks working on dedicated infrastructure and more working on my multicloud and my private cloud strategy. And my application developers, they want to be able to develop apps using AI that run everywhere — at the core, in the cloud and on the edge.

Those are the problems that we're solving, because we're saying with this VMware Tanzu layer, we're going to take care of making those AI hardware resources available in a consistent way, irrespective of where you are. So the richest set of AI resources in the world is now becoming standardized on the VMware infrastructure.

Nvidia made another very interesting deal earlier this month to acquire Arm. I have no expectation that you remember this, but many years ago I interviewed you about the history of x86 and whether or not there was a chance that it could be displaced. And at the time, you said that "the burden of compatibility is there, but the value of compatibility overwhelms the cost it brings with it."

This was a long time ago. But what chance do you think Arm servers have to disrupt the x86 world now? Does that value of compatibility still hold true?

That was at least like 15 years ago?

I think it was 13.

So, hey, I've been proven pretty darn right for a long period of time. So what would change that equation?

For one, you now have cloud guys who are at scale who could say, "I'm moving the compatibility burden into my cloud abstraction layer." They are now at such a massive scale that they could say … let's just make it up [that] the Arm server CPU is 30% more dollar-cost power efficient than their x86 equivalent. They are running at such scale, hundreds of thousands of sockets, that they could say, even if there are certain applications that I can't move there, if I get some reasonable applications, I've moved that burden of compatibility to the app layer out of the infra layer. I could flip that cycle of compatibility costs. And of course, once you flip it, it can grow over time.
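To put rough numbers on that scale argument: only the "30% more dollar-cost power efficient" figure and the "hundreds of thousands of sockets" framing come from the quote above; the per-socket cost in this back-of-the-envelope Python sketch is an illustrative assumption.

# Back-of-the-envelope version of the cloud-scale argument.
# Only the 30% efficiency figure and the rough fleet size come
# from the interview; the per-socket cost is an assumption.
X86_COST_PER_SOCKET = 10_000     # assumed annualized cost per socket, USD
ARM_EFFICIENCY_GAIN = 0.30       # "30% more dollar-cost power efficient"
FLEET_SOCKETS = 300_000          # "hundreds of thousands of sockets"

arm_cost_per_socket = X86_COST_PER_SOCKET * (1 - ARM_EFFICIENCY_GAIN)
annual_savings = (X86_COST_PER_SOCKET - arm_cost_per_socket) * FLEET_SOCKETS
print(f"Illustrative annual savings: ${annual_savings:,.0f}")

Under those assumptions the savings land around $900 million a year, which is the kind of number that makes it worthwhile for a hyperscaler to absorb the compatibility burden in its own cloud abstraction layer.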

The second example I would point to would be some of the geopolitical challenges. China can induce both the scale of their market and geopolitical pressure into their local suppliers that they could just say, "I could care less about how it's going to take you three years to solve those compatibility things. Fix it." They can induce dislocative characteristics into the market, and the Chinese market is big enough now.

The third example would be the one that we just announced. I'm not trying to solve the big compatibility issue, I'm only moving a very, very targeted piece of the workload. Now, guess what: I/O and security in the network, load balancing, firewalls, all that kind of stuff, that's pretty meaningful work; 25% to 30% of all CPU MIPS are being consumed in those workloads.

That's pretty interesting when you look at that data center scale.

So what you'll have now is CPUs, DPUs, and GPUs. GPUs have already won the AI workload, DPUs will now potentially win the I/O and network security workload, which says the CPU burden … all of a sudden I gave you a third of your CPUs for free, right, since I've moved those other workloads into these additional areas.
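The arithmetic behind "a third of your CPUs for free" is simple enough to spell out. A minimal sketch, using the 25% to 30% offload figure quoted above; the fleet size is an illustrative assumption.

# Worked version of the "third of your CPUs for free" claim.
# The offload share comes from the interview; the CPU count is
# an illustrative assumption.
TOTAL_CPUS = 10_000          # assumed server CPUs in a data center
OFFLOAD_SHARE = 0.30         # I/O, security, load balancing, firewalls

cpus_freed = TOTAL_CPUS * OFFLOAD_SHARE
app_capacity_before = TOTAL_CPUS * (1 - OFFLOAD_SHARE)
headroom_gain = TOTAL_CPUS / app_capacity_before - 1

print(f"CPUs effectively freed by DPU offload: {cpus_freed:,.0f}")
print(f"Extra application headroom: {headroom_gain:.0%}")

Freeing 30% of the fleet works out to roughly 43% more cycles available to applications on the CPUs you already own, which is the sense in which the offload "gives you" CPUs.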

So those would be some of the characteristics that are sort of changing, but even if all of these effects that I described come into play in a very, very aggressive way, this is a long, long tail.
