Enterprise

HPE's CEO Antonio Neri: Why cloud computing is like the Hotel California

Fresh off a bout with coronavirus, Neri is scrambling to find new sources of revenue growth to offset HPE's declines in the traditional IT business.


HPE CEO and President Antonio Neri announced in June that he had tested positive for coronavirus and went into a two-week quarantine.

Photo: Courtesy of HPE

Dealing with the fallout of the worst pandemic the world has seen in over 100 years would be enough for any CEO to handle this year. For HPE CEO and President Antonio Neri, it hit a little closer to home.

Neri announced in June that he had tested positive for coronavirus and went into a two-week quarantine. Now, in early August, he's fully recovered and ready to deal with what might be a greater challenge: transforming his company from a server vendor to a service provider.

HPE is betting heavily on a product called GreenLake to launch it into the cloud era, hoping that its customers, and other companies that have lagged in the transition to cloud computing, will pay for a service that helps them manage computing and storage resources across their own data centers and public cloud providers. The "low-hanging fruit" of cloud business, as Neri put it, has already been plucked, and HPE is one of several companies, including VMware and IBM's Red Hat, vying for the rest of that bounty.

The problem is that revenue from HPE's core businesses is declining at an alarming clip, down 16% during its last fiscal quarter. A fair amount of the decline in that quarter can be chalked up to supply-chain disruptions and economic uncertainty, but HPE needs to start making more money from its modern businesses to offset those declines.

In an interview with Protocol earlier this week, Neri talked about his COVID-19 experience, why cloud computing is a little like the Eagles' "Hotel California," and the future of the hardware business.

This interview has been edited and condensed for clarity.

How are you feeling? I've talked about coronavirus and COVID-19 with dozens of CEOs over the past five months, and you're the first one I've talked to who has been directly affected by it.

I'm doing great. Thank you for asking. I guess I got a mild case.

It started on a Friday: I started feeling a little bit tired, and then by Sunday had some fever. On Monday I tested positive and obviously went into quarantine. But the next Monday or Tuesday, a week [or] eight days after I tested positive, I was feeling good. You know, throughout the period, you're always tired. You want to sleep the whole time, you have a fever on and off. But I never had any respiratory [problems] or complications.

And from there on, there's been no problems for the last month or so.

Does your experience change the way that you've thought about managing the company during this period of time, in terms of working from home or the way offices will be structured at HPE?

I don't believe that 50% of employees will ever come back to the office to do their jobs, thinking about [it as] coming to a desk, docking your PC and doing your daily job. What we think we're going to have is a more collaborative, innovative set of centers, where we will have space for people to come, but not to do the actual daily job: more to collaborate and do meetings and planning, and obviously to drive the innovation, which is very important in the context of the culture, the fabric of a company.

When it comes to GreenLake, is the idea behind that to basically transition current compute customers, as they digitally transform and need some help doing that?

What we see is a market-based transition from a capex to a consumption-based model. And the consumption-based model has several elements: a subscription base, where you subscribe to the software to deliver some sort of service behind it, or a utility-based model, which means you pay only for what you consume. The public cloud brought that experience to the market: it starts by renting infrastructure, and then paying only for what you consume, whether it's per virtual machine, per container, or per unit of memory or storage.
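The capex-versus-utility distinction Neri describes can be sketched with toy numbers. Everything below — the rates, the hardware price, the workload size — is an illustrative assumption, not HPE, GreenLake or public-cloud pricing; the point is only that a lightly used workload can cost far less under pay-per-use than under outright ownership.

```python
# Hypothetical capex vs. consumption-based cost comparison.
# All figures are made-up illustrative numbers, not real pricing.

def capex_cost(upfront_hardware: float, years: int, annual_opex: float) -> float:
    """Total cost of buying infrastructure outright and running it."""
    return upfront_hardware + years * annual_opex

def consumption_cost(vm_hours: float, rate_per_vm_hour: float,
                     gb_months: float, rate_per_gb_month: float) -> float:
    """Utility model: pay only for the VM hours and storage actually consumed."""
    return vm_hours * rate_per_vm_hour + gb_months * rate_per_gb_month

# A lightly used workload: 10 VMs running 8 hours/day for 3 years,
# plus 500 GB of storage held for 36 months.
used = consumption_cost(vm_hours=10 * 8 * 365 * 3, rate_per_vm_hour=0.05,
                        gb_months=500 * 36, rate_per_gb_month=0.02)
owned = capex_cost(upfront_hardware=60_000, years=3, annual_opex=10_000)
print(f"consumption: ${used:,.0f}, capex: ${owned:,.0f}")
```

Under these assumed numbers the consumption model comes out far cheaper; a workload running flat-out around the clock would narrow or reverse the gap, which is exactly why the billing model matters.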

The problem is the low-hanging fruit of that, meaning the workloads and applications that were able to move to the public cloud, has already [been] taken. Now we're talking about production-scale workloads where the data aspect is pretty significant. Data has tremendous gravity. And when you look at the cost model of what you pay in a public cloud versus what you pay on prem, you realize that it is no longer the cost of compute … it is about the cost of egressing data back and forth.

Many applications need the data to be close because of latency, because of compliance, or because of the type of workload you're talking about: examples include AI and machine learning, big-data analytics, simulation and modeling, where the data needs to be close to the compute. Otherwise, you're paying an enormous bill by egressing data back and forth. When you check in to the public cloud, it is like checking in to the Hotel California: You check in and never check out, because that cost is ginormous.
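The egress arithmetic behind the "Hotel California" line is simple to sketch. The $0.09/GB rate and the 50 TB/month workload below are illustrative assumptions, not any provider's actual pricing, but they show how moving data out of a cloud can come to dominate the bill for data-heavy workloads.

```python
# Back-of-the-envelope sketch of the egress-cost argument.
# Rates and volumes are illustrative assumptions, not real cloud pricing.

def monthly_egress_cost(gb_moved_per_month: float, rate_per_gb: float) -> float:
    """Cost of moving data out of a public cloud each month."""
    return gb_moved_per_month * rate_per_gb

# A data-heavy analytics workload pulling 50 TB back on prem every month,
# at an assumed $0.09/GB egress rate.
egress = monthly_egress_cost(gb_moved_per_month=50_000, rate_per_gb=0.09)
print(f"egress alone: ${egress:,.0f}/month")
```

A recurring charge of that size, paid every month for as long as the data lives in the cloud, is the "never check out" cost Neri is pointing at: it scales with data volume, not with compute.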

To me, cloud is not a destination, it is an experience. With GreenLake, we bring that cloud experience for all your apps and data. If you have it in your data center, we can automate everything, and we can cloudify everything, and you can [also] pay in your data center for what you consume. So you don't have to invest in massive amounts of capex, where you buy compute and storage, then have your people put it together and automate everything, and then spend an enormous amount of opex running it.

We can do all of that through GreenLake, where we bring the compute and storage we have dedicated for that workload, so we can take cost out.

You said you had 1,000 customers who are signed up for it. What percentage of that 1,000 were previous HPE customers?

I don't have the exact number, but in general, obviously, many, many customers came through the installed base that we have. And a lot of this is a combination of both legacy workloads and new workloads. Legacy workloads, because it's hard to move them, but they still wanted that as-a-service experience. And then new workloads, an example of this is machine-learning ops as-a-service.

Earlier this year, I talked to Oxide Computer, a company working on next-generation server hardware. It's very, very early for them, but it got me thinking about the basic server, the compute part of your business, which I believe is still your largest. Do you think there is innovation left in that segment?

There is no question that general-purpose compute has been commoditized by the Intels and AMDs of the world. I've been driving that roadmap for a number of years, particularly with Moore's law. And now, obviously, you run into issues with Moore's law, as we saw a couple of weeks ago, where fabs go from 14 [nanometers] to 10, to seven, maybe to four going forward. But that's not easy, and Intel is obviously having some challenges in the short term.

The problem is, you start to run into some issues, because the more you put closer to the CPU, the more you close that ecosystem. And there's less innovation [possible] around what the future could be.

If you think about it, we had a mainframe, then we went to the PC client/server, then the internet came along, and we connected the offices. And now we live in a mobile-first, cloud-first approach where your smartphone is your terminal, instead of that green terminal that some of us used in the past.

What I see now is the cloud moving closer to where the action is. To me, the action is where we live, where we work, in hospitals and manufacturing floors, in cars or autonomous vehicles.

It is easier to move the compute to the data, than the data to the compute. It's just the physics, the physics and cost. The next computer architectures will be much more data-driven than CPU-driven.
