Enterprise

Arm’s old CEO thought an IPO spelled doom. Its new CEO is planning one.

Arm CEO Rene Haas told Protocol that Arm will be fine as a standalone company as it focuses on efficient computing and on giving customers a more finished product than a basic chip core design.


Rene Haas is taking Arm on a fresh trajectory.

Photo: Arm

The new path for Arm is beginning to come into focus.

Weeks after Nvidia’s $40 billion bid to acquire Arm from SoftBank collapsed, the appointment of Rene Haas to replace longtime chief executive Simon Segars has set the business on a fresh trajectory. Haas appears determined to shake up the company, with plans to lay off as much as 15% of the staff ahead of taking it public once again by the end of March next year.

It’s hard to argue that Arm isn’t thriving. Last week, the company reported an adjusted profit of $1 billion on revenue of $2.7 billion for fiscal 2021, the latter of which jumped a healthy 35%. Arm’s adjusted profit figure raises questions, as the company didn’t disclose what adjustments it made to arrive at the $1 billion (the company did not respond to a request for further details). Arm counted $1.13 billion in licensing revenue, which comes from companies like Apple that pay to use Arm designs for their custom chips, and $1.54 billion in royalty sales. For the year, the company said, 29.2 billion Arm-based chips shipped.

Protocol caught up with Haas in San Francisco three months after he took the top job, following nearly nine years at the company, most recently running its intellectual property group. He discussed Arm’s plans for the future, its research priorities and upstart RISC-V, the open instruction set architecture behind chip designs that compete with Arm’s. Haas has his work cut out for him: In a now-deleted blog post on its corporate site, his predecessor described going public as an existential threat to the company. An Arm spokesperson declined to comment on the deleted post.

This interview has been edited and condensed for clarity.

Your predecessor Simon said last year, in no uncertain terms, that if Arm went public, it would die. Now Arm plans to IPO, and I know you’re shedding some of the workforce, but it doesn’t solve the fundamental problem that Simon articulated — that the pressure to achieve short-term revenue growth and profitability would suffocate the company’s ability to invest, expand, move fast and innovate. So how are you going to make it work?

It’s a fair question, and I’ll answer it. Talking about the results for a moment, because that does segue into it — we’ve never done over $2 billion of revenue in our history, so now we did $2.6 [billion]. Our non-royalty number was over a billion. We’ve never done over a billion, ever. It was up 60% year-on-year, which is kind of an indication of what demand for products looks like going forward. And then our royalties were $1.5 billion, also a record.

I took over the [intellectual property] business in 2017, and we did a kind of fundamental pivot away from general purpose processors and moved more to market-specific products. At the same time, we started shedding products, like display IP and video IP, that weren’t highly differentiated and had become commoditized. We really doubled down on what I would call the compute platform — the CPU, the GPU, and the business around that.

The restructuring was really not about trying to fit into an EBITDA envelope. It was a combination of looking at whether we have the right profile of [expenses] versus research and development. We need to invest more in R&D, so part of it was creating space to invest more, and that gives me a high degree of confidence — IPO or non-IPO — that we have a very healthy business.

This year, we’ve started to transition to v9 [architecture] royalties, but not in a big way; that will start next year. The v9 royalty rates are better than v8’s, so that gives us confidence. And at the same time, we have these new markets, with hyperscalers such as Amazon and Microsoft, automotive and such. I think we can manage that balance.

It’s a tricky balance, and I’m not going to take issue with what Simon said because those are his words, not mine. But I’m confident that we’ve got a strong business.

You’re thinking of investing more in research and development, of devoting more resources there: How does Arm go about doing that? How do you plan to allocate R&D dollars? Some companies have set specific performance targets, such as a 1,000x increase.

We’re very focused on performance efficiency, because I think one of the things that’s going to drive our growth is something we’ve already seen: More and more chips are using more and more cores.

In a multicore system, you need a high degree of efficiency around performance per watt. And if you think about the data center, or the EV, or the base station, those all need a heck of a lot of performance, but they really need performance per watt. So if Intel is saying 1,000x better performance, we’re certainly going to be competitive on performance, but we’re going to be relentless on performance per watt.

Looking at the data center — if a new data center is going to be built in Ireland, let’s say, for them to get the land and an energy contract, only so much square footage is going to be allowed, [and] there are going to be only so many megawatts they’re allowed. If they’re going to have no compromise on performance, it’s really going to be around performance per square foot and performance per watt. This is why, when [Amazon CEO] Andy Jassy stood on stage and asked, “Why Graviton2?” he talked about 40% better performance in the same power envelope.

More broadly speaking, for us, it’s all about the compute platform, in terms of the GPU, the CPU and machine learning. Arm did have a lot of other products around the periphery, which took a lot of engineering work, and I don’t think they were highly differentiated or added a lot of value. Whereas on the flip side, the software investment required for these ecosystems is kind of insatiable. And that’s also going to be a high degree of focus for us.

Arm used to develop cores and then let other people build their own chips: How is that changing over time? Does Arm develop more of the final design for customers now than it used to?

Increasingly, what we’re finding is that throwing that piece of IP over the wall isn’t going to be enough to ensure a world-class product. There are a lot of things around system design that are increasingly important: the interconnect, the physical design, the memory subsystem. We are being much more prescriptive about how to build a [system-on-chip] using Arm [designs].

As opposed to just a straight IP licensing model — here's the recipe, go forth and build — we're now doing things around subsystems that have essentially allowed people to build better SoCs. That will also give us better performance and performance per watt, because we can now guarantee a certain threshold of performance. We understand that you’re building in TSMC, you’re building in Samsung, you’re building in GlobalFoundries — we do work with the library, and we’re doing more of that ourselves.

When you think of high-performance systems, they’re really hard to build. Our world is disaggregated: There are Cadence tools, there are Synopsys tools, there’s TSMC, there’s a substrate guy. If you leave it up to everybody to figure out what their secret sauce is, you’re going to have some compromises. Since the performance engine is most tied to the CPU, we feel we have an obligation to be a bit more prescriptive in terms of how those products are built.

Is exerting more control over exactly how products are built more lucrative?

It could be — I can’t say; it’s too forward-looking.

With only two dominant chip designs, who do you see as Arm’s biggest competition at this point?

If you just think about the CPU business, the way to think about competition is the instruction set architecture. There are really only three that are in vogue. There’s x86, which only two companies in the world [Intel and AMD] can build. So it’s a competitor, but not in the classic sense, because it’s not open.

There’s only one other open ISA, and that’s RISC-V. And RISC-V is absolutely a competitor to what we do, and they’re unique because they’re an open-source implementation. They’re a very interesting competitor because one of their greatest strengths could be their greatest weakness.

They’re open source, so anyone can build anything they want with it. It can be modified, it can be extended; everyone in the world could have their own RISC-V flavor. But the more you differentiate, and the more you create extensions not adopted by everyone in the ecosystem, the greater the potential for software fragmentation.

For RISC-V, the jury is still out — it’s early. There’s a lot of momentum around RISC-V, no doubt about it. Where we see they’re getting traction is in what I would call the deeply embedded space, where the software externally doesn’t really matter. It’s controlled, in a black box. There’s no developer, there are no software kits in that world. For the hyperscalers, there’s a lot of work that needs to be done around the software optimizations and workloads. And then you get into the question of whether everyone would be happy with their own kind of RISC-V implementation. There has to be a standard.

I think the RISC-V thing will be a very interesting story to see play out.

How do you think about your relationship with Intel? On the one hand, the company is a competitor, but on the other hand, it has opened its factory doors and is willing to make anyone’s chips.

People would be surprised that Intel is a big customer of ours: one of our largest ones, believe it or not. I would love to see Intel Foundry Services be wildly successful, because when you take Arm, which is the most ubiquitous processor in the world, there’s going to be a marriage there somewhere. They cannot be successful without having a strong offering and being very competitive with Arm. On the flip side, for IFS to be successful, we need to work really closely with them. And we do.
