Enterprise

Intel’s new companion chip for cloud providers has Arm inside

The newest member of Intel's IPU data-center strategy is a chip built around 16 cores designed by its longtime rival.

Intel's Mount Evans IPU

The new Mount Evans IPU, designed to help cloud providers manage their internal computing needs alongside those of their customers, will come with 16 Arm Neoverse N1 cores.

Image: Intel

Intel turned to an unlikely source for the newest version of its infrastructure processing unit strategy: longtime rival Arm.

The new Mount Evans IPU, designed to help cloud providers manage their internal computing needs alongside those of their customers, will come with 16 Arm Neoverse N1 cores. That's the same core that's at the heart of AWS's Graviton2 processor, one of the greatest threats to Intel's decades-long dominance of the data center market.

But Mount Evans isn't designed to run cloud customer applications like Graviton2. Instead, it's the latest iteration of Intel's attempt to take a page from modern cloud-server designs and build its own companion processors to help cloud providers run their data centers more efficiently.

"We're looking at this in a very pragmatic way," Guido Appenzeller, chief technology officer for Intel's Data Platforms Group, told Protocol. "We make design decisions based on a number of different factors, and in this case, these Arm cores met the design target performance we were looking for."

Still, it's yet another sign of how Intel is changing, more than six months after Pat Gelsinger returned to the company where he played an integral role in shaping the PC era of the tech industry around Intel's x86 instruction set. Almost all modern PC and server software has been designed with those chips in mind, and over the last two decades Intel executives have been reluctant to even acknowledge the existence of alternative instruction sets, let alone discuss their benefits.

Data centers as hotels

Companies that manage their own servers run everything on the processor at the heart of each server. In the very early days of cloud computing, companies like AWS and Microsoft followed a similar strategy, but the overhead required to manage huge data centers and support modern application designs began to overwhelm those processors.

This led to a number of cloud providers piecing together sophisticated networking chips and other co-processors to handle some of that administrative load. Nvidia found a lot of success with chips for this market over the last few years, and Intel committed itself to this design strategy earlier this year with the introduction of its IPUs.

Appenzeller compared Intel's IPU strategy to the way living spaces are designed depending on who owns the space. In your own home, you move between rooms as you like. When you stay in a hotel, you are free to use your own room however you want, but you can't get into your neighbor's room, and facilities such as the dining area and lobby are available for your use but controlled by the owner.

In this analogy, an IPU is the dining area and lobby: facilities you want and expect, but have no desire to control. Intel's Xeon processors or AWS's Graviton2 processors would therefore be like the individual hotel rooms, where customers (the guests) can access and control the activity in their own space.

"We think of this not so much as an offload, but as a dedicated place to run the infrastructure functions and be under the control of the infrastructure operator," Appenzeller said.

Coming down the mountain

Mount Evans will be Intel's first ASIC (application-specific integrated circuit) design for its IPU strategy. Earlier versions, as well as two other new IPUs scheduled to be unveiled Thursday, are built around FPGA chips that customers can configure to suit a variety of needs, but Appenzeller said the ASIC design offers better performance.

It will be able to support up to four Xeon processors running customer applications inside cloud data centers, helping move data into and out of those chips with Intel's networking technology while offering additional computing resources via the Arm Neoverse cores. Intel has made several programmable chips for embedded systems that use Arm cores, but Mount Evans will be one of the company's most prominent endorsements of Arm's technology.

Intel is unwilling to say much more about Mount Evans than it plans to reveal Thursday, including when it expects to start shipping the new IPU. Mount Evans was "designed in collaboration with a large CSP," or cloud-service provider, Intel said, but it declined to confirm which one.

Microsoft has a long history of collaboration on both PCs and servers with Intel, but the chip company said it was working with "a different partner" on Mount Evans. AWS built something similar to Mount Evans called the Nitro system for its own internal use a few years ago.
