Enterprise

Cerebras just built a big chip that could democratize AI

Chip startup Cerebras has developed a nearly foot-wide piece of silicon — typical chips measure in millimeters — that makes training AI models cheaper and easier.

A wafer-sized chip made by Cerebras Systems: at the core of the company’s pitch is a chip roughly the size of a dinner plate.

Photo: Cerebras Systems

Inside a conference room at a Silicon Valley data center last week, chip startup Cerebras Systems founder and CEO Andrew Feldman demonstrated how the company’s technology lets users switch between deployed versions of an AI natural language model in a matter of moments, a task that usually takes hours or days.

“So we’ve made it 15 keystrokes to move among these largest models that have ever been described on a single machine,” Feldman said.

This, to Feldman and Cerebras, represents a triumph worth noting. Cerebras claims the system that achieved it has also accomplished a world first: training an entire 20-billion-parameter model on a single, nearly foot-wide superchip. Without its technology, the company said, scaling an AI model from 1 billion to 20 billion parameters might require users to add more server hardware and reconfigure the racks inside a data center.
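To see why that is notable, consider some back-of-the-envelope memory math; the byte counts below are common rules of thumb for mixed-precision training, not figures published by Cerebras.

```python
# Rough memory footprint of training a 20-billion-parameter model.
# Byte counts are rule-of-thumb assumptions, not Cerebras' figures:
# 2 bytes for fp16 weights, 2 for gradients, 12 for fp32 Adam state.
params = 20e9
bytes_per_param = 2 + 2 + 12
total_gb = params * bytes_per_param / 1e9   # ~320 GB before activations
gpu_memory_gb = 80                          # one high-end GPU today

print(f"training state: ~{total_gb:,.0f} GB")
print(f"devices needed just to hold it: ~{total_gb / gpu_memory_gb:.0f}")
```

Even before counting activations, the training state outgrows any single conventional accelerator, which is why scaling up usually means partitioning the model across many devices.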

Training a natural language AI model on a single chip makes the process considerably cheaper and delivers performance an order of magnitude better than Nvidia’s flagship graphics processor-based systems, Feldman said. The idea is to give researchers and organizations with tiny budgets — in the tens of thousands of dollars range — access to AI training tools that were previously available only to much larger organizations with lots of money.

“Models have grown really fast in this area. Language processing, and the challenges of delivering compute for these models, is enormous,” Feldman said. “We sort of have made this class of model practical, useful to a whole slice of the economy that couldn’t previously do interesting work.”

The AI models Feldman is talking about are, at bottom, ways of organizing mathematical calculations: breaking them into steps, then regulating the communication between those steps. The point is to train a model until it begins to make accurate predictions — the next piece of code that should be written, what constitutes spam and so on.
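As a toy illustration of that description — calculations broken into steps, the steps’ outputs feeding one another, adjusted until predictions improve — here is a minimal sketch in NumPy. It is a generic gradient-descent classifier on made-up "spam" data, not Cerebras code:

```python
import numpy as np

# Toy "spam" classifier: calculations in steps, tuned by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # 200 fake emails, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic spam labels

w = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # step 1 feeds step 2: sum, then squash
    grad = X.T @ (p - y) / len(y)      # how wrong the prediction is, per weight
    w -= 0.1 * grad                    # nudge the weights toward accuracy

accuracy = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.0%}")
```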

AI models are typically large to begin with, but those built around language tend to be even larger. For language models, context — more text, such as adding an author’s entire body of work to a model that began with a single book — is crucial, but that context also makes them far more complex to operate. Market leader Nvidia estimates that AI tasks have spurred a 25-fold increase in the need for processing power every two years.
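Taken at face value, that growth rate compounds quickly; a short calculation makes the point (the time spans are only for illustration):

```python
# Compounding Nvidia's estimate: 25x more processing power every two years.
for years in (2, 4, 6, 8):
    growth = 25 ** (years / 2)
    print(f"after {years} years: ~{growth:,.0f}x the compute demand")
# 25x after two years becomes roughly 390,625x after eight.
```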

This exponential increase has led companies like Cerebras and others to chase AI as a potential market. For years, hardware investments were seen as bad bets among venture capitalists who were only willing to fund a few promising ideas. But as it became clear that AI as a class of computation would open the door for fresh ideas beyond the general purpose processors made by the likes of Intel and Nvidia, a new class of startups was born.

Cerebras — its name a play on cerebrum, the Latin for “brain” — is one of those startups. Since founding the company in 2015, Feldman and his team, which includes a number of AMD veterans in key technology roles, have raised roughly $735 million — including funding from In-Q-Tel, the CIA’s venture arm, the CEO said — at a $4.1 billion valuation.

Chips on the plate

At the core of Cerebras’ pitch is a chip that is roughly the size of a dinner plate, or an entire foot-wide silicon wafer, called the Wafer Scale Engine.

The idea of a wafer-size chip like the one that powers Cerebras’ systems isn’t novel; similar designs have been floating around for decades. Trilogy Systems’ failed attempt in the early 1980s — the company raised roughly $750 million in today’s dollars — is one notable example, and IBM and others have studied the idea but never produced a product.

But together with TSMC, Cerebras settled on a design that could be fabricated into a functioning wafer-size chip. In some ways, Cerebras is almost two startups stuck together: one tackling the growing challenge of AI compute, the other pulling off the technological feat of producing a useful chip the size of a wafer.

A Cerebras CS-2 system running inside a data center. Photo: Max A. Cherney/Protocol

The current generation of the chip, which Cerebras calls the WSE-2, can offer considerable performance improvements over stringing together the multiple graphics chips otherwise needed to muster the computational horsepower to train some of the largest AI models, according to Feldman.

“So it's unusual for a startup to have deep fab expertise, [but] we have profound expertise,” Feldman said. “And we had an idea of how they could, within their permitted flexibility in their flow, fit our innovation.”

The advantage of building a chip of that size is that it lets Cerebras pack the performance of dozens of other server chips — roughly 80 graphics processors, for some large AI models — onto a single piece of silicon. Doing so makes the system considerably faster, in part because data can move more quickly across a single chip than across a network of dozens of chips.
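A crude latency comparison illustrates the point; the bandwidth and latency figures below are generic ballpark assumptions for on-die fabrics and cluster interconnects, not measured numbers from Cerebras or Nvidia:

```python
# Ballpark: moving 1 GB of activations on-die vs. over a cluster link.
# All figures are generic assumptions, not vendor measurements.
data_gb = 1.0
on_chip_bw_gbs = 20_000      # on-die fabric: tens of TB/s aggregate
network_bw_gbs = 25          # node-to-node link: tens of GB/s
network_latency_s = 5e-6     # per-hop latency for an off-chip link

on_chip_s = data_gb / on_chip_bw_gbs
network_s = data_gb / network_bw_gbs + network_latency_s
print(f"on-chip: {on_chip_s * 1e6:10.1f} microseconds")
print(f"network: {network_s * 1e6:10.1f} microseconds")
print(f"ratio:   ~{network_s / on_chip_s:.0f}x slower off-chip")
```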

"[Our] machine is built for one type of work,” Feldman said. “If you want to take the kids to soccer practice, no matter how shitty they are to drive, the minivan is the perfect car. But if you've got your minivan and you try and move two-by-fours and 50-pound sacks of concrete, you realize what a terrible machine it is for that job. [Our chip] is a machine for AI.”

This story was updated to correct the amount of money raised by Trilogy Systems.

Fintech

Judge Zia Faruqui is trying to teach you crypto, one ‘SNL’ reference at a time

His decisions on major cryptocurrency cases have quoted "The Big Lebowski," "SNL," and "Dr. Strangelove." That’s because he wants you — yes, you — to read them.

The ways Zia Faruqui (right) has weighed in on cases that have come before him can give lawyers clues as to which legal frameworks will pass muster.

Photo: Carolyn Van Houten/The Washington Post via Getty Images

“Cryptocurrency and related software analytics tools are ‘The wave of the future, Dude. One hundred percent electronic.’”

That’s not a quote from "The Big Lebowski" — at least, not directly. It’s a quote from a Washington, D.C., district court memorandum opinion on the role cryptocurrency analytics tools can play in government investigations. The author is Magistrate Judge Zia Faruqui.

Veronica Irwin

Veronica Irwin (@vronirwin) is a San Francisco-based reporter at Protocol covering fintech. Previously she was at the San Francisco Examiner, covering tech from a hyper-local angle. Before that, her byline was featured in SF Weekly, The Nation, Techworker, Ms. Magazine and The Frisc.

Enterprise

AWS CEO: The cloud isn’t just about technology

As AWS preps for its annual re:Invent conference, Adam Selipsky talks product strategy, support for hybrid environments, and the value of the cloud in uncertain economic times.

Photo: Noah Berger/Getty Images for Amazon Web Services

AWS is gearing up for re:Invent, its annual cloud computing conference where announcements this year are expected to focus on its end-to-end data strategy and delivering new industry-specific services.

It will be the second re:Invent with CEO Adam Selipsky as leader of the industry’s largest cloud provider after his return last year to AWS from data visualization company Tableau Software.

Donna Goodison

Donna Goodison (@dgoodison) is Protocol's senior reporter focusing on enterprise infrastructure technology, from the 'Big 3' cloud computing providers to data centers. She previously covered the public cloud at CRN after 15 years as a business reporter for the Boston Herald. Based in Massachusetts, she also has worked as a Boston Globe freelancer, business reporter at the Boston Business Journal and real estate reporter at Banker & Tradesman after toiling at weekly newspapers.

Image: Protocol

We launched Protocol in February 2020 to cover the evolving power center of tech. It is with deep sadness that just under three years later, we are winding down the publication.

As of today, we will not publish any more stories. All of our newsletters, apart from our flagship, Source Code, will no longer be sent. Source Code will be published and sent for the next few weeks, but it will also close down in December.

Bennett Richardson

Bennett Richardson (@bennettrich) is the president of Protocol. Prior to joining Protocol in 2019, Bennett was executive director of global strategic partnerships at POLITICO, where he led strategic growth efforts including POLITICO's European expansion in Brussels and POLITICO's creative agency POLITICO Focus during his six years with the company. Prior to POLITICO, Bennett was co-founder and CMO of Hinge, the mobile dating company recently acquired by Match Group. Bennett began his career in digital and social brand marketing working with major brands across tech, energy, and health care at leading marketing and communications agencies including Edelman and GMMB. Bennett is originally from Portland, Maine, and received his bachelor's degree from Colgate University.

Enterprise

Why large enterprises struggle to find suitable platforms for MLops

As companies expand their use of AI beyond running just a few machine learning models — and as larger enterprises go from deploying hundreds of models to thousands and even millions — ML practitioners say they have yet to find what they need in prepackaged MLops systems.

As companies expand their use of AI beyond running just a few machine learning models, ML practitioners say that they have yet to find what they need from prepackaged MLops systems.

Photo: artpartner-images via Getty Images

On any given day, Lily AI runs hundreds of machine learning models using computer vision and natural language processing that are customized for its retail and ecommerce clients to make website product recommendations, forecast demand, and plan merchandising. But this spring when the company was in the market for a machine learning operations platform to manage its expanding model roster, it wasn’t easy to find a suitable off-the-shelf system that could handle such a large number of models in deployment while also meeting other criteria.

Some MLops platforms are not well suited to maintaining more than even 10 machine learning models when it comes to keeping track of data, navigating their user interfaces, or reporting capabilities, Matthew Nokleby, machine learning manager for Lily AI’s product intelligence team, told Protocol earlier this year. “The duct tape starts to show,” he said.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
