Enterprise

Cerebras just built a big chip that could democratize AI

Chip startup Cerebras has developed a nearly foot-wide piece of silicon, far larger than typical chips measured in millimeters, that it says makes training AI models cheaper and easier.

A wafer-sized chip made by Cerebras Systems. At the core of Cerebras’ pitch is a chip that is roughly the size of a dinner plate. Photo: Cerebras Systems

Inside a conference room at a Silicon Valley data center last week, chip startup Cerebras Systems founder and CEO Andrew Feldman demonstrated how the company’s technology allows people to shift between deploying different versions of an AI natural language model in a matter of moments, a task that usually takes hours or days.

“So we’ve made it 15 keystrokes to move among these largest models that have ever been described on a single machine,” Feldman said.

This, to Feldman and Cerebras, represents a triumph worth noting. Cerebras claims the system that achieved this feat has also accomplished a world first: It can train an entire 20-billion-parameter model on a single nearly foot-wide superchip. Without its technology, the company said scaling an AI model from 1 billion parameters to 20 billion parameters might require users to add more server hardware and reconfigure racks inside of a data center.
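The hardware pressure behind that claim is easy to see with rough arithmetic. The sketch below uses common but assumed per-parameter memory costs for training (FP16 weights plus FP32 gradients and Adam optimizer state, activations ignored); the numbers are illustrative, not Cerebras or Nvidia specs.

```python
# Back-of-envelope memory estimate for training a large language model.
# Assumptions: 2 bytes per FP16 weight, 4 bytes per FP32 gradient,
# 8 bytes of Adam optimizer state per parameter; activations ignored.
def training_memory_gb(params_billions: float) -> float:
    bytes_per_param = 2 + 4 + 8  # weights + gradients + optimizer states
    return params_billions * 1e9 * bytes_per_param / 1e9

for p in (1, 20):
    print(f"{p}B parameters: ~{training_memory_gb(p):.0f} GB")
```

Under these assumptions, a 1-billion-parameter model needs roughly 14 GB, within reach of a single conventional accelerator, while 20 billion parameters needs roughly 280 GB, which is why training ordinarily spills across many devices and racks.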

Training a natural language AI model on one chip makes it considerably cheaper and delivers performance an order of magnitude better than Nvidia’s flagship graphics processor-based systems, Feldman said. The idea is to give researchers and organizations with tiny budgets (in the tens of thousands of dollars) access to AI training tools that were previously available only to much larger organizations with lots of money.

“Models have grown really fast in this area. Language processing, and the challenges of delivering compute for these models, is enormous,” Feldman said. “We sort of have made this class of model practical, useful to a whole slice of the economy that couldn’t previously do interesting work.”

The AI models that Feldman is talking about are simply methods of organizing mathematical calculations by breaking them up into steps and then regulating the communication between the steps. The point is to train a model to begin to make accurate predictions, whether that’s the next piece of code that should be written, what constitutes spam and so on.
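That description of a model, calculations broken into steps with regulated communication between them, can be made concrete with a toy sketch. The weights below are made-up illustrative numbers, not a real trained model.

```python
# A model is calculations broken into steps ("layers"), with each step's
# output becoming the communication passed to the next step.
def layer(inputs, weights):
    # One step: a weighted sum followed by a ReLU nonlinearity.
    total = sum(i * w for i, w in zip(inputs, weights))
    return max(total, 0.0)

x = [0.5, -1.0, 2.0]  # input features
step1 = [layer(x, w) for w in ([1, 0, 1], [0, 1, 1], [1, 1, 0])]
step2 = layer(step1, [0.2, 0.3, 0.5])  # final step yields the prediction
print(step2)
```

Training adjusts those weights until the final step’s output starts matching the right answers, whether that is the next piece of code or a spam label.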

AI models are typically large to begin with, but those built around language tend to be even larger. For language models, context — as in more text, such as adding an author’s entire body of work to a model that began with a single book — is crucial, but that context can make them far, far more complex to operate. Market-leader Nvidia estimates that AI tasks have spurred a 25-fold increase in the need for processing power every two years.
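Compounding Nvidia’s estimate shows how quickly that demand curve gets steep. A minimal sketch, taking the 25x-per-two-years figure from the article at face value:

```python
# Project compute demand growth, assuming (per Nvidia's estimate)
# a 25x increase in required processing power every two years.
def demand_multiplier(years: float, factor: float = 25, period: float = 2) -> float:
    return factor ** (years / period)

for y in (2, 4, 6):
    print(f"after {y} years: {demand_multiplier(y):,.0f}x")
```

At that rate, demand is 625x after four years and more than 15,000x after six, which is the growth that hardware makers are racing to serve.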

This exponential increase has led companies like Cerebras and others to chase AI as a potential market. For years, hardware investments were seen as bad bets among venture capitalists who were only willing to fund a few promising ideas. But as it became clear that AI as a class of computation would open the door for fresh ideas beyond the general purpose processors made by the likes of Intel and Nvidia, a new class of startups was born.

Cerebras, a name derived from the Latin for “brain,” is one of those startups. Since founding the company in 2015, Feldman and his team, which includes a number of AMD veterans in key technology roles, have raised roughly $735 million at a $4.1 billion valuation, including funding from the CIA venture arm In-Q-Tel, the CEO said.

Chips on the plate

At the core of Cerebras’ pitch is a chip that is roughly the size of a dinner plate, or an entire foot-wide silicon wafer, called the Wafer Scale Engine.

The idea of a wafer-size chip like the one that powers Cerebras’ systems isn’t a novel concept; similar ideas have been floating around for decades. Trilogy Systems’ failed early-1980s bid at a superchip, which raised roughly $750 million in today’s dollars, is one notable attempt, and IBM and others have studied the idea but never produced a product.

But together with TSMC, Cerebras has settled on a design that could be fabricated into a functioning wafer-size chip. In some ways, Cerebras is almost two startups stuck together: It’s interested in tackling the growing challenge of AI compute, but it has also achieved the technological feat of producing a useful chip the size of a wafer.

A Cerebras CS-2 system running inside a data center. Photo: Max A. Cherney/Protocol

The current generation of the chip, which Cerebras calls the WSE-2, can offer considerable performance improvements over stringing together multiple graphics chips to reach the computational horsepower needed to train some of the largest AI models, according to Feldman.

“So it's unusual for a startup to have deep fab expertise, [but] we have profound expertise,” Feldman said. “And we had an idea of how they could, within their permitted flexibility in their flow, fit our innovation.”

The advantage of building a chip of that size is that it allows Cerebras to duplicate the performance of dozens of other server chips — roughly 80 graphics processors, for some large AI models — and squishes them onto a single piece of silicon. Doing so makes them considerably faster, because, in part, data can move faster across a single chip than across a network of dozens of chips.
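The speed argument comes down to bandwidth: moving the same bytes over on-chip wiring takes a fraction of the time it takes over links between separate chips. The numbers below are purely illustrative assumptions, not Cerebras or Nvidia specifications.

```python
# Rough sketch of why on-chip data movement wins. Bandwidth figures
# are assumed for illustration, not real hardware specs.
def transfer_time_ms(gigabytes: float, bandwidth_gb_per_s: float) -> float:
    return gigabytes / bandwidth_gb_per_s * 1000

data_gb = 10  # assumed data shuffled between steps of a training run
print(transfer_time_ms(data_gb, 20_000))  # on-chip fabric (assumed)
print(transfer_time_ms(data_gb, 100))     # chip-to-chip link (assumed)
```

With a 200x bandwidth gap, the same transfer that takes half a millisecond on-chip takes a tenth of a second across a network of chips, and that overhead repeats on every training step.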

"[Our] machine is built for one type of work,” Feldman said. “If you want to take the kids to soccer practice, no matter how shitty they are to drive, the minivan is the perfect car. But if you've got your minivan and you try and move two-by-fours and 50-pound sacks of concrete, you realize what a terrible machine it is for that job. [Our chip] is a machine for AI.”

This story was updated to correct the amount of money raised by Trilogy Systems.

Enterprise

Why CrowdStrike wants to be a broader enterprise IT player

The company, which grew from $1 billion in annual recurring revenue to $2 billion in just 18 months, is expanding deeper within the cybersecurity market and into the wider IT space as well.

CrowdStrike is well positioned at a time when CISOs are fed up with going to dozens of different vendors to meet their security needs.

Image: Protocol

CrowdStrike is finding massive traction in areas outside its core endpoint security products, setting up the company to become a major player in other key security segments such as identity protection as well as in IT categories beyond cybersecurity.

Already one of the biggest names in cybersecurity for the past decade, CrowdStrike now aspires to become a more important player in areas within the wider IT landscape such as data observability and IT operations, CrowdStrike co-founder and CEO George Kurtz told Protocol in a recent interview.

Kyle Alspach

Kyle Alspach (@KyleAlspach) is a senior reporter at Protocol, focused on cybersecurity. He has covered the tech industry since 2010 for outlets including VentureBeat, CRN and the Boston Globe. He lives in Portland, Oregon, and can be reached at kalspach@protocol.com.

Fintech

Election markets are far from a sure bet

Kalshi has big-name backing for its plan to offer futures contracts tied to election results. Will that win over a long-skeptical regulator?

Whether Kalshi’s election contracts could be considered gaming or whether they serve a true risk-hedging purpose is one of the top questions the CFTC is weighing in its review.

Photo illustration: Getty Images; Protocol

Crypto isn’t the only emerging issue on the CFTC’s plate. The futures regulator is also weighing a fintech sector that has similarly tricky political implications: election bets.

The Commodity Futures Trading Commission has set Oct. 28 as a date by which it hopes to decide whether the New York-based startup Kalshi can offer a form of wagering up to $25,000 on which party will control the House of Representatives and Senate after the midterms. PredictIt, another online market for election trading, has also sued the regulator over its decision to cancel a no-action letter.

Ryan Deffenbaugh
Ryan Deffenbaugh is a reporter at Protocol focused on fintech. Before joining Protocol, he reported on New York's technology industry for Crain's New York Business. He is based in New York and can be reached at rdeffenbaugh@protocol.com.
Enterprise

The Uber verdict shows why mandatory disclosure isn't such a bad idea

The conviction of Uber's former chief security officer, Joe Sullivan, seems likely to change some minds in the debate over proposed cyber incident reporting regulations.

Executives and boards will now be "a whole lot less likely to cover things up," said one information security veteran.

Photo: Al Drago/Bloomberg via Getty Images

If nothing else, the guilty verdict delivered Wednesday in a case involving Uber's former security head will have this effect on how breaches are handled in the future: Executives and boards, according to information security veteran Michael Hamilton, will be "a whole lot less likely to cover things up."

Following the conviction of former Uber chief security officer Joe Sullivan, "we likely will get better voluntary reporting" of cyber incidents, said Hamilton, formerly the chief information security officer of the City of Seattle, and currently the founder and CISO at cybersecurity vendor Critical Insight.


Climate

Delta and MIT are running flight tests to fix contrails

The research team and airline are running flight tests to determine if it’s possible to avoid the climate-warming effects of contrails.

Delta and MIT just announced a partnership to test how to mitigate persistent contrails.

Photo: Gabriela Natiello/Unsplash

Contrails could be responsible for up to 2% of all global warming, and yet how they’re formed and how to mitigate them are barely understood by major airlines.

That may be changing.

Michelle Ma

Michelle Ma (@himichellema) is a reporter at Protocol covering climate. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.
