Chiplets helped save AMD. They might also help save Moore’s law and head off an energy crisis.

To make chips faster, designers used to make them bigger — which is getting harder. To make better chips, the industry is turning to “chiplets.”



Illustration: Christopher T. Fong/Protocol

In 2015, Lisa Su had been CEO of boom-and-bust chip company AMD for only a few months. The business was trying to turn around its fortunes after its painful 2009 decision to exit the manufacturing business, and it had embarked on an ambitious plan to re-enter the server chip market, which Intel had dominated for years.

But executives at AMD came to the conclusion that it didn’t have the resources to replicate Intel’s wide range of server chip designs and compete head-to-head across all those categories. It would be too expensive and difficult for the much smaller rival. And if it copied Intel, nothing about the new line of server chips would stand out either.

“We had one bullet to shoot for chip design,” AMD SVP Samuel Naffziger said about the company’s plans at the time.

So engineers at AMD looked to the past. Instead of trying to pack a larger number of features onto a single big piece of silicon, known as a “die,” they opted to break up their flagship chip into four separate parts and stitch them together.

This approach is called “chiplets,” and it’s likely to become a dominant form of chip design in the coming years.

“These small die were a huge enabler for us,” Naffziger said. “I view this as one of the greatest engineering achievements in the industry and in recent memory because it solves so many problems at once.”

AMD invented chiplets out of necessity, but by breaking up a chip into smaller pieces, it reduced its manufacturing costs by 40%. That had two consequences: First, it let AMD make a full suite of server chips, adding and removing chiplets as necessary to create several performance options and target different server chip price buckets. Second, AMD could reuse two of the server chiplets to design something less costly that worked for desktops too, the company’s most profitable segment at the time.

The plan helped save AMD — revenue grew to $16.4 billion last year from $4 billion in 2015 — and it might help save Moore’s law.

What AMD accomplished years ago is now on its way to becoming the industry norm. Intel’s plans include products with chiplets, and others in the industry are coalescing around a standard that will one day allow chipmakers to mix and match silicon from different vendors inside a single package.

The new chiplet-based designs are a nice-to-have at the moment, but they will quickly become a necessity, experts told Protocol.

The world produces and crunches data at a rapidly rising rate, and without the tech that underpins chiplets, it will become too expensive and difficult for traditional processor designs to keep delivering the jump in computing horsepower that software developers expect every year. And in the longer run, those older designs will consume too much power to be economically viable.

“We're going to be locked into a situation where you're buying the same boxes that have the same performance, same power consumption,” TechInsights' chip economist Dan Hutcheson said. “And that means to scale them you either slow down the growth of the internet and the data or you have to build more data centers and more power plants to feed them.”

Moored in the past

One of the fascinating aspects of the chiplet concept is that it dates back to the seminal paper Gordon Moore wrote in 1965 that loosely set the ground rules of the industry for the next half-century. Those observations, known as Moore’s law, predicted that chips would get faster and cheaper every two years, as the number of transistors chip designers could fit on a chip doubled at the same pace.

But in that same paper, Moore described a world in which the economics of breaking up a single die into smaller pieces would someday make sense. Mixing and matching components would give system designers more flexibility and potentially boost performance, among other benefits.

“It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected,” Moore wrote. “The availability of large functions, combined with functional design and construction, should allow the manufacturer of large systems to design and construct a considerable variety of equipment both rapidly and economically.”

It makes sense that Moore would suggest that: IBM was already building systems that included the chiplet concept as early as 1964 — at the time, it was the only way to achieve the necessary amount of computing horsepower. Companies such as IBM continued down that course for decades, and have applied the loose idea of chiplets to the most complex and expensive systems, such as supercomputers and mainframes.

But the chiplets of the past were complex and expensive, which led semiconductor companies to squeeze more discrete features such as graphics or memory onto a single piece of silicon: the system-on-chip (SoC) found in smartphones, some server processors and Apple’s latest designs for its laptops and desktops.

“In other words, when we mean chiplet, we mean taking up an SoC and splitting it up into its component functions,” IBM’s hybrid cloud technologist Rama Divakaruni said. “Now that we are going back to using chiplets, we are a lot smarter — with all the innovation we had with the history of 50 years of silicon, we will bring that into the packaging space. So that’s the excitement.”

Big dies, big problems

In the past, when chip designers added more components onto a single monolithic piece of silicon called a die — the term comes from “dicing” a silicon wafer into chip-sized pieces — that meant that chips had to get larger. It’s intuitive: Larger surfaces can theoretically fit more features, especially since the features themselves shrink every time manufacturers introduce better tech.

Bigger dies therefore translated to more computing horsepower. For server chips, it’s especially noticeable, since they tend to run five times the size of a chip found in a typical PC, according to research from Jefferies.

“Now things are getting so fast, the performance is so high, that you're being forced to move [more chips] into the package,” Hutcheson said. Several technical and economic aspects of chipmaking have conspired to push the industry toward chiplets.

But big die sizes create big problems. One fundamental issue is that it’s currently impossible to print a chip larger than the blueprint used in the photolithography stage of chip manufacturing, called a photomask. Because of technical limits, the beam of light shining through the photomask to reproduce the blueprint onto the silicon wafer cannot print chips larger than about 850 square millimeters.

Large dies are also much more prone to defects, which reduces the number of good chips that can be cut from each wafer and makes each working chip cost more. At the same time, there are concerns that transistors are getting more expensive as they shrink, and certain key features on modern chips don’t shrink well, which means it doesn’t make sense to use the most advanced process nodes for wireless communications chips, for example.
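The yield math behind that cost pressure can be sketched with a standard Poisson defect model. The defect density and die areas below are illustrative assumptions, not AMD's actual figures: because the odds of catching a defect rise exponentially with die area, four small chiplets waste far less silicon per working product than one big die.

```python
import math

def die_yield(area_mm2: float, defect_density_cm2: float) -> float:
    """Poisson yield model: probability that a die of the given area
    lands on a defect-free patch of the wafer."""
    defects_per_die = defect_density_cm2 * area_mm2 / 100.0  # mm^2 -> cm^2
    return math.exp(-defects_per_die)

# Hypothetical numbers: an ~800 mm^2 monolithic die vs. four 200 mm^2
# chiplets, at 0.5 defects per square centimeter.
D0 = 0.5
mono_yield = die_yield(800, D0)      # ~1.8% of monolithic dies are good
chiplet_yield = die_yield(200, D0)   # ~36.8% of chiplets are good

# Wafer area burned per *working* product (lower means cheaper).
mono_cost = 800 / mono_yield
chiplet_cost = 4 * 200 / chiplet_yield

print(f"monolithic yield: {mono_yield:.1%}")
print(f"chiplet yield:    {chiplet_yield:.1%}")
print(f"chiplets use {mono_cost / chiplet_cost:.0f}x less wafer area per good part")
```

With these assumed numbers, the chiplet design comes out roughly 20 times cheaper in silicon per good part; the real-world advantage is smaller once packaging and interconnect costs are added back in.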

“When AMD tried to take the [2017] Naples design, and shrink it from 14 nanometer to seven, just pure lithographic scaling, they found it wasn't gonna work,” Columbia Threadneedle analyst Dave Egan told Protocol. “At the first pass design, they were only able to basically shrink about a half of it.”

No chiplets from Nvidia

Nvidia ran up against the photomask issue, also known as the reticle limit, over five years ago, according to Nvidia Vice President Ian Buck. But the company hasn’t yet opted for the chiplet approach.

Part of the reason is that the graphics chips Nvidia is known for operate fundamentally differently than the CPUs from Intel and AMD. Nvidia’s chips use thousands of computing cores to perform lots of relatively simple calculations at once, which makes them well-suited for graphics or for AI-accelerated computing in data centers.

“The GPU is a very different beast,” Buck said. “In the graphics space, it’s not individual cores when presented to a developer; they’re given a scene description and they have to distribute the work and render it.”

To confront the fundamental limit of the size of the photomask without adopting the chiplet approach, Nvidia has focused its efforts around building what it calls super chips. The company has developed its own interconnect technology called NVLink to attach multiple graphics chips and servers together. To Buck, the ultimate expression of that strategy up until this point is the company’s forthcoming Grace Hopper product, which fuses an Arm-based CPU to one of Nvidia’s server GPUs.

Nvidia does make smaller chips for enterprise applications such as AI inference and production. But for the flagship chips designed for AI training, the company has found that its customers require the maximum amount of compute possible and value the largest processors the company makes.

“This growth greatly simplifies the programming model, but also, for AI, allows you to treat the CPU’s memory as an extension of the GPU’s memory,” Buck said. “They're basically two super chips put together.”

Mix and match

AMD may have been the first major chipmaker to mass-produce and sell processors based around chiplets, but aside from Nvidia and a handful of others, the rest of the industry is moving in the same direction. Several of the largest chipmakers, including AMD, Intel and Samsung, along with major cloud service providers, support a new standard for connecting chiplets made by different companies. Called Universal Chiplet Interconnect Express, the approach could reshape new semiconductor designs.

“Because of the new UCI Express, the whole industry is centering around the term chiplet,” Hutcheson said. “The real significance between today versus what we did before is that before a company had to do it all themselves — it’s not like you could buy this chip and this chip, and make my own electronic device.”

In an ideal world, the UCIe standard would let chipmakers mix and match chiplets that use different manufacturing process technologies and are made by different companies, combining them into products built inside a single package. That means taking memory made by Micron, a CPU core produced by AMD and a wireless modem made by Qualcomm and fitting them together, which could greatly improve performance while saving an enormous amount of power.

“To allow for a heterogeneous system to be constructed on a package, you want on-package memory because of higher memory bandwidth,” Intel senior fellow Debendra Das Sharma said. “There are certain acceleration functions that can benefit by being on the same package, and also having a much lower latency and low power way of accessing all the components in the system, including memory.”

Mixing and matching chiplets would also enable AMD and Intel to create custom products for large customers with specific needs. Accelerated computing, which is commonly deployed to tackle AI compute tasks, is low-hanging fruit to Das Sharma. Should one customer need a chip for a specific type of AI, Intel could swap out a general-purpose accelerator for something more specialized.

Universally interconnecting chiplets isn’t a reality yet. According to several industry watchers, it’s unlikely to materialize for several years as the standard gets hammered out. The second version — which could arrive in roughly 2025 or so — is more likely to herald the type of hot swapping that Das Sharma discussed.

But whether the industry comes together in 2025 or 2026, chiplets are the future of processors — at least for the moment. Data centers consume a massive amount of the world’s energy, and that consumption will only increase as Mark Zuckerberg attempts to manifest his version of the metaverse, and, in the nearer term, more aspects of our lives turn digital.

“When you move these electrons down this pipe — simply going off chip, the power needed to do it is about 10,000X difference,” Hutcheson said. “To move a signal from one chip to another chip in another package, it’s like a 100,000X difference.”
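Hutcheson's rough multipliers can be turned into a back-of-the-envelope power estimate. The on-chip baseline energy per bit below is an assumed illustrative figure, not from the article; only the 10,000x and 100,000x factors are his.

```python
# Scale an assumed on-chip energy per bit by Hutcheson's rough multipliers,
# then see what moving 1 terabit per second costs in watts.
ON_CHIP_PJ_PER_BIT = 0.1   # assumed illustrative baseline, picojoules per bit
BITS_PER_SECOND = 1e12     # 1 Tb/s of traffic

multipliers = {
    "on-chip": 1,
    "off-chip, same package": 10_000,
    "chip to chip, another package": 100_000,
}

for path, m in multipliers.items():
    joules_per_bit = ON_CHIP_PJ_PER_BIT * m * 1e-12  # pJ -> J
    watts = joules_per_bit * BITS_PER_SECOND
    print(f"{path:30s} {watts:10.1f} W")
```

Under these assumptions, traffic that costs a tenth of a watt on-chip balloons to kilowatts once it leaves the package, which is why keeping chiplets inside one package matters for data center power budgets.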

