The raw computational power needed to run machine learning dwarfs, by an order of magnitude, nearly everything else we ask computer chips to do. That appetite for compute has created a booming market for chip startups for the first time in years and helped venture capital investment in the sector double over the past five years.
AI market leader Nvidia estimates that the processing power required by most machine-learning or AI tasks has grown 25-fold every two years, while the most advanced natural-language-processing models need 275 times more compute every two years. By contrast, Gordon Moore famously predicted that the central processors inside desktops and servers would merely double in performance every two years.
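For a rough sense of how quickly those rates diverge, here is a purely illustrative Python sketch that compounds the growth rates cited above over time; the figures are simple extrapolations for comparison, not Nvidia's or Moore's own data.

```python
# Illustrative back-of-the-envelope comparison of how compute demand compounds
# under the growth rates cited above (hypothetical extrapolation, not real data).

def growth(factor_per_two_years: float, years: int) -> float:
    """Total growth multiple after `years`, compounding every two years."""
    return factor_per_two_years ** (years / 2)

for years in (2, 6, 10):
    print(
        f"After {years:>2} years: "
        f"Moore's Law (2x per 2 yrs) = {growth(2, years):,.0f}x, "
        f"typical AI workloads (25x per 2 yrs) = {growth(25, years):,.0f}x"
    )
```

After a decade at those rates, the gap is roughly 32x versus nearly 10 million-fold, which is why general-purpose processors alone can't keep up.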
The big jump in demand for computing horsepower helped make Nvidia the most valuable chip company in the U.S., as its graphics chips and software stack can be harnessed for machine learning. But the boom has also created opportunities for a new breed of chip company, one that is focused on making specialized chips for AI.
“As these [AI] workloads started to grow and expand, it gives the startups an opportunity to come in with purpose-built semiconductor devices that can meet the needs better than the general-purpose-type devices,” Celesta Capital founding partner Nicholas Brathwaite said in an interview.
Global sales of AI chips soared 60% last year compared with 2020, reaching $35.9 billion, according to PitchBook data, with roughly half that total coming from specialized AI chips in mobile phones. PitchBook expects the market to grow at just over 20% a year, which would put it at $64.9 billion by 2024, and Allied Market Research forecasts it could reach $194.9 billion by 2030.
“With the advent of artificial intelligence, and the theme of AI, there has been this resurgence in semiconductor investing,” long-time chip industry watcher and Fidelity portfolio manager Adam Benjamin told Protocol in an interview. “It requires a lot of capital, and it is a big problem that drives a lot of investment not only in the public side, but the private side too.”
New chips on the block
Prior to 2015, only a handful of venture capitalists saw the opportunity that AI presented, and there was little broader interest in funding chip companies.
Semiconductors are expensive to make: The cost of a new factory is measured in the tens of billions of dollars, and even companies that outsource fabrication to the likes of TSMC face chip development costs that start at $30 million to $40 million and top $500 million for the most advanced processors. And chip design talent is scarce and costly.
Today some of that has changed. As Benjamin noted, chips are still expensive to develop and still require substantial startup cash from venture investors to get off the ground, but that cost no longer dissuades investors.
Venture funding for semiconductor companies has more than doubled since 2017, reaching $1.8 billion last year, according to PitchBook data. This year is on track to rise again, with nearly $1 billion in funding through early April. The figures include funding from chipmakers such as Intel, Samsung and Qualcomm, which operate their own venture units in part to keep tabs on the latest tech and to keep the option of acquisitions open.
“Data center startups began to trial commercially viable chips [in 2021] after years of research and development — that goes for a number of the unicorn companies in the space that shipped trial chips to customers,” PitchBook analyst Brendan Burke said in an interview. “The high valuation of Nvidia and AMD showed how much growth there is in data center AI, and increased the comparable valuation for startups greatly.”
Micron Ventures Senior Director Gayathri Radhakrishnan told Protocol that the memory maker is interested in the financial success of its startup investments, but also sees the possibility of becoming a customer. And occasionally, when Micron is interested in a technology for its own purposes, those investments give Radhakrishnan a unique way of completing diligence: deploying the tech inside Micron's factories and seeing how well it performs.
“We understand some of the pain points or gaps that our manufacturing teams are looking for,” Radhakrishnan said. “It [can] almost end up serving as our technical diligence if they can meet the requirements.”
As with just about any venture capital investment, the people writing the checks are looking for exponential returns and technology that can deliver, in some cases, exponential differences in performance. “It’s not enough to be 30% better, it needs to be five to 10 [times] better,” Eclipse Ventures partner Greg Reichow said.
For AI applications, that typically has meant backing startups developing chips and software built around parallel processing, which handles a huge volume of simpler tasks at once, whereas a traditional processor completes a single, usually more complex, calculation at a time. Reichow pointed to Cerebras Systems, which doesn't use the most advanced manufacturing techniques for its silicon-wafer-sized AI chips, but instead rethinks how chips communicate with one another.
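As a toy illustration of that distinction in Python (not how any of these chips actually work), a conventional sequential loop handles one multiply-accumulate at a time, while the same arithmetic expressed as a single vectorized operation can be spread across many simple processing units at once:

```python
# Toy sketch of sequential vs. data-parallel computation.
# Real AI accelerators are far more sophisticated; this only shows the idea of
# applying the same simple operation across many values at once.
import numpy as np

weights = np.random.rand(100_000)
inputs = np.random.rand(100_000)

# Sequential style: one multiply-accumulate per step, like a single CPU core.
total = 0.0
for w, x in zip(weights, inputs):
    total += w * x

# Parallel style: one vectorized dot product, which hardware can parallelize.
total_parallel = np.dot(weights, inputs)

assert np.isclose(total, total_parallel)
```

The results are identical; the difference is how much of the work can be done simultaneously, which is exactly the workload that graphics chips and purpose-built AI accelerators are designed for.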
Keeping up with the Moores
The chip giants themselves already dump billions into annual research and development spending, including on parallel processing technology found in graphics chips made by AMD, Intel and Nvidia. That means investors want to fund companies that take a different approach, an idea that can’t be easily replicated by an incumbent semiconductor company.
“I’m looking for companies that are offering a fundamentally different use case,” Lux Capital partner Shahin Farshchi said. As an example, Farshchi pointed to the firm’s portfolio company Mythic, which uses an older chip technology to perform AI processing at a fraction of the power drawn by the typical graphics chips used in data centers.
“The way they accomplished that wasn’t just kind of optimizing or tweaking circuits, it was by taking a whole new approach to computation,” Farshchi said. “This new approach didn’t require some fancy new process, or fancy new type of physics, it was taking an existing technology that tens of billions of dollars [have] already gone into, and repurposing it to do something else.”