Mark Zuckerberg’s metaverse will require computing tech no one knows how to build

To achieve anything close to what metaverse boosters promise, experts believe nearly every kind of chip will have to be an order of magnitude more powerful than it is today.

Illustration: Christopher T. Fong/Protocol

The technology necessary to power the metaverse doesn’t exist.

It will not exist next year. It will not exist in 2026. The technology might not exist in 2032, though by then it’s likely we will have a few ideas about how to design and manufacture chips that could turn Mark Zuckerberg’s fever dreams into reality.

Over the past six months, a disconnect has formed between the way corporate America is talking about the dawning concept of the metaverse and its plausibility, based on the nature of the computing power that will be necessary to achieve it. To get there will require immense innovation, similar to the multi-decade effort to shrink personal computers to the size of an iPhone.

Microsoft hyped its $68.7 billion bid for Activision Blizzard last month as a metaverse play. In October, Facebook transformed its entire corporate identity to revolve around the metaverse. Last year, Disney even promised to build its own version of the metaverse to “allow storytelling without boundaries.”

These ideas hinge on our ability to build the chips, data centers and networking equipment needed to deliver the computing horsepower required. And at the moment, we can’t. No one knows how, or where to start, or even whether the devices will still be semiconductors. There aren’t enough chips right now to build all the things people want today, let alone what’s promised by metaverse preachers.

“The biggest things that we are looking at in supercomputers today still need to be improved in order to be able to deliver [a metaverse] type of experience,” Jerry Heinz, the former head of Nvidia’s Enterprise Cloud unit, told Protocol.

Zuckerversed

What we now describe as the metaverse is at least as old as early 20th century speculative fiction.

E.M. Forster’s 1909 story “The Machine Stops,” for example, renders a pre-chip, pre-digital version of the metaverse. Fast forward 75 years, and science-fiction writer William Gibson called this concept “cyberspace” in the 1984 book “Neuromancer”; Neal Stephenson popularized the word “metaverse” in his 1992 novel “Snow Crash”; Ernest Cline called it OASIS (an acronym for Ontologically Anthropocentric Sensory Immersive Simulation) in his 2011 novel “Ready Player One.” Few of those stories describe a utopian community.

It’s possible that what we now call the metaverse will forever remain the domain of science fiction. But like it or not, Mark Zuckerberg has vaulted the idea into the mainstream.

Zuckerberg’s explanation of what the metaverse will ultimately look like is vague, but includes some of the tropes its boosters roughly agree on: He called it “[an] embodied internet that you’re inside of rather than just looking at” that would offer everything you can already do online and “some things that don’t make sense on the internet today, like dancing.”

If the metaverse sounds vague, that’s because it is. That description could mutate over time to apply to lots of things that might eventually happen in technology. And arguably, an early form of something like the metaverse already exists, built by video game companies.

Roblox and Epic Games’ Fortnite play host to millions — albeit in virtually separated groups of a few hundred people — viewing live concerts online. Microsoft Flight Simulator has created a 2.5 petabyte virtual replica of the world that is updated in real time with flight and weather data.

But even today’s most complex metaverse-like video games require a tiny fraction of the processing and networking performance we would need to achieve the vision of a persistent world accessed by billions of people, all at once, across multiple devices, screen formats and in virtual or augmented reality.

“For something that is a true mass market, spend-many-hours-a-day doing [kind of activity, we’re looking] at generations of compute to leap forward to do that,” Creative Strategies CEO Ben Bajarin told Protocol. “What you’re going to see over the next few years is an evolution to what you see today, with maybe a bit more emphasis on AR than VR. But it’s not going to be this rich, simulated 3D environment.”

A generational leap

In the beginning, chips powered mainframes. Mainframes begat servers, home computers and smartphones: smaller, faster and cheaper versions of more or less the same technology that came before.

If the metaverse is next, nobody can describe the system requirements specifically because it will be a distinct departure from prior shifts in computing. But it has become clear that to achieve anything close to the optimistic version, chips of nearly every kind will have to be an order of magnitude more powerful than they are today.

Intel’s Raja Koduri took a stab at the question in a recent editorial, writing: “Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000-times increase in computational efficiency from today’s state of the art.”

It’s difficult to overstate how challenging it will be to reach the goal of a thousandfold increase in computing efficiency. Koduri’s estimate might be conservative, and the demands could easily exceed 10 times that amount.
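
To get a feel for the scale involved, here is a rough back-of-envelope sketch. Every figure below is an illustrative assumption, not a published benchmark: a hypothetical per-user compute budget for photorealistic VR, multiplied across a billion concurrent users and compared against an exascale supercomputer.

```python
# Back-of-envelope sketch of the metaverse compute gap.
# All numbers are illustrative assumptions, not measurements.

CONCURRENT_USERS = 1e9   # assumed: a billion people online at once
FLOPS_PER_USER = 1e13    # assumed: ~10 teraflops per photorealistic
                         # VR stream, roughly one high-end GPU today

total_demand = CONCURRENT_USERS * FLOPS_PER_USER  # total FLOPS needed

EXASCALE = 1e18  # an exascale supercomputer sustains ~1e18 FLOPS

print(f"Assumed demand: {total_demand:.0e} FLOPS")
print(f"That is {total_demand / EXASCALE:,.0f} exascale machines")
```

Under these assumptions, demand works out to the equivalent of 10,000 exascale supercomputers, which is why efficiency gains, not just raw capacity, dominate the conversation.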

Even assuming those onerous hardware requirements can be met, better communication between all layers of the software stack — from chips at the bottom to end-user applications at the top — will also be required, University of Washington computer science professor Pedro Domingos told Protocol.

“We can get away with [inefficiency] now, but we’re not going to get away with it in the metaverse,” he said. “The whole [software] stack is going to be more tightly integrated, and this is already happening in areas such as AI and, of course, graphics.”

It’s not quantum computing

The generational leap toward the metaverse probably won’t come from quantum computing, at least not as we think of it today: a largely theoretical platform decades from practical use that performs its calculations in room-sized machines chilled to temperatures colder than deep space. But a performance breakthrough on the order of what quantum computing promises will be necessary.

Google is exploring using AI algorithms to design more powerful chips, which could help move the needle. Special-purpose processors for AI models exist today, but by creating even more specialized chips, it’s possible to eke out more performance, Domingos said. Such designs — an application-specific integrated circuit that does nothing but physics calculations, for example — can sidestep the roadblocks to increasing the raw performance of general-purpose silicon.
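
The appeal of specialization is easiest to see in the shape of the computation itself. The sketch below is purely illustrative (plain Python with NumPy, not any vendor’s API): the same physics update written as a per-object loop, the way a general-purpose core works through it, and as one uniform array operation, the regular, data-parallel form that a GPU or a hypothetical physics ASIC can bake directly into hardware.

```python
# Illustrative only: why physics updates suit special-purpose chips.
import numpy as np

def step_naive(pos, vel, dt):
    # General-purpose style: handle one object at a time.
    for i in range(len(pos)):
        pos[i] = pos[i] + vel[i] * dt
    return pos

def step_vectorized(pos, vel, dt):
    # Accelerator style: one uniform operation over every object.
    # A specialized chip can execute this as a single fused pipeline.
    return pos + vel * dt

pos = np.random.rand(1_000_000, 3)  # a million simulated objects
vel = np.random.rand(1_000_000, 3)
pos = step_vectorized(pos, vel, 1 / 60)  # one 60 Hz physics tick
```

The logic is identical in both versions; what changes is how much of it a chip can execute in lockstep, which is exactly the property dedicated hardware exploits.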

“These companies — the chip-makers, or the providers of the metaverse, or who knows — will make more and more advanced chips for this purpose,” Domingos said. “For every level of the stack, from the physics to the software, there are things you can do.”

Domingos noted that in the 1990s, real-time ray tracing would have been considered impossible, yet decades later it’s routine on the chips that power the PlayStation 5 and Xbox Series X. Google’s AI chips, known as tensor processing units, are another example of a specialized design that will only become more abundant in the future, and that the metaverse will need.
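
To see why real-time ray tracing was once dismissed as impossible, consider that every pixel needs at least one ray-geometry intersection test per frame, and realistic lighting needs many. The toy ray-sphere test below is a hypothetical illustration in plain Python of the innermost operation that dedicated ray-tracing cores now execute billions of times per second.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance to the nearest ray-sphere hit, or None on
    a miss: the core test that ray-tracing hardware accelerates."""
    # Vector from the sphere's center to the ray's origin.
    oc = [o - c for o, c in zip(origin, center)]
    # Coefficients of the quadratic |origin + t*direction - center|^2 = r^2.
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray per pixel at 4K and 60 fps is roughly 500 million of these
# tests per second, before any bounces, shadows or reflections.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```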

A fabulous future

But generational shifts in computing also require equivalent shifts in manufacturing technology. Companies such as TSMC and Intel are already pushing the boundaries of physics with extreme ultraviolet lithography machines to print the most advanced chips.

The latest EUV machines are dedicated to squeezing larger numbers of ever-smaller transistors and features onto each chip, continuing down the path that has been established for decades. But at some point in the future, the chip-making machines will become too costly, or it will be impossible to shrink features any further.

“If you look at where the architecture stands, if you look at where the performance per watt stands, I don’t want to say we need a breakthrough, but we’re pretty close to needing a breakthrough,” Bajarin said. “Sub-one nanometer is roughly four or five years away, and that’s not going to solve this problem.”

Without a generational leap in computing, a lower-fidelity version of the Zuckerverse is attainable. Assuming users will settle for graphics somewhat better than Second Life was able to achieve a decade ago, it should be possible in the longer run to make something that achieves some of the goals, such as a persistent, internet-connected virtual world. Building that version of the metaverse will require better networking tech, the specialized chips Domingos described and possibly something like artificial intelligence computing in order to handle some of the more complex but mundane workloads.

“There’s a lot of scaling up to do, which means that today’s data centers are going to look minuscule compared with the ones of tomorrow,” Domingos said.

But it’s going to take a long time to get there. Zuckerberg’s vision of the metaverse could be decades away, and after losing $20 billion on the effort so far, it's not clear Meta will have the cash to turn that vision into reality.
