Enterprise

Mark Zuckerberg’s metaverse will require computing tech no one knows how to build

To achieve anything close to what metaverse boosters promise, experts believe nearly every kind of chip will have to be an order of magnitude more powerful than it is today.

Illustration: Christopher T. Fong/Protocol

The technology necessary to power the metaverse doesn’t exist.

It will not exist next year. It will not exist in 2026. The technology might not exist in 2032, though by then it’s likely we will have a few ideas about how to design and manufacture chips that could turn Mark Zuckerberg’s fever dreams into reality.

Over the past six months, a disconnect has formed between the way corporate America is talking about the dawning concept of the metaverse and its plausibility, based on the nature of the computing power that will be necessary to achieve it. To get there will require immense innovation, similar to the multi-decade effort to shrink personal computers to the size of an iPhone.

Microsoft hyped its $68.7 billion bid for Activision Blizzard last month as a metaverse play. In October, Facebook transformed its entire corporate identity to revolve around the metaverse. Last year, Disney even promised to build its own version of the metaverse to “allow storytelling without boundaries.”

These ideas hinge on our ability to build the chips, data centers and networking equipment needed to deliver the computing horsepower required. And at the moment, we can’t. No one knows how, or where to start, or even whether the devices will still be semiconductors. There aren’t enough chips right now to build all the things people want today, let alone what’s promised by metaverse preachers.

“The biggest things that we are looking at in supercomputers today still need to be improved in order to be able to deliver [a metaverse] type of experience,” Jerry Heinz, the former head of Nvidia’s Enterprise Cloud unit, told Protocol.

Zuckerversed

What we now describe as the metaverse is at least as old as early 20th century speculative fiction.

E.M. Forster’s 1909 story “The Machine Stops,” for example, renders a pre-chip, pre-digital version of the metaverse. Fast forward 75 years, and science-fiction writer William Gibson called this concept “cyberspace” in the 1984 book “Neuromancer”; Neal Stephenson popularized the word “metaverse” in his 1992 novel “Snow Crash”; Ernest Cline called it OASIS (an acronym for Ontologically Anthropocentric Sensory Immersive Simulation) in “Ready Player One.” Few of those stories describe a utopian community.

It’s possible that what we now call the metaverse will forever remain the domain of science fiction. But like it or not, Mark Zuckerberg has vaulted the idea into the mainstream.

Zuckerberg’s explanation of what the metaverse will ultimately look like is vague, but includes some of the tropes its boosters roughly agree on: He called it “[an] embodied internet that you’re inside of rather than just looking at” that would offer everything you can already do online and “some things that don’t make sense on the internet today, like dancing.”

If the metaverse sounds vague, that’s because it is. That description could mutate over time to apply to lots of things that might eventually happen in technology. And arguably, an early form of something like the metaverse already exists, built by video game companies.

Roblox and Epic Games’ Fortnite play host to millions — albeit in virtually separated groups of a few hundred people — viewing live concerts online. Microsoft Flight Simulator has created a 2.5 petabyte virtual replica of the world that is updated in real time with flight and weather data.

But even today’s most complex metaverse-like video games require a tiny fraction of the processing and networking performance we would need to achieve the vision of a persistent world accessed by billions of people, all at once, across multiple devices, screen formats and in virtual or augmented reality.
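The gap is easy to see with a back-of-envelope calculation. The sketch below is illustrative only: the user count and per-instance cap are assumptions, not figures from any particular game.

```python
# Rough scale sketch: what "billions of people, all at once" implies under
# today's sharded game-server model, where a shared space caps out at a few
# hundred players. All numbers here are illustrative assumptions.
concurrent_users = 2_000_000_000   # "billions ... all at once"
users_per_instance = 150           # a typical instance cap in today's games

instances = concurrent_users // users_per_instance
print(f"~{instances:,} isolated instances just to host everyone at once")
```

Sharding at that scale is conceivable; what no one knows how to build is the single persistent world those millions of isolated instances would have to merge into.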

“For something that is a true mass market, spend-many-hours-a-day doing [kind of activity, we’re looking] at generations of compute to leap forward to do that,” Creative Strategies CEO Ben Bajarin told Protocol. “What you’re going to see over the next few years is an evolution to what you see today, with maybe a bit more emphasis on AR than VR. But it’s not going to be this rich, simulated 3D environment.”

A generational leap

In the beginning, chips powered mainframes. Mainframes begat servers, home computers and smartphones: smaller, faster and cheaper versions of more or less the same technology that came before.

If the metaverse is next, nobody can describe the system requirements specifically because it will be a distinct departure from prior shifts in computing. But it has become clear that to achieve anything close to the optimistic version, chips of nearly every kind will have to be an order of magnitude more powerful than they are today.

Intel’s Raja Koduri took a stab at the question in a recent editorial, writing: “Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000-times increase in computational efficiency from today’s state of the art.”

It’s difficult to overstate how challenging it will be to reach the goal of a thousandfold increase in computing efficiency. Koduri’s estimate might be conservative, and the demands could easily exceed 10 times that amount.
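For a sense of the timescale involved, here is a rough calculation assuming efficiency doubles at a fixed cadence; the cadences are assumptions standing in for historical Moore’s law-style scaling, not projections from the article.

```python
import math

# A 1,000x gain requires log2(1000), roughly 9.97, doublings of efficiency.
doublings_needed = math.log2(1000)

# Try a few assumed doubling cadences (in years) and see how long 1,000x takes.
for years_per_doubling in (1.5, 2.0, 3.0):
    years = doublings_needed * years_per_doubling
    print(f"doubling every {years_per_doubling} years -> ~{years:.0f} years to 1,000x")
```

Even at the optimistic historical pace of one doubling every two years, a 1,000x gain is roughly two decades out.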

Even assuming those onerous hardware requirements can be met, better communication between all layers of the software stack — from chips at the bottom to end-user applications at the top — will also be required, University of Washington computer science professor Pedro Domingos told Protocol.

“We can get away with [inefficiency] now, but we’re not going to get away with it in the metaverse,” he said. “The whole [software] stack is going to be more tightly integrated, and this is already happening in areas such as AI and, of course, graphics.”

It’s not quantum computing

The generational leap toward the metaverse probably won’t come from quantum computing, at least not as we think of it today: a largely theoretical platform, decades from practical use, that requires calculations to be performed at temperatures colder than deep space in room-sized machines. But a performance breakthrough on the order of what quantum computing promises will be necessary.

Google is exploring using algorithms to design more powerful chips, which could help move the needle. Special-purpose processors for AI models exist today, but by creating even more specialized chips, it’s possible to eke out more performance, Domingos said. Those designs can circumvent roadblocks to increasing the raw performance of existing silicon, such as making an application-specific integrated circuit that performs physics calculations.

“These companies — the chip-makers, or the providers of the metaverse, or who knows — will make more and more advanced chips for this purpose,” Domingos said. “For every level of the stack, from the physics to the software, there are things you can do.”

Domingos noted that, in the 1990s, ray tracing in real time would have been considered impossible, yet decades later it’s done routinely by the chips that power the PlayStation 5 and Xbox Series X. Google’s AI chips, known as tensor processing units, are another example of the kind of specialized silicon that will only become more abundant, and that the metaverse will require.

A fabulous future

But generational shifts in computing also require equivalent shifts in manufacturing technology. Companies such as TSMC and Intel are already pushing the boundaries of physics with extreme ultraviolet lithography machines to print the most advanced chips.

The latest EUV machines are dedicated to squeezing larger numbers of ever-smaller transistors and features onto each chip, continuing down the path that has been established for decades. But at some point in the future, the chip-making machines will become too costly, or it will be impossible to shrink features any further.

“If you look at where the architecture stands, if you look at where the performance per watt stands, I don’t want to say we need a breakthrough, but we’re pretty close to needing a breakthrough,” Bajarin said. “Sub-one nanometer is roughly four or five years away, and that’s not going to solve this problem.”

Without a generational leap in computing, a lower-fidelity version of the Zuckerverse is attainable. Assuming users will settle for graphics only somewhat better than what Second Life achieved a decade ago, it should be possible in the longer run to build something that meets some of the goals, such as a persistent, internet-connected virtual world. Even that version of the metaverse will require better networking tech, the specialized chips Domingos described and possibly something like artificial intelligence computing to handle some of the more complex but mundane workloads.

“There’s a lot of scaling up to do, which means that today’s data centers are going to look minuscule compared with the ones of tomorrow,” Domingos said.

But it’s going to take a long time to get there. Zuckerberg’s vision of the metaverse could be decades away, and after losing $20 billion on the effort so far, it's not clear Meta will have the cash to turn that vision into reality.
