
Nvidia's 'medical grade' chip strategy

Protocol Enterprise

Hello and welcome to Protocol Enterprise! Today: How Nvidia plans to chase a piece of the $10 trillion health care market, how AWS thinks AI will evolve, and the race to save Ukraine’s digital assets.

Spin up

IBM continues to do surprisingly well with its flat-growth-but-profitable mainframe business, and one of the reasons behind that momentum is that the average migration from mainframes to cloud computing can take 18 months, and in some cases, up to five years, according to research from ISG.

The AI chip will see you now

For most of its early life, tech industry insiders thought of Nvidia as a business focused on video game graphics technology; first graphics cards, and then the specialized chips that power them.

But Nvidia has come a long way since it realized that the parallel processing tech that was so good at rendering graphics could also be put to work on machine-learning problems for enterprise tech. It has expanded its chip offerings into the data center, developed a software business and recently pursued the $10 trillion health care market en route to surpassing Intel as the most valuable chip company in the U.S.

Nvidia’s top executive heading up those efforts is Kimberly Powell, vice president and general manager of Health Care and a 14-year veteran at the company. Powell recently spoke with Protocol to discuss Nvidia's recent health care initiatives, and how it plans to approach the sector in the future.

Can you outline Nvidia’s current effort in health care?

We think we can make a significant contribution in the area of medical imaging and medical devices. One of the largest workloads in supercomputing, and accelerated computing, is in the area of life sciences. To be able to do simulation of diseases, and chemical compounds interacting and trying to stop the behavior — [it’s] to do drug discovery, essentially in silico, in a computer.

Our charter, which we put out back in 2018, is: with the accelerated computing platform [and] the new modern AI era, how do we take these computing approaches and help the health care industry benefit from them? We called the platform Nvidia Clara, after Clara Barton, who founded the American Red Cross. Our Clara is a platform that helps the industry take advantage of the most advanced computing approaches — everything from hardware solutions, through acceleration layers and libraries, to full-on applications. And we target domains where we think we can make a unique contribution.

What is the market opportunity Nvidia is chasing in health care?

You can think of the [total addressable market] in a lot of interesting ways in health care, and where it’s going to go. But it's approaching a $10 trillion industry. For example, if you look at the radiology practice itself, it’s a half-trillion dollar market, where you’re doing a combination of the devices themselves, but then also the skills needed to interpret these images. You could imagine that whole thing as the TAM.

Does Nvidia make hardware that is designed specifically for health care applications? Or does Nvidia build something like its DGX data center AI server systems for the health care industry?

If you think of what medical devices are, what they are becoming — they are being redefined by AI and robotics. And they need a specific platform that is medical grade. Is that down to the silicon? No, it's built with the three chips Nvidia offers: smart networking interconnects that [can] stream in all of the data that’s coming off the sensors, whether it’s ultrasound or an endoscope. You need to stream that data in, and then you need to start operating a data pipeline to do more and more sophisticated things to assist the surgeon or the radiologist, or whoever is reading the result.

We use our network interconnect to stream in to do a direct memory access right into the [graphics processing unit] to start all of the AI processing. If you think of what happens with ultrasound, the first thing you want to do is increase the image quality, or you want to de-noise it. These are all now AI applications that can be done in software. And as soon as you get the image quality where you want it, you want to start providing real-time information. This is the heart, this is the ventricle, how much blood flow is going through this particular area. Cut here, don’t cut there. It’s guiding, so they’re essentially becoming robots themselves.
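The denoising step Powell describes can be sketched in a few lines. This is a toy stand-in, not Nvidia's Clara pipeline: a plain box filter in NumPy plays the role of the AI denoiser, applied to a synthetic "frame" with simulated sensor noise.

```python
import numpy as np

def denoise(frame: np.ndarray, k: int = 3) -> np.ndarray:
    """Box-filter denoising: average each pixel with its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                          # a bright square, standing in for anatomy
noisy = clean + rng.normal(0, 0.3, clean.shape)  # simulated sensor noise
smoothed = denoise(noisy)

# Denoising should bring the frame closer to the clean reference.
print(np.abs(noisy - clean).mean() > np.abs(smoothed - clean).mean())  # True
```

A production denoiser would be a learned model running on the GPU as the frames stream in, but the shape of the pipeline — ingest, clean up, then extract measurements — is the same.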

When you talk about Nvidia’s health care products being “medical grade” what do you mean by that exactly? How can a chip or a piece of software achieve that?

There are two pieces of it. Because we’re a platform, there’s a hardware layer and a software layer. And those two pieces need to be medical grade. You can’t put any old computer next to a patient; liquids might spill on it, for example. Or it might interfere with other signaling that’s going on inside the health care environment. There are certifications that the market has to go through, so we architect the system for that at the hardware layer. At the software layer, there are similar safety certifications for medical devices. It exists for the automotive industry, all the redundancy things that you need, and similar things exist in the health care industry.

— Max A. Cherney (email | twitter)


DuckDuckGo has an all-in-one privacy solution aimed at simplifying online privacy protection. DuckDuckGo’s app can be used as an everyday browser with private search, tracker blocking, encryption, and now email protection built-in. It’s the free, easy button for online privacy.

Learn more

AWS machine-learning guru shares her AI investment roadmap

What does AWS think is the next big thing in AI and machine learning? It’s not just what’s on the product or partnership roadmap, but where the company puts its investment dollars.

Allie Miller, global head of Machine Learning Business Development for Startups and Venture Capital at AWS, shared the AI- and machine-learning-related trends she predicts will hit the big time during a talk this week at ScaleUp, an AI conference held by software investment firm Insight Partners. Here are three:

Large language models as platforms

“You would have had to be asleep for the last two years to not know that GPT-3 has existed,” Miller said of OpenAI’s model, which has formed the foundation for a variety of enterprise AI software for chatbots and other systems. Despite problems even GPT-3’s makers acknowledge, Miller’s take is that large language models have improved too much to ignore. “The performance of these models is now starting to be indistinguishable from human-generated content. And the use cases are vast — from search to sales intelligence,” she said.

She revealed that AWS doesn’t just think of these as AI models, either. “Large language models, as we see it — or as my team thinks about it — is a new platform. It is a paradigm shift in how software will be created and how AI will be created, how content will be created.”
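Miller's "platform" framing boils down to this: one text-in, text-out model, with many applications built on top of it purely through prompting. A minimal sketch of that shape, with a trivial stand-in function where a hosted model such as GPT-3 would sit (the stand-in and its behavior are illustrative, not a real API):

```python
# A trivial stand-in so the sketch runs offline; in practice this would be a
# call to a hosted large language model.
def fake_llm(prompt: str) -> str:
    # Echo the last line of the prompt in capitals -- a placeholder for real generation.
    return prompt.strip().splitlines()[-1].upper()

def build_app(task_instruction: str):
    """Turn one model into a task-specific 'application' via a prompt template."""
    def app(user_input: str) -> str:
        prompt = f"{task_instruction}\n{user_input}"
        return fake_llm(prompt)
    return app

# Two "applications" backed by the same underlying model, differing only in prompt.
shout = build_app("Rewrite the following text in capital letters:")
print(shout("hello enterprise"))  # HELLO ENTERPRISE
```

The point of the sketch is the factoring: the model is the platform layer, and each product is a thin prompt-plus-glue layer on top, which is why Miller calls it a paradigm shift in how software gets created.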

Synthetic data

There’s no shortage of complaints among AI product developers about a shortage of data to feed machine- and deep-learning models for specific use cases. That’s a recipe for a synthetic data smorgasbord, said Miller, who said startups that create data to train AI models will get more attention in the near future. “I'm trying to open my phone [and] it never opens up because it wasn't trained on this data,” she said. Synthetic data services that could build variations on photos and image data to train a phone’s face-unlocking system, for example, could be useful. “Being able to manage that with millions of synthetic images is incredibly helpful,” she said.
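The face-unlock example amounts to manufacturing labeled variations of scarce data. A minimal sketch, in which simple mirroring, lighting jitter and shifts stand in for what a real synthetic data service would generate:

```python
import numpy as np

def synthesize(face: np.ndarray, n: int, rng: np.random.Generator) -> list:
    """Produce n jittered variants of one enrollment image (values in [0, 1])."""
    variants = []
    for _ in range(n):
        img = face.copy()
        if rng.random() < 0.5:
            img = np.fliplr(img)                               # mirror
        img = np.clip(img * rng.uniform(0.7, 1.3), 0.0, 1.0)   # lighting jitter
        img = np.roll(img, rng.integers(-2, 3), axis=1)        # small horizontal shift
        variants.append(img)
    return variants

rng = np.random.default_rng(42)
face = rng.random((16, 16))              # stand-in for one enrollment photo
augmented = synthesize(face, 100, rng)
print(len(augmented))  # 100
```

Commercial synthetic data tools go much further — generative models producing novel faces, poses and lighting — but the economics are the same: one scarce sample becomes many training samples.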

Data decentralization and distributed everything

“You’re going to know your own car data, you're going to know your own retail, shopping and transaction data,” Miller said. The growth of services that produce and use decentralized and distributed data has given rise to technologies such as federated learning, she said, and is “ushering in a new wave of data and AI needs.”
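Federated learning addresses exactly that ownership shift: the model travels to the data instead of the reverse. A toy version, assuming a one-parameter linear model and plain weight averaging (roughly the shape of the FedAvg idea, not a production implementation):

```python
import numpy as np

def local_fit(x: np.ndarray, y: np.ndarray) -> float:
    """Each client fits y = w * x on its own data via closed-form least squares."""
    return float((x * y).sum() / (x * x).sum())

def federated_average(clients) -> float:
    """Only the fitted weights -- never the raw data -- leave each client."""
    weights = [local_fit(x, y) for x, y in clients]
    return sum(weights) / len(weights)

rng = np.random.default_rng(1)
true_w = 2.0
clients = []
for _ in range(5):               # five clients, each holding private data
    x = rng.random(50)
    y = true_w * x + rng.normal(0, 0.01, 50)
    clients.append((x, y))

print(round(federated_average(clients), 1))  # 2.0
```

Real systems average full networks of weights over many communication rounds, but the privacy property is visible even here: the server only ever sees per-client model parameters.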

Something similar is happening in terms of computing power, she said, pointing to “ultra clusters,” or high-performance computing clusters for machine learning. “Our team calls this distributed everything, that there's this rise of just throwing it out into many servers and bringing it all back together.” This, she said, is “a very similar pattern to decentralization, this idea of distributed everything, but it's about performance rather than ownership.”
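"Throwing it out into many servers and bringing it all back together" is the classic scatter/reduce/gather pattern. A single-process sketch of that pattern — the "workers" here are just shards in a list, not real servers:

```python
import numpy as np

def distributed_mean(data: np.ndarray, n_workers: int) -> float:
    """Scatter the data, let each worker reduce locally, then gather and combine."""
    shards = np.array_split(data, n_workers)            # scatter
    partial = [(s.sum(), len(s)) for s in shards]       # each worker's local reduction
    total, count = map(sum, zip(*partial))              # gather and combine
    return total / count

data = np.arange(1000, dtype=float)
print(distributed_mean(data, 8) == data.mean())  # True
```

On an actual cluster the shards live on different machines and the gather step is network communication, but the decomposition — and the reason it scales — is the same.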

— Kate Kaye (email | twitter)

Disaster recovery

Given the scope of physical destruction that Russia has visited upon Ukraine over the last several weeks, it seems almost trivial to worry about digital destruction. But Ukraine’s digital assets and history have actual and sentimental value to its citizens, and that’s one reason why the Internet Archive embarked on a frantic mission to save what it could of the Ukrainian internet.

As detailed by IEEE Spectrum, the Internet Archive has been expanding its efforts to back up Ukraine’s internet, including cultural and news websites. Mark Graham, director of the Wayback Machine, said the organization is filling up “several 16-terabyte drives every day” because “once it’s gone, you can’t get it back.”

— Tom Krazit (email | twitter)

Around the enterprise

TSMC revenue soared 36% to $17 billion in the first three months of 2022, as the chip shortage continues to lift the fortunes of chipmakers.

AMD CEO Lisa Su will become the first alum in MIT history to make a donation to a building that will be named after them.


Tracking is a comprehensive problem — over 80% of websites, apps and emails contain third-party trackers. Because of that, people need a multi-pronged privacy solution. DuckDuckGo’s all-in-one privacy app can be used as an everyday browser with multiple features built-in, including private search, tracker blocking, encryption, and email protection.

Learn more

Thanks for reading — see you Monday!

Recent Issues