Enterprise

Nvidia wants to solve 'hard problems' for health care companies with AI. At stake? $10 trillion.

Nvidia is chasing the health care market for AI with a medical-grade version of its data center server and software platform, and Kimberly Powell is leading the charge.

Kimberly Powell, Nvidia's vice president and general manager of Health Care

Photo courtesy of Nvidia

For most of its early life, tech industry insiders thought of Nvidia as a business focused on video game graphics technology: first graphics cards, and then the specialized chips that power them.

But Nvidia has come a long way since realizing that the parallel-processing tech that was so good at rendering graphics could also be put to work on machine-learning problems in enterprise tech. It has expanded its chip offerings into the data center, developed a software business and more recently pursued the $10 trillion health care market en route to surpassing Intel as the most valuable chip company in the U.S.

That $10 trillion target is no secret to other big enterprise tech companies with AI businesses, either. Cloud titans such as Oracle and Microsoft have also begun to chase the lucrative industry, each announcing big acquisitions in recent months centered on the evolution of health care and AI.

Nvidia’s top executive heading up those efforts is Kimberly Powell, vice president and general manager of Health Care and a 14-year veteran at the company. Powell recently spoke with Protocol to discuss Nvidia's recent health care initiatives, and how it plans to approach the sector in the future.

This interview has been edited and condensed.

Can you outline Nvidia’s current effort in health care?

We think we can make a significant contribution in the area of medical imaging and medical devices. One of the largest workloads in supercomputing, and accelerated computing, is in the area of life sciences. To be able to do simulation of diseases, and chemical compounds interacting and trying to stop the behavior — [it’s] to do drug discovery, essentially in silico, in a computer.

Our charter, which we put out back in 2018, is: With the accelerated computing platform [and] the new modern AI era, how do we take these computing approaches and help the health care industry benefit from them? We called the platform Nvidia Clara, after Clara Barton, who founded the American Red Cross. Our Clara is a platform that helps the industry take advantage of the most advanced computing approaches — everything from hardware solutions through acceleration layers, libraries and full-on applications. And we target domains we think we can make a unique contribution to.

We solve really hard problems. One of our most successful bodies of work is MONAI, which is where we’ve taken PyTorch and essentially wrapped it and domain-specified it for health care.
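
MONAI, the Medical Open Network for AI, keeps the standard PyTorch programming model and layers medical-imaging building blocks on top of it. As a rough sketch of what that wrapping looks like in practice (the file path and model settings below are illustrative placeholders, not anything Nvidia shared), a segmentation workflow might start like this:

```python
# Illustrative sketch only: MONAI adds medical-imaging I/O, transforms, networks
# and losses on top of plain PyTorch.
import torch
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Medical-format-aware preprocessing (NIfTI/DICOM loading, intensity scaling).
# "example_ct_scan.nii.gz" is a hypothetical placeholder path.
preprocess = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])

# A standard 3D U-Net for organ or tumor segmentation; still an ordinary torch.nn.Module.
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)  # a segmentation-specific loss

# A fake 64^3 volume stands in for a preprocessed scan so the sketch runs end to end.
volume = torch.rand(1, 1, 64, 64, 64)
with torch.no_grad():
    logits = model(volume)  # shape: (1, 2, 64, 64, 64); everything downstream is regular PyTorch
```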

What is the market opportunity Nvidia is chasing in health care?

You can think of the [total addressable market] in a lot of interesting ways in health care, and where it’s going to go. But it's approaching a $10 trillion industry. For example, if you look at the radiology practice itself, it’s a half-trillion dollar market, where you’re doing a combination of the devices themselves, but then also the skills needed to interpret these images. You could imagine that whole thing as the TAM.

The expansion will continue too. If you know a bit about health care, only a third of the population of the world has access to things like radiology, or even surgical procedures. And it’s growing. We’re living longer; our population is growing. We don’t have enough medical professionals to serve the existing population. And so we need to have ways of automating and augmenting the medical professionals that we do have: being able to create computing platforms that could take someone who has studied medicine for a few years and make them a trained surgeon, rather than the 30 years it usually takes to become one.

So that’s the market analysis; you could take it all the way through if you’d like. The way we think about it is that, without a doubt, every single one of our medical professionals will be augmented with these platforms and these capabilities.

How large is the team and how does it fit into Nvidia?

It’s in the hundreds, because at this point we are building full-on products and platforms. We have everything from applied research and engineering to product marketing, technical marketing, developer marketing, campaign marketing and business development.

I essentially report to [Nvidia CEO] Jensen [Huang]. Our report structure generally doesn’t really signify anything. In general, I’m working with Jensen on efforts in health care and he is very, very passionate about this area, which is why [it's] 14 years in the making.

How do you decide where and how to allocate resources, R&D budget?

We have three fundamental questions that get us set straight on this. One: Have we identified an important problem? [Second], is it super hard to do? Because if it’s not hard to do, I’d love for somebody else to go do it. Truly. Because we attract the world’s most fantastic talent and you want them to work on something that is just hard to do. And the third thing is: Do we have a unique capability to do it? If all three of those don’t check out, then it’s not really something we would deploy resources on.

Does Nvidia make hardware that is designed specifically for health care applications? Or does Nvidia build something like its DGX data center AI server systems for the health care industry?

If you think of what medical devices are, what they are becoming — they are being redefined by AI and robotics. And they need a specific platform that is medical grade. Is that down to the silicon? No, it's built with the three chips Nvidia offers: smart networking interconnection that [can] stream in all of the data that’s coming off the sensors, whether it’s ultrasound or an endoscope. You need to stream that data in, and then you need to start operating a data pipeline to do more and more sophisticated things to assist the surgeon or the radiologist, or whoever is reading the result.

We use our network interconnect to stream the data in and do a direct memory access right into the [graphics processing unit] to start all of the AI processing. If you think of what happens with ultrasound, the first thing you want to do is increase the image quality, or you want to de-noise it. These are all now AI applications that can be done in software. And as soon as you get the image quality where you want it, you want to start providing real-time information: This is the heart, this is the ventricle, this is how much blood flow is going through this particular area. Cut here, don’t cut there. It’s guiding, so these devices are essentially becoming robots themselves.
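
As a hypothetical illustration of that flow, the sketch below uses plain PyTorch rather than Nvidia's Holoscan APIs: a frame comes off the sensor, is copied into GPU memory, gets cleaned up by a stand-in denoising model, and comes back ready to display.

```python
# Hypothetical sketch of the sensor-to-display loop described above, written in
# plain PyTorch (not Nvidia's Holoscan APIs). The "denoiser" is an untrained
# stand-in for a real image-enhancement model.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
denoiser = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, kernel_size=3, padding=1),
).to(device).eval()

def process_frame(raw_frame: torch.Tensor) -> torch.Tensor:
    """Copy one frame to the GPU, run the denoising model, and return the result."""
    frame = raw_frame.to(device, non_blocking=True)           # host-to-GPU copy
    with torch.no_grad():
        cleaned = denoiser(frame.unsqueeze(0).unsqueeze(0))   # add batch/channel dims
    return cleaned.squeeze()                                  # ready for overlays/guidance

# In a real device this would run at the sensor's frame rate; here one fake
# 512x512 ultrasound frame stands in for the stream.
display_image = process_frame(torch.rand(512, 512))
```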

So Clara Holoscan is a platform for medical devices, and Clara Holoscan MGX is the hardware compute platform that will essentially create that architecture: streaming the data in, doing massive AI computing, and going from sensor to display in complete real time, because you need that when you’re in a surgical environment.

When you talk about Nvidia’s health care products being “medical grade,” what do you mean by that exactly? How can a chip or a piece of software achieve that?

There are two pieces of it. Because we’re a platform, there’s a hardware layer and a software layer. And those two pieces need to be medical grade. You can’t put any old computer next to a patient; liquids might spill on it, for example. Or it might interfere with other signaling that’s going on inside the health care environment. There are certifications that the market has to go through, so we architect the system for that at the hardware layer. At the software layer, there are similar safety certifications for medical devices. The same thing exists for the automotive industry, all the redundancy things that you need, and similar requirements exist in the health care industry.

Think of a startup company engineering a system, getting it medically certified, building the entire compute stack so they can run their application on top. They’ll go out of business before they can even dream about doing it. We’re really just trying to create that platform for them to accelerate innovation and go to market.

Picking up on what you said about training a surgeon in three years vs. 30, what will we have to achieve in computing technology, chip design and software in order to enable that?

If you’ve watched Jensen describe what he means by “million-x” — it’s a paradigm that’s not just for medical, but we absolutely apply it there. The first thing you need to do is make something accelerated. The next thing you need to do is scale it in parallel, and then the next thing we’ve done is introduce artificial intelligence.

Let me give you the example of drug discovery. We are able to take what would otherwise be an intractable problem, where you have to accurately simulate how drugs interact with diseases. It’s intractable in its current form, but it can be refactored by using AI, physics-informed neural networks, to essentially augment that simulation approach. These AI models are essentially able to predict physics behavior and can be applied throughout, and that’s going to be what is absolutely transformational in the area of drug discovery and simulation. And simulation is present in everything that we do, and in what we are going to do in the future. So without that, it will be very hard to get where we want to go, and that’s the next five to 10 years — AI being able to do these physical-world calculations, physics predictions.
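
Physics-informed neural networks fold the governing equations into the training loss itself: the model is penalized whenever its predictions violate the physics. The generic, textbook-style sketch below illustrates the idea on a simple one-dimensional heat equation; Nvidia's drug discovery models are far more elaborate, but the underlying trick is the same.

```python
# Generic physics-informed neural network (PINN) sketch for the 1D heat equation
# u_t = alpha * u_xx. Illustrative only; it is not Nvidia's drug discovery code,
# but it shows how a PDE residual becomes part of a neural network's loss.
import torch

alpha = 0.1
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    # Sample random points in space and time where the physics must hold.
    x = torch.rand(256, 1, requires_grad=True)
    t = torch.rand(256, 1, requires_grad=True)
    u = net(torch.cat([x, t], dim=1))

    # Autograd supplies the derivatives that appear in the equation.
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]

    # The "physics-informed" part: penalize violations of u_t = alpha * u_xx.
    # (A real model adds boundary- and initial-condition terms to this loss.)
    loss = ((u_t - alpha * u_xx) ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```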
