The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause.
Nvidia VP of healthcare Kimberly Powell, one of Fast Company’s AI 20 honorees, has led the company’s health efforts for 17 years, giving her a big head start on understanding how to turn AI’s potential to improve our well-being into reality. Since it’s likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she’s in the right place to have an impact.
This Q&A is part of Fast Company’s AI 20 for 2025, our roundup spotlighting 20 of AI’s most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs?
Drug discovery is an enormous problem. It’s a 10-year journey at best. It costs several billion dollars to get a drug to market. Back in 2017, very shortly after the transformer (generative AI model) was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still (DeepMind’s) invention of AlphaFold. Everything that makes (humans) work is based on proteins, how they fold, and their physical structure. We need to study that, (because) you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease.
So instead of using the transformer model to predict words, they used a transformer to predict the structure a protein folds into from its sequence. It allowed the world to see that it’s possible to represent the world of drugs in a computer. And the world of drugs really starts with human biology.
After you take a sample from a human, you put it through a sequencing machine, and what comes out is a 3-billion-character sequence of letters: As, Cs, Ts, and Gs. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters.
So how will this new approach end up giving us breakthrough drugs?
If you look at the history of drug discovery, we’ve been kind of circling around the same targets—the target is the thing that causes the disease in the first place—for a very long time. And we’ve largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It’s probably multiple targets. And that’s why cancer is so hard, because it’s many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently.
Once we’ve cracked the biology, and we’ve understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist’s potential training or even their imagination. It’s a near infinite search space. These generative models can open our aperture.
I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure.
We have to do a bunch of really intricate data science work to apply this (transformer) method to these crazy data domains. Because we’re (going from) the language model and (representing) these words that are just short little sequences to representing sequences that are 3 billion (characters) long. So things like context length—how much information you can put into a prompt—have to be figured out for these long protein and DNA strings.
We have to do a lot of tooling, invention, and new model architectures that have transformers at the core. That’s why we work with the community to figure out what new methods or tooling we have to build so that new models can be developed for this domain. That’s in the area of really understanding biology better.
Can you say more about the company you’re working with that is using digital twins to simulate an expensive clinical trial before the trial begins?
ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don’t have enough patients, or patients of the right type. They can even simulate, depending on the site selection, how likely the patients are to stay on protocol.
Keeping patients adhering to the clinical trial is a huge challenge, because not everyone has access to transportation or the ability to take time off work. They build a lot of that into their model so that they can set up the clinical trial for its best success factors.
How might AI agents impact healthcare?
You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we’re going to need these physical agents, which I would call robots, that can actually perform physical tasks.
You can think about the deployment of robots, everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics.
Nvidia sells chips, but what I’ve heard in your comments is that you offer a whole tech stack, including in healthcare: there are models, there are software layers, things like that.
I’ve been at the company 17 years working on healthcare, and it’s not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools.
And a model is never done—you have to be constantly improving it. Through every usage of that model, you’re learning something, and you’ve got to make sure that that agent or model is continuously improving. We’ve got to create whole computing infrastructure systems to serve that.
