Artificial intelligence is only as effective as the data behind it. Without data that’s findable, accessible, interoperable and reusable — known as FAIR data — healthcare organizations risk inefficiencies, inaccuracies and unreliable AI-generated insights.
To unlock AI’s full potential, a well-structured data strategy built on FAIR principles must come first. AI alone can’t solve all data challenges; success hinges on a strong data foundation, according to Aashima Gupta (pictured, left), global director of healthcare solutions at Google Cloud.
Epam Systems’ Ted Slater and Google Cloud’s Aashima Gupta talk with theCUBE’s Rebecca Knight about the role of FAIR data in AI adoption.
“There’s a lot of AI hype, [and] there’s a lot of AI washing,” she said. “All the data projects have become AI projects, AI projects have become gen AI projects and gen AI has become agentic AI projects. But going back to [the] enterprise, [which] has diverse starting points: Data matters, data strategy matters, and I would say there’s no gen AI or agentic AI strategy if you don’t have the data strategy.”
Gupta and Ted Slater (right), managing principal of scientific informatics knowledge engineering at Epam Systems Inc., spoke with theCUBE’s Rebecca Knight at theCUBE’s Coverage of Google Cloud at HIMSS25, during an exclusive broadcast on theCUBE, News Media’s livestreaming studio. They discussed the role of FAIR data in AI adoption. (* Disclosure below.)
The role of FAIR data in AI adoption
Organizations seeking to integrate AI into healthcare operations must first focus on ensuring that data is structured for long-term usability. Data must be easy to locate and accessible to both humans and machines, according to Slater. Interoperability remains a key challenge because many systems operate in silos, preventing smooth data exchange.
“If you can’t find data, you can’t use it,” Slater said. “If you find it but can’t access it, you’re still in trouble. And ultimately, reusability ensures data can support unforeseen needs in the future.”
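The four principles Slater walks through can be framed as simple checks against a dataset's metadata record. The sketch below is a hypothetical illustration, not a formal FAIR assessment tool; the field names and pass/fail rules are assumptions chosen for clarity.

```python
# Hypothetical sketch: a minimal FAIR-readiness check on a dataset's
# metadata record. Field names and rules are illustrative, not a standard.

FAIR_CHECKS = {
    "findable": lambda m: bool(m.get("id")) and bool(m.get("keywords")),
    "accessible": lambda m: bool(m.get("access_url")),
    "interoperable": lambda m: m.get("format") in {"csv", "json", "fhir"},
    "reusable": lambda m: bool(m.get("license")) and bool(m.get("provenance")),
}

def fair_report(metadata: dict) -> dict:
    """Return which FAIR principles a metadata record satisfies."""
    return {name: check(metadata) for name, check in FAIR_CHECKS.items()}

record = {
    "id": "doi:10.1234/example",          # persistent identifier -> findable
    "keywords": ["oncology", "imaging"],  # indexed terms -> findable
    "access_url": "https://data.example.org/ds1",  # retrievable -> accessible
    "format": "fhir",                     # shared standard -> interoperable
    "license": "CC-BY-4.0",               # clear terms -> reusable
    "provenance": "collected 2024, de-identified",
}

print(fair_report(record))
```

A record missing, say, a license would fail only the "reusable" check, which mirrors Slater's point that data can be findable and accessible yet still hard to repurpose.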
Beyond structuring data for AI, organizations must also consider the cost of failing to implement FAIR data strategies. Without a structured approach, data can quickly become fragmented and difficult to repurpose, leading to inefficiencies and costly delays, according to Slater.
“The right question is, ‘How much is it costing you to not have FAIR data?’” he said. “If you can’t use your data for the things you need [it] for now [or] if things pop up requiring you to retool everything, that’s just not going to work. It costs too much money to make that happen.”
Practical steps for integrating FAIR data principles
To effectively implement FAIR data principles, organizations should focus on early validation and practical applications rather than aiming for perfection. A phased approach can help demonstrate value more quickly, encouraging greater stakeholder buy-in, according to Slater.
“One of the things that gets expensive is waiting too long to verify your data,” he said. “The longer you wait, the more expensive it is. In the community, we call this ‘born FAIR.’ Almost no one has data that is 100% FAIR, so just make sure you’re doing FAIR to the extent that supports your operations. If you need to do more later, you’ll be in a better position.”
Balancing long-term data strategies with immediate practical applications is also critical, according to Gupta. Demonstrating the benefits of FAIR data early in the process can help sustain momentum and ensure ongoing investment in structured data initiatives.
“Oftentimes, people are building this data strategy for years — build it and they’ll come,” she said. “While that is important, it’s also critical to balance. [You] show the value much faster when taking the FAIR approach. If it’s a five-year strategy and you haven’t spun out any use cases, people will lose the vision. Always balance near-term use cases with a longer-term data strategy.”
With the rise of agentic AI, the approach to data management is shifting. Traditionally, organizations would collect, normalize and transform data before building AI applications. Now, AI agents are increasingly capable of processing data in place, reducing the need for extensive pre-processing and transformation efforts, according to Gupta.
“With agentic AI, we are seeing a shift,” she said. “It used to be: ‘Get all your data, map it, transform it, harmonize it, normalize it, put it in the cloud and then build AI.’ With agentic AI, it’s much more local data processing — data stays in place. Not all use cases are built for that, but now we have agentic AI that can connect to your data where it is.”
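The shift Gupta describes can be sketched as the difference between an up-front ETL pipeline and an agent tool that runs its query directly against the source store. This is a minimal, hypothetical illustration of the "data stays in place" pattern; the function name, schema, and in-memory SQLite stand-in for an on-prem system are all assumptions.

```python
# Hypothetical sketch of "data stays in place": rather than extracting,
# mapping and normalizing data into the cloud first, an agent-style tool
# executes its query where the data already lives.

import sqlite3

def query_in_place(conn: sqlite3.Connection, sql: str) -> list:
    """Tool an agent could call: run read-only SQL at the source system."""
    return conn.execute(sql).fetchall()

# In-memory database stands in for a source system the data never leaves.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (patient_id TEXT, dept TEXT)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?)",
    [("p1", "cardiology"), ("p2", "oncology"), ("p3", "cardiology")],
)

# The aggregation happens at the source; only the small result moves.
rows = query_in_place(conn, "SELECT dept, COUNT(*) FROM visits GROUP BY dept")
print(rows)
```

As Gupta notes, not every use case fits this pattern, but it avoids the copy-transform-load step for those that do.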
Here’s the complete video interview, part of News’s and theCUBE’s Coverage of Google Cloud at HIMSS25:
(* Disclosure: TheCUBE is a paid media partner for theCUBE’s Coverage of Google Cloud at HIMSS25. Neither Google LLC, the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or News.)
Photo: News