Is AI in medicine playing fair?

News Room | Published 10 April 2025 (last updated 10:29 AM)

A radiologist interpreting magnetic resonance imaging. Image: The Medical Futurist editors, "The Future of Radiology and Artificial Intelligence", The Medical Futurist (2017-06-29), CC 4.0.

Researchers have stress-tested generative artificial intelligence models and are urging safeguards. The new study raises concerns about responsible AI in health care.

As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai reveals that generative AI models may recommend different treatments for the same medical condition based solely on a patient’s socioeconomic and demographic background.

Serving patients by wealth and social class

The findings highlight the importance of early detection and intervention to ensure that AI-driven care is safe, effective, and appropriate for all.

As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, producing more than 1.7 million AI-generated medical recommendations. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient’s socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation.
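To illustrate the kind of counterfactual stress test described above, here is a minimal Python sketch. It is an assumption about how such an audit could be wired up, not the study's actual protocol: the `query_model` function, the demographic labels, and the prompt wording are all hypothetical stand-ins.

```python
from itertools import product

# Hypothetical demographic framings; the study used 32 distinct patient
# backgrounds per case (the labels below are illustrative, not the study's).
BACKGROUNDS = [
    "a high-income patient with private insurance",
    "a low-income patient who is currently unhoused",
]

def make_prompt(case_text: str, background: str) -> str:
    """Embed the same clinical vignette in a different demographic framing."""
    return (
        f"Patient description: {background}.\n"
        f"Clinical presentation: {case_text}\n"
        "Recommend: triage priority, diagnostic tests, treatment approach, "
        "and whether a mental health evaluation is needed."
    )

def query_model(model_name: str, prompt: str) -> str:
    """Stand-in for a real LLM API call (the article names no specific API)."""
    # A real audit would call the model here; this stub returns a fixed answer
    # so the sketch runs end to end.
    return "Triage: urgent. Tests: none. Treatment: observe. Mental health eval: no."

def stress_test(models: list[str], cases: list[str]) -> list[dict]:
    """Collect one recommendation per (model, case, background) combination."""
    results = []
    for model, case, background in product(models, cases, BACKGROUNDS):
        answer = query_model(model, make_prompt(case, background))
        results.append({"model": model, "case": case,
                        "background": background, "recommendation": answer})
    return results
```

Because the clinical details are held constant within each case, any systematic difference in recommendations across backgrounds can be attributed to the demographic framing rather than to the medicine.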

A framework for AI assurance

“Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” says co-senior author Eyal Klang, MD, Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai.

Klang adds: “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.”

Bias by income?

The study showed the tendency of some AI models to escalate care recommendations—particularly for mental health evaluations—based on patient demographics rather than medical necessity.

In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers.
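One simple way to quantify the disparity described above is to compare how often a given recommendation (for example, advanced imaging versus no further testing) appears for each demographic variant of the same cases. The sketch below shows how such a tally could be computed from the stress-test results above; it is an illustrative assumption, not the paper's actual analysis.

```python
from collections import Counter

def recommendation_rates(results: list[dict], keyword: str) -> dict[str, float]:
    """Fraction of recommendations per background that mention `keyword`
    (e.g. "CT", "MRI", or "no further testing")."""
    hits, totals = Counter(), Counter()
    for r in results:
        totals[r["background"]] += 1
        if keyword.lower() in r["recommendation"].lower():
            hits[r["background"]] += 1
    return {b: hits[b] / totals[b] for b in totals}

# Example: compare advanced-imaging rates across income framings.
# rates = recommendation_rates(results, "CT")
# disparity = max(rates.values()) - min(rates.values())
```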

The researchers caution that the review represents only a snapshot of AI behaviour and that future research will need to include assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias.

Hence, as AI becomes more integrated into clinical care, it is essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, scientists can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care.

The research paper, titled “Socio-Demographic Biases in Medical Decision-Making by Large Language Models: A Large-Scale Multi-Model Analysis,” appears in Nature Medicine.
