AI in medicine is expanding, especially under the concept of "software as a medical device", yet regulatory approval remains slow and public acceptance is not growing significantly. What needs to be done to address opaque algorithms in medical AI? The answer may lie in the development of a universal framework built on an ethical structure. Through such a structure, developers, healthcare professionals, and legislators can become better 'sensitized' to the needs of the general population.
A new article from US-based medical researchers has probed the use of artificial intelligence-based software in medical devices. Such devices offer the possibility of alleviating suffering through rapid identification and early intervention.
Yet the adoption of such devices in clinical practice has remained relatively slow. The limitation lies not so much with the technology as with unresolved ethical questions.
While ethical questions vary somewhat across cultures, and there is no universal framework for the approval of AI-assisted medical devices, the guiding principles remain very similar globally. However, they are often implemented in a haphazard way.
The article calls for a structured approach to the regulatory approval process, built around the key principles of medical ethics: autonomy, beneficence, and the fair distribution of healthcare resources.
Autonomy
Autonomy concerns the importance of informed consent, self-determination, and the right to refuse or accept treatment. In other words, the patient must maintain full control over the decision-making process about their health.
In terms of AI, different national legislation shapes whether or not patients retain ownership of their data, and the extent to which patients can decide how their data are used by a healthcare facility or company.
Beneficence
Beneficence obliges the physician to act only for the benefit of the patient and avoid anything that could oppose the patient's wellbeing. This runs in tandem with non-maleficence, the principle that physicians must not harm patients in any way.
In terms of AI, this means ensuring that AI-based devices lead to timely intervention and preventive measures.
It also means avoiding training AI algorithms on biased datasets. The risk otherwise is that AI perpetuates and amplifies existing biases, leading to discriminatory and unfair outcomes.
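To make that risk concrete, here is a minimal, purely illustrative sketch of the kind of subgroup audit such principles imply (the paper does not prescribe this code; the data and column names are invented). The point it demonstrates: a model that looks accurate overall can still miss far more true cases in one demographic group than another.

```python
# Hypothetical subgroup audit: compare per-group sensitivity.
# Records and column names ("group", "label", "prediction") are invented.
import pandas as pd

# Toy records: (demographic group, true label, model prediction)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]
df = pd.DataFrame(records, columns=["group", "label", "prediction"])

# Per-group sensitivity (true positive rate): how often the model
# correctly flags patients who actually have the condition.
for group, sub in df.groupby("group"):
    positives = sub[sub["label"] == 1]
    tpr = (positives["prediction"] == 1).mean()
    print(f"group {group}: sensitivity = {tpr:.2f}")
```

On this toy data the model catches every case in group A but only one in three in group B, the sort of disparity a regulator applying the beneficence principle would want surfaced before approval.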
Fair distribution
Fair distribution falls under the concept of 'justice': appropriate measures must be in place to ensure that no implicit bias arises from the use of AI-based devices and that unfair discrimination is eliminated during the development process.
Explainability
An important area is building public trust in AI. Here the paper calls for "explainability and transparency of AI algorithms" as "the characteristics that are crucial to ensuring the trust and accountability of these systems." In other words, if the public does not understand what an AI algorithm actually does and cannot see how their data are handled, then acceptance of the AI, and willingness to share data or participate in a trial, is diminished.
Explainability is not a purely technological issue; it invokes a host of medical, legal, ethical, and societal questions.
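As one illustrative sketch of what "explainability" can mean in practice (again, not the paper's method, and with invented feature names and synthetic data): an interpretable model's learned weights can be reported alongside its predictions, so a clinician or patient can see which inputs drive a risk score.

```python
# Illustrative transparency sketch: surface the weights of an
# interpretable model. Feature names and data are invented; real
# medical AI typically needs richer tools and clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "glucose"]  # hypothetical inputs

# Synthetic cohort: risk driven mostly by glucose in this toy setup.
X = rng.normal(size=(200, 3))
y = (X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A per-feature weight report a reviewer, clinician, or patient could inspect.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Even a simple report like this shows why transparency is more than a technical feature: it gives regulators and patients something concrete to question.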
As a suitable outcome, the paper recommends regulating quality management, risk assessment, and data privacy to help build trust and promote the adoption of AI in healthcare.
The research appears in the journal Cureus, titled "Integrating Ethical Principles Into the Regulation of AI-Driven Medical Software."