Disruptive technologies such as AI, digital diagnostics and therapeutics, and machine learning (ML) are revolutionizing healthcare by enabling unprecedented growth and innovation. Some 79 percent of healthcare organizations report using generative AI, and the industry is transforming at a rapid pace. AI improves patient care and management through telemedicine, drug discovery, imaging analytics, monitoring and predictive analytics, while raising challenging legal questions related to professional malpractice, privacy and product liability claims.
Main legal risks
One of the main problems AI faces stems from limitations in the data used to train its algorithms. Because the quality of AI depends on the quality of its training data, in healthcare the risks that input data is incomplete, selective, poorly understood or unrepresentative of the overall population, and that it carries potential bias, are often underestimated. Comprehensive monitoring of the information fed to algorithms, assessment of the benefit-risk profile for the intended use, and evaluation of potential bias are therefore essential to avoid legal consequences for stakeholders.
AI misinformation or inaccuracy poses a significant risk in healthcare, where precision is paramount. While AI drives innovation, it also increases the risk of misdiagnosis or mistreatment by generating plausible but incorrect or misleading information, making it critical for healthcare professionals to critically evaluate and validate results. An inaccurate AI diagnosis can have serious liability and cost implications.
The data-intensive nature of AI makes patients’ personal and sensitive medical records vulnerable to cybersecurity and privacy breaches. Healthcare organizations must implement robust security protocols to prevent data breaches. Recent data breaches in India, including the ransomware attack on the All India Institute of Medical Sciences and the Indian Council of Medical Research data breach, highlight the existing vulnerabilities of the Indian healthcare system.
Compliance with laws and regulations affects all stakeholders. While India has yet to implement comprehensive legislation regulating the use of AI in healthcare, experiences from other mature jurisdictions can help develop a robust and efficient legal framework.
The World Health Organization (WHO) provides guidance on digital and public health regulation, and instruments such as the United Nations Charter and the European Union's health regulations likewise call for regulatory harmonization on the use of AI in healthcare. For now, however, operators must navigate complex, locally varying regulatory landscapes to avoid liability.
Most AI tools are built on pre-existing open-source content and are therefore susceptible to broad infringement claims. Understanding the ownership and licensing terms of AI technology is critical to avoiding such claims.
Determining liability for AI-related inaccuracies across multiple stakeholders (hospital, developer, licensor, and physicians) is challenging. Transparency in AI decision-making processes is critical to ensuring accountability. For example, ethical issues arise with AI-driven diagnoses, which are often opaque, making it difficult for doctors to explain the reasoning behind a decision and to assign responsibility for any resulting liability, especially when the underlying AI biases are not disclosed to patients.
AI also raises antitrust concerns as its use can lead to algorithmic collusion between competitors, causing inadvertent price fixing, which is closely monitored by competition authorities. While the attribution of liability in cases involving ‘algorithmic collusion’ is evolving, it is important to assess this risk and consider monitoring algorithmic pricing tools to detect and prevent such situations.
Medical liability
Tort-based medical negligence cases typically consider the severity of the injury, the expected standard of care, and the causal link between the AI tool and the injury to determine liability. Vicarious liability allows operators to be held liable for the acts or omissions of physicians or employees. An AI tool construed as a product could attract strict liability, product liability or design-defect claims against its developers, manufacturers or licensors, while improper use of AI could lead to professional malpractice claims. A deployed AI tool may also be treated as an agent of the organization or physicians using it, exposing the principal to liability.
Case law is rapidly evolving as AI becomes an integral part of patient care. The Texas Court of Appeals (June 2024) found an AI-based medical device manufacturer liable for a defective product that provided faulty guidance to a surgeon. The U.S. Court of Appeals (November 2022) found the developer and vendor of drug management software liable on product liability and negligence claims over a defective AI user interface that caused physicians to mistakenly believe they had scheduled medications that were never actually dispensed. The Alabama Supreme Court (May 2023) held a doctor liable for relying on an incorrect AI software recommendation for heart health screening, which wrongly classified a young adult with a family history of congenital heart defects as normal.
Risk mitigation strategies
To mitigate the risks associated with AI in healthcare, stakeholders must upskill their workforce with comprehensive manuals and training on safe use and troubleshooting. Developers should transparently disclose known biases, provide mechanisms to explain decision-making, and build robust data security processes. Operators must also educate patients about the use of AI and its role in their diagnostic or treatment decisions, obtain informed consent, provide the opportunity to withdraw consent, anonymize sensitive information, and establish multiple layers of encryption.
Operators should adopt appropriate risk allocation strategies, specifically negotiating limitation of liability, indemnification and insurance coverage obligations for cases of defective products or misuse.
Conclusion
Despite the anticipated risks, the diverse benefits of AI in healthcare make its adoption a necessity for continued relevance and competitive advantage, in a landscape full of both potential and complexity. AI brings efficiency and innovation to the forefront, provided stakeholders understand and mitigate the associated risks and liabilities to foster a transparent, ethical environment of trust.
This article was written by Aditya Patni, Partner and Achint Kaur, Counsel at Khaitan & Co.
(DISCLAIMER: The views expressed are solely those of the author and ETHealthworld.com does not necessarily endorse them. ETHealthworld.com is not responsible for any damage caused directly or indirectly to any person/organization)