Anthropic has become the latest artificial intelligence (AI) company to announce a suite of features designed to help users of its Claude platform better understand their health information.
Under an initiative called Claude for Healthcare, the company said U.S. subscribers on Claude Pro and Max plans can opt to give Claude secure access to their lab results and health records by connecting to HealthEx and Function, with Apple Health and Android Health Connect integrations rolling out later this week via its iOS and Android apps.
“When connected, Claude can summarize users’ medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments,” Anthropic said. “The aim is to make patients’ conversations with doctors more productive, and to help users stay well-informed about their health.”
The development comes just days after OpenAI unveiled ChatGPT Health, a dedicated experience that lets users securely connect medical records and wellness apps and receive personalized responses, lab insights, nutrition advice, and meal ideas.
The company also pointed out that the integrations are private by design: users can explicitly choose what information they share with Claude, and can disconnect or edit Claude's permissions at any time. As with OpenAI, Anthropic says the health data is not used to train its models.
The expansion comes amid growing scrutiny over whether AI systems can avoid offering harmful or dangerous guidance. Recently, Google stepped in to remove some of its AI summaries after they were found to provide inaccurate health information. Both OpenAI and Anthropic have emphasized that their AI offerings can make mistakes and are not substitutes for professional healthcare advice.
In its Acceptable Use Policy, Anthropic notes that a qualified professional in the field must review the generated outputs “prior to dissemination or finalization” in high-risk use cases involving healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance.
“Claude is designed to include contextual disclaimers, acknowledge its uncertainty, and direct users to healthcare professionals for personalized guidance,” Anthropic said.
