A Bernstein Research analyst says OpenAI CEO Sam Altman has the power to crash the global economy or take everyone ‘to the promised land’ as the startup behind ChatGPT races to build artificial intelligence infrastructure costing billions of dollars. (Photo: Justin Sullivan, Getty Images North America/AFP)
A single search turns up many points of access to ChatGPT Health, a chat service that offers consumers a range of services related to medical issues.
ChatGPT Health was launched “just” as a general information service, but the frame of reference is huge. It’s an interface for general information, which can be, and has been, construed as advice.
This Guardian article about ChatGPT Health spells out in clear and alarming terms what can go wrong and what has gone wrong, notably a case of someone taking sodium bromide instead of table salt.
To explain this problem a bit more succinctly:
AI draws on available data to process requests. For sodium bromide, that data is minimal and appallingly patchy. Even the manufacturers seem to have only so much information to work with.
This is what the AI would see:
General search data. It’s OK for a casual overview, but hardly at a diagnostic level. It does include safety and toxicity information.
Product information, such as it is. Note the many “No data available” entries on this info sheet.
So this guy takes sodium bromide, starts hallucinating, and winds up in the hospital. Ironically, the toxicity information in the basic search includes hallucinations.
You can see how this works. How this guy got the idea that sodium bromide was a substitute for table salt is anyone’s guess. Maybe because they both contain sodium?
Now the major issue.
To coin an expression, this is “AI overreach”.
In the consumer’s case, he far overreached his own knowledge base. This kind of information simply doesn’t, can’t, and won’t translate into a quick fix for table salt or anything else.
In ChatGPT Health’s case, it’s a huge overreach. It’s one thing to simply recite factual information. To transpose that level of information into any sort of medical advice is out of the question.
One of the reasons online health has taken off is that it’s supposed to be basically the same service you’d get from a GP. It’s quick, it’s efficient, it saves time and money for both parties, and nobody has to risk their life in traffic to treat a head cold.
AI health services like this are inevitably well out of their depth in these basic functions. Nor are they subject to the same level of two-party scrutiny.
A GP and a patient can both look at the same AI information and decide whether they believe it or trust it.
Sodium bromide certainly wouldn’t have passed this very basic level of scrutiny. A swimming pool disinfectant as table salt? Maybe not?
Regulation
Regulation could be easier than it looks, though. Under Australian law, a company cannot itself practise medicine; medical registration is restricted to natural persons. These are very important distinctions.
So how could an AI provide the same or similar services? An AI isn’t a person at all, let alone a registered practitioner. The same reasoning could well extend to any form of medical advice.
As a therapeutic good, AI could be regulated by the Therapeutic Goods Administration. It could only operate under general therapeutic regulations, with any number of safeguards built in. Note that a reasonable interpretation of “therapy” can easily include advisory services.
Why regulate?
Because otherwise any dangerous pseudomedical product can enter the food chain. Because there are serious risks of major damage. Look how much fun America has with its meds, regulated or otherwise. It’s too high a risk.
Never mind “buyer beware”. Why should the buyer of anything have to beware of anything at all? What’s wrong with an obligation to market safe products?
A warning sticker saying “this product could exterminate your entire family” may be noble and uplifting and tell you what nice guys the manufacturers are, but why do you need it? Does “may contain plutonium” tell you enough?
In the case of sodium bromide, it’s a fair assumption that nobody seriously considered it a substitute for table salt. You shouldn’t need to be told so, but here’s a documented case of exactly that.
AI is destined to be a critical part of medicine. It needs to be safe.
______________________________________________________________
Disclaimer
The opinions expressed in this OpEd are those of the author. They do not purport to reflect the opinions or views of the publication or its members.
