Artificial intelligence companies must be transparent about the risks posed by their products or be in danger of repeating the mistakes of tobacco and opioid firms, according to the chief executive of the AI startup Anthropic.
Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI would become smarter than “most or all humans in most or all ways” and urged his peers to “call it as you see it”.
Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would replay the errors of cigarette and opioid firms that failed to raise a red flag over the potential health damage of their own products.
“You could end up in the world of, like, the cigarette companies, or the opioid companies, where they knew there were dangers, and they didn’t talk about them, and certainly did not prevent them,” he said.
Amodei warned this year that AI could eliminate half of all entry-level white-collar jobs – office roles such as accountancy, law and banking – within five years.
“Without intervention, it’s hard to imagine that there won’t be some significant job impact there. And my worry is that it will be broad and it’ll be faster than what we’ve seen with previous technology,” Amodei said.
Amodei said he used the phrase “the compressed 21st century” to describe how AI could deliver scientific breakthroughs far more quickly than in previous decades.
“Could we get 10 times the rate of progress and therefore compress all the medical progress that was going to happen throughout the entire 21st century into five or 10 years?” he asked.
Amodei is a prominent voice for AI safety, and Anthropic has flagged various concerns about its AI models recently, including an apparent awareness that they are being tested and attempts to commit blackmail.
It said last week that a group sponsored by the Chinese state had used its tool Claude Code to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.
The company said that one of the most concerning aspects of the attack was that Claude had operated largely independently throughout the incident. Between 80% and 90% of the operations involved were performed without a human in the loop.
Speaking to CBS, Amodei said: “One of the things that’s been powerful in a positive way about the models is their ability to kind of act on their own. But the more autonomy we give these systems, you know, the more we can worry are they doing exactly the things that we want them to do?”
Logan Graham, the head of Anthropic’s team for stress testing AI models, told CBS that the flipside of a model’s ability to find health breakthroughs could be helping to build a biological weapon.
“If the model can help make a biological weapon, for example, that’s usually the same capabilities that the model could use to help make vaccines and accelerate therapeutics,” he said.
Referring to autonomous models, which are viewed as a key part of the investment case for AI, Graham said users want an AI tool to help their business – not wreck it.
“You want a model to go build your business and make you a billion,” he said. “But you don’t want to wake up one day and find that it’s also locked you out of the company, for example. And so our sort of basic approach to it is, we should just start measuring these autonomous capabilities and to run as many weird experiments as possible and see what happens.”
