WASHINGTON — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.
The AI company Anthropic said this week that it disrupted a cyber operation its researchers linked to the Chinese government. The operation used an artificial intelligence system to direct the hacking campaign, a development the researchers called disturbing, and one that could greatly expand the reach of AI-equipped hackers.
While concerns about the use of AI to drive cyber operations are not new, what alarmed the researchers about this operation was the degree to which the AI was able to automate some of the work, they said.
“While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” they wrote in their report.
The operation was modest in scope, targeting only about 30 individuals who worked at tech companies, financial institutions, chemical companies and government agencies. Anthropic detected the operation in September and took steps to shut it down and notify the affected parties.
The hackers “succeeded in a small number of cases,” according to Anthropic, which noted that while AI systems are increasingly used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies pitching AI “agents” that go beyond a chatbot’s capabilities, accessing computer tools and taking actions on a person’s behalf.
“Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the researchers concluded. “These attacks are likely to only grow in their effectiveness.”
A spokesperson for China’s embassy in Washington did not immediately return a message seeking comment on the report.
Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive.
America’s adversaries, as well as criminal gangs and hacking companies, have exploited AI’s potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.
