The problem is the fast and sometimes poorly thoughtful pace of new AI product implementations together, either by managers who would like to please investors or employees on their own initiative, not even contrary to their IT departments.
“It is a bit unfair that we have pushed AI into every product when it introduces new risks,” said Alex Delamotte, a threat researcher at security company Sentinelone.
Security often lags behind the acceptance of new technology, such as Cloud Computing, which also became popular based on the benefits it offered. But because generative artificial intelligence can do much more than even that breakthrough technology, its forces can cause more damage when they are abused.
In many cases, the new techniques are amazingly powerful. On a recent assignment to test the defense, Dave Brauchler of the CyberSecurity Company NCC Group has misled the AI program of a customer of a customer to perform programs that have fored the databases and the Code Repositories of the company.
“We have never been so foolish with safety,” said Brauchler.
Although some broader surveys show mixed results for AI effectiveness, most software developers have embraced tools, including those of large AI companies, which write chunks of code, although some studies suggest that those tools are more likely than human programmers to introduce security failure.
The more autonomy and access to production environments that such tools have, the more damage.
An attack in August brought established hacking techniques together with that kind of AI manipulation for what is perhaps the first time.
Unknown hackers started with a well-known form of supply chain attack. They found a way to publish official programs that change NX, a widely used platform for managing code repositories. Hundreds of thousands of NX users have unconsciously downloaded the poisoned programs.
As with earlier attacks by Software-Supply-Chain, the hackers sent the malignant code to find account passwords, cryptocurrency portfolios and other sensitive data from those who have downloaded the changed programs. But in a turn they assumed that many of those people would have installed coding tools from Google, Anthropic or others, and those tools may have a lot of access. So the hacker instructed those programs to eradicate the data. More than 1,000 user machines have returned information.
“What makes this attack special is that it is the first time I know that the attacker was trying to hijack the AI in the vicinity of the victim,” said Henrik Plate, a researcher at software security company Endor Labs.
“The great risk for companies in particular is that code that runs on the machine of a developer can be more over -reaching than other machines. It can have access to other business systems,” said Plate. “The attacker could have used the attack to do other things, such as changing the source code.”

Demonstrations on the Black Hat Security Conference of last month in Las Vegas include other points of attention to exploit artificial intelligence.
In one, an imagined attacker sent documents per e -mail with hidden instructions aimed at chatgpt or competitors. If a user was asked for a summary if one was made automatically, the program would carry out the instructions, even find digital passwords and send them from the network.
A similar attack on Google’s Gemini did not even need an attachment, only an e -mail with hidden guidelines. The summary of the AI told the target wrongly that an account had been affected and that they had to mention the number of the attacker, whereby successful phishing would be simulated.
The threats become more worrying with the rise of Agentic AI, which enables browsers and other tools to conduct transactions and to make other decisions without human supervision.
Security company Guardio has already failed the agent comet browser addition of perplexity to buy a watch from a fake online store and to follow instructions from a fake -bank -e -Mail.
Artificial intelligence is also used directly by attackers. Anthropic said last month that it had found a full ransomware campaign that was managed by someone who uses AI to do everything – find vulnerable systems at a company, fall on, evaluate data stolen and even suggests a reasonable ransom to demand. Thanks to the progress when interpreting natural language, the criminal did not even have to be a very good coder.
Advanced AI programs are also starting to find previously undiscovered security errors, the so-called zero days that Hackers very price and exploit to gain access to software that is correctly configured and fully updated with security patches.
Seven teams from Hackers developing autonomous “cyber reasoning systems” for a competition held last month by the Pentagon’s Defense Advanced Research Projects Agency, could find a total of 18 Zuldagen in 54 million Rules of Open Source Code. They worked to patch those vulnerabilities, but civil servants said that hackers develop similar efforts all over the world to find and exploit them.
Some old security defenders predict a one -off, worldwide Mad Dash to use the technology to find and exploit new errors, leaving doors where they can return on leisure time.
The real Nightmare scenario is when these worlds collide, and the AI of an attacker finds a way to communicate and then starts to communicate with the AI of the victim, working in partnership – “The bad guy AI collaboration with the good guy ai”, as Sentinelone’s Delamotte said it.
“Next year,” said Adam Meyers, senior vice president at Crowdstrike, “Ai will be the new insider threat.”