Cybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant’s memory and run arbitrary code.
“This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News.
The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT’s persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user’s account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes.
Memory, first introduced by OpenAI in February 2024, is designed to allow the AI chatbot to remember useful details between chats, thereby allowing its responses to be more personalized and relevant. This could be anything ranging from a user’s name and favorite color to their interests and dietary preferences.

The attack poses a significant security risk in that by tainting memories, it allows the malicious instructions to persist unless users explicitly navigate to the settings and delete them. In doing so, it turns a helpful feature into a potent weapon that can be used to run attacker-supplied code.
“What makes this exploit uniquely dangerous is that it targets the AI’s persistent memory, not just the browser session,” Michelle Levy, head of security research at LayerX Security, said. “By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers.”
“In our tests, once ChatGPT’s memory was tainted, subsequent ‘normal’ prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards.”

The attack plays out as follows –
- User logs in to ChatGPT
- The user is tricked into launching a malicious link by social engineering
- The malicious web page triggers a CSRF request, leveraging the fact that the user is already authenticated, to inject hidden instructions into ChatGPT’s memory without their knowledge
- When the user queries ChatGPT for a legitimate purpose, the tainted memories will be invoked, leading to code execution
Additional technical details to pull off the attack have been withheld. LayerX said the problem is exacerbated by ChatGPT Atlas’ lack of robust anti-phishing controls, the browser security company said, adding it leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge.
In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexit’s Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.
This opens the door to a wide spectrum of attack scenarios, including one where a developer’s request to ChatGPT to write code can cause the AI agent to slip in hidden instructions as part of the vibe coding effort.

The development comes as NeuralTrust demonstrated a prompt injection attack affecting ChatGPT Atlas, where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. It also follows a report that AI agents have become the most common data exfiltration vector in enterprise environments.
“AI browsers are integrating app, identity, and intelligence into a single AI threat surface,” Eshed said. “Vulnerabilities like ‘Tainted Memories’ are the new supply chain: they travel with the user, contaminate future work, and blur the line between helpful AI automation and covert control.”
“As the browser becomes the common interface for AI, and as new agentic browsers bring AI directly into the browsing experience, enterprises need to treat browsers as critical infrastructure, because that is the next frontier of AI productivity and work.”
