By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: OpenAI says AI browsers may always be vulnerable to prompt injection attacks | News
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > News > OpenAI says AI browsers may always be vulnerable to prompt injection attacks | News
News

OpenAI says AI browsers may always be vulnerable to prompt injection attacks | News

News Room
Last updated: 2025/12/22 at 5:55 PM
News Room Published 22 December 2025
Share
OpenAI says AI browsers may always be vulnerable to prompt injection attacks |  News
SHARE

Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injections, a type of attack that manipulates AI agents to follow malicious instructions often hidden in web pages or emails, is a risk that’s not going away anytime soon — raising questions about how safely AI agents can operate on the open web. 

“Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved,’” OpenAI wrote in a Monday blog post detailing how the firm is beefing up Atlas’ armor to combat the unceasing attacks. The company conceded that “agent mode” in ChatGPT Atlas “expands the security threat surface.”

OpenAI launched its ChatGPT Atlas browser in October, and security researchers rushed to publish their demos, showing it was possible to write a few words in Google Docs that were capable of changing the underlying browser’s behavior. That same day, Brave published a blog post explaining that indirect prompt injection is a systematic challenge for AI-powered browsers, including Perplexity’s Comet. 

OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches. The U.K. government agency advised cyber professionals to reduce the risk and impact of prompt injections, rather than think the attacks can be “stopped.” 

For OpenAI’s part, the company said: “We view prompt injection as a long-term AI security challenge, and we’ll need to continuously strengthen our defenses against it.”

The company’s answer to this Sisyphean task? A proactive, rapid-response cycle that the firm says is showing early promise in helping discover novel attack strategies internally before they are exploited “in the wild.” 

That’s not entirely different from what rivals like Anthropic and Google have been saying: that to fight against the persistent risk of prompt-based attacks, defenses must be layered and continuously stress-tested. Google’s recent work, for example, focuses on architectural and policy-level controls for agentic systems.

But where OpenAI is taking a different tact is with its “LLM-based automated attacker.” This attacker is basically a bot that OpenAI trained, using reinforcement learning, to play the role of a hacker that looks for ways to sneak malicious instructions to an AI agent.

The bot can test the attack in simulation before using it for real, and the simulator shows how the target AI would think and what actions it would take if it saw the attack. The bot can then study that response, tweak the attack, and try again and again. That insight into the target AI’s internal reasoning is something outsiders don’t have access to, so, in theory, OpenAI’s bot should be able to find flaws faster than a real-world attacker would. 

It’s a common tactic in AI safety testing: build an agent to find the edge cases and test against them rapidly in simulation. 

“Our [reinforcement learning]-trained attacker can steer an agent into executing sophisticated, long-horizon harmful workflows that unfold over tens (or even hundreds) of steps,” wrote OpenAI. “We also observed novel attack strategies that did not appear in our human red teaming campaign or external reports.”

Image Credits:OpenAI

In a demo (pictured in part above), OpenAI showed how its automated attacker slipped a malicious email into a user’s inbox. When the AI agent later scanned the inbox, it followed the hidden instructions in the email and sent a resignation message instead of drafting an out-of-office reply. But following the security update, “agent mode” was able to successfully detect the prompt injection attempt and flag it to the user, according to the company. 

The company says that while prompt injection is hard to secure against in a foolproof way, it’s leaning on large-scale testing and faster patch cycles to harden its systems before they show up in real-world attacks. 

An OpenAI spokesperson declined to share whether the update to Atlas’s security has resulted in a measurable reduction in successful injections, but says the firm has been working with third parties to harden Atlas against prompt injection since before launch.

Rami McCarthy, principal security researcher at cybersecurity firm Wiz, says that reinforcement learning is one way to continuously adapt to attacker behavior, but it’s only part of the picture. 

“A useful way to reason about risk in AI systems is autonomy multiplied by access,” McCarthy told News.

“Agentic browsers tend to sit in a challenging part of that space: moderate autonomy combined with very high access,” said McCarthy. “Many current recommendations reflect that tradeoff. Limiting logged-in access primarily reduces exposure, while requiring review of confirmation requests constrains autonomy.”

Those are two of OpenAI’s recommendations for users to reduce their own risk, and a spokesperson said Atlas is also trained to get user confirmation before sending messages or making payments. OpenAI also suggests that users give agents specific instructions, rather than providing them access to your inbox and telling them to “take whatever action is needed.” 

“Wide latitude makes it easier for hidden or malicious content to influence the agent, even when safeguards are in place,” per OpenAI.

While OpenAI says protecting Atlas users against prompt injections is a top priority, McCarthy invites some skepticism as to the return on investment for risk-prone browsers. 

“For most everyday use cases, agentic browsers don’t yet deliver enough value to justify their current risk profile,” McCarthy told News. “The risk is high given their access to sensitive data like email and payment information, even though that access is also what makes them powerful. That balance will evolve, but today the tradeoffs are still very real.”

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article The Path to Globalization of China’s Automotive Industry [2024] · TechNode The Path to Globalization of China’s Automotive Industry [2024] · TechNode
Next Article Clair Obscur: Expedition 33 Stripped of a Game of the Year Award Because of AI Clair Obscur: Expedition 33 Stripped of a Game of the Year Award Because of AI
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Best headphones deal: Save 13% on the Sony XM6
Best headphones deal: Save 13% on the Sony XM6
News
ByteDance moves forward with AI smartphone partnerships, including Vivo, Lenovo, and Transsion · TechNode
ByteDance moves forward with AI smartphone partnerships, including Vivo, Lenovo, and Transsion · TechNode
Computing
Avatar: Fire And Ash Reshot Its Huge Finale At The Last Minute – Here’s Why – BGR
Avatar: Fire And Ash Reshot Its Huge Finale At The Last Minute – Here’s Why – BGR
News
How to Create a Social Media Manager Portfolio (Free Template)
How to Create a Social Media Manager Portfolio (Free Template)
Computing

You Might also Like

Best headphones deal: Save 13% on the Sony XM6
News

Best headphones deal: Save 13% on the Sony XM6

3 Min Read
Avatar: Fire And Ash Reshot Its Huge Finale At The Last Minute – Here’s Why – BGR
News

Avatar: Fire And Ash Reshot Its Huge Finale At The Last Minute – Here’s Why – BGR

4 Min Read
Score a Kindle reader deal: Most are on sale!
News

Score a Kindle reader deal: Most are on sale!

6 Min Read
New study reveals Chrome is bad for your privacy but it’s not the worst browser
News

New study reveals Chrome is bad for your privacy but it’s not the worst browser

6 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?