By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Don’t ignore the security risks of agentic AI – News
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > News > Don’t ignore the security risks of agentic AI – News
News

Don’t ignore the security risks of agentic AI – News

News Room
Last updated: 2025/11/15 at 4:30 PM
News Room Published 15 November 2025
Share
Don’t ignore the security risks of agentic AI –  News
SHARE

In the race to deploy agentic artificial intelligence systems across workflows, an uncomfortable truth is being ignored: Autonomy invites unpredictability, and unpredictability is a security risk.  If we don’t rethink our approach to safeguarding these systems now, we may find ourselves chasing threats we barely understand at a scale we can’t contain.

Agentic AI systems are designed with autonomy at their core. They can reason, plan, take action across digital environments and even coordinate with other agents. Think of them as digital interns with initiative, capable of setting and executing tasks with minimal oversight.

But the very thing that makes agentic AI powerful — its ability to make independent decisions in real-time — is also what makes it an unpredictable threat vector. In the rush to commercialize and deploy these systems, insufficient attention has been given to the potential security liabilities they introduce.

Whereas large language model-based chatbots are mostly reactive, agentic systems operate proactively. They might autonomously browse the web, download data, manipulate application programming interfaces, execute scripts or even interact with real-world systems like trading platforms or internal dashboards. That sounds exciting until you realize how few guardrails may be in place to monitor or constrain these actions once set in motion.

‘Can’ vs. ‘should’

Security researchers are increasingly raising alarms about the attack surface these systems introduce. One glaring concern is the blurred line between what an agent can do and what it should do. As agents gain permissions to automate tasks across multiple applications, they also inherit access tokens, API keys and other sensitive credentials. A prompt injection, hijacked plugin, exploited integration or engineered supply chain attack could give attackers a backdoor into critical systems.

We’ve already seen examples of large language model agents falling victim to adversarial inputs. In one case, researchers demonstrated that embedding a malicious command in a webpage could trick an agentic browser bot into exfiltrating data or downloading malware — without any malicious code on the attacker’s end. The bot simply followed instructions buried in natural language. No exploits. No binaries. Just linguistic sleight of hand.

And it doesn’t stop there. When agents are granted access to email clients, file systems, databases or DevOps tools, a single compromised action can trigger cascading failures. From initiating unauthorized Git pushes to granting unintended permissions, agentic AI has the potential to replicate risks at machine speed and scale.

The problem is exacerbated by the industry’s obsession with capability benchmarks over safety thresholds. Much of the focus has been on how many tasks agents can complete, how well they self-reflect or how efficiently they chain tools. Relatively little attention has been given to sandboxing, logging or even real-time override mechanisms. In the push for autonomous agents that can take on end-to-end workflows, security is playing catch-up.

The need to catch up — fast

Mitigation strategies must evolve beyond traditional endpoint or application security. Agentic AI exists in a gray area between the user and the system. 

Role-based access control alone won’t cut it. We need policy engines that understand intent, monitor behavioral drift and can detect when an agent begins to act out of character. We need developers to implement fine-grained scopes for what agents can do, limiting not just which tools they use, but how, when and under what conditions.

Auditability is also critical. Many of today’s AI agents operate in ephemeral runtime environments with little to no traceability. If an agent makes a flawed decision, there’s often no clear log of its thought process, actions or triggers. That lack of forensic clarity is a nightmare for security teams. In at least some cases, models resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors

Finally, we need robust testing frameworks that simulate adversarial inputs in agentic workflows. Penetration-testing a chatbot is one thing; evaluating an autonomous agent that can trigger real-world actions is a completely different challenge. It requires scenario-based simulations, sandboxed deployments and real-time anomaly detection.

Halting first steps

Some industry leaders are beginning to respond. OpenAI LLC has hinted at dedicated safety protocols for its newest publicly available agent. Anthropic PBC emphasizes constitutional AI as a safeguard, and others are building observability layers around agent behavior. But these are early steps, and they remain uneven across the ecosystem.

Until security is baked into the development lifecycle of agentic AI, rather than being patched on afterward, we risk repeating the same mistakes we made during the early days of cloud computing: excessive trust in automation before building resilient guardrails.

We are no longer speculating about what agents might do. They are already executing trading strategies, scheduling infrastructure updates, scanning logs, crafting emails and interacting with customers. The question isn’t whether they’ll be abused — but when.

Any system that can act must be treated as both an asset and a liability. Agentic AI could become one of the most transformative technologies of the decade. However, without robust security frameworks, it could also become one of the most vulnerable targets.

The smarter these systems get, the harder they’ll be to control in retrospect. Which is why the time to act isn’t tomorrow. It’s now.

Isla Sibanda is an ethical hacker and cybersecurity specialist based in Pretoria, South Africa. She has been a cybersecurity analyst and penetration testing specialist for more than 12 years. She wrote this article for News.

Image: News/Google Whisk

Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.

  • 15M+ viewers of theCUBE videos, powering conversations across AI, cloud, cybersecurity and more
  • 11.4k+ theCUBE alumni — Connect with more than 11,400 tech and business leaders shaping the future through a unique trusted-based network.

About News Media

News Media is a recognized leader in digital media innovation, uniting breakthrough technology, strategic insights and real-time audience engagement. As the parent company of News, theCUBE Network, theCUBE Research, CUBE365, theCUBE AI and theCUBE SuperStudios — with flagship locations in Silicon Valley and the New York Stock Exchange — News Media operates at the intersection of media, technology and AI.

Founded by tech visionaries John Furrier and Dave Vellante, News Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article This 75” TOSHIBA QLED 4K Smart TV is half off this weekend This 75” TOSHIBA QLED 4K Smart TV is half off this weekend
Next Article Fort Wayne panel discusses the role of artificial intelligence in education | Schools Fort Wayne panel discusses the role of artificial intelligence in education | Schools
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

AI revolution: Specialized agents crucial to enterprise adoption –  News
AI revolution: Specialized agents crucial to enterprise adoption – News
News
Best Black Friday VPN deal: Score £15 off Norton VPN
Best Black Friday VPN deal: Score £15 off Norton VPN
News
How Many Starlink Satellites Have Fallen Out Of The Sky? – BGR
How Many Starlink Satellites Have Fallen Out Of The Sky? – BGR
News
Spotify’s new Recaps audiobook tool is here to save you from your own memory failure
Spotify’s new Recaps audiobook tool is here to save you from your own memory failure
News

You Might also Like

AI revolution: Specialized agents crucial to enterprise adoption –  News
News

AI revolution: Specialized agents crucial to enterprise adoption – News

6 Min Read
Best Black Friday VPN deal: Score £15 off Norton VPN
News

Best Black Friday VPN deal: Score £15 off Norton VPN

2 Min Read
How Many Starlink Satellites Have Fallen Out Of The Sky? – BGR
News

How Many Starlink Satellites Have Fallen Out Of The Sky? – BGR

5 Min Read
Spotify’s new Recaps audiobook tool is here to save you from your own memory failure
News

Spotify’s new Recaps audiobook tool is here to save you from your own memory failure

3 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?