When it comes to dealing with artificial intelligence, the cybersecurity industry has officially moved into overdrive.
Vulnerabilities in coding tools, malicious injections into models used by some of the largest companies in the world and agents that move across critical infrastructure without security protection have created a whole new threat landscape seemingly overnight.
“Who’s feeling like they really understand what’s going on?” asked Jeff Moss, president of DEF CON Communications Inc. and founder of the Black Hat conference, during his opening keynote remarks on Wednesday. “Nobody. It’s because we have a lot of change occurring at the same time. We don’t fully know what AI will disrupt yet.”
Bad code vs. secure code
While the full scope of the change has yet to become apparent, this year’s Black Hat USA gathering in Las Vegas provided plenty of evidence that AI is fueling a whole new class of vulnerabilities. A starting point identified by security researchers is in the code itself, which is increasingly written by autonomous AI agents.
“These systems are mimics, they’re incredible mimics,” cognitive scientist and AI company founder Gary Marcus said during a panel discussion at the conference. “Lots of bad code is going to be written because these systems don’t understand secure code.”
One problem identified by the cybersecurity community is that shortcuts using AI coding tools are being developed without thinking through the security consequences. Researchers from Nvidia Corp. presented findings that an auto-run mode on the AI-powered code editor Cursor allowed agents to run command files on a user’s machine without explicit permission. When Nvidia presented this potential vulnerability to Anysphere Inc.’s Cursor in May, the vibe coding company responded by offering users an ability to disable the auto-run feature, according to Becca Lynch, offensive security researcher at Nvidia, who spoke at the conference on Wednesday.
Vulnerabilities in the interfaces that support AI, such as coding tools, represent a growing area of concern in the security world. Part of this issue can be found in the sheer number of application programming interface endpoints that are being generated to run AI. Companies with generative AI have at least five times more API endpoints, according to Chuck Herrin, field chief information security officer at F5 Inc.
Black Hat USA drew more than 22,000 security professionals to Las Vegas this week.
“We’re blowing up that attack surface because a world of AI is a world of APIs,” said Herrin, who spoke at Black Hat’s AI Summit on Tuesday. “There’s no securing AI without securing the interfaces that support it.”
Securing those interfaces may be more difficult than originally imagined. Running AI involves a reliance on vector databases, training frameworks and inference servers, such as those provided by Nvidia. The Nvidia Container Toolkit enables use of the chipmaker’s GPUs within Docker containers, including those hosting inference servers.
Security researchers from Wiz Inc. presented recent findings of a Nvidia Container Toolkit vulnerability that posed a major threat to managed AI cloud services. Wiz found that the vulnerability allowed attackers to potentially access or manipulate customer data and proprietary models within 37% of cloud environments. Nvidia issued an advisory in July and provided a fix in its latest update.
“Any provider of cloud services was vulnerable to our attack,” said Hillai Ben Sasson, senior security researcher at Wiz. “AI security is first and foremost infrastructure security.”
Lack of protection for AI models
The expanding use of AI is being driven by adoption of large language models, an area of particular interest to the security community. The sheer volume of model downloads has attracted attention, with Meta Platforms Inc. reporting that its open AI model family, Llama, reached 1 billion downloads in March.
Yet despite the popularity of LLMs, security controls for them have not kept pace. “The $300 billion we spend on information security does not protect AI models,” Malcolm Harkins, chief security and trust officer at HiddenLayer Inc., said in an interview with News. “The models are exploitable because there is no mitigation against vulnerability.”
This threat of exploitation has cast a spotlight on popular repositories where models are stored and downloaded. At last year’s Black Hat gathering, researchers presented evidence they had breached three of the largest AI model repositories.
This has become an issue of greater concern as enterprises continue to implement AI agents, which rely on LLMs to perform key tasks. “The LLM that drives and controls your agents can potentially be controlled by attackers,” Nvidia’s Lynch said this week. “LLMs are uniquely vulnerable to adversarial manipulation.”
Though major repositories have responded to breach vulnerabilities identified and shared by security researchers, there has been little evidence that the model repository platforms are interested in vetting their inventories for malicious code. It’s not because the problem is a technological challenge, according to Chris Sestito, co-founder and CEO of HiddenLayer.
“I believe you need to embrace the technology that exists,” Sestito told News. “I don’t think the lift is that big.”
Agents fail breach test
If model integrity fails to be protected, this will likely have repercussions for the future of AI agents as well. Agentic AI is booming, yet the lack of security controls around the autonomous software is also beginning to generate concern.
Last month, cybersecurity company Coalfire Inc. released a report which documented its success in hacking agentic AI applications. Using adversarial prompts and working with partner standards such as those from the National Institute of Standards and Technology or NIST, the company was able to demonstrate new risks in compromise and data leakage.

Apostal Vassilev of NIST, Jess Burn of Forrester, and Nathan Hamiel of Kudelski Security spoke at the Black Hat AI Summit.
“There was a success rate of 100%,” Apostol Vassilev, research team supervisor at NIST, said during the AI Summit. “Agents are touching the same cyber infrastructure that we’ve been trying to protect for decades. Make sure you are exposing this technology only to assets and data you are willing to live without.”
Despite the concerns around agentic AI vulnerability, the security industry is also looking to adopt agents to bolster protection. An example of this can be found at Simbian Inc. which provides fully autonomous AI security operations center agents using toolchains and memory graphs to ingest signals, synthesize insight and make decisions in real time for threat containment.
Implementing agents for security has been a challenging problem, as Simbian co-founder and CEO Ambuj Kumar readily admitted. He told News that his motivation was a need to protect critical infrastructure and keep essential services such as medical care safe.
“The agents we are building are inside your organization,” Kumar said. “They know where the gold coins are and they secure them.”
Solving the identity problem
Another approach being taken within the cybersecurity industry to safeguard agents is to bake attestation into the autonomous software through certificate chains at the silicon level. Anjuna Security Inc. is pursuing this solution through an approach known as “confidential computing.” The concept is to process data through a Trusted Execution Environment, a secure area within the processor where code can be executed safely.
This is the path forward for agentic AI, according to Ayal Yogev, co-founder and CEO of Ajuna. His company now has three of the world’s top 10 banks in its customer set, joining five next-generation payments firms and the U.S. Navy as clients.
“It becomes an identity problem,” said Yogev, who spoke with News in advance of the Black Hat gathering. “If an agent is doing something for me, I need to make sure they don’t have permissions beyond what the user has. Confidential computing is the future of computing.”
For the near term, the future of computing is heavily dependent on the AI juggernaut, and this dynamic is forcing the cybersecurity community to speed up the research process to identify vulnerabilities and pressure platform owners to fix them. During much of the Black Hat conference this week, numerous security practitioners noted that even though the technology may be spinning off new solutions almost daily, the security problems have been seen before.
This will involve a measure of discipline and control, a message that notable industry figures such as Chris Inglis, the country’s first National Cyber Director and former deputy director of the National Security Agency, has been reinforcing for at least the past two years. In a conversation with News, the former U.S. Air Force officer and command pilot noted that today’s cars are nothing more than controllable computers on wheels.
“I do have the ability to tell that car what to do,” Inglis said. “We need to fly this airplane.”
Can the cybersecurity industry regain a measure of control as AI hurtles through the skies? As seen in the sessions and side conversations at Black Hat this week, the security community is trying hard, but there remains a nagging concern that AI itself may prove to be ultimately ungovernable.
During the AI Summit on Tuesday, F5’s Herrin was asked what the one thing was that should never be done in AI security. “Trust it,” Herrin replied.
Photos: Mark Albertson/ News
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
- 15M+ viewers of theCUBE videos, powering conversations across AI, cloud, cybersecurity and more
- 11.4k+ theCUBE alumni — Connect with more than 11,400 tech and business leaders shaping the future through a unique trusted-based network.
About News Media
Founded by tech visionaries John Furrier and Dave Vellante, News Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.