As the global artificial intelligence engine keeps accelerating, so do concerns about threats to the very infrastructure powering it.
The rise of AI agents has opened new questions about the level of security needed to control the access they have and the actions they take. Questions are also being raised about securing inter-agent communication protocols and about technology that could enable more rapid AI advances among nation-states such as China.
The cybersecurity community expressed concern about rising AI risks not long after OpenAI Group PBC’s ChatGPT burst on the scene near the end of 2022. More than three years later, amid widespread AI adoption, the prevalence of AI agents is leading some cybersecurity researchers to wonder if the danger zone has grown even wider.
“The types of behaviors that we’ve started seeing in agentic AI are really changing our landscape,” Dr. Margaret Cunningham, vice president of security and AI strategy for Darktrace Inc., said during a two-day virtual briefing hosted this week by the nonprofit Cloud Security Alliance. “As we are going through this adoption, it is rapidly expanding our attack surface.”
MCP servers under attack
That attack surface includes some of the most widely used Model Context Protocol or MCP servers on the web today. They provide large language models with the ability to connect to external data sources, other models and software applications.
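To make that concrete, below is a minimal sketch of what an MCP server tool can look like, written with the decorator-style FastMCP helper from the official Python SDK. The server name, document directory and path allowlist are illustrative assumptions rather than anything the protocol requires; that absence of built-in guardrails is exactly the gap researchers keep flagging.

```python
# Minimal MCP server sketch using the official Python SDK ("pip install mcp").
# The directory and allowlist check are hypothetical hardening choices;
# MCP itself enforces no filesystem boundary.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

ALLOWED_ROOT = Path("/srv/project-docs").resolve()  # assumed document root

mcp = FastMCP("docs-server")

@mcp.tool()
def read_doc(relative_path: str) -> str:
    """Return the contents of a document under the allowed root."""
    target = (ALLOWED_ROOT / relative_path).resolve()
    # Reject path traversal; the protocol leaves this to the server author.
    if not target.is_relative_to(ALLOWED_ROOT):
        raise ValueError("path escapes the allowed document root")
    return target.read_text()

if __name__ == "__main__":
    mcp.run()  # serves the tool over the default stdio transport
```

A client such as Claude Desktop can then invoke read_doc as a tool; without the allowlist check, the same few lines of file access would hand a connected agent the run of the filesystem.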
Security researchers have noted that when Anthropic PBC introduced the MCP open standard in November 2024, it put the onus on users to secure it properly. In the months since, security professionals from Red Hat Inc. and IANS Research have documented security concerns with the protocol. Anthropic itself released additional MCP guidance in November that covered security techniques for code execution when AI agents use MCP.
“I have not found true native full-stack security in MCP,” Aaron Turner, a faculty member at IANS, said in a presentation during the CSA event. “We’ve got to be ready for some really bad things to happen.”
The challenges with MCP security extend to continuous integration pipelines, cloud workloads and employee endpoints. In a recently published analysis of MCP server deployments across enterprise environments, researchers at Clutch Security Inc. found that 95% of those deployments were running on employee endpoints where security tools had no visibility.
“It is my opinion that you should treat MCPs as malware if they try to run on endpoints,” Turner said.
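Regaining even basic visibility can start small. The rough sketch below, an illustration rather than a vetted detection rule, inventories the MCP servers configured for Claude Desktop, one common MCP client, by reading the claude_desktop_config.json format Anthropic documents; other clients keep their own configuration files, so real coverage would need a broader sweep.

```python
# Sketch: list MCP servers configured for Claude Desktop on an endpoint.
# Paths follow Anthropic's documented config locations; the Windows entry
# assumes the default APPDATA location.
import json
from pathlib import Path

CANDIDATE_CONFIGS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",  # macOS
    Path.home() / "AppData/Roaming/Claude/claude_desktop_config.json",  # Windows default
]

def list_configured_mcp_servers() -> None:
    for config_path in CANDIDATE_CONFIGS:
        if not config_path.exists():
            continue
        config = json.loads(config_path.read_text())
        # "mcpServers" maps a server name to the command that launches it.
        for name, spec in config.get("mcpServers", {}).items():
            command = " ".join([spec.get("command", "")] + spec.get("args", []))
            print(f"{config_path}: '{name}' launches: {command}")

if __name__ == "__main__":
    list_configured_mcp_servers()
```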
Dropping below the security poverty line
The challenges associated with AI deployment have brought renewed focus on the ability of smaller businesses to protect their critical assets. Accenture plc has reported that while 43% of cyberattacks target small businesses, only 14% of those firms are prepared to defend themselves.
Rich Mogull of CSA and Wendy Nather of 1Password spoke about the widening security gap during the CSA event.
This gap is often described in terms of the “security poverty line,” a term attributed to Wendy Nather, senior research initiatives director at 1Password LLC. There is a growing belief within the cybersecurity community that AI could widen the divide between resource-rich firms and those that cannot afford the staff or tools to defend themselves.
“If you are a retail shop with a 1% profit margin, you are going to have trouble spending the money on security that you need,” Nather said during an appearance for the CSA event. “Just training alone isn’t going to do it.”
The flip side of this dynamic is that malicious actors with fewer resources are in a better position to leverage AI today. Signs are beginning to appear that they are targeting large language model infrastructure in volume.
Honeypots set up by the cybersecurity firm GreyNoise Intelligence Inc. recorded more than 91,000 attack sessions on LLM infrastructure over three months beginning in October, with nearly 81,000 taking place during an 11-day period. The attacks were designed to probe LLM endpoints such as OpenAI-compatible APIs and Google Gemini formats.
“I’m seeing lower-resource attackers able to scale up,” said Rich Mogull, chief analyst at the Cloud Security Alliance, who appeared in the same session with Nather. “They can automate a lot of processes. Everybody from script kiddies to nation states are now using AI to develop exploits. This legitimately scares me.”
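The probes GreyNoise observed hit well-known API routes, which also makes it straightforward for defenders to check their own exposure. The sketch below, pointed at a placeholder localhost URL, simply asks whether a host answers an unauthenticated request on the standard OpenAI-compatible GET /v1/models route; an endpoint that responds without credentials is visible to the same scanners.

```python
# Sketch: check whether an LLM endpoint you own answers unauthenticated
# requests on the OpenAI-compatible /v1/models route. The target URL is
# a placeholder; point this only at your own infrastructure.
import json
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

TARGET = "http://localhost:8000/v1/models"  # hypothetical self-hosted endpoint

def check_open_llm_endpoint(url: str) -> None:
    try:
        with urlopen(url, timeout=5) as response:
            body = json.load(response)
    except HTTPError as err:
        # A 401 or 403 here is the healthy outcome: auth is being enforced.
        print(f"{url}: endpoint answered with HTTP {err.code}")
    except (URLError, json.JSONDecodeError):
        print(f"{url}: no OpenAI-compatible endpoint reachable")
    else:
        models = [m.get("id") for m in body.get("data", [])]
        print(f"{url}: responded WITHOUT auth, serving models: {models}")

if __name__ == "__main__":
    check_open_llm_endpoint(TARGET)
```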
Advances by China and Iran
The involvement of nation-states in exploit development and the targeting of AI infrastructure is adding a new element to cybersecurity preparedness. Dr. Avi Davidi, a senior researcher at Tel Aviv University, recently published an analysis of Iran’s quest to build sovereign AI capabilities spanning cyberwarfare and future conflicts with Israel and Western nations.
Davidi highlighted the use of commercial AI tools by Iranian groups to scan industrial control systems and probe the defense systems of other countries. The Iranian hacker collective APT-42 attempted to trick AI systems into providing “red-team”-style attack guidance that could then be used by malicious actors.
Perhaps of greater concern among cybersecurity professionals is the expected strengthening of AI capability within China. That scenario was reinforced by Anthropic Chief Executive Dario Amodei, who published a recent essay naming China as the country most likely to surpass the United States in AI capabilities.
China is also on the minds of many leading voices within the U.S. defense community. During a panel discussion arranged by TED AI in San Francisco this week, one former government official voiced his concern about the balance of global AI power.
Former Department of Defense officials Maynard Holliday and Colin Kahl spoke about AI and nation states at the TED AI event in San Francisco.
According to Colin Kahl, a senior fellow at Stanford University’s Freeman Spogli Institute for International Studies and former U.S. under secretary of defense, China is gaining ground in the race to produce artificial superintelligence.
“We still have the best AI labs in the world, our models are still the best in the world,” Kahl said. “But China has almost everything they need to be a really close fast follower.”
Kahl noted that the previous administration had implemented progressively stricter export controls on China over a two-year period, aimed at limiting the country’s ability to obtain advanced semiconductors for AI. The current administration has allowed the export of Nvidia Corp.’s H200 AI processor, with more than 2 million orders for the chip expected to come from Chinese tech firms, according to Kahl.
“We did not want to flood totalitarian adversary states with the best technology that the U.S. made,” Kahl said. “It does not net out from a national security perspective to allow China to close the technology gap.”
