The rise of agentic AI, while viewed by many as a positive new tool to automate enterprise business functions, is also causing heartburn in the cybersecurity community. The unease stems from the door agentic artificial intelligence opens for threat actors, who could use it to build more powerful, scalable malware.
This scenario was documented in a recent blog post by Roger Grimes (pictured), data-driven defense evangelist at KnowBe4 Inc., titled “Autonomous Agentic AI-Enabled Deepfake Social Engineering Malware is Coming Your Way!” Grimes makes the case that the security industry is already seeing expanded use of AI in attack vectors, and it will be only a matter of time before agentic AI comes into play.
“We’re absolutely seeing AI come in a big way, with 70% to 90% of social engineering attacks now seeming to have some indicator that AI has been involved,” Grimes said. “This year, we’re seeing just an explosion of agentic AI. There’s a really good chance, I would say within a year or two, that the whole malware hacking model is going to be really heavily agentic AI.”
Grimes spoke with theCUBE’s Dave Vellante and Jon Oltsik at the RSAC 2025 Conference, during an exclusive broadcast on theCUBE, News Media’s livestreaming studio. They discussed the impact of AI on the cybersecurity community. (* Disclosure below.)
Deepfakes will be powered by agentic AI
The security researcher makes the case that AI’s ability to create believable, realistic deepfakes will allow attackers to successfully target entire industries, employing knowledge and keywords that will avoid suspicion. Much of this will be fueled by vulnerabilities already seen in the large language models themselves.
Roger Grimes, data evangelist at KnowBe4, talks with theCUBE about the impact of AI on the cybersecurity community.
“These AI bots are going to be able to look at the industry, use industry vernacular and terminology with people,” Grimes explained. “This AI bot is going to know how to perfectly respond. We’re already seeing indications that they’re testing it. Every good LLM seems to be jailbroken like every single day. We’re going to lose that war.”
There is a flip side to this sobering scenario, and it involves the use of agentic AI to defend against attacks. Security practitioners can leverage autonomous technology to improve patching capabilities and automate human follow-up.
“Agentic AI is the idea that you send out this autonomous bot [and] it analyzes this device,” Grimes said. “It already has the parameters to know what should be patched, what hours it should be done, maybe that it has to do a backup first to do testing. At KnowBe4, we’ve been doing AI going on seven years, and we’ve got a ton of AI and agentic AI.”
For most cybersecurity professionals, a true measure of success or failure will be the data. This means finding the right set of metrics that accurately paints a picture of the security landscape.
“As a data-driven evangelist, I think we need to use the data,” Grimes said. “We need to make sure we’re measuring the right thing … your decrease in cybersecurity risk in a measurable way, meaning that you’re actually getting less breaches [and are] less likely to be breached. ‘Am I more or less at risk this year than last year or the previous time period?’ That’s the ultimate measurement.”
Here’s the complete video interview, part of News’s and theCUBE’s coverage of the RSAC 2025 Conference event:
(* Disclosure: KnowBe4 Inc. sponsored this segment of theCUBE. Neither KnowBe4 nor other sponsors have editorial control over content on theCUBE or News.)
Photo: News