Cybersecurity threats are entering a new era as artificial intelligence tools such as FraudGPT and WormGPT fuel the rapid rise of cybercrime-as-a-service.
No longer reliant on brute-force tactics, attackers now use AI to automate reconnaissance, craft highly targeted phishing campaigns and execute complex, multilayered attacks with precision. In response, defenders are turning to agentic automation, intelligent orchestration and the convergence of security and network operations to keep pace with this fast-evolving threat landscape, according to Derek Manky (pictured), chief security strategist and global vice president of threat intelligence at Fortinet Inc.
Fortinet’s Derek Manky talks to theCUBE about using agentic AI in the fight against cybersecurity threats.
“It’s a business they’re building, so we see reconnaissance services now,” Manky said. “Reconnaissance services is using the GPT models, the [large language models] that cybercrime [tools] like FraudGPT [and] WormGPT — the things that guardrails have been taken off. They’re using that as a service. You can sign up, subscribe to them, use them to get information on potential victims through crafted information, through social media [and] all those things. Then put [that] into spear phishing emails, that if you want to try to penetrate a [chief financial officer] of an organization, there’s services to do that, essentially.”
Manky spoke with theCUBE’s John Furrier and Jackie McGuire at the RSAC 2025 Conference, during an exclusive broadcast on theCUBE, News Media’s livestreaming studio. They discussed how AI tools are transforming cybercrime into a scalable service model, while cybersecurity teams counter with agentic automation, security operations center (SOC) and network operations center (NOC) convergence, and advanced deception strategies such as interactive honeypots. (* Disclosure below.)
New tactics and AI defenses reshape cybersecurity threats
Cybercrime has become a commercialized, scalable enterprise, enabled by low-cost services and the weaponization of AI. Attackers no longer need to code or hack; instead, they can purchase stolen credentials, rent distributed-denial-of-service campaigns or subscribe to AI-driven reconnaissance platforms. One key takeaway from the latest threat report is a shift toward more targeted attacks and a growing emphasis on bleeding victims’ revenue through operational disruption, according to Manky.
“We are seeing things like ransomware dropping, volumes dropping in ransomware, but that’s not a good thing because they’re becoming much more targeted,” he added. “Manufacturing was the number one target that we saw for cybercrime. Why are they doing that? Because the playbooks got more aggressive. They’re not just going after ransom data, they’re going after services, because they know if they take a service offline, they’re going to bleed revenue.”
This change has been accompanied by a surge in stolen credentials. A 42% increase in dark web postings has been driven largely by info stealers such as RedLine Stealer, which accounts for more than 60% of such activity, according to Manky. Cybercriminals are increasingly buying their way into systems with pre-stolen logins instead of finding new exploits, lowering the technical barrier to entry for cybercrime.
“RedLine was over 60% of all info stealer activity we saw,” Manky said. “And those info stealers are a commodity. You can get them. Again, it’s not a big investment for an aspiring cybercriminal or, like you say, hackers. It’s a low barrier of entry. AI, of course, is acting as a catalyst to that, and it’s going to continue to scrape those credentials, to put those into those packs that are sold. And credential stuffing, it’s not going to go away.”
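The article does not describe how defenders detect this activity, but the pattern Manky outlines, pre-stolen logins replayed at scale, has a recognizable signature: a single source trying many distinct accounts with almost no successful logins. The Python sketch below is a purely illustrative heuristic for spotting that pattern; the field names and thresholds are hypothetical and are not drawn from Fortinet’s products or the threat report.

```python
# Illustrative sketch only: a simple heuristic for spotting credential-stuffing
# patterns in login logs. Field names and thresholds are hypothetical, not
# drawn from Fortinet's products or the interview.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LoginEvent:
    source_ip: str
    username: str
    success: bool

def flag_credential_stuffing(events, min_accounts=20, max_success_rate=0.05):
    """Flag source IPs that try many distinct accounts with almost no successes."""
    by_source = defaultdict(list)
    for e in events:
        by_source[e.source_ip].append(e)

    suspicious = []
    for ip, evs in by_source.items():
        accounts = {e.username for e in evs}
        successes = sum(e.success for e in evs)
        if len(accounts) >= min_accounts and successes / len(evs) <= max_success_rate:
            suspicious.append(ip)
    return suspicious
```

In practice, such a rule would be one signal among many, combined with geolocation, device fingerprinting and known stolen-credential feeds.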
AI-driven defense and deception reshape the cybersecurity playbook
To defend against this rising tide, cybersecurity teams are adopting both generative and discriminative AI, with each playing a different role. Generative models help reduce analyst fatigue by triaging alerts and synthesizing threat data, while machine learning models detect anomalies and identify zero-day threats in real time. This is leading to shorter response times and better alignment between SOC and NOC operations.
“This is where the agentic piece is really coming in,” Manky said. “It’s not just about the [security incident management], but the [system of record] is actually one of the big orchestrators, an intelligent SOR, now, that’s acting as that agent. It’s offloading a lot of those mundane tasks. With the agentic AI, now some of those guardrails are being put in place to actually autonomously do those actions, as well.”
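As a rough illustration of the discriminative, anomaly-detecting side Manky describes, the sketch below shows how an off-the-shelf model such as scikit-learn’s IsolationForest can flag an outlying network flow. The synthetic features (bytes sent, duration, distinct ports) are assumptions for illustration only and do not represent any vendor’s detection pipeline.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The feature set and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" flows: [bytes_sent, duration_s, distinct_ports]
normal_flows = rng.normal(loc=[5_000, 2.0, 3], scale=[1_000, 0.5, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow moving far more data across many ports should score as an outlier.
candidate = np.array([[250_000, 30.0, 40]])
print(model.predict(candidate))  # -1 indicates an anomaly, 1 indicates normal
```

In an agentic setup of the kind Manky describes, a flagged flow would be handed to an orchestrator that enriches it with threat intelligence and, within guardrails, takes containment actions autonomously.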
As cybersecurity threats change, deception techniques are also evolving. The modern honeypot is now an interactive, high-fidelity decoy, deployed across environments to lure, observe and trap intruders. Combined with agent-based orchestration, these systems provide both early warning and actionable threat intelligence, blurring the line between passive defense and active disruption, according to Manky.
“A traditional honeypot was just pure detection,” he said. “It’s to lure an attacker in [and] see what they’re up to so you get some intelligence based off of it and you get some lead time.” The modern honeypot, though, is interactive. “It’s actually intentionally luring and trapping an attacker in, because nobody should be probing and looking around for those, because they’re not real environments.”
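To make that principle concrete, here is a toy decoy listener in Python. It relies on the same logic Manky describes: no legitimate workload should ever touch the fake service, so any connection is a high-fidelity signal. The banner, port and logging are hypothetical, and real deception platforms are far more elaborate than this sketch.

```python
# Toy decoy listener, for illustration only. Any connection to this fake
# service is suspicious by definition, because nothing legitimate uses it.
import socket
import datetime

BANNER = b"220 files.internal FTP ready\r\n"  # fake banner to invite interaction

def run_decoy(host="0.0.0.0", port=2121):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(BANNER)
                data = conn.recv(1024)  # capture whatever the intruder tries
                print(f"{datetime.datetime.utcnow().isoformat()} "
                      f"probe from {addr[0]}:{addr[1]} sent {data!r}")

if __name__ == "__main__":
    run_decoy()
```

The difference with the interactive honeypots Manky describes is scale and fidelity: production decoys mimic entire environments and feed what they observe back into orchestration as actionable threat intelligence.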
Here’s the complete video interview, part of News’s and theCUBE’s coverage of the RSAC 2025 Conference:
(* Disclosure: Fortinet Inc. sponsored this segment of theCUBE. Neither Fortinet nor other sponsors have editorial control over content on theCUBE or News.)
Photo: News