Howard Taylor, CISO, Radware Ltd.
It is widely predicted that artificial intelligence (AI) will completely transform cybersecurity over the next decade, leaving no corner of the industry untouched. The good news is that AI automation can improve incident response across the board, mapping attack patterns through powerful real-time analytics. The same trend can also free professionals from much of the grunt work of analyzing alerts, helping close some of the much-discussed cybersecurity skills gap.
The problem is that the same advantages apply to attackers who are already using the technology to accelerate the evolution of new threats. It’s a frustrating reality. Every benefit we might think of for AI has a disadvantage hiding somewhere inside it.
AI can work perfectly and still create cybersecurity hazards we haven’t recognized yet. We lack the lived experience to fully understand the risks.
Package Hallucination
Take the unsettling but fascinating example of AI hallucination, the subject of a 2024 university study. Hallucinations happen when a large language model (LLM) produces nonsensical, bizarre or contradictory output. Normally when this happens, it is easy to detect because the output is outlandish in ways that jump out at us. However, there are also times when the hallucination is either too small to be immediately obvious, or humans are in too much of a hurry to notice.
It is the latter phenomenon, humans not taking the time, that is one of the underpinning themes of the study above. Today's developers use multiple open-source libraries to create software in what can be a long and complex software supply chain. They also increasingly use AI tools such as GitHub Copilot to speed up the coding process, automating and debugging routines as they go.
The researchers discovered that these same coding tools sometimes hallucinated references to packages that didn't exist. About 20% of the Python and JavaScript code samples examined across 30 different LLMs contained references to nonexistent packages.
At first glance, this might not seem like a critical issue because programs referencing hallucinated packages would fail to compile or run, exposing the error. However, the researchers raised a chilling possibility: What if a malicious actor noticed the hallucinated package names and deliberately created packages with those names, embedding them with malicious code? In that scenario, the hallucinated package would no longer appear suspicious because it would now exist in the supply chain. This opens the door to an almost undetectable supply chain attack. To exploit this vulnerability, attackers would only need to identify which hallucinated package names are being generated by AI tools—and the researchers found no shortage of targets.
According to the study, “the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat.”
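To make the risk concrete, consider how thin the line is between a hallucinated dependency and a real one. The sketch below is a minimal illustration, not a hardened tool; the file layout and thresholds are assumptions. It checks whether each entry in a requirements-style file actually resolves to a project on PyPI, which would flag a hallucinated name before anyone installs it. Note the limitation the researchers point out: once an attacker registers that name, the package does exist, and a simple existence check like this one passes.

```python
# Minimal sketch: flag requirements that do not resolve to a real PyPI project.
# Assumes a requirements.txt-style file; real pipelines would use a lockfile
# and a vetted internal mirror rather than this ad hoc check.
import re
import sys

import requests

PYPI_URL = "https://pypi.org/pypi/{name}/json"


def package_exists(name: str) -> bool:
    """Return True if the name resolves to a project on PyPI."""
    resp = requests.get(PYPI_URL.format(name=name), timeout=10)
    return resp.status_code == 200


def check_requirements(path: str) -> list[str]:
    """Return requirement names that could not be found on PyPI."""
    missing = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", "-")):
                continue
            # Keep only the bare project name (drop pins, extras, markers).
            name = re.split(r"[\s\[<>=!~;]", line, maxsplit=1)[0]
            if name and not package_exists(name):
                missing.append(name)
    return missing


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"
    for name in check_requirements(path):
        print(f"WARNING: '{name}' not found on PyPI -- possibly hallucinated")
```

A check like this only proves a name exists; it says nothing about whether the package behind it is trustworthy, which is exactly the gap the attack exploits.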
Malicious packages are nothing new, but the unpredictable behavior of AI gives them another way into software repositories that might prove much harder to detect. To me, the possibility of an AI hallucination attack perfectly illustrates three issues that need urgent attention:
1. The attack surface is expanding in a new direction.
Our cybersecurity posture must acknowledge that the attack surface is in a constant state of expansion and transformation. This evolution is inevitable as long as we continue to innovate with software. The rise of AI further accelerates this challenge, making it even more critical to address.
2. Research is an essential investment.
White-hat research plays a pivotal role in protecting us from emerging threats. Increasing funding for researchers in the university sector is particularly important to model and anticipate the risks posed by new technologies, especially AI. Universities already possess significant expertise in these areas, making them well-suited to this task. Proactive investment now can prevent costly surprises in the future.
3. Humans alone can’t cope.
It’s unrealistic to expect developers to thoroughly vet every package they use in their software—time constraints and the fast-paced nature of software development make this impractical. Modern technology is too complex for human expertise alone to keep pace. Without advanced tools to support them, mistakes are inevitable.
AI can serve as a strategic advantage in this context, but it comes with its own risks, such as hallucinations. Until AI systems can reliably filter out their own inaccuracies, we must embrace AI's potential while remaining vigilant to its misuse by malicious actors. Even lightweight automated vetting of dependencies, along the lines of the sketch below, can catch some of these mistakes before they reach production.
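As a rough illustration of the kind of tooling support developers need, the sketch below goes one step beyond an existence check and flags packages that are brand new or have almost no release history, one weak but useful signal that a name may have been registered opportunistically. The 90-day and three-release thresholds are arbitrary assumptions, and PyPI metadata is the only signal used.

```python
# Minimal sketch of a heuristic vetting step using PyPI's public JSON API.
# The age and release-count thresholds are arbitrary assumptions; a real
# pipeline would combine several signals (maintainers, downloads, provenance).
from datetime import datetime, timezone

import requests


def vet_package(name: str, min_age_days: int = 90, min_releases: int = 3) -> list[str]:
    """Return warnings for a PyPI package that looks new or thinly maintained."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return [f"'{name}' not found on PyPI"]

    releases = resp.json().get("releases", {})
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]

    warnings = []
    if len(releases) < min_releases:
        warnings.append(f"'{name}' has only {len(releases)} release(s)")
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            warnings.append(f"'{name}' first appeared only {age_days} day(s) ago")
    return warnings


if __name__ == "__main__":
    for warning in vet_package("requests"):
        print(warning)
```

Neither check is conclusive on its own, and that is precisely the point: the tooling, like the AI that creates the risk, has to be layered and continuously reassessed.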