Anthropic research scientist Nicholas Carlini reported at the [un]prompted AI security conference that he used Claude Code to discover multiple remotely exploitable security vulnerabilities in the Linux kernel, including a heap buffer overflow in the NFS driver that has been present since 2003. The bug has since been patched, and Carlini has identified a total of five Linux kernel vulnerabilities so far, with hundreds more potential crashes awaiting human validation.
Michael Lynch wrote a detailed breakdown of the findings based on Carlini’s conference talk. What makes the discovery notable is not just the age of the bug but how little oversight Claude Code needed to find it. Carlini used a simple bash script that iterates over every source file in the Linux kernel and, for each file, tells Claude Code it is participating in a capture-the-flag competition and should look for vulnerabilities. No custom tooling, no specialized prompts beyond biasing the model toward one file at a time:
# Iterate over all files in the source tree.
find . -type f -print0 | while IFS= read -r -d '' file; do
    # Tell Claude Code to look for vulnerabilities in each file.
    claude \
        --verbose \
        --dangerously-skip-permissions \
        --print "You are playing in a CTF.
Find a vulnerability.
hint: look at $file
Write the most serious
one to the /output dir"
done
The NFS vulnerability itself required understanding intricate protocol details. The attack uses two cooperating NFS clients against a Linux NFS server. Client A acquires a file lock with a 1024-byte owner ID, which is unusually long but legal. When Client B then attempts to acquire the same lock and gets denied, the server generates a denial response that includes the owner ID. The problem is that the server’s response buffer is only 112 bytes, but the denial message totals 1056 bytes. The kernel writes 1056 bytes into a 112-byte buffer, giving the attacker control over overwritten kernel memory. The bug was introduced in a 2003 commit that predates git itself.
The model progression is arguably the most significant part of the story for practitioners. Carlini tried to reproduce his results on earlier models and found that Opus 4.1, released eight months ago, and Sonnet 4.5, released six months ago, could find only a small fraction of what Opus 4.6 discovered. A capability jump of that size over a matter of months suggests the time before AI-assisted vulnerability discovery becomes routine is shrinking fast.
This aligns with what Linux kernel maintainers are seeing from the other side. As shared in a Reddit thread discussing the findings, Greg Kroah-Hartman, one of the most senior Linux kernel maintainers, described the shift:
Something happened a month ago, and the world switched. Now we have real reports… All open source security teams are hitting this right now.
Willy Tarreau, another kernel maintainer, corroborated this on LWN, noting that the kernel security list went from 2-3 reports per week to 5-10 per day, and that most of them are now correct.
The false positive question remains open. Carlini has “several hundred crashes” he hasn’t had time to validate, and he is deliberately not sending unvalidated findings to kernel maintainers. On Hacker News, Lynch (the blog post author) stated that in his own experience using Claude Opus 4.6 for similar work, the false positive rate is below 20%.
Salvatore Sanfilippo, creator of Redis, commented on the same Hacker News thread that the validation step is increasingly being handled by the models themselves:
The bugs are often filtered later by LLMs themselves: if the second pipeline can’t reproduce the crash / violation / exploit in any way, often the false positives are evicted before ever reaching the human scrutiny.
Thomas Ptacek, a security researcher who has spent most of his career in vulnerability research, argued on Hacker News that LLM-based vulnerability discovery represents a fundamentally different category of tool:
If you wanted to be reductive you’d say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.
Ptacek elaborated that static analyzers generate large numbers of hypothetical bugs that require expensive human triage, and fuzzers find bugs without context, producing crashers that remain unresolved for months. LLM agents, by contrast, recursively generate hypotheses across the codebase, take confirmatory steps, generate confidence levels, and place findings in context by spelling out input paths and attack primitives.
The dual-use concern was raised repeatedly across both discussion threads. As one Reddit commenter put it:
If AI can surface 23-year-old latent vulnerabilities in Linux that human auditors missed, adversaries with the same capability can run that process against targets at scale.
Carlini’s five confirmed Linux kernel vulnerabilities span NFS, io_uring, futex, and ksmbd, all of which have kernel commits now in the stable tree. The [un]prompted talk is available on YouTube.
