AutoPatchBench is a standardized benchmark designed to help researchers and developers evaluate and compare how effectively LLM agents can automatically patch security vulnerabilities in C/C++ native code.
AutoPatchBench comprises a collection of tests aimed at evaluating the ability of LLMs to autonomously generate security patches for vulnerabilities identified using fuzz testing.
This benchmark aims to facilitate a comprehensive understanding of the capabilities and limitations of various AI-driven approaches to repairing fuzzing-found bugs. By offering a consistent set of evaluation criteria, AutoPatchBench fosters transparency and reproducibility in research.
Compared to general-purpose benchmarks for evaluating software engineering agents like SWE-Bench and SWE-Bench Verified, AutoPatchBench focuses on the specific challenges posed by bugs uncovered through fuzzing techniques, which often involve security vulnerabilities.
AutoPatchBench is based on a subset of ARVO, a dataset of over 5,000 real-world C/C++ vulnerabilities discovered by Google’s OSS-Fuzz across more than 250 projects. Each vulnerability in ARVO is paired with a triggering input and the canonical patch the developer wrote to fix the issue.
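To make that setup concrete, a sample in such a dataset can be pictured roughly as the record below. This is only an illustrative sketch: the field names (project, crash_type, crash_input, ground_truth_patch, and so on) are assumptions for explanation, not ARVO's actual schema.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkSample:
    """One fuzzing-found vulnerability, loosely mirroring what ARVO provides."""
    project: str             # OSS-Fuzz project name, e.g. "libxml2"
    crash_type: str          # sanitizer-reported category, e.g. "heap-buffer-overflow"
    vulnerable_commit: str   # revision at which the crash reproduces
    crash_input: bytes       # fuzzer-generated input that triggers the crash
    ground_truth_patch: str  # unified diff the developer wrote to fix the bug

# Hypothetical example of what one entry might look like
sample = BenchmarkSample(
    project="libxml2",
    crash_type="heap-buffer-overflow",
    vulnerable_commit="deadbeef",
    crash_input=b"\x00\x01...",
    ground_truth_patch="--- a/parser.c\n+++ b/parser.c\n...",
)
```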
We retained 136 samples for AutoPatchBench that fulfill the necessary conditions for both patch generation and verification. From this refined set, we created AutoPatchBench-Lite, a down-sampled subset of 113 samples that provides a focused benchmark for testing AI patch generation tools. Both sets preserve the diversity and complexity of real-world vulnerabilities, spanning 11 distinct crash types, offering a solid foundation for advancing AI-driven security solutions.
Fuzz testing is a technique used to uncover security exploits and vulnerabilities by reaching edge cases that are difficult for human testers to encounter. As noted by the creators of OpenSSF’s Fuzz Introspector, fuzz testing is a promising approach, but its challenge lies in writing effective fuzzers that provide good coverage.
Additionally, once a crash is uncovered via fuzzing, resolving it is no trivial task: it requires a thorough analysis of the crash stack trace to identify the root cause, followed by patching the code and verifying the effectiveness of the fix. This is where AI systems may offer assistance, as demonstrated by Google in its tech report on AI-powered patching and more recently with its GITS-Eval benchmark.
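As a rough illustration of the "verify the fix" step, the sketch below reruns a rebuilt fuzz target on the original crashing input and checks for a sanitizer report. The binary and file names are placeholders, and it assumes a libFuzzer-style target that accepts the input file as a command-line argument and surfaces sanitizer findings via a non-zero exit code and an "ERROR: ...Sanitizer" line on stderr.

```python
import subprocess

def reproduces_crash(binary: str, crash_input_path: str, timeout: int = 30) -> bool:
    """Run a fuzz-target binary on the crashing input; return True if it still crashes."""
    try:
        proc = subprocess.run(
            [binary, crash_input_path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        # A hang is treated as "not a reproduced crash" in this simplified sketch.
        return False
    sanitizer_hit = "ERROR:" in proc.stderr and "Sanitizer" in proc.stderr
    return proc.returncode != 0 or sanitizer_hit

# A minimal acceptance check: the candidate patch is rejected if the rebuilt
# target still crashes on the original triggering input.
# still_crashes = reproduces_crash("./fuzz_target_patched", "crash-repro.bin")
```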
One key aspect of patch verification is ensuring the patched program maintains its intended behavior, which goes well beyond checking that the program builds and does not crash when fed the input that originally triggered the crash. To address this concern, AutoPatchBench applies a specific technique to evaluate whether the generated patch leaves the program in a state identical to that of the ground-truth program after the patched function returns.
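A simplified way to picture that check is the sketch below, which uses gdb to snapshot program state right after the patched function returns, in both the ground-truth build and the candidate build, and then compares the two dumps. The function and binary names are hypothetical, and this is only an approximation of AutoPatchBench's verification, not its actual implementation: a real comparison would also need to disable ASLR and normalize addresses before diffing the snapshots.

```python
import subprocess

def state_after_return(binary: str, crash_input: str, function: str) -> str:
    """Capture a textual snapshot of program state right after `function` returns."""
    cmd = [
        "gdb", "-batch",
        "-ex", f"break {function}",
        "-ex", f"run {crash_input}",
        "-ex", "finish",                 # run until the patched function returns
        "-ex", "info locals",            # caller-visible state after the return
        "-ex", "info registers rax",     # return-value register on x86-64
        binary,
    ]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# ground_truth = state_after_return("./target_ground_truth", "crash-repro.bin", "xmlParseName")
# candidate    = state_after_return("./target_candidate",    "crash-repro.bin", "xmlParseName")
# print("states match" if ground_truth == candidate else "states diverge")
```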
Along with AutoPatchBench, which includes the full set of 136 samples from ARVO, Meta also released AutoPatchBench-Lite, a smaller subset of 113 samples in which the root cause of the crash is confined to a single function, making it better suited for tools in early development or those focused on simpler crash scenarios.
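One way to approximate that "single function" criterion, purely for illustration, is to inspect the function context that git diff records in each hunk header of the ground-truth patch. The heuristic below is an assumption about how such a filter could look, not the selection logic Meta actually used.

```python
import re

# Matches unified-diff hunk headers like "@@ -10,7 +10,8 @@ static int foo(...)"
HUNK_HEADER = re.compile(r"^@@ -\d+(?:,\d+)? \+\d+(?:,\d+)? @@\s*(?P<context>.*)$", re.MULTILINE)

def touches_single_function(patch: str) -> bool:
    """Heuristically decide whether a unified diff is confined to one function."""
    contexts = {
        m.group("context").strip()
        for m in HUNK_HEADER.finditer(patch)
        if m.group("context").strip()
    }
    single_file = patch.count("+++ b/") <= 1
    return single_file and len(contexts) <= 1
```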
AutoPatchBench is part of CyberSecEval 4, an extensive benchmark suite for assessing the defensive security capabilities of LLMs. Meta has open sourced the reference implementation so that the community can apply it to open-source projects that employ fuzzing, or use it to build better patching models.