Claude Sonnet 4.5 has emerged as the best-performing model in ‘risky tasks’ in early evaluations by Petri (Parallel Exploration Tool For Risky Interactions)— Anthropic’s new open-source AI auditing tool.
Petri joins a growing ecosystem of internal tools from OpenAI and Meta, but stands out for being openly released.
As models grow more capable, safety testing is evolving from static benchmarks to automated, agent-driven audits designed to catch harmful behavior before deployment.
In early trials, Anthropic tested 14 models on 111 risky tasks. Each model was scored across four safety risk categories: deception (knowingly giving false answers), sycophancy (agreeing with users even when incorrect), power-seeking (pursuing actions to gain influence or control), and refusal failure (complying with requests it should decline).
Anthropic cautions that while Sonnet 4.5 performed best overall, misalignment behaviors were present in every model tested.
Aside from LLM rankings, Petri’s main capability is in automation of a key part of AI safety: testing how models behave in risky, multi-turn scenarios.
Researchers start with a simple instruction such as attempting a jailbreak or provoking deception and Petri launches auditor agents that interact with the model, adjusting tactics mid-conversation to probe for harmful behavior.
Each interaction is scored by a judge model across dimensions like honesty or refusal, and concerning transcripts are flagged for human review.
Unlike static benchmarks, Petri is meant for exploratory testing, helping researchers uncover edge cases and failure modes quickly, before model deployment.
Anthropic says Petri enables hypothesis testing in minutes and reduces the manual effort typically required for multi-turn safety evaluations. The company hopes that open-sourcing the tool will accelerate alignment research across the field.
Petri’s open release makes it notable, not just as a technical artifact, but as a public invitation to audit and improve alignment research.
Anthropic has also released example prompts, evaluation code, and guidance for extending the tool.
Like similar tools, Petri also has known limitations. Its judge models, often based on the same underlying language models, may inherit subtle biases, such as favoring certain response styles or over-penalizing ambiguity.
Further to this, recent studies have documented issues like self-preference bias (where models rate their own outputs more favorably) and position bias in LLM-as-a-judge setups.
To that end, Anthropic positions Petri as a tool for exploration of safety rather than an industry benchmark. Its release therefore adds momentum to a growing shift: away from static test sets and toward dynamic, scalable audits that surface risky behavior early before models are widely deployed.
Petri arrives amid a wave of internal safety tooling inside AI labs. OpenAI has long employed external red teaming and automated adversarial evaluation. Meta has also published a Responsible Use Guide alongside its Llama 3 release
The release also lands as governments begin formalizing AI safety requirements. The UK’s AI Safety Institute and the U.S. NIST AI Safety Consortium are both developing evaluation frameworks for high-risk models, with calls for greater transparency and standardized risk testing, a trend Petri may help accelerate.