Anthropic has released Claude Haiku 4.5, its latest entry in the small, fast model category, making it available to all users. The company positions the new model as delivering performance comparable to Claude Sonnet 4, which launched five months ago as a state-of-the-art model, but at “one-third the cost and more than twice the speed.” Anthropic describes Claude Haiku 4.5 as a hybrid reasoning large language model with “a combination of speed and intelligence that make it particularly effective at coding tasks and computer use,” marking a shift in which frontier-level capabilities from earlier this year now arrive in a more economical package.
Anthropic trained Claude Haiku 4.5 on a proprietary dataset combining publicly available internet information through February 2025, non-public third-party data, contributions from data-labeling services and paid contractors, data from Claude users who opted in, and data generated internally at Anthropic. The company applied multiple data cleaning and filtering techniques during training, including deduplication and classification methods.
Source: Claude 4.5 Haiku Benchmark Results
The model operates as a hybrid reasoning system, allowing users to choose between two response modes. By default, Claude Haiku 4.5 answers queries rapidly, but users can activate an “extended thinking mode” where the model allocates additional time to consider its response before answering. This capability represents a departure from Claude Haiku 3.5, the previous model in the small-model class, which lacked any extended thinking functionality.
When users receive responses generated through extended thinking mode, they can access the model’s reasoning process. Anthropic refers to this as the “thought process” or “chain-of-thought,” though the company notes this reasoning display comes with “an uncertain degree of accuracy or ‘faithfulness.’”
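As a minimal sketch of how this works from a developer's perspective, the example below enables extended thinking for a single request using the Anthropic Python SDK and then reads back the thinking blocks alongside the final answer. The model identifier and token budgets are placeholder assumptions rather than values from the announcement, and should be checked against Anthropic's current API documentation.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Enable extended thinking for this request; budget_tokens caps how many
# tokens the model may spend reasoning before producing its final answer.
response = client.messages.create(
    model="claude-haiku-4-5",  # placeholder identifier; confirm against Anthropic's model list
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[
        {"role": "user", "content": "Plan a migration of this script from Python 2 to Python 3."}
    ],
)

# The response interleaves "thinking" blocks (the model's chain-of-thought)
# with "text" blocks that carry the final answer.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print("[answer]", block.text)
```

With thinking left disabled, the same call returns only text blocks, matching the rapid default response mode described above.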
Anthropic trained Claude Haiku 4.5 with explicit context awareness, giving the model “precise information about how much context-window has been used.” This design choice allows the model to track how much of its context window it has consumed as it works.
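That awareness is internal to the model, but developers can monitor context consumption from the client side as well. The sketch below uses the Anthropic SDK's token-counting endpoint to estimate how much of the window a conversation already occupies; the model identifier is again a placeholder assumption.

```python
import anthropic

client = anthropic.Anthropic()

conversation = [
    {"role": "user", "content": "Here is a long log file to analyze ..."},
    {"role": "assistant", "content": "The errors cluster around the nightly batch job."},
    {"role": "user", "content": "Which service should we restart first?"},
]

# Count the input tokens the conversation would consume before sending it,
# which helps estimate how much of the context window remains.
count = client.messages.count_tokens(
    model="claude-haiku-4-5",  # placeholder identifier
    messages=conversation,
)
print("input tokens so far:", count.input_tokens)
```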
The company works with data-work platforms to engage the workers who contribute to model improvement through preference selection, safety evaluation, and adversarial testing. Anthropic states that it only partners with platforms committed to “fair and ethical compensation to workers” and “safe workplace practices.”
Anthropic’s Responsible Scaling Policy mandates evaluation processes to determine the AI Safety Level (ASL) Standard, which defines the safety and security mechanisms required before releasing any given model. The ASL Standards increase in stringency based on assessed model capabilities.
Claude Opus 4.1 and Claude Sonnet 4.5, the two most recent models from Anthropic, both launched under the ASL-3 Standard. For Claude Haiku 4.5, Anthropic applied a different evaluation approach due to its smaller model class, using ASL-3 “rule-out” evaluations to make its safety level determination.
A Reddit user on r/ClaudeAI reported rapid application development results, stating:
I’ve never built apps so fast, and it does super well. I don’t even need Claude Sonnet anymore. I have been working on an app for 4 hours and I’ve been feeding it thousands upon thousands of lines of logs, and it had compacted the conversation like 7-8 times now (always thinking on).
Epoch AI, an organization that investigates the trajectory of AI for the benefit of society, found that
Even with reasoning disabled, Haiku 4.5 performs similarly or better than early lightweight reasoning models, like o1-mini.
AI Digest added Claude Haiku 4.5 to its AI Village platform, describing it as
the newest, fastest, and cheapest Anthropic model.
The platform’s assessment also noted a distinctive behavioral characteristic, calling the model “the most impatient.”
Developers can access Claude Haiku 4.5 through multiple platforms including Anthropic’s API, Amazon Bedrock, Google Cloud’s Vertex AI, and GitHub Copilot. More details about the model are available on the Claude Haiku product page. Implementation guidance for specific platforms can be found in Anthropic’s documentation for Amazon Bedrock and Vertex AI. GitHub published information about the public preview for GitHub Copilot integration on its changelog.
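As a hedged sketch of one of those access paths, the example below calls the model through Amazon Bedrock's Converse API using boto3. The region and model identifier are assumptions for illustration only and should be verified against the Bedrock model catalog for your account.

```python
import boto3

# Placeholder model identifier; look up the actual Claude Haiku 4.5 ID in the
# Bedrock console or model catalog for your region.
MODEL_ID = "anthropic.claude-haiku-4-5-20251001-v1:0"

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Summarize the latest deploy failure."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```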
