A recent study challenges the widespread belief that AI tools accelerate software development. Researchers at METR conducted a randomized controlled trial of experienced open-source developers using AI-assisted development tooling, specifically Cursor Pro with Claude 3.5/3.7 Sonnet. Contrary to expectations, they found that AI-assisted programming led to a 19% increase in task completion time, even as developers believed they were working faster. The findings reveal a gap between AI's perceived promise and its real-world impact.
To evaluate AI’s influence under realistic conditions, the researchers designed a randomized controlled trial (RCT) rooted in production-grade environments. Rather than using synthetic benchmarks, they recruited experienced contributors to complete real tasks across mature open-source repositories.
The 16 participants were professional developers with an average of five years of experience contributing to the projects they worked on. Tasks were realistic, ‘in-anger’ issues drawn from their own codebases: very large (over 1.1 million lines of code), well-established open-source projects.
For each of the 246 tasks, developers were randomly assigned to work either with or without access to AI assistance, in sessions capped at two hours. Those with access used Cursor Pro, a code editor with integrated support for Claude 3.5/3.7 Sonnet. The control group was explicitly blocked from using AI tools.
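To make the assignment protocol concrete, here is a minimal sketch of per-issue randomization as described above; the issue IDs, condition labels, and helper name are hypothetical, not drawn from the study's materials.

```python
import random

# Minimal sketch of a per-issue randomization scheme: each issue is assigned to
# an AI-allowed or AI-disallowed condition before work begins. Issue IDs and
# labels here are hypothetical, not taken from the study's materials.
def assign_conditions(issue_ids, seed=42):
    rng = random.Random(seed)
    return {issue: rng.choice(["ai_allowed", "ai_disallowed"]) for issue in issue_ids}

if __name__ == "__main__":
    issues = [f"ISSUE-{n}" for n in range(1, 11)]
    for issue, condition in assign_conditions(issues).items():
        print(f"{issue}: {condition}")
```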
The study collected both objective and subjective metrics, including task duration, code quality, and developer perception. Before each task, developers and external experts predicted the likely effect of AI on productivity; after each task, developers estimated the effect it had actually had.
The central result was both striking and unexpected: AI-assisted developers took 19% longer to complete tasks than those without AI. This contradicted pre-task expectations from both participants and experts, who had predicted an average speedup of ~40%.
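To put those two figures on the same scale, the sketch below uses a hypothetical one-hour baseline and assumes the predicted ‘~40% speedup’ is read as a 40% reduction in completion time; both results are then expressed as multipliers on the no-AI baseline.

```python
# Illustrative arithmetic only: the one-hour baseline is hypothetical, and
# reading "a ~40% speedup" as a 40% reduction in completion time is an
# assumption made for this sketch, not a figure taken from the study.
baseline_hours = 1.0                              # time for a task without AI

observed_with_ai = baseline_hours * 1.19          # measured: 19% longer with AI
predicted_with_ai = baseline_hours * (1 - 0.40)   # forecast: 40% less time with AI

print(f"observed with AI:  {observed_with_ai:.2f}h ({observed_with_ai / baseline_hours:.0%} of baseline)")
print(f"predicted with AI: {predicted_with_ai:.2f}h ({predicted_with_ai / baseline_hours:.0%} of baseline)")
```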
The authors attributed the slowdown to several sources of friction, including time spent prompting, reviewing AI-generated suggestions, and integrating outputs into complex codebases. Through more than 140 hours of screen recordings, they identified five key contributors to the slowdown. These frictions likely offset any upfront gains from code generation, revealing a significant disconnect between perceived and actual productivity.
The researchers describe this phenomenon as a ‘perception gap’: friction introduced by AI tooling is subtle enough to go unnoticed in the moment but cumulatively slows real-world output. The contrast between perception and outcome underscores the importance of grounding AI tool evaluation not just in user sentiment, but in rigorous measurement.
The authors caution against overgeneralizing their findings. While the study shows a measurable slowdown with AI tooling in this particular setting, they stress that many of the contributing factors are specific to their design. The developers were working in large, mature open-source codebases: projects with strict review standards and internal logic unfamiliar to the AI tools. The tasks were constrained to two-hour blocks, limiting exploration, and all AI interactions were funneled through a single toolchain.
Importantly, the authors emphasize that future systems may overcome the challenges observed here. Improvements in prompting techniques, agent scaffolding, or domain-specific fine-tuning could unlock real productivity gains, even in the settings tested.
As AI capabilities continue to progress rapidly, the authors frame their findings not as a verdict on the usefulness of AI tools, but as a data point in a fast-evolving landscape that still requires rigorous, real-world evaluation.