AI may help you complete some types of work, like coding, faster. But that speed may come at the cost of actually getting better at what you’re doing and mastering new skills along the way, new research shows.
AI firm Anthropic, which develops the ChatGPT competitor Claude, conducted the study on 52 junior software engineers. After a short warm-up, participants completed a series of Python-based coding tasks and were then quizzed on the skills involved. The whole process lasted about an hour and 15 minutes.
The researchers found that the group given AI assistance finished the tasks two minutes faster than the non-AI group but performed markedly worse on the quiz afterward, averaging 50% compared with 67% for the non-AI group. The largest gap between the two groups was on debugging questions, which asked programmers how to fix faulty code.
“Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery,” the researchers said. “This is also a lesson that applies to how individuals choose to work with AI, and which tools they use.”
The study also found that it wasn’t just whether programmers used AI that affected skill acquisition, but how they used it. The researchers identified several common patterns that distinguished the high- and low-performing groups on the post-task test.
The worst-performing participants had either delegated all their coding to AI or started coding manually before handing the work over to AI. Users who had AI debug their code directly, rather than asking questions about where the code went wrong, also generally performed poorly on the later test.
Meanwhile, programmers who asked AI questions about why the generated code worked—and followed up with additional questions—performed far better. Users who took a hybrid code-explanation approach, asking the AI to explain code as it generated it, performed better still. Users who asked only “conceptual” questions—requesting explanations of concepts and issues rather than having the AI do the work directly—performed by far the best on the test afterward.
The findings come as companies like Google and Microsoft set ambitious targets for incorporating AI into their code output, with Meta saying it plans for more than 50% of its code to be written by AI. Even cutting-edge NASA missions aren’t immune to AI-written code: In December, commands generated by Anthropic’s Claude, under human supervision, were sent to NASA’s Perseverance rover on Mars.
And while coders in this study were broadly faster when using AI, whether AI-assisted coding is actually quicker overall remains up for debate. A study from the AI research nonprofit METR earlier this year found that AI actually slowed down the programmers it tested, as the time spent prompting the AI matched or exceeded the time its assistance saved.
About Our Expert

Experience
I’m a reporter covering weekend news. Before joining PCMag in 2024, I picked up bylines in BBC News, The Guardian, The Times of London, The Daily Beast, Vice, Slate, Fast Company, The Evening Standard, The i, TechRadar, and Decrypt Media.
I’ve been a PC gamer since you had to install games from multiple CD-ROMs by hand. As a reporter, I’m passionate about the intersection of tech and human lives. I’ve covered everything from crypto scandals to the art world, as well as conspiracy theories, UK politics, and Russia and foreign affairs.