Software developers have embraced artificial intelligence tools with the enthusiasm of children discovering candy, but they trust the output about as much as a politician's promises.
Google Cloud's 2025 DORA Report, released on Wednesday, shows that 90% of developers now use AI in their daily work, an increase of 14% from last year.
The report also found that only 24% of respondents actually trust the information these tools produce.
The annual research, which surveyed nearly 5,000 technology professionals worldwide, paints a picture of an industry trying to move fast without breaking things.
Developers spend a median of two hours per day working with AI assistants, integrating them into everything from code generation to security reviews. Yet 30% of those same professionals trust AI output only "a little" or "not at all".
"If you are an engineer at Google, it is inevitable that you use AI as part of your daily work," Ryan Salva, who oversees Google's coding tools, including Gemini Code Assist, told CNN.
The company's own statistics show that more than a quarter of new code at Google now comes from AI systems, and CEO Sundar Pichai claims a 10% productivity boost for engineering teams.
Developers most often use AI to write and modify code. Other use cases include bug detection, reviewing and maintaining legacy code, and more educational purposes such as explaining concepts or writing documentation.
Despite the lack of trust, more than 80% of the developers surveyed reported that AI has improved their productivity, while 59% noted improvements in code quality.
Here, however, is where things get odd: 65% of respondents described themselves as heavily reliant on these tools, even though they do not fully trust them.
Among that group, 37% reported "moderate" reliance, 20% said "a lot", and 8% admitted to "a great deal" of dependence.
This trust-productivity paradox echoes the findings of Stack Overflow's 2025 survey, where distrust in AI accuracy rose from 31% to 46% in just one year, even as adoption rates hit 84%.
Developers treat AI as a brilliant but unreliable colleague: useful for brainstorming and grunt work, but everything it produces has to be double-checked.
Google's response goes beyond simply documenting the trend.
On Tuesday, the company unveiled its DORA AI Capabilities Model, a framework identifying seven practices designed to help organizations capture the value of AI without running unnecessary risks.
The model advocates user-centric design, clear communication protocols, and what Google calls "small batch workflows", in other words, avoiding large, unsupervised runs of AI-generated changes.
The report also introduces team archetypes, ranging from "harmonious high-achievers" to groups stuck in a "legacy bottleneck".
These profiles stem from an analysis of how different organizations handle AI integration. Teams with strong existing processes saw AI amplify their strengths. Fragmented organizations saw AI expose every weakness in their workflow.
The full State of AI-assisted Software Development report and the companion DORA AI Capabilities Model documentation are available through Google Cloud's research portal.
The materials include prescriptive guidance for teams that want to be more deliberate in their adoption of AI technologies, assuming, of course, that someone trusts them enough to implement it.