Nearly all software testing teams are either using or planning to use agentic AI, yet many respondents say their leadership lacks a clear grasp of testing realities, according to a recent survey of 400 testing executives and engineering leaders by Sauce Labs. Although 97% of companies are embracing or expect to embrace agentic AI in testing workflows, 61% say their leadership does not fully understand what effective software testing requires.
The survey reveals several tensions and emerging patterns. A trust gap is evident: 72% of respondents believe agentic AI could enable fully autonomous testing by 2027, yet the same share are uncomfortable granting AI agents full access to their data. Likewise, while enthusiasm for automation is high, most teams (85%) prefer a hybrid model that pairs human expertise with AI agents rather than ceding testing entirely to automation. Accountability is another notable finding: when agentic AI misbehaves, 60% of organizations blame individuals rather than the technology, suggesting potential pitfalls in how responsibility and error handling are organized.
These findings matter amid rising hype around AI in testing. The survey underscores that while companies are eager to deploy new tools, there’s a mismatch in expectations between technical teams and executives, especially around the pace of adoption, data access, and trust. For example, leadership often sees agentic AI as a means of accelerating digital transformation, while practitioners remain cautious, pointing out risks such as data leakage, model hallucinations, and unclear ownership of errors. This misalignment could stall adoption if not addressed early.
To integrate agentic AI successfully, according to Sauce Labs, organizations need clearer leadership insight, robust accountability frameworks, and realistic adoption roadmaps. Building trust will require transparent governance models, defined guardrails for agent autonomy, and cultural shifts that treat AI as a partner rather than a replacement. Sauce Labs suggests making these elements central to strategy so that innovation does not outpace understanding or governance. Ultimately, the survey highlights a key inflection point: the technology is racing ahead, but sustainable success will hinge on how effectively companies align people, processes, and trust frameworks around it.
The published findings do not, however, include a detailed breakdown by sector, leaving open the question of how adoption varies across industries such as technology, finance, healthcare, and retail.
To fill in this picture, insights from other studies provide useful context. The AI Agents Survey by the MLOps Community shows the technology sector accounting for around 43% of respondents, with finance at about 10%, healthcare at roughly 6.5%, and retail and e-commerce at just over 10%. A separate survey from Lyzr places technology even higher, at around 46%, followed by consulting and professional services at 18%, finance at 12%, and markedly lower adoption in healthcare and life sciences at 4% and education at 3%. Meanwhile, Blue Prism's survey suggests that in regulated industries like finance and healthcare, adoption often proceeds more cautiously, with strong interest but slower rollouts due to compliance and security concerns.
Taken together, these findings suggest technology firms are leading the way, finance and healthcare are interested but advancing more carefully, and retail and e-commerce are adopting at a moderate pace. In this light, the Sauce Labs figure of 97% engagement reflects a broad cross-industry trend, but the actual pace and depth of adoption are likely to vary significantly with sector-specific constraints and priorities.
