There’s a paradox in how developers use artificial intelligence today: They’re willing to use AI, but their trust in AI tools has dropped sharply.
That was among the findings of the annual developer survey commissioned by Stack Overflow, a popular web resource in the developer community. The survey found that though 84% of developers now use AI, only 29% trust its accuracy.
Results such as these highlight the growing pains AI is experiencing as the technology becomes ingrained into enterprise operations. As questions swirl around issues such as security, memory, cost and interoperability, developers are challenging the presumption that AI is ready to make their lives easier.
“If AI is supposed to be a revolutionary productivity tool, then why am I still doing most of the work?” asked Tony Loehr, a solutions engineer at Cline Bot Inc.
Solving the memory problem
Loehr spoke at the Developer Week conference in San Jose, a Thursday-and-Friday gathering of engineers and enterprise executives focused on independent software development and AI tools. The event provided an opportunity to assess how AI has affected enterprise operations and which key issues have moved to the forefront. One of these involves memory.
As recently documented by News Chief Executive John Furrier, AI is “memory-bound.” A typical AI server demands about eight times more memory than a traditional machine to deliver informed, accurate results.
Richmond Alake of Oracle spoke during the conference about the importance of memory for AI developers.
“The nature of what we have to do to build AI agents in production is changing,” Richmond Alake, director of AI developer experience at Oracle Corp., said during a presentation at the conference on Friday. “We need memory to be front and center.”
Part of Oracle’s solution is to provide representations of memory as tables in an Oracle database, allowing developers to build agents that can remember. The Oracle AI Database serves as an Agent Memory Core, providing unified retrieval, scalable persistence and foundations for building agents that learn and adapt over time.
“Agent Memory Core is the part of your system that sees the most traffic of data,” Alake explained. “It’s a bunch of system components working together to make sure your agents adapt. Memory is not just a layer, it became a product, it became a core feature.”
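The idea of representing agent memory as database tables can be sketched in a few lines. This is an illustrative sketch only: sqlite3 stands in for an Oracle database, and the table, column and function names here are hypothetical, not Oracle's actual Agent Memory Core schema.

```python
import sqlite3

# Hypothetical schema: one row per interaction, keyed by session.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE agent_memory (
        id INTEGER PRIMARY KEY,
        session_id TEXT,
        role TEXT,          -- 'user' or 'agent'
        content TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def remember(session_id: str, role: str, content: str) -> None:
    """Persist one interaction so the agent can recall it later."""
    conn.execute(
        "INSERT INTO agent_memory (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )

def recall(session_id: str, limit: int = 5):
    """Fetch the most recent interactions for a session, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM agent_memory "
        "WHERE session_id = ? ORDER BY id DESC LIMIT ?",
        (session_id, limit),
    ).fetchall()
    return list(reversed(rows))

remember("s1", "user", "My name is Ada.")
remember("s1", "agent", "Nice to meet you, Ada.")
print(recall("s1"))
```

Because the memory lives in ordinary tables, it survives process restarts and can be queried, indexed and scaled like any other database workload, which is the "scalable persistence" the approach promises.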
Advantages of smaller models
Another solution to improve AI’s accuracy and reduce the amount of memory needed involves a move toward smaller models. One of the companies involved in streamlining this process for developers is Red Hat Inc.
The goal is to accelerate inference, and a leading technique for doing so is quantization, which converts large language model weights to lower-precision numerical formats, reducing the memory load. Red Hat cites an example where quantization lets users run a 70 billion-parameter Llama model on a single Nvidia A100 GPU instead of the four A100s otherwise required. The company maintains a repository of pre-quantized models for developers to access.
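The memory savings come from storing each weight in fewer bits. A minimal sketch of symmetric int8 quantization, not Red Hat's actual pipeline, shows the idea: map float weights onto 8-bit integers, cutting storage per weight from 32 bits to 8 while keeping values approximately recoverable.

```python
# Symmetric int8 quantization sketch: one scale factor per weight group.

def quantize_int8(weights):
    """Scale floats into the signed 8-bit range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -1.50, 0.03, 0.76]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each recovered weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Production quantization schemes add per-channel scales, calibration data and careful handling of outlier weights, but the memory arithmetic is the same: 8-bit weights need a quarter of the memory of 32-bit ones, which is how a 70B model fits on one GPU instead of four.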
Legare Kerrison of Red Hat offered a new perspective about the benefits of smaller models.
“When the LLMs are smaller, it means that our costs are also smaller and we are moving faster,” said Legare Kerrison, a developer advocate for AI at Red Hat. “There’s less time to first token.”
Despite the promise of smaller models, many of the tools on display in the exposition hall at Developer Week were designed to facilitate access to leading AI models such as those from OpenAI Group PBC, Anthropic PBC and China’s DeepSeek. The developer community is looking closely at the merits of both small and large models, a situation that will likely become clearer over the coming year.
“Right now, it’s hard to bet against the foundation models, especially in our space,” Jody Bailey, chief product and technology officer at Stack Overflow, said in an interview with News during the conference. “I do believe there are lots of places where small models make a ton of sense.”
Building governance for MCP
Another area of focus for developers has been the increasingly influential role of MCP or Model Context Protocol servers in their work. MCP servers provide LLMs and AI agents with the ability to connect to external data sources, other models and software applications.
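Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. The sketch below shows that message shape using MCP's `tools/list` and `tools/call` methods; the transport (stdio or HTTP) is omitted, and the tool name and arguments in the second request are hypothetical.

```python
import json

def make_request(method: str, params=None, req_id: int = 1) -> str:
    """Serialize an MCP-style JSON-RPC 2.0 request."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask a server which tools it exposes...
list_tools = make_request("tools/list")

# ...then invoke one. "search_docs" and its arguments are made up here.
call_tool = make_request(
    "tools/call",
    params={"name": "search_docs", "arguments": {"query": "memory"}},
    req_id=2,
)

print(list_tools)
print(call_tool)
```

The simplicity of this wire format is a large part of MCP's rapid adoption, and, as the next section notes, also why governance questions around who may call what have become so pressing.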
MCP governance has been a key concern in the development community. Security professionals from Red Hat and IANS Research have documented security concerns with MCP in recent months, and one research report found that nearly 2,000 MCP servers exposed on the web lacked proper authentication and access controls.
“It’s like a lot of things where you have adoption first,” said Stack Overflow’s Bailey, who noted that his firm’s own MCP server employs enterprise-grade access controls. “Anybody can get an MCP server.”
The problem is that MCP does not offer the governance controls required for production systems. To address this issue, companies such as Descope Inc. and WSO2 LLC have recently announced solutions designed to facilitate more secure use.
Descope released Client ID Metadata Documents or CIMD support in January as part of its Agentic Identity Hub. CIMD addresses the client registration challenges of MCP by creating a stronger identification and verification path with server interactions.
WSO2 has approached the governance issue by building a gateway feature into its API manager and SaaS platform Bijira. The WSO2 MCP Gateway adds governance, security and operational controls to the MCP standard.
“You can’t rely on agents to govern themselves,” said Derric Gilling, vice president and general manager of the API Platform at WSO2. “Your gateway might need to evolve.”
Gateways for interoperability
Evolution of the gateway may indeed play a more important role for AI in the enterprise, as seen in the statements and actions of major players such as IBM Corp. The company’s vision of the AI Gateway as a specialized middleware platform that facilitates the integration and management of AI tools is a central part of its AI strategy. Interoperability will be key, according to Nazrul Islam, chief architect and CTO for AI and the integration platform at IBM.
Nazrul Islam of IBM outlined the importance of AI agent interoperability for Developer Week attendees.
“The problem is not the model, the problem is not the agent, the problem is the interactions,” said Islam. “We’re missing the interoperability and not the intelligence.”
IBM’s AI Gateway is a feature of the DataPower service in API Connect. It’s designed to make it easier to manage access to API endpoints used by various AI applications.
“There is the policy enforcement point,” Islam explained. “Agents recommend, the gateway authorizes. We need a control layer for agent-to-agent interaction.”
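Islam's "agents recommend, the gateway authorizes" principle can be illustrated with a tiny policy-enforcement point: the gateway checks an agent's proposed call against an explicit allowlist before forwarding it. This is a hedged sketch, not IBM's AI Gateway; the agent names, actions and policy table below are all hypothetical.

```python
# Hypothetical allowlist: which actions each agent may perform.
POLICY = {
    "billing-agent": {"invoices/read"},
    "support-agent": {"tickets/read", "tickets/write"},
}

def authorize(agent: str, action: str) -> bool:
    """Grant an action only if policy explicitly allows it."""
    return action in POLICY.get(agent, set())

def gateway(agent: str, action: str, payload: dict) -> dict:
    """Enforce policy at the gateway, outside the agent itself."""
    if not authorize(agent, action):
        return {"status": 403, "error": f"{agent} may not perform {action}"}
    # A real gateway would forward the request upstream here.
    return {"status": 200, "forwarded": {"action": action, "payload": payload}}

print(gateway("support-agent", "tickets/write", {"id": 7}))  # allowed
print(gateway("billing-agent", "tickets/write", {"id": 7}))  # denied
```

Keeping the policy check in a separate control layer means an agent that hallucinates or is prompt-injected into requesting a forbidden action still cannot execute it, which is the point of not letting agents govern themselves.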
The challenge confronting developers, and the technology community in general, is that the frenetic activity to build infrastructure around AI carries no guarantee of delivering the expected results. Just as Stack Overflow has documented the trust gap among developers, open questions remain around full enterprise adoption, at least until core issues such as security and interoperability are resolved.
“Just because AI can solve a problem doesn’t mean that people will adopt that solution,” said Caren Cioffi, co-founder and CEO of Agenda Hero Inc. “Agents can’t automate adoption.”
Photos: Mark Albertson/News
