Not surprisingly, AI was a major theme at Gartner’s annual IT Symposium/Xpo in Orlando last week, with the keynote explaining why companies should focus on value and move to AI at their own pace. But I was more interested in some of the smaller sessions, which focused on more concrete topics, from when not to use generative AI, to how to scale and govern the technology, to the future of AI. Here are some of the things I found most interesting.
When Not to Use Generative AI
Rita Sallam (Credit: Michael J. Miller)
“AI does not revolve around gen AI, although it might feel like it right now,” Gartner Fellow Rita Sallam said in a presentation entitled “When Not to Use Generative AI.” She noted that while boards may now be asking technology leaders to use generative AI, in reality many organizations have used AI of different kinds for many years, in things such as supply chain optimization, sales forecasting, and fraud detection.
(Credit: Gartner)
Sallam shared data from a recent survey showing that gen AI is already the most popular technique organizations are using in their AI solutions, followed by machine learning approaches such as regression.
She stressed that generative AI is very useful for the right use cases, but not for everything. She said it is very good at content generation, knowledge discovery, and conversational user interfaces, but has weaknesses around reliability, hallucinations, and a lack of reasoning. Generative AI is probabilistic, not deterministic, she noted, and she placed it at the “peak of inflated expectations” in Gartner’s hype cycle.
She warned that organizations that solely focus on gen AI increase the risk of failure in their AI projects and may miss out on many opportunities.
Gen AI is not a good fit for planning and optimization, prediction and forecasting, decision intelligence, and autonomous systems, Sallam said. In each of these categories, she listed examples, explained why gen AI fails in those areas, and suggested alternative techniques.
“Agentic AI” from gen AI vendors offers promise for solving some of these issues, but she said it is still a work in progress, and she urged attendees to beware of “agent-washing.”
(Credit: Gartner)
Sallam said companies need to start with the use case, then pick the tool that works best for it. For a variety of families of use cases, she showed a heat map rating the suitability of common AI techniques for each. No technique is perfect, she said, so many organizations will want to combine different AI techniques.
Best Practices in Scaling Generative AI
(Credit: Gartner)
For those projects that will use gen AI, Gartner’s Arun Chandrasekaran shared some “Best Practices in Scaling Generative AI.” He started the session by repeating a statistic Gartner publicized earlier saying that at least 30% of generative AI projects will fail (which didn’t surprise me much – I think 30% of all big IT projects don’t succeed, at least not on time). He said more recent numbers suggest that as much as 60 to 70% of gen AI projects don’t make it into production. The top reasons, he said, include poor data quality, inadequate risk controls (such as privacy concerns), escalating costs, and unclear business value.
Arun Chandrasekaran (Credit: Michael J. Miller)
According to another survey, one of the first and most used applications of gen AI is IT code generation and related tasks like testing and documentation, Chandrasekaran said. It’s also being used to modernize applications and in other infrastructure and operations areas such as IT security and DevOps.
(Credit: Gartner)
The second-most common application is customer service. He said this generally does not mean customer-facing chatbots today, but rather tools that convert customer service calls to text or perform sentiment analysis on those conversations. We are seeing AI systems help agents better answer customer queries, he said, but generally there is still a human in the loop.
Next up is marketing, from generating content to creating personalized social media posts. Overall, he said, 42% of AI investments are for customer-facing applications.
(Credit: Gartner)
Chandrasekaran then listed methods for scaling generative AI, beginning with creating a process for determining which use cases have the highest business value and the highest feasibility, and prioritizing those. Then comes the question of build versus buy, but with gen AI that decision is more nuanced, he said, because there is a range of choices: applications that have gen AI built in, those that embed APIs, those that extend gen AI with retrieval-augmented generation (RAG), those that use models customized via fine-tuning, and finally those that use fully custom models.
Most customers will choose one of the first three of these, he said, but it’s most important to align the choice with the goals of the application.
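To make the middle of that spectrum concrete, here is a minimal sketch of the retrieval-augmented generation pattern, with the embedding, retrieval, and model calls reduced to toy stand-ins (the embed, retrieve, and call_llm functions below are placeholders, not any particular vendor’s API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' that keeps this sketch self-contained."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCUMENTS = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN client is required for all remote connections.",
]

def retrieve(query: str, k: int = 1) -> list:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Stub standing in for whichever licensed model an organization actually calls."""
    return f"[model response grounded in: {prompt[:60]}...]"

def answer(query: str) -> str:
    """Fold retrieved context into the prompt before generation."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("How long do I have to file an expense report?"))
```

The point of the pattern is that the organization’s own documents, not the model’s training data, supply the facts in the answer, which is why it sits between off-the-shelf applications and fine-tuned or custom models.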
From there, you need to create pilots or proofs of concept. The goal is to experiment and find out what works and what doesn’t. He suggested a sandbox environment for testing these out.
(Credit: Gartner)
To build an application, he said, you’ll want a composable platform architecture, in part so you can use the model that is the most cost-effective at any point in time. Then you’ll want a “responsible AI” initiative including data privacy, model safety including “red teaming,” explainability, and fairness.
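As a rough illustration of what a composable layer might look like (the model classes below are hypothetical stand-ins, not real vendor SDKs), application code can depend on a narrow interface so the underlying model can be swapped as prices and capabilities change:

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Narrow interface the rest of the application depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class SmallCheapModel(TextModel):
    def complete(self, prompt: str) -> str:
        return f"[small-model answer to: {prompt}]"

class LargeFrontierModel(TextModel):
    def complete(self, prompt: str) -> str:
        return f"[large-model answer to: {prompt}]"

def summarize_ticket(ticket: str, model: TextModel) -> str:
    # Business logic never names a vendor; swapping models is a one-line change.
    return model.complete(f"Summarize this support ticket: {ticket}")

print(summarize_ticket("Customer cannot reset their password.", SmallCheapModel()))
```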
Next is investing in data and AI literacy, since so many knowledge workers will be using gen AI tools in the next few years; that includes skills like prompt engineering and understanding what AI is good at and where it has issues. He agreed that “it’s important to know when to use AI and also when not to use gen AI.” Then, he said, you’ll want robust data engineering practices, including tools for integrating your data with gen AI models.
Today, machines and people have an uncomfortable relationship, so you’ll want a process for enabling seamless collaboration between humans and machines. This includes techniques such as keeping a “human in the loop” to vet gen AI system outputs, along with tools like empathy maps.
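A minimal sketch of the human-in-the-loop idea, assuming a hypothetical confidence score attached to each generation, might look like this:

```python
REVIEW_QUEUE = []

def deliver(draft: str, confidence: float, threshold: float = 0.8):
    """Auto-send only confident outputs; queue the rest for a human reviewer."""
    if confidence >= threshold:
        return draft  # sent directly to the customer
    REVIEW_QUEUE.append({"draft": draft, "confidence": confidence})
    return None  # held until a person vets it

sent = deliver("Your refund has been approved.", confidence=0.65)
print(sent, len(REVIEW_QUEUE))  # None 1 -> this output waits for human review
```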
Then there are financial operations practices: you’ll need to understand techniques like using smaller models, creating prompt libraries, and caching model responses. Model routers can figure out the cheapest model that will give you an appropriate response.
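Here is a small sketch of those cost controls, combining a response cache with a router that tries a cheaper model first; the model names, prices, and quality check are illustrative assumptions, not real figures:

```python
from functools import lru_cache

MODELS = [  # ordered cheapest first; names and prices are made up
    {"name": "small", "cost_per_call": 0.001},
    {"name": "large", "cost_per_call": 0.03},
]

def call_model(name: str, prompt: str) -> str:
    return f"[{name} answer to: {prompt}]"  # stub for a real API call

def good_enough(answer: str) -> bool:
    return len(answer) > 20  # stand-in for a real evaluation or guardrail check

@lru_cache(maxsize=1024)
def route(prompt: str):
    """Return (answer, cost), preferring the cheapest model that passes the check."""
    for model in MODELS:
        answer = call_model(model["name"], prompt)
        if good_enough(answer) or model is MODELS[-1]:
            return answer, model["cost_per_call"]

print(route("Summarize our travel policy in one sentence."))
print(route("Summarize our travel policy in one sentence."))  # cache hit: no second call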
Finally, he said companies need to “adopt a product approach” and think of IT as a “product owner” making sure the product is on a continuous update schedule and that it continues to meet people’s needs.
AI Strategy and Maturity: Value Lessons From Practice
Svetlana Sicular (Credit: Michael J. Miller)
Gartner’s Svetlana Sicular talked about “AI Strategy and Maturity,” beginning by saying an AI strategy needs to have four pillars: vision, risks, value, and adoption. These pillars balance each other out, she said, noting that without adoption, you won’t deliver on the vision and without managing risk, you won’t get value.
The average mature company has 59 AI use cases in production. But the key first step is to select and prioritize the use cases that make the most sense for your organization. For most, she said, you should start by selecting three to six use cases that all more or less use the same technique. Once you understand one case, the others will go faster, and you’ll be able to determine which ones pan out and which do not. Only 48% of use cases end up in production, she noted.
Sicular said it’s more important to experiment with use cases than to spend your time comparing vendors, because the products will change in six months. And she stressed that the AI solution doesn’t need to be gen AI, just something that adds value. Only after you have use cases and have experimented will you be ready to create a strategy, setting expectations for your organization and your budget. In doing so, she said, you need to follow the value for your business and create a strategic roadmap of use cases.
That strategy has to be adaptable, she said, with the two key issues being how to set expectations for AI maturity, and when to pivot.
(Credit: Gartner)
Only after you have your use cases should you build an AI governance structure, with the principles, guides, and standards you need to scale. And then you need to know when to pivot and expand your use cases. For this, she said, data will be central to your AI strategy, and she suggested establishing an end-to-end AI lifecycle. If you have a framework for developing, delivering, and testing, then you can scale AI with automation.
Sicular used Fidelity as an example of industrializing the AI process. She suggested watching out for what people really do with AI and how that differs from what you expected they would use it for, and urged IT professionals to “develop a prioritization approach that allows tactical and strategic projects to emerge from shared efforts and understanding of AI’s capabilities.”
Executive Guide to AI Governance
Frances Karamouzis (Credit: Michael J. Miller)
Drilling deeper on AI governance, Gartner’s Frances Karamouzis noted that AI governance is difficult and not as widely applied as it should be.
Part of this, she said, is because of where the spending on and control of AI projects is happening. According to a separate Gartner survey, the primary source of funding for AI initiatives comes from IT budgets (CIO or CTO) only 31% of the time, while 52% of the time it comes from non-IT functions, the largest of which is a business unit or function budget. When asked who is responsible for delivery, respondents said the CIO was responsible 21% of the time, the CTO 19%, and an AI leader – usually someone outside the traditional IT roles – 19%, followed by a long tail of other responses.
“One of the reasons that governance is so difficult in this market is that for those who are in charge of governance, it’s even unclear what their agreement is with their scope or the scale, or what’s underneath them,” Karamouzis said.
(Credit: Gartner)
She went on to share a model of six pillars for a governance operating model, covering everything from describing the mandate and scope of the policy to the structure and roles that will be necessary. She said there are no “best practices” because the technology is so new and every organization is different; there is no single best way of doing governance, so you need to find the approach that is appropriate for your organization.
But Karamouzis said there are four big areas that need to be addressed: the multiple disciplines the governance will cover; the scope and portfolio – what remit and authority you will have; communications and enforcement – how people know what the right thing to do is; and the approach you will take.
She told the audience that their role in their organizations is to think through how to divide up governance – asking whether the chief data officer is responsible, or a chief AI officer, or an AI board – and to monitor how this changes over time.
Most organizations will end up with some solutions they buy and some they build, and this will impact governance. Similarly, the communications strategy isn’t static, but the goal is appropriate behavior by both humans and machines, and this needs to be in the DNA of the organization. And the approach will vary with the starting point – what enterprise-wide governance policies you already have in place.
“Governance isn’t just one thing,” Karamouzis said. “You actually need different kinds or specific kinds of governance across the spectrum.” In general, a decision on AI governance isn’t a “one and done” thing, it will change over time.
Measuring and Quantifying Cost, Risk and Value of AI Initiatives
In another session, Karamouzis discussed measuring and quantifying the cost, risk, and value of AI initiatives. She said 73% of CxOs are planning to increase spending on AI in 2024, but noted that there is a low success rate in AI projects, often because the leadership is not ready.
She said companies should define, develop, and curate a portfolio of AI initiatives, and that each initiative demands a cost, risk, and value assessment.
(Credit: Gartner)
She talked about creating “opportunity radars” of specific applications and shared one for manufacturing, dividing it along two axes: everyday AI versus game-changing AI, and external customer-facing versus internal operations.
For much of the everyday AI—applications like coding assistants, ChatGPT, or Copilot for Microsoft 365—using it is just table stakes (not bringing a competitive advantage), and it is typically adopted by only one-third of employees. She noted it’s expensive and hard to calculate the return-on-investment for such applications. But she also said these have demonstrated benefits in productivity and work quality, and they provide a foundation that prepares organizations for differentiation.
(Credit: Gartner)
She then shared a method for visualizing the cost of gen AI, and said it’s important to monitor costs very carefully, because estimates can be off by 500 to 1,000%.
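A back-of-the-envelope cost model shows how quickly those estimates can drift; every number below is a made-up assumption meant to be replaced with your own contract prices and usage volumes:

```python
def monthly_cost(requests_per_day, input_tokens, output_tokens,
                 price_in_per_1k, price_out_per_1k):
    """Rough monthly spend: per-request token cost times 30 days of traffic."""
    per_request = (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

base = monthly_cost(5_000, 800, 300, price_in_per_1k=0.0005, price_out_per_1k=0.0015)
# Underestimating prompt sizes and adoption is how estimates end up off by 500% or more:
worst = monthly_cost(15_000, 2_000, 600, price_in_per_1k=0.0005, price_out_per_1k=0.0015)
print(f"planned: ${base:,.0f}/month   plausible worst case: ${worst:,.0f}/month")
```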
The Future of AI: Less Talking, More Doing
Erick Brethenoux (Credit: Michael J. Miller)
“Today, AI is not doing its job,” Gartner’s Chief of AI Research, Erick Brethenoux, said in a session on the future of AI. He asked whether AI is failing us, or if it is our lack of imagination because we don’t know what to ask it to do.
Often, AI automates sub-optimal processes. He built on this year’s keynote discussion of deep productivity, which results from low-experience workers doing low-complexity tasks and high-experience workers doing high-complexity tasks. AI works great in the zone of deep productivity, he said, noting how Mitsui Chemicals discovered 160 new materials that generally create $7 million in value each year. He continued, “But more often than not AI distracts.”
Brethenoux said that AI should simplify the work we have to do, and echoed many speakers who said “the future of AI is human-first.” He noted that today we get so many interruptions that we have less time to pay attention, and too often the new technology just generates more of them. If we were to ask AI to take control of these interruptions, through various agents, it could be scary.
Yet he noted the power of simplicity, using the original iPod as an example. If we could get applications that reduce the time we spend dealing with notifications, that would give people more time. Some of that will initially result in productivity leakage, but that leakage shrinks over time. For instance, he said, people might save 30 minutes a week and initially spend that time getting coffee. But the next week they will spend less time, and within a few weeks they’ll spend only 15 minutes at the coffee machine – with some of that time spent talking with colleagues and learning from them.
He noted that one Gartner client today has a fleet of drones that survey wind farms in the North Sea. If these drones find problems, they can launch more expensive drones that can look more closely, and they only contact the operator if they see real problems, he said.
In general, he proposed an “employee-first approach,” where instead of looking for tasks to automate, we ask people what they don’t like about their jobs. This leads to instant acceptance, and then you can move on to the next task, and the one after that, to arrive at “empathetic AI.”
“The future of AI is up to us,” according to Brethenoux, who said we could try to make AI more like us (including our limitations – and he asked why we would want that), to make AI complementary to us (since we can look at maybe eight dimensions of a problem while machines can look at thousands), or to make AI into a superintelligence (which he’s skeptical about). He believes we should focus on the second option and push for more automation.
(Credit: Gartner)
He then went through what he calls the five foundational elements for the future of AI.
The first of these is AI agents, which he said is not a model but rather an automated software entity. “We’ve seen some of the vendor promises, which they are starting to deliver on, and that is going to change the way we think in terms of AI,” he said, suggesting we should think of it as a new “teammate joining a team.”
Then there is composite AI, which involves assembling multiple AIs together, including older models such as decision intelligence, the gen AI models that are now getting attention, and new things such as neurosymbolic AI.
A third trend is AI engineering, he said. Implementing AI deeply inside your business processes is hard, and the technical data associated with it is massive. This was a big challenge before ChatGPT was released in 2022, and since then most organizations have paused to play with the new technologies. Now, he said, “recess is over” and we have to figure out how to measure productivity with the new tools.
AI literacy is also crucial, with many people fearing the technology will replace their jobs. He noted that in most cases AI will replace tasks, not jobs, but we will still need to manage expectations while limiting the fear and training people on how to use the tools. One training program will not be enough; it needs to change constantly as the technology changes.
Finally, he talked about responsible AI, including ethics, security, governance, and sustainability. AI has been good at solving complex problems, he said, and we need elegant programming to solve these challenges. He noted that Gartner believes you don’t need a Chief AI Officer, but you do need an AI leader to ensure governance, the use of best practices, and the right competencies.
AI is here to stay, Brethenoux said, so we must deal with it. In fact, organizations have been using machine learning for the last 25 years, and most have a head of data science.
By saving us time by doing some tasks for us, AI will give us more time to think, to thoughtfully decide to act, and to interact with colleagues.
AI Will Not Replace Software Engineers (and May, in Fact, Require More)
Philip Walsh (Credit: Michael J. Miller)
“Rumors of the demise of the software engineering role have been greatly exaggerated,” Gartner’s Philip Walsh said in a session arguing against statements from many heads of AI companies that their solutions could replace software engineers.
He noted that calculators did not eliminate the need to learn mathematics, because math isn’t calculation, it’s problem solving. Likewise, he said, software engineering transcends coding; the real skill of software engineers is their creative and critical thinking ability, and they need to handle problems in increasingly complex environments.
But, Walsh said, the role will evolve in three phases. First, we will augment existing work patterns, then we’ll start to push the boundaries, and then we will break through them and create new, innovative, and more complex solutions, which will require highly talented designers, engineers, and architects.
Echoing the keynote, he noted that today’s AI code assistants are focused on little things and show lots of promise, but also lots of disappointment. He noted that in some large organizations, the code assistants have completion acceptance rates of less than 30%. He said today’s tools are not pair programmers, because they hallucinate and show sycophancy and anchoring bias.
(Credit: Gartner)
Some studies show that junior developers are using these tools more and getting more out of them, but these studies are measuring activity, not results. Instead, he said, senior developers can actually get the most out of the tools, because to get it right you need to know what you want in advance – a skill junior developers lack – and you need to be able to prompt, iterate, and validate the results. Junior developers may show more enthusiasm, he said, but if they rely too heavily on the tools, that may inhibit learning. He urged organizations not to overprioritize productivity measures, and said the expertise of senior developers matters more than ever, as they need to cultivate junior developers.
“Humans have always used tools to augment and enhance their capabilities,” Walsh said, comparing using AI tools to using a hammer, “but all of the intention and all of the agency resides in me.”
He noted that augmentation is nice, but the real move will be offloading entire aspects of tasks so that humans can focus on higher-level work. To do this, companies are proposing AI agents that will have to become more like a teammate. He said software agents that bridge the gap between augmentation and offloading will happen, but as with self-driving cars, this will take longer than expected because there are compounding layers of context and complexity. Fully autonomous AI software engineers would need to deal with an incredible amount of complexity, so we should be cautious about our expectations.
He noted that foundation models are gaining incredible capabilities, and that models, data, platforms, and tools are important, but he said we don’t pay enough attention to the organizational and human side. He noted that the electrification of industrial manufacturing, which began in the late 19th century, took more than 30 years. Getting to such AI coding tools won’t take that long, but it “will take longer than many people expect.”
(Credit: Gartner)
And even if and when they do get these capabilities, Walsh said, we’ll just need different kinds of software engineers. He described an “AI-native software engineer,” who would assign agents to various parts of the development plan but would still need to assess business needs, frame and assign tasks, supply the context, and review and approve the process. In this way, he said, a software engineer will become more like a conductor than a musician.
This will take time, so companies should approach this as a long-term change management and upskilling project.
Beyond today’s boundaries, the efficiency gains that come from AI-native software engineering will enable us to build new types of software. This will require exceptionally talented AI designers and engineers.
Walsh pointed to the “Jevons paradox,” which occurs when making something more efficient increases demand for it rather than reducing it. That was first observed when coal demand rose as steam engines became more efficient during the Industrial Revolution, and we see it today when extra lanes are added to a highway yet traffic problems worsen.
He noted that over the past few decades we’ve seen advances in programming languages, all promising to democratize programming, but that these instead fueled demand for software and software engineers. Software backlogs aren’t getting any smaller, and the Bureau of Labor Statistics predicts a 25% increase in the need for software developers from 2022 to 2032.
“It’s not just about how we build software, it’s about what kind of software we build,” according to Walsh, noting that only 54% of AI projects are successfully deployed. He added that this will require a new breed of software professional – the AI engineer. He said 55% of organizations plan to add or increase AI engineers in the next year, and that there is a big skill gap.
AI is certainly driving great software engineering efficiency, but the very thing that is transforming developer productivity is also transforming what we can do with software, Walsh noted. “Generative AI will power a surge of demand for software engineering.”