The Davos 2026 World Economic Forum witnessed a significant shift in the discourse on artificial intelligence. Where last year’s event was marked by optimism and a fascination fueled by the lack of data on the technology’s true impact, this year technology leaders struck a more pragmatic tone, focusing on the real challenges of implementation, return on investment, the need for colossal infrastructure on a planetary scale, and resource estimates that are starting to look daunting.
The largest investment in infrastructure in history
In his conversation with Larry Fink, CEO of BlackRock, Jensen Huang, CEO of Nvidia, did not describe artificial intelligence as just another vertical industry, but as the foundation of what he called “the largest infrastructure build in human history,” structuring it as a “five-layer cake” that encompasses energy, chips, cloud data centers, AI models and applications. Huang explained that each of these layers requires massive investment and is creating demand for specialized workers at every level.
Huang emphasized that the real economic value for most companies will not lie in the creation of foundational models (layer 4), but in the application layer (layer 5). It is in applying intelligence to specific problems (drug discovery, logistics optimization, materials design) where the return on investment will occur. A key concept introduced by Huang and reinforced in other sessions is the idea of “National Intelligence,” or AI sovereignty. Huang urged nations to “develop their own AI, continue to refine it, and have their national intelligence as part of their ecosystem.”
This has an interesting reading for the corporate environment: the Sovereignty of Enterprise AI. Organizations cannot afford to completely outsource their corporate brain to third-party vendors without retaining the intellectual property of fine-tuning and context data. AI infrastructure must be treated with the same criticality as power grids or national highways: an essential public utility that must be resilient and, to some extent, autonomous.
Despite the complexity of the underlying infrastructure and the titanic investments it will entail, Huang offered an optimistic view on the democratization of access. He declared that AI is “the most user-friendly software in history” and that the fundamental shift in work will be from “performing tasks” to “directing purposes.” This transition from “code writer” to “AI teacher” means that IT departments will move from being mere technical support centers to continuing-education schools, where the workforce is taught to “teach AI” rather than program it. It is a paradigm shift we are already witnessing in software development, one of the sectors that will be most affected by this revolution.
Energy as a decisive factor
As we already noted, returning to the first layer that Huang mentioned, Satya Nadella, CEO of Microsoft, was blunt in his own conversation with Larry Fink in Davos: “GDP growth anywhere will be directly correlated to the energy cost of using AI.” Nadella warned that “we will quickly lose even the social permission to use energy to generate these tokens, if these tokens are not improving outcomes in health, education, public sector efficiency and private sector competitiveness.”
Nadella introduced a new macroeconomic concept, “tokens per dollar per watt,” suggesting that future economic growth will depend directly on this energy-efficiency metric. Residential electricity costs in the United States have risen about 13% since January 2025, according to the Energy Information Administration, while the average utility bill for electricity and gas rose 3.6% year-over-year in the third quarter of 2025.
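To make the metric concrete, here is a minimal sketch in Python. The interpretation and every figure in it are hypothetical assumptions on our part (Nadella offered a concept, not a formula): it simply reads “tokens per dollar per watt” as tokens generated divided by both the dollars spent and the average power drawn.

```python
# Illustrative sketch only: the interpretation and all figures below are
# hypothetical, not taken from Nadella's remarks.

def tokens_per_dollar_per_watt(tokens_generated: float,
                               cost_usd: float,
                               avg_power_watts: float) -> float:
    """One possible reading of the metric: tokens produced per dollar spent per watt drawn."""
    return tokens_generated / (cost_usd * avg_power_watts)

# Hypothetical cluster: 10 billion tokens generated in a day, at a cost of
# $50,000, drawing an average of 2 MW.
efficiency = tokens_per_dollar_per_watt(
    tokens_generated=10e9,
    cost_usd=50_000,
    avg_power_watts=2e6,
)
print(f"{efficiency:.3f} tokens per dollar per watt")  # 0.100 under these assumptions
```

Whatever the exact definition settles on, the direction is clear: raising this number means squeezing more useful output from the same energy and budget.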
Along the same lines, in one of the most talked-about conversations of the forum, also with Larry Fink, Elon Musk issued a warning that every digital infrastructure manager should hear: “By the end of this year, we will be producing more chips than we can power.” The constraint is no longer the silicon, but the power to run it. Musk, true to his style, proposed a solution that mixes extreme engineering with science fiction: placing solar-powered AI data centers in orbit.
According to his calculations, solar energy in space is five times more efficient and, thanks to the reusability of the Starship rocket, which promises to cut launch costs by a factor of 100, this orbital infrastructure could be economically viable sooner than we think. Musk didn’t stop there, predicting that AI will surpass the intelligence of any individual human by 2026 or 2027, and that of all of humanity combined by 2030.
AI needs tangible results
Nadella’s point about tangible results was reiterated by Ruth Porat, president and CIO of Alphabet. During the session on “New Growth Perspectives,” Porat was blunt: AI cannot just be about chatbots or marginal cost reduction; the real challenge for companies is the complete transformation of their processes. However, data presented at the forum by consulting firms such as Deloitte show a gap between this aspiration and reality: only 25% of organizations have managed to take their AI pilots to a scalable production phase. Companies are trapped in “pilot purgatory,” held back by technical debt and the lack of a cohesive data strategy.
On the model development front, the discussion turned to autonomy and the substitution of complex tasks. In the panel “The Day After AGI” (Artificial General Intelligence), Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) released a prediction that should force any IT department’s hiring plans to be rewritten: we are just months away from AI agents being able to perform most of the tasks of a junior software engineer from start to finish. This poses a training crisis for the industry: if AI eliminates the entry-level step, how will we train the senior architects of the future? It will require rethinking and elevating the role of expert analysts and engineers, transforming their tasks into those of trainers and supervisors.
Demis Hassabis offered a more cautious view, expecting the creation of “new, more significant jobs,” although he acknowledged a likely slowdown in the hiring of interns, which would be “offset by the incredible tools available to everyone.” Hassabis estimated that there is a 50% chance of achieving artificial general intelligence (AGI) before the end of the decade, although not through models built exactly like current AI systems. He also downplayed the Chinese threat, estimating that Chinese AI labs are six months behind their American and European counterparts.
For his part, Sam Altman, while keeping a more discreet profile in the main panels, generated headlines in the side events that confirm OpenAI’s commercial maturity. Altman revealed that the company’s API business has surpassed $1 billion in annual recurring revenue, validating the deep integration of its models into the business ecosystem. Additionally, it was confirmed that OpenAI’s long-awaited hardware device, designed in collaboration with Jony Ive, is “on track” to be revealed in the second half of 2026, promising a “more peaceful” and less intrusive user experience than the current smartphone.
