Microsoft CEO Satya Nadella stated during an interview with OpenAI CEO Sam Altman that the main obstacle to deploying AI is not computing capacity but a lack of energy to power the thousands of accelerators being installed in data centers. Nadella was blunt: Microsoft currently cannot plug in all the GPUs it has in inventory.
Energy consumption is one of the great challenges of powering the enormous artificial intelligence infrastructure projected for the coming years. As AI systems keep expanding, their energy footprint grows even faster, and recent reports warn that AI's energy consumption threatens global sustainability. Nor is it only the servers that need power: they must also be cooled, so the consumption of resources such as water is an added problem.
"The biggest problem we have now is not excess computing capacity, but power; it is the ability to get the builds done fast enough, close to power," Nadella explained. "So if that can't be done, we may have a bunch of chips sitting in inventory that can't be used. In fact, that's my problem today. It's not a chip supply problem; it's that I don't have shells to plug them into."
Energy and water: the supplies behind AI deployment
The Microsoft CEO's mention of "shells" refers to the structure of a data center: essentially an empty building with all the necessary utilities, such as electricity and water, ready to start operating immediately. The energy consumption involved in the deployment of AI has been a topic of debate among many experts. The issue became especially pressing once NVIDIA resolved the GPU shortage, and many tech companies are now investing in solving an even bigger problem.
The need for cheap energy has generated renewed interest in nuclear power, and specifically in small modular reactors (SMRs). These reactors are much smaller than conventional ones and can be built in one place, then transported and installed at their final location.
Almost all cloud providers and hyperscalers have announced some kind of initiative in this area. AWS is collaborating with X-Energy and Dominion Energy; Google is betting on Kairos Power; Microsoft is working to revive Three Mile Island; and Oracle has announced plans to deploy at least three SMRs with more than one gigawatt of combined capacity. NVIDIA, the world's largest supplier of AI accelerators, does not want to be left behind and has also announced its own investment, specifically in the SMR startup backed by Bill Gates.
SMRs appear to be the industry's bet for building its own power-generation facilities, but the latest designs are still at the prototype stage and will take years to deploy. Until then, how do we power the data centers that AI demands? The debate continues.
