The energy consumption of artificial intelligence is a big topic at the moment, partly because the top AI firms have announced some large-scale plans to power their future endeavours. Meta and Google have both discussed bringing nuclear power back to feed their AI ambitions, while OpenAI is playing around with the idea of building data centers in space. With plans as sci-fi as these in the works, people naturally start to wonder why big tech companies need so much power, and how much energy our day-to-day interactions with AI products actually consume.
In response to this curiosity, companies like Google have released information this year about the energy consumption and efficiency of their AI products, and OpenAI was not far behind. In June, CEO Sam Altman published a blog post that included an energy figure for “the average” ChatGPT query: 0.34 watt-hours. Altman equates this to “about what an oven would use in a little over a second, or a high-efficiency lightbulb would use in a couple of minutes.”
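As a quick sanity check, Altman’s household comparisons are easy to reproduce. The appliance wattages below (a 1 kW oven element and a 10 W LED bulb) are our assumptions, not figures OpenAI has published, but they line up with his phrasing almost exactly:

```python
# Back-of-the-envelope check on Altman's comparisons.
# Assumed wattages (not from OpenAI): ~1 kW oven, ~10 W high-efficiency LED.

QUERY_WH = 0.34  # Altman's stated energy for "the average" ChatGPT query

OVEN_W = 1000    # assumed oven power draw, watts
LED_W = 10       # assumed LED bulb power draw, watts

oven_seconds = QUERY_WH / OVEN_W * 3600  # watt-hours -> seconds at that wattage
led_minutes = QUERY_WH / LED_W * 60      # watt-hours -> minutes at that wattage

print(f"Oven: {oven_seconds:.2f} s")   # ~1.22 s ("a little over a second")
print(f"LED:  {led_minutes:.2f} min")  # ~2.04 min ("a couple of minutes")
```

At those assumed wattages, 0.34 watt-hours works out to about 1.2 seconds of oven time and 2 minutes of bulb time, so the comparisons are internally consistent, even if the underlying figure remains unverified.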
So, does each ChatGPT prompt really use 0.34 watt-hours? Unfortunately, it’s probably not that simple. While the number may be accurate, Altman provided no context or methodology for how it was calculated, which severely limits its usefulness. For instance, we don’t know what OpenAI counts as an “average” ChatGPT query, since ChatGPT handles a wide range of tasks, such as general questions, coding, and image generation, all of which require different amounts of energy.
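To see why the definition of “average” matters so much, here is a toy illustration. Every number in it is invented purely to show how the traffic mix swings the result; none of these per-task figures come from OpenAI or any measurement:

```python
# Illustration only: how the "average" query energy depends on the task mix.
# All per-task energies and mix shares are invented placeholder values.

task_energy_wh = {
    "general question": 0.3,   # hypothetical Wh per query
    "coding": 1.0,             # hypothetical
    "image generation": 3.0,   # hypothetical
}

def average_query_wh(mix):
    """Weighted-average energy per query for a given traffic mix."""
    return sum(task_energy_wh[task] * share for task, share in mix.items())

chat_heavy = {"general question": 0.90, "coding": 0.08, "image generation": 0.02}
image_heavy = {"general question": 0.60, "coding": 0.10, "image generation": 0.30}

print(average_query_wh(chat_heavy))   # ~0.41 Wh
print(average_query_wh(image_heavy))  # ~1.18 Wh
```

Two plausible traffic mixes produce “averages” nearly three times apart, so without knowing the mix behind OpenAI’s number, a single headline figure tells us very little.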
Why is AI energy consumption so complicated?
We also don’t know how much of the process is covered by Altman’s number. It may include only the GPU servers used for inference (the output-generation process), but there are several other energy-consuming parts to the puzzle: cooling systems, networking equipment, data storage, firewalls, electricity conversion losses, and backup systems. Much of this extra infrastructure is shared across all kinds of tech workloads, which makes it notoriously difficult to attribute in energy reporting. So, although Altman’s number may not give us the full picture, you could argue that it makes sense to isolate the GPU servers, as they are the main source of energy consumption unique to AI workloads.
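One standard way to account for that shared infrastructure is Power Usage Effectiveness (PUE): the ratio of a facility’s total energy use to the energy used by its IT equipment alone. If we assume, purely for illustration, that Altman’s figure covers only the GPU servers, typical published PUE values would inflate it like this:

```python
# Sketch: scaling an IT-only figure by PUE (total facility energy / IT energy).
# Assumes, for illustration only, that Altman's 0.34 Wh covers just the
# GPU servers; OpenAI has not said what the figure includes.

GPU_WH_PER_QUERY = 0.34

def total_wh(gpu_wh, pue):
    """Estimate whole-facility energy from the IT-equipment share."""
    return gpu_wh * pue

# Typical published PUEs: ~1.1 for efficient hyperscale sites,
# ~1.5 for an average data center.
for pue in (1.1, 1.2, 1.5):
    print(f"PUE {pue}: {total_wh(GPU_WH_PER_QUERY, pue):.2f} Wh per query")
```

Even at the efficient end of that range, the overhead adds roughly 10 to 20 percent on top of the server-only figure, which is another reason the scope of Altman’s number matters.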
Another thing we don’t know is whether this average was taken across multiple models or refers to just one (and if so, which one?). Even if we did know, we would need regular updates as newer, more advanced models are released. For example, GPT-5 rolled out to all ChatGPT accounts just two months after Altman’s post, and third-party AI labs quickly ran tests and released estimates suggesting it may consume as much as 8.6 times more power per query than GPT-4. OpenAI hasn’t shared any figures of its own, but if the independent estimates are even close to accurate, Altman’s blog post is already obsolete, leaving us just as uninformed about ChatGPT’s energy consumption as before.
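If we take the independent 8.6x estimate at face value and combine it with Altman’s figure (a risky move, since the two numbers come from different sources with unknown methodologies), the arithmetic looks like this:

```python
# What the third-party GPT-5 estimate would imply, IF it can legitimately be
# combined with Altman's figure. The two numbers come from different sources
# using unknown methodologies, so treat this as illustrative only.

GPT4_ERA_WH = 0.34      # Altman's "average" query figure
GPT5_MULTIPLIER = 8.6   # third-party estimate of GPT-5's draw relative to GPT-4

gpt5_wh = GPT4_ERA_WH * GPT5_MULTIPLIER
print(f"{gpt5_wh:.2f} Wh per query")                  # ~2.92 Wh
print(f"{gpt5_wh / 1000 * 3600:.1f} s of oven time")  # ~10.5 s at our assumed 1 kW
```

On those assumptions, a GPT-5 query would land closer to 3 watt-hours, or about ten seconds of oven time rather than one, which is exactly why a single unexplained figure from mid-2025 can’t settle the question.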
