The notion that AI is a high energy consumer is a “misnomer”, the co-founder of Google DeepMind has said, amid heightened scrutiny of the carbon footprint of frontier artificial intelligence businesses.
Sir Demis Hassabis, who launched the company in 2010, said the energy efficiencies AI firms unlock globally in the long run were likely to outweigh the industry's total power consumption.
Responding to concerns about AI’s high energy consumption, he said: “I think there’s quite a big misnomer here in my view.
“The amount of compute to train a model can be quite large but you do that once only and then it’s deployable very cheaply from an energy and cost perspective and very widely.
“The models that we’re going to build for things like climate modelling but also material design, this technology in AI is going to be one of the main drivers of solutions to the climate crisis.
“So I think the amount of energy they use is going to be very small compared to the amount of savings they’re going to make.”
The remarks come as AI companies take an increasingly large share of national energy grids to power and cool the vast data centres on which frontier large language models are trained.
Data centres and transmission networks are each responsible for as much as 1.5% of global electricity consumption, according to the International Energy Agency, which estimates that training an AI model uses more power than 100 households consume in a year. There are also concerns about the amount of water used in cooling systems for data centres.
Google Cloud CEO Thomas Kurian said: “If you want models to be widely used, just economically you cannot run them forever at a loss, so you’re going to naturally find that models get more optimised in order to be widely used, and the advancements we’ve seen in terms of inference improvements over the last two years are astonishing.”
The comments by Hassabis and Kurian were made at an event hosted at DeepMind’s London headquarters to mark the launch of new AI products, initiatives and skills training for the UK. The event was billed as reinforcing Google Cloud’s long-term commitment to the UK, highlighted by its $1bn investment in a new data centre opening this year.
Among the AI announcements made at the event were:
- Chirp 3, Google’s audio generation model, is joining Gemini, Imagen, and Veo on Vertex AI, with HD Voices available in 31 languages, offering 248 distinct voices with eight speaker options;
- An expansion of Google’s UK data residency commitment to include Google Agentspace, which offers expertise for employees with agents that bring together Gemini’s advanced reasoning, Google-quality search, and enterprise data; and
- An expansion of Google’s AI skilling initiatives in the UK, offering new training and certification programs for developers, students, professionals, and higher education institutions.
“I’m really proud of our UK roots, having founded Google DeepMind in London, in large part due to the amazing talent and academic institutions based here,” Hassabis said.
“As the engine room for Google, through our Gemini models we continue to contribute to the thriving UK tech sector by helping developers and businesses across the UK and worldwide drive breakthroughs with the help of AI.”