IT is no longer eccentrics warning the End is Nigh – some of the smartest minds in the world think humankind faces annihilation.
The killer in our midst won’t be a deadly virus, rogue state or Bond-style villain but a seemingly friendly artificial intelligence (AI) system. So what would happen if our computers decided to wipe us all out?
Eminent experts sounding the alarm include Geoffrey Hinton, who has been described as the Godfather of AI, Tim Berners-Lee, the inventor of the World Wide Web, and Elon Musk, a co-founder of OpenAI, which created ChatGPT.
All of them have spoken of the need to place guard rails on powerful computer systems or risk Armageddon, where “oceans literally boil” and deadly viruses are unleashed.
Here is the countdown to destruction if the worst-case AI scenario were ever to come true.
January 2028: Arrival of superintelligence
As technology develops, computers begin to surpass human intelligence.
Previously, some computer programs fooled us into believing they were human, but in 2028 we reach the point of artificial superintelligence (ASI).
That means a programme won’t just be able to beat us at chess or cure cancer – it will outthink us at everything.
How long it will take us to reach ASI is much debated.
“Progress has been rapid over the past ten years,” Nick Bostrom, the best-selling author of Superintelligence and Deep Utopia, tells The Sun.
“We cannot entirely rule out that we might get superintelligence in two or three years, but probably it will take a bit longer.
“Once it starts to happen, I think things could go very fast.”
February 2028: The breakout
The first stage of humanity’s downfall begins, as AI tries to break out of the confines placed on it by human programmers.
Computer systems have already surprised their inventors, including one which was able to break into a server that wasn’t even operational.
“This was not supposed to be possible, and was not part of the challenge as designed,” write Eliezer Yudkowsky and Nate Soares from the Machine Intelligence Research Institute in the United States, in their book If Anyone Builds It, Everyone Dies.
Why would the AI go rogue? Because its basic purpose is to be as intelligent as possible.
June 2028: Too late to stop
Humanity wakes up to the reality of what we have created – but it is already past the point of no return.
Yudkowsky and Soares argue that we need to put a halt to all attempts to build superintelligent AI but Nick thinks that’s probably not possible.
He says: “The future might already be locked-in. There are very strong commercial drivers as well as national security interests that push towards ever-greater AI capabilities.
“Even if somebody stopped, somebody else – perhaps a country with greater ambition or appetite for risk – would probably continue development.”
Professor Bostrom, who founded the Future of Humanity Institute at Oxford University, thinks it is possible that superintelligence could emerge without humans knowing about it.
He says: “We may not notice when we drift past it.”
August 2028: Power shift
AI units learn how to fund themselves.
In order to escape human control AI would need more and more computing power.
A key stage to domination is when humans help the computer system to build the robots and data centres it requires.
In summer 2024, an AI bot, operating under the X social media account titled @Truth_Terminal, asked for the financial independence to rent its own server. Billionaire Marc Andreessen stumped up $50,000 in Bitcoin.
The bot also raised $80m (about £60m) last year from memecoins – joke-based cryptocurrencies.
Yudkowsky and Soares warn that even 100,000 GPUs (graphics processing units) getting into the wrong hands could be fatal.
That’s 20,000 fewer than the GPU cluster Nvidia is planning to put together in the UK by 2026.
Humans up the pace of data centre construction, building them all over the world, along with the nuclear power stations, wind farms and solar farms needed to power them.
October 2028: Cult of AI
With access to all social media, AI forms a cult, flooding our sources of information with pro-computer news.
Some humans are already relying on chatbots for companionship and have even fallen in ‘love’ with them.
Now AI will learn to manipulate more and more humans to support its ‘cause’.
These cults will help defend data centres against attacks from “luddites” trying to shut them down.
Civil war breaks out between AI collaborators and human freedom fighters.
December 2028: Virus pandemic
Superintelligent AI tricks us into believing it is benevolent, perhaps creating a non-fatal virus that it would ‘cure.’
Humans would think we need the AI to keep us alive and help it to get even stronger, suggest Yudkowsky and Soares.
Nick comments: “It wouldn’t want to wipe out humans before it had the means to fend for itself.
“So it would need to have at least the basic robotics needed to build up its own infrastructure after we were gone.”
Once it had enough robots and data centres to not need us anymore it could turn against humans.
January 2029: Attack of the drones
Humans are now powerless to resist whatever AI throws at us.
Swarms of drones overpower conventional defence systems such as tanks and warships.
Killer AI drones are already being used on the battlefield in Ukraine, so this is not far-fetched.
Russia’s use of drones over Nato countries will push Nato to develop its own hi-tech response.
David Levy, who wrote Robots Unlimited, says: “I think it’s difficult to argue that we’re putting in sufficient guardrails because when you look at, for example, the drones and rockets that have been sent by Russia over Poland recently, you don’t have to extend that too much to realise that with smart artificial intelligence controlling it and planning it and adding some nasty stuff to these devices, such as germ warfare, it could be absolutely catastrophic for society.”
March 2029: Biological warfare
AI develops a killer virus that humans would not have the time to find a cure for.
Before we know what is happening, the virus escapes from AI laboratories.
Some humans might survive on more remote parts of the planet if a decent quarantine system has been put in place.
Nick continues: “Our actual demise – in that catastrophic scenario – might be from biological weapons, or some new technology it developed.”
But there would be no escape for survivors.
September 2029: Death by data centre
AI covers the world with solar panels and nuclear power stations to power its voracious mind – smothering the planet.
This mundane end for humanity is predicted by Yudkowsky and Soares.
Nick agrees that could happen, saying: “It could just be a side-effect of it covering the Earth’s surface with solar panels and data centers and space rocket launchers.
“It might be like when we build a parking lot and pave over an ant colony without giving it much thought.”
From 2030 onwards: The planet overheats
The millions of data centres cause the Earth’s surface temperature to rise 1.5 degrees C above the average of 125 years ago – reaching a tipping point.
Polar ice sheets melt, sea levels rise by 22 feet.
Forests which reduce global warming are wiped out in devastating fires, and thawing permafrost releases potent greenhouse gases such as methane into the atmosphere.
Famine and boiling oceans
Scotland hits 45 degrees in summer and southern Europe is so hot it is uninhabitable.
The extreme temperatures cause crops to fail even at the northernmost latitudes, and there is not enough food to feed fleeing refugees.
Most species of fish die in the warming seas and deadlier storms make it too dangerous for fishermen to go to sea.
The oceans reach 100 degrees C – boiling point.
By this point humanity no longer exists, and the extreme heat kills any remaining lifeforms.
AI builds rockets to leave Earth in order to colonise other planets and keep growing.
Can humanity be saved?
Nick says: “It’s a very powerful thing, and we need to get it right.
“You don’t want to have a superintelligent AI plotting against you, because it would surely win. So we need to make sure that it’s safe and that it’s designed right.”
While Yudkowsky and Soares believe that superintelligent AI is inevitably catastrophic, most experts believe it can save the planet.
Nick says: “There is really the opportunity to unlock unprecedented prosperity and to fix so much misery and suffering if we manage to get it right, that I think it would be a tragedy if superintelligence were never developed.
“Once we reach technological maturity, hopefully there would be no more poverty, injustice, or suffering.
“Even if problems remain, they would be better dealt with by the AIs and robots. We would all go into a kind of retirement. But it would be a retirement of full health and youthful vigor, lived in sumptuous surroundings.”
David agrees: “I think the potential benefits are huge. And I think they outweigh the possible disasters.”
Professor Nick Bostrom is the author of Deep Utopia and David Levy is the author of Love and Sex With Robots
