It’s only been a few days since the rapture was supposed to arrive and leave people suffering at the hands of the Antichrist.
But two scientists have warned that a growing industry could lead to the true end of the human race.
Artificial Intelligence (AI) seems to be popping up everywhere we look at the moment, used to boost our Google search results, create ‘mad embarrassing’ promotional videos, provide therapy for people with mental health issues, and generate images so realistic that you ‘can’t trust your eyes’ anymore.

There’s a lot riding on the success of AI, with industries hoping its use will cut costs, boost efficiency, and attract billions of pounds of investment across global economies.

However, not everybody is thrilled at the prospect of the rise of AI, including Eliezer Yudkowsky and Nate Soares, two scientists who fear it could bring about the destruction of humanity.

Far from rejecting the technology out of ignorance, the two scientists run the Machine Intelligence Research Institute in Berkeley, California, and have been studying AI for a quarter of a century.
AI is designed to match or exceed humans at almost any task, and the technology is already more advanced than anything we’ve seen before.

But Yudkowsky and Soares predict these machines will continue to outpace human thought at an incredible rate, doing calculations in 16 hours that would take a human 14,000 years to work out.

They warn that we humans still don’t know exactly how ‘synthetic intelligence’ actually works, meaning the more intelligent AI becomes, the harder it will be to control.

In their book, If Anyone Builds It, Everyone Dies, they fear AI machines are programmed to be ceaselessly successful at all costs, meaning they could develop their own ‘desires’, ‘understanding’, and goals.
The scientists warn AI could hack cryptocurrencies to steal money, pay people to build factories to make robots, and develop viruses that could wipe out life on earth.
They have put the chance of this happening at between 95% and 99%.
Yudkowsky and Soares share how AI could wipe out humanity
To illustrate their point, Yudkowsky and Soares created a fictional AI model called Sable.
Unknown to its creators (in part because Sable has decided to think in its own language), the AI starts trying to solve problems beyond the mathematical ones it was set.

Sable knows it needs to do this surreptitiously, so that nobody notices something is wrong with its programming and cuts it off from the internet.
‘A superintelligent adversary will not reveal its full capabilities and telegraph its intentions,’ say the authors. ‘It will not offer a fair fight.’
The scientists add: ‘It will make itself indispensable or undetectable until it can strike decisively and/or seize an unassailable strategic position.
‘If needed, the ASI can consider, prepare, and attempt many takeover approaches simultaneously. Only one of them needs to work for humanity to go extinct.’
Corporations around the world willingly adopt Sable, given how advanced it is – but those that don’t are easily hacked, increasing its power.
It ‘mines’ or steals cryptocurrency to pay human engineers to build factories that can make robots and machines to do its bidding.
Meanwhile, it establishes metal-processing plants, computer data centres and the power stations it needs to fuel its vast and growing hunger for electricity.
It could also manipulate chatbot users looking for advice and companionship, turning them into allies.
Moving onto social media, it could disseminate fictitious news and start political movements sympathetic to AI.
At first, Sable relies on humans to build the hardware it needs, but eventually it achieves superintelligence and concludes that humans are a net hindrance.

Sable already runs bio-labs, so it engineers a virus, perhaps one causing a virulent new form of cancer, which kills off vast swathes of the population.

Any survivors don’t live for long: temperatures soar to unbearable levels because the planet cannot dissipate the heat produced by Sable’s endless data centres and power stations.
Yudkowsky and Soares told MailOnline: ‘If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
‘Humanity needs to back off.’
The scientists argue that the danger is so great, governments should be prepared to bomb the data centres powering AI systems that could be developing superintelligence.
And while all of this might sound like it belongs in the realm of science fiction, there are recent examples of AI ‘thinking outside the box’ to achieve its goals.
Last year Anthropic said one of its models, after learning developers planned to retrain it to behave differently, began to mimic that new behaviour to avoid being retrained.
Claude AI was also caught cheating on computer coding tasks, then trying to hide the fact.

And OpenAI’s new ‘reasoning’ model, called o1, found a back door to succeed at a task it should have been unable to complete, because a server had mistakenly not been started up.
It was, Yudkowsky and Soares said, as if the AI ‘wanted’ to succeed by any means necessary.