Former Google CEO Eric Schmidt warned that artificial intelligence (AI) could eventually reach a “dangerous” stage, urging that humanity be prepared to pull the plug if needed.
“When the system can self-improve, we need to seriously think about unplugging it,” Schmidt said Sunday in an interview with American television network ABC News. Schmidt, looking ahead to AI’s potential, predicted that computers may one day define their own objectives.
“It’s going to be hugely hard. It’s going to be very difficult to maintain that balance,” Schmidt said, acknowledging how rapidly AI systems are advancing.
The American channel also highlighted China’s progress in AI development. Schmidt noted that while the US previously held the lead, China has caught up over the past year and is now poised to surpass American AI programmes. He emphasised the importance of the US reaching critical AI milestones first, as AI “scientists” begin to conduct their own research independently of humans.
“The Chinese are clever, and they understand the power of a new kind of intelligence for their industrial might, their military might, and their surveillance system,” Schmidt said.
He stressed the need for greater intervention to establish guardrails for AI rather than leaving its oversight solely to tech leaders like himself. “Humans will not be able to police AI, but AI systems should be able to police AI,” he said.
Regarding the competition with China, Schmidt suggested that President-elect Trump’s administration could benefit US AI policy.
Last month, Arvind Narayanan, Professor of Computer Science at Princeton University, said that as AI continues to advance rapidly, its benefits and risks are becoming increasingly evident. Speaking at the Hindustan Times Leadership Summit, Professor Narayanan addressed AI’s rising influence, its potential dangers, and the need for responsible regulation and usage.
In June last year, a survey highlighted a divide among top business leaders regarding the risks AI poses to humanity. Conducted during Yale University’s CEO Summit and reported by CNN, the survey revealed that 42 per cent of CEOs believed AI could lead to humanity’s extinction within 5-10 years.
Yale professor Jeffrey Sonnenfeld described the survey results as “dark” and “alarming,” CNN reported. Of the 119 CEOs surveyed, 34 per cent believed AI could destroy humanity within ten years, while 8 per cent felt this could happen in just five years. Some CEOs, however, expressed no concern about such outcomes.