The world appears to be advancing steadily toward a riskier AI future: an “AI Safety Clock” designed to track progress toward uncontrolled artificial general intelligence (UAGI) has ticked three minutes closer to midnight. Symbolically, we’re now only 26 minutes away from “AI midnight,” the dire digital demarcation point when a UAGI could come online and cause computerized chaos, according to a team of Switzerland-based academics.
Instructors at the IMD business school in Lausanne launched the AI Safety Clock in September to give the general public an easy-to-understand model for critical discussions around AI. They note that the closer the clock gets to midnight, the greater the AI risk becomes.
The IMD team also wanted to create something that could be easily updated based on quantitative and qualitative AI developments derived from real-time technological and regulatory changes.
Michael Wade, professor of Strategy and Digital at IMD and director of its TONOMUS Global Center for Digital and AI Transformation, led the team that created the clock. In an email, he explained why the team was compelled to advance the clock by three minutes earlier today.
“The action highlights the accelerating pace of AI advancements and associated risks. Breakthroughs in agentic AI, open-source development and military applications underscore the urgency for robust regulation to align innovation with safety and ethical standards,” Wade explained.
AI Safety Clock Methodology
He added that the team developed a proprietary dashboard that tracks 1,000 websites and 3,470 news feeds, along with reports from experts. That automated monitoring is combined with manual research to glean additional insights into global AI regulatory and technology updates.
Wade stated that through automated data gathering and continuous expert analysis, they strive to ensure a balanced, deep understanding of the ever-changing landscape in AI.
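IMD hasn’t published the dashboard’s implementation, but the automated half of this pipeline can be sketched in a few lines. The following Python sketch, assuming the third-party feedparser library and hypothetical feed URLs and keywords (none of which come from the article), shows how tracked news feeds might be polled and flagged for human review:

```python
# Illustrative sketch only; the article does not disclose IMD's actual
# implementation. Requires the third-party feedparser library
# (pip install feedparser). Feed URLs and keywords are hypothetical.
import feedparser

# Hypothetical subset of the ~3,470 news feeds the dashboard tracks.
FEED_URLS = [
    "https://example.com/ai-policy.rss",
    "https://example.com/ai-research.rss",
]

# Keywords flagging items relevant to AI capability and regulatory updates.
KEYWORDS = {"agentic", "regulation", "open-source", "agi", "autonomy"}

def collect_flagged_items(feed_urls):
    """Poll each feed and return entries mentioning any tracked keyword."""
    flagged = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(keyword in text for keyword in KEYWORDS):
                flagged.append({"source": url, "title": entry.get("title", "")})
    return flagged

if __name__ == "__main__":
    # Flagged items would then go to human analysts, mirroring the combined
    # automated/manual approach Wade describes.
    for item in collect_flagged_items(FEED_URLS):
        print(item["source"], "->", item["title"])
```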
He shared several recent news developments that suggest an acceleration toward a UAGI threat:
- Elon Musk’s advocacy under a new U.S. administration could push open-source growth and decentralization.
- OpenAI’s “Operator” and “Swarm” announcements herald agentic AI that executes tasks autonomously, hinting at steps toward AGI.
- Amazon announced plans to develop its own custom AI chips while launching multiple AI models and a massive supercomputer.
- In June, OpenAI appointed retired U.S. Army General Paul M. Nakasone, former director of the National Security Agency, to its board of directors. Nakasone is expected to open doors for OpenAI into the defense and intelligence sectors of the U.S. government.
AI Evaluated on Three General Factors
Wade explained that the team also evaluates risks based on the respective AI’s sophistication, autonomy and ability to execute its plans.
“While sophistication reflects how intelligent an AI is — and autonomy its ability to act independently — execution determines how effectively it can implement those decisions. Even a highly sophisticated and autonomous AI poses a limited risk if it can’t execute its plans and connect with the real world,” he noted.
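The team hasn’t published a scoring formula, but the logic in Wade’s quote, where weak execution caps overall risk regardless of sophistication or autonomy, suggests a multiplicative combination. Here is a minimal illustrative sketch, assuming each factor is rated on a 0-to-1 scale (the scale and formula are assumptions, not IMD’s methodology):

```python
# Illustrative sketch only: IMD has not published a scoring formula. This
# assumes each factor is rated on a 0-to-1 scale and combines them
# multiplicatively, so a near-zero execution score drives overall risk
# toward zero regardless of the other two factors.
def risk_score(sophistication: float, autonomy: float, execution: float) -> float:
    """Combine the three factors; any factor near zero keeps risk near zero."""
    for value in (sophistication, autonomy, execution):
        if not 0.0 <= value <= 1.0:
            raise ValueError("each factor must be in [0, 1]")
    return sophistication * autonomy * execution

# A highly sophisticated, highly autonomous AI with almost no ability to
# act on the real world still scores low overall risk.
print(risk_score(0.9, 0.9, 0.05))  # 0.0405
print(risk_score(0.9, 0.9, 0.9))   # 0.729
```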
A UAGI that could assume control of critical resources, utility infrastructure, supply chains and other necessities for human survival is frightening, but it’s just one of hundreds of possible risk scenarios we collectively face as various types of AI advance. Wade says the time to act is now.
“We strongly reiterate our opinion that AI development must be subject to robust regulation. There remains an opportunity to implement safeguards, but the window for action is rapidly closing. Ensuring that technological advancements align with societal safety and ethical values is imperative to mitigating these growing risks,” Wade concluded.