Technological development has reached warp speed – in a flash, stars have stretched into star lines and where we stand today is a far cry from where we were just a few days ago. It is becoming increasingly difficult to predict where we will be tomorrow.
One thing is clear: we are entering the artificial general intelligence (AGI) spectrum, and artificial superintelligence (ASI) now seems clearly within reach. However it is defined, AGI will not appear suddenly; it will evolve, and we are already seeing signs of its gradual unfolding.
The dawn of AGI
AGI has long been the ultimate goal: a technology capable of doing the mental work of humans and transforming the way we work, live and think. As we move into 2025, glimpses of AGI are already emerging and promise to grow stronger as the year progresses.
This is a shift so profound that some, like OpenAI's Sam Altman and Ilya Sutskever, OpenAI's former chief scientist who now runs his own startup focused on ASI, believe it will define the arc of human progress.
In September 2024, Altman published "The Intelligence Age," a manifesto arguing that AGI is not just a tool but a new phase in human history.
Since then, OpenAI has released increasingly powerful reasoning models: AI systems that not only answer questions from a knowledge base that includes much of the world’s written text, but can also devise and solve complex problems. The implications of this progress have not yet penetrated the public consciousness. But they are profound.
For example, OpenAI's o1 model scored 83% on a qualifying exam for the International Mathematical Olympiad (IMO), widely considered one of the most difficult math competitions in the world. Its problems demand creativity and deep reasoning rather than advanced mathematical tools such as calculus.
Subsequently, the o3 model achieved a breakthrough score of 87.5% on the ARC-AGI benchmark, which evaluates an AI's ability to solve entirely new problems without relying on pre-trained knowledge. ARC-AGI is considered one of the toughest AI benchmarks because it tests conceptual reasoning and adaptive intelligence, areas traditionally dominated by humans.
From limited intelligence to general capabilities
So far, AI systems have excelled as specialists – writing texts, diagnosing diseases, optimizing logistics – but only within narrowly defined boundaries. AGI promises something fundamentally different: the ability to adapt, reason, and solve problems across domains.
Large language models (LLMs) and multimodal models already demonstrate proto-AGI features such as generalization across tasks, multimodal reasoning, and adaptability. These capabilities are iteratively improved through better architectures, larger data sets, and more efficient training methods.
Meanwhile, OpenAI is redefining what AGI means. Its public definition remains "highly autonomous systems that outperform humans at most economically valuable work." But that endpoint has become so blurry that Microsoft and OpenAI have reportedly agreed to treat AGI as an AI system capable of generating $100 billion in profits.
AGI challenges our understanding of what it means to be human. Intelligence, long considered the defining characteristic of humanity, will no longer be ours alone. The way we integrate AGI into our lives – whether as a tool, partner or competitor – will shape our culture, values and identity in ways no one can yet understand.
Superintelligence
It also puts us on the path to ASI: the point at which self-learning AGI systems eventually surpass collective human intelligence.
Domain-specific AI systems today exhibit narrow superhuman intelligence in areas such as science, programming and medicine. For example, AlphaFold has revolutionized structural biology by predicting protein structures with unprecedented accuracy – a task beyond human capacity.
OpenAI's reasoning models include a recursive loop that refines their outputs during inference. While this refinement is temporary and does not change the model's underlying parameters, it points toward more dynamic and adaptive AI systems.
Researchers are diligently exploring techniques such as incremental learning and iterative approaches that would let AI systems retain existing knowledge while acquiring new skills, allowing a single system to learn continuously.
The goal is ambitious: to create machines that not only think, but also evolve. If these efforts succeed, the consequences will be staggering.
A new era of collaboration between humans and machines
“We are about to create instruments that are not just an extension of human capabilities, but entities with capabilities that, in some domains, will surpass ours,” Sutskever said last December. He envisions a world where AI can unlock scientific breakthroughs, cure diseases, and solve problems previously thought to be intractable. Such developments, he argued, could usher in a new era of human flourishing – a Renaissance powered not only by human ingenuity, but also by collaboration with machines.
AI agents, powered by reasoning models, could navigate complex environments, integrate disparate data streams, and solve problems that once seemed insurmountable.
In healthcare, this could mean AGI systems that not only flag potential diagnoses but design entire treatment plans tailored to an individual's genetic makeup. In education, virtual tutors could adapt to a student's needs in real time, teaching any subject, in any language, at any pace. This is not a distant dream; it is the kind of progress Altman says could arrive within "a few thousand days."
And if machines can one day learn continuously and adapt seamlessly to new challenges, their climb to superintelligence may not be far behind.
For the time being, one thing is certain: 2025 marks the beginning of a new era. The Age of Intelligence has arrived, and with it comes the possibility of a future as transformative – and as fraught – as humanity has ever experienced.
The rise of AGI will not be a sudden event. It will unfold gradually as AI systems move along a spectrum of general intelligence toward ASI. The real question is not when AGI will emerge, but whether we are willing to direct its development for the better.