While you’re using AI tools like ChatGPT and Google’s Gemini to write emails, generate images, and automate tasks, the race to achieve artificial general intelligence (AGI) has already intensified. Big Tech CEOs like Meta’s Mark Zuckerberg and Tesla’s Elon Musk are aiming at something far more ambitious: artificial superintelligence (ASI).
AI is moving fast, and terms like AI, AGI, and ASI sound similar but represent very different levels of capability. The AI you use today is “narrow”: it is trained for specific tasks, like text or video generation, and requires human intervention. AGI would give machines human-level cognitive abilities: the ability to think, learn, make decisions, and solve problems without task-specific training.
ASI goes further, aiming to surpass humans at practically everything. Experts say it would make its own decisions and improve itself without human input. AI is already replacing thousands of jobs, and experts say AGI and ASI could pose an even bigger threat. So, what exactly is ASI? And should we be wary of it?
What exactly is artificial superintelligence?
Superintelligence is a hypothetical AI system that surpasses human intelligence across every domain. From writing code and generating videos to performing surgery and driving cars, it could do all of these at once, something current AI systems cannot. The AI tools we use today hallucinate and require training on massive datasets to perform tasks in a specific way; ASI would solve complex problems with stronger reasoning and a genuine grasp of context. Philosopher Nick Bostrom, who popularized the term, defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
Current AI systems need humans to improve them: engineers refine their code and feed the models vast amounts of data to improve their predictions and responses. ASI, on the other hand, could improve itself. It could hypothetically rewrite its own algorithms, design new capabilities, and control systems without human instruction.
It’s unclear whether superintelligence is even possible. However, if machines were to become smarter than humans and improve themselves, some warn that AI could slip beyond human control if developers aren’t careful. Some experts predict superintelligence could arrive within a decade, and the race to develop it has already intensified, with investors pouring billions of dollars into companies looking to build it. OpenAI co-founder Ilya Sutskever left the company in 2024 and founded a startup, Safe Superintelligence Inc., focused on building ASI safely; it has already raised billions of dollars in investment without launching a single product.
Interestingly, in 2023, Sutskever joined OpenAI CEO Sam Altman and President Greg Brockman in calling for regulation of superintelligent AI, warning it could pose an “existential risk” and adding that it’s “conceivable” AI will exceed expert skill level in most domains within 10 years.
Should we be scared of ASI?
While many believe superintelligence would help humans achieve more and solve complex problems, potentially becoming “the last invention humanity will ever invent,” the risks look bigger than the advantages. Generative AI has already started replacing humans in various job roles, which could have huge economic implications. Some believe entire professions could vanish because of ASI, potentially leaving billions unemployed.
Job loss is one issue, but there are “existential risks” as well. If machines start thinking for themselves and take control of systems, their goals could become misaligned with humanity’s, posing threats ranging from “national security risk to potential human extinction.” In October 2025, several prominent figures signed a public statement demanding a pause on the development of AI superintelligence until there is a consensus that “it will be done safely.”
Signatories include Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, AI pioneers Geoffrey Hinton and Yoshua Bengio, and Anthropic CEO Dario Amodei, among others. That the very people behind the companies racing to develop ASI are advocating a cautious approach shows just how serious the risks of superintelligence are for humanity.
