Editor’s note: The Astana Times continues a section featuring articles from our readers. As a platform that values diverse perspectives and meaningful conversations, we believe this section provides a space for readers to share their thoughts and insights on topics that matter to them and the AT audience.
The question of whether artificial intelligence (AI) will become an existential threat to humanity remains unresolved. Despite the lack of a definitive answer, a growing number of experts and scientists are raising concerns about the potential risks associated with the development of intelligent machines.
Some believe that AI does not pose as much of an existential threat as it may seem at first glance. Large language models such as ChatGPT, for example, operate according to their training and explicit instructions. They cannot independently develop new skills, which makes them controlled, predictable and safe systems. This is the optimistic view.
However, the rapid advancement of AI systems has revived earlier concerns. If AI is not managed properly, it could evolve autonomously, surpass human intelligence and possibly become hostile to humans. At worst, AI could come to see humanity as a harmful species and try to eliminate us. This is the pessimistic view.
Technology leaders are also grappling with the question: is AI a real threat? Their answers vary. Some argue that framing AI as a threat to humanity slows its development and adoption, delaying solutions to important problems that demand rapid action. Yet as AI models grow more complex, they begin to tackle challenges whose outcomes are currently unpredictable.
There are concerns that large AI models could acquire new capabilities, such as reasoning and planning, and thereby pose a threat to humans. However, a close examination of existing AI models shows that they function only as useful assistants within specific domains. They cannot exceed their engineers’ instructions or learn new skills without external input. In other words, they are unable to independently discover and master knowledge beyond their narrow specialization.
On September 29, three major Western jurisdictions – leaders in the development of artificial intelligence technologies – signed an agreement to regulate AI systems. The signatory countries have committed to complying with its requirements. Companies also support the agreement, since differing national intellectual property laws pose obstacles to the technology’s development.
The convention, signed by the United States, the European Union and the United Kingdom, prioritizes human rights and democratic values in regulating AI systems in both the public and private sectors. The agreement, which was developed over two years by more than fifty countries, including Canada, Israel, Japan and Australia, sets out requirements for the liability of signatory countries for any harmful or discriminatory consequences arising from AI systems. It requires AI systems to respect equality and privacy rights.
This is the first legally binding agreement of its kind, bringing together several countries and demonstrating that the international community is preparing for the challenges posed by AI. The convention shows that the global community shares a common vision on the development of artificial intelligence technologies. Joint innovation requires respect for universal values and the promotion of human rights, democracy and the rule of law.
However, regulating AI is not always easy. The European Union’s AI regulations, which came into effect last month, have sparked significant controversy in the technology community. Meta, for example, has declined to bring its latest product, Llama, to the EU market.
Despite the challenges, AI’s potential for harm cannot be ignored. Its ability to generate fake news, automate cyberattacks and even disrupt the labor market poses a real threat if misused. Autonomous weapons and AI-driven surveillance systems raise major ethical concerns, while privacy violations become more likely as AI collects and analyzes large amounts of personal data. Left unchecked, AI could also make decisions that are incomprehensible to humans, raising questions about fairness and accountability.
These risks make it clear that strict regulation is essential – not only to protect against the machines themselves, but also to prevent misuse by the individuals who make and operate them. AI in itself is not the danger. The real threat lies in its potential to be used irresponsibly or maliciously by humans.
The ongoing debate about the risks of AI should not paralyze its development. Instead, it should push us toward smarter, more ethical innovation. Balancing caution and progress is critical to ensuring AI can reach its full potential without endangering humanity.
The author is Begim Kutym, a graduate student at the Nazarbayev University Graduate School of Public Policy.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the position of The Astana Times.