Technology & Innovation
February 4, 2025 • 9:18 am ET
Five AI management strategies—and how they could shape the future
It is a truth universally acknowledged, if not often interrogated, that artificial intelligence (AI) is in need of governance. This stems from its perceived risks: that it might become superintelligent and take over, threaten employment, exacerbate biases, entrench tech monopolies, spread disinformation, violate intellectual property rights, and aid the schemes of bad actors. Even more risks, both real and imagined, may emerge as AI continues to improve. The latest paradigms of large language models (LLMs) and generative AI, for example, are trumpeted as likely game changers in science, government, crime, entertainment, warfare, industry, and management.
But it’s important for policymakers working on AI governance to keep their eyes on the prize. Even factoring in the hype cycle, the potential for AI to improve the lives of individuals, communities, and societies across the globe cannot be ignored and shouldn’t be traded off against tail risks.
AI governance needs to ensure that such goods are realized while minimizing the anticipated evils. The measures that can achieve this include statutory regulation, institution building, technical protocols and standards, economic incentives, and codes of practice. Combining these effectively is a complicated business.
In 2021, we published our book Four Internets, arguing that the internet’s governance was conceived and driven by a range of moral and political considerations. In particular, four ideal types of governance, reflecting geopolitical and ideological considerations, could be detected, each creating an internet of its own. In aggregate, these four internets comprise the global network used throughout the world. Traffic between these different internets is neither seamless nor impossible, but they are increasingly run on divergent lines.
The four internets are:
- The Open Internet, the original conception of a collaborative, permissionless, transparent, and flexible space with anonymity, interoperability, and free flows of information
- The Bourgeois Internet, where civility and rights are preserved by regulation
- The Paternal Internet, where certain outcomes of internet use (for instance, political speech or pornography) are prohibited
- The Commercial Internet, regulated as property to produce market solutions for collective action problems
A fifth ideal type, a spoiler model based on the hacker ethic, valorizes the power of coders to challenge authority and undermine security. The spoiler doesn’t create an internet of its own, but is parasitic on the others, undermining their safeguards and subverting their ideals.
Governments and organizations may emphasize one or another of these ideals, but they are not mutually exclusive. They are deployed alongside each other, each privileging competing considerations which are negotiated in political processes.
The world has changed since we published our book in 2021, yet the framework remains relevant—and not only to the internet. AI is dependent on the internet for data to train LLMs, cloud computing power, and user access. It is no coincidence that internet companies are driving the generative AI revolution.
A taxonomy of AI governance
The AI governance regime is evolving, and fortunately it is focused on predictable or evident risks, not speculative existential threats. Governments can legislate, and some have: China has legislated strongly, the European Union (EU) has landed somewhere in the middle, the United Kingdom and the United States have legislated minimally, while the United Arab Emirates and Japan are wary of hampering development. New institutions, such as the EU’s AI Office and Britain’s AI Safety Institute, have emerged. Supranational groupings, such as the United Nations AI Advisory Body and the Group of Seven’s Hiroshima Process, foster cooperation and standards, and alongside these has come a tsunami of summitry and experience sharing. The combination of government regulation, global policy frameworks, research and testing infrastructure, and best practices will gradually coalesce into a recognizable AI governance regime with established norms and shared principles.
Amid this shuffle, we see the ideal types of governance from the Four Internets framework being repurposed as governance strategies in the AI context, which we term Artificial Intelligence Management Strategies, or AIMS. The five AIMS are:
- Open AIMS: collaborative and shared innovation for the public good
- Bourgeois AIMS: achieving the potential of AI only when rights and civility are secured
- Paternal AIMS: setting limits to the outcomes of AI applications
- Commercial AIMS: letting markets and investors predict where future profits will emerge
- Hacking AIMS: unleashing the potential of the software to challenge authority
What can the five AIMS framework tell us about AI governance? There are three sets of questions for which identifying the AI management strategy at work is pertinent.
First, AI may not be uniquely responsible for certain risks and harms. For example, it may turbocharge the creation and dissemination of fake news, but disinformation existed before AI and will continue to be created without it. In that case, rules of the road focused on AI, rather than on the problem at hand, are unlikely to make the problem go away.
Second, what kind of AI causes the problem? AI is evolving, neither fixed nor mature. Focusing too strongly on generative AI as it exists now is likely to miss the moving target.
Third, what exactly is to be regulated? The AI ecosystem has several components, including applications, the models they use, the technology used to develop those models, the infrastructure that implements the technology, and training data. Which of these would it be appropriate to regulate for the problem at hand, and to what extent can such regulations be effectively enforced?
In all these cases, the essentials of AI governance should be properly framed. Paternal AIMS are concerned with specific outcomes of AI’s use, Bourgeois AIMS with the development process; Open AIMS look to produce social good, Commercial AIMS to create profit, and Hacking AIMS to exercise power against authority.
The goals of governance are not always precisely specified, and strategies may simply be performative or reactive to perceived risk. But generative AI is immature; its potential is clear, but it has yet to deliver. At worst, poorly targeted regulation or badly crafted principles may hinder the development of beneficial and powerful AI-informed methods for addressing genuine problems. Might privacy concerns prevent the use of personal data for social good? Might apprehension about high-risk applications check progress in the medical field? Might worries about hard-to-explain black boxes curb the use of AI in administration?
And ultimately, might excessive regulation in risk-sensitive jurisdictions suppress innovation, to the detriment of technological advancement, or raise barriers to entry so that the technology cannot be distributed equitably beyond the wealthiest parts of the world?
Setting out our AIMS clearly is an essential first step in avoiding these pitfalls.
Kieron O’Hara is an emeritus fellow, University of Southampton. His latest book, Blockchain Democracy: Ideology and the Crisis of Social Trust, will be published in June. He can be reached at kmoh@soton.ac.uk.
Wendy Hall is Regius Professor of Computer Science, University of Southampton. She was a member of the UN High-Level Advisory Body on AI and is a nonresident senior fellow at the Atlantic Council’s GeoTech Center.
This article is part of the GeoTech Center’s AI Connect II project.