April 3, 2025 • 1:11 pm ET
Sovereign remedies: Between AI autonomy and control
Introduction
Sovereign AI has gained a foothold in several capitals around the world. As Michael Kratsios, the Trump administration’s acting director of science and technology policy, stated in 2024, “Each country wants to have some sort of control over our [sic] own destiny on AI.” Analysts have mapped the modes and methods used to achieve sovereign AI, and its interplay with antecedents like data sovereignty. However, there remains a critical gap: analysis of the stated goals of these initiatives and of the core pillars of sovereign AI, distinct from related concepts.
The goals outlined by governments are varied and wide-reaching: some center on preserving values or culture; others focus on the privacy and protection of citizens’ data; some initiatives center on economic growth and others on national security; and finally, there is a set of concerns around the current global governance vacuum, where, in the absence of global frameworks, AI companies must be held accountable through physical presence.
However, each of these stated goals requires differing levels of indigenized capability and control and will have varied consequences as a result. This paper will:
- Outline the various stated goals of sovereign AI, suggesting illustrative categories.
- Hypothesize the reasons for the emergence of sovereign AI as a concept, with an analysis of industry buy-in for this concept.
- Propose a streamlined definition of sovereign AI and suggest policy implications.
Defining sovereign AI
Sovereignty is defined as supreme authority within one’s territory, a cornerstone of the Westphalian state system. Most components of this definition are, however, malleable. What constitutes one’s territory, for instance, need not be rooted in a fixed point in time. The digitization—and by extension, the datafication—of social and political life has disrupted traditional notions of state sovereignty, which have long been tied to physical borders. What constitutes supreme authority within a given territory is similarly varied. There are nonabsolute forms of authority, where sovereignty does not equate to authority over all matters within a territory. Examples include regional institutions like the European Union or specialized subnational systems like those once exercised in Pakistan’s Federally Administered Tribal Areas (FATA) or India’s Jammu and Kashmir, a union territory.
Roland Paris noted in 2020 the reemergence of older monarchic interpretations of sovereignty, which he identifies with Putin’s Russia and Xi’s China, among others:
Non-Westphalian understandings of sovereignty have also experienced a resurgence in recent years. Some portray sovereignty as the power of leaders to act outside the constraints of formal rules in both domestic and international politics, or extralegal sovereignty. Others characterize sovereign power as the quasi-mystical connection between a people and their leader, or organic sovereignty.
In the context of information and communication technologies (ICTs), sovereignty has similarly found new forms. This includes data sovereignty, which asserts a country’s legal jurisdiction over all data generated within its boundaries; and digital sovereignty, referring to the assertion of state control over information flows, whereby the state both defines and guarantees rights and duties in the digital realm. Some data sovereignty laws, such as the EU General Data Protection Regulation and India’s Digital Personal Data Protection Act, have extraterritorial application if data processing relates to a subject/principal within their jurisdiction.
Sovereignty as a norm is therefore continually challenged, reshaped, and reinterpreted, contrary to beliefs about a post-Westphalian consensus. In the context of the recent artificial intelligence boom, sovereignty has taken on new modes and methods.
Sovereign AI has been defined variously as “a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks”; “countries harnessing and processing their own data to build artificial intelligence domestically, rather than relying on external entities to do so”; and as a concept “asserting that the development, deployment, and regulation of AI technologies should . . . align with national laws and priorities.” The most all-encompassing of these is the definition from the United Nations Internet Governance Forum (IGF) Data and Artificial Intelligence Governance Coalition: “The capacity of a given country to understand, muster and develop AI systems, while retaining control, agency and, ultimately, self-determination over such systems.”
The EU AI Act (2024) and the African Union’s Continental AI Strategy (2024) both touch on aspects of AI sovereignty. The 2023 IGF (in Kyoto) saw the launch of the official outcome document of the inaugural UN IGF Data and AI Governance Coalition, centered on sovereign AI. The term shot into mainstream parlance after Nvidia CEO Jensen Huang declared that every country needs sovereign AI at the World Governments Summit in Dubai in February 2024.
It is well worth noting the context for Huang’s statement, which came at the tail end of an Asia tour where he visited Japan, Singapore, Malaysia, Vietnam, China, and Taiwan. This tour culminated in the announcement of several collaborations in support of national large language models (LLMs), national supercomputers, and future telecommunications.
Nvidia reflects a broader trend in an industry which has supposedly embraced a rhetoric of digital sovereignty, in part attributable to regulatory pressures such as the EU General Data Protection Regulation, and now the EU AI Act. A speech at a European think tank summit in June 2020 by Microsoft President Brad Smith highlights this trend:
When I look at digital sovereignty initiatives, I see them addressing three goals. One is protection of personal privacy, a second is the preservation of national security, and a third is local economic opportunity. As a global technology player, it’s important for us to advance all three.
Another example of major AI players embracing sovereign AI includes G42, an Emirati AI company, which boasts partnerships with Microsoft, OpenAI, Nvidia, Oracle, IBM, and Mistral, among others. A G42-Politico report identifies an overlap between data sovereignty and sovereign AI, asserting there is an ideal level of data sovereignty, balanced against global coordinated approaches, which can help realize the economic and security benefits of localization.
Current understandings of sovereign AI both extend the core components of data and digital sovereignty to AI and add a value alignment component. In addition to the loose interpretation of territoriality, and the supreme authority of national law over cyberspace, statements about sovereign AI encapsulate cultural preservation and (subjective) ethics. Dr. Leslie Teo, senior director of AI products at AI Singapore, said in the context of the launch of SEA-LION, an LLM for Southeast Asian languages, “[Western] LLMs have a very particular West Coast American bias—they are very woke.” The African Union’s Continental AI Strategy similarly notes that “external influence from AI technologies developed outside Africa may undermine national sovereignty, Pan-Africanism values and civil liberties.”
However, sovereign AI must not be conflated with individual rights. While some aspects of sovereign AI, including value alignment and legality, may overlap with autonomy and self-determination, there is no simple cause-effect relationship. An actionable and useful definition of sovereign AI must therefore avoid category errors and capture key distinctions from its antecedent terms.
The core components of sovereign AI, consistent with the definition of sovereignty outlined above, are:
- Legality: The design, development, and deployment of AI should adhere to any applicable laws and regulations.
- Economic competitiveness: The development and deployment of AI should create value for the host economy. Some sovereign AI initiatives further require the creation or bolstering of a national AI industrial ecosystem.
- National security: AI applications pertaining to critical infrastructure, military, and other functions critical for national security require additional safeguards against disruption.
- Value alignment: Due to anticipated wide and deep applications of AI, models should be aligned with national or regional political and constitutional values.
Sovereign AI is therefore a model of AI development and deployment where inputs adhere to a state or political union’s laws and institutional frameworks, and outputs are contextually relevant, secure, and create value for the economy.
Note that this definition is not exclusionary: Countries can turn to external partners to support their sovereign AI efforts if these partnerships adhere to the four core components mentioned above. This definition also recognizes the contemporary evolution of territoriality, such as the fact that digital sovereignty regulations have extraterritorial application, with “territory” being expanded to include the digital footprint of the populace. Finally, given that sovereignty is an organizing principle for states, not individuals or communities, it concretizes the abstract notion of value alignment by framing it as a constitutional and political concept.
Mapping sovereign AI initiatives
Below is an illustrative list of sovereign AI initiatives.
Conclusion
Sovereign AI as a phenomenon is going to gain momentum, as national governments find “wholesale” AI offerings unsuited to their needs. AI, especially general-purpose AI, requires sizable investments or innovative new methods of data collection, compute (mainly GPUs), related energy infrastructure, and workflow management.
An optimal blend of localization of AI inputs and regulation of outputs could help each country realize its outlined goals for sovereign AI. In other words, the four components of sovereign AI outlined in this paper—legality, economic competitiveness, national security, and value alignment—will necessarily involve different strategies, with governments weighing each one differently. US AI sovereignty strongly centers on maintaining the country’s leadership as a key driver of American prosperity, a prioritization that has not changed with the change in administration in 2025. Value alignment also holds varied meanings, with some, like the African Union strategy, grounding values in an anti-neocolonial framing, while others, like Taiwan’s, place an emphasis on democratic values in opposition to mainland China.
Finally, several factors will influence the possibilities of sovereign AI, including infrastructure constraints, such as energy production capacity and the availability of water, and trust, both in governments as legitimate arbiters of people’s interests and in industry’s commitment to social good. Nevertheless, for now, the operative word in the future of AI appears to be sovereign.
About the Author
Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.