July 21, 2025 • 11:59 am ET
Navigating the new reality of international AI policy
Since the start of 2025, the strategic direction of artificial intelligence (AI) policy has dramatically shifted to focus on individual nation-states’ ability to win “the global AI race” by prioritizing national technological leadership and innovation. So, what does the future hold for international AI policy? Is there appetite for meaningful work to address AI risks through testing and evaluation? Or will there be a further devolution into national adoption and investment priorities that leave little room for global collaboration?
As India prepares to host the AI Impact Summit in New Delhi next February, there is an opportunity for national governments to advance discussions on trust and evaluations, even amid the tension in the policy conversation between ensuring AI safety and advancing AI adoption. At next year’s summit, national governments should come together to encourage a collaborative, globally coordinated approach to AI governance that minimizes risks while maximizing widespread adoption.
Paris: The AI adoption revolution begins
Initial momentum for policies focused on AI safety and the existential risks that AI could pose to humanity began at the first AI Safety Summit, hosted by the United Kingdom at Bletchley Park in 2023. The discussion advanced further at subsequent international summits in Seoul, South Korea, and San Francisco, California, in 2024. Yet by the time France held the first AI Action Summit in Paris in February of this year, shortly after US President Donald Trump was sworn in for his second term and with Prime Minister Keir Starmer at the helm of a new Labour government in the United Kingdom, these discussions on AI risks and safety appeared to be losing momentum.
At the AI Action Summit in Paris, French President Emmanuel Macron declared that now is “a time for innovation and acceleration” in AI, while US Vice President JD Vance said that “the AI future is not going to be won by hand-wringing about safety.” As the summit concluded, the United States and the United Kingdom opted not to join other countries in signing the Statement on Inclusive and Sustainable AI for People and Planet. Days later, the United Kingdom renamed its AI Safety Institute to the AI Security Institute, reflecting its shift toward focusing on the national security-related risks stemming from the most advanced AI models as opposed to addressing broader concerns around existential risks to society that AI systems might pose. This approach has also been adopted by the United States, which rebranded the US AI Safety Institute to the Center for AI Standards and Innovation in June.
The Paris AI Action Summit was an early indicator of what the first six months of 2025 would further reveal: a shift away from focusing on the potential existential risks and societal harms posed by AI. Instead, more countries have doubled down on investments in AI research and development and in secure AI data centers, sharpened their focus on extended training for large language models (LLMs), issued national AI adoption mandates, and proposed slowing or preventing additional regulation that might inhibit AI adoption.
AI investment and adoption mandates
The United States has taken several steps in this new direction. In its first days in office in January, the Trump administration repealed several Biden-era executive actions on AI, including the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” In February, the administration issued a request for information to develop a new “AI Action Plan,” pursuant to an executive order signed in January called “Removing Barriers to American Leadership in Artificial Intelligence.” That executive order calls for reducing administrative and regulatory burdens on AI development and adoption, as well as further aligning US AI strategy with national security interests and economic competitiveness goals. Taken together, these actions emphasize an approach that treats deregulation as essential to US global leadership in AI.
Simultaneously, a policy debate has emerged in the United States over whether the federal government should preempt state-level AI legislation that would impose regulations on the industry. Industry has been concerned that the numerous and varying approaches to AI legislation being developed at the state level could create a patchwork of regulations, making the compliance environment complicated and overwhelming.
But even as the debate over federal preemption continues, state-level proposals on AI risk management and governance have stalled. Virginia’s proposed High-Risk Artificial Intelligence Developer and Deployer Act was vetoed by Governor Glenn Youngkin in March. Meanwhile, the Texas Responsible Artificial Intelligence Governance Act was significantly slimmed down before it was signed into law last month, with all references to high-risk AI systems and corresponding prohibitions removed.
Across the pond, the European Union (EU) continues to explore ways to simplify the implementation of its cross-cutting EU AI Act as part of its AI Continent Action Plan. Industry has expressed growing concern about its ability to meet enforcement deadlines without additional guidance and clarification from the EU AI Office, with forty-four European CEOs calling for a delay in the act’s implementation. This focus on adoption over safety concerns was also reflected at the Group of Seven (G7) Leaders’ Summit held in Canada last month, where G7 leaders issued a statement on AI for Prosperity that highlighted the ways in which AI can drive economic growth and benefit people, in addition to laying out a roadmap for AI adoption.
Adapting to this shift in global AI policy
Given this marked shift in the tone of global AI policy discussions, some might wonder whether there are still opportunities to advance conversations on AI trust and safety. Yet businesses crave certainty, and trust remains paramount to creating an ecosystem that supports adoption. Moreover, the AI landscape continues to evolve, requiring continued discussions on “what good looks like” when it comes to AI models used in a variety of enterprise applications and scenarios. Emerging technologies such as agentic AI (AI systems designed to act autonomously, making decisions and taking actions to achieve specific goals with minimal human intervention), along with evolving enterprise deployment challenges, make it clear that 2025 does not represent the dusk of international AI policy aimed at evaluating and mitigating risks, but a potential dawn.
The upcoming AI Impact Summit in New Delhi presents an opportunity to continue conversations about how creating a robust AI testing and evaluation ecosystem can drive innovation and foster trust, furthering AI security and adoption. There are four key areas that national governments should individually prioritize in their efforts to advance AI adoption while also collaborating on a global level.
1. Assess and address regulatory gaps arising from new evolutions in AI technology. Agentic AI is the next evolution of AI technology. Like earlier iterations of the technology, it can offer significant benefits, but because it can execute tasks autonomously, it can also amplify existing risks or introduce new ones. Governments should assess existing regulatory frameworks to ensure they account for any new risks related to agentic AI. Where gaps are identified, governments should consider creating flexible frameworks that can adapt to future evolutions of AI technology.
2. Advance industry-led discussions around open-source and open-weight models, including specific considerations for national security concerns. Transparency and access vary widely across open-source and open-weight models, and researchers and businesses should understand the extent to which models and data sets remain open. Stakeholders, including national governments, need to understand not only what constitutes an open-source or open-weight model, but also which elements of those models need to be shared downstream. Enterprises and industry players also need clarity about where these considerations create fault lines, particularly when choosing partners and third-party vendors in cases where open-source or open-weight models could affect national security. Such discussions will allow enterprises to determine which models and markets offer safe and secure foundations for experimentation and what transparency measures can reasonably be expected.
3. Foster trust by encouraging the development and adoption of AI testing, benchmarks, and evaluations. Governments should encourage the adoption of globally recognized, consensus-based AI testing, benchmarks, and evaluations. Frontier model developers need to be able to understand, analyze, and iterate on their LLMs with the help of detailed performance and safety evaluations. Governments should support the development of robust testing and evaluation frameworks to ensure that such frameworks are fit for purpose, address the lack of consistency and reliability in how evaluation results are reported, and improve the availability of high-quality, trustworthy evaluation datasets. These frameworks should also help developers understand and iterate on evaluation results to improve models without overfitting, that is, without producing models that match the training data so closely that they fail to make correct predictions on new data (see the illustrative sketch after this list).
4. Drive public-private collaboration across borders to promote AI adoption and strengthen risk management. The technological conversation is not bound by national borders. It is therefore important that both public-sector and private-sector stakeholders recognize and harness the interdependence of the AI value chain while engaging in conversations about AI governance and transparency. It is also vital that policymakers and the different actors in the AI value chain have a clear understanding of their roles and responsibilities. Enterprises and national governments should continue to use international fora such as the Organisation for Economic Co-operation and Development, the Global Partnership on Artificial Intelligence, the International Network of AI Safety Institutes, and the United Nations to facilitate public-private collaboration across borders. This will help ensure that different approaches are interoperable and that countries and organizations are best leveraging their own strengths.
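To make the overfitting and reporting points in the third recommendation concrete, the sketch below is a minimal, hypothetical illustration in Python, not any real evaluation framework; the toy model, the two small datasets, and the overfitting threshold are all assumptions made for the example. It compares a model’s accuracy on data it was tuned against with its accuracy on held-out data and reports both in a consistent, machine-readable format.

```python
# Illustrative sketch only: a toy evaluation harness showing two ideas from the
# discussion above -- consistent reporting of results and checking for
# overfitting by comparing development-set and held-out scores.
# All names (toy_model, evaluate, the datasets, the 0.2 threshold) are
# hypothetical examples, not any particular framework's API.
import json

def toy_model(prompt: str) -> str:
    """Stand-in for a real model: answers a few memorized prompts, guesses otherwise."""
    memorized = {"2+2": "4", "capital of France": "Paris"}
    return memorized.get(prompt, "unknown")

def evaluate(model, dataset):
    """Return accuracy of `model` on a list of (prompt, expected_answer) pairs."""
    correct = sum(1 for prompt, expected in dataset if model(prompt) == expected)
    return correct / len(dataset)

# The development set resembles data the model was tuned on; the held-out set does not.
dev_set = [("2+2", "4"), ("capital of France", "Paris")]
held_out_set = [("3+5", "8"), ("capital of Japan", "Tokyo")]

dev_score = evaluate(toy_model, dev_set)
held_out_score = evaluate(toy_model, held_out_set)

# Consistent, machine-readable reporting makes results comparable across evaluators.
report = {
    "model": "toy_model",
    "dev_accuracy": dev_score,
    "held_out_accuracy": held_out_score,
    # A large gap between the two scores is one signal of possible overfitting.
    "possible_overfitting": (dev_score - held_out_score) > 0.2,
}
print(json.dumps(report, indent=2))
```

In practice, a gap between the two scores is only one of several signals of overfitting, but reporting both numbers in a standard format is what allows developers, evaluators, and governments to compare results reliably.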
***
By over-indexing on adoption alone, the world risks losing the gains already made by researchers, policymakers, and enterprises that have been working to address AI risks over the past several years. The answers required to address the challenges and risks associated with AI are intertwined with the ability to capitalize on the opportunities AI presents, and they can help ensure the accountability and security of these technologies for years to come. If AI adoption is the objective, then AI testing, evaluations, and governance are the methods. A collaborative effort to advance AI policy that reflects this fact should be every nation’s priority.
Evi Fuelle is a nonresident senior fellow at the Atlantic Council’s GeoTech Center.
Courtney Lang is a nonresident senior fellow at the Atlantic Council’s GeoTech Center.
The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.
Image: Alexander Kagan via Unsplash