On February 11, heads of state convened in Paris’s Grand Palais for the third AI Safety Summit, rebranded by France as the “AI Action Summit.” In contrast to the first summit at Bletchley Park in 2023 and the second in Seoul in 2024, the United States and the United Kingdom did not sign onto the communiqué this year. Instead, this week’s summit saw the agreement reached at Bletchley Park break down and give way to a “third way” approach emphasizing strategic independence. With the United States abandoning partnership in favor of leadership, the push for affirmative technological sovereignty has gained ground.
The result is the beginning of a drift from traditional power centers toward a multi-stakeholder, collaborative approach to artificial intelligence (AI). With India as co-chair, the summit demonstrated that nations from the Global South are not just participants but architects of the emerging AI order. Newly announced initiatives and commitments to open-source development reflect a growing consensus that AI’s future must be both innovative and rooted in shared prosperity.
Why France sees an ally in India
France’s invitation to India to co-chair the AI Action Summit reflected both the evolving scope of the Indo-French Strategic Partnership and increasing alignment at the European Union (EU) level on digital regulation.
The twenty-fifth anniversary of France and India’s bilateral strategic partnership saw a flurry of engagements between the two countries’ leaders: Indian Prime Minister Narendra Modi was the guest of honor at France’s Bastille Day celebrations in 2023, and French President Emmanuel Macron was in turn the chief guest at India’s Republic Day in January 2024. Furthermore, in July 2023 the Indian Ministry of Electronics and Information Technology (MeitY) and the French Ministry of Economy, Finance, and Industrial and Digital Sovereignty signed a Memorandum of Understanding on digital cooperation, spanning electronics manufacturing, high-performance computing, AI, and digital public infrastructure, among other areas.
At the EU level, the Digital Markets Act (DMA) and Digital Services Act (DSA) both took full effect in early 2024, and the EU AI Act began its gradual entry into force in August 2024. The DMA promotes fair competition in the marketplace of digital services, while the DSA is a consumer-centric regulation geared toward stemming illegal or harmful content online. In June 2024, the French Competition Authority (Autorité de la concurrence) issued its opinion on the generative AI sector, noting the high level of vertical integration among major generative AI players. The advantage for these companies, the opinion states, “is reinforced by their integration across the entire value chain and in related markets, which not only generates economies of scale and scope, but also guarantees access to a critical mass of users.” In other words, generative AI is effectively dominated by a small handful of companies, which exert significant control at every level of the value chain, from data and chips to cloud services, developer hubs, and applications.
The Competition Commission of India is weighing its own version of the DMA, while MeitY is considering a Digital India Act that would set out rules for the ethical development of AI, building on the “AI for All” framework laid out in India’s 2018 National AI Strategy.
India, which is no stranger to jugaad, or frugal innovation, is also seeking its own DeepSeek moment, especially as it finds itself outside the top tier of the United States’ Regulatory Framework for the Responsible Diffusion of Advanced Artificial Intelligence Technology. Most recently, the IndiaAI Mission has put out a call for proposals to build indigenous foundational AI models, and India’s Union Budget allocated record amounts to AI initiatives, including an additional 200 crore rupees (approximately $23 million) for AI Centers of Excellence and 2,000 crore rupees (approximately $230 million) for the IndiaAI Mission. Not surprisingly, then, Modi declared in his opening speech in Paris that “governance is not just about managing risks and rivalries, it is also about promoting innovation and deploying it for the global good.”
Finally, India is a major player in global AI policy debates through the myriad partnerships it has fostered over the past decade and more, including the Group of Twenty (G20), the BRICS grouping, the Global Partnership on AI, and I2U2 (India, Israel, the United Arab Emirates, and the United States). Given its influence, India is also a frequent fixture at Group of Seven (G7) summits, the forum that launched the Hiroshima AI Process.
What the summit achieved
Ahead of this week’s Paris Summit, the first International AI Safety Report, a key deliverable of the Bletchley Park process led by AI pioneer Yoshua Bengio, offered a sweeping assessment of general-purpose AI and its risks. The report carries several implications for international partnerships on AI, a few of which are highlighted below:
- There is a research and development (R&D) divide: The report identifies a “global R&D divide,” noting that there is insufficient evidence that infrastructure investment and AI training programs in low- and middle-income countries are effective. This suggests that factors beyond the availability of infrastructure and skilled workforces are driving the current concentration of R&D in a handful of countries.
- Technical risk management approaches must be standardized: The report notes the limitations of existing technical methods of risk identification, mitigation, and monitoring. While the network of AI Safety Institutes is a first step toward standardization of AI risk management, the future of the network is uncertain as the United States’ policy priorities are shifting toward unfettered innovation.
- There are trade-offs between competition and AI risks: In the interest of “staying ahead,” governments and companies may deprioritize safety in favor of rapid AI development. International partnerships should mitigate this through cooperative agreements that balance innovation with safety.
- Early warning systems are essential in an unpredictable technological landscape: The report highlights the “evidence dilemma” faced by policymakers, who must weigh acting preemptively on limited evidence against waiting for a critical mass of documented AI harms before regulating. Given the widespread and rapid deployment of AI, including in determining access to critical services, the report stresses the need for early warning systems and frameworks, since waiting for stronger evidence weakens governments’ abilities to protect their societies.
The report also notes the rapid advancement of general-purpose AI models, although DeepSeek has since challenged some of the underlying assumptions about the resource intensity of building these models. Nonetheless, the point on progress stands: large language models (LLMs) have gone from generating gibberish that barely approximated human speech to “PhD-level” performance that outpaces most LLM benchmarking tools.
The Paris Summit did attempt to address one key criticism of the 2023 Bletchley Park Summit: that Global South representation was symbolic at best. Ninety countries were invited to Paris, with nearly one thousand participants from all sectors. The Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet has sixty signatories, including the African Union. Macron also announced the launch of Current AI, a $400 million “public interest AI platform and incubator” backed by public and private entities in France, India, Germany, Chile, Kenya, Morocco, Nigeria, Finland, Slovenia, and Switzerland. This initiative complemented the summit’s emphasis on open-source, “democratized” AI, a mode of development on which Europe, India, and other players in the Global South are pinning their hopes for an AI boom.
The challenge now is to translate these dialogues, from Bletchley Park to Seoul and now Paris, into concrete, lasting frameworks that ensure AI serves as a force for global good. The next host in this summit series has yet to be announced. But as Paris has shown, the host’s commitment, resources, and priorities determine a summit’s successes and failures, as well as the level of buy-in from its guests. Countries that choose not to engage risk isolating themselves as a global consensus forms on the future of AI.
Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.
Image: French President Emmanuel Macron shakes hands with Indian Prime Minister Narendra Modi after their speeches to close the plenary session of the Artificial Intelligence (AI) Action Summit at the Grand Palais in Paris, France, February 11, 2025. REUTERS/Benoit Tessier