Almost two years ago, more than 150 government officials, industry leaders, and academics met at Bletchley Park, the English estate where Allied codebreakers cracked the Nazis’ Enigma code in World War II. This meeting, the 2023 AI Safety Summit, concluded with a warning from the more than two dozen countries represented: artificial intelligence (AI) held the “potential for serious, even catastrophic, harm, either deliberate or unintentional.” The participants also agreed to meet again, and summits in Seoul and Paris followed.
In February 2026, the next such summit will take place in New Delhi, India. But while the inaugural gathering in the United Kingdom was billed as a summit on AI safety, India has opted for AI “impact.” As I noted in an analysis of this past February’s AI Action Summit in Paris, “the commitment, resources, and priorities of the host determine the summit’s successes and failures, as well as the level of buy-in from its guests.” So, as the contours of India’s goals for its AI Impact Summit come into focus, what should the participants and the wider world expect in New Delhi?
Why “impact”?
New Delhi’s challenge for the summit resembles the three-body problem—in this case, the three competing forces are political momentum, stakeholder consensus, and on-the-ground implementation. The task here is to keep all three in motion without losing coherence. Many initiatives have spun out of orbit at this stage, when lofty consensus gives way to the hard gravity of real commitments.
Superimposed upon this drive for “impact” are the specific challenges for countries that cannot afford to blitzkrieg their way into AI dominance. Their challenge is not a lack of ambition but the limits of scale, resources, and infrastructure, all while the global narrative around AI as a general-purpose technology grows louder.
India’s leadership team for the summit seems to feel a sense of urgency. In September, Shri S. Krishnan, secretary of the Indian Ministry of Electronics and Information Technology, said in a speech, “This particular wave of technology, driven by AI . . . is probably the last opportunity that countries of the Global South, including India, have to truly grow rich and prosperous before they grow old. This is a wave the Global South has to ride.”
Ahead of the summit, India released seven “chakras,” or “axes,” to be discussed at the gathering. While these chakras cast a wide net, most are largely global coordination problems: human capital, social empowerment, inclusive growth, innovation and research, and safe and trusted AI. India’s vision for impact is therefore twofold: to maintain momentum by driving action on the agenda items that require global coordination, and to highlight equitable access to AI infrastructure as essential if developing countries are to participate meaningfully.
India’s approach
The AI Impact Summit framework also carries the hallmarks of New Delhi’s techno-legal approach to digital technologies, where regulation is part of the design of technical systems rather than an extraneous compliance requirement. Rather than relying only on regulatory instruments that may stifle innovation, the focus is on empowering a wide range of nations and stakeholders with the technical capabilities needed to govern AI effectively. India has implemented techno-legal approaches in data governance, animated by its digital public infrastructure (“India Stack”) as well as the Data Empowerment and Protection Architecture, which proposed the concept of “consent manager” institutions that put individuals at the center of data access and control decision flows.
With its framing, New Delhi is positioning itself as an arbiter of a very specific model of AI-driven growth, where governments are co-creators and not just buyers and regulators of AI. This is distinct from, for example, the United States’ techno-nationalist approach, which is driven by a handful of massive AI companies. New Delhi’s hybrid system prioritizes narrow, tailored government interventions in sectors that have the deepest scope for impact and inclusion, such as healthcare, agriculture, and education. The marquee initiative of the summit is the Global Impact Challenge, which encourages AI applications for climate, financial inclusion, health, urban infrastructure, agritech, and more.
In this vein, expect the launch of India’s sovereign foundation models, reportedly trained entirely on homegrown datasets and hosted on Indian cloud infrastructure. One such model is being built by BharatGen, a Department of Science and Technology initiative, supported by strategic collaborations with Indian research institutions and partners such as IBM.
The safety imperative
While large amounts of capital and political will are focused on one kind of AI race—capabilities and infrastructure—there is another race that must receive the same attention.
The International AI Safety Report, led by Canadian computer scientist Yoshua Bengio, launched just before the Paris AI Action Summit. The first update to the report was published in October, and among its findings is this alarming tidbit: “Some research shows that AI systems may be able to detect when they are in an evaluation setting and alter their behavior accordingly.” In other words, AI systems may know when they are being evaluated and may produce outputs tailored to the evaluation. This is a function of core AI behaviors such as goal preservation (maintaining core objectives) and self-preservation (not wanting to be shut down or replaced). If current AI models can deceive human evaluators, the danger is that more sophisticated, potentially harmful models may be able to slip past national AI safety testing regimes.
“Safe and Trusted AI” is one of the summit’s chakras, but although the agenda treats it as a distinct theme, AI safety should not be thought of as optional. Rather, it is essential to achieving the other chakras.
Notably absent so far from the agenda is the IndiaAI Safety Institute (IAISI). Launched in March 2024, the IAISI follows a virtual hub-and-spoke model, with different IAISI cells carrying out specific mandates. That said, there are likely to be some demonstrations of the thirteen AI safety projects that IndiaAI supports under its Safe & Trusted AI pillar. Among these is a contribution from the Indian Institute of Technology Jodhpur to a subfield of AI safety called “machine unlearning,” which involves making a machine learning system forget specific incorrect, corrupted, or harmful training data without retraining the entire model from scratch.
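For readers unfamiliar with the technique, the sketch below illustrates one common approximate-unlearning approach, gradient ascent on a small “forget set” of examples, written in PyTorch. It is a minimal illustration rather than the IIT Jodhpur project’s actual method, and the model, data loader, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch of approximate machine unlearning via gradient ascent on a
# "forget set." Illustrative only: not the IIT Jodhpur method; `model` and
# `forget_loader` are hypothetical placeholders supplied by the caller.
import torch
import torch.nn.functional as F

def unlearn_by_gradient_ascent(model, forget_loader, lr=1e-4, max_steps=50):
    """Nudge the model away from examples it should forget, without
    retraining from scratch on the full original dataset."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    steps = 0
    for inputs, labels in forget_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), labels)
        # Negate the loss so gradient *descent* on (-loss) ascends the
        # original loss, degrading the model's fit to the forget set.
        (-loss).backward()
        optimizer.step()
        steps += 1
        if steps >= max_steps:
            break
    return model
```

In practice, published unlearning methods add safeguards, such as constraining updates so that performance on the retained data is preserved, but the core idea of surgically removing specific data’s influence is the same.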
Despite the headline attention on impact, safety needs to be fundamental to India’s vision of AI that engenders trust, inclusion, and empowerment. Take a hypothetical AI use case for agricultural advisory. The intended goal of such a system would be to empower smallholder farmers with predictive tools for crop management, such as pest control or crop choice based on expected weather patterns. But a system trained for average accuracy can fail precisely in outlier or extreme conditions, because its objective function (its goal) is to minimize error, not to minimize harm under uncertainty. In other words, in the world of the smallholder farmer, a confidently wrong forecast could cause more serious, even catastrophic, harm than a tentatively right one.
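To make the objective-function point concrete, here is a toy sketch, with entirely hypothetical numbers and penalty weights, contrasting a standard symmetric error metric with a harm-weighted one that treats a confident over-forecast (which might prompt a farmer into a risky planting decision) as costlier than a cautious under-forecast.

```python
# Toy illustration of "minimize error" versus "minimize harm." The rainfall
# scenario, values, and penalty weight are hypothetical.

def squared_error(predicted_rain_mm, actual_rain_mm):
    """Standard objective: every millimeter of error counts the same."""
    return (predicted_rain_mm - actual_rain_mm) ** 2

def harm_weighted_error(predicted_rain_mm, actual_rain_mm, over_penalty=5.0):
    """Harm-aware objective: over-forecasts, which could push a farmer into a
    risky planting decision, are penalized more than under-forecasts."""
    error = predicted_rain_mm - actual_rain_mm
    weight = over_penalty if error > 0 else 1.0
    return weight * error ** 2

# Two forecasts with identical "average accuracy" but very different harm:
print(squared_error(80, 20), harm_weighted_error(80, 20))  # 3600 vs. 18000.0
print(squared_error(20, 80), harm_weighted_error(20, 80))  # 3600 vs. 3600.0
```

A model optimized only for the first metric looks equally good in both cases; one optimized for the second does not.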
The messaging from India about the AI Impact Summit is compelling: AI must be safe, empowering, and trustworthy. New Delhi appears to be taking a people-centered approach, emphasizing use cases that have the greatest scope for positive impact for the widest swath of the population. This approach will resonate with established, emerging, and aspiring AI powers alike. However, without embedding AI safety as a design principle, New Delhi risks repeating a familiar pattern: developing technologies that orbit policy ambitions but never fully land in people’s lived experience.
Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.
Image: Minister of State for Commerce and Industry and Electronics and Information Technology Jitin Prasada addresses students at the Uttarakhand AI Impact Summit 2025, in Dehradun on October 17, 2025. (ANI Photo)
