Nigeria’s proposed Artificial Intelligence (AI) Bill arrives at a pivotal moment in the country’s technological evolution. With global competition accelerating and domestic innovators pushing new boundaries, the legislation promises to create a regulatory foundation for the future of AI in Africa’s largest economy. But behind its ambitious language lies a contentious debate: Will this bill empower local builders, or create the very barriers that push them out of the ecosystem?
A close look at the bill, together with interviews, policy analysis, and comparisons with international frameworks, including Alliance4AI's, reveals a law caught between two competing visions. One vision seeks to protect the country from the risks of automated systems, algorithmic bias, and misuse of data. The other seeks to harness AI as a growth engine, enabling Nigerian engineers, startups, and researchers to build globally competitive systems. The tension between these visions is what will determine whether the bill becomes a catalyst, or a constraint, for Nigeria’s digital future.
“Nigeria risks making the same mistake we made with fintech: over-regulation, startup compliance burdens, and excessive government agencies,” said Anda Usman, Co-founder and CEO of Datum Africa, a centralised, searchable repository for African datasets. “When you regulate too early and overdo it, you become a consumer of other people’s technology rather than a producer.”
A bill designed for control, not creativity?
At first glance, the AI Bill looks like a step toward modern governance: a framework for managing high-risk AI systems, ensuring accountability, and preventing harmful use. It mirrors the language of the European Union’s AI Act and other global regulatory models. But deeper analysis shows that the structure of the bill tilts heavily toward regulatory control, with broad powers concentrated in government agencies that lack proven capacity, and licensing requirements that could raise the cost of compliance beyond what most local builders can afford.
“Our regulatory approach can be likened to the EU instead of the USA,” said Usman. “The EU regulates more than incentivising innovation, while the USA does the opposite. And we can all see where the AI powerhouses are.”
The EU is reportedly rethinking its tough approach and is expected to introduce a digital simplification package for the General Data Protection Regulation (GDPR) on Wednesday, November 19, 2025, in which most of its AI plans will be laid out. Some of the tougher provisions in the GDPR may be considered for revision.
Unlike the approaches used in globally competitive markets, such as the U.S., UK, Singapore, and even the African Union’s emerging AI framework, the Nigerian bill adopts a risk-based regulatory model without the accompanying support infrastructure. There are few mechanisms to help startups meet compliance standards, no incentives or capacity-building programs, and no clear protections ensuring that regulation does not become a barrier to innovation.
As Alex Tsado, founder of Alliance4AI, puts it, “Nigeria is about to regulate innovation before enabling it.” That has been the fear across the local developer community: that the bill creates obligations without offering pathways for compliance.
“The most important thing a government can do during the AI era is to cultivate local builders,” Tsado said. “Any regulation that impedes this goal, especially at the grassroots level, risks solidifying Nigeria’s role as a lifelong consumer of expensive, culturally irrelevant, and potentially harmful foreign technology.”
The burden of licensing and compliance
At the heart of the controversy is the bill’s licensing regime. To build or deploy certain categories of AI systems, developers must apply for licences—licences that may require payment, documentation, third-party audits, and ongoing monitoring. While large global companies can easily absorb these costs, Nigerian startups, university labs, and independent AI developers cannot.
This raises a fundamental question: Who is this law designed to serve?
The bill’s supporters argue that licensing protects Nigerians from exploitative or unsafe AI systems. John Wambugu, a creative and digital industries executive, is particularly impressed with the bill’s provision for an AI Council.
“A central body for oversight is essential for trust, coordination and standard-setting,” he said. “Requirements for transparency, audits and oversight are important for public trust and responsible use.”
But critics point to a different risk: by making compliance expensive and cumbersome, the bill could drive local developers out of the market, forcing Nigeria to rely on foreign firms whose systems operate with less oversight.
In effect, Nigeria could end up importing the very AI technologies it hopes to regulate, while local builders take their talents to jurisdictions with friendlier innovation environments.
No clear path for foundational AI models
One of the most glaring omissions in the bill is the absence of a category for foundational AI models—large-scale systems like GPT-4, Claude, or locally trained Nigerian LLMs. Globally, the debate around foundational models focuses on safety, transparency, compute resources, and downstream risk. Nigeria’s bill, however, treats all AI systems under the same broad categories, leaving regulators to decide classification on a case-by-case basis.
This creates regulatory uncertainty for the very builders Nigeria claims it wants to empower. Developers cannot plan infrastructure investments or model training pipelines without clarity. This uncertainty disproportionately hurts local developers, who already face significant barriers: limited access to GPUs, unstable power supply, and high cloud compute costs.
Foreign companies, meanwhile, can operate from abroad, treat Nigeria as an end-market, and bypass most domestic constraints.
A missing piece: Incentives for local innovation
A major gap in the bill is what it leaves out: meaningful support for local innovation. Unlike the European Union, which complements its AI Act with billions in research funding, or countries like the UAE and Singapore, which invest heavily in sandboxes and national compute facilities, Nigeria’s AI Bill offers no mechanisms to help local builders thrive. There are no grants or funding streams for AI research, no tax incentives for startup AI labs, and no national compute infrastructure or GPU credits to ease the high cost of training models. The bill does not provide open government datasets for model development, encourage university–industry partnerships, or invest in ethical AI education programs. It lacks national fellowships, scholarships, or cloud infrastructure subsidies that could strengthen the talent pipeline.
Instead, the legislation is heavily weighted toward compliance, enforcement, prohibitions, and licensing—an approach that risks burdening innovators without offering them the tools to succeed. As Tsado puts it, “Nigeria is regulating without enabling, controlling without empowering.” In a country where much of the AI ecosystem is driven by fragile startups and young developers working with limited resources, this absence of supportive measures represents a significant structural weakness.
Risks of centralising power in a single regulator
Another concern is the concentration of authority in a newly created agency with broad discretionary powers. The bill grants regulators the ability to determine which systems are “high-risk,” set licensing terms, approve providers, and impose sanctions. Without strong checks and clear mandates, these powers raise fears of regulatory overreach—or simply regulatory paralysis.
“When innovation requires permission, innovation becomes fragile,” said Tsado.
Startups that cannot wait months for approvals or survive regulatory delays will either scale down their ambitions or simply relocate to countries with clearer, faster frameworks.
The foreign advantage problem
One of the most concerning implications of the bill is that foreign companies will be able to comply with its requirements far more easily than local firms. The licensing and documentation demands—such as safety testing, risk assessments, compliance reports, and impact audits—are already standard practice for large international tech firms like Google, Meta, or OpenAI, which prepare similar filings for regulators in multiple countries. Nigerian startups, by contrast, lack the resources and institutional capacity to meet these obligations. They do not have full-time compliance teams, safety or alignment experts, EU-style documentation pipelines, in-house legal departments, or the funds needed for third-party audits and certifications. Many also lack the compute resources and data governance structures required to support the level of oversight the bill demands.
“Licensing and compliance structures should be lighter for startups so the ecosystem isn’t stifled,” said Wambugu.
This imbalance creates a structural advantage for foreign companies, many of which already dominate Nigeria’s digital ecosystem. Unless addressed, the bill could inadvertently transform Nigeria from a potential builder of AI technologies into a country that primarily imports them—undercutting the very innovation ecosystem it aims to regulate.
Tsado said a better approach would be to “shift the focus to ‘Control of Foreign Products & Enablement of Local Production’” and implement a tiered system: “zero regulation for academic/pre-commercial local projects; strict safety and tax regulation for large foreign models operating in Nigeria.”
A crossroads for Nigeria’s digital future
The question facing Nigeria is not whether AI should be regulated—almost every global power agrees that it should be. The question is how.
Does Nigeria want a regulatory model that protects citizens while giving local innovators room to grow? Or a model that creates barriers, advantages foreign companies, and pushes local builders out of the ecosystem?
The answer will define Nigeria’s role in the future of global AI development and adoption.
