Artificial intelligence (AI) is redefining daily life, and Nigeria’s legal system is struggling to keep up. With the rise of AI-generated content like ChatGPT essays, deepfake videos, and algorithmic trading, a critical question has emerged: how should AI-generated evidence be treated in a Nigerian court of law?
While Nigerian courts are yet to tackle their first AI-generated evidence case, South Africa faced an AI-related legal misstep in January 2025. A South African lawyer, Ms. S. Pillay, came under scrutiny after allegedly submitting court documents that included fictitious case law generated by an AI tool. The incident sparked a broader debate about the responsibilities of lawyers using AI and highlighted the need for stricter checks to prevent such errors.
Nigeria’s legal framework for digital evidence
Nigeria’s Evidence Act 2011 already recognises electronically generated evidence, a category broad enough to include AI-generated content. Section 84 of the Act sets the conditions for admitting computer-generated evidence, including reliability and proper storage. However, this law was crafted in an era when “digital” meant emails, PDFs, and CCTV footage, not hyper-realistic deepfakes or AI-generated content.
Nigerian lawyers are sounding alarms about an immediate threat: the admissibility of AI-forged evidence in a legal system still grappling with basic digital forensics.
“Our laws are very archaic right now,” says Queen-Esther Ifunanya Emma-Egbumokei, a corporate lawyer specialising in international commercial law and the creative economy.
Bernard Daniel Oke, who specialises in intellectual property rights, data protection and privacy, and media, explains that the most critical requirement is proper authentication. He notes that digital evidence “must be properly authenticated and shown to be reliable such that it emanates from the right device, untampered and proper foundation being laid.” The party presenting the evidence must convincingly show that it originated from the device or source they claim.
This authentication process is essential because, without it, the court cannot determine if the evidence is genuine or relevant to the case at hand.
This authentication requirement isn’t limited to AI-generated content. Content on platforms like WhatsApp, Instagram, and Slack can now be edited, which raises a critical concern about how Nigerian courts will verify the authenticity of potentially manipulated digital evidence on these platforms.
“The implication of edited messages on social media platforms being tendered in evidence can be inadmissible for being tampered with,” Oke explains. “Tampered evidence is inadmissible in evidence.”
The implication is that any sign that an electronic message, screenshot, or document has been edited, manipulated, or otherwise changed can render it inadmissible. The law is clear: tampered evidence is not to be trusted and will likely be rejected by the court.
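The tamper-detection problem the lawyers describe has a standard technical analogue: recording a cryptographic hash of a file at the moment evidence is collected, so that anyone can later confirm it is byte-for-byte unchanged. A minimal sketch in Python, with an illustrative chat export standing in for real evidence (the filenames and workflow here are hypothetical, not drawn from Nigerian forensic practice):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# At collection time: record the digest of the exported chat log.
original = b"Exported WhatsApp chat, 2025-01-15: 'Payment confirmed.'"
recorded_digest = sha256_of(original)

# At trial: recompute the digest and compare it with the recorded one.
# An unmodified copy produces an identical digest.
assert sha256_of(original) == recorded_digest

# Editing even a single character yields a completely different digest,
# so the alteration is detectable without inspecting the content manually.
tampered = original.replace(b"confirmed", b"cancelled")
assert sha256_of(tampered) != recorded_digest
```

A hash only proves integrity from the moment it was recorded onward; it cannot show that the original export was itself genuine, which is why the authentication and chain-of-custody requirements discussed above still matter.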
However, there’s a potential workaround. Tolu Adeyemi, a dispute resolution lawyer, explains that while computer-generated evidence is not admissible on its own, it can be accepted if supported by oral evidence or a certificate of authenticity from the person operating the device. She says the certificate of authenticity is one that “complies with the requirements of Section 84 of the Evidence Act.”
The requirements state that the document tendered was “produced by a computer during a time when the computer was regularly used to store or process information; that the computer, phone in this instance, was operating properly at the relevant time; [and] that similar information to what was tendered was regularly inputted in the computer; that the document was produced from information given to the computer in the ordinary course of activities.”
Nigerian courts have no specific legal tests for detecting AI-generated evidence. Lawyers agree that the burden falls on the parties: if one party claims a piece of evidence is fake or AI-generated, they must present contrary evidence or expert testimony to prove their case.
“The court isn’t an investigator, it wouldn’t go on a frolic of its own to determine if the evidence is true or not, it just judges based on what is placed before it and whatever is not contradicted or opposed is deemed admitted,” Adeyemi says.
The gaps and the future: Nigeria’s legal lag
While Section 84 of Nigeria’s Evidence Act 2011 mandates certificates of authenticity for electronic records, Hauwa E. Amuneh, a corporate and commercial lawyer, notes critical gaps: “Nigeria isn’t versed with the forensics department, and detection [of AI-generated evidence] would either take longer or come at a high cost.”
As Amuneh points out, if evidence has been altered by one of the parties editing a message, it is up to the parties to prove or disprove authenticity, often relying on additional evidence, expert witnesses, or cross-examination in court.
Oke advises that since messages on these platforms can only be edited within a few minutes of being sent, “a party who notices such editing can contest the edited content and seek clarification in the same chat immediately such alteration is noticed, to avoid future arguments. In law, equity aids the vigilant and not the indolent.”
He adds that a party can keep screenshots of original messages and have them backed up “as evidence against a party for future reference, especially since the disappearing messages feature is also a possibility”.
AI disputes in Nigeria: It’s not a matter of if, but when
Despite the growing use of AI, none of the lawyers interviewed have encountered a Nigerian court case directly involving AI-generated evidence.
While AI-related disputes are already cropping up globally, Nigeria is yet to enact targeted legislation or develop forensic capacity to address these challenges.
“As for AI-related [legal] disputes, we already have those,” Emma-Egbumokei says. “Not in Nigeria, of course, but there are ongoing legal disputes all over the world concerning AI.”
Oke and Adeyemi echo Emma-Egbumokei’s view, predicting that AI-related disputes will inevitably reach Nigerian courts in the next few years as the technology becomes more deeply integrated into everyday life.
“I believe there will be disputes in the intellectual property aspect, as a lot of people create works, inventions from what AI has fed to them,” says Adeyemi.
Need for legal reforms
If anything, there is an urgent need to update Nigeria’s evidence laws to make provision for AI-generated evidence.
Oke recommends that Nigerian law should “compel AI users to disclose tasks done with AI and to what extent the use of AI helped in delivering a task. This would help track, review, and appreciate human intelligence in the effective use of AI to deliver tasks, [and] hold AI users accountable to the potential risks and aftermath of such use of AI tools.”
Amuneh supports this, saying that “as much as AI is relevant in society today, it should be regulated and controlled.”
Adeyemi adds that there should be a way for AI-generated content to be indicated, “especially what it’s used for these days [like] creating songs with celebrities’ voices or creating a picture of a celebrity doing something they’ve never done.”
While some lawyers believe that Nigeria needs a dedicated digital evidence law that explicitly addresses the unique challenges of AI, others argue that what is needed is not necessarily new legislation but a reinterpretation of existing laws through a tech-aware lens.
Either way, the next big courtroom battles won’t just be about what’s true or false, but about whether the evidence itself can be trusted. In a world where AI can create as much as it can expose, the stakes have never been higher.