Nyambura Kogi, Chairperson of the Association of Women Commercial Drivers of Kenya, sits on the edge of her couch, impatiently tapping her two phones, a Redmi 12 Pro and a Samsung Galaxy A56, with a sigh. “I hate Gemini, especially because it makes both my phones really hang and there seems to be no way to disable it.” Kogi does not recall opting into an Artificial Intelligence (AI) assistant. It was simply there—bundled in a phone update, learning from her daily habits.
Across Africa, millions are in the same boat. From AI-suggested playlists to personalised news alerts and autocorrect that adapts to their slang, AI assistants, including voice, text, and search assistants, are now embedded in the apps and devices Africans rely on daily. But what feels like convenience masks a more troubling truth: users are rarely asked, explicitly, whether they consent to this invisible data exchange. And for most, opting out can be unclear or nearly impossible.
Consent without choice
The promise of AI is personalisation. But it’s powered by data—your data. Every tap, scroll, and voice command potentially becomes a training input. And in many of the world’s most popular apps, this data is collected under vague terms or behind dense privacy policies most users neither read nor fully understand when they “agree”.
“As consumers we find ourselves in a situation where we are not in control of AI introduced in our gadgets because those who design the gadgets are the ones who decide whether to put AI tools or not,” said Zenzele Ndebele, a director of the Centre for Innovation and Technology (CITE).
Even when tech companies disclose that their AI tools are collecting data, the process often remains opaque. Meta, for example, states that messages sent to its AI assistants “may be used to improve AI.” But what does that mean in practice? How long is data stored? Can it be deleted? Can it be sold?
“These are questions users have a right to ask,” says Ndebele.
The invisible cost of “free” products and services
Jean-Pierre Murray-Kline, a business technologist based in South Africa, puts it plainly: “If the product is free, you are the product.”
He describes AI as a digital mirror—one that reflects our behaviours back to us, but amplified. “It’s watching how we speak, what we type, what we search. Then it gives us more of the same—reinforcing habits, biases, even political leanings.”
And yet, users often do not know what happens behind the scenes. Many apps do not just gather data—they harvest it continuously, sometimes even when the app is not open.
Candice Grobler, community marketing strategist and founder of Candid Collab, noted: “It’s really difficult as a user to really know what these AI assistants in apps are doing with our data. They have terms of service but they update automatically without always being clear on the real impact.”
“Some developers design their apps specifically to avoid triggering permission requests,” says Murray-Kline. “If an app does not ask for any permissions at all, that should be a red flag—not a relief.”
At its core, AI remains a business-driven tool. Even as its use becomes near-universal, companies prioritise profitability—optimising AI systems to extract data with little focus on empowering users with control. Ndebele cautions that while businesses invest heavily in AI, “users must be vigilant about what is being collected, how it is being used, and what is being withheld from them.”
AI literacy and smart policy matter in Africa
Africa is one of the fastest-growing mobile markets in the world, yet its users are largely passive participants in the AI economy. Local startups are still developing AI capacity, while global platforms dominate usage—and set the rules.
Data from African users fuels global AI systems, but those same users have little control over how that data is used.
AI literacy is crucial as users need to quickly mature their understanding of AI—learning what it is capable of, what data it collects, and which permissions they can give and should withhold. “In 2025, AI ignorance is not a defence,” Murray-Kline says. “Users need to know their rights, protect their data and demand transparency.”
Beyond individual action, Ndebele noted that African governments need to take data protection laws seriously. “Instead of using data protection laws for their selfish ends, they need to monitor and regulate how tech is collecting data and using it,” said Ndebele.
Ndebele urges African governments to move beyond rhetoric. “We need robust, user-centric data laws. And we need tech companies—local and global—to respect African users.”
In Nigeria, the National Information Technology Development Agency (NITDA) has begun to address data privacy, but enforcement is patchy. In Kenya, the Data Protection Act is a step forward, but public awareness and implementation remain low. South Africa has established sophisticated data protection laws, notably the Protection of Personal Information Act (POPIA), which provides robust rights and enforcement mechanisms for individuals. However, similar to Kenya, challenges in awareness and compliance persist, with many organisations and consumers still struggling to fully understand or meet their obligations under the law.
AI is here to stay. But how it shapes our lives depends on vigilance, digital literacy, and willingness to demand better.