Udeme Jalekun is a Senior Quality Assurance (QA) Engineer and educator who transitioned from customer support into quality engineering after identifying that most recurring production issues were preventable through stronger use-case testing. Her work spans functional and non-functional testing, including integration, system, performance, and automation testing, with a focus on building resilient, scalable systems that protect revenue and improve user experience.
She has delivered quality across more than 10 major fintech and social-impact products, including SwervePay, Geegpay, Quickteller Paypoint, and Kippa Pay, collectively serving millions of users and generating over ₦5 billion ($3.7 million) in revenue. At Interswitch and Kippa, she transformed QA operations, improving service uptime by 20%, reducing defect rates by 15%, doubling engineering productivity, and standardising automation frameworks across teams.
Beyond delivery, Udeme is deeply committed to workforce development. As Director of Professional Training and Development at the Association of Nigerian Software Testers (ANST), she has trained and mentored over 300 QA professionals and delivered structured learning to more than 2,000 members nationwide.
- Explain your job to a five-year old.
Let’s assume the five-year-old loves to watch ‘Paw Patrol.’ I would explain software testing using that animated series. Each pup is awesome in their own way because of the unique tools in their pup packs, which they use on their missions. For example, Rubble’s pack is for construction, while Marshall’s carries fire and medical supplies. And then there’s their brilliant leader, Ryder, who creates these super cool tools and tweaks them from time to time so the pups are always armed and ready for their missions.
A software tester is the person who checks that each tool does what it’s expected to do, the way it’s expected to, and at the right time. So, if Rocky hits the button for a screwdriver, what comes out of his pup pack should not be a hammer, but a screwdriver.
A software tester runs a lot of checks to ensure these tools work as expected and acts as a line of defence, making sure the Paw Patrol doesn’t run into trouble with their gear on those critical missions. Testers also ask a lot of questions, like: Does this ‘eject net’ button respond properly? What happens if I want to drill clockwise really fast? What if I try to put a square brick in a round hole?
The pup packs are what we call software applications, or just ‘apps,’ and the software tester is the super-cool superhero who ensures things work, raises complaints about anything that isn’t working, and tests again to confirm the fixes once ‘Ryder’ has made them.
- What’s the first moment you realised “software quality” is a product decision, not just a QA job?
It first hit me a few years into my QA journey, when I saw that a deployment could still go ahead even after the test team had identified several issues in that build. Later, when I grew into leadership, I understood how prioritisation from a business perspective drives not only development work but also the quality narrative.
The choice of what to build, which features and fixes to prioritise, and how to allocate resources became clearer when weighed against product metrics such as customer reach, retention, revenue, and other commercial impact. Quality was no longer a decision owned by QA alone; it required the whole team’s effort and had to align with product and business impact.
- What’s the pathway to becoming a QA engineer today? If someone is starting from scratch, where should they begin?
First, do enough research to be sure a career in QA is right for you. Find out what it is and what it isn’t, what skills are required, and what the career roadmap looks like. Are learning resources easily available and accessible? Is there a community that keeps you up to date with trends and provides networking opportunities? Is there room for growth in this career?
The outcome should be both a resolution and a learning plan capturing the next 90 days.
Spend the first 30 days immersing yourself in the fundamentals of software testing: its types, principles, levels, and strategies, using resources already available online. I advocate structured learning if a newbie wants to achieve a lot in a specific time. That can mean joining a bootcamp or subscribing to courses that follow a learning path tailored to your level (beginner-friendly resources).
There are a lot of free and useful resources online, but it can get overwhelming if you go it alone. It helps to have a clearly planned roadmap for your learning (should you choose to self-study) or to join a bootcamp and learn with others in a structured way.
Finally, get a mentor who can hold you accountable for your commitment to learning, show you the ropes, and provide clarity when you need it. You can also leverage their experience, wins, losses, and network.
- Did being exposed to the frontline as a customer support professional early in your career give you a close view of how real users struggle with products? Do you consider that an advantage, and how has it shaped how you approach testing?
It’s a huge advantage. The basis of my transition was the desire to help end users enjoy the products and services they use and to contribute to their overall experience.
Customer service gave me the lens through which I look at product design, interactions, functionality, and performance. Because I had to support different user personas, I learned to put myself in their shoes (the digitally illiterate, the tech-savvy, the elderly, the young, the calm, the easily irritated, and so on) when testing products.
This background in customer support has made me think more about product use across different personas: Is this error message clear? Do we need to include tooltips for signposting certain features/flows? Will colour-blind users have trouble navigating the app?
The truth is, many critical issues are caught simply by walking in the user’s shoes. This is something my background in customer service has empowered me to do.
- What’s the biggest misconception startups and founders have about QA?
The biggest misconception many startups and founders have about QA is treating it as optional, something you add later when you’re bigger, not something you build with from day one. They assume developers can simply test their own work and that this will be “good enough,” especially when the pressure is to ship fast. But this thinking usually comes from seeing QA as a blocker rather than a multiplier.
In reality, QA isn’t what slows releases down; unclear requirements, untested edge cases, late bug discovery, and repeated rework do. When QA is removed, teams may ship faster for a moment, but they pay for it later in regressions, production incidents, customer churn, and a growing backlog of issues that make future releases slower and riskier.
- If you could redesign how Nigerian startups approach testing from day one, what would you change?
Start product ideation and development with a quality mindset from the very beginning. Many startups get only one shot to bring their ideas and solutions to life and break into the market successfully. Do it right the first time.
Bring in testers early. Early testing is crucial to a product’s success and should not be treated as an afterthought. Requirement analysis sessions, design reviews, and technical walkthroughs are collaborative sessions that help identify issues early, before users do, saving rework time and money.
- Five years from now, what will separate average QA engineers from exceptional ones?
Product understanding and sector-based knowledge. With the advent of AI, many people feel that human thinking will be contracted out to AI agents. But in principle, AI is trained on data, and that data comes from humans’ responses to unprecedented challenges. The exceptional QA engineer has interacted long and deeply enough with their sector, product, or service to predict edge cases and scenarios from actual interactions with the systems under test and with the people who have used them.
Another distinguishing factor will be the ability to architect test systems and solve day-to-day problems, whether through automation, requirement analysis, design reviews, customer interviews, or simply interacting more with the product.
