At QCon London 2026, Yinka Omole, Lead Software Engineer at Personio, presented a session exploring a recurring dilemma engineers face: whether to spend time mastering the newest technologies and frameworks, or to invest in deeper, foundational problems that may appear less exciting but deliver long-term value.
The talk argued that while hype cycles regularly promise to transform or even replace programming, the most valuable engineering expertise tends to compound over time when it is rooted in fundamental problems rather than specific tools.
A Long History of Predictions About the End of Programming
Predictions about the decline of software engineering have appeared repeatedly throughout computing history. A 2023 article by Matt Welsh in Communications of the ACM suggested that advances in AI could significantly change the role of programmers.
Similar claims appeared decades earlier. In the late 1950s, languages such as FORTRAN were promoted as enabling business professionals to write software without specialized programmers. In 1965, computer scientist Herbert Simon predicted that machines would be capable of performing most human intellectual tasks within twenty years.
In the 1990s, Computer-Aided Software Engineering tools promised to generate complete applications directly from diagrams. Generated systems frequently struggled with real-world edge cases, requiring experienced engineers to intervene.
More recently, similar discussions have reemerged with AI-generated code. In 2025, Dario Amodei, CEO of Anthropic, predicted that AI could write most software code within a year.

Despite these recurring predictions, the number of developers worldwide continues to grow. Estimates suggest the global developer population increased from roughly 14 million in 2019 to around 21 million by 2025.
Why the Profession Keeps Growing
Rather than slowing down, the profession continues expanding because the underlying problems engineers solve repeatedly reappear across industries and technologies.
Developers who invest in understanding these recurring problem classes build knowledge that compounds over time. While specific tools change quickly, many of the underlying engineering concepts remain stable.
The talk highlighted the idea that although technology appears to move quickly, the fundamentals often remain the same. Across different industries, stacks, and architectural trends, engineers continue to deal with similar issues involving data modeling, reliability, distributed systems, and workflow orchestration.
Foundations That Compound
One example discussed was the evolution of PostgreSQL. For many years, MySQL dominated open-source database deployments, particularly as part of the LAMP stack.
Over time, PostgreSQL gradually gained traction and eventually surpassed MySQL in several developer surveys around 2023.
This shift reflects PostgreSQL’s early focus on correctness, transactional guarantees, and extensibility. Although that emphasis slowed early adoption, it created a strong architectural foundation.

As new capabilities became important, such as full-text search, JSON support, or vector extensions for AI workloads, the database was able to incorporate them without major architectural changes. Long-term investment in core capabilities allowed the system to evolve naturally as new requirements emerged.
Matching Technology to the Problem
Another example came from the architecture behind WhatsApp. When Facebook acquired the messaging service in 2014, WhatsApp supported hundreds of millions of users with a team of roughly 32 engineers.
The system was built using Erlang, a language developed by Ericsson in the 1980s for telecom systems requiring extremely high reliability.
Because Erlang was designed for distributed communication systems that must remain available under failure conditions, it aligned closely with the needs of a global messaging platform. That alignment allowed the small engineering team to operate infrastructure handling tens of billions of messages per day.
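Erlang's reliability model centers on supervision: rather than defending against every failure, a supervisor restarts crashed workers from a clean state. The "let it crash" idea can be sketched in a few lines of Python (the function names and the flaky worker here are illustrative, not WhatsApp's actual code):

```python
# Minimal sketch of Erlang-style "let it crash" supervision, in Python.
# In Erlang/OTP a supervisor process restarts failed workers; here a
# simple loop restarts a flaky callable until it succeeds or gives up.

def supervise(worker, max_restarts=3):
    """Run `worker`; on failure, restart it up to `max_restarts` times."""
    for _ in range(max_restarts + 1):
        try:
            return worker()
        except Exception:
            continue  # discard broken state and start fresh
    raise RuntimeError("worker kept failing; escalate to parent supervisor")

# A worker that fails twice before succeeding, standing in for a
# process hit by transient network errors.
attempts = {"count": 0}

def flaky_delivery():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "message delivered"

result = supervise(flaky_delivery)
print(result)  # restarted twice, then succeeded
```

The design choice worth noticing is that recovery logic lives in one place (the supervisor) instead of being scattered through every worker, which is part of what let a small team operate a very large system.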
The Risks of Rewriting Systems
The talk also examined the dangers of rewriting complex systems from scratch. A well-known example is Netscape’s decision in the late 1990s to rebuild its browser codebase entirely.
The rewrite took several years, leaving the company unable to ship new features during a critical period of competition. Engineer Jamie Zawinski later described the effort as one of the biggest software disasters in history.
Rewriting a system does not only discard outdated code. It can also remove years of accumulated operational knowledge embedded in edge-case handling and architectural decisions, forcing teams to rediscover solutions to problems that had already been solved.
When Simpler Architectures Win
A more recent case involved a system described by Amazon engineers for analyzing video quality within Amazon Prime Video.
The system was initially built using a distributed serverless architecture based on AWS Step Functions and Lambda. However, the design struggled to scale beyond a small percentage of its expected workload.
Much of the latency came not from analyzing video but from transferring state between services.

Engineers eventually replaced the distributed design with a simpler architecture running on ECS. By keeping operations in memory rather than across network calls and storage round trips, the team reduced costs by roughly ninety percent while doubling throughput.
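The cost difference between the two designs comes from where intermediate state lives. The toy Python sketch below (not Amazon's actual code; the analysis step and data are invented for illustration) contrasts a pipeline that round-trips state through a store between steps with one that passes it in process memory:

```python
# Illustrative sketch of why consolidating the pipeline helped: each
# service hop serializes state to storage and back, while a single
# process passes the same state in memory.

import json

def detect_defects(frame):
    # Stand-in analysis step: flag frames with low quality scores.
    return frame["quality"] < 0.5

def distributed_pipeline(frames, store):
    # Every hop uploads state, and the next service downloads it again.
    store["frames"] = json.dumps(frames)
    frames = json.loads(store["frames"])
    results = [detect_defects(f) for f in frames]
    store["results"] = json.dumps(results)
    return json.loads(store["results"])

def in_memory_pipeline(frames):
    # Same logic; intermediate state never leaves process memory.
    return [detect_defects(f) for f in frames]

frames = [{"id": 1, "quality": 0.9}, {"id": 2, "quality": 0.2}]
assert distributed_pipeline(frames, {}) == in_memory_pipeline(frames)
```

Both versions compute identical results; the difference is that the distributed one pays serialization, transfer, and storage costs at every step, which is where the latency in the original design came from.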
Managing Innovation Carefully
To help teams evaluate new technologies, the talk referenced the concept of innovation tokens, introduced by engineer Dan McKinley.
The model suggests that organizations have limited capacity to adopt new technologies. Each new framework or architectural pattern consumes one of these tokens, so teams should spend them carefully.
The key question becomes whether a new technology solves a real problem or simply follows industry trends.
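The heuristic can be made concrete as a simple budget check. The sketch below is a toy model only; the token count and the technology names are illustrative, not taken from the talk or from McKinley's original essay:

```python
# Toy model of the "innovation tokens" heuristic: an organization can
# absorb only a few technology choices that are novel for its teams.

INNOVATION_TOKENS = 3  # illustrative budget, not a prescribed number

def within_budget(choices, budget=INNOVATION_TOKENS):
    """Count choices that are novel for this team and compare to budget."""
    spent = sum(1 for c in choices if c["novel_for_us"])
    return spent <= budget

stack = [
    {"tech": "PostgreSQL", "novel_for_us": False},        # boring, well understood
    {"tech": "new graph database", "novel_for_us": True}, # costs a token
    {"tech": "experimental workflow engine", "novel_for_us": True},
]

print(within_budget(stack))  # True: two tokens spent of three
```

The point of the model is not the arithmetic but the framing: novelty is a cost to be budgeted, so each new framework should earn its token by solving a real problem.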
Identifying Durable Engineering Problems
The talk also explored how individual engineers can decide where to invest their learning time.
Across different roles in payments systems, banking infrastructure, and payroll platforms, a recurring pattern appears: systems that manage multi-step workflows in which entities move between states such as initiated, processing, approved, or rejected.
Recognizing this pattern allows engineers to focus on the underlying concept of workflow orchestration and state machines rather than specific implementation tools.
Whether implemented with custom logic or platforms, the underlying problem remains largely the same. Developers who recognize these recurring structures can build expertise that transfers across industries, organizations, and technology stacks.
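The pattern described above can be sketched as an explicit transition table in Python; the states match those named in the talk, while the class and variable names are illustrative rather than any specific product's code:

```python
# Hedged sketch of the workflow state-machine pattern: an entity moves
# through named states, and only the transitions listed here are legal.

VALID_TRANSITIONS = {
    "initiated": {"processing"},
    "processing": {"approved", "rejected"},
    "approved": set(),   # terminal state
    "rejected": set(),   # terminal state
}

class Workflow:
    def __init__(self):
        self.state = "initiated"

    def advance(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

payment = Workflow()
payment.advance("processing")
payment.advance("approved")
print(payment.state)  # approved
```

Whether the table lives in custom code, a database column, or a workflow platform, the shape of the problem (states, legal transitions, terminal outcomes) is the same, which is what makes the underlying expertise portable.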
AI Tools and the Future of Engineering
The session concluded by addressing the rapid emergence of AI coding tools.

Techniques tied to specific models or prompt strategies may evolve quickly as models improve. However, core engineering skills, such as decomposing complex systems, designing reliable architectures, and evaluating correctness, are likely to remain valuable regardless of how code is generated.
Engineers who built strong foundations before the rise of AI tools continue to lead teams today, largely because their expertise focuses on the underlying problems rather than the tools used to implement them.
As the industry continues to evolve, focusing on these boring but durable problems may be one of the most reliable ways to build long-term engineering expertise.
