At the InfoQ Dev Summit in Boston, Michelle Brush, Engineering Director of Site Reliability Engineering (SRE) at Google, delivered a keynote aimed squarely at software leaders, addressing the broader changes underway in software engineering, systems thinking, and leadership through complexity.
She opened by acknowledging the uncertainty that many practitioners feel, affirming that this was a shared experience and an expected part of navigating today’s technological landscape. Brush argued that the nature of software engineering work is shifting, not disappearing. As AI systems automate pieces of software development, engineers will face harder and more complex challenges.
Citing Bainbridge’s “ironies of automation”, she explained, “when you automate some piece of work, the job that you leave behind for humans to do is actually harder.” The result is a landscape where engineers must monitor, debug, and validate automated systems, even as their direct responsibilities evolve.
She illustrated this point with a simple analogy: “Dishwashers are great… but we didn’t get rid of all the work.” While machines may handle routine tasks, humans are left with responsibility for exception handling, quality assurance, and system maintenance. In software, this translates into higher-level abstraction work, deeper troubleshooting, and a reliance on engineering judgment. “Our brains are going to start working on higher and higher abstractions,” she said, emphasizing the cognitive shift required in modern development.
Brush explained that large language models (LLMs) today operate with a kind of “unconscious competence.” They can produce impressive results, but lack explainability and awareness of their limitations. “They don’t know what they don’t know,” she said, framing hallucinations as a natural byproduct of this architecture. By contrast, humans sit in the space of “conscious competence”—we understand what we know and can explain it, which is essential for teaching, mentoring, and validating machine outputs.
A central concept in her talk was the importance of “chunking,” or cognitive encapsulation, as engineers deal with increasing complexity. She argued that the ability to move between abstraction layers—while still being able to drill into the underlying systems—is crucial. “All abstractions leak,” she reminded the audience, “especially our hardware abstractions.”
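To make the "leaky abstraction" idea concrete, consider a small illustration (an example chosen for this article, not one Brush gave): floating-point numbers present themselves as real numbers, but the binary representation the hardware actually uses shows through.

```python
# Illustrative only: the "real number" abstraction offered by floats
# leaks its underlying binary (IEEE 754) representation.
print(0.1 + 0.2)          # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)   # False: the representation shows through

# Choosing a different abstraction (decimal arithmetic) avoids the
# surprise when exactness matters.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

The point of the sketch is the drill-down: knowing what sits beneath the abstraction is what turns a surprising result into an explainable one.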
Brush also stressed the enduring importance of foundational technical knowledge. “I have used calculus in my day job. Definitely discrete math. I’ve had the misfortune of using assembly twice,” she joked, highlighting how education in the fundamentals continues to pay off—even as tools and platforms evolve. She called this kind of knowledge essential for engineering resilience, not just in code, but in understanding systems holistically.
To this end, she advocated for systems thinking, citing Donella Meadows’ work on flows, feedback loops, and change. She recommended drawing on supporting disciplines such as control theory, cybernetics, and behavioral economics to better model and design socio-technical systems. For engineering leaders, this was a call to develop broader lenses for decision-making and risk assessment.
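For readers less familiar with the systems-thinking vocabulary, a balancing (negative) feedback loop can be sketched in a few lines of Python. The model and numbers below are purely illustrative and were not part of the keynote.

```python
# A minimal stock-and-flow model with a balancing feedback loop,
# in the spirit of systems-thinking primers. Numbers are made up.
def simulate(target=100.0, stock=20.0, adjustment_rate=0.3, steps=15):
    """Each step, the inflow is proportional to the gap between the
    target and the current stock: the feedback that closes the gap."""
    history = [stock]
    for _ in range(steps):
        gap = target - stock              # the signal the loop responds to
        inflow = adjustment_rate * gap    # larger gap, larger correction
        stock += inflow
        history.append(round(stock, 1))
    return history

print(simulate())
```

With a small adjustment rate the stock settles smoothly on the target; push the rate too high and the same loop overshoots and oscillates, which is exactly the kind of dynamic that control theory formalizes.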
Sharing a case study from Google, Brush detailed a 2019 outage that brought down two data centers due to runaway automation. The assumption that geographic distribution was sufficient proved wrong when a third data center also failed under the load of recovery traffic. The takeaway? “We realized we needed to be in more than just three data centers,” she said. The response involved not just more capacity, but smarter design—using latency injection testing and intent-based rollout systems to surface risks before deployment.
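The capacity arithmetic behind that lesson is easy to reproduce. The sketch below is a deliberately simplified model with made-up numbers and names, not Google's actual traffic management; it shows why a fleet provisioned to survive one data-center failure can still collapse when two fail and the recovery load is redistributed.

```python
# Hypothetical capacity model: redistribute load across surviving
# data centers and report which ones end up over capacity.
def redistribute(total_load, capacities, failed):
    survivors = [dc for dc in capacities if dc not in failed]
    if not survivors:
        return {}
    share = total_load / len(survivors)
    return {dc: (share, share > capacities[dc]) for dc in survivors}

# Three data centers, each provisioned with headroom for one failure.
capacities = {"dc-a": 60, "dc-b": 60, "dc-c": 60}
total_load = 100

print(redistribute(total_load, capacities, failed={"dc-a"}))
# dc-b and dc-c each absorb 50 units: tight, but within capacity.

print(redistribute(total_load, capacities, failed={"dc-a", "dc-b"}))
# dc-c alone faces 100 units against 60 of capacity: the cascade.
```

Techniques like the latency injection testing and intent-based rollouts Brush described are ways of surfacing that kind of hidden assumption before a real failure does.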
Developers looking to learn more can watch infoq.com in the coming weeks for videos from the event.