Zendesk recently argued that generative AI has changed the limiting factor in software delivery from writing code to what it calls “absorption capacity”. Absorption capacity is the organisational ability to define problems clearly, integrate changes into a broader system, verify that they behave correctly, and turn implementation into dependable value. In the company’s framing, once code becomes abundant, the main challenge is no longer producing it quickly but ensuring that rapid generation does not outrun architectural coherence, review capacity, and delivery flow.
In a post on Zendesk Engineering, Bence A. Tóth builds the argument through analogies from agriculture and manufacturing. He argues that improving one part of a system does not necessarily increase total throughput if another constraint remains in place. In software, he writes, generative AI has lowered the cost of producing code enough that implementation is no longer the narrowest constraint.
Reimagining Margaret Hamilton’s iconic Apollo software photograph, from an era when code production was the primary constraint on software delivery (source)
Tóth’s term “absorption capacity” covers the work required to convert generated code into reliable outcomes. That includes deciding what should be built, aligning implementation with the surrounding architecture, establishing confidence through verification, and determining whether the resulting change actually improves customer outcomes.
The article proposes four practical responses. First, problem framing should become a shared responsibility between product and engineering rather than a one-way handoff, because ambiguous requirements can now produce plausible but misaligned implementations at scale. Second, teams should lower the cost of confidence by strengthening verification loops, including CI signals, static analysis, security checks, observability, staged rollouts, and rapid product feedback after deployment.
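One of those verification loops, the staged rollout, can be sketched in a few lines: promote a change through increasing traffic percentages only while an observed health signal stays within bounds, and roll back on the first bad reading. This is an illustrative sketch, not Zendesk’s implementation; the function names, stages, and threshold are hypothetical.

```python
"""Hypothetical sketch of a staged-rollout gate, one of the verification
loops the article recommends for lowering the cost of confidence."""

from typing import Callable, List


def staged_rollout(
    stages: List[int],
    error_rate_at: Callable[[int], float],
    max_error_rate: float = 0.01,
) -> bool:
    """Walk a change through traffic percentages, e.g. [1, 5, 25, 100].

    error_rate_at is a probe returning the observed error rate while the
    change serves the given percentage of traffic. Returns True if the
    change reached full traffic, False if it was rolled back early.
    """
    for pct in stages:
        if error_rate_at(pct) > max_error_rate:
            # Bad signal at a small blast radius: stop and revert.
            return False
    return True


# Usage: a healthy change passes every stage; a regression is caught
# at the first stage, while it affects only a sliver of traffic.
healthy = staged_rollout([1, 5, 25, 100], lambda pct: 0.002)
regression = staged_rollout([1, 5, 25, 100], lambda pct: 0.05)
```

The point of the sketch is that confidence comes from cheap, automated signals after each small step, rather than from a single expensive review before the whole change ships.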
Third, architecture and engineering conventions should serve as scaffolding for AI-assisted delivery, with clear boundaries, consistent naming, templates, lightweight Architecture Decision Records (ADRs), and guardrails enforced in CI. Finally, teams should measure throughput rather than output, favouring metrics such as lead time, review queue time, change failure rates, rollbacks, and incident load over lines of code, pull request volume, or token counts.
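Throughput metrics of this kind can be derived from ordinary deployment records rather than from code-volume counts. A minimal sketch, assuming a hypothetical record shape with merge time, deploy time, and a failure flag (the `Deploy` structure is a stand-in for whatever a team’s pipeline actually emits):

```python
"""Minimal sketch of two throughput metrics named in the article:
lead time and change failure rate. The record shape is hypothetical."""

from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import List


@dataclass
class Deploy:
    merged_at: datetime    # when the change was merged
    deployed_at: datetime  # when it reached production
    failed: bool           # did it trigger a rollback or incident?


def median_lead_time_hours(deploys: List[Deploy]) -> float:
    """Median hours from merge to production across all deploys."""
    return median(
        (d.deployed_at - d.merged_at).total_seconds() / 3600 for d in deploys
    )


def change_failure_rate(deploys: List[Deploy]) -> float:
    """Fraction of deploys that caused a rollback or incident."""
    return sum(d.failed for d in deploys) / len(deploys)


# Usage with three illustrative deploys: lead times of 4, 2, and 6 hours,
# one of which failed.
ds = [
    Deploy(datetime(2025, 1, 1, 9), datetime(2025, 1, 1, 13), False),
    Deploy(datetime(2025, 1, 2, 9), datetime(2025, 1, 2, 11), True),
    Deploy(datetime(2025, 1, 3, 9), datetime(2025, 1, 3, 15), False),
]
```

Unlike lines of code or pull request volume, both numbers move in the right direction only when changes actually flow through review, deployment, and production health checks.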
AI, he argues, will scale whatever structures already exist in the codebase and delivery workflow. In systems with clear module boundaries, documented invariants, and a small number of well-understood implementation paths, AI can accelerate work while remaining easier to direct and verify. In systems with ambiguous conventions or architectural drift, the same acceleration can amplify inconsistency, increase review burden, and weaken trust in changes that may look locally correct while degrading the system more broadly.
In a recent InfoQ news item, Agoda similarly argued that coding was never the real bottleneck, and that specification and verification become more important as implementation accelerates. Zendesk pushes that argument further by naming the replacement constraint and framing it as an organisational design problem: how to increase a team’s ability to absorb rapid change without degrading architecture or delivery quality.
For architects and engineering leaders, the implication is that the advantage may not go to teams that generate the most code, but to those that can safely absorb more meaningful change.
