Identity and authentication services company Authress has shared its strategy for staying operational during major cloud infrastructure outages such as the massive October 2025 AWS outage, which disrupted many widely used services. The company’s resilience architecture relies on multi-region deployment and minimal reliance on AWS control plane services, Authress CTO Warren Parad explains.
Parad says the October 20 AWS incident was the worst he has seen in a decade. Even so, Authress maintained its SLA commitments thanks to a reliability-first design centered on a failover routing strategy.
Simply put — our strategy is to utilize DNS dynamic routing. This means requests come into our DNS and it automatically selects between one of two target regions, the primary region that we’re utilizing or the failover region in case there’s an issue.
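As a minimal sketch (not Authress's actual implementation), the routing decision Parad describes — send traffic to the primary region unless it fails a health check, otherwise to the failover region — might look like this; the region names and the health-check callable are hypothetical:

```python
# Minimal sketch of DNS-style failover routing between two regions.
# Region names and the health-check callable are hypothetical examples.

def select_target_region(primary: str, failover: str, is_healthy) -> str:
    """Return the region DNS should point at: the primary if it passes
    its health check, otherwise the failover region."""
    return primary if is_healthy(primary) else failover

# Example: the primary region reports unhealthy, so traffic is routed
# to the failover region instead.
target = select_target_region(
    "eu-west-1", "us-east-1",
    is_healthy=lambda region: region != "eu-west-1",  # simulate a primary outage
)
print(target)  # -> us-east-1
```

In a real deployment the result of this decision would be written back into the DNS record, so clients resolving the service name are steered to the healthy region.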
A critical part of this approach is rapid incident detection, enabling the DNS layer to determine when to switch traffic between regions. Parad notes that Authress intentionally avoids relying on AWS Route 53’s default health checks or any third-party service to monitor availability:
We wouldn’t know if that’s an issue of communication between AWS’s infrastructure services, or an issue with the default Route 53 health check endpoint, or some entangled problem with how those specifically interact with our code that we’re actually utilizing.
Authress’s custom solution performs several checks across the database, SQS, and the core authorizer logic, while also profiling request latency end to end. This allows the company to reliably determine whether the primary region, out of six in total, is experiencing issues and to update the DNS accordingly.
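A hedged sketch of such a composite check might look as follows. The individual check callables, the latency budget, and the record details are hypothetical stand-ins; the article only says Authress checks its database, SQS, and core authorizer logic end to end, then updates DNS when the region is unhealthy:

```python
# Sketch of a composite health check gating a DNS failover.
# Check functions, thresholds, and record values are hypothetical.
import time

LATENCY_BUDGET_SECONDS = 1.0  # assumed threshold, not from the article

def region_is_healthy(checks) -> bool:
    """Run every dependency check end to end; the region is healthy only
    if all checks pass and the total latency stays within budget."""
    start = time.monotonic()
    if not all(check() for check in checks):
        return False
    return (time.monotonic() - start) <= LATENCY_BUDGET_SECONDS

def build_failover_change(record_name: str, failover_ip: str) -> dict:
    """Build a Route 53 UPSERT change batch pointing the record at the
    failover region. A real deployment would pass this batch to
    boto3.client("route53").change_resource_record_sets(...)."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": failover_ip}],
            },
        }]
    }

# Example: one dependency check fails, so the DNS update is prepared.
checks = [lambda: True, lambda: False, lambda: True]  # db, queue, authorizer
if not region_is_healthy(checks):
    batch = build_failover_change("api.example.com", "203.0.113.10")
```

Keeping the checks in-house, as the quote above explains, avoids coupling failover decisions to Route 53's own health-check endpoints.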
Parad notes that while this failover strategy is a solid starting point, it has limitations. Most notably, it cannot easily isolate and replace a single failing component. To address this, Authress designed an edge-optimized architecture that uses Amazon CloudFront with AWS Lambda@Edge for compute.
This architecture offers two benefits: it brings Authress services close to where their users are, reducing latency, and it enables a more robust failover strategy.
Using CloudFront gives us a highly reliable CDN, which routes requests to the locally available compute region. From there, we can interact with the local database. When our database in that region experiences a health incident, we automatically fail over and check the database in a second, adjacent region. And when there’s a problem there as well, we do it again to a third region.
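The cascading behavior Parad describes — try the local database, then a second adjacent region, then a third — can be sketched as a loop over an ordered list of regions. The region names and the query callable below are hypothetical:

```python
# Sketch of cascading regional failover: attempt each region's database
# in order of proximity until one responds. The region list is hypothetical.

class AllRegionsFailed(Exception):
    pass

def query_with_failover(regions, query):
    """Try the query against each region in order; return the first
    successful (region, result) pair, raising only if every region fails."""
    last_error = None
    for region in regions:
        try:
            return region, query(region)
        except Exception as err:  # a real system would catch narrower errors
            last_error = err
    raise AllRegionsFailed(f"all regions failed: {last_error}")

# Example: the local region raises, so the call falls through to the next one.
def fake_query(region):
    if region == "eu-west-1":
        raise ConnectionError("health incident in eu-west-1")
    return {"region": region, "ok": True}

region, result = query_with_failover(
    ["eu-west-1", "eu-central-1", "us-east-1"], fake_query
)
print(region)  # -> eu-central-1
```

Because the fallback logic runs in edge compute close to the user, a single unhealthy database can be bypassed without failing over the entire region.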

An additional element in Authress’s overall resilience strategy addresses application-level failures. Parad acknowledges that writing completely bug-free code is nearly impossible, and systems should be designed with this reality in mind.
Hacker News reader rdoherty notes:
This is probably one of the best summarizations of the past 10 years of my career in SRE. Once your systems get complex enough, something is always broken and you have to prepare for that. Detection & response become just as critical as pre-deploy testing.
Some commenters raised concerns that automation and IaC can introduce additional points of failure. Parad responded that Authress mitigates this risk by keeping things simple and small:
We’ve split up our infrastructure to go with individual services, so each piece of infra is also straightforward. In practice, our infra is less DRY and more repeated, which has the benefit of avoiding the complexity that often comes from attempting to reduce code duplication. The ancillary benefit is that simple stuff changes less frequently. Less frequent changes [bring] less opportunity for issues.
This is a brief overview of the key elements of Authress’s approach to resilience, which also includes root cause analysis, validation testing, impact assessment, AI-driven filtering of non-incidents, and more. Be sure to check the original article for the full details.
