On October 20, 2025, Amazon Web Services (AWS) experienced a major outage that disrupted global internet services, affecting millions of users and thousands of companies across more than 60 countries. The incident originated in the US-EAST-1 region and was traced to a DNS resolution failure affecting the DynamoDB endpoint, which cascaded into outages across multiple dependent services.
According to AWS’s official incident report, the fault began when a DNS subsystem failed to correctly update domain-resolution records in the affected region. Customers were unable to resolve service endpoints, halting operations even though the underlying data stores remained operational. The impact was widespread: according to data from Ookla and the monitoring service Downdetector®, more than 17 million user reports were logged globally during the outage window.
The incident stemmed from a latent race condition in DynamoDB’s automated DNS-management system. AWS uses two automated components to manage these DNS records: a DNS Planner, which tracks load-balancer health and proposes changes, and a DNS Enactor, which applies those changes via Route 53. When one Enactor lagged and applied a stale plan, a cleanup process then deleted the active DNS records tied to that plan as obsolete, leaving the dynamodb.us-east-1.amazonaws.com endpoint pointing to no IP addresses.
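To make the failure mode concrete, the toy sketch below simulates this class of race condition: a lagging applier writes a record tied to an out-of-date plan, and a cleanup pass then deletes that record as obsolete even though it is the only one serving traffic. The names and logic are purely illustrative, not AWS’s actual implementation.

```python
# Toy illustration of the race-condition class described above.
# Purely hypothetical names and logic, not AWS's implementation.

ENDPOINT = "dynamodb.us-east-1.amazonaws.com"
records = {ENDPOINT: ["10.0.0.1", "10.0.0.2"]}   # live DNS record

def enactor_apply(plan_id, ips, applied_plans):
    """A lagging enactor applies a stale plan long after it was produced."""
    records[ENDPOINT] = ips
    applied_plans.append(plan_id)

def cleanup(applied_plans, current_plan_id):
    """Purge state tied to plans older than the newest one. If the record
    currently serving traffic was written by such a plan, it gets wiped."""
    if applied_plans and applied_plans[-1] < current_plan_id:
        records[ENDPOINT] = []                   # the live record is now empty

applied = []
enactor_apply(41, ["10.0.0.3"], applied)   # stale plan 41 applied late
cleanup(applied, current_plan_id=42)       # plan 41 looks obsolete, so it is purged
print(records)                             # endpoint now resolves to nothing
```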
Because the broken DNS record went uncorrected, clients, whether AWS services or customer applications, couldn’t resolve the DynamoDB endpoint. Even though DynamoDB itself remained internally healthy, the loss of DNS reachability made it effectively unreachable.
The outage didn’t stop there. Internal AWS subsystems that relied on DynamoDB, including the control planes for EC2 and Lambda, began to malfunction. As customer SDKs retried failed requests, they created a retry storm that further overwhelmed AWS’s internal resolver infrastructure. The Network Load Balancer (NLB) health-check service also began misfiring, rejecting newly launched EC2 instances and slowing recovery.
Ultimately, it was a chain reaction: a DNS bug leading to endpoint invisibility, triggering retries, cascading control-plane strain, and widespread regional instability.
The ripple effects extended far beyond AWS’s cloud platform: major consumer, enterprise, and governmental applications, including social media, gaming, finance, and e-commerce, either experienced degradation or were offline entirely. Experts point to the incident as a stark reminder of the risks associated with heavy dependence on a single cloud region or provider.
In response, AWS has taken steps to disable and re-evaluate the automation systems responsible for DNS updates and load balancing in the affected region, and is recommending customers adopt multi-region architectures and diversify dependency chains to mitigate similar risks in the future. This outage raises questions not just about AWS’s resilience but about the broader architecture of today’s internet, where a single point of infrastructure failure can trigger a global cascade of service interruptions.
To adopt AWS’s recommended multi-region architecture effectively, companies should consider the following practices:
AWS customers should design for multi-region failover, not just multi-AZ high availability. Relying solely on US-EAST-1 (or any single region) can be risky when DNS or control-plane failures happen. As one Reddit commenter put it: “AZ replication keeps you up; region replication keeps you alive.”
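As a minimal sketch of what region-level failover can look like on the client side, the snippet below tries a DynamoDB read in a primary region and falls back to a secondary one. It assumes the table is replicated across both regions (for example via Global Tables); the table name, key schema, and region list are illustrative.

```python
# Hedged sketch: client-side regional failover for a DynamoDB read.
# Assumes the table is replicated to every listed region.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

REGIONS = ["us-east-1", "us-west-2"]          # primary first, then fallback

def get_user(user_id):
    last_err = None
    for region in REGIONS:
        client = boto3.client("dynamodb", region_name=region)
        try:
            return client.get_item(
                TableName="users",            # hypothetical table name
                Key={"user_id": {"S": user_id}},
            )
        except (ClientError, EndpointConnectionError) as err:
            last_err = err                    # try the next region
    raise last_err                            # all regions failed
```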
Systems should be designed with resilience in mind, using fallbacks such as asynchronous replication, local or distributed caching, and durable queueing so that a temporary disruption in one service doesn’t cascade across the entire application. These patterns help ensure continuity even when upstream components become unavailable or slow.
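The sketch below illustrates two of these fallbacks under simplifying assumptions: reads fall back to a last-known-good local cache when the primary store is unreachable, and writes are buffered in a queue for later replay. The in-memory cache and queue stand in for whatever durable equivalents (Redis, disk, SQS) a real system would use, and fetch_from_primary is a placeholder.

```python
# Hedged sketch: degrade gracefully instead of failing outright when the
# primary data store is unreachable.
import queue
import time

local_cache = {}               # last-known-good reads (stand-in for Redis, disk, etc.)
write_buffer = queue.Queue()   # stand-in for SQS or another durable queue

def read(key, fetch_from_primary):
    try:
        value = fetch_from_primary(key)
        local_cache[key] = (value, time.time())
        return value
    except Exception:
        if key in local_cache:
            value, cached_at = local_cache[key]
            return value       # stale but available: degrade, don't cascade
        raise

def write(key, value):
    # Enqueue instead of writing synchronously; a background worker
    # drains the buffer once the primary store recovers.
    write_buffer.put((key, value, time.time()))
```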
Client-side resilience also plays a major role. The incident highlighted how large numbers of uncoordinated retries can overwhelm already-degraded services, turning a localized failure into a widespread event. Implementing exponential backoff with jitter, circuit breakers, and request shedding allows systems to back off gracefully rather than contributing to overload during partial outages.
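A common form of the first of these patterns is exponential backoff with “full jitter”, which AWS’s own retry guidance recommends. The sketch below wraps any flaky remote call; the attempt limit and timing constants are illustrative.

```python
# Hedged sketch: exponential backoff with full jitter around a flaky call.
import random
import time

def call_with_backoff(call, max_attempts=5, base=0.2, cap=10.0):
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise                          # give up after the last attempt
            # Full jitter: sleep a random amount up to the exponential ceiling,
            # so retries from many clients spread out instead of synchronizing.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```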
DNS proved to be another critical weak point. Organizations should consider more resilient DNS strategies, such as using custom resolvers, lowering TTLs to improve responsiveness to failovers, and introducing internal fallback mechanisms that reduce reliance on any one managed DNS provider. These approaches help contain the blast radius when a control-plane component falters.
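One way to reduce that reliance is to keep a last-known-good answer on the client, so a DNS control-plane failure does not immediately take the application down. The sketch below caches successful lookups and serves the cached addresses, up to a bounded staleness, when resolution starts failing; the hostname and staleness window are illustrative.

```python
# Hedged sketch: fall back to cached DNS answers when resolution fails.
import socket
import time

_dns_cache = {}   # hostname -> (ips, resolved_at)

def resolve(hostname, max_stale=3600):
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        ips = sorted({info[4][0] for info in infos})
        _dns_cache[hostname] = (ips, time.time())
        return ips
    except socket.gaierror:
        ips, resolved_at = _dns_cache.get(hostname, (None, 0))
        if ips and time.time() - resolved_at < max_stale:
            return ips            # serve the last-known-good answer
        raise                     # no usable fallback: surface the failure
```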
Finally, resilience needs to be validated continuously. Running chaos engineering experiments that deliberately stress or disable control-plane dependencies, like DNS, load balancer health checks, or internal metadata services, can reveal hidden fragilities before they matter. Coupled with a clear incident-response plan that addresses DNS rebuilds, throttling of internal operations, and controlled scaling under stress, organizations can ensure they are better prepared. As seen during the outage, even AWS had to throttle EC2 launches to restore stability, underscoring the importance of having predefined procedures for managing cascading failures.
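As a small-scale example of this kind of experiment, the test below simulates a DNS outage by patching name resolution and then checks that the fallback resolver sketched earlier keeps serving its last-known-good answer. It is a unit-level stand-in for a fuller chaos experiment, not a substitute for one.

```python
# Hedged sketch: a chaos-style test that simulates a DNS outage.
# `resolve` is the fallback resolver from the previous sketch.
import socket
from unittest import mock

def test_survives_dns_outage():
    # Prime the cache while DNS is healthy.
    ips = resolve("example.com")
    assert ips

    # Simulate the outage: every lookup now raises a resolution error.
    with mock.patch("socket.getaddrinfo", side_effect=socket.gaierror):
        assert resolve("example.com") == ips   # served from last-known-good
```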
