Google Cloud’s Professional Services Organization (PSO) has released a detailed guide on chaos engineering for cloud-based distributed systems. It argues that deliberately injecting failures is essential for building resilient architectures, and it provides open-source recipes and practical guidance for applying controlled disruption testing in Google Cloud environments.
The Google Cloud team addresses a critical misconception in the industry: that cloud providers’ SLAs and built-in resiliency features automatically protect business applications. Applications that are not designed to tolerate faults, or that assume uninterrupted service availability, will fail when cloud services go down, regardless of what the underlying infrastructure promises.
The framework outlined by Google Cloud is built on five fundamental principles. First, teams must establish a “steady state hypothesis” defining what normal system behavior looks like before introducing disruptions. Second, experiments should replicate real-world conditions that systems might encounter in production. Third, and most distinctively, chaos experiments should run in production environments with real traffic and dependencies; this is what differentiates chaos engineering from traditional testing approaches.
The fourth principle emphasizes automation, treating resiliency testing as a continuous process rather than a one-off event. The fifth calls for a thorough assessment of each experiment’s “blast radius”: teams should sort applications and services into tiers based on how severely a failure could affect customers.
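To make the first principle concrete, here is a minimal sketch of what a steady state check might look like in Python, probing a service and comparing latency and error rate against agreed thresholds. The endpoint and threshold values are hypothetical and not taken from the Google Cloud guide.

```python
import time
import requests  # third-party HTTP client

# Hypothetical steady state thresholds; real values come from your SLOs.
SERVICE_URL = "https://example.com/healthz"
MAX_P95_LATENCY_MS = 300
MAX_ERROR_RATE = 0.01
SAMPLES = 50

def steady_state_ok() -> bool:
    """Probe the service and compare latency/error rate against the thresholds."""
    latencies, errors = [], 0
    for _ in range(SAMPLES):
        start = time.monotonic()
        try:
            resp = requests.get(SERVICE_URL, timeout=2)
            if resp.status_code >= 500:
                errors += 1
        except requests.RequestException:
            errors += 1
        latencies.append((time.monotonic() - start) * 1000)
    p95 = sorted(latencies)[int(0.95 * len(latencies)) - 1]
    return p95 <= MAX_P95_LATENCY_MS and (errors / SAMPLES) <= MAX_ERROR_RATE

# The hypothesis must hold before any fault is injected; otherwise abort.
assert steady_state_ok(), "System is not in its steady state; do not run the experiment."
```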
The practical implementation includes six key practices:
- Define steady state metrics, such as latency and throughput.
- Formulate testable hypotheses, such as “deleting this container pod will not affect user login” (see the sketch after this list).
- Start in controlled non-production environments, then expand to production.
- Inject failures both directly into systems and indirectly through environmental changes.
- Automate experiment execution using CI/CD pipelines.
- Derive actionable insights from the results.
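As referenced in the list above, a direct fault-injection test of the pod-deletion hypothesis could be scripted imperatively. The sketch below uses the official Kubernetes Python client; the namespace, label selector, and login health URL are hypothetical placeholders.

```python
import time
import requests
from kubernetes import client, config  # official Kubernetes Python client

# Hypothetical targets; adjust to your own cluster and application.
NAMESPACE = "default"
LABEL_SELECTOR = "app=auth"  # pods backing the login service
LOGIN_URL = "https://example.com/api/login/health"

def login_still_works() -> bool:
    """Cheap proxy for 'user login is unaffected'."""
    try:
        return requests.get(LOGIN_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False

def delete_one_pod() -> str:
    """Inject the fault: delete a single pod matching the label selector."""
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items
    if not pods:
        raise RuntimeError("No pods matched the selector; nothing to test.")
    victim = pods[0].metadata.name
    v1.delete_namespaced_pod(victim, NAMESPACE)
    return victim

if __name__ == "__main__":
    assert login_still_works(), "Steady state not met; aborting the experiment."
    pod = delete_one_pod()
    print(f"Deleted pod {pod}; verifying the hypothesis...")
    time.sleep(10)  # give the platform a moment to reschedule the pod
    assert login_still_works(), "Hypothesis rejected: login was affected."
    print("Hypothesis held: deleting the pod did not affect user login.")
```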
To make it easier to get started, Google Cloud recommends Chaos Toolkit, an open-source Python framework with a modular design and extension libraries for Google Cloud, Kubernetes, and more. The PSO team has published a full set of Google Cloud recipes on GitHub, each tackling a specific failure scenario.
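Expressed declaratively with Chaos Toolkit, the same pod-deletion experiment might look roughly like the sketch below, written here as a Python dict that is dumped to the JSON file the chaos CLI consumes. The URL, namespace, and label selector are hypothetical, and the termination action assumes the chaostoolkit-kubernetes extension is installed.

```python
import json

# A minimal Chaos Toolkit experiment, expressed here as a Python dict.
experiment = {
    "title": "Deleting an auth pod does not affect user login",
    "description": "Terminate one pod behind the login service and verify the steady state.",
    "steady-state-hypothesis": {
        "title": "Login endpoint responds with HTTP 200",
        "probes": [
            {
                "type": "probe",
                "name": "login-endpoint-healthy",
                "tolerance": 200,
                "provider": {
                    "type": "http",
                    "url": "https://example.com/api/login/health",  # hypothetical
                    "timeout": 3,
                },
            }
        ],
    },
    "method": [
        {
            "type": "action",
            "name": "terminate-one-auth-pod",
            "provider": {
                "type": "python",
                "module": "chaosk8s.pod.actions",  # from chaostoolkit-kubernetes
                "func": "terminate_pods",
                "arguments": {"label_selector": "app=auth", "ns": "default", "qty": 1},
            },
        }
    ],
    "rollbacks": [],
}

with open("experiment.json", "w") as fh:
    json.dump(experiment, fh, indent=2)
```

Saved this way, the file would typically be executed with `chaos run experiment.json`, which checks the steady-state hypothesis before applying the method and re-checks it afterwards to decide whether the hypothesis held.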
Chaos engineering has evolved considerably since its inception. Major technology companies around the world now practice it, each developing its own methods to fit its infrastructure needs.
In 2010, Netflix created Chaos Monkey, which tests system stability by randomly terminating instances and services in Netflix’s cloud architecture. The company later added the Simian Army, a suite that includes tools such as Latency Monkey, which introduces artificial delays, and Chaos Kong, which simulates the failure of an entire AWS region. In 2014, Netflix introduced Failure Injection Testing (FIT), which added more precise control over failure injection by propagating failure-simulation metadata through its systems.
Around the same time Netflix developed Chaos Monkey, Google introduced Disaster Resilience Testing, or DiRT, which regularly and automatically exercised Google’s systems to confirm they were prepared to respond to and recover from disasters. Over time, DiRT evolved into an annual, multi-day testing event that checks Google’s disaster preparedness across the organization.
AWS has its own chaos engineering tool, AWS Fault Injection Simulator (FIS), a fully managed service for running fault injection experiments. It simulates real-world AWS faults and integrates with tools such as Chaos Toolkit and Chaos Mesh, expanding the range of failures that can be tested. AWS has also built a Scenarios Library of pre-built experiments that test application resilience and make chaos engineering easier to adopt; it includes experiments such as “AZ Availability: Power Interruption”, which simulates a power outage in a specific availability zone.
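As a rough illustration, kicking off a pre-defined FIS experiment template from Python with boto3 could look like the following sketch. The template ID is a hypothetical placeholder, and the template itself (its targets, actions, and stop conditions) would be created separately in AWS, for example from the Scenarios Library.

```python
import boto3

# Hypothetical experiment template ID; FIS templates are created separately
# (e.g., from the Scenarios Library) and referenced here by ID.
TEMPLATE_ID = "EXT123EXAMPLE"

fis = boto3.client("fis")

# Start the fault injection experiment defined by the template.
response = fis.start_experiment(experimentTemplateId=TEMPLATE_ID)
experiment_id = response["experiment"]["id"]

# Poll the experiment state (e.g., pending, running, completed, stopped).
state = fis.get_experiment(id=experiment_id)["experiment"]["state"]["status"]
print(f"Experiment {experiment_id} is {state}")
```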
Modern architectures have shifted from monolithic to microservices-based systems, a change that has increased the complexity of service dependencies. Cloud environments are highly distributed, with applications spread across multiple availability zones and regions, and that distribution creates many potential failure points that traditional testing methods often struggle to cover fully.
