Transcript
Torkura: I’m going to be talking on a very nice topic, cloud attack emulation, leveraging the attacker’s advantage for effective defense. I’m one of the founders and the CTO at Mitigant. We’re a cloud security company based in Potsdam. Potsdam is very close to Berlin. I’ve spent about 12 years in cybersecurity. I’ve done academic research, also worked in different companies. Also, I’m one of the pioneers of what we call security chaos engineering. A lot of the concepts are based on this idea. I’m also a five-time member of the AWS Community Builders program.
The agenda is pretty much straightforward. We’re going to be looking at the following points, what I refer to as the attacker’s perspective. Then different aspects of cloud attack emulation. We’ll just go through it and see how we can apply the idea of cloud attack emulation to these concepts, and also see the limitations in what we do today. Then we also look at threat-informed defense. Then we conclude.
I want to start this talk with a quote from Sun Tzu. A very popular philosopher. He said that if you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained, you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.
Essentially, I know we are not fighters here, we are not in the army or in the military, but over the years, a lot of people have used this saying and a lot of teachings from his writings, and they’ve used it to improve themselves. In terms of cybersecurity, there’s a lot that we can learn from here, because, essentially, the moment we deploy things on the internet, on the flip side, there are people whose job is to make sure or to try to attack these resources, this infrastructure, and essentially, at the end of the day, we find ourselves in a battle. We are trying to protect stuff, and on the other end, there are people who actually want to get into this stuff, they want to take over it, they want to compromise it. There’s a lot to learn from here. The central part which I want you to take away is, it’s very important for us, if we want to do a good job of defending our cyber resources, to take the time to understand the attacker, to always have the attacker at the center of what we are doing.
The Attacker’s Perspective
The attacker’s perspective, what is it? I think it’s very important to strive to understand your infrastructure, to view it from an attacker’s standpoint, because, usually, you have infrastructure, you deploy it, and obviously, most of the time, we are looking at it from a user’s standpoint. You want the users to be happy. You want them to have the best experience.
On the other hand, there are attackers who are on the opposite side, they want to deny this goal that you have in mind. There’s another thing I also want to talk about, the assume breach mindset. This mindset is very popular, a lot of people talk about it. Rather than assuming that your infrastructure cannot be breached, that it’s rock solid and bulletproof, people with this mindset start from the position that a breach may have already happened. It’s an evidence-based approach, very similar to other empirical sciences like medicine: they want proof that they have not been attacked before they accept that they are not attacked.
When we look at what we’ve been doing in cybersecurity domain, somehow, we do these things, we look at it, we look at our infrastructure from an attacker’s perspective. Sometimes we do this mechanically, and so we don’t really do it efficiently. What I call passive is where we do threat modeling exercises where we come together, it’s like a game, you try to understand your infrastructure, you try to identify risks, gaps, and so forth. It’s very similar to tabletop exercises. In the end, a lot of it is very qualitative in nature. You’re biased. Most of the time people come out and say, yes, everything is great. On the other hand, we also have what I refer to as active security measures, things like security assessment, penetration testing, bug bounty programs, all of these are active, what I see, you’re knocking the door. You’re actually touching these resources and getting feedback.
Based on that feedback, you arrive at some conclusions, which help you understand what actions you have to take. Compliance is not security, and this is a big problem in the industry. I’m not against compliance, but compliance is all about checkboxes. You want to prove to some person out there that everything is fine. It’s like window dressing, you dress everything nicely. They come, they give you a certificate, they give you all the green check marks, and they go. That’s all. It’s not security, because security is continuous. After you get all these certificates, PCI DSS and the rest, you see every company has all this nice stuff on their homepage. That’s a snapshot in time. It’s not the reality. That’s why you see a lot of companies, regardless of how big they are, still get attacked: compliance is a snapshot in time, not a proof of the real security status of an organization.
We have these three security controls. Usually, this is what we refer to as a defense in depth security architecture. On the front, we’ve got preventive security controls. You have things like firewalls and so forth. They’re designed to prevent attackers from even getting into your system in the first place. Then we have detective controls, which are designed to say, because you cannot stop everybody from getting in, you can’t stop all the attackers, if somehow they’re inside, how do you get to know? Detective controls. We will look at some examples later. They’re designed to look at patterns, to look at signals, to look at indicators of compromise, and to arrive at a conclusion: it seems we have been breached, this is what the attacker is doing, and this is how we can go about kicking him out. Kicking him out is basically what you do in recovery. You’re actively in the mode of trying to figure out what the attacker has done and what you can do about that.
For example, you have things like ransomware, where the attackers actually tell you, “I have attacked you. I have this, you have to give me that, otherwise I will do that”. In this case, it’s like damage control, and you’re just running around or trying to spend money to hire or bring in some specialists that will help you to do the recovery. What is not really talked about is security testing or validation, or some people actually call it security assurance. At the center of this idea is that regardless of what you’re doing, whether it’s preventive, detective, or recovery, you have to validate. You have to check. There is this saying that hope is not a strategy. You can’t hope that you’re secure, you actually have to check and validate.
Who is a software engineer here? I always tell my cybersecurity colleagues, we got a lot to learn from software engineers, because once you start your software engineering career, you start learning, within a few months or maybe weeks, or maybe on the same day, you’re already being taught about testing. You can’t assume that the code that you wrote is going to behave the way the requirements were provided. At the center of this is the user. Even if you’re not actively having the user in mind, you might have in mind your code reviewer or your manager.
Actually, at the end of the day, it’s the customer you’re looking at, because you don’t want a bad user experience. You have all these tests, unit tests, integration, smoke, load test performance, A/B testing. I really like the way Netflix does it. They have this A/B testing where they do very expansive and detailed testing where every single feature they’re going to be releasing, they’re going to release it to different sets of people, and they want to measure how these people respond, how they interact. At the end, they want to select. The selection is not based on how they feel. It’s not based on who brought up the idea. It’s not based on how innovative it is. It’s purely based on how users interact with it. That’s a very nice way for looking at quality of whatever we’re presenting to our users.
In security, also, we have some testing. We have already talked about penetration testing, web application testing, red and purple teaming, adversary emulation, bug bounty programs, all of this stuff. There is a bunch of problems with these kinds of testing which I would like to highlight. What are the problems or the limitations? I’m actually narrowing this talk specifically to the cloud. Most of the modern infrastructures or applications are built in the cloud. On the right here, you have this diagram which is called the 4C’s of cloud native security. It’s a diagram that was proposed by the Kubernetes security team. They actually wanted to allow or to help people to understand the complexity of a cloud native environment which, as you see, you have code, you have container, you have the cluster, and you have the underlying cloud infrastructure. What’s the problem here? Most security tests are superficial.
Usually, some of us here, maybe if you’re working with maybe your CISO or head of security, and I’ve seen that before where if they’re testing, they say, “Just test here. Don’t touch here. This is special. Don’t tamper with this one. This is the scope of your testing”. That’s great. You don’t want things to spoil. You don’t want things to break. It’s superficial. It’s just at a single layer. You don’t have control when the attackers come in. You can’t define where they should attack. The other thing is context. When it comes to cloud, the cloud is moving rapidly, and most of the security tools are still struggling to align with what the cloud is. The other problem is that of vulnerability.
Every now and then, maybe if you’re listening to the news, you hear people talking about vulnerabilities, and everybody is running around the whole place because they found a vulnerability that has a base score of 10 on the CVSS base score, 10 is the highest, and everyone is concerned about it. In reality, nobody is going to exploit it. It doesn’t make sense. There has to be some context around that. The most important point, which is the center of this talk, is, attackers are not talked about. You’re preventing these attackers from getting into your system, but most of the time, you’re talking about auditors. You’re talking about your company. You’re talking about the CVEs. Attackers are not in scope of all these kinds of assessments.
Cloud Attack Emulation
Let’s talk about cloud attack emulation. What is it? It’s basically a specialized form of adversary emulation where we are looking specifically at the cloud. Adversary emulation is about mimicking tactics, techniques, and procedures, which are just a description of how attackers behave. We are looking at how this concept can be applied to the cloud, because the cloud has its own specialties. Surely, the goal is to evaluate and enhance an organization’s cloud security posture. Let’s look at two aspects here: cloud security testing and threat-informed defense. These are the two aspects I want to focus on during this talk. Let’s start with threat detection. Who has heard of threat detection? Threat detection is all about, as I said before, identifying whether there is any malicious behavior or there are indicators of compromise in your environment.
Regardless of how huge or how fancy a security tool is that does threat detection, what you see here is the very center of the design. They’re getting logs from different sources. They are putting it in a centralized bucket. Then they’re trying to do things like query. This is an example of a threat detection system, like a threat detection engine. There it’s basically running a query against AWS CloudTrail logs to identify an action where the API call was GetSecretValue. This API call is to get secret value from Secrets Manager. What can go wrong here? This is a very good example. Last year, November or so, AWS released a new version of the Secrets Manager. They released a new API called BatchGetSecretValue, which essentially allows you to get as many as, I think, about 20 secrets at once.
Before then, you had to get one secret at a time, so you had to make a loop to get multiple. With this, you can just harvest as many secrets as possible. This is great from a performance standpoint, from a development standpoint. From a security standpoint, it means that before you know what’s happening, an attacker has already harvested everything from your Secrets Manager. That’s very bad. This is what you will see in a CloudTrail record. Here, you can see 10 secrets were collected by this event. What we realized when we were investigating state-of-the-art threat detection tools is that they are not able to detect this, purely because, basically, it’s very simple: you just need to add that CloudTrail event name to the query. That’s all.
Unfortunately, there’s no magic around threat detection. People have to actually sit down, think about it, and add this kind of query to make it work. All of threat detection works like this, and so, in the end, there are a lot of gaps. If a cloud service provider introduces new APIs, for example, most systems are blind. Attackers will have a field day using these new APIs or these new features without being detected.
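To make that concrete, here is a minimal, hypothetical sketch of what a detection rule covering both Secrets Manager read APIs could look like over raw CloudTrail records. The record fields used (`eventSource`, `eventName`, `userIdentity.arn`) are standard CloudTrail, but the `secretIdList` shape for the batch call and the threshold of 5 are illustrative assumptions, not the rule from any actual SIEM.

```python
# Hypothetical sketch of a detection rule covering both Secrets Manager
# read APIs. The secretIdList shape and the threshold are assumptions
# made for illustration.

SECRET_READ_EVENTS = {"GetSecretValue", "BatchGetSecretValue"}

def detect_secret_harvesting(events, threshold=5):
    """Return the identity ARNs that read `threshold` or more secrets.

    A single BatchGetSecretValue call can return many secrets at once,
    so we count secrets retrieved, not API calls made.
    """
    counts = {}
    for event in events:
        if event.get("eventSource") != "secretsmanager.amazonaws.com":
            continue
        if event.get("eventName") not in SECRET_READ_EVENTS:
            continue
        arn = event.get("userIdentity", {}).get("arn", "unknown")
        if event["eventName"] == "BatchGetSecretValue":
            # Weight the batch call by how many secrets it requested.
            n = len(event.get("requestParameters", {}).get("secretIdList", [])) or 1
        else:
            n = 1
        counts[arn] = counts.get(arn, 0) + n
    return {arn for arn, total in counts.items() if total >= threshold}
```

A rule that only matched `GetSecretValue` would miss the batch path entirely; adding one event name to the set closes the gap, which is exactly the kind of fix the talk is describing.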
Let’s look at incident response. This is basically the incident response workflow: preparation, detection and analysis, containment, eradication, and recovery, then post-incident activity. Importantly, this is a graph from a recent CrowdStrike threat report. What I want you to understand here, you can see the red part, the cloud-agnostic cases. These are attackers who, before, when they got into a cloud environment, were not aware of it. They are attacking a system, they find themselves in a VM, and they don’t know if it’s a VM on-premises or in the cloud. Now we have cloud-conscious attackers. The moment these guys are in a VM, they know it’s an EC2 instance. They know that in this VM there’s going to be an IMDS or something that allows them to get secrets or access, so they can inherit whatever access the VM has, and from there jump into the control plane and just explore. They are getting wiser. They are getting smarter. These kinds of attacks increased 110% over the last year.
Bear in mind that CrowdStrike has a huge suite of security systems, so they collect data from everywhere, and so what you see here is the reality, the state of the art, the way attackers have improved. When it comes to incident response, one of the ways to overcome these kinds of attackers, as they get more sophisticated, is to run incident response exercises. As you see here, this is the AWS guide on this. Basically, you have to run, some people call it simulations, I call it emulation. The difference is that a simulation only approximates an attack, while an emulation executes the real thing. Imagine pilots: they go into a simulator, they train, but before they become pilots, they have to get into the aircraft, and that’s the reality. Emulation, you can look at it as that part.
With simulation, there can still be false positives, but we’re talking about emulation here, so it is the real deal. That’s one way to improve your incident response. If we come back to this diagram, it means that if you have a system that is supposed to detect and respond, you don’t assume that it’s going to detect and respond. You have to do some emulation so that you can have confidence that it actually will perform if you are attacked.
Let’s look at a very quick example. What you see on the left here is a server-side request forgery. It’s an attack that was conducted around 2019 against Capital One. Capital One is a fintech, one of the biggest, and it suffered this high-profile attack. The attacker was actually a former AWS employee, so she was very much aware of all that stuff. She got into the AWS EC2 instance and knew that there is a metadata service. As I said before, the metadata service basically allows applications to interact automatically without any human effort. If you ask the metadata service for its credentials, it will give them to you, and if the credentials are root access, or admin access, it means that you automatically have admin access to the entire account. If it is an AWS organization, it means the entire organization. That’s how attackers take over.
From here, she went on, got access to AWS IAM, got access to the S3 buckets. There were CloudTrail logs here, but nobody was analyzing them. They were just piling up. People pile them up, and when there’s an incident, that’s when they go harvest these logs and begin to look at them, to do forensic analysis, and so forth. Because we’re talking here about playbooks and runbooks and the need to validate, this is a document about how to take care of this attack. It’s basically a runbook provided by AWS. At the end of the day, the countermeasure here is actually to use IMDS version 2, which solves the problems of version 1. If you run this document, it will identify instances that are using version 1 and upgrade them to version 2. The point I want to make here is that the services in AWS change very fast.
If you have such a runbook and you haven’t even played with it for some time, you’re not sure, if there is an attack, that everything will work. You have to continuously validate. Could be just a very simple line of command that you need to change. It could be something more than that. It could be that resources have moved around. Anything can happen. Validation is super important. The way you do that is by running attacks. This is just an example from our system where we are basically running this SSRF very easily just to allow you to validate your runbooks.
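As a sketch of the kind of check such a runbook performs, here is a minimal Python helper that flags instances still accepting IMDSv1, assuming instance dicts in the shape that EC2’s DescribeInstances returns. The remediation function at the bottom uses the real EC2 ModifyInstanceMetadataOptions API, but treat the whole block as an illustration, not the AWS document itself.

```python
def instances_allowing_imdsv1(instances):
    """Return IDs of EC2 instances that still answer IMDSv1 requests.

    `instances` is a list of instance dicts as returned by
    DescribeInstances: IMDSv2 is enforced only when MetadataOptions
    has HttpTokens set to "required".
    """
    return [
        inst["InstanceId"]
        for inst in instances
        if inst.get("MetadataOptions", {}).get("HttpTokens") != "required"
    ]

def enforce_imdsv2(instance_ids, region="eu-central-1"):
    """Upgrade the given instances to require IMDSv2 session tokens.

    Needs AWS credentials and boto3; shown here for completeness.
    """
    import boto3  # imported lazily so the checker above stays dependency-free
    ec2 = boto3.client("ec2", region_name=region)
    for instance_id in instance_ids:
        ec2.modify_instance_metadata_options(
            InstanceId=instance_id, HttpTokens="required")
```

Running the checker regularly is the validation part: if it ever returns a non-empty list, the runbook has drifted or a new instance slipped through.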
Let’s look at purple/red teaming. Anybody heard about that, purple/red teaming? It’s taken from the military where they have red team, they have blue team, and basically, blue team is defending, red team is attacking. They play a cat and mouse game. In the end, they’re able to identify gaps. On the side of the blue team, they’re able to see whether their defenses are efficient. You see here that this is actually the way GitLab practices it.
Basically, you see, if we just concentrate on the attack emulation part, the red parts are the responsibilities of the red team. Validate, detection, and response, is what the blue team does. That’s their responsibilities. They do this, actually, without telling the SOC team. No one is aware, only a few people. The attack might be going on for like a month. The guys here are basically in a state of pandemonium, running around. At the end of the day, when things are getting bad, they will tell them, it’s an exercise. At the end of the day, you see that they get better. Maybe you’ve not really heard about GitLab in the news about attacks and so forth, because this is what they practice.
Now let’s look at cloud penetration testing. This diagram is basically similar to the one we saw before. It’s just drawn another way here, I just redrew it. You can see the red lines are attack paths. You see that an attacker has the advantage. There are so many approaches that they can use to attack the cloud. They can start from the code. They can start from the Kubernetes cluster. They can start from the cloud control plane. They can start from the Docker image itself. They can be in there before you deploy. How do you use traditional penetration testing to cover this complex environment? How do you do it? There are a bunch of problems with penetration testing. The traditional kind, first, is expensive. Most of us, if you have been involved, know how much you have to pay consultants.
Then, because it’s expensive, people do it once or twice a year. Then you have this huge window of opportunity where attackers can get in. Compliance: people just want to make the auditors happy. Superficial: you’re just checking some defined, narrowed-down scope of items. Periodic: once or twice a year. In the end, people are happy. They say, yes, we did a pen test. It’s actually a false sense of security, because the value of the pen test report is tied to the timestamp when it was done. Don’t assume it goes further than that. That’s the reality. Essentially, people have this false sense of security. Here is something that might be better. When you look at it, you see continuous testing. On the left, you’re looking at when the cloud infrastructure changes.
Essentially, every time you ship, you deploy Terraform or CloudFormation templates or whatever you’re deploying, there’s a possibility that you’re shipping gaps, vulnerabilities, and so forth to the cloud. That’s an opportunity for you to test, because that’s exactly what happens in software development. Each time there’s a new feature, there are tests, different kinds of testing. I tell my security folks that that’s the way to go. We have to look at software development. We have to learn the way they do testing. That’s the way we can actually be secure. This diagram puts that in context. You can see that if the security team identifies that a test failed, they just stop the deployment and send it back. Of course, there’s going to be some compromise here. You don’t want to become gatekeepers. You want to allow things to move fast. There’s got to be some context.
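A hypothetical sketch of such a gate: run the emulation suite after each deployment and block the rollout if any emulated attack went undetected. The names here are invented for illustration; the point is the shape of the check, which works exactly like a failing unit test in CI.

```python
def deployment_gate(emulation_results, allowlist=()):
    """Decide whether a deployment may proceed.

    `emulation_results` maps attack name -> whether the SIEM detected it.
    `allowlist` names known, accepted gaps, so the gate doesn't become a
    permanent blocker while a detection rule is still being written.
    Returns (ok, undetected_attacks).
    """
    undetected = sorted(
        attack for attack, detected in emulation_results.items()
        if not detected and attack not in allowlist
    )
    return (len(undetected) == 0, undetected)
```

The allowlist is the "compromise" mentioned above: the team can consciously accept a gap for a sprint without the pipeline grinding to a halt, while still failing loudly on anything new.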
Let’s talk a little bit about security for GenAI. The good thing is you can also apply this concept to GenAI. What we see here is Amazon Bedrock on AWS. In the front, you’ve got the chatbot. This is a restaurant booking application where the restaurant’s customers can go to ask questions. What kind of menu do you have? Do you have vegetarian options? Do you have a kids’ menu? This is the architecture here, very simple. There are a bunch of problems that might exist in this system. We just want to look at one, which is data poisoning. The data poisoning attack here comes from the fact that, in the AWS RAG architecture, the documents that ground the model’s answers are kept in an S3 bucket. There are other options, but this is the most popular approach. They’re in S3 buckets, and it means that everything we knew about S3, all the problems, is applicable here. Attackers can easily just go to the bucket and do some stuff. It could be from any direction. Here we see that it’s possible for an attacker to start from the knowledge bases, because if you go to the Bedrock Knowledge Bases and ask, where is your datastore? It will tell you where the datastore is.
The datastore, usually it will tell you, could be OpenSearch, where they keep these documents in their databases. It could be different formats. In the end, the knowledge base will give you that information. From there, you can begin the attack. Here, the attacker first disabled S3 access logging so that no one becomes aware when they begin to make calls. Then they add malicious data into the bucket. In the end, this malicious data becomes part of what the model draws on to answer questions, so the agent is effectively poisoned. That’s basically the concept of data poisoning. What happens? When you do this, today AWS really doesn’t have a way to tell you or to identify this problem. The best I have seen is, if you go to GuardDuty, you will see something like that.
Basically, what GuardDuty is telling you is that it saw someone writing into a bucket, and this operation seems to be malicious. If you haven’t seen it before, you are completely lost, because someone writing to a bucket, no problem. There’s also a chance that you completely ignore it. Another problem with GuardDuty is that it will record this event three times, and after that it will not record it, because it becomes part of the baseline. If an attacker is there doing it continuously, you as a defender don’t get the opportunity to get a warning.
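Since the attacker’s first move here was turning off logging, one cheap, independent defense is a drift check over the knowledge-base buckets. A minimal sketch, assuming configs in the shape that S3’s GetBucketLogging returns (a dict that contains a `LoggingEnabled` key when logging is on, and is empty when it is off):

```python
def buckets_without_access_logging(logging_configs):
    """Return buckets whose server access logging has been disabled.

    `logging_configs` maps bucket name -> the GetBucketLogging response
    body; when logging is enabled, the response contains a
    "LoggingEnabled" key, and when it's off, the key is absent.
    """
    return sorted(
        bucket for bucket, cfg in logging_configs.items()
        if "LoggingEnabled" not in cfg
    )
```

Running this on a schedule against the RAG document buckets turns "someone silently disabled logging" from invisible into an alert, independent of GuardDuty's three-strikes baseline behavior.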
Threat-Informed Defense
Let’s go to threat-informed defense. Threat-informed defense, by definition, is the systematic application of a deep understanding of adversary tradecraft and technology to improve defenses. It’s basically an idea proposed by MITRE Engenuity. They looked at all the problems we face, all the problems with just making decisions based on vulnerabilities, and said, we can do better by actually looking at attackers. Let’s look at what this means. Usually, if you have a system to defend, you’re looking at the vulnerabilities, and you have tools that will just scan it and tell you, you have 1,000 vulnerabilities.
If it’s a small number, it’s not a problem. When it becomes that many, where do you start from? How do you fix this problem? You’re overloaded. This is a task: you have to fix them one after the other. Threat-informed defense is the idea that when you have such vulnerabilities or weaknesses, you should associate them directly with a threat actor, someone who will actually take that knowledge and use it against you. That relationship is evidence you can use to prioritize the resources you have.
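A toy sketch of that prioritization idea: findings tied to a technique that relevant threat actors actually use rank above findings that only have a high severity score. The finding and intel shapes here are invented for illustration, not any vendor’s schema.

```python
def prioritize_findings(findings, actor_techniques):
    """Order findings for remediation: threat-actor-linked first.

    `findings` is a list of dicts with "id", "cvss", and an optional
    ATT&CK "technique" ID; `actor_techniques` is the set of technique
    IDs observed in threat intelligence for actors relevant to you.
    Within each group, higher CVSS still comes first.
    """
    return sorted(
        findings,
        key=lambda f: (f.get("technique") not in actor_techniques,
                       -f.get("cvss", 0.0)),
    )
```

Note what this changes: a CVSS 10 with no known exploitation by anyone targeting you drops below a CVSS 6.5 that a tracked group is actively abusing, which is exactly the "nobody is going to exploit it" argument from earlier, made mechanical.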
The first pillar of threat-informed defense is defensive measures. Has anyone seen this before? Basically, this is the MITRE ATT&CK matrix. On the top, you see reconnaissance, resource development, initial access. It basically describes how attackers get into a system and how they move from one point to the other. Below, you have what we call the techniques, which are exactly how attacks are conducted. You also have procedures. This information is collected from different researchers, from different companies that have access to your machines, my machines. They collect all this information, and when they see evidence of an attack, they contribute it to this database. Whatever you see in this database is not a hypothesis. It’s not someone’s theory. It’s evidence of what people have seen. It’s more tangible. The second pillar is cyber threat intelligence. Intelligence is all about getting information: this time around, information about specific individuals, threat actors, or attacker groups that are doing stuff.
For example, we have Scattered Spider. Scattered Spider is the group that was responsible for the MGM Resorts attack last year. They basically got into this nice hotel, and they attacked it, and key locks, smart locks were not working, computers were not working, ATMs were not working. Everybody was stranded. They said, if you want us to release all that stuff, you have to give a certain amount of money. They’re still actively involved. Cyber threat intelligence basically tells you such information. You see here the U.S. CISA, they release this information about these kinds of groups. They tell you who they are. They tell you the kinds of industries they attack, the kinds of approaches they use. It’s very tangible information about threat actors that you get. You can also put it in this form for using it.
The third pillar is basically testing and evaluation. We’re back to testing. You have information about attackers, you implement some kind of defenses, but you have to evaluate it because something might be wrong. Either your defenses are not well configured, or maybe somehow the threat information you got is wrong because you get it from different sources, or your environment is designed in a quite unique way and it’s a little bit different. You have to test so that you’re very sure. Let’s just look at an example where we are basically putting all this together.
This is the example I talked about before. Here you see an attacker getting access to AWS Secrets Manager. Because he has access to AWS Secrets Manager, he can basically get keys that give him access to things like RDS, Redshift, DocumentDB, every single thing that has keys in that Secrets Manager. Here we are using Datadog SIEM. It’s one of the systems that we looked at. That’s the emulation here, where we basically exercised the two types of APIs for Secrets Manager. What we found, what we refer to as zero-day detections, is that when you use the BatchGetSecretValue API call, it is not covered by the detection rules. It’s not detected, so it’s basically a blind spot. This is just one experiment, but it’s one way you can identify gaps that might be in your system.
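The emulation itself can be as simple as exercising both call paths against dedicated, non-production test secrets and then checking which path the SIEM flagged. This hypothetical helper only plans the calls; actually issuing them (e.g. via boto3's `get_secret_value` / `batch_get_secret_value`) and querying the SIEM afterwards are left out of the sketch.

```python
def secrets_harvest_emulation_plan(secret_ids, batch_size=20):
    """Plan both attacker paths for harvesting the given test secrets.

    Returns a list of (api_call, secret_ids) tuples: one GetSecretValue
    call per secret, plus BatchGetSecretValue calls in chunks of up to
    `batch_size` IDs (the batch API accepts roughly 20 per call).
    Running both paths lets you verify a detection rule fires for each.
    """
    plan = [("GetSecretValue", [sid]) for sid in secret_ids]
    for i in range(0, len(secret_ids), batch_size):
        plan.append(("BatchGetSecretValue", secret_ids[i:i + batch_size]))
    return plan
```

If the SIEM alerts on every `GetSecretValue` step but stays silent through the `BatchGetSecretValue` steps, you have reproduced exactly the blind spot described above.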
Conclusion
I want to conclude with this quote from Mike Tyson. He says, everyone’s got a plan until they get punched in the face. It’s very realistic because you have a plan. It’s nice. You spend days, months writing these policies. You have auditors who came and gave you the green check mark. Everything looks good. Is it going to stand when you’re actually attacked? That’s the question you actually have to ask yourself. The next time security people come to tell you all that stuff, they show all the nice papers, you need to ask them, is it going to stand the test of time? I wrote a nice article, “Getting Punched in The Face”, it’s on LinkedIn. These are some resources. We have done a lot of research around this topic, and you can actually see it on our homepage.
Questions and Answers
Losio: Usually, when we play with the cloud, we tend to think a lot about shared responsibility with the cloud provider, whatever it is, AWS in this example, or Microsoft Azure, or Google. We think some of the security is our responsibility, and some is their responsibility, their data center, whatever. We sometimes tend to put a bit too much of the responsibility on their side: I think, it’s their problem. Where does this fit when I think about shared responsibility with the cloud provider?
Torkura: The shared responsibility model describes the responsibilities of the cloud service provider versus ours. When it comes to most of these things, everything logical, like configuring your gateways, your VPCs, all of that is your responsibility as a customer. For example, incident response: as long as the hardware is not involved in the attack, AWS will tell you, you were supposed to configure it well. Even if you run to them, they will ask you for information that only you can provide. They don’t have root access to your account. There’s some basic information that you have to provide for them. This is a very important question because you really need to understand the shared responsibility model. Things like penetration testing, AWS will not do for you. It’s what you should do as part of your responsibilities.
Participant 1: For hyped stuff like GenAI, AWS and Azure are well-known platforms. They have established security teams, so we can talk about shared responsibility in that regard. But what if we’re talking about Anthropic or OpenAI? Not AWS hosting Sonnet or Azure hosting GPT-4, but purely using the OpenAI-hosted LLMs, for example. They can state whatever they want. What about the trust and all that? Maybe an even more general question about providers that are not that famous, whatever they state. What could be the strategies here?
Torkura: When you’re talking about Anthropic or OpenAI, these are Software as a Service. In Software as a Service, the provider has even more responsibilities. However, for example, if you’re using OpenAI, you get your API key. If an attacker steals it, you can’t go to them and say they’re responsible for that, because it’s your responsibility to have kept the key secure from being stolen. There are all those things. Usually, they will try to explain in some kind of document what you should do and what they should do.
Participant 1: I’m thinking in a slightly different direction. Say the users, the company, or just the end users of a SaaS provider like OpenAI or Anthropic, send prompts. These prompts can contain sensitive information. Say OpenAI has a breach. An attacker will just get that data, since the provider may store it. It’s not something that will never happen. When attackers have this data, it’s simply taken.
Torkura: Sometimes when companies are breached, like the Capital One attack I mentioned, or other ones, the affected parties go after them. Maybe just a recent example, CrowdStrike: they had this problem where airports were shut down and all of that. Some airlines sued them. They say, we lost X amount of money because of your fault, you need to pay us. It becomes a court case. If OpenAI gets breached, and because of that attackers take advantage and begin to attack your company, you can take them to court, because that data was with them, they were responsible for protecting it, and they failed. Some companies take them to court. I’m not sure it’s always the wise way, because in the end it might take a lot of time. Sometimes companies win because it becomes a legal case. Most of the time, all this information is written somewhere in the terms and conditions of service. The legal people like reading those documents, because that’s where they can see it.
Participant 2: A, normally, when you do penetration tests on the cloud providers, they ask you to tell them that you’ll do it. How do you go about a continuous approach? If you tell them, yes, I’m constantly doing pen tests, how do they figure out that the attacks are actually coming from you and not from a malicious source? B, how do you see it with defense in depth? If you have a cloud solution that is constantly testing your defense in depth, you need to give it access to the depth. You open another attack vector.
Torkura: You’re correct in terms of traditional penetration tests. Most of the time you’re testing from outside. You’re looking at the environment externally, and when you start hitting AWS, you get all these alarms, and they say, you need to tell us before you do this, we don’t want to be running around. When it comes to what I’m showing here, attack emulation, we actually have some prior access, which goes to your second question.
Firstly, it means that we are attacking from within, and therefore, AWS doesn’t even know because it’s part of your shared responsibility. You can do it as often as you want, the way you do your normal software testing and they will never ask you a question because it’s happening right inside of your cloud account. In the end, you basically do something similar to penetration testing because you’re knocking all the doors and pushing all this stuff.
You remember I talked about the assume breach mindset. The way we do it is, basically after the attack, we roll back. If we make a bucket public, at the end of the day, part of the process is to take it back to being private. There are also some open-source tools which do not do this, and in the end, you have to put it as part of your work. At the end of the day, you need to clean it up. This is one of the reasons why most people don’t like to do that. In the end, it should be just part of the workflow that, ok, at the end of the day, you have to clean it up and you don’t leave it vulnerable.
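That clean-up discipline can be baked into the emulation tooling itself. A minimal sketch: wrap every attack step in a guard that guarantees the revert runs even if the step blows up halfway, so nothing is left in a vulnerable state.

```python
from contextlib import contextmanager

@contextmanager
def reverted(apply, revert):
    """Apply an attack step and guarantee its rollback.

    `apply` makes the change (e.g. open a bucket policy to the public),
    `revert` undoes it. The finally-block runs the revert even when the
    emulation step in between raises, so the resource always ends up
    back in its original state.
    """
    apply()
    try:
        yield
    finally:
        revert()
```

Used as `with reverted(make_bucket_public, make_bucket_private): run_detection_checks()`, the bucket ends up private no matter what happens inside the block. The helper names are hypothetical; the pattern is what matters.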