World of Software > News

Cloud Security Challenges in the AI Era – How Running Containers and Inference Weaken Your System

News Room
Published 17 November 2025 (last updated 6:37 AM)

Transcript

Olimpiu Pop: Hello everybody. I’m Olimpiu Pop, an InfoQ editor, and I have in front of me Marina Moore. Marina is involved with various aspects of the security space, particularly in platform and Kubernetes. But without further ado, I would like to invite her to introduce herself. Marina.

Marina Moore: Thank you so much for having me. Yes, I'm a research scientist at Edera, a company that develops hardened runtime security solutions to provide the isolation boundary that Kubernetes has been missing. Additionally, I am one of the co-chairs of TAG Security and Compliance, the Technical Advisory Group under the CNCF, which advises CNCF projects on security, provides security assessments of projects, and answers projects' questions, to help improve the overall security of the ecosystem.

Containers don’t provide the isolation level you might think [01:25]

Olimpiu Pop: Those are some great hats you're wearing. Thank you for helping the community learn how to be safer. I believe that containers are currently a comprehensive category. It all started with virtualisation a long time ago; that was the natural step from bare metal to virtual machines. Then some people realised that there are features in Linux that can help make things even more efficient, one thing led to another, and we had containers, and then Docker stole a lot of the thunder. We stopped hearing "it works on my machine" so often. Then we had Kubernetes as an orchestration solution, and other things appeared in the meantime as well. Therefore, it's a good idea to define containers and establish the context for our conversation moving forward. So, what are containers for you?

Marina Moore: So, a container is a unit that packages code and everything needed to run that code. It’s like an abstraction over your program. If you simply write the program, you can compile it or not, and then it will run on your machine. A container puts all the other necessary components on your machine that help it run smoothly. So, it’s not like a whole operating system or anything, but it’s all those pieces of code that you need to solve the ‘it runs on my machine’ problem.

Olimpiu Pop: So, a container is, in fact, a container. So, you have the host operating system, and on top of that, you have processes that contain your code, which act like isolated units.

Marina Moore: Sort of, yes. They act like processes on the computer, which have some isolation properties of their own, of course. They run separately on the CPU, but they share the memory and the kernel of the host operating system, and that is the hurdle.

Olimpiu Pop: As you mentioned earlier in your presentation, it’s not always as isolated as we think.

Marina Moore: No. Yes, the name can be deceptive at times, because a container does contain your code and everything you need to run it. However, it doesn't fully contain it, in the sense of separating it from everything else on the same operating system. If you have multiple containers on one operating system and no other measures in place (most people do use at least some isolation on top of the container), you could list the processes running in other containers or perform other such tasks. So, on its own, the container doesn't actually provide that isolation boundary.

The risks of the various isolation levels of containers [04:05]

Olimpiu Pop: Is this inherent to containers themselves, or is it related to one particular technology? Is this risk present for all types of containers, or just for Docker, which is simply the one we see most?

Marina Moore: Yes, it’s inherent to all of them. People have put in a lot of effort to solve this problem. Tools such as namespaces, cgroups, and seccomp have become industry standards for addressing specific aspects of this problem. All of the technologies I mentioned, as well as others, run in user space. So, they’re also on top of the operating system that we’re sharing between these different containers, but they just provide boundaries. They say these resources should be shared among containers in this way, or these containers should only be able to access these system calls, or other kinds of protections should be implemented.
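
The mechanisms Marina names here can be made concrete. As a rough illustration (the syscall allowlist below is a made-up minimal example, far too small for any real workload), a Docker-style seccomp profile is just a JSON document that denies every system call by default and allowlists the rest:

```python
import json

# Illustrative Docker-style seccomp profile: deny everything by default
# (SCMP_ACT_ERRNO) and allowlist only the system calls the workload
# needs. A real profile requires many more syscalls than this.
profile = {
    "defaultAction": "SCMP_ACT_ERRNO",
    "syscalls": [
        {
            "names": [
                "read", "write", "exit", "exit_group",
                "brk", "mmap", "munmap",
            ],
            "action": "SCMP_ACT_ALLOW",
        }
    ],
}

# Write it out in the JSON form Docker consumes.
with open("profile.json", "w") as f:
    json.dump(profile, f, indent=2)

print("allowlisted syscalls:", len(profile["syscalls"][0]["names"]))
```

Docker would consume a file like this via `--security-opt seccomp=profile.json`; the point is that the policy lives in user space, layered on top of the shared kernel.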

Olimpiu Pop: And what do you say, shall we run through all the risks now? How would you prefer to look at them: pairing each risk with the wiser approach, as you did in the presentation, or simply examining all the risks first and then analysing them?

Marina Moore: Either way. We can look at all of them.

Olimpiu Pop: What are the risks that we as developers encounter while running containers? Because, while it’s on our machine, it’s okay, but after that, in production, it becomes a lot more interesting for malicious actors to try to extract more from it.

Marina Moore: Exactly right. And so, when you’re testing it on your local machine and doing things like that, the isolation you need is different. And we can discuss Apple containers later, which have some exciting properties that Apple added to them, allowing you to achieve isolation even on your own machine for the tasks you’re running. However, regardless, when you have a container in production, it may contain sensitive data, such as customer secrets or other confidential information. You might be running your container on shared infrastructure.

So, what’s called multi-tenancy is that you have multiple containers, either from different companies on the same machine or from various customers of the same company running the product in other containers. It’s great for efficiency if you can share the infrastructure. Hardware can be expensive. Compute time can be costly. And so if we can share that compute time and optimise the use of those resources, this is great. However, if you’re sharing that infrastructure between these different containers, what this means most of the time is that you’re sharing the operating system kernel.

And so, if you have different containers running in a multi-tenant manner from various companies, say, and one of them can access the memory, because it's shared, it's essentially the same memory space in user space on that kernel, then they can see secrets running in the other container, or escape out into the operating system kernel and start messing around. And that's really what the risk is. Once you have this shared infrastructure, you must ensure not only that your code is contained in a container, but also that there is some kind of boundary around that container, so that an untrusted workload in the container cannot escape and affect other systems, and so that your container is protected from the workloads of whatever other companies are on the shared infrastructure.
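
One quick way to observe the shared kernel this risk stems from: every container on a host reports the host's kernel version, because there is only one kernel. A minimal check, which prints the same value inside or outside a container on the same machine:

```python
import platform

# Every container on a given host prints the same value here, because
# they all share the host's kernel. A VM or micro VM, by contrast,
# boots its own kernel, which may differ from the host's.
kernel_version = platform.release()
print("kernel:", kernel_version)
```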

Architecting for the proper isolation level [07:08]

Olimpiu Pop: Thank you. And what should we do? Although we now have LLMs to solve many of our problems, I still rely on human intelligence. So, please guide us.

Marina Moore: There are numerous things we can do, and various solutions are available on the market to address our needs. I work at Edera, and we have a great solution that’s in the VM-based isolation space. One of the nice things about that VM isolation space, which also features many open-source technologies, is that it provides a very clean boundary. So, instead of having that shared operating system kernel that you have to put a bunch of band-aids on to try and isolate resources between containers, things like namespaces, cgroups, all that kind of stuff that attempts to define the boundary, the problem with all those band-aids is that they’re still running in the same kernel.

And what we've seen with many container escape CVEs is that they're essentially attacks on the Linux kernel. The Linux kernel is a massive piece of software, so a single flaw in it can have significant consequences: attackers can bypass these protections, and as a result your containers may not be secure. This is why I prefer the cleaner boundary of VM-based isolation, as it eliminates the entire attack surface of the shared kernel. You now have a different microkernel for each container.

Olimpiu Pop: Did I hear that right? One of the easiest solutions or the most straightforward solution would be to drop containers and revert to virtual machines.

Marina Moore: The abstraction of containers is still instrumental. And so, that’s where this world of micro VMs has come in, with these VM-compatible container runtimes. This means that you get all the convenience of containers, while still retaining all the abstractions and tooling that Kubernetes and other systems have built. Under the hood, a virtual machine is secretly performing the isolation. So, yes, we’re adding that virtual machine back, but some technologies have made them a lot more efficient, and also allow developers to maintain this abstraction of containers themselves. And then all that’s different is the infrastructure.
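
In Kubernetes, this "container on the outside, VM on the inside" hand-off is typically wired up through a RuntimeClass. As a hedged sketch (the handler name `kata` is an assumption that depends on how the node's container runtime is configured), the two objects involved look like this; kubectl accepts JSON manifests as well as YAML:

```python
import json

# A RuntimeClass routes selected pods onto a VM-backed runtime such as
# Kata Containers, while the rest of the cluster keeps the default runc.
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    # Assumed name: must match the handler configured in containerd/CRI-O.
    "handler": "kata",
}

# A pod opts into VM isolation simply by naming the RuntimeClass.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "isolated-workload"},
    "spec": {
        "runtimeClassName": "kata",
        "containers": [
            {"name": "app", "image": "alpine", "command": ["sleep", "3600"]}
        ],
    },
}

print(json.dumps([runtime_class, pod], indent=2))
```

Everything above the runtime (Deployments, Services, scheduling) stays unchanged, which is exactly the "keep the container abstraction" point.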

Olimpiu Pop: Let me see if I get the parallel right. A decision was made around a decade ago that microservices are a better approach than monoliths. Then, two or three years ago, we decided that the monolith is a good option for most cases, but we call it a modulith. Now I hear you saying the same thing: we left virtual machines, moved to containers, and are now moving back to virtual machines, in the form of micro VMs. Okay, fair enough.

Marina Moore: Exactly. We take the lessons from both sides and combine the best of both worlds.

Olimpiu Pop: To reiterate what you're saying: we are a technology group, a group of professionals very dependent on the phrase "it depends". So, we have to be very careful about which type of technology we use and when: use containers as they are in the cases where that makes sense, but use micro virtual machines where circumstances require it, because they keep the added benefits of containers while at the same time allowing us the proper degree of isolation, avoiding issues at the kernel level.

Marina Moore: Yes, exactly. This is especially important when you have multi-tenant applications and sensitive workloads running within them. I always hesitate to say that there's one solution for everybody, but when you need that isolation, micro VMs are a good way to go.

Olimpiu Pop: Okay. Interestingly, I had similar conversations from a different perspective a couple of weeks ago. I spoke to a bunch of guys from Couchbase and a guy who was a consultant in this space, and they took it one level further, and what they said is that, “Why use the kernel at all? Just drop it, use Wasm on top of the bare metal, and that’s it. It’s magic”. What do you think about this solution?

Marina Moore: Yes, this goes back to the fact that there isn't a single solution for everyone. Wasm is another notable technology in this space, offering impressive sandboxing capabilities, and in general I still really like it. The significant downside of Wasm, and one of the reasons I prefer micro VMs, is that while it has a firm isolation boundary, you do have to recompile your programs for Wasm. It's not compatible with all the existing container infrastructure, which creates a barrier to adoption for many people who have existing applications and containers that they really like. That being said, I want to see where the work goes, as it's moving in a similar direction. It's saying: let's make a better sandbox, a better box around our application.

Olimpiu Pop: So, what I hear you say is that Wasm can be a solution, but it's better suited for a greenfield project, because at that point you can simply choose Wasm and compile your code for it from the start. If, on the other hand, you have existing applications, you'll have to recompile everything, which comes with an added cost. Additionally, the different Wasm runtimes and interpreters have varying degrees of maturity, which can be a risk in itself. Right?

Marina Moore: Exactly. Yes. So, that’s why I’m excited to see where it goes. However, for many people today, sticking with something container-related can be beneficial.

Olimpiu Pop: You mentioned micro virtual machines, micro-virtualisation. Can you single out some technologies? Since we no longer use Google, we can't simply search for them; we have to ask an LLM, and LLMs hallucinate, as we know. Naming them here gives our listeners reliable sources, so they don't have to wonder whether they're hallucinations.

The micro-VM stack [12:57]

Marina Moore: Yes, totally. To discuss technologies, I need to briefly explain the stack of a micro VM, because people don't always understand how these pieces fit together. There are three, well, sort of four, main pieces to know about in a micro-VM architecture. You have some kind of hardware, and then you have a hypervisor that sits on top of that hardware and manages it, much like an operating system does. Then you have what's called a virtual machine monitor (VMM), which allocates CPU, memory, and other resources to the individual virtual machines. And then, if we're talking about containers on top of that, you need some sort of container runtime, something that makes all of this compatible with the existing container ecosystem.

Starting from the bottom up, at the very bottom layer, the two key options to discuss, especially in the open-source space, are KVM and Xen. KVM is a module of the Linux kernel. So, it’s like using the Linux kernel to handle many hardware tasks that it already handles, and then adding a module that allows it to function as a hypervisor instead of just an operating system. This is really cool because it’s compatible with a lot of stuff, and it utilises what’s called hardware virtualisation extensions on most modern CPUs, which allows it to pass through the CPU more efficiently. Instead of having a virtualised CPU, you get actual access to the CPU from these different VMs. This is very cool, but unfortunately, access to those hardware virtualisation extensions requires bare metal access to that CPU, or at least pass-through access, if you’re working on a system in the cloud.
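
Whether those hardware virtualisation extensions are actually available on a given Linux host can be checked directly. This is a quick illustrative probe, not a substitute for proper capability detection:

```python
import os
import re

# On Linux, hardware virtualisation shows up as the 'vmx' (Intel VT-x)
# or 'svm' (AMD-V) flag in /proc/cpuinfo, and KVM is usable when
# /dev/kvm exists. On a cloud VM without nested virtualisation, or on
# a non-Linux host, both checks come back empty/false.
def hw_virt_flags() -> set:
    try:
        with open("/proc/cpuinfo") as f:
            return set(re.findall(r"\b(vmx|svm)\b", f.read()))
    except OSError:
        return set()

print("virtualisation flags:", hw_virt_flags() or "none")
print("/dev/kvm present:", os.path.exists("/dev/kvm"))
```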

So, that's KVM. And then there's Xen, which, instead of being built on Linux, is essentially its own standalone system, what we call a Type 1 hypervisor, because it runs directly on the bare metal. Xen also supports a feature called PVH mode, which enables hardware pass-through. Still, it additionally supports what's called para-virtualisation, which, instead of relying on CPU extensions for the hardware pass-through, relies on minor tweaks to the guest operating systems so that they make what are called hypercalls, rather than system calls in some cases, to access those hardware resources. That's a fundamental overview of those. One layer up from that, we have the virtual machine monitor, and there are a couple of big ones in the cloud space. Firecracker is a very popular one, used by AWS Lambda and Fargate; AWS created it for those projects to handle the virtual machine management layer, and it's known for having a pretty fast startup time.

That's its big thing. Cloud Hypervisor is another popular open-source project at that layer, and my company also develops a Xen-based VMM there. Then, sitting above that, you need some sort of container runtime. Kata Containers is a very popular open-source project: under the hood there is a virtual machine, but it presents itself to everything above it as a container, so the rest of the stack treats it like any other container system. Those, very briefly, are the three layers. Sometimes people refer to the whole thing as a hypervisor, which can be very confusing, so I prefer to keep them separate.

Olimpiu Pop: So, to confirm that I have my list right. You have the kernel level, and you gave Linux's KVM as an example, which is the base level; on top of that comes the hypervisor. No, the other way around, right?

Marina Moore: The hypervisor is, yes, KVM or Xen.

Olimpiu Pop: Yes, it’s the middle level.

Marina Moore: Exactly. It sits directly on top of the hardware, followed by the VMM, which is similar to Firecracker or Cloud Hypervisor, and then the container runtime as the third layer.

Is micro-VM’s speed comparable to that of containers? [16:45]

Olimpiu Pop: Great. Lessons learned. Thank you. Okay, so you mentioned that Firecracker is relatively fast, and speed is handy these days, because everybody is trying to run on cheaper infrastructure, and most of us want to be good citizens and reduce carbon emissions. What does fast actually mean? Are we talking milliseconds, seconds, minutes? Where does Firecracker, or any of them, sit? I like the name, which is why I keep "abusing" it.

Marina Moore: It's a tricky question to answer, because it depends on what you mean by startup time. In the Firecracker paper, when they first released Firecracker, the number they cite, if I remember it correctly, is 200 milliseconds; you can find the real number in the paper. However, what they're measuring there is specifically the boot time of the VMM, not the full time it takes for the whole stack to turn on. We just talked about all three layers of the stack, and if each of those took 200 milliseconds, it would add up. In my end-to-end testing, and exact numbers depend on your hardware, the complete startup of a VM using Firecracker takes closer to a second.

Olimpiu Pop: That’s good in my book.

Marina Moore: Yes, it's not too bad. It can be tricky if you're frequently turning machines on and off, though. Part of the argument they make in the Firecracker paper is that you can do what's called a warm start: if you know you're going to need a new container, you pre-start the virtual machine layer, so you don't lose that one second and the user never notices it.
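
The warm-start idea can be sketched in a few lines. This is a toy model, not Firecracker's API: `boot_microvm()` stands in for the real VMM call, and the one-second figure is just the ballpark from the discussion above:

```python
import queue
import time

BOOT_SECONDS = 1.0  # assumed cold-boot cost, per the discussion

def boot_microvm():
    """Stand-in for a real VMM boot call (e.g. via Firecracker's API)."""
    time.sleep(BOOT_SECONDS)
    return object()  # placeholder VM handle

pool = queue.Queue()

def prewarm(n):
    # Boot VMs ahead of demand, off the request path.
    for _ in range(n):
        pool.put(boot_microvm())

def acquire():
    try:
        return pool.get_nowait()  # warm start: effectively instant
    except queue.Empty:
        return boot_microvm()     # cold start: pay the full boot cost

prewarm(1)
start = time.time()
vm = acquire()
warm_latency = time.time() - start
print(f"warm start took {warm_latency:.3f}s")  # far below BOOT_SECONDS
```

The trade-off is the one serverless platforms make: pre-warming spends idle capacity ahead of time to take the boot cost off the request path.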

Olimpiu Pop: So, like in the good old JVM Java days, when they would just run it and warm it up, getting it to a warm enough state to achieve the most efficient execution time. Okay, great. Now I have a lot of links to read, thank you. And because we used "it depends" a lot: if you look at a couple of everyday things you see in regular architectures and try to create some kind of matching, say a small-scale microservices setup with a couple of tens of services, what should I use? Should I go for one or the other, or is it again at the "it depends" level, where we cannot single them out?

Marina Moore: There are a couple of key performance characteristics that I will highlight here. I have a white paper for Edera that I can send and link somewhere; it includes several graphs with real numbers detailing the various performance characteristics, so that might be a helpful resource. But in general it depends on the workload. CPU-heavy workloads, I found, don't tend to vary that much between the different stacks in this space. Memory-heavy workloads do show some variance. And system-call-heavy workloads tend to have the largest variance, simply because system calls, by their nature, have to pass up and down this stack a little bit more.

And so the characteristics of that performance can differ more between systems. In that paper, something like plain Docker is the fastest, simply because you're not adding these extra layers; however, you can achieve similar performance with some of the other options. Edera performed well in that testing. I would need to find my graphs to recall all the different performance characteristics, but there's a lot more detail in them.

Olimpiu Pop: Thank you. So, it will remain an "it depends", based on the type of code that we are running, rather than…

Marina Moore: Exactly.

Olimpiu Pop: … just putting it in the wrong buckets. So, if it’s something computation-intensive that requires a lot of CPU, then that’s one of the choices we should consider based on that, or if we are writing a lot of these things.

Marina Moore: Yes.

The AI inference brings new challenges for container security [20:48]

Olimpiu Pop: What else should I know? Are there any industry best practices for running safer containers that we should be aware of?

Marina Moore: I've been thinking about that a lot recently. Many people have been discussing AI, and that means GPUs, right? GPUs in the cloud. This is definitely a growing space, especially as people start using GPUs not just for training but for inference in the cloud. Lots of different people are doing inference in the cloud, and not only do they need GPUs, they don't always need an entire GPU; they just need some time on a GPU while they're doing inference. This has led to a massive rise in multi-tenancy for containers, and now we also have multi-tenant containers that access GPUs.

And so I think this is another exciting space that adds another layer of complexity, mainly because GPUs were not originally designed for AI; they just happen to be very good at it. The security characteristics, especially of running inference for different users on the same GPU, are not always outstanding. GPUs typically do not clear memory between different processes running on them, which presents numerous challenges in integrating GPUs into the already complex mix of container isolation.

Olimpiu Pop: One thing I had been thinking about for an extended period: Kubernetes was the solution for everything. If you have a problem, just put Kubernetes in place. Well, Kubernetes is another type of abstraction that sits on top of containers. In the entire landscape you described, where does Kubernetes fit, if at all?

Marina Moore: Kubernetes excels at what it does, and what it does is container orchestration, which is essentially the scheduling of containers. So it still really fits in at that level. What people are starting to realise is that something good at scheduling containers might also be good at scheduling other things, and I've seen more people using Kubernetes to schedule various other types of workloads. One of the benefits of Kubernetes is not just its ability to schedule effectively, but also the vast ecosystem that surrounds it. I don't know the current number of CNCF projects, but it's in the hundreds, and these are all projects built around improving the experience for users already on Kubernetes. So, not only do you get the great scheduling of Kubernetes, but you also gain access to observability, security monitoring, and other features on top of that.

Olimpiu Pop: Okay. We discussed memory a lot, and that's the main point: containers share the kernel and its memory. One of the few pieces of cybersecurity guidance still in place in the US is the push towards memory-safe programming languages, which is obviously something that distinguishes Rust. Does it make a difference for the code that we run in the container? Does it provide extra safety for us, or doesn't it really matter?

Marina Moore: It does matter. I like to think a lot, especially in this space, about the idea of an attack surface. The attack surface encompasses all the code necessary to execute your workload. If you have a traditional system, this includes the operating system and the container runtime. If you're running a micro VM, it's the microkernel that's unique to your container, plus the hypervisor, and any vulnerabilities in that code matter. What Rust and other memory-safe languages do is significantly reduce the likelihood and the number of memory errors.

There's research suggesting that more than half of all vulnerabilities originate from memory safety errors. So, this is a straightforward way to nearly eliminate a whole widespread category of vulnerabilities. That's one of the reasons we chose Rust at Edera to build the hypervisor layer: it's the key part of your attack surface, and having it in Rust makes it easier to trust that piece of the stack. Rust is an excellent choice for most new projects for that reason.

Olimpiu Pop: So, it all boils down to your attack surface.

Marina Moore: Exactly.

Olimpiu Pop: Keep your things simple, and that will make your life simple.

Marina Moore: Yes.

Olimpiu Pop: Basically, that’s it. If we apply that rule of thumb for everything, that’s obviously the best way moving forward. I like that. It’s easy to remember.

Marina Moore: Yes, exactly. The easiest code to maintain is the code that’s not there, right?

Olimpiu Pop: Exactly. I'm thinking now of a VPN solution that we chose at some point for exactly that reason: it was the simplest possible one, with a very small footprint. Is there anything else we should cover for our listeners, or is that a wrap?

Confidential Computing – encryption in use [25:31]

Marina Moore: I guess I could talk a little bit about confidential computing and going even one layer beyond this micro-VM based architecture, if that sounds interesting.

Olimpiu Pop: Well, we started the conversation by saying that we keep a lot of things in containers, secrets and other information, usually PII, that we typically don't want to leave the boundaries of the company. What should our listeners think when they hear confidential computing?

Marina Moore: Yes, I think the most important thing confidential computing does is encrypt memory. For years we've known that it's great to do encryption at rest and encryption in transit, so the data can't be accessed in those places. What confidential computing adds is encryption in use: the running memory of the process can also be encrypted. The way it does this is with what's called a trusted execution environment, a special piece of hardware that enables this encrypted memory in a reasonably performant way. If you run your container workload inside the trusted execution environment, using something like the Confidential Containers project, or one of the other projects out there, what you get is that encryption in use.

And, coming back to the attack surface we discussed earlier, you don't have to trust your underlying operating system or anything that's not inside that TEE, because you can encrypt things in memory and use what's called remote attestation to prove the running state of the application in the TEE. So, I think the big thing is encryption in use, and reducing your trusted computing base, your attack surface.
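
The attestation flow Marina describes can be caricatured in a few lines. This is a deliberately simplified sketch: real attestation (SGX, SEV-SNP, TDX and friends) involves hardware-signed quotes and a chain of trust, not a bare hash comparison, and all names here are illustrative:

```python
import hashlib
import hmac

# The TEE reports a measurement (a hash) of what is actually running;
# a remote verifier compares it against the expected value before
# releasing secrets to the workload.
def measure(workload_bytes: bytes) -> str:
    return hashlib.sha256(workload_bytes).hexdigest()

EXPECTED = measure(b"my-container-image-v1")  # hypothetical known-good build

def verifier_releases_secret(reported_measurement: str) -> bool:
    # Constant-time comparison, standing in for a signature check.
    return hmac.compare_digest(reported_measurement, EXPECTED)

print(verifier_releases_secret(measure(b"my-container-image-v1")))  # True
print(verifier_releases_secret(measure(b"tampered-image")))         # False
```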

Olimpiu Pop: So, as you mentioned earlier, we should minimize the attack surface and make the code we ship as good as possible, right? Now I'm thinking about how we should take care of that while developing. What we heard in the last two to three years is that we are shifting left; well, everything shifted left, so it probably overflowed and wrapped around to the right side, but that's a whole different conversation. Are there tools that can help us diminish our attack surface, tooling that developers can use to minimize and optimize it?

Marina Moore: Yes, definitely. There are a lot of tools out there for minimizing different layers of the stack. You can make your container images smaller, and you can make your kernel, that microkernel or whatever it is, smaller. Anything you do that removes code reduces your attack surface and can make things more secure. My PhD advisor actually wrote a paper about this, on a system called Lind, which found that in the Linux kernel the most common vulnerabilities were in rarely used portions of the code. The least-used code tends to have more vulnerabilities, because people aren't noticing the bugs; they're not exercising those portions of the code. So, if you can remove those less-used parts of the kernel, or of any other layer of the stack, you can make it more secure. That's a really great way to do it as well.

Olimpiu Pop: But that's something that theoretically has to happen on the developer side, and for most of us the operating system is something we take for granted: we just pull it from somewhere and use it. Is there a solution for that, though? I'm thinking again of the Java virtual machine, where at some point the code gets optimized and, as you said, the beaten tracks are natively compiled so that they run faster. Do you know of anything that would help us do that?

Marina Moore: So, you're asking for something that will help us end up with less code in that trusted computing base? Is that the question?

Olimpiu Pop: Yes, that's the question. For the operating system, as you said, there are parts of it we never use: we take the kernel as a whole, but most of us, or our applications, use just part of it, and we cannot go and chop it down or take parts of it out ourselves. Is there a solution for that? Do you know of anything that can help us in that regard?

Marina Moore: There are a couple of different approaches here. You can try to optimize it, as we were discussing, and reduce the amount of code in it. That can be very developer-intensive, but it can be very useful for reducing your trusted computing base. The other thing you can do is put barriers in place to reduce the splash zone of a vulnerability in one of those layers. This is something that container isolation and trusted execution environments are very good at, because you don't quite assume a vulnerability, but you're prepared for one.

If there is one, you can really limit what it can do. For example, if you have a container inside a micro-VM and an attacker is able to do a container escape, they’ve now escaped onto their own machine with nothing else running on it, so the attack becomes useless even though you haven’t prevented it. A similar argument applies in confidential computing: if you can run just your one small piece of trusted application in the TEE and there’s a vulnerability somewhere else in the stack, it has far less impact on your particular application.
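In Kubernetes, this micro-VM-per-workload pattern is typically wired up through a RuntimeClass. The sketch below assumes a Kata Containers handler named `kata` is installed on the nodes; the pod name and image are hypothetical:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata            # node-level handler for the micro-VM runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: kata # this pod runs inside its own lightweight VM
  containers:
  - name: app
    image: example/app:latest
```

With this in place, a container escape from the pod lands inside that pod's own guest VM rather than on the shared host kernel.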

Olimpiu Pop: So, what we said until now is that one approach is minimizing the attack surface, so that all the code we actually expose is code we write ourselves as a company, plus the dependencies we import or whatever else we use. And now I’m hearing you say we should isolate the application in such a way that we diminish the blast radius, so that if something happens, it is contained rather than affecting more components of our system. Is that right?

Marina Moore: Yes, exactly. That is, I think, a great way to put it, because it just affects less stuff. Code will never be perfect, right? It’s always important to understand that there will always be vulnerabilities. We can slim things down as much as we want, make them tiny, but even if it’s one line of code, there’s probably some vulnerability in it. That’s just the way it is. So, if you can limit what that vulnerability can do, limit how bad it can be, I think that’s a really good approach.

Olimpiu Pop: Thank you.

Marina Moore: Yes, thank you so much for this. This was a great conversation.
