Transcript
Ian Miell: “Bringing DevOps Principles to Controls and Audits”, not the most exciting title in the world, but I’m actually going to be talking about a revolution. What I’m talking about is an open-source project that me and my colleagues at Container Solutions are working on. My goal is that by the end of it, you’ll be interested enough to check it out, perhaps give feedback, maybe even get involved. We’ve just hit an important milestone with this, we’ve just got our first customer looking at hiring us to improve this open-source project that we’ve built, to the point where we’re thinking about pivoting. Container Solutions is a cloud-native consultancy. My day job is to help organizations build and run software in a cloud-native fashion. Doesn’t necessarily mean in the cloud, but in a cloud-native fashion. I’ll talk about how we got here later on. We accidentally built this product, and it’s being very well received.
This is QCon, so a lot of people are heads-down developers. Are you a heads-down developer, an "I just want to code" type person? Who here works in a regulated industry? Yes, figures. Of those people, who is working in a non-financial regulated industry? It's interesting, because one of the challenges of building something like this is that large financial institutions, as many of you will know, are very conservative about using an open-source project or just doing something new. They'd rather buy from the big players. What we found was that we've actually hit a vein with internet of things companies. Healthcare is probably a big one. We found that there are companies, for example, that work in oil and gas who do telemetry on gas tanks. It's very important they don't blow up. It's very important they're compliant.
Traditionally, this has been done by people wielding a hammer and looking at big metal objects and figuring out if they're safe or not. Increasingly, that's being automated, too. We're going to get onto that. Of the people who work in a regulated industry, how many have created or contributed to a Confluence page or a spreadsheet related to controls? You're the people I'm trying to address, because as cloud-native engineers, we got terribly frustrated with this. Again, I'll talk about how we got here. The lack of automation in this space is very significant for us. We think it should be solved. I'm assuming you don't like to do that kind of activity. I'm just going to take that as read.
Here’s my argument for today: the way we manage compliance is wrong and is changing. Incidentally, I studied history as my first degree. How many people recognize that figure? The education system half works, it seems. For those who don’t know, it’s from over 100 years ago. It’s quite understandable if you don’t know this. That’s Lenin, the leader of the Soviet Revolution. Lenin, I found out recently, went to the Duma, the parliament in Russia, and started the revolution. He had to take a tram to get there. I think of Lenin as complying with the rules and regulations, and buying a ticket with however many kopecks it was, and meekly going along to the Duma and taking over a huge continent. I’m going to talk about what’s wrong with compliance and how we manage it today, what we’re doing about it at Container Solutions, and where we’re going to go with it. I think revolution is coming in this space for external reasons. There are a few quite significant factors that are going to make this space move on.
Who Am I?
I’m a consulting partner at Container Solutions. I’m an engineer and architect by background. If you’ve ever placed a sports bet in the last 20 years, it’s probably gone through code I’ve written. I spent 15 years working in an online gambling backend systems company. There’s basically only one. I built them, then led teams, then ran those. Through that, I got into Docker because I got frustrated about the lack of test environments. I got into Docker. Then, through Docker, I got a job at a major UK bank as an architect, where I implemented OpenShift. Then I worked for another bank, and then I became a consultant. I blog at zwischenzugs.com. Zwischenzugs combines a few of my interests. It’s a very odd name. It’s a German word from chess, meaning in-between move. I speak German. My mother’s Austrian. I play chess. My blog was an in-between move at work. That’s why I call it that.
Unfortunately, it’s almost impossible for people to spell. I’m an author and O’Reilly trainer. I wrote “Docker in Practice” about 6, 7 years ago. I’ve also written “Learn Git, Bash, and Terraform the Hard Way”. I teach for O’Reilly, Git, Bash, and software architecture courses. A little bit about Container Solutions. We’ve written open-source projects before. Does anyone here use Kubernetes? If you do, do you use External Secrets Operator? That was written by us, and merged with GoDaddy. The main architect of that is now running his own company called External Secrets Inc. We also wrote the operator SDK in Java for Kubernetes. That’s been taken over by Red Hat, who kindly pay for its development now.
1. What’s Wrong with Compliance and Audit Today?
Part one, what is wrong with compliance and audit today? I’m just going to divide compliance and audit into two broad areas. One is controls, the things that are done to keep you compliant, and audits, the checking that you’re compliant. In terms of controls, the ones I was most familiar with before I asked ChatGPT were preventative, reactive, and detective. Preventative, as many of you will know, is configuring a system such that it can’t spin up a VM with a public IP, for example, or configuring your cloud provider so that you can’t open up certain ports to the world. Then you’ve got reactive controls. You can spin up a VM, but something will be watching and will then smite that VM if it’s not conforming to spec. Then there’s directive, the things we don’t read or which we’re supposedly trained on and have to sit through multiple choice questions on regularly.
Then there’s deterrent: fines, pretty much, financial penalties or other penalties for failing to comply. Compensating controls are the things we can do to reduce the effect when someone doesn’t comply. Finally, detective. Detective is the area where I think we’re not doing enough at the moment. We’re not doing enough to be aware of whether there are issues. When we do detective controls at the moment, they tend to be distributed across the estate. As we’ve taken this and shown it to people, one of the things they’ve said is, I’ve got 20 different systems that tell me about vulnerabilities. If something happens, I have to go and check 16 different places.
Another thing we’ve heard, in various industries from compliance managers, is that they have a lot of third parties that they work with. They’re semi-federated. If you work for a big oil and gas company, you’ve just bought 20 businesses and you’re trying to integrate them into your systems. You don’t know what they’re up to. You don’t know what they’re complying with, not complying with. There’s an element of trust, or not really knowing what’s going on. This other challenge, even if you don’t have that problem, you’ve still got the hybrid world. I’m guessing most of the people are on more than one cloud. Or if they were on one cloud, they might be buying another business, and that business is on another cloud, or one business unit is on one cloud, another business unit is on another. That’s all very common. We’re in a hybrid world. If you’re in charge of compliance for that whole business, then it’s not clear necessarily where you need to go to check things. That’s controls.
In terms of audit, checking that we’re compliant, there are four problems that we think are happening at the moment. First, it’s manual. I mentioned Confluence pages, spreadsheets. Whenever I mention those, people smile and say, yes, we do that too. There must be a better way. Audits are often also done with screenshots. They can be put into PDFs. All these things are used to track state against specific controls. Audits are typically done manually, and haven’t been brought into the ops portmanteaus that we’ve seen with other things. We’ve got DevSecOps and so on, but we haven’t got DevAuditOps. It’s not been shifted left yet. That’s a typical kind of spreadsheet that some of you may be familiar with; we look at it over and over again, maybe every 6 months or so.
Second problem with audits is that they’re periodic. They might happen every n months, often every 6 months or every 12 months. When I put OpenShift live at that bank, we were checked every year by security auditors who would come in, ask us questions. Are you encrypting your disks? Are you using Kubernetes Secrets? Have you got an encrypted secrets store? That kind of thing. Very manual process. You have to be phoned up. You have to be told. Between those periods, who knows what happens? Maybe your team’s patched something to make it look ok for that week, and then they reverted to their old ways. Do you know what’s going on?
If an auditor comes in and says, do you know what’s going on? Can you show them? No. You can just say, here’s our report from three months ago. We reckon everything’s the same, guv. Recently, I spoke to a consultant who works for a big European bank about this. He said, “It’s really interesting, but at the moment, we’re not even doing point-in-time properly”. Even in a lot of these cases, people are way behind the curve. As I’ll mention, there’s regulation coming that means they’re going to have to really up their game soon.
The next problem is that audits tend to be very process-focused. They focus on the documentation of processes rather than working processes. As I was writing that, I thought, I’ve heard that somewhere before. It was the Agile Manifesto, which talks about working software over comprehensive documentation. Audits are not Agile. They’re the antithesis of Agile at the moment. Evidence is taken periodically and manually. It’s not uncommon to have screenshots and so on. The effectiveness of these processes over time is simply not measured. Finally, it’s bespoke. If I work for one bank, they might have one format for their audit documents, their compliance documents.
If I work for another, they might do it a different way. There’s no interoperation between systems. There’s no interoperation between teams. The question is, why is it like this? Why have we ended up in this state? I like this one. It says, all you’ve done is chisel all day. Do something useful, like help your brother drag those rocks up the hill. I like this image. It’s like the other one where the guys are holding the stone wheels and someone else says, no, we’re too busy to look at those. I like the fact that it’s a child and he’s seen as playing. Traditionally, audit and compliance has been seen as two things. It’s a side activity that comes along periodically that you have to deal with.
Therefore, it’s treated like a tax. It’s just something we have to do because we have to do it. We’re not interested in dwelling on it. We don’t really want to talk to those people. Do we have any auditors? I’m speaking for us, but I don’t know who us is. This is illustrative. You’d never get someone who works in IT audit necessarily coming to a conference like this. Or, rarely, it seems. Actually, how many people work in security? There’s a real siloization of this. I think it can be broken down. Because it’s manual, and it’s a tax, it’s not scalable. We’re going to see that the development of regulations is going to mean that it’s going to have to be scalable.
Compliance has always been very resistant to innovation. Although, actually, I have a little anecdote here. When I got into Docker, at that gambling company, I built a system. I built an automation which shoved a whole three-tier set of applications into a single container. I did that for development purposes so that developers could just easily move around their changes to each other, and we could get some CI going. This was in 2013. As I was doing this, someone at one of the bookmakers got wind of this and said, we’ve got an audit coming up where we have to demonstrate to a third party that we can rebuild this company’s whole system from scratch if there’s a disaster. If all their data centers got bombed, could we rebuild the system? I was like, yes.
Traditionally, this had taken a week, and I’d got this process down to a couple of hours with a Docker image build. I went with the tech lead to the auditor and said, “We’ve got this new method of doing it. Are you ok with that?” They said, “What is Docker? Never heard of it”. We showed them it. They said, “Yes, that’ll do. I can see the website. It’s running on your machine. Yes, you’ve rebuilt it. I’ve observed that”. They took some screenshots and told the escrow they were happy. If you do engage with them, they can be very reasonable, is my experience based on that.
The other reason it’s like this is that, only recently, lots of industries have got into needing to be compliant for internal reasons or external reasons. 25 years ago, I had just graduated, and within a year, I had root access to the production database of a FTSE 100 company, which had revenues of millions and millions at the time. I remember saying to someone, this is a bit crazy that I can do this. No one would know it’s me. They only have the IP address of my company to know that it was me. Could have been anyone. It was a very high-trust environment. I think a lot of businesses were like that. Post-2008, I think things have tightened up a lot. More companies are either internally pushing for compliance or externally pushing for compliance. I think it’s part of the increasing professionalization.
It’s not in a great state, but who cares? We’re paying the tax, whatever. The tax is, to give you an idea, apparently 10% of banking operational costs is spent on compliance. That was according to McKinsey. Not all of that is software compliance. A fair chunk of it is, and an increasing chunk. After 2008, the regulators around the world really focused on financial stability. They focused on making sure that the banks didn’t go bust. That was obviously really important. Now that’s become a lot more stable, they’re moving to looking at operational stability. We’ve got various changes going on in the world that are pushing us towards a bigger focus on operational security. One of these is DORA, which is the Digital Operational Resilience Act in the EU, which has gone into effect this year, but no one really knows exactly what it means. It’s classic EU regulation, which is saying things should be better. Over the coming years, it will become clearer what that means. We all know about GDPR.
Another factor around this is that more regulators are publishing machine-readable regulations. Australia is leading the way on this. A lot of Australian banks are looking at automating a lot of their compliance lifecycle. The UK also has operational resilience rules that have changed. Some quotes from that. In the UK operational resilience rules, regulators stress that resilience is not a once-and-done tick-box exercise, but must be embedded into the firm’s culture. That’s a new thing. Firms are expected to continuously identify vulnerabilities and remediate them via regular testing. Ongoing third-party oversight is required, meaning continuous monitoring of vendor performance. Adoption of mapping and simulation tools to constantly test systems. That’s the UK operational resilience rules. DORA talks about continuous testing, monitoring, and reporting of ICT systems, and so on.
2. What We’re Doing About It
What are we going to do about it? We’ve built this tool called Continuous Compliance Framework. It’s available on GitHub. It’s only really become a real product in the last couple of months. Before that, we spent a lot of time working on the backend and the infrastructure around it. Now we’re ready to talk about it. I’ve got videos, but I’ll do it live, because it’s probably going to be a better demo. Here we go. This is the main page of the Continuous Compliance Framework. This is all the findings we have. In this demo, I’ve got some AWS stuff. I’ve got an AWS VPC with things in it. I’ve got an Azure subscription with things in it. I’ve got a couple of Docker containers representing on-prem machines. Each line here represents a single stream of information. It’s like a monitoring tool. Each stream of information is by default every minute, but it’s periodic. I can set it to whatever I want.
If I look here, I can see there’s a history. If I go down here, this check was collected at 10:26. The explicit security group on this instance is satisfied. The subject of this finding is here. There’s observations. These are all OSCAL terms. They’re standardized terms within the industry. Then for each of these, you can drill down and see what happened to gather the evidence. If I go back to these findings, for each of these findings, there are flags. When you set up a stream of checks, you specify also flags. Anyone familiar with Kubernetes, similar idea of labels. I can take these labels and query against them. If I do, type = AWS and service = EC2, then I can narrow down my findings to those specific things. If you’re in charge of compliance and a vulnerability to the public has come up, you might want to just quickly figure out what may be exposed. If I do this, I can then save that search. I can say AWS EC2.
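To make that label-based filtering concrete, here is a minimal sketch of the kind of query described above. The finding structure, label names, and statuses are invented for illustration; they are not the actual CCF or OSCAL schema.

```python
# Hypothetical findings, each carrying labels like Kubernetes labels.
findings = [
    {"subject": "i-0abc", "check": "explicit-security-group",
     "status": "satisfied", "labels": {"type": "aws", "service": "ec2"}},
    {"subject": "vm-eu-1", "check": "disk-encryption",
     "status": "not-satisfied", "labels": {"type": "azure", "service": "vm"}},
    {"subject": "i-0def", "check": "no-public-ip",
     "status": "not-satisfied", "labels": {"type": "aws", "service": "ec2"}},
]

def query(findings, **selectors):
    """Return findings whose labels match every key=value selector."""
    return [f for f in findings
            if all(f["labels"].get(k) == v for k, v in selectors.items())]

# The "type = AWS and service = EC2" search from the demo:
aws_ec2 = query(findings, type="aws", service="ec2")

# A compliance manager checking exposure after a new EC2 vulnerability:
exposed = [f["subject"] for f in aws_ec2 if f["status"] != "satisfied"]
# exposed -> ["i-0def"]
```

Saving that selector as a named search ("AWS EC2") is then just a matter of storing the key/value pairs and re-running the query on the latest stream of findings.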
Then it will be added to my dashboards here, so I can keep track of different things as the auditor. I can also view by subject. A subject is a component in the system. For example, I’ve got an EC2 instance, and there are streams of findings for that particular instance. Two are positive, two are negative. I’ve got an Azure virtual machine, one positive, three negative, and so on. Again, if I go into each one, I can view the details. At the moment, we’re writing plugins to integrate with other tools, such as Snyk, Black Duck, whatever. We’re not in the business of identifying specific vulnerabilities. There are companies that do that way better. What we’re in the business of is trying to centralize and coordinate that so that this is more easily managed by a compliance manager. We’ve done the subject.
The last thing is catalogs. OSCAL has this concept of a control catalog, which is a set of controls that you have to work against. For example, that might be NIST SP 800-53, which is a very standardized set of cybersecurity controls that businesses reference. They have different levels of severity, depending on how significant your system is to the infrastructure of the nation. We’ve done some work recently to integrate this with the Saudi Arabian Monetary Authority (SAMA).
You can actually tie each individual control to the findings we’ve got. If an auditor says, we caught a bank that was completely exposed, and we weren’t aware of it, and they were down for three days because they weren’t complying with 3.3.1, what are you doing about that? I can say, actually, we’re not doing so well on that because we’re all red. I could then quickly phone those people up and say, sort that out now, and the auditor might go away happy. Or if it’s something we’re stronger on, like infrastructure findings, then we can show them that. Basically, we’re gathering that evidence in those streams of data and then being able to slice and dice in different ways, depending on who’s looking.
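That slice-and-dice rollup from findings to controls can be sketched like this. The control IDs, finding statuses, and red/amber/green scheme are hypothetical, purely to show the shape of the aggregation:

```python
# Hypothetical mapping of catalog controls to their current finding statuses.
control_findings = {
    "3.3.1": ["not-satisfied", "not-satisfied"],       # all red
    "3.1.2": ["satisfied", "satisfied", "satisfied"],  # all green
    "3.5.7": ["satisfied", "not-satisfied"],           # mixed
}

def control_status(statuses):
    """Roll a control's findings up into a single status for the auditor."""
    if all(s == "satisfied" for s in statuses):
        return "green"
    if all(s != "satisfied" for s in statuses):
        return "red"
    return "amber"

report = {ctrl: control_status(sts) for ctrl, sts in control_findings.items()}
# report -> {"3.3.1": "red", "3.1.2": "green", "3.5.7": "amber"}
```

The point is that the same underlying evidence streams answer both the auditor's "how are you doing on 3.3.1?" and the engineer's "which of my instances are failing?".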
How we got here. The reason we started this project was because one of my teams was in a meeting with a bank, with a CISO, and going over these Confluence pages. The CISO said, there must be a better way to do this than the way we’re doing it. One of our engineers said, yes, sure, we’ve talked about this before internally, and we think we can make it better. He said, ok, go off and build something. That bank promptly went bust. You can probably tell who it is if you look it up, they went bust soon after that. We were left with this project that we started and it was in the very early stages. Then 2023 was not a great year for consultancies, so we had quite a big bench. We started working on this, and we showed it to potential clients who said, great, when can we install it? We said, no, we’re not ready for that. We’re very far from ready for that. We thought, ok, there’s something here.
Then we went away and architected and built the backend properly. Now we’re getting to the point where we’ve got something real to show people. Architecturally, it’s agent-based. The agent’s written in Golang. The agent is very small, can run anywhere, with limited capability. There is a plugin/policy separation. The agent runs, and it defines the policies it’s interested in looking at in the context it’s in. If it’s a machine and it’s checking the SSH configuration, then it’ll be simply running on that machine. If it’s a collating plugin, so, for example, I want to check that all my AWS EC2 instances are a certain way, then I might have a machine specifically deployed for that reason. Then it would go off and collect all that information and then send it back. That’s the plugin. Then the policy is written against the plugin. There’s separation between those. You can configure it to check whatever you like.
Configuration is done at the moment using Rego, which a lot of people aren’t really comfortable with. I hear my engineers swearing about it a lot, but it’s the principal policy tool at the moment. We’re probably going to add support for Kyverno at some point. Plugin policy separation is critical. Plugins and policies are both pulled from OCI registries. A little-known feature of OCI registries is you can just treat it like an artifact tool and just download specific files. That’s what we do. We download specific files from OCI registries. If you’ve got a Docker registry, you can use this. We use MongoDB for the backend, for the datastore, because OSCAL is in JSON. OSCAL is essentially a set of documents that you use to represent your compliance workflow.
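CCF's real policies are written in Rego, and its agents in Go; this Python sketch only mimics the plugin/policy split to show the shape of the idea. The field names and checks below are invented for illustration, not CCF's actual plugin API:

```python
def ssh_config_plugin():
    """Plugin: collect raw state from the context the agent runs in.
    In reality this would parse something like /etc/ssh/sshd_config."""
    return {"PermitRootLogin": "no", "PasswordAuthentication": "yes"}

def root_login_policy(state):
    """Policy: a pass/fail rule written separately against the plugin's
    JSON output, so policies can be swapped without touching the plugin."""
    return state.get("PermitRootLogin") == "no"

def password_auth_policy(state):
    return state.get("PasswordAuthentication") == "no"

state = ssh_config_plugin()
results = {
    "no-root-login": root_login_policy(state),        # True  -> satisfied
    "no-password-auth": password_auth_policy(state),  # False -> finding raised
}
```

Because the policy only ever sees the plugin's structured output, anything that can be turned into JSON, a cloud API, a third-party scanner, an on-prem box, can have policies written against it in the same way.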
At the center of here, you’ve got the CCF API. You’ve got agents running on different machines. You’ve potentially got agents that connect to third-party tools that you want to have a policy against. I’m often asked, would your system support X, Y, Z? As long as you can turn it into JSON, or as long as there’s consistent output, in fact, the system can have policies written against it and therefore send results back to the center. It’s designed to be a very open, flexible architecture. It’s all open source, so anyone can use it.
I’ll talk a little bit about OSCAL. I’ve mentioned it a few times. OSCAL is a standard written by NIST, and it’s been in existence for about 8 years, I think, and 1.0 for about 2 years. OSCAL is Open Security Controls Assessment Language. It allows you to produce documents like this, which are machine readable. The idea is that if someone writes a tool for OSCAL from the compliance side, they can output documents that can be read in by someone from the regulator side or development team side, and they can use their own tools to interpret it. It’s just standardizing. The thing I talked about earlier about everything is bespoke in this field, this is reducing that significantly. We decided to go all in on OSCAL. It’s a very ornate standard. It’s quite opaque to understand. We’ve actually implemented our own training course for our own engineers on it because it’s actually quite hard to follow.
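For a flavour of what a machine-readable result can look like, here is a heavily simplified, hypothetical document loosely inspired by OSCAL's assessment-results model. Real OSCAL documents carry far more structure (UUIDs, metadata, back-matter); these field names are illustrative only:

```python
import json

# Hypothetical, drastically simplified assessment result: one observation
# (the evidence gathered) and one finding (the judgement against a control).
result = {
    "assessment-results": {
        "results": [{
            "title": "EC2 security group check",
            "observations": [{
                "subjects": [{"title": "i-0abc", "type": "component"}],
                "collected": "2025-04-09T10:26:00Z",
                "description": "Explicit security group present on instance",
            }],
            "findings": [{
                "title": "explicit-security-group",
                "target": {"status": {"state": "satisfied"}},
            }],
        }],
    }
}

# What one tool emits, another tool (regulator-side or developer-side)
# can parse without any bespoke format negotiation.
doc = json.dumps(result, indent=2)
```

That round trip, emit a standard document, parse it with a different tool, is the interoperability win the standard is after.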
However, the more you understand it, the more you realize how much thought has been put into it and how difficult it is to have a standard that covers thousands of different organizations and tries to bring them together. Through building this, we’re talking to NIST about some of the changes we’d like to make in future editions of OSCAL. The big one is that OSCAL is built around point-in-time compliance. It’s really reflecting that existing state of things. We’ve had to add concepts like the labels, like the streaming of data that is consistent with the continuous world. I think we’re going to be influencing the changes in OSCAL in the future. If anyone’s interested, let me know because there’s a working group on it as well. A little bit more on OSCAL there. There are different layers across the lifecycle.
I wrote a tool to help me understand OSCAL using Neo4j. This is a visualization of the OSCAL schema. It helped me understand the relationships between the different parts of it. If anyone’s interested in that, it’s available online. It’s Dockerized, so anyone can spin it up.
What lessons have we learned building this so far? We’ve found no direct competition to what we’re doing. Where there is competition, it tends to be point-in-time or very expensive, not open source, not holistic, and not using open standards. We’ve noticed that other people are coming at it very much from a compliance orientation, whereas we come from more of a developer orientation. We don’t want to spend our time doing these spreadsheets. We’re coming at the same concepts from a slightly different angle. I think we’re converging as we explore more of the OSCAL standard. We’re still debating whether to stay with MongoDB. I’m very old-fashioned. I like relational databases, but we’re still with MongoDB. I think we’ll hit performance problems at some point, but that’s for a later time to think about.
The big thing for me was the use case refinements. As we started showing it to people, we were initially talking very much about regulation and OSCAL, and the people we talked to were saying, this can really help me sleep at night. I talked to one person who has 200 engineers working under him, and he said he has no idea what they’re doing. If an auditor comes and asks him what’s his understanding of how he’s controlling the entire dev lifecycle, he has no idea. He has the artifacts that he’s written. He’s got the PDFs and so on, the documentation of processes, but he doesn’t know what’s actually happening. He’s living in fear that bad practices are going on and he’s not aware of it. We’ve switched more to the help-me-sleep-at-night scenario, but because we think the regulation changes are so important, we want to continue with the regulatory point of view as well.
3. Where We’re Going with It
Where are we going with this? We’re focusing on sleep-at-night use cases. We would like to be the standard for central control management. We’d like it to remain open source. We’re building more plugins and integrations, driven by whatever users need. The other thing we’re doing is probably getting more involved in compliance in the SDLC. Software Composition Analysis, Static Application Security Testing, Dynamic Application Security Testing, Bill of Materials: those are the things we’re probably going to be getting more into. Even things like joiners, movers, leavers. Every company has a joiners, movers, leavers process. How do you know that your leavers are having their credentials revoked? That kind of question is something we think there’s mileage in.
At the extremes, we’re thinking about, do we want to have distributed CCF instances? If your company has a CCF instance, another company has a CCF instance, they talk to each other. They could exchange information about the controls they’re tracking and not tracking, and so on. I think that’s something interesting, but that’s very much for the future. In my head, I envision like a Terraform world where you’ve got these public plugins that anyone can use. They’re standardized and they’re usable by anyone. It doesn’t take long to write a plugin, especially with ChatGPT.
Questions and Answers
Participant 1: How do you handle subjective requirements? For example, somebody should have a SOC 2 report, and now I can attach any PDF report to it as compliance evidence, or a pen test should be done when major changes are made. These kinds of requirements in compliance.
Ian Miell: If I’m going to interpret the question slightly to my benefit, you said subjective requirements, so, for example, things that aren’t so easily automatable? A feature we could easily build into this is storing manual control audits in here. That would just require a form and somewhere to store it. We focused very much on the automated side for the big wins. Yes, if a customer came along and said, we want you to build something that captures those things, we could do that. If it can be automated, we want to automate it. Even there, I think there might be scope for adding other things. There are certain things like, does my data center have a backup generator? It’s hard to generate JSON for that. Someone has to go and physically look and say, yes, I’ve seen it. I think that’s coming down the pipe.
Participant 2: Have you thought at all about taking actions off the back of the findings as well and maybe auto-remediating stuff or intercepting things before it gets a chance to go bad?
Ian Miell: Yes. One of the first people we showed it to, said, could you take action and basically go and change a Git repo based on it? We went, yes, you could do that. You could even automate the change with AI and automate the pull request and take it away from the developers. That particular person was complaining to me. He was an engineering manager, and he said, “All my engineers are complaining that 50% of their time is spent on compliance, and I think they’re full of it. I’d like to take away all the excuses”. Yes, we could do that. It would require engineering. We’re waiting for someone to pay us to build it, basically.
Participant 3: Just a question around data sovereignty and talking about how you store the data. How is that partitioned? If you’ve got different customers, how is the data secured?
Ian Miell: It’s not a SaaS. We don’t want to get into the business of being a SaaS, mainly for the reasons that you described. We assume that if you’re a top tier bank who wanted to implement this, you’d want to store it on-prem, run it on your own infrastructure. We’ve made it easy for you to run it on your own infrastructure. We haven’t given any thought to running it as a SaaS because of that challenge. This is going to have some connection to all the systems of a client, that’s going to open up a lot of security questions. Then you’re going to have data streaming out, which is going to be sensitive, so names of machines, IP addresses. I can’t see many banks wanting to give us that information.
Participant 4: You talked earlier about the siloing between tech and audit. As a tech team, who do you think should be excited to install a service like that? My audit team or the tech team as me trying to be compliant?
Ian Miell: Who should be excited by this? If I’m an auditor, I’m getting well paid to go around taking screenshots of things. Quite happy it’s paying the mortgage. If I’m a developer, it’s just annoying. I don’t want to deal with this stuff. The people we’re really targeting is the people who can’t sleep at night. It’s the compliance leader who wants to have answers for regulators when they come in. That’s the person we think is going to be excited about it. I hope engineers will be excited by it because it’s automation and they can spend more time writing features or whatever it is they want to do. It’s the leaders.
Interestingly, another thing we’ve learned as we’ve shown it to people, it’s not just compliance people, it’s engineering managers who are really interested in this. For example, we spoke to a retail CIO who said, yes, great regulators, whatever, but I get phoned on a Friday that a third party that we work with has opened up hundreds of VMs to the public, potentially exposing data. They get on a call, priority 1 call, and then they find 18 other problems. They said, I would like to see that earlier and I would like to be able to send an email to that team and say, you’re not compliant, go fix it, and don’t come to me if it doesn’t work. A CYA process for them. I think it’s any sort of leader who’s responsible for the work of other people.
Participant 5: I spent some time working with Azure Policies, which is a similar concept, even though it’s probably proprietary, their format. Installing this on a Kubernetes cluster running AKS would yield two different compliance reports. Is there any interest in the industry for standardization? Have you been approached by other vendors?
Ian Miell: We’re talking to vendors in adjacent areas. A lot of people are interested in adding this tool to their armory. This tool is not going to solve all your controls and compliance problems. You should have reactive controls. You should have preventative controls. This is very much aimed at the person responsible for things across different systems. You mentioned Azure Policies, yes. If you’re just using Azure, Azure Policies is fine for you. It’s those people who have six different data centers, three different cloud service providers, the polyglot group of people who just want to have a single point of view. It’s those people that we’re targeting. We thought we’d get a negative reaction from vendors, but actually they’re very welcoming of this because they see it as something they can plug into and show their clients.
For example, we talked to some vendors and they said, yes, our client’s always complaining they’ve got eight different systems like ours. If we have this thing, then we can solve that problem for them. We are getting positive feedback on that.