Recent high-profile software supply chain attacks have highlighted the importance of security in the DevOps environment. But this can be challenging because DevOps teams are at the forefront of digital transformation and use agile techniques to deliver applications quickly, often not following traditional paths of identity management. Join experts from KuppingerCole Analysts and GitGuardian as they discuss security vulnerabilities in DevOps environments, which are often due to a lack of visibility and control of widely distributed secrets such as API keys, database passwords, cloud access keys, certificates, SSH keys, and service account passwords, leaving millions of credentials exposed.
Paul Fisher, Lead Analyst at KuppingerCole, will discuss the challenges and importance of managing secrets in DevOps environments, which increasingly include the use of multi-cloud, workload containerization, and infrastructure-as-code. He will also explain how the business advantages and security of DevOps can be improved. Mackenzie Jackson, Developer Advocate at GitGuardian, will explain the issues of secrets sprawl and poor secrets hygiene. He will also discuss in detail a secrets management maturity model developed by GitGuardian, highlight the benefits of automated secrets detection and remediation, and describe how these can be used to infuse security into development workflows.
So that's to come. Here's the agenda. First off, I'll be talking a little bit about DevOps and the business culture that we currently see. Then Mackenzie will be talking in more detail about secrets management, with particular relevance to DevOps and, of course, GitGuardian. And then we'll have the Q&A session. So let's get going. Let's talk a little bit about cultures. But before we do that, let's have the first poll, which is: what do you think is the biggest challenge for security in multi-cloud environments? The answers are: one, credentials, secrets or data left unprotected in the cloud; two, no control over privileged accounts with access to the cloud; or three, poor cloud architecture design and a lack of network hygiene. So we'll start the vote now and give you 30 seconds or so to vote. I just noticed that we were so impressed with that question that we've added an extra question mark on there. That one slipped through the QA.
I'll just leave that running a little bit longer. Okay, I think we've now got enough results in, so let's move into the main part of the webinar. What do businesses want? We often talk about technologies, emerging technologies, infrastructure, et cetera, but we should always think about the business. So let's take a quick look at what businesses want. Obviously they want to be agile. Agile is possibly an overused word, but it's still a relevant one. The more quickly that things can happen, the more things can be changed or fixed, the better. Hence the emergence of the agile philosophy, agile scrum, and so on. The goal of that is obviously a rapid rollout of services and products. The competitive landscape in virtually every sector is such that you can't afford to be behind your competitors when it comes to products.
But that works on the inside as well. We're not just talking about what the business does for its customers. Obviously that's of paramount importance, but also what happens inside the organization: improving infrastructures, quite often these days through digital transformation, digital initiatives, or moving things to the cloud. And in an age of inflation, supply squeezes and uncertainty (let's face it, we live in turbulent times across the world), anything that can reduce costs or reduce reliance on unreliable supplies is also what the business wants. Productivity, again, is a word a lot of people use; politicians use it to describe the gross output, or lack of it, of a nation. But productivity is also relevant at the level of the individual business.
So any business that can increase output and productivity is going to be at an advantage. And finally, behind all of that, one of the positives, and also one of the demands, of digital business culture is the amount of data that is produced. Increasingly, companies see that data as a source of intelligence: better ways of refining products, better ways of working, better understanding of their customers. So those are essentially the six key things that businesses are looking for in today's environments. Let's now bring into focus what we're really talking about. Key to all of that, it has to be said, are coders, programmers, developers. This is a quote from a book called Coders, written by Clive Thompson and published a couple of years ago, which captures the essence of these people, the culture that exists among them, and how the way they work is having an impact on how businesses operate. And it's not always a mutually beneficial relationship at the moment, because the two are working towards the same goal, but not necessarily on the same path.
So developers (and there's the cover of the book I was telling you about) tend to work extremely hard. They're very target focused. They, more than anyone, are cloud native and cloud savvy; they can work in multiple cloud environments right from birth, as it were. Coders are in a multi-cloud environment. They use tools such as MongoDB, for example, a cloud-based database tool for application development. And they love automation, they love speed. Anything that can make their job quicker but more accurate is essential to them. Even more so after the pandemic, more coders are now working remotely. And that can be all sorts of places, not just the stereotypical hotel room or coffee shop, but also at home, or even abroad, et cetera.
They are not afraid of open source. They're not afraid to try new things, which is relevant again to what Mackenzie will be talking about. They use tools like Slack and GitHub to get stuff done. And, this is key, they're very into collaboration and sharing, which again is both a good and a bad thing. Collaboration and sharing between coders encourages innovation and encourages ways to find solutions to problems. But it can sometimes lead to things being overshared, or things being left behind which shouldn't be left behind. Ultimately, though, coders are problem solvers. The business has a problem, to develop an application to do a certain thing, and coders are increasingly the people who are going to provide that solution. So the point is, developer culture is hugely important now to more and more organizations, even those that were once perhaps considered more static, such as financial services or insurance, and even sectors like healthcare. The pace of change and what businesses want is leading to a much more dynamic environment everywhere.
A few years ago the term SecDevOps came into use, sometimes known as DevSecOps. The key is that people started thinking: we need to secure these DevOps guys. It was a great idea bringing developers and operations together, but they seemed to be a law unto themselves. I think that was a bit unfair, because I don't think DevOps developers actively saw themselves as a law unto themselves; they just like to get things done, and sometimes this meant they didn't follow the traditional security rule book. But what happened to SecDevOps? Did it actually improve security? Did it improve productivity? Where in all this were the traditional identity management tools such as IAM, privileged access management, and more latterly cloud infrastructure entitlement management?
Was it sustainable to have yet another department of IT security specifically designed to look after developers? I would argue personally that it's not really a sustainable practice. The idea that you can keep up with the speed of developers with tools which are branded SecDevOps, or are just add-ons to existing applications, was probably a nonstarter. And I think that we are now moving to a different age, which we should also be discussing. That means that developers effectively are unleashed. We should give them as much freedom, room and dynamism as possible so they can deliver the stuff that we expect them to, but we need to know how to protect them. It's not that they are willingly seeking to break the rules or damage the business.
It should perhaps be the other way around. We can enable them to be efficient and productive and deliver the tools, but we need to protect them, not the other way around, if you see what I mean. If we protect them, we protect the business. Also, what can we learn from them? Like I said, they are the technological and cultural leaders in the way that they do stuff, and they innovate amongst themselves. When we are building access management tools, we can certainly learn some lessons from them in how to make things more innovative but also more dynamic, much closer to where the operations are actually happening.
So let's take a quick pause and run our next poll: which cloud platform do you mainly use for your digital transformation? Please just select one. The choices are pretty obvious, I guess: AWS, Azure, Google Cloud, IBM, or any other. I imagine most people on this call will use a cloud of some form or another, but we're just interested to see which is more popular. I ask because Microsoft certainly is developing cloud infrastructure entitlement management tools specifically for Azure and Azure Active Directory. So there is a kind of arms race going on between AWS, Azure and Google in providing the more secure and more efficient access tools. So let's look at how identities might flow in a typical organization. What we have here is an identity flow, and in this one I've highlighted cloud infrastructure entitlement management as the particular tool for allowing identities to flow through the business.
As it shows, the core business infrastructure extends out: the developers, the end users, the machine identities, et cetera, at some point enter the core business infrastructure, either through an endpoint or through a workstation. Then they would possibly use a privileged access management tool if the developer has a privileged account. More likely, they may now be using cloud infrastructure entitlement management tools, or the proprietary tools that come with AWS, Azure or Google Cloud; all three have their own versions of some form of access management. But what those developers specifically want to get to is cloud services. They need to access platform as a service, software as a service, all sorts of tools and infrastructure, and of course private clouds. And that's where the bottlenecks can happen, which is why we need more dynamic entitlement, particularly when they might be accessing private clouds which are still kept on premises. To get to what Mackenzie will be talking about, this is where the secrets management piece really comes into play. The secrets management piece that we need to see become more common needs to be as friction free and as close to those developers as possible, so they can actually get to resources such as files, servers, workloads, containers, et cetera. So once again we see that everything in modern organizations and modern infrastructure is about matching identities with resources, and how we get them together is the important bit in the middle.
So identity management, cloud management, secrets management is something that we need to adapt to the way developers and coders work. We need to move away a little, I think, from the more traditional idea that security was everyone's responsibility in a business, so that everyone went around with the mantra: we shall stop all phishing attacks, we shall never do this, we are very security aware. I would, perhaps controversially, suggest that we should actually get to a position where the IT security platforms, the cybersecurity platforms, the access management platforms and secrets management look after the security, and are smart enough or automated enough to do the security management that enables the doers (the coders, the employees, the partners, everyone) to do the job they want to do, rather than being lumbered with this extra responsibility. We also need to start thinking about whether we need to move away from monolithic platforms.
Traditionally, privileged access management and identity and access management have been pretty big centralized platforms which work bottom-up and sideways to manage identities from across the infrastructure. The problem is that those platforms were traditionally deployed and managed by CIO or CISO teams. The developers particularly are starting to move away from that zone of influence because of things like speed and the ease with which end users and developers can spin up cloud resources, spin up a virtual machine, spin up things like Windows 365 instances pretty easily. And there's absolutely no way that a CIO or a CISO or their team can see that on a daily basis, let alone as it happens. We also need to think more about zero trust. I had put zero trust there, but what I mean is an adaptation of zero trust: if we verify, entitle and secure at the start of any identity workflow, we don't need to worry about users' responsibilities when it comes to security awareness, et cetera.
So those are the four areas in which I think identity management, secrets management and privileged access management have to adapt. Just to wrap up my part of the webinar, here are three things to think about when you're thinking about secrets management, not just for DevOps but really for any part of your business. Start thinking about allowing autonomy. Accept that there will be new centers of control: not centers of security, but centers of control. Think about what I call zero-distance identity and access management, where the control is right next to the access point; so we're talking about decentralized identity. And think about embracing infrastructure as a service and operations automation. The ops part of DevOps is becoming increasingly unimportant, I think, because much of operations is being automated away.
If you can automate, you will take away those workflow chores. And start thinking about what we at KuppingerCole have classified as dynamic resource entitlement and access management, alongside cloud infrastructure entitlement management solutions, which include some PAM solutions but also some newer ones. There is actually a Leadership Compass which was recently published on that theme. So there are vendors out there which understand this new environment, and it's well worth reading about them in that report. With that, I'll hand back now, or rather I'll introduce you to Mackenzie, and I'll see you in a little while.
Thank you so much for that. Well, hopefully you can all hear me; I'm sure someone will say something if you can't, and you can see my screen. I want to continue on a little bit from what Paul was saying, and we're going to go through a security maturity model for secrets management. We'll discuss exactly what that means throughout the presentation, but essentially, what it looks like to be an organization with a high level of maturity when it comes to handling what we call secrets: sensitive information.
Just a quick overview, which I'll go through quite quickly. I've got a lot to cover, but just so you know, everything that we're going through is outlined in reports that I believe you'll have access to. We'll go through the problem of hard-coded secrets and exactly what they are. Then I'll try and scare you all a little bit by talking about some high-profile breaches. I think it's always good to relate conceptual ideas, models and maturity to what happens in reality when those fail. So we'll talk about some recent high-profile breaches that have happened as a result of poor secrets management. We'll talk about insecure development practices, the maturity model that GitGuardian has put together of what we believe companies should strive for in secrets management, and automated detection and remediation.
And then I believe we'll go into some Q&A, where you'll have an opportunity to ask me some very difficult questions. So let's start with the problem of hard-coded secrets. I like to get everyone onto the same page about exactly what I mean when I'm talking about secrets. I'm talking about what we call digital authentication credentials. These can really be anything that authenticates us, decrypts data, or gives us access to systems, databases and services. Typically these will be things like your API keys to connect to third-party services. It may be security certificates, it may be encryption keys. It can be simple credential pairs, usernames and passwords, to things like databases or services, browser session tokens and much more. So really, you get the idea: these are essentially going to authenticate, and give access to, the services that you use, whether to yourself or to potential malicious actors.
Very quickly: GitGuardian does a lot of research into how many of these secrets actually get exposed in the wild. One of the places that we look is GitHub. GitHub is a tool that nearly every developer will be using in their arsenal; there are more than 80 million developers on GitHub, and it's really a code-sharing platform, the largest space to share open source tools, repositories and code. Developers often use it as a portfolio; it's almost unavoidable to have some form of presence on GitHub as a developer. What GitGuardian does is scan every single public activity on GitHub. Every time a developer commits code publicly to GitHub (if you're not familiar with the word commit, you can think of it like upload), GitGuardian will scan it.
To give an idea of the volume: there were over 1 billion commits last year, by 56 million developers. I said that there are 80 million developers on GitHub; the difference is that not all of them pushed code publicly last year, but 56 million of them did. Huge numbers. So how many secrets did we actually find? We found 6 million secrets exposed publicly. This is out in the open, behind no authentication, not behind any layers, just sitting out in public. As an attacker, that gives you a lot of options as to what to do. Now, these range in severity, but I can tell you that 15% of them were for cloud providers. So over a million secrets for cloud providers were leaked publicly on GitHub last year, and even if a key is for a personal account, that's still highly problematic and you can do a lot of damage with it.
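To make the idea of commit scanning concrete, here is a minimal sketch of the kind of pattern-based detection a scanner runs over each pushed commit diff. This is purely illustrative: the detector names and the `find_secrets` helper are my own, and real scanners (GitGuardian's included) combine hundreds of specific detectors with entropy analysis and live validity checks.

```python
import re

# Illustrative patterns only; a production scanner uses far more detectors.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_assignment": re.compile(
        r"(?i)(api_key|password|secret)\s*[=:]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def find_secrets(diff_text: str) -> list[tuple[str, str]]:
    """Return (detector_name, matched_text) pairs for lines a commit adds."""
    hits = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):  # only scan added lines of the diff
            continue
        for name, pattern in PATTERNS.items():
            match = pattern.search(line)
            if match:
                hits.append((name, match.group(0)))
    return hits
```

Running this over the output of `git show` for each new commit is the basic shape of the pipeline described above.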
So let's talk about some high-profile incidents of what's actually happened when these secrets find their way out into the wild. The first one I want to talk about is Codecov. I won't go into too much detail about what Codecov is, but it's a tool that essentially helps you test your application: it measures how much of your application is being tested. So it does a very specific job, and it sits in the CI/CD pipeline in your software delivery process. So what actually happened? Codecov ships their product as a Docker image. A Docker image is basically a snapshot of an application and everything it needs to run. Organizations take this Docker image and plug it into their services to run Codecov. This Docker image was publicly accessible to everyone; it was stored on Docker Hub. An adversary decided to investigate this Docker image, and they found that there was a plain-text credential in there, a hard-coded secret.
Anyone could have found this had they known how to dissect the image, and it's really not that difficult. This credential gave read-write access to some files that Codecov were hosting, and that allowed the attackers to inject a malicious line of code into a Bash script that 20,000 customers were actually using. So 20,000 organizations were using this version of Codecov. That malicious line of code was buried very well; it would be very hard to notice if you were perusing the document, which had over 2,000 lines of code in it. And what the script did was say: when an organization uses Codecov in that testing environment, capture all the credentials that are being used by the application in that environment.
All the secrets, and send them remotely to this address, the attacker's address. So every time a customer or user of Codecov ran that application, it stole the credentials and sent them to the attacker. The main credential the attacker was after was access to internal version control systems, namely internal Git repositories. What we know is that a lot of large companies that were using Codecov actually had their internal version control systems accessed by this malicious actor, and companies including HashiCorp, Twilio, Rapid7 and monday.com were all affected. So an attacker found a hard-coded credential in a Docker image, used it to inject some malicious code, and from that was able to move laterally into different organizations, where they were able to access internal repositories. And in all the companies I just named, HashiCorp, Rapid7, Twilio, there were actually additional secrets in their internal version control systems.
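As a defensive counterpart, you can run the same kind of check against your own images before publishing them. The sketch below is my own illustration, not Codecov's or GitGuardian's tooling: it scans the files inside a `docker save` tarball for secret-like strings. Note that a real image nests each layer as its own `layer.tar`, so a thorough scanner would also recurse into those nested tarballs.

```python
import io
import re
import tarfile

# Example patterns only; a production scanner uses many more detectors.
SECRET_PATTERNS = [
    re.compile(rb"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(rb"(?i)(password|token|secret)\s*[=:]\s*\S+"),  # generic assignment
]

def scan_image_tar(tar_bytes: bytes) -> list[str]:
    """Return the names of files inside the tarball that contain
    secret-like strings. Nested layer tarballs are not recursed into."""
    findings = []
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            if any(p.search(data) for p in SECRET_PATTERNS):
                findings.append(member.name)
    return findings
```

In a CI step you would feed this the bytes produced by `docker save yourimage`, failing the build if anything is flagged.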
So a pretty bad scenario; as I said, 20,000 users were affected. This is one example of a secret getting out into the wild, in this case through a deployment process. Another very recent one happened a couple of weeks ago, which is Uber. You may have heard that Uber suffered a very significant data breach. This one was actually incredibly bad, but luckily for Uber, the adversaries that did this appear to have been more interested in making lots of noise than in achieving malicious goals, so it could have been much worse than it was. But what exactly happened? A contractor for Uber, an ex-contractor, had their credentials stolen and put on the black market. I'm not exactly sure of the process by which that happened, but essentially you can buy all kinds of credentials to organizations on the black market.
It could have been through a different breach, could have been through phishing, but we can assume something like that was how this contractor had their credentials put on the black market. The attacker then tried to access Uber's internal network via a VPN with these credentials, but was blocked by multi-factor authentication. So the attack could have stopped there. However, the attacker in this case applied some social engineering to get that contractor to accept the multi-factor authentication request. Exactly how they did this, I'm not sure, but it could have been a phone call with the attacker claiming to be from the security team, saying the contractor needed to accept the request because they needed access to the account. However it happened, the attacker was then able to gain access to Uber's internal network. Now, just because they have access to the internal network doesn't mean that they can do horrible things.
It's not good, and it's certainly very critical, but this was elevated to a whole other level thanks to some hard-coded secrets that were lying around in Uber's internal network. The attacker started scanning documents and files, enumerating through all this different data, and they found some PowerShell scripts, which are used for various kinds of automation. Inside one of these PowerShell scripts there was a hard-coded credential for the privileged access management system, the PAM system, and it was an admin credential. The privileged access management system is what controls access to all your different third-party services; it stores your secrets. Admin access to this is the worst-case scenario: it's basically your password manager and your secrets manager combined, and the attacker then basically had access to everything.
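The pattern at fault here is easy to illustrate. The snippet below is a hypothetical Python equivalent (the names and the contents of the real PowerShell script are not public): the hard-coded version leaks to anyone who can read the file, while the environment-based version keeps the credential out of the script and its history.

```python
import os

# Anti-pattern: the admin credential ships inside the script itself, so it
# lives on in version control history, backups, and any copied file share.
PAM_ADMIN_PASSWORD = "Sup3rS3cret!"  # hypothetical value for illustration

def get_pam_password() -> str:
    """Preferred: resolve the credential at runtime from the environment,
    typically injected by a secrets manager, and fail loudly if absent."""
    password = os.environ.get("PAM_ADMIN_PASSWORD")
    if password is None:
        raise RuntimeError(
            "PAM_ADMIN_PASSWORD not set; fetch it from your secrets manager"
        )
    return password
```

The same shape applies to PowerShell: read the credential from an injected environment variable or a vault call instead of a literal in the script.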
So they moved laterally from the PAM system, which happened to be Thycotic, into AWS, GCP, Google Workspace, Slack, SentinelOne, HackerOne. And that's just what we know of. At that point, the attacker started making a lot of noise, posting in Slack channels internally to say that Uber had been hacked. So what we can see here is that there were hard-coded credentials in this PowerShell script. Why were they there? Well, it was admin access for adding users, so probably they were automating some process of adding users. But because of that, worst-case scenario, the attacker got access to basically everything; had they been malicious, it could have been much worse, but apparently the goal was to make noise and get publicity more than anything. So, insecure software development practices. We're really building software at a rapid pace now, and there are a couple of things that have caused and amplified some insecure practices that have been in place for a long time and are really being exposed now.
Number one is that we're adopting CI/CD pipelines, we're adopting DevOps practices, and this has accelerated software development to a point where it's gone past what we can manage from the security point of view. Deployments are happening daily in large companies. There are so many tools with automated processes that one developer can control the entire deployment process from code. So we are codifying all these different steps, but those coding practices haven't become more secure. Paul touched on the monolithic application. We truly are moving away from this, and I always like to use the example of all the different third-party tools that we use today. Credit card processing is the easy one to talk about: are you going to build your own credit card processing, or are you going to use a company that does this already?
Stripe or PayPal, right? You're going to use an existing provider unless that's the core thing that you want to do. And this is extrapolated to everything now: managed databases; maybe we want to implement search, so we'll use Algolia; maybe we don't want to write our own authentication, so we'll use Okta or Auth0; any number of different tools. These are all used by developers, but then when we go to the DevOps team, they have their own tools too. We've got cloud infrastructure: are you going to build your own servers to host your application, or are you going to put it on AWS? Are you going to create a version control system, or are you going to use Git hosting? All these different areas. And we can extrapolate this further into microservices, and into the tool sets sales and marketing use as well. The key point here is that all of these different tools are external, and they all leverage secrets.
We have to authenticate ourselves, and every single one of these tools is a potential access point for an attacker. A given tool may not be critical, but as we saw with Uber, if you get access to the internal network, the internal repository, the internal messaging system, then you're probably going to be able to find additional sensitive information to move laterally. So we're really accelerating the pace of code, but we're also introducing another level of vulnerability into our applications. So why do secrets get exposed? The number one reason (I could skip past this whole slide) is basically human error, but we can go a little bit deeper. You can imagine a scenario where a developer is quickly trying to get something to work, and he or she hard-codes a credential.
They commit that, and even if they then do the right thing, even if they remove that credential and use environment variables or whatever else, by the time it goes to code review there's no sign of any trouble. But source code is a vulnerability in itself, because it keeps the entire history. You do something a year ago, there's a record of it. You hard-code a credential on a development branch 500 commits ago, it's still there in the history. It's very hard to delete, especially if you're working in teams, and attackers know this, which is why they look for it. So that's one area. This is particularly true when a repository is made public. We deal with a lot of open source; companies regularly maintain open source projects, and these often start as internal projects that the company decides to make public for various reasons.
Perhaps they don't have the budget to maintain it themselves, so they open source it. But if they hard-coded something a year ago in the safety of a private network, well, now they've open sourced it, they've long forgotten about that hard-coded credential, and now it's public. Logs and generated files are another way. You could be doing everything right, but if you come across a problem as a software developer or an engineer, you're probably going to produce a debug log so you can figure out where it went wrong. Guess what's in that debug log: environment variables, different kinds of secrets. If that gets version controlled, or put on a network share somewhere and forgotten about, then that exposes secrets too. We have sensitive files as well, and we also see a lot of people accidentally pushing things to the wrong areas on GitHub.
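One cheap mitigation for the debug-log case just described is to redact secret-shaped values before anything is written out. A minimal sketch, with example patterns of my own choosing rather than an exhaustive list:

```python
import re

def redact(line: str) -> str:
    """Replace secret-looking values with a placeholder before logging.
    The two patterns here are illustrative, not exhaustive."""
    line = re.sub(r"AKIA[0-9A-Z]{16}", "[REDACTED]", line)  # AWS key IDs
    line = re.sub(r"(?i)(password|token|secret)=(\S+)",     # key=value pairs
                  r"\1=[REDACTED]", line)
    return line
```

Wiring a filter like this into your logging setup means a dumped environment no longer carries live credentials into version control or a forgotten network share.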
If you use GitHub personally and professionally, you have the same account; you just send things to different areas, which makes it really easy to make a mistake and send something to your personal account that was meant to be on a secure network. So what does it look like to have a mature model of managing secrets, a maturity model? Well, just to emphasize the problem that we're actually facing here, I'm going to move on from public spaces and start talking about internal spaces now. Take a medium-sized company with an average of 400 developers. What we see when GitGuardian investigates the internal networks and code of companies is that a company of around 400 developers will typically have about a thousand secrets sprawled across their networks and inside their source code. In particular, each of these thousand secrets will occur in an average of 13 different places.
In total, 13,000 secret occurrences. That same company probably has four AppSec engineers. So let's say this company is magically able to get visibility and locate all of these secrets. You now have four people who have to deal with 13,000 secrets — that's 3,250 each. If you do 10 a day for a year, take no holidays, and do nothing else, you may get there. But this is the problem, and we can't rely on any one person to stop it; we have to stop this throughout the entire software supply chain. So this is just an idea of why we really need maturity in secrets management, because the problem is huge. And even if you can uncover that you actually have a problem — which is roughly level three, where you become aware of it — it's already a little bit late.
So I'm going to go through four levels that we've come up with that we believe will give you good maturity in handling secrets, and I'll cover each level and what it involves. But first, I just want to say: of course everyone should be at level four, right? We should all have a fantastic security model. But you can't go from level zero or level one to level four immediately. There's no silver bullet. A lot of security vendors will say that their product solves the issue — implement it tomorrow and you're set. The reality that all of us who work in this space know is that vendors provide tools, not complete solutions to massive problems. It's a journey to get there.
So even if you implement some great tools tomorrow and uncover that you have 13,000 secrets in your network, you still need to go through a series of processes to actually get to level four, and you need a plan to get there. Throwing money at the problem doesn't solve it. We need tools to help us in various ways, but there isn't a silver bullet, and no matter how much money you throw at it, if you don't have that plan in place, you won't get there. So what's level zero? You'd be amazed at how many companies are here. This is basically where speed matters more than security: we manage all our secrets in an insecure way because it gives all the developers and all the DevOps engineers quick access to them.
We don't publish them publicly, but they're lying around everywhere. This is basically where a lot of companies are at, not even understanding how easily these are accessible. The argument may be that their source code is private, therefore it's not an issue. But these are the most sensitive items we have. If you're anything but a tiny company, you'll have tens or hundreds of developers with access to this — you're basically giving all those people access to your credit cards, access to everything. Not to mention the fact that security breaches are more and more common. Level one is where we start to see an understanding of this. Secrets are still stored on developers' machines in unencrypted form — in configuration files — but they're not purposely hard-coded. Maybe you have an environment variable file stored in plain text, and in our source control these secrets are grouped, so we have a little more understanding of where they are.
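The level-one step — no longer hard-coding the literal — can be illustrated by a minimal Python sketch that reads a credential from the process environment instead. The variable name `DB_PASSWORD` here is just an example:

```python
import os

def get_db_password() -> str:
    # Read the credential from the environment rather than hard-coding it.
    # Fail fast with a clear error if it is missing, so a misconfigured
    # deployment never silently falls back to an empty password.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

Note that this only relocates the secret: the plain-text `.env` file or shell profile that populates the variable is exactly the unencrypted storage that keeps this at level one rather than a higher maturity level.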
But essentially they're still in plain text and still distributed. And it's important to know that if something gets into your version control system, into your source code, it's going to be backed up in multiple different places: on developers' machines, in wikis, and you're going to lose track of it. Now, you'll also notice there are some grayed-out areas on the slide. We have a report on this with much more granular detail, but for the sake of this presentation and the time, I'm just going to focus on the developer environment and the source code. In the report you'll have access to, you can go further into those other areas as well. As we get into the more intermediate levels, you might start storing your secrets in appropriate places.
For instance, let's say you implement a vault — HashiCorp Vault is one that often comes up — and you're sharing secrets through a secrets manager, but developers still have access to them. And here's what's critical: you may have policies in place, but you have no idea whether those policies are being followed. You have a tool, you have a secrets manager, that's great, and you have some idea of how it should be used — but is it being used that way by your developers? Do you know? Then on the secrets detection side, you may have some manual detection, but it's not sophisticated enough to catch everything. And in source control we'll see that perhaps secrets are encrypted, but they're checked into our repositories. Just because your secrets are encrypted doesn't give you a high level of maturity, because that encrypted file is distributed everywhere and you've given yourself a single point of failure that just requires one secret to be leaked.
And if you don't yet have high maturity in managing secrets, the idea of having one single secret that will quickly unlock everything is certainly not a great practice. In terms of secrets detection, you might have some areas that are continuously scanned at different stages, but you don't have continuous scanning over your entire environment, and you still don't have visibility into whether the policies you created are being followed. So, level three: this is the first level where you're not completely screwed. I would say from level zero to level two you're at extremely high risk of exposing sensitive information, because you don't have clear visibility over whether the policies are in place and whether the tools are being used correctly. You have some ideas of how to do it, but level three is where you actually start bringing together some visibility, some tools, and — the really important thing here — some remediation.
So at level three, you have your secrets stored in vaults, and you're only sharing them with developers through appropriate secrets managers. And here's what's really important: you have rotation policies in place. What this means is that if secrets do get exposed in old history, you've been rotating them regularly enough that they don't pose an active threat. On the development side, you've now started to implement scanning at different stages of the software development life cycle — not just on your networks, not just on your source code on the server, but actually on the developer's machine. Why that's important is that it prevents breaches from happening. If a secret reaches the version control system, it's already game over. But catching it on the developer's machine means we can stop the bleeding, and we no longer have secrets in our source code.
But here's what's key in that secrets detection stage: we're continuously monitoring for hard-coded credentials on all of our repositories, so this is actually giving us insight, and we've started to implement remediation policies that bring together different teams. And finally, level four — this is the level we should all be aiming for. Let me point out the key differences between this and level three. Number one, we have dynamic secrets. This is the idea of just-in-time secrets that are created and destroyed just for the lifetime of the purpose they're used for, which means that if they get exposed, they're immediately redundant. And in the secrets detection stage, we have remediation that's automated and involves everyone. I'll talk very quickly about remediation and why it's important soon, but this is really the difference between being at that expert level and still being in a developing stage: things should be automated, they should cover everything, and they should involve multiple teams.
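The dynamic, just-in-time idea can be sketched as credentials that carry their own expiry. This is a toy illustration; a real dynamic-secrets engine (HashiCorp Vault's, for instance) also provisions and revokes a matching short-lived account on the target service:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class EphemeralCredential:
    token: str
    expires_at: float  # Unix timestamp after which the token is dead

def issue_credential(ttl_seconds: float = 300.0) -> EphemeralCredential:
    # A just-in-time credential: random, and useless after `ttl_seconds`.
    return EphemeralCredential(
        token=secrets.token_urlsafe(32),
        expires_at=time.time() + ttl_seconds,
    )

def is_valid(cred: EphemeralCredential) -> bool:
    return time.time() < cred.expires_at

cred = issue_credential(ttl_seconds=0.1)
print(is_valid(cred))   # valid immediately after issue
time.sleep(0.2)
print(is_valid(cred))   # dead once the TTL has elapsed
```

The security property is exactly the one described above: even if the token lands in a log or a commit, by the time anyone finds it, it no longer unlocks anything.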
And we're automated in the creation of secrets and in their rotation as well. This is what we really should be aiming for. So how do we build this effective program? Well, an effective detection and remediation program for secrets will include monitoring, detection, alerting, remediation, and analytics — and all of these are important. Yes, we need to be able to detect secrets, but we also need to make sure the alerts go to the right people and that the remediation process is set up. Go back to those thousands of secrets each AppSec engineer would have to handle: we need to make sure remediation is spread across different people so that it becomes a manageable task. And analytics is really important so that we can get visibility into where the problem is.
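The "alerts to the right people, remediation spread across people" piece can be pictured as a tiny dispatch step: security gets the alert with context, and the developer who made the commit gets the follow-up, because they know what the secret does. All names and fields below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LeakIncident:
    secret_type: str     # e.g. "aws_access_key_id"
    repo: str            # repository where the leak was detected
    commit_author: str   # the developer who committed the secret

def route_incident(incident: LeakIncident) -> dict[str, str]:
    """Decide who hears about a leaked secret and how.

    Security receives an alert with full context; the committing
    developer receives a short survey (is it valid? what is it for?)
    so remediation work is spread rather than piled on one team.
    """
    return {
        "alert": f"security-team: {incident.secret_type} leaked in {incident.repo}",
        "survey": f"{incident.commit_author}: please confirm validity and scope",
    }

actions = route_incident(
    LeakIncident("aws_access_key_id", "payments-api", "dev@example.com")
)
print(actions["alert"])
```

This is only the routing skeleton; a real pipeline would also record the incident for the analytics questions that follow (which teams, which projects, how long the secret stays valid).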
What teams are leaking the most secrets? What projects do we need to focus on? What systems are most vulnerable? How long are secrets valid for? All of this we need in our tooling. GitGuardian is one way of getting that visibility into your systems: detecting secrets, finding out whether they're valid, and which teams they're coming from. That's one way — and a critical one — of getting visibility into your systems, so that when you do implement systems like vaults and policies around them, you can actually check that they're being followed and that the remediation processes are being automated. And we need to make sure we have detection at multiple stages. I touched on this already: scanning just your server isn't good enough, because by then the threats are already active.
We need to find them earlier, on the developer side, in the local environment, and we also need to detect them when we deploy applications. In the Codecov case a while back, the secret was in a Docker application that was part of the deployment process — it skipped all the earlier stages and only revealed itself at deployment. So you could have detection on all your developers' machines, detection before secrets reach your server, detection on your server, but secrets can expose themselves at every single step of the way, so you need detection at every single step of the way. And the local environment is where you make it manageable, because that's where you stop the bleeding.
So, we've talked about DevSecOps — I know Paul talked about DevSecOps — and I'm nearly finished, but the term is often just thrown around. We actually need all these teams to work together, and a mature model of managing secrets and secrets detection will involve all of them: security teams will be able to see that the policies they create are being enforced; DevOps and SRE teams will be able to implement automated detection so that when something happens, they know about it and security knows about it; and developers can actually help in this process, literally shifting security left — "shift left" being the phrase everyone likes to use — because they're involved. If a developer leaks a secret, they need to be part of the remediation process, because they're the one who knows what it does.
Here's an example of that. A developer hard-codes a credential and it gets into your version control system: the security team gets an alert, and the developer automatically gets a survey — is the secret valid? What was it for? Who leaked it? — so that security actually has that information at their fingertips. That's what mature remediation looks like: it involves everyone, everything is automated, and everyone is providing information so that the decision makers who need to take action have everything at hand. So, concluding: my last slide is that secrets management stands on three core pillars — processes, people, and tools. Processes means being able to create policies and knowing what to do when breaches happen. People means training and raising awareness. And of course we need tools to help remediate, automate, and solve this problem. But as I said at the start, there's no silver bullet: all three pillars need to be followed if you're going to reach level four maturity. And with that, I'm open to the Q&A session, and I believe Paul will be back for this one.
Thank you so much, Mackenzie. We have some time for questions, but just before that, let's look at the poll results. The first poll was: what do you think is the biggest challenge to security? 58%, not surprisingly, thought credentials, secrets, or data left unprotected in the cloud; 25% said no control over privileged accounts with access to the cloud; and 17% poor cloud architecture design and lack of network hygiene. The second poll was: which cloud platform do you use? No great surprises there — almost 50/50 between AWS and Azure, none for Google, which is interesting, and 6% others; we'll never know what the others are. What do you make of those results, Mackenzie?
GCP surprises me, but kind of not. Azure is the one we're seeing come up a lot. AWS was the standard, but more and more we're seeing Azure become a really big competitor in that space, especially with some of their tools — Azure Pipelines and Azure DevOps are their GitHub competitors — and we're seeing a lot of big companies use them.
Yeah, I think it's looking like a two-way race. Like many things in life, it's like iOS versus Android; we're probably seeing the same kind of split. One thing's for sure: neither of those is ever going to acquire the other, so I think we're guaranteed at least two different cloud platforms for the future. We do have, as I said, a couple of questions, and they're a bit long. This one is for you, Mackenzie: what are your recommendations for an organization looking to implement secrets management with centralized storage? What should they start with first? I guess that's a very open question, but...
Well, I'll give an open answer. Look, that's a great question. When starting from zero, there are a lot of avenues you can go down in centralized secrets management. For me, one of the most important elements is that you need to understand the problem in your organization before you make any significant moves forward. What I mean by that is gaining visibility into the areas you're suffering from. Where are your secrets being leaked? Where are they visible? What processes in your current setup are being bypassed? These aren't easy things to find answers to, but if you don't find them and just try to buy expensive tooling to plug the holes, you're not going to implement an effective model to actually combat those sorts of problems.
What secrets are being exposed? What teams are exposing them? Where in the software development life cycle, or the deployment life cycle, are they being exposed? Are they in my source control? Are they in my configuration files? Start there, and then you'll be able to figure out where to move next — because in my opinion, if you're at level zero, one, or two, you shouldn't aim for level four immediately. You need to implement this gradually. Otherwise, if you start finding secrets and blocking developers from access, you're going to cause all kinds of disruption. So before you decide on your centralized model, find out where you need to take action. That would be my open-ended answer to an open-ended question.
All right, we are running out of time. I'll just thank Andrew Marshall, who has offered more a smart comment than a question on AWS versus Azure: you're not expecting this competition to go the way of VHS versus Betamax. I guess he's suggesting that AWS is the Betamax in this case and Microsoft the VHS, but personally I think neither will defeat the other. Let's have just one more question, Mackenzie: what pitfalls should security teams look out for when getting developer buy-in for secrets detection and remediation?
It's really important to get everyone on board with your strategies, particularly developers, but often they don't understand the severity of this. So before you implement tools that are going to block workflows or create additional steps, you need to make sure your developers and employees understand the criticality and why you're doing it: we're implementing this system because we have thousands of secrets on our networks and in our source control; this is a big security risk, and here's how we're going to address it. And always involve the developer, because you need their help, and you need to let them know they're part of this process too. Have those controls where possible — but I always say, don't try to block people in their workflows, because you'll create frustration and anger.
And that's why it's important to move through the levels. If you're going to implement blocking checks in your secrets management strategy, you need to already be at a higher maturity level; if you're not, you'll just annoy all your developers and they'll bypass everything. When engineers — DevOps, SREs, developers, or anyone else — are annoyed by processes, trust me, they'll find a way around them. So that's the pitfall when you're looking for developer buy-in: make sure developers are involved in the process and aware, and that you're not blocking them but moving towards a maturity model together.
Okay, we're right out of time, I'm afraid. Someone's asked: can we get a copy of the slide deck? Yes, of course — both slide decks will be available on the website as soon as possible. There were a couple of questions that didn't get answered; we'll forward those to Mackenzie and he'll reply separately, if that's okay with you, Mackenzie. In the meantime, let me thank you all very much for being with us today, and thanks especially to Mackenzie for that excellent presentation. As I said, if you do have any follow-up questions, you can email me, pf, and we'll do our best to answer them. But in the meantime, thank you again.