Welcome to our KuppingerCole Analysts webinar on how to find and block identity-centric security threats. This webinar is supported by Sharelock, and the speakers are Andrea of Sharelock and me, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts. Before we start, some quick information for this webinar, and a first poll as well, before we dive into the agenda. We are controlling audio. We will run two polls during the webinar. We have a Q&A session at the end of the webinar, and the more questions you enter into the Q&A, the better the Q&A will be. We are recording the webinar, and we will also provide the slide deck we are using for your download. So, before we really dive into the subject of today's webinar, I want to run a quick poll here first, and then by the end of my part of the webinar, we'll run a second one.
When we talk about today's subject, we will talk on one hand about the identity threats, and then about how to detect them and how to respond to them. Detection also brings us to the field of AI, and that is where the focus of my first poll is. The poll is around: are you already deploying AI-supported technologies for IGA and/or access management? So is there some AI element in your identity management already? The answer options: no; you are evaluating; you are in a concept phase; or you have already implemented something. Looking forward to your responses.
As usual, the more responses we get, the better. So click the button, don't be shy. I'll give you another five or ten seconds and then we close it. Okay, thank you. And that brings us to the agenda of today's webinar. The first part will be my presentation, which is a bit about why cybersecurity starts with identity, the role of identity, how to look at identity threat detection and response, and also a bit about the role of AI and ML in this context. In the second part, Andrea will talk about making identity threat detection and response a reality and how this works. He will really look at it from a concrete angle: how to implement this, and which types of indicators of behavior to look at. And in the third part, we'll do a Q&A session.
I'd like to start with some numbers, and this is a bit more generic. It's about what the most concerning attack vectors are, from a poll we have recently been running. When we look a bit closer at these numbers, it becomes obvious that many of these attacks are related to identity. Let's just start with ransomware: ransomware usually starts with a phishing attack, and phishing is about phished passwords, phished credentials. So there's an identity element in there. In business email compromise and CEO fraud, we surely can discuss, but there is clearly an impersonation aspect behind it. Attacks on critical infrastructure, at least the more concerning ones, tend to be what we call advanced persistent threats. This is where someone really gains access into the infrastructure and then tries to gain access to more accounts, to more powerful accounts.
There is also an important identity angle in malicious insiders abusing their existing entitlements, and so on. So at the end of the day, when we look behind cyber attacks, most commonly it's about identities. There are numbers that say 80 or 85% of all cyber attacks are related to identity. We can discuss this back and forth; there is also the part which just uses vulnerabilities in software, so not going through the identity but through vulnerabilities. But at the end of the day, while a lot of this exploits the weakness of unpatched systems, a lot is really related to identity. And we have quite a number of threat vectors around enterprise identity. So what can happen? This list is not necessarily complete; it's a list of aspects.
We have privileged accounts, and privileged accounts are typical attack targets, so to speak. Attackers always try to gain access to highly privileged accounts, because this gives them the biggest power when running their attacks. We also have accounts which aren't always adequately protected on the password side, like service accounts or application accounts. We still have, in many organizations, poorly managed shared accounts: accounts that are used by a group of users, non-personal accounts; and if multiple people use an account and know the password, we are in trouble. Shadow IT accounts which are not managed centrally, including the admin accounts for shadow IT. Sometimes remote access accounts. Weak authentication policies, still allowing standard username and password. Over-entitlement is very common, a lack of access governance. And we also usually are not perfect in lifecycle management, so orphaned accounts, et cetera.
We have exposed credentials, we have plain-text passwords, we have certain types of information which can be obtained by attackers, sometimes with brute force, sometimes by just finding stuff in memory which is not deleted again. So there are quite a number of these things. And also for contractors, for new employees, for bring-your-own-device and for partner access, our approach to managing accounts is not always perfectly good. So there are quite a number of areas where attackers can come in, and I would say, at the end of the day, one of the most relevant ones is still the password. Passwords are very phishable, and they are used in quite a number of attacks. As I've already said, a lot of these attacks leverage identities: usernames, credentials, passwords, tokens, tickets, all that stuff. And what we have seen in the past couple of months is more and more MFA-targeting attacks — attacks against MFA which try to overcome the inherent strengths of MFA.
There are various ways to do that, and I think we always need to be very clear about the attackers. Frequently, at least the ones who have a clearly defined target — which is common for targeted attacks — they are sophisticated, and they can use quite a number of types of access. They can work with insiders, or even be insiders. They can search for information on the dark web. They can use every type of reconnaissance, and, not to forget, phishing. They also sometimes just use brute-force attacks, depending on what they're attacking; and the older and weaker the technology is, the more likely it is that a brute-force attack can succeed. So let's look, for instance, at the MITRE ATT&CK matrix, which probably all of you know. This is the matrix which looks at a range of attacks, how they can happen, and so on.
I think this is a very good starting point to understand which different types of attacks there are, how they work, and what happens in these attacks. I don't want to go into detail; this framework is well known, easy to find, easy to access. So this is a starting point, but what I want to do is look at where identity comes in when we look a bit closer into an attack framework. In the reconnaissance phase, we have phishing, we have probing against authentication services, gathering victim IDs, and so on. Then, as a strategy, establish accounts or compromise victim accounts to get into the systems. From there, for the initial access, it again may go through phishing, through supply chain compromise, through exploiting trust relationships, et cetera.
Then it goes into the execution of code, of malware in some way: access the system data, stay persistent, edit accounts, create new accounts. The more powerful you already are in the system, the more power you have as an attacker and the more you can do there: modifying policies, whatever is feasible by finding weak spots in the system. So staying persistent, escalating the privileges — go from there and say, I try to get more. This is really for the targeted attacks: abuse weak spots in the software, unpatched software, privilege elevation controls, whatever is feasible, maybe even the domain controllers and such, depending on how deep you as an attacker already are in the network.
Then work against the defense: manipulate stuff, masquerade, all these things where you try to hide who you are and that you are in the system, et cetera. Then access the credentials. There are a ton of options for doing that. I don't want to go into all the details, but when we just look at the list — man-in-the-middle, stealing tokens, forging stuff — there are so many different ways to access credentials and to perform fraudulent activities around identities. This is such a big part of cybersecurity, of attacks, that it becomes clear it's not easy to defend against. Then discovery: who is out there, et cetera. Lateral movement: moving into other systems, so starting in one system and moving across the network — this, by the way, is where Zero Trust started, with the sense that we need to avoid lateral movement. And then the impact at the end.
Denial of service, accounts that are removed, services that stop, resources that are hijacked and all the things that can happen with hijacked resources, data exfiltration, blocking services, whatever — there are many, many things an attacker can do once they are in. And I think it's an interesting exercise: this is a very high-level look, but so many things within the MITRE ATT&CK framework map, if you look in more detail, to identity-related threats, that it's very clear we must understand what is going wrong and whether our identities are used the way they should be used. There are quite a number of things to do here. This is about monitoring real-time events, this is about detection, and this is really where ITDR comes into play — this is where Andrea will give you a lot of insight in his part of the talk. At the end, we need to understand where the anomalies are, where things are happening that are not what we expect to happen.
Only then can we respond, only then can we do our deception work. That also requires that we do better identity proofing before everything — who is the identity? — better authentication, all that, and use as many signals as we can from the devices that are used and from the behavior of the user. That also includes behavioral biometrics and many other things. So what we need to do is monitor, and apply strong security from the very beginning — device binding, strong authentication, passwordless authentication, behavioral biometrics — and only then can we move to response and deception, because only then can we detect. Detection, on the other hand, is the challenging part, because you're frequently talking about very large numbers of users, you're commonly talking about a huge number of signals, and we need to respond relatively quickly.
But we also need to understand what is changing in our systems: is the current state of our systems still the to-be state, or did something change, and how can we mitigate this? How can we harden and reduce the risks from the very beginning? So when you look at identity threat detection and response: the problem of identity-based attacks is getting worse, but on the other hand, solutions are getting better, and a very important part of that is everything which is done well based on AI and ML. Why? Because it helps us deal with the huge number of signals: we can apply it to understand where the outliers in our entitlements are, where the anomalies and outliers in our authentication are, et cetera. So we have this growth of cybercrime, we need fraud reduction intelligence — which is really about understanding where fraud happens — and we need identity threat detection and response.
And this, as I've said, requires a much more extensive use of ML detection models, at various stages and at various levels. So we need to bring in these technologies in addition to the standard IGA solutions we have. This brings me already to my second poll, and then I'll hand over to Andrea. The second poll is maybe a bit of thinking about how you would expect intelligent, ML-based IGA solutions to impact your IAM workload and processes. If you apply these solutions, where can they help you best? Is it in the daily routine work, for automation? Is it in better role management and better access entitlement controls? Do you see it as part of Zero Trust, the continuous verification part? Or is it more an improvement in compliance with regulatory requirements? Looking forward to your responses. Come on, I know it requires a bit of reading and thinking; to phrase it more simply: where do you think AI helps most in identity management? I'll leave it open for another ten seconds. Okay. And that brings us right now to Andrea. While I've been talking about the broader concept of identity threat detection and response, Andrea will now go much deeper into detail and say, okay, how can this finally work?
Good afternoon everybody, or good morning for those connected from the US. Thanks for joining us, and thank you Martin for introducing this basic concept, which basically states that behind any cyber threat there is an identity hack: that can be an insider threat, some malicious insider; most often it's a hacked identity that someone else is abusing from the outside. So this is a great intro for explaining what we at Sharelock do. First of all, my name is Andrea. I've been in the space of identity-centric something for a while. Before Sharelock, I co-founded a company called CrossIdeas, which we then sold to IBM, and Sharelock is in a way built on the core team of CrossIdeas, continuing a journey of an identity-centric mentality. If we look at what we do, we are essentially a behavioral anomaly detection engine, built on the obsession that every security risk and threat should be detected using behavioral anomaly detection.
We have pushed that so much to the limit that today our engine can offer two macro use cases. One is identity threat detection and response, which is the topic of today and works on top of the ingestion of human and application telemetry. On the other side, there is the newest member of the family: the ability to monitor security anomalies on Kubernetes clusters, which is a very important, unprotected new domain for cybersecurity. Over the next few slides we'll be addressing essentially four questions. First of all, I want to explain a little more the way we do machine learning — I don't like the term AI; I'd rather use the term machine learning — and how it can help identity management and security operations. But first I want to clarify how identity threat detection and AI/ML are connected, and clarify the terms a bit, because I realize there is a bit of hype in the market, but there is also some disillusionment about using AI and machine learning too much.
So I just want to bring us back to the same page. First of all, machine learning works in opposition to programmed systems: in the case of machine learning, the system learns from data and reacts based on data. A lot of security out there is still based on programming — you try to tell the computer what the data points are that it should detect in order to find a threat, and I think you can clearly see from modern threat descriptions that the world is quite complex if you want to describe it in programmatic terms. Now, when it comes to anomaly detection, we first need to find behavioral habits, which are technically called baselines. That is typically done in opposition to the old way of doing it, which is basing the detection on thresholds. A very simple example: what is the risky threshold for failed logins?
Well, it's five for me, it's maybe one for Martin, it's probably three for any one of you. So you need to look at personal baselines rather than generalized thresholds. When we talk about behavioral habits in security, there are two ways of doing that. One is building and monitoring behaviors using an IOB, or indicator of behavior, which is in opposition to, I would say, the more traditional way of finding threats using IOCs, indicators of compromise. What an indicator of behavior finds is basically a behavioral anomaly: that could be an anomaly in the way you navigate through your SAP transactions, or the way you access a SharePoint folder, or the way you use the browser. So in the world of behavioral anomalies, we detect, as I said, anomalies over a baseline, over a set of habits, and that's in opposition to what security typically does, which is finding footprints.
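To make the failed-login example concrete, here is a minimal sketch (not the actual Sharelock engine) of the difference between a global threshold and a personal baseline: each user's own mean and spread of daily failed logins decide what counts as anomalous for that user.

```python
from statistics import mean, stdev

def build_baseline(event_counts):
    """Learn a per-user baseline (mean and spread) of daily failed logins."""
    m = mean(event_counts)
    s = stdev(event_counts) if len(event_counts) > 1 else 0.0
    return m, s

def is_anomalous(today, baseline, k=3.0):
    """Flag a day as anomalous if it exceeds mean + k standard deviations."""
    m, s = baseline
    return today > m + k * max(s, 1.0)  # floor the spread so zero-variance users aren't over-flagged

# Two users with very different habits: a fixed threshold of 5 would
# constantly flag the first and miss a meaningful spike for the second.
andrea = build_baseline([4, 6, 5, 7, 5])   # routinely mistypes his password
martin = build_baseline([0, 0, 1, 0, 0])   # almost never fails a login

print(is_anomalous(12, andrea))  # True: far above Andrea's own habit
print(is_anomalous(4, martin))   # True: unusual for Martin, invisible to a global threshold of 5
print(is_anomalous(6, andrea))   # False: a normal day for Andrea
```

The point of the sketch is only the contrast: the same raw number (four failed logins) is noise for one identity and a signal for another.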
So that's how we get to finding threats. But the very good side effect of what we do is that for us, providing a recommendation — an identity-centric recommendation for adding or removing an entitlement — comes out of the same ability to find anomalies. Now, going back to behavioral habits, there are two ways in the market of doing that; by the way, the bold part on the slide is the way we do it at Sharelock. There is an unsupervised way and a supervised way. In the supervised way, you need to label the data set on which you train the machine learning algorithm; that's in opposition to the unsupervised way, where the system can learn on the data as it is, without any tagging. And last but not least, it's very important, in our opinion, to have a model that is explainable. What I mean by explainable is that you can track down the set of anomalies, explain what was anomalous, and trace back the origin of a threat as a composition of several anomalies.
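As an illustration of what "a threat as a composition of several anomalies" can mean in an explainable model, here is a small sketch. The anomaly names and the noisy-OR combination rule are my assumptions for the example, not Sharelock's actual scoring; the point is that the output carries its evidence with it, unlike an opaque deep-learning score.

```python
def score_threat(anomalies):
    """
    Combine individually detected anomalies into one risk score while keeping
    the evidence trail. Each anomaly is (name, severity in 0..1).
    Returns the score plus the contributing anomalies, so an analyst can
    trace *why* the score was raised.
    """
    survival = 1.0
    for _, severity in anomalies:
        survival *= (1.0 - severity)  # noisy-OR: independent pieces of evidence
    score = round(1.0 - survival, 3)
    return score, [name for name, s in anomalies if s > 0]

# Hypothetical anomalies detected for one account in the same time window.
evidence = [
    ("login_from_new_country", 0.6),
    ("unusual_sharepoint_folder", 0.5),
    ("download_volume_spike", 0.7),
]
score, why = score_threat(evidence)
print(score)  # 0.94
print(why)    # every contributing anomaly, by name
```

Each additional anomaly raises the combined score, and the returned list is exactly the back-trace an analyst (or a regulator asking for transparency) would want.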
That's in sharp contrast with another machine learning technique, called neural networks or deep learning, where you basically have an output but you can't really trace back why that output — that threat, that high level of risk — was calculated. And this explainable machine learning model is very important in light of the upcoming European AI regulation, a GDPR equivalent for AI and machine learning, which will try to put some basic principles of transparency and ethics, among other things, into the way you do machine learning. So that was just to set the stage. Now, in security it's all about trust, right? And that's psychology; it has nothing to do with technology. There are two kinds of trust. One is granted trust, where you entitle somebody with a set of permissions and say: I trust you, because I define now what you're trusted upon. Versus gained trust, which is: well, I don't trust you that much, but behave well, and then you'll gain more authority in my eyes. If we look at how these two psychological principles are translated into identity management today, we see that identity management today has a lot of prescription — meaning a very complex role model, super granular, super prescriptive — and very little ability to detect anomalies in what the actual users (and not just users; it could also be applications or other identities) are doing.
We all know this is becoming slow and also expensive to maintain. So what we believe at Sharelock is that the future IAM should come with an infusion of gained trust: prescription is still important, but the ability to detect and react to anomalies resulting from people's behavior becomes much more important. What I mean by a smaller infusion of prescription could be a less sophisticated role model, roles that are broader than previously designed. I think you understand the principle: it is cheaper to design, but it's also faster in terms of adapting to a sudden change of circumstances — which, as I think you realize, is the defining feature of cybersecurity today: the threat landscape is constantly changing.
Now, there is another aspect that we are putting into the equation, which is the human firewall. Today we completely forget about asking the end user's opinion on a number of things. So if we detect something which we consider an abnormal pattern, we ask the user: well, in this case we're not totally sure — this might be a data exfiltration, or there might be an account takeover in this specific situation — what do you think? For example, the manager of the person the account belongs to might say: well, dear security, I'm the human firewall here; no, it's not normal, I can tell you it's not normal — and by the way, thanks for asking. So we capture that feedback, because there is no way security can do security without extending the perimeter of contribution. That's what we call the human firewall.
Now let's explain a bit — and that's again our interpretation — identity threat detection and response: how, where and why it helps. First of all, who benefits from an implementation of whatever you might call ITDR? We think that the IAM team, which is typically a different buying center within many organizations, should be able to protect themselves and augment their IAM infrastructure with a closed loop of detect and react — the ability to self-remediate, like locking an account — against identity-centric threats, which are typically either account takeovers or insider threats, so someone misbehaving from the inside. The other thing which is very important is to use some of the analysis we are able to perform to reduce complexity: removing accounts, removing access to things that might not be needed. You reduce the attack surface by trying to implement a least-privilege set of recommendations.
But there is the other side of the equation that ITDR glues together, which is security operations. If you look at the people dealing with the traditional security operations center, they are able to detect identity threats out of endpoints, infrastructure, network. But the reality is they can make little sense of any log file coming out of the identity world or the surrounding business applications. So we strongly believe that ITDR is the identity probe, the translation layer, that can provide security operations people with identity-centric threats they can correlate with other types of threats they might be detecting with more usual or traditional security approaches. Now let's ask ourselves the question: where do we place ITDR in the larger scheme of things? We say ITDR is like a Rosetta Stone for companies trying to build a less silo-based approach to security. As I said just a minute ago, identity threat detection and response is the identity probe from traditional security operations into identity, but it's also the way for identity people and systems to show a real contribution to a larger and, I would say, more complete security posture.
Now let's talk a bit about the way we do it — Sharelock in action. At the end of the day, we do simple things on the surface. We ingest user activity, and also application activity, out of what we call the business applications — that can be SAP, the traditional Microsoft 365 world, Salesforce, GitHub; imagine the world of applications that users are working in — and also the audit trail coming out of access management systems, IGA systems, or converged platforms. We ingest this bunch of data and we find habits — and it's plural: habits for humans, habits for machines, habits for system credentials — different types of habits that we can correlate to find a specific threat, and then remediate. Now, it's very important to state that in our interpretation of ITDR, looking just at the IAM audit trail is not enough. If an account gets hacked, you might get some information out of the access layer, but it's also important to understand what that account could be doing in the coming days or hours on some surrounding application, because that's the only way to really verify the threat and avoid false positives.
So it's very important to correlate data across multiple domains, and that's also a shared opinion among analysts: just monitoring AD, or just the IAM endpoint, is not enough. If you just do that, you're wasting your time and money. Now, probably one of the simplest scenarios we encounter is that a lot of people want to understand the risk of external threats or insider threats — because it's important to say that with this approach you can find the bad guys from the outside, but also the bad guys that are already inside, the so-called malicious insiders. For us it doesn't make much of a difference, given that we calculate anomalies across a set of applications. So a very common scenario, especially fashionable nowadays, is to look at the complexity of user activities out of Microsoft 365 and combine that with access and authentication anomalies.
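A toy sketch of that cross-domain idea (the domain and anomaly names are illustrative, not product features): an identity-centric alert fires only when anomalies line up across more than one telemetry domain, which filters out the lone odd login that is merely someone travelling.

```python
# Hypothetical per-domain anomaly signals for one account in the same hour.
signals = {
    "access_layer":  {"impossible_travel_login"},
    "m365_activity": {"mass_download"},
    "vpn":           set(),
}

def correlated_alert(signals, min_domains=2):
    """Raise an alert only when anomalies agree across multiple domains."""
    hot_domains = {d for d, anomalies in signals.items() if anomalies}
    return len(hot_domains) >= min_domains

print(correlated_alert(signals))  # True: access-layer and application anomalies agree

# A single access-layer anomaly alone does not fire -- fewer false positives.
lone = {"access_layer": {"impossible_travel_login"}, "m365_activity": set(), "vpn": set()}
print(correlated_alert(lone))  # False
```

Requiring agreement across domains is exactly why monitoring only AD or only the IAM endpoint is not enough: a single-source detector has no second signal to corroborate with.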
Those can come from Windows logins, from Azure Active Directory itself, from Okta — and you can also combine that data with the VPN. You wouldn't believe how many bad things can be detected if you just look at even some basic anomalous behavior out of these data sources. Now, one very important thing in ITDR is that once we find an identity-centric threat — that account of Andrea's is risky — you have options to remediate it in different ways, but still using the IAM system itself. A very simple example: you could lock the user out in case of suspicious activity, or block the account, or you might trigger a recertification campaign on the IGA platform. So the remediation takes place in the IAM system itself, besides sending that information, maybe, to a security orchestration and automation platform.
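The remediation options just mentioned can be pictured as a simple policy table. This is a minimal sketch under my own assumptions — the action names and score bands are placeholders, not a real vendor API — mapping a risk verdict to the least disruptive action that matches the confidence level.

```python
def choose_remediation(risk_score):
    """Map a 0..1 risk score to a remediation action in the IAM system.
    Action names are illustrative placeholders, not a real product API."""
    if risk_score >= 0.9:
        return "lock_account"            # near-certain account takeover: act immediately
    if risk_score >= 0.6:
        return "force_step_up_mfa"       # suspicious: challenge the user
    if risk_score >= 0.3:
        return "trigger_recertification" # unclear: let the IGA review loop decide
    return "log_only"                    # low risk: keep the evidence, do nothing

print(choose_remediation(0.95))  # lock_account
print(choose_remediation(0.40))  # trigger_recertification
```

The design point is the one made in the talk: the response lands back in the IAM system itself (lock, challenge, recertify), with a SOAR platform notified in parallel rather than being the only place the threat goes.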
Alright, we think there are key requirements for ITDR — and again, this term is quite new and in a way blurry; everybody puts their own boundaries on the definition of ITDR. These are what we think are the clear requirements for a successful one. Of course it's biased; that's what you expect from every vendor. We think the first requirement, and it's been said already, is that it must deliver value to both identity and security stakeholders. The two churches are coming together, but in reality they are still two different teams inside the company, with two different mentalities, maybe reporting to the same CISO, but still different mentalities. So ITDR must contribute to security operations with external, identity-centric threats in a language they understand — and what they understand is "this is a threat, a situation I need to manage", nothing else. If you look at the world of IAM people, they might also be interested in insider threats — some bad guy internally — and in recommendations, or IAM and IGA insights, to reduce the entitlement complexity and the attack surface. So not necessarily something which is a threat, but a set of recommendations to keep the house cleaner.
Second requirement — and I will never get tired of explaining this, because this is the area where the fluffiness out there in the market is marvelous. We spent a hell of a lot of time to give you the ability to monitor any baseline of behavior on human and entity attributes, and to detect the full spectrum of anomalies that need to be detected. That's why we developed, maintain and design our own algorithms for detecting anomalies across a spectrum of requirements: the machine learning algorithm that allows you to understand an access anomaly is different from the one that detects time anomalies, occurrence anomalies, or user-journey anomalies. What we have built is built on the principle of flexibility, because the only way to detect threats is to be able to understand many anomalies — and only the correlation of many anomalies can filter out the real threat you should spend your time on.
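To illustrate why different anomaly kinds need different algorithms, here is a small sketch of a *time* anomaly detector (my own simplified construction, not Sharelock's): it works on frequencies over the 24-hour clock, a shape of data that an access-path or set-membership detector simply cannot model.

```python
from collections import Counter

def hour_profile(login_hours):
    """Baseline: how often this identity authenticates in each hour of day."""
    total = len(login_hours)
    counts = Counter(login_hours)
    return {h: counts[h] / total for h in range(24)}

def time_anomaly(hour, profile, rare=0.02):
    """A login hour is anomalous if this identity almost never uses it.
    Note the algorithmic shape: frequencies over a cyclic 24-hour domain,
    unlike an access anomaly, which is about unseen resources or paths."""
    return profile.get(hour, 0.0) < rare

# A hypothetical office worker who logs in during business hours.
office_worker = hour_profile([9] * 40 + [10] * 35 + [14] * 20 + [18] * 5)

print(time_anomaly(3, office_worker))   # True: a 3 a.m. login is out of habit
print(time_anomaly(9, office_worker))   # False: a perfectly normal hour
```

A single out-of-hours login is weak evidence on its own; as the talk says, it is the correlation of several such anomalies, each from its own specialized detector, that filters out the real threat.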
There are a number of technical details here which I don't want to bother you with, but all this got a patent for the behavioral baseline architecture, and there is a reason for that. Again, it doesn't surface the complexity to the user — that's in fact what we do, we don't expose it — but it's important for you to know that this flexibility matters in the long run. Last but not least, in order to get to market quickly with the ability to catch external and internal threats out of this sophisticated IOB, indicator-of-behavior, engine, we built what we call a reference model, or ITDR reference model, which is a combination of integrations with the most common access and IGA platforms and business applications. But most importantly, it's not the technical integration that matters; it's the ready-to-use set of anomalies that you might want to detect.
And believe me, regardless of it being SharePoint or Box or the Google Workspace equivalent, Google Docs, it doesn't matter: the anomalies you want to look for are always the same — an anomalous download, an anomalous folder you've been accessing, and so forth. The same applies to SAP or transactional applications, and the same happens, for example, with access management platforms: the attributes they share, on which we can detect anomalies, are always the same. So this is probably the best value you can get out of it, because it brings you in — I wouldn't say zero-day: it takes a bit of time to teach the system and to understand what the historical baselines are. We typically use three months of historical data if present; otherwise you need to wait a bit while it ingests and learns. Keep in mind we have no rules, thresholds, whitelists, blacklists — nothing.
The system learns on typical behaviors and detects the anomalies. So the beauty in the long run is that you don't have to manage it, because it manages itself, in an autopilot way. Now, there is one product capability I want to emphasize, because it's important to explain that recommendations result from the same behavioral engine but serve a different purpose. One very cool thing that we do is compare, with peer clustering techniques, the data we get out of an IGA platform — could be SailPoint or One Identity, or even my former CrossIdeas, now the IBM IGA platform — and see how the users should be clustered. That is a sort of mapping of the role definitions you have done: that's the theory, the to-be. But then you have, on the other side, the as-is, which is based on the actual utilization of applications and entitlements, and there you do a different clustering. What's the purpose? The purpose is to measure a gap and, based on that gap, provide recommendations to close it. This gap will never be fully closed, but the smaller the better. In the old days of compliance, we called it better compliance; in the days of cybersecurity, we say a much-reduced attack surface.
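The to-be versus as-is gap just described can be sketched with a simple set comparison. This is a minimal illustration under my own assumptions (hypothetical entitlement names, Jaccard distance as the gap metric — real peer-clustering pipelines are far richer): assigned entitlements are the theory from the role model, used entitlements are the observed reality, and whatever is granted but never exercised becomes a removal recommendation.

```python
def jaccard_gap(assigned, used):
    """Gap between to-be (assigned entitlements) and as-is (actually used):
    0.0 means perfect overlap, 1.0 means role model and reality fully diverge."""
    union = assigned | used
    if not union:
        return 0.0
    return round(1.0 - len(assigned & used) / len(union), 2)

def recommend_removals(assigned, used):
    """Entitlements granted but never exercised: candidates to revoke,
    shrinking the attack surface toward least privilege."""
    return sorted(assigned - used)

# Hypothetical entitlements for one user.
assigned = {"sap_fi_post", "sap_fi_read", "sharepoint_hr", "vpn_full"}
used     = {"sap_fi_read", "vpn_full"}

print(jaccard_gap(assigned, used))        # 0.5: half the grants are dead weight
print(recommend_removals(assigned, used)) # ['sap_fi_post', 'sharepoint_hr']
```

Run per peer group rather than per user, the same measurement becomes the "keep the house cleaner" recommendation stream: better compliance in the old language, a smaller attack surface in the new one.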
We're now towards the end, and I want to connect you with another interesting term that you may or may not have heard about, which is CWP — that stands for cloud workload protection. What is it? As I said at the beginning: imagine the same engine, but two different sets of telemetry we connect to. Cloud workload protection is something we built out of the same behavioral engine on a different telemetry, which is Kubernetes telemetry. I will explain why we do that in a second, but for the time being: a year ago we realized, from market signals, that there was a huge demand for managing runtime security on Kubernetes workloads. Guess why? Because RBAC doesn't work too well there either. So there are a lot of open holes in Kubernetes-based architectures, which are running significant, business-critical workloads.
So what we do here, again, think about the same engine, different telemetry. The telemetry is all about system calls and network calls, and there is a sophisticated way to collect that stuff on Kubernetes. The longer-term projection, which is on ITDR, is this: you are essentially monitoring the human, cloud application and API side of the equation. Okay? We look at identities using applications or APIs and we signal anomalies. But what if we're also able to look at what is happening in the basement, on that infrastructure? Imagine a world where everything is running in a cloud-containerized way. Of course it's a bit of a philosophical view, but that's where we're going. The combination of anomaly detection will bring Sherlock into a new position: to correlate anomalies out of human and application behavior with anomalies in infrastructure network and syscall behavior, again in a Kubernetes environment, which is already becoming the standard for the cloud-containerized application world.
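The correlation idea above can be sketched very simply: pair an identity-layer anomaly with an infrastructure-layer anomaly when they involve the same identity within a short time window. This is an illustrative toy, not the product's correlation logic; the event fields, users, and ten-minute window are assumptions for the example.

```python
# Sketch: correlate identity-layer anomalies (user/app/API behavior)
# with infrastructure anomalies (Kubernetes syscall/network behavior)
# that share an identity and occur close together in time.

from datetime import datetime, timedelta

identity_anomalies = [
    {"user": "bob", "ts": datetime(2024, 3, 1, 10, 0),
     "what": "unusual API call burst"},
]
infra_anomalies = [
    {"user": "bob", "ts": datetime(2024, 3, 1, 10, 4),
     "what": "unexpected outbound network syscall pattern"},
]

def correlate(id_events, infra_events, window=timedelta(minutes=10)):
    """Pair anomalies with the same identity within `window` of each other."""
    return [(a, b) for a in id_events for b in infra_events
            if a["user"] == b["user"] and abs(a["ts"] - b["ts"]) <= window]

for a, b in correlate(identity_anomalies, infra_anomalies):
    print(f"{a['user']}: {a['what']} correlates with {b['what']}")
```

A lone anomaly in either layer may be noise; the same identity misbehaving in both layers at once is a much stronger signal, which is the point being made in the talk.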
That's the last slide, I hope I'm on time. So the punchline is very simple. I told you briefly at the beginning: Sherlock is the take two of a company that we sold under a different name. So it's a different company, of course, but sharing a bunch of the same people, including myself. It's privately owned and self-funded; we're quite an exception, as we used to be during the CrossIdeas days, in terms of self-managing the company without any VC. And our mission is very simple: we want to make behavioral anomaly detection a bit of the new normal for digital security. With that, I think we can open up to any questions that you might have. In the meantime, thank you very much, and here's the website and the email in case you want to reach out to us.
Thank you, Andrea, for the insightful presentation. So let's go back to the agenda, and the next part of the agenda is quite straightforward: the Q&A. We already received a couple of questions, and I think I'll start with this one. Identity threat detection and response feels more like a tool for a security operations center rather than the IAM team, given that it needs expertise in threat detection and response. Could you elaborate a bit more on that? I think it's probably somewhere in between and requires both, I'd say.
Oh, you're right, and that's a fair question. I think ITDR as a discipline, as a practice beside the tooling, is the first attempt to connect two worlds that today are separate. The mentality of IAM people is not really to detect and respond to things; I mean, they program the system and sometimes they audit it, but that's no longer enough. So in a way this detect-and-react ability is much closer to the way traditional security operations have always been working. But I think there is no way around it for IAM teams: they need to add some of that component to their domain of management. The first part is that you don't have to manage a threat yourself; you might be handing it to someone else, and still be useful in the eyes of the company. So IAM people are delivering value to the broader security posture of the company, which is not always the perception in many companies.
But also, always keep in mind that the recommendations and the correlation of anomalies can provide IAM people a number of beneficial insights for optimizing the platforms they are already running. That's what we call recommendations. So in a way, yes, it initially smells more in favor of security operations, but in reality, I strongly believe identity management people need to, you know, stretch a bit into that domain. And, as I'm exaggerating and joking with some friends: guys, you need to come up with some night shifts. Okay, maybe not 24 by 7, but you need to look after your system and not assume that someone else is doing it.
Okay, great, great answer. And I think the other question is also a bit related to that, and that is: identity threat detection and response feels very similar to SIEM and UEBA, user behavior analytics or user and entity behavior analytics. How are they different? So how is it different from SIEM? How is it different from UEBA?
Yeah, that's a very common question. So SIEM: the only thing that SIEM and ITDR have in common is the data ingestion. But that's, you know, like sharing the first letter of an alphabet. The reality is that in the case of SIEM, it's all about keeping and storing the data, but there is no concept of an identity threat data model inside a SIEM system. And the little user-behavior patch that many SIEM vendors have added will not be sufficient for that. And I'm glad to see out there in the market an already established position saying you can't do it: you can't find identity threats using a SIEM. And in fact we see more and more clients finally coming to us saying, yeah, you know, you're right, I can't do that with my whatever SIEM system. So the bottom line is SIEM systems have no concept of an identity-centric data model inside them.
So that's why they can't do it. It's not a specific blame on any vendor; it's the conceptual design that is lacking the capability. As for UEBA, user and entity behavior analytics: well, that's something that started in principle many years ago, and it had some nice ingredients, but they were too much in favor of just looking at analytics and alerts, and they were very coarse-grained and, I would say, premature and primitive in terms of the machine learning techniques they were using ten years ago. So what we have done is nothing more than taking the good stuff out of the old UEBA concepts and expanding it into the requirements of what we need today, where, again, the only way going forward to cope with this ever-changing landscape of threats is to look at anomalous behavior. There is no way you can program it.
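The "learn a baseline instead of programming rules" point above can be illustrated with the simplest possible baseline: a per-user statistical profile that flags deviations. This is a deliberately naive sketch for intuition only; real UEBA/ITDR engines use far richer features and models than a z-score on one metric, and the numbers here are invented.

```python
# Sketch: flag behavior as anomalous when it deviates strongly from a
# user's learned baseline, rather than matching a hand-written rule.

from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """True if `today` lies more than `threshold` standard deviations
    from the historical mean of this user's metric."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

daily_logins = [4, 5, 6, 5, 4, 5, 6, 5]   # one user's recent baseline

print(is_anomalous(daily_logins, 5))      # False: within the baseline
print(is_anomalous(daily_logins, 40))     # True: a clear outlier
```

The key property is that nobody wrote a rule saying "40 logins is bad"; the threshold adapts to each identity's own history, which is what makes the approach hold up against threats nobody anticipated.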
Okay, one more question here, and I think you're taking a bit of a different approach here. So can the Sherlock solution also be integrated with access management systems to detect that an ongoing transaction might be a threat, and then trigger responses? Or is it really more on the IGA side of things?
No, we do integrate with pure access management, like OneLogin, Okta, Azure Active Directory and the others; they all share the same kind of audit trail. Where we add value is this: of course we can say there is a risk on top of your access management platform, but I think most of those vendors are building something which is good enough for real-time, access-management-confined threat detection. The value we bring is the ability to correlate that anomaly with a lot of other surrounding anomalies, because any attacker's desire is not just to crack the Okta or OneLogin layer; it's to crack that and then do something else elsewhere. So that's where we add most of the value: when you realize that just the real-time fraud detection of a browser that changed, or a location that changed, is no longer enough. That's just a tiny portion of what you need for identity threat detection. So we add value to any access management platform that might be lacking those capabilities, but, you know, the real value is really the correlation of their identity anomalies with the rest of the business applications.
Okay. Final question, at least from the ones I have here. Is there anything you should consider regarding GDPR, things like the German workers' council, et cetera?
Oh, that's one of my favorite questions. You know, that's the number one FUD thing. So let's face reality: most CISOs and security people are just scared about talking to HR or legal. The sheer reality is that any labor regulation in any country in Europe, and GDPR too, allows any company to protect itself against threats. So what we do is: we don't look at what people are doing; we are just highlighting security situations for the investigation afterwards. In practice we don't do anything different compared to what you might be doing with a SIEM system. Now, the only caveat is that we sometimes use the word behavior, which then, you know, links to "okay, you're monitoring my productivity." No, no, no, we don't care about that. We take the data, we find the baselines that are meaningful for security, and we just highlight the security situation. And every GDPR-minded workers' council agrees you have the right to protect yourself as a company, but only in very specific situations, which are highlighted as a result of very risky factors.
Okay, perfect. Thank you, Andrea. Thank you to Sherlock for supporting this KuppingerCole Analysts webinar, and thank you to all the attendees of this webinar. I think this was a very interesting one again, and I hope to have you back soon at one of our other webinars or at our European Identity Conference in May in Berlin. Thank you.