Event Recording

Continuous ZeroTrust as a Way To Know Your User


Organisations perceive their users through data. In a world with fewer and fewer opportunities for physical contact, identity verification is going fully remote. All online service providers need to model the risks related to user impersonation and user manipulation attacks.
In this talk, we will dig through the classical methods of Knowing Your User through static data:
Coupling the session with the device
Checking the network environment
Next, I will present manipulation methods related to data spoofing and their business impact. The usual scenarios are primarily associated with monetary rewards for the attackers.
Time-series data analysis and its impact on business and customer experience will be presented to show the way forward in the adaptive risk management context.
Finally, I will share some food for thought on the standardisation of behavioural biometrics, which is getting more and more attention as a defence method, to show that we need Zero Trust as well as a way to verify whether and how vendor products actually work.

Static data can be easily spoofed. Dynamic data analysis (mainly in a time-series manner) is the way to go.
Data resilience relates to side-channel time-series data analysis.
Zero Trust also means not trusting your data sources and the whole environment around them.
Behavioural biometrics strives for standardisation.

Mateusz Chrobok, VP of Innovation, Revelock

I'm working for Revelock, formerly buguroo, which just got acquired by Feedzai. We're doing anti-fraud solutions, so I'm all about continuous zero trust as a way to know your user. Thank you very much. It's a pleasure to be here again in person, ladies and gentlemen; it's great to see you and have a chance to chat with you. Today I want to tell you a little story about how COVID changed the way anti-fraud systems deal with true positives and false positives. But first, let's start with some basics. Any online service provider has this issue: they need to know the user during the whole session. Of course, there is an onboarding process and a login process, and most providers focus on the login, or on the transaction, whatever the transaction is.
If it's a money transaction, if it's a shopping basket, there is always something mission-critical. And then, hopefully, there is a logout; it doesn't always happen, but in a perfect world it does. Then there is this concept of zero trust, a concept first invented for the network. Just as a quick recap: it's the model that removes implicit trust. You need to explicitly trust somebody to move on, you have to assume that a breach is inevitable, and you need continuous monitoring to see what is happening, because otherwise you don't really know what you're dealing with. The presentation is full of memes, but the very simple message is this: you cannot trust the data, you cannot trust the vendors, you cannot trust the products. You need to verify everything. That's a very simple message.
And just to put it in context: how big is the hole? As you can see from the numbers, the number of fraud attempts and the money lost, the money feeding the cybercrime market, keeps increasing over time. The reason is mostly account takeover, which obviously relates to identity, but it is also related to the increasing complexity of getting transactions done. The profit is made between the use cases. On one side there is account takeover, money mules, and synthetic identities, which all start the money flow; on the other side, your organisation needs to comply with regulations like anti-money laundering, strong customer authentication, and so on. In between, there is a lot of profit for the cybercriminals across these use cases.
So how does the service provider see their users over time? I would love to say it's user-centric, but you never really know who is on the other side. The only way you see your user is through the data. You can see their phone, their browser; you can fingerprint the device; you see the network they're coming from; you can do behavioural biometrics, user behaviour analytics, and so on. That seems like a quite complete picture, there are so many methods in use, but at the end of the day some of these methods are only applied at specific moments, for example only at login time, which is one of the issues. There is the login-and-password check, the device fingerprinting check, the network checks.
Some of them, like MFA, only happen during transactions. So it's not continuous, it's only static. Just for completeness, I wanted to show you device fingerprinting. Usually it's calculated on the device, from the hardware you're using. There is nothing easier than to copy-paste it from your victim and present it to the service provider to overcome this static check. Another method is to spoof the static data. You don't really know how many impostors you have in your system, because there is so much identity-related data lying around, and some service providers, Facebook for example, allow you to use static data like social security numbers or ID numbers to recover passwords. So sooner or later, you're going to have multiple accounts within your system that are fake, or that are simply synthetic identities.
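As a hedged sketch of why that static check fails (the attribute names and hashing scheme here are my own illustration, not any vendor's actual fingerprinting), a static device fingerprint is typically just a digest over client-reported attributes, so replaying the victim's reported values reproduces it exactly:

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    # Naive static fingerprint: a hash over client-reported attributes.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Attributes captured from the victim (illustrative values only).
victim = {"os": "Android 12", "model": "Pixel 6", "screen": "1080x2400"}
victim_fp = device_fingerprint(victim)

# The attacker never touches the victim's hardware: replaying the same
# reported values yields an identical digest, so a server-side equality
# check on the fingerprint passes.
attacker_fp = device_fingerprint(dict(victim))
assert attacker_fp == victim_fp
```

The weakness is structural: anything computed purely from data the client reports can be copied and replayed along with that data.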
Another example, very common these days from the fraud perspective, is RATs, remote access attacks. Different systems look at the usual ways you use your network, and whenever the model says you're coming from your natural network, the one you usually log in from, no additional step-up is required. So fraudsters reuse tunnels through the victims' machines in order to mimic the same network connections, and this static check is circumvented too. If you think of it from a database perspective, the user is a relation between the identity, so the user ID or username, the sessions they run, the device context, the networks, and so on. But all of this data is static. Let's move on to the more complex cases and the market that is out there.
We see more and more automated tools being used to hijack OTP tokens. The slides will be available, so I'll keep moving, but the very simple message here is that as these tools get more and more widely adopted, it becomes easier for fraudsters to circumvent OTP token checks as well. So how do we approach this? What is the next step, the road to zero trust, to knowing your user through the data? Because that's the only way, as people move away from physical contact. More and more people are starting to use behavioural biometrics, especially type-two behavioural biometrics, which is human-computer interaction: your keyboard behaviour, your mouse behaviour. Then there is behavioural analytics, so how you move through the system, and there are malware patterns.
Then of course there are things we have known for years: the static signatures and also the behaviour, the dynamics, so how the malware changes, for example, the banking website or whatever website in order to commit fraud. Then there is the device, with continuous device fingerprinting to check whether session hijacking has happened. Then the network anomalies, which I mentioned a moment ago, and threat intel, to know which attacks are currently happening out there. As you can see, continuous risk scoring across the journey, with all of these engines and all of this detection, is required to know what is happening at every moment. You cannot just run a check once and then forget it. You need to see whether the context has changed, because the most common attacks, synthetic identities at onboarding, phishing at login, which is evergreen, malware and zero-days, attacks on transactions, remote access trojans and tools, exploded during COVID.
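As a minimal sketch of that continuous-scoring idea (the engine names, weights, and scores below are illustrative assumptions, not any product's model), the point is that a blended risk score is recomputed at every event in the journey rather than once at login:

```python
# Illustrative per-engine weights; a real system would learn these.
WEIGHTS = {"device": 0.3, "network": 0.2, "biometrics": 0.3, "threat_intel": 0.2}

def risk_score(engine_scores: dict) -> float:
    """Weighted blend of per-engine risk scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * s for name, s in engine_scores.items())

# One session journey: the same engines are consulted at every event.
journey = [
    ("login",       {"device": 0.1, "network": 0.1, "biometrics": 0.1, "threat_intel": 0.0}),
    ("browse",      {"device": 0.1, "network": 0.1, "biometrics": 0.2, "threat_intel": 0.0}),
    # A remote-access tunnel appears mid-session: the context changed.
    ("transaction", {"device": 0.2, "network": 0.9, "biometrics": 0.8, "threat_intel": 0.5}),
]

for event, scores in journey:
    print(event, round(risk_score(scores), 2))
```

A login-only check would have scored this session once, at its lowest-risk moment, and never looked again.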
If you perceive it dynamically, and this is just one example view, you'll see over time and over context how the risk changes. In the beginning there was some risk because the device was not patched; over time there was a remote access session. But from the analytical perspective you can see much more: you can make assumptions about how risky that behaviour is. And now we come to the problem with the engines. Here is the backstory: in March 2020, multiple countries closed their bank branches. People who usually went to the bank stopped doing so and had trouble getting their transfers done. So they were, for example, sharing credentials over WhatsApp, or giving remote access to their friends, neighbours, or whichever relative. That caused a great number of false positives from the perspective of anti-fraud systems.
Because the behaviour of the people connecting was very different: it was their neighbour, nephew, son, or somebody else. You can imagine that most of these systems produced a lot of false positives. And there are also well-known attacks against every one of the engines, like generative adversarial networks against behavioural biometrics, or spoofing against behavioural analytics, and so on, each of which can be used against one of these systems on its own. So as you see, if you have the ability to forge the data in your own way, you can fool every engine in isolation. This is the moment where I want to talk about zero trust. In order to eliminate the trust in a single element, in this case in any single engine, you need to connect them and see how they relate from the perspective of time series of the data.
We know the breach is inevitable: users will get phished, they will get malware, and so on. And the reaction should be proportional to the risk, because it doesn't make sense to block every user who is on a rooted device; maybe it just raises the risk a little. A more specific example: if you look here, this is a connection between three engines, as an example of zero trust. The behavioural biometric checks are correct; that means the user getting into the system is using it the usual way, the typing patterns and mouse movements are their usual ones. But what do we see, for example, in the device fingerprint? The device fingerprint can be spoofed from a mobile device. It's the usual device fingerprint for that user, but it doesn't match the behavioural biometrics.
That means the behavioural data we are getting relates to a physical keyboard, while the device presents itself as a mobile. A very simple example. And then behavioural analytics does not fit behavioural biometrics: the way you move across the system does not fit the dynamics of how you click and point your mouse. So these are connections between multiple systems, made in order not to trust any single one of them, because behavioural biometrics on its own is not going to work. Now, a little bit of evolution, as I try to explain it to myself. First there were the rule engines at the very beginning, which were mostly about "is your ID valid?", people coming to the bank and being checked in person. Then came machine learning, which takes much more of the environment into account.
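A hedged sketch of that cross-engine consistency check (the signal names are invented for illustration, not any vendor's API): each engine can report "known user" on its own, and it is the contradiction *between* engines that raises the risk:

```python
def consistency_flags(signals: dict) -> list:
    """Flag contradictions between engines that each pass individually."""
    flags = []
    # Keystroke dynamics imply a physical keyboard; a device claiming
    # to be mobile contradicts that, even if each check passes alone.
    if (signals["biometrics_input"] == "physical_keyboard"
            and signals["device_class"] == "mobile"):
        flags.append("biometrics_vs_fingerprint_mismatch")
    # Navigation style (behavioural analytics) should match the
    # pointing dynamics (behavioural biometrics).
    if signals["navigation_style"] != signals["pointing_style"]:
        flags.append("analytics_vs_biometrics_mismatch")
    return flags

session = {
    "biometrics_input": "physical_keyboard",  # typing patterns look normal
    "device_class": "mobile",                 # spoofed fingerprint says mobile
    "navigation_style": "touch",
    "pointing_style": "mouse",
}
print(consistency_flags(session))
```

Each engine individually saw the "usual" user; only the cross-check reveals that the data sources cannot all be telling the truth, which is the zero-trust point: verify the engines against each other.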
Then we got protection methods like stacking and ensembling against generative adversarial networks, and the concept of continuous retraining, because everything flows, everything changes: the UX of the service providers is always being updated, so you cannot just build a model and forget about it forever. And here we arrive at zero trust: you cannot trust the data, because it can be forged; you cannot trust single machine learning engines, because they can be fooled with specially prepared data; and you need continuous learning to follow the changes the business makes within the service providers, plus feedback between the engines, so they are not alone with the detection. The next step, in my head, is adaptive risk management: apply the business logic depending on the risk, and apply a user experience that is proportionate to the risk.
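The adaptive risk management step can be sketched as a mapping from a risk score to a proportional response; the thresholds and action names below are my assumptions for illustration, not a prescribed policy:

```python
def respond(risk: float) -> str:
    """Map a risk score in [0, 1] to a proportionate business response."""
    if risk < 0.3:
        return "allow"           # frictionless experience
    if risk < 0.6:
        return "step_up_auth"    # e.g. ask for an extra factor
    if risk < 0.85:
        return "limit_actions"   # e.g. cap transaction amounts
    return "block_and_review"

# A rooted device alone raises the risk a little; it steps up
# authentication rather than blocking the user outright.
print(respond(0.35))
```

The design point from the talk is that blocking is the last resort: the user experience degrades gradually with risk instead of failing on a single static check.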
So, the question: most data science people will tell you, okay, we just need more data to solve the problem. So will adaptive risk management solve the problems? I want to show you three counterexamples. At one of the recent Black Hat conferences, the accelerometer was used to try to breach privacy. A gyroscope can be used to capture acoustics, not directly, because its resolution alone is not enough, but with some smart methods it was possible to use the accelerometer you probably have in your smartwatch or your phone to recover spoken words. You don't really need the microphone. The second: from the gestures of the hand holding your phone, or while you're using an ATM, it is possible to recover your PIN.
The third is quite specific: detecting intoxication. Think about countries where alcohol or other substances are prohibited; with that, you have a side channel to detect whether your employees or somebody else are intoxicated, from as few as eight steps. So this is one of the privacy risks. Another one, very briefly: earbuds. There is new research related to type-four behavioural biometrics, which is gait. We are all different as people; our muscles and bones are slightly different, so the way we move is also unique, and it can be misused in yet another way. There is also the very big problem of correlation bias, which is related to the way we think about the world in general. One piece of research out there says, okay, it's quite unusual to have a stop sign in such an environment.
And the issue is that most of the most popular image detection engines did not detect the stop sign. We see it perfectly; it was placed there, physically or virtually, and it was not detected. We can translate this into our world by saying, okay, it is probably not possible to get malware from a macOS system, or from this government network, and so on. But it's a bias, a bias that we as people introduce into a simple rule system by declaring something impossible. The only way to approach it is to adapt, to retrain over time, and not to trust any single engine. So where are we going? We are going towards standardisation. And the issue here is that there is no average behaviour of people, because people are not average; they don't fit the average at all.
There is no average body trait, and there is no standard dataset, especially for behavioural biometrics. For biometric data interchange there are ISO standards and so on, so it's quite easy to move from one vendor to another; but because behavioural models are not interchangeable, and there are multiple vendors, you cannot compare anything you have here. One of the findings I had yesterday, which was great: at the OpenID Foundation there is the SSE group trying to standardise information sharing, and I believe some of that can also be used for this use case, for example with behavioural biometrics. So I have high hopes for that standardisation; I'm looking forward to it. I couldn't change the slide after yesterday, which is why it's not in here. The next step that every customer, and every one of us, will be asking about is: where is the trustworthy or explainable AI?
If you're making a decision that the risk is high, why did you make it? Because the device is rooted, because the user behaviour doesn't fit, and so on. More and more users, and this is also part of EU legislation, will ask for an explanation of why such a decision was made. So there is a need for a common way of validation and a common understanding of what happens during know-your-user. I believe this is almost the last slide. Yep. So we can all be sure that there will always be threats in our way; the threats will evolve, and it's just a matter of whether you can follow them quickly enough. There will always be detection systems that get fooled over time, and we cannot rely on any single one of them. We need to keep learning, provide the feedback, retrain, and standardise, because otherwise you don't know if the user on the other side is really the user you think it is. Thank you very much. If you have any questions, I'm happy to take them.
Thank you very much, Mateusz.
