Thank you. That's always a hard act to follow, a fantastic presentation there a few minutes ago. Bot or human, friend or foe: coming back to the theme of AI and the world of machine learning, I think it's imperative now to have a PowerPoint title that really tricks and confounds all of the algorithms on the social media sites these days. So I really want to talk about some of the evolutions we're seeing in the authentication and authorization space.
And the main premise is really to look at how, as service providers, API owners, or delivery owners, we're constantly faced with making numerous micro-decisions. Are we interacting with a bot? Are we interacting with a person? Is this person known? Are they trusted or untrusted? Somehow we, the organizations delivering those solutions, need to find pragmatic and evolutionary ways to solve some of those problems.
So, first of all, let's set the scene as to where we are today and the classic problems we face as vendors, systems integrators, and architects really trying to deliver information systems that are both secure and usable. The classic one is barrier versus asset value: clearly, the bigger the asset being protected, the higher the barrier. It's the simplest information-security paradigm, but it leads to a quite paradoxical situation, where the barriers you place in front of your assets often end up further away from them.
Think of the classic old system of a firewall with public and private networks, or step-up authentication with very high barriers. We certainly see those barriers end up further and further away from the assets you actually want to protect.
And that leads us to a third paradigm, one that has been with us since the start of the internet age: assurance versus time.
Clearly we can introduce things like multi-factor authentication, barriers, step-up, and transactional authorization, but the assurance level associated with an access token or a session degrades over time. The longer an access token is in the wild, the bigger the threat of malicious use, theft, misuse, et cetera. So we're all constantly trying to find answers to these three very basic information-security problems.
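To make the assurance-versus-time idea concrete, here is a minimal sketch. The exponential decay model, half-life, and threshold values are my own illustrative assumptions, not something the talk specifies:

```python
import time

def token_assurance(issued_at: float, base_assurance: float = 1.0,
                    half_life_seconds: float = 3600.0) -> float:
    """Exponential decay: after one half-life, the assurance is halved."""
    age = time.time() - issued_at
    return base_assurance * 0.5 ** (age / half_life_seconds)

def requires_step_up(issued_at: float, threshold: float = 0.5) -> bool:
    """Force re-verification once assurance drops below the threshold."""
    return token_assurance(issued_at) < threshold
```

A policy engine could call `requires_step_up` before each sensitive operation, so a token's age directly drives how much friction the user sees.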
So we all then start looking for a method to increase our level of assurance, our level of understanding of the ecosystem in which we're delivering our services, APIs, and applications. Alongside buzzwords like AI and microservices, the relatively modern buzzword is zero trust.
You know, it's been around, ironically, for fifteen or sixteen years, I guess since the Jericho Forum analyst work eight to ten years ago, and Google's BeyondCorp paradigm seven or eight years ago. All of that is trying to increase security by moving the analysis from a network perspective to an identity and device perspective.
So we're sort of distributing how we analyze the interaction between the consuming user and the underlying service. Add to that the whole concept of continuous authentication, continuous authorization, continuous analytics: near-real-time analysis of something. Coming back to the assurance-versus-time paradox, we don't want a situation with a two-hour access token where, at the end of those two hours, there's a much lower level of assurance. We somehow need to continually verify. And if you squint a little, it looks a bit like a stateless ecosystem. Everything becomes stateless: every interaction gets verified; every call, every GET, every access token, every context request has to be revalidated.
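As a sketch of what "revalidate everything, statelessly" can look like, here is a simplified, JWT-like token that is checked on every request. The HMAC construction and field names are illustrative assumptions; a real deployment would use a standard JWT library and managed keys:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # illustrative only; use a managed key in practice

def mint_token(subject: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, HMAC-signed token (a simplified JWT-like shape)."""
    body = base64.urlsafe_b64encode(
        json.dumps({"sub": subject, "exp": time.time() + ttl_seconds}).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()

def verify_token(token: str):
    """Revalidate on EVERY call: check signature, then expiry. No server state."""
    body, sig = token.encode().rsplit(b".", 1)
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None            # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None            # assurance has expired: force re-authentication
    return claims
```

Because every call re-verifies the signature and expiry, no per-session server state is needed, which is the stateless property the talk gestures at.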
And the third assurance component is that of context. From a distance, this looks like a nice, pretty pattern; close up, you can see it's actually two buildings, with someone taking a picture from the ground, looking up into the heavens. Clearly, context plays a huge part in how that picture is interpreted, and it's exactly the same from an authentication and authorization standpoint. Context is absolutely critical, and it now seems to be one of the key components of not only increasing security but also delivering a much more usable experience for the end user. I want to try to touch on a couple of these components in the next ten minutes or so.
So there's a lot of talk around intelligence and analytics and how we can do things in a better way, and that has led us to a relatively new concept: intelligent AuthX, intelligent authentication and authorization.
We're not just taking very linear approaches to authentication, traditionally authenticating somebody against a directory or a database, but adding in extra data points, extra signals, extra pieces of information to allow intelligent decision making.
Now, in a world of big data and AI and lots of other mechanisms to capture data, intelligence only really comes about when we're trying to answer a very specific question. The concept of intelligent AuthX is really to do two things: one, increase security, or at least apply risk reduction in a much more fine-grained process; and two, improve the end-user or customer experience.
Those two things have always been at loggerheads; there has always been a paradox between increasing security and improving the end-user experience. By leveraging intelligence in the authentication and authorization process, we need to try to fulfil both of those very complicated use cases.
And the key to being able to do that is integrating with a very rich security ecosystem. We're moving away from just verifying a person or a device; we need to look at the entire interaction using a whole host of different data sources.
Those sources could be internal, external, or third party, but we're trying to build a much bigger, more detailed picture of the person or thing we're interacting with. That allows us to start building much more complicated and advanced login journeys, and we start to move away from authentication, you know, usernames and passwords, to a journey of discovery. You want to discover who you are interacting with: if you're spinning up an API, a login page, or an application, who is it you're working with?
Is it a bot? There are clearly lots of different reCAPTCHA-style mechanisms available, with you sitting there selecting how many red fire hydrants or bridges are in the picture, and that process is now a de facto part of any login system, like it or not. Clearly that needs to be a pluggable, injectable component; it's a default component. But if you then start to break that journey down: is the person you're interacting with a returning user? Have they used your system before?
Is there a simple mechanism to register that person, redirecting them down a route where, if it is a person but they're unknown, some sort of smart profiling incrementally increases our knowledge of the actual person themselves? But what happens if they are a returning user? So they are a person, they're returning, they're known to you, but are they trusted?
And that brings out a much bigger and more complex question. Are they trusted? How can you make that decision? Are they a friend? Are they a foe?
And this is again where we start to look at lots more micro-interruptions to that journey. Is the device they're using known to us? You might do some device analysis and characteristic checking, maybe some additional MFA processing or extra barriers if that device is not known. But then we start to look at context, and this context could be anything from the location to the user agent to the browser, maybe analyzing the applications.
If it's a mobile interaction, have they got any applications on the device whose level of assurance you don't think is appropriate for a trusted transaction? We're then allowing ourselves to make a much more fine-grained decision on the back of that: we're not just saying yes or no, allow or deny.
We're trying to make a much more fine-grained decision, allowing that context to have some relevance in those downstream interactions. But to make those contextual decisions, we need to integrate much more from a non-identity signaling perspective. Context isn't just about the person; we need to somehow integrate a lot of non-identity-related signals into our authentication journeys. Classic examples are things like device analysis and threat-intelligence systems.
There are numerous threat-intelligence APIs, applications, and cloud-based systems that you can integrate into your authentication journeys: identifying malicious IP addresses, malicious apps from the app stores, malicious domains, for example. How can you integrate that information into your standard authentication journeys? And then there are the traditional identity signals: the persistent signals in databases and directories, biometric hashes, and so on.
Clearly, they're not going anywhere, but we need to be able to augment that information in a very simple, business-as-usual fashion and leverage the huge new security ecosystem that is clearly available on our doorstep.
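A minimal sketch of blending identity verification with non-identity signals in a login journey. The hard-coded signal sets and the `login_journey` shape are hypothetical; a real deployment would call threat-intelligence APIs and breached-credential services instead:

```python
# Hypothetical, hard-coded signal sources for illustration only.
MALICIOUS_IPS = {"203.0.113.7"}       # stand-in for a threat-intelligence feed
BREACHED_PASSWORDS = {"P@ssw0rd"}     # stand-in for a breached-credential service

def login_journey(username, password, source_ip, verify_credentials):
    """Combine credential verification with non-identity risk signals."""
    signals = []
    if source_ip in MALICIOUS_IPS:
        signals.append("malicious_ip")
    if password in BREACHED_PASSWORDS:
        signals.append("breached_credential")
    if not verify_credentials(username, password):
        return {"outcome": "deny", "signals": signals}
    # Valid credentials, but risk signals can still add friction (step-up).
    return {"outcome": "step_up" if signals else "allow", "signals": signals}
```

The point is the shape of the decision: credentials alone no longer yield a plain allow/deny, because the surrounding signals can upgrade the journey to a step-up challenge.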
So what does that really mean to us when we're devising our login journeys?
Well, we need to move away from black-and-white decision making, where authentication fails with a 401 and authorization fails with a 403. We need to provide much more context, much more fine-grained responses, down to our intelligent gateways and applications. So we start to look at things like caveats, which allow much more intelligent decision making at both authentication time and during authorization processing.
So I've amplified here the contextual-analysis components, and these six, seven, eight different components are very much bespoke to the deployment itself. This again ties into the risk analysis that your downstream systems can interpret and actually act upon. But the key takeaway is having that fine-grained response information in a caveat.
In this example, we're looking at application assurance on a user's mobile phone. The credentials are correct, and they may well have gone through a biometric process, but there could be an application on that device that has failed some sort of application-assurance check. We're not just going to block the user, because that's a very bad user experience; we will allow the user in, but we're going to attach a caveat to their particular session. This caveat is short-lived and immutable; the next time they interact with us, it may well be removed or altered.
This is a simple payload that gets presented downstream to an intelligent gateway, basically saying: the credentials are correct, but something is suspicious, so throttle their access. For the next minute, just allow them five page hits on that particular piece of content.
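A sketch of what that caveat payload and the gateway's handling of it could look like. The field names, the 429 response, and the `ThrottlingGateway` class are my own illustrative choices, not a product interface:

```python
import time

def issue_caveat(reason: str, limit: int, window_seconds: int) -> dict:
    """A short-lived caveat attached to an otherwise successful login."""
    return {"reason": reason, "limit": limit,
            "expires_at": time.time() + window_seconds}

class ThrottlingGateway:
    """Downstream gateway that honours the caveat instead of returning 401/403."""
    def __init__(self, caveat: dict):
        self.caveat = caveat
        self.hits = 0

    def request(self) -> int:
        if time.time() > self.caveat["expires_at"]:
            return 200        # caveat has expired; full access resumes
        if self.hits >= self.caveat["limit"]:
            return 429        # throttled, not blocked outright
        self.hits += 1
        return 200
```

For example, with `issue_caveat("app_assurance_failed", 5, 60)` the user gets five successful hits in the window, and further requests are throttled rather than denied.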
So again, it's not hitting the end user with a big, high level of friction or a barrier. The end user may well be legitimate, but there's a slight caveat on the access they get in the downstream system.
And the downstream systems can then deliver a much more personalized experience. So what does the authorization side look like when we talk about context?
Well, clearly authentication events want to collect some information, not just around the user identity but around the context associated with that identity. When the authorization event takes place, we're basically comparing the context captured at login time with the context observed at authorization time. That could well be a signed object, a cryptographically signed JWT, which may contain the user agent, IP address, application assurance, or whatever context you deem relevant for your particular ecosystem.
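Here is a minimal sketch of that comparison: sign the login-time context, then at authorization time verify the signature and check for drift. The HMAC-over-JSON construction and the outcome names are illustrative assumptions standing in for a real signed JWT:

```python
import hashlib, hmac, json

KEY = b"demo-context-key"  # illustrative only

def sign_context(context: dict) -> dict:
    """Capture login-time context as a signed, tamper-evident object."""
    payload = json.dumps(context, sort_keys=True).encode()
    return {"context": context,
            "sig": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def authorize(login_ctx: dict, current: dict) -> str:
    """Compare login-time context with authorization-time context."""
    payload = json.dumps(login_ctx["context"], sort_keys=True).encode()
    good = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(login_ctx["sig"], good):
        return "deny"                     # signed object was tampered with
    drift = {k for k in current if login_ctx["context"].get(k) != current[k]}
    # Drift (e.g. a new IP address) degrades access rather than denying it.
    return "allow_with_caveat" if drift else "allow"
```

Note that drift does not flip the decision to deny; it routes to the caveated path described above.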
And then, from a decision-making perspective, whether you're using a traditional centralized PDP or a distributed ecosystem, you can make a much more informed decision. Again, we're moving away from big authentication events and big authorization events to much more micro, context-driven decision making, with context occurring at every part of the ecosystem, every part of those interactions. And what we're starting to see a lot more now is not just applying an authorization event at an API or an intelligent gateway.
It's starting to trickle into things like data-redaction services and DLP, data loss prevention. This is a basic example where you have an intelligent gateway sitting in front of an API, and the gateway is actually redacting a response from a cars API that returns the make, model, and price from a particular cars database.
And it's dynamically redacting the price information because the context of the user session has altered.
The user may well have logged in on a corporate or trusted network, then dipped out to Starbucks or Costa Coffee or wherever to get a drink. The IP address has changed. So the underlying intelligent, contextual response from the PDP is to say: hang on, the credentials are correct, the device is correct, but something's not quite right, so I'm going to dynamically redact, remove, some of the content this end user would normally be able to see.
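The redaction step itself is tiny once the drift decision exists. This sketch uses the talk's cars example; the record shape and the choice of `price` as the sensitive field are assumptions for illustration:

```python
def redact_response(record: dict, context_drift: bool,
                    sensitive: tuple = ("price",)) -> dict:
    """Strip sensitive fields from an API response when context has drifted."""
    if not context_drift:
        return dict(record)
    return {k: v for k, v in record.items() if k not in sensitive}
```

So a gateway would pass `context_drift=True` after the IP change, and the make and model still come back while the price is dynamically removed.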
So from a security perspective, the security is more dynamic and contextual, but from an end-user perspective, they can still go about their job, just in a slightly dynamically altered fashion. That hopefully gives the end user a better experience, while that fine-grained micro-analysis allows us to deliver a much more secure one.
So, to quickly wrap up: from an authentication perspective, context is really about collecting non-identity-related information, whether that's breached credentials, threat-intelligence systems, or device analysis; we need to capture that information during the login event. And from an authorization perspective, we're no longer just looking at groups and permissions, the traditional signature-based approaches; we're looking to analyze contextual differences. The result of that is to leverage caveats.
We need to move away from true and false, allow and deny, 401 and 403. We need to be much more fine-grained in how we deliver downstream access: caveat the access, and be much more dynamic in how we deliver that to intelligent gateways and agents.
And number four: the correct level of friction at the correct time. It seems really simple, but certainly from a digital end-user experience, only apply friction when you absolutely have to. I think gone are the days of step-up authentication, big barriers, and blanket MFA; you need a much more user-friendly experience.
And again, leverage things like proxies. We heard about sidecars in the microservices world: intelligent proxies, intelligent sidecars, WAFs (web application firewalls), CASBs. Leverage those tools within the ecosystem to take the results of things like caveating, the results of that contextual analysis, not to block, but to try to say yes as many times as we can. The context aspect seems to be the classic case of the brakes on a Ferrari: they're not there to slow the Ferrari down, they're there to allow the Ferrari to actually speed up and go faster. And that really is the key with context and digital identity. Thank you.
So thank you, Simon. Let's stay a little while; we have a few questions here, so maybe we have a look at them. There are even a couple of questions already, and I think the first two are somewhat related: one is about scalability, the other about standardization. If you build an application against such a system, obviously it would be great to have some sort of standard in between. The other thing is that scaling with authentication can already be challenging; scaling with authorization is more challenging, and scaling with dynamic or continuous authentication even more so. So how do you see these aspects of standards and scalability?
I think it's a very fair question.
I think the scale aspect has two parts to it as well. One is things like performance and throughput, all of the TPS-style questions you get. The other is the complexity of how you would deploy it, and the governance and all of the aspects around that.
So, like anything, it has to be applied to the correct projects. I wouldn't expect this to be applied to every single deployment; it has to be applied in a specific manner, and that allows the functionality to be scaled down, if you like. On standardization, I think it's really a case of trying to identify a standardized set of use cases.
Are you driving standards?
At the minute, it's more about requirements gathering, seeing if we can find a consistent set of use cases, and that can help to drive future standards.
So, looking forward to that. Thank you very much.
Cheers. Thanks, bye. Thank you very much.