This session aims to explore the practicalities and paradigms of integrating AI identities into current and future digital infrastructures. Topics will include the regulatory and governance challenges posed by autonomous AI operations, the technical requirements for creating and managing AI identities, and the technical and even legal considerations of recognizing AI as identifiable entities, focusing on accountability and traceability within various frameworks.
We are still in identity security as the main topic of this track. We had a great discussion earlier with Adam Price and with Justin Richer regarding inclusivity and digital vulnerability. Now we want to cover another topic which is a bit more forward-looking, not yet fully covered, and I don't think fully understood. We want to talk about the topic called "Autonomous, yet accountable: do we need identities for AI?" But first of all, I would like the two of you to quickly introduce yourselves.
I don't know if it's really necessary, but let's do it anyway, maybe starting with Yacouba.

Well, Yacouba Cedars. I've run identity management and everything around it in the financial sector for the past 22 years. Been there, seen that, seen the whole topic grow. And been audited all the time.

Okay, that helps. Martin Kuppinger, Principal Analyst and one of the founders of KuppingerCole Analysts, in the identity space for, I think, more than three decades now.

Right. And the same holds true as for the last panel we had: if there are questions in the room, just give me a sign and I'll try to weave them into the discussion. But as a starting point, whoever wants to pick it up: do we see a need for individual identities for AI entities? And what do you see as the primary factors driving this need? Do we need it?
And if so, why do we need it? Maybe starting with you, Yacouba.

My statement would be: well, there are dependencies, but in principle, yes. And it's not just because we have an opinion; it's in the new AI regulation that you have to assess the types of AI, you have to report about them, you have to define them as high risk or not. Even for auditing alone, you would need to identify your AI. So I think that's one of the reasons.

Yeah, I think I would give it a bit of a different perspective.
I'd like to talk about AI identity, because I think there's a lot of AI coming towards identity and a lot of identity coming towards AI as well. Let's call it AI identity. And I think there's another area. We have, and will have, metaverses. It will not be the metaverse; there are already many metaverses out in organizations, about augmented reality used in manufacturing, et cetera, which are a sort of metaverse.

So it's not the metaverse, it will be many. But what is happening, increasingly, is that there will be a sort of digital double of us. I would love to use the term digital twin, but it's already used in another area, so let's call it a digital double. There will be an avatar of Martin, or many avatars, acting somewhere on behalf of Martin: things that are powered by some sort of AI and that are doing things on behalf of me.

They are, so to speak, digital representations of Martin, and in that sense they should have some sort of an identity. Okay, we could get philosophical and ask ourselves whether identity is the right term, because identity, when we look at the term itself, would require consciousness. So maybe they only need...

That's actually philosophical.

Very, very philosophical. We would go back to Plato and all those. We'd better refrain from that part. An identifier, for sure. But commonly we use the term identity.
And so in that sense, I think we should have an identity for everything that is acting on behalf of someone, of something, of some organization, or whatever.

Well, I would rather compare it to something I've done in one of the banks: implementing attribute-based access control. Meaning you have a set of rules that could calculate new rules or do some learning based on data. And the outcome would be, after the rule is played: there's a request, there are the rules, there's the data set, and there's an outcome. You can, or you cannot, withdraw your 10 euros. When the auditor comes, he wants to know whether there was the right for these euros to be withdrawn, and I have to prove the whole calculation, reverse-engineer the whole rule set or algorithm or whatever it is, to prove that yes, this was correct. Now I use this as a parallel. This was in 2015; it was not AI, it was rule-based. But I learned a lot from it: you can have algorithms and you can have rules, but even above that, how do the rules play together when one rule is overruling another? That becomes quite complex all the time. So I would want to know which algorithm, with which capacities, talking to which attributes and which data sets, gave me the outcome: yes, you have access, or no. Or anything you were searching for. But of course there are many types of AI. If it's a simple rule set with a clear purpose and it's a critical thing, I want to identify which exact algorithm is at play, and I want to freeze it, or store it, or get some trail.
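The audit pattern described here, recording the request, the rules consulted, the attributes used, and the outcome so the decision can be replayed later, might be sketched like this. This is a hypothetical toy example, not code from the bank in question; all names and rules are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Everything an auditor needs to replay one access decision."""
    request: dict
    attributes: dict
    rules_evaluated: list   # (rule name, result) pairs, in evaluation order
    outcome: bool
    timestamp: str

def evaluate_withdrawal(request: dict, attributes: dict) -> DecisionRecord:
    """Toy rule set: may the user withdraw the requested amount?"""
    trail = []

    # Rule 1: the account must be active.
    rule1 = attributes["account_active"]
    trail.append(("account_active", rule1))

    # Rule 2: the amount must not exceed the balance.
    rule2 = request["amount"] <= attributes["balance"]
    trail.append(("amount_within_balance", rule2))

    outcome = rule1 and rule2
    return DecisionRecord(
        request=request,
        attributes=attributes,
        rules_evaluated=trail,
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = evaluate_withdrawal(
    {"user": "alice", "amount": 10},
    {"account_active": True, "balance": 50},
)
print(record.outcome)          # True: both rules passed
print(record.rules_evaluated)  # the full trail for the auditor
```

The point is not the rules themselves but that the record is stored with the decision, so "was this withdrawal rightfully allowed?" can be answered years later without re-running the live system.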
And that's what I'm talking about.

And right now we are talking about that, so to speak, on steroids, aren't we? In the sense that there's something way more powerful, saying: okay, I'm the avatar of Martin, let's stick to this, and I assume I know what Martin wants. Maybe there are some defined rules that are set, and this thing evolves, so to speak. It learns. It goes further. We may end up with the still very common explainability, and I would say traceability, challenge in AI: where do these things come from, et cetera.

And then, going back to identity, we have something which in some sense has an identity, which is a representation of someone, or is totally anonymous. Take connected smart traffic, where we have a ton of autonomously acting systems which should learn, yes, they should learn from what is happening in the traffic and get better at it. So all of them have an identity, but then we end up with control, with liability, with a ton of, I think, interesting questions. And when you asked the initial question, do we need identities for AI, the counter-question is: hey Matthias, what do you mean with AI in that case?
So you've mentioned the avatar that acts on behalf of a person, so there's a relationship. And we have mentioned the totally autonomous entities that act autonomously and give you an outcome, as actors in themselves, let's say.

Exactly.

So the first thing would be some kind of identity relationship management, to say: okay, this is something that acts on behalf of Martin, but it's Martin who's responsible, maybe. Or, on the other hand, this is really fully autonomous.

Still a lot of relationships.

Absolutely, absolutely. But relationships between each other, with no individual responsibility, maybe?

Yeah, that's the question. Responsibility and accountability are two things. And liability. I think that the process itself could have the responsibility, like an actor doing stuff on behalf of someone. But you are the accountable, the ultimately responsible person.
I just recently thought about something, and I don't have an answer, but I think it relates a bit to that. Say you have a vehicle that allows for autonomous driving, and then you're speeding. The first question is: could this happen at all? And if so, it would require, either or probably both, that this vehicle is able to ignore the speed limit, meaning the manufacturer has built it, or programmed it, in a way that it can ignore the speed limit. And then the question is: does it do that by itself, or are you able to configure your vehicle so that you say, you can easily drive 10 or 20 percent faster? And if you allow that, what about a situation where you need to speed a bit to avoid an accident?

And this, I think, is a very interesting point, because it involves a lot of things. There's the identity of the car, there's your identity, there's the organization; a complex relationship. A lot of questions around them: responsibility, accountability, liability.

Exactly. And that's why I want every acting autonomous process, or algorithm, or whatever you call it, to be identifiable: this was the one running on your car at this moment, with these parameters, and this is the reverse-engineered outcome, and that's why you crashed. And then the next question is the legal liability: was it your fault, was it the car's mistake, or the producer's, or whatever? But first you need to know which rules were playing, and that should be identifiable by some unique identifier. And that's what I call the identity.
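One way to read "identifiable by some unique identifier" is to derive a stable identifier for each run of an algorithm from its version, its parameters, and a snapshot of its inputs. The sketch below is a hypothetical illustration of that idea; the algorithm name and parameters are invented for the example.

```python
import hashlib
import json

def run_identifier(algorithm: str, version: str, params: dict, inputs: dict) -> str:
    """Derive a deterministic identifier for one algorithm run.

    The same algorithm, version, parameters, and inputs always yield the
    same identifier, so a logged outcome can be traced back to the exact
    configuration that produced it.
    """
    canonical = json.dumps(
        {"algorithm": algorithm, "version": version,
         "params": params, "inputs": inputs},
        sort_keys=True,  # key order must not change the identifier
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative: the lane-keeping component active at the moment of an incident.
rid = run_identifier(
    "lane_keeping", "2.4.1",
    {"speed_tolerance_pct": 0},
    {"speed_limit_kmh": 50},
)
print(rid[:12])  # short form stored alongside the decision log
```

Storing such an identifier with every logged decision is what makes the "which exact algorithm was at play, with which parameters" question answerable after the fact.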
Fully agreed. And I think the vehicle, by the way (everyone understands vehicles a bit), is a very interesting case, because when we think of the vehicle as an identity, then I think we are way too coarse-grained, because the vehicle is in fact a conglomerate of connected entities.

Yeah, yeah.

With very complex relationships, not only within the vehicle but with the outside. Take the black box for recording accidents, et cetera. This black box is one element, which gets data from other systems, and which has very sophisticated access rules, because in some countries the police may be allowed to access it at any point; in some countries insurance companies may read a lot of data out of it; in other countries only certain groups, and only in the case of an accident. And then, around this vehicle, there are many different entities: the driver, the other people sitting in the car, the leasing company, the manufacturer, the garage, the police, the insurance company, and the people working there as well, plus all the other vehicles and the traffic control systems that are connected. And we end up with: without an identity, it will not work, for everything which is autonomous. And this brings us back to AI, because when we talk about autonomous, we always have a bit of AI in it, however big the quotes may be. It means that without this identity concept, we can't get in control, nor track it.

So you come from the governance side; it's just an element of your car, like an exhaust pipe.

It's also about the control side: who is allowed to change what, under which circumstances? So we need identities and sophisticated, so to speak, access control concepts to make all these things work, I believe.

We had a question.
Sorry, I have to bring you the microphone; as usual it keeps me moving. To come back to the question: you were talking about speeding and what would happen, but isn't there already something happening now with AI systems, especially the image-generating ones? They say: if you use our image-generating system and you get sued for copyright because of something that was in the training data, then we as the manufacturer will take on limited liability; we will cover, let's say, all the court costs up to a certain point. Isn't that more or less the same thing?

So you get a disclaimer culture, because everyone is trying to get away from the liability. That's what you mean? Yeah.
Oh, sorry. No, no, that's good. But I think the AI is just some element in your car, like your motor: if you don't know anything about cars, your motor block is also a black box for you. There are elements, and they're numbered, and there are serial numbers, and you have to put in the oil, and that's it. And you know how to drive. But if something's wrong in your car, that could also be the AI. But it should be traceable.
I believe there's a fundamental difference between your example and my example. In your example, it's always about, so to speak, conscious use. In some sense you say: create something that looks like Dali, or stuff like that. It's something where you say, I use this actively to create something. When you have a vehicle, you say: I want to have something that drives me autonomously. And when that thing by itself, or that conglomerate of things, ignores the speed limit, or when you say, I ignore the speed limit, then you have a different scenario, because it's a bit more complex. In the one case, you just get a tool you do something with. In the case of this speed limit thing, as I've said, the vendor could allow you to change settings for ignoring it, or the thing could ignore it automatically, by itself. That's the autonomy, and then you have very different scenarios. So I think it's more complex. But yes, there's the risk of a liability culture. On the other hand, we also see that there is some level of liability: I think for autonomous driving level three, as a vendor, you need to take a certain level of liability, for instance.

If I jump in: below level three it's a bit different. But we're running out of time a bit.
Yeah, we are running out of time. Maybe we should take one step back and look into use cases. You've mentioned identifiability of an AI, to know who did what. If I read the act about artificial intelligence, it classifies AIs in levels, right? Unacceptable risk, threats to people's lives or whatever; then high risk; and then minimal or lower risk. And the impact on the process that's being executed: if it's driving a car, of course that could be high risk, but acceptable, because you should manage it properly. I've been thinking about that a lot; it depends. If it's just drawing a cow with a doll's head, that's not a high-risk AI, and then it's less important to know exactly which algorithm did it. But the high-risk AIs as mentioned in the act, those are the ones in hospitals, cars, aviation. There's a lot of legislation in those industries themselves talking exactly about the composite risk that now emerges from the AI conjoined with the existing industries' old-school risks, which are all safety risks.
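The idea that the risk tier should decide how much identification and audit detail is required could be sketched like this. The tier names follow the speakers' summary of the AI Act, not the regulation's exact legal text, and the use-case mapping is purely illustrative.

```python
# How much audit detail each risk tier demands (illustrative policy).
AUDIT_POLICY = {
    "unacceptable": None,        # must not be deployed at all
    "high": "full_trace",        # record algorithm id, parameters, inputs
    "minimal": "outcome_only",   # lightweight logging is enough
}

# Hypothetical mapping of use cases to tiers.
USE_CASE_TIER = {
    "autonomous_driving": "high",
    "hospital_triage": "high",
    "image_generation_toy": "minimal",
}

def required_audit(use_case: str):
    """Return the audit level a use case demands; default to caution."""
    tier = USE_CASE_TIER.get(use_case, "high")
    return AUDIT_POLICY[tier]

print(required_audit("autonomous_driving"))    # full_trace
print(required_audit("image_generation_toy"))  # outcome_only
```

Note the design choice of defaulting unknown use cases to the high-risk tier: the cost of over-auditing a toy is lower than the cost of under-auditing something safety-critical.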
But at least, with all the things we touched, we agree there needs to be identity for AI.

And we need to identify the right use cases. So it's authentication; in some cases it's more important than in others.

Yeah, exactly. But also for forensics, as you've mentioned: why did this happen? So you need to have auditability.

Auditability, but you also need transparency, and human intervention. If someone has a claim that some decision or whatever was wrong, there should be a kill switch, and there should be manual intervention, and all these things. And if you don't know where to intervene... You need at least an identifier for the process. Right. Okay.
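The kill-switch point can be made concrete: manual intervention only works if each autonomous process is registered under an identifier you can address. This is a minimal hypothetical sketch; the registry API and process names are invented for the example.

```python
import threading

class ProcessRegistry:
    """Track running autonomous processes by identifier so each one
    can be individually told to stop."""

    def __init__(self):
        self._stops: dict = {}  # process id -> stop signal

    def register(self, process_id: str) -> threading.Event:
        """Register a process; it should poll the returned event."""
        event = threading.Event()
        self._stops[process_id] = event
        return event

    def kill(self, process_id: str) -> bool:
        """Signal one identified process to stop; False if unknown."""
        event = self._stops.get(process_id)
        if event is None:
            return False  # cannot intervene without an identifier
        event.set()
        return True

registry = ProcessRegistry()
stop = registry.register("avatar-martin-001")

# The process is running; its stop signal has not fired yet.
assert not stop.is_set()

# Human intervention: kill exactly this process, by its identifier.
registry.kill("avatar-martin-001")
print(stop.is_set())  # True: the identified process was told to stop
```

An unregistered process cannot be killed this way, which is the speakers' point: without an identifier, there is nowhere to intervene.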
So we are now at the same point as in the last discussion: we just touched the surface of the topic. But I think this is an important discussion to start, also with AI spawning AIs, and clouds of AIs communicating with each other. Those will be topics much more complex than what we just touched upon. But nevertheless we agree, unless somebody in the room disagrees, that we should have identifiable AI instances acting within our systems.

In critical systems. Yeah. In critical processes and in critical systems.

I'll just grab the mic. I would say: if something goes out into the world, it should be identifiable. But if I run something in my garage, a hobby project, should it be identifiable?

Minimal risk, low risk... until it runs over your cat?

Well, when it escapes my garage, then I do think, once they track it back to me, there will be something to be said. But as long as... Okay.

Okay, thank you very much for this discussion. Maybe we can ask AI to help us with this question. Right, ask ChatGPT.

Just, yeah. Okay. Thank you very much. You're welcome.