First time here, and I have to say I was impressed, and I'll tell you why: it's great to see two female speakers. It's fantastic.
I'm also the founder of an organization called Women Leading in AI. Not that I see many women in this room, but I can see some there, there, and I saw some over there. You'll be joining Women Leading in AI by the end of my presentation, because we need global help.
I'll explain why in a minute. So kudos to the organizers; it's really good to see female voices, especially leaders, talking about the crucial issues that Katrina raised at the very beginning this morning: the fact that we are in a space where we are constantly online. The philosopher Luciano Floridi says we are "onlife", which means we are neither offline nor online all the time. We are constantly somewhere in the middle, with data being collected at every point.
So let me run through a few things quickly. Day in, day out, I work with organizations across the UK, the EU and globally on the ethical governance of personal data, and 99% of my work is with organizations deploying artificial intelligence and wanting to do the right thing.
That means not just privacy and data protection, but also transparency, fairness, and how they communicate with customers and users about the use of algorithms, especially when those algorithms are there to make decisions about them.
We've heard so much about all this recently, so much talk about data ethics and digital ethics. I'm sure you've heard what happened with Google's ethics board, which was set up and then disbanded because of controversy around it, especially from Google's own employees. Organizations that want to deploy artificial intelligence have to be aware of the complexities around it: fairness, bias, and also the degree of explainability and the trade-off they want to establish between the accuracy and efficiency of the system and what they can communicate to customers. And there is indeed a large movement around all this. If you use AI for your identity management programs, there is a lot of work going on at the moment on standards.
For example, the IEEE has just published the initial drafts of its standards, the ITU is doing something similar, and ISO standards are going to be developed. Katrina was saying tech is about power, and there is indeed a lot of global power struggle around these standards and how they are to be developed.
At the same time, the European Union has published its guidelines on trustworthy AI, covering privacy, transparency, accountability, explainability and a whole range of other issues, including an algorithmic impact assessment (AIA).
They are looking for organizations to take up the AIA and see how it works in practice. So if you use AI, I would really recommend looking at the guidelines for trustworthy artificial intelligence, because you could be one of the pioneers testing whether their algorithmic impact assessment works in the way they intend.
Now, I'm going to focus on organizations using AI for their work, and I want to set the scene. The first few slides cover material you all know very well. Organizations come to me, whether it's a bank or another company, and say they want to use AI to find the right balance between their know-your-customer and anti-money-laundering obligations and, at the same time, a seamless approach to identity management. And I say to them: okay, let's work on this, but you need to bring all your actors together, because adopting identity management as an organization requires governance around it. It requires your technical team, and it requires your vendors.
I'm sure there are many vendors in this room, but it also requires the domain experts, because especially when it comes to using artificial intelligence for biometrics and facial recognition, we know the challenges are huge. It was only yesterday, at midnight last night, that San Francisco banned the use of facial recognition in their city. Why did they do that? On the basis that individuals, citizens, must have a say over the technologies used to surveil them and to make decisions about them, but also because it's about accountability and transparency.
And it's also about the recognition that facial recognition, if unfettered and unregulated, will change the way we inhabit and exist in our shared public spaces. So why am I saying this? Because the complexities around the use of artificial intelligence, and facial recognition in particular, are huge, and the fact that these systems discriminate against people of color is a major issue that needs to be taken into account.
There is some fantastic work by scientists at MIT, especially Joy Buolamwini, which I really recommend as a read, showing how facial recognition techniques discriminate against people of color. And it was only a few years ago that, sadly, Google's photo tagging was labeling Black women as gorillas. That is unacceptable, and in order to work on these issues it is really important to take all of this into account.
So, briefly, and I've already mentioned some of these issues, let's talk about the use of artificial intelligence and the ethics that go with it. We all know what AI is.
To me, the definition of artificial intelligence, plain and simple, is machines doing, using intelligence, what humans have so far been able to do. Machine learning and deep learning technologies are branches and subgroups of artificial intelligence. We are far away from the artificial general intelligence, AGI, that many people talk about and that scares us all in the media and in films.
Although we may get to that point one day. I don't have to repeat the definitions of identification, authentication and authorization.
This is your bread and butter: the issues organizations face around the misappropriation of identity in relation to terrorism, money laundering, financial crime, people smuggling, weapons smuggling and so on. And this is going to increase and become more prevalent as digital transactions become ubiquitous, as the speakers before me were saying. Again, I don't have to run through all of this.
But when I work with organizations, this is what I talk to them about: what authentication is and what it can be based on.
The next phase we are all working on at the moment, as you've just heard from Microsoft, is "something that you are": the fingerprint, for example. I've recently been working with an organization on retina recognition, which is really fascinating, really interesting, and on bringing control around it, especially in relation to multi-enrollment, which I think is important when it comes to retina and iris recognition.
And then, of course, modern systems seem to incorporate other means of identification, for example signature dynamics, typing patterns or geolocation. That seems to be the range of options I see organizations wanting to add to their identification systems. In terms of passwords, this is where we are at the moment, and we all know the dangers of using passwords, although there are different degrees of security around them.
Biometric recognition is something I see used more and more by the organizations I work with. It requires a close enough match between the information taken at the enrollment phase and the information later obtained via a live sample. Accuracy is absolutely paramount for this to function, and I'm afraid an accuracy of 90% or so may not be good enough. I'm sure you've seen what happened a few days ago in China with the use of WeChat.
Somebody was asleep, and another person enrolled and authenticated them in the system using a photo taken while they slept; the person then realized they were losing a huge amount of money. So what I'm saying is that it is important to really look into the issues around accuracy: the false acceptance rate and the false rejection rate. The false rejection rate indicates how often the system incorrectly denies access to legitimate individuals, and the false acceptance rate how often it incorrectly grants access to impostors.
So understanding, before you roll out a system like that, what degree of accuracy you reckon is good enough becomes absolutely important.
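The "close enough match" decision can be sketched in a few lines. This is a minimal, purely illustrative example, not any vendor's actual matching algorithm: it treats enrollment and live-sample templates as feature vectors (the numbers are made up) and accepts only when cosine similarity clears a threshold.

```python
# Toy sketch of a biometric match decision: compare an enrollment
# template with a live sample and accept only above a threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def matches(enrolled, live, threshold=0.95):
    """Accept only if the live sample is 'close enough' to enrollment."""
    return cosine_similarity(enrolled, live) >= threshold

enrolled = [0.9, 0.1, 0.4, 0.8]      # toy template captured at enrollment
live_ok  = [0.88, 0.12, 0.41, 0.79]  # same person, slightly noisy capture
live_bad = [0.1, 0.9, 0.8, 0.2]      # different person

print(matches(enrolled, live_ok))   # expect acceptance
print(matches(enrolled, live_bad))  # expect rejection
```

The threshold is exactly the dial that trades false acceptances against false rejections, which is the next point.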
I'm sure you are aware of this, because you are experts in this field: the crossover error rate is the point at which the FRR meets the FAR. The crossover point between the two can be moved by changing the system's sensitivity, and it is the most important element to look at in biometric accuracy.
So, interestingly, AI is becoming very important in identity and authentication, and in particular in improving accuracy, in moving that crossover point we were talking about a second ago. AI algorithms can be used in cases where the analysis of the match gives no clear distinction, and effective use of AI may even remove identification as a separate step.
For example, and I always laugh when I see this, but I think it's extremely fascinating: biometrics can already identify users when they get into a car, modify the seat settings and even prevent car theft, without the user having to state who they are.
This is where artificial intelligence may become absolutely important in the years to come. Similar systems can be adopted in office environments to manage access and establish permissions, thus eliminating identification as a preliminary step. This is where AI becomes absolutely crucial.
Obviously there are issues around storage. Data taken during enrollment is simplified into a mathematical model, a template, and that template has to live somewhere.
Is it stored on a server, in distributed data storage, on the device, on a portable token? This is where governance becomes crucial, and these are the conversations I have with the organizations I work with. This is why I always say: let's create a structure and bring together the data architects, the developers, the vendors and the domain experts to define all these processes. Where do we store this data, and what is the security around it?
And what are the risks of storing biometric data? Think of confidentiality, integrity and availability: the biometric files should be encrypted at rest and in transit. What are the weak points? And what about availability? Any lack of availability will cause the system to become unusable, a single point of failure.
Now, I want to move on to this, because it's important to me: cancelable biometric templates, a technical and risk-management approach.
They are meant to tackle the problem of biometric data being immutable, which means a permanent biometric compromise when a template is stolen or corrupted. This technology applies a distortion, a filter, to the biometric template taken during enrollment. If the template is then stolen, the distorted version is canceled and a new one is created with different characteristics. The main methods, which I'm sure you're aware of in this room, are biometric salting and non-invertible transforms.
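The salting idea can be sketched very simply. This toy version, which is not a production scheme, just permutes and offsets a quantized feature vector using a user-specific salt: same biometric plus same salt gives the same protected template, and revoking means issuing a new salt.

```python
# Toy sketch of biometric "salting" for cancelable templates:
# mix a user-specific secret into the template before storage, so a
# stolen template can be revoked by issuing a new salt.
import hashlib
import random

def salted_template(features, salt):
    """Distort a quantized feature vector with a salt-derived permutation."""
    seed = int.from_bytes(hashlib.sha256(salt).digest()[:8], "big")
    rng = random.Random(seed)
    order = list(range(len(features)))
    rng.shuffle(order)                          # salt-dependent permutation
    offsets = [rng.randrange(256) for _ in order]
    return [(features[i] + o) % 256 for i, o in zip(order, offsets)]

features = [12, 200, 33, 90, 151, 7]  # toy quantized biometric features

stored = salted_template(features, b"salt-v1")

# Same biometric + same salt -> same protected template, so matching works.
assert stored == salted_template(features, b"salt-v1")

# Compromised? Issue a new salt: the old protected template is revoked.
reissued = salted_template(features, b"salt-v2")
assert reissued != stored
```

Real schemes operate on the matching geometry so that comparison still works in the transformed domain; non-invertible transforms go further by making recovery of the original features computationally infeasible even with the salt.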
I'm sure the experts in this room are aware of all this. I just want to close on a few things. Using AI for biometrics, facial recognition and identity is not an easy process.
Because, as we've seen over the last couple of years, artificial intelligence, and the algorithms within it, need to be scrutinized: not just audited and tested at the last stage, but subject to ethical considerations from the outset, to make sure the outputs of the algorithms are not biased and produce fair outcomes.
For example, that they are not biased against people of a certain background, or against women. For that to be the case, there needs to be a good approach to ethics in the design and by design, embedded from the start. As you may be aware, bias in algorithms can arise for two main reasons. The first is the actual data that is used, and that itself happens in two ways: historic data, which carries the biases of the past, and the particular sample that you use.
The second is what is called an unequal ground truth, which normally emerges as proxy discrimination, and it relates to the weightings and the variables you use in that particular algorithm. You may not even ask yourselves these questions if, at the design stage of a particular product, you don't have the domain experts in the room to ask the right questions about where the bias in the algorithm may come from.
So it's really important that algorithms are assessed for potential bias at the very start, and the way we are moving at European level and beyond, through the standards and guidelines, is for these considerations to be embedded from the start. The examples I've put in here relate to exactly these issues of bias and fairness in algorithms.
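One simple first check of this kind can be sketched as follows. The group labels, records and numbers are entirely made up; the point is only the shape of the audit: compare error rates across demographic groups before deployment rather than after complaints arrive.

```python
# Toy fairness audit: per-group false rejection rates.
from collections import defaultdict

def false_rejection_rate_by_group(records):
    """records: iterable of (group, is_legitimate, was_accepted) triples."""
    rejected = defaultdict(int)
    legitimate = defaultdict(int)
    for group, is_legit, accepted in records:
        if is_legit:
            legitimate[group] += 1
            if not accepted:
                rejected[group] += 1
    return {g: rejected[g] / legitimate[g] for g in legitimate}

records = [
    # (group, legitimate user?, system accepted?) — illustrative only
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", True, False), ("group_b", True, False),
]

rates = false_rejection_rate_by_group(records)
print(rates)  # a large gap between groups is a red flag to investigate
```

A large disparity between groups, as in this toy data, is exactly the signal that should send the team back to the training data and the choice of variables.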
So if there is one thing I really want you to take away from today, it is this, and I always say it to the organizations I work with: when you decide to implement biometric, facial recognition or AI capabilities within your identity management programs, don't do it in isolation. There is serious governance that has to go with it, involving the data architects, the developers, your vendors, but also the domain experts asking the right questions about ethics, bias, fairness and all the privacy issues that have been discussed before.
I've put the key points on this slide, and this is what I wanted to talk to you about. The main issue for me is really the governance that goes with it, and the complexity the algorithms bring into the public space; the use of covert biometric recognition has been challenged, for example. So when you work within your organization and with your vendors, all these complexities need to remain high on your agenda. Thank you.