Thank you, I'm glad to be here and to tell you a little bit about the research project and the work I'm doing at the moment. To dive right in: my current positions are as an ethics officer at de Volksbank, a Dutch bank, and as a PhD researcher in AI and ethics at the Erasmus University Rotterdam. I started both functions at the same time because they are part of the same project, and the project is centered around the question: how do we operationalize ethical principles in practical AI contexts?
For the first year we explored within the bank what is needed: what do algorithm developers, for example, need when it comes to discussing the ethical aspects of algorithm development? The result was that it would be convenient to have an ethics officer in place who can provide guidance as well as governance, and I will tell you a little bit about that at the end of my presentation. At the same time, I thought it was interesting to do some more academic research on this topic as well.
Because what I found was that there are a lot of frameworks and a lot of high-level principles, but this organizational dimension of ethics was pretty much lacking or underdeveloped. So I decided to combine both functions into one research project, where I also have the practical experience of an ethics officer at a bank, and where I can supervise and join in on actual AI development use cases to see how this organizational dimension plays out and what it means for AI and ethics.
So the main research question for both positions is: how can we operationalize ethical principles in artificial intelligence contexts? To give you a short overview of why ethics is actually necessary in the field of AI, I listed a few headlines about algorithms or AI applications that went wrong. One is, of course, the Amazon recruiting tool that filtered out women for technical positions. What you see with these applications is that they can be incredibly convenient and incredibly helpful.
They can speed up processes and make them more efficient, but there are certain risks. To give you a brief overview of what has been done on ethics in AI, I selected the High-Level Expert Group guidelines, but I read over 84 ethical guidelines for artificial intelligence. So there are a lot of principles and high-level frameworks out there, but basically they all converge on the same themes: human agency and oversight, technical robustness and safety, privacy, and transparency.
In this framework, transparency is also understood as explainability. There is the huge topic of diversity, non-discrimination, and fairness, and there is always an aspect of societal wellbeing and accountability. So basically, most of the frameworks and most of the ethical considerations around AI come down to these principles. And when you look at how these principles are evolving, you can see that, at least in the European setting, they are moving towards some form of regulatory framework, and that framework takes the form of risk-based regulation.
The European Commission proposed in 2020, so pretty recently, that we should move towards risk-based regulation, where the applications with the highest ethical risks face the most stringent forms of regulation. Still, all of this is pretty high level.
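As a minimal sketch of what such risk-based tiering could look like, assuming tier names along the lines of the Commission's proposal (the example use cases and their classification below are hypothetical, not taken from the regulation):

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers roughly following the European Commission's proposal."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical classification of use cases; in practice the criteria
# come from the regulation itself, not from a lookup table like this.
EXAMPLE_CLASSIFICATION = {
    "credit_risk_scoring": RiskTier.HIGH,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "email_vs_letter_suggestion": RiskTier.MINIMAL,
}

def required_obligations(use_case: str) -> str:
    # Default to the cautious end when a use case is unclassified.
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_CLASSIFICATION:
    print(required_obligations(case))
```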
I mean, human agency, accountability, fairness: these are all ethical concepts, and they make sense. When we talk about AI, we want to avoid black boxes and we want to avoid discrimination. But how we can actually bring these principles into practice is still another question. One of the ways the European Commission has proposed to bring them into practice is the Assessment List for Trustworthy AI, which would mean that developers, but ideally a multidisciplinary team, fill in a self-assessment for each AI application they develop.
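To give a flavor of what such a self-assessment involves, here is a minimal hypothetical sketch: the themes follow the guidelines mentioned above, but the specific questions, the structure, and the helper function are my own illustration, not the actual assessment list:

```python
# Hypothetical self-assessment checklist organized by guideline theme.
ASSESSMENT = {
    "human agency and oversight": [
        "Is there a human in the loop for consequential decisions?",
    ],
    "technical robustness and safety": [
        "Has the model been tested against edge-case inputs?",
    ],
    "privacy": [
        "Is data collection limited to what the purpose requires?",
    ],
    "diversity, non-discrimination and fairness": [
        "Which definition of fairness was chosen, and why?",
    ],
    "accountability": [
        "Did you perform a human rights impact assessment?",
    ],
}

def open_items(answers: dict) -> list:
    """Return every question not yet answered affirmatively."""
    return [q for questions in ASSESSMENT.values() for q in questions
            if not answers.get(q, False)]

# A team that has only covered human oversight still has four open items.
print(open_items({"Is there a human in the loop for consequential decisions?": True}))
```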
And you start to see here that there are some practical issues, because these assessments are quite intense and they are not tailored to the organizational context. So what I thought, both for the PhD and for the position I have right now at the bank, is this: on the one hand, we see a lot of ethical principles and a lot of frameworks, actually a proliferation of frameworks.
On the other hand, we have some value sensitive design methods and some self-assessments from a governance perspective that are really aimed at the level of the designer. But there is a huge space in between, which is the organization. I mean, every organization can adopt a set of policies or an ethical framework.
No one is against world peace, and all these principles sound wonderful, and you can explore with value sensitive design what these values would mean for your actual design. But research shows that giving these principles directly to developers, or discussing these principles with developers, has near zero effect on design outcomes, which means that principles by themselves are not enough.
It also means that discussions on values before you go into the design process often have very limited results in real-world settings, because these methods are tested in academic settings. There they work wonderfully, and people come up with great ideas, but in real-world settings it can prove difficult to actually implement these value sensitive design methods.
So my idea was: let's look at the organizational dimension, the organizational values. Because what you see when you start operationalizing ethical principles is that there are basically two forms of tension: inter-principle tension and intra-principle tension. Meaning that ethical principles will place multiple, and often conflicting, demands on a certain design or on a certain algorithm. I will give two very brief examples.
The one we are confronted with at a bank is that, in Europe, we have the privacy regulation, which requires that we minimize our use of data. On the other hand, we have the duty of care, under which we need to get to know our customer as well as possible, and thus have to collect as much data as possible in order to provide the best possible service and to sell, for example, a mortgage or a product that the customer can actually afford.
So these are two conflicting principles: one says that if you want to perform your duty of care to the best of your ability, you should maximize the data you have on your client; the other says that if you want to adhere to the principle of privacy, you should minimize your data use. The truth is probably somewhere in between, but it is an open norm. Another example is a study done by Corbett-Davies et al. in 2017, who looked at the cost of fairness, or the cost of fairness constraints.
In the context of public safety, they looked at the famous risk scoring for possible offenders, and they found that if we apply fairness constraints, this has a cost for the very public safety we are trying to protect. That is not to say that I am against fairness constraints; it is just that we should be aware that there is no such thing as a free lunch when it comes to ethics.
If you want to implement fairness, whether in a public safety setting or in a financial services setting, it will come with a cost, somebody has to pay the price for that cost, and somebody has to make the decision to actually accept that cost in order to adhere to the principle of fairness.
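To make that trade-off concrete, here is a minimal toy sketch, not the Corbett-Davies et al. model: the score distributions, the gain/loss values, and the choice of demographic parity as the fairness constraint are all my own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repayment-probability scores for two groups of applicants;
# group B's distribution is shifted lower purely for illustration.
scores_a = rng.beta(6, 3, 10_000)
scores_b = rng.beta(4, 4, 10_000)

def utility(scores, threshold, gain=1.0, loss=4.0):
    """Lender utility: each approved applicant repays with probability
    equal to their score (gain) or defaults (loss)."""
    approved = scores[scores >= threshold]
    return np.sum(approved * gain - (1 - approved) * loss)

thresholds = np.linspace(0, 1, 201)

# Unconstrained: pick the utility-maximizing threshold for each group.
best_a = thresholds[np.argmax([utility(scores_a, t) for t in thresholds])]
best_b = thresholds[np.argmax([utility(scores_b, t) for t in thresholds])]
u_free = utility(scores_a, best_a) + utility(scores_b, best_b)

# Demographic parity, implemented here as one common approval rate:
# each group's threshold is set at the same score quantile.
def utility_at_rate(rate):
    t_a = np.quantile(scores_a, 1 - rate)
    t_b = np.quantile(scores_b, 1 - rate)
    return utility(scores_a, t_a) + utility(scores_b, t_b)

u_parity = max(utility_at_rate(r) for r in np.linspace(0.01, 0.99, 99))

print(f"unconstrained utility:           {u_free:,.0f}")
print(f"parity-constrained utility:      {u_parity:,.0f}")
print(f"cost of the fairness constraint: {u_free - u_parity:,.0f}")
```

The point is not the numbers, which are made up, but that the constrained optimum sits below the unconstrained one: that gap is the cost somebody has to decide to accept.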
Now, another problem with operationalizing ethical principles is intra-principle tension, which means that the interpretation of a specific ethical principle can have multiple formalizations. The example here is, of course, fairness. When I had just started working at the bank and had read up on all the literature on AI and ethics, fairness was a big topic. So I went into the data science meeting and asked: guys, why don't we do something with fairness? The reply I got was: well, mathematically speaking, we have 21 definitions of fairness, so which definition of fairness do you want us to use? And I thought: that is interesting, because that is actually the real ethical question.
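A tiny sketch of why those definitions pull in different directions: in the made-up decisions below, the same set of predictions is perfectly fair under one common definition (equal opportunity) and clearly unfair under another (demographic parity):

```python
import numpy as np

# Made-up decisions (1 = approved) and outcomes (1 = would repay)
# for two groups of ten applicants each.
y_true_a = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred_a = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
y_true_b = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred_b = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

def approval_rate(y_pred):
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    return y_pred[y_true == 1].mean()

# Definition 1, demographic parity: equal approval rates across groups.
dp_gap = abs(approval_rate(y_pred_a) - approval_rate(y_pred_b))

# Definition 2, equal opportunity: equal true positive rates across groups.
eo_gap = abs(true_positive_rate(y_true_a, y_pred_a)
             - true_positive_rate(y_true_b, y_pred_b))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.30 -> unfair by this definition
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.00 -> fair by this definition
```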
How can we decide which definition of fairness is the most appropriate for a given use case or a given situation? So these forms of tension come into play when you start operationalizing ethical principles in AI, and they cannot necessarily be resolved at the level of the data scientist. That is why I think the organizational dimension is of crucial importance if we want to operationalize ethics. It is also the reason the project I am doing could start in the first place.
It started because the data scientists at the bank said: we are not comfortable with the responsibility of having to make these sorts of decisions. Of course, we are the most knowledgeable in our domain when it comes to data science and the trade-offs between fairness definitions, but we should not be the ones making these ethical decisions or giving an interpretation to these ethical principles. That should be done from an organizational perspective, supported by the organization, and it should not be just our responsibility.
So what we did was try a lot of different ways of getting some sort of organizational embedding of ethics for algorithm development. We tried the self-assessments from the European Commission, and as I was saying earlier, these self-assessments are really thorough, but they are not warranted for all types of applications.
Of course, when you have a credit risk scoring algorithm, that is high impact and you want it scrutinized thoroughly. But when you are making an AI application that, for example, suggests whether to send somebody a letter or an email, which was an actual use case we looked at, it makes no sense to do a full impact assessment. One of the first questions was: did you perform a human rights impact assessment?
When I discussed this with the algorithm owner and the algorithm developer, they looked at me like: are you serious? Do we have to do a human rights impact assessment for something with such limited impact?
And I agreed with them. So you do want to be proportionate when it comes to ethics and the embedding of ethics in the organization. The structure we came up with was a new ethics committee and a new ethics office.
I work at the ethics office now, and my role is basically to coordinate and facilitate the algorithm owner and the algorithm developer with the algorithms they develop. Even though the algorithm owner is still ultimately responsible for the choices, he gets guidance from the ethics office. We also proposed a new form of governance, which is the ethics committee. The ethics office writes an ethical advice for every AI application that is developed within the bank.
We present that advice to the ethics committee, which has representatives from all over the organization, and they come up with suggestions on how to improve the model so that it aligns better with the ethical principles we value. This way we have a dynamic governance and guidance structure, where the responsibility for the ethical choices stays with the algorithm owner, but where you have the support of the organization and of the existing committees within the organization.
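As a rough illustration of what such an advice might track per application (the field names, statuses, and example content are entirely hypothetical; the talk does not describe the format of the advice):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class AdviceStatus(Enum):
    DRAFTED = auto()           # written by the ethics office
    COMMITTEE_REVIEW = auto()  # discussed by the ethics committee
    RETURNED = auto()          # suggestions sent back to the owner
    CLOSED = auto()

@dataclass
class EthicalAdvice:
    application: str                 # the AI application under review
    algorithm_owner: str             # remains ultimately responsible
    risks_identified: list = field(default_factory=list)
    committee_suggestions: list = field(default_factory=list)
    status: AdviceStatus = AdviceStatus.DRAFTED

# Hypothetical walk through the process for the letter-vs-email use case.
advice = EthicalAdvice(
    application="email_vs_letter_suggestion",
    algorithm_owner="channel team",
    risks_identified=["possible age bias in channel-preference data"],
)
advice.committee_suggestions.append("re-examine proxy variables for age")
advice.status = AdviceStatus.COMMITTEE_REVIEW
print(advice)
```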
Now, very briefly, what I do in the academic sense: I want to explore and do research on this organizational background. One of the lines I am following is a term coined a long time ago by Andrew Feenberg: the technical code. The technical code is basically the background of values and assumptions within organizations that actually shapes the design. For me, this term explains why value sensitive design methods have had such limited results and why principles by themselves are not enough.
If you work in an environment where your main business is to optimize profits, for example, then it makes sense that fairness, or fairness constraints, will not, or cannot ultimately, be part of the design. Further, I found that from an organizational perspective we talk a lot about ethical risks, but assessing these ethical risks is quite difficult. So I am working on how we can actually assess these ethical risks.
As the European Commission has proposed risk-based regulation, how can we actually come up with forms of risk assessment in which we can take these ethical and social values into account? And the final and most interesting topic to me is how we can distribute moral responsibility in the development of these systems, and how we can close the responsibility gap that starts to emerge when systems learn in interaction with their environment and neither the programmer, nor the developer, nor the organization knows exactly how the system is coming to its decisions.
This is basically the problem of many hands. How can we have some form of moral responsibility from this organizational perspective? How can we make sure not only that we can hold people accountable, but also that people know their responsibility when it comes to the ethics of algorithm development?
So, very briefly, that is what I do and what I research. And I would love to hear some questions or some discussion on these topics.