The Ethical Part of AI Governance – my personal learning journey
This talk is about my personal learning journey in AI and AI Ethics together with Bosch. I want to share what brought me to AI and AI Ethics personally and professionally and what instrument is used at Bosch to bring AI Ethics to life.
Quite early on, in my twenties, I met the guy on the right side. Some of you might know him, or better said, his legacy: it is Robert Bosch. Right from the beginning, I was truly fascinated by the values he put into the company, which are still followed today, above all the social responsibility. If you look at surveys asking why people enjoy working at the company and why it has such a good reputation, the answer is always that social responsibility is founded inside the company's DNA, and it shows in multiple ways, including the setup of the whole company. Some of you might know the Robert Bosch Foundation, where 94% of the company's capital is located, and which fosters health, education, and global responsibility as its main topics.
During my studies in computational linguistics in 2015, I also got to know an upcoming technology that was receiving more and more attention. When I say 2015, most of you will probably already know what I'm talking about: machine learning. At the time it was almost impossible to publish a paper without machine learning; I remember my professor saying that if it doesn't have machine learning or neural networks in the title, no one will read it, because it was such a trending, fast-rising topic. So, just a few words on what machine learning is: machine learning is the ability of a computer to learn patterns based on examples. Let's take the example of an image classifier that classifies images as containing or not containing a cat. The cat classifier is a quite well-known example.
The examples are basically data points. A data point can be an image, a sound snippet, a text, or a row in an Excel file; in our scenario, it's lots and lots of pictures of cats. The goal is always to learn a pattern based on the data we already have. So, based on historic data, the algorithm has the task to find the pattern: what makes a cat a cat? Most probably it will learn the characteristics that a cat has in most cases: two ears, whiskers, a tail, maybe some fur. I mean, there are also cats without fur, but those are the overarching characteristics that in most cases make a cat a cat. The quality of the model, or let's say of the recognized pattern, is then defined by its ability to deal with unseen data points. In our case, those would be pictures that have not been part of the input data we used for training the model.
If we show the trained model a new picture, it should be able to tell if there's a cat in it, or better said, if it finds the characteristics it has learned before on that picture, saying: does it contain a cat or not? Machine learning is such a fascinating and powerful tool that we can use it in numerous use cases, not only in media but also in industry applications such as predictive maintenance and production planning; the list seems endless. That's also the reason why it was so trending at the time, and still is. You can probably feel my passion for the technology, because it's really fascinating what it can do and what it enables us to do. So, what can possibly go wrong? I mean, machine learning is an amazing technology; we can do so much with it.
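To make the train-then-generalize idea concrete, here is a minimal sketch: a toy nearest-centroid classifier in plain Python. The feature vectors and feature names below are made up for illustration, crude stand-ins for what a real image model would learn from pictures.

```python
# Toy sketch of the idea from the talk: learn a pattern (here, an average
# feature profile per label) from labeled examples, then judge the model
# on a data point it has never seen. All features are invented.

def centroid(rows):
    """Average each feature over a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(examples):
    """examples: list of (features, label); returns one centroid per label."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose learned profile is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Features: [ears, whiskers, tail, barks] -- stand-ins for image traits.
training_data = [
    ([2, 1, 1, 0], "cat"), ([2, 1, 1, 0], "cat"), ([2, 1, 0, 0], "cat"),
    ([2, 0, 1, 1], "dog"), ([2, 0, 1, 1], "dog"),
]
model = train(training_data)

# The real test: an unseen data point, not part of the training set.
print(predict(model, [2, 1, 1, 0]))  # a whiskered, non-barking animal -> "cat"
```

The quality of the "recognized pattern" is exactly this: whether the learned profiles still sort new, unseen examples correctly.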
This is a tweet, or better said a post on LinkedIn, that I saw a couple of weeks ago, and it describes the issue quite well. If you would ask a machine learning model, "If your friends jumped off a cliff, would you do it too?", the model would say yes. And this is exactly the point: we learn from historic data, from the experience we already have, and it's not always the case that this experience is the best. This is where we need to really pin down the pitfalls of the technology.
I brought some examples; don't get overwhelmed. I just picked the ones which seem to describe the problem best. They're quite famous, so some of you might already know them. Let's take the one on the upper left side first, which is Tay, a chatbot by Microsoft. It got deployed to Twitter, and Tay learned everything that it found on Twitter; it gathered everything on Twitter as input data, as data points. And, I mean, we know people, and people's behavior on social media probably isn't better than it is in real life. So Tay the chatbot became racist, because it learned, without filtering, everything we have as historic data on Twitter and looked at it as the gold standard, saying: everything I saw before is perfect, and I have to behave that way.
So Tay became racist. Amazon, in turn, had the idea to optimize their recruitment process, which is actually quite smart, because the HR colleagues probably get hundreds and hundreds of applications, so the thought was: we could filter them first and make their lives easier. Seems like a good idea. It turned out that this tool didn't like women so much. The reason is that in the historic data there were mostly men employed at Amazon, and the tool concluded: of course, we have a lot of men, so this is probably good for the company, so I'm going to filter out all the women. Maybe not the best idea. The third example is in the public sector: predictive police work. The idea was: where we found a lot of crime in the past, we should take a closer look, because there might be a lot of crime again, or even more. This just became some kind of self-fulfilling prophecy, because it reinforced the status quo: where you look closer, you're going to find more, of course.
So the problem areas in some cities became even more reinforced: more police were sent there, and of course more crimes were found. Also maybe something to think twice about. The last example is from a Harvard review that looked at face recognition technologies by IBM Watson, Microsoft Cognitive Services, Face++, and some other companies. They realized that these performed much, much better on lighter male faces compared to, for example, darker female faces. Well, this doesn't really seem fair, but what's the reason? We just had more examples of lighter male faces, so the tool was able to learn the characteristics of a lighter male face much better compared to a darker female face. So, again something where the technology has to be deployed the right way.
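The pattern behind that face-recognition finding can be surfaced with a very small audit: compute accuracy separately per demographic group instead of one overall number. The sketch below is a toy illustration in plain Python; the group labels and match results are invented, not real benchmark data.

```python
# Toy per-group audit: overall accuracy can look fine while one subgroup
# is served much worse. All records below are invented for illustration.

from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted, actual); returns {group: accuracy}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

results = (
    [("lighter_male", "match", "match")] * 9
    + [("lighter_male", "no_match", "match")] * 1
    + [("darker_female", "match", "match")] * 6
    + [("darker_female", "no_match", "match")] * 4
)

per_group = accuracy_by_group(results)
print(per_group)  # {'lighter_male': 0.9, 'darker_female': 0.6}
gap = max(per_group.values()) - min(per_group.values())
print(f"accuracy gap between groups: {gap:.0%}")  # 30%
```

An aggregate accuracy of 75% would have hidden that one group gets 90% and the other only 60%, which is why breaking results down by group matters.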
It's not the fault of the technology itself, because the technology is neutral; it depends on how we deploy and use it. To do something about all those issues, we have the discipline of AI ethics. Why do we need AI ethics? Simply said, we don't want to repeat patterns from the past blindly. We know that machine learning technology learns from the past, but that doesn't mean we are doomed to treat the patterns from the past as the gold standard, for example mostly men being employed at Amazon. We need to carefully think about what the correct gold standard is. Thinking of Amazon's recruitment tool again: it is a fact that in the past there were fewer women in the workforce, but it doesn't mean that by all means we should keep that as the gold standard. So it is under our control.
We can choose input data that avoids discrimination by minimizing bias patterns. It is up to us to ensure fairness and also to closely monitor the output of the AI models. The decisions that are taken with the help of AI have to be ethical, that's for a fact, especially when those decisions are automated. We need to design AI in a way that makes sure that the systems using it take decisions that an external observer could consider ethical. Meaning: not that the AI itself takes ethical decisions, because AI is what it is, it can't be ethical; the decisions have to be seen as ethical by an external observer. Ethical systems that use AI give us the chance to fulfill the needs of our customers in a much more differentiated way and make it possible to reach target groups that we maybe haven't thought of before. Considering ethics in AI systems is a chance: it's not only the right thing to do, it's a chance to develop and create technology that considers the impact on its environment. Some of you might know "Invented for life" as the guiding principle of Bosch. We not only want to invent for life, we want to invent for the life of all people. So what do we do regarding the discipline of AI ethics?
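One concrete way to "closely monitor the output" of an automated decision process like the recruitment example is to track selection rates per group and compare them. Here is a minimal sketch in plain Python; the numbers, group labels, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not data from any real system.

```python
# Toy output-monitoring sketch: compare how often an AI-assisted process
# selects candidates from each group. A large gap in selection rates is a
# warning sign worth investigating. All decisions below are invented.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected: bool); returns {group: rate}."""
    picked, totals = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += selected
    return {g: picked[g] / totals[g] for g in totals}

decisions = (
    [("men", True)] * 40 + [("men", False)] * 60
    + [("women", True)] * 10 + [("women", False)] * 90
)

rates = selection_rates(decisions)
print(rates)  # {'men': 0.4, 'women': 0.1}

# One common heuristic (the "four-fifths rule") flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.25
```

A check like this doesn't prove a system is fair, but it catches the Amazon-style failure early, before the skewed outputs themselves become the next round of "historic data".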
We have the Bosch Codex on AI and Ethics. It was released in 2019 at the Bosch Connected World in Berlin, not so much of a surprise. You will find some of the elements that I talked about earlier in the codex, which describes our values and what is important to us when it comes to AI. When saying "us", I mean the whole of Bosch. The major pillars of the Bosch AI codex are: always respecting the frameworks of social consensus, which range from human rights, a global consensus, to the "We are Bosch" statement, a Bosch consensus regarding the value foundation I talked about earlier when introducing Robert Bosch. AI should be a tool for people; it should be safe, robust, and explainable. AI decisions that affect people should not be made without a human arbiter, as we want AI to be a tool for people, not the other way around. And all Bosch AI products should reflect our "Invented for life" ethos, which combines a quest for innovation with a sense of social responsibility, always going back to our founder's intention for the company: they must kindle people's enthusiasm, improve quality of life, and conserve natural resources. Trust is one of our company's fundamental values, and we want to develop trustworthy AI products. This is the major goal of all our endeavors in AI.
So now we know the problem and we know how Bosch responded to it. Coming back to my personal learning journey: what can I do now, as one of the 400,000 employees of this huge company? What can I personally do to contribute, to bring this codex to life, to make sure that the products we sell and everything we do in AI have these ethical characteristics and are good for society? I found out that what I can do is a role called the AIPQA. Some of you who know big corporations know that we love abbreviations, so this is one of ours: it stands for AI Product Quality and Adherence. AI product quality refers to the quality of a product in terms of its AI maturity, whereas adherence refers to the adherence to the principles of the Bosch AI codex, hence AIPQA.
This is a new role that people can take over in the company to help operationalize this codex and bring it to life, to make sure that our AI-based products conform to it. In order to achieve the required maturity of AI under the boundary conditions and principles of the Bosch AI codex, this independent role, AI Product Quality and Adherence, is introduced. The role focuses on preventive quality assurance and the maturity of AI products with respect to adherence to the Bosch AI development, release, and deployment process, as well as to the AI products themselves. The role applies to all areas and functions where we develop AI products, including operations and services that are built on artificial intelligence.
So what are the main reasons for this role? Why do we need it in the company? We have an increased use of AI during development and also manufacturing of products, such as predictive maintenance, like the things I talked about before. We use AI in products and applications in autonomous systems, such as autonomous cars, robots, or drones, and we have new services and business models containing AI technologies. And we even want more: our target is for our products to contain AI or to be developed or manufactured with the help of AI.
All right. The AIPQA is part of the first line of defense and will therefore make a significant contribution to securing adherence to the AI development and deployment guidelines with regard to the codex. Okay, so it seems like quite a responsibility to take on. How is this role integrated into a project? As I said, it goes around the whole circle. We have the AI project as an endeavor. We have the design phase, where the AIPQA consults on the design, planning, and application of the AI-related preventive quality measures; the AIPQA knows about the different pitfalls and what we need to do in the planning and design phase. Then we have the build phase, where the actual project is running, where we build the solution, run the different experiments, and try out different things.
Here too, the AIPQA can consult, support, and enable the organizational entity towards an effective and efficient implementation of the principles of the codex. The AIPQA is also connected within the company and can consult regarding the experiences that other projects may have had. The last part of the cycle is operation. That's a specialty of an AI product: once you are done with development, the operation, or let's say the monitoring, never stops, because you always need to make sure that the thresholds are still in line and the AI still acts according to the ethical guidelines. The AIPQA is also a role that independently evaluates and reports on the AI product quality and its maturity. So as you see, the AIPQA is basically never done; it follows the project through the complete circle.
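The operations-phase monitoring described above can be sketched as a simple threshold check run on each new batch of live data. The metric names and threshold values below are assumptions for illustration; a real AI product would define its own quality and fairness metrics.

```python
# Toy operations-monitoring sketch: once the AI product is live, key quality
# metrics are checked against agreed thresholds, and any breach triggers a
# review. Metric names and threshold values are invented for illustration.

THRESHOLDS = {
    "accuracy": 0.90,           # must stay at or above this value
    "group_accuracy_gap": 0.05, # must stay at or below this value
}

def check_batch(metrics, thresholds=THRESHOLDS):
    """Return a list of human-readable alerts for any breached threshold."""
    alerts = []
    if metrics["accuracy"] < thresholds["accuracy"]:
        alerts.append(
            f"accuracy {metrics['accuracy']:.2f} "
            f"below {thresholds['accuracy']:.2f}"
        )
    if metrics["group_accuracy_gap"] > thresholds["group_accuracy_gap"]:
        alerts.append(
            f"group gap {metrics['group_accuracy_gap']:.2f} "
            f"above {thresholds['group_accuracy_gap']:.2f}"
        )
    return alerts

# Early batch: healthy. Later batch: quality drifted, fairness degraded.
print(check_batch({"accuracy": 0.93, "group_accuracy_gap": 0.02}))  # []
print(check_batch({"accuracy": 0.88, "group_accuracy_gap": 0.09}))
```

The point is that both plain quality (accuracy) and the ethical dimension (the per-group gap) sit in the same monitoring loop, so a fairness regression is treated like any other defect.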
All right, back to myself: what do I need to fulfill this role? What is expected of me, as an individual who is interested in AI ethics, has read books on it in my free time, exchanges with other people on the topic, and works as an AI engineer? What is the gap that needs to be filled? As I said, the vision is still being worked out; it's still in a process of definition what the AIPQA should do in detail, because the role is actually not live yet.
Together with the people who are going to fill this role, we need to define what gaps there could be. The motivation is high, but I have to develop as a compliance and AI ethics expert, and this has to go hand in hand with the existing processes: we already have quality assurance processes, of course, and everything needs to be integrated together. The aspect of AI ethics needs to be integrated into the current quality processes. I also need to help transform the intrinsic quality standards into understandable guidelines. What do I mean by intrinsic quality standards? I'm pretty sure no one at Microsoft wanted to create a racist bot, but it still happened. We all have this intrinsic quality standard to make good products; as engineers, we always call it the engineer's honor to make good products.
No one intentionally wants to create a faulty product or something that goes completely wrong. But with AI it can be hard, because we have this massive amount of training data going into the system, and you cannot monitor everything by hand. So you need to know the tools and the pitfalls to transform this intrinsic quality standard and make sure you don't do anything wrong by mistake, without intending it. I also need to be aware of the struggles in diverse projects. Projects are already struggling with deadlines and requirements; we all know it. So it's very crucial that this ethical aspect of AI is not seen as just another hurdle for a project trying to hold its deadlines. We need to make people understand that taking care of this is crucial and that it actually helps the product to be better in the end.
And of course the struggles can be very diverse across projects, and someone filling this role needs to be aware of this. It also means basically being a helping hand for other developers. If there are colleagues coming to me saying, "Listen, I really want to make this good. I want the AI to respect human rights and to be ethical. How can I do it?", I can support them, because this technology is still new to us, and it can sometimes be hard to really be sure that everything goes as it should. So it's about consulting others and being this helping hand.
All right. So what are the next steps? What is the look ahead? As I already said, this role is being worked out during this year, and the first people filling it are planned to be trained. We have to define training measures for the role; they have to cover quality processes as well as AI and AI ethics, depending on the background of the person. Those are the main theoretical background pillars that an AIPQA needs to have. We also want to introduce the AIPQA role to the first projects to gather experience, and to build a network of AIPQAs, because, as I already said, only then will we be able to share experience and learn from different projects. So, I'm really looking forward to filling this role and seeing what we can achieve, how it goes, and where we go from there. And I am at the end of my presentation, and I think I did pretty well on time.
How can we help you?