Matthias Reinwarth and Anne Bailey talk about Artificial Intelligence and various issues and challenges of its governance and regulation.
Welcome to the KuppingerCole Analyst Chat. I'm your host, my name is Matthias Reinwarth, I'm an analyst and advisor at KuppingerCole Analysts. My guest today is Annie Bailey. She's an analyst covering emerging technologies here at KuppingerCole, which includes blockchain and artificial intelligence, and she helps to identify and boil down the implications that these technologies have for companies, industries, and markets. So hello, Annie, and thank you for joining me.
Yeah, thanks for inviting me. Great to have you, and especially when we're talking about emerging technologies, that is of course something that has lots of aspects to look at. Today we will talk about artificial intelligence governance, and there was an event that happened just recently, in the aftermath of the George Floyd incident: IBM announced that they will no longer be providing general facial recognition software, based on artificial intelligence of course, to the US government. And I think that is really a good example of applied AI governance.
Do you agree? Yeah, exactly. And so when you look at the news headlines from the last week, of course what's featured very prominently are these protests against police brutality. And so you wouldn't normally make the leap from a very strong social issue such as that to technology, or specifically to AI. So that link isn't totally apparent, but IBM is seeing their own responsibility.
And it's not that they've directly caused anything here, but they see that there could be a potential conflict of interest, that providing facial recognition technology could lead to a situation of mass surveillance, of racial profiling, or of violations against basic human rights, either in the US or abroad.
And so this is a recognition of IBM's own responsibility over their technology and the implications that it could have in society at large. Is this something that you would suggest all providers of artificial intelligence or machine learning technology to customers should apply in any case, much like the safeguarding of weapons technology? Yeah, it is tricky to say whether this is recommended for all companies using or developing AI or not, because this is not set in stone; there is no legally binding requirement that they do so.
However, from the standpoint of building trust, of transparency, of really using AI for good or for improving society as we know it, then yes, this is a really important part, where a company really reconciles the business impact of their technology, what efficiencies it delivers and what smart insights it gives, with the implications for society, for the environment, for interpersonal relations, and the impact this has on vulnerable groups. So this is really a next step in bringing governance to life and playing it out beyond paper.
Okay, understood. So maybe let's take one step back. We looked at that example, but if we take that step back: what would be a general definition of AI governance, and how would you define that complete topic? Is there already an existing, well-known, and publicly used definition of AI governance? Yes and no. There are some governance frameworks being floated by different institutions, academic, private, and governmental, but generally you can understand governance as a process of providing appropriate boundaries for execution.
And so for AI in particular, this means topics like addressing bias, addressing the ethics of using this in society at large, protecting privacy, assuring the security of data, and, a big one, explainability. But AI governance for the most part carries the impression that it is a limiting institution, that it constricts development and puts it in a smaller, more narrow box.
But I would argue that it is actually the opposite: you can't look at the development of AI for a particular business case without addressing governance. To bring it to a bigger picture, the point is to understand not how AI could be limited by proper governance, but actually how its opportunities could be expanded by proper governance.
I think developing a technology and applying it to a certain business case are really different steps within the process. That does not prevent technologies from being analyzed and developed over time just as a technology, as an asset, as you said, or from an academic point of view. But the application to a certain use case, just as IBM decided not to provide their machine learning for this specific use case, being facial recognition, is a really different process.
So the governance component comes into play when you apply the technology to a use case? Yeah.
And even before that, there's a lot of support which companies can have in the full development process, going all the way from protecting and being aware of the data provenance, where it's coming from, how it's treated, what the record keeping for that development process is, all the way to conceptualizing the actual end use. And this is something which I think gets missed out on in some ways when considering the governance question: oftentimes the development of AI is treated kind of like a replacement part for something which is already working.
So we could take a small example of a chatbot, actually. We know that this is something with natural language understanding and information retrieval. And so this is kind of replacing a somewhat limited human customer service agent who maybe can't remember all the details off the top of their head, and then this is scalable: it's able to interact with thousands of customers at once and potentially boost customer satisfaction. And so this is kind of a model based on an ideal human customer service interaction, where it is a conversation.
You chat a little bit, make some small talk, pull up some information and say: here, this is what I found. But we're also seeing that this is not exactly the sort of interaction that a customer might want to have with a bot, and it's rather leaning much more towards the BERT example from Google AI: the ability to handle question-asking searches, where a person can simply type in their question, as they would speak it to a human, directly into the search.
And the results come out as a fully formed answer, taken directly from a piece of text or summarized from several pieces of text which are relevant to that answer. And so although the chatbot example began as directly replacing the human customer service agent, mimicking exactly what an ideal person in that situation would do, it's evolving and morphing into something beyond the ideal human interaction. And so this is where governance can come in and start to re-envision what AI is:
not directly replacing what we already have in our lives, whatever it may be, but achieving that end goal in a different way. And so that also means redefining the actual purpose, redefining and rethinking the actual business value that such a system can present, moving away, just as you said, from the original use case of simply replacing that service desk clerk with a machine, towards better or more adequate solutions. Yeah.
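As a rough illustration of the question-asking search pattern described above, a minimal sketch of extractive question answering using the Hugging Face transformers library might look like the following; the model name, the context text, and the question are assumptions for illustration, not something the speakers reference.

from transformers import pipeline

# Load a pre-trained extractive question-answering model (illustrative model choice).
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# A small snippet standing in for a company's support documentation (hypothetical).
context = (
    "Orders placed before 2 p.m. ship the same day. "
    "Returns are accepted within 30 days of delivery."
)

# The customer types the question exactly as they would ask a human agent.
result = qa(question="How long do I have to return an item?", context=context)

# The answer is a span taken directly from the source text, plus a confidence score.
print(result["answer"], result["score"])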
And this really does require some hard questions to be answered by companies wanting to implement this, because you can of course ask the easy questions, the normal governance questions: how is the data preparation governed? How is the model able to be explained?
Granted, these are difficult questions, but they're somehow easy because there's a, quote unquote, correct answer for them. But you can then begin to see, when we question the end goal of whatever AI project is being developed, that you have questions like: what is the societal impact of this?
Is it treating vulnerable populations equitably? Is there another opportunity that we're missing, which we could use to generate value instead of generating waste, environmental waste, or social harm? And so this is, granted, a much larger question that is going to be difficult, and one which we see IBM beginning to reckon with. These areas of AI governance, like bias, like privacy, like security: introducing bias into a solution really does not improve anything, it deteriorates things.
I remember seeing a hand dryer which was not capable of being switched on by a darker-skinned hand moving below it, but was working perfectly for a white hand. It's such a simple thing: it is AI just encapsulated in some small, embedded technical solution, but even there you see that it introduces inequality in reacting to different groups of people.
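The hand dryer story is, at its core, a disparity in error rates between groups. As a minimal sketch of the kind of check a review team might run, with hypothetical column names and made-up data (not from the episode):

import pandas as pd

# Hypothetical evaluation results: one row per test sample.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],   # demographic group of the test subject
    "label":     [1, 0, 1, 1, 1, 0],               # ground truth: should the sensor trigger?
    "predicted": [1, 0, 1, 0, 0, 0],               # what the model actually did
})

# Accuracy per group; a large gap between groups is a signal to investigate before deployment.
per_group = (
    results.assign(correct=results["label"] == results["predicted"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group)                           # e.g. group A: 1.00, group B: 0.33
print(per_group.max() - per_group.min())   # disparity between best- and worst-served group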
Yeah, exactly. And there's some really good advice being floated in most every AI governance framework out there today, which is that you need multi-disciplinary team members on your governance team, on your development team, on your implementation team to watch out for these sorts of things, because it's all about the things which you don't think about.
You can be sure that the top thinkers are on these teams already, the best in their fields, and they're of course focused on the things which they already know how to do and do very well. But it's the things which you don't think about, like the ability to have a sensor recognize a darker-skinned hand rather than just a white hand. So this is absolutely key.
And here's another example I love: this journey towards smart cities. There's a really interesting case that comes out of the UAE. So you have kind of the old model of a city and the newer version, which is all about perception. You have Dubai, which is built on oil money, and in the 1960s this huge wave of construction happened to essentially create the ideal Western city.
The planners at that time wanted to put Dubai on the map, and what better way to do that than make it the most impressive of the cities we recognize already. And so of course it had skyscrapers with floor-to-ceiling glass, and wide highways, and everything you could recognize from a major Western city. But Dubai is in the desert: floor-to-ceiling glass turns your building into a greenhouse when you already have to pay high costs for air conditioning, if that's installed at all.
And these wide streets are not creating shade, they're not creating ventilation pathways to produce natural wind and cooling effects in the city. It's not designed for the environment; it's designed on the perceptions that people had of what is ideal. And so AI also has a huge role to play in designing smart cities, and it's all based on the conception of what is ideal; that's where we start from.
And so you have to take the time to ask these larger governance questions: what is the impact of building or applying this technology to a certain effect, and what is it actually doing? And so, yeah, that was actually quite a perfect summary already for this episode.
And I think getting the bigger picture, really understanding what we want to do, not only focusing on one single solution but getting to a more complete vision and applying governance to that as well, is maybe an important aspect that many people, including me, and many organizations obviously have to get to, to really understand what serves the city, the country, the people, and society best when it comes to using such a highly capable technology like machine learning or AI. So thank you very much.
Annie, do you want to add anything to what I just tried to sum up? Yeah, I guess the idea that we have of governance as a limiting factor doesn't have to be the case. In fact, it's really an opportunity for something positive, where you can really assess what the impact is, and whether that is actually the impact that it should have.
And it's the time to really imagine a future that should be, and not the one that actually is. Okay, great, thank you very much. I assume that we will soon follow up on this; I would love to do a series of episodes on that, and I think we will quickly follow up with more details and more specific advice when it comes to applying AI governance. For now, thank you very much, Annie, for being my guest today. Thank you to the audience for listening and for their time, and I'm looking forward to having you on the show again soon. Thank you. Yeah. Thank you. Thanks for having me.
Thank you. Bye bye. Bye. Bye