AI governance and regulation. I'd like to ask Scott David to come to the stage. Scott has a long title; he can explain it himself, because it takes three lines in the app. The other one will be me, Martin. As I've said, we'll talk for the next 20 minutes about AI governance and regulation. Feel free to raise your questions at any time, and maybe Warwick can watch the online questions coming in and alert us if there are any. I think what we want to look at is navigating the landscape of AI regulations: where we stand, where we should probably go, and to share a bit of our thoughts on that, because at the end of the day it's a very interesting challenge we're facing between having governance without disrupting innovation. That is surely one of those things where balance is needed. And I think many of us have learned there are some cool things we can do with gen AI, and there are things where the results are below expectations. So finding the right mix is very important. Welcome, Scott. Maybe you'll introduce yourself quickly.
Thank you. My name is Scott David and I'm at the University of Washington Applied Physics Laboratory, and also a Fellow Analyst with KuppingerCole as of several weeks ago. At the physics lab I run an initiative called the Information Risk and Synthetic Intelligence Research Initiative. I'm delighted to be here.
Okay, a pleasure. And I think, as a starting point — we've already heard a couple of thoughts on this from Re. Maybe, as a starting point, could you briefly summarize the state of what you're talking
About? Yeah, it was funny, because Martin said he'd like me to summarize global regulations on AI in three minutes. For me, three minutes can extend into 30 minutes, but I'll keep it to two —
Two minutes. And the problem is, you've been a lawyer, so three minutes is very short for you anyway. That's right.
We get paid by the word. So, there are a couple of resources I'd like to point out to folks. The OECD maintains a list of global regulations on AI, which is very useful. Also, George Mason University recently put out a report where they did a survey of AI policies. One of the things emerging now is that a number of these aren't really regulatory in the sense of constraining behaviors, but are AI policies more generally, because a lot of what you're seeing now is a combination of restriction and encouragement of innovation. And some of the themes — I made a little list, because it's not easy to remember all of them — there are five in the George Mason report. It was very interesting, because they talked about patterns that are emerging globally, and they describe them as a wardrobe: you go get a shirt and a pair of pants and some socks, that kind of thing.
There are certain categories of things available, and if you want to look at different jurisdictions, you can look at the combination, the outfit that got put together from the wardrobe in order to deal with AI. And there are really five things. One theme is the category of safety and security. Another is transparency and explainability. Another is accountability; non-discrimination is another; and then data protection. And what Re said made me think about something I hadn't thought of before, which is very interesting: in a sense, AI invites us to think about how to render information reliable. We've been focusing on making data reliable, and bringing forward the GDPR and other data learnings is part of that. It's necessary but not sufficient to make information reliable. So the question would be: if data plus meaning equals information, or data plus context equals information, how do we make meaning and context reliable? Thank you for making me think about that; I hadn't thought about it before. But it may be that we're transitioning now, and that provides a lot of opportunities for new risks, but also a lot of opportunities for new products and services in that area — figuring out what it means to make meaning and context reliable.
And I think that's interesting, what you're saying about information and data. At the end of the day, we're all in a space called information technology, but factually, a lot of what we do is data processing rather than really dealing with information. Now, with LLMs for instance, we go, so to speak, from data to information. And I think it's important to understand that we're finally arriving at what was, so to speak, the promise of information technology. By the way, cybersecurity also was once called information security, not that long ago, before the cyber term came into play. And I remember a while ago, in a CISO council KuppingerCole is running, one of the CISOs said in a conversation — it was more about the data security aspect — that we didn't care enough about data, and we definitely also didn't care enough about information. Also, when I go to my domain, identity management: most of the things we're doing don't deal with information, and not even with data; they deal with functions I have in a product — I can use that function. And I think this is something we definitely need to change. Where we want to get a grip, we need to change our understanding. Re, any thoughts from your end?
When it comes to any kind of regulation, I always believe one simple thing: we always wait for regulators. Regulators, as of now — any agencies that make regulations and recommendations — are in the backseat of the car. The pace at which technology is moving forward is much, much faster than the pace at which regulators are moving. So it's very important for these regulators — self-regulatory bodies, government regulatory bodies — to move from the backseat into the front seat, drive the car along with the drivers, who are the technology companies, and match the pace. When you have backward-looking regulations, you will not win. Regulations have to be forward-looking. For example, if I am a product owner at a tech company, I know what I'll do in the next five or ten years, what my development plan is. Regulators need to understand what will happen in five or ten years and plan regulations that are futuristic, that have a futuristic viewpoint.
Yeah. But is it that they need to understand what will happen in five or ten years? Or is it that they need to understand which sort of constraints to set so that things can evolve without getting out of control? I think there's a bit of a difference there.
You know,
I think both, actually. Sorry, go
Ahead. Please. I was just gonna say — Re, you're making me think about all sorts of new things. One of the things that just made me smile is that regulators — you know, Biden just came out with that executive order, and it talks about floating point operations. Now, if we have Biden talking about floating point operations — he didn't actually write the executive order, right? So we can't rely on government for the tech; we've got that, yes. But it just made me realize: if we're talking about meaning and context reliability, that's what we do all the time in human governance. So maybe — obviously we need to understand the tech in establishing regulation — maybe it's a much more natural process to have regulators working with meaning and context. If the tech can help us understand enough, humans have dealt with that forever.
You know, there's a gentleman who asserts that property itself — the concept of property — is a collective hallucination. There's no property; it doesn't exist in the world. It's a relationship among humans with respect to an article, and that's what's out there in the world. So we can start to cast the regulatory construct in terms of meaning and context management and leave the data management to the tech people. Maybe we've been asking the government to be too knowledgeable about the tech directly. And when we say meaning and context reliability, what does that mean? Well, we were one village in Africa a hundred thousand years ago, and we migrated out around the world, and we developed different food and language and politics and music. And then the internet brought us together for show and tell. So we didn't have shared meaning, right?
We have different languages, et cetera, and we make bridges among those things. Yesterday it was remarkable, using AI for translation — the instantaneous translation. So we're already having meaning sharing made possible by the technology. The technologists can verify whether you have an accurate transcription, and then a linguist can also come in and verify the words, just like a translator's notes at the beginning of a book. So we have these other mechanisms, already instantiated in the non-technical fields, to measure degrees of reliability and predictability in meaning and context. That's the kind of metric that maybe we need to bring into regulation, in addition to the data metrics and other technical metrics.
Yep, absolutely — completely agree. And I'll add one more point. You're completely right, but I certainly have the impression that governments are handicapped when it comes to understanding AI. They do not understand artificial intelligence or the fundamentals on which it is being built. They see it as some humanity-destroying initiative, a concept that will overshadow humanity and destroy everything, go to singularity and things like that — which is a lot of background coming from sci-fi movies. So it's very important that governments have an AI governing body. I don't remember the country exactly — I believe Saudi Arabia or the UAE has created something called a Ministry of Artificial Intelligence. And they have a good number of people who know what AI is, and I think that kind of initiative is required. You can't merge AI into technology generally and say, hey, it'll destroy the entire world.
Yeah. But isn't it also partially because we're still using this term AI, which is a very old term and has a history? At the beginning — when I go back to the sixties — there was a lot of talk about how we would soon be at general AI, et cetera. We are far away from that. What we actually have is not artificial intelligence; we have augmenting intelligence. And if we sold this a bit differently and said the purpose of this is not, as you say, the perception of replacing humans — factually, it's about augmenting humans, about making things better. When we have an assisting system in a car, it's augmenting us as drivers; it's making us better. Gen AI is intended to make us humans better at doing our jobs.
And I think there are some great examples of what's out there. Then I think the scare factor can go away a bit. And I think this needs to be explained — but not only by a governing body; it needs to be explained by everyone, not by overpromising, but by making clear where the value is: we use it in many, many areas to really make things better. But I want to come to another point first. There was this notion of the regulator in the backseat. From my perspective, for AI we have a quite positive situation, because at least the regulator is in the backseat. In most cases of technical development, it has been, and is, that the regulator at best at some point comes along in a trailer attached to the car — not in the backseat, and not even close to the driver's seat. And I think we are much further along with discussing and thinking about regulations than we have been in the past for most other areas.
I was being polite.
You know, it's funny. When I was a kid, I was at the beach, and a wave knocked me down. I got up again, and the wave knocked me down again. I got up again, and the wave knocked me down. Then I looked down the beach, and there were surfers — the same wave, and they were surfing it. So with regulation: if we look at it as something that constrains, then we're going to get knocked down. And part of it is: is there an inevitability here? Is this wave something that we can manage directly? Really, if you look at regulation, what we're doing is managing human-to-human relationships with respect to this thing, whatever the new technology is — AI, railroads, whatever. It took 50 years to come up with regulations for railroads; we have to have the experience to know how to manage ourselves. But there's some inevitability here — I won't go into all of it. In part of our work, we look at AI as just an artifact of the exponential increase in interaction volume since the Big Bang, quite frankly. And I've talked about exponential increase at EIC in Berlin: as humans, we have no management of, and no metrics for, exponential change. We just don't know how to do that. We have exponential metrics like the Richter scale and the decibel scale, but those are instantaneous measurements.
And I'd like to bring up one more point in the little bit of time we have left. When I look at some of what's currently going on around regulation, it's about sorting very critical, dangerous things versus less critical ones. When I think about this, it sometimes looks to me like some of the most compelling use cases, unfortunately, are the ones closest to security and safety. I think it starts with autonomous vehicles, where a lot of safety concerns come up, and I think it holds true for many other areas. That's a bit of a dichotomy we're facing. And that also means we need governance, we need regulations, that find a good balance between enabling things, accountability, as you said, data privacy and all the other aspects — but that really focus on the use cases that are the most attractive but also the most
Critical. Well, one of the things is we've sort of been here before. I sometimes make reference to self-driving corporations. What I mean by that is: if you want to see what the AI future looks like, look at what corporate relationships look like to humans, because there are levels of abstraction that happen in systems that are not entirely human. You have legal personhood for corporations, some independence and discretion. And abstraction can be a form of violence. The painter Kandinsky once asserted that violent societies yield abstract art, and I often wonder if the reverse is also true: is abstraction itself a form of violence? When you're dealing with corporations — you call the phone company and you want to get something corrected, and they treat you as a number, not as a person with an identity. We want to be known for who we are. But in all group dynamics you have to have some level of abstraction to deal with the group. So part of what we're dealing with here is fundamentally an identity issue, a human identity issue. And I think the question we need to ask ourselves is not how do we regulate AI, but what does it mean to be a human in an AI future?
Okay, that's very philosophical, but maybe let's get a bit more down to earth to close out this panel. A very simple question, where I'd like a very short and precise answer: what is the main thing you'd like to see from the work on AI regulation from now on? What is the most important thing? Re?
Yeah. What I would like to see is that when it comes to regulations, regulators are talking to business, talking to academics, talking to industry, talking to experts. And I would like to see a proper governing body that is multi-country — across different geographies, a global body — so that any regulations that come out are not contradictory in nature. That is, China's regulation contradicting someone else's — that should not be the situation; we should avoid that.
Okay, thank you. Scott?
Yeah, and along the same lines, I think we need something that's uniform enough that we can de-risk together. I've found that the most effective systems are the ones that allow us to mitigate risk and gain leverage in ways that we can't alone. Because AI is a challenge for all humans and all countries, there will be new things that we can de-risk together that we can't do alone. And if we're explicit about that, it can actually help not just in terms of AI risk, but with the geopolitical and other risks that we have as humans. So it can actually be a time for healing of the human experience.
I'd like to add: I'd like to see regulations that are clear and tangible, and precise in what they do, and that look not only at the risks but also at the opportunities we have. So: enable, but mitigate the risks. Re and Scott, thank you very much for
Thank you.
being on this panel.