This panel discussion brings together legal experts and tech innovators to bridge the gap in understanding and communication. "Crossing the Divide" aims to create a common ground where participants can explore the legal implications of decentralized infrastructures and AI services. The session will focus on fostering a dialogue that demystifies legal responsibilities, ensuring tech professionals are equipped with the knowledge to navigate the complex landscape of legal accountability in tech innovation.
I would like to introduce the speakers that we will hear, continuing with the last topic about the regulations and all the changes that are happening nowadays. In our next panel we will touch a little bit on these topics as we discuss Crossing the Divide: Bridging Legal Accountability and Tech Innovation. Please join me in welcoming our speakers for today. We have here, well, Dr. hva, who was here in the previous presentation. Emily Land, cyber risk consultant, independent. Michael Palage, Chief Trust Officer at InfoNetworks, founding partner and legal counsel to software vendors. Thomas Lohninger, executive director of epicenter.works. And Pamela Dingle, Director of Identity Standards at Microsoft. Thank you so much. Thank you. This is great. So I just want to give you a sense of what we're going to cover. We have a nice chunk of time here, 40 minutes. The title of the panel is the bridging of legal and tech. So among the things we're going to cover are: how do we make tech more compatible with law, and how do we make law more compatible with tech? And we're going to start, I think, with some of the current deficiencies in the regulation of tech before we move to the AI amplifications and new risks.
So I wonder if any of you, or each of you, have some thoughts on current deficiencies in the regulation of information technologies, and you can allude to how those will be amplified in the AI context. Anybody care to do that? Yeah. Would you like to try that? I think harmonization. So we don't need more, but we need more simply understandable things, because I see a lot of duplication if I read all these texts. So I think in the context of AI, it's all about trying to catch up.
So one of the things I've kind of looked at is the context of copyright law, and there has been an ongoing tension going back to player pianos, radio, television, photocopiers, VCRs. And what I've always found interesting is that the laws have adapted to the technology to protect the underlying principles.
So to me, what's particularly challenging with AI is the pace at which the change is coming, how one adjusts, and whether legislation and private contracts can compensate accordingly. Yes, I would add to that, and I'll keep it quite short. I think it's fundamental that we have very multidisciplinary teams working on legal regulation for technologies. Because very often, as Yuko mentioned in our previous talk, you've got the legal people, you've got the techies, but there are still very big gaps to bridge between these two.
And I think this starts very early in the education of the people who are going to be writing the legislation. So I believe that this also starts in law school: people should have stronger technical knowledge to be able to build better, implementable laws. I fully agree, and I will add that as products nowadays are more digitalized, compliance gets more important and governance gets more important. All the legal aspects in the products that we see are so heavy and have such a big influence on the product. And you can fall out of compliance with regulation very quickly, so this is actually an area where legal and tech should really work closely together. Thank you.
Well, I agree with a lot of what was said. I also think that we have to be serious, when we are drafting legislation, about how to enforce it.
I mean, we live in a world where Facebook, Meta, is fined for breaking the law in the US with a ridiculously low fine, and the next moment the stock price goes up. So clearly we have a problem here. Clearly we have a situation where breaking the law can be a successful business model on the internet. And this is also very much a European problem, because in the privacy community we call it the Irish problem: we have certain countries in the EU who made it their business model not to enforce laws against big corporations. And that creates an unfair playing field, particularly unfair for European corporations. By the way, this is not a civil society problem. And so the biggest challenge, in my opinion, when we draft new legislation is enforcement. How do we get it right? The DSA has some good approaches there, and I'm happy to go into detail. Pamela, online, can you hear me?
Oh, just a moment, we're working on the audio here. Keep talking and we'll... there, wait, I think I heard you. We have you. Can you hear me now? Yes. Excellent.
Yeah, I think everything everyone has said is absolutely right, especially about the pacing. But the other piece that is a concern, certainly for Microsoft as a multinational company, but I think for many folks, is the fragmentation. Meaning, we need to comply with so many regulations in so many countries that are all just slightly framed differently that there is a burden here for folks to comply, and a burden to prove that we are complying.
So, you know, it's interesting, because in some ways what we're dealing with is not new, and in some ways it's new. I mentioned in an earlier session that eavesdropping, the word eavesdropping, which is now used for monitoring, was originally a trespass violation, a legal trespass, when someone dropped in under the eaves of the house so they were sufficiently close to the window that they could hear. Why did they use a trespass violation? Because there was no privacy law at the time. And that was a result of agriculture as a technology, because agriculture allowed us to have cities and more concentrated populations. Then the printing press comes along, and we have the publication problem, and that's Brandeis. Some of you may have heard of the famous Brandeis article: don't publish things about me, that's an intrusion. Then the technology keeps changing our interactions and our interaction volumes, the quality and quantity of our interactions.
And so in some ways what's happening now is not new, but in some ways it is entirely new. Are there differences that we observe now, either in AI going forward or in what we've experienced to date with the internet, where you can draw from the past and make assertions about how we might approach things in the future? Is it just a matter of picking up what's already there, like trespass laws or existing laws, and trying to apply them in a new place? Or are there new patterns and new ways we might think about this? Some of you raised some of those issues. How might we even start to address this new set of problems? Is that of interest to anybody on the panel? In my previous life as an anthropologist, before I started in digital rights a decade ago, what I can say is you're absolutely right, the law evolves with technology, but cultural norms in many of these respects are universal. You always had a connection between an artist and the art. You always had a distinction between the public and the private.
Where we draw these boundaries was always culturally specific, but the fact that we had these boundaries is universal. And in a way, particularly when we talk about privacy, this stems from human dignity. There's a reason why stripping someone naked is a form of torture that's illegal under international law. So now that we are in a world where technology has become ubiquitous and intrusive, we also have to update these norms. And that's why digital rights, or digital policy, is so interesting, because it's our generation that drafts these foundational laws. The questions in 20 or 30 years will be totally different, and we are the generation that gets to decide. So we better all pay attention.
Michael, I'll turn to you next on GLEIF and things like that. In the last panel there was a reference made to licensing as a model for individuals to control data flows. And I started salivating as an ex-lawyer, because licensing means I can write some 40-page document, which would be delightful for me, but an individual wouldn't be able to consume it. And I know you've been doing some work on using the LEI system and existing systems for identity. So, starting with Michael, but for all of us: what are the existing legal tools, legal structures, legal approaches that we might bring that are sufficiently familiar, so that we're not introducing something entirely new, but we're kind of reusing them in these contexts for new challenges? Do you have some thoughts, Michael?
Yeah, and to actually build on Pam's comment about predictability for businesses. One of the things that I've always found interesting about GLEIF is that it was basically the G20 that conceived of the concept of GLEIF and LEIs after the 2008 economic meltdown. Hold on, help us understand what GLEIF is. Oh, so yeah. After the 2008 economic meltdown, the G20 got together and said that part of this economic meltdown was attributable to the fact that different financial instruments could not be attributed to different organizations. So the G20 came together; GLEIF is the Global Legal Entity Identifier Foundation, and they assign unique 20-character identifiers to legal entities. And what this does is provide predictability. And this is the one thing that most organizations, particularly those operating on a global scale, want: how can we have predictability? So one of the things that I do think is very interesting with what GLEIF has done with LEIs is that it was originally created to solve a specific problem of identifying financial instruments, but now, with the vLEI, they are actually looking at how they can extend that into other use cases.
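As background on the identifier format being discussed: per ISO 17442, an LEI is 20 uppercase alphanumeric characters whose final two are check digits computed with the ISO 7064 MOD 97-10 scheme, the same check IBANs use. A minimal sketch of that check in Python; the 18-character base string used as an example here is invented for illustration, not a real issued LEI:

```python
def _as_number(s: str) -> int:
    # Map each character to its base-36 value (0-9 -> 0-9, A-Z -> 10-35)
    # and concatenate the decimal representations, per ISO 7064 MOD 97-10.
    return int("".join(str(int(c, 36)) for c in s))

def lei_check_digits(base18: str) -> str:
    # Append "00", take mod 97, and choose check digits so that the
    # full 20-character string satisfies number mod 97 == 1.
    return f"{98 - _as_number(base18 + '00') % 97:02d}"

def is_valid_lei(lei: str) -> bool:
    # Structural check: 20 uppercase alphanumerics, and the mod-97 test passes.
    return (
        len(lei) == 20
        and lei.isalnum()
        and lei == lei.upper()
        and _as_number(lei) % 97 == 1
    )
```

Any single-character typo then fails the mod-97 test, which is part of what gives these identifiers the operational predictability the panel describes.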
So I think, again, this is part of how you look at what tools are in your toolbox and how you can repurpose them for different situations. And when those tools have global applicability, I think those are some of the better ones that you want to use in your toolbox. I just would like to weigh in here.
I agree that there are certain regulations, or certain tools, let's say, that we have. But the problem, or the challenge, that I see is that AI and technology are evolving very fast, and I don't always see the regulations evolving at the same pace. So I think that would be the main challenge here. I don't know, what are your thoughts on that? That's absolutely a challenge, that regulation and tech don't move at the same pace. And you were asking for tools. I think a very effective tool that I see you can apply every day, in every product, in every situation, is: be clear about what you're doing. What is the product? Who does what? Define that part on a very factual basis. And then everything on top, who's accountable, who is liable, becomes so much easier. So the first step is to understand what you're doing.
Very often we have a fantastic technical product and no idea what we're doing on the legal side. What is the legal framework, where do we position it? And not only in startups. Earlier today I saw a presentation on the framework for the EUDI wallet, not saying which country, but it was a pure technical proposition: no business operations, little legal, no social. You said the word BOLTS: business, operations, legal, technical, social. And if you take only one part, it makes it difficult to add the other parts at the end. Be clear about the product, be clear about what you're trying to do, analyze it from the different sides, and then everything becomes easier. The second step is accountability, liability, responsibility, et cetera. So, to expand on that notion, the BOLTS idea, business, operations, legal, technical, social, it's kind of like a handy checklist for your information product. Because these are sociotechnical products and services, they involve all of those elements, and the failure to take account of what's going on in any of those categories will cause a failure of the product.
And I know Pamela has been involved in a lot of product development cycles and, I'm sure, has seen that. And it's hard, because there are different vocabularies in each of those domains. And part of what we're talking about here is that the technologists are not going to go to law school, and the lawyers are not going to learn all the technology completely. So how can we bring those things together more effectively? Is it that you put it out in the market and then have a failure, and that teaches you? Or are there ways that we can have these communications? Obviously EIC and other meetings like this provide us with an opportunity to share vocabulary. Michael, did you have a thought on that? So one of the things that I found interesting in the NIS2 directive is, I think it's Article 14, talking about the cooperation group.
And one of the things that I think they got right is that one of the tasks of the cooperation group is specifically to look at best practices. And then there's also Article 28, which deals with the accuracy of domain name registration data. And what they did in recital 111 is specifically make reference to best practices in the area of digital identity. So what I thought was interesting there was that they didn't say X or Y; they recognized that this was going to continue to evolve, and they basically provided the flexibility for emerging technologies, and for the cooperation group to feed that back in. So I don't look at NIS2 as static, but as something that is dynamic. So I think that security legislation, the way I am wired, can never be static, because once the ink is dry, the legislation is already outdated.
So that's why a lot of the operational and technical standards are not in the directive or the regulation itself, but in underpinning implementing acts. And some of these accountabilities, to define them, sit in the member state area. But as a company, and I just presented it in my previous slide, for those who were not there, I saw it working quite well before any product development was done in our bank. I had this problem all the time. I was head of IAM, and the business kept developing lots of apps and things and using data. And then after that would come the risk department, the audit department, the CISO department, and then it all had to be redeveloped to be compliant with this, that, and the other, only afterwards. So I founded a group, a multinational group of staff: the head of compliance was there, as was the head of the sanctions desk, the head of legal, the head of audit. All really content-knowledgeable people in these fields, because the reality is location- and department-agnostic, while companies are built like silos.
And on the globe it's the same, at an even larger scale. So every week we had a feet-on-the-table session: okay, business, what are you planning? Well, they're planning this and this. Do you know that these are the compliance requirements? And that would be with a lot of coffee and tea and cakes. It would take two hours, and in the end we were really good friends, and it took away the threshold for the business to show what they were doing to these nasty mafia guys from legal, risk, and compliance. Well, now that I know that cakes are going to be involved, please invite me. Thank you. And Pamela, I wanted to go to you on product development and on the standards work you've done and other experiences.
You know, what's your take on that integration? How do you bring these different voices in so that you make sure you don't have a failure out there in the market? Can you comment on that a little bit?
Yeah, I think, and this is just my personal experience, not any kind of corporate statement, but it has been my experience that many of the technology efforts are negotiated on a bilateral basis, often, especially in the beginning. So if you, for example, look at federation, there are legal and operational considerations there, but many companies are negotiating individual contracts with individual companies for individual reasons. And so what I feel like we lack, there goes my camera, sorry, is the ability, or perhaps an initial business reason, to start with design patterns that match regulatory asks. We do things individually first to solve individual problems, and then we try to retrofit to those processes. I don't know if that's anyone else's experience. One of the things I want to pick up on, while people are thinking about that, is the mention of design patterns, something I want to make sure folks in the audience are aware of.
In the early years, and Pamela, correct me on this, or others, in the early years of secure software development, they were referencing the work of a gentleman named Christopher Alexander, who wrote books on architecture. And his assertion was that if you want to design new architecture, you should look at what's been done, the patterns of what's been done in the past. So if I want to design a portico, an entryway, I should look at the entryways of yurts and churches and residences, look at the patterns, and then use those patterns to inform what I do. That was picked up in secure software development. And I think Pamela is alluding to it here: even if we don't have direct precedent to use, it might be a way for us to think about what patterns we saw in the past that were effective, and how we might use those.
You know, to give a specific example, and this actually involves Microsoft's involvement in the ICANN stakeholder process. ICANN is a multi-stakeholder organization responsible for the internet's unique identifiers: domain names and IP addresses. As part of the consultation process, there was a policy development working group on the accuracy of and access to domain names, and Mark SV from Microsoft actually participated in this particular group. And there were a lot of efforts by businesses to advance certain initiatives, but unfortunately a lot of the contracting parties pushed back, and those recommendations did not come to fruition. But what was interesting is that one of the people also participating in that was someone from the Commission. And that person was then in part responsible for writing Article 28 regarding the accuracy of and access to domain name information. So I think what's important, and I think, Pam, this goes to what you were talking about, is that when businesses participate in these dialogues, it's really important to listen to what regulators or other government bodies are thinking. Because if you do not find a way to reach mutual agreement, having something imposed upon you might not be very favorable. So there you go, Emily.
And you were talking about patterns earlier, and I was just thinking, listening to this conversation: we are talking about how the regulations have been set up for the way society is currently organized. I'm actually thinking about the future. There were quite some conversations yesterday about agentic AI, AI agents who are able to set up companies on the fly, on a whim. I know someone who set up a drop-shipping company; actually, it was one guy. He spent an afternoon getting several different AI agents to outsource several parts of the business, and in one go he had a drop-shipping company that was up and running. But at the same time, I was watching a YouTube video just the other day about how multi-agent frameworks are also used to imitate more complex structures. For instance, a small hospital was simulated, giving medical advice based on several AI agents all controlled by one person. So I think that, taking this into account, we may face a lot of institutional overwhelm when every single person technically has the capability of setting up several companies and several institutions that work together, also working together with AI and collective intelligence. So just like you were mentioning that transport really changed the way people interact, that has changed: it has become very hard to define liability. In law there is always a very big question of in which territory something took place. And when you look at digital services, that is actually really hard to define. As technology evolves even further, and we fragment businesses even further, I'm very curious to see how we're going to tackle the question of liability in that sense. So that's to answer on the patterns you were mentioning. Emily, staying with you for a minute: your presentation last year freaked me out.
So I want to talk a little bit about patterns of harm, because, you know, I remember you made an allusion to children and the protection of children. And throughout the ages we have had theft, and the protection of children; these are perennial issues. So those are things from the human aspect, what humans do. And I wonder whether we can be informed by taking the precedent, taking what we did before, and bringing it forward, which is what our focus has been so far. But are there other ways we might be thinking about this, since we are assuming that our existing tools will be ineffectual to some extent? Maybe looking at the patterns of harm and saying, okay, we have people who are innocents, how do we protect the innocent? Are there some thoughts you have on that, or others as well?
Yeah, that's a really good question. I'm always very intrigued when I look at the younger generation. I feel like there's a very big gap to bridge, again, and increasingly so as the generations keep coming with technology. And this also builds on something that you said earlier about how the perception of harm is evolving in terms of privacy. Before, it was very strange to have someone eavesdrop in your house. Today, it's very weird to have someone look at the neurological patterns in your brain. But looking at tomorrow, if I look at my little cousins, for instance, I'm not sure they're going to find it so strange for people to have direct access to what is going on inside their minds, which freaks us out, right? But the next generation of regulators may think it's fine. So, answering your question, the most important thing I think we can do today, as parents, as brothers, as sisters, as colleagues, as just people, or, you know, instructors, teachers handling young people whom we increasingly no longer understand on fundamental levels, is to really engage in that dialogue with them, to basically keep the fundamental human values going as the generations keep coming, to keep that dialogue open.
Thomas? I agree with you that ultimately we have to look at society as it is right now. And to quote Schneier: fundamentally, technology is nothing more than a lens, and a method to increase power, wherever power lies. At the beginning it could be activists like me who suddenly appear much stronger, but in the long run, big corporations and governments will keep up. We talk a lot here at this conference about digital identity and all the good that it would bring. This week a report also came out about how digital identity systems are abused in Uganda for a very draconian government control scheme and mass surveillance. So there always is a cost.
And if we proliferate these systems on a population level, we have to look to the most vulnerable parts of society, to the ethnic minorities, to the people affected by racism and by sexism. And you just mentioned earlier that there's only the technical debate in eIDAS. Yes. And I mean, we paid attention to that for the last three years, since the law came out, and did what we could to make the regulations safe. But there are so many unanswered questions. Just to give you a glimpse.
When you have a general-purpose, universal system to identify people and share data about them, and everything is in the data, including things like family status, sexual orientation, gender, we have not even thought about what the standardization of these values would mean. And even within Europe, where we have quite a harmonized system, I can see that we will not agree.
So there are so many central issues that are out of the spotlight. And again, coming back to patterns: the pattern that's repeated here is that these laws are discussed in a silo, without the people that are affected by them, without proper processes to really understand what the technology will do once it's out in the wild. And that's just hugely irresponsible. And I see that the only people with a big plan are the big tech platforms, and it's only their plan, for them.
And of course, after the pandemic, everybody agrees that we need to digitize government and we need to offer things online, but that's not the end of it. We really have to ask the question: what type of society will these systems bring us into? And if we are not even capable of asking, let alone answering, these questions for simple things like digital identity, where we have understood the nature of passports for 400 years, then we have no chance of answering them for AI, a technology that so few people have actually understood, if at all. So any meaningful regulation is out of the question. I would make one exception to that statement, which is biometric recognition, face recognition. There I think it is actually very clear: this is dangerous, this is removing freedom from physical spaces, this eliminates the public nature of public spaces. Yet the big AI Act of the EU failed to establish strong safeguards for this technology. It kicked the can down the road; now it's up to member states, and it's just irresponsible. That's not the Europe that I want. So, we have a question from our virtual audience.
Once again, I would like to thank you for engaging, and I remind you here in the room, if anyone has a question, please feel free to raise your hand. The question is: with the use of generative AI, what are your thoughts in terms of intellectual property and copyright? That is, you know, I would like to make a comment on this. I'm a lecturer as well, and I remember, with colleagues in academia, asking: how do we know when a paper was actually written by a person or co-authored by ChatGPT?
So, I kind of have a unique perspective: I'm an engineer and a lawyer, so I look at this through two different lenses. Perfect combination. Yes. So, going back, I talked about how under US law it has kind of always evolved, and one of the interesting cases was under the original copyright law, which only protected physical works, right? And what happened was that in the early 1900s the piano roll came out, which was a mechanical roll. And the authors were quite upset when their music was being automatically played on pianos, and they sued, and that case went to the United States Supreme Court, and the Supreme Court said: no copyright protection. So the copyright owners went to the US Congress, and in 1909 they amended the copyright laws. So part of it is, you know, lawyers are really good at suing at times, and sometimes those lawsuits can result in good or bad action, but that I think is part of the thing.
So going back to AI, which I find interesting: some of the early litigation involving AI tends to be about copyright. Was it fair use for certain copyrighted material to be included in certain LLMs? Then also, as far as the output goes, right now, at least in the US, there still has to be an author. One of the cases in the US involved, I think it was an ape, or primate, that took a selfie, and the person whose phone it was tried to claim copyright protection, and the courts said: no, you didn't take it. That's your phone, but it was the animal that took the selfie, so you are not able to. So again, one of the things that's interesting is that copyright law and patents are actually provided for in the Constitution, Article I, Section 8, Clause 8. It is literally embedded to protect authors and inventors. So that, I think, is the issue. Now, if someone is involved in prompt engineering, that is where I think the struggle is going to be. It's not the AI generating something; but when you have a human interacting, crafting that prompt, evolving it, that's where I think the law will extend copyright protection to that work. That's kind of my gut. I don't believe that, you know, using a search engine like Google to help me write a paper somehow prohibits me from asserting copyright protection.
So again, to be determined, but this is where, for better or for worse, I think the lawyers will probably help provide some clarity. This comment reminds me of the presentation of Jacob Viol, of the lawyer who said: it depends. Well, yeah, that was literally the very first day of law school. On the very first day of law school, the professor asked a question, and a bunch of eager law students argued this way and that, and after about 15 minutes he said: wrong. The first answer is "it depends"; the second statement out of your mouth is "once you provide a retainer, I'll provide further clarity." So that was literally my first day of law school.
So yes. Just to follow up on that, on the copyright notion: my recollection is that before the Statute of Anne, the copyright, the money, went to the printing press owner, not the author. And so we need to talk about economics and power here.
Also, the rule of law is ultimately about the perpetuation of power structures. And Thomas is itching to jump in on this one.
Let me finish this thought. It was Spotify all along, that's right. So it's interesting, because you hear a lot of lamenting about author rights and things like that, but the printing press was the equivalent of the Large Hadron Collider at the time. It was extremely expensive, and you had to amortize the cost. We had similar things in music, with the publisher versus the composer; there's all sorts of division there. So one of the things for us to keep in mind is that ultimately a lot of this may be choreography in front of power.
And so that's for us to be aware of. As we've alluded to many times, we have nation states, right, which are sovereigns; they don't ask for permission or forgiveness. And now we have companies that are accreting power in the information domain, not in the physical domain like nation states did when the Peace of Westphalia divided up the world.
So Thomas, I don't know if you have some thoughts on the economics issues and what you might want to add there. Well, as we just said, and that's similar to continental European copyright, ultimately it is meant to protect the artist, the author. Yet if you look at the current economic realities, that's the least it does. If you are in music, then most likely you will not be able to live off your art.
And that's to the detriment of the production of music, but to the huge benefit of a few very big operations, the labels and the streamers. And here we also see that current licensing deals give those companies the rights to train AI on this material. And we know from the big Hollywood strike what the right response to that should be: that creatives protect their art and the rights to it.
And that we have this debate. Again, I would agree with you that this needs to be decided by the legislators; I don't think we should toss this question to the courts. But it's very hard to turn things around on copyright in particular. I don't want to make an assessment on when the next US legislation will be passed on this.
But even in Europe, having worked on a copyright directive from 2015 till 2019, I can tell you that even the EU has very little wiggle room here, because we have international treaties dating back hundreds of years that limit what even the EU can do. So it is a really difficult space, where I fear that the most powerful players will just create realities. The licensing deals that OpenAI is currently striking with many publishers are a worrying sign in that direction. The only good thing that could come from it is the eventual downfall of Google.
Yeah, I have a remark, not specifically on copyright, but on legislation in general. Legislation tries to capture reality, physical reality, life, in a rule-based approach: to set a rule set for what should and should not be done. And that will always fail, because you can't describe the complete, multifaceted reality of the world and define what should and should not be done; there will always be new realities emerging. That's what we see with AI.
So by default, at the highest abstraction level, law can never be sufficient. So, since we are in the closing minutes, I just wanted to ask people for one sentence on what good looks like in the year 2040, 15 years from now. Anybody want to start?
What does good look like? What is a satisfactory human condition, with law and technology being balanced, looking way out? I can start. I've been thinking about this, and something that's on my mind is that many of us now have artificial intelligence. One sentence, you said, right? Okay, I need to cut it down. I'm going to cut it down: I want people to take accountability themselves, and make sure that they stay accountable, in an age where they use technology that is smarter than them. And Pamela, you've been very patient.
Do you have a thought on this? I do.
I do. I mean, I think innovation in this space means bringing legislation, regulation, and technology closer together, so that you have, you know, patterns for success that are easy and that do the right thing according to guardrails that are responsive, with less than a decade's worth of turnaround. Other folks, last thoughts? Yeah, in my ideal world, in 15 years, people are still doing the thinking, and technology is supporting, helping them do it better. But the thinking part is still with the humans.
Thomas? We know that we've won when regulation makes technology equalize power imbalances in our society instead of amplifying them. Michael? Symbiosis would be mine: if there is a symbiotic relationship between humans and AI, that's a good thing in my opinion. I want to close with Plato, who said the world is going to be better when kings become philosophers and philosophers become kings. So I would change it a little: when engineers become lawyers, and lawyers become engineers.
And on that note, please join me in thanking the panelists for a wonderful discussion. Thank you. And with this, we've finalized this part of the track.