Hi, and yeah, most of you saw me already in the panel just before the break, and as I said there, I only have a short amount of time. I will try to give you a general overview of what is now coming in the field of AI regulation and also put it a little bit into place from an international perspective.
First of all, and I said it already during the panel, this is really important, because there were a few news articles here in Germany, but also in other member states, saying this is another piece from Brussels, overregulating a new field of technology out of the blue, and so on.
With AI, this is not really the case, because there was ongoing preparatory work at the international level. As you see here in point number two, all this preparatory work started from one observation: unlike the old AI systems, the so-called expert systems that were based on code written by an expert and then simply operated along the rules the expert had included, the new data-driven AI systems of the second wave are creating a few unique situations. All over the world, countries were realizing that their legal systems were maybe not fully capable of covering all those different situations.
What you see here is an overview that the Commission used in the early days of the AI discussions in Brussels, and I think especially with certain levels of autonomy it becomes a little bit clearer what the concerns are with artificial intelligence. And as I said, it was not only the European Union that was looking into it, but really different regions all over the world, and also international organizations, from the United Nations to the Council of Europe to standardization organizations to the OECD.
And if you compare it to other areas of international cooperation, AI, as I also said on the panel, is actually one of the few areas where countries were able to agree on a common set of principles. You see here the 2019 OECD principles that, as I said, were adopted by the G20, so India, China, and so on.
And we can now actually see them in all those different AI policies and AI regulatory approaches, from the non-binding codes of conduct in Singapore to the automated decision-making law in China to the executive order on AI in the United States, or the European AI Act, as we have discussed. This will very likely enable us to streamline and align our different regulatory approaches, because at least the core principles are the same.
What I now want to do is pinpoint where the AI Act is nevertheless a little bit unique and where it becomes a really European approach, going back to our ethical principles and our legal values. The first point, which is really important and which you cannot really find in any other legislative approach on AI, is the product safety framework that the European AI Act is using.
The Commission originally chose this approach because it is said to be rather innovation-friendly, mainly because there are already 23 pieces of product safety legislation in the European Union: for example the Medical Device Regulation, the toy safety rules, the Radio Equipment Directive, and so on. It is a very well-established ecosystem, with conformity notifications, with notified bodies that are used to making third-party conformity assessments, and so on. Companies know which players they need to reach out to, where to find help, and so on.
Having this in mind, the Commission was thinking: okay, let's just include AI as another product and benefit from having a common line of minimum criteria that are necessary if you want to bring an AI system onto the market. What they needed to include, however, was the element of fundamental rights protection, because compared to, say, a vacuum cleaner, an AI system can basically be developed and deployed across all different sectors and may involve activities that could be harmful to fundamental rights, that could lead to biases, and so on.
So one big challenge of this conceptual choice of the European Union will now be to bring together people like Max Schrems and the product safety people, who normally haven't really talked with each other so far. The second point, and you probably saw it already in the news or in some essays from academia, is the risk-based approach.
Again, the Commission here had the idea of really limiting the regulatory burden to those systems that pose a high risk, or that have certain other elements that demand certain disclosure obligations, plus a number of AI systems that should be completely banned from the internal market of the European Union because of such a high, unacceptable level of risk. As with the first conceptual choice of the European Union, where I underlined several negative points, here too there is one big issue that we didn't talk enough about in Brussels: what is actually the evidence behind those risk categories?
The Commission didn't really make an assessment of all those different risk categories. They never assessed how many AI systems currently on the market would be, for example, prohibited. You see here a number of five to 15% that comes from an external think tank called CEPS, from 2021. But apart from that, we are basically making a regulatory shot in the dark, bringing forward a law, while a lot of those AI systems that are listed as prohibited or as high-risk AI systems do not even exist on the market.
One example from the migration sector, listed as high risk: a lie detector based on AI. We asked around; no member state is actually using such an AI system at the moment.
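To make the structure of this risk-based approach a bit more concrete, here is a minimal sketch of the risk pyramid as a data structure. The tier labels follow the Act, but the example mappings are simplified illustrations drawn from this talk, not an authoritative legal classification.

```python
# Illustrative sketch of the AI Act's risk pyramid. The tier names follow
# the Act; the example use cases below are hypothetical, simplified picks,
# not a legal assessment.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"    # banned from the EU internal market
    HIGH = "high risk (Annex III)"             # heavy conformity obligations
    LIMITED = "transparency obligations"       # e.g. disclosure duties
    MINIMAL = "no specific obligations"        # the vast majority of systems

# Hypothetical mapping used only to show the structure; real classification
# depends on the concrete use case and legal analysis.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI lie detector in migration control": RiskTier.HIGH,
    "chatbot that must disclose it is an AI": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case!r} -> {tier.value}")
```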
So again, hopefully it's working, but we currently don't know. A little bit more positive, I think, and we discussed it already during the panel discussion, is our value chain approach. Product safety and a lot of other EU laws always focus here on levels three and four.
So those market entities that are bringing a system to the market, placing it on the market, and deploying or using it. This, however, often leads to situations where rather small market players are the ones that need to fulfill everything.
And all those powerful upstream companies, for example platforms that are distributing a model or a dataset, would be completely outside the regulation. As I said during the panel, we fixed it: at least there is now a strong obligation to share information and to fulfill certain minimum criteria even upstream. So the idea is that in the future, cooperation along the value chain happens much faster and much better. But right now it's a theory. It's also heavily dependent on standard contractual clauses that the Commission still needs to provide.
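A minimal sketch of that value chain idea, assuming a simple two-actor model: the downstream provider carries the compliance duties, while the new upstream information-sharing obligation is modeled as a plain data hand-over. All names and fields here are hypothetical, chosen only to mirror the talk.

```python
# Hypothetical model of the AI Act's value chain fix: upstream actors must
# now pass minimum technical information downstream to the provider that
# actually places the system on the market.
from dataclasses import dataclass, field

@dataclass
class UpstreamSupplier:            # e.g. a platform distributing a model or dataset
    name: str
    technical_documentation: dict  # minimum information it must share downstream

@dataclass
class Provider:                    # levels three/four: places the system on the market
    name: str
    received_docs: list = field(default_factory=list)

    def request_information(self, supplier: UpstreamSupplier) -> None:
        # Previously this hand-over was purely voluntary; the final text
        # obliges upstream actors to cooperate along the value chain.
        self.received_docs.append(supplier.technical_documentation)

supplier = UpstreamSupplier("model-hub", {"training_data_summary": "..."})
provider = Provider("small-sme")
provider.request_information(supplier)
print(len(provider.received_docs))  # 1 -> documentation flowed downstream
```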
So also here there is a big question mark and this brings me to this overview.
When my boss and I made the final assessment after the AI negotiations ended, politically at the end of last year and on the technical level at the beginning of this year, we actually had a long, long list, I think 44 pages, with 32 plus points and 32 negative points. So it's really a mixed bag. I summarized everything here with those five points, which can again be summed up as follows.
We created a maybe very future-proof, but at least more dynamic and cooperative law, which on the other hand leads to a lot of legal uncertainty. What do I mean by that? As I said already, the AI Act is based on international standards and international concepts, and it will help a lot that, for example, the United States and the EU are using the same definition of AI.
We also made the AI Act much more principle-based, meaning that not every company that falls into a high-risk AI category needs to fulfill the same obligations to the same extent; it really depends on the use case.
I talked already at length about the value chain argument, and the final positive point is that the AI Act now basically just creates a kind of general framework that is then complemented by secondary legislation: templates, guidelines, harmonized standards.
Those will be developed together with stakeholders: with companies, with civil society, with academics, and so on. And hopefully this public-private partnership will create a situation where changes in the field of AI can be integrated rather quickly into the law, or at least into the secondary legislation. That means it will not be like with the GDPR, which has been in force since 2016 and still has no reform in sight; at least with the AI Act, those things will not happen again.
I talked about legal uncertainty as a kind of overarching element of the negative points, and I talked already about product safety legislation and this attempt to include fundamental rights in an already quite well-functioning system, which leads to problems.
The inclusion of AI actually also creates a huge problem for product safety legislation, because product safety legislation is used to regulating fixed products that do not change. But AI systems, as we all know, are evolving systems: they do change if there is new input data and so on.
There is, as I said already, a huge question mark over whether this works legally, or whether it requires companies to do basically constant updates of their quality management system, of their technical documentation, and so on. The AI Act, because of its horizontal scope, the attempt to really have rules for all sectors and use cases, is also creating a lot of legal overlaps, but also enforcement overlaps, because for example in the health sector and the finance sector there is already legislation covering AI, the Medical Device Regulation for example.
It is right now completely unclear how this sectoral legislation complements the AI Act, or which takes precedence, the AI Act or the sectoral law.
No one really knows which enforcement body is enforcing what. This is made even worse by the fact that the text contains, in many areas, extremely vague definitions. If you read them as a lawyer, you can argue a lot of different things. I talked already about the prohibitions and the high-risk use cases where the evidence is missing, but also if you just study the legal language, it is often not really clear what in fact falls under them.
Just one example: Article 5(1)(c) is about social scoring. Everyone knows that the Commission very likely had the Chinese government's social credit system in mind, but the way they wrote it down, it overlaps with commercial practices from banks and insurers that are already in place today. We don't know if those will suddenly be prohibited in the future. And the final point is that everything about how you process data in a compliant way is not really solved.
There's a lot of new legislation that is now in contradiction with, or at least standing next to, the GDPR, and the future will tell how companies should follow up on it. Because I'm over time, just a few slides at the very end. As I said, the AI Act is just one part of a lot of digital legislation that is out there and that is coming in the next months and becoming applicable. Together with the Brussels-based think tank Bruegel, I made this overview, and in November 2023 there were already 77 laws in force or even applicable. I would say a very tough job for companies.
The same, as I mentioned, when it comes to enforcement: whom am I actually speaking to as a company? It's a rather tricky question, because in November 2023 we already had 65 enforcement bodies, again many times overlapping or contradicting. And the final slides: all of what you heard now about the AI Act should have helped the European Union to put us in front, to help us in the whole race on digital competitiveness and AI leadership, because the European Union was thinking: we are lagging very much behind in technical terms, in venture capital terms, and so on.
We maybe will not make it on those terms, but perhaps with a positive framework that gives companies a lot of legal certainty, we can bring ourselves forward. Yet due to all this legal uncertainty that I was speaking about, this ambition of establishing a third way on AI again has a huge question mark over it. The only thing that will help in the coming months is to really use the secondary legislation that I was talking about in order to resolve this legal uncertainty and make the AI Act clearer. What you see here is the kind of timeline showing when certain parts of the AI Act become applicable.
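Since the slide itself is not reproduced here, a rough sketch of how that staggered timeline works out, using the entry-into-force mechanics mentioned later in this talk; the publication date and month offsets below are assumptions for illustration, so check the final text for the authoritative dates.

```python
# Toy calculation of the AI Act's staggered applicability, assuming: entry
# into force 20 days after publication in the Official Journal, prohibitions
# applicable 6 months later, most other obligations after 24 months.
# The publication date is an assumption, and months are approximated as
# 30-day steps, so the printed dates are indicative only.
from datetime import date, timedelta

publication = date(2024, 7, 12)                      # assumed publication date
entry_into_force = publication + timedelta(days=20)

milestones = {
    "prohibitions (Article 5)": 6,    # months after entry into force
    "general-purpose AI rules": 12,
    "most other obligations": 24,
}

print("entry into force:", entry_into_force)
for name, months in milestones.items():
    print(f"{name}: around {entry_into_force + timedelta(days=30 * months)}")
```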
You see that already in the winter the prohibitions are becoming applicable. So what companies right now should do is share their expertise. We were also talking about it at the very end of the panel: make the point about what you are already using now and how the new rules could affect it.
If there is, for example, a case where something commercially very useful could suddenly be prohibited, and so on.
Engage in standardization; make sure that those technical standards make sense and really represent not only a few big companies but the broad field of AI developers. And also use new tools in the field, like regulatory sandboxes, in order to engage with the regulator in a close way. And the really final slide, sorry for being a little bit over time: we legislators are hopefully not stopping there, but leaving our policy silos, thinking about AI as a broader topic, and trying to make all those connections.
For example, here with point number three: we have a digital and a green transition, but so far both transitions were isolated, and hopefully now, with the new Commission after the European election, we will use, for example, the synergies that are there.
Thank you. Thank you, I fully agree. Warm applause for Kai Zenner. There are a number of questions from the online audience, and maybe from people here in the room, but we don't have that much time, so sorry for that. Just one question to you, which is: please define unacceptably high risk.
Yeah, how would you define it?
So it's easy: you can basically look at Article 5, where the prohibitions are listed. There is not one definition of what is prohibited but really, as I said, use cases that are listed, for example social scoring or emotion recognition and so on, which are however still very broad. And the same goes for the high-risk AI systems that are listed in Annex III, again covering areas like migration, law enforcement, employment, and so on, and then sub-use-cases which are deemed to pose a high risk.
Anything that has a very deep impact on people's personal lives. Exactly. And if you want to read the text of the legislation, it's on the EUR-Lex website; I think you can Google it.
Yes. And very soon, next month maybe, it should finally really be published in the Official Journal of the EU, and then after 20 days it finally enters into force. Yeah.
Well thank you very much, Kai.
Yeah, that was interesting.