Yes, good evening. Bear with me. You are all from the IT sector, and now you have to listen to a lawyer in the very last session. I promise not to cite any provisions; you won't see any paragraphs, et cetera. I'm coming from a law firm specialized in data protection and information security, and AI-based systems are actually our daily business at the moment.
Corporate clients, the corporate industry: companies are coming to us asking how to use these systems, whether they are allowed to use them, what the legal framework is, et cetera.
And well, it's no longer science fiction, and somehow this year there has been a hype around AI-based systems. I guess ChatGPT is the main reason for this, but AI has been around us for a long time. It's daily business, right? We have search engines, navigation systems, weather forecasts, et cetera. So these are fast-developing technologies, but also methods which you can use in business. There's lots of future potential, but also lots of unresolved legal issues.
I'm not coming up with the legal solution today; I'm not going to tell you what the law should look like. But hopefully I can give you some thoughts on why we need at least some regulation, and I will also show you some regulations which are already in place and which you should really consider when using AI-based tools in your business. So how do these systems work? I'm not going to look into the technology, the algorithms; I have no idea about them. But as a lawyer, obviously, I have to look at when they are used and how they are used.
So the whole idea behind it is basically to solve certain tasks without human intervention. I think we can agree on this. In a corporate environment, I could use it for creating content, for example ChatGPT writing me a blog article, or as a virtual assistant checking contracts. We like to use it when we consult businesses on information security management systems. If you want to be ISO certified, if you want to have an ISO certificate, you need to produce a lot of content to show that your management system is working.
So it would be good to scroll through all the documents to see whether you have controls on access control, physical control, et cetera, instead of reading those papers yourself. Then obviously customer support: a lot of companies use chatbots. Those tools are trained with publicly available information, and the speaker before me already mentioned the risk: that information can be false. And then the content created by those tools will also be false, discriminating, homophobic, et cetera. So there is a risk of false output.
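To make the document scan mentioned above a bit more concrete, here is a minimal Python sketch of what such a check could look like: a plain keyword scan over a folder of ISMS documents. The control topics, the keyword lists, and the `isms_docs` folder name are assumptions for illustration, not tools mentioned in the talk.

```python
# Hypothetical sketch: scan ISMS documents for mentions of control
# topics (access control, physical security, ...). Keyword lists and
# the folder layout are illustrative assumptions.
from pathlib import Path

CONTROL_TOPICS = {
    "access control": ["access control", "authorization", "least privilege"],
    "physical security": ["physical control", "badge", "server room"],
    "incident management": ["incident", "breach response"],
}

def scan_documents(doc_dir: str) -> dict[str, list[str]]:
    """Return, per control topic, the documents that mention it."""
    hits: dict[str, list[str]] = {topic: [] for topic in CONTROL_TOPICS}
    for path in Path(doc_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for topic, keywords in CONTROL_TOPICS.items():
            if any(kw in text for kw in keywords):
                hits[topic].append(path.name)
    return hits

if __name__ == "__main__":
    for topic, docs in scan_documents("isms_docs").items():
        status = "covered" if docs else "MISSING"
        print(f"{topic}: {status} {docs}")
```

A real AI-based assistant would of course go beyond keyword matching, but the workflow (iterate over the evidence, flag gaps per control) is the same.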
And of course, what is interesting for me as a privacy expert: the use of personal data. When we create a legal framework, we need a definition. So I chose to take the definition from the EU Commission. Already in 2018 they published a communication on artificial intelligence saying that AI systems are systems that display intelligent behavior by analyzing their environment and taking actions, with some degree of autonomy, to achieve specific goals. This definition is technology-neutral, as we call it.
It's not limited to a specific technology or method. That is very important, because the development of such tools is fast and the law always lags behind. But it also has its drawbacks. The drawback of such broad definitions is how to interpret them, and we see this already: the EU General Data Protection Regulation is technology-neutral, and we have a lot of court cases at the moment about how to interpret it, how to apply it. There are legal and ethical issues we have to look at. And a framework needs to focus not only on privacy and data protection, but also on surveillance.
And later on I will show you an example where surveillance with a flawed algorithm has led to bias and discrimination. Also the role of human judgment: I create content and I rely on it. Copyright issues are also a problem. Think of AI used in administrative justice, lawyers being replaced. There are trials already, and lots of errors, but of course those tools advance over time. They are getting better; the people behind them are working on removing, or at least reducing, those biases. So at the moment we have no dedicated regulations in place to address these issues.
But there is a proposal, and the EU AI Act is the first major regulation which has been proposed by the EU Commission. They hope it will come into force this year, but whether it will be adopted is open, because it has lots of flaws as well. It regulates the industry, but not so much the authorities, and I think that will be the problem. But the key objectives are, and this is usual for EU law, to harmonize laws within the union and, while facilitating the development of such technologies, also to protect fundamental rights and union values.
So the AI Act uses, or proposes, a risk-based approach. Some AI-based systems or tools will be prohibited, like social scoring. Others with a high risk, for example AI tools used for CV assessment, credit scoring, et cetera, need to undergo an assessment. Then surveillance should be monitored, et cetera. Standards for the technology behind it shall be developed to make sure that AI systems are fair and also transparent. So this is just a proposal. We'll see what happens.
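As a rough illustration of that risk-based approach, here is a small Python sketch that maps the use cases mentioned in the talk to the proposal's tiers. The tier names follow the commonly described structure of the proposal, but the mapping of concrete use cases and the listed obligations are heavily simplified assumptions, not a reading of the legal text.

```python
# Simplified sketch of the proposal's risk tiers, using the examples
# from the talk. Mapping and obligations are illustrative only.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring
    HIGH_RISK = "high-risk"        # e.g. CV assessment, credit scoring
    LIMITED_RISK = "limited-risk"  # e.g. chatbots (transparency duties)
    MINIMAL_RISK = "minimal-risk"  # everything else

USE_CASE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "cv assessment": RiskTier.HIGH_RISK,
    "credit scoring": RiskTier.HIGH_RISK,
    "customer support chatbot": RiskTier.LIMITED_RISK,
}

def required_steps(use_case: str) -> list[str]:
    """Return the (simplified) obligations for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL_RISK)
    if tier is RiskTier.PROHIBITED:
        return ["do not deploy"]
    if tier is RiskTier.HIGH_RISK:
        return ["conformity assessment", "risk management", "human oversight"]
    if tier is RiskTier.LIMITED_RISK:
        return ["transparency notice to users"]
    return ["no specific obligations under the proposal"]

print(required_steps("credit scoring"))
# ['conformity assessment', 'risk management', 'human oversight']
```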
Today was a very important day because it went through some steps of the legislative procedure. Whether it passed today, I didn't find any information. It will then go further to the EU Parliament, and they will come up with a decision in June, I think. So let's keep an eye on the proposal and whether it's going to pass or not.
However, we already have frameworks in place that actually highly regulate AI-based systems, and these are the data protection laws. In the European Union we are of course looking at the General Data Protection Regulation, which also applies in the UK, at least for now. But also other countries: Switzerland will have a new law in September, I think India just came up with a law, several states in the US have data protection laws, especially California, Canada, et cetera. Any technology that is designed to process personal data, and this includes AI, falls under data protection law. And when companies come to me and ask, "Can I use a chatbot? Can I use a tool that goes through business cards to analyze which company, which person could be relevant to me?", they often say: well, it's not me who is responsible for collecting the data. Google is always a good example. Google Analytics processes lots of personal data, and companies say: but we can't identify the people behind it.
No, the company can't, but Google can. And you are still the controller; you are still, according to the law, responsible for it. So if you use an AI-based technology in the company, you are responsible for it. You have to think about: am I allowed to use it? What are the requirements when personal data is processed? Maybe there is a joint controllership, but this needs to be a case-by-case analysis. So with ChatGPT, with OpenAI, you need to have, for example, data processing agreements in place.
What is often missing is a legal basis. In the European Union, the UK, and many other countries, you are only allowed to process personal data if you have either consent or another legal basis. I will come to some cases later on and the fines that some companies have received. What is even more problematic is informing the data subject. Do you actually know what the AI-based system that you are using is doing with the data, how it is processing it? And people in the European Union have a right to human intervention.
So if a decision is based on a machine learning system, you have a right to have the decision explained by a person. So you cannot just rely on AI-based systems. You have further obligations: privacy by design, meaning use only the data you need to have, not what is nice to have in the future, et cetera. You might have to do a data protection impact assessment if a high risk is imposed on the individual. Maybe you need a data protection officer, in Germany most likely, but maybe also in other countries. There are a lot of decisions by authorities already.
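As a minimal sketch of privacy by design understood as data minimization: strip a record down to the fields a tool strictly needs before anything leaves your company. The field names and the `send_to_ai_tool` stub below are hypothetical, not part of any real API.

```python
# Hedged sketch: data minimization before calling an external AI tool.
# Field names and send_to_ai_tool are hypothetical placeholders.
REQUIRED_FIELDS = {"company", "job_title"}  # "need to have"

def minimise(record: dict) -> dict:
    """Keep only the strictly needed fields, drop the 'nice to have'."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def send_to_ai_tool(payload: dict) -> None:
    print("sending:", payload)  # stand-in for a real API call

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "company": "Example GmbH",
    "job_title": "CISO",
    "phone": "+49 170 0000000",
}
# Direct identifiers (name, email, phone) never leave the company.
send_to_ai_tool(minimise(record))
```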
Companies like OpenAI with ChatGPT are in the focus of the EU watchdogs. You know that Italy banned ChatGPT for a while: a lack of legal basis for the data that has been used for training the software. Then there was something with the age limit, that they couldn't really make sure that children using it are really 13 years old. But it's not just a problem with ChatGPT. Clearview AI is a good example: they got a lot of fines. They were fined by authorities for violating data protection principles because they used images without consent.
France, 20 million. Greece, 20 million. Italy, 20 million. And even the UK imposed a high fine, but also Canada and Australia. Why all those countries? Because Clearview AI doesn't have a headquarters in the European Union. And if you do not have a headquarters, you are in the focus of all 27 authorities. If you have a headquarters, you have one single authority taking care of you, like Ireland for Google. So it's expensive. The last example, Everalbum, an almost identical company doing something similar: also facial recognition, a database with images.
I just want to show that the US is also not silent. They actually ordered them to stop the whole processing and to delete all the data collected, because of a lack of information and no consent of the users. Apparently they reached a settlement. Also interesting is an authority in Holland that got fined 3.2 million. Based on a flawed algorithm, they looked into citizenships, and all those people with dual citizenship were supposed to be at a higher risk of being fraudulent.
So to sum it up, I hope I showed you with some examples that AI-based systems pose significant legal challenges, and ethical challenges too; that there is already a framework which can impose huge fines, not only on the companies developing AI-based systems, but also on those using such tools. And well, in Europe and at least in the UK, some authorities are not holding back with fines. We always look at Germany and say: you are so over-regulating, you are so strict, et cetera. But in my examples, Germany has not imposed a fine yet.
It's more France and Italy, and especially the UK, who are imposing high fines on companies. Thank you very much. That's interesting. Are there any questions from the room here? Anyone? Any questions in the app?
No, not yet. When will this act be enforced, do you estimate?
Well, first it has to go through the EU Parliament. They are criticizing the act heavily because of national security: it could be used by governments to do surveillance, et cetera, even though we need court orders for governments to use such information. But I think it still needs some amendments.
Also, the data protection authorities, the EU data protection authorities, are criticizing the act. So we will see if further amendments are necessary before it is no longer a proposal but an act. If it passes this year, it will go into effect in two years. And then would the member states need to make implementing acts as well?
No, no, no. It will be a regulation, so it will be the same within the whole union. That's all right. And what about jurisdictions? It's a sort of black box: you don't know where the training data, or the database that is producing the outcomes, is located. That could be a black box, and that could be in different jurisdictions. So if that's not transparent, how exactly do you enforce anything? That's true. And how do you, I mean, if you look into the laws, we already have anti-discrimination laws at the moment, but they apply to people making false statements or discriminating statements.
But if it's a tool giving you such statements, how do you want to regulate it? Yeah, really difficult. Thank you so much. We're looking forward to seeing how it develops.