The world is caught between high expectations, futuristic fears, and non-binding recommendations on how to proceed with AI development and management. Although the technology is advancing and many use cases have proven effective in supporting enterprise data management, security, customer interactions, and other industry-specific solutions, most organizations hesitate to board the AI train. Many feel they lack the guidance needed to launch a future-proof AI project. For the moment, all we have is a growing collection of AI governance frameworks from reputable groups such as the OECD and the European Parliament. These provide valuable insight into the concepts that should be front and center in any AI project, and they are the precursors to legislation. But it will be better still when governments and regulators clearly specify what is required for a robust, trustworthy, and human-centric model to be developed and used in a business context.
Edging Closer to a Regulation
On October 20th, 2020, the European Parliament voted on and adopted recommendations on what a future legislation should address; the legislative proposal itself is expected in early 2021. The guiding principles agreed upon are: human-centric and human-made AI; safety, transparency, and accountability; and respect for privacy and data protection.
A risk assessment that defines different levels of risk posed by an AI system may be used to further regulate high-risk AI technologies. The higher the risk a system poses, whether because of malfunction, malicious action, or its intended purpose, the greater the regulatory oversight and responsibility an implementing organization could carry. This is one option that could help satisfy the proposed requirement of human oversight at any time for high-risk AI technologies.
A separate proposal discusses a "future-oriented civil liability framework" that would clarify and reconsider the liability of self-learning algorithms in relation to developers, implementing organizations, and users. The Product Liability Directive (PLD) is one regulation that could serve as a starting point and eventually be adapted to offer protection for digital and AI-based products. Any framework defining liability will have to distinguish between frontend and backend operators, both of which would be held liable in proportion to the degree of control each holds over the model's operation. Overall, liability should be proportionate to a model's risk, which requires clearly defined risk categories against which models can be assessed. Insurance coverage must also adapt, and the proposal suggests that the European Commission work closely with the insurance sector to find policies that offer sufficient protection.
Learn How to Prepare
These proposals are promising in that they show movement by the European Parliament, edging closer to a regulation. But they held no surprises; they are a natural extension of the European AI governance conversation. AI governance will continue to formalize, and if you want to know how to get started, join KuppingerCole's workshop on AI Governance on November 24th at cybernetix.world 2020.