Not only is there no form of AI that understands what it says, can draw conclusions from it, and can base decisions on it; it is not even known how such a synthetic intelligence could be created. In our time, say the next two and a half decades, the question is not primarily one of developing an ethical code within which AIs can unfold as independent subjects, but a far more profane one of responsibility. If a self-driving car decides, without any action on my part, to drive into a traffic light pole, who is liable for the damage?
Does our current legal system already offer solutions for regulating such matters, in which only the first of the basic elements of a wrongful act (the constituent elements of the offense, unlawfulness, and culpability) still plays a role, or must a new category be devised for this?
This keynote offers a reflection on the current situation and on what lies ahead.
Language: English • Duration: 17:42 • Resolution: 1280x720