Many argue that an ethical robot is possible. In addition to creating robots able to exhibit human emotions, scholars point out that efforts should be made to "induce emotions toward AI in humans," which is believed to be necessary in order to instill reciprocal ethical and moral codes in machines. Research in artificial intelligence over the past twenty years has debated this question of morality and how to gauge it in machines.
Should a code of ethics for machinery be considered within the framework of its function, rather than within a broader understanding of ethics?
Some insist that machines cannot be taught ethics because they have no moral agency; on this view, we should focus on those who build the machines and the purposes for which they build them, rather than on the machines themselves.
As the relevance of this topic increases, more and more questions arise. While no universal understanding of what constitutes an ethical machine exists, there is broad agreement on one point: safety should be a priority, and avoiding harm to human life is paramount.
This panel will present some of the critical issues concerning this topic, discuss and debate ethics in AI, examine current guidelines, and consider whether we should have an agreed-upon framework for enforcing those guidelines.