Is Artificial Intelligence in Cybersecurity Trustworthy or Deceivable?

Keynote
Wednesday, May 13, 2020 09:50—10:10
Location: AUDITORIUM

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users' trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. I argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of 'reliable AI' for cybersecurity is necessary. To this end, I offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.

Dr. Mariarosaria Taddeo
Digital Ethics Lab, Oxford Internet Institute, University of Oxford
Dr. Mariarosaria Taddeo is a Senior Research Fellow at the Oxford Internet Institute, University of Oxford, where she is the Deputy Director of the Digital Ethics Lab and a Faculty Fellow...