The huge promise of AI, and of machine learning in particular, has driven an explosion in the uptake of AI methods across a wide variety of business problems. However, machine learning models and the data to which they are applied can be extremely complex, resulting in opaque systems that are not fully understood even by their creators and not trusted by their users. In this talk, I will discuss approaches that can improve trust in AI systems, focussing in particular on explainability (how and why did the system produce this output?) and robustness (how can the system be exploited, and can we defend against such attacks?).
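As a taste of the explainability theme, here is a minimal sketch of permutation feature importance, one simple model-agnostic technique: shuffle one feature at a time and measure how much the model's accuracy drops. The scikit-learn model and dataset are illustrative assumptions, not taken from the talk itself.

```python
# A minimal explainability sketch: permutation feature importance.
# The model and dataset here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

An importance score like this gives stakeholders a concrete, checkable answer to "why did the system produce this output?", which is one building block of trust.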
Key takeaways:
- Deploying AI comes with its own unique set of risks and challenges
- Your AI system will be of no use if the relevant stakeholders do not trust it
- AI systems can be gamed and exploited in surprising and subtle ways (see the sketch below)
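To make that last takeaway concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a classic attack in which a tiny, carefully chosen perturbation pushes a classifier toward a wrong prediction. The PyTorch model, input shape, and epsilon below are illustrative assumptions, not the talk's own example.

```python
# A minimal adversarial-attack sketch: fast gradient sign method (FGSM).
# The throwaway model and epsilon are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Perturb input x so the model is more likely to misclassify it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Tiny demo with a throwaway linear model on fake "image" data.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_attack(model, x, label)
print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```

Because the perturbation is bounded by epsilon, the adversarial input can look unchanged to a human while still degrading the model, which is exactly the kind of subtle exploitation the talk addresses.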