The Ethical Part of AI Governance – my personal learning journey
This talk traces my personal learning journey in AI and AI Ethics at Bosch. I want to share what brought me to AI and AI Ethics, both personally and professionally, and which instrument Bosch uses to bring AI Ethics to life.
The harm that misuse of AI/ML can cause is obvious, from ProPublica's 2016 recidivism piece to Joy Buolamwini's more recent discovery of bias in facial recognition classifiers.
The need for tools to use AI/ML ethically is concentrated in two particular areas: transparency and fairness. Transparency means knowing why an ML system reached the conclusion it did, something essential if we are to identify bias. In some forms of ML, this is difficult. We'll cover two tools that assist with transparency: LIME and SHAP. We'll highlight where each of these tools performs well and poorly, and provide recommendations for using them in combination where appropriate.
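To make the two approaches concrete before comparing them, here is a minimal sketch of applying LIME and SHAP to the same classifier. The model and data are hypothetical stand-ins (a random forest on synthetic data), not examples from the talk:

```python
# A minimal sketch: LIME and SHAP applied to one hypothetical classifier.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical synthetic data and model, stand-ins for a real pipeline.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME: fit a local surrogate model around a single instance and report
# which features most influenced that one prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(6)], mode="classification"
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())  # [(feature condition, local weight), ...]

# SHAP: Shapley-value attributions; TreeExplainer is efficient for
# tree ensembles such as this random forest.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:10])
print(np.shape(shap_values))  # per-instance, per-feature attributions
```

The contrast this illustrates: LIME explains one prediction at a time via a local surrogate, while SHAP's attributions can be aggregated across many instances for a global view, which is one reason the two are often used together.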
Once transparency is established, we'll pause to evaluate potential sources of bias that would affect the fairness of a particular algorithm. Here the range of available tools is broad. We'll start with an explanation of bias metrics, covering the roles that true/false positives and true/false negatives play in calculating various accuracy metrics, as sketched below. With the basics of fairness established, we will then explore various tools against a few publicly available sample ML implementations. Tools in this review will include: Aequitas, AIF360, Audit-AI, FairML, Fairness Comparison, Fairness Measures, FairTest, Themis, and Themis-ML. We'll compare these tools, providing recommendations on their usage and profiling their strengths and weaknesses.
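To ground the metrics discussion, here is a minimal sketch in plain NumPy, using hypothetical toy data, of how true/false positives and negatives combine into per-group error rates, the raw material behind the fairness criteria these tools compute:

```python
# A minimal sketch of per-group bias metrics, assuming binary labels and
# a binary protected attribute. All data here is a hypothetical toy example.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return (FPR, FNR) for the rows where `group` is True."""
    t, p = y_true[group], y_pred[group]
    tp = np.sum((p == 1) & (t == 1))  # true positives
    fp = np.sum((p == 1) & (t == 0))  # false positives
    tn = np.sum((p == 0) & (t == 0))  # true negatives
    fn = np.sum((p == 0) & (t == 1))  # false negatives
    return fp / (fp + tn), fn / (fn + tp)

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 1])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

fpr_a, fnr_a = group_rates(y_true, y_pred, protected)
fpr_b, fnr_b = group_rates(y_true, y_pred, ~protected)

# Equalized-odds-style check: large gaps in FPR or FNR between groups
# indicate disparate error rates, as in the ProPublica recidivism analysis.
print(f"FPR gap: {abs(fpr_a - fpr_b):.2f}, FNR gap: {abs(fnr_a - fnr_b):.2f}")
```

Libraries such as Aequitas and AIF360 package exactly these kinds of group-wise comparisons behind higher-level APIs; the sketch is only meant to show what they compute underneath.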