IIA Canada National Conference - September 14 - 16, 2020

Managing the Risk of Bias in Artificial Intelligence

Day 2 - Concurrent Session 5 - Track 6
15 Sep 2020
11:25 - 12:15

Level: Intermediate

Every day, Machine Learning / Artificial Intelligence (ML/AI) models make decisions that affect the lives of millions of people. The models make these decisions on behalf of corporations and governments, and as such, these entities are responsible and accountable for the results of those decisions.

Here’s the tough question: Who is responsible for ensuring the ML/AI model is making fair, unbiased decisions?

Is it the model developer? No. Basic internal control principles dictate that the person(s) responsible for creating a system cannot be impartial evaluators of that same system.

Is it the users of the model? No. Users typically have neither the expertise to evaluate an ML/AI model nor the inclination to question a model that seems to be performing well. As an example, consider a predictive policing model: if implementing an ML/AI solution leads to increased arrest rates and a reduced crime rate, users are unlikely to question whether that system unfairly targets a particular group – from their point of view, the system works!

There has been a great deal of academic research and interest in the ML/AI community on the subject of fairness and reducing bias in models, but to be clear, in the real world:

The task of providing assurance to Senior Management and Governance bodies that the risk (reputational, financial, legal) of implementing an unfair or biased ML/AI system is being managed appropriately belongs to Internal Audit.
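
To make the assurance task concrete, below is a minimal sketch of one common fairness check an auditor might request: a disparate impact ratio comparing favorable-outcome rates across groups. The function name, the sample data, and the 0.8 flag threshold (borrowed from the widely cited "four-fifths rule") are illustrative assumptions, not content from this session.

```python
# Illustrative sketch only: one of many possible fairness checks.
# Data, group labels, and the 0.8 threshold are assumptions for demonstration.

def disparate_impact_ratio(decisions, groups, favorable=1):
    """Ratio of favorable-outcome rates between the least- and
    most-favored groups; values below ~0.8 are often flagged for review."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in members if d == favorable) / len(members)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model outputs (1 = favorable decision) and group membership.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact_ratio(decisions, groups)
print(f"Favorable rates by group: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")  # below 0.8 may warrant review
```

A check like this is only a starting point; a full audit would also examine the training data, the model's inputs, and the downstream use of its decisions.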