To Build “Fairer AI”, First Thoroughly Understand “Fairness”: A Multidisciplinary Review Through an Intercultural Lens

Abstract:

News reports of “unfair” AI have become all too familiar, stretching back beyond well-known cases such as Amazon’s inadvertently sexist recruitment tool and Google Photos labelling Black people as gorillas. Few would disagree that these outcomes are wrong, but different academic disciplines offer very different views, not just of how to address such issues, but of what the underlying issue actually is. This paper explores what we could and should learn about building fairer AI if we treat these disciplinary differences as a form of cultural difference. As with other forms of culture, each discipline brings its own perspective, context and ontology to the subject. For example, computer science typically treats AI unfairness in explicit mathematical terms, whereas social scientists may point to implicit attitudes and behaviours rooted in colonial history. Meanwhile, legal scholars are likely to focus on demonstrable, material harm linked only to specific protected characteristics. Psychologists, neuroscientists, politicians and philosophers add further variety, alongside other equally valid academic perspectives. This paper examines the implications of this diversity for building fairer AI systems. It is based on an ongoing investigation into discrimination towards under-represented groups in business decision-making, and its findings are grounded in and illustrated by unpublished interview data and publicly available examples. The paper lays out a foundation of diverse literature, including a critical review of common AI fairness assurance approaches and mechanisms. It focuses on underlying concepts, definitions, indicators/metrics and language, and considers different cultural models. Using a common structure, it compares and contrasts how different disciplines perceive (AI) fairness. This multidisciplinary review highlights valuable insights into fairness that are rarely considered in AI work rooted in a single field. It reminds us of a deceptively simple finding of the overall work: to create fairer AI, we must ensure we understand fairness (at least) as much as we understand AI.
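As a concrete illustration of the “explicit mathematical terms” mentioned above, one widely used formal criterion from the computer-science fairness literature (chosen here purely as an example, not one singled out by the paper) is demographic parity, which requires a model’s positive-decision rate to be independent of a protected attribute:

\[ P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b, \]

where \(\hat{Y}\) is the model’s decision and \(A\) the protected attribute. The contrast with the legal and social-science framings sketched in the abstract is precisely that, in those disciplines, fairness resists reduction to any single equation of this kind.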

Author bio: Was Rahman is a researcher and consultant in ethical AI. He is a doctoral researcher at Coventry University and CEO of AI Prescience, a consultancy helping organisations use AI ethically and responsibly. He has over 30 years’ global experience in using data and technology to improve business performance.

His research interests include the use of AI in organisational decision-making, ethical AI governance, and the impact of AI on social division. His current work is an investigation into the ontological diversity of different disciplinary approaches to AI fairness, focusing on AI-enabled business decision-making.

In business, Was has worked with large corporates, start-ups and SMEs around the world, engaging with CxOs, boards and investors. He has held leadership roles at Accenture, Infosys and Wipro, managing businesses in the US, EU and Asia Pacific, and has also run start-ups and raised funding. For governments, Was has advised UK and Indian political leaders on technology industry policy.

Was graduated in Physics from Oxford and in Computing from Coventry University. His AI and data science education is courtesy of Stanford, Johns Hopkins, Amazon and Google. He has been a guest lecturer at Oxford’s Saïd Business School, Cambridge’s Judge Business School, London Business School and IIT Madras.

Recorded Presentation | 26 April 2023

#Fairness #InterculturalApproaches
