Epistemic injustice and AI ethics

Abstract: Epistemic injustices are harms to our capacity as knowers by reason of identity prejudice (Fricker, 2007). A burgeoning literature considers how the use of Artificial Intelligence (‘AI’) systemically perpetuates such epistemic injustices (Nihei, 2022; Rafanelli, 2022; Symons & Alvarado, 2022; Sardelli, 2022). In this presentation, I introduce a taxonomy of AI-mediated causes of epistemic injustice in India. Focusing on how AI affects background identity prejudices, as understood with insights from India’s critical AI fairness literature (Sambasivan et al., 2021), I propose a typology of epistemic injustices: ‘AI-exacerbated’, ‘AI-generated’ and ‘AI-consolidated’. The taxonomy contributes to an understanding of AI-mediated epistemic injustice in India’s unique socio-political context. ‘AI-exacerbated’ epistemic injustices are caused by pre-existing identity prejudices whose effects are exacerbated by AI. An illustration is Delhi’s predictive policing system, which reproduces pre-existing police bias against vulnerable groups, risking ‘testimonial’ and ‘hermeneutical’ injustice (Marda & Narayan, 2020). ‘AI-generated’ epistemic injustices arise where the deployment of AI systems generates new forms of identity prejudice. An illustration is the use of inaccurate Emotion Recognition Technology (‘ERT’) by Lucknow police to identify distressed women. ERTs risk creating a new form of identity prejudice whereby certain expressions recognised as ‘distress’ by the ERT may be prejudicially equated with the presumed appearance of a ‘distressed’ woman, leading to ‘testimonial injustice’ (Ara, 2021). ‘AI-consolidated’ epistemic injustices arise where a pre-existing identity prejudice is consolidated with a new identity prejudice generated by AI. An illustration is the mandatory use of efficiency-tracking smartwatches on Indian sanitation workers. A pre-existing identity prejudice (i.e. caste bias) enabled the intrusive surveillance practice, which was then consolidated with a new form of prejudice: the apparent objectivity of the tracking data led the AI-based smartwatches to be considered more reliable than the human workers, potentially leading to ‘testimonial’ and ‘hermeneutical’ injustice (Inzamam & Qadri, 2022).

Author bio: Suvradip is currently completing a Master of Laws at the University of Melbourne. As part of his Masters, Suvradip will be a Visiting Student at the Leverhulme Centre for the Future of Intelligence (LCFI). At LCFI, Suvradip will work under the mentorship of Dr Kanta Dihal to explore the ethics of AI systems through an intercultural lens. In his research, Suvradip is interested in combining understandings of colonial histories of knowledge with theories of epistemic justice and intercultural ethics to understand the impact of AI on marginalised populations. Suvradip graduated with First Class Honours from the University of Queensland with a Bachelor of Science/LLB (Hons), majoring in physics. Since graduating, he has been involved in various projects researching the impact of technology on society, including at Harvard University’s Berkman Klein Center for Internet & Society and the Global Catastrophic Risk Institute. His research has been published in the Australian Law Journal and the Proceedings of the AI and Ethics Society Conference. He has previously practised as a commercial lawyer and was an associate to a Judge in the Queensland Court of Appeal.

Recorded Presentation | 26 April 2023

#SocialJustice #EpistemologicalDifferences #India
