Towards a Praxis for Intercultural Ethics in Explainable AI

Abstract: Explainable AI is often promoted as a way to help end users understand how machine learning models arrive at their predictions. Yet most of these benefits are reserved for those with specialized domain knowledge, such as machine learning developers. Recent research has argued that explainability can make AI more useful in real-world contexts, especially within low-resource domains and emerging markets. While AI has transcended borders, little work focuses on democratizing the concept of explainable AI to the “majority world”, leaving much room to explore and develop new approaches that cater to the distinct needs of users in these regions. This work introduces an intercultural ethics approach to AI explainability. It aims to examine how cultural nuances shape the idea of “explaining”, how existing cultural norms and values influence the adoption of modern digital technologies such as AI, and how situating local knowledge in the development of AI technologies can improve user understanding and acceptance of these systems.

Author bio: Chinasa T. Okolo is a fifth-year Ph.D. candidate in the Department of Computer Science at Cornell University. Before coming to Cornell, she graduated from Pomona College with a B.A. in Computer Science. Her research interests include explainable AI, human-AI interaction, global health, and information & communication technologies for development (ICTD). Within these fields, she works on projects that examine how frontline healthcare workers in rural India perceive and value artificial intelligence (AI) and how explainability can best be leveraged in AI-enabled technologies deployed throughout the Global South, with a focus on healthcare.

Recorded Presentation | 26 April 2023

#Explainability #InterculturalApproaches
