Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 13: Intercultural and Decolonial Approaches in Practice

27 April | 2.00 pm | Chair: Chelsea Haramia | Venue: Frankopan Hall

Presentation 1: Operationalizing decolonial AI through Ethics-as-a-Service

Presenters: Saif Malhem, Daricia Wilkinson, Kathy Kim, Paul Sédille, and Nupur Kohli

Abstract: With more than 80% of papers published at AI conferences since 2018 attributed to authors in East Asia, North America, or Europe, efforts in AI ethics risk being futile if they continue to fail to account for the cultural and regional contexts in which AI operates. Meanwhile, two concepts have gained prominence over the same period: decolonial AI and Ethics as a Service. Each has its own merits and offers needed contributions to the design and deployment of artificial intelligence. Decolonial AI acknowledges the evolution of value and power, and leverages historical hindsight to explain the patterns of power that shape our intellectual, political, economic, and social world. Employing foresight, it provides tactics to better align research and technology development with established ethical principles, centering vulnerable peoples who continue to bear the brunt of the negative impacts of innovation and scientific progress. Ethics as a Service, meanwhile, offers an on-demand, customizable approach to examining AI development and deployment on a case-by-case basis, in a manner that can satisfy both the agreed-upon principles and the technical translational tools tasked with fulfilling them. It does so by calibrating these tools in a balanced fashion, so that they are neither too flexible (and thus vulnerable to ethics washing) nor too strict (unresponsive to context). Our research connects the two concepts and offers a practical framework for operationalizing the foresight and tactics provided by decolonial AI when deploying Ethics as a Service. In doing so, our research first provides a global list of some of the most prominent regional and cultural values, along with a replicable methodology for sourcing and identifying more.
Second, for a given AI deployment scenario, our research answers the following questions: how to select the values (Western or otherwise) that befit the scenario, how to interpret each selected value, and how to build a roadmap for operationalizing that value via software tools, akin to existing approaches for values more popular in the literature, such as explainability and fairness.

Author bios: Daricia Wilkinson earned her Ph.D. in Human-Centered Computing at Clemson University. Her dissertation investigated alternative pathways for the design of justice-oriented safety countermeasures, particularly for people in non-Western contexts. During her time as a graduate student, she was selected as a Meta (formerly Facebook) Fellow, a Google Scholar, and a Trailblazer in research by the United Nations for her work on online safety in the Caribbean.

Saif Malhem is the founding co-chair of the AI Future Lab: the largest global lab for millennials and Gen Z in artificial intelligence, built by members of the World Economic Forum’s Global Shapers Community. In 2022, the AI Future Lab launched the world’s youth AI manifesto at the International Telecommunication Union’s Generation Connect conference in Kigali, Rwanda. For his leadership in AI and climate technology, Saif was named one of Canada's Top 30 Under 30 in sustainability in 2020. He is an engineering professional with experience in Fortune 500, nonprofit, and start-up environments. Within the Global Shapers Community, Saif sits on the impact council and was one of the #Davos50 Global Shapers invited to attend the World Economic Forum’s Annual Meeting in 2022. Saif has been a public speaker and public speaking coach for over 10 years and has spoken at a number of international conferences in India, Germany, and Canada.

Paul Sédille is a Belfer Center Student Fellow pursuing a joint degree between the Harvard Kennedy School and Stanford Graduate School of Business. Prior to this degree, he lived for 10 years in China, working as a writer and videographer in Hong Kong and Beijing. His research interests cover China, media, and tech, from US-China relations to new media business models. He is a member of Global Shapers, where he has worked on digital literacy, public involvement in AI, refugee rights, and ocean conservation. Paul is a graduate of the Beijing Film Academy, Sciences Po Paris, and Sorbonne University.

Kathy Kim is a lead data scientist and data strategist with the Booz Allen Hamilton CTO Artificial Intelligence (AI) Integrated Management Team (IMT). She has extensive experience in engaging federal agencies on topics including data governance, policy, architecture, other specialized technologies, and national security. She previously supported the Millennium Challenge Corporation’s MCC-PEPFAR Data Collaboratives for Local Impact (DCLI) program as well as the Aspen Institute Philanthropy & Social Innovation Nonprofit Data Project as a William Randolph Hearst Fellow. Kathy received her bachelor's degree in international studies from American University's School of International Service, along with the SIS Resonator's Award for Outstanding Service upon graduation. She recently received a certificate from the Center for Asian Pacific American Women (CAPAW) for "Unleash the S(Hero) in You", a year-long Women's Leadership Program funded by the Walmart Center for Racial Equity. In her spare time, Kathy runs a 501(c)(7) organization, WEF Global Shapers Community DC, and leads pro bono career coaching services for first-generation college students and BIPOC professionals.

Nupur Kohli is an award-winning healthcare leader, advisor, and medical doctor. She previously worked as a medical advisor to the largest health insurance company in the Netherlands. Nupur is an appointed member of the European Health Parliament, which aims to create a resilient European Health Union. She is also a supervisory board member for UNICEF Netherlands and a member of the International Advisory Board of the Amsterdam Economic Board. Nupur aspires to make healthcare better, more efficient, and more accessible. Her expertise extends to building resilient health systems, the social determinants of health, health equity, and stress and productivity. Nupur actively contributes to World Economic Forum projects including Chatbots RESET, Generation AI: Developing Artificial Intelligence Standards for Children, and a shared learnings platform on the transformative role of women and girls in health.

Presentation 2: Multicultural AI design and Ubuntu philosophy

Presenters: Bev Townsend, Bongi Shozi, and Donrich Thaldar

Abstract: Much has recently been published on the core ethical values guiding policy frameworks on responsible AI. While many of these ethical principles form a common core or corpus of values shared between and across applications and locations, their realisation must be articulable through lenses that are relevant and appropriate to a particular context. Lenses, we argue, should be multicultural in formulation. One such multicultural lens is Ubuntu, the traditional philosophy of sub-Saharan Africa, which prizes communitarianism and values ideas of humanness, co-operation, and reciprocity. The development and deployment of AI cannot be divorced from important socio-political, philosophical, and normative debates involving inclusion and diversity. While foreground values such as transparency, fairness, and justice give an appearance of consensus, their use is highly context-sensitive and application-specific. Informing an algorithmic outcome requires consideration of a multitude of normatively relevant reasons – both quantitative and qualitative – including consideration of the underlying prevailing values, interests, and duties. These reasons provide a scaffold for building an ethical case and a legitimate pathway for explicitly selecting (either by justifying or refuting) the course of action an algorithmic system should follow. We argue that any account of meaningful embedded intelligence should include as part of the conversation previously marginalised, silenced, and under-represented voices, both in establishing this common core of values and in articulating how these values find application. The system is not only itself a network of functions; it is embodied and embedded within a broader, holistic, and connected functioning system of real life, and is not independent of it: a real world that is, by nature and design, multicultural.
Thus, the system must not only account for, and follow, the rules of – and be integrated within – a cultural and societal domain, but also actively participate in and contribute to it.

Author bios: Dr. Bev Townsend is a Postdoctoral Researcher at the York Law School at the University of York, UK and an Honorary Research Fellow at the University of KwaZulu-Natal, South Africa. Her expertise is in integrating law and ethics into safe and resilient autonomous systems (robots). Her research has focused on law, ethics, human rights, artificial intelligence, and governance.

Dr. Bongi Shozi is a Postdoctoral Scholar at the Institute of Practical Ethics at the University of California, San Diego and an Honorary Research Fellow at the University of KwaZulu-Natal, South Africa.

Professor Donrich Thaldar is an academic at the Law School of the University of KwaZulu-Natal, Durban, where he chairs the Health Law & Ethics Research Interest Group. He is the Principal Investigator of a research project on the legal aspects of the use of data science in health innovation in Africa, funded by the NIH. He also has a private legal practice, and has served as legal counsel in 13 reported cases.

Presentation 3: How People Ethically Evaluate Facial Analysis AI: A cross-cultural study in Japan, Argentina, Kenya, and the United States

Presenters: Severin Engelmann and Chiara Ullstein

Abstract: In computer vision AI ethics, a key challenge is to determine how digital systems should classify human faces in images. Across different fields, there has been considerable scholarly debate about normative guidelines that inform policy-making for facial classification. In our previous work, we applied an experimental philosophy approach to investigate how non-experts and experts in AI deliberate about the validity of AI-based facial classifications [1, 2]. Our analysis of 30,000 written justifications using the transformer-based language model RoBERTa quantified the normative complexity behind classifying human faces. Experts and non-experts found some AI facial classifications morally permissible and others objectionable. We also found justificatory pitfalls that legitimized invalid facial AI classifications: some justifications reflected an overconfidence in AI capabilities, while others appealed to narratives of bias-free technological decision-making or cited the pragmatic benefits of facial analysis in specific decision-making contexts such as advertising or hiring. Thus, contrary to popular justifications for facial classification technologies, these results suggest that there is no such thing as a “common sense” facial classification that accords simply with a general, homogeneous “human intuition.” However, cross-cultural perspectives have been missing entirely from this debate. In ongoing work, we are adding these missing cross-cultural perspectives, working with collaborators in Japan, Argentina, and Kenya to extend this research project to an analysis of non-experts’ justifications of facial AI classification in those countries. We are curious to understand whether there are cultural commonalities and differences in the ethical evaluation of facial AI classifications. At the Desirable AI conference, we will present the quantitative and qualitative results of our cross-cultural study in Japan, Argentina, Kenya, and the US.
This research supports critical policy-making by documenting cross-cultural perceptions and judgments of computer vision AI classification projects, with the goal of developing ethical digital systems that work in the public’s interest.

Author bios: With a background in philosophy of technology and computer science, Severin Engelmann is an ethicist focusing on the ethics of digital platforms and systems. Currently, he studies how non-experts in AI ethically evaluate AI inference-making across computer vision decision-making scenarios. In this research project, he also investigates whether and to what extent participatory approaches to AI ethics help advance the ethical governance of algorithmic systems.

Chiara Ullstein is a Ph.D. student at the Chair of Cyber Trust. With a background in Politics and Technology, Chiara's research explores public participation in the development and regulation of AI applications. Chiara applies both qualitative and quantitative research methods.