Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 1: Common Vocabularies

26 April | 9.30 am | Chair: Maurizio Ferraris | Venue: Frankopan Hall

Presentation 1: To Build “Fairer AI”, First Thoroughly Understand “Fairness”: A Multidisciplinary Review Through an Intercultural Lens

Presenter: Was Rahman

Abstract: News reports of “unfair” AI have become all too familiar, stretching back beyond Amazon’s inadvertently sexist recruitment system and Google’s image-recognition system mislabelling Black people as gorillas. Few would disagree that these outcomes are wrong, but different academic disciplines offer very different views not just of how to address such issues, but of what the issue actually is. This paper explores what we could and should learn about building fairer AI if we treat these disciplinary differences as a form of cultural difference.

As with other forms of culture, each discipline brings its own perspective, context and ontology to the subject. For example, computer science typically treats AI unfairness in explicit mathematical terms, whereas social scientists may point to implicit attitudes and behaviours rooted in colonial history. Meanwhile, legal scholars are likely to focus on demonstrable, material harm linked only to specific protected characteristics. Psychologists, neuroscientists, politicians and philosophers add further variety, alongside other, equally valid, academic perspectives.

This paper examines the implications of this diversity for building fairer AI systems. It is based on an ongoing investigation into discrimination against under-represented groups in business decision-making. The findings are grounded in, and illustrated by, unpublished interview data and publicly available examples.

The paper lays out a foundation of diverse literature, including a critical review of common AI fairness assurance approaches and mechanisms. It focuses on underlying concepts, definitions, indicators and metrics, and language, and considers different cultural models. Using a common structure, it compares and contrasts how different disciplines perceive (AI) fairness.

This multidisciplinary review highlights valuable insights into fairness that are rarely considered in AI work rooted in a single field. It reminds us of a deceptively simple finding of the overall work: to create fairer AI, we must understand fairness (at least) as well as we understand AI.

Author bio: Was Rahman is a researcher and consultant in ethical AI. He is a doctoral researcher at Coventry University, and CEO of AI Prescience, a consultancy helping organisations use AI ethically and responsibly. He has over 30 years’ global experience using data and technology to improve business performance.

His research interests include the use of AI in organisational decision-making, ethical AI governance, and the impact of AI on social division. His current work is an investigation into the ontological diversity of different disciplinary approaches to AI fairness, focusing on AI-enabled business decision-making.

In business, Was has worked with large corporates, start-ups and SMEs around the world, advising CxOs, boards and investors. He has held leadership roles at Accenture, Infosys and Wipro, managing businesses in the US, EU and Asia Pacific. He has also run start-ups and raised funding. For governments, Was has advised UK and Indian political leaders on technology industry policy.

Was holds degrees in Physics from Oxford and in Computing from Coventry University. His AI and data science education is courtesy of Stanford, Johns Hopkins, Amazon and Google. He has been a guest lecturer at Oxford’s Saïd Business School, Cambridge’s Judge Business School, London Business School and IIT Madras.

Presentation 2: Towards a Praxis for Intercultural Ethics in Explainable AI

Presenter: Chinasa T. Okolo

Abstract: Explainable AI is often promoted as a way of helping end users understand how machine learning models arrive at their predictions. Still, the majority of these benefits are reserved for those with specialized domain knowledge, such as machine learning developers. Recent research has argued that explainability can be a viable way of making AI more useful in real-world contexts, especially within low-resource domains and emerging markets. While AI has transcended borders, little work focuses on democratizing the concept of explainable AI for the “majority world”, leaving much room to explore and develop new approaches within this space that cater to the distinct needs of users in these regions. This work introduces the concept of an intercultural ethics approach to AI explainability. It aims to examine how cultural nuances impact the idea of “explaining”, how existing cultural norms and values influence the adoption of modern digital technologies such as AI, and how situating local knowledge in the development of AI technologies can improve user understanding and acceptance of these systems.

Author bio: Chinasa T. Okolo is a fifth-year Ph.D. Candidate in the Department of Computer Science at Cornell University. Before coming to Cornell, she graduated from Pomona College with a B.A. in Computer Science. Her research interests include explainable AI, human-AI interaction, global health, and information & communication technologies for development (ICTD). Within these fields, she works on projects to understand how frontline healthcare workers in rural India perceive and value artificial intelligence (AI) and examines how explainability can be best leveraged in AI-enabled technologies deployed throughout the Global South, with a focus on healthcare.

Presentation 3: Automating Desire: Laws of Sex Robotics in the US and South Korea

Presenter: Michael Thate

Abstract: Oliver Wendell Holmes, the great American Supreme Court jurist, famously asserted that basic principles do not get one very far. Legal disputes and thorny ethical challenges live betwixt and between competing and equally valid claims of rights and interests. What is needed in such scenarios is the ability to draw a line between them.

“Automating Desire” adopts this legal realism by considering the legal and policy challenges introduced by the new industry of A.I. sex robotics. As opposed to an “ethics of” paper, or a basic-principles argument, “Automating Desire” is an exercise in legal reasoning through case law as it relates to the legal framing of A.I. and desired policy outcomes in the sex industry. The emerging technology is creating new legal challenges, and not a few uncomfortable policy scenarios.

The paper will proceed in three broad moves. Part One, “Obscene Desire,” will work through three American cases as they relate to import bans on the technology of child sex robots. U.S. case law distinguishes between pornography and obscenity. Pornography, insofar as it does not cross the so-called “Miller standard” of obscenity, is protected speech. Obscenity, which includes child pornography, is unprotected speech (Roth v. U.S.; New York v. Ferber). With respect to child sex robots, there are three landmark cases to consider: Ashcroft v. Free Speech Coalition; U.S. v. Williams; and Miller v. California. What emerges from these cases are fundamental challenges of legal classification and the designation of real v. artificial. In its holding in U.S. v. Williams, for example, the Supreme Court stated, leaning on Ashcroft v. Free Speech Coalition, that child sex robots are protected speech insofar as the person who solicits the material reasonably believes it does not involve real children. This section considers not only the legal framing of the matter, but also takes seriously our intuitive sense of wrongdoing, irrespective of the real v. artificial standard, in policy questions of therapy, drawing on legal philosopher Joshua Kleinfeld’s rubric of “victimization” (2013).

Part Two, “Governing Desire,” considers the South Korean court’s ban on imported sex robots on the grounds of what it deemed threats to disrupt the constitutional order (Article 234 of South Korea’s Customs Act). This ruling, I suggest, alerts us to the governing function of “Desire” on the one hand and, on the other, to the policy question of control: who gets to control what I desire?

Part Three, “Automating Desire,” concludes by thinking prospectively from the edge of where law and policy currently stand on the question of sex robots. This section reflects on both the discomforts and pro-social applications of the technology of A.I. Sex Robotics.

Author bio: Michael J. Thate, Ph.D., is an Associate Research Scholar and Lecturer at Princeton University’s Faith & Work Initiative, SEAS, and the Keller Center for Innovation in Engineering Education, and a law student at Northwestern Pritzker School of Law.

He has held visiting fellowships and lectureships at Yale, Harvard, Durham University (U.K.), and l’École normale supérieure, Paris. He was a recipient of the Alexander von Humboldt Award, spending three years at Universität Tübingen in Germany. Michael’s academic interests are informed and complemented by his corporate consulting experience on matters relating to brand equity, communication strategy, and corporate trust.

Michael is the author of two monographs: Remembrance of Things Past? (Mohr Siebeck, 2013) and The Godman and the Sea (UPenn Press, 2019). He has edited four other volumes and written several articles on subjects including suicide, philosophy of religion, participation, labor, time, money, the second space age, the attention economy, design thinking, and business ethics. His work attempts to track genealogies of thought and to set assemblages of ethical questions into comparison. He is currently working on two books: Scented Life and Natural Prayers of the Soul. The former considers the ethical challenges of difference among and between lifeforms. The latter is an ethical reflection on the so-called attention economy.