Conference Preview: Many Worlds of AI

The Leverhulme Centre for the Future of Intelligence is hosting a conference on intercultural approaches to AI ethics. “Many Worlds of AI”, running 26th–28th April 2023, comprises three days of presentations and workshops spanning policy and governance, philosophy, and design practice.

Why hold the conference?


There’s a persistent drive around the world to establish a common, universal approach to responsible AI. Emerging guidelines often foreground concepts like ‘transparency’, ‘fairness’, and ‘justice’, usually presented without precise definitions so that they can comfortably accommodate different perspectives.

On the one hand, looking for common ground is necessary for technologies that will have planetary impact. On the other, applying universalising ideas often means, in practice, imposing a single perspective. There’s a danger that one definition of an ethical value will be taken as standard and forced into contexts where it’s inappropriate, or that valuable but marginalised ideas for solving the crises ahead of us may never come to fruition.

This conference will examine what it means for a plurality of visions to co-exist, both philosophically and in practice for the designers and developers working at this time of extraordinary change.

These technologies will affect all of humanity, but they won’t impact all humans in the same ways. We’re trying to bring out the different ways of thinking about technology and the world, or world-building, and think through the conflicts rather than around them – Dr Tomasz Hollanek, Conference project lead.

To bridge philosophical concerns and practical strategies


Often, the makers of AI tools lack the time and training to explore in depth how to translate ethical principles into design choices. In any case, there are rarely clear-cut answers on how to operationalise values like ‘human dignity’.

This conference seeks to identify the higher-level questions that designers, developers, and policymakers should explore during the creative process in order to realise the values of frameworks like the EU’s upcoming AI Act.

When it comes to bias and discrimination, for example, rather than just focusing on making datasets more inclusive, we want designers to be more aware of the structural inequalities that caused the bias in the first place, and what that might mean for their work – Dr Tomasz Hollanek, Conference project lead.

To create dialogue between cultures, industries and disciplines 


The conference benefits from the expertise of academics and professionals from diverse communities around the world, with presentations spanning topics as varied as the UN’s peacemaking tools in Libya and Yemen, data capitalism in Sub-Saharan Africa, and the digital afterlives of colonialism in India.

The agenda was jointly curated by the Universities of Cambridge (UK) and Bonn (Germany); one panel is co-organised with the Berggruen Institute’s Research Center at Peking University (China), and another with Ashoka University (India), which will also host a follow-up conference later in 2023.

To work through the ways artificial intelligence will relate to society as a whole, presentations will be cross-disciplinary as well as intercultural. The call for papers was extended beyond academic circles to include artists, communicators, policy specialists, and technologists.

We are aware of the need for structured conversations between philosophers, developers, policy makers, and artists. Putting artists in the same room with DeepMind scientists is just one of the first steps in the process – Dr Tomasz Hollanek, Conference project lead.

How did the conference come about? 


“Many Worlds of AI” forms part of the “Desirable Digitization” research programme, a collaboration between the Universities of Cambridge and Bonn that investigates strategies for designing AI responsibly, centring the values of sustainability and social justice.

“Desirable Digitization” follows on, in many ways, from an earlier programme, the LCFI’s “Global AI Narratives” project (2018–2022). That project investigated the cultural histories and philosophical traditions that AI technologies emerge into: the portrayals, values and attitudes that shape how these developments are received.

The project culminated in a book, “Imagining AI: How the World Sees Intelligent Machines”, which will launch at the conference. It is edited by Dr Stephen Cave, Director of CFI and Principal Investigator on the “Desirable Digitization” programme, and Dr Kanta Dihal, formerly a research fellow at CFI and PI on the “Global AI Narratives” project and now a Lecturer in Science Communication at Imperial College London.

“Desirable Digitization” utilises the network of global partners developed during the “Global AI Narratives” project but shifts the focus more concretely to practice, examining AI applications operating in different contexts.

An ongoing project 


“Many Worlds” is the inaugural event in a biennial series that will facilitate intercultural dialogue in the field of AI ethics. 

By teasing out the differences between diverse approaches to AI technologies and their role in society, this series aims to identify ways to respond better to the opportunities and challenges they present.


You can view the conference website here
You can also view a detailed schedule, including speaker bios and presentation abstracts, here

Written by Miranda Gabbott. 
