Panel 10: In Search of New Fundamentals
27 April | 9.30 am | Chair: Alan Blackwell | Venue: Frankopan Hall
Presentation 1: The Korean Value of ‘Jeong’
Presenters: Robert M Geraci and Yong Sup Song
Abstract:
Developing ethical artificial intelligence has become a crucial problem, especially as advancements in machine learning lead to its increasing deployment across a broad spectrum of social and political processes. The frequent assertions of the independence of science from culture and religion have been widely debunked, and the social impact of digital technology makes this reality increasingly obvious. Drawing on religious and cultural values helps expose the lacunae in current approaches to robotics and AI, and creates opportunities to design AI for human flourishing. The Korean value of jeong offers a specific example of such cultural theology and can be applied in the ongoing development of AI and related technologies. Jeong is a complex social phenomenon encompassing empathy, solidarity, and mutual obligation. Making jeong a priority in the generation of new AI technologies will be relevant to the use of AI in human-human and, theoretically, human-robot interactions. The conjunction of theological, religious studies, and social AI approaches shows that ethical AI depends on more than the current focus on Western philosophical ethics. If AI design incorporates human-human and human-AI jeong, the challenges of surveillance, algorithmic bias, and even hypothetical AI superintelligence become more manageable.
Author bios: Robert M Geraci is Professor of Religious Studies at Manhattan College. He is the author of several books, including Futures of Artificial Intelligence: Perspectives from India and the US (Oxford 2022) and Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (Oxford 2010). His research has been supported by the US National Science Foundation, the American Academy of Religion, and two Fulbright-Nehru research awards. He is a Fellow of the International Society for Science and Religion.
Yong Sup Song is an Assistant Professor of Christian Ethics and Theology at Youngnam Theological University and Seminary, South Korea. His recent interests focus on the ethical issues of artificial intelligence. As a Korean Christian ethicist, he emphasizes prioritizing the poor and the marginalized, and including regional values in the development of AI. He is currently working on discovering and introducing cultural values in Korean society as regional values for moral AI. He has conducted three research projects on theology and artificial intelligence for the National Research Foundation of Korea.
Presentation 2: The Five Tests: Designing and Evaluating AI According to Indigenous Māori Principles
Presenter: Luke Munn
Abstract: As AI technologies are increasingly deployed in work, welfare, healthcare, and other domains, there is a growing realization not only of their power but of their problems. AI has the capacity to reinforce historical injustice, to amplify labor precarity, and to cement forms of racial and gendered inequality. An alternative set of values, paradigms, and priorities is urgently needed. How might we design and evaluate AI from an indigenous perspective? This paper draws upon the Five Tests developed by Māori scholar Sir Hirini Moko Mead. This framework, informed by Māori knowledge and concepts, provides a method for assessing contentious issues and developing a Māori position. The paper takes up these tests, considers how each might be applied to data-driven systems, and provides a number of concrete examples. This intervention not only challenges the priorities that currently underpin contemporary AI technologies but also offers a rubric for designing and evaluating AI according to an indigenous knowledge system.
Author bio: Luke Munn is a Research Fellow in Digital Cultures & Societies at the University of Queensland. His wide-ranging work investigates the sociocultural impacts of digital cultures, from data infrastructures to platform labor and far-right radicalisation, and has been featured in highly regarded journals such as Cultural Politics, Big Data & Society, and New Media & Society as well as popular forums like the Guardian, the Los Angeles Times, and the Washington Post. He has written five books: Unmaking the Algorithm (2018), Logic of Feeling (2020), Automation is a Myth (2022), Countering the Cloud (2022), and Technical Territories (2023 forthcoming). His work combines diverse digital methods with critical analysis that draws on media, race, and cultural studies.
Presentation 3: What would an anti-casteist AI system look like?
Presenter: Shyam Krishna
Abstract: This paper fills an acknowledged gap in AI and wider digital research by positioning ‘caste’ as a complex phenomenon and exploring anti-casteist principles interpreted for AI contexts. It seeks to distil an ontological interpretation of caste and to advance an anti-casteist ethical approach to the design, policy, and governance of AI and algorithmic technologies. A conceptual indeterminacy and a subjective, experiential nature add to the challenge of interpreting caste within digital contexts. Caste cannot be explained solely through the conceptualisation of race and racial biases, or, more markedly, through prevalent class-based analysis. Further, caste is constituted of and experienced through largely communicative practices and contexts. As first principles, we propose returning to Dr. B.R. Ambedkar’s treatise on caste and its ‘mechanism, genesis and development’. From this and other literature, several main explanatory aspects can be identified and interpreted for AI contexts. We describe caste-based enclosures, by which we mean factors that allow for ideological or functional boundedness, formed in the (digitally mediated) real world or its virtual representations, that presuppose or permit caste-centric in/exclusion and caste homogeneity. Next is a preponderance of caste-centric purity in social relations, found either implicitly or explicitly, which can be queried in the human contexts or digital content of AI systems. There is also an intersubjectivity of caste among groups and individuals which establishes differing positionalities, the in/visibility of caste markers, and caste-originated power differentials, all of which can be questioned among the various stakeholders of AI systems. The paper maps this conceptualisation onto AI’s sociotechnical lifecycle of design, development, and deployment. Through this, the paper argues for a practical, ex-ante and ex-post embedding of Ambedkarite ideals, presenting a framework for an ideologically anti-casteist mechanism for the ethical assurance of AI.
Author bio: As an engineer-turned-researcher in critical data studies, I work to develop an ethical and social justice-oriented view of emergent digital innovations and the technopolitical ecosystems they inhabit. My research interests include the algorithmic and data practices of gig work, digital identity, and fintech platforms. I am interested in policy and governance issues surrounding AI, particularly in the global South, and in how these overlap with contexts of labour and credit. Currently, at The Alan Turing Institute, I research and advise organisations on national and global ethical policy agendas for AI and how these can be directed by principles of data justice.