Artificial intelligence as a decolonisation tool: Lessons from libraries, archives and museums

Abstract: The decolonial turn in cultural institutions has shed light on the power dynamics behind galleries, libraries, archives and museums (GLAMs), surfacing the biases and exclusions generated by Western forms of knowledge production. Cataloguing frameworks used to structure and document collections are modelled upon Western epistemologies. GLAMs’ widespread digitisation efforts and adoption of collection management systems and digital methods have supported preservation, accessibility, and research. Artificial intelligence is increasingly used by these institutions to facilitate decision-making tasks around documenting and cataloguing activities, as well as to improve user access to the information they hold about their objects. While AI can be perceived as an assisting tool, it can also function as a threatening one. This perception is relevant for GLAMs because postcolonial digital humanities and decolonial computing emphasise how technologies rehearse colonial dynamics (Risam 2021; Adams 2021). Many of these technologies are built upon classification systems and methods, such as statistics, that were used to control populations in colonial territories and racialised neighbourhoods, and they employ datasets that misrepresent non-dominant cultures through the use of derogatory terms. However, AI, like other digital tools, can be used as one of the “technologies of recovery” (Gallon 2016) that unmask, repair, and remodel existing inequalities, biases, and other forms of colonial violence. For example, projects use AI to identify discriminatory and problematic terms in documentation, to tackle the omission of historically marginalised people from documentation, and to resurface hidden and forgotten objects. Through a series of examples of projects that use AI to decolonise museums and archives, this presentation highlights strategies proposed by critical, postcolonial, and decolonial digital humanities that are relevant to the wider AI community working to make AI fairer and more equitable, in particular AI practitioners interested in developing systems that address biased and problematic datasets.

Author bios: Dr Maribel Hidalgo Urbaneja is a postdoctoral research fellow at the University of the Arts London, working on the Worlding Public Cultures project, and was a research associate for LUSTRE (Unlocking our Digital Past with Artificial Intelligence) at Loughborough University. She obtained a PhD in Information Studies from the University of Glasgow. Her research interests span digital humanities, digital museum and heritage studies, digital narratology, and critical and decolonial approaches in digital humanities. Additionally, she has held positions in digital departments at The Getty in Los Angeles and the National Gallery of Art in Washington DC.

Dr Lise Jaillant is a senior lecturer in Digital Humanities at Loughborough University. She has a background in publishing history and digital humanities, and her expertise centres on issues of Open Access and privacy, with a focus on archives of digital information. She was the first researcher to access the emails of the writer Ian McEwan at the Harry Ransom Center in Texas. Her work has been recognised by a British Academy Rising Star award. Since 2020, she has led several international networks on Archives and Artificial Intelligence: the LUSTRE Network, AURA Network, AEOLIAN Network and EYCon project.

Recorded Presentation | 27 April 2023

#NLP #DecolonialApproaches #ArchivesAndMuseums
