Panel 14: Alternative Practices: New Datasets and Archives
27 April | 3.00 pm | Chair: Miri Zilka | Venue: Frankopan Hall
Presentation 1: Artificial intelligence as a decolonisation tool: Lessons from libraries, archives and museums
Presenters: Maribel Hidalgo-Urbaneja and Lise Jaillant
Abstract: The decolonial turn in cultural institutions has shed light on the power dynamics behind galleries, libraries, archives and museums (GLAMs), surfacing the biases and exclusions generated by Western forms of knowledge production. Cataloguing frameworks used to structure and document collections are modelled upon Western epistemologies. GLAMs’ widespread digitisation efforts and adoption of collection management systems and digital methods have supported preservation, accessibility, and research. Artificial intelligence is increasingly used by these institutions to facilitate decision-making tasks around documenting and cataloguing activities, as well as to improve user access to the information they hold about their objects. While AI can be perceived as an assisting tool, it can also function as a threatening one. This perception holds relevance for GLAMs, as postcolonial digital humanities and decolonial computing emphasise how technologies rehearse colonial dynamics (Risam 2021; Adams 2021). Many of these technologies are built upon classification systems and methods, such as statistics, used to control populations in colonial territories and racialised neighbourhoods, and employ datasets that misrepresent non-dominant cultures through the use of derogatory terms. However, AI, like other digital tools, can be used as one of the “technologies of recovery” (Gallon 2016) that unmask, repair, and remodel existing inequalities, biases, and other forms of colonial violence. For example, recent projects use AI to identify discriminatory and problematic terms in documentation, to tackle omissions of historically marginalised people, and to resurface hidden and forgotten objects. Through a series of examples of projects that use AI to decolonise museums and archives, this presentation will highlight strategies proposed by critical, postcolonial, and decolonial digital humanities that can be relevant to a wider AI community seeking to make AI fairer and more equitable, in particular AI practitioners interested in developing systems that address issues around biased and problematic datasets.
Author bio: Dr Maribel Hidalgo Urbaneja is a postdoctoral research fellow at the University of the Arts London, working on the Worlding Public Cultures project; she was previously a research associate for LUSTRE: Unlocking our Digital Past with Artificial Intelligence at Loughborough University. She obtained a PhD in Information Studies from the University of Glasgow. Her research interests span digital humanities, digital museum and heritage studies, digital narratology, and critical and decolonial approaches in digital humanities. She has also held positions in digital departments at The Getty in Los Angeles and the National Gallery of Art in Washington DC.
Dr Lise Jaillant is a senior lecturer in Digital Humanities at Loughborough University. She has a background in publishing history and digital humanities. Her expertise lies in issues of Open Access and privacy, with a focus on archives of digital information. She was the first researcher to access the emails of the writer Ian McEwan at the Harry Ransom Center in Texas. Her work has been recognised by a British Academy Rising Star award. Since 2020, she has led several international networks on archives and artificial intelligence: the LUSTRE Network, the AURA Network, the AEOLIAN Network, and the EYCon project.
Presentation 2: Diasporic communities and datasets
Presenter: Yifeng Wei
Abstract: Algorithmic bias occurs when there is a lack of data diversity. A commonly adopted solution is to improve an AI system with datasets drawn from minority groups, implying a broader and more intensive process of data harvesting. Unfortunately, recent discussion of AI ethics fails to recognise this procedure as a constant exposure of the marginalised, including the diaspora, to the latent risks of inescapable watching and listening. Meanwhile, the potential of the diaspora’s elusive identity has received scant attention as a possible way to resist the persistent gaze of the dominant. Addressing this knowledge gap, my research criticises compulsory transparency in facial recognition as an exercise of power while imagining an alternative, deliberately indistinct AI ethics originating from the diaspora. First, my study elucidates how power is exercised through the pursuit of facial transparency and certainty, and elaborates on how the process of facial dataset-making colludes with colonial photography on this point. Second, my research unpacks a poetic opacity that originates in the nomadic identity of the diaspora. Such ambiguity has the potential to contribute to an AI ethics focused on marginalised communities and to resist top-down viewing. In addition, my study uses the documentary Welcome to Chechnya as a case study. It argues that the obfuscation created by deepfake technology in this work not only protects the privacy and dignity of the Chechen LGBTQ diaspora but also opens a chance to challenge the totalitarian surveillance system. Last, my research articulates that the potential of AI for the weak lies not in how accurate and transparent an algorithm can be but in the extent to which those people can retain their opacity and invisibility with AI in the face of a viewing entangled with power.
Author bio: Yifeng Wei is an artist, curator, and PhD candidate in Visual Culture at the National College of Art and Design in Ireland. Reflecting on the legacy of cybernetics and systems theory, Wei’s research interests lie in digital colonisation and emancipation, as well as resistance to algorithmic bias and surveillance capitalism. His study relates current technological surveillance to the desire for certainty in cybernetics and systems theory, and seeks a possible mode of resistance in the aesthetics of opacity. Wei’s investigation of this aesthetics involves writing an alternative art history focused on anonymous and incognito artists. It also touches upon the analysis of artistic practices that protect and liberate the oppressed by adopting non-transparent technologies, including the black-box mechanism in artificial intelligence. Revolving around artists who apply such a nebulous approach to resist the power structures looming behind technological society, his recent curatorial practice, “The Cloud of Unknowing”, was shortlisted as a finalist for the Hyundai Blue Prize Art+Tech 2023.
Presentation 3: AI's Colonial Archives
Presenters: Rida Qadri, Huma Gupta, Katrina Sluis, Fuchsia Hart, Emily Denton
Abstract: Generative AI technologies, such as text-to-image models, have recently attracted widespread attention. As with all AI technologies, critics note that these models enact various forms of simplification, erasure and bias in their outputs. Yet, to understand the visual representations AI models produce (and to disrupt them), we must understand the politics and history of the canonical archives they build upon, attending to the power differences and gazes that have historically been amplified in those archives. In this paper, we evaluate AI-generated images by situating them within broader global histories of cultural preservation and the representation of visual archives. We focus specifically on the representations AI models generate of the Global South. By juxtaposing AI-generated images of communities and practices from the Global South with historical examples from visual archives, we trace the lineage of the racial, ethnic, gender and class narratives these models reproduce. We draw on critical scholarship on colonial archives, museum curatorial practices, and the history of photography to show how the visual archives underpinning AI models are sites of miscategorization, produced through an elite imperialist understanding of the “other” to perpetuate an orientalist gaze. Importantly, this gaze persisted in cultural archives produced not just “by the West” but also from within the South, complicating the question of where we might find an ‘inclusive’ archive.
Author bios: Rida Qadri is a Research Scientist at Google. Her research interrogates the overlaps between culture and AI. She is interested in the organizational, epistemological and geographical cultural assumptions underpinning the design and deployment of AI systems. She also studies the tensions and frictions that emerge when mono-cultural AI design choices are universalized through global deployment at scale. Prior to joining Google, she completed her PhD in Urban Studies at the Massachusetts Institute of Technology.
Huma Gupta is Assistant Professor in the Aga Khan Program for Islamic Architecture at MIT. Gupta holds a PhD in the History and Theory of Architecture and a Master's in City Planning from MIT. Currently, she is writing her first book, The Architecture of Dispossession, which is based on her research examining state-building through the architectural production of the dispossessed. Her broader research interests include the economic, cultural, and political relationships between discourses of architecture, development, and urban planning. She is particularly interested in developing methodologies that use sonic, visual, and other sensory archives to construct histories of subaltern spaces and subjects.
Fuchsia Hart is Iran Heritage Foundation Curator for the Iranian Collections at the Victoria and Albert Museum in London. She holds a BA in Persian, an MPhil in Islamic Art with Arabic, and is working towards the completion of a PhD at the University of Oxford, also in Islamic Art, with a focus on shrines in 19th-century Iran and Iraq.
Katrina Sluis is Associate Professor and Head of Photography & Media Arts in the School of Art & Design at the Australian National University, where she convenes the Computational Culture Lab. Katrina’s research is broadly concerned with the politics and aesthetics of art and photography in computational culture, and with their social circulation, automation and cultural value. As a curator and educator, she has worked for the past decade with museums and galleries to support digital strategy, digital programming and pedagogy. Her present work addresses the emerging paradigms of human-machine curation as a contemporary response to the massive intensification of global image production and circulation.
Emily Denton (they/them) is a Senior Research Scientist at Google, studying the societal impacts of AI technology and the conditions of AI development. Prior to joining Google, Emily received their PhD in machine learning from the Courant Institute of Mathematical Sciences at New York University. Though trained formally as a computer scientist, Emily draws ideas and methods from multiple disciplines in order to examine AI systems from a sociotechnical perspective. Emily’s recent research centers on a critical examination of the histories of the datasets that make up the underlying infrastructure of AI research and development, and of the norms, values, and work practices that structure their development and use.