The Five Tests: Designing and Evaluating AI According to Indigenous Māori Principles
Abstract: As AI technologies are increasingly deployed in work, welfare, healthcare, and other domains, there is a growing realization not only of their power but of their problems. AI has the capacity to reinforce historical injustice, to amplify labor precarity, and to cement forms of racial and gendered inequality. An alternative set of values, paradigms, and priorities is urgently needed. How might we design and evaluate AI from an Indigenous perspective? This article draws upon the Five Tests developed by Māori scholar Sir Hirini Moko Mead. This framework, informed by Māori knowledge and concepts, provides a method for assessing contentious issues and developing a Māori position. The article takes up these tests, considers how each might be applied to data-driven systems, and provides a number of concrete examples. This intervention not only challenges the priorities that currently underpin contemporary AI technologies but also offers a rubric for designing and evaluating AI according to an Indigenous knowledge system.
Author bio: Luke Munn is a Research Fellow in Digital Cultures & Societies at the University of Queensland. His wide-ranging work investigates the sociocultural impacts of digital cultures, from data infrastructures to platform labor and far-right radicalisation, and has been featured in highly regarded journals such as Cultural Politics, Big Data & Society, and New Media & Society, as well as popular forums such as the Guardian, the Los Angeles Times, and the Washington Post. He has written five books: Unmaking the Algorithm (2018), Logic of Feeling (2020), Automation is a Myth (2022), Countering the Cloud (2022), and Technical Territories (forthcoming 2023). His work combines diverse digital methods with critical analysis drawing on media, race, and cultural studies.
#IndigenousPerspectives #Values #Religion #RelationalApproaches #NewZealand