Panel 3: Accounting for AI harms
26 April | 12 pm | Chair: Kanta Dihal | Venue: Frankopan Hall
Presentation 1: AI Colonialism
Presenter: Karen Hao
Abstract: Over the past few years, a growing number of scholars have argued that AI development is repeating colonial history. European colonialism was characterized by the violent capture of land, extraction of resources, and exploitation of people for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence, it is now using more insidious means to enrich the wealthy and powerful at the great expense of the poor. My AI Colonialism series for MIT Technology Review, supported by the MIT Knight Science Journalism program and Pulitzer Center, dug into these parallels between AI development and our colonial past by examining communities around the world that have been profoundly changed by the technology. I will share the stories I gathered from South Africa, Venezuela, Indonesia, and New Zealand, which together reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.
Author bio: Karen Hao is a Hong Kong-based reporter at the Wall Street Journal, covering China's technology industry and its impacts on society. She was previously a senior editor at MIT Technology Review, covering artificial intelligence. Her work is regularly taught in universities, and cited in government reports and by Congress. She has received numerous accolades for her coverage, including an ASME Next Award for Journalists Under 30, two Front Page Awards, and several Webby Award nominations.
Presentation 2: Epistemic injustice and AI ethics
Presenter: Suvradip Maitra
Abstract: Epistemic injustices are harms to our capacity as knowers by reason of identity prejudice (Fricker, 2007). A burgeoning literature considers how the use of Artificial Intelligence (‘AI’) systemically perpetuates such epistemic injustices (Nihei, 2022; Rafanelli, 2022; Symons & Alvarado, 2022; Sardelli, 2022). In this presentation, I introduce a taxonomy of AI-mediated causes of epistemic injustice in India. Focusing on how AI impacts background identity prejudices, as understood with insights from India’s critical AI fairness literature (Sambasivan et al., 2021), I propose a typology of epistemic injustices: ‘AI-exacerbated’, ‘AI-generated’ and ‘AI-consolidated’. The taxonomy contributes to an understanding of AI-mediated epistemic injustice in India’s unique socio-political context. ‘AI-exacerbated’ epistemic injustices are caused by pre-existing identity prejudices whose effects are exacerbated by AI. An illustration is Delhi’s predictive policing system, which reproduces pre-existing police bias against vulnerable groups, risking ‘testimonial’ and ‘hermeneutical’ injustice (Marda & Narayan, 2020). ‘AI-generated’ epistemic injustices are caused where the deployment of AI systems generates new forms of identity prejudice. An illustration is the use of inaccurate Emotion Recognition Technology (‘ERT’) by Lucknow police to identify distressed women. ERTs risk creating a new form of identity prejudice whereby certain expressions recognised as ‘distress’ by the ERT may be prejudicially equated with the presumed appearance of a ‘distressed’ woman, leading to ‘testimonial injustice’ (Ara, 2021). ‘AI-consolidated’ epistemic injustices are caused where a pre-existing identity prejudice is consolidated with a new identity prejudice generated by AI. An illustration is the mandatory use of efficiency-tracking smartwatches on Indian sanitation workers. Pre-existing identity prejudice (i.e. caste bias) enabled this intrusive surveillance practice, which was consolidated with a new form of prejudice whereby the apparent objectivity of the tracking data resulted in the AI-based smartwatches being considered more reliable than the human workers, possibly leading to ‘testimonial’ and ‘hermeneutical’ injustice (Inzamam & Qadri, 2022).
Author bio: Suvradip is currently completing a Master of Laws at the University of Melbourne. As part of his Masters, Suvradip will be a Visiting Student at the Leverhulme Centre for the Future of Intelligence (LCFI), where he will work under the mentorship of Dr Kanta Dihal to explore the ethics of AI systems through an intercultural lens. In his research, Suvradip is interested in combining understandings of colonial histories of knowledge with theories of epistemic justice and intercultural ethics to understand the impact of AI on marginalised populations. Suvradip graduated with First Class Honours from the University of Queensland with a Bachelor of Science/LLB (Hons), majoring in physics. Since graduating, Suvradip has been involved in various projects researching the impact of technology on society, including at Harvard University’s Berkman Klein Center for Internet & Society and the Global Catastrophic Risk Institute. His research has been published in the Australian Law Journal and the Proceedings of the AI and Ethics Society Conference. He has previously practised as a commercial lawyer and served as associate to a Judge of the Queensland Court of Appeal.
Presentation 3: Can the Ghost Worker Speak? De-colonializing Digital Labor
Presenter: Sergio Genovesi
Abstract: The current training and development processes of AI systems are based on the exploitation of “ghost work” (Gray, Suri 2019). Typical tasks of ghost workers include labeling images and sentences, verifying data, and moderating content. Based on remote work, ghost work represents a case of work outsourcing and offshoring, and can be seen as part of the larger phenomenon of “algorithmic coloniality” or “data colonialism” (Mohamed et al. 2020). For example, web-based microwork platforms such as Amazon’s Mechanical Turk, Samasource, and CrowdFlower have enabled new forms of labor offshoring from corporations mostly based in the U.S., U.K., India, and Australia to workers in Africa, Latin America, and Southeast Asia (Rani, Furrer 2020; Anwar, Graham 2020; Royer 2021). In the first part of my talk, I outline the main ethical concerns related to digital labor (Fuchs, Fischer 2015), focusing especially on the practices leading to its outsourcing, offshoring, and exploitation, and considering the perspectives of scholars from the global south (Albrieu 2021). In the second part of my talk, I explore solutions targeting the design process and regulation of AI systems. On the one hand, I suggest that ethical questions concerning the fulfillment of tasks usually performed by ghost workers should already be addressed in product design and not be left to chance. On the other hand, I stress that ghost workers’ rights should be protected by regulation. For this to happen, the digital labor performed by ghost workers should be acknowledged as work and regarded as part of the software development process by lawmakers, corporations, and consumers (Snower, Twomey 2022). Moreover, in a globalized economic context, international supply chain laws addressing new forms of digital labor are necessary to prohibit the sale of products developed through work offshoring and exploitation.
Author bio: Sergio Genovesi is a postdoctoral researcher at the Center for Science and Thought of the University of Bonn and a team member of the KI.NRW flagship project “Zertifizierte KI” (Certified AI). His research focuses on the ethics and ontology of technology, and specifically of AI systems. He holds a Ph.D. in theoretical philosophy from the University of Bonn.