Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 19: Many Stories of AI

28 April | 9.30 am | Chair: Jan Voosholz | Venue: Bawden Room

Presentation 1: Artificial Intelligence in National Media: How the North-South Divide Matters?

Presenter: Claudia Wladdimiro Quevedo

Abstract: This study addresses how discourses around Artificial Intelligence have been presented in national media. To explore this topic, I analyze news articles to identify narratives and imaginaries that contribute to building the concept of AI from a North-South perspective. I have selected two countries to gather data from: one in the Global North (Sweden) and one in the Global South (Chile). Both fall into the same “Large/Medium” cluster when land area and population are combined, and both are known as innovative leaders in the adoption of new technologies. Drawing on data collected from 103 news articles, I found that in both cases AI is presented as a positive tool for the development of local and global economies, and as a driver of exciting and disruptive businesses. However, my analysis shows uncertainty about the future of the current status quo, both regarding the labor market and the current geopolitical power balance should China win the so-called ‘AI race.’ The data was coded and analyzed using a combination of critical discourse analysis, a data-extractivism lens, and the sociotechnical imaginaries approach introduced by Sheila Jasanoff. These perspectives help illuminate the relations between scientific and technological projects on the one hand and political institutions and power on the other. Throughout the sample, the hegemonic (dominant) voice prevailed through discussions of the economy with a distinctly North-centric representation. This matters because it can shed light on whether AI provides real opportunities or merely replicates the power relations of the globalized world.
In this sense, the study also critiques the sociotechnical imaginaries framework: although it proposes a local view of power relations, it confirms that technological developments are often subject to global political and corporate planning, regardless of each country's particular reality.

Author bio: Claudia Wladdimiro Quevedo holds an MSc in Digital Media and Society from Uppsala University and is a Research Assistant in the Department of Informatics and Media, Human-Computer Interaction (HCI) unit, Uppsala University, Sweden. She focuses on communications and on how, through new technologies and human-centered work, a more empathetic society can be fostered.

Presentation 2: Cross-Cultural Narratives and Imaginations of Weaponised Artificial Intelligence: Comparing France, Japan, and the United States

Presenter: Ingvild Bode, Hendrik Huelss, Anna Nadibaidze and Tom Watts

Abstract: “Thinking machines” have long featured in popular culture, as Cave, Dihal, and Dillon demonstrated in their research on artificial intelligence (AI) narratives. Cave and Dihal identify several narratives related to AI and argue that these influence the perceptions, actions, and decisions of developers, political actors, and the public. However, existing research on AI imaginaries does not specifically cover perceptions of weaponised AI. Moreover, few studies go beyond the perceptions of the English-speaking public. In this article we investigate narratives and imaginations surrounding weaponised AI technologies across different cultures. Our analysis is based on data from a public opinion survey conducted in France, Japan, and the United States in 2022-23. In a first step, we assess the extent to which publics in these three states are familiar with the narratives of weaponised AI identified by previous research in media and literary studies, as well as science and technology studies. Based on our survey data, we also identify alternative narratives of weaponised AI that go beyond existing categories. In this way, we contribute to understanding how cultural contexts and embeddings foster different imaginations of weaponised AI. In a second step, we address the question of whether, and if so in what form, such narratives shape public attitudes towards regulating weaponised AI. In the context of global discussions surrounding the prohibition of some forms of these technologies, examining public perceptions is essential. Imaginaries linked to AI and autonomous weapons, for example in science fiction, have been linked to political discourses and decisions in the sphere of security and defence. This article therefore also contributes to the literature seeking to understand if and how popular images of “intelligent” machines influence public opinion in relation to the regulation of AI and autonomy in warfare.

Author bios: Ingvild Bode is Associate Professor of International Relations at the Center for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council funded project AutoNorms, which investigates how autonomous weapon systems may change international norms. Ingvild is principally interested in analysing processes of policy and normative change, especially in the areas of weaponised artificial intelligence, the use of force, and United Nations peacekeeping. Her research has been published in journals such as European Journal of International Relations, Review of International Studies, and Chinese Journal of International Politics, and with leading publishers. Previously, Ingvild was Senior Lecturer in International Relations at the University of Kent and a Japan Society for the Promotion of Science research fellow with joint affiliation at United Nations University and the University of Tokyo.

Dr Hendrik Huelss is Assistant Professor of International Relations at the Center for War Studies, Department of Political Science and Public Management, University of Southern Denmark. He works at the intersection and knowledge frontier of international political sociology and AI, with a focus on military technologies. His research and publications aim to produce critical knowledge on how AI influences the emergence and function of norms in international relations settings. Theoretically, he draws on insights from critical security studies, STS, Foucault studies, and IR theory. Dr Huelss publishes in high-ranking journals such as Journal of European Public Policy, International Theory, International Political Sociology, and Review of International Studies.

Anna Nadibaidze is a Ph.D. Research Fellow in International Politics at the Center for War Studies, University of Southern Denmark. She is also a researcher for the European Research Council funded AutoNorms project.

Tom Watts is a Leverhulme Early Career Fellow at Royal Holloway, University of London (RHUL), leading a project on Great Power Competition and Remote Warfare. Previously, he was a Teaching Fellow in War and Security at RHUL (2018-2020) and a Graduate Teaching Assistant at the University of Kent (2014-2018). He completed a PhD in International Relations at the University of Kent in 2019. Tom's research interests lie in the field of International Security, with a particular focus on American foreign policy, “remote warfare”, and lethal autonomous weapons systems. His research has been published with Geopolitics, Global Affairs, the Bulletin of the Atomic Scientists, Drone Wars UK, and the Oxford Research Group.

Presentation 3: Responsible AI reporting requires cross-border collaboration

Presenter: Boyoung Lim

Abstract: AI-powered global platforms impact the lives of billions around the world. Yet the failure of the news media to include diverse perspectives, especially those of the global south, excludes those best positioned to produce contextualized and nuanced stories. It also perpetuates harm in those communities by treating non-Western communities as an afterthought. Unless we bridge this gap between the global reach of AI-powered technologies and the lack of inclusivity and collaborative-mindedness in the journalism industry, the much-needed global scrutiny of these platforms, and positive change for the world, will remain out of reach. To illustrate this point, this presentation will highlight the exclusion of global south journalists from access to documents from the 2021 Facebook leaks. It will then compare the representation and inclusivity among reporters of the 2021 Facebook Papers consortium with that of other, more successful cross-border journalistic collaborations led by the International Consortium of Investigative Journalists (ICIJ), the Organized Crime and Corruption Reporting Project (OCCRP), and Forbidden Stories. Comparing the impact of these reporting projects makes clear that collaborative and inclusive practices are required not just as a moral imperative but as a practical way to produce impactful, quality journalism on a global issue such as AI.

Author bio: Boyoung Lim is a Senior Editor and AI Network Manager at the Pulitzer Center on Crisis Reporting.