Panel 17: AI and the Planetary
28 April | 9.30 am | Chair: Tomasz Hollanek | Venue: Elena Hall
Presentation 1: Occupying Urgency: How AI Solutionism Shapes the Narrating of Urgency around the Climate Crisis
Presenters: Eugenia Stamboliev and Mark Coeckelbergh
Abstract: Climate data narrates the climate crisis and its urgency; this narration is crucial for mitigating the crisis's effects and for urging preventive, globally aligned action. However, climate data is bound to powerful technologies known as artificial intelligence (AI) or machine learning (ML), and these are not neutral storytellers but powerful narrators. Lately, AI has fostered a new 'culture of prediction', one that does not only provide us with large-scale information about climate events but also manoeuvres between nudging us towards urgency and manifesting institutional and corporate power. One way to critique the power and expansion of AI is to look closely at how it occupies urgency narratives: that is, to discuss the narratives of urgency that drive the use of AI as 'a solution' in relation to the colonialist power of AI as 'an industry'. On the one hand, AI acts as a mediator of an urgency that is otherwise not representable. For instance, algorithmic modelling helps us explore the causality and dynamics of climate data globally, and AI can heighten space-time resolutions and clarify stochastic aspects. On the other hand, AI acts as a dominator over (situated) urgency, using climate narratives to implement Western techno-solutionism by claiming total power over global predictions. In this latter role, AI stands not merely for an attempt to mediate a global urgency but for an attempt to embed corporate power and colonialist narratives within climate narratives. By exploring how AI moves between shaping and dominating climate urgency, we discuss the epistemic shift from thinking of urgency as a human experience of time to using urgency to justify a kairopolitical technology.
Author bios: Eugenia Stamboliev is a postdoctoral scholar in the ethics of technology and media at the University of Vienna. As a fellow in the WWTF project 'Interpretability and Explainability as Drivers to Democracy', she explores the political power of complex algorithmic models. Her work equally looks at the explainability of AI and political authority, and at how to think about dis/trust in platform labour.
Mark Coeckelbergh has been Professor of Philosophy of Media and Technology at the Department of Philosophy, University of Vienna, since 2015 and was Vice Dean of the Faculty of Philosophy and Education until 2020.
Presentation 2: An Approach Based in Eastern Philosophy to Identify Ethical Issues in Early Stages of AI for Earth Observation Research
Presenter: Mrinalini Kochupillai
Abstract: AI and Machine Learning models have been used in Earth Observation (EO) and Remote Sensing (RS) research ("AI4EO research") for decades to study and analyze the petabytes of data that would otherwise be almost impossible to process and understand. Ethical issues are taking center stage in this field of research as the resolution of EO/RS data increases rapidly, as newer sources of data are fused to achieve better results at lower costs and greater speeds, and as newer use cases of AI4EO research emerge. Nevertheless, not all ethical issues can be identified in the present: partly because of rapid technological evolution and an almost blind focus on innovation as an end in itself, and partly because of uncertainties inherent in AI4EO research methods, analysis, and results. Real-world applications of research findings also give rise to uncertainties vis-à-vis ethical impact. Recent academic research and surveys conducted by the author suggest that AI ethics guidelines are not practically useful for many AI(4EO) researchers. Yet, ethically mindful choices at the early stages of research can help in conceiving, designing, and regulating AI, and in developing applications that are more acceptable to the global community. This paper recommends a novel approach to identifying and avoiding ethical issues in the early stages of AI(4EO) research, based on a combination of Eastern and Western philosophical thought. More specifically, it explores the imagery associated with Skanda, a mythological figure that appears in both Indian (Hindu) and Tibetan (Buddhist) philosophies, to develop and implement an approach to ethical decision-making that can be especially useful for scientists engaged in research and innovation with emerging technologies. Using a concrete example from AI4EO, the paper also describes a step-wise process and questionnaire based on this approach that can help researchers identify major ethical issues and opportunities in the early stages of their research.
Author bio: Mrinalini Kochupillai (Nalini) is a guest professor and core scientist at the Artificial Intelligence for Earth Observation (AI4EO) Future Lab at the Technical University of Munich (TUM), where she leads the ethics working group. She is also a faculty member at the Munich Intellectual Property Law Center (MIPLC) and an affiliated senior researcher at the Institute for Ethics in Artificial Intelligence (IEAI), TUM. Nalini was a senior research fellow with the Max Planck Institute for Innovation and Competition (2014-2018) and with the Chair for Business Ethics, TUM (2018-2019). She also served as Program Director (2014-2017) at the MIPLC and as adjunct faculty at the EU Business School, Munich. She has over 15 years of teaching and research experience in business law, business ethics, intellectual property (patent) law, plant variety protection, and incentive systems for sustainable innovations. Nalini obtained her B.A. LL.B. (Hons.) degree from the National Law Institute University, Bhopal, and an LL.M. in Intellectual Property, Commerce & Technology from the University of New Hampshire. She completed her Ph.D. at the Ludwig Maximilian University (LMU), Munich, as a full scholar and fellow of the International Max Planck Research School for Competition and Innovation (2009-2013).
Presentation 3: AI for Datong: A Normative Framework for Sustainable AI
Presenter: Pak-Hang Wong
Abstract: In an earlier article, I proposed that datong, a normative ideal in Confucianism akin to the common good, offers an alternative way to formulate AI for Social Good's agendas; I called this approach AI for Datong (Wong 2021). More specifically, I argued that the idea of datong requires AI-based projects, if they are to contribute to the social good, to be (i) public-centered, in that they ought to be motivated and justified by the good of the general public and not the interests of specific groups of individuals; (ii) care-centric, in that they are based on altruistic care for all and do not aim at personal advantage; and (iii) transformative, in that they should not merely attempt to prevent, mitigate, or resolve problems adversely affecting human beings and the environment but more fundamentally transform individuals and social conditions so that the problems do not arise. The approach of AI for Datong, I contend, can also contribute to the discussion of sustainable AI (van Wynsberghe 2021). In addition to (i) public-centeredness, (ii) care-centricity, and (iii) transformativeness, the idea of datong also accounts for, or indeed transcends, temporality and intergenerationality. Together, these provide an account of our moral obligations to sustainability in the design, development, and deployment of AI. In this paper, I first review the normative challenges related to sustainable AI, highlighting in particular the questions about moral obligations that arise from the temporal and intergenerational gaps in the design, development, and deployment of AI (see, e.g., Halsband 2022; Robbins & van Wynsberghe 2022). Next, I rehearse my approach of AI for Datong and then make explicit its temporal and intergenerational dimension. Finally, I elaborate on how the approach of AI for Datong, with its temporal and intergenerational dimension, offers a unique perspective from which to answer the normative challenges related to sustainable AI.
Author bio: Pak-Hang Wong is a philosopher and ethicist of technology working in industry, where he explores and addresses the social, ethical, and political aspects of AI, data, and other emerging digital technologies. Wong received his doctorate in Philosophy from the University of Twente in 2012 and then held academic positions in Hamburg, Oxford, and Hong Kong prior to his current position in industry. Most recently, he co-edited Harmonious Technology: A Confucian Ethics of Technology with Tom Wang, in which they provide an alternative, non-Western approach to the ethics of technology.