Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 7: Alternative histories of AI in Europe and the Anglophone West

26 April | 2.00 pm | Chair: Christiane Schäfer | Venue: Bawden Room

Presentation 1: Praxis, or the Yugoslav Search for Man: Thinking and Human Self-Realization in the Age of Generative AI

Presenter: Ana Ilievska

Abstract: In this talk I will propose praxis as an essential concept for the study of human-centered Artificial Intelligence. The importance of this concept, outlined by the Yugoslav Praxis school of philosophy in the 1960s and ‘70s, is particularly evident in the analysis of the impact of generative Artificial Intelligences (GenAI) such as ChatGPT on human self-realization, thinking, and creativity. For the Yugoslav philosophers, emancipated from Stalinist thought and ideological censorship as early as the 1940s, the central concern of Marx’s thought was man as a being of praxis, i.e., “as a being capable of free creative activity by which he transforms the world, realizes his specific potential faculties, and satisfies the needs of other human individuals” (Mihailo Marković). Unlike “practice,” usually understood in opposition to theory, praxis stands for a human potentiality which, in certain adverse situations, may be impeded. Does GenAI present such an impediment? Debates about ChatGPT’s potential in the tech industry, business, and Anglophone universities have centered on its creative or destructive capabilities. However, what this technology makes clear above all is that mastery over writing and form is not exclusive to humans and should not be the primary focus in the cultivation of the mind. The real issue at stake is, again, our ability to think, as Hannah Arendt wrote around the same time that the Yugoslav philosophers were developing their critical Marxist Humanism. With these propositions in mind, I ask: how does ChatGPT impede or, conversely, facilitate human self-realization? What happens when thinking, or the “soundless dialogue (eme emautô) between me and myself, the two-in-one” (Arendt), is externalized and relegated to an AI? I conclude by making specific proposals about what a New Praxis school of philosophy can contribute to the understanding of thinking as the essential feature of human self-realization in the age of GenAI.

Author bio: Ana Ilievska holds a Ph.D. in Comparative Literature from the University of Chicago (2020) and a BA and MA in Romanistik and Comparative Literature from the Eberhard Karls Universität Tübingen (2011, 2013). Prior to joining Stanford, she was Humanities Teaching Fellow in the College and the Department of Comparative Literature at the University of Chicago (2020-2021) and Adjunct Lecturer at the Università degli Studi di Catania in Sicily (2020), where she was also a Fulbright doctoral scholar. Currently, she is Andrew W. Mellon Postdoctoral Fellow at the Stanford Humanities Center as well as board member and membership secretary of the Pirandello Society of America.

Presentation 2: Conceptions of Ethics in World-Making Machines: Colonial Iconographies of AI in Britain

Presenter: Peter Rees

Abstract: This paper examines an intercultural dispute in which rival conceptions of ethics were mobilised to deploy cultural models of artificial intelligence in society. The public dispute in question took place in mid-twentieth-century Britain between C. S. Lewis, a British colonist, and J. D. Bernal, an Irish colonial subject. Lewis forged a neomedievalist model, whereas Bernal campaigned for scientific communism on the Soviet model. I argue that the conflict between their fundamentally opposed conceptions of artificial intelligence and its regulation can only be understood through consideration of their cosmological worldviews: their rival conceptions of knowledge and ontological assumptions. On their own terms, they lived in different worlds. I draw on John Tresch’s studies of worlding with cosmograms and Arturo Escobar’s notion of the ‘Pluriverse’ to provide a methodologically symmetrical approach that allows a meaningful and productive comparison of their rival approaches to AI ethics. The paper responds especially to the questions pursued at ‘Many Worlds of AI’ under the ‘intercultural AI’ and ‘AI across borders’ themes. Lewis’s and Bernal’s rival programmes grew out of their education and experiences of growing up in Ireland, the British Empire’s most perturbing colony. Their biographies provoke consideration of their diasporic thinking about AI and global justice: Lewis was descended from Ulster Scots settlers and Bernal from Sephardic Jews. I also show how their iconographies of intelligent machinery trained their audience to discern what counts as machine ‘intelligence’ and to produce assent to their cultural model. Lewis forged what he called the ‘Medieval Model’ by subjecting readers to emotional training through the intelligent text-based ‘machine’ of the medieval literary tradition, whereas Bernal co-opted the scientific methods of X-ray crystallography to explicitly visualise the living machinery which, he argued, must organise the whole of social life.

Author bio: Peter Rees was trained in natural sciences and history & philosophy of science at the University of Cambridge. His research on the history of science and ethics addresses cultural models of artificial intelligence. He is especially interested in how such models of intelligent machines and their place in society are deployed to co-produce emotion and political order. Currently he is working on a monograph provisionally titled ‘The Wars of the Human Machines’, which investigates how iconographies of intelligent machines were used in Cold War public polemics to produce humans and even forge worlds. He is also investigating the media strategies employed to marketise neoliberal accounts of the free market as a ‘machine’ or ‘price mechanism’. Rees also has a background in neuroscience and biochemistry and completed laboratory research at the University of Edinburgh and Harvard Medical School. He is enthusiastic about interdisciplinary collaboration and is also a member of the software start-up Cambridge BioNexus.

Presentation 3: Contentious Others: Logo and Dilemmas of Difference in the US, Britain, and France

Presenter: Apolline Taillandier

Abstract: Recent historiography suggests that social and ethical concerns have long been central to AI research. This paper traces ambitions to develop socially responsible AI in the history of the Logo computer programme between the late 1960s and early 1990s in the US, the UK, and France. The main inspirations for Logo were Jean Piaget’s developmental psychology and Marvin Minsky’s decentralised theory of intelligence. For designers and educators, the hope was that Logo would enable the flourishing of a diversity of learning and programming styles, thereby undermining the dominant culture within computer science. I suggest that distinct national imaginaries shaped understandings of the programme’s political and epistemological possibilities across the three cases. Logo was developed at MIT’s AI lab as part of a libertarian, anti-authoritarian education project, later recast as a tool to undermine patriarchy. In France, it was accommodated to republican ideals and welcomed as a first step in developing ‘informatology’, the study of people and computers across cultural differences. At Edinburgh, researchers emphasised the structural making of difference (especially along gender lines) that limited the programme’s alleged revolutionary potential. This paper bridges AI historiography and the history of feminist thought. By studying how imaginaries of nationhood, debates about the aims and nature of AI, and conceptions of justice and otherness contributed to shaping distinct and partly contradictory ideals of computer emancipation in various national contexts, it helps not only to complexify AI historiography but also to recover alternative conceptualisations of moral development and subject configurations for current AI ethics.

Author bio: Apolline Taillandier is a postdoctoral research associate at the Leverhulme Centre for the Future of Intelligence and POLIS at the University of Cambridge, and the Center for Science and Thought at the University of Bonn. Apolline studied political theory at Sciences Po in Paris before joining the Max Planck Sciences Po Center on Coping with Instability in Market Societies, where she wrote her dissertation under the supervision of Prof. Jenny Andersson. During her PhD, she studied the history of contemporary transhumanism as articulating a set of projects about liberalism’s future. She was a Fulbright student researcher in the Sociology Department at the University of California, Berkeley, in 2018 and a Cambridge Sciences Po visiting student in POLIS in 2019. In her postdoctoral research, she investigates the historical role of feminist thought and activism in the critique of computer technology and the remaking of artificial intelligence as a scientific project from the 1980s onwards. In the context of rising concerns about the discriminatory and stratifying effects of AI, she studies the transnational circulation of ethics and gender justice norms and their reinterpretation and appropriation by scientists and industry actors, focusing on European and U.S. American sites of technical AI research.