Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 20: AI with/for the Youth and the Elderly

28 April | 11.00 am | Chair: Rune Nyrup | Venue: Bawden Room

Presentation 1: Exploring Children's Rights and Child-Centred AI

Presenters: Janis Wong, Morgan Briggs, Mhairi Aitken and Sabeehah Mahomed

Abstract: When considering the responsible design, development, and deployment of AI technologies and the plurality of visions for technological futures, children are not only frequently missing from the conversation but are also on the receiving end of many data harms and injustices. The uncurbed development of AI systems with little consideration of human rights, needs, or preferences disproportionately affects children, who have not been considered as part of the decision-making process, resulting in innumerable harms. Cases such as the Ofqual exam grading crisis in the UK, mental health chatbots mishandling children’s reports of sexual abuse, and smart toys selling data including children’s voice recordings illustrate some of the widespread harms to children that have gone unchecked and unaddressed. This important stakeholder group must be meaningfully included in the conversations surrounding the future of technological innovation so that they and duty bearers can collectively steward a shared future for responsible AI and ensure that such harms do not persist. To address these concerns and operationalise child-centred AI decision-making, we examine whether and how children are considered across the AI project lifecycle. Through desk-based research, our project with UNICEF interviewing UK public sector organisations, and a two-year research project with Children’s Parliament and the Scottish AI Alliance engaging over 120 pupils across Scotland, we assess how children have or have not been engaged with AI and identify the challenges to incorporating children’s views regarding digital technologies. Given that many UK public sector organisations aspire to engage children but do not know how or where to begin, in our paper we introduce various international frameworks involving AI and children and analyse the principles they outline. Sharing preliminary insights from our engagement work, we bridge the gap between those principles and practice to pave a way forward for responsible, child-centred AI development across the globe.

Author bios: Dr Janis Wong is a Research Associate in the Public Policy Programme. She has an interdisciplinary PhD in Computer Science from the Centre for Research into Information, Surveillance and Privacy (CRISP), University of St Andrews. Janis is interested in legal and technological applications in data protection, privacy, and data ethics; her PhD research aimed to create a socio-technical data stewardship and governance framework that helps data subjects protect their personal data under existing data protection, privacy, and information regulations. She also holds an MSc in Computing and Information Technology from the University of St Andrews and an LLB Bachelor of Laws from the London School of Economics.

Dr Mhairi Aitken is an Ethics Research Fellow in the Public Policy Programme. She is a sociologist whose research examines the social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. Mhairi has a particular interest in the role of public engagement in informing ethical data practices. Prior to joining the Turing Institute, Mhairi was a Senior Research Associate at Newcastle University, where she worked principally on an EPSRC-funded project exploring the role of machine learning in banking. Between 2009 and 2018 Mhairi was a Research Fellow at the University of Edinburgh, where she undertook a programme of research and public engagement exploring the social and ethical dimensions of data-intensive health research.

Trained as a data scientist, Morgan Briggs currently serves as the Research Associate for Data Science and Ethics within the Public Policy Programme. Morgan works on a variety of projects at the Turing relating to ethical considerations of data science methodologies and digital technologies, including ongoing work on AI explainability, building upon the Turing and ICO co-badged guidance, Explaining decisions made with AI. She is also a researcher on the UKRI-funded project ‘PATH-AI: Mapping an Intercultural Path to Privacy, Agency and Trust in Human-AI Ecosystems’ and an international project entitled ‘Advancing Data Justice Research and Practice’, which is funded by grants from the Global Partnership on AI, the Engineering and Physical Sciences Research Council, and BEIS. Morgan continues to research topics related to children’s rights and AI, stemming from research conducted with UNICEF.

Sabeehah Mahomed is a Research Assistant in Data Justice and Global Ethical Futures under the Public Policy Programme. Her current work includes researching and analysing the context of children’s rights as they relate to AI through a series of engagements and research into policy frameworks, both nationally and internationally. Sabeehah holds an MSc in Digital Humanities, awarded with distinction, from the Department of Information Studies at University College London (UCL). Her master’s thesis focused on racial bias in artificial intelligence (AI) and public perception, a portion of which was presented at the 'EI4AI' (Ethical Innovation for Artificial Intelligence) Conference in July 2020, hosted by UCL and the University of Toronto.

Presentation 2: Human First Innovation for AI Ethics?: A Cross-cultural Perspective on Youth and AI

Presenter: Toshie Takahashi

Abstract: New and emerging technologies such as AI and robots introduce a range of risks and opportunities, locally and globally. Narratives surrounding the development of AI often fall into a dichotomy between utopia and dystopia. The extent to which narratives are utopian or dystopian seems to vary by culture, with Japanese views in particular leaning more towards utopia, focusing on the potential societal benefits of AI, especially in catering to a rapidly aging population. By contrast, European and other Western narratives, exemplified by the image of “The Terminator”, are typically dominated by fears, for example that AI and robots will drive mass unemployment and inequality. In order to maximize new opportunities, minimize risks, and create a better AI society, we need to understand AI use globally. Generation Z (GenZ: born between 1996 and 2010) will be the main beneficiaries and users of these technologies; nevertheless, there are few studies that focus on youth and AI. This study introduces two of my ongoing cross-cultural projects on youth and AI: the “A Future with AI” project in collaboration with the United Nations and “Project GenZAI” in the Moonshot R&D program. The latter conducts large-scale surveys and in-depth interview studies in six countries (Japan, China, Singapore, the US, the UK and Chile). Theoretically, this study extends the complexity model of communication (Takahashi, 2016) by exploring key dimensions of AI engagement. The aim is to show universalism and cultural specificities in both the opportunities and the risks of AI and robots, and to contribute to a global understanding of an AI future where human happiness takes centre stage. Finally, this study offers suggestions towards an AI future driven by “Human First Innovation”: AI must be used towards achieving a sustainable future globally, but to do so, we must move from “AI first” and “nation first” to “human first” innovation.

Author bio: Toshie Takahashi is Professor in the School of Culture, Media and Society, as well as the Institute for AI and Robotics, Waseda University, Tokyo. She has been appointed as an Associate Fellow of the Leverhulme Centre for the Future of Intelligence (CFI), University of Cambridge. She has held visiting appointments at the University of Oxford, Harvard University and Columbia University. She conducts cross-cultural and trans-disciplinary research on the social impact of robots as well as the potential of AI for Good. She is currently leading two projects on youth and AI, both of which aim to contribute towards a vision of a future where human happiness takes centre stage. The first is the “A Future with AI” project in collaboration with the United Nations. She also leads “Project GenZAI” within the Moonshot R&D program, engaging youth now for a global AI future in collaboration with the CFI, University of Cambridge, Stanford University, the University of Chile, Pompeu Fabra University, Nanjing University, the National University of Singapore and others. Finally, Takahashi sits on the advisory committee of the Information and Communication Council, Ministry of Internal Affairs and Communications, Japan.

Presentation 3: (Old) Age in the Age of Artificial Intelligence – Crossing Generational Borders in AI Research and Development

Presenter: Justyna Stypinska

Abstract: In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the scholarly debate on fairness and bias in artificial intelligence (AI) has paid insufficient attention to the category of age (within the life-course perspective) and to older persons as a socio-demographic group. Ageing populations have been largely neglected during the turn to digitality and AI, and older persons have been identified as potential “vulnerable data subjects” at a higher rate of exclusion (Malgieri and Niklas, 2020). Ethical AI needs to cross the generational borders that currently rule both AI research and the development of AI products and services. The perspectives of all demographic groups are fundamental to creating desirable AI for the future, and older persons should not constitute yet another “dislocated community” in AI ethics. In this presentation, the concept of “AI ageism” is introduced to make a theoretical contribution to how the understanding of inclusion and exclusion within the field of data-driven technologies can be expanded to include the category of age. “AI ageism” can be defined as practices and ideologies operating within the field of AI which exclude, discriminate against or neglect the interests, experiences, and needs of older populations. It can be manifested in five interconnected forms: (1) age biases in algorithms and datasets (technical level); (2) age stereotypes, prejudices and ideologies of actors in AI (individual level); (3) invisibility of old age in discourses on AI (discourse level); (4) discriminatory effects of the use of AI technology on different age groups (group level); and (5) exclusion as users of AI technology, services, and products (user level). Additionally, the paper provides empirical illustrations of the ways ageism operates in these five forms.

Author bio: Justyna Stypińska completed her PhD on the topic of “Age discrimination in the labour market. A sociological-legal analysis”. Her research focuses on multiple forms of age discrimination and age inequality in contemporary societies, especially in their most recent digital forms under late capitalism. In her newest project, “AI Ageism: new forms of age discrimination and exclusion in the era of algorithms and artificial intelligence”, which starts at the beginning of 2023 and is funded by the Volkswagen Foundation, Germany, she will analyse with an international team (UK, Spain, Poland) the effects of the use of artificial intelligence technology on ageing populations in Europe.