Exploring Children's Rights and Child-Centred AI

Abstract: When considering the responsible design, development, and deployment of AI technologies and the plurality of visions for technological futures, children are not only frequently missing from the conversation but are also on the receiving end of many data harms and injustices. The unchecked development of AI systems with little consideration of human rights, needs, or preferences disproportionately affects children, who are rarely part of the decision-making process, resulting in numerous harms. Cases such as the Ofqual exam grading crisis in the UK, mental health chatbots mishandling children’s reports of sexual abuse, and smart toys selling data including children’s voice recordings illustrate some of the widespread harms to children that have gone unchecked and unaddressed. This important stakeholder group must be meaningfully included in conversations about the future of technological innovation so that children and duty bearers can collectively steward a shared future for responsible AI and ensure that such harms do not persist. To address these concerns and operationalise child-centred AI decision-making, we examine whether and how children are considered across the AI project lifecycle. Drawing on desk-based research, our project with UNICEF interviewing UK public sector organisations, and a two-year research project with Children’s Parliament and the Scottish AI Alliance engaging over 120 pupils across Scotland, we assess how children have or have not been engaged on AI and identify the challenges of incorporating children’s views on digital technologies. Given that many UK public sector organisations aspire to engage children but do not know how or where to begin, our paper introduces various international frameworks involving AI and children and analyses the principles they outline. Sharing preliminary insights from our engagement work, we bridge the gap between those principles and practice to pave a way forward for responsible, child-centred AI development across the globe.

Author bios: Dr Janis Wong is a Research Associate in the Public Policy Programme. She has an interdisciplinary PhD in Computer Science from the Centre for Research into Information, Surveillance and Privacy (CRISP), University of St Andrews. Janis is interested in legal and technological approaches to data protection, privacy, and data ethics; her PhD research aimed to create a socio-technical data stewardship and governance framework that helps data subjects protect their personal data under existing data protection, privacy, and information regulations. She also holds an MSc in Computing and Information Technology from the University of St Andrews and an LLB (Bachelor of Laws) from the London School of Economics.

Dr Mhairi Aitken is an Ethics Research Fellow in the Public Policy Programme. She is a sociologist whose research examines the social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. Mhairi has a particular interest in the role of public engagement in informing ethical data practices. Prior to joining the Turing Institute, Mhairi was a Senior Research Associate at Newcastle University, where she worked principally on an EPSRC-funded project exploring the role of machine learning in banking. Between 2009 and 2018, Mhairi was a Research Fellow at the University of Edinburgh, where she undertook a programme of research and public engagement exploring the social and ethical dimensions of data-intensive health research.

Morgan Briggs trained as a data scientist and currently serves as the Research Associate for Data Science and Ethics within the Public Policy Programme. Morgan works on a variety of projects relating to the ethical considerations of data science methodologies and digital technologies at the Turing, including ongoing work on AI explainability that builds upon the Turing and ICO co-badged guidance, Explaining decisions made with AI. She is also a researcher on the UKRI-funded project ‘PATH-AI: Mapping an Intercultural Path to Privacy, Agency and Trust in Human-AI Ecosystems’ and an international project entitled ‘Advancing Data Justice Research and Practice’, which is funded by grants from the Global Partnership on AI, the Engineering and Physical Sciences Research Council, and BEIS. Morgan has continued to research topics related to children’s rights and AI, stemming from research conducted with UNICEF.

Sabeehah Mahomed is a Research Assistant in Data Justice and Global Ethical Futures under the Public Policy Programme. Her current work includes researching and analysing children’s rights as they relate to AI through a series of engagements and research into national and international policy frameworks. Sabeehah holds an MSc with distinction in Digital Humanities from the Department of Information Studies at University College London (UCL). Her master’s thesis focused on racial bias in artificial intelligence (AI) and public perception, a portion of which was presented at the 'Ei4Ai' (Ethical Innovation for Artificial Intelligence) Conference in July 2020, hosted by UCL and the University of Toronto.

Recorded Presentation | 28 April 2023

#IntergenerationalPerspectives #YouthAndChildren #UK
