AI and Responsible Journalism Toolkit

by the Leverhulme Centre for the Future of Intelligence, University of Cambridge

[Image: two digitally illustrated green playing cards with the letters A and I over modified photographs of human mouths in profile. Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0]

Empowering journalists, communicators, and researchers to responsibly report on AI risks and capabilities

Read more about the research behind the toolkit.

A German translation is available here.

About the Toolkit

The Toolkit is intended for a broad range of stakeholders shaping public perceptions of AI. It aims to empower journalists, PR specialists, and researchers to communicate the risks and benefits of AI more responsibly: to avoid perpetuating problematic AI narratives, to foster inclusivity and diversity in discussions about AI, and to promote critical AI literacy.

The resource is a culmination of insights from a collaborative research workshop held in 2023, convened by Dr Tomasz Hollanek, Dr Eleanor Drage, and Dr Dorian Peters at the University of Cambridge. While lists of principles and recommendations for AI-focused journalism already exist, the workshop participants highlighted the need for a user-friendly ‘hub’ that gathers tools and resources to help communicators gain a deeper understanding of AI's technical and socio-technical dimensions, connects them with a diverse array of AI and AI ethics experts, and provides them with information about support networks and funding opportunities.

This Toolkit is meant to fulfil this need comprehensively. It is a dynamic and evolving repository that will continuously receive updates, guided by the input from the community it seeks to empower. The resources are organised into four distinct categories: EDUCATION, ETHICS, VOICES, and STRUCTURES (see below).

The Problem:

Research has shown that media representations of AI can…

  • perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI

  • erroneously give the impression that AI is equivalent to robotics

  • divert attention from the harms implicit in the real-life applications of the technologies that are already changing our lives

The Toolkit aims to:

  • support journalists in challenging industry-driven narratives of artificial intelligence, by providing them with resources on the socio-technical realities beyond the AI hype, connecting them with experts in critical, pro-justice approaches to AI, and facilitating community support

  • support communicators of various backgrounds in communicating the risks and benefits of AI effectively and responsibly, by offering a spectrum of resources, ranging from basic to advanced, that introduce journalists to key concepts in AI, AI ethics, and professional ethics adapted for AI-focused reporting

  • raise awareness about the potential harm caused by AI and the need for responsible reporting on both risks and benefits of AI

  • support communication professionals, including copywriters and PR managers, in crafting communication materials informed by critical perspectives on AI

  • encourage technology companies to engage with journalists responsibly, ensuring accurate portrayals of their products and services in the media

  • enable researchers to effectively communicate their research findings, taking into consideration the broader public perception of AI

Tools / Resources

EDUCATION: A collection of resources that introduce journalists and communicators to key themes in AI research and debunk harmful myths.

ETHICS: A collection of guidelines on the ethics of representing AI and communicating AI to the public.

VOICES: A database of AI experts, including those representing minoritised stakeholder groups, helping communicators draw on a diversity of perspectives on AI.

STRUCTURES: A list of institutions and professional networks that support independent and investigative tech journalism via dedicated funding schemes or free legal advice.

Positive examples of reporting on AI

  • The AI Colonialism series

    by Karen Hao, MIT Technology Review

  • OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

    by Billy Perrigo, TIME

  • What does GPT-3 “know” about me?

    by Melissa Heikkilä, MIT Technology Review

  • Lost in AI translation: growing reliance on language apps jeopardizes some asylum applications

    by Johana Bhuiyan, The Guardian

Acknowledgments

The toolkit was developed by Tomasz Hollanek and Abdullah Safir as part of the Desirable Digitalisation: AI for a Just and Sustainable Future research programme, funded by Stiftung Mercator.

The toolkit draws on insights from a collaborative research workshop convened by Tomasz Hollanek, Eleanor Drage, and Dorian Peters. We thank the workshop participants for their input: Varsha Bansal, Stephen Cave, Florian Christ, Kanta Dihal, Tania Duarte, Abhishek Gupta, Karen Hao, Melissa Heikkilä, Justin Hendrix, Irving Huerta, Bronwyn Jones, Boyoung Lim, Kerry McInerney, Karen Naundorf, Jonnie Penn, Christiane Schäfer, Hilke Schellmann, Lam Thuy Vo, Marina Walker Guevara, Chloe Xiang, and Miri Zilka.

If you have any questions or suggestions, please email: desirableai@lcfi.cam.ac.uk