AI and Responsible Journalism Toolkit

by the Leverhulme Centre for the Future of Intelligence, University of Cambridge

Research

Learn more about the research behind the toolkit.

The Toolkit builds on the insights from a collaborative research workshop held in 2023, convened by Dr Tomasz Hollanek, Dr Eleanor Drage, and Dr Dorian Peters at the University of Cambridge.

The immediate goal of this online workshop was to bring together academics working on the representations and public perceptions of artificial intelligence (AI), journalists, media executives, civil society groups, and technologists to think about:

  • how to empower journalists who do not normally focus on technology reporting to cover AI more responsibly;

  • how to assist technology journalists in fulfilling their role in holding tech companies (and their products) accountable;

  • how to effectively inform media professionals about the social, cultural, and ethical implications of AI and other digital technologies, and about communicators' role in ensuring that technologies are used and developed responsibly.

While the ultimate goal of the workshop was to lay the groundwork for a new collection of co-curated resources on AI ethics and journalism ethics aimed at media professionals, the proceedings will also inform a separate output aimed at an academic audience.

Beyond the insights from the workshop, the Toolkit draws on and responds to the research on AI and narratives (broadly construed), including the publications listed below.

Selected bibliography:

  • This paper finds that journalists often use guesswork to understand AI’s role in news production, which then limits their ability to develop a critical comprehension of the technology, effectively use AI for professional benefit, and report responsibly on how AI is affecting society. It argues that the AI ‘intelligibility problem’ is sociocultural rather than solely technical, and suggests that strategies need to be developed at the individual, organisational, and community levels to foster AI literacy among journalists.

    Full citation: Bronwyn Jones, Rhianne Jones & Ewa Luger (2022) AI ‘Everywhere and Nowhere’: Addressing the AI Intelligibility Problem in Public Service Journalism, Digital Journalism, 10:10, 1731-1755, https://doi.org/10.1080/21670811.2022.2145328

  • This report scrutinises the prevailing narratives surrounding AI within academic, communication, policymaking, and general public discourse in English-speaking Western societies. These narratives often place a strong emphasis on embodiment, gravitating towards either utopian or dystopian extremes, and are characterised by a lack of diversity among the creators, protagonists, and types of AI represented. The report suggests that rather than attempting to control these popular perceptions, there should be a concerted effort to broaden, diversify, and foster public dialogues about AI.

    Full citation: Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2018). Portrayals and perceptions of AI and why they matter. The Royal Society. https://doi.org/10.17863/CAM.34502

  • This article advocates for the use of critical AI art as a medium for advancing the understanding of the inherent structural power inequalities behind AI systems and for experiential learning that encourages interpretation rather than mere exposure to explanations. Additionally, it stresses the significance of cross-disciplinary discussions on aesthetics, ethics, and the political economy of AI, and suggests that these discussions should inform the design of AI systems.

    Full citation: Drew Hemment, Morgan Currie, SJ Bennett, Jake Elwes, Anna Ridler, Caroline Sinders, Matjaz Vidmar, Robin Hill, and Holly Warner. 2023. AI in the Public Eye: Investigating Public AI Literacy Through AI Art. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23). Association for Computing Machinery, New York, NY, USA, 931–942. https://doi.org/10.1145/3593013.3594052

  • This paper presents the findings of a large-scale review of influential films made in the last one hundred years, demonstrating that only 8% of AI professionals portrayed in the selected films were women. It ties this lack of representation of women AI professionals in film to underlying causes such as persistently gendered narrative tropes and the dominance of men among film directors, and elucidates how portrayals of women AI scientists in film (or the lack thereof) have real-life effects on the choices made by women scientists.

    Full citation: Cave, S., Dihal, K., Drage, E., & McInerney, K. (2023). Who makes AI? Gender and portrayals of AI scientists in popular film, 1920–2020. Public Understanding of Science, 32(6), 745-760.

  • The article highlights the pervasive presence of Whiteness in various aspects of technology, including humanoid robots, chatbots, virtual assistants, stock images, and media representations of AI. It argues that this predominant White racial perspective in AI can exacerbate already existing bias against non-White producers and users of AI, and divert attention from crucial efforts to reduce bias in AI systems.

    Full citation: Cave, S., Dihal, K. The Whiteness of AI. Philos. Technol. 33, 685–703 (2020). https://doi.org/10.1007/s13347-020-00415-6

  • This paper presents the results of a nationally representative survey exploring public attitudes towards AI in the UK. The findings indicate that a substantial majority of the UK population feels significant anxiety about the future implications of AI, while a considerable portion of respondents erroneously equates AI with robotics. These findings underscore the urgency of interventions to improve the representation and communication of AI to the public.

    Full citation: Stephen Cave, Kate Coughlan, and Kanta Dihal. 2019. "Scary Robots": Examining Public Responses to AI. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES '19). Association for Computing Machinery, New York, NY, USA, 331–337. https://doi.org/10.1145/3306618.3314232