Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 9: Contemporary China and AI

26 April | 4.30 pm | Chair: Kerry McInerney | Venue: Bawden Room

Presentation 1: A community-of-practice approach to understanding Chinese policymaking on AI ethics

Presenter: Guangyu Qiao-Franco

Abstract: Extant literature has not fully accounted for the changes underway in China’s perspectives on the ethical risks of artificial intelligence (AI). Some of the ethical principles promulgated in Chinese policies on AI, such as privacy, fairness, justice, and inclusiveness, bear some similarity to those developed in Western countries, but they carry different connotations and philosophical assumptions in Chinese culture. This article develops a community-of-practice (CoP) approach to the study of Chinese policymaking in the field of AI. It shows that the Chinese approach to ethical AI emerges from the shared and communicated practices of a relatively stable group of actors from three domains: the government, academia, and the private sector. This Chinese CoP is actively cultivated and led by government actors. The paper draws attention to CoP configurations during collective situated learning and problem-solving among its members, which inform the evolution of Chinese ethical concerns about AI. In so doing, it demonstrates how a practice-oriented approach can contribute to interpreting Chinese politics on AI governance.

Author bio: Dr Guangyu Qiao-Franco is Assistant Professor of International Relations at Radboud University and Senior Researcher on the ERC-funded AutoNorms project at the University of Southern Denmark. Her research leverages practice theory, norm contestation, norm diffusion, and actor-network theory to interpret legal and foreign policy instruments developed by the Chinese government and by other developing countries. Her work has been published in International Affairs, The Pacific Review, International Relations of the Asia-Pacific, and Policy Studies, among others.

Presentation 2: From Accuracy to Alignment: The Practical Logic of ‘Trustworthy AI’ among Chinese Radiologists

Presenter: Wanheng Hu

Abstract: The increasing use of machine learning algorithms to support human decision-making has given rise to the popular notion of “trustworthy AI”. Accuracy and explainability, among other things, are considered two key elements of the trustworthiness of machine learning systems and now inform ethical AI guidelines as well as major research efforts in computer science. The underlying assumption is that if the output of AI systems is more “accurate” and “explainable,” then the systems become more trustworthy and more trusted by users. Drawing on extensive participant observation and interviews with radiologists in China, this paper problematizes such universal assumptions and proposes an alternative, locally rooted framework centered on “human-machine alignment” to understand AI trustworthiness. I argue that radiologists in China develop their trust based on the degree of alignment between their own judgment and the algorithmic output, encompassing “direct alignment” and “adjusted alignment.” Regardless of the performance claimed by statistical parameters, Chinese radiologists are still prompted to judge whether algorithmic decisions directly align with their own, for two reasons. First, the probabilistic nature of evaluation metrics cannot guarantee an algorithm’s correctness in individual cases in the clinical setting, where typically no ready “ground truths” are available. Second, under current Chinese legal and regulatory regimes, radiologists are held accountable for medical reports and are therefore motivated to double-check AI’s recommendations. Yet even when direct alignment is low, radiologists may still trust and use the algorithmic output if they can observe certain patterns in, and thus explain away, the misaligned output. This leads to an “adjusted alignment” based on the radiologist’s own interpretations. In conclusion, the paper suggests that universal notions of accuracy and explainability are misplaced in conceptualizing and regulating trustworthy AI in the real world; instead, trust in AI results from human-machine alignment that is subject to social and institutional shaping and cannot be reduced to intrinsic technical features of the algorithms.

Author bio: Wanheng Hu is a Ph.D. candidate in Science and Technology Studies at Cornell University and a research fellow in the Program on Science, Technology and Society at the Harvard Kennedy School. At Cornell, he is also a member of the Artificial Intelligence, Policy, and Practice (AIPP) initiative and a graduate affiliate of the East Asia Program. His dissertation research examines the use of machine learning algorithms to take on expert tasks, with an empirical focus on the development, application, and regulation of AI systems for image-based medical diagnosis in China. The project has been supported by the National Science Foundation, the China Times Cultural Foundation, and a Hu Shih Fellowship in Chinese Studies, among others. His research is broadly situated at the intersection of the sociology of expertise, medical sociology, critical data/algorithm studies, and development studies. Wanheng holds an M.Phil. in Philosophy of Science and Technology, a B.L. in Sociology, and a B.Sc. in Biomedical English, all from Peking University.

Presentation 3: AI Ethics and Governance in China: from Principles to Practice

Presenter: Rebecca Arcesati

Abstract: China’s government has recently taken remarkable steps toward regulating Artificial Intelligence (AI). In March 2022, China’s regulation on algorithmic recommendation came into effect, breaking new ground internationally as regulators in several jurisdictions begin to approach the technical challenge of promoting algorithmic transparency and explainability. The Cyberspace Administration of China (CAC) is primarily focused on the role algorithms play in disseminating information, which is unsurprising given its censorship authority. However, the regulator is also concerned with how recommendation systems affect consumers and shape labor conditions for platform workers. A separate regulation, dated January 2023, addresses AI-generated content such as deepfakes. With input from industry and research institutes, other branches of China’s government are pursuing parallel efforts, such as developing testing and certification methods for ‘trustworthy’ AI systems. Despite these developments, China’s AI governance efforts remain poorly understood abroad, particularly outside scientific circles. Justified outrage at the Chinese Communist Party’s use of AI for mass surveillance and ethnic profiling, and at the associated human rights abuses, has led to skepticism about the ability of the country’s ethical and political tradition to produce responsible AI. In fact, misconceptions about the relationship between AI and social credit experiments in China have even made their way into the European Union (EU)’s draft AI Act. Against this backdrop, this working paper investigates emerging approaches to ethical AI in China by studying the country’s first local regulations addressing AI, namely those issued by Shanghai and Shenzhen in the fall of 2022. Through an examination of original government documents, Chinese media coverage, and local expert commentaries, to be complemented by interviews with practitioners in a subsequent phase, this ongoing research aims to shed light on the actors as well as the ethical, political, and sociocultural forces that shape AI governance in China. Following an informed overview and analysis of the main Chinese AI governance developments at the national level, the paper contextualizes and examines the cases of Shanghai and Shenzhen to identify similarities and differences between emerging approaches to AI ethics and governance in China and in the EU, as well as any instances of, or opportunities for, mutual learning and intercultural dialogue.

Author bio: Rebecca Arcesati is an Analyst at the Mercator Institute for China Studies (MERICS) in Berlin, Germany. Her research focuses on China’s technology and digital policy and regulation. She covers the global footprint of Chinese tech firms, digital infrastructure and surveillance tools, governance of data and artificial intelligence, and Europe-China relations in the technology and innovation spaces, including tech transfer. Prior to joining MERICS, Rebecca gained experience helping Italian tech startups scale in China and working as a research assistant in the UN Women China office. She holds an LL.M. in China Studies with a focus on politics and international relations from Peking University, where she was a Yenching Scholar. Rebecca received an MA in International Studies from the University of Turin and a BA in Language Mediation and Cross-Cultural Communication from the University of Milan. She has studied and worked in Beijing, Shanghai, and Dalian, Liaoning.