Many Worlds of AI

Date: 26-28 April 2023

Venue: Jesus College, University of Cambridge

Panel 8: AI Histories in India

26 April | 3.00 pm | Chair: Maya Indira Ganesh | Venue: Bawden Room

Presentation 1: Taking off with ease or Face-off with Justice? Mapping Digital Citizenship and ‘Ways of Seeing’ the Indian Biometric State

Presenter: Madhavi Shukla

Abstract: In November 2022, the NITI Aayog (the successor to the Planning Commission of India) released a discussion paper titled ‘Responsible AI for All’ for public comment, introducing Facial Recognition Technology (FRT) at domestic airports. While the document awaited feedback, earlier this month the Indian civil aviation ministry launched the biometric system DigiYatra (short for ‘Digital Yatra’, or digital travel) as an app for domestic passengers travelling from New Delhi airport. The official website promotes FRT as the ‘future of air travel’, encouraging users to participate voluntarily in the policy. With the face as the boarding pass, people, data, and technologies thereby intersect with one another, promising ‘a simple and easy’ travel experience. However, the Indian use of FRT by law enforcement has drawn criticism from human rights activists for targeting individuals from marginalized backgrounds. This is compounded by the fact that the Indian capital, with 1,862.6 CCTV cameras per square mile, is the site of one of the greatest government surveillance projects internationally. There are additional concerns regarding the accuracy of the algorithms and, given that India has no data protection law, the official recommendations making a case for ‘Responsible AI’ (RAI) have further raised legal experts’ concerns about privacy violations. Situated within this context, my paper reviews the policy formulations of DigiYatra that describe the futuristic images of FRT as desirable AI. It ‘looks’ into how the gaze of the camera categorically renders the body of the citizen-subject as a visual portrait in a digital database, while this visual persuasion furnishes a process of governance that the state deems just, efficient, and good. The rise of the AI-enabled ethnographic state thus reveals how the ocular-centric capital of FRT pivots on the promise of an efficient digital future, imaging ‘ways to see’ and positing itself as the currency of modern state rule and as ‘law by other means’.

Author bio: Madhavi Shukla is a doctoral candidate at the Centre for the Study of Law and Governance, Jawaharlal Nehru University, New Delhi. Her research focuses on law and visuality, law and society, and cultures of technologies. Additionally, she is part of a study group on ‘AI and Facial Recognition Technology’ anchored at the Centre for the Study of Law and Governance, which has shared its recommendations on the Niti Aayog discussion paper titled ‘Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology.’ As a legal feminist she is passionate about feminist jurisprudence and imagines possibilities for equal futures for everyone. When she is not dreaming of equality, she enjoys tending to her plants and playing with cats.

Presentation 2: Data power, AI and the "doubtful citizens": The case of India's National Population Register

Presenter: Anirban Mukhopadhyay

Abstract: This paper analyzes how the National Population Register (NPR) project in India uses techniques of data power to shape the boundaries of citizenship rights. With religion becoming a shadowy marker of identity in framing "real Indians," the present governmental regime in India actively seeks to identify the "doubtful citizens" within the national space by linking AI-powered biometric data with other forms of legal identification. The notion of the "real citizen" is reframed through biometric identification and documentation, and the bureaucratic web of this governmental reframing affects who can access essential services, such as opening a bank account or becoming part of the food distribution system. The entanglements of data infrastructures and systemic discrimination in this context are shaped through surveillance machinations, neoliberal governance, and the erasure of citizenship rights. In "Documents and Bureaucracy," Matthew Hull emphasizes "the way documents link to people, places, things, times, norms, and forms of sociality" (Hull, 2012, p. 255). He notes that documents are often thought of as offering access to the things and processes they document. However, they have broader capacities, particularly as they relate to administrative control and the active construction of subjects and socialities. This paper questions how the NPR in India seeks to re-construct socialities and rights using AI and data power. Through an examination of discourses in the mainstream media, the rhetoric of the politicians of the ruling party, and official government rhetoric, this paper explores how the logic of the NPR engenders specific forms of statehood, governance, and citizenship. Using Foucault's idea of biopower and Agamben's conceptualization of homo sacer, the paper analyzes the notion of citizenship building in relation to legal documents and biometric identification.

Author bio: Anirban is a Ph.D. candidate in Communications and Media at the Institute of Communications Research at the University of Illinois Urbana-Champaign. He is interested in the history of communication/media technologies, media and space, race and media, infrastructures of media, media policies, and critical information studies. He is intrigued by how the media affect cultural changes in the public sphere, build cultural citizenship, and (re)produce frameworks of everyday life.

Presentation 3: Palmistry, Predictive Analytics and Imprints of Colonized Bodies

Presenter: Charu Maithani

Abstract: Through a critical cultural analysis of machine learning techniques, this paper draws links between the ancient practice of palmistry, or chirology, and the colonial history of fingerprinting that became the basis of the biometrics now widely used to identify, control, and surveil bodies. In 1858, William James Herschel, a British officer in the administrative services in Bengal, India, obtained Rajyadhar Konai’s handprint as a testament that Konai would honour his contract to supply road-building material. Francis Galton’s study of fingerprints made use of the finger and palm prints documented by Herschel in Bengal, leading to the swift institution of finger and palm prints for identification purposes in administrative and legal settings. Although finger and palm printing have legacies in anthropometry, they also hold a key place in biometrics. Moreover, current machine learning and data practices inherit some of the methods established by Galton in the study of fingerprints. In this way, contemporary AI and data practices bear the imprints of colonized bodies. On the other hand, palmistry continues to be widely practiced in parts of South and East Asia, where the lines of a person’s palms are interpreted to predict the future. This aim, common to the application of machine learning in predictive analytics, has been explored by developers of machine learning programs (including various mobile apps) that use the principles of palmistry to predict the future. My paper explores the connections between machine learning programs, palmistry, predictive analytics, and colonized bodies in two somewhat opposing ways: firstly, to think of current practices in AI within the framework of a cultural practice such as palmistry, challenging the universalizing vision of AI; and secondly, to show how cloaking AI in ancient palmistry practices, as seen in popular apps, continues the extractive practices of historic colonialism.

Author bio: Charu Maithani is a researcher who organises her inquiries in the form of writing and curated projects. She is currently a sessional academic at UNSW, Sydney.

Presentation 4: The Digital Afterlives of Brahminical Colonialism: Biometric Surveillance, Facial Recognition Technology, & AI Ethical Complicities in India, 1858-2022

Presenter: Nikhil Dharmaraj

Abstract: This project is a historically informed investigation of artificial intelligence (AI) surveillance systems in modern-day India. In particular, I aim to trace the evolution of biometric surveillance in India from British-Brahminical analog schemas relying on data such as the fingerprint to digital Hindu nationalist reincarnations powered by facial recognition technology. In 2021, Ameya Bokil, Avaneendra Khare, Nikita Sonavane, Srujana Bej, and Vaishali Janarthanan published a report entitled “Settled Habits, New Tricks: Casteist Policing Meets Big Tech in India,” examining the important rearticulations of Brahminical surveillance under emerging techno-capitalist regimes. Building on such scholarship, this paper posits a novel historical framework linking the casteist-colonial origins of state surveillance in British India to contemporary facial recognition concerns under the Hindu Right, focusing in particular on three strands: (1) the colonial history of surveillance technology, informed by age-old Brahminical ideas of criminality; (2) facialization as a site of biometric power under the British Raj; and (3) legal resonances, such as the Criminal Procedure (Identification) Bill of 2022, that permit targeted surveillance of ghettoized communities in India today by the Delhi and Hyderabad police forces. Applying these perspectives to contemporary discourses, I wish to audit contemporary technical AI ethics papers and frameworks from the Anglophone West in terms of their (in)ability to capture the violence of such Brahminical, colonial power and multidimensional social variables such as religion and caste. How can we contextualize the emerging popularity of facial recognition technology in India, both as a continuation of colonial biometrics and as an instrument of current Hindu nationalist/Brahminical regimes? How well do contemporary (Western) AI ethical frameworks capture these complex sociopolitical realities?

More broadly, this line of inquiry generalizes to discourses of AI across the Global South, making resoundingly clear that AI ethics must be attentive to the nuanced social histories of regional power, post-colonial complicity, and technology, rather than positioning "decolonization" as an empty moral imperative. I conclude with notes on the (im)possibilities of an AI ethics guided by abolitionist thought and Black feminist theory, which foregrounds hopelessness and refusal as generative sites for ethical practice. Overall, I aim to continuously contend with my own positionality within these matrices of power, as a dominant-caste, Hindu-raised, class-privileged Indian American with family ties to the Silicon Valley technology industry, writing with the intention of unearthing my own complicity in these networks of mass digital death and urging others like me to do the same.

Author bio: Nikhil is a member of the Harvard College Class of 2023 and an A.B. candidate in History & Literature (Modern World Track) and Computer Science, with a secondary field in South Asian Studies.