The Digital Afterlives of Brahminical Colonialism: Biometric Surveillance, Facial Recognition Technology, & AI Ethical Complicities in India, 1858-2022

Abstract: This project is a historically informed investigation of artificial intelligence (AI) surveillance systems in modern-day India. In particular, I aim to trace the evolution of biometric surveillance in India from British-Brahminical analog schemas that relied on data such as the fingerprint to digital Hindu nationalist reincarnations powered by facial recognition technology. In 2021, Ameya Bokil, Avaneendra Khare, Nikita Sonavane, Srujana Bej and Vaishali Janarthanan published a report entitled "Settled Habits, New Tricks: Casteist Policing Meets Big Tech in India," examining the important rearticulations of Brahminical surveillance under emerging techno-capitalist regimes. Building on such scholarship, this paper posits a novel historical framework linking the casteist-colonial origins of state surveillance in British India to contemporary facial-recognition technology concerns under the Hindu Right, focusing in particular on three lines of analysis: (1) the colonial history of surveillance technology informed by age-old Brahminical ideas of criminality; (2) facialization as a site of biometric power under the British Raj; and (3) legal resonances, such as the Criminal Procedure (Identification) Bill of 2022, that permit the targeted surveillance of ghettoized communities in India today by the Delhi and Hyderabad police forces. Applying these perspectives to contemporary discourses, I audit technical AI ethics papers and frameworks from the Anglophone West in terms of their (in)ability to capture the violence of such Brahminical, colonial power and to account for multidimensional social variables such as religion and caste. How can we contextualize the emerging popularity of facial recognition technology in India, both as a continuation of colonial biometrics and as an instrument of current Hindu nationalist/Brahminical regimes? How well do contemporary (Western) AI ethical frameworks capture these complex sociopolitical realities? More broadly, this line of inquiry generalizes to discourses of AI across the Global South, making resoundingly clear that AI ethics must be attentive to the nuanced social histories of regional power, post-colonial complicity, and technology, rather than positioning "decolonization" as an empty moral imperative. I conclude with notes on the (im)possibilities of an AI ethics guided by abolitionist thought and Black feminist theory, one that foregrounds hopelessness and refusal as generative sites for ethical practice. Throughout, I aim to contend continually with my own positionality within these matrices of power as a dominant-caste, Hindu-raised, class-privileged Indian-American with family ties to the Silicon Valley technology industry, writing with the intention of unearthing my own complicity in these networks of mass digital death and urging others like me to do the same.

Author bio: Nikhil is a member of the Harvard College Class of 2023 and an A.B. candidate in History & Literature (Modern World Track) and Computer Science, with a secondary degree in South Asian Studies.

Recorded Presentation | 26 April 2023

#Facial Recognition #Biometrics #Colonialism #Caste #History #India
