What would an anti-casteist AI system look like?

Abstract: This paper fills an acknowledged gap in AI and wider digital research by positioning ‘caste’ as a complex phenomenon and exploring anti-casteist principles interpreted for AI contexts. It seeks to distil an ontological interpretation of caste and to put forward an anti-casteist ethical approach to the design, policy and governance of AI and algorithmic technologies. Caste’s conceptual indeterminacy and its subjective, experiential nature add to the challenge of interpreting it within digital contexts. Caste cannot be explained solely through the conceptualisation of race and racial biases or, more markedly, through prevalent class-based analysis. Furthermore, caste is constituted of and experienced through largely communicative practices and contexts. As first principles, we propose returning to Dr. B.R. Ambedkar’s treatise on caste and its ‘mechanism, genesis and development’. From this and other literature, several main explanatory aspects can be identified and interpreted for AI contexts. First, we describe caste-based enclosures: factors that allow for ideological or functional boundedness, formed in the (digitally mediated) real world or its virtual representations, that presuppose or permit caste-centric in/exclusion and caste homogeneity. Second is a preponderance of caste-centric purity in social relations, found either implicitly or explicitly, which can be queried in human contexts or in the digital content of AI systems. Third is an intersubjectivity of caste among groups and individuals, which establishes differing positionalities, the in/visibility of caste markers, and caste-originated power differentials, all of which can be questioned among the various stakeholders of AI systems. The paper maps this conceptualisation onto AI’s sociotechnical lifecycle of design, development and deployment.
Through this, the paper argues for a practical, ex-ante and ex-post embedding of Ambedkarite ideals, presenting a framework for an ideologically anti-casteist mechanism for the ethical assurance of AI.

Author bio: Shyam Krishna - As an engineer-turned-researcher in critical data studies, my research interests lie in developing an ethical and social justice-oriented view of emergent digital innovations and the technopolitical ecosystems they inhabit. My research interests include the algorithmic and data practices of gig work, digital identity, and fintech platforms. I am interested in policy and governance issues surrounding AI, particularly in the global South, and in how these overlap with contexts of labour and credit. Currently, at The Alan Turing Institute, I research and advise organisations on national and global ethical policy agendas for AI and how these can be directed by principles of data justice.

Recorded Presentation | 27 April 2023

#Caste #Values #Diaspora #India
