Panel 2: Shared Policies?
26 April | 11 am | Chair: Jocelyn Maclure | Venue: Frankopan Hall
Presentation 1: Big Tech and its adversaries: Situating platform power within the geopolitical battle for data
Presenter: Amber Sinha
Abstract: Perhaps there is no clearer indication of the primacy of data in this age than the overworked metaphors that are often used to describe it. In the last few years, data has been likened, aside from the hackneyed comparison to ‘oil’, to any number of tangible entities such as mineral deposits (Hooper, 2017), dividends (Sumagaysay, 2019), and even the Alaskan Permanent Fund (Hughes, 2018). On the other end of the spectrum, commentators have also compared data to radioactive materials such as uranium and to pollutants such as carbon dioxide (Tisne, 2019). As tired or inventive as these metaphors may be, they signify a desperate need for a clear conceptual model through which we can think through the legal, social and economic ramifications of data. The comparisons of data to an asset are intrinsically linked to the question of who controls that asset. In their 2006 book, Who Controls the Internet?: Illusions of a Borderless World, Jack Goldsmith and Tim Wu recounted how the idea of a borderless Internet first ran into territorial governments. A decade and a half later, we see new battlelines being drawn, this time with large American BigTech companies and governments, most notably from the Majority World, at their centre. The struggle over who controls the Internet has continued its journey to becoming a defining geopolitical tug-of-war of our times, with data—how it is collected, stored, protected, used, and transferred across national borders—as its most recent site of battle. This new geopolitical war has a diverse cast of characters and interests. In the last decade, the biggest technology firms (Facebook, Apple, Google, Amazon, and Microsoft) and their foreign counterparts such as Alibaba, Huawei, Baidu and Tencent have created a ‘new dimension in geopolitics’. I will situate platform power, as a consequence of data governance practices, within this emerging geopolitical battle for data. Significant attention has been paid to the struggles between BigTech companies and governments in the US and EU. However, there has been little or no scholarly analysis of states in the Majority World flexing their power, not only to rein in the power exerted by BigTech players in their jurisdictions but also to assert their own regulatory muscle.
Author bio: Amber Sinha works at the intersection of law, technology and society, and studies the impact of digital technologies on socio-political processes and structures. Until June 2022, he was the Executive Director of the Centre for Internet and Society (CIS), India, where he led programmes on privacy, identity, AI, and free speech. He is a Senior Fellow-Trustworthy AI at the Mozilla Foundation, studying models for algorithmic transparency, and Director of Research at the Pollicy Data Institute, Kampala. Amber is a member of the Steering Committee of ABOUT ML, an initiative to bring diverse perspectives to develop, test, and implement machine learning system documentation practices. He also serves on the GPA Reference Panel of the Global Privacy Assembly. His first book, The Networked Public, was released in 2019.
Presentation 2: “Made in Europe”: exporting European values to the peripheries through the regulation of Artificial Intelligence - an exploratory analysis of the case of Morocco.
Presenter: Oumaima Hajri
Abstract: Much has been written about the transformative power of AI systems and how countries are racing for global AI dominance to reap the economic and geopolitical power expected to result. Nevertheless, this ‘race to AI’ is bringing forth a ‘race to AI regulation’, in which a new playground for global regulatory competition seems to be emerging. With the AI Act, the European Commission is introducing an extraterritorial regulatory framework for AI systems to ensure that systems placed on, and used within, the European Union (EU) market comply with EU values. Notably, while this is a step in the right direction for protecting fundamental rights, it remains a rather pompous and self-proclaimed aim to produce and foster universal AI systems that are ‘made in Europe’. Specifically, aware of the power of norms, the EU seems to strategically capitalise on the opportunity to spread its normative influence, export its values and promote its vested interests through regulating AI systems. Consequently, the question arises whether the AI Act can be perceived as a tool for Western normative dominance, as it denies the diversity of humankind’s ethical stances. Stepping outside of this self-referential Western point of view requires a view from elsewhere, which this thesis aims to provide by analysing the case of Morocco. To date, Morocco remains unconsciously tied to its colonial masters in Europe and is actively trying to boost its economic and social development independently through digital transformation, of which the national AI strategy is a prime example. However, as a postcolonial and ‘periphery’ country, it will soon find itself at the receiving end of the EU’s dictum of what ‘ethical AI’ is, leaving it no choice other than compliance given its significant dependencies on the EU market. The central thesis therefore investigates whether the EU’s attempt to regulate AI through the AI Act can be charged with ‘normative imperialism’ in the case of Morocco. The aim is to conduct a literature review to locate the existing knowledge on this topic and to fill the remaining gap with qualitative semi-structured interviews covering different institutional perspectives in Morocco, thereby addressing the void left by previous research on how the EU's normative imperialism, tied to its suppressed and colonial history of ‘peace’ and ‘prosperity’, continues to impact Global South countries through contemporary forms.
Author bio: Oumaima Hajri is a researcher and lecturer at Rotterdam University of Applied Sciences. Her work focuses on the social impact of AI. For the Designing Responsible AI Media Applications project, she is investigating, in collaboration with media organisations in the Netherlands, how AI can be applied in a responsible manner. She also works on regulation (such as the AI Act), in particular how ethical guidelines can make an important contribution to translating strategic interventions into practice. She is currently part of the first cohort of the MSt in AI Ethics & Society at the University of Cambridge, focusing mainly on decolonization and demystification. Additionally, Oumaima co-founded the platform "AI Better World" to raise awareness of, educate about, and deconstruct AI.
Presentation 3: AI Regulation in Brazil: National Knowledge or Foreign Appropriation?
Presenter: Marina Garrote, Paula Guedes and Bruno Bioni
Abstract: The implementation of Artificial Intelligence (AI) systems in Brazil has grown in recent years. Along with it came the first concerted efforts to regulate the technology through different instruments: first the Brazilian Strategy for Artificial Intelligence (EBIA), then a first Draft Bill (PL 21/2020), and finally, as the most recent step, a Commission of Jurists in the Senate charged with writing an amendment to the draft bill. Regulating AI in the South presents a more significant challenge than in Global North countries, as northern citizens have older and more structured rights-protecting mechanisms for data protection, as well as more robust democracies and institutional safeguards, and hence more extensive protection of human rights. Thus, any regulation in the South needs to consider the local context and vulnerability of its population to the risks of AI, and the utmost relevance of popular participation in the regulatory process (Arun, 2020). This article aims to analyze the text of the amendment to the Brazilian Draft Bill written by the Commission of Jurists. Does it consider the local context and the population's vulnerabilities, reflecting the participatory processes in which local knowledge was emphasized? Or does foreign knowledge prevail, with models of regulation appropriated from abroad? Do these models follow a rights-based or a risk-based approach? This is a new analysis, and answering these questions will allow an evaluation of the state of AI regulation in the country. An analysis of the efforts to regulate AI so far demonstrates that, although there was the broad goal of producing a framework to regulate the technology in the country, the contributions made by stakeholders and the public during participatory processes were not properly considered, and the resulting instruments were not fit for purpose (Belli et al., 2023).
Author bios: Marina Garrote is a researcher at the Data Privacy Brasil Research Association and a Master's student at the University of São Paulo. She is a lawyer and academic working on data protection, digital rights, access to justice, and gender and sexuality, and holds a specialisation in Gender and Sexuality from the University of the State of Rio de Janeiro.
Paula Guedes is a law and technology specialist and a PhD candidate at the Catholic University of Portugal, Faculty of Law in Porto, where she also completed her Master's in International and European Law.
Bruno Bioni is the Executive Director of the Data Privacy Brasil Research Association and a lawyer and professor in the field of regulation and new technologies. He holds a PhD in Commercial Law and a Master's degree in Private Law from the University of São Paulo. He is the founder of Data Privacy Brasil and a founding partner of Bioni Consulting, a member of the Board of Directors of the Brazilian Data Protection Authority as a representative of civil society, and a member of the Jurists' Commission responsible for supporting the elaboration of the substitute draft for the Bill (no. 5051 of 2019), which aims to originate Brazil's Legal Framework for Artificial Intelligence.