Workshop 2: Provotypes for Embodiment of Value Tensions across Cultures

Facilitator: Dasha Simons

Abstract: Critical challenges arise when translating AI ethics principles into practice and into intercultural contexts, a crucial one being value tensions. Not all desired values can be embedded in AI, as some conflict with one another, such as fairness and accuracy, individual and collective benefit, or transparency and privacy. These trade-offs need to be resolved in context- and culture-sensitive ways. But how do we identify which values to prioritize in AI development, and how do we meaningfully speak to these differences? At the same time, the design perspective in AI ethics is advancing. Ethics literature recognizes designers as important professionals because they can not only provide technical means but also address the values of people and society and create ways to express them in material culture and technology (Van den Hoven, Vermaas, & Van de Poel, 2015). Studies also show that design approaches deal well with satisfying conflicting demands (Dorst & Royakkers, 2006). This is a proposal for a workshop in which we will use this design perspective to create provotypes as artifacts for meaningfully discussing differences across cultures in resolving value tensions for AI. Provotypes are artifacts or pictures that embody tensions in a certain context in order to explore new design opportunities (Boer & Donovan, 2012). In this one-hour workshop, we will collaboratively and interactively explore the creation of provotypes that embody the value tensions between commonly used AI ethics principles and more local ones. The session will be guided and prepared with examples from industry. We invite all to join this explorative workshop!

Author bio: Dasha is passionate about bringing the human heartbeat into technology development by creating more trustworthy AI by design. She uses her creativity and human-centred perspective to find new ways to make AI more trustworthy by making it more explainable, transparent, and fair. At IBM her role is twofold. Internally, she enables teams in creating trusted AI; examples include setting up the global CoE for Trustworthy AI at IBM and leading the training initiatives on Trustworthy AI for consultants working in EMEA. Externally, she advises industries ranging from financial institutions to the public sector and consumer goods, from operational to C-level, on trustworthy AI development. Dasha strongly believes a design perspective can support the current technological and policy feats in AI ethics. Her close collaboration with Delft University of Technology focuses on this by exploring new methods and tools and providing educational support, and it is brought to life in her role as Advisory Board member at the AI Futures Lab. She is a frequent speaker at events and conferences and has co-authored various publications in the design field. For more information, please have a look at: https://www.designfortrustworthyai.com/about-dasha-simons

Recorded Presentation | 27 April 2023

#Values #InterculturalApproaches #ParticipatoryProcess #Designers/Developers
