TILT-PHCE Seminar: Eugenia Stamboliev
‘Trust’ as a Floating Signifier in AI Ethics
In response to troubles with the development and implementation of AI systems in everyday life, policy-makers and academic researchers have constructed a discourse about trustworthy AI. The hope is to better regulate AI systems by imposing human-centric norms that justify human users' trust in them. However, critics like Luke Munn emphasize that these norms are often formulated as merely voluntary guidelines. The discourse of trust in AI ethics thereby risks becoming a legitimation device for a morally problematic industry. AI ethics allegedly either remains in the ethereal realm of abstract, general statements of principle or devolves into a list of concrete boxes AI developers have to check in order to support their moral reputation. In our paper, we argue that both this optimism and Munn's pessimism about the role of trust in AI ethics are misguided, because both miss the political functionality of 'trust' as a floating signifier. 'Trust' is neither an ideal moral-theoretical standard for AI development nor mere empty talk. It is a political concept left deliberately vague in order to appeal to the multiple stakeholders of AI development. However, it is also a concept increasingly instrumentalized for design and computational convenience, and it could be hollowed out by its inflationary use. Above all, trust seems to have shifted from an interrelational concept to a design attribute, signifying a future more desirable for designers and AI researchers than for the users or workers of their technologies. The conflict over who profits from the mobility of a floating trust concept is left out of most debates, despite mutually conflicting interests. By promoting the discourse of 'trust' as unified, computational, and stable, policy-makers and researchers seem to unite divergent social groups (private businesses, the public sector, civil society organizations, etc.) towards the same goal: to make technologies like AI or platform architectures more trustworthy for everyone. However, we point out that this strategy risks hiding the conflicting interests of these multiple stakeholders as much as the very fluidity of the concept of trust involved.
Speaker: Dr. Eugenia Stamboliev is a media scholar and philosopher at the University of Vienna (AT), where she works as a postdoctoral fellow in the project 'Interpretability and Explainability as Drivers of Democracy'. She has an educational background in legal studies, media studies, and philosophy from the Free University Berlin (DE), the University of the Arts Berlin (DE), and the European Graduate School (CH), and she holds a PhD from the Marie Curie programme 'CogNovo' at the University of Plymouth (UK). In the 2022/2023 semester, she is a visiting fellow at the philosophy department of Tilburg University, researching algorithmic trust, platform labour, and normative limits. More information on Eugenia's work can be found on her profile.
Moderator: Dr. Tim Christiaens is assistant professor of philosophy at Tilburg University. His field of research is contemporary continental political philosophy, with a focus on social critique and economic topics such as the digitalization of work, socio-economic exclusion, and financialization. Most recently, Tim Christiaens has been working on a book about the digital gig economy and worker autonomy, titled 'Digital Working Lives' (due November 2022). His work has been published in journals like Theory, Culture & Society, European Journal of Social Theory, and Big Data & Society. His university profile can be accessed here.