Beyond Binary: Diversity & Equality in AI
From job-application filters favoring male candidates to facial recognition systems misidentifying people of color, AI seems to be biased. At this open discussion just before International Women's Day, we'll discuss the relationship between AI, (in)equality and diversity. (English / SG-Certificate*)
Time: 16:00-18:15 hrs. Doors open at 15:45 hrs.
Admission is free, registration required.
As humans, we carry our biases into everything we create, including Artificial Intelligence.
This perpetuates our inequalities and causes very real problems: speech and facial recognition software often performs better for men, because the training data contains more male voices and faces, and grant-allocation algorithms tend to prefer white applicants when filtering applications, reflecting historical racism in society.
However, anyone can now use AI tools such as ChatGPT to learn, develop, and pursue their goals, which could help create a more level playing field. Yet despite this potential for empowerment, only a few companies are building a monopoly over these new technologies, reducing the transparency of their development year by year and raising the paywalls to access them. This again creates uneven advantages.
Without explicit human intervention, breaking this self-reinforcing loop is challenging.
How can we make sure that AI remains truly available to everyone, and not just to a privileged elite? How trustworthy are these algorithms, and how exposed are they to bias, if we don't even have access to how exactly they are created? How do we ensure that one's gender, race or socio-economic status does not influence the benefits one gains from AI? In other words: how do we ensure that AI is something created by the people, for the people, with no one excluded?
Calling for different perspectives
During this interactive and open discussion, we would like to take an important first step in solving these challenges, by bringing different perspectives together so that everyone can play a role in defining the future of AI. This way, we aim to push beyond the limitations of a simple discussion, putting awareness of the opinions and ideas of people around you at the center of the conversation. The goal is to foster a comprehensive understanding of inequality and diversity in AI, striving for inclusivity in its development and utilization.
The interactive discussion, moderated by study association Enigma (Cognitive Science & Artificial Intelligence), will feature topics introduced by researchers Prof. Dr. Marie Šafář Postma and Dr. Eva Vanmassenhove. Key questions include whether AI should be open source and whether more AI ultimately benefits or harms equality. To ensure everyone's contribution lasts beyond the event itself, the essence of the discussions will be captured in a written record.
The debate will be followed by an informal get-together with drinks and a small bite, to finish those conversations you didn't have time for during the event.
Dr. Eva Vanmassenhove
Assistant Professor in the Department of Cognitive Science and Artificial Intelligence (Tilburg University)
Google Translate, which makes use of AI, translates "The nurse, the cleaner, the politician, the scientist" into la enfermera, la limpiadora, el político, el científico in Spanish. Do you notice the gender norms hidden in this translation? This is exactly what Dr. Eva Vanmassenhove researches. She works on the integration of linguistic features into Neural Machine Translation (NMT), focusing on issues related to gender and the loss of diversity in language due to statistical and algorithmic bias. She obtained her PhD from Dublin City University, Ireland.
Marie Šafář Postma
Full Professor in the Department of Cognitive Science and Artificial Intelligence (Tilburg University)
Prof. Dr. Marie Šafář Postma is head of the Department of Cognitive Science and Artificial Intelligence. Her research focuses on cognitive phenomena closely linked to the concept of consciousness, such as bistable perception, perceptual decoupling, interoception, and complex consciousness experiences. To examine these phenomena, she uses a broad range of research tools, including computational modeling and behavioral experimental methods applied to questionnaire, behavioral, and neurophysiological data collected in real and simulated virtual-reality environments. She is a core member of TAISIG (Tilburg Artificial Intelligence Special Interest Group), which aims to combine, coordinate, and strengthen AI activities at the university and to establish itself as a recognized key player in the regional and national AI domain.