
Ethical, legal and societal aspects

One of the key tenets of AI at Tilburg University is that AI is always part of a socio-technical context and that this context matters. As a university founded on humanistic values, we build on decades of experience in society-driven research, enabling us to critically assess the complex interplay of AI and society.


Throughout its lifecycle, from the moment data is collected and models are developed until AI is widely used in everyday life, AI remains an integral part of a socio-technical context. This environment is not a passive recipient of AI, slavishly molding itself to whatever AI needs to function properly. On the contrary, machine learning experts, data scientists, citizens, companies, NGOs, and public actors all, in their own roles and capacities, influence the way in which AI takes shape and whether and how it is used. Nor do AI developments evolve in a regulatory void: legal, economic, and social agreements create a normative, societal action space in which AI is welcomed (or not).


While AI takes shape in interaction with this socio-technical environment, it simultaneously transforms it, for better and for worse. In recent years, we have witnessed how AI can bring about unwanted consequences, e.g., opaque and unexplainable decision-making, manipulation, discrimination, challenges to solidarity, and privacy intrusions. Considering AI’s promises and perils, we are required to rigorously reimagine what it means to be human and what it means to live in a democratic society. How can we ensure that AI is grounded in shared values and fundamental rights?

At Tilburg University, this crucial branch of AI research in the domain of the social sciences and humanities is brought together under the heading of ELSA: Ethical, Legal, and Societal Aspects of AI. As a university founded on humanistic values, we build on decades of experience in society-driven research, enabling us to critically assess the complex interplay of AI and society and to provide fruitful avenues toward developing human-centric AI.

To develop human-centric AI that is ethical, sustainable, and respects fundamental rights and values, three key themes are distinguished: explainable, fair, and trustworthy AI. At Tilburg University, these themes are explicitly identified as multidisciplinary themes. We are committed to bringing together our technical, legal, ethical, and societal expertise to gain a rich understanding of these themes.

Focal Issues

Human-centric AI

Human-centric AI is high on both the research and political agenda in the Netherlands as well as in Europe. The basic idea underpinning human-centric AI is that AI should be used “in the service of humanity and the common good, with the goal of improving human welfare and freedom.” AI should not be seen as a substitute for humans but as a means to effectively assist and augment them. Human-centric AI, therefore, puts the interaction between humans and AI firmly into the spotlight.

At Tilburg University, we engage with human-centric AI at three levels: the individual, the organizational, and the societal level. This means that we investigate not only what is needed for a fruitful interaction between people and AI in a specific user context, but also how AI can be properly embedded in organizational processes and how, at the national and international level, structures (legal and otherwise) need to be interpreted, adapted, and developed to ensure that effective checks and balances are in place for introducing AI applications worth wanting.

Explainable AI

Transparency, understood as making the operation and outcomes of AI models and applications accessible, is an important prerequisite for counteracting bias and discrimination through AI. Developing technical instruments for post-hoc explanations or deploying glass-box ML models are just two of the many strategies that have been developed in the technological domain to deal with these problems.
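To make the distinction concrete, the sketch below contrasts a glass-box model, whose parameters can be inspected directly, with a post-hoc explanation of a black-box model via permutation feature importance in scikit-learn. It is a minimal sketch on synthetic data, assuming a standard scikit-learn setup; the dataset and model choices are illustrative and not drawn from any specific Tilburg project.

```python
# Minimal sketch (illustrative assumptions): glass-box vs. post-hoc explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glass-box model: a linear classifier whose coefficients can be read off directly.
glass_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("coefficients:", glass_box.coef_.round(2))

# Black-box model plus a post-hoc explanation: permutation importance estimates
# how much each feature contributes to held-out performance by shuffling it.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
print("permutation importances:", result.importances_mean.round(3))
```

Whether such an output counts as a meaningful explanation for a given audience is exactly the kind of question addressed below.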

Through our ELSA lens, we investigate how these instruments are deployed. Do experts blindly rely on these instruments? Can they critically assess and correct them? How do these interventions impact the decision-making process? Acquiring this knowledge is of utmost importance for successful adoption of these crucial tools.

Another fundamental question concerning explainable AI is what we mean by a meaningful explanation. Which requirements need to be set in terms of content and procedure to ensure that an explanation effectively leads to understanding? Here, the socio-technical context comes into play again. At Tilburg University, for instance, we are investigating, in the context of healthcare, which levels of explainability in ML algorithms are needed to ensure a trusting doctor-patient relationship.

Whereas explainable AI matters to AI experts for developing robust systems, employees engaging with AI applications in the workplace have entirely different demands. They want to know how they should interpret the outcomes of the AI application and how responsibility and accountability for decisions are organized. Such questions involve legal scholars (liability law, labor law), ethicists, social scientists (organizational sociology and psychology), and public administration scholars (policy).

From an ethical point of view, explanations are needed to ensure that people can develop some form of meaningful autonomy in a data-driven society that respects their human dignity. This is of particular interest in the health domain, where AI applications are increasingly used in diagnosis and treatment and where patients are in a vulnerable and dependent position. But also in the judiciary, where AI expert systems are introduced to assist judges and clerks, explanations are a sine qua non for responsible uptake.

From a data protection perspective, explanations are required to ensure that data subjects can exercise control over what happens with their personal data and, simultaneously, hold companies and public actors accountable for their decisions. For regulatory bodies, explainable AI is key to developing meaningful oversight and auditable AI systems.

The deep-felt need to “open up the black box” is also intrinsically intertwined with fundamental questions in the domain of intellectual property law: can you “own” a deep learning model, and if so, how, and to what extent are you obliged to share information about that model? At Tilburg University, we are well equipped to tackle these questions from a multidisciplinary perspective.

Fair AI

From facial recognition systems that are biased against Black persons to fraud detection systems that predominantly target people with a lower socio-economic background: incidents increasingly lay bare the discriminatory impact AI can have. Stopping AI-induced unfair treatment by developing “discrimination-aware data mining” or “fairness-aware machine learning” is, therefore, high on the agenda of the ML community. At Tilburg University, we investigate if and how it is possible to translate legal principles into non-discrimination constraints that could become part of the actual design of algorithms.
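As a rough illustration of what such a translation might look like, the sketch below expresses a disparate-impact style check on a model’s selection rates. It is a hypothetical Python/NumPy example: the group labels, toy predictions, and the 80% (“four-fifths”) threshold are assumptions for demonstration, not a statement of law or of our methodology.

```python
# Hypothetical sketch: a non-discrimination principle phrased as a measurable check.
import numpy as np

def selection_rates(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive (e.g., 'approved') decisions per demographic group."""
    return {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}

def satisfies_disparate_impact(y_pred, group, threshold=0.8) -> bool:
    """True if the lowest selection rate is at least `threshold` times the highest."""
    rates = selection_rates(y_pred, group).values()
    return min(rates) / max(rates) >= threshold

# Toy predictions for two demographic groups "a" and "b".
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(selection_rates(y_pred, group))             # {'a': 0.6, 'b': 0.2}
print(satisfies_disparate_impact(y_pred, group))  # False: 0.2 / 0.6 < 0.8
```

A check like this is only a starting point; fairness-aware learning typically goes further and builds such constraints into training or post-processing.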

We understand that fairness covers a variety of ideas and definitions, such as equity, impartiality, egalitarianism, non-discrimination, and justice, to name just a few. Building on our work in political philosophy and ethics, we are developing a rich understanding of what fairness entails: not merely focusing on the outcomes of data-driven decisions and on equal treatment between individuals or between groups of individuals, but also addressing questions concerning procedural fairness. Specific attention is paid to data justice: how to ensure that people are “made visible, represented, and treated as a result of their production of digital data” in a fair way.
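This plurality is not merely terminological: different formalizations can disagree about the same decisions. The minimal sketch below, assuming toy data and illustrative group labels, computes two widely used group-fairness metrics, demographic parity difference and equal opportunity difference, and shows that predictions can satisfy one while violating the other.

```python
# Minimal sketch with toy data (illustrative assumptions only).
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Largest gap in overall positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_diff(y_true, y_pred, group):
    """Largest gap in true-positive rates (among truly positive cases) between groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Both groups get the same true-positive rate, but their base rates differ,
# so overall selection rates differ.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_diff(y_pred, group))         # 0.25: unequal selection rates
print(equal_opportunity_diff(y_true, y_pred, group))  # 0.0: equal true-positive rates
```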

We combine this strand of theoretical and conceptual research with hands-on, bottom-up approaches, e.g., citizen-science projects, ethnographic research, and collaborations with public and private stakeholders. By complementing our theoretical knowledge with on-the-ground insights and the experiences of communities affected by AI applications, we gain a nuanced understanding of how AI takes shape in practice and of the extent to which fairness formalizations hold up in real life.

Fairness objectives are also central to the design of an inclusive digital market, where different legal regimes such as competition, data protection, and consumer law are becoming increasingly interconnected. Due to new data analytics technologies, companies can manipulate consumers by exploiting their biases and vulnerabilities, or engage in unfair competition by homogenizing or personalizing prices. Our legal researchers work on regulatory mechanisms such as licensing regimes, competition remedies, data-sharing obligations, property rights, and community engagement to operationalize fairness in the market context. Specific attention is paid to platformization, the arrival of new gatekeepers, commodification, and the autonomy of both individuals and businesses in the market.

Trustworthy AI

Trust is a crucial building block for a flourishing democratic society. Citizens must be confident that their fundamental rights are protected, their interests are represented, and their freedoms are respected. Since AI is increasingly shaping that society, it is no surprise that building trust in the development, deployment, and use of AI and data-driven applications has become a focal point of AI policies. However, as individuals may place trust in technologies that do not deserve it, it is of utmost importance that AI is also worthy of that trust. To what extent are people, for instance, willing to engage in self-disclosure when interacting with chatbots? The benefits of AI can only be reaped if its risks are addressed at the same time.

A first and fundamental question that we address in our philosophical and ethics research is: what are we actually talking about when we talk about trustworthiness? Which conditions need to be met to speak of genuine, trustworthy, and responsible AI? Although trustworthy AI has been widely embraced as an important prerequisite for AI developments, what it means to be trustworthy (as a technology and as an actor employing that technology) remains rather vague. At Tilburg University, we believe that we can only engage in state-of-the-art AI research if we are clear about the meaning and reach of the concepts we adopt. Therefore, we invest in a firm conceptual basis for our ELSA AI research.

Trustworthy AI is intrinsically intertwined with reliability and safety. After all, AI can only truly take the interests of citizens to heart if it does not cause harm or act in undesirable ways. However, the increasingly autonomous and complex behavior of AI applications challenges our ability to maintain meaningful oversight and control. Our philosophers, for instance, explore how the use of AI tools in the military context interacts with responsibility and accountability as these tools increasingly act independently. Likewise, the growing complexity of AI-driven supply chains calls for new and additional forms of accountability and cooperation between economic actors. In addition, the growing distance between the decisions made by AI developers and the way those decisions play out in real life challenges our ideas of responsibility in a moral, social, and legal sense.

Broader governance issues regarding AI systems are part of Tilburg University’s ELSA AI research as well, for instance, governance structures and the allocation of responsibility, liability, and duty of care in the AI context. Practical questions include what role Institutional Review Boards or shareholder engagement could play in ensuring that companies use AI in a trustworthy manner.

Specific attention is also paid to corporate social responsibility, the moral competences of data scientists and ML experts to develop trustworthy AI, public-private partnerships in the domain of cybersecurity, and the role of public actors in safeguarding rule of law principles (e.g., in the domain of taxation and law enforcement). 

Do we need AI-specific regulation to ensure trustworthy AI? This question has tentatively been answered now that the European Commission launched its proposal for an AI Act in April 2021, regulating the whole sphere of high-risk AI systems. Is this the right approach? Are the proposed distinctions and the regulatory and enforcement mechanisms adequate, effective, and efficient? Do they (unnecessarily) hamper innovation? What do measures such as these mean in practice? Questions such as these are part and parcel of ELSA research at Tilburg University, where not only the regulation itself but also its impact in different contexts, from self-driving cars to new data intermediaries, healthcare robots, and chatbots, is considered.


Contact