Ethical, Legal and Societal Aspects of AI (ELSA)
How do you ensure that technological development contributes to an inclusive, democratic world? You do that by taking a holistic approach. "You cannot separate the development of Artificial Intelligence (AI), the associated prerequisites, and the application. You have to cover the whole life cycle; AI is worthless without human or social intelligence."
‘You have to cover the whole life cycle’
The speaker is Esther Keymolen. She is an associate professor of Philosophy of Data-Driven Technologies and, together with Ronald Leenes, the initiator of ELSA (Ethical, Legal and Societal Aspects of AI). This branch of TAISIG specializes in the preconditions of AI: does it meet legal, ethical, and societal conditions? "Take the use of facial recognition on the street, or labor law at a company like Uber. Boundaries are being crossed that we need to look at. Do we really want it this way? Humans must be at the heart of AI."
"We can contribute to how you do that," she continues. "There is a phenomenon called the Brussels effect. From Brussels, we enshrine in law the values that AI must meet. That European standard is widely recognized as high. Because those laws also apply to providers from the rest of the world who offer products and services on the European market, European standards spread beyond the EU and influence public and societal values. How can workers be protected from undesirable consequences of digitalization and globalization? What all these areas have in common is that expertise within government and regulators must grow. Not only because that knowledge is required to regulate AI effectively, but also so that AI itself can be used for regulation in a responsible and effective way. We approach these issues not only through scientific research but just as much through education. After all, we are educating tomorrow's professionals."
In Europe, we are indeed opting for human-centric AI
Legislation in motion
"In Europe, we are indeed opting for human-centric AI," confirms Ronald Leenes. He is a full professor at Tilburg Law School (TLS). "In America there is more of a Wild West atmosphere, where everyone tries to jump on developments. We do research on AI from the perspective of law, ethics, and social sciences. As a result, we have a good view of what is happening, and we are well aware of European standards. This works both ways: we are concerned both with the preconditions during development and with data protection. You can choose to adhere strictly to the rules of the game. But because opinions in society are shifting, legislation is not always appropriate. Take the example of facial recognition. Do we want a situation where you can never walk down the street anonymously again? In Europe, we think not, but in America and China, they have a different opinion. Legislation is constantly being tightened in the areas of privacy and security. We are helping to shape new regulations."
What distinguishes Tilburg University? "The Netherlands has three or four larger law and technology groups at universities," Leenes explains. "We are the largest. Our size and our multidisciplinary approach make us unique. Others often include the legal aspect but look less at ethics and social issues. Tilburg University does all of this and is very international as well. We harbor many perspectives." Partners can come to Tilburg University for applied research, consortia, policy advice, lectures, and training courses. Keymolen: "They are welcome to approach us with their questions. Where do entrepreneurs get stuck when they want to apply AI? What difficulties do municipalities, regulators, and policy makers face in establishing and enforcing policies? The best part is when we can link our scientific knowledge and expertise to issues that are alive in society."
Entrepreneurs are welcome to approach us with their questions
"Tilburg University has all the knowledge for an integrated, holistic approach," Keymolen believes. "Our goal is to think along with the technologists throughout the AI life cycle, also because we know that values do not only matter when you roll out an application in society. You don't want to make a wonderful technological discovery only to conclude at the end that people don't think it's a good development. Values are woven into the development process itself. Data scientists are themselves guided by values: how do they ask questions? What do they think is important?"
Legal and ethical
"At Tilburg University we are used to pointing out the limits of the law and contributing ideas to technologists about what is possible within those limits," adds Leenes. "We search constructively together with them." Within ethics, Tilburg University identifies relevant values. "Sometimes something is legally allowed but not ethically desirable. A few years ago, ING offered personalized ads based on payment behavior. They did that in a legally correct way, but customers did not appreciate it because it affected the trust relationship with their bank. The social resistance was too great."
Impact of action
Tilburg University offers several programs in the field of AI. "What's great is that in the Master's program in Data Science & Society, for example, there is also a focus on ethics and legal aspects," says Keymolen. "It's woven into the program. You not only need technical knowledge; you also have to look at the impact of your actions and the solutions you come up with. The same goes for the Master's program in Data Science & Entrepreneurship, which we offer together with JADS. I see that alumni who start startups from the program think carefully about their social relevance. In this way, we, as Tilburg University, are making a mark on the character of new tech companies in the region. That's a good start, but we're not there yet."
I see that alumni who start startups from the program think carefully about their social relevance
Explanation and accountability
In terms of research, TILT (Tilburg Institute for Law, Technology, and Society) has worked for years with technical universities and large tech companies such as IBM, Microsoft, and HP in European projects. Leenes: "Together, we looked at what problems we are trying to solve. What conditions do we need to meet? That requires multidisciplinary insight. Tilburg University can also contribute to explainable AI. This is about explanation and accountability. Are we satisfied with 'computer says no'? You have to explain how the computer arrives at that answer if you want users to accept it. Such an explanation also depends on the situation and the user. How do you prevent stereotyping and discrimination? That has our attention, and we have to work on it jointly." Keymolen nods. "For the Ministry of the Interior, for example, we are creating a guide on preventing discrimination in the development of AI applications. In it, we explain the main legal principles and relate them to technical and organizational measures."
Humans and machines
"We should not see AI as mere technology but as a socio-technical system in which humans and machines come together," Leenes believes. "From ELSA we highlight the social side: what does AI mean for people, and how do people influence AI? The angle revolves around ethics, legal aspects, and how people understand and judge AI. It is relevant to know people's views on robots, because those views have implications for the development of those robots. AI does not yet replace people; it is always a combination of humans and machines."