Merel Noorman

Questions about ChatGPT rather than queries to it: “Now is the time to make fundamental choices”

Interview 4 min. Corine Schouten

Generative Artificial Intelligence, like the recently launched ChatGPT and the chatbots that followed it, seems to have caused a seismic shift in society. Is this new technology going to change our lives? Should we use it or not? Assistant Professor of AI, Robotics, and Science & Technology Studies Merel Noorman specializes in asking the ethical questions that help us deploy technology in an informed way. “It is important not to speculate, but to monitor what happens in practice,” is her motto. She has both feet firmly planted in practice as well as in theory.

Artificial Intelligence (AI) is already used in a lot of everyday technology, although we do not always realize it. With ChatGPT, however, AI suddenly seems to be all around us, especially if you are a student or an instructor, or compose texts for other purposes. Even Dr. Merel Noorman, who has engaged with AI in her teaching and research for many years, had to upgrade her skills quickly to stay ahead of her own students in examinations. Is ChatGPT just hype, or has something fundamental changed? “It seems that the uncanny valley theory applies here,” Noorman thinks. “As long as AI appears to be human, it is cute. Until it gets too close for comfort.”

Ethics as a guide

It is exactly when things get uncanny that Noorman can put her expertise to good use. She asks ethical questions that help find answers when existing norms and values are no longer helpful: questions, that is, about good and bad. “Ethics can help address a new reality,” she explains. “To think about ethical questions, we can draw on fundamental human rights and core values, such as human dignity, privacy, individual autonomy, and free interaction with other people. These need to be reinterpreted in the context of certain technologies.”

In many countries, there must be a driver behind the wheel who can be held responsible. But can technology also be liable?

She takes the self-driving car as an example. It uses complex AI that connects different systems. Who is responsible if there is an accident? “In many countries, there must be a driver behind the wheel who can be held responsible. But can technology also be liable? Is it a digital entity, a legal person? We need to think of answers to those questions.”

Human metaphors

Since Noorman joined TILT, the Tilburg Institute for Law, Technology, and Society, her portfolio has included regulatory as well as ethical issues: is the existing legislation adequate, or should new laws be made and, if so, how? By now she has studied the questions pertinent to AI from all the relevant disciplines (unsurprisingly, those covered by TILT). It all started with a degree program in AI at the University of Amsterdam and a job at a company that developed, among other things, facial recognition software. “But soon I had the feeling that there were all these social aspects to AI that I didn’t get around to exploring,” she says. To that end, she studied Science & Technology Studies and completed a PhD in Philosophy in Maastricht. There she delved into the question of why we so often apply human metaphors to computers, asking, for instance, how intelligent or how autonomous they are, how those ideas drive technology, but also how things could be approached differently. Do you want a computer to work independently, or rather in collaboration with humans? The fundamental philosophical and ethical questions eventually brought her to TILT.

The problem is that ChatGPT acts as if the answers generated are true

False expectations

Many ethical concerns also surround ChatGPT, depending on the area in which it is applied. The system generates answers to users’ questions by estimating what answer is most likely, based on information from the internet. But that creates inflated expectations. “The problem is that ChatGPT acts as if the answers generated are true,” Noorman states. “In the US, a lawyer failed to check the truth of the arguments it produced for him; this illustrates that there are many unresolved questions.”

Very many questions, in fact. What you need to realize about the system is its built-in bias, which it has simply because the internet is full of bias, too. Minorities are underrepresented on the internet, which can lead to incorrect or incomplete answers. Moreover, there is still a lot of human work in the system: statements are filtered before they are added to the information fed into the system. That is painstaking work, often outsourced to poorly paid workers. And who checks whether the medical information is correct? Irrespective of whether statements are correct, information is also handled differently in different countries. How does the system deal with personal data? What if it is trained on fake news, or even on manipulated images, so-called deepfakes? How do we keep the big tech companies in check? And then there is the fact that these AI systems are energy guzzlers, an important factor in times of energy transition.

It will continue to be a kind of arms race: there will always be people who try to hack and abuse the system

Learning to deal with AI

"We are now discovering how to solve these issues," Noorman says. “As regards laws and regulations, examples include requirements with respect to the transparency of the system, or a duty to disclose that AI has been used. Enforcers should have sufficient knowledge of how the technology works. But it will continue to be a kind of arms race: there will always be people who try to hack and abuse the system.” 

The bottom line is that all of us must learn how to handle AI as a new technology. “It is going to be quite a task,” Noorman thinks. “Just look at what kids are picking up from social media. It usually does no harm, but things can also go seriously wrong if, for example, the self-esteem of youngsters is adversely affected. You have to have rules, and laws and regulations are necessary to be able to intervene.”

So AI will definitely change our lives, but as with other innovations, probably not in the ways we can think of now. Noorman: “It is important not to speculate, but to monitor what is actually happening. Now is the time to make choices. I try to make these choices explicit and to mobilize people to make them.”

Transparent use of AI

One of the applications she is involved in as a researcher is the transparent use of smart AI in distributing the electricity available in the Netherlands. Questions to be answered include: Who gets to charge first at a charging station: whoever arrives first, or are exceptions possible? Does everyone get the same amount? Which conditions matter, and how much: safety, affordability, access, sustainability? All these considerations affect the algorithm you use, as the sketch below illustrates.
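To make that concrete, here is a minimal, hypothetical sketch of how such value choices could surface as explicit parameters of a charging algorithm. Everything in it (the allocate function, the priority field, the per-user cap) is invented for illustration and does not describe any actual Dutch grid system or Noorman’s project.

from datetime import datetime, timedelta

# Hypothetical sketch: value choices (arrival order, exceptions,
# equal shares, a hard capacity limit) expressed as parameters.
def allocate(requests, capacity_kwh, per_user_cap_kwh=None, allow_exceptions=True):
    """Grant charging energy in policy order until station capacity runs out.

    requests: list of dicts with 'user', 'arrived' (datetime),
              'kwh_requested', and 'priority' (e.g., 1 for an ambulance).
    """
    def policy_key(req):
        # Design choice made explicit: do exceptions outrank arrival
        # order, or is it strictly first come, first served?
        if allow_exceptions:
            return (-req["priority"], req["arrived"])
        return (req["arrived"],)

    grants = {}
    for req in sorted(requests, key=policy_key):
        share = req["kwh_requested"]
        if per_user_cap_kwh is not None:      # "does everyone get the same amount?"
            share = min(share, per_user_cap_kwh)
        share = min(share, capacity_kwh)      # safety: never exceed grid capacity
        if share > 0:
            grants[req["user"]] = share
            capacity_kwh -= share
    return grants

# Example: an ambulance arriving later still charges first when
# exceptions are allowed.
now = datetime(2023, 9, 25, 8, 0)
queue = [
    {"user": "commuter", "arrived": now, "kwh_requested": 40, "priority": 0},
    {"user": "ambulance", "arrived": now + timedelta(minutes=5),
     "kwh_requested": 30, "priority": 1},
]
print(allocate(queue, capacity_kwh=50, per_user_cap_kwh=30))
# {'ambulance': 30, 'commuter': 20}

Flipping allow_exceptions or the per-user cap changes who charges and how much; the point of the sketch is that these are ethical decisions encoded as parameters, not neutral technical defaults.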

Making sure that the responsibility for new technology is in good hands: that is what it is all about, according to Noorman, so that we don’t just hop on the bandwagon but ask ourselves the question: is this what we want? “I hope to provide people with more insight into the context of technology and into how technology can raise important ethical questions. What are the things we should act upon, and what things can we ignore? That is what I find important in my research, but certainly also for my law and data science students. After all, they are the new generation.”

Date of publication: 25 September 2023