Teaching machines to respect values

Robots: Rascals or Role Models?

Feature Article 5 min. Joost Bijlsma

AI is going to make a definitive breakthrough this decade. However, it is up to us whether this technology will help us solve big problems, such as the coronavirus crisis, preferably without causing disruption in society. If we aim for the latter, we should not only develop good algorithmic systems and applications but also teach robots our values. With TrusTee, the most reliable robot in the world, Tilburg University wants to explore how we can achieve this.

The robot has long led a rather anonymous existence, showing its skills mainly in the privacy of industrial settings. But all that will definitively come to an end in this decade. The robot will be let out of its cage and enter our society as a new ‘being’. Alongside humans and animals, it will become an active part of that society. We as humans will need to learn to deal with this ‘new kid on the block’ that is in many ways so much like us. So the question arises: how are we going to relate to machines that, like us, are autonomous and intelligent?

The image of a robot breaking out of its cage was suggested by Ton Wilthagen. He is not only Professor of Labor Market Studies but also the driving force behind Tilburg University’s impact program. Wilthagen calls interacting with robots one of the great challenges of our time. He consciously evokes the image of the cage: it shows that the bear is loose. Wilthagen: “We have to teach robots how to behave. The ultimate question is whether the technology continues to do justice to human values. To that end, it needs to know what we find valuable. That raises questions like: how can we tell the technology what our values are? And: who gets to decide what values we will teach robots?”

Ton Wilthagen

We have to teach robots how to behave

Misogyny and racism in AI

Whether AI and algorithms will serve a good cause is, to a large extent, in our own hands. But time is of the essence, because the technology is developing rapidly. AI is now better at detecting skin cancer than dermatologists. AI helps us understand the behavior of the coronavirus, while robots at airports safely take our temperatures. And you can easily find yourself talking to a chatbot when you think you have a human operator on the line. That probably won’t bother us very much as long as robots help us effectively. Technology gone bad, however, is a different story altogether.

The first examples of robots behaving like rascals are already emerging. Microsoft, for instance, developed the chatbot Tay for Twitter. It posted misogynistic, racist, and Trump-like remarks, because it had learned from the posts of other Twitter users. Unsurprisingly, Microsoft wasted no time in taking Tay offline and mothballing it. Something similar happened to online retailer Amazon with an AI tool for automatically screening CVs, which it used to select candidates for new positions. Since the tool was trained on data from the past, it discriminated against women. The problem proved impossible to remedy, so Amazon decided to discontinue the tool. And what about the UWV (Employee Insurance Agency)? It used the controversial risk profiling system SyRI (Systeem Risico Indicatie). The court found this fraud detection program, which uses algorithms and data about local residents, to be in violation of human rights, after which State Secretary for Social Affairs and Employment Van Ark decided to stop using it.


If technology is ready to be used, then we are often already dependent on it. It is much more effective to be involved early on in the process

Boeing crashes

The Pavlovian reflex after an incident with AI or algorithms is a call for more rules. We try to create barriers to prevent similar situations from arising in the future. Wilthagen wonders how effective this is. Legislation is necessary, he thinks, but protesting afterwards that systems are intrinsically wrong does not work. It also makes the systems difficult to adapt. “If technology is ready to be used, then we are often already dependent on it. It is much more effective to be involved early on in the process, e.g., in the development phase.” Wilthagen argues in favor of developing trustworthy technological applications with multidisciplinary teams, for instance, teams in which technology scientists collaborate with researchers from the humanities and social sciences, like those at Tilburg University. “We are very capable when it comes to addressing the question of how people relate to machines. We already know a lot about interaction among people. That knowledge can be translated into human-machine interaction.” Wilthagen thinks technology is too important to be left to the technologists: “They focus on optimizing the technique. In doing so, they may lose sight of the human aspect.” Frequently, they also have too much faith in technology, like the developers of the crashed Boeing 737 MAX 8, who made it nearly impossible for the pilots to correct a computer error.

Blessing in device

Tilburg University is fully committed to AI and all kinds of applications, for instance, chatbots and avatars. A large community of researchers is active in this field. In the Spoorzone district, the new MindLabs is getting off the ground, and, in the person of former Rector Emile Aarts, Tilburg is actively participating in the national AI coalition. Moreover, many Tilburg researchers participate in the inter-university Digital Society program.

As a university specializing in the humanities and social sciences, Tilburg University sees it as its mission to contribute to developing technology that is both excellent and trustworthy. The university wants to help make AI a blessing in disguise – or: a blessing in device. That is a noble ambition, but a difficult one to showcase. The university’s impact team has therefore decided to give this endeavor a face with TrusTee, which is to become the most reliable and social robot in the world. The university wants to build this role model robot, as it were. That does not mean a kind of physical supercomputer like IBM’s Watson; rather, TrusTee is where Tilburg University’s knowledge of technology and of human values converges. The university has great expertise in both fields. For example, Tilburg researchers are pioneering robots that help children learn a second language and chatbots that help people become more mentally resilient, and they are the driving force behind a leading study of European values. According to Wilthagen, these researchers will join forces with colleagues and external partners to investigate how to upload human values to machines, with the imaginary trustworthy robot TrusTee as figurehead.


TrusTee, figurehead of machines with human values

A know-it-all goody two-shoes

The big question is, of course: what will TrusTee be like? Wilthagen has some idea. He thinks that the role model robot will not discriminate against people and will not start wars. It will even take our changing preferences into account, and it will hold up a mirror to us as regards our own values, in a way that leaves room for us to respond – or not. Wilthagen calls this ‘value backfiring’. “TrusTee may say, for instance: ‘You have been sitting here all day. I would think you might want to take some exercise now.’ Robots know that we greatly value health, but also that we do not always feel like doing what we know is good for us. We can be inconsistent or ambivalent in our values.” If robots do not take these predilections into account, we will not take them seriously. That would make TrusTee an irritating, straight-laced, know-it-all goody two-shoes. We also want TrusTee to make clear how it ‘decides’. An exemplary robot puts its cards on the table: the user gets to know how it works. Wilthagen expects that there will be an increasing demand for transparency. “The city of Amsterdam has already demanded that all algorithmic systems it purchases be transparent. This has been set out in its so-called Tada City manifesto.”

Part of your education

Wilthagen wants to make TrusTee a brand that stands for trustworthy technology. He already has a number of applications in mind, e.g., a quality label stating that certain technology takes into account values that we respect. “Suppose you need to find a long-term care home for a parent living with dementia. It would give you peace of mind if the home uses TrusTee-certified technology that takes the autonomy of the elderly seriously.” Another application could be ‘TrusTee in the classroom’. “That would allow us to study how children interact with robots.” Wilthagen underlines the importance of including this theme in education. He is fascinated by the subject, buying and reading every recent children’s book on robots he can lay his hands on. “I read a story about a girl in a world in which robots are commissioned to destroy her. They see her as a risk to the planet. Fortunately, the girl is able to convince the robots that they do not have the correct information.” Children need to learn more about the way in which technology operates, Wilthagen thinks, to make them aware of the risks and to prevent them from blindly accepting everything they are told. “Now that the robot is out of its cage, ‘dealing with robots’ must be part of every child’s education.”

Date of publication: 21 August 2020