Dutch experiment with facial recognition: privacy risks require legislative choices
A smart doorbell with facial recognition; registering with your face rather than with an entry ticket; and now perhaps using facial recognition to identify individuals who have been in contact with Covid-19 patients? Facial recognition technology has serious implications for our privacy. The legislator can limit those risks, but will need to make legislative choices for that purpose. This is argued by legal researchers at Tilburg University in a report for the Dutch Research and Documentation Centre (WODC), which has just been published.
In the Netherlands, facial recognition technology is tried out and applied only sporadically by citizens and businesses. At the same time, the applications being developed worldwide, and the privacy risks that accompany them, are real and far-reaching. This raises the question of whether existing laws and regulations sufficiently protect our privacy, and how privacy infringements can be prevented or limited, now and in the future.
Choices and opportunities
Will the legislator opt to maximize risk avoidance, let a thousand flowers bloom, or prefer to deal with problems on a case-by-case basis? This needs to be clarified before the legislator can weigh privacy risks against the opportunities the technology may offer. Depending on such an explicit choice, regulations may or may not be tightened. Options include a total ban, prior permission, a code of conduct and certification, or a policy of tolerance.
The Tilburg researchers provide tools for governments and politicians to make informed regulatory choices. For instance, they differentiate between sectors and applications: an app that helps the visually impaired to spot people is completely different from using facial recognition technology to respond to customers' emotions. By distinguishing between purposes, for instance care, security, commerce, or recreation, it is possible to make sound choices in a systematic and transparent way, according to the report.
These choices should take into account a number of potentially serious privacy infringements. The technology usually works with image data of people who have not given their permission for these data to be used. It is hard for citizens to assess what happens to their data. Moreover, if certain parties have large amounts of data at their disposal whereas others do not, a power asymmetry arises. In addition, the use of facial recognition technology can lead people to adapt their behavior (e.g., avoiding certain places or behaving more unobtrusively). And citizens who choose to avoid facial recognition technology may have to settle for stripped-down alternatives.
A problem that should not be underestimated is that the data used to train the software may be biased, as a result of which certain groups can be discriminated against because they are recognized incorrectly or not at all. Finally, there is always the risk that the authorities may seize data collected by businesses for different purposes.
Facial recognition technology used by non-governmental actors is not yet a reality in the Netherlands; it is still facial recognition "at first sight". The researchers therefore conclude that this is the moment to ask ourselves how we want to use this technology in our democratic state under the rule of law. The report aims to provide input for that process.