Algorithms

Algorithmic government: This is the moment to properly regulate transparency

Science Works | 6 min | Corine Schouten

We may not be aware of it most of the time, but in more and more cases the Dutch government relies on increasingly complex algorithms to make decisions: whether or not to grant a permit for an activity that emits nitrogen, how to detect fraud and crime, or how to levy taxes – these are just a few examples.

That the government uses algorithms for calculations is nothing new. What is new is that these algorithms are used more frequently and can make increasingly complex calculations based on big data, which leads to transparency and explainability issues. “It appears that more (partly) automated decisions are now made than decisions that are not supported by algorithmic processes,” says Jurgen Goossens, Associate Professor of Constitutional and Administrative Law at the Department of Public Law & Governance of Tilburg Law School. “For example, if the Tax and Customs Administration did not use algorithms, levying taxes would cost much more time and money, and the risk of human error would be many times greater. In principle, computers that process accurate data with properly functioning algorithms are more reliable than humans.”

Jurgen Goossens

A meaningful dialogue between lawyers, technologists, and ethicists at the very beginning is essential

Trust

But what if complex algorithms have not been properly designed? Or what if computers make complex decisions based on self-learning, when in fact Artificial Intelligence is at work, which could result in unintended bias? Can we then trust a government that uses computer models to support decision-making?

For decisions based on transparent, explainable algorithms that are designed by humans and can be checked by humans, Goossens answers that question with a cautious yes. For him, however, another important question is whether our constitutional and administrative law is sufficiently resilient to correct the government if an automated decision-making process goes wrong. So far, it seems to be, he says. “Take the ruling of the Dutch Council of State on the AERIUS calculator for calculating nitrogen deposition. That was clearly a red flag: the Minister herself could not explain the system.”

AERIUS: How the government was rebuffed

Government authorities used a software system called AERIUS under the Nitrogen Action Program (PAS) to decide whether an activity that causes nitrogen deposition in a Natura 2000 site requires a permit. The system calculates whether the requested activity would cause the nitrogen deposition standards in a given area to be exceeded; the sketch below illustrates the shape of such a threshold check.
The Dutch Council of State ruled as early as May 17, 2017, that the relevant Ministers (of Economic Affairs and of Infrastructure and the Environment) as well as the State Secretary for Economic Affairs have a duty to publish the choices made and the data and assumptions used in a complete, timely, and accessible manner, and to do so on their own initiative. This enables effective legal protection against decisions based on those choices, data, and assumptions, and enables the courts to review the legality of such decisions.
In the AERIUS case, the government was not able to provide that information. Since then, the administrative courts have tested whether the government is able to do so on the basis of the assessment framework for automated decision-making developed by the Council of State.
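
To make the kind of calculation at issue concrete, here is a minimal, purely illustrative sketch of a deposition threshold check in Python. All names and values are hypothetical assumptions; AERIUS itself models emission, dispersion, and deposition in far more detail.

```python
# Purely illustrative sketch of a deposition threshold check.
# All names and values are hypothetical; AERIUS models emission,
# dispersion, and deposition in far more detail.

def requires_permit(projected_deposition: float,
                    current_deposition: float,
                    area_standard: float) -> bool:
    """Return True if the requested activity would push total nitrogen
    deposition in the area above the applicable standard."""
    return current_deposition + projected_deposition > area_standard

# Example: an activity adding 0.4 mol N/ha/yr to an area already at
# 6.8 mol N/ha/yr, with a hypothetical standard of 7.0 mol N/ha/yr.
print(requires_permit(0.4, 6.8, 7.0))  # True: a permit decision is required
```

The legal point of the AERIUS ruling is that the government must be able to disclose and explain every choice, parameter, and data source behind such a calculation, however complex the real model becomes.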

Goossens also points to another recent example. On February 5, 2020, the District Court of The Hague ruled that the System Risk Indication (SyRI) violates the right to respect for private life enshrined in Article 8 of the European Convention on Human Rights. SyRI uses big data and algorithms to detect social security fraud without concrete cause by predicting the risk that someone will commit fraud. According to the court, the government does not provide sufficient insight into the functioning and use of the risk model, nor does it provide guarantees that could compensate for this lack of insight. As a result, citizens cannot check whether the use of SyRI unintentionally results in discriminatory bias.
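
The court's concern becomes easier to see with a toy example. The following sketch of a linked-data risk indicator is entirely hypothetical – SyRI's actual risk model was never disclosed, which was precisely the problem:

```python
# Entirely hypothetical sketch of a linked-data risk indicator.
# SyRI's real features and weights were never disclosed; this toy
# example only illustrates why that opacity matters.

HYPOTHETICAL_WEIGHTS = {
    "benefits_overlap":  0.5,  # records from two agencies conflict
    "address_mismatch":  0.3,  # registered vs. observed address differ
    "neighborhood_flag": 0.2,  # area-based feature: a potential bias risk
}

def risk_score(signals: dict) -> float:
    """Weighted sum of binary signals drawn from linked databases."""
    return sum(weight for name, weight in HYPOTHETICAL_WEIGHTS.items()
               if signals.get(name))

# A score above some undisclosed threshold flags a person for investigation.
print(risk_score({"benefits_overlap": True, "neighborhood_flag": True}))  # 0.7
```

If the features and weights are kept secret, a citizen has no way to check whether something like the area-based feature above makes the score systematically higher for certain neighborhoods or groups – exactly the discriminatory bias the court worried about.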

Call for action

In addition to rebuffing authorities when they fail to comply with, for example, the duty of due care and the duty to give reasons, there is still a lot of work to be done to make the use of algorithms transparent and verifiable: for the government itself, which must be able to explain its decisions from the very beginning of the decision-making process; for citizens, who must be able to understand that explanation, among other things in order to decide whether or not to challenge a decision; and, ultimately, for the judge, who must be able to determine whether an algorithm-driven decision is lawful.

The fact that the Council of State red-carded AERIUS is a clear sign, in Goossens’ opinion, that the moment has come to call on the government to properly identify and screen its algorithm-driven decision-making. “Something like this really shouldn’t happen again. There is an urgent need for more transparency and insight,” he states.

Guarantee quality

The government should not only be able to explain exactly how a decision was made on the basis of what data and calculations, but the quality of all automated decision-making should also be guaranteed “by design” in advance by the legislator and the government. This is argued by Goossens and his colleague Professor Jurgen de Poorter in a recent article entitled ‘Effectieve rechtsbescherming bij algoritmische besluitvorming in het bestuursrecht’ [Effective legal protection against algorithmic decision-making in administrative law] in the Nederlands Juristenblad.

This can be done, for example, by establishing a supervisory authority or, as in Canada, by making an algorithmic impact assessment mandatory in order to estimate the risks of automated decision-making. If the risk is high, additional legal quality requirements can then be imposed. Goossens hopes to obtain funding to investigate how such an impact assessment modeled on the Canadian example can be introduced in the Netherlands.
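
Canada's Algorithmic Impact Assessment works by scoring a questionnaire about a system and mapping the score to an impact level that triggers increasingly strict requirements. A minimal sketch of that tiering logic follows; the thresholds and requirements here are illustrative assumptions, not the actual Canadian rules.

```python
# Minimal sketch of impact-assessment tiering, loosely modeled on
# Canada's Algorithmic Impact Assessment. The thresholds and the
# requirements below are illustrative assumptions, not the real rules.

def impact_level(score: int) -> int:
    """Map a questionnaire score (0-100) to an impact level 1-4."""
    for level, upper_bound in enumerate((25, 50, 75, 100), start=1):
        if score <= upper_bound:
            return level
    raise ValueError("score must be between 0 and 100")

HYPOTHETICAL_REQUIREMENTS = {
    1: ["publish a notice of automated decision-making"],
    2: ["publish a notice", "plain-language explanation on request"],
    3: ["peer review of the model", "human intervention before the decision"],
    4: ["external audit", "a human makes the final decision"],
}

level = impact_level(62)
print(level, HYPOTHETICAL_REQUIREMENTS[level])
# 3 ['peer review of the model', 'human intervention before the decision']
```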

From black box to glass box

Nevertheless, the heart of the matter is transparency: the government must be able to explain every algorithmic ‘black box’ in an accessible way to the public and in legal proceedings to the judge. In other words, turn it into a ‘glass box’.

Moreover, before a case goes to court, a decision must first be reviewed by the administrative authority itself in an administrative appeal. According to De Poorter and Goossens, this means that there should be a right to human intervention: people of flesh and blood should be able to justify how a decision was made.

Goossens: “To that end, the government should work closely with technical experts as early as the design phase of software programs and should, at the very least, use a clear checklist. A meaningful dialogue between lawyers, technologists, and ethicists at the very beginning is essential. In addition to safeguarding ethical and public values, the terminology of the computer scientist and that of the lawyer must be aligned. After all, the code must execute what is expected at the legal and ethical level, so that the government can comply with, among other things, the duty to give reasons and the principle of due care.”

Training of lawyers

“As a lawyer, I wrote a book together with a computer expert about distributed blockchain technology and smart contracts – in essence, 'if x, then y' algorithms. That was an intensive process of interdisciplinary communication in order to create added value. Not only government and corporate lawyers will have to be trained in this, but also teachers and students.”
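
The “if x, then y” structure Goossens refers to can be shown in a few lines. Below is a hypothetical sketch of such a deterministic rule, here for an automatic subsidy payout; real smart contracts execute on a blockchain platform, so this only illustrates the shape of the rule.

```python
# Hypothetical "if x, then y" rule of the kind a smart contract encodes:
# fully deterministic, with no room for discretion. Real smart contracts
# run on a blockchain platform; this only illustrates the rule shape.

def subsidy_payout(application_complete: bool,
                   budget_remaining: float,
                   requested_amount: float) -> float:
    # if x (the conditions are met), then y (the payout follows automatically)
    if application_complete and requested_amount <= budget_remaining:
        return requested_amount
    return 0.0

print(subsidy_payout(True, 10_000.0, 2_500.0))  # 2500.0: paid out
print(subsidy_payout(True, 1_000.0, 2_500.0))   # 0.0: budget exceeded
```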

This interdisciplinary dialogue between lawyers, technologists, and ethicists plays a major role in a large project on hyper-connectivity and complexity arising from the government's use of blockchain technology and smart contracts, for which Goossens, as project leader, together with his colleagues Esther Keymolen (TILT) and Damian Tamburri (JADS), recently acquired € 1 million. In addition, the project researchers pay extra attention to the “citizen perspective” in safeguarding public values, which, in the researchers’ view, all too often remains underexposed in legislation and governance.

Project: Blockchain in the network society. In search of transparency, trust, and legitimacy

What public values come into conflict when public authority is exercised by means of distributed technology, such as blockchain, where hyper-connectivity of public and private actors leads to complexity? And what constitutional conditions are required to guide the role and responsibilities of public authorities using this technology? These are the central questions of the NWO-MVI research project The Role and Responsibilities of Public Actors in Distributed Networks. Transparency, Trust and Legitimacy by Design.

The research questions will be answered through an interdisciplinary approach from philosophy of technology, law, and data science perspectives based on two case studies: one on the Financial Emergency Brake blockchain pilot of the Dutch Central Judicial Collection Agency (CJIB) to help people in debt pay their fines by signaling any inability to pay in a timely fashion and another on granting government subsidies via blockchain. The study will pay particular attention to the perspective of the end users (citizens), and will operationalize rule of law guarantees in practice. Tilburg Law School will collaborate with JADS (Jheronimus Academy of Data Science) in this project in a consortium of five public and seven private parties.

Challenge: self-learning systems

Another important question is how the legitimacy of a decision made by a self-learning system can be checked by the judge, since such a computer model changes constantly. Goossens: “The administrative court that has to rule on a decision looks back at the moment the decision was made and must therefore know what the algorithmic decision rule was exactly at that moment and what data were used. In order to make that information available in the future for all possible decisions, the computer network must have a particularly large storage capacity. This is a big challenge.”
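
One way to meet the court's need to look back at the moment of decision is to log, for every individual decision, exactly which model version and which input data produced it. Below is a minimal sketch of such an audit record; all field names are assumptions, and a real system would need far more.

```python
# Minimal sketch of an audit record for algorithmic decisions, so that a
# court can later reconstruct which decision rule and data were used.
# All field names are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, model_weights: bytes,
                 inputs: dict, decision: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash pins down the exact (self-learned) parameters at decision
        # time without storing a full model snapshot in every record.
        "weights_sha256": hashlib.sha256(model_weights).hexdigest(),
        "inputs": inputs,
        "decision": decision,
    }

record = audit_record("fraud-model-2020-06", b"<serialized model weights>",
                      {"income": 21000, "household_size": 3}, "no_flag")
print(json.dumps(record, indent=2))
```

Hashing keeps each record small, but the full model snapshots must still be archived somewhere to be reproduced in court – which is the storage challenge Goossens describes.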

The Netherlands, a leading country?

The Netherlands is investing heavily in algorithms and Artificial Intelligence and has the ambition to become one of the world leaders that also sufficiently safeguards ethical public values. A national AI coalition has been formed for this purpose, and the Dutch Parliament has also taken action by establishing a temporary parliamentary committee on the Digital Future. But is the government perhaps moving too fast when it comes to the use of algorithms and AI, with the risk that citizens fall victim to mathematical models that can no longer be explained and that could lead to discrimination and bias? Are there sufficient guarantees?

The legislator should consider adopting regulation that is geared to an algorithmic world

Goossens answers: “Our system of administrative law, with its general principles of good administration, is robust and resilient, but it was written to regulate a world of paper, including, for instance, an obligation to hear interested parties in the preparation of decisions. To ensure that administrative law does not inadvertently put a brake on digital developments, and thus to sufficiently guarantee legal certainty, the legislator should consider adopting regulation that is geared to an algorithmic world based on automation and hyper-connectivity. If it does not, the economic loss could be substantial. If government, industry, and researchers join forces and start from the citizens’ perspective, the Netherlands can become a leading country for transparent, reliable, and legitimate applications of algorithms and AI by governments.”

Date of publication: 16 June 2020