How does the automated imposition of taxes influence the legal protection of the individual citizen?
Automation of repetitive large-scale decisions
Tax administrations around the world are automating their tasks at high speed. It seems impossible to impose taxes on a large group of subjects without some automated assistance. The Tax Administration in the Netherlands was an early adopter avant la lettre: systems built in the seventies still run bulk tasks. Through interconnectivity, large amounts of data produced by other governmental agencies are used to fully automate tax decisions. Once the tax administration decides automatically on an individual's wage income tax, the data acquire official status and are registered in the National Personal Income Databases. Other governmental agencies are obliged by law to use these data, as they are considered correct and 'authentic'. This procedure causes the legal decision to propagate outside its own domain; this is called the chain effect. But what are the consequences of (1) automated decision making and (2) the re-use of data in networks of agencies for the legal protection of the individual taxpayer?
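The chain effect can be sketched in a few lines of code. This is a deliberately simplified, hypothetical model (the class, the benefit rule, and the income threshold are all invented for illustration): one authoritative tax decision is written to a shared register, and a downstream agency that is legally obliged to reuse the 'authentic' value inherits any error in it.

```python
# Hypothetical sketch of the chain effect: one authoritative decision is
# recorded in a shared register, and downstream agencies must reuse it.

class IncomeRegister:
    """Stands in for an authentic income register (names are invented)."""
    def __init__(self):
        self._income = {}

    def record(self, citizen_id: str, annual_income: int) -> None:
        # Once recorded, the value is 'authentic': downstream users
        # are obliged to treat it as correct.
        self._income[citizen_id] = annual_income

    def authentic_income(self, citizen_id: str) -> int:
        return self._income[citizen_id]

def housing_benefit(register: IncomeRegister, citizen_id: str) -> int:
    # A downstream agency applies its own (invented) rule, but it must
    # use the registered income, even if the tax decision was wrong.
    return 3000 if register.authentic_income(citizen_id) < 25000 else 0

register = IncomeRegister()
register.record("citizen-1", 40000)  # suppose this tax decision is erroneous
print(housing_benefit(register, "citizen-1"))  # the error propagates: benefit denied
```

The point of the sketch is that correcting the error requires going back to the authority that made the original decision; the downstream agency cannot lawfully deviate from the register on its own.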
My empirical research showed that the usual legal protection mechanisms are no longer sufficient. When tax administrations shift to data refineries and turn to automated chain decision making, new arrangements are needed.
Automation as a necessity
Automated decision making (ADM) means that the computer makes formal decisions. Human experts program the computer: they translate legislation into code, algorithms and decision rules. The technology to do so was already available in the previous century and took flight during the so-called first Artificial Intelligence (AI) summer (around the 1980s). Today, computers decide some 90% of the 11 million tax returns submitted annually in the Netherlands. Because this has been part of daily practice for so long, people tend to take this use of AI in knowledge representation systems for granted. Roaring stories from the tax administration itself about its use of big data and risk-based audits ignore that automated predictions are only useful for dividing cases into 'easy cases' (decided by the computer) and non-easy cases (which have to be reviewed or decided by humans, i.e. civil servants). The majority (90%) of decisions are made by computers, and automation has become a necessity to collect taxes and reallocate the funds.
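The translation of legislation into decision rules, and the split between easy and non-easy cases, might look something like the following sketch. Every rule, field name, and rate here is invented for illustration; real systems encode actual tax legislation and are far more elaborate.

```python
# Minimal sketch of rule-based ADM triage (rules and the 37% rate are
# invented; they stand in for programmed interpretations of legislation).

def decide_return(tax_return: dict):
    """Apply hand-coded decision rules; route non-easy cases to a human."""
    # Decision rules: a translation of (hypothetical) legislation into code.
    easy = (
        tax_return["income_sources"] == 1
        and not tax_return["claims_deductions"]
        and tax_return["reported_income"] == tax_return["employer_reported"]
    )
    if easy:
        # The 'easy' majority: the computer issues the formal decision itself.
        return ("automated", round(tax_return["reported_income"] * 0.37, 2))
    # The rest must be reviewed or decided by a civil servant.
    return ("human_review", None)

print(decide_return({"income_sources": 1, "claims_deductions": False,
                     "reported_income": 50000, "employer_reported": 50000}))
# -> ('automated', 18500.0)
```

The weak link discussed later in this post is visible even here: unless the conditions inside `decide_return` are published in an assessable form, neither the taxpayer nor a judge can verify that they correctly implement the law.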
How the Dutch tax administration executes its tasks has been considered part of the discretion of the administration itself. Unlike in Germany (ADM and administrative law: Germany, by Jan Etscheid), no statutory permission is needed if the administration decides to automate its execution. Nor was the Danish idea that the same rules apply to the execution of administrative tasks on paper and in digital form (ADM and administrative law: Denmark, by Thea Johanne Raugland) part of the body of knowledge in Dutch administrative law. In my Ph.D. research, I first built a theoretical framework based on four principles of good administration:
- principle of fair play / fairness;
- principle of due care, including proportionality;
- right of defense; and
- the principles of equality and non-discrimination.
I undertook two case studies, both of them forming an essential part of the Digital Welfare State. In the Netherlands, the incoming taxes are very quickly reallocated to finance social benefits. One case study was undertaken at the social security bank (SVB) and focused on the execution of the General Child Benefit Act. The other case study was conducted at the tax administration and investigated the wage and income tax and how the annual personal income was determined.
It became very clear that the decision rules of the system (how the legislation is interpreted and programmed) are problematic, since they are not available in a form in which they can be assessed. This means that neither the taxpayer nor the independent judge can fully determine whether the decision is within the law. The lack of clarity in the decision rules of the system makes this specific part of automated chain decision making a weak link in the concept of legal protection.
Another weak link in the concept of legal protection is the propagation of the administrative chain decision. Not all negative consequences of a decision are annulled by using formal legal remedies; therefore, the principle of defense is violated. Because individual circumstances are ignored and one decision is stacked upon another, unequal situations are treated alike. Hence the principle that unequal cases should be treated unequally is not met. A chain decision sets all kinds of things in motion without proper oversight of all the consequences by the administrative authorities, let alone the taxpayer or the administrative judge. Even when effects are visible, they are not considered to fall within the competence of the authority that made the chain decision. Administrative authorities ping-pong the taxpayer between them, perhaps because the scenario has become too complex, even for the government itself.
If the research showed anything, it is that legal knowledge alone is not sufficient if one wants to observe e-government (or the Digital Welfare State, or ADM in government) and its legal consequences. Other disciplines and their bodies of knowledge need to be considered as well. My two cents:
- make sure individual justice (Einzelfallgerechtigkeit) is facilitated both legally and technically.
- all decision rules (algorithms) used to make the decision should be explained and clarified upon request. Maybe these should even be treated as ‘the law’ and brought under democratic control (see for instance Dag Wiese Schartum and Andrew Le Sueur) and, therefore, supervised by an independent authority.
- in the Netherlands, the principle of defense should be enhanced and applied to all administrative decisions. The primary goal would be that decisions are not propagated before the citizen has had the opportunity to discuss them. Decisions should not be time-barred within six weeks but preferably within six months, to prevent individuals from becoming entangled in the web of agencies.
- the ADM expert systems should be designed more flexibly, recognizing that one will always need an 'unexpected' category, and equipped with a powerful 'Kafka button': a button a public servant can use to take a case out of automated handling.
- all citizens are entitled to an individual pilot. This means everybody should be entitled to direct access to a designated human public servant. The pilot must help citizens secure their rights throughout the difficult-to-oversee and fragmented governmental labyrinths.
- the agencies that have formed a chain or network should be obliged to cooperate in their back offices. In that way, all agencies are responsible for delivering justice and lawful decisions and obliged to correct data and decisions after the fact (principle of cooperation, see the work of Henrik Wenander).
- the legislator should follow the advice of The Netherlands Scientific Council for Government Policy of 2011 to establish an ‘iAuthority’, an organization that helps citizens to correct false or inaccurate data in the key registers.
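Two of the design recommendations above, the 'unexpected' category and the 'Kafka button', can be illustrated with a short sketch. Everything in it (function names, categories, the flag set) is hypothetical; it only shows the shape of a pipeline in which a civil servant can always pull a case out of automated handling.

```python
# Hypothetical sketch: an ADM pipeline with an 'unexpected' category and a
# 'Kafka button' that exempts a case from automated handling.

manually_flagged: set[str] = set()  # case ids a civil servant has pulled out

def kafka_button(case_id: str) -> None:
    """Civil servant override: take this case out of automated handling."""
    manually_flagged.add(case_id)

def handle_case(case_id: str, data: dict) -> str:
    if case_id in manually_flagged:
        return "human_handling"        # the Kafka button was pressed
    if data.get("category") not in ("standard", "simple"):
        return "unexpected_queue"      # the design leaves room for the unforeseen
    return "automated_decision"

kafka_button("case-42")
print(handle_case("case-42", {"category": "standard"}))  # -> human_handling
```

The design choice the sketch tries to capture is that the override is checked before any decision rule runs, so no amount of rule complexity can route a flagged case back into automation.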
One year later
The thesis attracted considerable attention. Members of parliament posed formal questions to the cabinet of ministers about its conclusions.
Peeters and Widlak published their research called ‘The Digital Cage’, stressing that both citizens and civil servants are stuck in digital labyrinths.
Starting from the perspective that it all begins with legislation, Mariet Lokin wrote her PhD thesis on ways to connect the design of legislation with the computer programs built to execute the law. Combining all kinds of disciplines, scientists cooperated on a free online course on AI for all Dutch people, following the example of Finland.
Research such as that of Virginia Eubanks (Automating Inequality) and of the Special Rapporteur on extreme poverty and human rights on the Digital Welfare State will be fundamental in raising awareness of the dangers of technology in the hands of the 'haves' being used to monitor or govern the 'have-nots'.
All of these studies show that the GDPR (Article 22) is not the final answer to ADM by the government. There are more values at stake and more solutions to think about when studying the shift in power from street level bureaucrats to system-level bureaucrats (see Stavros Zouridis, Mark Bovens and Marlies van Eck).
What do you know?
ADM in government is everywhere. However, how these systems fit into the administrative and tax law traditions of different countries has not been systematically researched. If you have knowledge of this in your country, I would very much appreciate your contribution to the open-science body of knowledge being built on the Automated Administrative Decisions and the Law blog.