Governance by AI Algorithm

AI Algorithms Have Infiltrated Our Lives, and They Cannot Be Ignored.

While AI has penetrated nearly every corner of our lives in recent years, AI algorithms entered our lives much earlier than most of us fully realize. Imagine checking your email one morning and suddenly finding that the government is claiming you are a criminal, guilty of some offense. You have done nothing wrong; you are being held guilty solely because of an AI algorithm’s inference.

This is not a made-up story; it happened to thousands of families in the Netherlands, in what became known as the Dutch “Toeslagenaffaire” (childcare benefits scandal). The scheme was an attempt to use AI algorithms to build a better, smarter state, but after the scandal it became clear that the algorithms relied too heavily on data to learn, and that the interdependencies within that data were one of the main causes of the affair. The Dutch tax authority had adopted an AI algorithm to combat welfare fraud, but this produced a system of “self-judgment” based solely on data. As a result, immigrants and people of other nationalities were disproportionately identified as fraudsters, even though the vast majority of those flagged were innocent.

It took the government years to admit that this was an AI algorithm error, and the impact of that error is hard to undo. “Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care” (Heikkilä, 2022).

The Dutch case is just one failure among the many ways AI algorithms have been woven into our lives. Governments tend to view AI algorithms positively because they are automated systems built on big-data platforms. The health code system introduced during the pandemic, an unforgettable period, is one example: in some countries there was no way to enter public venues or even spend money without one. AI algorithms can quickly catch, through technical means, what human “vetting” misses, and they make judgments based solely on data, without emotional bias; they “can help sharpen decision-making, make predictions in real time” (Tabsharani, 2023). In this way they can help governments accomplish tasks more efficiently and fairly without consuming large amounts of human resources, although they still require humans to act as operators or final reviewers. As the lecture slides set out (Humphry, 2025), AI algorithms in governance fall mainly into four categories: automated decision-making, algorithmic governance, data governance and privacy handling, and human behavioral intervention.

We will explore each of the four ideas in turn. The first is automated decision-making, which works like a custom system that automatically or semi-automatically (partially operated by a human controller) makes judgments and produces results. The outputs of these systems depend largely on the big-data algorithms and the training they received at the outset; in the Dutch case, biases in the system’s algorithm meant poor families and unconnected immigrants became “fraudsters” without anyone knowing.

So what, essentially, is an automated decision-making system? It is the most striking capability of AI algorithms: making judgments from big data so that logical decisions and choices can be reached without human participation. “Making decisions based on data without human participation is what automated decision-making is” (Symbio6, 2024). What seems a convincing way of operating can turn into a disaster. In the Dutch case, the tax office did not do its due diligence by investigating each household; instead it over-relied on the AI algorithm, to which it had handed the right to judge. “One of the primary advantages of automated decision-making systems is their capacity to make real-time decisions 24 hours a day, seven days a week” (Symbio6, 2024). That very advantage made the Dutch tax office so reliant on the system that no one performed a final review.
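To make the failure mode concrete, here is a deliberately minimal sketch of such a pipeline. Every field name, weight and threshold is invented for illustration; this is not the Dutch system’s actual logic, only the shape of a decision that goes straight from score to action with no human reviewer in between:

```python
# Hypothetical sketch of a fully automated decision pipeline.
# Field names, weights and thresholds are invented for illustration;
# this is NOT the actual logic of the Dutch tax authority's system.

def risk_score(applicant: dict) -> float:
    """Score an application purely from data, with no human judgment."""
    score = 0.0
    if applicant.get("dual_citizenship"):
        score += 0.4  # a proxy feature like this can quietly encode bias
    if applicant.get("missing_documents"):
        score += 0.3
    income = applicant.get("income")
    if income is not None and income < 20_000:
        score += 0.3
    return score

def decide(applicant: dict) -> str:
    """Fully automated: the decision is acted on immediately.
    Note what is missing: no human reviewer between score and action."""
    if risk_score(applicant) >= 0.6:
        return "FLAGGED_AS_FRAUD: repayment demanded"  # sent straight to the family
    return "APPROVED"

print(decide({"dual_citizenship": True, "missing_documents": True, "income": 35_000}))
# -> FLAGGED_AS_FRAUD: repayment demanded, with no one auditing the call
```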

First, no responsible person supervised the analysis process; second, no one conducted a secondary audit when the system made a judgment; and finally, when the government issued notices and emailed the affected parties, there was still no oversight by an auditor.

This problem is not unique to the Netherlands. In the UK, for example, the Office of Qualifications and Examinations Regulation (Ofqual) assesses apprentices in a similar algorithm-driven way (Ofqual’s Approach to Regulating the Use of Artificial Intelligence in the Qualifications Sector, 2024), and the United States has attempted to apply algorithms to the judge’s role, letting the algorithm make judgments in place of the judge; analysis of the data, however, found the algorithms to be racially biased. “One study, which examined a group of people who had been arrested, found that Black individuals were twice as likely to receive a high-risk score than otherwise similar white people” (Webster, 2024).

Automated decision-making systems save human resources and time, but they can never replace people in situations that call for subjective judgment, such as government benefits, the education system, and the legal system. “AI stands on three key pillars: algorithms, hardware and data. You collect large amounts of data, then using the methods of machine learning, algorithms learn to find inter-dependencies among these pieces of data and then reproduce this logic on every new piece of data they meet” (Megorskaya, 2022).

In fact, the Dutch “Toeslagenaffaire” is not as simple as it sounds, and it was not just an algorithmic misjudgment. Why would an AI algorithm make such a basic error in such an important system? The underlying logic emerges from big-data analytics: the government fed the AI algorithm’s database information built on a faulty baseline, so the algorithm duly produced a faulty recognition system that sent out faulty signals. The Netherlands set out to use big data to build a database that could identify benefits fraud; the initial purpose was good, and “saving manpower”, “saving resources” and “modernization” could all describe it. Once the algorithm was deployed, though, the problems became apparent: “dual citizenship”, “missing documents” and “immigrants from different countries” were all treated by the system as high-risk signals, after which the algorithm would make its own judgment from the big data and reach a conclusion. This is exactly what the algorithm lacks: the rationality of judgment. This is what happens when humans are not involved. AI algorithms cannot make accurate, in-depth judgments; every judgment rests on the database they were initially given, and without human review this leads to poor results. AI algorithms only make judgments based on “data similarity”.
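How “data similarity” can reproduce the bias already present in historical labels might be sketched like this. The records, attributes and scoring rule are all hypothetical; the point is the mechanism, not any real dataset:

```python
# Hypothetical sketch of judgment by "data similarity": a new case is
# scored by how much it resembles previously flagged cases. All records
# here are invented; the past labels already contain bias.

HISTORY = [
    # (attributes, was_flagged)
    ({"dual_citizenship", "missing_documents"}, True),
    ({"dual_citizenship"}, True),
    ({"missing_documents"}, True),
    ({"stable_income"}, False),
    ({"stable_income", "homeowner"}, False),
]

def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two attribute sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def similarity_risk(new_case: set) -> float:
    """Average similarity to flagged cases minus similarity to clean ones."""
    flagged = [similarity(new_case, attrs) for attrs, f in HISTORY if f]
    clean = [similarity(new_case, attrs) for attrs, f in HISTORY if not f]
    return sum(flagged) / len(flagged) - sum(clean) / len(clean)

# An innocent applicant who merely shares an attribute with past flagged
# cases inherits their "risk": the bias in the history is reproduced.
print(similarity_risk({"dual_citizenship"}))  # -> 0.5, treated as risky
```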

Another fatal flaw is the lack of transparency. In the Toeslagenaffaire, people had no way of knowing why they were rated high-risk or “criminalized by the system”, and it was difficult to find any way to appeal.

Under this system, people live as if blindfolded, never knowing when they will become “criminals”. The lack of transparency and the closed use of data strip the public of the most basic rights to know and to appeal. As reported, many victims’ families in the Netherlands were identified as fraudsters by a single email, without knowing which rules they had violated or where their data problems had occurred. People started using these algorithms with the best of intentions, yet most people misunderstand AI algorithms: they assume that, because judgments are based on big-data analysis, a degree of fairness and objectivity is maintained. Contrary to expectations, big data reflects the social status quo, including its discrimination and incomplete information, and when AI learns autonomously these negative aspects are likely to be “automated” and amplified.

Privacy is also an issue we need to be concerned about: without most people’s knowledge, their information is automatically entered into large databases for AI algorithms to learn from.

Algorithms also appear in our daily lives in less large-scale forms. Apps we use every day, such as YouTube and UberEats, usually have a section called “Recommended for you”, in which an algorithm shows us items based on our usual preferences and product ratings, greatly increasing the chance that we make additional decisions. We can think of this behavior as “algorithmic guidance”: it does not actively interfere with us, but it works through behavior that is difficult for us to detect and reject. This subtle intervention in fact appeared very early in our lives. The credit score system is a good example. In Canada, people can check their credit scores from their bank’s app, and those scores have a great impact on people living there: they directly affect getting a loan to buy a car or a house, or even getting a job. These are the “algorithmic interventions” we don’t see. One obvious consequence is that the algorithmic system does not prohibit us from doing anything; rather, in order to earn a high score, we discipline ourselves.
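A “Recommended for you” ranker can be sketched in a few lines. The item names, tags and scoring are invented for illustration, and real platforms use far richer signals, but the quiet narrowing effect is the same:

```python
# Hypothetical sketch of "Recommended for you": items are ranked by how
# well their tags overlap with the tags of items the user already chose.

from collections import Counter

CATALOG = {
    "spicy_noodles": {"spicy", "asian"},
    "green_curry":   {"spicy", "asian"},
    "caesar_salad":  {"healthy"},
    "fruit_bowl":    {"healthy", "sweet"},
}

def recommend(history: list, catalog: dict, top_n: int = 2) -> list:
    """Rank unseen items by overlap with the tags of the user's history."""
    liked_tags = Counter(tag for item in history for tag in catalog.get(item, set()))
    def score(item: str) -> int:
        return sum(liked_tags[tag] for tag in catalog[item])
    unseen = [item for item in catalog if item not in history]
    return sorted(unseen, key=score, reverse=True)[:top_n]

# A user who ordered spicy food keeps being shown spicy food:
# the "guidance" never forbids anything, it just shapes what we see.
print(recommend(["spicy_noodles"], CATALOG))  # -> ['green_curry', ...]
```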

When it comes to self-restraint, there is one example from a few years ago that has to be mentioned. As someone who lived through it, I experienced the health code system’s hidden intervention firsthand. “For the past two years, a green health code has been required for entry to almost every public place in urban China” (Zhao, 2022). During those difficult times, how people traveled was basically determined by the color of their health code. The code came in three colors: green (safe), yellow (risky), and red (dangerous). You could not choose among these colors, nor were they randomly assigned; the system generated them from GPS location and movement trajectories, as well as contact records. For example, if I drove from one city to another and had indirect contact with someone whose health code was red (high risk), my code would change from green to yellow, even though we might merely have been in the same area. “The apps track people’s movements as users check in at locations and update users’ status if they have visited a COVID ‘hotspot’ or tested positive to coronavirus” (Zhao, 2022). This led to social distancing and persuasion.
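The color-assignment logic might look roughly like the following. These rules are a simplified guess based only on the public reporting quoted above (Zhao, 2022), not the actual implementation of any real health-code app:

```python
# Hypothetical sketch of health-code color assignment. The rules are a
# simplified guess from public reporting, not any app's real logic.

def assign_color(tested_positive: bool,
                 visited_hotspot: bool,
                 contact_with_red: bool) -> str:
    """Derive a code color from test results, location history and contacts."""
    if tested_positive:
        return "red"     # dangerous: confirmed case
    if visited_hotspot:
        return "red"     # dangerous: was in a flagged location
    if contact_with_red:
        return "yellow"  # risky: indirect contact, even if coincidental
    return "green"       # safe: free movement

# Merely sharing an area with a red-code holder can flip green to yellow,
# with no way for the user to choose or contest the color.
print(assign_color(tested_positive=False, visited_hotspot=False,
                   contact_with_red=True))  # -> yellow
```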

This places us in a managed environment, even if we never feel the management happening. It should be quite the opposite: we should be managing AI data and AI algorithms, not being managed and constrained by them.

AI algorithmic governance is not something about to start in the future; we have been moving into it for a long time. It is redefining our lives and work: from the welfare scandal in the Netherlands, to racial discrimination in the court system, to GPS location tracking through health codes during COVID-19, AI algorithms are gradually infiltrating our lives and our privacy, and we are gradually recognizing the power of algorithms. On the one hand, the rational use of AI algorithms can save enormous resources and improve efficiency. On the other hand, we, as the dominant party and the decision-makers, cannot let go and allow AI algorithms to invade privacy or operate without transparent management. Digitalization should remain under human supervision and be grounded in humane values.

References

  • Heikkilä, M. (2022, March 29). Dutch scandal serves as a warning for Europe over risks of using algorithms. POLITICO. https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
  • Humphry, J. (2025). Issues of concern: AI, automation & algorithmic governance [PowerPoint slides]. ARIN6902 Digital Policy and Governance, University of Sydney. https://canvas.sydney.edu.au/courses/64614/files/42025224?wrap=1
  • Megorskaya, O. (2022, June 27). Training Data: The Overlooked Problem Of Modern AI. Forbes. https://www.forbes.com/councils/forbestechcouncil/2022/06/27/training-data-the-overlooked-problem-of-modern-ai/
  • Ofqual’s approach to regulating the use of artificial intelligence in the qualifications sector. (2024, April 24). GOV.UK. https://www.gov.uk/government/publications/ofquals-approach-to-regulating-the-use-of-artificial-intelligence-in-the-qualifications-sector/ofquals-approach-to-regulating-the-use-of-artificial-intelligence-in-the-qualifications-sector
  • Symbio6. (2024). What is automated decision making? Symbio6. https://symbio6.nl/en/blog/what-is-automated-decision-making
  • Tabsharani, F. (2023, May 5). Types of AI Algorithms and How They Work. Enterprise AI. https://www.techtarget.com/searchenterpriseai/tip/Types-of-AI-algorithms-and-how-they-work
  • Webster, H. (2024, June 28). Are Risk Assessment Tools Setting the Stage for AI Judges? The Bail Project. https://bailproject.org/learn/are-risk-assessment-tools-setting-the-stage-for-ai-judges/
  • Zhao, I. (2022, November 25). Concerns over Beijing’s plans to roll out digital health code system for every aspect of residents’ health. ABC News. https://www.abc.net.au/news/2022-11-26/china-plan-for-national-digital-health-code-system/101690448
