Please visualize the following recruitment scene: a hiring manager clicks his tongue while flipping through a pile of resumes. "A rich resume, but a woman? Women cannot bear the burden of hard work; all they can do is take care of their families… You will not be hired. An excellent academic record, but colored? Colored people are vulgar and lazy; there is no way you can join our company…"
With a disdainful "tsk-tsk-tsk", the hiring manager tosses aside every resume from a minoritized applicant and, with a broad, satisfied smile, sends offers to white, straight men.
Needless to say, this imagined scene is rife with bias and discrimination. If such a recruitment scene ever played out in reality, the hiring manager would be roundly denounced by the public as politically dangerous and morally objectionable. In the 21st century, in civilized countries, such a scene is highly unlikely to happen. Or is it, on the contrary, happening right under people's noses, with the discriminator being not a human hiring manager but a "neutral" AI?
Woefully, the latter is the cruel reality. Discriminatory AI and the algorithmic bias it produces have already penetrated every industry and sector, dealing a heavy blow to many aspects of life.
What Is Algorithmic Bias?
Algorithms are the rules and processes established for activities such as calculation, data processing, and automated reasoning (Flew, 2021). Nowadays, algorithms have become increasingly sophisticated and pervasive tools for automated decision-making (Barton et al., 2019). While people want artificial intelligence to be completely neutral in its decision-making, the reality is a different story. It is now well established that algorithms can be biased.
The term algorithmic bias refers to systematic and repeatable errors in a computer system that create "unfair" outcomes (Algorithmic Bias, 2024). Biased AI shows favoritism toward certain groups or individuals, deviating from its original design intention and generating unfair, unjustified outcomes that stray from designers' and users' expectations. Algorithmic bias can manifest along many dimensions (including gender, class, race, and age), and its implications are vast.
There are generally two types of algorithmic bias: intentional and unintentional.
Intentional Algorithmic Bias
Intentional algorithmic bias occurs when AI developers deliberately sacrifice a degree of impartiality and neutrality for the sake of their own interests, whether personal prejudice, monetary gain, or collective goals.
Search engines make a killing from "paid search", a traditional but never outdated marketing strategy: they give algorithmic priority to companies and institutions that pay enough money. Whenever potential customers type something into the search bar and hit "enter", the "VIPs" who have paid the platform are placed at the top of the results, prominent enough to outrank their competitors and capture the greatest share of potential customers' attention.
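To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how a ranking formula could let paid placement outweigh relevance. The site names, scores, and the `paid_boost` weight are all invented for illustration and do not reflect any real search engine's formula.

```python
# Hypothetical sketch: how paid placement can override relevance in ranking.
# The field names and weights are illustrative, not any real search engine's formula.

results = [
    {"site": "independent-review.example", "relevance": 0.92, "ad_spend": 0},
    {"site": "big-advertiser.example",     "relevance": 0.55, "ad_spend": 50_000},
    {"site": "small-business.example",     "relevance": 0.88, "ad_spend": 0},
]

def ranking_score(result, paid_boost=0.00001):
    # Relevance is supposed to drive the ordering, but the paid term
    # lets a large enough ad budget outrank more relevant pages.
    return result["relevance"] + paid_boost * result["ad_spend"]

for r in sorted(results, key=ranking_score, reverse=True):
    print(r["site"], round(ranking_score(r), 3))
# big-advertiser.example ends up first despite having the lowest relevance.
```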
Unintentional Algorithmic Bias
However, on account of factors such as inherent design flaws or skewed training data, AI sometimes exhibits algorithmic bias despite its designers' wish for it to behave in a neutral and impartial way. Unintentional algorithmic bias is usually challenging to address, since doing so typically requires a multidisciplinary approach involving cooperation between industry leaders, data scientists, domain experts, and policymakers.
Ending Algorithmic Bias: Why Is It So Urgent?
The drama of discrimination has never left the stage of human history. As a result, some people may feel baffled: "Is there any difference between algorithmic discrimination and discrimination between human beings? Is it necessary to raise a hue and cry about algorithmic bias, something we are already used to? Is it really worth the money and time we have dedicated to eliminating it?"
And the only answer is “yes”, because algorithmic bias matters.
In the era of AI, the repercussions of discriminatory algorithms spread on a much wider scale than old-school human-to-human discrimination (Gupta & Krishnan, 2020). There was a time when biased decisions made by human beings were temporally and spatially confined; AI has virtually removed those constraints. AI decision-making platforms such as Google Cloud AI Platform, ChatGPT, and Salesforce Einstein are accessible from every corner of the earth at any time. Consider an example of how biased AI worsens outcomes. If a judge holds racist beliefs, he may disproportionately impose harsher sentences on individuals from certain ethnic backgrounds. Those individuals are unfortunate, but their number remains limited. When it comes to cyber courts presided over by an AI judge, however, there will be far more victims of miscarriages of justice, undermining public trust in the country's legal system.
(AI Judges, 2023)
The example of human versus AI judges shows the expanded impact of bias in algorithms, and the same principle applies in other sectors such as the military, education, and business.
The Root Cause of Bias in Algorithms
Data Bias
As the proverb goes, "You reap what you sow," and AI is no exception. As Satell and Abdel-Magied (2020) put it: "Machine learning algorithms are created by people, who all have biases. They are never fully objective; rather they reflect the world view of those who build them and the data they're fed."
Machine learning algorithms develop their selection criteria and decision rules from training data. Therefore, if people feed systematically biased data to an algorithm, in most cases the algorithm yields biased outcomes. This phenomenon is commonly referred to as "Garbage In, Garbage Out".
(Garbage in, Garbage Out, 2019)
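A tiny sketch can make "Garbage In, Garbage Out" concrete. The data below is entirely synthetic: two groups have identical skill distributions, but the historical hiring labels penalize one group, and a model trained on those labels reproduces the gap. The variable names and numbers are assumptions for illustration only.

```python
# Minimal sketch of "garbage in, garbage out": a model trained on historically
# biased hiring labels reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minoritized group
skill = rng.normal(0, 1, n)              # skill is distributed identically across groups

# Historical decisions: skill matters, but group 1 was penalized by past reviewers.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The trained model now scores two equally skilled candidates very differently.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}: predicted hire probability at average skill = {p:.2f}")
```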
Throughout human history, discrimination in different domains has never stopped, and existing biases are baked into data that under- or over-represents certain groups. The 2023 EEOC lawsuit is a reminder of the adverse consequences when algorithms disadvantage older applicants. In August 2023, the U.S. Equal Employment Opportunity Commission (EEOC) settled an AI recruitment discrimination lawsuit against iTutorGroup, claiming the company violated the Age Discrimination in Employment Act of 1967 (ADEA) by programming its application software to automatically filter out female applicants aged 55 or older and male applicants aged 60 or older (Glasser, 2023), leading to the unreasonable rejection of 200 qualified applicants. Why did this age discrimination occur? iTutorGroup built its AI hiring tool on resumes previously submitted to education companies. Since most education companies are dominated by the young, only a tiny proportion of those resumes came from older applicants. Owing to this lack of relevant data, iTutorGroup failed to build an AI hiring tool that rates candidates in an age-neutral way.
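To illustrate the behaviour described in the case, here is a minimal hypothetical sketch of an automatic screening rule. The applicant records and helper names are invented; only the age and gender cutoffs are taken from the reported facts of the settlement.

```python
# Sketch of the kind of automatic screening rule described in the iTutorGroup case:
# applications are rejected on age and gender before a human ever reads them.
# The data structure is hypothetical; only the cutoffs come from the reported facts.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    age: int
    gender: str       # "F" or "M"
    qualified: bool   # outcome of an ordinary resume review

def auto_screen(applicant: Applicant) -> bool:
    """Return True if the applicant is passed on to a recruiter."""
    if applicant.gender == "F" and applicant.age >= 55:
        return False  # rejected regardless of qualifications
    if applicant.gender == "M" and applicant.age >= 60:
        return False
    return True

pool = [
    Applicant("A", 56, "F", qualified=True),
    Applicant("B", 61, "M", qualified=True),
    Applicant("C", 30, "M", qualified=False),
]
for a in pool:
    print(a.name, "advances" if auto_screen(a) else "auto-rejected")
# Qualified applicants A and B never reach a recruiter; unqualified C does.
```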
Another notable example is commercial facial recognition systems. The datasets used to train facial recognition algorithms often lack diversity and contain predominantly photos of white male faces, resulting in racial and gender discrimination in the form of higher error rates for women and people of color. According to the "Gender Shades" project run by the MIT Media Lab, the error rate of commercial facial analysis for light-skinned men can be as low as 0.8%, compared with up to 34.7% for darker-skinned women. In both cases, it is self-evident that biased, skewed data input can lead to biased data output.
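What Gender Shades did, in essence, was disaggregate evaluation by subgroup instead of reporting a single accuracy figure. The toy sketch below shows that idea with made-up labels and predictions; the group names and numbers are illustrative, not the project's actual data.

```python
# Minimal sketch of disaggregated evaluation: instead of one overall accuracy
# number, error rates are reported per subgroup. The arrays are toy values.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1])
group  = np.array(["light_male"] * 6 + ["dark_female"] * 6)

for g in np.unique(group):
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"{g}: error rate = {error_rate:.2%}")
# An aggregate metric would hide the gap that the per-group breakdown exposes.
```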
Method Bias
Algorithmic bias is not solely a byproduct of discriminatory training data; it is also an offshoot of methodological and procedural choices in algorithm development. Bias can arise from design mistakes, for instance when a developer, consciously or not, treats mere correlations between variables in the underlying data as causal relationships, leading to over-generalized insights. These over-generalized insights may not hold in specific contexts, making the algorithm's outcomes look "aberrant" and "discriminatory".
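A toy sketch of this failure mode: the true driver of the outcome (income) is missing from the feature set, so the model leans on a postcode that merely correlates with income in the historical data, effectively treating the correlation as causal. All variables and numbers here are invented for illustration.

```python
# Toy sketch of method bias: the true driver (income) is not in the feature set,
# so the model relies on postcode, which merely correlates with income in the
# historical data. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4_000
income = rng.normal(0, 1, n)
postcode = (income + rng.normal(0, 1, n) < -0.5).astype(int)   # correlated with income
repaid = (income + rng.normal(0, 0.7, n)) > 0                  # driven by income only

# The developer trains on postcode because income data is unavailable,
# implicitly treating the correlation as if it were causal.
model = LogisticRegression().fit(postcode.reshape(-1, 1), repaid)

for pc in (0, 1):
    p = model.predict_proba([[pc]])[0, 1]
    print(f"postcode {pc}: predicted repayment probability = {p:.2f}")
# Two applicants with identical incomes get very different scores
# purely because of where they live.
```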
Moreover, advanced AI is somewhat like a black box: even its developers cannot see what is inside, which makes it difficult to pinpoint the sources of bias and mitigate them.
Given that AI is programmed with explicit code, why does it resemble a black box rather than a drawing on paper that can be taken in at a glance? Because programmers only build the initial structure of an algorithm and then let the AI learn from training data. The AI grows ever more complex, building a hierarchy of interrelated layers on its own, and eventually becomes opaque to human eyes. As Daniel Acuna (2022) put it, "We (designers) will eventually pass a threshold where processes of AI are no longer understandable to the team building it."
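A small sketch shows how little of a trained model is actually "written" by a human. The single architecture line is the only hand-authored logic; the behaviour lives in layers of learned weights that carry no human-readable labels. The toy task and layer sizes are arbitrary choices for illustration.

```python
# Sketch of why a trained model can feel like a black box: the programmer writes
# only the outer structure, and the decision logic lives in unlabeled numeric
# weights learned from data. Toy data, illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # a simple hidden rule the network must learn

# The "code" a human writes is essentially this one line of architecture...
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2_000, random_state=0).fit(X, y)

# ...while the learned behaviour is over a thousand numbers with no labels attached.
for i, w in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
print("training accuracy:", round(net.score(X, y), 2))
```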
How to Counter Algorithmic Bias?
Individuals
As ordinary people with limited knowledge of AI, individual users can contribute only a drop in the bucket toward eliminating algorithmic bias. Even so, users can learn more about algorithmic bias and its implications so as to avoid over-valuing and over-relying on AI algorithms. They can also keep voicing their concerns about algorithmic bias and call on the authorities to take action.
AI designers can do far more than ordinary users. They can explore strategies for reducing algorithmic bias, such as diversifying datasets: with more diverse training data, algorithms are less likely to yield biased output. Designers can also work doggedly on building human-centered elements into algorithm design, prioritizing users' needs to gain a deeper contextual understanding that guides decision-making.
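As one concrete (and deliberately simplified) example of diversifying data, the sketch below oversamples an under-represented group before training. The column names and counts are hypothetical; in practice, collecting genuinely new data from under-represented groups is usually the stronger remedy.

```python
# Minimal sketch of one way a designer might rebalance training data: resampling
# an under-represented group so the model does not learn mostly from the majority.
# The dataframe columns are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A"] * 9 + ["B"] * 1,      # group B is badly under-represented
    "feature": range(10),
    "label":   [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
})

target = df["group"].value_counts().max()
balanced = pd.concat(
    [g.sample(target, replace=True, random_state=0) for _, g in df.groupby("group")],
    ignore_index=True,
)
print(balanced["group"].value_counts())
# Oversampling is only one option; gathering genuinely new data from
# under-represented groups usually helps more.
```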
Big Companies and Institutions
Big companies and institutions can play a crucial role in curbing algorithmic bias by leveraging their considerable influence. To begin with, they should form inclusive and diverse AI development teams, ensuring that minority voices are heard. They should also conduct regular audits of their algorithms to check for covert discrimination. Finally, big companies can establish clear, standardized ethical guidelines for algorithm design and provide staff with guidance that prioritizes the principles of fairness, accountability, and transparency.
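One simple form such an audit could take is comparing selection rates across groups and flagging large gaps, for example against the "four-fifths" rule of thumb used in U.S. employment-selection guidance. The sketch below uses toy decisions and group labels; a real audit would run on production decision logs.

```python
# Sketch of a simple recurring audit: compare selection rates across groups and
# flag the system if the ratio falls below the widely used "four-fifths" threshold.
# Group labels and decisions here are toy values.
def audit_selection_rates(decisions, threshold=0.8):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)

    rates = {g: selected[g] / totals[g] for g in totals}
    reference = max(rates.values())
    for g, rate in rates.items():
        ratio = rate / reference
        flag = "FLAG" if ratio < threshold else "ok"
        print(f"{g}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")

audit_selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
])
```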
Government
Given that AI will remain a major concern in the future, governments should enact regulations to make sure algorithms are developed ethically.
Australia currently has no established laws on algorithmic discrimination. In the United States, however, the White House released first-of-its-kind Algorithmic Discrimination Protections as part of its Blueprint for an AI Bill of Rights in October 2022, calling on organizations that design or deploy automated systems to carry out proactive equity assessments and ongoing disparity testing (Algorithmic Discrimination Protections | OSTP | The White House, 2022).
“You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
—the White House, 2022
Beyond enforcing the law, governments can also introduce policies that promote inclusion and diversity in the AI industry, supporting minoritized groups' access to this demanding field. What's more, policymakers can fund research and development on AI ethics, since companies otherwise lack the incentive to invest in projects on the ethics of algorithms.
Conclusion
In conclusion, the rapid development of artificial intelligence holds great promise but also brings the challenge of algorithmic bias, which stems from both data bias and method bias. The implications of algorithmic bias can be far-reaching, affecting individuals, communities, and society at large.
A concerted effort is required to mitigate biases in algorithms. Individuals can raise awareness and advocate for fair and transparent algorithms, while designers can prioritize diversity and inclusion in algorithm development. Big companies and institutions can play a crucial role in promoting inclusive practices and implementing ethical guidelines for algorithm design. In addition, government regulations and legal frameworks can ensure that algorithmic systems uphold principles of fairness, accountability, and transparency.
References
Algorithmic Discrimination Protections | OSTP | The White House. (2022, October 4). The White House. https://www.whitehouse.gov/ostp/ai-bill-of-rights/algorithmic-discrimination-protections-2/
AI Judges. (2023). LinkedIn. https://www.linkedin.com/pulse/from-e-banking-ai-judges-estonias-digital-governance-journey-baek–rqdqc/
Isley, R. (2022, October 30). Algorithmic Bias and Its Implications: How to Maintain Ethics through AI Governance. https://doi.org/10.21428/4b58ebd1.0e834dbb
Garbage in, garbage out. (2019). Garbage in, Garbage Out – How Bad Data Can Fuel Bad Decisions. https://www.welldatalabs.com/2019/10/garbage-in-garbage-out/
Algorithmic bias. (2024, April 1). Wikipedia. https://en.wikipedia.org/wiki/Algorithmic_bias
Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Umoja Noble (NYU Press, 2018) [Book review]. (2021, October 29). Science, 374(6567), 542. https://doi.org/10.1126/science.abm5861
Barton, G., Lee, N. T., & Resnick, P. (2019, May 22). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
Flew, T. (2021, November 30). Regulating Platforms. John Wiley & Sons. http://books.google.ie/books?id=fI1SEAAAQBAJ&printsec=frontcover&dq=Regulating+Platforms&hl=&cd=1&source=gbs_api
Glasser, N. M. (2023, September 7). How Much Does the EEOC and iTutorGroup Settlement Really Implicate Algorithmic Bias?—Four Notable Points for Employers. https://natlawreview.com/article/how-much-does-eeoc-and-itutorgroup-settlement-really-implicate-algorithmic-bias-four
Gupta, D., & Krishnan, T. S. (2020, November 17). Algorithmic Bias: Why Bother? California Management Review. https://cmr.berkeley.edu/2020/11/algorithmic-bias/
Project Overview ‹ Gender Shades – MIT Media Lab. (n.d.). MIT Media Lab. https://www.media.mit.edu/projects/gender-shades/overview/
Satell, G., & Abdel-Magied, Y. (2020, October 20). AI Fairness Isn't Just an Ethical Issue. Harvard Business Review. https://hbr.org/2020/10/ai-fairness-isnt-just-an-ethical-issue
Images
AI Judges. (2023). LinkedIn. https://www.linkedin.com/pulse/from-e-banking-ai-judges-estonias-digital-governance-journey-baek–rqdqc/
Garbage in, garbage out. (2019). Garbage in, Garbage Out – How Bad Data Can Fuel Bad Decisions. https://www.welldatalabs.com/2019/10/garbage-in-garbage-out/