Algorithmic bias: Understanding its roots and impacts on social justice

What is an “algorithm”?

With the development of the Internet, the word “algorithm” has gradually entered everyday conversation. When you are browsing online, have you noticed that you can always find content you are interested in? Have you ever shopped online and been shown many items you might want to buy? When you type a query into a search engine, are you recommended related information? These are all outcomes of AI and algorithms.

Algorithms can be described as sets of processes used to calculate and process data. As time and technology progress, algorithms keep improving in computational power and speed. This development is driven by increasing user interaction, continuous learning from data, and emerging technologies such as Artificial Intelligence (Flew, 2021). AI and algorithms play a major role in news reporting and in improving the user experience, giving us a quick and convenient way to explore the online world (Shin, 2020).
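At its simplest, an algorithm is just a fixed sequence of steps that turns input data into an output. A minimal, purely illustrative sketch in Python:

```python
# A minimal algorithm: a fixed sequence of steps that transforms
# input data into a result -- here, finding the largest number in a list.
def largest(numbers):
    best = numbers[0]        # start with the first value
    for n in numbers[1:]:    # step through the rest
        if n > best:         # keep whichever is bigger
            best = n
    return best

print(largest([3, 7, 2]))  # → 7
```

The algorithms behind search engines and recommendations are vastly more complex, but they share this basic character: defined steps applied to data.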

How good or bad are algorithms for us?

Where do AI and algorithms usually appear? Search engines, shopping sites, maps, social media, email filters, video recommendations – none of these sites and features actually works without algorithms. There is no doubt that algorithms bring us many benefits. They can process data more efficiently, offering faster and more accurate analysis and even decision-making support than human beings. Used responsibly, algorithms help online platforms filter harmful information, protect users and create a safer online environment. What’s more, algorithms also drive the innovation and development of new technologies: machine learning and AI cannot advance without them, and they bring us more differentiated and personalised experiences online (Charncherngpanich, 2023).

However, while algorithms and AI have brought us many benefits, they also raise serious problems. Algorithms lack transparency, and their complex process of learning from data means that laypeople may not understand the principles and mechanisms behind them; there is no straightforward explanation of how the final results are produced. Meanwhile, the use and continuous training of algorithms rely on huge amounts of big data, and as users, all the information we provide on the Internet becomes part of that data. When algorithms analyse user data without proper safeguards, privacy violations can follow. In addition, the decisions algorithms help make depend heavily on the data: if the data contains biases or imbalances, the algorithms may amplify them, leading to unfair or even discriminatory consequences (Charncherngpanich, 2023).

Have you ever heard of a “filter bubble”?

As you repeatedly type queries into search engines and browse your favourite content on online platforms, a filter bubble forms around you. By analysing and learning from your history and behaviour, the platform provides results and recommendations tailored to your preferences, while its algorithms keep learning from your behaviour (Bruns, 2019).

What consequences does this bring us online? The content we can search for and browse becomes ever more narrowly categorised, giving us access to less and less information. We may only ever see content we would prefer to see, with our preferences deliberately catered to (Bruns, 2019). Different groups of people may also have different access to information, meaning that information which should be presented to everyone may be unfairly filtered out by algorithms. This demonstrates the existence of algorithmic bias.
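The feedback loop behind a filter bubble can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical recommender – real systems use far richer signals – but the dynamic is the same: every click strengthens the user’s profile, so future recommendations narrow toward what the user already likes.

```python
from collections import Counter

def recommend(items, history, k=3):
    """Rank items by overlap with topics the user has already clicked.

    Invented sketch of the filter-bubble feedback loop: the more the
    profile tilts toward certain topics, the more those topics dominate
    the results, which in turn tilts the profile further.
    """
    profile = Counter(topic for item in history for topic in item["topics"])
    return sorted(items, key=lambda it: -sum(profile[t] for t in it["topics"]))[:k]

history = [{"topics": ["sport"]}, {"topics": ["sport", "news"]}]
items = [
    {"id": "a", "topics": ["sport"]},
    {"id": "b", "topics": ["politics"]},
    {"id": "c", "topics": ["news"]},
]
print([it["id"] for it in recommend(items, history)])  # sport-heavy items rank first
```

Note that item “b” is not hidden because it is bad, only because it never matched the profile – which is exactly how some information ends up unfairly filtered away.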

(Filter Bubbles and Their Impact on Social Media, 2023)

What is algorithmic bias?

When we browse the Internet or make decisions, we often cannot do so without the help of algorithms. Algorithms rely on huge datasets and repeated data-based training. However, these datasets, as well as the algorithms themselves, are built, tested and trained by human beings. If the training data is unbalanced or incomplete, if programmers introduce subjective judgements into the design, or if the algorithms operate without transparency, the algorithms become susceptible to bias (Lee et al., 2019).

Why does algorithmic bias develop?

With the increasing maturity of computer technology, people are becoming more and more dependent on the Internet and the convenient experience that computers provide. Such convenience and efficiency are made possible by big data and algorithms. However, algorithmic bias is also silently affecting our lives. Algorithmic bias occurs when algorithm-driven outputs or decisions disadvantage certain groups or deepen users’ information cocoons (Friis & Riley, 2023).

Algorithms interact repeatedly with both humans and data as they analyse and learn, and both humans and data can transmit biased content (Sun et al., 2020). In other words, humans may be subjective in the design of algorithms: different people have different attributes, values and cultural backgrounds, and programmers may embed their own preferences in the process of designing and executing algorithms. The algorithms then operate in a subjectively biased context, producing unjust results.

Meanwhile, the process of data collection also deserves attention, as many details in it can bias the collected data. If the sample is incomplete, or the data itself is not representative enough, the resulting dataset is likely to be problematic or unbalanced, which biases the algorithms that later analyse it. Amazon once set out to build a recruitment tool that used AI and algorithms to help filter CVs, but eventually abandoned it because of algorithmic bias. The training data consisted of CVs from Amazon’s previous hiring rounds and data on existing employees – most of whom were male engineers and programmers. As a result, the CV-filtering tool learned a data-driven bias that discriminated against women applying for those positions. The discrimination and injustice produced by this algorithmic bias led Amazon to stop developing the tool (Goodman, 2018).
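A toy sketch can make this mechanism concrete. This is an invented illustration, not Amazon’s actual system: the “model” simply weights CV words by how often they appeared among past hires, so a CV echoing the historical (male-dominated) vocabulary scores higher regardless of merit.

```python
from collections import Counter

# Invented historical data: past successful CVs, skewed toward the
# vocabulary of one demographic. This is the "input bias".
hired_cvs = [
    "software engineer chess club captain",
    "backend engineer chess club",
    "engineer systems",
]

# The "model": each word's weight is how often it appeared in past hires.
weights = Counter(word for cv in hired_cvs for word in cv.split())

def score(cv: str) -> int:
    return sum(weights[w] for w in cv.split())

# Two equally qualified candidates; the one matching historical wording wins.
print(score("engineer chess club"))       # high: echoes past hires
print(score("engineer robotics society")) # low: penalised by absence, not merit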

More specifically, algorithmic bias may arise at multiple points in an algorithm’s operation, affecting its fairness and reliability. The key sources of algorithmic bias can be attributed to three components: input bias, training bias, and programming bias. Input bias means that the data used for analysis is biased to begin with, so the algorithm may subsequently magnify those problems; the Amazon example above illustrates input bias. Training bias arises when data is incorrectly classified during the algorithm’s “machine learning” process, or when the algorithm cannot distinguish correlation from causation during training, ultimately producing biased results. Programming bias means that the algorithm operates with subjective biases introduced by developers or by user interactions (Serwin & Perkins, n.d.). Moreover, the lack of transparency in these processes increases the risk of algorithmic bias, as users cannot accurately determine whether something has gone wrong in the algorithm’s operation.

(“Bias and Fairness in AI Algorithms,” 2023)

The Impact of Algorithmic Bias on Social Justice

If algorithmic bias is allowed to persist, it will certainly do harm. What kind of impact does it have on social justice? We have already mentioned the example of Amazon, which wanted to recruit more efficiently with the help of algorithms but had to give up because of the discrimination and injustice produced by data-driven algorithmic bias. It is not hard to imagine the impact on the employment environment for female engineers if Amazon had insisted on using such a biased recruitment tool. Many other cases likewise show that algorithmic bias can significantly harm social justice.

An Asian woman working in a mental health service in Connecticut, USA, filed a case against her employer, alleging racial discrimination in the allocation of work. The department used an algorithm to assign work, but the woman claimed that she and other Asian doctors were given larger workloads than their white colleagues. This suggests a possible algorithmic bias in the system, and the lack of publicity and transparency about how the algorithm operates leaves room for inequality and discrimination in the workplace (Serwin & Perkins, n.d.).

These cases illustrate possible algorithmic bias in workplace decision-making and the fairness issues it causes. But algorithmic bias affects fairness in many scenarios beyond recruitment and the workplace. Hospitals across the United States use a system that identifies the patients with the most urgent needs based on their past healthcare spending. However, in the historical data, black patients had generally spent less money on medical care than white patients, which led the algorithm, as it learned from and analysed the data, to conclude incorrectly that black patients generally had less severe conditions (Paul, 2019). This algorithmic bias causes the system to seriously underestimate black patients’ need to see a doctor, which is racially discriminatory and seriously undermines social equity.
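The mechanism behind this case can be shown with a toy example – invented numbers, not the actual model. When past spending is used as a proxy for medical need, a patient who is equally sick but has spent less on care, because of access rather than health, is ranked as less urgent.

```python
# Toy illustration of proxy bias (not the real hospital system):
# ranking patients by past spending under-ranks a group that, for
# reasons of access rather than health, has spent less on care.
patients = [
    {"name": "P1", "severity": 9, "past_cost": 9000},  # high access to care
    {"name": "P2", "severity": 9, "past_cost": 4000},  # equally sick, lower access
    {"name": "P3", "severity": 3, "past_cost": 6000},
]

by_cost = sorted(patients, key=lambda p: -p["past_cost"])     # what the proxy sees
by_severity = sorted(patients, key=lambda p: -p["severity"])  # actual medical need

print([p["name"] for p in by_cost])      # P2's urgent need is under-ranked
print([p["name"] for p in by_severity])
```

The gap between the two orderings is the bias: the proxy quietly encodes unequal access into a judgement about health.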

How to manage and regulate the problems caused by algorithmic bias? 

Since algorithmic bias exists, we need to be conscious of how to avoid its negative impacts. How can we manage and regulate the social fairness issues it brings? Starting from the causes of algorithmic bias, whichever part of the algorithm’s operation is concerned, we can manage the human factors and test and adjust the data.

Firstly, the human factor: algorithm developers and designers have different cognitive and cultural backgrounds, so they need to be managed and coordinated at an ethical level to prevent strongly subjective algorithmic mechanisms. It is important that the development team is as diverse as possible, so that a wider variety of data informs the design and some groups are not treated unfairly or even discriminated against (Lee et al., 2019). On fairness and transparency in algorithm design, governments or regulators can formulate restrictive regulations and set up monitoring groups of interdisciplinary members to help review the risks of algorithmic bias, further reducing the social justice issues it may cause (Lee et al., 2019).

Secondly, measures should also be taken on the data side. Algorithms that are simply blind to sensitive attributes can still produce sensitive social equity problems, such as gender or racial discrimination. Therefore, measures should be taken to let algorithms operate with awareness of sensitive information and then mitigate the bias associated with it. Such detection measures may require principles that can measure the algorithm’s error rate in processing information, to achieve relative fairness where necessary (Lee et al., 2019).
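One of the detection measures described above – measuring a model’s error rate separately for each group – can be sketched as follows. The records and group labels are invented for illustration; the point is only that a disparity becomes visible once errors are broken down by group.

```python
# Sketch of per-group error measurement: false negative rate by group.
# (group, true_label, predicted_label) -- invented audit records.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

def false_negative_rate(group):
    """Share of true positives the model missed, within one group."""
    positives = [(t, p) for g, t, p in records if g == group and t == 1]
    misses = sum(1 for t, p in positives if p == 0)
    return misses / len(positives)

for g in ("A", "B"):
    print(g, round(false_negative_rate(g), 2))  # B is missed far more often
```

An overall error rate would average these two numbers and hide the gap; computing them per group is what lets a monitoring body see that one group bears most of the model’s mistakes.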

(“From Discrimination in Machine Learning to Discrimination in Law, Part 1: Disparate Treatment,” 2022)

Conclusion

AI and algorithms bring us convenience while still leaving many problems unsolved. Some people may think that algorithms are complicated and far removed from ordinary life, but the products of algorithms and AI have already penetrated our daily lives. The impact of algorithmic bias on social justice should not be ignored. Paying attention to algorithmic bias, and taking measures to manage and regulate algorithms, can make algorithms serve us more effectively and reasonably, and help make our society fairer.


References

Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426

Charncherngpanich, S. (2023). Is the use of algorithms good or bad? – Media & Society Issues Explained. https://mediaandsociety.org/is-the-use-of-algorithms-good-or-bad/

Flew, T. (2021). Regulating Platforms. John Wiley & Sons.

Friis, S., & Riley, J. (2023). Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI. Harvard Business Review. https://hbr.org/2023/09/eliminating-algorithmic-bias-is-just-the-beginning-of-equitable-ai

Goodman, R. (2018). Why Amazon’s Automated Hiring Tool Discriminated against Women | News & Commentary. American Civil Liberties Union. https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against

Lee, N. T., Resnick, P., & Barton, G. (2019). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

Paul, K. (2019). Healthcare algorithm used across America has dramatic racial biases. The Guardian. https://www.theguardian.com/society/2019/oct/25/healthcare-algorithm-racial-biases-optum

Serwin, K., & Perkins, A. H., Jr. (n.d.). Algorithmic Bias: A New Legal Frontier. 1023. https://doi.org/10.36644/mlr.115.6.racist

Shin, D. (2020). How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior, 109, 106344. https://doi.org/10.1016/j.chb.2020.106344

Sun, W., Nasraoui, O., & Shafto, P. (2020). Evolution and impact of bias in human and machine learning algorithm interaction. PLOS ONE, 15(8), e0235502. https://doi.org/10.1371/journal.pone.0235502

Images References

AI bias – what is it and how to avoid it? (2022). In levity.ai. https://levity.ai/blog/ai-bias-how-to-avoid

Bias and Fairness in AI Algorithms. (2023). In Plat.AI. https://plat.ai/blog/bias-and-fairness-in-ai-algorithms/

Filter Bubbles and their impact on Social Media. (2023). https://www.sciencespo.fr/public/chaire-numerique/en/2023/06/08/student-essay-filter-bubbles-and-their-impact-on-social-media/

From Discrimination in Machine Learning to Discrimination in Law, Part 1: Disparate Treatment. (2022). In SAIL Blog. https://ai.stanford.edu/blog/discrimination_in_ML_and_law/
