You Think It’s Technology That Works, But It’s Actually Power That Speaks: How Algorithms Shape the World We Can’t See

Have you noticed that many of your decisions are no longer really yours to make?

Which film to watch, which video to play next, which product to discover, even which job to apply for: the platform always seems to get it right. People call this ‘the algorithm knowing you,’ but how does an algorithm ‘know’ you? From whom did it learn to ‘know’ you, and does it really know you at all? More importantly, is the algorithm really fair and objective?

We tend to picture algorithms as ideal, cold mathematical formulas: devoid of emotion, computing exactly what we want. But that may not be the case. As Crawford (2021) describes, AI is not a value-free technological instrument; it is an apparatus that enacts social, political, and economic power relations. In other words, an algorithm learns from the society people live in, so whose society does it learn from? Whose data does it privilege, and whose voices does it silence? In this age of algorithmically ‘selected’ reality (Just & Latzer, 2019), we long ago stopped being independent receivers of information and have instead become forecast, filtered, and steered (Pasquale, 2015).

Image: Algorithmic code interface, symbolising the complexity and invisibility of the algorithmic system (image source: web)

Invisible rules that govern more than laws do

We tend to assume that governance is the government’s business, something remote from ordinary people. In reality, what governs our lives today is often neither the government nor the law but the algorithms of different platforms. They do not impose rules as loudly and brazenly as the law does; they tell you, subtly and invisibly, what you are meant to see and not see, whose voice will be listened to, and who will be placed where no one will ever look at them.

Terry Flew (2021) refers to this as ‘soft governance’: algorithms act as unseen managers, setting the limits of what we can view online through blocking and recommending. You might not realize that your information universe is already being shaped by algorithms: the news you consume, the people who appear on your front page, and even the products you purchase are not ‘accidents’ but deliberate designs.

We now live in what Mark Andrejevic (2019) calls an ‘automated culture’, in which our attention, interests, and behaviors have for some time been quietly tracked, predicted, and directed by algorithms. We no longer decide actively but accept passively. The platform does not have to tell you what to think; it only has to show you the same thing over and over, and sooner or later you will believe that is ‘the way the world is.’

The YouTube recommendation algorithm is a prime example. You may simply want to unwind with a fitness video or a short tech clip, but the further you go, the stranger it gets, and the deeper you are pulled into fringe political or conspiracy content. You never asked for it, but the platform ‘thought you’d be interested’ and led you there step by step. This nudging is designed to ‘keep you in for a few more minutes,’ at the cost of an information world that grows narrower and more extreme, as the sketch below illustrates.
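To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of an engagement-optimising recommender (an epsilon-greedy bandit). It is not YouTube’s actual system; the video names and watch-time figures are invented. The point is only that a system rewarded for ‘minutes watched’ will, over time, serve whatever holds attention longest, regardless of what the viewer originally came for.

```python
import random

# Hypothetical average minutes a viewer stays on each kind of video.
TRUE_WATCH_TIME = {
    "fitness_tips": 4.0,
    "tech_explainer": 5.0,
    "outrage_politics": 7.5,
    "conspiracy_series": 9.0,
}

estimates = {video: 0.0 for video in TRUE_WATCH_TIME}  # platform's running estimates
plays = {video: 0 for video in TRUE_WATCH_TIME}

def recommend(epsilon=0.1):
    """Mostly exploit the video with the best estimated watch time; explore occasionally."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for _ in range(500):
    video = recommend()
    watched = max(0.0, random.gauss(TRUE_WATCH_TIME[video], 1.0))  # simulated session length
    plays[video] += 1
    estimates[video] += (watched - estimates[video]) / plays[video]  # incremental average

print(sorted(plays.items(), key=lambda kv: -kv[1]))
# The recommender ends up serving the longest-watched content most often,
# even though the session began with a fitness video.
```

Nothing in this loop asks what the viewer wants; it only asks what keeps them watching, which is exactly the drift described above.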

This kind of influence is written into no law, yet it genuinely alters the way we think and behave. The algorithm is like an invisible director, scripting every scene from behind the curtain. It does not censor you, but it quietly dictates what you see and whom you hear, and that is one of the most insidious forms of power today.

Image: YouTube’s home page recommendations, reflecting the information environment built by the platform’s algorithmic push of content. (Image source: web)

The invisible code that determines what you can see

People tend to take for granted that if something is a number, it must be objective. Data sounds like hard, cold fact, as if it takes no stand and tells no lies. The problem is that deciding which data to collect and which to leave out is itself a choice. When an algorithm learns from these selected ‘realities,’ what it learns is a particular view of the world: a world of bias, absence, and skew.

The data on which AI is trained is not just information; it is political, economic, and social power (Crawford, 2021). If a facial recognition system is trained mostly on images of white men, for example, it will tend to err when recognizing black women. This is not merely a technical glitch; it is a bias of perspective. Whose data is being extensively recorded? Whose experience is simply missing? The world the algorithm learns may not contain you at all.
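To see why the composition of a dataset matters, here is a small illustrative sketch of a fairness audit: breaking a system’s error rate down by demographic group instead of reporting a single overall accuracy. The groups and numbers below are fabricated purely for illustration and are not taken from any real system.

```python
from collections import defaultdict

# Fabricated (group, prediction_was_correct) results for a hypothetical
# face-recognition test set. The disparity is invented for illustration.
results = (
    [("lighter-skinned men", True)] * 96 + [("lighter-skinned men", False)] * 4
    + [("darker-skinned women", True)] * 66 + [("darker-skinned women", False)] * 34
)

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, correct in results:
    tallies[group][0] += 0 if correct else 1
    tallies[group][1] += 1

overall_errors = sum(e for e, _ in tallies.values())
overall_total = sum(t for _, t in tallies.values())
print(f"overall error rate: {overall_errors / overall_total:.1%}")
for group, (errors, total) in tallies.items():
    print(f"{group}: error rate {errors / total:.1%}")
# A respectable-looking overall accuracy can hide a large gap that only shows up
# once the results are disaggregated by group.
```

The design point is that the disparity is invisible unless someone chooses to measure it by group, which is precisely the choice about whose experience counts.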

Just and Latzer (2019) contend that algorithms no longer simply present us with information; they engage in ‘reality construction.’ When the news you read daily, the videos the platform suggests, and the social media posts you are shown are all ‘calculated’ by platform algorithms, the world you see is a filtered, selected, and ranked world. And whatever the algorithm deems unimportant vanishes from our line of sight entirely.

Even worse, this happens without our even realizing it. We live, Pasquale (2015) argues, in a ‘black box society’: you don’t know how the algorithm decided whether you qualify for a job, or who decided your loan application must be denied. You may not even know why a platform has ‘muted’ you or why you can no longer see what certain people are posting. There have been cases, for instance, where users posted content that broke no community rules, yet their reach and viewership suddenly collapsed without any notice. This is known as shadow banning (Chen & Zaman, 2023): the platform’s algorithm silently lowers your visibility, and you have no way of knowing or appealing.
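As a thought experiment, here is a hypothetical sketch of how such quiet down-ranking could sit inside a feed-ranking function. Nothing here is taken from any real platform; the field names, weights, and the ‘visibility factor’ are all invented, only to show how a post can remain online yet effectively disappear.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float          # likes, comments, watch time, etc.
    visibility_factor: float = 1.0   # silently set below 1.0 for flagged accounts

def rank_feed(posts):
    """Order posts by engagement, scaled by the hidden visibility factor."""
    return sorted(posts,
                  key=lambda p: p.engagement_score * p.visibility_factor,
                  reverse=True)

feed = rank_feed([
    Post("alice", engagement_score=80.0),
    Post("bob", engagement_score=95.0, visibility_factor=0.05),  # shadow-banned
    Post("carol", engagement_score=60.0),
])
print([p.author for p in feed])
# bob's high-engagement post sinks to the bottom; from bob's side, nothing
# appears deleted or flagged, so there is nothing visible to appeal.
```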

What appears before us may look like a microcosm of the real world, but it is really the product of algorithmic filtering. The algorithm controls which information is prioritised, which is suppressed, and which never gets a chance to appear before our eyes at all. Slowly, we come to believe we are engaging with information by choice, when we are merely being exposed to a selected, sorted, and optimised version of it. This sort of ‘reality’ is not necessarily the most balanced, but it is repeated on our screens often enough to be taken for granted.

Case study: Australian Robodebt program. Data bias is not a bug; it’s by design.

In Australia, the Robodebt scheme is a chilling example of the damage that blind faith in algorithms can cause. The program, run by the federal government from 2015, was set up to automatically detect and recover welfare payments that had potentially been made in error. It sounds like a convenient solution, but in practice it was deeply flawed (Henriques-Gomes, 2023).

Robodebt used a simplistic algorithm to look for discrepancies by cross-matching employer tax return data with benefit receipt records. Where the income figures did not line up, the system automatically assumed an overpayment had occurred and sent out debt recovery notices, in some instances even directly debiting individuals’ accounts.
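The core of the flaw, as documented in public reporting and the royal commission’s findings, was ‘income averaging’: annual income reported to the tax office was smeared evenly across the year’s 26 fortnights and compared with what people had actually reported each fortnight. The sketch below is a simplified illustration of that logic only; the dollar figures and the debt formula are invented, not the scheme’s actual calculations.

```python
FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income):
    """Robodebt-style assumption: income was earned evenly across the whole year."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def raise_debt(annual_ato_income, reported_fortnightly_incomes, fortnightly_payment):
    """Flag a 'debt' wherever the averaged income exceeds what the person reported."""
    assumed = averaged_fortnightly_income(annual_ato_income)
    debt = 0.0
    for reported in reported_fortnightly_incomes:
        if assumed > reported:
            # the gap is treated as undeclared income and part of the payment is
            # clawed back (the 0.5 taper here is invented for illustration)
            debt += min(fortnightly_payment, (assumed - reported) * 0.5)
    return debt

# A casual worker earns $13,000, all of it in 10 fortnights, and truthfully
# reports $0 in the 16 fortnights they were out of work and on benefits.
reported = [1300.0] * 10 + [0.0] * 16
print(raise_debt(13_000, reported, fortnightly_payment=550.0))
# Averaging invents $500 of "income" in every fortnight the person truthfully
# reported nothing, so a debt is raised even though nothing was owed.
```

For anyone with irregular or casual income, precisely the people most likely to need support, the averaging assumption manufactures discrepancies that never existed.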

The social harm was enormous. Many people fell into depression and anxiety, and some even took their own lives. In 2023 a royal commission found the scheme had been dishonest and unlawful, and the government had to issue a public apology and compensate those affected (Henriques-Gomes, 2023).

ABC News’ in-depth coverage of the Robodebt Royal Commission’s findings offers an expert journalistic account and interpretation (ABC News, 2023).

As Safiya Noble (2018) argues, algorithms are not neutral; they reproduce and even amplify existing inequalities, mirroring and reinforcing the power dynamics of society. In most cases, the design and use of technology unintentionally, or even deliberately, reflects inequalities of gender, race, and class, and these prejudices become embedded in the algorithms we rely on every day. It is therefore essential to recognise that technology does not fix social problems, particularly when it is used as a substitute for human judgment and accountability. A good system should strengthen social justice, offer transparency and accountability, and ensure that everyone, especially the most vulnerable, is treated fairly. Only then can we avoid unwittingly contributing to oppression and make sure that technological progress serves everyone instead of deepening social divides.

Challenging inequality in the age of AI: what do we need to do?

AI plays an ever larger role in decisions about our lives, from online shopping suggestions to genuinely significant financial decisions. It is not always safe, however, to leave decision-making to AI, because AI systems are imperfect: their decisions can carry biases inadvertently built in during development, and their outcomes can be unjust when the data they draw on is not broad enough.

So it is clearly not enough to leave it to the companies and technologists who build these systems to make sure everything is equitable. They may have more at stake in the technological breakthrough than in the ethical and societal questions behind it. We cannot rely on their codes of practice and self-regulation alone, because everyone’s real-world interests are involved.

That is why we need robust external regulation to guarantee that AI applications are transparent and fair. Governments and regulators must establish explicit laws and norms to guide the development and application of AI, so that it does not violate individuals’ rights or unintentionally entrench social inequalities. For instance, businesses should be required to disclose how their AI systems make decisions and what data sources they use, so that the public can understand and judge whether those decisions are trustworthy.

Furthermore, we need effective complaint and grievance mechanisms so that those harmed by biased AI decision-making can seek redress. This means allowing affected parties to appeal an AI’s decision and request a fresh human review.

Finally, by strengthening regulation, we can harness the potential of AI to benefit society instead of letting the technology’s flaws create new problems. We must build systems that are more human-centred, so that progress in AI truly serves everyone and moves society towards greater equity and transparency.

Conclusion

As AI fills more and more of our lives, the convenience it brings is undeniable. But the problems that come with it cannot be overlooked either. Incidents like Robodebt remind us that, left unregulated, technology can spiral out of control and do irreparable harm.

We cannot simply leave it to technicians and corporations to ensure that AI is safe and fair. We, as members of society, can and should insist on tighter controls. That means opening up the decision-making behind AI systems to the public, making them transparent and subject to ongoing challenge and scrutiny.

We must also raise public awareness of AI as the technology develops. Access to information and education is crucial to empowering people to take part in this technological revolution.

Ultimately, we need more than clever machines; we need clever policy and compassionate regulation to make sure that the power of technology builds, rather than erodes, our shared prosperity. Let us construct, together, a future in which we all share in the returns of technological progress.

Reference list

ABC News. (2023, July 7). Robodebt royal commission findings revealed, individuals referred for prosecution [Video]. YouTube. https://www.youtube.com/watch?v=zDi3Tsc33iQ

Andrejevic, M. (2019). Automated culture. In Automated media (pp. 44–72). Routledge.

Chen, Y.-S., & Zaman, T. (2023). Shaping opinions in social networks with shadow banning. PLOS ONE, 18(10), e0289871. https://doi.org/10.1371/journal.pone.0289871

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1–21). Yale University Press.

Flew, T. (2021). Regulating platforms (pp. 79–86). Polity Press.

Henriques-Gomes, L. (2023, July 7). Robodebt royal commission report finds scheme was ‘dishonest and illegal’. The Guardian. https://www.theguardian.com/australia-news/2023/jul/07/robodebt-royal-commission-report-finds-scheme-was-dishonest-and-illegal

Just, N., & Latzer, M. (2019). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information (pp. 1–18). Harvard University Press.
