When Algorithms “Rule the Heavens and Earth”: Do We Really Know Who Is Controlling the World We See?

Have you noticed that some of the software you use seems to understand you better and better, knowing your preferences with uncanny precision? After a few minutes on Douyin, the content starts to hook you, and before you know it an hour has passed. You see a product recommended by a blogger on a social media platform, jump to a shopping platform, and the first item listed is exactly what you wanted to buy. It feels like magic, as if the app were the proverbial worm in your belly that knows your every thought. Sometimes, before you have typed anything into the search bar, it already knows what you were going to type, as if another version of you existed out there in the world.

Have you ever wondered who is controlling these platforms? Is it so-called AI or something else?

So which is it: a technological marvel, or something closer to a psychic trick?

Do not misunderstand: this is not an electronic fortune-teller, merely something called an ‘algorithm’ silently observing you. Like MOSS from “The Wandering Earth,” it watches every like, every video viewed, and every search query, without rest. And it does not only observe; it intervenes in the content you see, potentially confining you within an ‘echo chamber’ of your own making.

Welcome to the world governed by algorithms: a digital society that you never chose to move into, yet wake up in every day.

The Shortlist: Social Media Platform Recommendations (https://protectdemocracy.org/work/shortlist-social-media-recommendations/)

The invisible “algorithmic hand” is constructing our world. 

We have traditionally been accustomed to the notion of “governance” as something performed by the government, such as traffic police issuing fines, urban management clearing stalls, or a mayor’s directive to improve traffic conditions. However, many of the “rules” that influence our lives are no longer set by the government but by these intangible platforms: Facebook, TikTok, Weibo, Douyin, Baidu, Google… Through a series of complex and enigmatic algorithms, they silently control what we see, how we interact, and even whether we can be “seen.”

Andrejevic posits: “Automation shifts the focus from representation to preemption: from telling us what we want, to shaping it in advance” (Andrejevic, 2019).

In this discussion, we will dig into how these platforms utilize algorithms to ‘govern’ us; whether these governance mechanisms exhibit bias, operate as black boxes, or have ulterior motives; and most importantly: whether ordinary individuals have any means to say ‘no.’

Are you ready to explore this ‘invisible government’ further?

Those “objectively neutral” algorithms are, in fact, automatic perpetuators of bias.

Now, let us discuss a term that frequently appears in technology news, product launches, and annual summaries by executives: algorithms.

“Our recommendation system is based on advanced AI algorithm models”; “This technology intelligently distributes content based on user profiles.” These statements sound particularly rational, scientific, and unbiased. However, the reality is that while algorithms are smart, they are far from impartial.

How does a platform “learn” what you want? It relies on every one of your likes, shares, dwell times, and inputs: all of your actions become “data points,” fed into a model that learns to predict your next move. This phenomenon is academically referred to as datafication: the transformation of our behaviors, emotions, and relationships into computable, analyzable data.
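To make datafication concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the event log, the action weights, and the scoring rule are invented stand-ins, not any real platform’s pipeline.

```python
from collections import defaultdict

# Hypothetical interaction log: (user, topic, action, dwell_seconds).
# Real platforms log far more, but the principle is the same:
# behavior becomes numbers.
events = [
    ("alice", "cooking",  "like",  45),
    ("alice", "cooking",  "view",  30),
    ("alice", "politics", "view",   3),
    ("alice", "cooking",  "share", 60),
]

# Assumed weights: how strongly each action signals "interest".
ACTION_WEIGHT = {"view": 1.0, "like": 3.0, "share": 5.0}

def interest_profile(events):
    """Collapse raw behavior into per-topic interest scores."""
    profile = defaultdict(float)
    for _user, topic, action, dwell in events:
        # Longer dwell time amplifies the signal.
        profile[topic] += ACTION_WEIGHT[action] * (1 + dwell / 60)
    return profile

profile = interest_profile(events)
# Recommend the highest-scoring topic: past behavior becomes the
# sole predictor of the future, which is how echo chambers form.
print(max(profile, key=profile.get))  # -> "cooking"
```

Notice that nothing in this toy model ever asks what you might want tomorrow; it can only replay an amplified version of what you did yesterday.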

This sounds reasonable; however, the issue is that algorithms learn from our “history,” but history is never simple.

Safiya Noble describes a telling experiment in her book Algorithms of Oppression: “When you type ‘black girls’ into Google, the results are porn sites. When you type ‘white girls,’ the results are innocuous. These are not technical errors. They are algorithmically driven results that reflect the values, assumptions, and commercial interests embedded in Google’s advertising and ranking systems” (Noble, 2018).

She points out that searches for “Black girls” often return pornographic content while searches for “White girls” tend to align with mainstream aesthetics. This is not a system error but rather an indication that “algorithms reflect a combination of societal values, commercial interests, and implicit biases.”

We instinctively believe that search results are objective; however, the truth is that these results are generated by a triad of algorithms, models, and advertising bidding systems. Their concern is not about being right or wrong but about achieving high click-through rates.
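A hypothetical sketch of that triad, in the same spirit: the titles, bids, and click-through rates below are invented, and the blended scoring formula is an assumption for illustration, not any real search engine’s ranking function.

```python
# Invented search results: (title, relevance, advertiser_bid, predicted_ctr).
results = [
    ("well-sourced article",   0.9, 0.10, 0.02),
    ("sensational clickbait",  0.4, 0.50, 0.12),
    ("neutral reference page", 0.8, 0.00, 0.01),
]

def ranking_score(relevance, bid, ctr, ad_weight=0.7):
    # Assumed objective: blend topical relevance with expected
    # ad revenue (bid x click-through rate). No term in the formula
    # asks whether a result is true, fair, or harmful.
    return (1 - ad_weight) * relevance + ad_weight * (bid * ctr * 100)

for title, *_ in sorted(results, key=lambda r: ranking_score(*r[1:]), reverse=True):
    print(title)
# "sensational clickbait" outranks the well-sourced article, because
# the formula rewards what earns, not what informs.
```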

As Just and Latzer state: “Algorithms do not merely select information; they construct reality by prioritizing certain facts, voices, and interpretations over others” (Just & Latzer, 2017).

They place you within a future predicted by your past behaviors, narrowing your path until even the space to imagine other possibilities is sealed off.

End Adultification Bias (Full Version) (https://www.youtube.com/watch?v=L3Xc08anZAE)

“Black Box” Automation: The Uncertainty of Decision-Making Agents

Imagine a scenario where your loan application is rejected, and the bank attributes this decision to “system evaluation results,” without providing any further explanation. Would you feel helpless?

Welcome to the “black box society.”

Frank Pasquale (2015) coined the term “black box society” to describe our current condition: we are increasingly reliant on algorithms, yet their decision-making processes remain opaque. We are unaware of how these algorithms make decisions, and even when they malfunction, there is no accountability.

“We are governed by technologies we do not understand, and their designers refuse to explain. The very opacity of these systems helps entrench the power of those who control them” (Pasquale, 2015).
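A minimal sketch of what this feels like from the outside, under invented assumptions: the weights, threshold, and feature names below are hypothetical, but the interface is the point. The applicant receives a verdict and nothing else.

```python
# Hidden internals: the applicant never sees these.
_SECRET_WEIGHTS = {"income": 0.4, "zip_code": -0.5, "age": 0.1}
_THRESHOLD = 30.0

def _score(applicant: dict) -> float:
    # Note the zip_code weight: a geographic proxy the applicant
    # can neither inspect nor contest.
    return sum(w * applicant[k] for k, w in _SECRET_WEIGHTS.items())

def loan_decision(applicant: dict) -> str:
    # The public interface: a verdict with no reasons attached.
    if _score(applicant) >= _THRESHOLD:
        return "approved"
    return "denied: system evaluation result"

print(loan_decision({"income": 50, "zip_code": 10, "age": 30}))
# -> "denied: system evaluation result" -- and that is all you get.
```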

This reveals not only a problem of “opacity” but also a loss of bargaining power. In everyday life we can still argue our case with a teacher, an HR officer, or an official; in the face of a platform, there is no one to negotiate with.

In digital governance, “the right to know” is not a luxury but a fundamental democratic requirement. However, it has now become an unattainable “technological privilege.”

Automation Systems: Not for Convenience, but for Dehumanization

Mark Andrejevic offers a thought-provoking perspective: “Automation relocates decision-making power, not just away from people, but away from sites of accountability” (Andrejevic, 2019). This implies that an increasing number of issues that should require human contemplation and judgment are being delegated to algorithms, which lack moral sensibility and considerations of social justice.

As automated decision-making becomes more deeply embedded in daily life, we have become increasingly accustomed to explanations being absent. YouTube employs AI models to automatically detect “sensitive content,” often resulting in the erroneous suppression of legitimate expression, misinterpretation of satirical content, and even the removal of cultural expressions from non-Western contexts. Meanwhile, truly incendiary content may evade detection because it does not trigger predefined keywords.
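The failure mode is easy to reproduce with a deliberately naive filter. This sketch assumes a simple keyword-matching moderator, which is cruder than what platforms actually deploy, but it shows why context-blind automation flags satire while missing coded incitement.

```python
# An invented blocklist; real systems use larger lists and ML models,
# but share the same blind spot: no sense of context, irony, or dialect.
BLOCKED_KEYWORDS = {"attack", "violence"}

def is_flagged(post: str) -> bool:
    """Flag a post if any blocked keyword appears, regardless of context."""
    return bool(set(post.lower().split()) & BLOCKED_KEYWORDS)

posts = [
    "My satirical essay on why we should attack bad arguments with evidence",
    "Organize quietly; you know the time and the place",  # coded, no keyword
]

for post in posts:
    print(is_flagged(post), "->", post)
# True  -> the satire gets removed (a keyword matched)
# False -> the genuinely dangerous post sails through (no keyword)
```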

This is not a matter of “AI not being smart enough” but of AI lacking cultural understanding. More importantly, when content is erroneously removed, the platform typically responds with little more than “the system flagged it, and it will not be restored.”

Automation does not merely replace human decision-making; it eliminates accountability mechanisms.

Platform Governance: Not “No Regulation,” but “Self-Regulation”

At this juncture, you might ponder, “Where does oversight come into play? Does the government not intervene at all?”

Regulation does exist; it has never been absent. The crux is that, in practice, it largely takes the form of “self-regulation” by the platforms themselves, which muddies the regulatory landscape.

Instances abound: Twitter and Facebook cite their “community guidelines”; TikTok flags “content violations”; YouTube invites you to appeal video removals. Yet in every one of these mechanisms, the platform writes the rules and appoints itself judge; all that is left for users is to comply.

Terry Flew (2021) critiques this platform governance model as akin to a “corporate constitutionalism”: platforms amalgamate the roles of legislation, judiciary, and enforcement, all under the guise of “user experience” or “freedom of speech,” rendering scrutiny impossible.

The outcome? Ordinary users find it arduous to voice dissent, and nations grapple with regulatory challenges, especially concerning cross-border operations of international platforms. The sole entities consistently wielding influence are the tech giants who shape algorithmic design, data strategies, and policy formulation.

What we perceive as our use of platforms is, in reality, platforms molding us.

In various presentations, AI often appears as an intelligent yet reserved engineering student—capable of performing tasks with ample data and shining once properly trained. However, delving into the reality behind these models reveals that AI is, in fact, a political animal wearing glasses.

Kate Crawford (2021), in “The Atlas of AI,” elucidates that “AI is neither artificial nor intelligent. It is made from natural resources, fuel, human labor, data, and massive capital investment. It is fundamentally political” (Crawford, 2021). As she puts it, “Technology is not intangible; it stands on the shoulders of the earth and people” (Crawford, 2021).

What does this imply? AI is not merely a “tool” but a component of the global governance structure. It exacerbates the power imbalance between technological powers and developing nations. Without establishing algorithmic rules, one is governed by others’ rules; without converting data into policy influence, one passively “contributes data.”

Many AI systems rely on annotators from platforms like Amazon Mechanical Turk who, by some accounts, earn less than $2 a day for tasks such as labeling “Is this image a dog or a cat?” Their labor is rendered invisible by the very systems it powers.

What we perceive as using an app is actually participating in a global power allocation system, albeit passively.

Are you a robot? Introducing “No CAPTCHA reCAPTCHA” (https://developers.google.com/search/blog/2014/12/are-you-robot-introducing-no-captcha)

Our “public voice” is gradually being squeezed out by algorithms.

In the past, when we discussed public issues such as elections, environmental protection, and education, we relied on mediums like television, newspapers, and public speeches. Everyone had the opportunity to voice their opinions, and even with differing viewpoints, a semblance of reason could emerge from the discourse.

Nowadays, however, whether your voice is heard, whom it reaches, and how far it travels all depend on the platform’s algorithmic gate. The platform can lift your content to prominence or render it invisible. What you experience as “participation” is in fact a script meticulously orchestrated by algorithms for optimal viewing.

More critically, many issues that should be resolved through social dialogue, such as fake news, online violence, and extreme rhetoric, have now been transformed into “technical problems”—the platform’s response of optimizing algorithms and adding labels is considered sufficient.

Yet, at their core, these are not merely technical issues but questions of social equity and power balance.

Pathways to Governance: Not Just Controlling Platforms, but Awakening Users

Terry Flew asserts that reliance on “self-governance” by platforms is untenable, as “platforms have little incentive to limit their own power unless compelled by regulation or public pressure” (Flew, 2021). In essence, platforms will not voluntarily curtail their own authority unless external pressure is applied. We can no longer naively trust platforms to “self-regulate,” nor place our hopes in “smarter technology.”

Our requirements are as follows:

– Mandating platforms to disclose the mechanisms of their algorithms for public scrutiny;

– Advancing robust regulatory frameworks, particularly in the Global South;

– Educating the public on how algorithms influence information, emotions, and social realities.

Swiping fingers, but scripting the world

Finally, we return to the opening question: is the algorithm merely a “recommendation tool”? No. It is an extension of power, a variant of governance, and a sieve for narratives. What you perceive as “the world” is in fact a script written for you, and you have never even seen its cover, let alone been given the chance to rewrite its plot.

AI, algorithms, and automation are not some form of “futuristic mysticism”; they are integral components of contemporary governance. While you may believe you are merely watching videos, scrolling through Weibo, or searching for information, in reality, you are daily subjected to the “behavioral norms” and “value hierarchies” set by these platforms. Failing to understand this dynamic means allowing it to dictate your future. The aim is not to reject technology but to reclaim our agency.

This is not an “anti-technology” piece; rather, it seeks to prevent us from blindly idolizing technology. As we like, share, search, and scroll through videos, we should also question: Whose interests do these underlying algorithms serve? For whom do they work?

Digital governance is not merely an issue for the elite; it is a struggle for rights that concerns every smartphone user.

References:

Andrejevic, M. (2019). Automated culture. In Automated media (pp. 44–72). Routledge.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1–21). Yale University Press.

Flew, T. (2021). Regulating platforms (pp. 79–86). Polity Press.

Google. (2014). Are you a robot? Introducing “No CAPTCHA reCAPTCHA”. https://developers.google.com/search/blog/2014/12/are-you-robot-introducing-no-captcha

Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157

Noble, S. U. (2018). A society, searching. In Algorithms of oppression: How search engines reinforce racism (pp. 15–63). New York University Press.

Pasquale, F. (2015). The need to know. In The black box society: The secret algorithms that control money and information (pp. 1–18). Harvard University Press.

Wakefield, J. (2021). AI: Ghost workers demand to be seen and heard. BBC News. https://www.bbc.com/news/technology-56414491

YouTube. End Adultification Bias (Full Version). https://www.youtube.com/watch?v=L3Xc08anZAE (Accessed: 09 April 2025).
