As early as the 19th century, scholars began to explore the impact of mass media on individuals and society. Today, in an era of explosive technological development, media’s influence on society and individuals has entered an entirely new dimension.
Do you wake up every day to the same routine? Your alarm goes off, your smartwatch analyzes your breathing and heart rate to assess your sleep quality, and your smart speaker delivers the weather forecast. You might then browse through apps for music, news, or podcasts, streaming content tailored to your preferences. A coffee discount notification convinces you to start your morning with a cup of coffee, and you open your navigation app to begin your busy day. At lunchtime, food delivery apps recommend nearby restaurants, and in the evening, TikTok captivates your attention, while Netflix’s streaming logic draws you into hours of viewing (Burroughs, 2019).
Nowadays, AI technology has seamlessly integrated into our daily lives. According to research from Tsinghua University, by 2025 the average person will interact with over 15 AI systems each day (Grant, 2011). Initially, people reveled in the joy brought by precise recommendations and the convenience of technology. However, it gradually dawned on some that these systems were subverting and reshaping human agency and social structures. This realization has sparked heated discussions about regulating and governing AI. Ironically, the very algorithms under discussion played a role in delivering this article to you.

Digital Transformation: Algorithmic Benefits and the Efficiency Revolution
From the late 1990s to the early 2000s, the internet began to spread globally. By the late 2010s, technologies like cloud computing, artificial intelligence, big data, and the Internet of Things (IoT) matured, ushering the world into a new era of digital transformation. The emergence of ChatGPT in late 2022 further solidified the arrival of the AI age. Under Bruno Latour’s “Actor-Network Theory,” algorithms are no longer mere tools but “non-human actors.” Through data flows, predictive models, and automated decision-making, they reshape power networks in society (Quan & Li, 2021). Social efficiency has been revolutionized. For instance, Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) developed a solar cell testing robot that works 24/7, testing 12,000 cells with an efficiency 600 times greater than manual testing.
However, behind this algorithm-driven efficiency revolution lies a profound social restructuring and systemic cognitive shift. A 2023 study from MIT Media Lab found that Spotify users, after three months of algorithmic recommendations, saw a 42% decline in their active exploration of new music, confining their musical consumption to an algorithm-defined “comfort zone.” This aligns with Turkle’s warning: “We teach machines to think, but in the process, we forget to think about ourselves” (Turkle, 2015).

Algorithms Reshaping Personal Lives: The Algorithmic Prisoner
The Subversion and Restructuring of Cognitive Frameworks
Algorithmic recommendation systems continuously push content aligned with individual interests while incorporating commercial considerations. This selective mechanism shapes what we see and how we see it, directly influencing our perception of the world and, in turn, our behavior (Just & Latzer, 2019). Over time, algorithms replace traditional methods of information retrieval, eroding our capacity for deep thinking and autonomy. We gradually develop an unhealthy dependence on these highly integrated systems.
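The feedback loop behind this “comfort zone” effect can be sketched in a few lines of Python. This is a deliberately simplified toy model, not any real platform’s recommender: the genre names, scores, and reinforcement rule are all invented for illustration. The point is only that a tiny initial preference, fed back as engagement, can come to monopolize everything the user is shown.

```python
from collections import Counter

def recommend(scores):
    """Greedy recommender: always serve the highest-scoring category."""
    return max(scores, key=scores.get)

def simulate(rounds=50):
    # Hypothetical user profile with a slight initial tilt toward "pop".
    scores = {"pop": 1.1, "jazz": 1.0, "classical": 1.0, "folk": 1.0}
    shown = Counter()
    for _ in range(rounds):
        category = recommend(scores)
        shown[category] += 1
        # Each impression is counted as engagement and reinforces the score,
        # so the initial tilt compounds instead of being corrected.
        scores[category] += 0.1
    return shown

print(simulate())  # all 50 recommendations go to "pop"; jazz, classical, and folk never appear
```

Real recommenders inject exploration and randomness precisely to counteract this; the sketch only shows why, without such correction, consumption narrows instead of broadening.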
The Erosion of Individual Autonomy and Sense of Control
Algorithms generate “user profiles” based on historical online behavior, a process that sounds like a personalized service but subtly transfers power from individuals to systems. Through data collection and surveillance (e.g., “predictive policing”), algorithms reinforce control over individuals, creating invisible power structures. For example, food delivery platforms may recommend high-calorie foods based on order history, leading users to unconsciously accept this guidance as their own choice. This “black-box” decision-making mechanism weakens individual agency, trapping users in a paradox of “passive activity.”
The Paradox of Freedom in the Panopticon
Foucault’s “panopticon” theory finds its digital-age embodiment in our voluntary participation in surveillance. We willingly wear smartwatches to track steps, grant apps microphone access, and share location data—yet these “free choices” construct a digital prison. Even less visible, social media platforms convert every like into a “social credit score” that influences content recommendations. In this “participatory surveillance,” users become both prisoners and jailers, reinforcing algorithmic control (Todd, 2024). Each seemingly self-directed choice follows the algorithm’s invisible guidance, and in the process users unconsciously sacrifice their privacy.
Automation Rewriting Social Structures: Inequality and Ethical Challenges
Algorithmic Discrimination and Resource Allocation Injustice
We all know that algorithms rely on vast amounts of data to function effectively. However, the data they process is often simplified through labeling. Like a foolish yet obedient servant, algorithms reduce complex identity markers such as race, gender, and class to single dimensions. Hard to picture? Consider Amazon’s AI recruitment tool, which downgraded resumes containing keywords like “women’s college” due to historical gender biases in its training data and was ultimately discontinued. Simple gender labels often reinforce existing biases, and this “data poisoning” causes algorithms to encode historical discrimination as “objective rules” (The Greenlining Institute, 2021). Similarly, American researchers found that low-income communities are disproportionately targeted with high-interest loan ads, while high-income groups receive low-risk investment recommendations. Twitter has also conducted platform audits that revealed algorithmic political bias. This highlights how, under the guise of technical neutrality, algorithms assert decision-making dominance over social resource distribution and perpetuate invisible discrimination.
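A toy model makes the mechanism concrete. Everything below is fabricated for illustration—the resumes, keywords, and scoring rule come from no real system: a naive scorer trained on biased historical decisions penalizes the “women’s college” keyword simply because past rejections correlated with it.

```python
from collections import defaultdict

# Fabricated historical hiring decisions (1 = hired). Past bias means
# resumes mentioning "women's college" were systematically rejected.
history = [
    ({"python", "leadership"}, 1),
    ({"java", "leadership"}, 1),
    ({"python", "women's college"}, 0),
    ({"java", "women's college"}, 0),
]

def train(data):
    """Learn a per-keyword hire rate from historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # keyword -> [hired, seen]
    for keywords, hired in data:
        for k in keywords:
            counts[k][0] += hired
            counts[k][1] += 1
    return {k: hired / seen for k, (hired, seen) in counts.items()}

def score(rates, resume):
    """Average the learned hire rates over a resume's keywords."""
    known = [rates[k] for k in resume if k in rates]
    return sum(known) / len(known)

rates = train(history)
base = score(rates, {"python", "leadership"})                        # 0.75
flagged = score(rates, {"python", "leadership", "women's college"})  # 0.50
# An otherwise identical resume scores lower once the keyword appears:
# historical discrimination has been re-encoded as an "objective rule".
```

Nothing in the code mentions gender, which is exactly the trap: the bias enters through correlations in the training data, not through any explicit rule.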

The Cultural Assembly Line and Deepfakes
As AI technology is applied across various fields, you may have noticed a trend on social platforms: identical posts with matching text and even emojis. This template-driven narrative style and content creation increasingly bear the “flavor of AI.” AI-driven content creation lowers barriers to entry but stifles diversity; studies show AI filters out niche or complex content, leading to homogenized cultural products (Gill, 2022). Commercial interests accelerate the decline of originality, alienating human emotion under data-driven logic.
More alarmingly, deepfake technology, fueled by AI, has enabled the spread of misinformation, from celebrity deepfake pornography to fabricated political statements, even in wartime. These falsehoods, amplified by algorithmic recommendations, threaten democratic foundations. The World Economic Forum (WEF) has identified this as a top global risk.

Digital Labor Exploitation in the Age of Automation
Big data, as a prime example of instrumental rationality, is highly standardized. Similarly, platforms and systems incorporating algorithms have standardized monitoring. Crawford (2021) noted that routing algorithms in food-delivery platforms worsen labor exploitation. For instance, Italy’s Flink optimized delivery routes via AI, raising riders’ accident rate by 300%, yet blamed “personal errors.” But it’s not just food-delivery or streaming platforms where dehumanizing algorithms are an issue. When users interact with platforms, they become digital laborers, generating data for platform profits. Scariest of all, algorithms and feedback-reward mechanisms work together to make workers complicit in their own exploitation.
Algorithmic Governance: Balancing Control and Chaos
The above discussion highlights the significant impact of advanced AI technologies on our society. It’s clear that algorithms need regulation. However, challenges arise when considering how to regulate them. If we directly restricted algorithm and AI usage, platforms could lose the precision of their interest targeting and face bankruptcy, and we would struggle with tedious tasks once more. The question is, how do we balance the public good and commercial interests of algorithms? How should we handle the vast amounts of personal data held by tech giants? Can we stop algorithms from learning and reinforcing social biases during their operation? And how can ordinary citizens, while enjoying the benefits of algorithms, avoid being controlled by them and remain in charge? These pressing issues require immediate discussion and resolution. In both academic and industry circles, discussions on algorithmic, AI, and automation governance are ongoing, with a strong focus on algorithmic transparency, explainability, bias, and balancing public and commercial interests.
Globally, AI governance takes diverse approaches. The EU, with its AI Act, classifies AI systems by risk, demanding that high-risk systems (e.g., medical diagnostics) disclose decision-making logic and undergo ethical reviews. The US favors market-driven regulation, requiring “red teaming” (simulated attacks) before AI deployment to balance innovation and risk. China emphasizes classified and tiered control, implementing a “dual-list” system for generative AI, including content safety checks and data labeling rules, to prevent privacy misuse. Despite differing paths, all countries share the same goal: setting ethical boundaries for technological progress as algorithms reshape our world.
In summary, algorithmic governance currently has clear directions. First, an ethics-first approach is essential, integrating human values into machine learning processes to balance instrumental and value rationality. This involves embedding ethical considerations into algorithm design, such as incorporating fairness reviews to prevent bias or filter bubbles (Munson et al., 2013). Second, algorithmic governance requires collaborative participation among multiple stakeholders, including enterprises, users, and governments. This collaboration should be supported by third-party certification and the establishment of industry-wide ethical standards. Third, dynamic adaptation and feedback mechanisms are critical. This involves iterative testing, continuous monitoring of algorithmic impacts, and adjusting governance strategies based on societal feedback to ensure algorithms remain aligned with ethical and social goals.
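To make “fairness reviews” less abstract, here is a minimal sketch of one check such a review might run. The four-fifths (80%) rule, borrowed from US employment-selection guidelines, is just one heuristic among many fairness metrics, and the audit data below is invented:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> per-group approval rate."""
    tallies = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        tallies[group][0] += int(approved)
        tallies[group][1] += 1
    return {g: ok / total for g, (ok, total) in tallies.items()}

def passes_four_fifths(rates, threshold=0.8):
    """Flag disparate impact: every group's rate must reach 80% of the best rate."""
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Invented audit sample: group A approved 3 of 4, group B approved 1 of 4.
audit = [("A", True), ("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(audit)   # {"A": 0.75, "B": 0.25}
passes_four_fifths(rates)        # False: 0.25 < 0.8 * 0.75, so the system needs review
```

The value of even a crude check like this is procedural: it forces the disparity into the open, where the multi-stakeholder review the paragraph above describes can decide what to do about it.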
For ordinary internet users, enhancing algorithmic literacy and engaging in algorithmic resistance are crucial parts of algorithmic governance. Algorithms collect information via our behavioral data, but we can also use deliberate behavioral data to influence algorithms and reclaim control. What we truly need to do is reflect on technology’s impact and grasp what makes us human. In this unpredictable age of intelligence, what can we humans do?
At this critical point in a new technological era, we must recognize that algorithmic and AI technologies are merely tools, just like the mass media of the last century. They are created to serve humanity, not to control it. While acknowledging their sophistication and autonomy, we must remember that humans retain the ultimate authority and control over technology. The ultimate question for algorithms is not just efficiency but safeguarding human dignity. After all, it’s not the system that traps us, but our cognitive inertia in treating AI as the ultimate truth.
References
Burroughs, B. (2019). House of Netflix: Streaming media and digital lore. Popular Communication, 17(1), 1–17. https://doi.org/10.1080/15405702.2017.1343948
Grant, A. W. (2011). Artificial intelligence through the eyes of the public. Worcester Polytechnic Institute. https://digital.wpi.edu/downloads/d504rk556
Quan, Y., & Li, Q. (2021). Algorithms as actors: Reshaping communication forms and embedding into social structures. Journal of Shaanxi Normal University (Philosophy and Social Sciences Edition), 2021(4), 117–124.
Turkle, S. (2015). Alone together: Why we expect more from technology and less from each other. Basic Books.
Just, N., & Latzer, M. (2019). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238-258. https://doi.org/10.1177/0163443718776931
Todd, J. (2024, May 2). Foucault’s Panopticism: The Paradox of Power and Control in Modern Surveillance. Dispatches from Room 101. Retrieved from https://room101.jtodd.info/jvan1/is-foucault-right/
The Greenlining Institute. (2021, February 18). Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination. Retrieved from https://greenlining.org/publications/algorithmic-bias-explained/
Gill, K. S. (2022). Transformational AI: seeing through the lens of digital heritage and ‘cybersyn’. AI & SOCIETY, 37(3), 815–818. https://doi.org/10.1007/s00146-022-01484-1
Crawford, Kate (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, pp. 1-21.
Munson, S. A., Lee, S. Y., & Resnick, P. (2013). Encouraging reading of diverse political viewpoints with a browser widget. Proceedings of the International AAAI Conference on Web and Social Media (ICWSM). https://dub.washington.edu/djangosite/media/papers/balancer-icwsm-v4.pdf