Algorithm Prisoners: How do we become ‘manipulated’ in the data age?

From the information cocoon of social media to the hidden hegemony of automated decision-making

Photo Credit: Austin Distel on Unsplash

The Truman Show in the Age of Algorithms

Have you ever experienced these situations?

  • You’ve just looked at a pair of running shoes on a shopping site, and for the next few days social media and web ads keep showing you similar shoes.
  • You watch a TikTok video about a social issue, and your feed quickly fills with content expressing similar, and often more extreme, views.
  • Your news feed almost always reflects your side of a story, and when you argue with friends about a social issue, you discover you are each working from very different versions of the facts.

These are not coincidences, but the result of a carefully designed system of algorithmic manipulation. According to Auxier and Anderson (2021), the average person is exposed to more than 4,000 algorithm-based recommendations per day, and 71% of that information comes from no more than three platforms. We are living in a theater even more hidden than the Truman Show: algorithms shape our access to information and our behavior patterns through data tracking, personalized recommendation, and automated decision-making. They not only arrange the goods we see, but also shape our political positions, social perceptions, and even emotional responses.

These seemingly ‘intimate’ personalized services are only the tip of the iceberg of a larger system of algorithmic manipulation. While we marvel at how accurately these systems guess what we like, tech companies have already demonstrated a more dangerous truth in the lab: algorithms can not only predict our behavior, they can systematically change what we think.

Historical Case: Facebook’s “Emotional Contagion Experiment” – How algorithms can act as thermostats for social emotion

In 2014, Facebook conducted a secret experiment on 689,000 users (Kramer et al., 2014). The researchers modified the News Feed algorithm so that one group of users saw more negative content while another saw more positive content. The results were chilling: users who were fed negative content became more negative themselves, with sentiment in their posts dropping by 5 percent, while those who saw positive content posted 8 percent more positively. The study showed that emotions are algorithmically contagious: algorithms can affect not only our access to information, but also our mental state.

Photo Credit: Luke Chesser on Unsplash

Three mechanisms of algorithmic manipulation

1. Filter Bubbles: The Democracy Disruptor

Based on a user’s browsing history, “like” behavior, and social relationships, recommendation algorithms give priority to content that matches the user’s existing interests, producing an increasingly homogeneous information environment. This phenomenon, known as the Filter Bubble, was first described by Eli Pariser (2012), who pointed out that users have difficulty encountering views different from their own, resulting in a closed cognitive loop and social polarization. Bruns (2019) and Dubois and Blank (2018) further suggest that algorithm-driven machine learning systems continually steer users toward content that reinforces existing beliefs while avoiding ideas that challenge their cognitive boundaries, creating overlapping reinforcement of “information cocoons” and “echo chamber effects.”
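To make the feedback loop concrete, here is a minimal, hypothetical sketch in Python (not any platform’s real code) of a recommender that scores topics purely by past engagement. Because every click feeds back into the same affinity scores, the simulated feed collapses onto a narrow set of topics:

```python
import random
from collections import Counter

# Toy simulation of a filter bubble: the recommender learns only from clicks,
# so whatever the user engages with early on gradually crowds out everything else.
TOPICS = ["politics_left", "politics_right", "sports", "science", "music"]

def recommend(affinity, k=3):
    """Rank topics purely by learned affinity; there is no diversity term."""
    return sorted(TOPICS, key=lambda t: affinity[t], reverse=True)[:k]

def simulate(days=60, seed=7):
    rng = random.Random(seed)
    affinity = {t: 1.0 for t in TOPICS}      # start with no preference
    seen = Counter()
    for _ in range(days):
        feed = recommend(affinity)            # the top-k topics shown today
        # The user is a bit more likely to click what they already prefer.
        clicked = rng.choices(feed, weights=[affinity[t] for t in feed])[0]
        affinity[clicked] += 1.0              # the click feeds back into the ranking
        seen.update(feed)
    return seen

# After a few simulated weeks, the feed is dominated by one or two topics.
print(simulate())
```

The point of the toy model is what is missing: nothing in the objective rewards diversity or challenge, so narrowing is the default outcome.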

YouTube’s recommendation algorithm, radicalization, and the Brazilian political crisis

During the 2018 Brazilian presidential election, researchers found that YouTube’s recommendation algorithm was systematically pushing far-right content, causing widespread concern. According to the data, the platform was more likely to recommend videos built around emotional keywords such as “corruption” and “crisis”, and the average viewing time of such content was 2.3 times that of other videos. Once a user clicked on a video with a right-wing stance, four of the next five recommended videos were more extreme. By continuously reinforcing emotional stimulation and ideological bias, the algorithm led users step by step toward ever more radical content, forming the so-called “radicalization path”.

Such algorithmic mechanisms undermine the pluralistic information environment on which democratic societies depend, reducing complex issues to emotional confrontation and pushing public positions toward ever greater polarization. At the same time, users believe they are receiving information through “free choice”, when they are in fact trapped in an information trajectory set by the algorithm. What feels like “autonomous cognition” is often just the machine learning model’s optimized “emotional triggering path”, derived from their data.

2. Addictive Design: Neuroscience as a Weapon

By analyzing behavioral data such as dwell time, swiping speed, and interaction frequency, platforms continuously optimize their content delivery strategies to maximize “attention capture.” This mechanism is not simple data feedback but a form of progressive neuro-behavioral engineering: the algorithm repeatedly learns from the user’s behavior, adjusts its recommendation logic, and gradually evolves into a system that can accurately predict the user’s needs. As Flew (2021) notes, a well-designed algorithm can extract patterns from huge data sets through continuous human-computer interaction and predict the content a user is most likely to react to, enabling highly personalized, emotion-driven content delivery.
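What “maximizing attention capture” means in practice can be sketched very simply. The example below is hypothetical (the signal names and weights are invented for illustration): items are scored by predicted engagement, and the feed is just that score sorted in descending order, with no term for accuracy, balance, or user well-being:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_dwell_seconds: float   # how long similar users kept watching
    predicted_share_rate: float      # estimated probability of a share
    outrage_score: float             # emotional-intensity signal, 0 to 1

# Invented weights: the objective is attention, not accuracy or balance.
WEIGHTS = {"dwell": 0.5, "share": 30.0, "outrage": 10.0}

def engagement_score(item: Item) -> float:
    """Predicted engagement as a weighted sum of behavioral signals."""
    return (WEIGHTS["dwell"] * item.predicted_dwell_seconds
            + WEIGHTS["share"] * item.predicted_share_rate
            + WEIGHTS["outrage"] * item.outrage_score)

def rank_feed(items: list[Item]) -> list[Item]:
    """The feed is simply the items sorted by predicted engagement."""
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("Calm policy explainer", 40.0, 0.01, 0.1),
    Item("Outrage-bait clip",     55.0, 0.20, 0.9),
    Item("Local sports recap",    30.0, 0.05, 0.2),
])
print([item.title for item in feed])   # the outrage-bait clip ranks first
```

Real ranking systems are vastly more complex, but the structural point stands: whatever signals correlate with attention get amplified, and whatever does not is invisible to the objective.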


How does TikTok’s algorithm work?

TikTok’s recommendation algorithm analyzes thousands of behavioral signals from each user (likes, comments, watch time, and so on) to infer their current preferences and potential interests, and then pushes highly personalized content. This constant mix of content repeatedly triggers dopamine release, creating instant gratification and making the experience addictive. Unlike the algorithms of YouTube and Instagram, TikTok not only presents users with familiar content but also actively nudges them toward new content, making the platform feel both sticky and fresh. In this mechanism, the user’s autonomous decision-making is translated into a calculable “behavior pattern”, and all behavior becomes something to be predicted and manipulated. The result is that concentration becomes increasingly difficult, weakening the capacity for deep thought and long-term planning.
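The “familiar plus fresh” mixing described above can be modeled as a simple exploit/explore split. This is an illustrative sketch under assumed names and ratios, not TikTok’s actual system: most feed slots go to the user’s strongest inferred interests, and a small share is reserved for topics the user has never engaged with:

```python
import random

def build_feed(interest_scores: dict[str, float], all_topics: list[str],
               n_items: int = 10, explore_ratio: float = 0.2, seed: int = 0) -> list[str]:
    """Fill most of the feed from inferred interests; reserve a slice for exploration."""
    rng = random.Random(seed)
    n_explore = int(n_items * explore_ratio)
    n_exploit = n_items - n_explore

    # Exploit: topics the behavioral signals say the user already likes.
    exploit = sorted(interest_scores, key=interest_scores.get, reverse=True)[:n_exploit]
    # Explore: topics the user has not engaged with yet, to probe for new interests.
    unseen = [t for t in all_topics if t not in interest_scores]
    explore = rng.sample(unseen, min(n_explore, len(unseen)))
    return exploit + explore

# Interest scores inferred from likes, rewatches, and watch time (invented values).
signals = {"dance": 9.2, "cooking": 6.1, "gaming": 3.4, "diy": 1.8}
topics = ["dance", "cooking", "gaming", "diy", "news", "travel", "pets"]
print(build_feed(signals, topics, n_items=5))   # four familiar topics plus one new one
```

The small exploration slice is what keeps the feed feeling “fresh”, while the exploitation slice does the work of prediction; every reaction to the explored items then becomes new training signal.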

3. Automated Discrimination: Encoding Bias

The data used to train algorithms is often not neutral but saturated with historical social biases. These biases are systematically “encoded” into, and sometimes amplified by, algorithmic models.

Video: Amazon’s Sexist Recruitment AI (Nerds On Call Computer Repair, 2018)

Amazon’s AI recruitment tool discriminates against women

In 2018, Amazon’s internally developed AI recruitment system was found to be systematically biased against female job applicants, automatically downgrading resumes that contained the word “women’s”. The system had been trained on ten years of the company’s hiring data, and because the tech industry’s own sexism meant that, historically, male applicants were more likely to be hired (Dastin, 2018), the AI simply learned and perpetuated that bias.
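A deliberately simplified toy example (using scikit-learn, with invented data; this is not Amazon’s system) shows how this happens: if the training labels encode a history in which otherwise similar candidates carrying a gendered signal were hired less often, the model learns a negative weight on that signal and reproduces the discrimination on new applicants:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented "historical hiring" data: [years_of_experience, gendered_signal]
# In this toy history, candidates with the gendered signal were hired less often
# even at the same experience level, so the labels themselves encode the bias.
X = np.array([
    [4, 0], [5, 0], [6, 0], [7, 0],
    [4, 1], [5, 1], [6, 1], [7, 1],
])
y = np.array([0, 1, 1, 1,    # mostly hired
              0, 0, 0, 1])   # mostly rejected, despite identical experience

model = LogisticRegression().fit(X, y)

# The coefficient on the gendered signal comes out negative: the model has
# "encoded" the historical bias and will reproduce it on new applicants.
print("coefficients:", model.coef_[0])
print("hire probability, signal absent: ", model.predict_proba([[6, 0]])[0, 1])
print("hire probability, signal present:", model.predict_proba([[6, 1]])[0, 1])
```

Simply deleting the explicit column does not solve the problem, either, because correlated proxy features can carry the same information into the model.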

Algorithms, then, are not objective; they inherit implicit bias from their data and their developers. This mechanism turns technology into an “accomplice” of structural inequality and deepens existing social disparities.

Who benefits? The conspiracy of power and capital

Behind algorithmic manipulation lie enormous commercial interests and a concentration of power. Platform companies monopolize user attention and control data resources by building closed-loop ecosystems, and on that basis reap both economic and political benefits.

Targeted advertising has become the main source of revenue for companies such as Google and Meta, which rely on user data to target ads. Meta, for example, derives more than 80% of its revenue from advertising. In this model, user attention translates directly into advertising revenue, so to keep users on the platform for longer, these companies continuously optimize their recommendation systems, creating a “never-ending” flow of information from which they harvest data and clicks.

Video: How Facebook acquires user data (WION, 2020)

In the run-up to the 2016 US election and the Brexit referendum, Cambridge Analytica obtained Facebook user data without users’ authorization, built psychological profiles of millions of people, and used “psychological tactics” to deliver precisely targeted political ads, manipulating public opinion and profoundly affecting both votes. In 2020, the United States experienced another crisis of information manipulation: the “Stop the Steal” conspiracy theory spread widely on Facebook, and the platform’s recommendation algorithm was criticized for further amplifying extreme political content and deepening social divisions.

As Just and Latzer point out, algorithms are not merely code tools but an entire set of implicit institutional mechanisms. By filtering, sorting, and hiding information, they reshape users’ cognitive paths and behavior. At the same time, their black-box nature concentrates power in the platforms, and ordinary users find it difficult to perceive that they are being systematically manipulated. People keep contributing data without realizing it, yet have no control over how that data is used, becoming “digital laborers.”

Flew’s (2021) study likewise points out that algorithms generally tend to push emotional and confrontational content in order to secure higher engagement and longer dwell time. This mechanism quietly fuels social polarization and erodes the common ground on which public opinion rests.

The possibility of resistance: The path from individual to system

Faced with the reality of algorithmic manipulation, we are not entirely powerless. Rebuilding cognitive sovereignty, promoting institutional reform, and developing alternative technological paths are the keys to escaping the “algorithm cage”.

At the individual level, users should actively strengthen their media literacy and their awareness of digital sovereignty. We can:

1. Practice “digital detox” to reduce social media use and regain control of attention;
2. Diversify information sources through RSS feeds, independent news platforms, and academic podcasts, breaking the single information path shaped by algorithms;
3. Use browser tools such as “Unfollow Everything” and “TrackMeNot” to push back, to some extent, against platform recommendation mechanisms and regain some choice over how content is ranked.

At the system level, building a more transparent, accountable, and decentralized information environment is particularly critical.

The European Union has passed the Digital Services Act (DSA), which requires platforms to disclose how their recommendation algorithms work and to give users the right to turn off personalized recommendations; China has issued the “Regulations on the Management of Algorithmic Recommendations in Internet Information Services”, emphasizing that platform algorithms must be controllable and auditable.

On the technology side, decentralized mechanisms such as blockchain-based social networks and federated learning are emerging as new ways to weaken platform monopolies. The development of explainable AI (XAI) also gives users a way to understand and question the logic of algorithmic decisions, helping to rebuild a foundation of trust in algorithmic systems.
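As a minimal illustration of what “explainable” can mean here (the feature names and weights below are invented for the example), a ranking score that is a weighted sum can be decomposed into per-feature contributions, so a user sees why an item was pushed rather than just that it was:

```python
# A minimal sketch of explainability: if a ranking score is a weighted sum of
# features, each feature's contribution can be shown to the user instead of a
# bare score. (Feature names and weights are invented for illustration.)
WEIGHTS = {
    "matches_watch_history": 2.0,
    "high_outrage_language": 1.5,
    "posted_by_followed_account": 1.0,
    "paid_promotion": 3.0,
}

def explain(features: dict[str, float]) -> None:
    """Print the total score and each feature's contribution, largest first."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value
                     for name, value in features.items()}
    print(f"total score: {sum(contributions.values()):.2f}")
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:<28} {c:+.2f}")

explain({
    "matches_watch_history": 0.8,
    "high_outrage_language": 0.9,
    "posted_by_followed_account": 0.0,
    "paid_promotion": 1.0,
})
```

Whether platforms offer such explanations, and at what granularity, is ultimately a governance question rather than a purely technical one.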

A tug of war between humanity and technology

Algorithmic efficiency has made life more convenient, but it is also quietly reshaping people’s behavior patterns and even eroding their capacity for independent thought. Algorithms are no longer just computational tools working in the background; they are a complex of political, economic, and cultural forces. We are being invisibly choreographed into a social script constructed by code. As Pasquale puts it, algorithms function as the law of a new age: invisible, yet effective everywhere.

Adrienne Massanari (2017) also regards algorithms as a form of “platform politics”, pointing out that platforms, through a combination of design, policies and norms, silently shape specific cultural and behavioral patterns while suppressing other voices and practices. This covert discipline not only controls users at the technical level but also guides them at the cultural and value levels.

— When we scroll through short videos, are we choosing the content, or are the algorithms choosing us?
— How much of the information we get is real, diverse, or filtered?
— Algorithms are changing not just the way we live, but the way we understand the world.

When we laugh at Truman, who lives inside the director’s script, perhaps we should ask ourselves: how much of our worldview has been written by algorithms?

References

Auxier, B., & Anderson, M. (2021, April 7). Social Media Use in 2021. Pew Research Center. https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/

Dastin, J. (2018, October 11). Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/

Flew, T. (2021). Regulating Platforms. Polity Press.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111

Lang, K. (2024, March 26). How to Work With the TikTok Algorithm in 2023. Buffer Resources. https://buffer.com/resources/tiktok-algorithm/

Nerds On Call Computer Repair. (2018, October 10). Amazon’s Sexist Recruitment AI. YouTube. https://www.youtube.com/watch?v=JOzQjT-hJ8k

WION. (2020, November 5). US Election 2020: Facebook bans “Stop the Steal” group over calls for violence [Video]. YouTube. https://www.youtube.com/watch?v=LxJsg7Iz0R8
