The Algorithm Trap: How TikTok’s For You Page Can Fuel Harmful Content Loops

US-China Tariff Tensions: Is TikTok Bridging the Divide or Fueling the Fire?

“Can’t stop scrolling? Have you been trapped by the algorithm?”

Have you ever had this experience?

  • You only meant to spend two minutes on TikTok, but before you know it, an hour has flown by. What’s even stranger is that as you keep scrolling, the recommended content seems to “understand” you more and more, yet it also becomes increasingly repetitive.
  • Click on a video by a fitness influencer, and your feed suddenly fills up with weight-loss challenges.
  • Watch a video about anxiety, and the platform keeps pushing similar emotional content.
  • Open a short video about China–U.S. trade tensions, and the next second you’re flooded with emotionally charged and one-sided opinions.

This isn’t a coincidence — it’s a deliberate move by the algorithm. TikTok’s recommendation system closely analyzes every swipe, pause, like, and comment you make to pinpoint what content you might be interested in, then keeps feeding you similar topics. The more you watch, the more the platform tailors the content to your tastes — until you’re trapped in an information bubble.

This article will explore how TikTok’s algorithm creates content echo chambers by examining a recent real-world case — the new round of U.S. tariffs on China announced on April 3, 2025 — and discuss whether the platform should take greater responsibility and offer more transparency in response.

Algorithms aren’t objective — they’re a “mirror” that reflects and reinforces your behavior

TikTok’s For You page isn’t truly “tailored for you” — it’s tailored for your behavioral preferences. The recommendation system behind the platform uses machine learning to constantly rank content based on your activity data. Its core goal? To keep you on the platform for as long as possible.
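To make that logic concrete, here is a minimal, hypothetical sketch of how an engagement-driven recommender might rank candidate videos. The signal names and weights below are illustrative assumptions, not TikTok’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    topic: str
    # Predicted probabilities that the user will take each action,
    # as produced by a model trained on past swipes, pauses, likes, and comments.
    p_watch_full: float
    p_like: float
    p_comment: float
    p_share: float

# Illustrative weights (assumed): signals that correlate with more time on the app
# count most heavily toward the final score.
WEIGHTS = {"p_watch_full": 0.5, "p_like": 0.2, "p_comment": 0.15, "p_share": 0.15}

def engagement_score(c: Candidate) -> float:
    """Collapse the predicted engagement signals into a single ranking score."""
    return (WEIGHTS["p_watch_full"] * c.p_watch_full
            + WEIGHTS["p_like"] * c.p_like
            + WEIGHTS["p_comment"] * c.p_comment
            + WEIGHTS["p_share"] * c.p_share)

def rank_feed(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Return the top-k candidates by predicted engagement; nothing rewards variety."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]
```

Because the objective contains no term for diversity, videos resembling whatever you engaged with last will keep outranking everything else.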


Algorithms aren’t teaching us anything — they’re amplifying what we already want to see.

Zeynep Tufekci

As a result, they keep recommending content you already prefer, pulling you deeper into that space while filtering out different perspectives. This is exactly how information bubbles and algorithmic echo chambers are formed.

Research shows that this kind of algorithmic reinforcement not only entrenches users’ cognitive biases but can also undermine the diversity of public discourse. In the case of teenagers’ body image on TikTok, for example, the algorithm repeatedly promotes content tagged with terms like “ultra-thin” and “clean fasting,” pulling users into a dangerous loop of harmful content (Harriger, Evans, Thompson, & Tylka, 2022).

Case Study: How the New Round of U.S. Tariffs on China Sparked Content Polarization on TikTok

On April 3, 2025, the Office of the United States Trade Representative announced higher tariffs on products from key Chinese industries, such as electric vehicles, batteries, and solar panels. TikTok quickly became one of the main battlegrounds for this media storm.

Embedded TikTok from @dylan.page (♬ original sound – Dylan Page): “Is this just a necessary bad phase or is this the new normal?🤔”

By examining how related content spreads on TikTok, we can see how the platform rapidly “funnels” users into emotionally charged and heavily biased content loops:

Observation 1: Keyword Searches Quickly Trigger Nationalist Content Pushes

All it takes is a few searches for terms like “China tariff,” “TikTok ban,” or “Biden China policy” for a new user’s feed to become flooded with content such as:

  • Emotionally charged videos with titles like “The U.S. is cracking down on Chinese tech again” or “America is afraid of China’s rise”
  • Comment sections filled with calls like “Strike back” or “China should respond more aggressively”
  • Videos containing unverified information or even outright rumors, such as “These tariffs are just a cover-up for the U.S. economic crisis”

These videos typically rack up high view counts, strong engagement, and intense emotional tones—making them especially sticky for users.

You can check out CNBC’s feature article on TikTok: “States sue TikTok over app’s effects on kids’ mental health” (2024). In it, a tech journalist conducts a hands-on test to demonstrate the impact of TikTok’s algorithm.

Observation 2: Hashtags and Algorithms Work Together to Amplify Polarized Emotions

Many videos are tagged with phrases like #SupportChinaAgainstAmerica, #AmericanHegemony, or #RiseOfMadeInChina. TikTok’s algorithm then pushes this content to users who have previously watched similar videos. This hashtag-plus-recommendation mechanism creates a closed loop: the more you scroll, the more uniform the viewpoints become.

This system reinforces an “us vs. them” mentality, weakening rational discussion and fact-based judgment. On social media platforms, the “middle ground” of opinions is often the hardest to find.
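The closed loop described above can be illustrated with a toy simulation. This is a hypothetical sketch under assumed parameters, not TikTok’s system: each time the user watches a recommended video, their interest profile shifts toward that video’s hashtags, which in turn makes similar videos more likely to be recommended.

```python
import random
from collections import Counter

def recommend(profile: Counter, videos: list[dict]) -> dict:
    """Pick one video, weighted by how much its hashtags overlap the user's profile."""
    weights = [1 + sum(profile[t] for t in v["tags"]) for v in videos]
    return random.choices(videos, weights=weights, k=1)[0]

def simulate(steps: int = 50) -> Counter:
    # An assumed pool in which three viewpoints are equally available.
    videos = ([{"tags": ["#SupportChinaAgainstAmerica"]}] * 20
              + [{"tags": ["#AmericanHegemony"]}] * 20
              + [{"tags": ["#NeutralTradeAnalysis"]}] * 20)
    profile = Counter()   # the user's accumulated interest profile
    seen = Counter()      # what the feed actually showed
    for _ in range(steps):
        video = recommend(profile, videos)
        for tag in video["tags"]:
            profile[tag] += 1   # watching reinforces the profile
            seen[tag] += 1
    return seen

print(simulate())  # after a few dozen swipes, one hashtag tends to dominate the feed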

As an ordinary observer, I lack the expertise to judge whether “punishing each other with tariffs” can end international conflicts or benefit ordinary people. Judging from everyday experience, however, tariff policy rarely brings an economic boom; more often it imposes heavy costs: a higher cost of living, damage to local communities, waves of offshoring and layoffs, shrinking incomes, and growing social instability.

In today’s global context, TikTok, a social media platform with a vast worldwide user base, can and should be more than an algorithmic traffic machine. It should take on a greater public duty, serving as a bridge and buffer between nations and cultures, rather than letting its algorithm drive the spread of nationalism and polarization.

It has the power to act as a megaphone, amplifying more reasonable and moderate voices from around the world. It can also act as a lubricant, easing tensions between countries by providing a calm and pluralistic space for public discourse.

Technology should not be a tool for generating hatred; it should be a tool for generating understanding. As one of the most influential short-video platforms in the world, TikTok needs to ask itself, with greater mindfulness: is the content we recommend fostering understanding, or is it driving people further apart?

The information bubble isn’t unique to TikTok

Although this article focuses on TikTok, algorithmically driven content loops and homogenization are widespread across almost all large social media platforms. The problem is not unique to TikTok; it is a symptom of a design pattern many platforms share.

Take YouTube, for example. Experiments have found that once a viewer watches a single conspiracy video, the algorithm begins pushing more extreme content: videos claiming “the Earth is flat” or “vaccines are harmful,” which are false or rooted in pseudoscience, soon follow. Such recommendations not only spread dangerous information but also distort users’ perceptions and further entrench their biases (Ribeiro et al., 2020).

Image from a YouTube interview with Little Red Book (Xiaohongshu) algorithm engineer Nick: https://youtu.be/cN07i8Puqzs?si=VQqCfEWSThx1qQe5&t=3243

On Facebook, during the 2016 U.S. presidential election, there was considerable evidence that the algorithm encouraged political polarization and the spread of false information. The platform tends to promote emotive and highly opinionated material because such material gets clicked, liked, and shared the most (Allcott & Gentzkow, 2017). The result is a tightly filtered information environment in which users rarely encounter neutral or diverse views.

The same patterns have been observed on Chinese platforms such as Douyin and Weibo. Once a user settles into a niche of interest, whether parenting content, motivational quotes, or exam-prep videos, the algorithm keeps pushing more of the same. While this increases engagement and viewing time, it quietly builds an “information cocoon” that keeps users from reaching broader content (Gao, Liu, & Gao, 2023).

All of these examples point to one central issue: the logic of platform algorithms, designed to keep users engaged and scrolling, works in much the same way around the globe. The algorithmic echo chamber is therefore not TikTok’s problem alone; it is a global, system-wide issue that social media platforms everywhere need to address.

TikTok has a responsibility to show you a more diverse world

TikTok isn’t deliberately spreading bias or inflaming people. But the platform’s algorithm is built not to inform, only to capture attention. When a video can succeed on a single measure of engagement, startling, sensational, and repetitive videos will thrive.

Research supports this view. Cinelli et al. (2021) found that social media algorithms amplify polarization to the point where users struggle to see content from the other side. Tufekci (2015) further argued that algorithmic recommendations are not a “neutral pipeline” but are actively involved in reproducing ideologies.

If a platform only shows you what you wish to see, and not what you need to see, society can easily become cognitively fragmented and emotionally polarized. So, can TikTok get better? Here are a few ways:

✅ Strengthen Users’ Control Over Their Content

TikTok can help users set their preferences in advance, such as:

“I don’t want to see this kind of content anymore”

“I want to see different perspectives”

Alongside these controls, the platform could provide a “content interest profile” summary so users know what kind of content they are being immersed in, as sketched below.
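A minimal sketch of such a profile, with invented topic labels and an arbitrary warning threshold, might look like this:

```python
from collections import Counter

def interest_profile(watch_history: list[str], alert_share: float = 0.6) -> dict:
    """Summarize what share of recent views each topic accounts for.

    watch_history: one topic label per watched video.
    alert_share: flag the profile if a single topic exceeds this share (assumed threshold).
    """
    counts = Counter(watch_history)
    total = len(watch_history)
    shares = {topic: round(n / total, 2) for topic, n in counts.most_common()}
    top_topic, top_count = counts.most_common(1)[0]
    top_share = top_count / total
    warning = None
    if top_share >= alert_share:
        warning = f"{int(top_share * 100)}% of your recent views were about '{top_topic}'"
    return {"shares": shares, "warning": warning}

# Example: a feed dominated by tariff content triggers the warning.
print(interest_profile(["tariffs"] * 14 + ["fitness"] * 4 + ["cooking"] * 2))
```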

✅ Make the Recommendation Mechanism More Transparent

The platform could release periodic transparency reports that answer the question users keep asking: “Why am I seeing this content?” Meta and Twitter have already begun testing similar features; TikTok should do the same as soon as possible.
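As a hypothetical illustration of what such an explanation could look like, a platform might assemble a plain-language reason from the same signals that triggered the recommendation. The signal names and fields here are invented for the example.

```python
def explain_recommendation(video: dict, user_signals: dict) -> str:
    """Build a plain-language "Why am I seeing this?" explanation.

    video: e.g. {"topic": "tariffs", "creator": "@dylan.page"}
    user_signals: hypothetical per-user counters,
        e.g. {"watched_topic:tariffs": 12, "liked_creator:@dylan.page": 3}
    """
    reasons = []
    if user_signals.get(f"watched_topic:{video['topic']}", 0) > 5:
        reasons.append(f"you recently watched several videos about {video['topic']}")
    if user_signals.get(f"liked_creator:{video['creator']}", 0) > 0:
        reasons.append(f"you liked videos from {video['creator']}")
    if not reasons:
        reasons.append("this video is popular with viewers similar to you")
    return "You are seeing this because " + " and ".join(reasons) + "."
```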

✅ Diversity Prompts and “Reverse Algorithm” Suggestions

When users see the same type of content over and over, the platform can ask: “Would you like to see something different?” It could even have a “reverse algorithm” option that actively shows diverse sources of information, such as neutral news websites or subject-matter experts.
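One way to implement a “reverse algorithm” option is to re-rank the feed so that a fixed share of slots is reserved for topics the user has not been watching. This is only a sketch; the slot ratio below is an assumed parameter, not a platform setting.

```python
def diversify_feed(ranked: list[dict], profile_topics: set[str],
                   diverse_every: int = 4) -> list[dict]:
    """Interleave out-of-profile items into an engagement-ranked feed.

    ranked: videos already sorted by predicted engagement (each has a "topic" key).
    profile_topics: topics the user already consumes heavily.
    diverse_every: reserve every Nth slot for an unfamiliar topic (assumed ratio).
    """
    familiar = [v for v in ranked if v["topic"] in profile_topics]
    unfamiliar = [v for v in ranked if v["topic"] not in profile_topics]
    feed = []
    while familiar or unfamiliar:
        # Every Nth slot, prefer an unfamiliar topic if one is available.
        if unfamiliar and (len(feed) + 1) % diverse_every == 0:
            feed.append(unfamiliar.pop(0))
        elif familiar:
            feed.append(familiar.pop(0))
        else:
            feed.append(unfamiliar.pop(0))
    return feed
```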

✅ A Stronger Moderation System and Limits on Extreme Hashtags

Content and hashtags that fuel nationalist polarization, stoke fear, or spread misinformation should be subject to a “risk assessment mechanism”: before being recommended, such content would be evaluated for credibility and labelled with its source.
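A sketch of such a gate, with an invented hashtag list and review fields purely for illustration, could sit between ranking and delivery:

```python
# Hypothetical list of hashtags that trigger review; a real system would
# maintain and audit this list through policy and human moderation.
HIGH_RISK_TAGS = {"#SupportChinaAgainstAmerica", "#AmericanHegemony"}

def assess_before_recommendation(video: dict) -> dict:
    """Decide whether a video may enter the recommendation pool.

    video: {"id": str, "tags": list[str], "credibility_reviewed": bool, "source": str}
    """
    if any(tag in HIGH_RISK_TAGS for tag in video["tags"]):
        if not video.get("credibility_reviewed", False):
            return {"eligible": False, "reason": "pending credibility review"}
        # Approved but sensitive: attach a visible source label.
        return {"eligible": True, "label": f"Source: {video.get('source', 'unknown')}"}
    return {"eligible": True, "label": None}
```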

It is also worth noting that TikTok has begun giving users more control over what they see. For example, as part of its #SaferInternetDay official statement, TikTok announced it was rolling out a Digital Safety & Privacy Guide, through which users can customize their content preferences and streamline their privacy and security settings simply by searching “Check my settings.”

In addition, TikTok has explained how it protects U.S. user data and described the algorithm that powers the For You page. While these efforts are still in their infancy, they suggest the platform is not mindlessly propagating bias; it can, ideally, work to improve the overall information environment.

The final question: Are we truly prepared to move out of the information bubble?

On the platform side, we have every reason to expect more diverse, fair, and ethical content recommendation systems. That means not only reducing the push of emotive or extreme material, but also surfacing a wider range of viewpoints, being more transparent about why content is recommended, and giving people real choice over what they see. As global information platforms, social media companies have not only the ability but also the responsibility to build a healthier digital ecosystem (Gillespie, 2018).

But part of the issue is also us, the users. We need to ask ourselves: do we enjoy being surrounded by content that confirms our opinions and feels good? Do we avoid views that challenge us without even realizing it? Breaking out of the information bubble is not only a technological problem; it is also a psychological one. Tufekci (2015) notes that algorithms are not neutral: they engage our psychology and validate the assumptions we already hold.

In the age of social media, our worldview is shaped by what is put in front of us. If we always stay inside our comfort zone, we risk losing the capacity to grapple with complex issues. Are we willing to hear voices from the other side? Can we tolerate reasoned disagreement? These are questions every social media user must consider. Sunstein (2018) warns of the danger of “information cocoons”: when people hear only one kind of opinion, they may lose the ability to think clearly and take part in public discourse.

Choosing to step out of the bubble is not only about protecting ourselves in a technological world. It is also a choice to remain mentally open to that world. This capacity will shape how we think about other people, how we see the world, and how we face the future.


References:

Gao, Y., Liu, F., & Gao, L. (2023). Echo chamber effects on short video platforms. Scientific Reports, 13(1), 6282.

Harriger, J. A., Evans, J. A., Thompson, J. K., & Tylka, T. L. (2022). The dangers of the rabbit hole: Reflections on social media as a portal into a distorted world of edited bodies and eating disorder risk and the role of algorithms. Body Image, 41, 292-297.

Cinelli, M., Quattrociocchi, W., Galeazzi, A., et al. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118

Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira Jr, W. (2020). Auditing radicalization pathways on YouTube. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131–141. https://doi.org/10.1145/3351095.3372879

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Sunstein, C. R. (2018). #Republic: Divided democracy in the age of social media. Princeton University Press.

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13, 203–218. https://ctlj.colorado.edu/?p=1333
