Do Algorithms Take Sides? How TikTok Shapes Political Perception Across Borders

If Diogenes were alive today, he’d probably marvel at how digital platforms connect the world — and believe, just for a moment, that the dream of being a “citizen of the world” had finally come true. But after a few more scrolls, he’d realize that this so-called “world” is just algorithmically filtered reality — a giant bubble dressed up as diversity. And then, maybe, he’d rage-quit the app like a true Cynic.

Imagine Diogenes using TikTok. (Generated by ChatGPT)

I started feeling this unsettling disconnect while using Threads, a text-based platform operated by Meta. As a user from mainland China, my feed was filled with posts about cross-strait relations—intense debates, bold opinions, and strong political takes.

But when I searched for the same keywords using my Australian friend’s account, it felt like I’d entered a parallel universe: some topics had vanished entirely, and the tone of the discussion was gentle—almost detached.

We tend to trust technology, as if algorithmic recommendations reflect data-driven, objective truth—as if whatever is computed must be correct by default (Manjoo, 2016).

But why does the world look so different on the same platform and in the same language? Behind the myth of neutral recommendation algorithms, who is deciding what we should see—and what we shouldn’t? Who is responsible for the outcomes?

That was the moment I realized: I was a tagged data point in a massive database, moving through a personalized filter bubble. I kept thinking of Threads as a “global platform,” forgetting that its parent company, Meta, is an American tech giant with its own interests, ideologies, and algorithmic logic.
Turns out, algorithms have political identities, too.

TikTok Case Study: Who Decides What You Get to See?

It seems we’ve entered a new era—one where the boundaries between online and offline life are disappearing.

“How many hours do you spend on TikTok?” asks user @katherou from the other side of the screen. “How many of you spend more time on TikTok than with your family or partner?”

User @leftoverlasagnia showing their TikTok screen time

There’s no doubt that TikTok has become notorious for its addictive algorithm—once you open the app, it’s hard to stop scrolling. The video-based social media app has garnered more than 1.5 billion users since its launch. (Gottfried, 2024)

And within those endless scrolls, young users aren’t just watching cute pets or viral dance moves—they’re also consuming political content.

According to a Pew Research Center survey, about one-third of Americans under 30 regularly get their news from TikTok—whether through official political ads or informal exchanges between users. (Gottfried, 2024)

It wouldn’t be a stretch to call TikTok Gen Z’s top news source. Political expression on the platform is often highly personal, marked by humor or cynicism: informal speech that adds up to a strange kind of “political sphere.”

TikTok is becoming an essential political public space. (Source: Vox)

In the process of being entertained, users end up expressing political views—consciously or not.

TikTok claims that it serves the public good in the realm of politics. The company argues that its platform is a space for information, entertainment, and community-building. It says it strives to balance users’ desire for debate and discussion with its goal of connecting people without causing division. (Johnson, 2024)

But something about this “news source” feels off.

How Does TikTok Influence Political Orientation?

Ahead of the 2024 U.S. presidential election, a study conducted by NYU Abu Dhabi exposed political bias embedded within TikTok’s recommendation system. Researchers created 323 simulated accounts across New York, Texas, and Georgia, pre-setting them as pro-Republican, pro-Democrat, or politically neutral. Over several weeks, they tracked the political content shown on each account’s “For You” feed. (Ibrahim et al., 2025)
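The published analysis is, of course, far more involved, but the core measurement—comparing how often each seeded group is shown content aligned with or opposed to its leaning—can be sketched in a few lines of Python. Everything below is invented for illustration; none of the labels, log entries, or numbers come from the study.

```python
# A minimal sketch of the audit logic, NOT the study's actual code.
# Each log entry pairs a sock-puppet account's seeded leaning with the
# (hypothetical) label of one video its "For You" feed recommended.
feed_log = [
    ("republican", "republican"), ("republican", "republican"),
    ("republican", "democrat"),   ("democrat", "republican"),
    ("democrat", "republican"),   ("democrat", "democrat"),
    ("neutral", "republican"),    ("neutral", "democrat"),
]

def share(log, account_leaning, video_leaning):
    """Fraction of an account group's recommendations with a given label."""
    shown = [video for account, video in log if account == account_leaning]
    return sum(video == video_leaning for video in shown) / len(shown)

# The key comparison: do both seeded groups see their own side's
# content at the same rate? A persistent gap indicates asymmetric skew.
rep_aligned = share(feed_log, "republican", "republican")
dem_aligned = share(feed_log, "democrat", "democrat")
dem_opposed = share(feed_log, "democrat", "republican")

print(f"Republican-seeded accounts, aligned content: {rep_aligned:.0%}")
print(f"Democrat-seeded accounts, aligned content:   {dem_aligned:.0%}")
print(f"Democrat-seeded accounts, opposing content:  {dem_opposed:.0%}")
```

The study’s additional step—holding engagement metrics constant—is what turns a raw gap like this into evidence about the recommender itself rather than about user behavior.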

The findings were clear: TikTok’s algorithm was far from neutral.

  • Pro-Republican accounts received 11.8% more content that aligned with their political leanings.
  • Pro-Democrat accounts, in contrast, were shown more content that criticized their views—encountering opposing-party posts 2.8 times more often than Republican-leaning users.
  • Crucially, even after controlling for engagement metrics like likes, views, and shares, this imbalance persisted. In other words, it wasn’t just about “what users like”—the recommendation system itself was biased.
Donald Trump vs. Kamala Harris in the 2024 U.S. election.

The most widely recommended content is rarely rational debate. Instead, it tends to be emotionally charged, strongly opinionated, and negative in tone—less about exchanging perspectives, and more about attacking the other side. (Ibrahim et al., 2025)

As the researchers noted, “These asymmetric recommendations are largely driven by negative partisanship.” (Ibrahim et al., 2025)

However, in an interview with Vox, a TikTok spokesperson said:

“During elections, we focus on preserving the integrity of the platform to maintain a creative, safe, and positive environment where people can enjoy a diversity of content.” (Johnson, 2024)

On TikTok, swiping through videos can feel like joining a silent political battleground. Every like, every search becomes a kind of vote—one that quietly shapes the kind of “truth” you’ll be shown next.

And given TikTok’s enormous influence among young voters, this bias in what users see could have a lasting impact on how they think. The same platform that fueled a movement like Black Lives Matter (Janfaza, 2020) could just as easily inflame something as extreme as the Capitol riots. (Woodruff Swan & Scott, 2021)

Beyond TikTok: The Systemic Bias Behind Our Feeds

But the issue goes far beyond TikTok.

It points to a deeper structural problem: political bias embedded in algorithmic governance. A growing body of research shows that exposure to political information on social media shapes how individuals think, vote, and engage. (Whitsitt & Williams, 2019)

Many people understand racial and gender bias as serious issues—and platforms tread carefully around them. But political bias? It’s rarely even seen as a problem. It’s like a ghost—silently embedded into algorithmic logic, always lurking, always one click away from detonation. (Peters, 2022)

Source: Getty Images

Unlike racial or gender bias, political bias often escapes scrutiny. In democratic societies, there are strong legal and social norms to counter discrimination based on race or gender. (Peters, 2022) For example, in Australia, protections are codified in laws like the Racial Discrimination Act 1975 and the Sex Discrimination Act 1984.

But when it comes to political orientation, there’s no such moral firewall. People are often encouraged to ridicule or reject opposing views. That’s why political bias is not only more likely to be absorbed by algorithms but also harder for developers to detect and debug.

This creates a dangerous loop—algorithms reinforce and exaggerate one dominant narrative while silencing others. The result? A narrower view of reality. (Flew, 2021)

You’ve Been Tagged: Data as the New Oil

As early as 2010, a study on Facebook pointed out that to personalize content, the platform trained its algorithms to track users’ every digital footprint. These models could even sketch out a user’s political profile based solely on the pages they “liked.”

You could no longer rely on silence or vagueness to conceal your stance—you were being labeled, profiled, and targeted. (Manjoo, 2016) And that political profile? It has become one of the most valuable assets in the modern economy. Data is the new oil—a third core resource standing alongside capital and labor. (World Economic Forum, 2011)
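The mechanism behind that kind of profiling is ordinary supervised learning. A minimal sketch, assuming scikit-learn is available and using invented page names and labels (this is not the study’s actual model):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each user is represented only by the pages they
# liked. All page names and labels are invented for illustration.
liked_pages = [
    "gun_rights_page church_group country_music",
    "climate_action public_radio union_news",
    "hunting_club small_business talk_radio",
    "pride_events vegan_recipes public_radio",
]
leaning = ["conservative", "liberal", "conservative", "liberal"]

# Bag-of-likes model: every liked page becomes a feature; the political
# leaning is the label the classifier learns to predict.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(liked_pages, leaning)

# A new user who has never posted a political opinion still gets
# labeled, purely from the overlap between their likes and the
# training data.
print(model.predict(["country_music hunting_club"]))
```

Trained on four toy users, this is a parlor trick; trained on hundreds of millions of real like-histories, it becomes the political profile described above.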

In this algorithmic era, individuals are almost powerless when confronting the platforms.

We’ve grown used to mechanically clicking “Accept All Cookies” or agreeing to user terms just to get past a pop-up. As Cohen (2018) sharply criticized, this is “the processing of personal information on an industrial scale”—a one-sided transaction where you don’t get to say no. (Flew, 2021)

Data privacy and breaches have become an urgent issue. (Source: BBC)

And behind these algorithms is a small, homogeneous group of designers, living in a handful of cities and working for the richest companies on Earth. (Crawford, 2021) Their political views and biases are embedded into the code, yet we have no way of knowing where their data comes from or how it works.

It’s a black box.

“In its early days, social media did function as a kind of digital public sphere, with speech flowing freely,” said Kai Riemer and Sandra Peter, professors at the University of Sydney Business School, in an interview with the BBC. (Barrett, 2024)

However, the very idea of the public sphere as a space for free and open exchange is challenged by algorithms. We think we’re free to scroll, but we’re trapped inside an echo chamber—our thoughts, identities, and realities are shaped by something we cannot see. Platforms now decide who gets to speak—and whose voices are heard.

Anger Is a Calculated Emotion

But shaping identity is just one part of the story. Algorithms don’t just reflect who we are—they help stir up who we fight. And no emotion gets more algorithmic love than rage.

Think back to Threads—same keywords, different countries, wildly different tones. While my feed was filled with polarizing posts on cross-strait relations, my friend’s version felt like a diplomatic press release. It wasn’t just what we saw—it was how we were made to feel.

Rage is the emotion most eagerly boosted by algorithms. Hate speech and misinformation are more likely to grab attention, go viral, and dominate timelines. And in recent years, the line between online fury and real-world violence has grown dangerously thin.

Source: The Council of Europe
  • Christchurch mosque shootings (2019): The attacker consumed extremist content on social media and livestreamed the massacre. YouTube’s algorithm may have served as an “accelerant” in his radicalization.
  • Delhi riots (2020): Facebook was accused of allowing inflammatory, anti-Muslim hate speech. A whistleblower revealed that commercial interests outweighed content moderation. (Purnell & Horwitz, 2020)

In the attention economy, anger isn’t just a byproduct—it’s the business model. And the consequences don’t stay online.
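No platform publishes its ranking code, so what follows is only a caricature with invented numbers and weights. Still, it shows the structural point: a scorer that optimizes predicted engagement never needs an explicit “anger” input for the angriest post to win.

```python
# A caricature of an engagement-optimized feed ranker. Posts, scores,
# and weights are all invented; real ranking systems are vastly more
# complex.
posts = [
    {"id": 1, "pred_likes": 0.10, "pred_comments": 0.02, "anger": 0.1},
    {"id": 2, "pred_likes": 0.08, "pred_comments": 0.15, "anger": 0.9},
    {"id": 3, "pred_likes": 0.12, "pred_comments": 0.03, "anger": 0.2},
]

def engagement_score(post):
    # Comments are weighted heavily because arguments keep users on the
    # app. Note that "anger" is never an input: it wins indirectly,
    # because enraging posts reliably attract the most comments.
    return 1.0 * post["pred_likes"] + 4.0 * post["pred_comments"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # [2, 3, 1]: the angriest post ranks first
```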

Algorithmic Governance: An Ongoing Social Experiment

Social media algorithms, in their current form, are now 16 years old (Barrett, 2024) and, like teenagers in their rebellious phase, they’ve brought a list of governance challenges that no one quite knows how to handle.

Ironically, few of us feel manipulated—precisely because platforms carefully craft a feel-good, multicultural image that soothes public anxieties about the dominance of algorithmic tech giants. (Elkins, 2019)

And this “experiment” moves quietly forward, with us as its test subjects. But more and more users are waking up.

They’re starting to question the cliché of “making the world a better place” and instead recognize that algorithms carry biases—especially on platforms like TikTok and Instagram. Tired of being labeled, tracked, and fed content, they’ve begun pushing back, organizing anti-platform and anti-algorithm techlash movements in all kinds of creative ways.

  • Digital detox has become a trend. People are cutting back on screen time and embracing the unplugged life—reading books, handwriting, walking, and having real conversations.
  • The return of chronological timelines—small acts of resistance against the logic that everything we see should be sorted by algorithmic weight. It’s not just nostalgia—it’s a search for a more human online experience.
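That second bullet has a concrete technical meaning. A chronological feed needs nothing from the platform but a clock, while an algorithmic feed needs a predictive model of you. A toy contrast, with invented posts and scores:

```python
from datetime import datetime

# Invented posts: a timestamp plus a (hypothetical) engagement
# prediction of the kind a ranking model would produce.
posts = [
    {"id": 1, "posted_at": datetime(2024, 5, 1, 9, 0), "pred_engagement": 0.9},
    {"id": 2, "posted_at": datetime(2024, 5, 1, 12, 0), "pred_engagement": 0.2},
]

# Algorithmic feed: ordered by what a model predicts you'll engage with.
algorithmic = sorted(posts, key=lambda p: p["pred_engagement"], reverse=True)

# Chronological feed: ordered by time alone. No profile, no prediction,
# no inferred interests required.
chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)

print([p["id"] for p in algorithmic])    # [1, 2]
print([p["id"] for p in chronological])  # [2, 1]
```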

Meanwhile, governments are scrambling to catch up.

  • Australia passed the Privacy and Other Legislation Amendment Bill 2024. (Birkett & Kermond, 2024)
  • The EU introduced rules threatening to fine tech firms 6% of their turnover and suspend them if they fail to prevent election interference. (BBC News, 2022)

Of course, like all governance efforts in fast-moving tech domains, algorithm regulation is inherently reactive. Laws usually come after the damage.

But waiting for the perfect legal framework is naive. What we need is a foundation of ethical norms and public awareness—before algorithms decide everything for us.

Can we imagine an algorithm that doesn’t take sides? Not a content machine optimized for profit but an infrastructure that can be audited, challenged, and rebuilt in the public interest.

The future of algorithmic governance is not a technical upgrade—it’s a social experiment in how we balance power, politics, and perception. And like all experiments, it asks us not just to observe, but to decide:

Whose side is the algorithm really on—and who gets to say?

References

Barrett, N. (2024, April 9). How have social media algorithms changed the way we interact? BBC News. https://www.bbc.com/news/articles/cp8e4p4z97eo

Birkett, S., & Kermond, C. (2024, December 5). Australia: Privacy Act amendments and Cyber Security Act become law. Privacy Matters. https://privacymatters.dlapiper.com/2024/12/australia-privacy-act-amendments-and-cyber-security-act-become-law/

Cohen, J. E. (2018). The biopolitical public domain: The legal construction of the surveillance economy. Philosophy & Technology, 31(2), 213–233. https://doi.org/10.1007/s13347-017-0283-2

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Elkins, E. (2019). Algorithmic cosmopolitanism: On the global claims of digital entertainment platforms. Critical Studies in Media Communication, 36(4), 376–389. https://doi.org/10.1080/15295036.2019.1630743

Flew, T. (2021). Regulating platforms. Polity Press.

Gottfried, J. (2024, January 31). Americans’ social media use. Pew Research Center. https://www.pewresearch.org/internet/2024/01/31/americans-social-media-use/

Ibrahim, H., Jang, H. D., Aldahoul, N., Kaufman, A. R., Rahwan, T., & Zaki, Y. (2025, January 29). TikTok’s recommendations skewed towards Republican content during the 2024 U.S. presidential race. arXiv. https://arxiv.org/abs/2501.17831

Janfaza, R. (2020, June 4). TikTok users are helping #BlackLivesMatter activists avoid police surveillance. CNN. https://edition.cnn.com/2020/06/04/politics/tik-tok-black-lives-matter/index.html

Johnson, G. C. (2024, September 9). Is TikTok breaking young voters’ brains? Medium. https://ginajohnson-37023.medium.com/is-tiktok-breaking-young-voters-brains-279b4c97e240

Karimi, K., & Fox, R. (2023). Scrolling, simping, and mobilizing: TikTok’s influence over Generation Z’s political behavior. The Journal of Social Media in Society, 12(1), 181–208.

Manjoo, F. (2016, May 11). Facebook’s bias is built in, and bears watching. The New York Times. https://www.nytimes.com/2016/05/12/technology/facebooks-bias-is-built-in-and-bears-watching.html

Peters, U. (2022). Algorithmic political bias in artificial intelligence systems. Philosophy & Technology, 35(2), 25. https://doi.org/10.1007/s13347-022-00512-8

Purnell, N., & Horwitz, J. (2020, August 14). Facebook’s hate-speech rules collide with Indian politics. The Wall Street Journal.

Whitsitt, L., & Williams, R. L. (2019). Political ideology and accuracy of information. Innovative Higher Education, 44(4), 423–435. https://doi.org/10.1007/s10755-019-09478-6

Woodruff Swan, B., & Scott, M. (2021, September 16). DHS: Extremists used TikTok to promote Jan. 6 violence. Politico. https://www.politico.com/news/2021/09/16/dhs-tiktok-extremism-512079

World Economic Forum. (2011). Personal data: The emergence of a new asset class. https://www3.weforum.org/docs/WEF_ITTC_PersonalDataNewAsset_Report_2011.pdf
