What You See Isn’t the Whole World: Platform Algorithms, Rules, and Our Lost Digital Sovereignty

In today’s platform-dominated digital age, we interact with algorithms almost daily. Open TikTok, and it instantly knows whether you want to laugh, cry, or watch cat videos. Scroll through YouTube, and it always finds a way to recommend the next video you might like. Post a comment on Twitter, and the next day, you might find yourself “shadowbanned”—without even knowing which keyword triggered the system.

None of this is accidental. Everything we see on social media is filtered through the platform’s undisclosed rules and deep learning algorithms. We think of ourselves as “users” of the Internet, but in practice we are recipients of information that has been selectively sorted and, to a degree, manipulated by the platform. Try a simple experiment: for a day, click only on content you are not interested in, and watch how the platform adjusts what it shows you next. Stepping outside your usual habits makes the platform’s steering hand much easier to see. Soon you will realize it is time to ask a harder question: in an era when technology giants monopolize public discourse, attention, and data, do we really have freedom of speech and information sovereignty?

Starting from this core question, this article explores four key dimensions: platform governance, recommendation algorithms, hate speech moderation, and user rights. It draws on real cases from Western social media platforms, including Facebook’s Cambridge Analytica scandal, YouTube’s radicalization pipeline, Twitter’s apparent political leanings, and TikTok’s discriminatory algorithmic practices. Through these examples, we can see the power dynamics hidden behind every like, view, and recommendation.

1. Platform Rules: The Invisible Censorship Regime

Social media platforms have long touted themselves as “technologically neutral,” but come on, who are they fooling? Through “community guidelines,” they set their own boundaries for permissible speech and enforce removals, bans, and restrictions. Users must follow these rules, yet they have no idea what is being done with their information in ways the rules never mention. The Facebook-Cambridge Analytica scandal (2018) is a classic example. The British consulting firm harvested data on more than 87 million Facebook users through a personality-quiz app and later used it for micro-targeted political ads during the 2016 U.S. election campaign. Users had no idea how their data was collected, modeled, or weaponized by algorithms (Suzor, 2019). The case exposed both the lack of transparency in platform governance and how little control users have over their own data.

Beyond its own high-profile ban of Donald Trump, Facebook (now Meta) operated a controversial “cross-check” system that created a two-tiered moderation structure. Internal documents revealed that while ordinary users faced immediate enforcement, high-profile accounts (politicians, celebrities, and journalists) enjoyed special protection: their content would be reviewed by human moderators before any action was taken. This system allowed rule-violating content from VIPs to remain visible for days or weeks longer than comparable posts from regular users. When whistleblower Frances Haugen exposed the system, it became clear that platform rules are not applied equally; internal memos admitted the approach gave certain users “greater latitude to break rules” (Newton, 2021).
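
To make the two-tier structure concrete, here is a minimal Python sketch. It is a hypothetical illustration of a cross-check-style routing step, not Meta’s actual code: the account names, delays, and field names are all invented.

```python
from datetime import datetime, timedelta

# Hypothetical "shielded" list; in practice this covered politicians,
# celebrities, and other high-reach accounts.
SHIELDED_ACCOUNTS = {"high_profile_politician", "celebrity_account"}

def handle_flagged_post(account: str, post_id: str, now: datetime) -> dict:
    """Route a policy-violating post down one of two enforcement paths."""
    if account in SHIELDED_ACCOUNTS:
        # VIP path: no immediate action, just a ticket for human review,
        # so the post stays visible until the backlog is cleared.
        return {
            "post_id": post_id,
            "action": "queued_for_human_review",
            "earliest_decision": now + timedelta(days=5),
        }
    # Default path: automated removal the moment the classifier fires.
    return {"post_id": post_id, "action": "removed", "decided_at": now}
```

The asymmetry is not in the written rules themselves but in which enforcement path a flagged post is routed down.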

Meanwhile, Twitter’s “shadowbanning” mechanism subtly shapes public discourse. Many users report their tweets being hidden without notice, making them invisible in searches or feeds. This lack of procedural fairness leaves no room for appeal, yet it has real consequences for free expression.
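
What such a mechanism could look like, in a deliberately simplified Python sketch (hypothetical fields, not Twitter’s actual implementation): the account keeps posting as normal, but a server-side flag silently drops it from search and recommendation surfaces, and no step in the process notifies the user or creates anything that could be appealed.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    # Server-side only: the user never sees or is told about this flag.
    search_suppressed: bool = False
    notifications_sent: list[str] = field(default_factory=list)

def shadowban(account: Account) -> None:
    # Visibility is reduced silently: no email, no in-app notice,
    # no case number that could anchor an appeal.
    account.search_suppressed = True

def search_results(query: str, accounts: list[Account]) -> list[str]:
    # Suppressed accounts are simply filtered out of public surfaces.
    return [a.handle for a in accounts
            if query.lower() in a.handle.lower() and not a.search_suppressed]
```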

In June 2020, Reddit’s mass banning of nearly 2,000 communities (subreddits) demonstrated both the power and the inconsistency of platform rule enforcement. While the removal of extremist forums like r/The_Donald was widely applauded, the simultaneous banning of left-leaning communities like r/ChapoTrapHouse raised questions about political neutrality. Reddit CEO Steve Huffman later admitted in interviews that these decisions were partly influenced by advertiser pressure rather than purely content-based judgments, highlighting how commercial interests shape platform governance (Green, 2020).

In 2021, Twitter’s permanent ban on then-President Donald Trump sparked a global debate: Do platforms wield censorship power surpassing that of nation-states? As Terry Flew (2021) argues, this incident proved that platform governance is no longer just a technical issue—it has evolved into a socio-political regulatory tool with quasi-sovereign authority.

These cases demonstrate that platform rules are not mere community standards but instruments of power that control information flow and define the boundaries of public discourse—with users having little to no say in the process.

2. Recommendation Algorithms: You Think You’re Choosing, But You’re Being Chosen

Many users believe they “freely choose” what to watch, but in reality, they are being conditioned by algorithms. Recommendation systems are not neutral; their goal is to maximize engagement, clicks, and ad revenue. To achieve this, they prioritize extreme, emotionally charged, or divisive content.
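
A minimal sketch of that incentive, in Python, with invented weights and field names rather than any platform’s real scoring function: every term in the objective is a proxy for engagement, so content that provokes outrage can score just as well as content you genuinely value.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_time: float   # minutes the model expects you to spend
    predicted_reactions: float    # likes, angry reacts, comments, shares
    predicted_ad_clicks: float    # proxy for expected ad revenue

# Hypothetical weights: nothing here measures "quality" or accuracy;
# every term rewards whatever keeps people reacting.
def engagement_score(p: Post) -> float:
    return (0.5 * p.predicted_watch_time
            + 0.3 * p.predicted_reactions
            + 0.2 * p.predicted_ad_clicks)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidates sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Nothing in this objective distinguishes “provoked” from “satisfied”; whatever keeps people reacting rises to the top of the feed.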

YouTube has been widely criticized for radicalizing users. The New York Times (2019) found that its algorithm often leads viewers from neutral topics (e.g., fitness videos) to extremist content (e.g., far-right conspiracy theories). Gillespie (2018) argues in Custodians of the Internet that platforms are not neutral intermediaries but “content curators”—actively shaping what we see and don’t see. This “algorithmic visibility politics” doesn’t just influence individual preferences; it reshapes entire public discourse.

A more serious problem in global platform governance is the ideological bias embedded in algorithmic content recommendations, and in particular the experience of users who operate Twitter (now X) in its Simplified Chinese language environment.

Once the interface language is set to Simplified Chinese, the platform disproportionately recommends content from accounts that promote “Taiwan independence” rhetoric, including groups such as Asian Voices for Freedom and self-proclaimed dissident media outlets. Much of what remains is pornographic content. The implicit assumption seems to be that every Simplified Chinese user is an involuntary single, “sex-starved and crazy, venting their dissatisfaction with their lives online and even trying to subvert state power.” Speaking as one such user: I am not, and I resent being profiled this way.

These recommendations are clearly not accidental; they are symptomatic of a larger ideological strategy that privileges one narrative while marginalizing others. Often wrapped in the language of “freedom,” “democracy,” or “human rights,” these Taiwan-focused channels have become persistent, emotionally charged tools for denigrating mainland China. The Chinese government is routinely portrayed as an authoritarian dictatorship or even an aggressor when it comes to China’s internal affairs in Xinjiang, Tibet, Hong Kong, and Taiwan, or on topics such as pandemic-era management policies and data governance. Seriously, “invaders”? These regions have been part of China since ancient times. And what did people expect from China at the height of COVID-19? Such strict controls were never the norm; they were special policies for a special situation. Taking advantage of the Western public’s limited understanding of China’s current situation, these accounts routinely spread negative portrayals of the country, including outlandish rumors such as the famous claims about a “social credit system in mainland China.”

What is most worrying is not that such content exists, but that it is amplified by the platform itself. Twitter’s recommendation system gives these ideas a disproportionate amount of visibility, and there are virtually no platform-level restrictions on their dissemination. This is not a matter of preserving diversity or promoting dialogue, but the reinforcement of a single ideological position by an algorithm. Notably, while X claims to support “freedom of expression,” it does not grant the same amplification to pro-unification or pro-mainland views.

From the perspective of international law, this behavior is even harder to excuse. The one-China principle is not just a domestic Chinese position; it is a fundamental element of modern international relations, formally recognized by the United Nations and the vast majority of sovereign states and embodied in countless bilateral agreements. Yet Twitter has allowed this kind of defamatory rhetoric to flourish on its platform, undermining internationally recognized principles while tacitly endorsing separatism under the guise of user-generated content.

As Suzor (2019) reminds us, platforms like Twitter wield power akin to sovereign actors in the digital age—they set the rules, enforce them, and remain largely unaccountable to the public. When algorithmic governance begins to selectively elevate content that challenges established international consensus, this power morphs from technical mediation to ideological intervention. Gillespie (2018) similarly warns that content curation through recommendation systems is never neutral: it shapes what users see, think, and ultimately believe.

In short, for users operating in Simplified Chinese, Twitter/X does not merely serve as a communication tool—it becomes a conduit for politicized messaging. This raises serious concerns not only about algorithmic fairness but about digital sovereignty itself. A platform that selectively amplifies divisive narratives against a recognized sovereign government is not upholding free speech—it is manipulating perception.

Meta’s internal research, leaked by whistleblowers, revealed that Instagram’s algorithm actively promoted content that damaged teen mental health. The system would identify users who engaged with fitness content and gradually steer them toward “comparison” reels: side-by-side images of “ideal” versus “flawed” bodies. One internal study found that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse, with the algorithm exacerbating these effects by prioritizing “dramatic transformation” content (Wells et al., 2021).

TikTok’s algorithm also faces scrutiny. In 2020, The Washington Post revealed that TikTok’s system downranked content from disabled, plus-sized, or LGBTQ+ creators to maintain “aesthetic uniformity.” Though TikTok later claimed to have adjusted its policies, it never disclosed the full criteria or offered redress.

Netflix’s recommendation system employs similarly sophisticated emotional manipulation tactics. Internal data showed that controversial content like The Social Dilemma kept users engaged longer than feel-good programming, not because they enjoyed it but because it provoked strong reactions. The algorithm learned that anger and outrage led to higher completion rates and more social media sharing, and Netflix began weighting its recommendations to favor divisive content as a result (Shaw, 2020).

These examples prove that algorithms are not designed to “serve users” but to optimize corporate profits. What you see isn’t necessarily what you want to see—it’s what the platform wants you to see.

3. Hate Speech Moderation: Who Defines What’s “Harmful”?

Hate speech moderation remains one of the most contentious issues in platform governance. While companies like YouTube and Facebook claim to combat hate speech, their enforcement is often inconsistent, opaque, and politically biased.

In 2021, Facebook was accused of enabling violence in India and Myanmar by failing to remove inflammatory posts. Meanwhile, far-right conspiracy channels in the U.S. often operated unchecked, while some left-leaning content was wrongly flagged. As Flew (2021) notes, platform governance is torn between legal pressures, user demands, profit motives, and technical constraints—leading to erratic moderation.

During the 2021 Gaza conflict, Twitter’s moderation showed clear political asymmetry. The platform removed over 200 Palestinian activist accounts using the #SaveSheikhJarrah hashtag for “inciting violence,” while leaving up nearly identical posts from Israeli officials. Internal emails later revealed that the company used automated tools to flag Arabic keywords more aggressively than Hebrew ones (Biddle et al., 2021).

Another problem is context-blind AI enforcement. For instance, Black activists using the N-word in anti-racism protests have been mistakenly banned for “hate speech.” This reveals a critical flaw: automated systems lack nuance, reinforcing structural biases.
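
A deliberately naive Python sketch shows why context-blind enforcement misfires; the blocklist term is a placeholder and the filter is far cruder than production systems, but the underlying failure mode is the same: keyword matching cannot tell an attack from counter-speech quoting that attack.

```python
# Hypothetical, deliberately naive filter: a fixed blocklist with no notion
# of speaker identity, quotation, or intent. "slurword" stands in for any
# blocklisted term; real systems are more elaborate but share the flaw.
BLOCKLIST = {"slurword"}

def naive_hate_filter(text: str) -> bool:
    words = {w.strip('.,!?"\'').lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# Both examples trigger the filter, even though the second is counter-speech
# quoting the abuse it received in order to condemn it.
attack = "You are a slurword."
counter_speech = 'Someone called me a "slurword" today. That is racism.'
assert naive_hate_filter(attack) is True
assert naive_hate_filter(counter_speech) is True  # context-blind false positive
```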

The takeaway? Hate speech moderation isn’t neutral—it’s a political choice. We can’t leave it entirely to platforms; independent oversight is needed.

4. Who Watches the Watchmen? Institutional Reforms & User Empowerment

As platform power grows, governments are stepping in. The EU’s Digital Services Act (DSA, 2022) mandates algorithm transparency, independent appeals mechanisms, and annual risk assessments on systemic harms (e.g., disinformation). This signals a shift: platforms are not just tech companies—they’re public infrastructures.

The European Commission’s landmark investigation into X (formerly Twitter) marked a turning point in platform accountability. Following Hamas’ October 2023 attack on Israel, the EU opened formal proceedings over X’s alleged failure to remove illegal hate speech and terrorist content promptly, as the DSA requires. The investigation also pointed to X’s decision to lay off roughly 80% of its trust and safety staff after Musk’s takeover (Braslau, 2023).

Australia’s ACCC (2019) similarly proposed treating platforms as “digital utilities,” enforcing data rights and ad transparency. But regulation alone isn’t enough.

Brazil’s pioneering legislation created one of the world’s strictest platform accountability regimes. The law requires platforms to remove flagged illegal content within 48 hours or face fines up to $800,000 per hour of delay. During the 2022 election, this forced WhatsApp to limit message forwarding (a major disinformation vector) (Pearson, 2022).

Suzor (2019) advocates for “platform citizenship”—giving users a voice in governance. Decentralized alternatives like Mastodon (where users control their own servers) offer a model for transparent, community-driven platforms.

Conclusion: The Right to Be Seen Is the Right to Speak

Every video, tweet, or post you encounter isn’t just a “recommendation”—it’s the result of thousands of algorithmic calculations and corporate agendas. These choices silently reshape how we understand the world, express ourselves, and trust each other.

If we don’t challenge platform power, we remain mere “predictable data points”—not citizens with agency. Digital governance isn’t just about the internet; it’s about democracy itself. We must shift from being users to governors, demanding algorithmic transparency, content justice, and data rights.

Because the right to be seen is the right to speak—and that’s a fight worth having.

References

Biddle, S., Morse, J., & Scheiber, N. (2021, May 18). Twitter silences Palestinian voices amid Israeli violence. The Intercept. https://theintercept.com/2021/05/18/twitter-palestine-israel-censorship/

Braslau, D. (2023, December 18). EU launches first investigation under DSA against Elon Musk’s X. Euronews. https://www.euronews.com/2023/12/18/eu-launches-first-investigation-under-dsa-against-elon-musks-x

Flew, T. (2021). Regulating platforms. Polity Press.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Green, E. (2020, June 30). Reddit’s ban of r/The_Donald was long overdue. Wired. https://www.wired.com/story/reddit-ban-the-donald/

Newton, C. (2021, September 13). Facebook’s ‘cross-check’ system favors VIPs, internal documents show. The Verge. https://www.theverge.com/2021/9/13/22670824/facebook-cross-check-vip-enforcement-delay-internal-documents

Pearson, S. (2022, November 7). Brazil passes law to hold social platforms accountable. AP News. https://apnews.com/article/brazil-social-media-law-misinformation-accountability

Shaw, L. (2020, November 19). Netflix’s secret algorithm hacks your brain to keep you watching. Bloomberg. https://www.bloomberg.com/news/articles/2020-11-19/netflix-s-secret-algorithm-hacks-your-brain-to-keep-you-watching

Suzor, N. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press.

Wells, G., Horwitz, J., & Seetharaman, D. (2021, September 14). Facebook knows Instagram is toxic for teen girls. The Wall Street Journal. https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739

The New York Times. (2019, June 8). YouTube’s algorithm leads users to extremism. The New York Times. https://www.nytimes.com/2019/06/08/technology/youtube-radical.html

The Washington Post. (2020, March 16). TikTok’s hidden discrimination. The Washington Post. https://www.washingtonpost.com/technology/2020/03/16/tiktok-discrimination-algorithm/
