

A recent blog post recounted how a parent became a social media reform advocate after her son, only 16 years old, took his own life in 2021 following cyberbullying and online harm (Navarro, 2025). We are living in an unprecedented era: with a swipe of a finger we can donate supplies to refugees and people in need, or anonymously weaponise memes to psychologically bully strangers. As the Internet has developed rapidly, the public has come to expect freedom of speech online without any limits. Digital platforms are a double-edged sword: they can break down regional and prejudicial barriers, yet they can also serve as a breeding ground for hate speech.
While enjoying the money and profit brought by online traffic, social media platforms expose billions of users to verbal violence and have built a new industry chain of violence into their code. For example, the anonymous forum Reddit has become a shield for extremism, and Instagram took no action on 926 of 1,000 reported hate comments, including racial and gender discrimination and death and rape threats (Center for Countering Digital Hate [CCDH], 2024). CCDH data (2024) also show that in 2016, 41% of women self-censored to avoid online gender-based violence. Algorithmic mechanisms, regulatory loopholes and the ambiguity of free speech on social media platforms have jointly promoted the spread of hate speech. This article takes TikTok as its core platform to analyse three questions:
- Why does algorithmic recommendation contribute to the spread of hatred?
- Why does platform regulation frequently fail?
- When freedom of speech becomes a shield for perpetrators, how can we build a new digital order that is both inclusive and safe?

Understanding Hate Speech and Online Harm
Online platforms are now both spaces for communication and de facto regulators, yet they are constantly caught in regulatory dilemmas (Sinpeng et al., 2021, p. 6): “free speech” becomes a legal shield for perpetrators, while platform review acts as a weak “firewall” that cannot block harmful content.
Freedom of speech, or freedom of harm?
Hate speech covers a range of threats, dehumanization and cultural marginalization (Sinpeng et al., 2021, p. 11). Online, it includes language, images, memes and videos that degrade or threaten individuals because of their religion, race, gender, ethnicity, disability or sexual orientation (Flew, 2021, p. 115). Its forms are diverse, from open defamation and specific threats to more subtle expressions such as sarcasm or insinuation (Matamoros-Fernández, 2017), all of which fall within the scope of hate speech. Such speech occurs not only between private individuals but also between celebrities and fans, among others. However, the line between free speech and harm is blurred. Although freedom of speech is everyone’s right, one of the difficulties of platform regulation is determining when offensive or controversial speech, once in public view, crosses into harm.

Cyber violence takes many forms beyond hurling abuse and issuing death threats. What is frightening are the seemingly trivial jokes. In TikTok comment sections, for example, gender, body shape, accent or ethnicity are mocked in a “joking” way. These micro-aggressions work like a chronic poison: accumulated over time, they can break through a person’s psychological defences.
Explicit threats involve public insults, dehumanizing comments or incitement to violence. Fans insult celebrities, tell them to die, and even create mock memorial images of them. Transgender bloggers receive threatening comments and private messages every day. Such blatant malice can wound people through the screen.
Flew (2021) pointed out that cyber violence is more influential and longer lasting than real-life insults. Why? Because anonymity and algorithmic mechanisms on digital platforms amplify the speed and scope of this content’s spread, allowing hate speech to persist, resurface and continually draw in new victims.
Case study: From insults to ‘rational discussions’, hatred has learned to disguise itself
Massanari (2017, p. 330) found that some groups on Reddit evade regulation by changing their language strategies, deliberately using ambiguous words to package hate speech. Although the language appears less offensive on the surface, the actual harm remains the same. The same behaviour appears on TikTok. Online speech about transgender people, race and other topics has taken on new characteristics, shifting from the direct insults of the past to more covert expressions (Weimann & Masri, 2020). For example, TikTok users combine numbers, symbols and altered spellings in avatars, usernames and captions to slip past the automated review system and continue posting extreme remarks. In addition, some users abuse the reporting mechanism to suppress the voices of specific groups, and the platform has selectively closed the related accounts. A creator who usually only posted drumming videos was maliciously reported after expressing support for the “Black Lives Matter” movement, and the platform mistakenly deleted the video; another Black transgender creator was flagged by the system as a repeat offender after being reported repeatedly without cause, and their account was eventually banned (Zeng & Kaye, 2022). Such content seems neutral, but it inflicts systemic, hidden harm on transgender people through misinformation and false balance. More worrying still, Patnaik and Litan (2023) found that TikTok not only applies vague review standards but also, intentionally or otherwise, amplifies such “controversial content” to boost interaction rates, making these offensive remarks more likely to enter public discourse and mislead young audiences.
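To make concrete why such character substitutions defeat simplistic automated review, the sketch below (in Python) implements a naive keyword filter alongside a normalisation step that recovers some obfuscations. The banned term, substitution map and example sentence are illustrative assumptions, not TikTok’s actual moderation rules.

```python
# Naive keyword filter vs. simple character-substitution evasion.
BANNED_TERMS = {"hate"}  # illustrative placeholder term, not a real blocklist

# Common "leetspeak" substitutions an evader might use.
SUBSTITUTIONS = str.maketrans({"4": "a", "@": "a", "3": "e", "1": "i", "0": "o", "$": "s"})

def naive_flag(text: str) -> bool:
    # Matches only the exact banned spelling, so "h4te" slips through.
    return any(term in text.lower() for term in BANNED_TERMS)

def normalized_flag(text: str) -> bool:
    # Undo simple substitutions before matching; still easily defeated
    # by new symbols, spacing tricks, or coded metaphors.
    cleaned = text.lower().translate(SUBSTITUTIONS)
    return any(term in cleaned for term in BANNED_TERMS)

print(naive_flag("we h4te them"))       # False: obfuscation evades the filter
print(normalized_flag("we h4te them"))  # True: caught only after normalisation
```

Even with normalisation, each new substitution or coded phrase has to be anticipated in advance, which is why purely automated review keeps falling behind the evaders described above.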

Invisible harms
With the appearance of hate speech, online harms in the digital environment have diversified into psychological, political and cultural harm. First, cyberbullying, persistent harassment and similar behaviours cause psychological harm, undermining individuals’ self-esteem and confidence and leaving victims with long-term anxiety (Flew, 2021, p. 117). On social media, victims live in continual fear and cannot relax because the threat posed by hate speech never stops (Flew, 2021, p. 117). Secondly, political harm occurs when people post false information or extreme speech to divide public opinion and undermine social cohesion and trust; many female MPs, for example, suffer cyberbullying simply because of their gender (Flew, 2021, p. 115). Thirdly, cultural harm is reflected in marginalized groups being turned into “content material”: controversial content such as racist speech is allowed to generate topics, made entertaining, and turned into a “controversy” that people can consume, thereby increasing user interaction (Matamoros-Fernández, 2017, p. 330). These cultural forms of harm undermine the rights and interests of the targeted groups, amounting to the identity deprivation and dehumanization mentioned above (Flew, 2021, pp. 115–116).
The profile of online abusers is far more complicated than we think. The abuser might not be a stranger but the colleague who seems perfectly nice in daily life. Many keyboard warriors feel oppressed by authority in real life and turn to anonymous identities to vent their emotions online. They do not choose their targets: victims may be celebrities, public figures, politicians, or ordinary people with no protection at all. The concealment of cyberspace makes it easier for them to release the dark side of their hearts, and appeals to freedom of speech free them from moral and legal constraints. Piling onto trending hate speech even gives them a sense of belonging in the illusion of “group justice”, further blurring the boundary between good and evil.

How do platforms and algorithmic mechanisms enable hate speech?
Everyone who has used TikTok knows that the algorithm runs the platform. In a sense, it “spies” on your screen: every like or comment helps it select which videos to put on the “For You” page (FYP). The platform has always treated interaction volume as the key factor in content recommendation; regardless of how emotionally charged or ideologically controversial a video is, content with enough interaction becomes the platform’s main recommendation (Zeng & Kaye, 2022). As a result, teasing and suggestive, biased content is often mistaken for “humour”, slips past platform review and spreads further. For many marginalized groups, hateful content is not only “seen” but also routinely “ignored”.
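To illustrate this logic, here is a minimal Python sketch of a purely engagement-driven ranking rule. The weights, field names and example videos are hypothetical rather than TikTok’s real recommendation system; the point is simply that a score built only from likes, comments and shares will surface provocative content as readily as benign content.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(v: Video) -> float:
    # Hypothetical weights: comments and shares count more than likes because
    # they signal stronger interaction. Nothing here measures harm at all.
    return 1.0 * v.likes + 3.0 * v.comments + 5.0 * v.shares

videos = [
    Video("cat compilation", likes=900, comments=40, shares=20),
    Video("'joking' slur about an accent", likes=600, comments=300, shares=150),
]

# Ranked purely by engagement, the provocative clip outranks the benign one,
# even though its content was never evaluated.
for v in sorted(videos, key=engagement_score, reverse=True):
    print(f"{engagement_score(v):8.1f}  {v.title}")
```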
Several elements drive the diffusion of extreme speech:
- Virtual identities lower the cost of attacks, making it easier for perpetrators to escape real-world accountability.
- Provocative content gains more exposure because of its high interaction rate, which promotes the spread of hate speech (Flew, 2021, p. 115).
- Coded metaphors (such as “special groups” and “ideological struggles”) are used to circumvent censorship (Flew, 2021, pp. 116–117).
- Information cocoons exacerbate the spread of extreme views and intensify confrontational sentiment (Zeng & Kaye, 2022).
What deserves even more vigilance is that when attacks on “transgender rights” are packaged as “academic discussion” and racial discrimination is hidden within narratives of “cultural difference”, the algorithm not only spreads prejudice more efficiently but also lends it a veneer of rationality.
When traffic-driven logic dominates content distribution, does every casual like and share help fuel this silent war?
Governance Dilemma
Against the backdrop of growing hate speech and online harm, platform governance faces two problems:
1) How to balance freedom of speech and harm prevention
Everyone believes freedom of speech is important, and TikTok and other platforms all promote “freedom of speech”. However, Flew (2021, p. 117) makes clear that hate speech is no longer merely a “different opinion”. It is like walking down the street knowing someone might abuse you at any moment; the insecurity comes precisely from not knowing when it will happen. More and more people therefore argue that hate speech should be treated as a public safety issue. The difficulty of governance, however, is that platforms fear mistakenly deleting harmless content while remaining unable to deal with implicit, superficially compliant harmful speech.
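As a rough illustration of this trade-off, the short sketch below sweeps a decision threshold over some invented moderation scores. The scores and labels are made up for illustration and are not drawn from any real moderation system; lowering the threshold deletes more harmless posts, while raising it leaves more implicit harm online.

```python
# Hypothetical (score, is_actually_harmful) pairs from an imagined classifier.
SAMPLES = [(0.95, True), (0.80, True), (0.62, False), (0.55, True),
           (0.40, False), (0.30, True), (0.15, False), (0.05, False)]

for threshold in (0.3, 0.5, 0.7):
    removed = [(score, harmful) for score, harmful in SAMPLES if score >= threshold]
    wrongly_removed = sum(1 for _, harmful in removed if not harmful)   # harmless content deleted
    missed_harm = sum(1 for score, harmful in SAMPLES
                      if harmful and score < threshold)                 # implicit harm left online
    print(f"threshold={threshold}: wrongly removed={wrongly_removed}, missed harm={missed_harm}")
```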
2) The tug-of-war between platform self-regulation and government intervention
Although TikTok claims to enforce strict self-regulatory policies through technology and community guidelines, actual operations are inconsistent, and the platform intervenes selectively. Controversial content posted by top creators tends to stay up longer, while similar content posted by ordinary users is more likely to be deleted quickly (Zeng & Kaye, 2022). At the same time, as noted above, TikTok’s reporting mechanism can be abused to “bully” specific groups. The platform not only fails to protect victims but also shrinks the space in which marginalized groups can speak out, and users have gradually lost trust in its governance credibility.
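The sketch below shows, in minimal form, why an unverified report-count rule is easy to weaponize. The threshold, account name and auto-flagging logic are assumptions for illustration, not TikTok’s actual moderation pipeline.

```python
from collections import Counter

# Hypothetical rule: an account is auto-flagged once it accumulates
# REPORT_THRESHOLD reports, with no check on whether the reports are valid.
REPORT_THRESHOLD = 50
reports = Counter()

def file_report(account: str) -> None:
    reports[account] += 1

def is_auto_flagged(account: str) -> bool:
    # No human review and no validity check: raw report volume decides.
    return reports[account] >= REPORT_THRESHOLD

# A coordinated brigade files 60 baseless reports against one creator.
for _ in range(60):
    file_report("drumming_creator")

print(is_auto_flagged("drumming_creator"))  # True: mass reporting alone triggers enforcement
```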
Conclusion
TikTok has become an important platform for public opinion and social interaction. It is not only about connection and entertainment; prejudice, discrimination and harm are also amplified in the name of the algorithm. When we like and comment on the popular videos on the recommendation page every day, we should realise that these behaviours expose the collective dilemma of this era: in the “cage” built by algorithms, we are both victims and perpetrators. The spread of hate speech is not accidental; it is the product of platform mechanisms, regulatory failures and the abuse of freedom of speech.
We may start to change the status quo with three actions:
- Personal action: select the “Not interested” option on videos that incite hatred.
- Platform responsibility: enforce content review consistently and attend to transparency, fairness and user feedback mechanisms.
- Government and regulation: build systems that give users fair and timely channels of appeal when they face wrongful deletion or malicious reporting.
Algorithms cannot stand above human rights, and freedom of speech should not be an excuse for continued oppression and marginalisation. Only when individuals, platforms and governments jointly oversee these mechanisms, respect diversity and respond to harm can we build an open and responsible digital public space.
References
Center for Countering Digital Hate. (2024, August 14). Abusing women in politics. https://counterhate.com/research/abusing-women-in-politics/
Flew, T. (2021). Regulating Platforms (pp. 115–118). John Wiley & Sons.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130
Navarro, M. (2025, January 16). Online harms: A parent’s fight for social media regulation. Center for Countering Digital Hate. https://counterhate.com/blog/online-harms-a-parents-fight-for-social-media-regulation/
Patnaik, S., & Litan, R. E. (2023, May 11). TikTok shows why social media companies need more regulation. Brookings. https://www.brookings.edu/articles/tiktok-shows-why-social-media-companies-need-more-regulation/
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific (pp. 1–47). https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
Weimann, G., & Masri, N. (2020). Research Note: Spreading Hate on TikTok. Studies in Conflict & Terrorism, 46(5), 752–765. https://doi.org/10.1080/1057610x.2020.1780027
Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy & Internet, 14(1), 79–95. https://doi.org/10.1002/poi3.287