Introduction: The Tragedy of the Pink-Haired Girl — How Hate Speech Can Kill
On January 23, 2023, 23-year-old Zheng Linghua took her own life—just six months after being targeted by online abuse.
In May 2022, as she prepared to graduate from university, Zheng dyed her hair pink to take beautiful graduation photos. After receiving her postgraduate admission letter, she rushed to the hospital to share the good news with her beloved grandfather and posted a photo of them together on social media.
She never expected that this ordinary photo would trigger a wave of cyberbullying. Some organizations misused the photo for promotional purposes, while strangers hurled insults at her, calling her a “club dancer,” “seductress,” and “bitch,” and spreading false rumors about an “age-gap romance.”
A few days later, she gave in and dyed her hair back to black—but the abuse didn’t stop. She tried legal action and even talked to her attackers, but with little effect. Eventually, Zheng fell into depression and ended her life months later. Meanwhile, those who attacked her showed no remorse, dismissing her as simply being “too sensitive.”

Zheng Linghua’s Photo with Her Grandfather
Zheng Linghua’s death is both heartbreaking and thought-provoking: when hate speech floods the internet, how should we respond?
With the rise of social media, public expression has become easier, but so has the spread of hate speech. In recent years, the problem has worsened, with attacks increasingly aimed at vulnerable groups such as the LGBTQ+ community, feminists, and Indigenous Australians.
So how should we deal with hate speech? Should it be restricted? Would restrictions infringe on free speech? Can AI algorithms help curb its spread, or do they risk creating “filter bubbles” that trap marginalized voices in echo chambers?
This article explores the harm caused by hate speech, its tension with free expression, the role of AI in its spread, and the challenges of regulating it.
1. Types of Hate Speech
Hate speech refers to public expression that attacks people on the basis of their membership of a marginalized group (Sinpeng et al., 2021).
According to Sinpeng et al. (2021), hate speech can be divided into two types based on the intention and impact of harm:
The first type is causal harm, which involves direct attacks such as insults, death threats, sexism, and racism. Examples include statements like “All homosexuals should die”, “Homosexuals belong in hell”, or the gender-based abuse faced by Zheng Linghua. Causal harm can inflict immediate psychological trauma on victims, disrupt their daily lives, and threaten their sense of safety.
The second type is constitutive harm, which refers to indirect harm caused by speech that may not be overtly aggressive but aims to subordinate or degrade its targets. This can manifest as discrimination—for example, in Indonesian political and religious elite culture, homosexuality is often viewed as deviant. It can also involve deprivation, such as denying certain groups the right to express opinions or invalidating their claims for rights.
Constitutive harm represents a more covert form of hate, one that undermines free speech, poisons public discourse, and makes marginalized individuals feel as if they’ve been “slapped in the face”. As a result, it discourages them from speaking out and reduces the chances for diverse and meaningful dialogue.

Cyber-violence
2. How Do AI Algorithms Influence Hate Speech?
(1) Algorithmic Loops: Why Do Platforms Promote Controversial Content?
Social media platforms do not actively eliminate hate speech; in fact, they may contribute to its spread. The structure of these platforms, their interaction rules, and recommendation algorithms determine which content gains visibility (Matamoros-Fernández, 2017). Most algorithms prioritize content that is controversial or emotionally charged, as such content drives higher engagement—leading to increased traffic and ad revenue.
This mechanism explains why hate-filled, inflammatory, or extreme content often spreads more easily than rational discussions. Much like how celebrities attract attention through eye-catching outfits or behavior, extreme or polarizing statements on social media can quickly ignite heated discussions, pushing such content into trending feeds. For platforms, controversy equals traffic—and traffic equals profit. As a result, algorithms are more likely to amplify hate speech rather than suppress it.
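To make this concrete, here is a minimal Python sketch of an engagement-first ranking rule. The Post fields, weights, and vote counts are invented for illustration and do not describe any real platform's code; the point is only that a score built purely from engagement signals will happily surface abusive content, since complaints never lower the score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    reports: int  # complaints flagging the post as abusive


def engagement_score(post: Post) -> float:
    # Rank by predicted engagement only; note that reports never reduce the score.
    return 1.0 * post.likes + 2.0 * post.shares + 1.5 * post.comments


def rank_feed(posts: list[Post]) -> list[Post]:
    # The most engaging posts surface first, however outrage-driven that engagement is.
    return sorted(posts, key=engagement_score, reverse=True)


feed = rank_feed([
    Post("Thoughtful policy explainer", likes=120, shares=10, comments=30, reports=0),
    Post("Inflammatory attack on a minority group", likes=90, shares=80, comments=200, reports=45),
])
for post in feed:
    print(round(engagement_score(post)), post.text)
# 550 Inflammatory attack on a minority group
# 185 Thoughtful policy explainer
```

Real ranking systems are far more sophisticated, but as long as the objective is engagement rather than harm reduction, the same bias toward provocative content persists.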
(2) When Platform Culture Meets Algorithms: Hate Speech Intensifies
When the culture of a social media platform interacts with AI algorithms, the spread of hate speech can intensify. Take Reddit as an example. Shaped by a long-standing geek culture, Reddit’s user base is predominantly white and male, which has fostered an environment where anti-feminist and sexist sentiments thrive (Massanari, 2017). In such a setting, women are often “othered”, treated as sexual objects or unwelcome intruders.
Reddit’s “karma” system further amplifies this phenomenon. Karma is a point-based score meant to represent a user’s contribution to the community, calculated as the difference between the upvotes and downvotes their posts receive.
To gain karma—by maximizing upvotes and minimizing downvotes—users tend to post content that aligns with popular or dominant views. This encourages imitation and conformity within the prevailing cultural climate. As a result, anti-feminist posts, for example, are more likely to be promoted and circulated, supported by both Reddit’s user culture and its algorithmic system.
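A minimal sketch of that arithmetic (the vote counts below are invented) shows why imitation pays:

```python
def karma(upvotes: int, downvotes: int) -> int:
    # Karma as described above: upvotes minus downvotes.
    return upvotes - downvotes

# A post echoing the community's dominant (e.g. anti-feminist) sentiment
# versus one that challenges it:
print(karma(upvotes=850, downvotes=120))   # 730: conformity is rewarded
print(karma(upvotes=140, downvotes=600))   # -460: dissent is punished
```

A user who wants karma quickly learns which kind of post to write, so content converges on the prevailing culture.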
This fusion of “toxic culture” and algorithm-driven visibility has given rise to several disturbing incidents. One notable example is the Gamergate (GG) controversy, much of which played out on Reddit, in which a large number of users targeted feminist game developers, critics, and their male allies with coordinated harassment. The goal was to delegitimize and intimidate women and their supporters in the gaming community, an effort fueled by both Reddit’s cultural bias and the reinforcing mechanisms of its algorithm.

The Fappening
In the Fappening incident, a large number of private photos of female celebrities were stolen and shared online. Because the images drove enormous traffic, and therefore profit, platforms failed to stop their spread in a timely manner.
In such cases, social media platforms are not merely channels through which hate speech spreads; their very design can unintentionally encourage hatred to be generated and amplified.
(3) How Does Hate Speech Reach the Right (or Wrong) People?
As previously mentioned, hate speech is more likely to be widely promoted by AI. Once the algorithm identifies that a user is interested in a certain topic, it continues to recommend related content—without assessing whether that content is harmful or abusive. As a result, offensive posts are often pushed not only to the attackers but also to the targets themselves, as well as to new users who show initial interest in the topic.
According to Matamoros-Fernández (2017), for example, engaging with positive posts about Adam Goodes on Facebook could lead to recommendations of related mocking memes. On YouTube, liking a racist video involving him would result in more similar content being pushed. This demonstrates how algorithms can inadvertently amplify harmful content through personalized recommendation systems.
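A hypothetical sketch of such a topic-similarity recommender makes the flaw visible. Everything below (the data, field names, and scoring) is invented for illustration; the key point is that similarity to past engagement is rewarded while harmfulness is never checked.

```python
from collections import Counter

def topic_profile(engagement_history: list[dict]) -> Counter:
    # Build a user profile from the topics of posts they interacted with.
    return Counter(post["topic"] for post in engagement_history)

def recommend(candidates: list[dict], profile: Counter, k: int = 2) -> list[dict]:
    # Rank purely by topic overlap with the profile; the "abusive" flag is never consulted.
    return sorted(candidates, key=lambda post: profile[post["topic"]], reverse=True)[:k]

# A user engages with supportive posts about Adam Goodes...
history = [{"topic": "adam_goodes", "abusive": False}] * 5
# ...but the candidate pool also contains a mocking meme on the same topic.
candidates = [
    {"topic": "adam_goodes", "abusive": True,  "text": "mocking meme"},
    {"topic": "adam_goodes", "abusive": False, "text": "supportive article"},
    {"topic": "gardening",   "abusive": False, "text": "unrelated post"},
]
for post in recommend(candidates, topic_profile(history)):
    print(post["text"])
# mocking meme
# supportive article
```

The recommender cannot tell support from mockery; it only sees that both are “about” the same person.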

Discriminatory Memes Targeting Adam Goodes
On Reddit, many subreddits rife with hate speech appear on the default front page shown to new users. Those who dislike such content are more likely to abandon the platform, while those who remain are often individuals who already align with the toxic culture. Over time, this self-selection process has made Reddit’s culture increasingly extreme (Massanari, 2017).
As a result, the excessive recommendation of hate speech contributes to a dangerous trend: newcomers with neutral stances may gradually be swayed by the dominant hateful perspectives they’re exposed to, becoming more sympathetic to or even adopting such views. Meanwhile, those uncomfortable with the tone of the discourse often choose to leave. Ultimately, this creates a feedback loop that pushes the community further into toxicity.
This raises a critical question. AI algorithms clearly tend to amplify hate; but if they instead recommended marginalized voices only to their existing supporters, or filtered hate speech so that its targets never saw it, would that truly be better?
(4) Is Blocking Hate Speech the Solution?
Blanket blocking of hate speech—either by preventing it from being shown to marginalized groups or by only recommending marginalized voices to supporters—may seem protective on the surface. However, in practice, this approach risks reinforcing “filter bubbles” and may further diminish the social visibility and influence of these communities.

Filter Bubbles Separate People
According to Just and Latzer (2017), the content we are exposed to shapes our perceptions and behaviors. In the era of traditional media, people generally received uniform information, which helped build shared values and social cohesion. In contrast, social media—driven by AI algorithms—exposes users to increasingly personalized content, steering them away from opposing views. This results in a highly individualized information environment: while personal freedom increases, shared norms and social order weaken. The consequence is a decline in social cohesion and the rise of issues like polarization.
Blanket censorship of hate speech only intensifies this isolation. It confines marginalized voices within echo chambers, limiting their reach and weakening their broader societal influence. In the long run, this can hinder their ability to advocate for rights and push for meaningful change.
Therefore, simply blocking hate speech from reaching vulnerable groups is not an effective or sustainable solution.
3. The Dilemma of Governing Hate Speech
(1) Blurred Boundaries: What Counts as Hate Speech?
Given the harmful effects of hate speech, it’s clear we cannot allow it to spread unchecked. But the first step in controlling it—defining what constitutes hate speech—is far from straightforward.
The two types of hate speech mentioned earlier are easier to identify when they involve direct attacks. However, things become more complex when such speech is disguised as humor, satire, or parody. These forms blur the line between free expression and harm, making hate speech appear more acceptable and harder to regulate.
According to Matamoros-Fernández (2017), platforms like Facebook and YouTube defend users’ rights to express humor, irony, or social commentary. However, many people exploit this leniency by cloaking discriminatory content in jokes. For example, attacks on Indigenous Australian footballer Adam Goodes often appeared in meme form. The platforms’ tolerance, combined with users’ bad faith, has led to an environment where hate speech—disguised as humor—flourishes under the banner of free speech. In many cases, its impact is no less harmful than more overt forms of abuse.
This is further complicated by cultural differences in interpreting “humor”. Even advanced AI or human moderators struggle to distinguish between satirical commentary and genuinely harmful content. Sometimes, well-meaning commentary is flagged, while malicious content slips through unnoticed.
(2) Does Limiting Hate Speech Restrict Free Speech?
Another key issue: does regulating hate speech infringe on free expression?
On one hand, limiting hate speech helps protect marginalized groups and ensures they can speak freely. On the other hand, it can suppress differing viewpoints.
Take the ongoing debate about whether transgender athletes should be allowed to compete in women’s sports. Some people raise concerns based on fairness in competition, rather than from a place of malice. Should such views be categorized as hate speech that impedes transgender rights—or are they legitimate forms of public discourse?

Hate Speech & Free Speech
Thus, not all dissenting opinions should be labeled as hate speech.
If we only allow speech that benefits marginalized groups while silencing all dissent, we inadvertently eliminate genuine public discourse. It is crucial to remember that upholding the rights of marginalized communities should not come at the cost of ignoring the voices of others.
(3) The Misuse of Hate Speech Regulation
Moreover, laws targeting hate speech are not always enacted to protect vulnerable groups. In some cases, they are exploited by those in power as tools to maintain control (Sinpeng et al., 2021), suppress dissent, and serve political interests. Rather than safeguarding free expression, such misuse ends up obstructing it.
Hence, excessive restrictions on hate speech—whether through overprotective censorship or aggressive legislation—can easily curtail freedom of speech.
(4) Unfair Platforms
Currently, major platforms are the main arbiters of what constitutes hate speech. But their moderation efforts are clearly inadequate, especially given the many forms hate speech can take.
To begin with, these platforms fail to properly manage direct hate speech targeting vulnerable communities. According to Sinpeng et al. (2021), LGBTQ+ group moderators across different countries reported that Facebook rarely responded to hate speech complaints, leaving them feeling powerless.
Just as Reddit is shaped by its geek-dominated culture, large platforms have prevailing cultural norms. These platforms are often controlled by dominant societal groups (Noble, 2018), typically reflecting white male-centric values. Since algorithms are created by a small elite, they also embed these creators’ biases (Crawford, 2021). As a result, platforms tend to approach marginalized groups with elitist indifference. They may claim to oppose hate speech for the sake of order, yet fail to take action when LGBTQ+ users report abuse or when hate speech disguised as humor spreads unchecked.
4. Conclusion
In summary, the spread of online hate speech is driven by both platform culture and AI algorithms. Blanket bans and indiscriminate censorship can weaken marginalized groups’ influence, hinder public dialogue, and contribute to societal fragmentation. The governance of hate speech is riddled with challenges: vague definitions, the tension between free speech and harmful content, legal misuse, and biased platform cultures.
To truly address the issue, we must avoid both inaction and overcorrection. As discussed, harsh regulatory approaches—like restrictive laws—can backfire. A more effective solution may lie in softer, structural strategies.
Take Reddit for example: its toxic environment stems from geek culture, conformity driven by karma scores, and the upvote/downvote system (Massanari, 2017). To prevent hate speech from flourishing, platforms must be designed with cultural and systemic considerations in mind from the start.
For established platforms like Facebook and YouTube, in addition to adjusting their culture and algorithms, it’s essential to build more transparent moderation systems. Most importantly, they should include administrators from diverse cultural backgrounds in both content moderation and platform governance. Only then can platforms be equipped to manage hate speech fairly and empathetically.
References
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award.
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238-258.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.