
Introduction: hurtful words or silencing labels?
In online life, Parekh (2012) defines ‘hate speech’ as the advocacy of hatred against a specific group of people, targeting characteristics such as race, religion, gender, or sexual orientation. It appears to be a term everyone avoids: it connotes extremism, discrimination, and aggression. In reality, however, the understanding of “hate” varies greatly from country to country, platform to platform, and culture to culture. The term “hate speech” carries an air of moral correctness: it is like a sword pointed at racists, misogynists, and ultra-nationalists. Yet few have pursued the question: who is qualified to wield this sword?
What kind of speech should be deleted? Who decides what constitutes “hate”? When the label “hate speech” is applied, does it really protect the vulnerable, or does it become a new tool of repression? When Weibo defines “patriarchal society” as “gender opposition”, when a German court asks Facebook to remove anti-refugee content, and when the Indian government blocks the accounts of farmer protesters for “inciting hatred”, we suddenly realise that under the label of “hate” lies a more complex power game: it can be a shield to protect the weak, or a weapon to eliminate dissent. The real losers in this game are always the silent majority: the marginalised groups who self-censor for fear of wrongful takedowns, the social movements misjudged by algorithms, and the critical voices lost in the cracks between state and platform.
Next, we will explore the complexity of “hate speech”: how this seemingly simple term has been given different meanings in global digital governance, and how it shapes our discursive space.

There is no uniform definition of hate speech
We tend to assume that there is a global, standardised definition of hate speech. But look at the laws of different countries and you will find that the definition of “hate speech” is like a piece of playdough: it changes shape with context, culture, and political formation.
- The United Nations (n.d.) defines hate speech as any form of communication that attacks or discriminates against a person or group on the basis of identity (e.g. race, religion, gender). This definition emphasises “identity attacks”.
- In its Code of Conduct on Countering Illegal Hate Speech Online, the EU (European Commission, n.d.) defines hate speech as “public incitement to violence or hatred”. The EU places more emphasis on the consequences of the act: whether it provokes real hatred or violence.
- Chinese law does not explicitly use the term “hate speech”, but the Provisions on the Governance of the Online Information Content Ecosystem prohibit “creating social confrontation”, “fuelling conflicts”, “inciting ethnic hatred”, and the like (Cyberspace Administration of China, 2020). The Chinese definition proceeds from the perspective of social stability and the maintenance of order.
- Social media platforms such as Facebook and Twitter, by contrast, generally use “attacks based on protected characteristics” (e.g. race, gender, religion) as the threshold (Meta, n.d.). While this may seem clear, the platforms rely on algorithms for automatic identification, which makes misjudgment and wrongful removal easy.
This ambiguity creates all kinds of absurdities in practice: feminist advocacy can count as legitimate social commentary in one country and “man-hating” in another; in India, low-caste netizens who criticise Brahmin privilege are blocked, while the same speech might be considered “social criticism” in the United States. Such blurred standards mean that the voices of marginalised groups (e.g. feminists, LGBTQ people) are often misjudged, and the ambiguity leaves room for abuse.
When “Hate” Becomes an Excuse for Repression: The Case of Feminism in China
A real-life example worth pondering is the frequent removal of feminist content from Chinese online platforms.
On Weibo, one of China’s largest social media platforms, feminists are under double siege:
– Keyword blocking: terms such as “satirical exploitation of marriage” and “patriarchal society” were banned from search, while words abusing women went unaddressed;
– Selective enforcement: accounts advocating gender equality are frequently banned or restricted for “inciting gender antagonism” and “fuelling group sentiment”, while accounts that genuinely target women with personal attacks, sexism, and malicious mockery are often allowed to remain.
What is the logic behind this?
This is not a technical error but the result of power collusion: in China’s online environment, where gender issues are defined as “destabilising”, platforms actively expand censorship to avoid risk.
In this context, “hate speech” no longer means attacks on vulnerable groups; it is extended to any speech that might challenge the mainstream order or provoke controversy. When “anti-hate” becomes a tool of stability maintenance, the critics become the source of “hate”. Feminists, labelled as “confrontational”, become the targets of the “hate speech” label instead.
This suggests that the very definition of hate speech is an expression of power. When the platform or the State has the power to define, it can determine not only what can be said, but also who is entitled to speak. The hate speech label is used to punish the critical voices of minorities rather than the hateful behaviour of mainstream groups. Platforms and States work together to shape the discourse of “what is hate”.

The dilemma of platform governance: the grey boundary of algorithms
Platforms must preserve a plurality of opinions while protecting users and enforcing policy. You might think that having rules and review would keep hate speech from proliferating, but the reality is far more complicated. Most social media platforms use a combined algorithmic and manual review model: algorithms handle initial screening, such as identifying sensitive words, while humans handle review; limited staffing and vague criteria lead to frequent misjudgment. A text criticising sexism is flagged as “male-hating” because it uses the words “male” and “privileged”; a poster speaking up for LGBTQ people is taken down after being misjudged as “sexually suggestive”; a user posting about sexual harassment on campus has their account banned for “stirring up emotions”.
The platform’s algorithm does not understand context; it can only recognise surface keywords. And does the review team behind it understand cultural detail and linguistic context? Often not (Sinpeng et al., 2021). As a result, the governance of hate speech easily slides into censorship and misjudgment, especially where review lacks transparency and accountability mechanisms.
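To see how keyword-only screening produces exactly the misjudgments described above, consider the following minimal sketch. It is purely illustrative: the blocklist and the sample posts are invented, and real moderation pipelines are far more elaborate, but the failure mode is the same.

```python
# A deliberately naive keyword filter, for illustration only. The
# blocklist and the sample posts are invented; real moderation systems
# are far more complex, but they share this failure mode.

FLAGGED_TERMS = {"male", "privilege", "patriarchal"}  # hypothetical blocklist

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A critique of sexism trips the filter, while open hostility that
# avoids the listed words passes straight through:
critique = "Unexamined male privilege sustains a patriarchal workplace."
attack = "Women do not belong in this industry."

print(naive_flag(critique))  # True  -> the critical post gets removed
print(naive_flag(attack))    # False -> the hostile post survives
```

The point is structural: any filter that matches terms without modelling who is being attacked will flag a critique of a hierarchy as readily as the hierarchy’s own abuse, while hostility phrased in unlisted words passes untouched.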

State intervention: the risk of politicisation of hate speech
“When you hold the power to define ‘hate,’ you have the eraser to rewrite history.”
It is not only platforms: states, too, use the label “hate speech” to regulate “dissent” in the public space.
In China, criticism of government actions or the expression of a dissenting position is often labelled “disrupting social order” or “hatred of the state”. The writer Fang Fang, who documented the real experiences of Wuhan residents in the early stages of the epidemic in her Wuhan Diary, was subjected to massive cyber-attacks and blocking as a result (China Digital Times, n.d.). Power rationalises the suppression of dissent by creating a forbidden zone of “hate” and stigmatising dissidents as “public enemies”. Yet when Chinese netizens mobbed Fang Fang, those acts were never labelled “hate”, because they served the “correct narrative” endorsed by power.
Western countries are no exception. In the United States, some right-wing figures have accused platforms of anti-conservative bias, arguing that their political positions are routinely labelled “hate speech” by social media platforms and that they are prevented from expressing themselves; this pressure eventually pushed Facebook to relax its moderation of extreme speech (Voice of America, 2025).
This differential labelling suggests that the state not only dominates the criteria for defining “hate” but also grants tacit legitimacy to certain offensive speech whose political stance aligns with the dominant ideology. The concept of hate speech can thus be politicised: it is used to decide which voices are legitimate and which should be silenced, exemplifying how political power achieves ideological governance by controlling the boundaries of speech.
What kind of governance do we need? Reframing the solution: beyond blocking and indulgence
We certainly cannot let real hate proliferate on the web. But combating hate speech is not the same as blanket deletion and blanket banning. What we need is not more powerful AI auditing but a redistribution of discursive power:
- Clearer, more transparent criteria: platforms should maintain a public explanation mechanism (why content was deleted or reinstated), users should have the right to question decisions, and complaint mechanisms should be efficient. The EU Digital Services Act (European Union, 2022) requires platforms to disclose the reasons for removal; a platform should explain “why it blocked you” rather than remain silent.
- Decentralised, multi-stakeholder rule-making: decisions should not rest with companies or governments alone; the voices of users, experts, and civil society organisations must also be heard.
- Context-driven review logic: algorithms should not look only at keywords but should also understand the context and intent behind them, and governance mechanisms should be grounded in cultural sensitivity (a sketch of this idea follows the list).
- Opposition to the misuse of the label “hate” to silence dissent: legitimate criticism and controversial topics should not be branded “harmful content”.
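As a rough illustration of how the first and third proposals could combine, the sketch below replaces automatic deletion with an escalation step and a recorded, appealable reason. Everything here is hypothetical: the ModerationDecision structure, the CONTEXT_MARKERS list, and the framing heuristic are stand-ins for what would in practice be statistical context models and formal appeal workflows.

```python
# Hypothetical sketch: a keyword hit no longer deletes a post outright;
# it routes the post to human review together with a machine-readable
# reason that the user can see and appeal. All names are invented.

from dataclasses import dataclass

# Crude stand-in for a real context model: words that suggest the post
# is discussing or condemning hostility rather than committing it.
CONTEXT_MARKERS = ("condemn", "criticise", "report", "research", "quote")

@dataclass
class ModerationDecision:
    action: str       # "allow", "human_review", or "remove"
    reason: str       # disclosed to the user, per the transparency rule
    appealable: bool  # every non-allow decision can be contested

def moderate(post: str, policy_terms: list[str]) -> ModerationDecision:
    text = post.lower()
    hits = [t for t in policy_terms if t in text]
    if not hits:
        return ModerationDecision("allow", "no policy terms matched", False)
    if any(m in text for m in CONTEXT_MARKERS):
        # Critical or reporting framing: never auto-remove, escalate instead.
        return ModerationDecision(
            "human_review",
            f"matched {hits} but framing suggests discussion, not attack",
            True,
        )
    return ModerationDecision("remove", f"matched policy terms {hits}", True)

# A post condemning harassment is escalated to a human, not deleted:
print(moderate("We condemn the harassment of women online.", ["harassment"]))
```

The design choice this sketch embodies is modest but consequential: the default outcome of an ambiguous match shifts from removal to review, and every decision leaves a stated reason behind, which is precisely what an appeal or an external audit needs.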
In short, the goal of combating hate speech is not to “keep things quiet” but to “create a space for dialogue”. Hate speech governance cannot rely on technology alone; it also needs the wisdom of social governance. But we must remain wary: any mechanism can be corrupted by power. When “localised review” becomes a government proxy, we need to stay sceptical: true safety should not come at the cost of silence.
Conclusion: the complexity of hate speech is something we can’t escape
While groups such as feminists, sexual minorities, and environmentalists are labelled as “hateful”, real hate revels in the shadows. Hate speech cannot be reduced to a binary of “good or bad”; it is a dynamic political and cultural phenomenon, and never a neutral concept. As discussed above, it is a double-edged sword: it can protect the weak, or become a tool of suppression in the hands of power.
The challenge of combating hate speech is not whether it can be governed, but how. True protection means suppressing violent speech without eliminating marginalised voices. We must remain vigilant: in the name of fighting hate, has expression itself been displaced? In maintaining order, have we stifled the space for pluralism and reflection? The delete button is easy to press, but if we eliminate every “dangerous” voice, all that remains is the echo of power. Understanding the complexity of “hate speech” is the first step towards more equitable digital governance.
References
China Digital Times. (n.d.). Fang Fang Diaries. China Digital Times. https://chinadigitaltimes.net/space
Cyberspace Administration of China. (2020, March 1). Provisions on the Governance of the Online Information Content Ecosystem. The State Council of the People’s Republic of China. https://www.gov.cn/gongbao/content/2020/content_5492511.htm
European Commission. (n.d.). EU Code of Conduct on countering illegal hate speech online. European Union. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en
European Union. (2022). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2065
Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91–96). Cambridge: Polity.
Meta. (n.d.). Hateful conduct. Meta Transparency Center. Retrieved April 11, 2025, from https://transparency.meta.com/policies/community-standards/hateful-conduct
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Department of Media and Communication, University of Sydney, and School of Political Science and International Studies, University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
United Nations. (n.d.). What is hate speech? United Nations. https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech
Voice of America. (2025, January 8). Meta eliminates fact-checking in latest bow to Trump. https://www.voachinese.com/a/meta-eliminates-fact-checking-in-latest-bow-to-trump-20250108/7929257.html