In the digital age, hate speech and online harm have become a global concern. The popularity of social media has expanded people's freedom of expression to an unprecedented degree, but it has also brought new challenges: how do we protect freedom of speech while curbing speech that may harm others? This problem is particularly salient in China, which operates a strict Internet censorship system and faces the difficulty of balancing freedom and control. In this blog, we'll explore these challenges, focusing on China as a case study, to understand how one of the world's most censorship-heavy nations approaches the issue of hate speech and online harms.

What is Hate Speech?
Before diving into the complexities of governance, it's essential to define what we mean by “hate speech.” Hate speech is not merely offensive or hurtful speech, but expression that may cause actual harm to a specific group. According to Parekh (2012), hate speech refers to speech that “expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, or sexual orientation” (p. 40). Hate speech may also target people with disabilities (Sinpeng et al., 2021).
Such speech not only causes psychological harm to the targeted group; it can also fuel discrimination, violence, and even social instability. It is important to distinguish hate speech from merely offensive or insulting speech: not every unpleasant comment qualifies as hate speech. However, when speech perpetuates, feeds, and creates systemic discrimination and oppresses marginalised groups, it crosses the line. Because this harm can appear casual, it often goes unnoticed.

Taking China as an example, the spread of hate speech in cyberspace is particularly invisible and hard to observe and identify. Discriminatory comments targeting gender, sexual orientation, and ethnicity are widespread on the Internet (Guan & Chen, 2025). Hate speech against LGBT people, for instance, is framed as a threat because of sharp conflicts with prevailing social norms and values. Because such speech often appears in the form of humour or ridicule (Webber & Yip, 2018), its potential harm is easy to overlook, even though it can become increasingly damaging the longer it persists. Hate speech, in this sense, violates social, cultural and moral orders in the digital environment. More importantly, it may also have a long-term impact on the mental health and social status of targeted groups that are already marginalised for certain characteristics.
The Rise of Online Harms
The Internet has become a breeding ground for hate speech and online harms. With over 4 billion internet users globally, the problem has grown exponentially, especially in regions like Asia, where social media use is skyrocketing across different cultures and regions (Sinpeng et al., 2021). Online platforms are now critical spaces for communication, connection, and freedom of expression, but they are also hotbeds for cyberbullying, adult cyber abuse, image-based abuse, and the spread of illegal or restricted content.
In China, online harms often take the form of “renrou sousuo” (人肉搜索), or online doxing. For example, a young woman who showed no compassion for earthquake victims was detained by authorities, while people who tortured kittens lost their jobs. These cases show how harmful speech and online witch-hunts have become a form of cyber violence, with real-life consequences for the targeted individuals. However, these victims have limited legal options to protect their reputation and privacy. Many cannot afford to go to court, and it's even harder for them to know who to sue, especially since most online posts are anonymous. To make matters worse, the person causing harm might not be just one individual but a group of anonymous internet users working together (Cheung, 2009).

The challenge for regulators and platforms is to balance the benefits of open communication with the need to protect users from harm. This is where the concept of a “duty of care” comes into play (Woods & Perrin, 2021). Instead of focusing solely on content moderation, platforms should design their services with safety in mind, taking responsibility for preventing harmful outcomes.
The Role of Platforms in Governing Online Harms
Platforms like Facebook, Twitter, and WeChat play a crucial role in regulating online content. However, their current approaches often fall short. Facebook's Community Standards, for example, have been criticised for failing to capture the nuances of hate speech in different cultural contexts. The company has made efforts to expand its definition of hate speech, but gaps remain, particularly in addressing culturally specific forms of harassment (Sinpeng et al., 2021).
Not surprisingly, Chinese social media platforms exert influence not only in China but also on users in other countries. WeChat, for instance, is a prominent social networking tool among Malaysian youth. Its prominence among the younger population makes it easier for sexual predators to target potential victims, as evidenced by news reports of rape cases that began on the platform (Hussain & Ismail, 2021). This raises the question of how WeChat's characteristics contribute to the situation: what is it about WeChat that attracts predators? Hussain and Ismail (2021) found that certain risky patterns of WeChat use increase the danger of young people becoming victims of online grooming. Children who are exposed to WeChat from a young age are more vulnerable to online risks because they are less able to self-regulate and to navigate social media safely. All of the victims in the study went online on their own phones, which increased their privacy and limited adult oversight of their usage. So, what can WeChat do to avert these challenges for its users in a cross-national context?

A “duty of care” approach suggests that platforms should focus on the design of their services to prevent harm, rather than just reacting to it after the fact (Woods & Perrin, 2021). This could involve better algorithms to detect harmful content, more transparent moderation practices, and greater accountability for identifying hate speech.
China’s Approach to Hate Speech and Online Harms
China’s approach to regulating online content is notoriously strict (Wang & Mark, 2015). The country has a comprehensive system of censorship, often justified as necessary for social stability and national security. However, the same system can also be utilised for the Party’s political propaganda. This approach to controlling and regulating the online environment has been criticised for stifling free speech and being overly broad.
China’s legal framework for hate speech is embedded in its broader Internet governance policies. Laws like the Cybersecurity Law and the Internet Information Service Management Measures provide the government with sweeping powers to regulate online content (Liu, 2000; Qi et al., 2018). While these laws aim to combat illegal and harmful information, they often conflate hate speech with political dissent, leading to censorship that goes beyond protecting vulnerable groups (Sinpeng et al., 2021).
Challenges in China’s Governance of Online Harms
Despite its strict regulations, China faces significant challenges in coping with hate speech and online harms. One major issue is the sheer volume of online content, which makes enforcement difficult: social media platforms have enabled amounts of user-generated content that were previously impossible to produce (Li & Zhou, 2024). In response, Chinese platforms increasingly regulate what Li and Zhou (2024) call the ambience, the ubiquitous information that instantly surrounds material and enacts its overall character and influence, rather than taking a semantic approach that seeks to understand the meaning of content. Additionally, the line between hate speech and legitimate criticism or satire is often blurred, leading to inconsistent enforcement (Matamoros-Fernández et al., 2023).
Another challenge is the cultural and social context in which hate speech occurs. In China, certain forms of harassment may be culturally specific or tied to particular social issues, making it harder for platforms to identify and address them effectively (Sinpeng et al., 2021). The prevalence of online communication has increased the number of contexts where sexual harassment takes place. Under the guise of employment, offenders may use WeChat, email, or video conferencing to send offensive messages, harass others, or act inappropriately (Zhu & Yin, 2025). Because these types of online harassment are frequently covert and persistent, it can be challenging for victims to recognise and reject them right away, which worsens the suffering that harassment causes.
Toward a Better Approach: Lessons from China’s Experience
So, what can we learn from China’s approach to hate speech and online harms? While its censorship-heavy model may not be the most effective or desirable for other countries, it highlights the importance of taking online harms seriously and the need for proactive measures to prevent them.
One key takeaway is the importance of a “duty of care” approach. By focusing on the design of online platforms and the systems that govern them, we can create safer digital environments that prioritise user well-being (Sinpeng et al., 2021). This could involve better moderation tools, more transparent policies, and greater accountability for platforms that fail to protect their users. It also means the corporations behind platforms must fully acknowledge their role in moderation and detection. Users are more vulnerable than ever, and social responsibility in this context is essential for creating a better, safer environment for them.
Another important lesson is the need for cultural sensitivity in regulating hate speech. What constitutes hate speech varies considerably depending on the cultural and social context, so any effective governance strategy must take these differences into account. However, hate speech classifiers tend to be culturally insensitive, as Lee et al. (2023) note in their evaluation of datasets in different languages. For China, or any country in the world, cultural sensitivity should be prioritised as a critical lens and basis for identifying hate speech and online harm and enforcing rules against them.

The Way Forward: Balancing Freedom and Safety
The challenge of regulating hate speech and online harms is not unique to China. Countries around the world are grappling with how to balance the benefits of open communication with the need to protect users from harm. The key is to find a middle ground that respects freedom of expression while safeguarding vulnerable groups. This requires a nuanced approach that goes beyond simplistic solutions like censorship or blanket bans. Instead, we need policies that are grounded in a deep understanding of the complexities of online communication and the diverse needs of different communities.
Conclusion: The Path to a Safer Digital Future
The regulation of hate speech and online harms is a complex and ongoing challenge. It requires a combination of effective policies, technological solutions, and cultural sensitivity. China's experience offers valuable insights into both the potential and the pitfalls of different approaches. As we move forward, it is important to remember that the digital world is a reflection of our broader society. The same principles that guide our offline interactions, respect, empathy, and a commitment to justice, should also guide our efforts to create a safer and more inclusive online environment. The goal is not to eliminate all forms of harmful speech but to create a digital space where freedom of expression and the right to live without fear of harassment or discrimination can coexist. It is a difficult balance to achieve, but it is essential for the health of our digital communities.
References
Cheung, A. S. (2009). A study of cyber-violence and internet service providers’ liability: Lessons from China. Washington International Law Journal, 18(2), 323-346. http://dx.doi.org/10.5281/zenodo.1467901
Guan, T., & Chen, X. (2025). Threat Perception, Otherness and Hate Speech in China’s Cyberspace. Journal of Contemporary China, 1-16. https://doi.org/10.1080/10670564.2025.2475051
Hussain, N., & Ismail, N. (2021). Young people’s WeChat use that heightens online grooming vulnerability. SEARCH Journal of Media and Communication Research, 1, 1-17.
Lee, N., Jung, C., & Oh, A. (2023, May). Hate speech classifiers are culturally insensitive. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP) (pp. 35-46).
Li, L., & Zhou, K. (2024). When content moderation is not about content: How Chinese social media platforms moderate content and why it matters. New Media & Society, 0(0), 1–17. https://doi.org/10.1177/14614448241263933
Liu, L. (2000). 互联网信息服务管理办法 [Measures for the administration of internet information services] (State Council Decree No. 292). www.gov.cn. https://www.gov.cn/gongbao/content/2000/content_60531.htm
Matamoros-Fernández, A., Bartolo, L., & Troynar, L. (2023). Humour as an online safety issue: Exploring solutions to help platforms better address this form of expression. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1677
Qi, A., Shao, G., & Zheng, W. (2018). Assessing China’s cybersecurity law. Computer Law & Security Review, 34(6), 1342-1354. https://doi.org/10.1016/j.clsr.2018.08.007
Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge: Cambridge University Press.
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
Wang, D., & Mark, G. (2015). Internet censorship in China: Examining user awareness and attitudes. ACM Transactions on Computer-Human Interaction (TOCHI), 22(6), 1-22. https://dl.acm.org/doi/abs/10.1145/2818997
Webber, C., & Yip, M. (2018). The rise of Chinese cyber warriors: Towards a theoretical model of online hacktivism. International Journal of Cyber Criminology, 12(1), 230-254.
Woods, L., & Perrin, W. (2021). Obliging platforms to accept a duty of care. In M. Moore & D. Tambini (Eds.), Regulating big tech: Policy responses to digital dominance (pp. 93-109). Oxford University Press.
Zhu, K., & Yin, H. (2025). Analysis of Sexual Harassment in China: A Criminological Perspective. Advances in Applied Sociology, 15(01), 18–36. https://doi.org/10.4236/aasoci.2025.151002