
Introduction: The Plague of Hate in the Digital Age
Imagine that you have just posted about gender equality on social media, and within minutes hundreds of abusive messages flood your comment section. Some threaten you, some mock you, and some even publish your personal information. This is not an imaginary scene; it is a reality that many people encounter online every day.
“I don’t know if you’ve ever had the experience of posting an ordinary comment on Weibo and suddenly being flooded with abuse. Last week my friend simply said ‘girls want the freedom to run at night too’, and she was attacked in more than 300 replies. The online environment really is getting more and more heartbreaking.” This raises the questions at the heart of this post: why is hate speech so rampant online? Who should take responsibility for it? And what can we do about it?
Research from the University of Manchester further revealed that hate speech against Asians on Twitter shot up by 300% during the COVID-19 pandemic (Lee & Wong, 2021). Closer to home, my cousin received more than a hundred abusive private messages from strangers about her body after she posted a selfie on TikTok. Most shocking of all, when she reported it, the platform replied ‘no offending content detected’. When I looked into the issue, I came across a striking figure: over 60% of women have experienced online abuse. My cousin’s experience is by no means an isolated case.
What shocked me even more was what happened to the Indigenous Australian footballer Adam Goodes. He was mocked by social media users with images of apes and monkeys for performing a traditional war dance during a match, and was eventually driven into retirement (Matamoros-Fernández, 2017). These cases reveal a harsh reality: online space has become a hotbed of hate.
The British footballer Marcus Rashford, a prominent campaigner for free school meals for children in poverty, was subjected to large-scale racist attacks on social media simply because he is Black. He even received death threats against his family and ultimately had to seek police protection (Sky Sports News, 2022).
Multiple Forms and Characteristics of Hate Speech
Definition and Presentation
Based on Matamoros-Fernández’s research, hate speech refers to expression that demeans, threatens, or incites violence against individuals or groups on the basis of identity characteristics such as race, gender, religion, or sexual orientation. It is more than just “rude words”; it is a catalyst that can trigger real-world violence. Contrary to common assumptions, online hate speech typically has three main characteristics:
- Disguised Aggression: it often hides behind “humour” or “satire” to mask its offensive nature.
- Viral Spread: it travels quickly through social networks and does enormous damage in a short period of time.
- Long-lasting Harm: content continues to circulate even after it is removed, causing long-term psychological damage to victims.

Typical Case Studies
| Case | Type | Consequence | Platform Response |
|------|------|-------------|-------------------|
| The Gamergate incident | Gender-based violence | Female developers left the industry | Reports took six months to process |
| “The Fappening” incident | Privacy violation | Affected celebrities suffered psychological trauma | iCloud security was enhanced |
| The “Empty Diary” influencer incident | Cyberbullying | Victims suffered varying degrees of physical and psychological harm | No action for over half a year, until the case drew public attention |
Forms of Online Victimization
- Direct Attacks: during the Gamergate incident, female game developers were subjected to systematic sexual harassment, which directly set back efforts to diversify the games industry.
- Visual Violence: spreading private pictures and videos without consent. A typical example is “The Fappening”, in which hackers broke into many female celebrities’ iCloud accounts and distributed their private photos.
- Systemic Discrimination: recommendation algorithms fuel extreme content. Studies have shown that Reddit’s “karma” point system actually rewards inflammatory posts, because such content easily attracts interactions (Massanari, 2015); the toy sketch below illustrates the dynamic.
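To see why an interaction-driven score favours inflammatory material, consider a minimal toy model of engagement-based ranking. This is a schematic illustration of the general mechanism Massanari describes, not Reddit’s actual algorithm; the weights and the example posts are invented for demonstration.

```python
# Toy model of engagement-based ranking (illustrative only; not any
# platform's real algorithm). The score is driven purely by the volume
# of interactions a post attracts, regardless of their tone.

posts = [
    # (title, upvotes, downvotes, comments)
    ("Calm explainer on game design", 120, 5, 14),
    ("Inflammatory rant about 'those people'", 90, 60, 300),
]

def engagement_score(upvotes: int, downvotes: int, comments: int) -> float:
    """Score a post by total activity: downvotes and angry replies
    count as engagement too, so controversy is rewarded."""
    return upvotes + downvotes + 2.0 * comments  # replies weighted higher

ranked = sorted(posts, key=lambda p: engagement_score(*p[1:]), reverse=True)
for title, *_ in ranked:
    print(title)
# Prints the rant first: 90 + 60 + 2*300 = 750 vs 120 + 5 + 2*14 = 153.
```

Because every reaction counts toward the score, including downvotes and angry replies, the cheapest way to climb such a ranking is to provoke.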
When it comes to platforms tolerating online violence, the recent high-profile “Empty Diary” incident is a typical example. This TikTok influencer not only directed his fans to harass Ms. Momo, a vegetarian, but also publicly abused her on social media, and even caused a salesperson caught up in the incident to lose his job. The most absurd part? This behaviour went on for more than half a year, and the platform did nothing until the case blew up. As one comment put it: ‘The only thing that hasn’t fallen off Empty Diary now is his visitor count’ – the platform’s algorithms were still feeding him a steady stream of traffic. Doesn’t this confirm the point made earlier? As long as the clicks keep coming, platforms turn a blind eye to online violence.
Systemic Flaws in the Platform Architecture
Algorithms Fuelling Hate
The theory of “surveillance capitalism” put forward by Professor Shoshana Zuboff (2015) of Harvard Business School reveals how platforms profit from the radicalization of emotion. The specific mechanisms include:
- Anger First: Facebook’s algorithm increases the exposure of anger-inducing content by roughly 30 percent, because such posts keep users on the platform longer (Woods & Perrin, 2022).
- Radicalization Pipeline: YouTube links controversial topics to extreme content. Matamoros-Fernández notes that in the Goodes case, recommendations tied anti-Indigenous rhetoric to white-supremacist videos.
- Commercial Drive: on the estimate Zuboff cites, hateful content earns about $4.2 in ad revenue per 1,000 views, sustaining the model of so-called “surveillance capitalism”; the back-of-the-envelope calculation below shows how quickly this adds up.
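To make the commercial incentive concrete, here is a back-of-the-envelope calculation using the $4.2-per-1,000-views figure cited above. The view counts are hypothetical; the point is simply that revenue scales linearly with attention, however that attention is generated.

```python
# Back-of-the-envelope ad-revenue arithmetic. The $4.20-per-1,000-views
# figure is the one cited in the text; the view counts are hypothetical.
REVENUE_PER_1000_VIEWS = 4.2  # dollars

def ad_revenue(views: int) -> float:
    """Estimated ad revenue for a given number of views."""
    return views / 1000 * REVENUE_PER_1000_VIEWS

for views in (10_000, 1_000_000, 50_000_000):
    print(f"{views:>12,} views -> ${ad_revenue(views):>12,.2f}")
# 10,000 views earn about $42; a 50-million-view pile-on is worth
# roughly $210,000 in ad inventory.
```

Under this model, a platform that down-ranks inflammatory content leaves measurable revenue on the table, which is precisely the structural conflict of interest the “surveillance capitalism” critique identifies.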
The double-edged sword of anonymity
While anonymity mechanisms protect privacy, they also provide a breeding ground for hate speech:
- On anonymous platforms, accounts face no accountability for what they say, so such platforms have become hotbeds of hate speech.
- Even on non-anonymous platforms (e.g. Twitter), fake accounts and bots can still spread hateful content widely.
Failure of the Moderation System
Platforms’ content moderation suffers from systemic flaws:
- Inconsistent standards: Facebook has deleted images of traditional Aboriginal ceremonies as “pornography” while long tolerating racist pages (Matamoros-Fernández, 2017).
- The dilemma of outsourced moderation: content review relies on low-paid outsourced staff, leading to wrongful or missed removals (Woods & Perrin, 2022).
- Ineffective policy enforcement: platforms often invoke “freedom of expression” to shirk their responsibilities, and do not act until a public-relations crisis erupts.

Depth and breadth of impact
The impact of online hate speech reaches far beyond what is immediately visible:
- Individual level: it leads to anxiety, depression, self-censorship and, in extreme cases, suicide. Research on Indigenous communities has shown a significant link between online violence and rising youth suicide rates (Carlson & Frazer, 2021).
- Social level: it corrodes democratic spaces for discussion. In the Gamergate incident, the systematic sexual harassment of female game developers directly damaged diversity in the games industry (Massanari, 2015).
Why Is Online Violence So Rampant?
To be honest, it is the platforms’ algorithms that are “enabling” it. Have you noticed? The more combative the content, the more likely it is to go viral, because arguments bring in money. As a friend of mine put it: “When I scroll through short videos, the system keeps pushing ever more extreme opinions at me, and it gets more and more infuriating to watch.”
Even scarier is anonymity. Behind a throwaway account, some people become brazen and say the most abusive things. I remember a game influencer last year who was ultimately hounded off the internet after refusing unreasonable demands from online users.
Response Strategies: From Individuals to Policy
Personal Protection Measures
Emergency Response Checklist:
- If you receive abusive comments, screenshot and report them first – don’t trade insults!
- Set comments to “friends only” immediately.
- Contact a cyber-violence helpline (for example, India’s NCW 24×7 helpline).
Long-term Protection Strategies:
- Don’t put your workplace or school in your profile!
- Check privacy settings regularly!
- Learn to recognize common scams such as phishing!
- Develop critical thinking skills!
Directions for Platform Reform
Social media platforms must undergo fundamental changes:
- Algorithmic transparency: publish how content-recommendation mechanisms work so the spread of extreme content can be reduced, and introduce “algorithmic impact assessments”.
- A “duty of care” model: put user safety above business interests, build comprehensive risk-assessment systems, and publish regular safety transparency reports (Woods & Perrin, 2022).
- Better moderation: increase human review while protecting the mental health of content moderators, improve moderators’ working conditions, and develop more accurate AI identification tools; a schematic sketch of such a layered pipeline follows this list.
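To illustrate what “AI screening plus human review” might look like in practice, here is a minimal sketch of a layered moderation queue. Everything in it is hypothetical: the thresholds, the `classify` stub, and the marker list are invented for illustration rather than drawn from any real platform’s system.

```python
# Minimal sketch of a layered moderation pipeline (hypothetical design).
# An automated classifier handles the clear-cut cases at both ends;
# everything uncertain is escalated to a human reviewer.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "keep", or "human_review"
    score: float  # model's estimated probability of hate speech

def classify(text: str) -> float:
    """Stub for a trained hate-speech classifier returning a
    probability in [0, 1]. A real system would call an ML model."""
    hostile_markers = ("kill yourself", "go back to", "subhuman")
    return 0.95 if any(m in text.lower() for m in hostile_markers) else 0.30

def moderate(text: str, remove_above: float = 0.9,
             keep_below: float = 0.4) -> Decision:
    score = classify(text)
    if score >= remove_above:        # high confidence: auto-remove
        return Decision("remove", score)
    if score <= keep_below:          # high confidence: leave up
        return Decision("keep", score)
    return Decision("human_review", score)  # uncertain: escalate

print(moderate("You are subhuman, go back to where you came from"))
print(moderate("I disagree with this policy"))
```

The design question is the middle band: lowering the removal threshold catches more abuse but produces more false positives, which is exactly the trade-off that makes well-supported human reviewers, rather than cheap outsourced labour, indispensable.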
The Role of Policy and Law
A global comparison of legislation:
| Country/Region | Law | Key Requirement | Characteristic |
|----------------|-----|-----------------|----------------|
| Germany | NetzDG | Remove clearly illegal content within 24 hours | High fines |
| EU | DSA | Risk assessment | Algorithmic auditing |
| China | Cybersecurity Law | Real-name user identification | Content review |
- The EU’s Digital Services Act: platforms are required to remove illegal content or face heavy fines (European Commission, 2025).
- Germany’s Network Enforcement Act (NetzDG): social media platforms must remove clearly illegal content within 24 hours.
- China’s Cybersecurity Law: requires real-name registration for Internet access and mandates content review.
Collaborative Social Governance
Changes in the education system
Suggested additions to school curricula:
- A required course on digital ethics
- Cyber-psychology counselling classes
- Media-literacy workshops
Technology companies’ responsibilities:
- Create a dedicated chief safety officer position
- Incorporate safety metrics into executive KPIs

Recently I’ve noticed that some apps are starting to change:
- Weibo has introduced “friendly comment” reminders.
- Bilibili requires users to pass a quiz before they can post bullet comments (danmaku).
- TikTok lets users filter unwanted comments with one click.
But these changes alone are not enough.
Conclusion
Hate speech is not an “unavoidable by-product of the internet”; it is the product of platform design, commercial interests, and weak social regulation combined. Solving it requires action on three fronts:
First, individual users should be more vigilant, both to protect themselves and to avoid becoming accomplices in the spread of hate. Second, social media platforms must face up to their responsibilities and fundamentally restructure their algorithmic logic and business models.
Most important of all, governments should establish effective regulatory frameworks that compel platforms to fulfil their social responsibilities through legal means. As the dictum “code is law” reminds us – a point that runs through Matamoros-Fernández’s (2017) analysis – technological architecture determines the shape of cyberspace. Only by forcing platforms to redesign their system architecture can we truly build a safe and inclusive digital public space. This is not just a technical issue but a question of social justice and the future of democracy. Digital platforms should be open, but they should not be lawless. Each of us has a responsibility to make them safer.
Hate speech and online victimization are persistent problems of the digital age, but they are not unsolvable. Through technical innovation, legal improvement, and social collaboration, we can gradually clean up the online environment. The digital space of the future should be an extension of our humanity, not a hotbed of malice. Every small act of tolerance toward abuse contributes to greater harm; only through collective action can the Internet return to its original purpose of connecting people and promoting understanding. The online world should not sit outside the law. The next time you are about to comment, remember there is a real person on the other side of the screen. We may not be able to change the entire Internet, but we can at least start with ourselves and let small acts of kindness snowball. After all, none of us wants the day to come when the person being hurt is us, right?
Interactive Discussion
Have you ever experienced online violence? Feel free to share your experience and coping strategies. Let’s put our heads together and work to clean up the online environment.
References:
Carlson, B., & Frazer, R. (2021). Social media mob: Being Indigenous online. Macquarie University.
European Commission. (2025, February 12). Digital Services Act package. Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946.
Massanari, A. (2015). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.
NCW India [@NCWIndia]. (2024, April 15). The NCW 24×7 Helpline is here to support women in distress anytime, anywhere. Whether it’s domestic violence, cyber harassment… [Tweet]. Twitter. https://twitter.com/NCWIndia/status/1902242955811045583
Sky Sports News [@SkySportsNews]. (2022, November 27). It’s more disappointment, the fact people have views like that. Marcus Rashford reflects on the racist abuse he received during lockdown when he campaigned for free school meals [Tweet]. Twitter. https://twitter.com/SkySportsNews/status/1596873465948168198
Woods, L., & Perrin, W. (2022). Obliging platforms to accept a duty of care. In Regulating Big Tech: Policy Responses to Digital Dominance (pp. 93-109). Oxford University Press.
Wikipedia contributors. (n.d.). Network Enforcement Act. In Wikipedia. Retrieved April 4, 2025, from https://en.wikipedia.org/wiki/Network_Enforcement_Act
Wikipedia contributors. (n.d.). Cybersecurity Law of the People’s Republic of China. In Wikipedia. Retrieved April 4, 2025, from https://en.wikipedia.org/wiki/Cybersecurity_Law_of_the_People%27s_Republic_of_China
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5