
Introduction
Is the internet a safe space for everyone to have a voice? In recent years, hate speech and online harassment on social media platforms have become increasingly serious. According to the Anti-Defamation League’s (ADL) 2023 report on online hate and harassment, 52% of American adults reported experiencing online harm, a significant increase from 40% in 2022. Among teenagers, the figure rose from 36% in 2022 to 51% in 2023. This is not only a problem in the United States: hate speech and online harm are a concern worldwide. In a UNICEF survey conducted across 30 countries, one in three young people reported being a victim of online abuse, and nearly three-quarters said that social media platforms such as Facebook, Instagram, and Twitter were the most common places where it occurred. Hate speech and online abuse have thus become global issues that seriously affect users’ mental health and digital experience.
What are hate speech and online harm?
The United Nations defines hate speech in its Strategy and Plan of Action on Hate Speech as any kind of communication in speech, writing, or behavior that attacks or uses pejorative or discriminatory language toward a person or a group on the basis of their identity characteristics (UN, 2019). Hate speech is therefore broader than what you might imagine, an expression of malice toward an individual. More commonly, it takes the form of verbal violence against a specific group based on differences such as religion, gender, and race. Alarmingly, such speech is often not regulated or restricted by platforms in the name of so-called “free speech” and “cultural differences”.

Online harm is a concept closely related to hate speech. It can be a direct consequence of hate speech, or, more broadly, refer to any psychological or physical harm caused to individuals or groups through digital communication. Its forms include emotional humiliation, harassment and threats, doxxing, the spread of false information, and psychological or even physical harm triggered by public pressure. Hate speech and online harm are not merely online arguments or emotional outbursts; they reflect longstanding social prejudice and inequality.

A Life Under Online Abuse: The Tragedy of a Malaysian TikTok Influencer
In July 2024, Malaysian TikTok influencer Esha (real name Rajeswary Appahu), a 30-year-old woman, died by suicide at home after experiencing online abuse.
Esha gained popularity among young users by sharing beauty content, food reviews, and her lifestyle on social media. She once exposed a mercury-containing cosmetic product, which was later banned, earning her a large following on TikTok.
Esha became the target of online abuse after she invited a guest to her TikTok livestream who made remarks that were considered “offensive to Islam.” Although the comments came from the guest, an Islamic preacher directed the blame at Esha. After posting a clip of the livestream, the preacher publicly accused her on social media of “allowing blasphemous remarks” and urged the public to report the incident to the police. This directly led to Esha being detained by Malaysian police for three days. Although she was released, the online abuse did not stop; it became even more severe.

The online abuse erupted in the comment sections under Esha’s videos, in her livestream interactions, and under reposted articles. Many of the attacks were framed in terms of Islamic moral discourse, with comments like “She should be stoned to death” and “She’s disgracing the image of Muslim women.” She was also called “kakak arang” (black charcoal girl), “perempuan jalang” (a slut who seduces men), and other racially and sexually discriminatory terms. In addition, some users maliciously edited clips from her livestreams and reposted them with provocative titles to spread them further. Some even used anonymous accounts to post threats in her comment section, such as “you’ll be found sooner or later,” and fabricated images of her “being intimate with a man” to humiliate her.
According to The Star, Rajeswary said that one individual had used her photo and threatened to “rape and kill” her as well as “injure” her.
“He said that he had a gang to back him up and also encouraged his followers to forward the live session [link] to me,” Rajeswary said.
According to a report by Firstpost, Esha had filed a report at the Dang Wangi police station in Kuala Lumpur the day before her suicide, expressing fear of being raped or murdered. But that ultimately did not prevent the tragedy.
Why was Esha the target of online abuse?
It wasn’t just one livestream that made Esha a target. It was the result of her multiple marginalized identities: a Malaysian woman, a member of the Indian ethnic minority, and a TikTok influencer. A study published on Fulcrum notes that in Southeast Asia, Muslim women often face online abuse and moral criticism when they fail to follow traditional expectations of Muslim women on social media (Alkaff, 2022). Although Esha was not a Muslim, her position in predominantly Islamic Malaysian society meant that her content was still perceived by some religious preachers as “violating the proper role of women” or even “challenging male authority.” She thus became the target of online attacks charged with religious and gender discrimination.

This online hate speech was not a random outburst of malice; it was rooted in existing gender discrimination and religious norms, an extension of social prejudice onto social media platforms (Williams, 2021). Esha not only endured attacks through hate speech, but also suffered enormous public pressure shaped by social prejudice. So why were these racist and sexist insults, along with malicious threats and doxxing, able to spread so quickly on the platforms? What did the platform do, and what did it ignore?
Behind the fast-moving content on the platforms is delayed moderation.
The algorithmic logic of social media platforms often prioritizes instant distribution and rapid engagement: any user who can spark interaction is rewarded with greater visibility. This is especially evident on TikTok, a platform driven primarily by short videos and livestreams. This algorithmic design is not inherently malicious, but when content involves hate speech and online harm, it can unintentionally accelerate the spread of harmful statements. In Esha’s case, clips of her livestream were posted and criticized by the Islamic preacher, attracting a large number of racially and sexually discriminatory comments. These comments often remained on the platform for hours, or even several days, before being removed.
Such delayed moderation is not an isolated case. According to TikTok’s 2023 transparency reporting (TikTok, 2023) and reporting by Reuters (Latiff & Azhar, 2023), in just the first half of 2023 the Malaysian government submitted 340 content removal requests to TikTok, and TikTok removed over 650,000 pieces of content that violated its community guidelines. Despite this, hate speech continued to appear widely in comment sections and livestream interactions. When platforms consistently fail to keep pace with the speed of user-generated content, especially livestream interactions and other content that is difficult to moderate, they tend to respond too late, or selectively ignore harmful speech (Roberts, 2019).
There are significant disparities in content moderation resources across countries.
While content moderation mechanisms on platforms like TikTok may appear uniform globally, in reality there are inequalities driven by geographic and economic differences. In Western countries, platforms often field larger moderation teams and more advanced automated detection systems with stronger risk-management capabilities. In contrast, in Southeast Asian countries such as Malaysia and Indonesia, moderation relies heavily on user reports and generic algorithms.
The online abuse that Esha experienced happened in Southeast Asia, where moderation resources are scarce. Faced with a flood of malicious comments and threats, TikTok did not activate a “high-risk user” protection mechanism of the kind used in more developed regions, nor did it restrict the comments on Esha’s videos and livestreams. This ultimately allowed hate speech to spread rapidly on the platform. As Woods and Perrin (2021) argue, social platforms are “foreseeably responsible” if they fail to prevent online harm by design, or even amplify hate through algorithms for commercial profit or other reasons. When users in certain regions are repeatedly left unprotected in time, hate speech and online harm are no longer a problem of algorithm design, but a failure of platform governance.
The platform’s misjudgment and indulgence of religious and cultural narratives
The online abuse that Esha faced was not just rude comments; much of it consisted of attacks wrapped in religious and cultural humiliation. Although Esha was not a Muslim, living in Malaysia placed her within a moral framework grounded in Muslim culture. Carlson and Frazer (2018) note that statements by people from marginalized groups are more likely to be overinterpreted by the public, making them targets of online abuse. For an ethnic-minority woman in a social environment dominated by Muslim culture, these accusations were not only humiliating; they made Esha feel excluded by society as a whole.
However, in the content moderation systems of TikTok and other platforms, hate speech framed in religious and regional cultural contexts often goes unrestricted in the name of “free expression” and “freedom of belief.” On the contrary, it may even be amplified because of its high interaction rates. This moderation problem is not unique to TikTok. The report on Facebook’s regulation of hate speech in the Asia Pacific (Sinpeng et al., 2021) notes that in Myanmar, Facebook failed to act against hate speech in local languages, enabling long-term online harm against the Rohingya community that ultimately evolved into real-world harm.
In addition, the governance culture of many social media platforms defaults to a male perspective, shaped by local culture, and women are more likely to be misread by algorithms and communities when they express themselves (Massanari, 2017). In this context, abusers are given more room to speak, while victims are further silenced.
How do we respond to hate speech and online harm?
Esha’s experience reminds us that hate speech and online harm are not mere arguments on the internet, but social problems that can cause real harm. Media platforms can no longer use technology as an excuse to selectively ignore these issues. Algorithm design and moderation mechanisms determine what users see; they are not just technical processes, but reflections of the responsibility and attitude of platform governance.

Addressing hate speech and online harm requires more than simply removing comments after the fact. As Woods and Perrin (2021) suggest, platforms need a systematic approach, from platform design to operational moderation, to reduce the risk of online abuse. It may be more effective to focus on designing prevention mechanisms than on punishing after the fact. They propose using the UK Health and Safety at Work Act as a model to establish a platform “duty of care” toward users and potential victims, compelling platforms to prevent online harm through product design.
It is also essential to strengthen localized moderation teams and cultural sensitivity training (Carlson & Frazer, 2018). Platforms need to understand the religion, culture, and wider environment of marginalized groups, so that malicious comments and threats are not misread as cultural expression or ordinary argument. At the same time, mechanisms such as “high-risk user” protection, automatic comment restriction, and the option to turn off livestream bullet comments should be introduced to intervene in real time for users who may be experiencing online abuse, preventing the spread of hate speech.
Additionally, the engagement of the public and policymakers is essential. Users need stronger media literacy and greater awareness of how to report hate speech, while governments have a responsibility to promote legislation that ensures media platforms can no longer use technology as an excuse to avoid their governance responsibilities on a global scale.
Conclusion
The tragedy of Esha was not caused by a single comment or a single livestream; it was the combined result of her marginalized identity in Malaysia and local cultural biases. When she became a target of hate speech, the platform’s delayed moderation, weak prevention mechanisms, and misjudgment of religious and cultural narratives made the abuse even more severe. Hate speech and online harm will not disappear on their own. If social media platforms do not improve their existing product design and operational structures, Esha’s tragedy will not be the last.
References
Alkaff, S. N. H. (2022, June 22). Cyberbullying of Muslim celebrities: The pressure to conform to ‘modest’ Islam. Fulcrum. https://fulcrum.sg/cyberbullying-of-muslim-celebrities-the-pressure-to-conform-to-modest-islam/
Anti-Defamation League. (2023). Online hate and harassment: The American experience 2023. https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2023
Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online
FP Explainers. (2024, July 10). How Malaysian TikTok influencer’s death has shed light on cyberbullying. Firstpost. https://www.firstpost.com/explainers/malaysian-tiktok-influencer-rajeswary-appahu-esha-death-cyberbullying-13791518.html
Latiff, R., & Azhar, D. (2023, December 15). Meta, TikTok report jump in Malaysia govt requests to remove content in 2023. Reuters. https://www.reuters.com/technology/meta-tiktok-report-jump-malaysia-govt-requests-remove-content-2023-2023-12-15/
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook. University of Sydney & University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
TikTok. (2023, October 12). Community Guidelines Enforcement Report: April 1, 2023 – June 30, 2023. https://www.tiktok.com/transparency/en/community-guidelines-enforcement-2023-2/
UNICEF. (2019, September 3). UNICEF poll: More than a third of young people in 30 countries report being a victim of online bullying. https://www.unicef.org/press-releases/unicef-poll-more-third-young-people-30-countries-report-being-victim-online-bullying
United Nations. (2019). United Nations Strategy and Plan of Action on Hate Speech: Summary. https://www.un.org/en/genocideprevention/documents/UN%20Strategy%20and%20Plan%20of%20Action%20on%20Hate%20Speech%2018%20June%20SYNOPSIS.pdf
Williams, M. (2021). The Science of Hate: How Prejudice Becomes Hate and What We Can Do to Stop It. Faber & Faber.
Woods, L., & Perrin, W. (2021). Regulating Big Tech: Policy Responses to Digital Dominance (Ch. 5: Obliging Platforms to Accept a Duty of Care). Manchester University Press.