
The Internet spreads information with unprecedented speed; it has permeated every aspect of our lives and become an indispensable tool. It enables us to communicate with people thousands of kilometres apart and to acquire information from all around the globe with ease.
Despite its usefulness in sharing ideas, the Internet has significant downsides, the most notable of which is the rise of online violence, especially hate speech. Content auditing (moderation) is the usual remedy for this chaos, but it fails in both directions: where auditing is absent, false information keeps spreading; where it is overly stringent, it restricts free speech. This inability to strike a balance has provoked a great deal of debate. Since this unresolved problem affects us all, let's work together to find a solution.

Hate Speech: The Internet’s “Tumor”
What exactly is hate speech? Simply defined, hate speech is any speech that purposefully encourages violence or prejudice against an individual or group on the basis of characteristics such as race, religion, nationality, gender, sexual orientation, or disability. On some social media platforms, for example, people aggressively discriminate against others, attacking or criticizing them because of their skin colour or religious views. Like a "tumor" in the network, these comments can act as a fuse, creating conflict and friction between groups and inflicting emotional harm on their targets. If they "explode" on a large scale, they disrupt the equilibrium of the online environment and may even undermine the stability of the real world. Don't underestimate this "spark"; it has the potential to start a prairie fire.
As Massanari writes in "Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures," toxic technocultures promote antiquated ideas about gender, sexual identity, sexuality, and race while rejecting diversity, multiculturalism, and progressivism (Massanari, 2017, p. 333). Take the "Gamergate" incident on Reddit, which sparked widespread outrage: Zoë Quinn became the focal point and symbolic figure in a vicious campaign to delegitimize and harass women and their allies in the gaming industry (Massanari, 2017, p. 334).

The global spread of hate speech has been growing steadily and is now a serious issue. In Germany, it is unlawful to publicly deny that the Nazis killed Jews, and the maximum penalty for inciting hatred is five years in prison (Glaun, 2021). The United Kingdom and Canada likewise have laws forbidding public hate speech (Wikipedia Contributors, 2019; Wikipedia Contributors, 2020). Yet although many nations have passed pertinent legislation, the laws vary greatly in their actual application because different nations define hate speech differently. As Woods observes in "Obliging Platforms to Accept a Duty of Care," perceptions of what constitutes appropriate content differ from one jurisdiction to another, and meaning is often context-specific (Woods, 2021, pp. 96-97). This makes coordinated regulation of online communication far more difficult on a global scale.
Cyber harms: a multifaceted threat
Beyond hate speech, cyber harm takes many forms. One of the most common and well-known is cyber violence, in which people intentionally hurt and denigrate others online, seriously jeopardizing their physical and mental well-being. Many internet celebrities have been attacked for expressing views and lifestyles that deviate from those of some of their peers, much as in the Gamergate harassment campaign: in 2014-2015, white male right-wing gamers, resentful of women's growing presence in the gaming industry, seized on Zoë Quinn's game Depression Quest, circulating accusations that its critical praise came through an inappropriate relationship. Quinn faced rape and death threats on Reddit and other sites, her private information was exposed, and she was forced to leave her home. Other women, including Anita Sarkeesian, were also harassed; Sarkeesian cancelled a university lecture after a threat that referenced weapons and a massacre of women (Greengard, 2024).
The severity of cyberviolence is demonstrated by the fact that many of its victims, including internet celebrities like these, suffer long-term depression and see their friends and family dragged into the abuse. In the social-media age, cyberviolence is growing more frequent and harming more people, underscoring that the impact of damaging online content is too strong to ignore.
Cyber harm also extends to data theft and fraud. When we register an account or fill out an online form, hackers may illegally collect or steal our personal information and put it to illegal use. Scammers use phony websites, fake prize notifications, bogus online part-time jobs, and similar tactics to trick people into paying money or disclosing personal information; this is known as a cyber scam. Such cyberthreats significantly affect people's daily lives and the security of their property. According to the OAIC (2024), recent years have seen a rise in data fraud and leakage in Australia, causing serious concern in the community.
Audit mechanisms: a solution or a fresh conundrum?
The relevant authorities have passed rules and regulations, and major online platforms have put screening processes in place to handle hate speech and cyberthreats. For instance, the French National Assembly adopted a bill to combat hateful content on the internet (Assemblée nationale, 2019): to avoid fines, online platforms must remove hostile or discriminatory insults based on gender, race, religion, or other criteria within 24 hours, and extremist content within one hour (Mehta, 2020). Certain platforms use artificial intelligence to identify and filter harmful content; Facebook, for example, reports that its automated technology has raised its proactive hate speech detection rate to 97% (Schroepfer, 2021). This dependence on AI for screening isn't flawless, either. In Behind the Screen: Content Moderation in the Shadows of Social Media, Roberts observes that machine-automated detection remains an ongoing computational problem for images and video (Roberts, 2019, p. 37). Automated systems frequently have trouble understanding complicated situations and coded language.
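To make the 97% figure concrete: Meta's "proactive rate" is, roughly, the share of actioned content that automated systems flagged before any user reported it. The sketch below shows only that arithmetic; the function name and the numbers are illustrative, not Meta's actual reporting pipeline.

```python
def proactive_rate(flagged_by_systems: int, total_actioned: int) -> float:
    """Share of actioned items that automated systems caught before
    any user report. Purely illustrative; not Meta's actual schema."""
    if total_actioned == 0:
        return 0.0
    return flagged_by_systems / total_actioned

# Hypothetical numbers: 970 of 1,000 actioned posts were flagged by
# automated systems first -> the 97% proactive rate cited above.
print(f"{proactive_rate(970, 1_000):.0%}")  # -> 97%
```

Note what this metric does and does not capture: it measures how much of the *removed* content was found automatically, not how much harmful content slipped through undetected, which is why the limitations Roberts describes still matter.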
Although screening has stopped some harmful information from propagating, it has generated considerable discontent. Excessive censorship inhibits free expression: it can limit the public's capacity to express themselves while suppressing ideas and information that should circulate freely. Automated algorithms are prone to misidentification because they cannot understand complex semantics across a range of situations, so a considerable quantity of benign content is wrongly processed, resulting in censorship abuses and infringement of citizens' right to free expression.
There are also barriers to enforcement and verification. Many forms of "gray-area" illicit communication are subtle and hard to define adequately. Enforcing censorship by universal standards is challenging because understandings of speech differ greatly between cultures, and the same content may signify very different things in different cultural contexts. In Behind the Screen, Roberts points out that professional moderators need to be familiar with the cultural norms of both the region where the platform is based and the preferences of the site's assumed audience (Roberts, 2019, p. 35). Consequently, content reviewers frequently experience a great deal of anguish in the absence of precise and uniform criteria for judging whether a given piece of content is illegal.
Case Study: Meta Platform Policies and Controversies
Formerly known as Facebook, Meta is a well-known social media company with a widely cited Community Standards policy. According to the news report "After Trump's election win, Meta is firing fact checkers and making big changes" (Ryan, 2025) and Meta's own announcement "More Speech and Fewer Mistakes" (Kaplan, 2025), Meta has modified its Community Standards since Trump's election, with Mark Zuckerberg expressing his intention to allow more speech and make fewer moderation mistakes on the platform. Meta's Hateful Conduct standard makes clear that the company has policies for dealing with hate speech, including a ban on direct incitement to violence and safeguards against hate speech aimed at specific groups, intended to improve the communication environment on the platform (Hateful Conduct, 2022).
In practice, however, these policies face several problems. First, hate speech is hard to define: different social groups and cultures define it differently, so it is challenging to apply a reliable and consistent criterion to decide whether a message qualifies. Second, Meta's handling of hate speech is inconsistent. In "Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube," Matamoros-Fernández notes that Facebook is well known for its refusal to prohibit racist pages targeting Aboriginal people (Matamoros-Fernández, 2017, p. 931). In some places, racially discriminatory remarks may not be dealt with swiftly or sternly enough, while in others, ordinary everyday conversation may be banned under confusing policies.

A platform this large also has an enormous volume of content to review, and it is difficult to ensure that every instance of hate speech is handled promptly and properly. Some communities lack the staff and technology to respond quickly, so hate speech cannot be stopped from spreading in time; conversely, in the effort to enforce standards strictly, some statements are misinterpreted, routine debates are classified as hate speech, and users are prevented from commenting normally. In "Facebook: Regulating Hate Speech in the Asia Pacific," Sinpeng et al. found that Facebook's moderation policies are more effective against racial and ethnic hate than against gender-based hate speech (Carlson & Rousselle, 2020, as cited in Sinpeng et al., 2021, p. 3). This raises questions about Meta's vetting mechanism, whose policies and procedures appear to fall short of public expectations in dealing with hate speech, with no well-developed and publicly recognized approach in place.
Where do we go from here?
Faced with hate speech, cyber harm, and the vetting conundrum, we must find better answers. First, countries should reform applicable laws and regulations, clarify the criteria for recognizing hate speech and online harm, and strike a balance between freedom of expression and social order. In "Obliging Platforms to Accept a Duty of Care," Woods proposes a statutory duty of care inspired by the UK approach to regulating the broad landscape of workplace health and safety (Woods, 2021, p. 97).
By obliging platforms to take on management responsibilities, such a statutory "duty of care" framework would aid the legal regulation of the online environment. Second, online platforms can improve their auditing systems. To increase accuracy and impartiality, they should combine manual auditing with automated processes, taking various cultural backgrounds and contexts into account. Some platforms already run such hybrid audit mechanisms well: algorithms rapidly screen videos and comments for offensive language, malformed content, or copyright violations, lessening the strain on human reviewers, while human staff, who know the site and its users, accurately judge content the algorithms cannot, such as esoteric language, metaphors, or material with contested meanings. Together, this allows efficient management of community content; a minimal sketch of such a two-stage triage follows.
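The sketch below illustrates the division of labour just described, under stated assumptions: a hypothetical classifier returns a violation score between 0 and 1, near-certain violations are removed automatically, clearly benign posts are approved, and the ambiguous middle band is escalated to human reviewers. The classifier, thresholds, and word list are all invented for illustration; no real platform's API is implied.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real platforms tune these per policy area,
# language, and region.
REMOVE_THRESHOLD = 0.95    # near-certain violations: remove automatically
ESCALATE_THRESHOLD = 0.40  # ambiguous band: route to human reviewers

@dataclass
class Post:
    post_id: str
    text: str

def classifier_score(post: Post) -> float:
    """Toy stand-in for a trained model: returns a pseudo-probability
    that the post violates policy. Real systems use ML classifiers."""
    flagged = {"exampleslur", "threatword"}  # placeholder word list
    hits = sum(word in flagged for word in post.text.lower().split())
    return min(1.0, 0.5 * hits)

def triage(post: Post, human_queue: list[Post]) -> str:
    """Two-stage moderation: automate the clear cases, escalate the rest."""
    score = classifier_score(post)
    if score >= REMOVE_THRESHOLD:
        return "removed"           # automated action on clear violations
    if score >= ESCALATE_THRESHOLD:
        human_queue.append(post)   # coded language, satire, and context-
        return "escalated"         # dependent cases need human judgement
    return "approved"

queue: list[Post] = []
print(triage(Post("1", "a perfectly ordinary comment"), queue))  # approved
```

The key design choice is the middle band: rather than forcing the algorithm to decide everything, ambiguous content is deferred to reviewers who understand the community's context, which is exactly where Roberts locates the hardest moderation work.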
The healthy evolution of the online world depends on our collective efforts. While making use of the network's advantages, we should actively address the problems it creates, so that it can remain a tool for cooperation and the exchange of positive energy.
Reference List
Assemblée nationale. (2019). PPL visant à lutter contre les contenus haineux sur internet : adoption en lecture définitive [Bill to combat hateful content on the internet: Adopted at final reading]. Assemblée Nationale. https://www.assemblee-nationale.fr/dyn/actualites-accueil-hub/ppl-visant-a-lutter-contre-les-contenus-haineux-sur-internet-adoption-en-lecture-definitive
Glaun, D. (2021, July 1). Germany’s Laws on Hate Speech, Nazi Propaganda & Holocaust Denial: An Explainer. PBS. https://www.pbs.org/wgbh/frontline/article/germanys-laws-antisemitic-hate-speech-nazi-propaganda-holocaust-denial/
Greengard, S. (2024, March 14). Gamergate. Encyclopædia Britannica. https://www.britannica.com/topic/Gamergate-campaign
Hateful conduct. (2022). Meta Transparency Center. https://transparency.meta.com/zh-cn/policies/community-standards/hateful-conduct/
Kaplan, J. (2025, January 7). More Speech and Fewer Mistakes. Meta. https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Mehta, I. (2020, May 14). France orders social media platforms to remove extremist content in 1 hour or pay a fine. The Next Web. https://thenextweb.com/news/france-orders-social-media-platforms-to-remove-extremist-content-in-1-hour-or-pay-a-fine
OAIC. (2024, September 15). Report shows highest number of data breaches in 3.5 years. OAIC. https://www.oaic.gov.au/news/media-centre/report-shows-highest-number-of-data-breaches-in-3.5-years
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press. https://doi.org/10.12987/9780300245318
Ryan, B. (2025, January 7). After Trump's election win, Meta is firing fact checkers and making big changes. ABC News. https://www.abc.net.au/news/2025-01-08/meta-ends-factchecking-appoints-dana-white-mark-zuckerberg-says/104793862
Schroepfer, M. (2021, February 11). Update on Our Progress on AI and Hate Speech Detection. About Facebook. https://about.fb.com/news/2021/02/update-on-our-progress-on-ai-and-hate-speech-detection/
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3
Wikipedia Contributors. (2019, June 9). Hate speech laws in the United Kingdom. Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/Hate_speech_laws_in_the_United_Kingdom
Wikipedia Contributors. (2020, January 10). Hate speech laws in Canada. Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/Hate_speech_laws_in_Canada
Woods, L. (2021). Obliging platforms to accept a duty of care. In M. Moore & D. Tambini (Eds.), Regulating big tech: Policy responses to digital dominance (pp. 93–109). Oxford University Press.