Introduction
The United Nations, in its Strategy and Plan of Action on Hate Speech, defines hate speech as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are,” that is, on the basis of identity factors such as religion, ethnicity, nationality, race, colour, descent or gender. In an era of increasing digital connection, controlling hate speech on the Internet has emerged as a critical social concern. Several platform companies have put content management strategies in place, including community norms and reporting systems through which users can flag offensive material, as well as algorithms and artificial intelligence to identify and remove hate speech and abusive content. However, given the continued rise of hate speech and online harassment on digital platforms, the efficacy of these initiatives has come under scrutiny. A cooperative approach combining platform firms and government involvement may therefore be required to address this issue successfully.
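To make the moderation mechanisms mentioned above concrete, the sketch below shows, in simplified Python, one way automated scoring and user reports might be combined to route content toward removal or human review. It is an illustrative toy only: the blocklist, thresholds, and function names are assumptions made for exposition, not any platform's actual system.

```python
# Toy moderation pipeline: a keyword blocklist stands in for an ML
# classifier, and user reports lower the bar for human review.
# All terms, names, and thresholds are invented for illustration.
from dataclasses import dataclass

BLOCKLIST = {"slur1", "slur2"}   # placeholder terms, not a real lexicon
REMOVE_THRESHOLD = 0.9           # assumed cut-off for automatic removal
REVIEW_THRESHOLD = 0.6           # assumed cut-off for human review

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    reason: str

def toy_hate_score(text: str) -> float:
    """Stand-in for a classifier: fraction of words found on the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def moderate(text: str, user_reports: int = 0) -> Decision:
    score = toy_hate_score(text)
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", f"score {score:.2f} above removal threshold")
    # Accumulated user reports trigger review even at lower scores,
    # mirroring the reporting systems the text describes.
    if score >= REVIEW_THRESHOLD or user_reports >= 3:
        return Decision("human_review", f"score {score:.2f}, {user_reports} reports")
    return Decision("allow", f"score {score:.2f}")

if __name__ == "__main__":
    print(moderate("an ordinary comment"))          # allowed
    print(moderate("slur1 slur2", user_reports=5))  # removed
```

Even a sketch this small makes the policy questions visible: where the thresholds sit, and how heavily user reports weigh, are exactly the choices on which the efficacy debate turns.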
The Role of Platforms and Users
The effect of platform media offers a new way of looking at the extremely complicated problem of cyber-violence in the digital era. According to digital media theory, the platform serves as a digital space where users create and share network information content, receive news, and engage in public discourse. It extends human reality into the digital sphere and subtly shapes news commentary, public participation, and content creation, exerting a structural effect on the cycle of creation, dissemination, and alienation of cyber-violent information (Xiaoyan, 2024). Users do not generate cyber-violent information in a vacuum; the platform environment produces the conditions for it. The situation is made worse by the recommendation algorithms of internet platforms, which enable the rapid dissemination of hateful content. A recent study found that looser content-moderation restrictions following Elon Musk’s acquisition of Twitter in October 2022 were accompanied by a spike in hateful tweets (Hickey et al., 2023). It is also important to recognize that relying solely on the internal logical analysis of the law makes this extremely challenging work difficult to handle, since the creation, distribution, and alienation of network information content on platforms is a field in which traditional legal research is not deeply involved (Xiaoyan, 2024). The platform itself therefore bears an important responsibility for controlling hate speech and cyber violence. Wenguang argues that in the age of Web 3.0 the case for platform intermediary liability exemption no longer stands, and that the outdated principle of intermediary immunity needs to be reformed and revised: intermediaries are becoming the new managers of online speech, and platforms now have the power to restrict online hate speech (Wenguang, 2018).
However, the Internet’s competitive climate constantly pushes people to produce material and operate digitally, opening up a public forum that also admits paranoid and extreme speech. Injuries from hostile speech are frequent across platform communities; confrontational argument has displaced amicable exchange, and the community discussion areas created by platforms have steadily drifted from the initial ideal of being accepting, open, and mutually helpful. In this distinctive and ever-expanding media environment, the internet fosters animosity, erodes tolerance, perpetuates media injustice, and circulates cyber-violent information.
First, the platforms’ low bar for content creation raises problems of media injustice and the abuse of freedom of expression, creating a media climate that encourages the spread of false information surrounding cyber violence. Some users even create multiple profiles to fight with one another, making a profession of publishing hate speech and supplying mass-produced, paid cyber-violent information. While platforms do have community conventions and account punishment mechanisms in place, these policies mostly take the form of mild penalties such as temporary bans and account point deductions, which serve more as user-retention tools than as effective means of controlling extreme speech. Facebook, for example, has been known to refuse to ban racist pages targeting Aboriginal people. In 2012 and 2014, the Online Hate Prevention Institute engaged in long negotiations with Facebook to have the company remove multiple pages containing racist assaults on Indigenous Australians. Facebook initially concluded that the pages did not violate its terms of service and merely required their creators to rename them to indicate ‘controversial content’. Only when the Australian Communications and Media Authority became involved did Facebook decide to block these pages, and then only within Australia. Facebook’s removal of a photograph of two Aboriginal women, alongside its failure to fully restrict racist comments directed at Indigenous Australians, demonstrates the platform’s lack of understanding of Aboriginal imagery and its preference for Western principles of free speech. Facebook’s review process serves the platform’s profit-seeking and legal obligations rather than responding to social justice or advocacy goals. In my view, although platform companies take some measures because hate speech and online abuse may threaten their long-term interests, these measures, like Facebook’s, amount to delayed and mild rectification: as long as the current hotspot event brings the company profit, it will tolerate the content until a person or agency protests against the platform.
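As a purely hypothetical illustration of the “mild penalty” mechanics described above (point deductions escalating to temporary bans), the following sketch models one such ladder; the point values, strike counts, and ban lengths are invented for exposition and do not describe any real platform’s policy.

```python
# Hypothetical tiered account-penalty ladder: each violation deducts
# points, and repeated strikes escalate to a temporary ban.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    points: int = 100    # assumed starting "reputation" points
    strikes: int = 0
    banned_days: int = 0

def apply_penalty(account: Account, severity: int) -> Account:
    """Deduct points; repeated strikes trigger an escalating temporary ban."""
    account.points = max(0, account.points - 10 * severity)
    account.strikes += 1
    if account.strikes >= 3 or account.points == 0:
        account.banned_days = 2 ** account.strikes  # e.g. 8 days on strike 3
    return account

if __name__ == "__main__":
    acct = Account("user123")
    for severity in (1, 2, 3):
        apply_penalty(acct, severity)
    print(acct)  # points fall to 40; the third strike triggers an 8-day ban
```

The design itself illustrates the critique in the text: because sanctions start mild and only temporarily suspend rather than remove accounts, such a ladder retains users far more reliably than it deters extreme speech.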
Government Administration and Freedom
Cyber-libertarianism dominated the debate in the early stages of the Internet, asserting that cyberspace is a free, self-governing society and that governmental regulation should be kept to a minimum. Cyber-libertarians resist any attempt by the State or the law to regulate Internet content, since this may lead to inappropriate censorship (see, e.g., 1996). However, even if illegal hate speech and cyber-violence were managed solely by platforms, this would first require a clear definition of hate speech in the relevant laws and in platforms’ terms of service. The ambiguity of such definitions often leads either to the suppression of legitimate expression or to the neglect of certain extreme expressions. Establishing an external boundary for the broad concept of hate speech, grounded in domestic law and international human rights standards, is therefore crucial, and clear conditions should be set for the deletion of content in cases of suspected hate speech and cyber violence. This paper therefore considers government involvement in network governance inevitable.
Globally, there are three primary models for balancing freedom of expression and dignity: the American model, which prioritizes freedom; the German model, which emphasizes dignity; and a hybrid model between the two. The First Amendment of the United States Constitution demands neutrality, forbidding the government from censoring speech so as to avoid a chilling effect, and drawing almost no distinction between protected and unprotected speech: nearly all speech is constitutionally protected, and the law does not restrict speech unless a particular statement poses a demonstrable and genuine risk. Article 1 of the German Basic Law, by contrast, states that “the dignity of a person is inviolable, and all State organs must respect and protect that dignity.” The Basic Law places the protection of human dignity above all other principles, ranking it higher than the right to free speech; Germany therefore rejects “content neutrality” and holds that governments have the authority to restrict speech in order to uphold people’s dignity. Most countries and regions, including China, take a compromise position between these two, seeking a balance between the values of freedom of expression and human dignity: the hybrid model. These are the three main models of government involvement in the governance of hate speech and cyber violence in the world today (Xiang, 2024).
From the standpoint of the Chinese administration, the ban on hate speech is not for the sake of protecting the honour of a group: hate speech and cyber-violence do not infringe the right to honour, nor do they carry the same penalties as crimes of insult. The goal of outlawing hate speech is to defend weaker groups rather than powerful ones. Democracy forbids “the tyranny of the majority” just as it forbids privileges of a minority over the majority and discrimination by the majority against minorities; in most nations and regions, such speech may instead be treated as a crime against honour in order to protect the public interest. In recent years, China has implemented a new rule requiring all social media companies to display each user’s IP location. When the policy was introduced, the majority of platform users expressed support for it, and many of them, like myself, believed that it would restrain extremists who frequently posted hate speech and abuse on the network. The assumption is that if what a person does online could damage the image of the region where they live, they will consider whether their behaviour is appropriate before commenting, thereby helping to cleanse the web space. The regulation has had certain effects, such as, to some extent, preventing public disruption caused by geographically inaccurate information during breaking events and limiting the spread of some online rumours; when a hotspot occurs, the practice of falsely claiming to be a party to the event in order to fool others is reduced. China’s IP-location rule demonstrates a method of guiding people to improve the environment spontaneously through government conduct. It currently looks effective but not flawless; nevertheless, I think it remains a valuable attempt to improve the online environment while infringing freedom of expression as little as possible.
Conclusion
Speaking on November 27 at United Nations Headquarters in New York for the International Holocaust Commemoration Day, Secretary-General Guterres recalled this terrible past but also voiced particular worry about the state of the modern world: “The alarm bells had already been ringing in 1933, and few were willing to listen and fewer dared to speak.” His main concerns were misinformation, irrational conspiracy theories, unrestrained hate speech, anti-Semitism, white supremacist movements, neo-Nazi ideology, and the need for people to be alert to hate speech that misleads them (Global Times, 2023).
Considering the prevalence of hate speech on the Internet and its harm to targeted populations, democratic discourse, and public security, it is essential to combat it. To this end, platform companies play a crucial role as the direct managers of online speech. From the perspective of Internet governance, balancing multiple interests and involving multiple stakeholders helps in choosing a regulatory paradigm that matches the platform responsibility regime, and governments still have a long way to go in developing an appropriate regulatory model and accountability system. Government involvement in addressing hate speech and online abuse can take different forms; for instance, governments can work with platform companies to develop regulations and guidelines for content moderation, ensuring that the public interest is protected while upholding freedom of expression. While it is important to protect individuals’ right to express their opinions, hate speech and online abuse have significant negative impacts on targeted groups and on society as a whole. A collaborative approach involving platform companies and government intervention is therefore necessary to address this issue effectively.
References
Liang, A. (2021). Study on regulation of hate speech on the Internet [Doctoral dissertation, Southwest University of Political Science and Law]. https://kns.cnki.net/KCMS/detail/Detail.aspx?dbname=CDFDLAST2023&filename=1021602005.nh
Xiang, L. (2024). The path choice of the penal system for cyber-violence and reflections on the separation of the offence of insult. Chinese and Foreign Law.
Hu. (2023). The wave of hate speech is ringing in Washington. Global Times, 007.
Hickey, D., Schmitz, M., Fessler, D., Smaldino, P. E., Muric, G., & Burghardt, K. (2023). Auditing Elon Musk’s impact on hate speech and bots. Proceedings of the International AAAI Conference on Web and Social Media (ICWSM 2023). https://arxiv.org/pdf/2304.04129.pdf
Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 91–96). Polity.
Park, A., Kim, M., & Kim, E.-S. (2023). SEM analysis of agreement with regulating online hate speech: influences of victimization, social harm assessment, and regulatory effectiveness assessment. Frontiers in Psychology, 14, Article 1276568. https://doi.org/10.3389/fpsyg.2023.1276568
Woods, F. A., & Ruscher, J. B. (2021). Viral sticks, virtual stones: addressing anonymous hate speech online. Patterns of Prejudice, 55(3), 265–289. https://doi.org/10.1080/0031322X.2021.1968586
Wenguang, Y. (2018). Internet Intermediaries’ Liability for Online Illegal Hate Speech. Frontiers of Law in China, 13(3), 342–356. https://doi.org/10.3868/s050-007-018-0026-5
Xiaoyan, Z. (2024). Towards media justice: The role of the platform in the information governance of cyber-violence and the realization of the rule of law. Journal of Central South University (Social Sciences Edition), (01), 50–62. CNKI:SUN:ZLXS.0.2024-01-005