Hate speech and online harm in China
The rapid growth of Internet technology has enabled the public to converse and share information on social media platforms. However, the problem of online hate speech has been progressively magnified by digital platforms and social media. Hate speech is defined as discourse that denigrates the human dignity of a target group, stigmatizes its members based on their traits, and incites hatred against them (Flew, 2021, p. 92). Often intertwined with misinformation and extremist content, such speech poses challenges for government regulators and platforms alike (Sinpeng et al., 2021, p. 3). In China, as digital technology has advanced, online hate speech and online harm have become increasingly serious problems, drawing widespread concern from society and the government. On Weibo, one of China’s largest social networks, a 17-year-old teenager, Liu Xuezhou, committed suicide after suffering sustained online abuse while searching for his birth parents. This severe instance of cyberviolence provoked public alarm and debate, forcing China’s internet authorities and platforms to respond. Liu’s case became a catalyst for governance and digital policy on online harm. Using his incident as a case study, this essay examines China’s digital policies and governance methods in detail, the obstacles they face, and potential avenues for improving responses to online harm.
The tragedy and underlying factors of hate speech and online harm
Liu committed suicide after being cyberbullied while searching for his birth parents. He first came to wide public attention in 2021, when he posted a family-search video on the Chinese social media platforms Weibo and TikTok, claiming that he had been separated from his biological parents through human trafficking when he was three months old. With the assistance of the Chinese police, he was reunited with his biological parents in December 2021. However, Liu’s biological parents told The Beijing News that Liu had blackmailed them and demanded that they buy him an apartment. Liu later denied the allegations and cut off contact with them. The article published by The Beijing News sparked online violence against Liu, with some netizens calling him a “schemer” and a “liar.” His suicide note revealed that he had received cruel comments on social media urging him to kill himself and attacking his appearance and character. Liu’s suicide sparked a heated debate about cyberbullying. Social media users switched gears, blaming his attackers for his death and calling on Weibo to strengthen its anti-cyberbullying tools. At the same time, some users criticized The Beijing News for carelessly publishing the parents’ allegations without verifying them or listening to Liu’s side, arguing that such reporting is irresponsible and encourages hate speech and online harm (Feng, 2022). The case has since sparked discussion about whether China needs legal definitions of and regulations against cyberbullying.
[Image: Liu Xuezhou (Feng, 2022)]
Common governance of online harm: the responsibility of the Chinese government and platforms
In this tragic case, the lack of digital policy and governance allowed the problem of online harm to grow without regulation. Before Liu’s suicide, uncontrolled mass online abuse had repeatedly appeared on the Internet, but no relevant institution took measures to stop the incident from escalating. During the incident, none of the netizens who participated in the online violence were held accountable for their hate speech, there was no review process for media content, and no one was held responsible for Liu’s death. The Beijing News, as a supposedly objective outlet, reported only the one-sided remarks of Liu’s parents without verifying the facts, which damaged Liu’s public image and fueled the online violence against him. Only after Liu’s suicide were the people who attacked him online dealt with. One main cause of this tragedy is the absence of official policy controls on these social media platforms, which allowed both news publishers and commenters to express hate speech freely without paying any price, contributing to the growth of online harm. Second, the functions of the social media platforms themselves could neither limit nor prevent acts of cyberviolence, nor did they provide corresponding protection to the victims.
Current state of government regulation of online harm
In response to online harm, the government needs to establish an independent regulatory body to oversee social media platforms and the Internet and to conduct necessary supervision and review of platform content (Woods & Perrin, 2021, p. 93). At present, China’s Internet regulator has begun supervising online violence. The day after Liu’s suicide, the Cyberspace Administration of China launched a campaign to curb chaotic behavior in cyberspace during the Spring Festival holiday, with a focus on cyberbullying (Zhang, 2022). In addition, the Chinese government has introduced a series of laws and regulations aimed at combating online hate speech and online violence. In terms of Internet legislation, the Cybersecurity Law was promulgated to refine the rules for the protection of personal information, and the Personal Information Protection Law, promulgated in 2021, marked a comprehensive upgrade in personal information protection, clarifying legal responsibilities and obligations in cyberspace. In response to the specific problem of online harm, the government also issued detailed rules such as the 2022 Regulations on the Management of Information Services for Internet Users’ Public Accounts, which set out more detailed requirements for content review and management on online platforms (Li, 2023).
Improving legal frameworks to halt the propagation of hate speech and other harmful online content is also a global priority. Freedom of speech is guaranteed by Article 19 of the United Nations International Covenant on Civil and Political Rights; however, Article 20 further states that any advocacy of national, racial, or religious hatred that incites hatred, violence, or discrimination ought to be prohibited by law (Flew, 2021, p. 93). Government legislation is therefore the most direct and effective way to combat hate speech and online harm. The government should improve the law as soon as possible to require online platforms to undertake a duty of care to reduce harm caused through their services. This statutory duty of care involves reviewing and screening content to ensure that it does not contain harmful material such as self-harm content, violent images, or hate speech (Roberts, 2021, p. 53).
Current state of platform governance of online harm
[Image: Weibo said it was considering a one-click removal function for all cyberbullying messages (Global Times, 2022)]
Once the importance of government regulation of social media platforms and the Internet is recognized, how the platforms themselves can effectively implement measures becomes central to the governance of online harm. First, platforms should regulate user-generated content, strengthen the management of comments, promptly block or remove offending comments and illegal content related to online abuse, and deal with the accounts involved (Woods & Perrin, 2021, p. 97). Following Liu’s case, Weibo said it had screened 1,239 accounts that had had personal contact with Liu since January 12. In addition to the 40 accounts permanently banned for sending abusive messages to the “twice-abandoned” teenager, another 52 accounts were suspended for 6 to 12 months (Li, 2022). Second, platform operators should comprehensively assess the risks of their operations, especially those facing vulnerable groups, and take appropriate measures to mitigate them, strengthening protection for victims of online abuse through design choices that reduce their exposure to abusive information (Woods & Perrin, 2021, p. 96). The official account of the Weibo community management team said that it would roll out the one-click deletion function for all online bullying messages and launch a one-click “riot mode.” When enabled, users can block comments and private-message attacks from unfollowed accounts for a chosen period; in addition, when a user receives many abnormal comments, a pop-up window asks whether to enable the privacy protection function (Global Times, 2022). After the Liu incident, the Weibo platform thus took the initiative to strengthen protection for victims of online violence.
Challenges for governments and platforms
However, China’s tiered information protection is still in its infancy, and a comprehensive, standardized system for classifying and reviewing online information has yet to be built. The definition of the scope and types of “harmful information on the Internet” still suffers from abstract legislative language, the low rank of the relevant normative documents, and misplaced emphasis (Cheng, 2023). Therefore, although the government is continually refining the liability provisions governing cyberviolence, these abstract concepts lack detailed support, and the measures lack clear standards and due-process provisions.
For platforms, their responsibility as media operators should be emphasized: a specific obligation to undertake a comprehensive assessment of the risks that may arise from any aspect of their operations or services, especially those involving vulnerable groups. Central to this responsibility is risk management, which requires clearly identifying and managing risks and determining appropriate mitigation measures, considering both the nature of the potential hazards and the likelihood of their occurrence (Woods & Perrin, 2021, p. 97). Weibo, as a Chinese Internet service provider, has not yet fully assumed these management responsibilities. On Weibo, users can “forward” a single piece of information to thousands of people in a matter of minutes, which accelerates the spread of hate speech and online harm; users then encounter the same rumor repeatedly.
As primary content managers, platforms often aim to maximize revenue, sometimes at the expense of user well-being. Woods and Perrin (2021, p. 95) note that platforms often modify user behavior for profit, and that those modifications are not always in users’ best interests. The platform’s regulatory rules therefore tend to reflect its own value preferences, and the powers enjoyed by the rule-maker are easily abused when the rules are formulated. In addition, Weibo lacks transparency in formulating its regulatory rules and has no consultation or feedback mechanisms. Its technical systems apply no clear, visible standard for content review, and the platform lacks early-warning and prevention mechanisms, nor does it rigorously prevent the spread of online violence. Media platforms such as Weibo could achieve effective control at the early stage of online violence through technical means, but some platforms, driven by commercial interests, deliberately steer public opinion to attract traffic and advertising revenue, thereby encouraging hate speech and online harm. Weibo should therefore take the initiative to assume responsibility and strengthen management, not merely rectifying problems after the fact but, more importantly, preventing them in advance and fostering a healthy public-opinion environment.
Accordingly, the Weibo platform should proactively assume its responsibilities while regulating and responding promptly to major public-opinion events, so as to effectively constrain user behavior. The government plays a leading role in researching, assessing, and guiding public opinion. Government agencies should fully assess media information before it is released and weigh the interests involved. Functional departments should supervise the Weibo platform in real time to prevent improper governance from harming users’ interests and to ensure that the platform itself is properly overseen.
Conclusion
Online hate speech and online harm are major challenges facing modern society, especially in a rapidly digitizing China. The Liu incident reveals both the serious harm of online hate and the inadequacy of current digital policies and governance mechanisms. Although the Chinese government has formulated relevant laws and regulations, there is still room for improvement in enforcement and supervision. The government therefore needs to further improve the legal system, clarify legal responsibility online, and strengthen supervision of platforms. Platforms, for their part, should take the initiative to assume management responsibilities, implement preventive measures in advance, and strive to create a healthy public-opinion environment; they also need to fully assess operational risks, especially the protection of vulnerable groups. In short, only the joint efforts of the government and platforms can effectively curb online hate speech and build a civilized, healthy cyberspace.
References
Cheng, L. (2023). Research on the legalization of Social Credit System. Academic Journal of Management and Social Sciences, 2(1), 94–100. https://doi.org/10.54097/ajmss.v2i1.6378
Feng, J. (2022, January 24). Chinese teenager’s suicide puts cyberbullying and unethical journalism in spotlight. The China Project. https://thechinaproject.com/2022/01/24/chinese-teenagers-suicide-puts-cyberbullying-and-unethical-journalism-in-spotlight/
Flew, T. (2021). Regulating platforms (pp. 91–96). Polity Press.
Global Times. (2022). Weibo bans accounts insulting “twice abandoned” boy Liu Xuezhou after his death. Global Times. https://www.globaltimes.cn/page/202201/1250156.shtml
Li, L. (2022, January 25). Microblog: The case of Liu Xuezhou, a boy looking for relatives, has attracted heated discussion and will launch a “riot prevention mode” with one click. lwxsd.com. https://www.lwxsd.com/pcen/info_view.php?tab=mynews&VID=20275
Li, M. (2023, March 26). Full text: China’s law-based cyberspace governance in the new era. scio.gov.cn. http://www.scio.gov.cn/zfbps/ndhf/49551/202303/t20230320_709284.html
Meng, J. (2018, January 27). China’s social media Weibo criticised for spreading “harmful content”. Xinhua. http://www.xinhuanet.com/english/2018-01/27/c_136929891.htm
Roberts, S. T. (2021). Behind the screen: Content moderation in the shadows of social media (pp. 33–72). Yale University Press.
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. University of Sydney. https://doi.org/10.25910/j09v-sq57
Woods, L., & Perrin, W. (2021). Obliging platforms to accept a duty of care. In Regulating big tech: Policy responses to digital dominance (pp. 93–109). Oxford University Press.
Zhang, Z. (2022, January 27). Ridding social media of violence. China Daily. https://global.chinadaily.com.cn/a/202201/27/WS61f1de04a310cdd39bc837a2.html