Around the world, freedom of speech is widely regarded as a fundamental right and has been incorporated into the political and legal systems of most countries. In recent years, the scope of freedom of speech has continued to expand and its protection has been increasingly strengthened. However, as this right comes to be treated as ever more ‘absolute’, the rise of hate speech has also raised public concern.
The Digital Battlefield: Hate Speech in China–Japan Relations
Against the background of the complex and delicate relationship between China and Japan, cyberspace has become a major arena for emotional expression, nationalist escalation, and even hate speech. Whenever diplomatic conflicts, historical disputes, or unexpected events arise between the two countries, such as Japan’s release of nuclear wastewater from Fukushima or disputes over Chinese coast guard patrols, social media platforms are flooded with offensive and radical comments. This content is often mixed with national stereotypes, historical grievances, and even outright lies and incitement. Hate speech between China and Japan is not a one-off event but a continuous and widespread online phenomenon. Such comments not only damage rational public communication between the two countries but also deepen the misunderstanding and bias of their younger generations. More seriously, it can spill over from virtual space into real society and lead to discrimination, exclusion, and even violence against specific groups, causing significant social harm, including the vandalism of Japanese stores in China and the burning of vehicles from Japanese brands such as Nissan. Therefore, research on hate speech between China and Japan is a significant challenge of education, cultural understanding, and public governance, not merely one of management or restriction.
While there has not been any consensus on the definition of hate speech, the concept of incitement to hatred is well established in international human rights law. Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) clearly states that any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law (ICCPR, 1966, Art. 20[2]). In this context, incitement refers to speech targeting a national, racial, or religious group that poses an imminent risk of provoking discrimination, hostility, or violence against its members. Importantly, the ICCPR includes a requirement of intent, which specifically refers to the deliberate promotion of hatred against a particular group. As emphasized in the United Nations Strategy and Plan of Action on Hate Speech, incitement is considered an especially dangerous form of speech because it explicitly and deliberately aims to provoke or accompany discrimination, hostility or violence, and may even trigger acts of terrorism or atrocity crimes (United Nations, 2019). This framework highlights why incitement is not merely offensive rhetoric but a speech act with serious potential to cause real-world harm.
Tensions between China and Japan often trigger hate speech online. For example, in 2024 a Chinese man attacked a Japanese mother and her child at a school bus stop in Suzhou, injuring both and killing a Chinese woman who tried to intervene; the incident shocked both China and Japan. Even more worrying was the flood of pro-attacker comments that appeared on Chinese social media afterwards, with some even ‘rationalizing’ the crime. Platforms have since removed some of the radical content and banned accounts; however, many offensive comments had already been widely screenshotted and reposted. Another example is the assassination of former Japanese Prime Minister Shinzo Abe in 2022, which was even ‘celebrated’ on many Chinese social media platforms. The common feature of these cases is that they quickly turned into media storms, with highly emotional content and national identity bias spreading rapidly.
Platforms and Algorithms: How Hate Speech Spreads
Why does such hate speech spread so easily? Part of the reason lies in the algorithmic mechanisms of the social platforms themselves. As Just and Latzer (2016) note, recommendation algorithms on social media tend to promote content that triggers emotional responses or receives high levels of interaction. This means that hate speech built on anger, antagonism, and abuse attracts far more attention than measured commentary. Because hate speech draws clicks, comments, and shares, it is continuously promoted by platform algorithms, creating what is known as an “information cocoon” or an “echo chamber.” Meanwhile, there are obvious shortcomings in how platforms regulate hate speech. Sinpeng et al. (2021) found that in culturally complex and linguistically diverse environments, platforms’ automated identification systems often fail to grasp users’ underlying intentions, so harmful content may remain online while harmless content is deleted by mistake. This is especially evident between China and Japan, given their historically sensitive and complex context. In addition, ordinary users and community managers often feel powerless in the face of the constant stream of hateful content. Over time, this leads to the formation of so-called ‘blind spots’ in public discourse.
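To make this dynamic concrete, the short sketch below is a minimal, purely illustrative model of engagement-based ranking; the posts, the weights, and the engagement_score function are invented for this example and do not represent any platform’s actual code. It simply shows how ranking by reactions alone pushes the most inflammatory post to the top.

```python
# Toy illustration of engagement-based ranking: posts that provoke the most
# reactions rise to the top, regardless of whether the reaction is outrage.
# All data and weights here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    """Hypothetical score: comments and shares are weighted more heavily than
    likes because they tend to generate further interactions downstream."""
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0

posts = [
    Post("A calm explainer of the two countries' fisheries talks", 120, 10, 5),
    Post("An inflammatory post attacking the other country's citizens", 90, 300, 150),
]

# Rank purely by engagement: the inflammatory post comes first,
# even though more people 'liked' the calm explainer.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(p)), "-", p.text)
```

Real recommendation systems are far more complex, but the underlying incentive is the same: content that provokes interaction, including outrage, is what gets amplified.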
A more fundamental problem remains the ‘black box’ structure of the platforms themselves. Pasquale (2015) argues that algorithms in the ‘black box society’ operate opaquely, leaving the public unable to understand how platforms decide ‘who gets to speak’ and ‘who gets to be seen’. When it comes to sensitive issues such as ethnicity, history, and identity, this lack of transparency becomes a way of deflecting responsibility, allowing hate speech to appear legitimate and to spread quickly. In simple terms, hate speech between China and Japan is not an isolated incident; rather, it is the product of a combination of emotion, algorithms, and weak regulation. Once an incident touches a sensitive topic, platform mechanisms invisibly accelerate the spread of hatred. This not only distorts public opinion but also deepens antagonism and misunderstanding between the two countries.
Education Gaps and Media Illiteracy
Why has hate speech between China and Japan persisted, and even intensified, for so long? One crucial factor is the lack of education in the face of hatred. When it comes to China–Japan relations, both China’s nationalist education and Japan’s reluctance to confront its history in the Second World War have become hidden sources of hate speech. Many young people have learned one-sided historical narratives since childhood, and their understanding of the other country is built on fragmented impressions or fake news from social media. When hate speech appears on social media platforms, people who lack the ability to recognize it and to think critically are easily swept up by emotion and tend to join in attacking others. At the same time, media literacy education is largely absent. Most people can neither trace the source of information nor recognize hate speech for what it is, let alone appreciate the impact of spreading an offensive message. As Sinpeng et al. (2021) note, many community moderators and content creators have never even read the platform rules, nor have they received any training in identifying hate speech; ordinary users therefore have even fewer opportunities to acquire these skills.
Media and information literacy refers to a set of skills that enable people to engage meaningfully with information: interpreting information and making informed decisions as users of information sources, and acting responsibly as producers of information themselves. In the digital world, people need the skills to use information and communication technologies, including digital media and applications, to access and produce information. This kind of literacy empowers citizens to evaluate content critically, understand the role of media and other information providers, and make reliable, well-informed decisions both as users and as providers. Media and information literacy is therefore vital for freedom of expression, enabling all people to realize their right to seek, receive and impart information. Without media literacy, the ability to express disagreement rationally, or exposure to diverse perspectives on each other’s countries, hatred can become the default form of expression whenever a sensitive issue arises.
Law, Policy, and the Limits of Regulation
However, as hate speech has risen, government regulation has often achieved little. In China, although online information is strictly controlled and sensitive messages are removed quickly, emotionally charged but ambiguous hate speech is far harder to identify than sensitive political keywords. Unless a public outcry erupts, many abusive, offensive, and biased comments are never treated as illegal. Similarly, in Japan, although the Hate Speech Countermeasures Act was passed in 2016 specifically to discourage the public dissemination of discriminatory speech, the law barely reaches cyberspace and has had little effect. In addition, cooperation mechanisms between platforms and governments remain immature. Pasquale (2015) has pointed out that the algorithms of digital platforms operate in a ‘black box’ without external supervision, and even governments struggle to understand the logic of their content recommendations. As a result, even when a government wants to step in and regulate, it may be unable to see clearly into, let alone effectively control, how content circulates.
More critically, online conflicts between China and Japan are often cross-border. For instance, Chinese users attack Japan on Weibo and BiliBili, while Japanese users attack China on Twitter and 5ch (a Japanese anonymous forum). Such cross-platform, cross-language, and cross-cultural interactions are difficult for any single country to handle. Sinpeng et al. (2021) also emphasized that global platforms lacking localized linguistic capacity and cultural understanding struggle to identify such problems accurately. Therefore, while legal measures are important, legislation restricting free speech is only one limited way to prevent or moderate the widespread impact of hate speech on society. Policies that support the productive use of freedom of expression can strengthen harmony within diversity; such policies can lay the foundations for a united society and help to weaken hatred. Counter-speech refers to efforts to address hate speech through constructive counter-narratives rather than through restrictions on freedom of expression. In line with the United Nations position that ‘an important measure to address hate speech is to increase, not decrease, speech’, creating and promoting counter-messages and alternative narratives is one of the UN’s focus areas for preventing hate speech (United Nations, 2019). Countries are therefore encouraged to respond to hate speech with inclusive and constructive messages and to promote alternative narratives that defuse the incitement to violence which may lead to crimes.
Hate speech may seem like mere words; however, the emotions, labels, and stereotypes behind it can gradually shape people’s real judgements and even lead to actual acts of discrimination and violence. As Just and Latzer (2016) argue, platforms’ algorithms and content distribution mechanisms not only select information for us but also reconstruct our sense of reality in invisible ways. If people are constantly exposed to content such as ‘people from some countries are bad’ or ‘some ethnic group should disappear’, it becomes easy for them to believe that the world really is divided and hostile. The Suzhou knife attack mentioned earlier is an extreme but real example, and such violence is unlikely to occur without the influence of hate speech. Similarly, in Japan, incidents of right-wing extremists attacking Chinese international students or abusing Chinese tourists occur from time to time, and the influence of hate speech on social media can often be found behind them. Sinpeng et al. (2021) also note that gender, religion, ethnicity and nationality are the most common targets of hate speech in the Asia-Pacific region. Such attacks on identity are more than just words: they gradually erode victims’ social space and mental health, leading to self-doubt and affecting their real lives.
From Harm to Hope: Toward a Constructive Response
In conclusion, while cyberspace allows everyone to express their own views, it also facilitates the spread of hate speech. Especially in the context of China–Japan relations, where history and present-day reality are closely intertwined, online hate speech reflects not only the fluctuation of public emotion but also deeper failures of education, regulation, and platform mechanisms. Although hate speech seems to exist only online, it is invisibly shaping interpersonal relationships and national perceptions. It undermines civil public discourse, distorts mutual understanding, and may even lead to discrimination and violence. It is not only a moral issue but also an interconnected problem of policy, technology, and cultural governance, and no single actor can handle it alone. Governments should update laws and regulations and improve governance mechanisms; education systems need to be more active in cultivating students’ media literacy and cross-cultural understanding; and platforms must take greater responsibility for monitoring content. Finally, ordinary users should realize that every piece of information they post or share shapes the space of public opinion.
References
International Covenant on Civil and Political Rights (ICCPR), adopted December 16, 1966, United Nations General Assembly. https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights
Just, N., & Latzer, M. (2016). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Final Report. The University of Sydney & The University of Queensland.
United Nations. (2019). United Nations Strategy and Plan of Action on Hate Speech. https://www.un.org/en/hate-speech/strategy
Reuters. (2024, July 1). Chinese social media companies condemn hate speech against Japanese after knife attack. https://www.reuters.com/world/china/chinese-social-media-companies-condemn-hate-speech-against-japanese-after-knife-2024-07-01/