In the Age of Information Explosion: The Legal Boundaries Between Hate Speech and Free Speech

Computer monitors and a laptop display the X, formerly known as Twitter, sign-in page, July 24, 2023, in Belgrade, Serbia – Copyright AP Photo/Darko Vojinovic

The balancing act between combating hate speech and defending free expression

Contemporary society is undergoing a revolutionary change in the way information is disseminated: both the speed and the reach of information are at an all-time high. The rapid development of digital technology has fueled the boom in social media platforms, which give ordinary people channels to express their opinions across borders and allow diverse perspectives to spread rapidly around the globe. This freedom of expression, however, has also brought new social problems with it – above all, the need to redefine the reasonable boundaries of free expression, most notably in the legislative controversy over “hateful expression”.

To delve deeper into this issue, it is first necessary to clarify the conceptual scope of “hateful expression”. There is a general consensus in the academic community that such speech refers to verbal attacks, degrading treatment, or incitement to group antagonism based on specific group characteristics such as race, gender, religious belief, or sexual orientation. This raises a difficult question for social governance: how can socially harmful expressions of hatred be effectively curbed while citizens’ right to freedom of expression is protected? More troubling still, does the current legislative orientation risk inappropriately labeling legitimate speech as a “social taboo” or even an “illegal act”?

From the perspective of social impact, hateful expression has multiple negative effects: it directly damages the legitimate rights and interests of the targeted group, undermines social harmony and stability, and in extreme cases may even escalate into group violence. It is worth noting that with the advent of the digital age, the virtual space of the Internet has gradually become a major breeding ground for such harmful rhetoric. “Some content that is oppressive to the target group and experienced as hateful does not meet the definition of hate speech used by the website and is therefore completely unaddressed.” (Sinpeng et al., 2021) Given the increasingly close interaction between online and offline life and the growing complexity of the online environment, effectively controlling online hate speech has become an urgent problem of social governance.

Definitions of hate speech differ in whom they treat as its target. One influential definition describes it as “slurs, epithets, or other harsh language about race whose sole purpose is to harm or marginalize another person or group.” (Matsuda et al., 1993) Broader definitions extend the concept to speech that disparages a person on the basis of group characteristics such as race, religion, gender, age, physical condition, physical or mental disability, or sexual orientation.

To understand the concept of hate speech further, it is necessary to grasp its constituent elements, which also form the basis for its legal regulation.

The transmission vectors of hateful expression are diverse. In form, it includes not only direct verbal expression but also non-verbal “symbolic expressions” that convey discriminatory attitudes through specific symbols or behaviors. In the digital environment, the forms of dissemination are richer still: drawing on the multimedia capabilities of network platforms, hostile content can be diffused and stored as text, images, audio, and other composite formats. A typical example is the meme, which combines visual elements with textual information and has become an important medium for transmitting bias online.

With respect to the target of attack, hateful expression is directed at a specific category of objects. Unlike general offensive speech, it may target either a specific individual or a group with common characteristics, but in either case the core is the target’s group membership. When a group is targeted, it must have clear identifying markers, which include both biological traits (e.g., race, sex) and social identities (e.g., political affiliation, values). Even when the attack is carried out against an individual, the criticism remains focused on the group identity that the individual represents, rather than on his or her unique attributes.

The subjective motive behind hate speech directly affects the criteria for determining responsibility. Psychologically, three progressively more serious negative attitudes are involved. “Bias”, the most basic state, is an irrational negative perception of a specific individual or community; it typically lacks any objective basis and is clearly unjust. “Discrimination” is more serious than bias and manifests itself in concrete unfair treatment, enforced through social behavior or institutional design and often directed at vulnerable groups. “Hatred”, the most intense negative state, is extreme hostility toward a specific target group or individual, often accompanied by a pronounced tendency toward aggression and destructiveness. These three states form a progressive spectrum of negative attitudes whose degree of harm deepens in turn. In judicial practice, accurately identifying the perpetrator’s subjective state of mind is of key significance for characterizing the speech and determining the corresponding legal responsibility. It should be noted that these three states of mind may exist separately or intertwine to form a more complex structure of psychological motivation.

The negative impact of hate speech is multi-layered, and its harm can be analyzed along three dimensions: the individual, the group, and society. First, at the micro level, the damage suffered by victims is complex: it may cause traumatic stress responses psychologically and somatization symptoms physically. More fundamentally, because individual identity is inseparable from group attributes, attacks on group characteristics directly damage the victim’s self-perception and can trigger an identity crisis. Second, at the group level, such rhetoric produces a “chilling effect”: under persistent attack, vulnerable groups often withdraw from spaces of public discussion, making it difficult for their legitimate rights and interests to be effectively protected and, in the long run, creating a structural imbalance in discourse. Third, at the social level, hate speech plays a significant role in social deconstruction: by deliberately reinforcing group differences and creating antagonism, it not only undermines social cohesion but may, in extreme cases, induce mass violence, ultimately endangering basic social order and stability. This multi-layered mechanism of harm shows that the impact of hate speech is far from limited to the immediate verbal attack; it can have far-reaching negative social consequences. The assessment of its harms should therefore be systematic, taking into account the ripple effects at each level.

The forms in which hateful expression spreads are diverse: its carriers have gone beyond the traditional scope of spoken and written language and extended to symbolic, non-verbal expression. The rapid development of digital media has created favorable conditions for the dissemination of all kinds of information, and the popularity of user-generated content (UGC) has made it easy for ordinary netizens to produce and spread content containing multimedia elements such as images, audio, and video. Among these, memes that integrate multiple media forms have become an important carrier of online hate due to their strong communicative power and broad acceptance. This type of expression poses significant challenges for network governance: first, symbolic expressions usually employ rhetorical devices such as metaphor and irony, so their implicit discriminatory content is difficult to identify through conventional censorship mechanisms; second, such content lends itself to imitation and derivative creation, producing a viral effect that can ultimately evolve into collective verbal violence against specific groups.
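To illustrate why conventional, keyword-based censorship mechanisms struggle with symbolic expression, consider the following minimal sketch in Python. It is not any platform’s actual system; the blocklist terms and example posts are hypothetical placeholders, and real moderation pipelines are far more sophisticated.

```python
# Minimal sketch of a naive keyword-based moderation filter.
# BLOCKLIST terms and example posts are hypothetical placeholders;
# real moderation systems are far more sophisticated than this.

BLOCKLIST = {"vermin", "subhuman"}  # hypothetical explicit slurs

def flag_post(text: str) -> bool:
    """Flag a post if any blocklisted term appears verbatim."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "They are vermin and should leave.",          # explicit slur: flagged
    "You know what we do with pests like them.",  # metaphor: missed
    "(image meme with no machine-readable text)", # visual content: missed
]

for post in posts:
    print(flag_post(post), "-", post)
```

Even this toy example shows the gap that symbolic expression exploits: the explicit slur is caught, while the metaphorical phrasing and the purely visual meme pass through untouched, which is why implicit content demands context-aware review rather than surface matching.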

From a technical point of view, advances in digital communication technology and the widespread availability of terminal devices have greatly lowered the threshold for network participation. Unlike the centralized communication model of traditional media, the distributed nature of the Internet means that publishing information is no longer subject to review by professional institutions. While this has contributed to the democratization of information, it has also enabled the rapid proliferation of harmful content, including hate speech. Current regulatory measures mainly follow a model of after-the-fact accountability, which has obvious limitations in timeliness and effectiveness. In addition, online anonymity further exacerbates the governance dilemma. The use of virtual identities not only makes tracing and gathering evidence more difficult, but also gives users a sense of psychological safety that makes them more inclined to make extreme statements online. This “disinhibition effect” has led some netizens to break away from the constraints of real-world social norms and actively participate in the collective catharsis of online hatred. How to establish an effective mechanism for tracing responsibility in such an anonymous environment has become a key issue that urgently needs to be solved in network governance.

The proliferation of online hate speech causes multi-layered and far-reaching harm, reflected mainly in three aspects. First, the persistence of digital information makes harmful content difficult to eliminate completely. The digital nature of cyberspace allows hostile and threatening speech to survive for long periods, causing lasting psychological harm to specific populations. Such offensive content does not automatically disappear over time; it may be reactivated at some point in the future, inflicting secondary harm. Second, such speech seriously damages the online ecosystem. The persistence of hateful content not only pollutes cyberspace, but its negative impact also clearly endures: when relevant social events occur, this content may trigger a new round of discussion and attack, forming a vicious circle. More alarmingly, this environment reinforces the “dominant voice effect” – the mainstream group continues to expand its voice while marginalized groups are forced into silence, ultimately producing a structural imbalance in the online public opinion field. Finally, the boundary between online and offline is blurring, and online hate is seeping into the real world. Extreme emotions from the virtual space may be internalized by some netizens, reshaping their stereotypes of specific groups; in extreme cases, verbal attacks on the Internet may escalate into real-life violence that directly endangers public order. This cross-spatial transmission of emotion means the impact of online hate extends far beyond the digital realm itself.

The spread of hate speech on social media tends to detach itself from real-world constraints, amplifying its harmfulness. Consider Elon Musk’s acquisition of Twitter (now known as X): since taking over the platform, Musk has slashed its moderation and platform-policing mechanisms while vowing to defend “absolute freedom of speech.” However, “the proportion of hateful discourse such as racism, transphobia, and antisemitism on platforms has risen substantially” (Desmarais, 2025), and large volumes of content featuring insulting emojis or hateful hashtags have circulated widely within short periods. Advertisers such as Apple and Coca-Cola chose to suspend their partnerships with the platform over such incidents, underscoring how sensitive companies are to the risks of hate speech. In the absence of effective governance mechanisms, the name of “free speech” is easily abused as a refuge for the spread of hate speech, ultimately eroding the real space for public discussion.

At the same time, a regulatory storm related to hate speech has hit TikTok. Following the outbreak of the conflict in Gaza in late 2023, which attracted great international attention, large numbers of TikTok users posted short videos involving extreme religious positions and incitement to violence, triggering widespread user reports. The European Union immediately invoked the Digital Services Act, requiring TikTok to remove the hateful content or face a fine of up to 6% of global revenue (Chan, 2023). Although TikTok says it has optimized its algorithms and strengthened human moderation, the platform still finds it difficult to fully control the generation and spread of hateful content given its rapid global reach and strong linguistic diversity. The incident once again reflects the acute tension and practical challenges social platforms face between upholding freedom of expression and protecting user safety.

At present, the international community generally regulates hateful expression through legislation, aiming to protect the rights and interests of vulnerable groups and promote social integration. Judged by legislative intent, such norms have positive significance for preventing group antagonism and social violence. Judicial practice, however, shows that the relevant laws face many difficulties in application. The first problem is the vagueness of legal concepts: the criterion for determining “hatred” lacks an objective scale and relies mainly on subjective value judgments, creating a risk that judicial discretion will be abused. History shows that some regimes may use broad legal provisions to silence dissidents and recast legitimate criticism as expressions of hatred. Legislators must therefore establish precise conceptual definitions and standards of application, lest constitutionally guaranteed freedom of expression be undermined in the name of curbing hatred. The deeper concern is the “chilling effect” that could result: when legal boundaries are unclear, people may avoid discussing sensitive issues for fear of legal consequences, and this self-restraint seriously weakens the vitality of the public sphere. Healthy democracies depend on a full collision of ideas, and excessive restrictions will stifle social innovation.

Improving the governance system requires the coordinated participation of multiple actors: the legislature should establish precise normative standards; the judiciary needs to maintain consistency in adjudication; civic groups should continue the public discussion of where the boundaries lie; and educational institutions should focus on cultivating the capacity for rational dialogue. Only through such systemic governance can a dynamic balance be achieved between guaranteeing freedom of expression and maintaining social harmony. This balance is not a static set of rules but an ongoing process that must be continually adjusted as society develops.

Reference List

Chan, K. (2023, October 19). EU demands Meta and TikTok detail efforts to curb disinformation from Israel-Hamas war. AP News. https://apnews.com/article/meta-tiktok-eu-europe-digital-services-act-81c682d25bd2bd62333ba64564dde9e5

Desmarais, A. (2025, February 13). Hate speech on X now 50% higher under Elon Musk’s leadership, new study finds. Euronews. https://www.euronews.com/next/2025/02/13/hate-speech-on-x-now-50-higher-under-elon-musks-leadership-new-study-finds

Matsuda, M. J., Lawrence, C. R., Delgado, R., & Crenshaw, K. W. (1993). Words That Wound: Critical Race Theory, Assaultive Speech, and the First Amendment (1st ed.). Routledge. https://doi.org/10.4324/9780429502941

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
