Introduction
As society enters a digitally dominant era, digital media is steadily displacing traditional media as the prevalent medium. Many individuals, whether celebrities or ordinary people, can voice their opinions about particular people or events through digital media platforms. Yet although these platforms form a networked community that gathers people of different countries, cities, genders, and cultures, they are frequently the site of cyber harm and hate speech.
Hate speech is typically defined as aggressive language directed at a particular minority group or individual, which may threaten their very existence (Chekol et al., 2023). Cyber harm is similarly ill-defined, usually referring to harm, discrimination, or privacy breaches suffered passively online (Cowie & Myers, 2023). In short, the precise definition and scope of both concepts remain unsettled.
Because hate speech and online harm are defined so broadly and vaguely, and because regulatory measures on digital media platforms remain weak, both phenomena are able to spread further online. This, in turn, can deepen the divide and hostility between minority and marginalized groups, triggering more hate speech between them. The result is that different viewpoints and ideas are restricted and suppressed, conflicts escalate, and both sides become more vulnerable to external oppression and discrimination.
The emergence of hate speech and online harm
In today’s digital media era, hate speech and online harm are spreading at an alarming rate. The online world is rife with hate speech of many kinds, and some malicious actors exploit platforms’ large user bases and the growing population of digital natives to spread extreme ethnic, racial, and other discriminatory speech. A closer look at the surge of online hate speech in recent years suggests that the vague conceptualization and ambiguous definition of hate speech have contributed to this phenomenon. The definition of hate speech is broad (Chekol et al., 2023). As Matamoros-Fernández (2017, p. 930) noted, different cultures, social groups, and individuals, shaped by different upbringings and cultural environments, hold varying understandings and standards of what constitutes hate speech, which leaves the definition ambiguous.
Because hate speech and harm are subjective and ambiguous, their definition and interpretation often differ from person to person. This ambiguity shapes individuals’ perceptions: the same remark may be received as normal, reasonable discourse by one group but as discriminatory hate speech by another. This lack of definitional clarity, compounded by the cognitive biases that accompany widespread internet use, deepens the confusion and controversy surrounding hate speech and produces more instances of online harm and hate speech (Lu & Yu, 2020).
Furthermore, regulation of hate speech and online harm is currently difficult and largely ineffective, which contributes to their proliferation. Regulation is hard on multiple fronts: users operate in a dynamic environment with diverse social and cultural backgrounds, which makes identifying and addressing hate speech highly challenging (Sinpeng et al., 2021). In addition, social media platforms do not always respond to hate speech, for various reasons, which erodes reporters’ trust in the platform and makes them less likely to report hate speech in the future (Sinpeng et al., 2021).
The difficulty of regulating hate speech increases the severity and frequency of online harm, fostering a toxic digital public sphere and perpetuating a vicious cycle of mutual attack. This makes the problem ever harder to address and further aggravates the scale and frequency of online harm.
Case Study – J.K. Rowling and Transgender Individuals
As noted above, the proliferation and escalation of hate speech in the public sphere of social media have become increasingly common. One well-known example is the online conflict between J.K. Rowling and the transgender community. Rowling, creator of the global Harry Potter IP, posted tweets on Twitter expressing her disagreement with government policies that would simplify the process of legal gender recognition for transgender people. One tweet stated: “If sex isn’t real, there’s no same-sex attraction. If sex isn’t real, the lived reality of women globally is erased.” This drew a wave of hate speech against Rowling from some members of the transgender community and some of her own fans.
Rowling then clarified that she had not been expressing discriminatory views towards transgender people but rather voicing concern about potential infringements on the rights of “biological women”. Even so, her response was met with further online harm and hate speech from some members of the transgender community. Notably, because transgender groups perceived her comments as hateful, they felt justified in responding with hate speech and online harm of their own. This illustrates how the blurry, unclear scope of hate speech can accelerate its spread.
Moreover, some individuals may regard remarks that others find hateful as legitimate expressions of opinion, as Rowling herself may have done. She may have viewed her statements as personal viewpoints rather than hate speech, whereas transgender groups interpreted them as hate speech and as disrespect towards their community, which fueled the hate speech directed at her online. Flew (2021) also notes that some hate speech can pass as “normal” language, making it difficult for people to recognize a toxic space. Coupled with the vague definition of hate speech, this helps explain the public’s varying interpretations of Rowling’s remarks.
Amid this chaotic debate, social media platforms face significant challenges in distinguishing and moderating hate speech. The ambiguity complicates the task of platform moderators in identifying and addressing hate speech, ultimately reducing the likelihood that it will be mitigated (Sinpeng et al., 2021). In Rowling’s case, the online space occupied by her and the transgender community became highly tumultuous, yet the platform took no action against the hate speech directed at Rowling and seemed equally hesitant to classify her statements as hate speech against the transgender community, remaining a delicate, silent spectator.
The consequences of hate speech and harm create a closed loop
Digital social media platforms have granted greater freedom of expression than ever before. The convenience of electronic devices as carriers of information gives different groups, ethnicities, and individuals equal opportunities for expression, seeming to imply that everyone has the right to free speech and to advocate different ideas and thoughts. Yet hate speech functions as another way of silencing those with opposing views. The world is inherently diverse, and people of different cultures and groups naturally develop diverse perspectives, thoughts, and beliefs (Evolvi, 2019). In the case of J.K. Rowling, who faced hate speech from netizens, Rowling later clarified that she did not discriminate against the transgender community but rather hoped to avoid further squeezing the already scarce spaces and resources available to women. In that sense, Rowling was also speaking up for women, a historically marginalized group.
On the other hand, online hate speech deepens inequality. Wilhelm and Schulz-Tomančok (2023, p. 15) argue that the strong discourse control created by new media diminishes victims’ sense of self-worth and subjects them to further external pressure, rejection, and hatred. Marginalized groups subjected to this discursive suppression struggle to achieve equal status despite having formally equal opportunities to speak, and they face more hate speech and direct or indirect discrimination. Hate speech itself limits marginalized groups’ right to free expression, while new media platforms’ rhetoric of equality conceals the fact that these groups often remain voiceless in communication. Online harm and hate speech therefore exacerbate the inequality of marginalized groups (Matamoros-Fernández, 2017).
An interesting observation emerges here. The controversy between J.K. Rowling, a woman, and the transgender community is not only a clash of personal viewpoints but also reflects the complex relationship between two marginalized groups. Transgender people, as a minority group, face discrimination and hate speech online; conversely, Rowling, as a woman, may also face attacks from transgender individuals, and instances of oppression and hate speech against women on social media are countless. As discussed above, this situation reflects how the unclear definition of hate speech creates conflicts between different social groups: transgender people may read Rowling’s comments as derogatory towards their community and respond with hate speech against her.
Meanwhile, hate speech from transgender individuals against Rowling may provoke discontent among feminists, who may see the attacks as unfair treatment of women. This internal contradiction of hate speech can further divide marginalized groups, increasing societal tension and conflict. In short, the unclear definition of hate speech and weak, inefficient regulation exacerbate conflicts between different social groups: some groups mobilize against perceived hate speech, while others feel their speech has been wrongly classified as hate speech. This misunderstanding and opposition escalate intergroup conflicts and spread more cyber harm and hate speech online. Hate speech also effectively silences marginalized and minority groups, depriving them of the ability to voice their thoughts and opinions and thus deepening the inequality between them.
Regulation and Measures
To curb the continued growth of hate speech and cyber harm, digital media platforms could adopt similar or shared standards in practice to delineate the scope of hate speech and reduce conceptual ambiguity (Riedl et al., 2022). Platforms can analyze the main components and criteria of hate speech and distinguish it from the free expression of viewpoints, provocation, and merely offensive speech, so as to protect citizens’ right to express themselves freely. Governments, for their part, can enact international laws and regulations to govern hate speech on digital platforms, since spontaneous moral self-supervision is unreliable; such legal frameworks may protect freedom of expression in cyberspace, reduce hate speech, and foster positive interaction rather than a vicious cycle. Platforms must also reassess their responsibilities and establish an active moderation system, clearly defining their obligations as neutral actors in the public sphere. They should regularly disclose systematic risk assessments of their efforts against online hate speech and report the measures they have taken, and they should respond quickly to users’ reports of hate speech, demonstrating their commitment to tackling hate speech and online harm.
Conclusion
Overall, when the definitions of hate speech and online harm are unclear, people interpret speech differently according to their own feelings, which breeds doubt about each other’s intentions and causes harm. When one unfairly treated group encounters another, and both want more respect and fairness, unclear definitions and weak rules on social media can widen the gap and hostility between them, generating more hate speech and making it harder for both to strive for fairness and justice. To improve this situation, social media platforms need to do their part by studying and setting out exactly what hate speech and online harm mean. They should also work with governments to establish fair rules to control hate speech, build strong systems to monitor their sites for hate speech and online harm, and deal quickly with reports from users.
Reference list
Cowie, H., & Myers, C. A. (2023). Cyberbullying and Online Harms: Preventions and Interventions from Community to Campus (1st ed.). Taylor & Francis. https://doi.org/10.4324/9781003258605
Evolvi, G. (2019). Islamexit: inter-group antagonism on Twitter. Information, Communication & Society, 22(3), 386–401. https://doi.org/10.1080/1369118X.2017.1388427
Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 91–96). Polity.
Lu, J., & Yu, X. (2020). Does The internet make us more intolerant? A contextual analysis in 33 countries. Information, Communication & Society, 23(2), 252–266. https://doi.org/10.1080/1369118X.2018.1499794
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Riedl, M. J., Whipple, K. N., & Wallace, R. (2022). Antecedents of support for social media content moderation and platform regulation: the role of presumed effects on self and others. Information, Communication & Society, 25(11), 1632–1649. https://doi.org/10.1080/1369118X.2021.1874040
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
Wilhelm, C., & Schulz-Tomančok, A. (2023). Predicting user engagement with anti-gender, homophobic and sexist social media posts – a choice-based conjoint study in Hungary and Germany. Information, Communication & Society, 1–20. https://doi.org/10.1080/1369118X.2023.2275012