People take part in a ‘March For Palestine’ (Google, 2023)
Introduction
Nowadays, the popularity of the Internet has changed not only how people live their daily lives but also how they communicate with one another. As the ways of communicating online have diversified, what began as the expression of opinion has gradually given way to a tendency to vent negative emotions at will (Burnap & Williams, 2016). In this process, language itself has become a form of violence (Sinpeng et al., 2021), and hate speech has victimized innocent people. However, some voices argue that such expression is a free and reasonable exercise of speech and that no act of substantial harm is involved (Sinpeng et al., 2021). The boundary between freedom of speech and hate speech remains blurred.
Since 2022, TikTok hashtags expressing support for Palestine and opposition to Israel have prompted many governments, organizations, and online users to allege that such remarks constitute hate speech against Jews (Jennings, 2023). This issue is discussed in depth in the case study below. Debates about whether expressing support for Palestine amounts to hate speech have spread beyond TikTok to other social media platforms, such as X and Instagram. As a result, many social networking sites and platforms face challenges in moderating users’ language, pressure that comes partly from local governments with different legal and political backgrounds. Despite continuous advances in artificial intelligence, platforms still find it difficult to distinguish between users’ expressions, because the definition of hate speech becomes highly complex across different contexts and targets. The following sections cover the definition of hate speech, a case study, further considerations, and a conclusion.
How should we understand and define hate speech?
In general terms, hate speech is a form of expression that causes psychological and physical harm to its victims. Given the long-term harm it can cause, government departments and platforms need to strengthen the supervision and regulation of hate speech.
Legally, hate speech has long been clearly defined. The International Covenant on Civil and Political Rights stipulates that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law” (as cited in Flew, 2021, p. 117). From an academic perspective, hate speech is defined as incitement to hatred through verbal acts (Sinpeng et al., 2021). Parekh (2021) further identifies three basic characteristics of hate speech. Although these characteristics are not examined in detail here, they suggest that a fairly clear understanding of hate speech already exists. Moreover, the harm that hate speech causes to individuals is real. Flew (2021) notes that hate speech violates a person’s basic human rights and spreads at a viral rate in cyberspace, and that its victims tend to be marginalized groups.
However, the banning of pro-Palestinian voices online has created new dilemmas for understanding hate speech. As these voices have gained traction, some netizens have begun to argue that such expression is not hate speech at all, and some have gone further and questioned the regulatory mechanisms of certain social media platforms and the relevant government policies and regulations. The following section analyses in depth how this case has changed people’s understanding of hate speech.
Case study: Are pro-Palestine voices hate speech?
As tensions between Palestine and Israel escalated, the U.S. government, which supports Israel and supplies it with various weapons, began moving to regulate certain social media platforms. Among the most controversial recent events are the passage of a bill in the U.S. Congress to ban TikTok and a video released by TikTok CEO Shou Zi Chew urging American users to support TikTok’s continued operation. Growing pro-Palestinian voices are one of the triggers for the TikTok ban (Harwell & Lorenz, 2023). Critics have pointed out that the U.S. government and some Republican lawmakers regard TikTok as a political tool used by the Chinese government to manipulate the speech of American netizens (Jennings, 2023). TikTok responded that the prevalence of pro-Palestinian content among American users is not determined by the platform’s algorithm but by the will of the users themselves; in other words, content is surfaced according to user preference rather than algorithmic steering (TikTok, 2023). Meanwhile, although pro-Palestinian voices also appear on other social media, the U.S. government maintains that, because TikTok’s parent company is based in China, there is reason to suspect that Chinese authorities are using the platform to push American teenagers toward sympathy with Hamas (Harwell & Lorenz, 2023; Jennings, 2023). From this point on, speech on social networks seems to carry more labels: even though some of this language does not fit earlier definitions and understandings of hate speech, in this case a new interpretation of hate speech emerges, one that could also affect social media and freedom of expression.
TikTok has made clear that it firmly rejects all hate speech, including any form of antisemitic content, and that such content will not be allowed to appear on its platform. To that end, TikTok has taken a series of active measures. For example, because most TikTok content consists of short, dynamic videos, the company uses computer vision models to detect possible visual signals of hateful content, which requires advanced image-filtering technology. For textual content posted on the platform, TikTok also actively identifies language that may reflect extremist ideologies. Beyond these automated systems, TikTok regularly trains its employees to ensure that they are highly sensitive to any hateful behaviour and symbols and reject all bias (TikTok, 2023).
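To illustrate why this kind of automated screening is so difficult, the sketch below is a deliberately naive, hypothetical keyword filter written in Python. It is not TikTok’s actual system, whose internal details are not public; the blocklist terms, the Post class, and the flag_for_review function are illustrative assumptions only. Its point is that a keyword match carries no information about intent, target, or context, which is precisely why automated moderation and the definition of hate speech remain contested.

# Illustrative sketch only: a naive keyword screen, not TikTok's real moderation pipeline.
# Real systems combine machine-learned text and vision classifiers with human review;
# this toy example shows why keyword matching alone cannot judge context or intent.
from dataclasses import dataclass

# Hypothetical terms treated as signals of possibly hateful content.
BLOCKLIST = {"exterminate", "subhuman"}

@dataclass
class Post:
    author: str
    text: str

def flag_for_review(post: Post) -> bool:
    # Flag a post if any blocklisted term appears, ignoring punctuation and case.
    # Note that counter-speech quoting an insult is flagged just like the insult itself.
    words = {w.strip(".,!?\"'").lower() for w in post.text.split()}
    return bool(words & BLOCKLIST)

if __name__ == "__main__":
    examples = [
        Post("user1", "They called us subhuman and we will not be silenced."),  # counter-speech, still flagged
        Post("user2", "Standing in solidarity with civilians in Gaza."),        # no keyword, not flagged
    ]
    for p in examples:
        print(p.author, "->", "flag for human review" if flag_for_review(p) else "no automated flag")

The limitation this sketch exposes, that identical words can be hateful, reportive, or resistant depending on who speaks and in what context, is exactly the gap that the governance debates discussed below try to address.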
Although TikTok has gone to considerable lengths to prove that it firmly resists hate speech and anti-peace ideologies, the U.S. Congress still approved the TikTok ban bill (TikTok, 2023). Some business leaders with Jewish backgrounds have directly accused TikTok of creating the largest antisemitic movement since the Nazis by allowing pro-Palestinian voices (Jennings, 2023). Since the ban’s approval, law professors at the University of Chicago have argued that the U.S. authorities’ actions against pro-Palestinian speech amount to a brutal crackdown, even as the authorities continue to signal to the American public that they are resisting hate speech on TikTok (Malik, 2023). This argument has prompted a rethinking of the definition of hate speech, and the ban on TikTok seems to reflect a growing possibility that hate speech is being redefined. For now, in this case, whether a post on social media is judged to be hate speech seems to depend on whether the posted content supports Israel or Palestine.
Considerations
Who really cares about hate speech? Who really cares about the lasting harm suffered by its victims? Should we focus on hate speech itself or on the motives and purposes behind it? The case above raises new questions about internet governance. Should online platforms address hate speech alone, or should they also consider underlying ideologies, linguistic and cultural contexts, and the attitudes of internet users? Is the goal of internet governance to manage speech itself or to address broader issues? These questions may well prompt every internet user to reflect deeply.
Firstly, the governance of online hate speech must stand for justice. While social media is rife with racism and hate speech, which may contribute to the growth of terrorist forces, this does not mean that one-sided support for Israel is commendable. Speech should be appropriately restricted and monitored, and all forms of violence, discrimination, and injustice should be avoided. If all pro-Palestinian voices were classified as hate speech, it would do great harm to the Palestinian people. Despite the shrinking of their territory, Palestinians have struggled to express their concerns on the Internet, which helps strengthen their sense of identity and national confidence (Shehadeh, 2023). The internet was originally built to ensure each individual’s freedom of speech (Sinpeng et al., 2021). Labelling Palestinian voices as hate speech not only deprives them of their voice but can also make them feel marginalized and isolated from society. Such actions not only exacerbate division and hostility but could also lead to further conflict and instability, posing a significant risk to the Palestinian people. Governments, tech companies, and internet users may need to treat every expression objectively and without bias. However, when dealing with politically sensitive topics and issues of racial discrimination, ensuring freedom of expression is challenging.
Secondly, in a context where freedom of speech is often emphasized, perhaps we should also pay more attention to the underlying motives behind each expression. Prompted by this case, we need to consider more deeply who the critics of hate speech are and where they stand in terms of power dynamics, social relationships, and cultural backgrounds. This aligns with the argument of Sinpeng et al. (2021) that contemporary technology companies struggle to detect and manage a vast array of speech content within dynamic linguistic environments shaped by language, culture, history, and politics. In addition, some policymakers hold significant authority in enacting laws, which underscores the importance of public scrutiny and oversight in collectively managing online hate speech. Favoring absolute freedom of speech while disregarding the rights and dignity of victims as citizens transforms online platforms from arenas of free expression into instruments of harm. It therefore becomes imperative to examine the regulatory mechanisms of these platforms and to ask whether governments should intervene in overseeing and managing expression on digital platforms. As Wolfsfeld et al. (2013) argue, the role of social media must be evaluated in light of its political context; understanding the impact of social media on the Palestinian people must therefore be grounded in the political context of Palestine.
Thirdly, it is important to focus not only on hate speech itself but also on the impact it can have on the Palestinian people. It is worth remembering that the battlefield between Israel and Hamas lies not only in Palestine but also in cyberspace. Any statement made at this time is sensitive and potentially deadly. Who will be affected? Who will be seen? The suppression of pro-Palestinian rhetoric is detrimental to the well-being, identity, dignity, and ethnicity of the Palestinian people. Governing online hate speech therefore means broadening the focus to include trust, safety, fairness, responsibility, and the public interest.
Conclusion
In conclusion, this article introduces the standard definition of hate speech and offers several perspectives to help readers consider online hate speech in different ways. While most people may not be familiar with the complexities of online governance, the article seeks to emphasize the importance of digital platform governance. It also discusses the case of the American ban targeting pro-Palestine voices on TikTok and explores the considerations arising from it. It is hoped that this article sparks readers’ interest in online hate speech and in digital policy and governance. What is hate speech? Who is suffering? Who can be seen? Who is responsible?
References:
Burnap, P., & Williams, M. L. (2016). Us and them: Identifying cyber hate on Twitter across multiple protected characteristics. EPJ Data Science, 5(1). https://doi.org/10.1140/epjds/s13688-016-0072-6
Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91–96). Polity.
Harwell, D., & Lorenz, T. (2023, November 2). Israel-Gaza war sparks debate over TikTok’s role in setting public opinion. Washington Post. https://www.washingtonpost.com/technology/2023/11/02/tiktok-israel-hamas-video-brainwash/
Jennings, R. (2023, December 13). TikTok isn’t intentionally pushing pro-Palestine content to young Americans. Vox. https://www.vox.com/culture/23997305/tiktok-palestine-israel-gaza-war
Malik, K. (2023, December 3). Solidarity with Palestinians is not hate speech, whatever would-be censors say. The Guardian. https://www.theguardian.com/commentisfree/2023/dec/03/freedom-expression-imperilled-when-speakers-cancelled-whether-left-or-right-gaza
Shehadeh, H. (2023). Palestine in the Cloud: The construction of a digital floating homeland. Humanities (Basel), 12(4), 75. https://doi.org/10.3390/h12040075
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland. https://doi.org/10.25910/j09v-sq57
TikTok. (2023, November 13). The truth about TikTok hashtags and content during the Israel-Hamas war. Newsroom | TikTok. https://newsroom.tiktok.com/en-us/the-truth-about-tiktok-hashtags-and-content-during-the-israel-hamas-war
Wolfsfeld, G., Segev, E., & Sheafer, T. (2013). Social media and the Arab Spring. The International Journal of Press/Politics, 18(2), 115–137. https://doi.org/10.1177/1940161212471716