Kanye’s public anti-Semitic remarks: Revealing X’s growing moderation issues with hate speech

As social media has become a primary medium of communication, hate speech on these platforms has surged in recent years, with racial discrimination, gender-based attacks, and religious hatred spreading rapidly. Hate speech has become an increasingly serious and urgent problem for regulators and platforms alike: online hate incidents are rising worldwide and are intertwined with misinformation and extremist political content (Sinpeng et al., 2021). When such speech comes from people with high public influence, the consequences are particularly serious. Kanye West posted a series of anti-Semitic remarks on the social platform X (formerly Twitter), including publicly declaring “I am a Nazi” and “I love Hitler”; these posts reached millions of people within hours and quickly drew widespread criticism (Thomas, 2025). Although Kanye paid a price afterward, facing legal proceedings and the termination of business partnerships, the Jewish community targeted by his remarks still suffered real harm. This article analyzes the impact of Kanye West’s anti-Semitic tweets, explains how his behavior inflicted structural violence and actual harm on the victimized group, critically examines the misreading of hate speech as free speech or satire, and considers what the incident reveals about the inadequacy of X’s moderation of malicious speech as a mainstream social media platform. This article argues that digital platforms are not neutral spaces, and that effective regulation is crucial to curbing the normalization and amplification of hatred.

First, Kanye’s anti-Semitic tweets are not merely controversial statements or emotional outbursts online; they are a typical example of how hate speech causes real harm both on and off the Internet. The speech directly harmed the Jewish community. It was not purely expressive but largely performative, stripping victims of their sense of security (Barendt, 2019). It caused constitutive harm: harm enacted in the very act of speaking, as distinct from the further harms the speech goes on to cause (Gelber & McNamara, 2015). The gravest of these harms is the damage to the dignity and equal standing of the targeted individuals and groups (Gelber & McNamara, 2015). For a Jewish community already facing discrimination, the remarks deepened existing fears: 75% of Jewish respondents believe anti-Semitism has worsened over the past five years, and more than 90% say some form of anti-Semitism exists in American society (Reingold & Reznik, 2024). After Kanye West made his anti-Semitic remarks, anti-Semitic incidents erupted across the United States, including the “Goyim Defense League” hanging a “Kanye is right” banner over a Los Angeles highway, and pro-Kanye graffiti and harassment appearing in Massachusetts schools (Thune, 2024). These incidents not only reveal anti-Semitism in American society but also highlight the powerful influence of celebrity speech. As a global celebrity, Kanye’s posting of anti-Semitic content on a public platform was not a private act but a symbolic permission for others to engage in discriminatory speech and behavior online and offline, compounding the harm of hate speech.
After his social media posts were published, the Anti-Defamation League recorded at least 30 anti-Semitic incidents directly referencing West, many using the hashtag “You are right” to endorse his anti-Semitic claims (Reingold & Reznik, 2024) (Figure 1). In Kanye’s case, his influence legitimized anti-Semitic speech across multiple platforms, inducing imitation and creating a hostile atmosphere. Such structural violence should not be equated with the ordinary expression of opinion: amplified by digital platforms and celebrity culture, it is often rendered entertaining and even normalized, and therefore needs to be restrained through social norms and platform governance.

(Figure 1)

As described above, the cultural power embedded in Kanye’s celebrity image not only increased the exposure of anti-Semitic speech but also contributed to the normalization of hatred in digital space, even allowing that speech to be reframed as free expression. Classifying hate speech as free speech conceals its oppressive consequences for vulnerable groups. Kanye’s remarks spread rapidly in the form of memes, comments, and hashtags, triggering a wave of imitation and blurring the line between free speech and collective harassment. The development of personal computing, networking, and the Internet in the United States has been deeply shaped by particular cultural forces: these technologies often carry a libertarian inflection and tend to abstract technology from historical context, social difference, and embodied experience (Matamoros-Fernández, 2017). To protect all users’ right to speak, platform policies include various safeguards; as a result, X does not tolerate outright insults yet explicitly recognizes a role for parody and humor in its policy. Using humor as a shield for free speech, however, is problematic for those being insulted, because cloaking racist and sexist comments in sarcasm and irony is a common online practice that promotes discrimination and harm (Matamoros-Fernández, 2017). In a context where anti-Semitism is already widespread in the United States, online jokes and memes frequently trade in Jewish stereotypes and anti-Semitic tropes. This leads people to downplay the severity of anti-Semitism, treating it as merely a joke, and thereby subtly deepens hatred toward, and harm to, the Jewish community.
In this way, when anti-Semitism becomes joke material, attention shifts away from its harmfulness, and it is subtly legitimized without people realizing it (Reingold & Reznik, 2024). In a society suffused with racism, racist speech acquires authority in this context; it puts the targeted group at a disadvantage, legitimizes discrimination against them, and strips them of power (Sinpeng et al., 2021). In practice, platforms often decline to moderate influential users on free-speech grounds. This selective enforcement exacerbates structural inequality, protecting the speech of the powerful while suppressing the voices of the weak. When hate speech comes from celebrities and is condoned by the platform, it marks certain groups as acceptable targets, reinforcing systemic oppression.

At the same time, in this incident the social platform itself became an important driver of the spread and legitimization of hate speech, through a combination of regulatory failure, algorithmic preference, and vague speech policy. Twitter has stated that its mission is to give everyone the power to create and share ideas and information, so that everyone can express their opinions and beliefs without barriers; it frames users’ freedom of expression as a human right, holding that everyone has the right to speak and to use their voice, and that differing opinions are permitted (Konikoff, 2021). However, Kanye’s posts went beyond the scope of free speech and constituted hate speech, which is harmful in itself. Rather than containing this content, X further amplified it. The platform racism exposed by the Kanye incident operates through X’s affordances, policies, algorithms, and corporate decisions, making the platform a tool for amplifying and producing racist discourse (Matamoros-Fernández, 2017). Kanye’s tweets received millions of interactions in a short time and became a global trending topic. Yet X allowed the anti-Semitic slurs to remain on the site, intervening only later, when West began posting pornographic video links from his account (Thomas, 2025). This slow response reflects a systemic failure of X’s platform governance. “Platform racism” is reflected not only in user behavior but is also embedded in the platform’s policy settings, including vague content-moderation rules and inconsistent enforcement standards (Matamoros-Fernández, 2017). Twitter generally applies a post-hoc review mechanism to hateful and abusive speech: posts go public immediately without review, and the platform can only delete problematic content afterward (Konikoff, 2021). Twitter even transfers part of the moderation process to users, asking them to identify and flag violating content so it can be evaluated against site rules (Konikoff, 2021). This approach not only lacks deterrence but also makes it easy for the platform to place free expression above the safety and rights of protected groups (Konikoff, 2021).

Twitter’s hate policy leans too heavily on the discourse of “freedom of expression” and ignores the actual harm hate speech inflicts on marginalized groups. Its mechanisms mean the policy works only if users actively report hateful behavior, forcing targeted users to become the gatekeepers of online hate and abuse (Konikoff, 2021). This moderation inaction is itself part of systemic inequality, and improving moderation must begin at the institutional level: only by institutionalizing moderation responsibility and humanizing algorithm design can a platform genuinely take responsibility for maintaining the fairness and dignity of public space. The current governance framework remains inadequate. Regulators worldwide face great difficulty identifying, defining, and sanctioning online hate, while platforms lack transparency and often neglect dialogue with victimized groups (Sinpeng et al., 2021). Effective regulation requires multi-stakeholder mechanisms involving government, non-governmental organizations, academia, and users (Sinpeng et al., 2021). Content-moderation policy should focus on harm reduction and prioritize the safety and dignity of vulnerable groups. Platforms should not rely solely on after-the-fact deletion but should actively improve moderation algorithms and mechanisms for detecting and transparently reporting online bias. Beyond the Kanye incident, we should recognize the cultural and symbolic power of celebrities and apply stricter standards to their speech, in proportion to its greater influence and consequences.

In conclusion, Kanye West’s anti-Semitism incident reveals the complexity and harm of hate speech in the digital age. As a celebrity, Kanye’s speech symbolically legitimized discrimination and hatred, fueling structural violence against the victimized group. Against a backdrop of existing anti-Semitism, this not only undermined victims’ sense of security and dignity but also symbolically marked out a group boundary that could be attacked by default, inflicting real insult and harm. Throughout the episode, the failure of X’s moderation mechanisms and the platform’s over-reliance on “freedom of speech” together formed the structural conditions that amplified and condoned hate speech. In a platform culture dominated by techno-libertarianism, satire, imitation, and similar forms are readily abused to disguise hatred; Twitter thereby not only weakened protections for vulnerable groups but also blurred the line between free speech and collective harassment. Facing increasingly serious online hate speech, platforms should establish more rigorous, transparent, and people-centered regulatory mechanisms, and involve multiple stakeholders in building a fairer and safer space for public discourse.

Reference List:

Barendt, E. (2019). What Is the Harm of Hate Speech? Ethical Theory and Moral Practice, 22(3), 539+. http://dx.doi.org/10.1007/s10677-019-10002-0

Dias, T. (2024). Finding Common Ground: The Right to be Free from Incitement to Discrimination, Hostility, and Violence in the Digital Age. Elsevier BV. https://doi.org/10.2139/ssrn.4928341

Konikoff, D. (2021). Gatekeepers of toxicity: Reconceptualizing Twitter’s abuse and hate speech policies. Policy & Internet. https://doi.org/10.1002/poi3.265

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130

Reingold, M., & Reznik, S. (2024). Heartless: Jewish teens, antisemitism, and unfollowing Kanye West. Journal of Jewish Education, 1–21. https://doi.org/10.1080/15244113.2024.2388513

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3

Thomas, B.-B. (2025, February 12). Kanye West sued and dropped by talent agency over antisemitic slurs. The Guardian. https://www.theguardian.com/music/2025/feb/12/kanye-west-sued-dropped-by-talent-agency-and-retail-platform-over-antisemitic-remarks

Thune, N. (2024, March 25). Celebrity worship, hate speech and the alt-right: Consequences of Kanye’s antisemitism. The Mass Media (University Wire). https://www.proquest.com/wire-feeds/celebrity-worship-hate-speech-alt-right/docview/2976402615/se-2
