Fan Practice or Fan Conflict? How Hate Speech and Online Harms Spread in Digital Communities

As the global fan economy booms, fan communities on social media have become both spaces for enthusiastic interaction and breeding grounds for harassment, discrimination, and cyberbullying.

Fan Practice: More Than Just Love, Sometimes Conflict

In March 2025, the promotion of the movie “Snow White” caused an uproar on the Internet. A discussion that should have focused on the diverse cast unexpectedly turned into a racial controversy, as a group of radical fans flooded social platforms with hateful posts. The episode was even dubbed the “racial storm caused by Snow White”, exposing how online fan spaces, once centers of cultural exchange, are rapidly becoming battlefields for ideological struggle.

The Adam Goodes controversy has previously exposed the issue of racism on digital platforms, where platform features and algorithms drive the spread of hate speech (Matamoros-Fernández, 2017).

The same dynamic played out with Snow White. As the controversy grew, racially charged comments were disguised as “jokes” or “criticism”, and the platforms’ recommendation mechanisms pushed them to ever larger audiences, ever faster. The incident reveals a fundamental challenge facing digital platforms: when engagement-driven algorithms amplify hate speech, where should the line between free speech and user protection be drawn?

Moreover, the difficulty in regulating such content stems from the nature of hate speech itself. The definition of it is neither universally accepted nor are individual facets of the definition fully agreed upon (MacAvaney et al., 2019). The concept transcends legal and cultural boundaries and is intertwined with aspects of social psychology, power dynamics, and the boundaries of free speech.

Between Expression and Exclusion: How Hate Speech Hides in Fan Communities

When Snow White’s marketing highlighted the diversity of its cast, part of the fan community launched a campaign against the casting. These arguments were dressed up in terms such as “authenticity” and “faithfulness to the original work”, covering up the underlying racial hatred.

The discussion quickly escalated from an online quarrel into an ideological struggle, exposing a deep uneasiness about “representation” within a traditionally homogeneous culture. In the seemingly open online fan space, identity differences can ignite fierce confrontation at any moment.

The controversy over the casting of Snow White is no accident; it is a typical example of the collision between cultural appropriation and identity politics within fandom. Indeed, somewhat ironically, many people across the world express hostility toward immigrants and immigrant cultures, as well as toward indigenous groups, while at the same time engaging with and appropriating from these other cultures (Lenard & Balint, 2019).

In the live-action version of Snow White, a Latina actress was chosen to play a character that has long been portrayed as white and Germanic. Despite this fresh casting choice, the film still sticks closely to its original Eurocentric storyline. This move has divided audiences—some have hailed it as a positive step towards diversity, while others have reacted with racially charged criticism online. Some critics say the movie uses diversity as a marketing tool without really addressing the hateful, racist backlash that often follows such changes.

Some fans criticize the casting of Snow White on the grounds of “originality” and “fidelity to the source material”, but in reality these appeals often serve as cover for racial animosity, and such cultural bias and appropriation reflect exclusionary historical narratives.

These debates go beyond the surface: when some fans push back against racial representation in the name of “film discussion”, they often create a toxic communication environment in which the voices of marginalized groups are quickly drowned out.

People of color who dare to speak out are often the first to bear the brunt, facing malicious attacks, slander, and even organized pile-ons, with the abuse packaged as “normal film debate.” This phenomenon is not new in fan culture: when the line between criticism and abuse blurs, harmful behavior takes advantage of the ambiguity and spreads.

And because so much of this happens through screens, it is easy for people to distance themselves from the harm they are causing. If one cannot see the emotional hurt wrought by one’s online hate speech, one may be more likely to downplay its significance (Brown, 2018). That’s part of what makes online harms in fan spaces so persistent: The damage is real but often invisible to those causing it.

This situation has left many marginalized fans without a sense of safety in fan spaces, while perpetrators, facing no effective constraints, have grown bolder. When extreme behavior is framed as a righteous act of “defending the fan community”, it suppresses dissenting voices in the community’s name; this not only silences diverse speech but also condones the spread of online hatred, further amplifying the power inequalities that already exist in the media ecology.

How Algorithms Fuel Prejudice: Amplifying Hate Speech Online

What makes the challenge of online governance even more complex is that the algorithms behind the platforms are not neutral. To moderate content at scale, developers have turned to machine learning systems trained on vast datasets of manually labeled text, aiming to automatically detect and flag toxic comments (Gorwa et al., 2020). But the rules these models learn come from historical data, and that data is itself biased.

In practice, this approach can backfire: if a kind of hate speech was common, and tolerated, in the past, the system may learn to treat it as acceptable even when it is plainly discriminatory; conversely, if someone criticizes such speech in fierce language, the system is more likely to flag the criticism as a violation. The result is that the very voices trying to oppose hatred and reduce online harm are the ones being suppressed.
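To make this failure mode concrete, below is a minimal, hypothetical sketch in Python; the tiny dataset, labels, and posts are invented for illustration and are not drawn from any real platform’s system. A toy toxicity classifier trained on biased historical labels simply reproduces the double standard: veiled hate learned as “acceptable” stays acceptable, while fierce counter-speech gets flagged.

```python
# A minimal, hypothetical sketch (not any platform's real system) of how a
# toxicity classifier inherits the bias baked into its historical labels.
# The tiny synthetic dataset below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical moderation decisions: veiled hate was labelled "ok" (0),
# while angry counter-speech against it was labelled "toxic" (1).
train_texts = [
    "just saying, some people don't belong in this fandom",          # veiled hate
    "it's only a joke, why are certain fans so sensitive",           # veiled hate
    "this racist 'joke' is disgusting and you should be ashamed",    # counter-speech
    "stop attacking the actress, your hate is vile",                 # counter-speech
]
train_labels = [0, 0, 1, 1]  # the biased labels the model will faithfully learn

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# New posts: with labels like these, the model tends to flag the fierce
# criticism while letting the veiled hate through.
for post in [
    "some fans just don't belong here, it's only a joke",
    "calling out this hate is disgusting behaviour and must stop",
]:
    verdict = "flagged" if model.predict([post])[0] == 1 else "allowed"
    print(f"{verdict}: {post}")
```

The point is not the specific model but the pipeline: whatever patterns moderators tolerated in the past become the statistical definition of “acceptable” going forward.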

In the live-action adaptation of “Snow White”, a clear problem comes to light. When users criticized the film’s “token casting and cultural misappropriation” in its choice of a Latina actress, the platform removed their posts via a keyword filter, labeling the content as “sensitive.” Meanwhile, overtly hateful comments circulated freely and were even amplified by the algorithm. This double standard not only allows racist rhetoric to proliferate unchecked, but it also silences marginalized voices and deepens online injustice.
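A similar problem appears at the simpler end of the tooling. The sketch below (with an invented word list and invented posts) shows how a blunt keyword filter can produce exactly this double standard: the critique is removed because it names racism explicitly, while veiled hate that avoids the listed words slips straight through.

```python
# A minimal, hypothetical sketch of a blunt keyword filter; the word list
# and example posts are invented for illustration only.
SENSITIVE_KEYWORDS = {"racist", "racism", "cultural misappropriation"}

def keyword_filter(post: str) -> str:
    """Remove any post containing a listed keyword, regardless of intent."""
    lowered = post.lower()
    if any(keyword in lowered for keyword in SENSITIVE_KEYWORDS):
        return "removed as sensitive"
    return "allowed"

posts = [
    "This is token casting and cultural misappropriation, full stop.",          # critique
    "Snow White should look like the 'real' one; some people just don't fit.",  # veiled hate
]
for post in posts:
    print(f"{keyword_filter(post)}: {post}")
```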

As public concerns about hate speech and false information grow, search engines and social media platforms, which are core hubs for global information dissemination, are facing increasing pressure and scrutiny (Caplan, 2018).

Platforms claim neutrality, insisting they are not responsible for user content. They often retreat to a passive “middle position”: unwilling to confront hate speech for fear of being accused of restricting free speech, yet ignoring the real-world harm its spread causes. This hesitation allows hatred to keep growing, and the cycle of unresolved online harms continues.

You might be wondering: why focus so much on fan practices? Are these not just entertainment communities?

In reality, fan culture plays a much bigger role. It is a space where people express their identities, form communities, and debate values. How fans are portrayed—often in dismissive or negative ways—reflects deeper social divisions, especially around gender, race, class, and age (Gray et al., 2017). These differences also show up in which fan communities people join and the media they engage with.

Take the controversy around Snow White, for example. On the surface, it seemed like a debate about whether the actors fit their roles. But underneath, fans were asking bigger questions like “who gets to be represented?” and “whose stories matter?” So when they discussed the lack of diversity in casting, they were engaging with broader social issues like race, colonial history, and gender inequality.

This is why fan communities cannot be underestimated: they are the frontier of cultural collision, where people practice expressing their opinions and finding their place. When a fandom fills with hate speech, it not only hurts individuals but also amplifies real-world social rifts. Managing fan communities is therefore no small task; it is essential to building a truly inclusive and fair digital world.

Governance Gaps and Global Dilemmas: Why Regulation Is Not Enough

Why haven’t governments simply stepped in to regulate this? In fact, several countries have already passed laws requiring platforms to remove harmful content more promptly. For example, after a failed attempt to push social media platforms to self-regulate, Germany adopted the Network Enforcement Act (NetzDG), which forces platforms to ensure that “obviously unlawful content” is deleted within 24 hours (Heldt, 2019).

However, the reality is far more complex, and these laws often struggle to keep up with the speed and intricacy of online communication. Hate often lives in a gray zone, masquerading as jokes, images, or supposedly “reasonable” questions, so rigid rules cannot always catch it. Each country also has its own rules, so platforms must enforce the strictest policies in some jurisdictions while remaining more relaxed in others. The result is a fragmented and ever-changing governance landscape.

Even more troubling, some regimes exploit anti-hate laws to muzzle dissent, turning regulations originally intended to protect the public into tools for suppressing criticism and silencing minority voices. Regulation, in other words, is a double-edged sword: without transparent oversight, laws meant to combat hatred can end up suppressing the very groups they were supposed to protect.

Therefore, the solution to online hatred cannot be found in superficial measures such as deleting posts and blocking keywords. We need more carefully designed approaches that curb real harm without flattening the diversity of voices, so that digital spaces protect the vulnerable rather than suppress them.

Rethinking Governance: Building a Resilient Digital Community

From the perspective of platform governance, recommendation algorithms should not be dominated by traffic-oriented logic or become a “catalyst” for hate speech. In the relentless pursuit of user attention, current technical architectures too often act as boosters for the spread of extreme content.

The amplification of extremist content by algorithms is currently a blind spot in social media regulation, and addressing it presents challenges for legislators (Whittaker et al., 2021). This suggests that platforms could establish a “value-calibrated” recommendation system, for example by reducing the traffic weight of hateful content and developing tools that pre-assess the harm of content, so that technology assumes more ethical responsibility in information distribution and shifts from the technical logic of “traffic first” to a governance framework of “value first.”
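As a rough illustration of what “value first” ranking could look like, here is a minimal, hypothetical sketch; the posts, scores, and penalty weight are invented, and real systems would be far more complex. Instead of sorting purely by predicted engagement, an estimated harm score pulls hateful content down the feed.

```python
# A minimal, hypothetical sketch of "value-calibrated" ranking; the posts,
# scores and penalty weight are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # predicted clicks/replies, in [0, 1]
    harm: float        # output of a harm pre-assessment model, in [0, 1]

HARM_WEIGHT = 2.0  # how strongly predicted harm is penalized (a policy choice)

def value_calibrated_score(post: Post) -> float:
    # "Traffic first" would rank by engagement alone; here the harm estimate
    # pulls provocative-but-harmful posts down the feed.
    return post.engagement - HARM_WEIGHT * post.harm

feed = [
    Post("thoughtful thread on the casting debate", engagement=0.4, harm=0.05),
    Post("veiled racist 'joke' about the lead actress", engagement=0.9, harm=0.6),
    Post("fan art appreciation post", engagement=0.5, harm=0.0),
]

for post in sorted(feed, key=value_calibrated_score, reverse=True):
    print(f"{value_calibrated_score(post):+.2f}  {post.text}")
```

The penalty weight here is a policy choice, not a technical constant: deciding how heavily predicted harm should count against predicted engagement is precisely where governance, rather than engineering, has to set the value.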

At the level of legal regulation, it is necessary to build a system of rules that is both precise and inclusive. On the one hand, legislation should clarify the core criteria for identifying hate speech, to prevent “one-size-fits-all” regulation from unduly squeezing freedom of speech.

On the other hand, cross-border legal coordination is also needed so that harmful online behavior remains subject to due constraints when it crosses borders, rather than escaping them through gaps between jurisdictions, balancing reasonable protection of freedom of speech with cross-regional protection of personal dignity.

The ultimate breakthrough is to make “respecting others” a conscious act for everyone when they are online. If we can pause for three seconds before sending a message and think about whether this sentence will make others feel attacked or belittled, cyberspace will have an “inner defense line” against hatred.

Victims of online hate speech exhibit a more pronounced feeling of insecurity outside the Internet. Since online hate speech exposes and attacks victims based on their personal characteristics and group affiliation, the victim and others must fear a (renewed) victimization by like-minded people of the perpetrator at any time, even outside the internet (Dreißigacker et al., 2024).

Therefore, those posting online should try to see situations from other people’s perspectives and gradually develop the habit of “thinking before speaking”, so that debate stays at the level of opinions and does not escalate into attacks on people.

When “do no harm to others” moves from a legal provision to a shared personal conviction, the governance of technology and law can truly take root and form a more resilient ecology of online civility.

It may not be possible to change platforms, laws, or even hearts and minds overnight, but it is possible to start by participating more consciously in discussions and exercising the right of expression more responsibly. Rather than letting hate speech quietly grow on online platforms, we can rebuild a public discourse grounded in humanistic care and ethical boundaries, one that serves not only as a protective net filtering out hate speech but also as a pillar supporting dialogue among diverse cultures in the digital age.

References

Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297–326. https://doi.org/10.1177/1468796817709846

Caplan, R. (2018, November 14). Content or Context Moderation? Data & Society. https://datasociety.net/library/content-or-context-moderation/

Dreißigacker, A., Müller, P., Isenhardt, A., & Schemmel, J. (2024). Online hate speech victimization: consequences for victims’ feelings of insecurity. Crime Science, 13(1). https://doi.org/10.1186/s40163-024-00204-y

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance. Big Data & Society, 7(1), 205395171989794. https://doi.org/10.1177/2053951719897945

Gray, J., Sandvoss, C., & Harrington, C. L. (2017). Fandom, Second Edition: Identities and Communities in a Mediated World. NYU Press. https://www.jstor.org/stable/j.ctt1pwtbq2

Heldt, A. (2019). Reading between the lines and the numbers: an analysis of the first NetzDG reports. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1398

Lenard, P. T., & Balint, P. (2019). What Is (the Wrong of) Cultural appropriation? Ethnicities, 20(2), 331–352. https://doi.org/10.1177/1468796819866498

MacAvaney, S., Yao, H.-R., Yang, E., Russell, K., Goharian, N., & Frieder, O. (2019). Hate speech detection: Challenges and solutions. PLOS ONE, 14(8), e0221152. https://doi.org/10.1371/journal.pone.0221152

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130

Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://pearl.plymouth.ac.uk/cgi/viewcontent.cgi?article=1221&context=sc-research
