

Imagine a scenario: one of your friends posts a tweet in support of minority rights, and it is quickly followed by a flood of malicious comments, private messages and threats of physical violence. Yet when you report the abuse, the platform tells you that it "does not violate community standards." You have probably experienced something like this before. So why have we come to treat it as normal? More importantly, what mechanism lies behind it?
From Digital Utopia to Algorithmic Prison: The Transformation of the Spirit of the Internet
The Internet was once treated as a utopia: a space for free, unfettered expression. Early digital pioneers saw the web as a democratizing tool through which anyone could communicate and connect. Over time, however, a very different reality took hold. Today the web has in many ways become a tightly controlled algorithmic ecosystem, one that commercializes users' attention while disregarding their emotional wellbeing.
Social media platforms in particular have shifted from passive intermediaries to powerful content managers. What they show or hide is driven not by the public interest or fairness but by opaque recommendation systems optimized for engagement and platform revenue. This evolution from digital freedom to algorithmic control has serious consequences for vulnerable groups, allowing hate speech to flourish while protective speech is suppressed.
Assumptions that shouldn’t be ignored
Many people assume that social media platforms are merely "conduits of information" and therefore bear little responsibility for the hate speech that appears on them. This assumption is inherently problematic. Far from being neutral, platforms use algorithms to proactively recommend content that is likely to attract attention (and often controversy) in order to increase user engagement and advertising revenue (Woods & Perrin, 2021). In other words, the commercial interests of platforms are routinely prioritized over the safety of users, especially marginalized groups.
Regulatory failures in historical and social contexts
The rise of social platforms is inextricably linked to digital globalization. While technological advances have made information flow faster, they have also made it easier for hate speech to proliferate. The case of Facebook in the Asia-Pacific region is particularly egregious. Research has shown that platforms' globally uniform content moderation standards are often ineffective at identifying localized hate speech, resulting in frequent attacks on LGBTQ+ people and ethnic minorities in places such as India and Myanmar (Sinpeng et al., 2021). In Myanmar, for example, the rapid spread of hate speech on social media directly contributed to real-world violence against the Rohingya.
“The Rohingya just dream of living in the same way as other people in this world…but you, Facebook, you destroyed our dream.” –Mohamed Showife, Rohingya community member
In November 2023, Elon Musk, owner of X (formerly known as Twitter), sparked widespread controversy by publicly endorsing a post on the platform that was widely condemned as antisemitic. The incident led several major advertisers to suspend advertising on the platform, reflecting the dilemma social media platforms face in balancing content moderation and freedom of speech (Montgomery, 2023).

These examples reveal a pervasive gap in how social platforms are regulated worldwide: policies fail to account for the cultural, social and political specificities of each locale, allowing hate speech to run rampant.
These regulatory shortcomings are not simply technical failures—they represent a profound neglect of human rights in digital spaces. When rules are not adapted to protect the most vulnerable, platforms become complicit in the harms they enable.
Why is current policy inadequate?
The root cause of the inadequate regulatory framework lies in a serious misalignment between platforms' commercial interests and the public interest. Platforms tend to avoid strict moderation measures because they raise operational costs and risks, such as the need to invest more manpower in content review and algorithmic adjustment. Strict moderation may also reduce user engagement, which directly affects advertising revenue.
The deeper reason is that the current digital economic model rests on maximizing user engagement, an attention economy that is naturally biased towards attention-grabbing and even controversial content, making it easier for hate speech and extreme views to spread and be reinforced. In the over-saturated information environment of the digital media economy, the imperative to "attract attention" has led to the sophisticated manipulation and capitalization of emotions, particularly anger, hate, and fear, which have become widely exploited resources (Boler & Davis, 2020).
As users, we are not only consumers but also products: every click, share and even pause becomes profitable data for the platform. Outrage, in other words, is often already built into the business model.
Another critical issue is the opacity of content moderation decisions. What counts as hate speech is often determined behind closed doors by platforms' internal teams, with no user representation or public accountability (Woods & Perrin, 2021). As a result, decisions about what speech is permissible often reflect commercial priorities rather than ethical or inclusive standards. This lack of transparency erodes user trust and further alienates marginalized voices.

This profit-driven failure to regulate further fuels the proliferation of hate speech, with far-reaching negative consequences for public discourse. For example, Yang Li, a well-known Chinese comedian, faced massive online attacks and boycott campaigns after making public comments about gender issues, and public pressure even led business partners to terminate their cooperation with her. As The Wall Street Journal put it, making fun of men in China comes at a cost: Chinese companies want to capitalize on the growing market for women-oriented content without alienating male consumers (Lu, 2024). The episode demonstrates the vulnerability of marginalized groups, and of anyone voicing controversial views, under the current regulatory environment. Their voices are forced to shrink or disappear, further exacerbating social division and injustice.
Take Weibo, China's major microblogging platform, as an example. During the backlash against Yang Li, a hashtag calling for a boycott of her brand endorsements stayed on Weibo's trending list for 37 hours and attracted 1.2 billion views, yet large numbers of misogynistic posts attacking her were never deleted. Why? Because Weibo's "heat" algorithm gives extra recommendation weight to controversial topics: the more polarized the abuse, the more traffic it brings. The platform is both the referee and the biggest winner in this "witch hunt".
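To make that incentive concrete, below is a minimal, purely hypothetical sketch of an engagement-weighted ranking score, written in Python. The field names, weights and formula are invented for illustration and are not Weibo's actual "heat" algorithm; the point is simply that when a score counts every comment and share the same regardless of tone, an abusive pile-on can outrank calmer content while user reports barely register.

```python
# Illustrative sketch only: hypothetical weights and fields, not any
# platform's real ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    reports: int  # user complaints, e.g. about abusive content

def heat_score(post: Post) -> float:
    """Score a post purely by the volume of engagement it generates.

    Angry comments and shares count exactly as much as supportive ones,
    so a post that provokes a polarized pile-on outranks a calmer one,
    and complaints barely dent the score.
    """
    engagement = 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments
    penalty = 0.5 * post.reports  # reports weigh far less than engagement
    return engagement - penalty

calm_post = Post(likes=900, shares=50, comments=100, reports=0)
abusive_pile_on = Post(likes=400, shares=300, comments=1500, reports=200)

print(heat_score(calm_post))        # 1250.0
print(heat_score(abusive_pile_on))  # 4200.0: the abusive thread still ranks higher
```

Under a scoring rule like this, removing abusive posts directly reduces measurable "heat", which is one way to see why moderation and revenue pull in opposite directions.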
Similarly, LGBTQ+ users in Western countries continue to face systemic moderation failures. According to the 2024 GLAAD Social Media Safety Index, major platforms have consistently failed to enforce their own policies on LGBTQ+ safety. GLAAD notes that, despite years of campaigning, platforms have allowed anti-LGBTQ hate and disinformation to proliferate, while legitimate content from LGBTQ+ users is sometimes deleted or downgraded. The report warns that this continued inaction contributes to a climate of fear and censorship, silencing queer voices and undermining digital inclusion (GLAAD, 2024).
The impact of hate speech
Impact on individual victims
Mental health damage:
Hate speech is often insulting, demeaning and even threatening in nature, leaving victims anxious, depressed and fearful, and in some cases triggering post-traumatic stress disorder (PTSD) or suicidal thoughts.
Social isolation:
Victims may reduce their social interactions and isolate themselves due to fear of being attacked or harassed, which in turn affects their normal social life and mental health.
Identity victimization:
Ongoing hateful attacks can lead members of marginalized groups to doubt or deny their self-identity. This, in turn, contributes to low self-esteem, shame, and difficulty with self-acceptance.
Impact on marginalized groups and communities
Silence and marginalization of public discourse:
For fear of attack or retaliation, marginalized groups may choose to remain silent and avoid expressing their views on public platforms, reducing the diversity of voices in society and the breadth of public discourse.
Increased social division and polarization:
Hate speech aggravates misunderstanding and hatred between groups, eroding social cohesion and deepening divisions.
Increased risk of potential real-world violence:
Research has shown that online hate speech is often closely linked to real-life violence. For instance, the large-scale violence against the Rohingya in Myanmar was fuelled by the spread of online hate speech (Sinpeng et al., 2021).
Impact on society as a whole
Erosion of democratic values:
Hate speech undermines the fairness and openness of public discourse. It weakens the foundational principles of liberal democracy and hinders rational dialogue and the exchange of opinions.
Increased risk to public safety:
The continued spread of hate speech may incite certain groups or individuals to extreme or even violent action, posing a long-term threat to public safety.
Increased economic costs and social burdens:
Society needs to invest more resources to deal with law and order issues, social conflicts, and public mental health problems caused by hate speech.
Impact on platform and network ecology
Deterioration of the information environment:
The proliferation of hate speech can make the online environment worse, with valuable information exchanges gradually being overshadowed by offensive speech, reducing the overall quality of information on the network.
Damage to platform credibility and commercial interests:
If a platform fails to manage hate content in a timely and effective manner, it may trigger public boycotts, advertiser withdrawals, and economic losses (e.g., the advertiser boycott that Twitter faced due to hate speech).
What we can do: Towards a fairer online environment
To truly and effectively combat the problem of online hate speech, the efforts of a single party are far from enough. The government, society and technology companies (social media operators) need to share the responsibility, each utilizing the strengths of their different roles to form a multi-level governance system.
Government level
Clarify the legal responsibility of platforms: Introduce a regulatory policy similar to the EU’s Digital Services Act (DSA) to compel platforms to proactively prevent and remove hateful content in a timely manner.
Establish an independent regulator: Ensure transparency and compliance in the review of platform content and prevent regulation from becoming a mere formality.
At the societal level
Enhance public education and awareness: Promote public education programs on the impact of hate speech and encourage the public to resist and report hateful content.
Community support and protection: Provide psychological, legal and social assistance services to marginalized groups affected by hate speech.
At the level of technology companies and platform operators
Algorithm Transparency and Auditing: Regularly publicize the operation of algorithms and subject them to independent third-party audits to reduce the amplification and spread of extreme content on the platform.
Enhanced Localized Moderation Capacity: Hire moderation teams that are familiar with local languages and cultures to improve the ability to identify and deal with localized hate content.
Beyond this, users can be empowered with agency through the creation of safe micro-spaces online. Examples include feminist internet projects and civic initiatives that promote collective digital literacy and accountability. These solutions emphasize that resistance does not have to be top-down; it can also begin with the re-empowerment of users.
Conclusion: What kind of cyberspace do we need?
The digital space should be a platform for the free expression of different voices, rather than allowing hatred and prejudice to continue to spread. The current lack of regulation not only leaves marginalized groups devastated and insecure, but also further divides society. Only with a deeper and better understanding of the logic of economic interests behind platforms, a clear understanding of the social responsibility of platforms, and the establishment of an effective regulatory framework can we truly provide a fair, safe and inclusive space for all users.
Sources
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific.
Boler, M., & Davis, E. (2020). Affective Politics of Digital Media. Routledge.
GLAAD. (2024, May 21). A letter from President and CEO Sarah Kate Ellis: 2024 Social Media Safety Index. GLAAD. https://glaad.org/smsi/2024/president-letter/
Lu, S. (2024, November 11). Making fun of men in China comes at a cost. The Wall Street Journal. https://www.wsj.com/world/china/making-fun-of-men-in-china-comes-at-a-cost-5dc3ca77
Montgomery, B. (2023, November 17). Elon Musk agrees with tweet accusing Jewish people of “hatred against whites.” The Guardian. https://www.theguardian.com/technology/2023/nov/16/elon-musk-antisemitic-tweet-adl
Woods, L., & Perrin, W. (2021). Obliging platforms to accept a duty of care.