The Algorithm Is Lighting Fires, Regulation Is Asleep, and We’re Just Watching: The Breakdown of Hate Speech Governance

From Online Controversy to Offline Harm: How Hate Speech Crosses the Screen

In 2019, two mosques in Christchurch, New Zealand, were attacked within minutes by an armed extremist, leaving 51 people dead. This horrific act of violence, which shocked the world, quickly spilled into the digital realm. The gunman livestreamed the massacre with a head-mounted camera, and the gruesome footage surfaced on Facebook, YouTube, and Twitter within hours, sometimes even recommended by algorithms to unsuspecting users. Platforms tried to block the uploads, but the combination of algorithmic amplification and user re-sharing kept the violent images resurfacing. Digital platforms and social media have greatly amplified hate speech and online harassment based on political or religious beliefs, appearance, race and other characteristics, and the resulting harm is now widely recognized as a serious and growing social problem (Flew, 2021). Events like Christchurch show that online platforms are not a “virtual entertainment paradise” divorced from reality; they are part of society itself, with the power to shape social emotions and actions. When hate speech spreads across these platforms, it doesn’t just influence opinions: it fuels prejudice, incites violence, and can even be weaponized as a tool for mobilizing real-world attacks.

Compounding the problem is a growing tendency among social platforms to relax content moderation in the name of free speech. In late 2022, Elon Musk acquired Twitter and slashed its content moderation teams, loosening controls on extremist rhetoric. Not long after, the platform saw a surge in racist, misogynistic, and homophobic content. Even though platforms say they’re improving their tech to deal with harmful content, a lot of hate speech still slips through—and is now thriving under the banner of free speech.

This raises a critical question: are digital platforms the guardians of information freedom, or are they accelerators of hate? When “free speech” becomes a shield for extremists, should we still cling to the idea of platform neutrality and do nothing?

This blog argues that self-regulation by platforms is no longer enough to tackle the growing problem of online hate speech. Governments must step in to establish clear legal frameworks, and the public must also take shared responsibility for maintaining a healthy and respectful digital public sphere.

Roles and Responsibilities of Social Platforms

Neutral or Complicit?

Social media platforms have profoundly transformed how we access information, gradually becoming the primary channels through which most people encounter news and public events in their daily lives. At the same time, the growth of social networks and the algorithmic use of big data allows communicators to push specific promotional content directly to the users most likely to accept and share it (Duan, 2023). But how is content actually delivered to users? The answer lies in algorithmic recommendation systems. These systems may seem neutral, aiming to offer more personalized and accurate suggestions based on user preferences. In practice, however, they often reinforce existing biases and amplify emotional reactions, sometimes pushing users toward more extreme viewpoints.

Facebook, one of the world’s largest social media platforms, has acknowledged in internal research that its recommendation algorithm promotes the spread of extremist information to a certain extent and may gradually push some users toward more radical views. In The Facebook Files, a 2021 investigation by The Wall Street Journal, leaked documents showed that Facebook executives were aware of these issues. They admitted that the algorithm could intensify emotional divisions and expose users to more provocative and polarizing content, all in an effort to increase engagement and keep users on the platform longer. Horwitz and Seetharaman (2020) had already reported that senior executives acknowledged the algorithm was not connecting people but driving them further apart. Algorithmic recommendations often begin with a user’s mild initial interest, such as clicking on a controversial post or joining a politically charged group, and then follow it with similar but progressively more aggressive content. Once this pattern takes shape, users can easily become trapped in an information bubble, where the content they encounter grows increasingly narrow and extreme, making it difficult to break out.
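To see how quickly that drift can happen, here is a deliberately simplified, hypothetical simulation in Python. None of the numbers come from Facebook or any real recommender; they are invented purely to show how a loop that always serves something slightly stronger than the last thing you engaged with ends up far from where you started.

```python
import random

# Toy model of the feedback loop described above. A user starts with a
# mild interest (intensity 0.2 on a 0-1 scale); each round the recommender
# serves an item slightly more intense than the last one engaged with,
# because intensity correlates with engagement. All numbers are invented.
random.seed(42)

def recommend(last_engaged_intensity: float) -> float:
    """Serve an item a little more intense than the last one clicked."""
    bump = random.uniform(0.0, 0.1)
    return min(1.0, last_engaged_intensity + bump)

intensity = 0.2  # a single click on a mildly controversial post
for step in range(1, 11):
    intensity = recommend(intensity)
    print(f"step {step:2d}: content intensity = {intensity:.2f}")

# After a handful of iterations the feed has drifted well beyond the
# user's starting point: the information bubble is a property of the
# loop itself, not of any single recommendation.
```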

YouTube has faced similar concerns. Its recommendation system has often been blamed for dragging users deeper into radical content: a person might begin by watching a mainstream political video and, after a few suggestions, find themselves viewing conspiracy theories or hate-filled material. Although YouTube has not officially acknowledged this pattern, research by Cakmak et al. (2024) suggests that the platform’s algorithm tends to prioritize emotionally intense and attention-grabbing videos. In the absence of adequate human review, the recommendation system keeps promoting emotionally charged, strongly opinionated content, because it is more likely to attract clicks and keep viewers watching longer. In such a system, hate speech is not just a rare exception; it becomes an almost inevitable outcome of how the recommendation mechanism is designed to operate.

This raises a profound and urgent question: how much responsibility should platforms bear for the spread of online content? When real-world problems emerge, social media platforms often portray themselves as neutral tools for content distribution, claiming that they merely provide a space for users to publish and take no part in judging or disseminating what is said.

The Responsibility of the Platform

In fact, platforms do not passively present information; they actively filter and arrange the content you are exposed to through algorithms. The platform uses algorithms to decide which content is more likely to be seen, and this filtering is not a neutral act but an optimization strategy built around a specific goal: to increase user engagement, prolong time on the platform, and drive more interaction. Platforms therefore hold enormous power over the flow of content and are deeply involved in structuring the space of public opinion.
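To make that concrete, here is a minimal, hypothetical sketch of what an engagement-first ranking rule looks like in Python. The field names and weights are illustrative assumptions, not any platform’s actual code; the point is the shape of the objective, which rewards attention and nothing else.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float      # estimated click probability (0-1)
    predicted_watch_time: float  # normalized expected dwell time (0-1)
    predicted_shares: float      # estimated share/comment probability (0-1)

def engagement_score(post: Post) -> float:
    """Score a post purely by expected engagement.

    Nothing in this objective asks whether the content is accurate,
    civil, or safe; provocative material that reliably attracts clicks
    and shares rises to the top by construction.
    """
    return (0.4 * post.predicted_clicks
            + 0.4 * post.predicted_watch_time
            + 0.2 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # This is the "filtering and arranging" described above: an ordering
    # chosen by the platform and optimized for attention, not neutrality.
    return sorted(posts, key=engagement_score, reverse=True)
```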

Even when platforms claim to take responsibility, their content moderation mechanisms often suffer from structural limitations, leaving them poorly equipped to deal with hate speech. Most major platforms rely heavily on AI to detect and filter harmful content. Yet hate speech is not always explicit or crude; it is often subtle, veiled in humor or sarcasm, making it difficult for machines to identify accurately. Language diversity presents another challenge: platforms are far better at detecting English-language content than non-English content, which leaves some minority-language communities in regulatory blind spots.
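A toy example makes the limitation obvious. The sketch below stands in for the kind of surface-level check that automated moderation can reduce to; the word list and sample sentences are invented purely for illustration.

```python
# A naive keyword filter: the crude baseline that automated moderation
# falls back on when context is not understood. Illustrative only.
BLOCKLIST = {"vermin", "subhuman"}

def naive_filter(text: str) -> bool:
    """Flag text that contains a blocklisted term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(naive_filter("They are vermin."))                      # True: explicit abuse is caught
print(naive_filter("Wow, such model citizens, as always."))  # False: sarcasm and coded language slip through
print(naive_filter("Son unas alimañas."))                    # False: the same insult in Spanish is invisible
```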

From a legal perspective, platforms in many countries still operate in a grey area when it comes to liability. In the United States, for instance, Section 230 of the Communications Decency Act has long granted platforms a degree of immunity, stating that they are not legally responsible for illegal or harmful content posted by users. The provision encourages service providers to self-regulate offensive content by allowing them to set their own standards without the risk of being held liable, but it also creates opportunities for those seeking to avoid responsibility to exploit the system (47 U.S.C. § 230, 1996; Mediavilla, 2022). In other words, Section 230 gave platforms legal cover to develop and deploy content distribution technology largely without constraint, and without answering for the consequences. As the real-world impact of hate speech becomes increasingly severe, continuing to shield platforms under this legal protection is no longer justifiable; the current system must change.

We Need Government Regulation

As the previous section showed, most platforms are not entirely without governance mechanisms, but when forced to choose between traffic and responsibility, they tend to choose traffic. This is exactly where government regulation becomes essential. Governments can set legal standards that clearly define what constitutes intolerable speech and draw explicit boundaries around content that is inflammatory, offensive or discriminatory. For example, content involving racism, physical threats, or sexism that meets the definition of hate speech must be removed or restricted within a specified period of time.

Additionally, governments can promote transparency and accountability. Platforms are often reluctant to disclose their algorithmic logic and content review practices, and users are frequently left with no explanation when their reports go nowhere. Through regulation, platforms could be required to publish regular reports covering, for example, the rate at which reported hate speech is removed or how quickly user reports receive a response.
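Such reporting does not need to be exotic. The sketch below computes two of the metrics mentioned above, a removal rate and a median response time, from a handful of fabricated report records; real disclosure rules would of course be far more detailed.

```python
from datetime import datetime
from statistics import median

# Fabricated user-report records, for illustration only.
reports = [
    {"filed": datetime(2024, 1, 1, 9, 0),  "resolved": datetime(2024, 1, 1, 15, 0),  "removed": True},
    {"filed": datetime(2024, 1, 2, 8, 0),  "resolved": datetime(2024, 1, 4, 8, 0),   "removed": False},
    {"filed": datetime(2024, 1, 3, 12, 0), "resolved": datetime(2024, 1, 3, 13, 30), "removed": True},
]

removal_rate = sum(r["removed"] for r in reports) / len(reports)
median_response = median(r["resolved"] - r["filed"] for r in reports)

print(f"Reports resulting in removal: {removal_rate:.0%}")
print(f"Median time to resolution:    {median_response}")
```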

Furthermore, governments can push for reforms in algorithmic recommendation systems. Much of the viral spread of hate content occurs not because it’s the most valuable content, but because it generates the most engagement. Governments could legislate the right for users to opt out of personalized content feeds or enable a “low-stimulation recommendation mode” so users can choose not to be steered by engagement-maximizing algorithms—thus reducing their exposure to controversial or emotionally charged content.
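As a rough illustration of how simple such a toggle could be in principle, here is a hypothetical sketch in which one flag switches the feed from engagement ranking to a calmer, chronological ordering. The threshold and field names are assumptions made up for this example, not a description of any existing feature.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FeedItem:
    text: str
    published: datetime
    predicted_engagement: float  # model's engagement estimate (0-1)
    controversy_score: float     # 0.0 = calm, 1.0 = highly charged

def rank_feed(items: list[FeedItem], low_stimulation: bool) -> list[FeedItem]:
    """A user-controlled ranking toggle.

    Default mode: order by predicted engagement (the status quo).
    Low-stimulation mode: drop the most charged items and order the
    rest chronologically, so the feed is no longer steered by an
    engagement-maximizing objective.
    """
    if not low_stimulation:
        return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)
    calmer = [i for i in items if i.controversy_score < 0.7]
    return sorted(calmer, key=lambda i: i.published, reverse=True)
```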

On the other hand, research by Sinpeng et al. (2021) found that hate speech legislation is often framed as a threat to free speech, which draws heavy criticism or political resistance and frequently stalls legislation before it even reaches a vote. Some governments have also stretched the definition of hate speech almost without limit, sweeping in criticism of the government and expressions of dissent and turning regulation into a tool of control, which has fueled public concern about freedom of speech. However, the real issue is not whether we should regulate hate speech, but how we design that regulation. Just as we don’t reject the law itself because we fear excessive enforcement, we shouldn’t abandon regulation altogether simply because it could be misused. Regulating hate speech isn’t about telling people to say less; it’s about creating space where more people feel safe to speak. Regulation is not suppression, but protection.

That said, regulation should not be treated as a cure-all. The spread of hate speech online is not solely a result of failed rules—it also stems from biased recommendation algorithms, profit-driven platform design, and deep-seated structural inequalities and prejudice in society. The government cannot solve these issues with a single law, but what it can provide is a starting point—a basic safety net.

Not Just Bystanders: Everyone Is a Gatekeeper of the Public Sphere

Research has documented inconsistencies in how social media platforms apply their policies across cultural contexts and forms of hate speech, and these inconsistencies have drawn increasing public attention (Matamoros-Fernández, 2017). Faced with the spread of hate speech in cyberspace, it’s easy to shift the blame to platforms or governments, as if we were just passive recipients of a torrent of information. But in reality, we are not outside observers. Every time we like, share, or comment, we are actively participating in shaping what content gets promoted. These small actions help steer the logic of algorithmic recommendation systems, often without our awareness.

This means the public must take responsibility for the role we play in information dissemination. Before forwarding a post, do we check the source? When commenting on a thread, do we consider whether our language might unintentionally echo harmful or hostile rhetoric? These seemingly simple decisions are, in fact, the foundation of a healthier digital public sphere.

Moreover, hate speech doesn’t always appear as obvious insults or direct attacks. It often hides behind the language of jokes, opinions, or free speech. Without the ability to recognize this framing, we risk believing, sharing, or even participating in the spread of such content unintentionally. As participants in the public sphere, we must also take our digital literacy seriously—developing critical thinking, media literacy, and an understanding of how online interactions work.

Educational institutions and community platforms should treat digital literacy as a core civic skill—not just for children, but for adults as well. We need to teach not only how to spot misinformation, but also how to recognize harmful speech. Only when more people are equipped with basic judgment and a sense of responsibility as media participants can we effectively push back against the constantly evolving, disguising, and shape-shifting forms of online hate.

Free Speech Is Not a License to Hate — Everyone Has a Role to Play

The problem of hate speech is not a simple matter of who’s right and who’s wrong. It is a systemic challenge—one that implicates platform algorithms, legal frameworks, and civic culture. What we need is not to single out one party to blame, but to build a sustainable model of collaborative governance. That means starting from the public interest and working toward more ethical platform logics, more democratic government oversight, and a stronger sense of agency among the public.

In the end, this blog doesn’t aim to provide you with a clear-cut answer. Instead, it offers an invitation—an invitation to rethink the information world we take for granted, to re-examine the boundaries of free speech, and to ask whether we are truly prepared to both resist hate and defend freedom along that fragile line.

The spread of hate has never required the support of the majority. All it takes is the silence of most people. And we should not, and cannot, stay silent any longer.

References

Cakmak, M. C., Okeke, O., Onyepunuka, U., Spann, B., & Agarwal, N. (2024). Analyzing Bias in Recommender Systems: A Comprehensive Evaluation of YouTube’s Recommendation Algorithm. Proceedings of the 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 753–760. https://doi.org/10.1145/3625007.3627300

Duan, Y. (2023). How Can Social Media Play a Role in Combating Fake News. Lecture Notes in Education Psychology and Public Media, 4, 822–827. https://doi.org/10.54254/2753-7048/4/2022438

Flew, T. (2021). Regulating Platforms. Polity.

Horwitz, J., & Seetharaman, D. (2020). Facebook executives shut down efforts to make the site less divisive. The Wall Street Journal.

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Mediavilla, K. (2022). Section 230 of the Communications Decency Act of 1996: The Antiquated Law in Need of Reform. Missouri Law Review, 87(4).

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. https://ses.library.usyd.edu.au/handle/2123/25116.3

47 U.S.C. § 230. (1996). Protection for private blocking and screening of offensive material.
