Have you ever come across comments like these online?
If you’ve used Twitter, Facebook, or YouTube, chances are you’ve seen posts like: “He’s just a monkey, doesn’t deserve to be on the field” or “She had it coming—just look at how she dresses.” These aren’t just isolated outbursts—they’re part of a broader, troubling trend of hate speech flourishing across social platforms (Carlson & Frazer, 2018). In some online communities, misogyny, racism, and body shaming are not only normalized but also rewarded with thousands of likes and shares (Massanari, 2017).

You might not think a Facebook post could get someone killed—but in Myanmar, that’s exactly what happened. Between 2016 and 2018, Facebook’s algorithm aggressively promoted anti-Rohingya Muslim hate content. Words like “dog,” “virus,” and “terrorist” spread like wildfire across the platform (Rushe & Paul, 2021). These weren’t just words—they sparked real-world violence, massacres, and mass displacement. The United Nations later stated, “Facebook played a determining role in the genocide.” This wasn’t just a glitch in the system—the system itself was part of the problem (Flew, 2021).
So, what exactly is going on?
We often think of the internet as a space for free expression. But increasing evidence suggests that the problem goes beyond individual users’ attitudes. Social media platforms are actively amplifying hate. This article dives into how and why hate speech is spreading—and why the platforms seem powerless, or perhaps unwilling, to stop it.
The “Outrage Economy”: How Algorithms Amplify Hate
Australian researcher Matamoros-Fernández introduced a crucial concept: platformed racism. This isn’t just about individual bias—it refers to how platform design, algorithmic logic, policy loopholes, and malicious user behavior intertwine to accelerate the spread of hate (Matamoros-Fernández, 2017).
A famous example is the online abuse faced by Indigenous Australian footballer Adam Goodes. In 2015, he performed a traditional dance during a match to celebrate his culture, only to be met with relentless harassment. Social media was flooded with monkey comparisons and racist taunts, while YouTube’s recommendation system kept pushing mocking videos. Instead of protecting marginalized voices, platforms became amplifiers of hate (Matamoros-Fernández, 2017).
Ever notice how rage-inducing posts magically pop up on your feed? That’s not an accident—it’s by design.
Facebook’s recommendation engine operates on an “emotion-driven logic”: the more a post evokes anger, shock, or disgust, the more likely it is to be promoted. Since introducing emotional reactions in 2016, Facebook’s algorithm has given “angry” responses five times the weight of a “like,” prioritizing divisive and emotional content because outrage drives engagement—and engagement drives profit (Kallioniemi, 2022).
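To make the mechanics concrete, here is a minimal sketch, in Python, of how a reaction-weighted ranking of this kind can tilt a feed toward outrage. It is purely illustrative: the only figure drawn from the reporting above is the five-to-one weighting of an “angry” reaction against a “like”; the other weights, the post names, and the reaction counts are invented assumptions, not Facebook’s actual code.

```python
# Illustrative sketch only: hypothetical posts and weights,
# except the 5x "angry" weighting described in the article.
REACTION_WEIGHTS = {
    "like": 1,   # baseline
    "love": 5,   # assumption, for illustration
    "angry": 5,  # reported: an "angry" counted five times a "like"
}

posts = [
    {"id": "wholesome_story", "like": 900, "love": 40, "angry": 2},
    {"id": "outrage_bait",    "like": 150, "love": 5,  "angry": 400},
]

def engagement_score(post):
    """Sum each reaction count multiplied by its weight."""
    return sum(post.get(reaction, 0) * weight
               for reaction, weight in REACTION_WEIGHTS.items())

# Rank the "feed" by engagement score, highest first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))

# The outrage post ranks first (2175 vs. 1110), even though
# it drew far fewer total reactions (555 vs. 942).
```

Under a scoring rule like this, the post that makes people angriest rises to the top, which is exactly the dynamic the leaked documents describe.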
Myanmar became a tragic case study in algorithmic amplification. In 2017, Facebook was the main vector for spreading anti-Rohingya hate speech. Extremist groups posted graphic content inciting violence, which gained massive interaction and was pushed further by the algorithm. In non-English-speaking regions like Myanmar, India, and Ethiopia, Facebook’s content moderation was nearly nonexistent. Leaked documents revealed a lack of AI models for local languages and a shortage of human moderators, allowing hate speech to circulate freely. In India, test accounts were transformed into echo chambers of blood-soaked images, fake news, and militant nationalism in just three weeks (Kallioniemi, 2022).
Recommendation algorithms act like invisible hands. They don’t judge right from wrong—they judge attention. The more extreme and emotional the content, the more likely it is to go viral. Hate becomes a currency (Flew, 2021). Internal documents show Facebook knew as early as 2018 that its algorithm was promoting emotionally harmful content, yet kept recommending it because it performed well (Kallioniemi, 2022).
In plain terms, it’s not that platforms can’t stop it—it’s that they won’t. Because toxic engagement still counts as engagement. Facebook repeatedly ignored its internal team’s suggestions to fix the algorithm, admitting that “harmful but high-performing” posts were too valuable to suppress (Kallioniemi, 2022).
The Dark Side of Human Nature: Why We Join the Pile-On
It’s not just the platforms—it’s us, too.
Anger is addictive. Studies show negative emotions spread faster than positive ones (Flew, 2021). A heartwarming story might earn a quick like, but an infuriating post? That’ll get long comments, heated debates, and friends tagged in for backup. Platforms love that—it boosts engagement.
You’ve probably seen comments like: “How can anyone defend this?! My faith in humanity is gone.” or “Totally agree! These trolls have no brains!”

That’s moral outrage—a way to signal your own righteousness by criticizing others. Social media amplifies this because “taking a side” brings social validation, and algorithms reward extremism with visibility (Carlson & Frazer, 2018).
In Myanmar, many users weren’t extremists—they were ordinary people. But driven by a sense of “moral duty,” they shared and liked violent anti-Muslim propaganda. When combined with algorithmic amplification, this self-righteous emotion led to devastating real-world consequences (Matamoros-Fernández, 2017).
Then there’s anonymity. Even on platforms that require real names, people use burner accounts to attack others and vanish. Timid in real life, ruthless online. “Keyboard warriors” at their worst (Roberts, 2019).
Behind the Curtain: Moderators, Algorithms, and the Platform Dilemma
You might think: if the problem is this bad, why not just delete the posts? But moderation is far messier than it looks.
American scholar Sarah Roberts has spent years studying commercial content moderators, the often-outsourced workers who sift through graphic, hateful, or sexual content every day under crushing psychological pressure (Roberts, 2019). The volume of harmful content is overwhelming, and moderation guidelines are vague, leaving much up to personal judgment.

Facebook has admitted it might never be able to train AI to catch most harmful content, especially in minority languages and sensitive cultural contexts. Still, the company continues to prioritize AI over human moderation to save costs, despite repeated failures. In Indian-language environments, recommendation systems quickly flooded users with death and hate imagery. In Burmese, the mistakes were even worse. UN investigators stated: “Facebook’s conduct in Myanmar contributed to and enabled the genocide” (Flew, 2021).

Reddit, one of the world’s largest forums, played host to the misogynistic “Gamergate” attacks. Researcher Adrienne Massanari called it a case study in “toxic technoculture”: the platform enabled anonymous harassment and tolerated hate groups in exchange for traffic (Massanari, 2017).
Similarly, Carlson and Frazer found that in Australia, online hate was often cloaked in calls for justice, satire, or “truth-telling”—what they called the punishment mechanism of social media. Platforms often turned a blind eye (Carlson & Frazer, 2018).
Why? Because ad revenue depends on engagement, and removing hateful content could tank interaction—and share prices. For Zuckerberg and friends, the math is simple (Kallioniemi, 2022).
Policy Blind Spots and Selective Enforcement
We assume platforms treat all countries equally—but that’s far from true. In the EU, strict laws mean Facebook deletes harmful content more aggressively. But in less regulated markets like Myanmar, India, or the Philippines, the company often chooses inaction (Flew, 2021).
In Myanmar, Meta’s internal research as early as 2012 warned that its algorithm could cause real-world harm. Even after riots in 2014 led to Facebook being temporarily banned in Mandalay, the company failed to act. A 2016 internal study confirmed that its recommendation engine was driving extremism. A 2019 internal memo admitted: “We’ve known for years that our algorithm promotes hate but removing too much content hurts engagement. Management won’t prioritize this” (Amnesty International, 2022). This mix of cultural ignorance and economic priorities has led to selective enforcement, where users in non-Western countries face greater risks and less protection (Flew, 2021).
And it’s not just Myanmar. In 2020, The Wall Street Journal reported that Facebook allowed members of India’s ruling party (BJP) to post anti-Muslim hate speech. Executives blocked content reviewers from deleting these posts—despite clear policy violations—because they feared it might harm business in India (Purnell & Horwitz, 2020).
So, in regions without strong public oversight, Facebook doesn’t follow the same standards. Its “neutrality” only applies where regulation demands it. Elsewhere, profit comes first.
What Can We Do? More Than Just Reporting
So, what can we do as ordinary users? Reporting alone isn’t enough. Every time we engage, we influence the algorithm. When you see a rage-bait post, maybe don’t click. Don’t comment, don’t share, don’t like. Because every tap could help it go viral (Flew, 2021). Instead, actively support positive, inclusive content. Let the algorithm know that kindness gets attention, too (Carlson & Frazer, 2018).

And don’t forget—we can push for better policy. Australia’s proposed Privacy and Other Legislation Amendment Bill 2024 aims to hold platforms more accountable and make “transparency” more than a buzzword (Attorney-General’s Department, 2024).
Of course, platforms and regulators must take the lead. Platforms can no longer claim innocence—they must weigh social impact alongside engagement (Flew, 2021). They also can’t hide behind flawed AI. It’s time to invest in real people—moderators who understand local languages and contexts (Roberts, 2019). Laws like the EU’s Digital Services Act (DSA) are a step in the right direction, forcing Meta to reform its algorithms. Other countries need to follow suit (Frosio & Geiger, 2023).
Because if we don’t fix this, platforms will keep profiting off hate—and we’ll keep living in a world shaped by algorithms.
Liking Hate—Silence Is Also Participation
We often think: what harm is there in just scrolling, reacting, or commenting? But as this article shows, a single like or passive view can quietly help push hateful content over the edge—until it spills into real life.
Yes, platforms are responsible. Yes, policy must improve. But most importantly, we need to stop hiding behind “the algorithm” as a way to avoid owning our choices. Anger isn’t evil. But unchecked emotion and hijacked justice can be fuel for hate.
If algorithms can amplify hate, they can amplify compassion too. If extremism can go viral, so can reason—if we choose to make it visible.
We don’t have to be “perfect” social media users. But we can start now—share less outrage, think more critically; stay less silent, reflect a bit deeper. After all, the internet is just a reflection of the light—or darkness—we bring to our screens.
Have you ever encountered hate speech online? How did you respond?
Feel free to share your thoughts or experiences in the comments. Or tell us one small action you’ve taken to “change the algorithm.”
Let’s try to shift something—maybe just with the next scroll.
References
Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://journals.sagepub.com/doi/full/10.1177/1461444815608807
Rushe, D., & Paul, K. (2021). Facebook’s role in Myanmar and Ethiopia under new scrutiny. The Guardian. https://www.theguardian.com/technology/2021/oct/07/facebooks-role-in-myanmar-and-ethiopia-under-new-scrutiny
Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91–96; pp. 115–118 in some digital versions). Cambridge: Polity.
Kallioniemi, P. (2022). Facebook’s Dark Pattern Design, Public Relations and Internal Work Culture. Journal of Digital Media & Interaction, 5(12). https://doi.org/10.34624/jdmi.v5i12.28378
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33–72). New Haven, CT: Yale University Press. https://ebookcentral.proquest.com/lib/usyd/reader.action?docID=5783696&query=&ppg=1
Amnesty International. (2022). Myanmar: Facebook’s systems promoted violence against Rohingya; Meta owes reparations – new report. https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/
Purnell, N., & Horwitz, J. (2020). Facebook’s Hate-Speech Rules Collide With Indian Politics. The Wall Street Journal (Eastern Edition). https://www.proquest.com/docview/2434083543?_oafollow=false&accountid=14757&pq-origsite=primo&sourcetype=Blogs,%20Podcasts,%20&%20Websites
Attorney-General’s Department. (2024). Better protection for Australians’ privacy. https://ministers.ag.gov.au/media-centre/better-protection-australians-privacy-12-09-2024
Frosio, G., & Geiger, C. (2023). Taking fundamental rights seriously in the Digital Services Act’s platform liability regime. European Law Journal, 29(1–2), 31–77. https://doi.org/10.1111/eulj.12475