Have you noticed how clicking on one gender debate post floods your feed with extreme takes? That’s no accident: it’s the algorithm at work, amplifying outrage for engagement. In June 2023, Black British MP Diane Abbott received over 3,000 racist tweets within a week after advocating for immigration reform. Abbott told the Guardian: “I have been on the end of huge amounts of abuse on Twitter [X]. At the time of the 2017 general election an Amnesty investigation found I got more online abuse than all the other women MPs put together” (The Guardian, 2024).
The truth is, we’ve only scratched the surface. A 2024 study found that while research on online hate is growing, most of it overlooks key issues like gender and disability. Without clear standards or tools, it’s hard to fight something we can’t even define (Rawat, Kumar, & Samant, 2024).

When Algorithms Become the “Arms Dealers of Hate”
- Truth 1: Anger Is the New Gold
Research consistently shows that posts with controversial tags receive three times more engagement than neutral content—essentially, the algorithm feeds your fury. When users search for neutral terms, like “how to overcome body image issues,” they often end up down a rabbit hole of harmful content. In one example, a user seeking support might find their feed flooded with “extreme weight loss challenge” videos. The algorithm interprets this vulnerability as “interest,” drawing users deeper into a self-destructive spiral.
This exploitation of vulnerability is no accident. Internal platform documents, including those leaked by whistleblower Frances Haugen, reveal that social media giants like Facebook are fully aware of the harm their algorithms cause. Haugen, a former Facebook employee, exposed thousands of internal files that showed how the platform’s ranking system is designed to amplify content that triggers outrage, particularly anger. According to Haugen, the algorithm prioritizes posts that are most likely to spark arguments, because hate and division drive engagement.

“The more anger you feel,” Haugen explained, “the more you interact—and the more money Facebook makes.” Ads placed next to controversial or hateful content, in fact, have been shown to command a 42% higher cost-per-click. So, every time you linger in that anger-fueled content, you’re not just participating in a toxic cycle—you’re actively funding it. Social media platforms have turned toxic engagement into profit, prioritizing divisive and harmful content over user well-being, all in the name of maximizing time spent on site.
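To see how that incentive adds up, here is a tiny, purely illustrative Python sketch. It borrows the two figures above (roughly threefold engagement and a 42% CPC premium); the baseline price and click counts are invented assumptions, not real platform data.

```python
# Toy model of the "anger economy" (illustrative numbers only).
# The 42% CPC premium comes from the claim above; the baseline CPC and
# click counts are made-up assumptions, not real platform figures.

BASE_CPC = 0.50                    # hypothetical cost-per-click, in dollars
OUTRAGE_CPC = BASE_CPC * 1.42      # the 42% premium next to controversial content

posts = [
    {"label": "neutral", "expected_clicks": 100, "cpc": BASE_CPC},
    {"label": "outrage", "expected_clicks": 300, "cpc": OUTRAGE_CPC},  # ~3x engagement
]

for post in posts:
    revenue = post["expected_clicks"] * post["cpc"]
    print(f'{post["label"]}: expected ad revenue = ${revenue:.2f}')

# neutral: expected ad revenue = $50.00
# outrage: expected ad revenue = $213.00
# A ranker paid per click will keep choosing the outrage post.
```

Nothing in that arithmetic rewards accuracy or well-being; only time and clicks count.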

- Truth 2: Moderation Failure Is Selective Negligence
Platforms’ moderation systems are plagued by systemic bias, often failing to address harassment promptly. Non-English language reports, for instance, are frequently delayed, allowing hate campaigns to spread unchecked. This selective negligence disproportionately affects vulnerable groups.
Women journalists, for example, are routinely subjected to online harassment, often forcing them to self-censor to avoid further attacks. Investigative reporters, especially those exposing corruption or human rights abuses, are not only targeted with threats but are also doxxed, putting their safety at risk.
These failures show how platform moderation systems prioritize profit over user safety, often leaving those most vulnerable to bear the brunt of the harm.
How Social Media Algorithms Design Viral Hate and Harassment
Social media platforms are engineered not to connect people, but to maximize user engagement and ad revenue. At the heart of this design lies a ruthless truth: hate speech and harassment are not glitches—they are algorithmic features. By exploiting human psychology and systemic biases, platforms transform outrage into profit, leaving marginalized communities to bear the consequences.
- The “Anger Economy”: Prioritizing High-Arousal Emotions
Algorithms are trained to amplify content that triggers strong emotional reactions, such as anger or fear, because these emotions keep users scrolling longer. For instance, posts with inflammatory comments or divisive hashtags spread three times faster than neutral content. This is not accidental: platforms intentionally reward toxicity. Research points to anger as a driver of how readily people believe “politically concordant misinformation,” and anger has also been recognized as fuelling COVID-19 misinformation, where “people who felt angry were more vulnerable to misinformation and actively engaged in disseminating false claims about COVID-19” (Social media engagement, 2020).
When a user interacts with a post containing racial slurs or conspiracy theories, the algorithm interprets this as “engagement” and pushes similar content to their feed. Over time, this creates echo chambers. A teenager searching for mental health advice might soon see recommendations for self-harm videos, as the system misinterprets vulnerability as a signal to boost “relevant” content.
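That loop is simple enough to sketch. The toy Python model below is an assumption about how engagement-driven ranking behaves, not any platform’s actual code: every second of dwell time is read as “interest,” so a few pauses on harmful videos are enough to tilt the whole feed.

```python
# Minimal sketch of an engagement-driven feedback loop (hypothetical model,
# not any platform's real ranker). Each interaction raises the weight of the
# post's topic, so the feed drifts toward whatever the user lingers on.

from collections import defaultdict

topic_weights = defaultdict(lambda: 1.0)   # start with no preference

def record_interaction(topic: str, dwell_seconds: float) -> None:
    """Treat any engagement, even distressed lingering, as 'interest'."""
    topic_weights[topic] += dwell_seconds / 10.0

def rank_feed(candidate_posts):
    """Order candidates purely by learned topic weight (no safety signal)."""
    return sorted(candidate_posts, key=lambda p: topic_weights[p["topic"]], reverse=True)

# A vulnerable user pauses on harmful content a few times...
for _ in range(3):
    record_interaction("extreme_weight_loss", dwell_seconds=40)
record_interaction("mental_health_support", dwell_seconds=10)

feed = rank_feed([
    {"id": 1, "topic": "mental_health_support"},
    {"id": 2, "topic": "extreme_weight_loss"},
    {"id": 3, "topic": "cooking"},
])
print([p["topic"] for p in feed])
# ['extreme_weight_loss', 'mental_health_support', 'cooking']
```

Notice that nothing in the model distinguishes fascination from distress; both look identical to the ranker.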
- The Moderation Gap: Language and Regional Bias
Automated moderation systems are heavily skewed toward English and Western contexts. Non-English hate speech often slips through due to inadequate training data and underfunded moderation teams. For example:
In a Southeast Asian country, a viral video falsely accusing a minority group of violating cultural norms sparked offline violence. The platform’s keyword filters failed to detect localized slurs, allowing the content to circulate for days.
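A minimal sketch shows why keyword filters tuned to one language fail so easily (the word lists below are placeholders, not real moderation data):

```python
# Sketch of why keyword-based moderation misses localized abuse
# (hypothetical filter; the terms are placeholders, not real slurs).

BANNED_TERMS = {"slur_en_1", "slur_en_2"}   # English-centric keyword list

def flag_post(text: str) -> bool:
    """Naive filter: flag only if a known banned term appears verbatim."""
    words = text.lower().split()
    return any(term in words for term in BANNED_TERMS)

print(flag_post("this post uses slur_en_1"))             # True  -> caught
print(flag_post("same insult written as s1ur_en_1"))     # False -> trivial misspelling slips through
print(flag_post("equivalent slur in a local language"))  # False -> nothing local in the list
```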
These failures reflect a systemic disregard for marginalized voices outside dominant linguistic and cultural frameworks.

- The VIP Exception: Protecting High-Value Accounts
Social media platforms often put profit over safety, offering special protection to high-value accounts—especially those that generate a lot of engagement. Even when these accounts spread harmful or hateful content, they often avoid the usual moderation. This creates a dangerous cycle where extreme content is rewarded, fueling more outrage and division.
- The Vicious Cycle: From Design to Harm
The algorithmic system thrives on a vicious cycle: amplify hate, monetize outrage, and neglect safety. This creates a feedback loop where harmful content spreads while platforms profit from it, leaving marginalized users to bear the consequences.
- Legal Loopholes: The 24-Hour Free Pass
Many governments have introduced laws requiring platforms to remove harmful content within 24 hours of reporting. However, this window is a double-edged sword. A single hateful post can reach millions of users in those 24 hours, embedding its toxic message deeply before deletion. For example, a viral video inciting violence against a minority group might spark offline attacks long after its removal. Platforms also exploit vague definitions of “harm” to avoid action, such as claiming that content encouraging dangerous dieting “does not explicitly violate guidelines.”
- Broken Tools: Why Reporting Systems Fail
Platforms design reporting tools to frustrate, not empower, users. Reporting hate speech on Instagram requires seven clicks, compared to three for posting an ad. Automated systems dismiss most of those reports with generic messages like “no violation found,” forcing victims to navigate endless appeals. Even when content is removed, the damage persists: studies show over 80% of deleted hate posts resurface on encrypted apps like Telegram, where platforms claim “no jurisdiction.” Some governments have taken legal action, but the results have been unsatisfactory: “Germany’s NetzDG imposed responsibility on major social media platforms to remove reported illegal content, including ‘insult’ and ‘defamation of religions’ within 24 hours under threat of hefty fines. Despite Germany’s well-intentioned efforts, the NetzDG approach has inadvertently legitimized and provided a prototype for more speech-restrictive measures by authoritarian regimes” (Alkiviadou, 2024).
Teenagers Are Becoming Algorithms’ Lab Rats
Young users aren’t just scrolling—they’re being studied. Social platforms tap into teenage psychology with features like likes, streaks, and rewards, all designed to feed their fear of missing out and keep them hooked.
Teens often become the first test subjects for risky algorithms. If a user shows signs of emotional struggle—like poor body image—the algorithm takes it as “interest” and starts pushing more of the same harmful content, pulling them deeper into toxic spaces.
The effects are real. One study found U.S. teens (ages 12–15) who spent over three hours a day on social media were twice as likely to develop depression and anxiety (Katella, 2024).
By chasing profit, platforms trap young people in cycles that damage their well-being—just to boost engagement.

From Algorithmic Prey to Digital Hunter
While social media platforms continue to prioritize profit over safety, users, activists, and policymakers are pushing back with innovative solutions. The fight against algorithmic harm requires a multi-pronged approach—from individual resistance to systemic reform.
- 🛡️ Personal Firewall Toolkit
| Action | Effectiveness | Difficulty |
| --- | --- | --- |
| Disable recommendations | Reduces toxic content by 60% | ⭐ |
| Install third-party shielding plug-ins | Filters 99% of abuse | ⭐⭐ |
| Create a “rage word” blacklist | Blocks conspiracy theories | ⭐⭐ |
These fixes work because they subvert the core mechanics of algorithmic manipulation. By opting out of engagement traps, users reclaim agency over their digital experiences.
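As a rough illustration, this is what the first and third rows of the table amount to on the client side; the field names and mute list below are hypothetical, and real shielding plug-ins are more sophisticated:

```python
# Sketch of "disable recommendations" plus a personal mute list, applied
# client-side (hypothetical feed structure; field names are assumptions).

from datetime import datetime

MUTE_LIST = {"rage_bait_phrase", "conspiracy_tag"}   # personal "rage word" blacklist

def clean_feed(posts):
    """Drop muted posts, then show only accounts you follow, newest first."""
    kept = [
        p for p in posts
        if p["followed"] and not (MUTE_LIST & set(p["text"].lower().split()))
    ]
    return sorted(kept, key=lambda p: p["posted_at"], reverse=True)

posts = [
    {"text": "weekend hiking photos", "followed": True,
     "posted_at": datetime(2024, 5, 1, 9, 0)},
    {"text": "you will not BELIEVE this conspiracy_tag", "followed": False,
     "posted_at": datetime(2024, 5, 1, 12, 0)},
    {"text": "local book club meetup", "followed": True,
     "posted_at": datetime(2024, 5, 1, 11, 0)},
]
print([p["text"] for p in clean_feed(posts)])
# ['local book club meetup', 'weekend hiking photos']
```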
- 🛡️ The Digital Defense Kit (Policy Edition – Battle-Tested!)
| Move | What It Does | Difficulty |
| --- | --- | --- |
| Force Algorithm Transparency | Finally see why you’re fed certain content (and who profits from it) | ⭐⭐ |
| Scale Fines by Revenue | Makes negligence really expensive: up to 10% of global income | ⭐⭐⭐ |
| Build Safety into Design | Platforms must bake in user protection from the start | ⭐⭐ |
Real change doesn’t rely on corporate goodwill—it’s enforced by smart policy.
- The Path Forward: From Outrage to Ownership
If we really want safer digital spaces, it starts with rethinking who gets to shape the conversation online. It’s not just about Big Tech anymore—grassroots communities, decentralized platforms, and ethical AI projects are stepping up. These alternatives put people first, valuing consent, transparency, and collective well-being instead of chasing clicks and outrage.
Of course, shifting from a system built on anger and algorithms to one built on care and community won’t happen overnight. But every small step counts. Whether it’s banning a hate-spewing account, rewriting a harmful policy, or a teenager choosing kindness over cruelty—each win chips away at the toxic norms we’ve been told are just “how the internet works.”
When we talk about algorithmic justice, what are we talking about?
The digital world is at a crossroads. Social media platforms, once tools for global connection, have become engines of division, profiting from outrage while neglecting safety. However, by addressing the root causes of algorithmic harm and reimagining digital governance, we can transform these spaces into forces for collective well-being.
- Ethical Algorithms: Putting Safety Before Virality
The key to a healthier internet starts with breaking down the engagement-first mentality that fuels so much of what we see online. Social platforms need to embrace ethical algorithms—ones that automatically down-rank harmful content like hate speech, harassment, and misinformation, rather than pushing it to the top.
But that’s just the beginning. We also need more transparency. Users should know exactly why certain posts pop up on their feeds and, importantly, who’s making money off their attention. Early trials give us hope—like platforms allowing users to adjust algorithm settings, prioritizing mental well-being over clickbait and sensationalism.
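One way to picture such a setting is a ranking rule where predicted harm counts against a post instead of being ignored. The sketch below is a hypothetical scoring rule under that assumption, not any platform’s published algorithm:

```python
# Hypothetical "safety before virality" re-ranking sketch.
# HARM_PENALTY is an assumed knob a user or regulator could tune;
# the scores below are invented for illustration.

HARM_PENALTY = 10.0

def safety_adjusted_score(post):
    """Engagement still counts, but predicted harm now subtracts from it."""
    return post["predicted_engagement"] - HARM_PENALTY * post["predicted_harm"]

posts = [
    {"id": "outrage_thread", "predicted_engagement": 9.0, "predicted_harm": 0.8},
    {"id": "helpful_guide",  "predicted_engagement": 4.0, "predicted_harm": 0.0},
]

by_engagement = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
by_safety = sorted(posts, key=safety_adjusted_score, reverse=True)

print("engagement-first:", [p["id"] for p in by_engagement])  # outrage_thread wins
print("safety-adjusted: ", [p["id"] for p in by_safety])      # helpful_guide wins
```

Exposing knobs like HARM_PENALTY, and the per-post scores behind them, is also the most concrete form of the transparency described above.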
- Redistributing Power: From Big Tech to the People
If we want real accountability online, we need to start by shifting power out of Silicon Valley’s hands. Tech giants have had free rein for too long—but that doesn’t have to be the norm. Community-led tools, decentralized platforms, and independent audits are already challenging the idea that only corporations get to set the rules.
Picture this: a platform where users actually vote on moderation policies. Or a network where local NGOs help monitor hate speech in underrepresented languages. These aren’t just idealistic dreams—they’re real alternatives that put people before profit and dignity before data.
- Turning Policy into Action: Making Online Harm Unprofitable
If safety isn’t profitable, it won’t be prioritized. That’s why we need real penalties—like fines tied to global revenue—to make harm expensive. Regulation should force platforms to build safety into their core, not as an afterthought.
Global cooperation is essential. Like public health, digital harm crosses borders. Treaties can hold platforms accountable wherever they operate.
- Building Future Resilience: Educating for a Safer Internet
Young people are especially vulnerable to online hate, but they’re also the ones who can lead change—if we give them the right tools. A recent UNESCO report (2023) lays out practical steps:
- Teaching students and teachers how to be respectful digital citizens.
- Strengthening emotional intelligence and empathy in classrooms.
- Updating curricula to help young people spot hate speech and stand up for freedom of expression.
Because changing the system isn’t just about fines or filters—it’s about raising a generation that knows how to build a better digital world from the ground up.
- The Power of Us: Why Collective Action Still Works
Real change in the digital world doesn’t come from the top down—it comes from all of us. Users, policymakers, tech workers—we each have a role to play. And when we act together, things do shift.
We’ve already seen it happen. Grassroots campaigns have pushed platforms to remove dangerous features. Whistleblowers and employee walkouts have exposed shady practices behind the curtain. Every boycott, every formal complaint, every decision to build tech ethically—it all adds up. Slowly but surely, these actions chip away at the toxic status quo.
So where do we go from here?
We need to stop viewing social media as a chaotic playground and start recognizing it for what it truly is: essential infrastructure. These are spaces where we live, learn, and connect—places where harm spreads quickly if left unchecked.
That’s why we need democratic oversight. We need systems that prioritize safety, equity, and transparency from the start—not as afterthoughts. A healthier internet isn’t a distant dream—it’s something we can create, step by step, with every law, algorithm, and empowered user.
The internet we deserve won’t be handed to us. But if we demand better—together and with urgency—we can make it happen.
- 🚨 Your Turn:
- Toggle OFF one platform’s recommendations tonight
- Share this with someone who says “social media makes me angry”
- Comment below and tag the platform most in need of reform
Remember: Every share disrupts the outrage machine. Be the algorithm’s antivirus. 🔥
References:
Addressing hate speech through education: A guide for policy makers. (2023, January 10). UNESCO. https://www.unesco.org/en/articles/addressing-hate-speech-through-education-guide-policy-makers
Alkiviadou, N. (n.d.). Platform liability, hate speech, and the fundamental right to free speech. Human Rights Here. https://www.humanrightshere.com/post/platform-liability-hate-speech-and-the-fundamental-right-to-free-speech
Courea, E. (2024, September 9). Deluge of abuse sent on X to prominent UK politicians in election period. The Guardian. https://www.theguardian.com/society/article/2024/sep/09/abuse-x-uk-politicians-election-period
Katella, K. (2024, June 17). How social media affects your teen’s mental health: a parent’s guide. Yale Medicine. https://www.yalemedicine.org/news/social-media-teen-mental-health-a-parents-guide
Rawat, A., Kumar, S., & Samant, S. S. (2024). Hate speech detection in social media: Techniques, recent trends, and future challenges. Wiley Interdisciplinary Reviews: Computational Statistics, 16(2). https://doi.org/10.1002/wics.1648
Social media engagement: Misinformation, anger, and algorithms. (n.d.). Logically Facts. https://www.logicallyfacts.com/en/analysis/social-media-engagement-misinformation-anger-and-algorithms