The Internet Is Broken: How Hate Speech and Online Harm Became Everyone’s Problem

Let’s be real: being online doesn’t always feel safe anymore.

You open Instagram or X (formerly Twitter) to unwind. Maybe you’re checking out a viral video or reading the news. But within seconds, the comment section turns ugly, filled with slurs, abuse, and conspiracy theories. You scroll away, a little unsettled. Maybe even a little numb.

If this sounds familiar, you’re not alone. There’s a growing sense that our digital spaces, once full of potential, are now overwhelmed by toxicity: hate speech, harassment, misinformation. It’s like the internet has lost its chill.

But this isn’t just about bad vibes or mean comments. Online harm is real. It affects mental health, public trust, and even democracy. So, how did we get here, and what can we do about it?

Let’s unpack the mess and examine how governments, platforms, and everyone else are trying to make sense of it.

What Counts as Online Harm?

We often hear the term “online harm,” but what does that actually mean in practice? The phrase is thrown around in policy debates, news headlines, and tech company statements, but it’s not always clear to everyday users what it covers.

At its core, online harm refers to any digital activity, whether that’s content, behavior, or even platform design, that causes psychological, emotional, reputational, social, or physical damage to individuals or communities (Scheuerman et al., 2021). This can manifest in very visible forms—like hate speech, harassment, or doxxing—but also in more subtle, indirect ways such as algorithmic bias, the spread of misinformation, or the promotion of self-harm content.

To understand how online harm functions, consider this: someone posts a TikTok or tweet expressing their opinion. Within hours, they’re flooded with thousands of abusive replies, many of which include racial slurs, threats of violence, and targeted insults. The post has “gone viral,” not because it was controversial, but because the algorithm rewarded the engagement, regardless of its tone. For the platform, it’s data. For the person, it’s trauma. As Matamoros-Fernández (2017) highlights, in episodes like these the use of slurs, whether racial, homophobic, or transphobic, surges dramatically.

Then there’s misinformation. Think about how conspiracy theories around COVID-19, vaccines, or elections were widely circulated on platforms like Facebook and YouTube. While not always immediately violent, this content can erode trust in science, fuel public health crises, and, in some cases, incite extremist actions offline. It’s no longer just a digital issue; it becomes a real-world problem.

Some online harms are clearly illegal. These include things like hate speech that incites violence, the sharing of child sexual exploitation material, terrorist recruitment content, and doxxing. Hate speech violates human rights, and although online communication has brought people closer together, it has also given people a space to express hate more freely (Flew, 2021). Thus, when this kind of content appears, platforms are (at least in theory) legally obligated to act.

But here’s the more complex part: most harmful content users experience day-to-day is not technically illegal. It lives in a legal grey area: unpleasant, abusive, or manipulative, but not directly prosecutable. A meme mocking a rape survivor. A “joke” thread using racial stereotypes. A subtle suggestion that someone should kill themselves. These examples might not cross legal thresholds, but they can cause serious psychological damage, particularly when they are part of sustained or coordinated campaigns.

It’s hard to moderate this type of content. One person’s “sarcasm” can be another person’s trauma trigger. Satire, critique, activism, and abuse can all share a similar tone yet have very different effects. This is the balancing act social media platforms confront daily: upholding principles of free speech while also protecting users from harm.

Most platforms strike that balance with a mix of automated systems (AI classifiers) and human moderators who detect and remove harmful content. But these systems are not perfect. AI can get context and nuance wrong, especially across languages or cultural frameworks. Human moderators, meanwhile, are frequently underpaid, overworked and exposed to grisly content with scant psychological support.
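
To make that trade-off concrete, here is a minimal, purely illustrative sketch in Python of how such a triage pipeline might be wired together. Everything in it is a hypothetical stand-in: the toxicity_score function fakes what a trained classifier would do, and the thresholds are invented. The point is the shape of the flow: confident cases are handled automatically, while the ambiguous middle, where context and nuance live, is routed to human reviewers.

    # Purely illustrative triage sketch, not any platform's real pipeline.
    # "toxicity_score" is a made-up stand-in for a trained classifier.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        text: str

    def toxicity_score(post: Post) -> float:
        """Hypothetical stand-in for an ML model; returns a 0-1 toxicity estimate."""
        blocklist = {"slur_a", "slur_b"}  # placeholder terms, not a real lexicon
        words = post.text.lower().split()
        hits = sum(word in blocklist for word in words)
        return min(1.0, 5 * hits / max(len(words), 1))

    def triage(post: Post, remove_above: float = 0.9, review_above: float = 0.5) -> str:
        """Auto-act only on high-confidence cases; send the ambiguous middle to humans."""
        score = toxicity_score(post)
        if score >= remove_above:
            return "auto_remove"
        if score >= review_above:
            return "human_review"  # the queue where context and nuance are judged
        return "keep"

    print(triage(Post("1", "have a lovely day")))     # keep
    print(triage(Post("2", "slur_a slur_a slur_a")))  # auto_remove

In practice, it is exactly that human-review queue that is expensive to staff and support properly, which is why it so often ends up under-resourced.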

And then there’s the algorithm problem. Social media feeds are not chronological—they’re curated based on what will get the most attention. Often, that means content that provokes outrage, anger, or fear. In other words, the business model of many platforms is hardwired to promote exactly the kind of content that causes the most harm.
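
As a rough illustration of what “curated for attention” means mechanically, here is a toy ranking function, again in Python, with invented weights and field names rather than any platform’s real formula. Because replies and shares are weighted far more heavily than quiet approval, a post that sparks arguments outranks a calmer post with three times as many likes.

    # Toy illustration of engagement-based ranking; the weights and fields are invented.
    from typing import Dict, List

    def rank_feed(posts: List[Dict]) -> List[Dict]:
        """Order posts by a crude predicted-engagement score instead of recency."""
        def score(post: Dict) -> float:
            # Reactions that signal strong emotion (replies, shares) count far more
            # than quiet approval, so argument-starting posts float to the top.
            return 3.0 * post["comments"] + 2.0 * post["shares"] + 1.0 * post["likes"]
        return sorted(posts, key=score, reverse=True)

    feed = [
        {"id": "calm_update",  "likes": 120, "comments": 4,  "shares": 2},   # score 136
        {"id": "outrage_bait", "likes": 40,  "comments": 90, "shares": 35},  # score 380
    ]
    print([post["id"] for post in rank_feed(feed)])  # ['outrage_bait', 'calm_update']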

As online spaces begin to function more like public squares than private venues, the stakes are higher than ever. We don’t just use the internet—we live on it. We work, learn, debate, date, and express ourselves online. If we can’t define or control what counts as “harm,” and if the people in charge of these spaces fail to protect users, the consequences reach beyond the screen and into the fabric of our society.

Elon Musk’s Twitter Takeover – Chaos in the Name of Free Speech

In October 2022, Elon Musk, the billionaire CEO of Tesla and SpaceX, completed his $44 billion acquisition of Twitter (Conger & Hirsch, 2022). Almost overnight, the platform was thrown into a period of massive change. Musk, who described himself as a “free speech absolutist,” promised to make Twitter a space for unfiltered expression and open debate. But within days, it became clear that this vision would come at a cost.

One of Musk’s first actions as the new owner was to fire large segments of Twitter’s content moderation and trust & safety teams, including many people responsible for monitoring abuse, hate speech, and misinformation. In his view, these roles represented an ideological bias that limited free expression. What followed was a rapid and public dismantling of the platform’s existing safety infrastructure.

Under Musk, Twitter (later rebranded to “X”) reinstated previously banned accounts, including individuals known for spreading conspiracy theories, extremist rhetoric, and hate speech. Figures like Andrew Tate, Donald Trump, and far-right influencers were welcomed back with open arms.

Musk pushed back on reports that hate and abuse were rising, accusing the media and watchdogs of exaggerating. But regular users noticed the difference. Many journalists, LGBTQ+ creators, women in tech, and racialised users reported a significant uptick in abuse. Some left the platform entirely; others made their accounts private to escape harassment.

In response, Musk introduced “Community Notes,” a crowdsourced tool designed to correct misinformation through user-generated fact-checking (Wirtschafter & Majumder, 2023). But this tool, while noble in theory, did little to stop waves of coordinated hate. It couldn’t protect individuals from dogpiling or remove viral hate speech fast enough.
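
The publicly documented idea behind Community Notes is “bridging”: a note is displayed only once it has been rated helpful by contributors who normally disagree with each other. The sketch below is a heavy simplification of that idea, using two fixed, hypothetical rater “camps,” whereas the real system infers contributor viewpoints from their rating histories. It also illustrates the limitation described above: the mechanism depends on gathering enough ratings, which takes time a fast-moving harassment campaign does not give you.

    # Heavily simplified caricature of the "bridging" idea behind Community Notes:
    # a note is shown only if raters who usually disagree both find it helpful.
    # The two fixed camps ("A"/"B") are hypothetical; the real system infers
    # contributor viewpoints from rating histories rather than using fixed groups.
    from typing import List, Tuple

    Rating = Tuple[str, bool]  # (rater_camp, found_helpful)

    def note_is_shown(ratings: List[Rating], threshold: float = 0.6) -> bool:
        """Require an independent 'helpful' majority from each camp."""
        for camp in ("A", "B"):
            votes = [helpful for rater_camp, helpful in ratings if rater_camp == camp]
            if not votes or sum(votes) / len(votes) < threshold:
                return False
        return True

    print(note_is_shown([("A", True), ("A", True), ("B", False)]))              # False
    print(note_is_shown([("A", True), ("A", True), ("B", True), ("B", True)]))  # True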

Meanwhile, advertisers started jumping ship. Major brands like Coca-Cola, IBM, and Disney pulled their advertising from the platform, concerned that their content would appear alongside violent or offensive posts. This had a tangible economic impact. Revenue dropped. Trust dropped. Usage dropped.

What we are witnessing is more than a leadership shake-up — it is a real-time experiment in the consequences of unregulated “free speech.” Without any kind of organised moderation, a small number of loud, toxic voices started to take over the conversation, scaring others away and creating a climate of fear and fatigue.

The lesson? The absence of guardrails on speech does not equal democracy — it equals digital anarchy. And when that takes place, it is usually those who are already marginalised in society who are most affected.

The UK’s Online Safety Act – Can Policy Clean Up the Internet?

In October 2023, the UK’s Online Safety Act finally reached the statute book following years of political wrangling (Bechtold, 2024). This historic piece of legislation places a legal responsibility, or duty of care, on tech companies to ensure users are protected from illegal and harmful content (Bechtold, 2024). The law covers platforms hosting user-generated content, including social media sites, messaging applications, forums and search engines.

At the heart of the Act is the idea that tech companies should be held responsible for what happens on their platforms, just as restaurants are responsible for food safety or airlines for passenger welfare.

Key requirements of the Act include:

  • Removing illegal content (e.g., terrorism, child sexual abuse material) promptly.
  • Minimising the risk of users encountering harmful but legal content, such as material promoting suicide, self-harm, eating disorders, or misogyny.
  • Implementing age-appropriate protections for children, including age verification and content filtering tools.
  • Publishing transparent reports on content moderation practices and systems.

Supporters have applauded the Act for finally closing the gap between tech accountability and user safety, action many have deemed necessary for years. It introduces criminal liability for executives who fail to comply, gives enforcement powers to Ofcom (the UK’s communications regulator) and gives users stronger mechanisms to report harmful content.

Who Pays the Price for Online Hate?

It is convenient to write online hate off as “just words on a screen.” But for those targeted, the results are often traumatic and lasting.

Imagine you’ve received hundreds of violent threats for speaking out. Or strangers posting your personal information online. Or waking up every morning to thousands of dehumanising messages on your phone. These aren’t theoretical scenarios; they’re daily realities for many users.

At least 41 per cent of U.S. adults have been subject to online harassment, according to a Pew Research Center report (2021), a number that rises dramatically for women, people of colour, LGBTQ+ folks, and young people. This type of abuse is not just psychologically draining; it can lead to mental health problems, cost people their jobs, and leave victims feeling trapped in their own homes.

And online hatred has, in some tragic cases, driven people to take their own lives. In others, it ends in physical violence, as when online radicalisation spills over into mass shootings or hate crimes.

When toxic content is allowed to spread unchecked, it doesn’t just harm individuals; it poisons the terrain for everyone. It stifles free expression, spreads fear and turns the internet, a realm that should be one of opportunity, into a realm of peril.

There are also the long-term effects to consider. If the most vulnerable voices keep being pushed out of our online communities, we will end up with a less democratic, less representative, and less diverse internet.

So, What Can Be Done?

It’s daunting to think of the scale of the problem — but change is possible.

First, tech platforms should no longer hide behind the “we’re just a platform” excuse to avoid responsibility. They are not neutral messengers — they determine what we look at, listen to and interact with. Through the design of their algorithms and the enforcement of their community guidelines, platforms exercise tremendous power to shape public discourse. That power must be exercised responsibly.

That means investing in hiring and training content moderators (not just deploying AI), engaging with affected communities and being transparent about decision-making processes. It also means pushing back against profit motives that reward controversial, high-engagement content, because, let’s be honest, outrage attracts views.

Second, governments should craft thoughtful, rights-respecting regulations. The UK’s Online Safety Act may be a flawed model, but it shows what national-level digital policy could look like. We need laws that protect users without destroying encryption and that promote safety without crushing dissent.

Finally, everyday users have a role too. Digital citizenship means being conscious of what we share, how we engage, and who we support. Calling out abuse, amplifying marginalised voices, and choosing not to reward outrage with attention—all these small acts can shift the culture online.

The internet isn’t beyond saving, but it won’t fix itself. If we want it to be better, we all have to participate.

Conclusion: A Broken Internet Isn’t Hopeless

Yes, the internet feels broken sometimes. It’s noisy, chaotic, and often cruel. But it’s also where movements are born, art is shared, communities are built, and voices are heard.

The same tools that spread hate can also spread healing and hope. But that depends on how we design, govern, and use them.

We don’t have to accept online harm as the status quo. We can demand safer platforms, smarter policies, and stronger communities. Because the internet belongs to all of us, and if we care enough to fix it, it doesn’t have to stay broken.

References

Bechtold, E. (2024). Regulating online harms: an examination of recent developments in the UK and the US through a free speech lens. Journal of Media Law, 1-32. https://doi.org/10.1080/17577632.2024.2395094 

Conger, K., & Hirsch, L. (2022). Elon Musk completes $44 billion deal to own Twitter. The New York Times. https://www.nytimes.com/2022/10/27/technology/elon-musk-twitter-deal-complete.html

Flew, T. (2021). Hate Speech and Online Abuse. In Regulating platforms. John Wiley & Sons.

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130

Pew Research Center. (2021). The State of Online Harassment. Retrieved from https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ 

Scheuerman, M. K., Jiang, J. A., Fiesler, C., & Brubaker, J. R. (2021). A framework of severity for harmful content online. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-33. https://dl.acm.org/doi/abs/10.1145/3479512

Wirtschafter, V., & Majumder, S. (2023). Future challenges for online, crowdsourced content moderation: Evidence from Twitter’s community notes. Journal of Online Trust and Safety, 2(1).
