No One Is Safe: Digital Violence Is Built into the System

Before You Speak, They’ve Already Judged You

 

Introduction

Have you noticed that we’re getting more and more careful about what we say online lately?

Maybe it’s just an ordinary comment — sharing an opinion, forwarding an article, commenting on some news.

And then you suddenly become the target. Attacked, reported, blocked; sometimes your account simply disappears.

What’s scarier is: you don’t even know who is “judging” you.

We’ve always thought that only those “controversial” people — public figures, feminists, people involved in politics — would face online violence.

But the truth is, as long as you’re speaking, as long as you’re breathing on this platform, you might be the next target.

This isn’t alarmist talk — it’s a kind of invisible but very real violence in the digital space.

It’s not always insults and doxxing — it could be your video being “mistakenly taken down,” your account being “buried by reports,” or your appeal being “systematically auto-responded” and archived.

As Carlson and Frazer (2018) wrote: “Users don’t just risk being ignored — they risk being erased.”

We’re not just not being heard — we’re being systemically erased.

Platforms often call themselves “neutral,” but Gillespie (2018) reminded us long ago:

“Platforms are not merely conduits for expression—they actively participate in shaping what expression is possible.”

You thought the platform was just a stage — but it’s already the director.

So this article isn’t about “who’s wrong,” but about asking:

In this digital world where everyone could be targeted, what should we do?

Part 1: What Is Digital Violence? It’s More Widespread Than We Think

When we talk about “online violence,” most people picture a few trolls, some curse words, or an emotional comment section.

But that’s just the surface of violence.

What’s truly frightening are the platform mechanisms you can’t see and can’t touch, but that can really hurt you.

They might be an algorithm, a moderation rule, a vague “violation of community standards,” or — a complete silence.

This is the new form of violence in the platform era: it doesn’t rely on intention, but on structure.

Sociologist Johan Galtung used the term “structural violence” to describe a kind of violence with no direct perpetrator. It’s embedded in systems: you could have been safe, but the structure hurt you.

On social platforms, this structure includes: moderation pipelines, recommendation algorithms, report systems, the default language logic, the default standard of “normal users.”

Massanari (2015) studied Reddit’s “freedom culture” and pointed out how it enables continued attacks on women under the guise of neutral design. It’s not that platforms encourage hate — it’s that they have no mechanism to recognize or stop it — and sometimes even reinforce it.

This is called a “toxic technoculture.”

Carlson and Frazer (2018), in their study of Indigenous Australian users, found that platform moderation systems often fail to understand the cultural context of marginalized expression. When Indigenous users express anger or share experiences of harm, platforms flag them as “offensive” and remove the content — while real hate speech remains untouched.

Even terminology can be part of the violence. The Lexicon of Lies (2017) report pointed out that platforms and media misuse terms like “fake news” or “harassment,” which blurs the real problems and dilutes responsibility.

For example, calling organized attacks “free speech,” calling structural censorship “automated judgment,” calling a missing post “inappropriate content” — all of these polite terms are just the wrapping paper of violence.

This isn’t someone being malicious — it’s the system itself that’s hurting you.

When you post something and it gets taken down for no reason;

When you report hate speech and get told it “doesn’t violate the rules”;

When you’re clearly the victim, yet your account is suspended —

That’s not a mistake.

That’s the system “working as designed.”

This is the digital violence we’re talking about: not a sentence that stings, but an entire system that denies your existence.

 

Part 2: Not the Marginalized — but “Anyone Who Dares to Speak”

You think you’re safe because you’re not a woman, not a minority, not a dissident, not a public figure.

But you’re wrong.

In the digital world, as long as you’re speaking, there is no safe zone.

Violence doesn’t ask who you are, and doesn’t care whether you meant harm.

It only needs you to show up — and that’s enough.

We’ve seen far too many examples.

  • Men who try to talk about emotional suppression in masculinity are labeled “misogynists” and subjected to group pile-ons.
  • Feminist scholars who share academic views get clipped, edited into videos, and posted in “anti-women” groups — receiving death threats and deepfake abuse.
  • Experts, journalists, public intellectuals who discuss vaccines, politics, or environmental issues are also attacked, defamed, and banned — while platform moderation is always a step too late.

You just stepped onto the digital street — and got charged with “disturbing public order.”

You just told your own story — and got accused of “stirring emotions.”

You just gained some followers — and got pushed into a platform-designed show trial.

Case Study: When Platforms Become Prosecutors

Digital violence doesn’t always start in the comments.

Sometimes, it starts the moment you post something.

  • Case 1: Zheng Linghua — When Joy Turns into Evidence for Accusation

In 2022, Zheng Linghua, a girl from Hangzhou, had just been accepted into the Master’s program at East China Normal University.

She dyed her hair pink in celebration, and filmed a video of herself going to visit her grandfather with her offer letter.

It was a “highlight moment” in her life, and a completely ordinary thing to share.

But online users twisted her video into rumors of “early dating” and “showing off.”

Her pink hair, her outfit, her appearance — all became targets for attack, escalating into slut-shaming and sexualized rumors.

The platform did nothing.

Her video was taken down.

Her appeal failed.

Her account was mass-reported.

The rumors kept spreading.

She tried to prove herself — but even the space to speak was stripped away.

Eventually, after relentless online abuse and total silence from the platform, she took her own life.

This wasn’t just individual malice — it was a structural failure by the platform.

As Gillespie (2018) emphasizes, platforms act as governors: they make the rules, enforce moderation, and uphold order — but often choose ambiguity and inaction when it matters most.

In Zheng’s case, the platform let the report system be weaponized, refused to disclose standards, and abandoned her in the grey zone.

This is exactly what Galtung described as “structural violence” —

No single person pressed the attack button.

The system simply made it easy for harm to happen, and made recovery impossible.

  • Case 2: Zoe Quinn and Gamergate — “Free Speech” as a Gendered Attack

Back to 2014.

American indie game developer Zoe Quinn was doxxed and attacked in the infamous “Gamergate” harassment campaign — which began when her ex-boyfriend posted a long blog post about her.

Attackers claimed to be “exposing corruption in gaming.”

In reality, they used her private life as a weapon.

They spread sex-related rumors, made threats, doxxed her address, harassed her family.

Reddit and Twitter were the main battlegrounds.

Reddit hosted thread after thread targeting her.

On Twitter, thousands of anonymous accounts coordinated to repost, reply, and provoke.

Though these platforms had “anti-harassment policies,” enforcement was slow and weak — depending almost entirely on manual reports.

Massanari (2015) argues that Reddit’s design and culture of “absolute freedom” created an ecosystem where hate was empowered and rewarded.

Platforms claimed “we don’t decide what’s allowed,”

but their algorithms and moderation amplified attackers and silenced victims.

This wasn’t chaos — it was a coded culture of domination.

It wasn’t that the platforms couldn’t intervene. It was that they chose not to.

 

Part 3: Who Is Judging Us? The “Violent Structure” of Platform Governance

The Hidden Algorithms, Rules, and Blurred Boundaries of Responsibility

Have you ever noticed that those who launch online attacks, spread rumors, or make death threats almost never have to take responsibility?

They commit violence, then disappear. As if nothing ever happened.

Not because they’re clever, but because the system was designed to let them walk away.

They say, “It’s just a comment.”

The platform says, “It’s just the algorithm.”

The audience says, “It’s just the culture.”

The result is: no one is wrong — except you, who got hurt.

Opaque Discretion and Responsibility Drift

Platforms are not cold, neutral pipes, nor are they self-operating technical machines.

They are governors, curators, and judges.

But when it comes to violence, they suddenly become “powerless.”

Gillespie (2018) pointed out that what makes platform governance so difficult to hold accountable is its “role-shifting” nature: when responsibility comes calling, platforms retreat into being a “neutral channel”; when they want to intervene, they become “rule-makers.”

In the Zheng Linghua case, the platform allowed the reporting mechanism to be abused, refused to disclose moderation standards, and let hateful content spread — while limiting her voice.

The power was real, but the responsibility was always hidden in fog.

This is not technological neutrality — it’s selective participation in governance; not inaction, but a deliberate deferral of responsibility.

Moderation Outsourcing and Report Systems: Fragmented Governance

Content moderation looks like rule enforcement — but in reality, it’s outsourced responsibility.

Facebook’s own community standards sort hate speech into tiers of severity (Tier 1–3), and most of the actual review work is done by low-paid external contractors. These people face footage of sexual violence, hate speech, and degrading content every day — yet they are not truly “platform representatives,” but replaceable human filters.

Meanwhile, the report system becomes a weapon:

As long as enough people report you, your post can be taken down automatically — no explanation required, no context considered.

Algorithms don’t question — they execute. They don’t understand language or culture, but once the numbers are high enough, they rule against you.

Carlson & Frazer (2018) call this a “digital structure of silence” —

You’re not blocked — you’re ignored by design.

You didn’t say the wrong thing — you just didn’t match the model.
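To make this concrete, here is a minimal Python sketch of what a volume-based takedown rule can look like. It is purely illustrative: the Post structure, function names, and thresholds are all invented, not any real platform's code. The telling detail is what the logic never examines: the post itself, its context, or who is reporting it and why.

```python
# Hypothetical sketch of a volume-based takedown rule. All names and
# thresholds are invented for illustration; this is not any platform's code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    report_count: int          # total reports received
    unique_reporters: int      # distinct accounts that reported it

REPORT_THRESHOLD = 50          # invented number
UNIQUE_REPORTER_THRESHOLD = 30 # invented number

def auto_moderate(post: Post) -> str:
    """Decide a post's fate from report volume alone.

    Nothing here reads the post, weighs its context, or asks who reported
    it and why - which is exactly what lets a coordinated brigade win.
    """
    if (post.report_count >= REPORT_THRESHOLD
            and post.unique_reporters >= UNIQUE_REPORTER_THRESHOLD):
        return "removed"            # taken down automatically, no human review
    if post.report_count >= REPORT_THRESHOLD // 2:
        return "queued_for_review"  # a contractor may see it, eventually
    return "visible"

# A pile-on of 60 reports from 45 accounts clears the bar, regardless of content:
print(auto_moderate(Post(post_id="p1", report_count=60, unique_reporters=45)))  # removed
```

Clear the numeric bar and the content disappears; whether the reports were honest never enters the decision.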

The Myth of Neutrality Masks the Politics of the Platform

Platforms don’t want to be seen as “media,” so they refuse to take responsibility for speech.

But they still want to control distribution — so they quietly manipulate the algorithm.

As Gillespie has noted since the early 2010s, platforms wrap themselves in the myth of neutrality, hiding their sorting and ranking of content behind automation — wielding massive “governing power” without “being governed.”

Facebook’s failure in content governance across Asia is the clearest example.

In Myanmar, India, and the Philippines, they had almost no local-language moderators — religious hate speech spread unchecked, eventually sparking real-world violence.

It wasn’t incapacity; it was a default assumption that the violence would be tolerated.

Platforms don’t fear violence; they fear friction.

So they outsource it to algorithms.

If it draws clicks — promote it.

If it disrupts — bury it.

If it angers — amplify it.

Right or wrong? Doesn’t matter.

What matters is who feeds the machine.
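What does “feeding the machine” look like in practice? Below is a toy Python sketch of an engagement-weighted ranking score; every field name and weight is invented for illustration, and no real platform's formula is shown. The structural point survives the simplification: anger counts as engagement, and reports act as a small friction term rather than a verdict.

```python
# Toy sketch of engagement-driven ranking. The weights and field names are
# invented for illustration; this is not any real platform's formula.
from dataclasses import dataclass

@dataclass
class Engagement:
    clicks: int
    shares: int
    angry_reactions: int
    reports: int

def feed_score(e: Engagement) -> float:
    """Score a post purely by how much interaction it generates.

    Outrage counts as engagement, so content that angers people can rank
    higher than content people merely read. Reports are a small friction
    term, not a judgment of right or wrong.
    """
    score = 1.0 * e.clicks + 3.0 * e.shares + 2.0 * e.angry_reactions
    score -= 0.5 * e.reports
    return score

# A provocative post can outrank a calm one on anger alone:
calm = Engagement(clicks=500, shares=20, angry_reactions=5, reports=0)
provocative = Engagement(clicks=400, shares=60, angry_reactions=300, reports=40)
assert feed_score(provocative) > feed_score(calm)
```

In this toy model the provocative post wins on angry reactions alone; “right or wrong” never appears as a variable.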

We are judged by a system with no judge.

Comments convict. Likes endorse. Reports execute.

“This is community governance,” says the platform.

But who armed the community?

Who erased the appeals?

Who profits from the chaos?

Violence isn’t a bug — it’s a feature.

Silence isn’t inaction — it’s engineered.

You’re not just harassed — you’re abandoned by design.

So let’s ask again:

Do we still have a say in what we’re allowed to say?

If platforms decide whether we’re heard —

Then “free speech” isn’t freedom.

It’s a condition for survival.

Conclusion: Before Being Judged, We Must Learn to Speak First

You might think: isn’t this a bit exaggerated? Is it really that serious?

But the issue isn’t individual comments — it’s that platforms have normalized this idea that “comments can become weapons” and let it go unchecked.

You just said something — and you were attacked.

You were seen — and somehow became “provocative.”

This isn’t someone losing control or the platform “making a mistake.”

This is a structure that allows violence to happen by default, a form of governance that rules through neglect.

As Carlson and Frazer (2018) wrote in their study:

“Platforms don’t silence you directly—they just make it impossible for you to be heard.”

Platforms don’t kill you.

They just turn off your mic.

Gillespie (2018) also warned:

“The more that platforms claim to be neutral, the more they obscure the political consequences of their design.”

When platforms call themselves “neutral tech companies,” what they’re actually doing is hiding the political consequences of their governance.

Of course, we can’t change the platform’s design overnight.

But we can start:

Demanding platforms make their governance logic transparent, instead of hiding behind algorithmic black boxes;

Pushing for moderation systems to include cultural sensitivity and linguistic diversity, instead of relying on English-only logic;

Establishing real appeals channels, not “auto-closed pages”;

And more importantly — reminding every user: you are not just a consumer. You are part of this space. You have the right to demand safety.

The digital space should not be a courtroom — it should be a square.

A place where voices are allowed, conversations are protected, and people can make mistakes — without being destroyed.

When every one of us could be the next target, then we should be the first to stand up.

In a digital world where algorithms stage trials, silence becomes law, and violence becomes routine—

Before the platform decides whether we can speak — we must first learn how to speak together.
