
Introduction
You probably use Facebook, YouTube, or Twitter every day. Media has become an integral part of human society and an inseparable part of most people’s lives. As media scholar Neil Postman (1985) put it in Amusing Ourselves to Death, we are like fish in water: we live inside a media environment and barely notice it is there. Social media has fundamentally changed the way we communicate, connect, and interact, offering unprecedented opportunities for global dialogue. But what you might not realize is that the way these platforms are designed, and the way they are managed, can quietly help spread some of the internet’s darkest content: hate, racism, and even violence.
Back in 2019, a horrifying event in Christchurch, New Zealand, made this all too real. A gunman livestreamed his attack on two mosques, killing 51 people while broadcasting the massacre on Facebook for the world to see (Macklin, 2019). The video went viral in minutes, shared and recommended across platforms. This tragedy exposed serious flaws in how social media works, and in how badly it is governed.
Some researchers call this kind of problem “platformed racism”—a fancy way of saying that racist or hateful content isn’t just spread by bad users, but is often quietly boosted by the platforms themselves (Matamoros-Fernández, 2017). This blog unpacks how that happens, what went wrong in Christchurch, and what we can do to fix it.
What Happened in Christchurch?
On March 15, 2019, a white supremacist gunman walked into two mosques in Christchurch, New Zealand, during Friday prayers and opened fire. By the time the attack ended, 51 people were dead and dozens more were injured. What made this tragedy even more shocking was that the attacker livestreamed the massacre on Facebook using a GoPro camera strapped to his head (Macklin, 2019).
Within minutes, the video was copied, shared, and reuploaded across the internet—on YouTube, Twitter, Reddit, and other platforms. Despite attempts to take it down, the footage continued to spread. In the 24 hours after the attack, Facebook reported removing 1.5 million versions of the video (Keall, 2019).
This was one of the first times a terrorist attack was broadcast live to a global audience in real time. The killer didn’t just use social media to promote his views; he used it as part of the weapon. And in doing so, he exposed just how unprepared these platforms were to deal with content designed to go viral, no matter how horrific it was.
Why Did Social Media Make Things Worse?

It’s easy to think of the Christchurch attack as an isolated tragedy. But what made it so much worse was how quickly, and how widely, it spread online. The attacker streamed the massacre on Facebook Live from a head-mounted GoPro, mimicking the perspective of a first-person shooter video game (Macklin, 2019). He also posted links to the video and his manifesto on Twitter and on message boards like 8chan, knowing exactly how content spreads online. He designed his content to go viral.
But what made the situation worse wasn’t just the livestream; it was how the platforms amplified it. Facebook’s systems did not immediately recognize the video as harmful, so it could be viewed, saved, and shared before moderators could act. On YouTube, the video was reuploaded countless times, some versions edited or mirrored to evade detection, and many were automatically recommended to users through the “Up Next” feature, a system designed to maximize engagement but blind to context. Platforms like Facebook, YouTube, and Twitter are built to boost content that gets reactions: the more people like, comment on, or share a post, the more the platform shows it to others. That’s great for cat videos and memes, but dangerous when the content is violent or hateful. In this case, reuploads of the shooter’s video were recommended alongside other extremist content, exposing it to even more viewers (Macklin, 2019).
This is a clear example of what Matamoros-Fernández (2017) calls “platformed racism”: the way platforms are built can help harmful content thrive even when no one intends it to. It’s not just that racist or violent videos exist online; it’s that the systems behind the scenes, such as trending lists, auto-play queues, and algorithmic recommendations, push this content in front of more eyes. The more shocking or controversial the content, the better it performs under the logic of engagement.
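To see why engagement-first design is so risky, here is a toy sketch of the logic in Python. It is not any platform’s real code: the post fields, the scoring weights, and the example numbers are all invented for illustration. The point is simply that a ranker rewarding raw reactions has no way of knowing whether those reactions are delight or horror.

```python
# Toy illustration only: a made-up, drastically simplified "engagement ranker".
# Real ranking systems are far more complex and are not public; every name and
# number here is hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> int:
    # Reward every kind of reaction; nothing in this score asks *why*
    # people are reacting (outrage counts the same as joy).
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most-reacted-to post rises to the top, regardless of content.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Cute cat compilation", likes=900, comments=50, shares=40),
    Post("Shocking violent clip (reupload)", likes=400, comments=700, shares=600),
    Post("Local bake sale", likes=120, comments=10, shares=5),
]

for post in rank_feed(feed):
    print(engagement_score(post), post.title)
```

In this made-up feed, the violent reupload outranks everything else simply because it provokes the most comments and shares, which is exactly the engagement logic described above.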
Even worse, as Matamoros-Fernández (2017) notes, automated moderation systems often miss dog whistles, memes, and subtle hate speech, especially when attackers deliberately use humour or internet slang to dodge detection. That’s exactly what happened in Christchurch. And it’s not only about what gets shown; it’s also about what doesn’t get taken down. Many users flagged the Christchurch video, but the platforms were slow to act, because their tools struggle with fast-moving content that is cloaked in humour or irony.
So while the shooter pulled the trigger, social media gave him the megaphone. The platforms didn’t cause the violence, but they weren’t just passive hosts either: their very architecture helped it go viral.
When Platforms Failed

After the Christchurch shooting, major social media companies were quick to say they had removed the video and condemned the violence. Facebook announced it had removed 1.5 million versions of the video within 24 hours. That sounds impressive, until you realize how many people saw, downloaded, and reuploaded it before the platform reacted. The full video remained accessible on Facebook for a time after the stream ended, and fragments kept appearing on YouTube, Reddit, Twitter, and even TikTok. By then, the damage was done: even days after the event, new versions kept popping up, edited just enough to escape detection (Macklin, 2019).
Why couldn’t the platforms stop it? Part of the problem lies in how content moderation actually works. Platforms rely on a mix of automated tools and third-party human moderators, many of whom are underpaid, overworked, and far removed from the cultural or linguistic context of the content they review, and the whole system is largely reactive, depending heavily on user reports (Sinpeng et al., 2021, pp. 22–38). In the Asia-Pacific region, platforms like Facebook do have hate speech policies on paper.
However, even where policies exist, they often aren’t enforced in real time. Moderators frequently have to judge content by its most graphic imagery rather than its context, so subtle hate symbols, coded language, and memes go undetected, and the system simply could not keep up with the sheer volume of Christchurch reuploads: YouTube alone reportedly saw a new upload every second in the first 24 hours after the attack (Macklin, 2019). The speed and scale of virality outpaced platform infrastructure.
Another problem is that most platforms rely on self-regulation. They write their own rules (called “community guidelines”), framed around protecting freedom of expression, but aren’t held accountable by any external body. As Flew (2021, pp. 117–118) points out, this leads to vague enforcement, a lack of transparency, and inconsistent takedown practices. Users rarely know why some content is removed while other content stays up.
Worse still, the shooter’s digital footprint, including his manifesto and video, was deliberately crafted to exploit these weaknesses. Copies were mirrored, re-edited, and renamed, and his posts were laced with alt-right slang, precisely because he knew this would slip past content filters (Macklin, 2019).
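To get a feel for why small edits defeat naive filters, here is a deliberately simplified Python sketch. It assumes a filter that blocks files by exact cryptographic hash, which is not how Facebook or YouTube actually match videos, but it shows why any change to a file, however trivial, produces a “new” item that an exact-match blocklist has never seen.

```python
# Toy illustration: why a blocklist of exact file hashes is easy to defeat.
# Not a description of any platform's real matching system; the byte strings
# below are stand-ins for video files.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend this byte string is the original banned video file.
original = b"frame1 frame2 frame3 frame4"
blocklist = {fingerprint(original)}

# A reuploader mirrors the video or trims a few frames before posting again.
mirrored = original[::-1]                     # crude stand-in for mirroring
trimmed = original.replace(b" frame4", b"")   # crude stand-in for re-editing

for name, candidate in [("exact copy", original),
                        ("mirrored copy", mirrored),
                        ("trimmed copy", trimmed)]:
    blocked = fingerprint(candidate) in blocklist
    print(f"{name}: {'blocked' if blocked else 'slips through'}")
```

Only the exact copy is caught; the mirrored and trimmed versions slip through. Real platforms rely on more resilient fingerprinting and shared hash databases, yet the flood of Christchurch reuploads showed how much room determined uploaders still had.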
By the time companies acted, the harm had already gone global. In short, the platforms weren’t ready. And people paid the price.
Real People, Real Harm

Behind every viral video and trending hashtag are real people. When we talk about content moderation or algorithm problems, it’s easy to forget the people behind the screen, people who are hurt, retraumatized, and targeted. For the Muslim community in Christchurch, the 2019 attacks weren’t just a national tragedy; they were a deeply personal trauma. Many survivors, family members, and even distant community members were re-exposed to the horror online as the gunman’s footage spread across Facebook, YouTube, Twitter, and fringe sites. Some saw the livestream before they even knew whether their loved ones were safe. The attack was not only a failure of digital governance; it was a human tragedy for Muslim communities in New Zealand and around the world.
Carlson and Frazer (2018, pp. 12–14) remind us that Indigenous and marginalized users often experience social media not as a place of connection, but as a space of constant harassment. In their research on Indigenous Australians, they found that racist abuse online isn’t rare—it’s routine. The same is true for many Muslim users. After Christchurch, survivors and their families had to endure not only grief, but the horror of seeing the attack replayed online—again and again.
Worse, online trolls flooded comment sections with slurs and conspiracy theories, some even mocking the victims. In many cases, this digital abuse wasn’t flagged or removed. Platforms failed to create any safe online space for affected communities. As Carlson and Frazer note, harmful content doesn’t just “appear”—it circulates in echo chambers that normalize hate, reinforce stereotypes, and amplify trauma.
Few social media companies reached out to Muslim users for support or apology. No emergency content warnings were issued. No tailored protections were put in place. For many, it felt like the systems designed to connect people had instead turned against them—and no one was held responsible. This wasn’t just a failure to remove content. It was a failure to protect people. Social media companies have a duty not just to take things down—but to understand how their platforms affect the people who are most vulnerable.
What Needs to Change?
After the Christchurch attack, tech companies offered the usual response: “We’re reviewing our policies. Nothing to announce at this stage.” But vague promises won’t stop the next livestreamed atrocity. What’s needed is a complete shift in how platforms think about responsibility.
First, platforms must have clear and enforceable definitions of hate speech and violent extremism. Too often, harmful content slips through because rules are inconsistent or hidden behind corporate jargon. Facebook, for instance, failed to flag the Christchurch video immediately—even though it clearly violated their own terms of service (Keall, 2019). As Flew (2021) argues, real accountability means external oversight—not just vague community standards written by the platforms themselves.
Second, algorithms need to be redesigned to stop boosting dangerous content. YouTube’s recommendation engine actively pushed users toward reuploads of the attack, simply because those videos were getting clicks (Macklin, 2019). Features like “Up Next” and auto-play should be paused during crises, and AI filters should be trained to prioritize safety over virality.
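One way to picture the “pause amplification during a crisis” idea is a simple gate in the recommendation path. This is purely a sketch of the concept, not a description of any platform’s safety systems; the flag and function names are invented.

```python
# Conceptual sketch of pausing auto-play/"Up Next" for crisis-related content.
# Everything here is hypothetical.
CRISIS_MODE = True  # imagined switch flipped by a trust-and-safety team during an attack

def should_autoplay_next(related_to_crisis: bool) -> bool:
    # During a declared crisis, stop auto-queueing related content,
    # even if engagement metrics say it would "perform" well.
    if CRISIS_MODE and related_to_crisis:
        return False
    return True

print(should_autoplay_next(related_to_crisis=True))   # False: don't amplify
print(should_autoplay_next(related_to_crisis=False))  # True: unrelated content unaffected
```

The hard engineering problem, of course, is deciding what counts as crisis-related in the first place, which is why better-trained classifiers and human review still matter.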
Third, moderation can’t be outsourced to poorly trained workers halfway across the world. Sinpeng et al. (2021, pp. 22–38) point out that many moderators in the Asia-Pacific region don’t understand local languages or political context, meaning hate speech often goes undetected. Platforms should invest in regionally embedded, culturally competent moderation teams, not just AI tools.
Finally, platforms need to actively engage with affected communities. After Christchurch, Muslim groups were rarely consulted in policy updates. Listening to these voices isn’t just respectful—it’s essential for building a platform that works for everyone.
Social media is powerful—and with power comes responsibility. We’ve seen what happens when platforms prioritize growth over safety. It’s time to reverse that.
Conclusion
The Christchurch shooting changed how I think about social media. What should have been a space for connection became a tool for terror—and the platforms, built to bring us together, failed to protect the people who needed them most. It wasn’t just a technical glitch. It was a wake-up call.
I still believe in the power of the internet to do good. But that power needs to be matched with responsibility. We need smarter rules, fairer systems, and platforms that actually listen to the communities they affect.
What happened in Christchurch can’t be undone. But by learning from it—and by demanding more from the companies that shape our digital lives—we can build a safer, more respectful online world. Not just for ourselves, but for everyone.

References
BBC. (2019, August 5). What is 8chan? BBC News. https://bbc.com/news/blogs-trending-49233767
Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. Macquarie University. https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf
Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 115–118). Polity.
Keall, C. (2019, March 17). Could a horrific livestream happen again? Where Facebook, YouTube & Twitter’s response falls short. NZ Herald. https://www.nzherald.co.nz/business/could-a-horrific-livestream-happen-again-where-facebook-youtube-twitters-response-falls-short/IPYJRQCWGB3XTNXU2M3Q6FANCU/
Macklin, G. (2019). The Christchurch attacks: Livestream terror in the viral video age. Combating Terrorism Center at West Point. https://ctc.westpoint.edu/christchurch-attacks-livestream-terror-viral-video-age/
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
New Zealand Parliament. (n.d.). The Christchurch mosque attacks: How parliament responded. https://www.parliament.nz/mi/get-involved/features/the-christchurch-mosque-attacks-how-parliament-responded/
Postman, N. (1985). Amusing ourselves to death: Public discourse in the age of show business. Penguin Books.
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3