Hate Speech and Harms Online: Deleting Comments Is Not Enough

On October 27, 2018, a man broke into the Tree of Life Synagogue in Pittsburgh in the United States and killed 11 people. Before the attack, he had posted anti-Semitic comments on the far-right platform Gab, writing that “the Jews are destroying the white nation”. This attack was more than an isolated act of offline violence; it demonstrates how hate speech online can be converted into violence offline. Many people treat hate speech as merely “nasty speech” or a free speech issue, but the reality is far more complex. Hate speech can shape group identity, deepen social divisions, and lay the groundwork for violence. When inflammatory language goes viral, it spreads like a virus across communities, instilling fear, hostility, and division. This blog explores the nature of hate speech, the layers of harm it causes, and why content moderation alone will not solve the problem.

What is hate speech?

Hate speech is more than just offensive language. It is speech that targets individuals because of who they are, whether by race, religion, gender, sexuality, or disability, in ways that incite discrimination, hostility, or violence (Parekh, 2012). Such speech is not merely offensive; it carries a clear malicious and destructive intent, and it differs significantly from routine rude comments. Insulting an individual and spreading a conspiracy theory that a particular race is “conquering the country” operate on completely different dimensions of harm: the former vents a personal feeling, while the latter stigmatizes and threatens an entire group. Hate speech is also defined differently across countries. In the United States, the First Amendment provides strong protection for speech, and hate speech can generally be restricted only when it directly incites imminent lawless action. Countries such as Germany and France, by contrast, criminalize Holocaust denial and racist speech. These legal differences reflect how differently societies balance social order against freedom of expression.

From online to offline violence

The Tree of Life synagogue shooting was not an isolated event. There is a strong link between hate speech on the internet and violence offline: hate speech supplies the intellectual justification, emotional mobilization, and social legitimacy for violence, and it often serves as both a prelude to violence and a rationale for it. The “Gamergate” campaign is a widely known example. In 2014, women working in the video game industry were subjected to large-scale online abuse, including death threats, doxxing (the exposure of identifying information), and personal attacks. What initially looked like scattered “online harassment” increasingly became an organized attempt to silence women’s speech. Citron (2014) documents how organized harassment of this kind drives women out of their professions and degrades online culture as a whole.

Research has repeatedly found a strong correlation between hate and violence on and offline. The Southern Poverty Law Center (SPLC) has estimated that 80% of America’s mass shooters over the past five years posted extremist or hateful content online before committing violence. These expressions of hatred not only foreshadowed imminent violence but were also used to justify it. Hate speech can also produce a “copycat effect”: when an act of extremist violence receives intense publicity online, it may inspire imitators. After the massacre committed by the Norwegian far-right terrorist Anders Behring Breivik in 2011, for example, far-right extremists in several European countries modelled their tactics on his.

Platforms are not neutral: the governance dilemma

Social media platforms like Twitter and YouTube long insisted that they were merely “neutral hosts” of users’ posts. Yet, as Gillespie (2018) argues, platforms effectively act as custodians and regulators of the internet: their algorithms, policies, and design choices profoundly shape our behavior and what we are exposed to, a role far beyond that of a neutral hosting provider. The central dilemma for platforms is how to balance commercial interests against social responsibility, and these companies have tended to put profit and clicks ahead of user safety. Inflammatory content drives engagement, time on site, and advertising revenue, and hate speech is distressingly effective at generating all three. Recommendation algorithms can therefore end up weighted toward harmful content, because it keeps users watching longer and interacting more. This engagement-driven recommendation model has been replicated across the industry: YouTube’s recommendation algorithm, for instance, has been reported to maximize watch time in ways that increase exposure to extremist content, and researchers have described a funnel effect in which viewers of moderately extreme content are led step by step toward more extreme material.

Platforms’ legal duties also vary by country. Section 230 of the United States Communications Decency Act shields platforms from liability for users’ content; it is widely regarded as a cornerstone of the internet’s growth, but it also leaves platforms with little legal incentive to moderate. The European Union’s Digital Services Act and Australia’s Online Safety Act, by contrast, require platforms to actively identify and remove illegal content or face massive fines. In “Hate Speech and Online Abuse”, Flew (2021) critiques this fragmented governance regime, arguing that platforms’ algorithmic architecture defaults to engagement, not ethical stewardship. Meta, for example, must follow more stringent content moderation rules in Europe but operates with relative autonomy in the United States. This patchwork creates enormous challenges for cross-border platforms and makes hate speech harder to control around the world.
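To make this concrete, here is a minimal, purely illustrative sketch in Python of an engagement-first feed-ranking rule. Nothing in it describes any platform’s actual system: the Post fields, the scores, and the harm_penalty weight are invented for the example. It only shows why, when the objective is engagement alone, the most inflammatory item floats to the top, and how even a crude down-weighting term changes the outcome.

```python
# Illustrative toy example only; not any platform's real ranking algorithm.
# "predicted_engagement" and "predicted_harm" stand in for hypothetical model outputs.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected clicks/comments, 0..1 (invented numbers)
    predicted_harm: float        # e.g. a hate-speech classifier score, 0..1 (invented)

def rank(posts, harm_penalty=0.0):
    """Order posts by engagement, optionally down-weighting likely-harmful ones."""
    return sorted(posts,
                  key=lambda p: p.predicted_engagement - harm_penalty * p.predicted_harm,
                  reverse=True)

feed = [
    Post("Local charity bake sale this weekend", 0.30, 0.01),
    Post("Inflammatory conspiracy post targeting a minority", 0.85, 0.90),
    Post("Photo of a sunset", 0.40, 0.00),
]

# Engagement-only ranking: the inflammatory post comes out on top.
print([p.text for p in rank(feed)])

# With a harm penalty (the "down-ranking" idea discussed later in this post),
# the same post drops to the bottom of the feed.
print([p.text for p in rank(feed, harm_penalty=1.0)])
```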

Why Content Moderation Isn't Enough

Removing hate speech is necessary, of course, but on its own it is like mopping the floor of a house with a leaky roof while never patching the hole. Content moderation has inherent limitations that make it only a partial solution to online hate.

First, there is the problem of scale. Users worldwide upload enormous volumes of content every day, and even the best automated review systems cannot process it all without a high rate of false negatives and false positives. Much harmful material therefore slips through, buried in the flood of data, even if it never reaches a wide audience.

Second, automated tools struggle to understand context. Hate speech typically relies on a specific cultural context, a historical event, or knowledge shared within a group. Automated filters cannot easily detect sarcasm, metaphor, or semantic inversion, so they either remove benign content by mistake or leave offending content up. The phrase “go back to your country”, for example, can be racial abuse in one context and innocuous in another, and it is difficult for automated systems to judge the degree of malice.

Third, language changes quickly. Hate speech becomes increasingly covert, using code names, code words, or community-specific slang to evade censorship; once a platform blocks a particular expression, users quickly coin alternatives. This cat-and-mouse cycle keeps content moderation in a permanently reactive stance.
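The context problem can be made concrete with a deliberately naive sketch of keyword-based filtering, written in Python. The blocklist, the example posts, and the “coded” spelling are all invented for illustration; real moderation systems rely on far more sophisticated classifiers, but they exhibit the same two failure modes in kind: false positives on benign mentions and false negatives on coded language.

```python
# A deliberately naive keyword filter, for illustration only.
# The blocklist and example posts are invented; real systems use trained classifiers,
# but the failure modes shown here (false positives and false negatives) persist.

BLOCKLIST = {"go back to your country"}  # phrase treated as always hateful

def naive_filter(post: str) -> bool:
    """Return True if the post would be removed by exact phrase matching."""
    return any(term in post.lower() for term in BLOCKLIST)

posts = [
    # Hateful use of the phrase: correctly flagged.
    "Nobody wants you here, go back to your country",
    # A victim's report quoting the phrase to condemn it: wrongly flagged (false positive).
    "The victim said the attacker shouted 'go back to your country' at her",
    # The same sentiment in coded spelling the filter has never seen: missed (false negative).
    "Nobody wants you here, g0 b4ck where you came from",
]

for p in posts:
    print(naive_filter(p), "-", p)
```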

Woods and Perrin (2019) and others have therefore argued that a “duty of care” model is more effective than reactive moderation: platforms should assess and mitigate risks in advance rather than responding only after harm has occurred. Concrete steps include identifying discussion hotspots where hate speech is likely to flare up, developing preventive guidance and intervention strategies, maintaining rapid-response teams to address threats as they arise, and working with affected communities to design more effective policies. Facebook’s role in the Myanmar crisis is a case study in what not to do: despite having advanced technical tools for monitoring hateful content, the company failed to prevent the violence because it lacked local expertise and was too slow to respond.

In 2017, Myanmar’s military used Facebook to spread hate speech against Rohingya Muslims. UN investigators described the platform as a driver of the digital violence that forced hundreds of thousands of people to flee their homes. The episode exposed the limits of relying solely on technology and automated systems, and the importance of local expertise and human oversight. As the UN Special Rapporteur suggested, Facebook’s failure in Myanmar is ultimately a question about humanity’s collective responsibility in the digital age.

What is to be done? A multi-level solution

Hate speech cannot be eliminated by any single actor. Governments, platforms, users, and the international community must work together to build a multi-level governance regime. At the governmental level, legislation should require platforms to carry out transparent risk assessment and risk management. The European Union’s Digital Services Act (DSA) is a model in this respect: it requires large platforms to be transparent about their content moderation policies, algorithmic reasoning, and complaint handling, with strict penalties for non-compliance, and it places a heavier burden on platforms to show that they have acted responsibly once illegal content is found on their services. Alongside regulation, governments should invest in online literacy education to improve the public’s ability to recognize and respond to hate speech; Singapore’s Cyber Wellness education programme, delivered through school curricula and public outreach, is one example of building media literacy at scale.

At the platform level, companies should redesign their algorithms to reduce the reach of harmful content, for example by lowering the ranking weight of hate speech so that it is less visible and spreads less widely. Twitter’s Community Notes feature, which lets users add context to disputed tweets, has also shown promise in slowing the spread of misinformation. Platforms should likewise employ culturally literate content moderators, particularly for content affecting minority and marginalized communities; Meta, for instance, has established local moderation teams in several countries to understand and address culturally specific hate speech.

Platforms should also establish a clear appeals process so that users whose content has been wrongly removed can contest the decision.

At the user level, individuals can report hateful content, stand in solidarity with victims, and build their own digital literacy. Research has shown that when users actively report hate speech, platforms respond significantly faster and more effectively. Users should also learn to recognize the hallmarks of hate speech and avoid inadvertently spreading or amplifying harmful content. Supporting public interest organizations matters too: groups such as the Center for Countering Digital Hate (CCDH) provide resources and training that equip individuals and organizations to confront online hate.

At the civil society level, organizations should push for platform accountability and provide psychological support and legal assistance to victims. They can also act as a bridge between governments, platforms, and users, facilitating multi-party dialogue and collaboration; the Global Network Initiative (GNI), for example, brings technology companies and civil society groups together to develop principles for internet governance.

What can you do?

You don’t need to be a policymaker to help fight hate online. Don’t retweet hateful content or join discussions of it, not even to debunk it: studies have shown that repeating and disseminating hateful content increases its visibility and impact, even when the intention is to counter it. Instead, report harmful content when you see it, or speak out through other channels. Support public interest initiatives and projects that challenge online hate. Finally, learn and teach others about the dynamics of online harm; understanding the characteristics and patterns of hate speech helps you and those around you recognize and respond to it more effectively.

Conclusion

Hate speech online is not merely “hateful comments”; it is a societal problem with the potential to produce real violence. The speed and scale at which hate speech spreads render conventional responses inadequate: what is needed is not just content moderation but systemic change in governance and greater public awareness. This demands that technological innovation be paired with human concern, commercial interest balanced with social benefit, and domestic action combined with international cooperation. We all have the responsibility, and the ability, to make the digital environment safer and more inclusive. Only through multi-stakeholder collaboration and sustained effort can we effectively combat online hate and protect social harmony and human dignity in the internet age.

References

Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.

Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91–96). Polity.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz & P. Molnar (Eds.), The content and context of hate speech (pp. 37–56). Cambridge University Press.

Woods, L., & Perrin, W. (2019). Online harm reduction – a statutory duty of care and regulator. Carnegie UK Trust.
