Understanding Hate Speech and Online Harms: Challenges and Pathways Forward

Today's social media platforms have become integral to global communication, connecting people across borders. However, they have also become vectors for hate speech and online bullying. A survey by the Anti-Defamation League (ADL) found that 60% of Americans reported being subjected to online hate within a single year, with young people and minority groups (religious, racial, and LGBTQ+) the most frequent targets (Eisenstat, 2023). Addressing this problem requires a solid grasp of its underlying causes and forward-looking digital policy.

Hate speech consists of statements that attack, threaten, or encourage violence towards persons or groups based on attributes such as race, ethnicity, religion, gender, or sexual orientation. Platforms' mechanisms for online interaction can exacerbate the problem. Consider the case of Reddit (Massanari, 2017). Research has shown that the platform's algorithms, rules, and culture can unintentionally promote harmful online interactions. Reddit's design privileges "hot", trending content, which means harmful material can reach enormous audiences quickly. Whenever abusive posts and comments receive upvotes, they are amplified further. This implicit reinforcement encourages perpetrators to continue, because their hate speech keeps earning greater exposure, while targets are often left with no realistic way out of the toxic environment. Twitter is not immune to the same dynamic. Its rapid pace and character limits often make it a playground for misinformation and harmful remarks, and its dependence on hashtags and trending topics lets toxic content travel fast. In short, aspects of these platforms' design and policies foster an ecosystem in which toxic cultures emerge and do real-world harm, poisoning online communities' positive potential.
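
To make this amplification dynamic concrete, the following minimal sketch shows how an upvote-driven "hot" score rewards whatever accumulates votes fastest, regardless of whether the content is benign or abusive. The formula and numbers are a simplified illustration for this essay, not Reddit's actual ranking code.

```python
import math

def hot_score(upvotes: int, downvotes: int, age_hours: float) -> float:
    """Toy 'hot' ranking: net votes boost visibility, age slowly decays it.

    Illustrative formula only, not any platform's real algorithm.
    """
    net = upvotes - downvotes
    magnitude = math.log10(max(abs(net), 1))         # every 10x more votes adds one point
    sign = 1 if net > 0 else (-1 if net < 0 else 0)
    return sign * magnitude - age_hours / 12          # older posts gradually sink

# A post that attracts a pile-on of votes outranks a calmer, newer one,
# whether its content is benign or abusive.
print(hot_score(upvotes=4500, downvotes=300, age_hours=6))   # ~3.12
print(hot_score(upvotes=120, downvotes=10, age_hours=1))     # ~1.96
```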

Platform algorithms can fuel the fire. They are engineered to maximize engagement, so content that provokes anger or strong emotion tends to be rewarded with reach. For the people on the receiving end, this translates into a crushing quality-of-life penalty: being constantly confronted with hate and stereotypes leaves targets feeling attacked at every turn. As Terry Flew explains in Regulating Platforms, hate speech can marginalize and exclude already vulnerable groups (Flew, 2021). Worse, the spread of hate speech can trigger offline violence, driving social instability and causing tremendous damage, such as attacks on specific groups inspired by online hate.

Consider a concrete case: Gamergate, which Adrienne Massanari maps out in detail (Massanari, 2017). In August 2014, an ex-boyfriend of indie game developer Zoe Quinn aired his grievances in a post on the SomethingAwful forums and later on a blog. The post included personal information about their relationship and the false claim that Quinn's game Depression Quest owed its success to her close connection with a games journalist who had reviewed it positively. The post took off and turned into a witch hunt against women in the gaming industry. Zoe Quinn and Anita Sarkeesian (creator of the Feminist Frequency series on gender in video games) were central targets. Because of their gender and their work in the gaming industry, these women were subjected to brutal, demoralizing harassment: doxxing campaigns published their sensitive information, including home addresses and phone numbers, for public consumption, and rape threats filled their inboxes. Such attacks are not only distressing; they also make women reluctant to join the gaming industry. Nor are they isolated occurrences. They highlight an urgent problem with the digital ecosystem and point to the failure of some platforms' governance structures to keep their users safe, as Massanari observes (Massanari, 2017).

As another example, Carlson and Frazer's Social Media Mob: Being Indigenous Online investigates the lives of Indigenous Australians online. They point out that the internet has both beneficial and detrimental impacts on Indigenous peoples. On the one hand, it is a powerful tool for community connection and maintaining tradition; on the other, racism and online vitriol run rampant (Carlson & Frazer, 2018). Indigenous users routinely encounter hurtful stereotypes and harassment and are perpetually inspected and questioned about their identity. More than half of the Aboriginal users surveyed said they would avoid posting certain content for fear of discrimination, and over 88% of study participants reported encountering racist content aimed specifically at Indigenous people; some had even received threats. As Carlson and Frazer point out, online spaces can both reflect and reinforce offline injustices, including systemic racism and the lingering impacts of colonialism (Carlson & Frazer, 2018). Because platforms' content-removal rules apply a uniform standard that ignores the culturally specific challenges Indigenous communities face, they often fail to address culturally specific harm, making it harder for Indigenous people to feel safe both on- and offline.

Dealing with online hate speech therefore demands a rich understanding of language, culture, and technology, and several challenges stand in the way. First, identifying hate speech is highly context-sensitive: what is offensive in one place may be completely harmless somewhere else. Regional idioms, historical baggage, and political tensions shape how messages are perceived. Analyses of the Asia-Pacific region have found that Facebook's global content-moderation policies are too blind to local conditions, producing unevenly applied standards: some places end up under-regulated while others are over-policed. Clearly, a one-size-fits-all approach no longer works. What is needed instead is locally grounded governance, that is, policies rooted in the distinct cultural and political patterns of a region.

A second significant challenge comes from the overwhelming demands placed on human moderators and the limitations of existing AI tools. The sheer volume of online content makes policing hate speech difficult: despite their best efforts (often working long hours), human moderators cannot keep pace with the flow of material added to the web each day. They must also view disturbing, frequently traumatizing content over and over, and many suffer anxiety, depression, or PTSD as a result. Platforms must provide better labour protections for these workers, including appropriate mental health services, reasonable hours, and fair wages. On the technological side, many platforms rely on automated filters to identify hate speech, but such filters break down in complex contexts. AI systems struggle with comments that are sarcastic, oblique, or use "code words" to dodge detection. They are also unreliable, which can lead to over-policing of harmless speech or to genuinely harmful content slipping through. We therefore need context-aware AI systems and stronger human-machine collaboration to improve the effectiveness of content governance.
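
To make this limitation concrete, here is a minimal sketch of a naive keyword filter. The blocklist and example comments are invented for illustration; real moderation systems are far more sophisticated, but the failure modes (coded language and sarcasm slip through, condemning speech gets flagged) are the ones described above.

```python
# Minimal sketch of a naive keyword-based hate speech filter.
# Blocklist and example comments are invented for illustration only.

BLOCKLIST = {"vermin", "subhuman"}  # hypothetical dehumanising terms

def naive_filter(comment: str) -> bool:
    """Return True if the comment should be flagged."""
    words = comment.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

examples = [
    "Those people are vermin.",                   # flagged (correct)
    "Calling other people vermin is unacceptable.",  # flagged (false positive: condemning usage)
    "Time to take out the trash in this city.",   # missed (coded language, no listed keyword)
    "Oh sure, THEY deserve rights. Totally.",     # missed (sarcasm carries the hostility)
]

for text in examples:
    print(f"{naive_filter(text)!s:>5}  {text}")
```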

Finally, and most contentiously, there is the balance between over-moderation, which chokes off legitimate speech, and under-moderation, which leaves users undefended against hate speech. As societal values shift and political cycles turn, that balance becomes ever harder to strike. Policymakers and platforms have to build rules that are principled yet flexible.

The fight against online hate speech must employ multiple measures. First, social media companies should work with local communities, grassroots groups, and regional experts to develop more culturally resonant and actionable guidelines. Because the acceptability and sensitivity of speech vary across regions, platforms should take local cultural background and context fully into account if they are to regulate harmful speech effectively while respecting diverse views. Such cooperation can reduce misunderstandings about language and culture, the conflicts those misunderstandings cause, and the regulatory failures that stem from cultural misjudgment. Moreover, governance informed by local knowledge builds users' trust in platform rules, encouraging them to follow social norms voluntarily and helping create a better environment for online interaction.

Second, platforms need to guarantee decent working conditions and sufficient psychological support for content moderators, and to optimize the use of artificial intelligence in content review. Platforms should adopt comprehensive occupational health protections for moderators, such as reasonable working hours, regular rest schedules, professional psychological support, and a safe and friendly work environment. This not only protects staff physical and mental well-being but also improves the accuracy and speed of their judgments, making content regulation sustainable. Meanwhile, AI systems must become smarter. Platforms have long pursued scale, but what is really needed is a more intelligent system. AI models can be trained on heterogeneous, high-quality datasets that cover more diverse expressions and the complex hate speech of different languages and cultures. AI systems will also require a better grasp of context and the capacity to detect hidden language, including sarcasm, metaphor, and so-called code words. Platforms can further refine their review systems through, for example, cross-platform integration and user feedback mechanisms. Critically, AI should not replace human decision-making but serve as a supportive tool: it should be subject to human scrutiny when dealing with ambiguous or controversial content, enabling cooperation between humans and machines. Only by combining human understanding with machine precision can we build spaces that are just and safe.
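
One way to picture the human-machine collaboration described above is a confidence-threshold routing rule: the model handles clear-cut cases and escalates ambiguous ones to human reviewers. The sketch below is a simplified illustration; the thresholds, the placeholder score function, and the review queue are invented for this example, not any platform's actual pipeline.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative thresholds: confident removals and confident approvals are
# automated; everything in between goes to a human reviewer.
REMOVE_THRESHOLD = 0.95
APPROVE_THRESHOLD = 0.10

@dataclass
class ReviewQueue:
    items: List[str] = field(default_factory=list)

    def escalate(self, post: str) -> str:
        self.items.append(post)
        return "needs_human_review"

def model_score(post: str) -> float:
    """Placeholder for a trained classifier's probability that a post is hate speech."""
    return 0.5  # a real system would return a learned probability

def moderate(post: str, queue: ReviewQueue) -> str:
    score = model_score(post)
    if score >= REMOVE_THRESHOLD:
        return "auto_remove"        # high-confidence violation
    if score <= APPROVE_THRESHOLD:
        return "auto_approve"       # high-confidence benign content
    return queue.escalate(post)     # ambiguous: route to a human

queue = ReviewQueue()
print(moderate("an ambiguous borderline comment", queue))  # -> needs_human_review
```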

Third, refine recommendation algorithms to prioritize users' wellbeing. Platforms should periodically audit their recommendation systems for signals that they may be inadvertently amplifying harmful content, such as sensational, polarizing, or degrading posts, and make changes when negative patterns are detected so as to reduce the visibility of such posts and foreground more equitable and benevolent dialogue. Furthermore, platforms should optimize for users' mental health and emotional safety as much as for engagement. A few platforms already experiment with interventions such as slowing the spread of content, down-ranking harmful comments, or prompting users to reflect before posting inflammatory content. Implementing such interventions systematically could play a critical role in alleviating online harm.
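
As a rough illustration of the down-ranking idea, the sketch below shows how a feed-ranking score might be damped in proportion to a post's estimated harmfulness instead of rewarding raw engagement alone. The weighting, the harm_probability field, and the example numbers are all invented for this illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement: int          # e.g. combined likes, shares, replies
    harm_probability: float  # classifier estimate in [0, 1] (hypothetical field)

# Illustrative weight: how strongly estimated harm suppresses reach.
HARM_PENALTY = 5.0

def ranking_score(post: Post) -> float:
    """Engagement-driven score, damped as estimated harmfulness rises."""
    base = math.log1p(post.engagement)                     # diminishing returns on engagement
    penalty = 1.0 / (1.0 + HARM_PENALTY * post.harm_probability)
    return base * penalty

feed = [
    Post("a", engagement=5000, harm_probability=0.8),   # viral but likely harmful
    Post("b", engagement=800, harm_probability=0.05),   # modest and benign
]
# The benign post "b" now outranks the viral but harmful post "a".
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 2))
```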

Fourth, involve people more deeply in the regulatory process. Users play an important role in cyberspace, and they can contribute to creating healthy online spaces. Governments and civil society organizations should expand digital literacy efforts and equip users to identify and respond to hate speech. This can turn passive audiences into active community stewards and, in turn, foster self-regulating, healthy online communities. Reporting mechanisms should also be strengthened so that users feel confident raising problems; they are more likely to contribute when the platform makes them feel their voice is valued. Users should likewise be given a greater role in platform governance through participatory review, such as advisory panels and feedback boards. Simply throwing moderators and AI systems at speech moderation, without a range of backgrounds and experience among those involved, does not produce a democratic process for deciding what can be tolerated. Involving more users brings more varied voices into decisions about what may appear in a given space. Engaging the groups most often in the crosshairs of hate speech (such as racial or religious minorities, LGBTQ+ users, and people with disabilities) allows lived experience, rather than distant rules, to inform moderation.

Fifth, foster multi-stakeholder cooperation to achieve harmonized, rights-based regulation. No single government or platform can address this challenge unilaterally; a model of collaborative governance must be adopted instead. Policymakers should join forces with platforms, researchers, NGOs, and vulnerable groups to produce fair, enforceable regulation. Regulation may, for instance, require platforms to publish periodic reports on how they handle hateful content, including what fraction of reported content is acted upon, how quickly action is taken, and what happens when decisions are disputed. Neutral oversight mechanisms are also needed, such as commissions comprising lawyers, digital rights groups, and community leaders, to keep the process unbiased. These bodies could also arbitrate disputes between platforms and individuals and between free speech and takedowns. International collaboration is equally important: digital platforms are inherently global, and unilateral regulation often produces jurisdictional inconsistency and enforcement gaps. Commitments such as the Christchurch Call and the European Union's Digital Services Act provide a starting point for multilateral collaboration on platform accountability, and suggest that it will be critical to converge on platforms' responsibilities while preserving the diversity of cultures and norms within the digital sphere.

Hate speech and online harm are complicated, multidimensional problems that require a holistic approach from multiple angles. Tackling them calls not only for the collaborative efforts of platforms, governments, and regulators, but also for the active involvement of every user. Only through all these efforts can we create a safer, more inclusive, and healthier online environment free of hate and harm.


References

Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online
Eisenstat, Y. (2023, June 28). Online hate and harassment continues to rise. Axios. https://www.axios.com/2023/06/28/online-hate-harassment-rise-adl
Flew, T. (2021). Hate speech and online abuse. In Regulating Platforms (pp. 91-96). Cambridge: Polity.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.
