Curbing the spread of hate speech with strong, inclusive online communities

Introduction:

In the digital age, the proliferation of online platforms has revolutionised the way we communicate, connect and share information. Yet beyond the benefits of this interconnectedness lies a darker side, characterised by the spread of hate speech and online harms. This post explores the multifaceted nature of hate speech and online harm, examining its causes, effects and potential solutions. We will delve into the underlying factors that contribute to the proliferation of online hate speech, including social bias, technological availability and political polarisation. We will also investigate the far-reaching consequences of online harm, ranging from psychological distress and social fragmentation to radicalisation and violence, and analyse current efforts to mitigate these negative effects. The aim is to shed light on the complex interplay between technology, society and human behaviour in the digital age, to deepen our understanding of the challenges posed by hate speech and online harms, and to propose strategies for a safer, more inclusive online environment.

Hate speech, defined as any form of expression that promotes hatred, discrimination or violence against an individual or group of individuals on the basis of characteristics such as race, ethnicity, religion, gender or sexual orientation, has become increasingly prevalent in cyberspace (Roberts, 2019). Together with countless other online harms, including cyberbullying, harassment, misinformation and extremist propaganda, these phenomena pose significant challenges to the safety and inclusiveness of online communities.

(The International Day for Countering Hate Speech, marked for the first time; Todd, 2022)

What is Hate Speech?

Hate speech is a form of deep-rooted prejudice and stereotyping in society. It is rooted in historic injustice and systemic inequality, targeting marginalised and vulnerable groups, imposing stereotypes on them and stigmatising them (Flew, 2021). It takes various forms, from explicit threats and derogatory language to subtle microaggressions and coded language.

As an example, consider Adam Goodes, a former professional Australian rules footballer who played for the Sydney Swans in the Australian Football League (AFL). A dual Brownlow Medallist, the award given each season to the AFL's best and fairest player, he is widely regarded as one of the best players of all time. Even such an outstanding athlete was subjected to repeated racist verbal attacks because of his Adnyamathanha and Narungga heritage, with extremists calling him a "chimpanzee" at matches. Opponents mocked Goodes on Twitter, a Facebook page was created solely to discredit him, and his Wikipedia page was vandalised, with his picture replaced by a picture of a chimpanzee. The harassment campaign on social media forced him to take a break from competition; he quietly retired in September 2015 and later, in June 2016, deleted his Twitter account.

(Goodes during a lap of honour after winning the 2012 AFL Grand Final)

I think it was an atrocity in which an unquestioning majority participated, and a classic case of internet hate speech. Sport in Australia is an arena of national pride and cultural politics (Hallinan & Judd, 2009), and Goodes' experience highlights the prevalence of racism in sport.

Causes of Hate Speech 

I believe that the root cause of hate speech lies in unequal economic development, which underlies most of the contradictions and conflicts produced by contemporary social change. Most hate speech on online platforms today concerns racial discrimination, religious conflict, gender identity and geographic or ethnic tensions; there are persistent calls for organised hatred against racial, religious and gender minorities on Facebook (Avaaz, 2019; Murphy, 2020; Vilk et al., 2021). Yet class conflict and the so-called disparity between rich and poor are rarely mentioned. In my view, the main problems facing social, regional and national development are essentially class tensions arising from disparities in living standards caused by uneven economic development, and these tensions are deliberately downplayed in online hate speech, which instead targets the marginalised in society. I believe that issues of race, religion and even gender can be discussed and understood on an equal footing, and that before the internet these issues did not cause conflict on today's scale. The state-specificity of racism and the media-specificity of platforms and their cultural values (Rogers, 2013) suggest that platforms act as tools that amplify and create racist discourse. Inspired by Bogost and Montfort's (2009) approach of 'platform studies', I would argue that capitalist development has allowed the bourgeoisie to use online platforms to deflect contradictions and perpetuate struggles within the working class in order to maintain bourgeois rule.

Beyond this deliberate bourgeois promotion of hate speech, inequalities in economic development have bred hate speech in other ways as well.

Socio-economic disparities can create a digital divide. Economically disadvantaged people may lack the resources or knowledge to navigate online spaces safely, making them more vulnerable to hate speech and online radicalisation. Early research on race and the Internet pointed to unequal levels of access as a source of racial inequality online (Hoffman & Novak, 1998). I believe that harmful speech such as racial discrimination is not, like breathing, part of natural human instinct; much of it stems from the prevailing climate of opinion. Platforms contribute to the growth of hate speech through their policies, algorithms and corporate decisions, and their infrastructure responds largely to their economic interests (Helmond, 2015). Platform algorithms track user activity, such as the pages people like and the posts they engage with; Facebook even created a category called 'racial affinity' that marketers can select or exclude when selling products to users, which arguably makes the platform system inherently racist. The recommendation mechanisms of Internet platforms can also easily create information cocoons: without external interference, the information people receive through platforms only becomes more homogeneous and their existing beliefs are constantly reinforced, an important breeding ground for prejudice and discrimination.

(Washington Post illustration; Amanda Andrade-Rhoades for The Washington Post; Manuel Balce Ceneta/AP; Facebook screenshots; iStock)

Online bullying is another consequence. Individuals from marginalised backgrounds are more vulnerable to online harassment and bullying. The comment sections of posts showing off luxury homes and cars are full of money worship, hatred of the rich and discrimination against the poor (Baym & Shah, 2011). Much of the harm done to others can therefore be unintentional, and it is the inequality of economic development that causes it. Most people cannot empathise with those outside their own class or wealth bracket, and this is amplified through the internet and platforms, resulting in discrimination and hate speech.

(What Is The 'Flaunt Your Wealth' Challenge? | Facebook, 2018)

Impacts of Hate Speech

The impact of hate speech extends far beyond the digital realm, permeating the real world and perpetuating violence, discrimination and social exclusion.

Psychological damage: Hate speech can cause severe psychological distress to victims, leading to increased stress, anxiety, depression and isolation. Prolonged discrimination can produce low self-esteem, loss of hope and a pervasive sense of insecurity. Adam Goodes' late-career decline in performance followed a prolonged period of hate speech, which damaged his career and prevented him from enjoying and using social media like anyone else.

Normalisation of discrimination: When hate speech becomes routine in public discourse, discrimination and bigotry become normalised within society. If everyone comes to accept discriminatory language and behaviour as normal, society loses its moral compass and its sense of right and wrong.

Social divisions and antagonisms: Consider, for example, the increasingly prominent issue of gender identity. I believe such behaviour deepens social divisions and polarises people, creating an "us versus them" mentality and antagonism. Movements that people develop to defend their own interests can become tools for a few to pursue their own privileges. This divisiveness has led to increased hostility, mistrust and conflict between different groups, undermining social cohesion and unity and hindering efforts to build an inclusive and harmonious society.

Extremist terrorism: Hate speech can lead to violence and crime in the real world. Online hate speech can radicalise individuals, incite violence and foster extremist ideologies. The anonymity of online platforms can produce large numbers of blind followers, and extremist organisations can easily recruit violent extremists by spreading distorted values and false ambitions. Their extreme actions can result in physical violence and even destabilise regional security, with extremely negative consequences.

Combating hate speech and online harm

Don’t be a victim of hate speech! It is important to focus on adolescents, who are in an emotionally sensitive period and whose behaviour and mental health are highly susceptible to outside speech. Equipping them with the ability to critically assess online content and recognise hate speech not only reduces the likelihood of their engaging in hate speech, but, more importantly, helps them avoid its negative effects. For all of us, the first priority in the face of hate speech is to avoid becoming a victim and to protect our physical and mental health. Even in a relaxed communication environment such as an online platform, we should always attend to our own protection and be responsible for ourselves, our families and our friends.

Improve relevant laws: Governments can enact and enforce laws that prohibit hate speech and cyber harassment while upholding freedom of expression. These laws should be clear, consistent and adapted to changing forms of online harm, and should include provisions holding both individuals and platforms responsible for spreading hate speech. While platforms continue to present a neutral discourse (Facebook, for example, sees itself as a technology company), they are in fact 'interfering' with public discourse (Gillespie, 2015). As social media platforms serve as the medium for most online socialisation and creativity (Van Dijck, 2013), and as tools for both pro- and anti-social uses, tighter control of platforms is an important part of reducing hate speech and cyber harm. I believe governments should formulate stricter laws and regulations for platforms and their users; even if a complete real-name system on the Internet is not achievable, abusers should be duly sanctioned.

Promoting equal cultural exchanges

The role of Internet platforms should be to allow most people to leave their real-life identities behind and encounter and learn about more diverse cultures in an equal and free manner. An obsession with real identity online is not conducive to the development of cultural inclusiveness on platforms. Platforms should diminish differences between cultural identities and enhance equal exchange between cultures; the creation of inclusive online communities can help curb the spread of hate speech and promote mutual understanding.

Conclusion

Strong and inclusive online communities can effectively curb the spread of hate speech by establishing clear guidelines against such behaviour, actively moderating content to swiftly remove hate speech, encouraging user reporting, providing educational resources to promote digital literacy and empathy, amplifying positive contributions, collaborating with platforms to develop effective moderation tools, fostering community engagement and belonging, and advocating for stronger legal frameworks against hate speech. Through these measures, online communities can create a culture of mutual respect, understanding and accountability, ultimately contributing to a safer and more positive online environment for all users.

References

Flew, T. (2022). Regulating platforms. Polity.

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Moore, M., & Tambini, D. (2021). Regulating big tech: Policy Responses to Digital Dominance. Oxford University Press.

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Routledge. https://doi.org/10.25910/j09v-sq57
