Are hate speech and online harm the worst of it? What’s scarier is what lies behind them!

Have you ever suffered hate speech or online harm, or witnessed it happening to someone else? If you have, you know it can feel like a disaster: like a swarm of locusts sweeping over a wheat field, it leaves the victims of the attack with real trauma and loss. On the surface it looks as though only the public is attacking the victim, but behind the public, who is the hidden boss really steering the wind?

Hate speech and online harm are all over the social media platforms we use, and the scary thing is that sometimes we don’t even realise that certain extreme speech and behaviour qualify as such, because our own thoughts and judgements have been wrapped up in that speech and carried away by it. The fermentation of hate speech exerts a strong influence on all the big global platforms, and the platforms themselves can even be the culprits that lead it. A few examples will illustrate this. In April 2015, Facebook blocked a trailer for an episode of an Australian comedy show that humorously satirised white prejudice against Aboriginal Australians (Matamoros-Fernández 2017, pp. 930–931). Facebook claimed it blocked the trailer because it contained a photo of two topless Aboriginal women, which violated Facebook’s nudity policy. The Indigenous activist and writer Celeste Liddle denounced Facebook’s rules on her own account and reposted the trailer together with the photo, whereupon she received a flood of hate speech from numerous users; in the end she was temporarily banned by Facebook and her post was deleted. Facebook never gave a clear or reasonable explanation for this decision. The episode was only intended to show the public the not entirely fair state of affairs surrounding the status and rights of Aboriginal people in Australia, yet this content was met with the platform’s opposition and with hate speech from part of the population, leaving Aboriginal Australians with even fewer opportunities to be seen by the public.

In line with today’s topic, there have also been two large incidents of hate speech on Reddit involving the insult and degradation of women (Massanari 2017, p. 330). Both unfolded in August 2014. The first concerned a female game developer, Zoe Quinn, whose ex-boyfriend, Eron Gjoni, publicised an unpleasant account of their relationship (pp. 334–335), which was then spread on the anonymous imageboard 4chan. Reddit is a social media platform that places relatively few constraints on its users: it allows anonymous registration and bans only underage pornography, the distribution of spam, disruption of the site’s proper functioning, and vote rigging. Beyond this, Reddit lets users freely create their own communities of interest, known as subreddits, where people with shared interests can gather and discuss them. Reddit is full of people who are fanatical about technology, computers, and gaming, which has produced a very strong geek culture (p. 332). The men in these communities have developed a consensus that excludes women, and they view women who enter their circle as unwelcome intruders. In this cultural environment, the revelations about Zoe Quinn’s relationship triggered an outcry from male Reddit users, who began sending Quinn death threats and rape threats and vehemently opposed female game developers; Quinn, unfortunately, became the centrepiece for these male geek users’ venting of frustration and anger (p. 334). The incident even grew into a sensational campaign to delegitimise women in the gaming community.

But it was far from over: this ridiculous farce spilled over onto Twitter and other big platforms, evolving into something even more vicious and heinous. The actor and right-wing conservative Adam Baldwin was among the first to create the hashtag “GamerGate” on social media and to support the movement. Although his stated intent was to condemn Quinn for her alleged inappropriate behaviour involving games journalism, his support and the hashtag he created were taken up by the movement’s supporters to go on a rampage against female game developers, feminists, and the movement’s opponents, attacking women without any bottom line under the guise of upholding the ethical standards of games journalism. Even after 4chan’s moderators banned GamerGate discussion from their forums, the deranged attackers simply moved to 8chan to continue their insults, regarding the ban as a betrayal of their “culture of justice” (p. 335).

Figure 1: Comments about GamerGate

Another incident, The Fappening, also took place in late August 2014, just as the GG (GamerGate) incident was continuing to fester. A large number of nude photos of female celebrities, mostly stolen by hackers through iCloud, were published on 4chan (p. 335). Although the photos were removed from 4chan, many users had already saved and reposted them elsewhere. Reddit’s push mechanism is determined by the number of likes: the more likes a post receives, the more exposure it and related content get, and the more users it is pushed to. Because Reddit’s official moderators did not take timely measures to stop the photos from circulating, their spread was never effectively controlled. What’s even more hateful is that most Reddit users expressed delight and support for this content, caring only about which female celebrity’s nude photos would be exposed next; only a very small number voiced ethical concerns. These users hide behind virtual platforms, fearlessly showing ugly faces that know no moral boundaries.

Every time hate speech and online harm have festered, the causes were never just the malice of a section of the public; almost every component of the social media platform contributed. This includes shortcomings in platform governance, omissions and bias in the platform’s official automated review mechanisms, reviewers’ favouritism towards one side or another, and a review process that is invisible to users. Together, these factors steer the public opinion around an entire incident. Have you ever had content you posted on a social media platform taken down or outright deleted, sometimes without the platform stating an exact reason? This happens because platforms keep their moderation rules and processes hidden, so users never know what review their post actually went through (Suzor, 2019). That opacity breeds potential bias and injustice: users can only be unilaterally sanctioned by the platform, and if a user asks who moderated their post, the platform will not reveal the source. This demonstrates the platform’s dictatorship, and the inequality of the power structure between platform and user.

Moreover, because the review rules and processes are hidden, a platform that wants to suppress content unfavourable to the side it favours can block that content under a pretext, even when the real reason is that the content touches the interests of the favoured side, offering only an explanation too thin to convince the public. To illustrate with the example from the beginning of this article: the underlying reason Facebook blocked the trailer for that Australian comedy show was that the video would have opened the public’s eyes to the injustices experienced by Aboriginal Australians and to the power inequality between them and white Australians. Facebook is biased in favour of white people, the show threatened white interests, and so the platform took it down; but the reason Facebook gave was that the programme featured two topless women in violation of its nudity policy. This is platformed racism (Matamoros-Fernández 2017, pp. 930–931).

Furthermore, if platforms do not intervene promptly, their push mechanisms can exacerbate the festering of hate speech and online harm. Take Reddit and Facebook as examples: both rely on a likes mechanism, in which the more likes a post receives, the more exposure it gains within the platform, and the platform pushes that post and related content to more users (Massanari 2017, p. 334). Once the likes on posts containing hate speech climb, more and more hate speech is pushed out to users, and public sentiment is easily steered by radicalised speech; when radical statements draw visible support, the public is especially prone to the bandwagon effect (p. 337). Couple this with deliberate favouritism on the part of the platforms, such as declining to moderate this hate speech promptly because it doesn’t touch their core interests, and the overall winds of public opinion will be completely steered by these radical statements.
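The feedback loop described above can be sketched in a few lines of Python. This is a toy model built on my own assumptions (a feed ranked purely by like counts, with new likes earned in proportion to exposure), not any platform’s actual algorithm, but it shows how a post with even a small early lead in likes comes to dominate the feed:

```python
# Toy sketch of a likes-driven feed (an assumption, not real platform code):
# visibility earns likes, and likes earn more visibility.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int

def build_feed(posts, slots):
    """Fill the limited feed slots with the most-liked posts."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)[:slots]

def simulate(posts, rounds, slots):
    """Each round, posts shown in the feed gain likes in proportion to
    their exposure, amplifying whatever is already popular."""
    for _ in range(rounds):
        for rank, post in enumerate(build_feed(posts, slots)):
            post.likes += slots - rank  # higher rank, more new likes
    return build_feed(posts, slots)

posts = [Post("measured take", 10), Post("outrage bait", 12), Post("cat photo", 11)]
top = simulate(posts, rounds=5, slots=2)
# The post that started with a small lead ends up dominating the feed,
# while the post that never made the feed gains nothing.
```

Nothing here judges the content of a post: the loop rewards whatever already has likes, which is exactly why radicalised posts that attract early support keep getting pushed.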

Under this premise, victims of hate speech and online harm who are not on the platform’s favoured side are not only attacked by hostile users but also exposed to potential harm from the platform itself. What’s even more frightening is that harmful speech isn’t merely speech that offends: like poison, it can immediately and lastingly cause serious harm, and its psychological impact can be even greater than that of physical assault (Sinpeng et al., 2021, p. 6).

In addition, the information cocoon narrows and targets the information users receive, making it difficult for them to recognise other points of view. “Information cocoon” refers to the push mode of Internet platforms based on a user’s personal preferences: the platform recommends more of whatever content the user has liked most (Liu & Zhou, 2022). Over time, users receive ever more recommendations of their favourite content while other content is no longer pushed to them, forming a cocoon that wraps each user in their own narrow world. This is one reason users are so easily led by radical statements: once they support radical speech, the platform pushes them more of it, their accounts fill up with such negative information, and they lose their objective judgement.
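How a cocoon forms can also be sketched as a toy simulation. The model below is my own simplification (every like on a topic raises that topic’s recommendation weight, and nothing ever lowers it), not the recommender of any real platform, but it shows the narrowing effect: a user who reliably likes one topic ends up with a recommendation pool dominated by it.

```python
# Toy sketch of preference-reinforcing recommendation (an assumed model
# of how an "information cocoon" forms, not real platform code).
import random

def recommend(weights, k, rng):
    """Sample k items for the feed, weighted by accumulated preferences."""
    topics = list(weights)
    return rng.choices(topics, weights=[weights[t] for t in topics], k=k)

def browse(weights, sessions, k, liked_topic, rng):
    """The user reliably likes one topic; each like boosts its weight,
    so the platform keeps showing more of it."""
    for _ in range(sessions):
        for topic in recommend(weights, k, rng):
            if topic == liked_topic:
                weights[topic] += 1  # reinforcement: a like buys exposure
    return weights

rng = random.Random(0)  # fixed seed so the toy run is repeatable
weights = {"radical": 1, "news": 1, "sports": 1, "science": 1}
browse(weights, sessions=50, k=5, liked_topic="radical", rng=rng)
share = weights["radical"] / sum(weights.values())
# After enough sessions, the liked topic crowds out everything else,
# even though all four topics started with equal weight.
```

Note that the other topics never lose weight in this sketch; they are simply outgrown, which is the quieter mechanism of the cocoon: nothing is censored, it just stops being shown.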

The users we can see making radical remarks are only the surface; many of them begin attacking only after being swayed by the direction of public opinion. What really controls the development and fermentation of these events is the silent platform behind them: opaque moderation policies, algorithms and recommendation mechanisms that trap and brainwash users inside information cocoons, and the platform’s official bias towards and maintenance of certain positions. The platforms’ silent control over all of this not only ignores many victims but also deepens the inequality in the power structure between platforms and their users. Users seem free to choose the content that interests them, but they are actually under the platform’s invisible control. In the face of this unfairness, I hope the public will unite against the platforms’ unfair practices, try to retain clear and objective judgement amid overwhelming radical comments, and not easily become someone who is unkind to others. Finally, I hope that no one will be attacked by hate speech and online harm any longer, and that the Internet environment can become truly peaceful.

References:

Broderick, A. (2014, December 18). #Gamergate: Cultural change begins in classroom. Indiana University Bloomington. https://mediaschool.indiana.edu/news-events/news/item.html?n=gamergate-cultural-change-begins-in-classroom

Liu, W., & Zhou, W. (2022). Research on Solving Path of Negative Effect of “Information Cocoon Room” in Emergency. Discrete Dynamics in Nature and Society, 2022. http://dx.doi.org/10.1155/2022/1326579

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3

Suzor, N. P. (2019). Who makes the rules? In Lawless: The secret rules that govern our digital lives (pp. 10–24). Cambridge University Press.
