We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
This utopian claim was published in 1996 in A Declaration of the Independence of Cyberspace, which imagined a web-based world free of discrimination based on race, gender, class, religion, or sexual orientation, independent of government, where everyone is treated equally: a utopia untainted by any reality. That ideal Internet, however, does not exist; like any utopia, it survives only in fantasy. Online platforms have turned out to be another arena of the real world, where all kinds of discrimination and harassment flourish, with consequences no less serious than those offline. Ample evidence shows that the internet is not a utopia that can be left unregulated. So what do these online harms include, and what efforts should we make to minimize them?
What is included in online harms?
1. Hate speech
We often see offensive or negative speech on platforms and social media, but not all of it can be regarded as “hate speech”. Hate speech has several defining features:
- It targets marginalized or minority groups: people who use hate speech attack others because of the labels they carry, not their personality or moral character. For example, cisgender people may discriminate against someone simply for being transgender, or heterosexuals may hold prejudices against someone for being gay. Hate speech is thus typically an attack by mainstream groups on vulnerable ones.
- It disempowers targets by putting them in a position of inferiority: hate speech makes targets feel less important and afraid to stand up for themselves because they have no say in the matter.
- It has a long-term negative impact on the well-being of the person being discriminated against.
- It takes place in public, which means that other people have access to the content.
2. Online harassment
It includes sexual harassment, offensive name-calling, cyberstalking, doxing, trolling, image-based abuse (also known as “revenge porn”), and physical threats on social media, and it is often bound up with gender discrimination and gender inequality.
These definitions might not sound very concrete, but some typical cases can help us identify online harms more easily and see how closely they relate to our lives.
① Harassment in online gaming
Bastion, a marketing and communications agency, surveyed Australian gamers about harassment experienced in online gaming, with a particular focus on female and LGBTQIA+ identifying gamers. Analyzing the frequency, type, and impact of the harassment respondents reported, they found that 92% of LGBTQIA+ gamers, 83% of female gamers, and 72% of male gamers had experienced or observed harassment in online gaming. The types of harassment are diverse, including but not limited to offensive behavior or language, gender-based harassment or discrimination, racial harassment or discrimination, and sexual harassment or discrimination. Such harassment not only drains the enjoyment from the game but also harms the players affected. Worse, most LGBTQIA+ and female gamers tend to tolerate the harassment rather than stand up to abusive gamers, fearing it will escalate. The impact is often negative and lasting: some want to leave the game altogether, while others change how they play, for example by using voice-altering software, muting their microphone, changing their user profile, or choosing a male character to avoid being harassed.
Throughout all this, the platforms themselves have been invisible: gaming platforms are not taking strong measures against abusive gamers to protect the rights of harassed players.
Figure 1: The rate at which female gamers have experienced or observed harassment in online gaming (source: Bastion Insights Gamer Research, n=601 Australian gamers aged 16+, conducted in February 2023).
Figure 2: The negative impacts of harassment (source: Bastion Insights Gamer Research, n=601 Australian gamers aged 16+, conducted in February 2023).
Maybelline, the well-known makeup brand, launched a campaign in 2023 to expose this toxic culture targeting female and LGBTQIA+ gamers. They invited two Australian male gamers and content creators to make their profiles more feminine and to use voice-altering software in game so that they sounded female. The result was unsurprising: within less than two hours of play, they were flooded with abusive comments. If two male gamers met this harassment merely by “being a woman” for a short time, the everyday experience of real female and LGBTQIA+ gamers is easy to imagine. The experiment also shows that many female or LGBTQIA+ gamers are harassed simply because of their gender or sexual orientation. (For details, watch Maybelline’s video: Maybelline New York Through Their Eyes.)
Respondents also felt that harassment in online gaming has become more and more common:
I think people are gradually forgetting there are real people on the other end of online interactions.
People who don’t act like themselves online are more likely to act in an offensive manner. I don’t see this changing, and I think online gaming will only become more popular.
It is clear that loosely regulated online platforms have become a place where people unleash the worst of human nature, rather than the ideal world envisioned thirty years ago. Facing virtual personas on a network, many netizens forget there are real people behind them; and because the internet feels freer of legal constraints than the offline world, the collision of virtual and real blurs the boundaries of the law. As a result, many people post offensive comments without weighing the consequences, while their victims may drown in pain for a long time.
An even worse case of online harm occurred on Chinese social media.
② Online harms on social media
Figure 3: Linghua Zheng posted a picture on Xiaohongshu, a Chinese social media platform, titled “Grandpa read my postgraduate offer on his sickbed”.
In February 2023, a young woman named Linghua Zheng took her own life after months of struggling with clinical depression caused by cyberbullying. The source of this cyber violence was a post that could not have been more ordinary (Figure 3). As the picture shows, Linghua Zheng was sharing her happiness about receiving a postgraduate offer from East China Normal University with her grandfather, who was lying in a hospital bed. To celebrate, she had dyed her hair pink. That pink hair became the beginning of her nightmare.
Innumerable insulting and slut-shaming comments appeared under her post: some compared her hair color to that of a “nightclub girl” or prostitute, some questioned her fitness to be a teacher because of her dyed hair, and some even maliciously misread the photo of her and her grandfather as “an old man and his young wife”.
Linghua Zheng tried to fight back against the abusive comments and to face the problem with a positive attitude, but the cyberbullying did not end with her clarifications and efforts. Its sheer scale and anonymity made it difficult for her to defend her rights: the perpetrators were hard to identify, and some accounts spread online violence in an organized, large-scale way to gain traffic and profit. Eight months after the bullying began, she took her own life; in her suicide note she wrote that the malicious comments from anonymous users were the main reason for her death.
Shocked by her death, netizens launched a campaign named “Pink, the color of resistance to cyber violence” on a Chinese social media platform. Its founder declared: “The internet was initially made to connect and support everyone, not to allow everyone to hurt each other.”
Who should take responsibility for online harms?
- Online Platforms and Tech Companies
The platforms themselves must be the primary party responsible for online harm. These tech companies have near-total control over their platforms, but they act only where their own interests are at stake. When Google finds copyright-infringing videos on YouTube, it removes them very quickly, yet it is far slower to act on videos containing hate content. These platforms and tech companies focus on short-term profit rather than providing a better service to users, even though the latter would benefit their long-term development.
Second, the layout and design choices of some platforms and technology companies also promote this unhealthy cyber-ecology. Many platforms use algorithms to personalize content for users. This is an effective way to retain users, but it also creates echo chambers. Worse, a user of Douyin (the Chinese version of TikTok) found that different groups not only receive different content, but even see different comments under the same video, which deepens the information cocoon further. Such mechanisms determine which content gains visibility and shape user behavior; they also demonstrate that platforms have absolute control over themselves. Regulation is therefore necessary, or that power will be abused.
Some platforms have already established rules for their ecology. Facebook, for example, developed the Facebook Community Standards to be inclusive of different perspectives and beliefs, especially those of users and communities that may be overlooked or marginalized. In China, a growing number of platforms, such as Sina Weibo, Douyin, Douban, Xiaohongshu, and Bilibili, have further strengthened platform governance, for instance by improving the technical identification of offensive information and by filtering and intercepting abusive content. Beyond obviously rude words, however, automated systems still struggle to judge whether a comment is genuinely offensive.
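To see why automated filtering catches only the “obvious rude words”, here is a minimal sketch of a naive keyword filter; the word list and example comments are illustrative assumptions, not any platform’s actual system:

```python
# A naive keyword filter, sketching the kind of automated check
# platforms can easily run. The banned-word list is purely illustrative.
BANNED_WORDS = {"idiot", "trash", "loser"}

def is_flagged(comment: str) -> bool:
    """Flag a comment if any word (case-insensitive, punctuation
    stripped) appears in the banned-word list."""
    words = comment.lower().split()
    return any(w.strip(".,!?'\"") in BANNED_WORDS for w in words)

# Blatant abuse is caught...
print(is_flagged("You are an idiot"))                   # True
# ...but insinuation with no banned word slips through...
print(is_flagged("nice hair, very nightclub of you"))   # False
# ...and an innocuous use of a listed word is a false positive.
print(is_flagged("never call other people trash"))      # True
```

The misses and false positives above are exactly why word-matching alone cannot decide whether a comment is genuinely offensive: meaning depends on context, which such filters never see.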
Building a friendly platform ecosystem cannot rely on the platforms alone. Without outside regulation, we have no way of knowing whether platforms are actually fighting online violence.
- Governments and Regulatory Bodies
Regulation at the legal level is essential: the state should introduce laws governing online violence and establish a third party, independent of the platforms, to regulate them.
Australia offers a model. Its Online Safety Act 2021 strengthens online safety law and gives the eSafety Commissioner (Australia’s online safety regulator and the world’s first government agency dedicated to online safety) greater power to regulate the online services industry and protect users, both adults and children, from online harm. eSafety can thus monitor online service providers as a third party, placing platforms under real constraint.
- Users
Individual users also have a responsibility to uphold positive norms and behaviors online. As Linghua Zheng posted on her account: “We should speak carefully online, because once you say something bad, it can only be forgiven but not forgotten.” A malicious comment from an anonymous user can be the straw that breaks the camel’s back.
In conclusion, addressing online harms requires a multi-faceted approach involving collaboration among the platforms themselves, governments and regulatory agencies, and users. Only when the law imposes stricter regulation on online platforms, and when Internet users come to realize that the Internet is not a place beyond the law, will we be able to create a safer online environment.
References
Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. https://www.eff.org/cyberspace-independence
eSafety Commissioner. (2022). Learn about the Online Safety Act. https://www.esafety.gov.au/newsroom/whats-on/online-safety-act
Flew, T. (2021). Regulating Platforms. John Wiley & Sons.
Green, R. (2023, February 21). Maybelline New York shows game discrimination through female and LGBTQIA+ players’ eyes in newly-launched campaign via HERO. Campaign Brief. https://campaignbrief.com/maybelline-new-york-shows-game-discrimination-through-female-and-lgbtqia-players-eyes-in-newly-launched-campaign-via-hero/
Bastion Insights. (n.d.). Online gamer experiences. https://www.bastioninsights.com/online-gamer-experiences/
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Department of Media and Communication, University of Sydney, and School of Political Science and International Studies, University of Queensland.
Yee, J. (2023, March 7). Chinese woman, 24, commits suicide after being shamed for having pink hair. Mothership.sg. https://mothership.sg/2023/03/pink-hair-china-girl-suicide/