How does social media cause online harm to teenagers?

On October 15, 2021, a 19-year-old user named Luo Xiao Mao Mao Zi posted a video on her Douyin account with the caption “This is probably the last video; thank you all for being with me.” She then went live to share her experiences but was attacked in the comments of her live stream. Some comments used aggressive language egging her on to suicide, such as “drink it quickly” and “the bottle must be filled with urine.” In a state of depression, she lost control and drank paraquat, a lethal herbicide, ending her life at just 19 (Zhenjiangsifa, 2021). In this tragedy, social media not only served as a channel for spreading bullying and violence but also deepened the victim’s suffering and psychological damage. This heartbreaking case is just one of many in which teenagers are harmed by online violence.

In 2023, the Survey Report on the Protection of Minors’ Online Rights and Interests and Satisfaction with Cybersecurity found that among minors aged 12 to 18, 12.29% said cyber violence was widespread around them, while another 22.83% reported encountering cyber violence occasionally (Zhang, 2023).

In the digital era, social media has become an indispensable part of daily life. One survey showed that 97% of teenagers go online every day (Laborde, 2023). However, as the number of users and the time spent on social media grow, so does its impact on teenagers, and one such impact is an increase in violent behavior. This blog explores some possible reasons behind this phenomenon and how we might prevent it.

The Impact of Social Media on Teenage Behavior: Anonymity, Ubiquity, and Group Dynamics

(Quick, 2021)
With the development of social media, its influence on teenagers cannot be ignored. Especially given the growth in violent cases involving teenagers in recent years, we must carefully consider the role social media plays in these situations. I believe that the anonymity, ubiquity, and group dynamics of social media are part of the reason for this phenomenon. These features give teenagers opportunities to express their inner worlds and to socialize, but they also carry many negative impacts.

Anonymity: Balick has argued that anonymity is a significant component of online anger. When social media users are anonymous, many are more likely to post offensive comments: almost no one knows who is behind the screen, so the person posting bears less moral responsibility. Anonymity is therefore an important factor in digital aggression (Kim et al., 2023; Fleming, 2020).

Ubiquity: The widespread reach of social media brings new opportunities for online harm (Craig, 2020). I think a likely reason is the low threshold for participating in social media today: almost anyone can join in online communication. This near-zero barrier lets any piece of information travel further and spread more widely, making cyberbullying more likely to occur and making it harder for victims to escape the reach of online violence.

Group Dynamics: Research has found that peers may blindly believe content shared in social media groups without verifying it (Shareef et al., 2020). The group dynamics of social media can amplify individual behavior: when one teenager posts aggressive comments, peers with no prior tendency toward violence may be influenced to follow suit and post aggressive comments of their own. And because content spreads so quickly on social media, a single person’s violent speech can have serious consequences.

Hate Speech on Social Media

What is hate speech?
Common examples of hate speech include insulting epithets and slurs, comments that promote negative stereotypes, and statements meant to incite anger or violence toward a specific group; hate speech can also include non-verbal imagery and symbols (Curtis, 2016). To a large extent, however, hate speech depends on context and requires in-depth local knowledge to fully understand and address (Matamoros-Fernández, 2017). On social media platforms, hate speech can spread rapidly in many forms, such as text, pictures, or videos, with wide-ranging impact.
The anonymity and openness of social media provide fertile ground for the distribution of such speech, and users may feel they can express bias and hatred more boldly online without consequences.

(Pengpai News, 2020)

Spread of Hate Speech
There are, I conclude, two factors that contribute to the spread of hate speech (a toy sketch illustrating both appears after this list):
1. Social Media Algorithms: These algorithms use information about users, such as their activities, behaviors, and interests, to regulate the visibility, ordering, and recommendation of content, with the goal of prolonging engagement (Menendez, 2022). At the same time, extreme and hateful content often provokes strong emotional responses, which makes it more likely to drive engagement such as comments and shares. As a result, hate speech that triggers strong emotions in teenagers can find its target audience precisely and attract a great deal of attention in a short time.
2. Social Media User Habits: On social media platforms, users tend to interact and communicate with others who share their viewpoints, unconsciously creating what is known as an “echo chamber”: an information environment in which people are mainly exposed to content that supports their existing perspectives, so that viewpoints become homogenized and the flow of information is unidirectional (GCF Global, 2019). The effect is especially pronounced on social media because algorithms recommend posts and news similar to the content users have previously interacted with, reinforcing it further. This creates ground for hate speech not only to survive but to thrive: extreme and radical viewpoints are repeatedly reinforced rather than challenged or questioned. Moreover, the echo chamber keeps users, especially teenagers, from encountering and understanding opposing viewpoints, sealing them inside an information cocoon of their own. The harm hate speech does to youth deepens when they grow used to a uniform stream of information and begin to reject other kinds of knowledge, which narrows opportunities for conversation and mutual understanding.
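To make these two mechanisms concrete, here is a minimal Python sketch. It is an illustrative assumption, not any platform’s real algorithm: every name, weight, and threshold in it is invented. It shows how ranking posts by predicted engagement (which rewards emotionally charged content) and boosting topics a user has engaged with before (which narrows the feed) combine into the feedback loop described above.

```python
# Toy model (not any platform's real system) of an engagement-driven,
# personalized feed and the echo-chamber loop it can create.
from dataclasses import dataclass, field


@dataclass
class Post:
    topic: str       # e.g. "politics", "science"
    outrage: float   # 0..1: how emotionally charged the post is
    likes: int
    shares: int


@dataclass
class User:
    # Topics this user has engaged with before -> interaction counts.
    history: dict = field(default_factory=dict)


def engagement_score(post: Post) -> float:
    # Charged content tends to draw comments and shares, so a purely
    # engagement-driven ranker rewards outrage (weights are made up).
    return post.likes + 3 * post.shares + 100 * post.outrage


def personalized_score(user: User, post: Post) -> float:
    # Boost posts on topics the user already engages with; this
    # feedback loop is the mechanism behind the echo chamber.
    familiarity = user.history.get(post.topic, 0)
    return engagement_score(post) * (1 + familiarity)


def rank_feed(user: User, posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: personalized_score(user, p), reverse=True)


# A user who has clicked on five charged political posts keeps seeing
# more of the same, while calmer, unfamiliar topics sink in the feed.
user = User(history={"politics": 5})
feed = rank_feed(user, [
    Post("politics", outrage=0.9, likes=120, shares=40),
    Post("science", outrage=0.1, likes=200, shares=30),
])
print([p.topic for p in feed])  # -> ['politics', 'science']
```

Even in this toy model, the two factors compound: the engagement term rewards outrage, and the familiarity multiplier closes the echo-chamber loop, so the user is served more of what already agitates them.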

Impact of Hate Speech
Adolescence is a critical period for forming personal identity and value systems. During this time, brain regions related to the need for peer approval, attention, and feedback become more sensitive, so teenagers are easily harmed by hate speech and other forms of online misinformation. Teenagers have a deep desire to belong to something greater than themselves, both socially and within their peer group. This strong need for social inclusion, combined with a still-developing ability to recognize and evaluate social messages, makes them particularly vulnerable to the effects of hate speech (Weir, 2023; Youaremom, 2019). For example, a survey of 1,155 Israeli teens aged 12 to 17 revealed the difficulties teenagers encounter on social media platforms, especially hostile content, anti-Semitic language, and threats. According to the survey, almost 61% of the teens said offensive content online had left them frightened or worried (ANI, 2024).

Content Moderation Challenges

The development of social media has changed the way young people communicate by providing a unique space for sharing information. This openness, however, comes with a number of difficulties, particularly around content moderation. It is critical that social media platforms find a way to reduce harmful content while still allowing users to express themselves freely online.

Algorithmic limitations create difficulties in detecting content that is toxic to youth. This can mean harmful material, such as cyberbullying and discriminatory language, is not effectively removed, raising the risk of online harm for young people. Algorithms may also restrict access to positive content. For instance, Instagram’s algorithmically generated streams, personalized to each user’s interaction habits, can pull sensitive youths into a dangerous spiral of negative social comparison by exposing them to unattainable standards of physical attractiveness, body type, and size (Austin, 2021). Algorithms’ poor grasp of diverse cultural and regional contexts can worsen this issue, leaving particular groups of young people more exposed to harmful information; the naive filter sketched below shows how context-blindness fails in both directions. Improving the accuracy and adaptability of algorithms to protect young users from harmful content is therefore a critical task for social media platforms.
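As an illustration of why context-blind detection struggles, consider this deliberately naive, hypothetical keyword filter (the blocklist and example messages are invented for the sketch):

```python
# A deliberately naive keyword filter: context-blind moderation both
# misses real harassment and wrongly flags benign posts.
BLOCKLIST = {"ugly", "loser"}


def is_toxic(text: str) -> bool:
    # Flag a message if any word appears on the blocklist.
    words = set(text.lower().split())
    return bool(words & BLOCKLIST)


# Missed harm: cruel messages phrased without any blocklisted word.
print(is_toxic("nobody would miss you if you left"))       # False
# Over-blocking: harmless, self-deprecating use of a listed word.
print(is_toxic("i felt like a loser today but i'm fine"))  # True
```

Real systems are far more sophisticated than this, but the failure mode scales with them: without context, sarcasm, reclaimed language, coded insults, and culturally specific slurs are all easy to misjudge in both directions.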

Human moderation offers a potential remedy for the limits of algorithms, but it suffers from inefficiency and heavy pressure on personnel. To meet daily targets, moderators scan through violent images, sexual content, and even videos of beheadings (Arsht & Etcovitch, 2018). This not only keeps efficiency low but can also severely damage moderators’ mental health: long-term exposure to such content may lead to emotional exhaustion or even post-traumatic stress disorder (PTSD). Moreover, standards can vary significantly from moderator to moderator, producing inconsistent outcomes.

Another core issue in social media content moderation is drawing the line between hate speech and protected free speech. Different societies and cultural backgrounds hold different perspectives and standards on this question. Overly strict moderation may be seen as a violation of free speech, while lax moderation lets hate speech and harmful online behavior go unchecked. It is therefore difficult to strike a balance between protecting teens from harm and not restricting freedom of expression.

Here are some suggestions for addressing these problems:

(Social, 2021)
Upgrading technology: Social media platforms could improve the detection of violent content by using more advanced AI algorithms; a minimal sketch of what such a pipeline might look like follows.
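The sketch below is an assumed design, not any platform’s actual pipeline: the thresholds are invented, and the stand-in classifier is a crude heuristic where a real platform would use a trained model. The idea is tiered moderation: a model scores each post, clear-cut cases are handled automatically, and only the ambiguous middle band is escalated to people.

```python
# Minimal sketch of a tiered, AI-assisted moderation pipeline
# (assumed design; thresholds and classifier are illustrative).
REMOVE_THRESHOLD = 0.9   # auto-remove above this score
REVIEW_THRESHOLD = 0.5   # escalate to a human between the thresholds


def toxicity_score(text: str) -> float:
    # Stand-in for a trained classifier's probability that `text` is
    # violent or hateful; a crude heuristic so the sketch runs end to end.
    hostile = {"kill", "die", "hate"}
    hits = sum(word in hostile for word in text.lower().split())
    return min(1.0, 0.45 * hits)


def moderate(text: str) -> str:
    score = toxicity_score(text)
    if score >= REMOVE_THRESHOLD:
        return "remove"        # confident violation: take down at once
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # uncertain: a person decides, cutting both
                               # missed harm and wrongful removals
    return "allow"             # likely benign: publish normally


print(moderate("i hate you, go die"))  # -> 'remove'
print(moderate("see you at practice"))  # -> 'allow'
```

Routing only the uncertain cases to people would concentrate scarce human judgment where it matters most, which also connects to the moderator-welfare point below.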

Improving support for content moderators: Provide better mental health support and working conditions for social media moderators, institute regular mental health check-ups, and rotate their assignments. I think these measures can reduce work stress and help surface more online harm.

Developing laws and policies: Rules and laws set by national governments and international organizations can better define the responsibilities of social media platforms. This not only pushes platforms to improve the efficiency of content monitoring but also gives users a greater sense of responsibility.

Enhancing teen education and public awareness: Through public education efforts, raise young people’s awareness of online hate speech and violence and teach them to use social media safely and responsibly. When all users actively report inappropriate content, the social media environment for teens improves.

Promoting collaboration across sectors: Encourage cooperation among universities, businesses, social media companies, governments, and non-governmental organizations (NGOs) involved in content moderation. By sharing research findings and new developments in technology, they can improve the efficiency of content moderation on social media platforms.

As technology develops, we cannot ignore the harm that social media violence does to our teenagers, who are the future of the world. With this blog, I hope to raise public awareness of how social media can contribute to teen violence and to encourage efforts from all parts of society to create a safer, healthier online environment for our young people.

References:

ANI. (2024, April 7). Israeli youth navigate online minefield of antisemitism and graphic images. MSN. https://www.msn.com/en-xl/news/other/israeli-youth-navigate-online-minefield-of-antisemitism-and-graphic-images/ar-BB1lbw90

Arsht, A., & Etcovitch, D. (2018, March 2). The Human Cost of Online Content Moderation. Harvard Journal of Law & Technology. https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation

Austin, B. (2021, October 8). How social media’s toxic content sends teens into “a dangerous spiral.” Harvard T.H. Chan School of Public Health. https://www.hsph.harvard.edu/news/features/how-social-medias-toxic-content-sends-teens-into-a-dangerous-spiral/

Craig, W. (2020). Social Media Use and Cyber-Bullying: A Cross-National Analysis of Young People in 42 Countries. Journal of Adolescent Health, 66(6), S100–S108. https://doi.org/10.1016/j.jadohealth.2020.03.006

Curtis, W. M. (2016). Hate speech. In Encyclopædia Britannica. https://www.britannica.com/topic/hate-speech

Fleming, A. (2020, April 2). Why social media makes us so angry, and what you can do about it. Science Focus. https://www.sciencefocus.com/the-human-body/why-social-media-makes-us-so-angry-and-what-you-can-do-about-it

Gansner, M. (2017, September 5). “The Internet Made Me Do It”: Social media and potential for violence in adolescents. Psychiatric Times. https://www.psychiatrictimes.com/view/-internet-made-me-do-itsocial-media-and-potential-violence-adolescents

GCF Global. (2019). Digital Media Literacy: What is an Echo Chamber? GCF Global. https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/

Kim, M., Ellithorpe, M., & Burt, S. A. (2023). Anonymity and its role in digital aggression: A systematic review. Aggression and Violent Behavior, 72, 101856. https://doi.org/10.1016/j.avb.2023.101856

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130

Menendez, S. (2022, June 13). How do social media algorithms work? 101 guide. Thunderclap. https://thunderclap.it/101-guide-on-how-do-social-media-algorithm-works

Shareef, M. A., Kapoor, K. K., Mukerji, B., Dwivedi, R., & Dwivedi, Y. K. (2020). Group behavior in social media: Antecedents of initial trust formation. Computers in Human Behavior, 105, 106225. https://doi.org/10.1016/j.chb.2019.106225

Weir, K. (2023, September 1). Social media brings benefits and risks to teens. Here’s how psychology can help identify a path forward. Monitor on Psychology. https://www.apa.org/monitor/2023/09/protecting-teens-on-social-media

Youaremom. (2019, April 23). Why Are Adolescents Easier to Influence? You Are Mom. https://youaremom.com/children/easier-to-influence/

Zhang, T. (2023, June 29). 超半数受访未成年人表示身边发生过网络暴力 [More than half of surveyed minors say cyber violence has occurred around them]. China Daily. https://cn.chinadaily.com.cn/a/202306/29/WS649cdd4ca310ba94c5613f61.html

Zhenjiangsifa. (2021, October 22). 网红直播时喝下农药自杀身亡!起哄网友及平台要不要担责?专家分析 [Influencer dies by suicide after drinking pesticide during livestream: Should jeering netizens and the platform bear responsibility? Expert analysis]. The Paper. https://www.thepaper.cn/newsDetail_forward_15025763

Images:

Pengpai News. (2020, November 30). 社交媒体与信息茧房:当今网络为何充满戾气 [Social media and information cocoons: Why today’s internet is full of hostility]. The Paper. https://www.thepaper.cn/newsDetail_forward_10200197

Quick, J. (2021, June 10). 5 Social Media Marketing Trends You Need to Know. Quick Web Designs. https://quickwebdesigns.com/5-social-media-marketing-trends/

Social, A. (2021, October 28). What are Social Media Algorithms, and How do They Work? 2021 Updates. Adler Social. https://adlersocial.com/social-media-algorithms/

Zhang, C., & Xu, Z. (2020, April 24). 各国出台法律法规净化网络_欺凌 [Countries enact laws and regulations to clean up online bullying]. Sohu. https://www.sohu.com/a/390816625_117916
