Have you ever been personally attacked online?
Have you been subjected to all kinds of abuse and slander just because of your gender, race, or even what you wear or what you like?
You’re LUCKY if the answer is “NO”!
Then, have you ever witnessed such hate speech online, or learned of incidents where it has caused real harm?
Introduction
With the development of the Internet and technology, social media have gradually become woven into people's lives. People can obtain, publish and exchange information at any time and from anywhere. The relative anonymity of the Internet and the promotion of freedom of speech have also enabled people to express their opinions openly online. However, because the network is largely anonymous, highly unconstrained and only loosely supervised, extreme, violent and hateful speech has emerged. This poses considerable challenges and causes serious harm to both individuals and society (Perera et al., 2023).
In fact, the amplification and spread of hate speech through the Internet and various platforms has become an increasingly serious problem. The Pew Research Center in the United States reported that, in 2017, 41% of Internet users had experienced online harassment, and 18% of them had experienced physical threats, sexual harassment or stalking (Duggan, 2017, as cited in Flew, 2021, p. 91). These statistics are disheartening, but they also expose the Internet's problems. Social media will only become more indispensable to our lives, so the issue of hate speech on these platforms must be taken seriously. Laws and policies, online platforms, celebrities and ordinary people all need to play a role and work together to overcome the difficulties involved in dealing with hate speech.
Hate Speech Definitions and Harms
As shown in Picture 1, hate speech is defined differently across societal sectors (Perera et al., 2023). Taken together, however, the definitions converge on public remarks that are discriminatory and hurtful towards marginalized and vulnerable groups, such as women, children, ethnic minorities and LGBTQ+ people, targeting their race, gender, sexual orientation, religion, disability or other characteristics, and encouraging or inciting hatred against them (Flew, 2021; Sinpeng et al., 2021).
Many social events show how serious the online harms and threats brought by such hate speech can be. Hate speech can encourage intimidation, contempt, discrimination and prejudice, undermine social trust and increase hostility in society (Flew, 2021). It can even escalate into cyberterrorism and terrorism, pushing some people towards extreme criminal acts (Chetty & Alathur, 2018). For example, it can lead to blatantly organized anti-social activity such as abusing women, denigrating transgender people and discriminating against ethnic minorities (Benton et al., 2022). Harmful online speech can directly or indirectly damage individuals' physical and mental health, preventing them from participating in collective life and disrupting their everyday routines (Chetty & Alathur, 2018). It may leave victims with low self-esteem, anxiety, fear and depression, causing serious psychological harm (Perera et al., 2023). Victims may also be harassed offline, living under constant stress, unable to relax, stripped of autonomy and feeling unsafe, which can ultimately drive them to suicide (Perera et al., 2023; Flew, 2021).
Hate Speech and Online Harms on Twitter
Twitter is one platform where a great deal of hate speech circulates, involving racial and sexist discrimination, abuse and other problems that can cause significant online harms (Matamoros-Fernández, 2017). The strictness of Twitter's regulation of online speech has in fact shifted over recent years. A few years ago the platform imposed restrictions and actively managed hate speech through measures such as deletion, warnings and bans. One famous example: on January 6, 2021, Twitter deleted several of Donald Trump's tweets, and two days later it permanently suspended his account, because he had repeatedly posted tweets denying the election results and appearing to incite violence, with the risk of further incitement (Yildirim et al., 2023).
However, on October 28, 2022, when Elon Musk officially took over as Twitter's new chief, the number of tweets containing vulgar hate speech targeting race, religion, ethnicity and sexual orientation skyrocketed to 4,778 between midnight and noon that day, roughly 400 per hour. By comparison, in the six days before the takeover, hateful content had never exceeded 84 tweets per hour, as shown in Picture 3 (Benton et al., 2022).
Meanwhile, scholars conducted a sentiment analysis of tweets and found that, also on the 28th, negative sentiment containing hate rose rapidly, with 67.2% of the tweets showing a negative tone. This happened because Elon Musk advocated turning Twitter into a more "free" platform, removing key employees and greatly reducing the platform's censorship and restrictions, so that people could express themselves on Twitter without fear of being punished through suspensions or bans (Benton et al., 2022).
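For illustration only, the share of negative-toned tweets in a sample could be estimated with an off-the-shelf sentiment analyzer. The sketch below uses NLTK's VADER model, which was designed for social media text; the sample tweets and the negativity threshold are assumptions made for this demonstration, not the data or method of Benton et al. (2022).

    # Minimal sketch: estimating the share of negative-toned tweets with NLTK's VADER.
    # Sample tweets and the -0.05 threshold are illustrative assumptions only.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time download of the VADER lexicon

    sample_tweets = [
        "I love how welcoming this community is!",
        "These people are disgusting and should be banned.",
        "Just had coffee, now watching the news.",
    ]

    analyzer = SentimentIntensityAnalyzer()

    # Count tweets whose compound score falls below a common negativity threshold.
    negative = sum(
        1 for tweet in sample_tweets
        if analyzer.polarity_scores(tweet)["compound"] <= -0.05
    )

    print(f"Negative-toned tweets: {negative / len(sample_tweets):.1%}")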
This also suggests that a great deal of latent extreme and hateful thought has always existed on the Internet and, if left unchecked, will erupt into an explosion of hate speech. When too much hateful content circulates online, it degrades the health of the entire network environment and undermines the stability of society, and the groups persecuted by hate speech find themselves in an even more difficult position. Additionally, many teenagers are online and may be attracted by the unrestricted posting that social media allows (Benton et al., 2022). Yet young people's outlooks and values are easily influenced, and their capacity to withstand the pressure of hate speech is limited, which can eventually lead to heartbreaking incidents (Woods, 2021).
Elon Musk's policy echoes the ideal of freedom of speech, but anything taken too far causes problems. Freedom without limits ultimately produces consequences that are difficult to control. We should have the right to free speech, but speech that causes serious social harm should be limited. Freedom of speech is not a protective umbrella for rampant hate speech, and it cannot come at the expense of seriously harming innocent groups and undermining the harmony and stability of society as a whole.
Solutions
Clear Laws and Policies
As early as 1975, Australia's federal Racial Discrimination Act stated that no person may engage in public conduct (except art, scholarship, journalism or truthful commentary) that is likely to offend, insult, humiliate or intimidate another person or group of persons because of race, color, or national or ethnic origin (Sinpeng et al., 2021, p. 29). More recent laws and policies also restrict hate speech. For example, in 2018 Australia added new provisions to the Crimes Act making it illegal to publicly threaten or incite violence on the grounds of race, religion, sexual orientation, gender identity, intersex status or HIV/AIDS status (Sinpeng et al., 2021, p. 29).
In the future, more such laws should be introduced and publicized, with specific penalties made explicit, and dedicated bodies should conduct regular supervision and inspection. This serves not only as a reminder to individuals and online platforms, but also as a legal basis for dealing with hate speech and online harms.
Twitter’s Actions
Twitter's rules need to be open and transparent, and the platform can reduce hate speech through "warning interventions". First, its regulations should be transparent so that users understand what kinds of speech will be restricted, and users should be warned if they continue to post hate speech. Accounts with large followings deserve closer supervision, since their speech has a greater social impact than that of ordinary accounts. Moreover, if an account is banned for hate speech, a pop-up message can notify its followers (without naming the specific person, to protect privacy), serving as a warning (Woods, 2021).
Additionally, handling complaints is an important way to reduce online harms (Woods, 2021). Regrettably, Twitter accepts only web links as evidence of harassment; these take time to review and sometimes become invalid. At the same time, Twitter does not accept screenshots as evidence, which means some complaints are not handled promptly (Matamoros-Fernández, 2017). Twitter should devote more staff to handling complaints, improve response times, reasonably expand the types of acceptable evidence while still reviewing it carefully, and protect complainants' privacy.
What's more, Twitter can invest more in machine learning algorithms and AI systems to better identify and address hate speech (Perera et al., 2023). Using AI to detect hate speech and its spreaders can greatly reduce the workload and improve efficiency. Beyond common hate speech, however, some hateful slang, abbreviations, emoticons and symbols are not easily recognized (Perera et al., 2023), so these systems must learn and be updated frequently.
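As a rough illustration of how such a system might work, the sketch below trains a simple text classifier with scikit-learn. The tiny training set, its labels and the character n-gram settings are assumptions made purely for demonstration; a real platform system would rely on large, carefully annotated datasets and far more sophisticated models, retrained regularly to keep up with new slang, abbreviations and symbols.

    # Minimal sketch of a machine-learning hate speech classifier using scikit-learn.
    # The toy training set and labels are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "I hope you all have a great day",
        "those people are subhuman and should disappear",
        "looking forward to the weekend with friends",
        "go back to where you came from, you animals",
    ]
    train_labels = [0, 1, 0, 1]  # 0 = not hateful, 1 = hateful

    # Character n-grams help catch deliberate misspellings, slang and symbol tricks.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(train_texts, train_labels)

    # Classify new, unseen messages.
    print(model.predict(["you animals should disappear", "have a lovely evening"]))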
It should also be noted that cultural differences can produce very different attitudes towards the same content. In some regions, for example, religious cultures make LGBTQ+ groups less accepted, and they may even be subjected to verbal and physical humiliation and abuse. In India, LGBTQ+ groups are a principal target of criticism, and because of social constraints on and discrimination against women, hatred towards lesbians is even stronger (Sinpeng et al., 2021). However much we regret this social phenomenon, it cannot be resolved in a short time; what we can do is try to minimize hate speech and the online harm it may cause. Twitter is a public, globalized platform that faces many cultures, which makes it inherently difficult to manage, and its governance scheme should be continuously developed and improved. It needs an overall governance program, but how to adopt targeted approaches for specific cultural backgrounds, regions and events also needs to be studied.
Proper Individual Behavior
It is difficult to dictate other people's behavior, but we can behave properly ourselves and try to view differences with a tolerant mind (Flew, 2021). It helps to work on our own character, understand the differences between people and countries, and avoid making extreme, hateful remarks online. Moreover, if you see organized, purposeful extremist speech or behavior on the Internet, report it to the platform promptly. Additionally, try to encourage or teach the people around you (such as children) to stay away from hate speech. Both celebrities and ordinary people should take proper action.
Conclusion
Social media are becoming an increasingly important part of our daily lives: the ways people communicate and express their opinions keep multiplying, and the boundary between online and offline is gradually blurring. Every kind of thought and attitude from real life can appear on the Internet, and the influence generated online seeps invisibly back into the real world. Unregulated social media platforms can easily become hotbeds of extremism and hate speech, causing considerable damage to the harmony and stability of society.
For the long-term, healthy development of social platforms and society, social media companies, including Twitter, should not grant speech so much freedom that hate speech proliferates; some restrictions are still needed. Controlling hate speech requires the joint efforts of laws, regulators, media platforms and individuals, so as to build trust and goodwill in society, reduce existing hostility as much as possible, keep the amount of hate speech in check and avoid the online harms it causes. Although addressing hate speech poses challenges, such as cultural differences and the balance between free speech and limits on hate speech, as long as people are willing to tackle it and keep adjusting and improving their methods, one day "HATE SPEECH" may turn into "LOVE PREACH".
References
Benton, B., Choi, J. A., Luo, Y., & Green, K. (2022). Hate speech spikes on Twitter after Elon Musk acquires the platform. School of Communication and Media, Montclair State University, 1-12.
https://digitalcommons.montclair.edu/scom-facpubs/33
Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108-118.
https://doi.org/10.1016/j.avb.2018.05.003
Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91-96). Polity Press.
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946.
https://doi.org/10.1080/1369118X.2017.1293130
Perera, S., Meedin, N., Caldera, M., Perera, I., & Ahangama, S. (2023). A comparative study of the characteristics of hate speech propagators and their behaviours over Twitter social media platform. Heliyon, 9(8), 1-13.
http://dl.lib.uom.lk/handle/123/21886
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award: Regulating Hate Speech in the Asia Pacific, 1-47.
https://hdl.handle.net/2123/25116.3
Woods, L. (2021). Obliging platforms to accept a duty of care. In M. Moore & D. Tambini (Eds.), Regulating big tech: Policy responses to digital dominance (pp. 93–109). Oxford University Press.
Yildirim, M. M., Nagler, J., Bonneau, R., & Tucker, J. A. (2023). Short of suspension: How suspension warnings can reduce hate speech on Twitter. Perspectives on Politics, 21(2), 651-663.
Picture Links
Picture 1 http://dl.lib.uom.lk/handle/123/21886
Picture 2 https://www.nytimes.com/2021/01/08/technology/twitter-trump-suspended.html
Picture 3 https://digitalcommons.montclair.edu/scom-facpubs/33
Picture 4 https://www.nytimes.com/2022/12/02/technology/twitter-hate-speech.html
Picture 5 https://news.un.org/en/story/2023/01/1132617
Picture 6 http://insidethecask.com/2015/11/11/avoid-love-hate-extremes-customer-relationship/