“We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”
— John Perry Barlow, A Declaration of the Independence of Cyberspace (1996).
When the Internet first emerged, many hoped it would bring true freedom of speech, allowing everyone to express themselves without fear. Reality, however, has gradually revealed a more complex and darker side. With the explosive growth of social media and digital platforms over the past decade, freedom of expression has been accompanied by hate speech, cyberbullying, and a range of other harms. So today we are going to explore a connected set of questions: What is hate speech? What does online harm encompass? Why do platforms moderate content, and how should it be governed?

What is hate speech?
Imagine scrolling past a comment on social media that is not a simple argument or difference of opinion, but hostile text attacking someone’s race, sex, or religion. That, in a nutshell, is classic hate speech. According to Parekh’s definition, hate speech is speech that “expresses, promotes, or incites hatred against a group of people who are often treated differently on the basis of characteristics such as race, religion, gender, sexual orientation, nationality, or disability” (Parekh, 2012, p. 40). It does not necessarily call for violence directly, but it quietly fosters an atmosphere of “these people should not exist in this society”.
The Growth and Real Impact of Online Harm: Three Case Studies
The term “online harm”, on the other hand, is defined more broadly. It covers hate speech, cyberbullying, the spread of disinformation, and even illegal and restricted online content (pornography, terrorist material, etc.). In short, it refers to behavior on digital platforms that causes psychological, emotional, or real-world harm to individuals or groups.

With the popularity of social media, instant messaging, and online forums, online harms have taken increasingly diverse forms. In recent years the trend has manifested itself in three main ways:
1. Rapid growth in the spread of disinformation: During major public events (e.g., COVID-19, political elections), disinformation and false rumors spread dramatically faster and further. Such information not only distorts the public’s judgment of the facts but can also cause panic and confusion. A notable case occurred during COVID-19, when the false claim that “drinking methanol can cure COVID-19” spread on Iranian social media. Many people believed it, and 728 Iranians died of methanol poisoning (ABC News, 2020).
2. Spread of extremist content: The anonymity of cyberspace gives extremists room to operate. Such content often stirs hostility in parts of the population through carefully crafted videos, images, and text, and seeks to convert virtual emotion into real-world violence. A clear case in point is the 2019 Christchurch mosque shooting in New Zealand, carried out by a white-supremacist extremist. He had repeatedly posted ultra-nationalist statements and incited hatred on online platforms, sparking heated discussion and controversy on social media. This online radicalization ultimately turned into real-life violence when he attacked a mosque in Christchurch and live-streamed the attack on Facebook (BBC News, 2019). The case shows how inflammatory rhetoric on the Internet can feed real-life violence.
3. Cyber violence and privacy breaches: Some Internet users launch retaliatory attacks against specific individuals or groups, for example through cyberbullying or doxxing, in ways that can cause irreparable real-world harm to the victim. A case in point is a young Australian named Alex, who was subjected to cyber abuse and doxxing, receiving over 50 maliciously ordered deliveries a day and suffering serious financial harm as a result (ABC News, 2024). According to Peterson and Densley, online platforms contribute to cyberbullying to some extent because many social media sites allow users to sign up anonymously without providing real identifying information (Peterson & Densley, 2017).
These phenomena suggest that online harm is not merely a matter of rhetoric but a complex problem involving public safety, social order, and individual mental health. Precisely because of this, platform moderation has become critical as the digital space grows ever more complex.
Platform Moderation: Guardian or Censor?
Confronted with seemingly limitless hate speech and online harm, many platforms have adopted content moderation as their response strategy. The moderation process mainly combines two approaches: manual review and algorithmic screening. Platforms argue that the moderation they impose helps stem the spread of harmful content that endangers vulnerable groups and threatens public order. Yet these measures have sparked wide debate and controversy.

On the plus side, digital platforms, as major channels of information dissemination, do bear a degree of social responsibility. By setting speech rules, platforms can rapidly take down posts that spread hate, incite violence, or disseminate false information, thereby protecting the health of the online ecosystem. Major platforms such as Facebook, X, and YouTube, for instance, have developed detailed community standards under which they remove or limit the dissemination of clearly unlawful or harmful content.
But moderation raises many technical and ethical challenges. First, algorithmic content moderation has its limits: algorithms struggle to grasp context, irony, wordplay, and cultural nuance, and they frequently misclassify content. Second, although human review allows more nuanced judgment, it is stressful and inefficient at the scale of the information involved, and reviewers’ decisions may be colored by their individual values. Content moderation is therefore a ‘double-edged sword’: it tries to shield users from harmful information while risking the curtailment of legitimate expression. The toy sketch below makes the algorithmic limitation concrete.
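As a minimal sketch in Python (the blocklist, threshold, and all names are invented for illustration; nothing here reflects any real platform’s moderation system), consider a context-blind keyword filter with a human-review fallback. A genuine threat and a harmless joke trigger the same keywords and the same score, which is exactly why borderline cases end up in human queues:

```python
# Toy keyword screening with a human-review fallback. The blocklist,
# threshold, and all names are illustrative assumptions, not any real
# platform's moderation pipeline.

BLOCKLIST = {"kill", "die"}
REMOVE_THRESHOLD = 0.5  # auto-remove only on a very strong signal

def keyword_score(post: str) -> float:
    """Crude signal: fraction of words that appear on the blocklist."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    return sum(w in BLOCKLIST for w in words) / len(words) if words else 0.0

def moderate(post: str) -> str:
    score = keyword_score(post)
    if score == 0.0:
        return "allow"
    if score >= REMOVE_THRESHOLD:
        return "remove"
    # The filter cannot tell a threat from a joke or a quotation,
    # so every borderline hit lands in a human reviewer's queue.
    return "human_review"

# A real threat and a harmless joke both score 0.25 here,
# so the filter routes them identically:
print(moderate("I will kill you"))                       # human_review
print(moderate("That joke might kill me, I could die"))  # human_review
```

The point of the sketch is not the specific rules but the structural problem: any signal computed without context assigns the same score to very different utterances, so either humans absorb the ambiguity or legitimate speech gets removed.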
Moreover, at the level of national policy, regulatory requirements for online content vary from country to country. European countries, for instance, tightly regulate hateful and violent speech to limit extremism, while some governments impose broader restrictions on online content out of concern for national security or political stability. This leaves platforms caught between conflicting expectations. Figuring out how to balance what is understood as ‘security’ with ‘freedom’ has become the central challenge of platform governance.
Striking a balance between ‘freedom’ and ‘security’
In today’s digital age, online governance faces the great challenge of striking a balance between freedom and security. On the one hand, we need to guard effectively against the harm caused by false information, inflammatory speech, and cyber violence; on the other, excessive control can shrink the space for citizens to express their thoughts and undermine freedom of speech. To establish an online environment that protects public safety while respecting individual rights, a transparent, efficient, and multi-stakeholder governance system must be built at several levels.

Firstly, transparency is the basis of trust. Whether for a platform or a government department, publishing moderation standards, procedures, and data reports helps users understand what content is being restricted and why, reducing misunderstanding and suspicion. Such an open mechanism not only gives users a clearer sense of the scope of moderation but also pushes the platform to keep reflecting on and refining its own practices, ultimately building credibility with users. The social platform X, for example, publishes semi-annual or yearly transparency reports disclosing how user reports were handled, the reasons for content removal, and data on government information requests. Such disclosures significantly improve users’ perception of the fairness of the platform’s moderation and give academics, the media, and the public an important basis for external oversight. Only on that foundation of trust can subsequent governance measures win broad social support and be implemented effectively. The sketch below illustrates the kind of aggregation such a report involves.
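As a rough illustration (a Python sketch over an invented decision log; this is an assumption about the shape of the data, not how X or any other platform actually compiles its reports), a transparency report is essentially an aggregation of individual moderation decisions into publishable totals:

```python
# Hypothetical moderation-decision log aggregated into report totals.
# The Decision fields and category strings are illustrative assumptions.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # e.g. "remove", "limit_reach", "no_action"
    reason: str   # e.g. "hate_speech", "disinformation"
    source: str   # e.g. "user_report", "government_request"

def transparency_summary(decisions: list[Decision]) -> dict:
    """Collapse raw decisions into the totals a report would publish."""
    return {
        "actions_by_reason": Counter((d.action, d.reason) for d in decisions),
        "user_reports_handled": sum(d.source == "user_report" for d in decisions),
        "government_requests": sum(d.source == "government_request" for d in decisions),
    }

log = [
    Decision("remove", "hate_speech", "user_report"),
    Decision("no_action", "hate_speech", "user_report"),
    Decision("limit_reach", "disinformation", "government_request"),
]
print(transparency_summary(log))
```

Publishing only such aggregates lets outsiders audit the overall pattern of enforcement without exposing individual users, which is what makes the disclosures usable by academics and the media.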
Secondly, no single actor can deal with extremist content and disinformation on the Internet alone. Information online has no borders, and a single piece of harmful content can travel the globe overnight, so no one country or platform can control it by itself. Governments, platforms, and regulators must work together, across sectors and across national borders, to create a common response mechanism. Through international communication and the standardization of rules, even where cultures and laws differ, it is possible to find common ground and develop an approach to information governance suited to the global digital age. Only when all parties act together can they effectively prevent malicious information from spreading across borders and create a healthy, safe online space.
In addition, user education and media literacy are equally crucial. Faced with a vast and complex information environment, platform screening alone cannot eliminate rumors and malicious speech. Given how complex and fragmented online information is, users should think critically when browsing, weigh the reliability of sources, and consider the consequences before commenting. Nor can users simply stand by in every situation: they should collect evidence of harmful online content, for example in the form of social media screenshots, and submit it to the relevant authorities. Only when every Internet user is equipped with critical thinking and the ability to judge the truth of information can society weaken the spread of false and extremist content at its roots.

Finally, from a legal perspective, making cyberspace safer and healthier requires a comprehensive system of legal rules. Just as the real world has traffic laws, the online world needs clear boundaries. Whether through the introduction of dedicated online safety laws or the promotion of judicial cooperation between countries, every effort must protect public safety without sacrificing freedom of speech. Here, law, justice, and ethics should stand as three pillars that deter illegal behavior while safeguarding everyone’s right to speak freely. In short, a sound legal framework can not only combat online disorder but also give platform governance a clear direction and bottom line.
Conclusion
To sum up, building a network environment that guarantees freedom while maintaining security in the digital age is a comprehensive undertaking involving technology, ethics, law, and society. As Gillespie points out, the key to a balanced and credible regulatory system is to disclose the sources of data, guarantee the transparency of processing, and justify interventions within a legal, cultural, and ethical framework (Gillespie, 2018). This requires a tripartite effort by governments, platforms, and users to protect public safety while continuing to promote diverse social discourse, creating a truly healthy online world. In such a respectful, rational, and inclusive public space, people can express their beliefs and ideas freely, as John Perry Barlow envisioned in his Declaration of the Independence of Cyberspace, without fear of becoming targets of malicious attack. This is not only an ideal but a goal worth our continuous effort.
Reference List
ABC News. (2020, April 28). Hundreds die in Iran after drinking methanol to cure coronavirus. https://www.abc.net.au/news/2020-04-28/hundreds-dead-in-iran-after-drinking-methanol-to-cure-virus/12192582
ABC News. (2024, May 15). Alex was doxxed while playing Call of Duty. What happened next was a living nightmare for him and his family. https://www.abc.net.au/news/2024-05-15/btn-high-young-people-share-experiences-of-being-doxxed/103838534
BBC News. (2019, March 15). Christchurch shootings: 49 dead in New Zealand mosque attacks. https://www.bbc.com/news/world-asia-47578798
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 24–44). Yale University Press.
Parekh, B. (2012). Is there a case for banning hate speech? In The content and context of hate speech: Rethinking regulation and responses (pp. 37–56). Cambridge University Press. https://doi.org/10.1017/cbo9781139042871.006
Peterson, J., & Densley, J. (2017). Cyber violence: What do we know and where do we go from here? Aggression and Violent Behavior, 34, 193–200. https://doi.org/10.1016/j.avb.2017.01.012