
Have you ever been scrolling through social media and come across blatant hate speech, false information, or even attacks on particular groups of people? Have you ever wondered why some posts get taken down while others stay up? More importantly: who actually decides whether that content should exist?
This points to a central question: who has the right to draw the boundaries of freedom of expression?
We all know that freedom of speech is one of the cornerstones of a democratic society. The ability to express one’s views, to question authority, to satirize the government is a right we are often proud of. But should freedom of speech be tolerated without limit once it starts to hurt people, spread hatred, or even endanger lives?
Freedom of speech is never “unlimited”.
Let’s start by clearing up a common misconception: freedom of speech does not mean the freedom to say anything, beyond the reach of platforms and governments. In fact, freedom of speech is not absolute in any democratic country. Under Articles 19 and 20 of the UN International Covenant on Civil and Political Rights, freedom of expression must be balanced against the rights of others, national security, public order and public morals.
While Australia, unlike the United States, does not enshrine freedom of expression in its Constitution, human rights legislation in several states and territories still protects an individual’s right to expression. Those protections come with limits, however: speech that incites hatred, violence, or discrimination can be restricted (Flew, 2021).
Terry Flew, in Regulating Platforms, notes that the “limits” of freedom of expression have become a central issue in political and cultural conflicts around the globe. Especially with the rise of social media platforms, cyberspace has been transformed from a simple “marketplace of ideas” into a complex battleground of platform rules, algorithmic censorship, and multilateral regulatory pressure (Flew, 2021, p. 93). In other words, we do not speak freely in an open square; we speak in a space shaped by technological systems and commercial logic.
Social media platforms: digital square or private territory?
Speech no longer happens mainly in the physical square; it happens on platforms such as Twitter, Facebook, Instagram, WeChat, and TikTok. But these platforms are not really “public spaces”. They are commercial products owned by private companies, with their own community guidelines, algorithmic control mechanisms, and even “gag” policies.
For example, in 2021, then US President Donald Trump was permanently suspended from Twitter and indefinitely suspended from Facebook for allegedly inciting violence. The decision sparked a global debate: do platforms have the right to silence the leader of a country? Is this “censorship”?
Supporters argue that the companies are protecting their platforms from the spread of disinformation and hate speech. Critics counter that this kind of “corporate censorship” is just as dangerous, because it is neither transparent nor democratically accountable.
Complicating matters further, platforms’ standards for judging hate speech are not consistent across cultures. As Sinpeng et al. (2021) point out, Facebook’s handling of hate speech in the Asia-Pacific region suffers from uneven resources, insufficient capacity to moderate local-language content, and weak protection of marginalized communities (p. 7). In countries such as Myanmar and Sri Lanka, Facebook has been criticized for failing to curb incitement to violence against minority communities in time, with serious consequences.
This makes clear that how platforms draw the “boundaries of speech” is not just a technical question but a matter of social governance.
There is also a more implicit mechanism: platforms’ recommendation algorithms are not neutral. To increase dwell time and interaction rates, they tend to push more emotional and polarizing content to users. This means platforms exert power not only when they delete posts, but also when they decide whose voice gets amplified.
Government Regulation: Protecting Citizens or Suppressing Dissent?
Faced with social media platforms that seem “out of control”, more and more governments are intervening and enacting laws to tighten oversight of online content. For example:
Germany’s Network Enforcement Act (NetzDG): platforms must remove manifestly illegal hate speech within 24 hours or face heavy fines.
Australia’s Online Safety Act: gives the eSafety Commissioner greater powers to compel platforms to remove harmful or illegal content, particularly online harms targeting young people and women.
China’s online regulatory regime: strict censorship of politically sensitive content, widely criticized for suppressing dissent and blocking information.
In Australia, the eSafety Commissioner has become one of the more advanced digital governance bodies in the world. Its focus is not only on punishing cyberbullying and hateful behavior, but also on protecting space for vulnerable groups to express themselves. Sinpeng et al. (2021) point out that this “victim-oriented” logic of governance is in fact an extension of the guarantee of freedom of expression: it allows voices that might otherwise be silenced by online violence to continue to exist.
This kind of government intervention has its own grey areas, however. Flew (2021) warns that restricting “legitimate but uncomfortable” speech without accountability mechanisms can produce a “chilling effect”, suppressing political criticism and dissent (p. 95). Regulation must therefore strike a balance between protection and repression.
Algorithms and artificial intelligence: neutral censors or new types of risk?
As the volume of content explodes, platforms rely more and more on AI and algorithmic systems to moderate it, from recognizing hateful terms to automatically removing “offending” content. It sounds efficient and smart, but are these algorithms really “neutral”?
Research has found that many AI content moderation systems carry linguistic, cultural and contextual biases (a toy sketch after the list below shows how such misfires can arise). For example:
Facebook’s AI moderation tools are more likely to wrongly remove non-English content, especially political expression in Arabic, Burmese, and Filipino (Sinpeng et al., 2021).
Ordinary posts by LGBTQ+ users are often misclassified by algorithms as “inappropriate”.
Dark humor, satire, and art are also frequently censored by mistake.
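To see how such misjudgments can arise even without malice, here is a deliberately simplified sketch of keyword-based filtering that is blind to language and context. The blocklist and example posts are invented for illustration and are not drawn from any platform’s actual system.

```python
# Toy illustration only: a naive keyword filter working from a tiny,
# hypothetical English-only blocklist. It has no notion of speaker,
# language, quotation or satire, so it flags counter-speech and satire
# that merely mention a listed word, while abuse written in a language
# the list does not cover passes straight through.

BLOCKLIST = {"vermin", "parasites"}  # hypothetical, deliberately tiny

def naive_flag(post: str) -> bool:
    """Flag a post if any word matches the blocklist, ignoring all context."""
    words = {w.strip(".,!?\"'“”‘’").lower() for w in post.split()}
    return not words.isdisjoint(BLOCKLIST)

examples = [
    # Counter-speech quoting a slur in order to condemn it: wrongly flagged.
    "Calling refugees 'parasites' is exactly the rhetoric we should reject.",
    # Satire using a listed word figuratively: also wrongly flagged.
    "Tonight's sketch portrays the board as parasites in suits.",
    # Direct abuse using a listed term: correctly flagged.
    "They are vermin and should be driven out.",
    # Abuse phrased in Burmese, Filipino or any other uncovered language
    # would not match at all, since the list only contains English terms.
]

for post in examples:
    print(naive_flag(post), "-", post)
```

Real moderation systems are far more sophisticated than this, but the asymmetry the report describes is similar in shape: tools built and resourced mainly around English handle context poorly everywhere else.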
There is also a more insidious problem: so-called “shadow banning”. The platform does not delete a user’s content, but quietly limits its visibility in the recommendation feed, often without the user ever knowing. This lack of transparency makes it easy to harm users who have done nothing wrong, and strips content regulation of basic openness and due process.
The deeper structural problem is that platforms’ recommendation algorithms tend to push the most extreme, emotionally charged content in order to entice users to click and interact. In practice this amplifies hatred, misinformation and antagonism, and makes it harder for genuinely measured, reasoned expression to be seen. This “algorithmic amplifier” effect is also quietly redrawing the boundaries of speech.
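As a rough illustration of the incentive being described, here is a minimal, hypothetical feed-ranking sketch. The weights, field names and the visibility multiplier are invented for this example and are not taken from any platform; the point is only the shape of the logic: whatever is predicted to provoke the strongest reaction rises to the top, and a quiet per-account multiplier can bury a post without ever deleting it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's guess at click-through
    predicted_outrage: float  # model's guess at angry reactions and heated replies
    visibility: float = 1.0   # per-account multiplier; values below 1.0 act like a shadow ban

def feed_score(post: Post) -> float:
    """Rank by predicted engagement: emotionally charged posts score higher,
    and a lowered visibility factor quietly buries a post without removing it."""
    engagement = 0.4 * post.predicted_clicks + 0.6 * post.predicted_outrage
    return engagement * post.visibility

posts = [
    Post("Measured explainer of the new policy", predicted_clicks=0.30, predicted_outrage=0.05),
    Post("Furious rant blaming an out-group",    predicted_clicks=0.25, predicted_outrage=0.80),
    Post("Activist post, quietly downranked",    predicted_clicks=0.35, predicted_outrage=0.40, visibility=0.2),
]

# The rant outranks the explainer, and the downranked post sinks to the bottom.
for p in sorted(posts, key=feed_score, reverse=True):
    print(round(feed_score(p), 3), p.text)
```

Nothing here is “deleted”, yet the ordering alone decides which voices most users ever see.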
Cultural Differences and the Challenges of Global Governance
Another factor that complicates the question of who should draw the boundaries of freedom of expression is cultural and institutional difference between countries. In some countries, religion and national identity are extremely sensitive topics; in others, political criticism is treated far more leniently.
In Germany, for example, Nazi symbols and Holocaust denial are illegal, a deliberate response to historical harm, while in the United States they are still generally treated as “protected expression” under the First Amendment. In India, criticism of certain religious groups can quickly be labeled “incitement to violence”, whereas in the United Kingdom similar discussions are more likely to be treated as legitimate public debate.
In their study of the Asia-Pacific region, Sinpeng et al. (2021) also point out that one of the core challenges platforms face when moderating content across countries is the tension between globally uniform policies and local cultural sensitivities. Facebook has tried to formulate a single global hate speech policy, but it often overlooks linguistic detail and cultural context, which leads to misjudgments and even provokes resentment from the communities concerned.
This shows that the boundaries of freedom of expression are not a product that can be “standardized and exported”. If a platform applies “global rules” while ignoring local contexts, it often creates more conflict; if everything is left to national governments, censorship can be abused. The balance of power and cultural understanding must be addressed together.
Each of us is a co-builder of boundaries
Drawing the boundaries of freedom of expression is never an easy task. It is not just a matter of law, policy or technology, but an ongoing process of negotiation about power, responsibility and values.
So what should we do in the future?
Strengthen digital literacy education: help every user understand the limits of the right to expression and recognize the difference between hate speech and legitimate criticism. Schools should teach not only how to speak out, but how to do so responsibly.
Push for algorithmic transparency legislation: we can no longer accept “black box” algorithms. Platforms must explain the logic behind how they recommend, demote and remove content, and accept public scrutiny.
Establish multi-stakeholder participation mechanisms: platform policies should not be decided by company executives alone, but should include the voices of civil society organizations, user communities, researchers and others.
As Gillespie (2018) puts it, content moderation is a form of power in action. If we lack scrutiny of and checks on that power, it can quietly transform our public spaces and reshape the possibilities of our conversations with each other.
Freedom of expression in the future should not just be the “freedom to speak”, but the ability and responsibility to take part in public space with mutual respect and to build common understanding. That requires not only governments and platforms, but each of us as well.
References
Banet-Weiser, S. (2018). Empowered: Popular feminism and popular misogyny. Duke University Press.
Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91–96). Polity Press.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Hate speech regulation in the Asia Pacific. University of Sydney & University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf