Hey there! Online harm and hate speech are enormous issues in the digital age, and they are anything but minor: they are dangerous. Today's case study is the 2023 Twitter/X mess over hate speech moderation (or the lack of it) following the Elon Musk takeover, a story that has been making headlines. We will be unpacking what happened, why it matters, and why digital media policies are so woefully failing us, with some big ideas along the way: platform governance and the 'public sphere.' Let us get started!
Why are hate speech and online harm such a big deal?
First, let us break it down. Hate speech is any communication that attacks or discriminates against a person or group on the basis of attributes such as race, religion, or sexual orientation. Online harms are broader, covering cyberbullying, misinformation, and the mental toll of inhabiting toxic online spaces. These are not just hurt feelings; they translate into real-life violence, health crises, and radicalization. The 2019 Christchurch mosque shootings are an example: the attacker, radicalized in online hate communities, live-streamed the violence on Facebook.
The issue is that Twitter (now X) is a massive global space where millions of people interact daily. Platforms like this are digital town squares, but very loosely governed ones. That brings us to a central idea: the public sphere. The public sphere is a space where people gather to discuss ideas in ways that shape society's direction. The modern ideal is that social media platforms host open, democratic dialogue. However, that ideal comes undone when hate speech creeps in; what is left is an echo chamber of toxicity (Gillespie, 2022).
The Case Study: Twitter/X in 2023
In 2022, Elon Musk bought Twitter for $44 billion and pledged to make it a haven for 'free speech.' By 2023, things had got messy. Musk slashed the company's content moderation teams, rolled back policies on hate speech, and reinstated previously banned accounts, including far-right figures with histories of racist and antisemitic posts. The result? The Center for Countering Digital Hate (2023) estimated a 60 percent spike in hate speech on the platform. Misinformation, slurs, and threats rose, and marginalized people, including the LGBTQ+ community and people of color, were on the receiving end.
The 2023 Twitter/X controversies did not only re-victimize groups like the LGBTQ+ community; they also harmed Indigenous people, who often use social media to build community and sustain cultural expression. Carlson and Frazer (2018) found that Indigenous Australians are among the most avid social media users, using these platforms to connect across great distances, share cultural knowledge, and affirm their identities. The 2023 spike in hate speech that followed Musk's policy changes created a hostile environment that undermined these practices. Users reported increased exposure to racist abuse, echoing Carlson and Frazer's (2018) finding that online racism can intersect with cultural protocols such as Sorry Business and compound the emotional harm. Because the rampant racism went largely unmoderated, the platform failed in its duty to protect vulnerable communities and squandered its potential as a safe digital space for Indigenous voices. Hashtags like #TransHate surged, and many users reported receiving death threats (see image 1). Advertisers pulled out rather than have their brands linked to the toxicity, and advocacy groups like GLAAD called out the crisis of hate (GLAAD, 2023). Meanwhile, Musk doubled down in a series of tweets, insisting that 'free speech' means accepting even offensive content, so long as it is not illegal.
So, what is the big deal? This is not just a story about a few mean tweets. It is a textbook example of how digital media policies (or the lack of them) affect real lives in real spaces. To unpack it, we need a couple of key concepts.

Image 1: Combating anti-trans hate groups
Theory #1: Platform Governance and the Duty of Care
Platform governance covers everything to do with how social media companies structure and enforce the rules for using their platforms: the systems, processes, and decisions that determine which content gets amplified, removed, or simply ignored. It is not just written policy, like Twitter publishing new rules against hate speech; it is the whole apparatus behind those rules. Tarleton Gillespie, a leading scholar in this field, contends that platforms are not neutral conduits for information; instead, they actively shape online discourse (Gillespie, 2022). That gives them enormous power as gatekeepers of the digital public sphere, the modern space where people gather to discuss ideas and shape society.
Duty of care is a key element of platform governance and an increasingly common feature of online safety discussions. Platforms have a duty to care for their users, which means protecting them from harm, especially vulnerable groups such as members of marginalized communities who are targeted by hate speech. In many places, this is no longer just a moral obligation but a legal one. For instance, under the European Union's Digital Services Act (DSA), which began applying in full to the largest platforms in 2023, platforms must act on illegal content such as hate speech or face fines of up to 6 percent of their global annual revenue. The idea is simple: if millions of people use your platform, you cannot just shrug and say, "Not my problem."
This idea is the antithesis of how Elon Musk governed Twitter/X. After taking over in 2022, Musk pressed ahead with content moderation cuts, slashing staff in some departments by nearly 50%. He also reversed restrictions on hate speech and argued that the platform should put 'free speech' ahead of restrictive moderation (see image 2). Put another way, the approach was hands-off, and to that extent Twitter/X was no longer taking its duty of care seriously. The consequences were immediate and devastating. Looking at the technological affordances of Twitter/X and its governance structures makes clear why moderating hate speech on the platform in 2023 became so complex. As Sinpeng et al. (2021) point out, Facebook, and by analogy Twitter/X, struggles to address hate speech because it relies on global standards that often fail to recognize local cultural context. Sinpeng et al. (2021) found, for instance, that Facebook's moderation systems could not adequately detect hate directed at minority groups in the Asia Pacific, and the same story likely played out on Twitter/X as transphobic and racist tweets peaked under Musk's relaxed approach.

Image 2: X is bringing back some content moderators?
Theory #2: The Marketplace of Ideas—Does It Still Work?
The second concept to pull apart is the marketplace of ideas, a phrase that frequently appears in free speech debates. Rooted in the philosophy of John Stuart Mill, the marketplace of ideas holds that if all ideas can compete freely, the best ones will win out through reason and debate. The premise is that the more voices and points of view there are, the better we become at uncovering truth, because good ideas are more powerful than bad ones. It is an idea that appeals strongly to Elon Musk. In his view, Twitter/X should not censor content, even hateful speech; he has invoked the spirit of the First Amendment, arguing that censorship prevents 'truth' from emerging. Musk's position was that the platform should be an arena where ideas duke it out and the best ones win.
But does the theory stand up in the digital age? Critics like Zeynep Tufekci argue that the online playing field is not a neutral marketplace because the platforms themselves are not neutral (Tufekci, 2021). The traditional marketplace of ideas assumes everyone has an equal chance to speak and be heard. On social media, however, algorithms have the final say. In the 2023 Twitter/X case, transphobic posts did not get defeated in the marketplace of ideas; they went viral. Hashtags such as #TransHate trended, hateful content racked up millions of views, and the voices of trans users and their allies were drowned out (GLAAD, 2023). This was not a fair debate; it was a contest the loudest, most hateful voices won.
The trouble goes beyond amplification. Hate speech does not just compete with other ideas; it silences them. Many of the trans users targeted with threats and harassment did not stay around to debate their right to exist. They left the platform, shrinking the breadth of the conversation. This exposes a serious hole in the marketplace of ideas theory: it presumes that all parties are on an even footing. In reality, hate speech has a chilling effect, driving marginalized groups out of the conversation altogether. As Tufekci (2021) argues, the digital marketplace is not a level playing field when the loudest and most dangerous content gets the biggest megaphone.
The irony of Musk's reliance on the marketplace of ideas is that it ignores the real-world harm caused by hate speech. The theory might work in a hypothetical world where words have no consequences, but online hate can spill over into offline violence. The 2019 Christchurch mosque shootings, incubated in online hate communities, demonstrated how unchecked hate speech can radicalize individuals and pave the way for real-world bloodshed. By giving hate speech room to grow on Twitter/X, Musk was doing more than hosting a 'debate'; he was creating a place that propagates harm and undermines the platform's potential for democratic dialogue.
Why This Case Matters for Digital Policy
So what does the Twitter/X controversy tell us about digital media policy and governance? First, it illustrates just how much power a platform owner has. Musk's decisions were not merely business calls; they reshaped the entire online environment for millions of users. That raises a big question of accountability: should one person have that much control over a platform that is essentially a public square? Probably not. This is where governments come into the picture. The European Union's Digital Services Act (DSA), in force for large platforms from 2023, is designed to force platforms to tackle content such as hate speech or incur huge fines (European Commission, 2023). EU officials have already warned Musk that Twitter/X is not doing enough, and a courtroom showdown may well follow.
The 2023 Twitter/X controversy also shows how algorithms further hate speech, deepening the picture through the lens of platformed racism, where platform design and user practice combine to perpetuate harm. Matamoros-Fernández (2017) coined the term platformed racism to describe how social media platforms' algorithms and affordances facilitate the mediation and circulation of racist discourse, illustrated through an Australian race-based controversy that played out across Twitter, Facebook, and YouTube. In 2023, Twitter/X's algorithms likewise amplified the visibility of hateful content, such as the #TransHate hashtag, while smothering the voices of marginalized people, echoing Matamoros-Fernández's (2017) observation that both covert and overt racism gain popularity through reach. This algorithmic bias further undermines the idealized marketplace of ideas. It is also a reminder that platforms must redesign systems that reward virality, or harmful content will keep taking over the digital public space.
What Can We Do About It?
So, where do we go from here? First, platforms must take their duty of care seriously. That means investing in moderation, not only AI filters but human moderators who understand cultural context, and designing algorithms that do not reward toxicity. Second, governments must get serious about regulation. The EU's DSA is a good start, but we need global cooperation, because hate speech does not stop at borders. Finally, we as users have a role to play: call out hate when we see it, support marginalized voices, and keep pushing platforms to do better. Change starts with us.
Wrapping Up
Is the 2023 Twitter/X controversy just a story about a billionaire and a social media platform? There is far more to it than that. Hate speech and online harms are not going away, and the way we govern digital spaces matters more than ever. Looking at this case through platform governance and the marketplace of ideas, we can see the chasms in today's systems. Platforms have a responsibility to protect users, not just profits. Governments need to hold these corporations accountable. And we all need to rethink what 'free speech' means in a digital environment where algorithms make the moves; right now, it is all up for grabs.
References
Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. Macquarie University.
Center for Countering Digital Hate (CCDH). (2023). Hate speech on Twitter: A 2023 report. CCDH.
eSafety Commissioner. (2021). Online Safety Act 2021: Overview. Australian Government.
European Commission. (2023). The Digital Services Act: Ensuring a safe and accountable online environment. European Union.
Gillespie, T. (2022). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
GLAAD. (2023). Social media safety index 2023: Twitter fails LGBTQ+ users. GLAAD.
Gorwa, R. (2022). The politics of platform regulation: How governments shape online content moderation. Journal of Digital Media & Policy, 13(2), 45–62.
Hinduja, S., & Patchin, J. W. (2021). Bullying beyond the schoolyard: Preventing and responding to cyberbullying (3rd ed.). Corwin Press.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Department of Media and Communications, The University of Sydney, and The School of Political Science and International Studies, The University of Queensland.
Tufekci, Z. (2021). The paradox of the digital public sphere: How platforms undermine democracy. Oxford University Press.