Introduction
How deadly can an internet rumour be? The tragedy that unfolded in a small town in the UK in the summer of 2024 provided a shocking answer. Three young girls were murdered in Southport, and within a day a false rumour about the killer’s identity went viral on social media, provoking national outrage and sparking the worst rioting the UK had seen in 13 years. Police cars were burned, shops were looted, and mosques were attacked. As the online world spiralled out of control, violence that should exist only in the most extreme scenarios played out on real streets. The unrest shows that when social media platforms allow disinformation and hate speech to spread, the whole of society pays the price. Using the Southport riots as a case study, this paper analyses how social media’s failures in content governance led to a tragic reality and discusses how we can prevent the next “Southport tragedy” from happening.
Case Study: A Disaster Caused by a Rumour
Following the July 2024 attack in the British town of Southport, in which the police lawfully withheld the identity of the suspect because he was under the age of 18, the false claim that “Muslim illegal immigrants committed the crime” (Sinmaz, 2024) spread rapidly across the social media platform X (formerly known as Twitter). Inflammatory videos posted by far-right figures such as Tommy Robinson reached 13 million views by the next day, quickly fuelling panic and anger across the country (Hagopian, 2024). Driven by disinformation, riots broke out in more than two dozen towns and cities, with rioters attacking mosques, burning libraries, storming sites housing immigrants, and clashing violently with police (Sinmaz, 2024). According to police figures, more than 1,000 people were arrested nationwide, hundreds were swiftly prosecuted, and dozens of religious sites and public institutions were vandalised in what was described as the worst wave of riots in the UK in 13 years (Bicer, 2024).
More controversially, instead of banning the rumour-mongering accounts, Elon Musk, the current owner of X, was criticised in the media for “adding fuel to the fire” after he posted at the height of the tensions that civil war in the UK was “inevitable” (Sinmaz, 2024). British Prime Minister Keir Starmer publicly warned social media platforms that “criminal behavior that incites violence is happening on your platforms” (Smout & Vant, 2024). Meanwhile, X’s algorithmic mechanisms have been criticised for amplifying “emotional, highly interactive” content (Jensen, 2025), and since Musk’s takeover the platform has drastically cut its content moderation team, loosened enforcement against hate speech, and reinstated some previously banned far-right accounts (Press, 2023), further weakening its ability to verify information. This incident is not only a failure of platform regulation; it also shows how platforms contribute to real-world violence and division when their technological logic is divorced from their social responsibility.
Reasons for Platform Governance Failure #1: The Dual Breakdown of Technology and Management
The Southport incident exposed systemic flaws in the content governance mechanisms of social platforms, with the failure of algorithms being particularly obvious. First, automated review technology failed to curb rumours in time, highlighting the difficulty AI moderation has in interpreting language and context. Taking X as an example, its recommendation algorithm maximises exposure in order to capture attention, which enabled the viral spread of inaccurate posts such as the false claim about “crimes committed by illegal Muslim immigrants” (Sinmaz, 2024). Research on Facebook in the Asia-Pacific region has shown that identifying hate speech is highly dependent on context, and that existing algorithmic classifiers struggle to recognise this type of harmful content (Sinpeng et al., 2021). In the case of Southport, sluggish algorithmic vetting and an out-of-control recommendation mechanism provided a channel for disinformation, which ultimately produced serious real-world social consequences. As the concept of “platform racism” suggests, the algorithmic mechanism itself may amplify biased content and exacerbate community antagonism (Carlson & Frazer, 2018). This highlights the platforms’ algorithmic responsibility: an interaction-first, traffic-first design can systematically amplify the influence of hateful content, allowing online rumours to escalate rapidly into offline social unrest.
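To make this mechanism concrete, the sketch below is a purely hypothetical, simplified illustration of an engagement-first ranking function; the field names, weights, and example posts are assumptions for illustration only and do not describe X’s actual recommendation system. It shows how a feed that optimises solely for interaction can rank a flagged, unverified rumour above an accurate but less provocative post.

```python
# Hypothetical illustration only: a toy engagement-first ranking function.
# None of these weights or field names reflect X's real recommendation system;
# they merely show how optimising for interaction can surface inflammatory posts.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reposts: int
    replies: int
    is_flagged_misinfo: bool  # set by (often slow) moderation review

def engagement_score(post: Post) -> float:
    # Reposts and replies are weighted heavily because they drive further reach,
    # so emotionally charged posts that provoke reactions rank highest.
    return post.likes * 1.0 + post.reposts * 3.0 + post.replies * 2.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # A purely engagement-driven feed ignores the misinformation flag entirely,
    # so a flagged rumour can still outrank accurate but less provocative posts.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Verified update from local police", 800, 50, 40, False),
    Post("Unverified claim about the attacker's identity", 500, 900, 1200, True),
])
print([p.text for p in feed])  # the flagged rumour ranks first
```

In a design of this kind, any moderation signal that is not wired into the ranking objective has no effect on what users actually see, which is precisely the gap the Southport case exposed.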
Second, the platform’s manual supervision was equally problematic. X claimed to have strengthened content auditing, but it took few effective measures during the Southport incident, and many user reports were not handled in a timely manner. Studies have found that users who report hate speech often feel powerless, rarely see the outcome of their reports, and eventually abandon reporting as the feedback mechanism fails and trust declines (Sinpeng et al., 2021), indirectly contributing to the spread of false information. This stems from management’s insufficient investment in content auditing resources as well as poor implementation. When both automated algorithms and manual supervision fail, harmful content persists and spreads over time.
Notably, this problem is further exacerbated by differences in governance capability across platforms and regions. Take Meta as an example: there is a significant gap in content governance between Facebook and Instagram. Although Facebook banned the Indian far-right figure T. Raja Singh for spreading hate speech in 2020, his supporter groups remained active and his own Instagram account was not blocked until February 2025 (Singh, 2025), exposing inconsistent governance standards within the same company. Regional differences matter too: following the implementation of the EU’s Code of Conduct on Countering Illegal Hate Speech Online, Facebook committed to reviewing 90% of user reports and deleting 71% of inappropriate speech within 24 hours (Sinpeng et al., 2021). Yet the Southport riots demonstrated that even in tightly regulated regions, technical and managerial deficiencies can still allow disinformation to proliferate. This global governance dilemma reminds us that even strict regulation cannot guarantee that existing governance mechanisms will prevent failures of information governance.
Reasons for Platform Governance Failure #2: Platform Acquiescence to the Spread of Disinformation
As disinformation spreads unchecked on social networks, a key question surfaces: who actually acquiesces to this? On the face of it, social media companies have always claimed that their platforms are neutral (Srinivasaragavan, 2024) and merely act as channels for information exchange. However, this so-called “neutrality” turns out to be an illusion. Studies have shown that social media tends to reflect and reinforce existing power imbalances (Ray et al., 2024). Traffic-oriented algorithms favour incendiary and emotional content, making it easier for conspiracy theories and hate speech to be pushed to users; in other words, platform algorithms and operational logics systematically privilege extreme speech.
To make matters worse, platforms apply a clear double standard in content enforcement. They treat user engagement, time spent, and interaction frequency as their core performance metrics. Under this “traffic is revenue” orientation, social media companies usually react quickly to copyright infringement and other legal risks by taking down offending content, but they are often slow to act against speech that incites hatred and violence. This selective enforcement exposes the interests behind the platforms (Díaz & Hecht-Felella, 2021): cracking down on extreme speech may cost them users and damage their business interests.
The Southport riots further demonstrate this point. Elon Musk, the new owner of X, promotes so-called “free speech” but in practice condones extreme speech. The far-right accounts Musk had unblocked acted as instigators during the riots, making inflammatory public statements (Press, 2023) that fuelled the spread of rumours and social antagonism. A similar situation exists in the Indian market, where Facebook executives refused to act on hate speech posted by members of the ruling party in order to protect the company’s business interests in the region (Frayer, 2020). From algorithmic rules to managerial decisions, social media companies’ own motives and interests lead them to acquiesce in, or even condone, the proliferation of disinformation.
Critical Perspectives: Who Profits and Who Pays?
There is a complex chain of interests behind the proliferation of disinformation. First, social platforms, as commercial entities, profit enormously from its spread: exaggerated and extreme statements usually attract more user interaction, generating large amounts of traffic and advertising revenue. In addition, by controlling vast amounts of information content through artificial intelligence, platforms have effectively gained the power to steer public opinion without any corresponding system of public accountability. At the same time, politicians and extremist organisations benefit as well. They use social media to spread extremist rhetoric at low cost, influence public sentiment and social agendas, and form hidden alliances of interest with platforms. For example, Brazilian politicians’ use of WhatsApp to spread disinformation and extremist rhetoric in order to influence an election outcome (Avelar, 2019) demonstrates how political power can collude with technology platforms to manipulate public opinion.
The collusion between capital and power puts the governance of disinformation in a difficult position: platforms are unwilling to truly manage it, certain political forces are unwilling to strictly regulate it, and ultimately it is the public and society that pay the price. The spread of rumours erodes social trust and seriously damages community relations. In the Southport riots, a fabricated piece of false news about a criminal suspect directly triggered hateful confrontation between communities, and innocent groups were targeted with violence. More seriously, communities’ trust in the authorities declined, and the government’s capacity to govern and its credibility were challenged as a result. Research by the HateLab at Cardiff University has shown that a surge in online hate speech is an early warning of an increase in offline hate crimes (Williams, 2019). It is thus clear that governance failures on social platforms externalise risk and produce violence and unrest in the real world: those who spread extreme rhetoric and conspiracy theories profit with near impunity, while innocent people, victimised communities, and the stability of society pay for the rumours. The lesson of the Southport incident is clear: if platforms continue to allow false information to spread, it is society’s harmony and trust that will ultimately pay the price.
Conclusions and Recommendations
The Southport riots were undoubtedly a catastrophe caused by a failure of social media governance, and a reminder that disinformation and hate speech, if left unchecked, can ultimately undermine the security and stability of an entire society. More importantly, however, we can still prevent similar tragedies from happening again. First, social media platforms must strengthen their own vetting technology, combine it with local human moderation capacity to detect and stop the proliferation of rumours and hate speech in a timely manner, and assume the necessary public responsibility. Second, governments should enact and strictly enforce laws and regulations on online information security as soon as possible, and strengthen external supervision of platform governance and enforcement. Finally, the public themselves should improve their media literacy, enhance their ability to identify rumours, avoid blind obedience and impulsive behaviour, and work together to maintain public safety and social order both online and offline. What we need is a safe and rational public space, not an agitation ground where extreme ideas and hatred run rampant.
References
Avelar, D. (2019, October 30). WhatsApp fake news during Brazil election ‘favoured Bolsonaro.’ The Guardian. https://www.theguardian.com/world/2019/oct/30/whatsapp-fake-news-brazil-election-favoured-jair-bolsonaro-analysis-suggests
Bastian, R. (2021, August 11). Why social media can be more toxic for marginalized identities. Forbes. https://www.forbes.com/sites/rebekahbastian/2021/08/11/why-social-media-can-be-more-toxic-for-marginalized-identities/
Bicer, A. (2024, August 5). UK going through its worst wave of riots in 13 years. Anadolu Agency. https://www.aa.com.tr/en/europe/uk-going-through-its-worst-wave-of-riots-in-13-years/3295065
Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. Macquarie University. https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf
Díaz, Á., & Hecht-Felella, L. (2021, August 4). Double standards in social media content moderation. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/double-standards-social-media-content-moderation
Frayer, L. (2020, November 27). Facebook accused of violating its hate speech policy in India. NPR. https://www.npr.org/2020/11/27/939532326/facebook-accused-of-violating-its-hate-speech-policy-in-india
Hagopian, A. (2024, August 20). Posts mentioning Tommy Robinson up 1,348% since rally and Southport stabbing. The Independent. https://www.independent.co.uk/news/uk/home-news/tommy-robinson-social-media-southport-b2596817.html
Jensen, M. (2025, February 12). Hate speech on X surged for at least 8 months after Elon Musk takeover – new research. The Conversation. https://theconversation.com/hate-speech-on-x-surged-for-at-least-8-months-after-elon-musk-takeover-new-research-249603
Press, A. (2023, December 10). Elon Musk restores conspiracy theorist Alex Jones’ X account, reversing Twitter’s ban. PBS News. https://www.pbs.org/newshour/nation/elon-musk-restores-conspiracy-theorist-alex-jones-x-account-reversing-twitters-ban
Ray, G., McDermott, C. D., & Nicho, M. (2024). Cyberbullying on social media: Definitions, prevalence, and impact challenges. Journal of Cybersecurity, 10(1). https://doi.org/10.1093/cybsec/tyae026
Singh, A. (2025, April 11). India’s courts must hold social media platforms accountable for hate speech. Tech Policy Press. https://www.techpolicy.press/indias-courts-must-hold-social-media-platforms-accountable-for-hate-speech/
Sinmaz, E. (2024, August 6). Why are people rioting across England and how many are involved? The Guardian. https://www.theguardian.com/politics/article/2024/aug/05/why-people-rioting-across-england-how-many-involved
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3
Smout, A., & Vant, N. (2024, August 2). Keir Starmer warns social media firms after Southport misinformation fuels riots. Reuters. https://www.reuters.com/world/uk/pm-starmer-warns-social-media-firms-after-southport-misinformation-fuels-uk-2024-08-01/
Srinivasaragavan, S. (2024, September 19). Social media platforms can’t claim to be neutral, says expert. Silicon Republic. https://www.siliconrepublic.com/business/social-media-platforms-cant-claim-to-be-neutral-says-expert
Williams, M. (2019, October 15). Increase in online hate speech leads to more crimes against minorities. Cardiff University. https://www.cardiff.ac.uk/news/view/1702622-increase-in-online-hate-speech-leads-to-more-crimes-against-minorities
