
In a time when every share, click, and comment has the potential to spark a chain reaction of disastrous outcomes, social media algorithms are often the unseen hands that spread hate.
Social media has dramatically transformed the way people circulate information, access news, and express their views in social and political narratives. It would not be wrong to say that the advent of social media has given the public new avenues that not only connect them across the world but also endow them with a sense of agency, exercised through the democratisation of speech. Alongside this uplifting aspect of social media platforms, however, there is a downside: the spread of hate speech and disinformation, which has been shown to have the potential to incite violence.
Before delving deeper, it is pertinent to understand what this term means. Hate speech can be described as any written, oral, or visual statement that insults or attacks an individual or group on the basis of traits such as nationality, sexual orientation, religion, race, or ethnicity. On social media it often appears as posts, memes, videos, or comments that stereotype, dehumanise, or call for violence against marginalised communities. It is worth noting that there is no single, standard definition of “hate speech”, though theorists and scholars have proposed different ideas and notions to explain the concept. For instance, Sellars (2016) points out that the phrase “hate speech” is widely used but poorly defined: scholars, institutions, and states all differ in how they perceive it, and this ambiguity further complicates legal regulation and the filtering of web content. In the United States, hate speech is generally protected by the First Amendment unless it specifically incites violence or constitutes a clear and present threat. Germany, France, and the UK, on the other hand, have stricter laws that prohibit hate speech or discrimination based on race, religion, or ethnicity. Because content is transmitted across borders and jurisdictions, the advent of social media sites makes regulation harder still (Sellars, 2016).
There has been increasing concern in recent years regarding the role social media platforms play in the algorithmic amplification of hate speech and their failure to suppress it. In the context of Facebook, algorithms are a set of mathematical formulas or encoded instructions that regulate how data is sorted, filtered, and shown to users (DeVito, 2017). To sort through the vast amount of content posted on the network, the algorithms ensure that users mainly see the posts, advertisements, and other content they are most likely to interact with.
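To make the idea of engagement-based ranking concrete, the sketch below shows, in Python, how a feed might score and order posts by predicted interaction. It is a minimal, hypothetical illustration: the field names, weights, and scoring formula are assumptions for the sake of the example, not Facebook’s actual algorithm.

```python
# Minimal, hypothetical sketch of engagement-based feed ranking.
# The weights and fields are illustrative assumptions, not Facebook's real formula.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_likes: float      # model's estimate of like probability
    predicted_comments: float   # estimate of comment probability
    predicted_shares: float     # estimate of share probability
    hours_old: float            # how long ago the post was published

def engagement_score(post: Post) -> float:
    """Weight predicted interactions; shares and comments count more than likes,
    and older posts decay. Emotionally charged posts that attract reactions
    therefore rise to the top of the feed."""
    raw = (1.0 * post.predicted_likes
           + 3.0 * post.predicted_comments
           + 5.0 * post.predicted_shares)
    recency_decay = 1.0 / (1.0 + post.hours_old / 24.0)
    return raw * recency_decay

def rank_feed(posts: list[Post]) -> list[Post]:
    """Sort candidate posts so the highest-engagement items are shown first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm_news", 0.20, 0.02, 0.01, 2.0),
        Post("outrage_rumour", 0.30, 0.25, 0.20, 2.0),
    ])
    print([p.post_id for p in feed])  # the provocative post ranks first
```

Under these assumed weights, the post predicted to provoke more comments and shares is shown first, regardless of its accuracy or tone, which is the dynamic the rest of this piece is concerned with.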
But why is it that online platforms are now considered a primary source of issues and controversies stemming from hate speech?
While there are many parallels between online and offline hate speech, Brown (2018) argues that the online setting amplifies and changes its effects in distinct ways. Hate speech is easily replicated and disseminated online because it is far more accessible and subject to minimal publishing restrictions. Online hate speech is easier to spread and has a longer lifespan than spoken or fleeting offline speech. Additionally, through engagement-based content ranking, platforms’ algorithmic design may inadvertently encourage and magnify hate speech. In other words, while online hate speech may pose new difficulties, it is not fundamentally different from offline hate speech; rather, what makes it “special” is its speed, scale, and technical mediation, all of which call for thoughtful rather than reactive responses.

Social media giants like Facebook, YouTube, and WhatsApp are built to help content go viral. Their algorithms also favour information that evokes strong emotions, such as fear or outrage, because it increases interaction. This makes it easier for hate speech to spread rapidly, particularly when ethnic, religious, or political tensions are already high.
The social media amplification of content is accomplished through:
- Echo chambers, where individuals are exposed only to opinions similar to their own.
- Recommendation algorithms that favour sensational content (a simplified sketch of this feedback loop follows this list).
- Lack of moderation, especially when it comes to local languages.
- Anonymity, which emboldens users and reduces their accountability.
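The feedback loop mentioned above can be illustrated with a toy simulation: content that provokes stronger reactions gets recommended more often, which earns it still more reactions in the next cycle. This is a hypothetical sketch, not a model of any real platform; the item names, numbers, and formula are illustrative assumptions.

```python
# Hypothetical toy simulation of an engagement-driven amplification loop.
# All numbers are illustrative assumptions, not measurements of any real platform.

def simulate_amplification(emotional_pull: dict[str, float], cycles: int = 5) -> dict[str, float]:
    """emotional_pull maps each item to how strongly it provokes reactions (0..1).
    Returns each item's share of total impressions after the given number of cycles."""
    impressions = {item: 1.0 for item in emotional_pull}  # every item starts with equal reach
    for _ in range(cycles):
        # engagement this round grows with both current reach and emotional pull
        engagement = {i: impressions[i] * (1.0 + emotional_pull[i]) for i in impressions}
        total = sum(engagement.values())
        # the recommender allocates the next round's impressions by engagement share
        impressions = {i: 100.0 * engagement[i] / total for i in engagement}
    return impressions

if __name__ == "__main__":
    shares = simulate_amplification({"measured_report": 0.1, "outrage_post": 0.8})
    for item, share in shares.items():
        print(f"{item}: {share:.1f}% of impressions")
    # After a few cycles the outrage post dominates the feed,
    # even though both items started with identical reach.
```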
This prompts one to ask whether any regulations or methods of content filtering exist that could mitigate the spread of content likely to incite violence or anger.
According to Roberts (2019), most content filtering is reactive rather than proactive, so toxic content, including hate speech, may be able to spread widely before being removed, especially if it suddenly goes viral. This is because platforms primarily rely on overburdened, understaffed, and under-resourced moderation processes that take place after harm has already been done.
Case study: The Rohingya Crisis
There have been several cases in which hate speech and violence have spurred controversy and civil unrest. One such case was the Rohingya crisis. The Rohingya, a Muslim minority, live mostly in Myanmar’s Rakhine State. For decades, the government and Buddhist nationalist organisations have subjected them to assault, persecution, and disenfranchisement. The Myanmar military launched a violent crackdown in 2017 in response to suspected attacks by Rohingya rebels, with horrifying results: thousands of reports emerged of mass killings, rapes, and burned villages, and over 700,000 Rohingya fled into neighbouring Bangladesh. The UN called this a “textbook example of ethnic cleansing”, and social media fuelled the bloodshed.

How did social media ignite violence in Myanmar?
Facebook played a pivotal role during this crisis. Facebook is more than just a website to most Burmese; it’s their internet. For millions of people, Facebook serves as their main social networking site, communications hub, and news source. As a result, it had a significant impact on public opinion, and sadly, the website was ill-prepared to deal with the abuse it permitted. In Myanmar, Facebook emerged as the most popular medium of communication, especially after 2011 when Myanmar started moving towards a more open government (Mozur, 2018). Half of Myanmar’s population was on Facebook by 2018, so it had also become a major means of social interaction, communication, and even financial transactions. It also emerged as the main platform for propaganda and hate speech.
In the years preceding the 2017 riots, Rohingya Muslims were the focus of unrestrained hate speech on Facebook. On Facebook, Buddhist nationalist groups like the Ma Ba Tha (Patriotic Association of Myanmar) and radical monks like Ashin Wirathu referred to the Rohingya as “illegal immigrants,” “terrorists,” and “invaders” of the Buddhist motherland (Choudhury, 2020).
The Facebook platform was used to propagate hate speech and false news that fuelled violence in the real world. Military personnel were found to have spread false information through Facebook posts portraying the Rohingya as terrorists and framing them as a danger to national security (Mozur, 2018). Comments were full of provocation and hate speech. These messages fostered a culture of fear and dehumanisation and stoked hatred. The military frequently included vivid images and videos in their communications to evoke strong feelings and garner support for their cause. These visual propaganda techniques were central to inciting violence because they influenced public perception and fostered sympathy for the military’s aggressive methods.
Choudhury (2020) points out that violent content was amplified by Facebook’s algorithms, and the company’s initially poor moderation allowed inflammatory stories to circulate. Legal recourse against Facebook has also been pursued, including a class-action suit by the Rohingya claiming the network encouraged violence and hate speech. The case calls for Facebook to take more responsibility for its effects on society and to enforce stricter moderation rules in order to prevent hate speech and disinformation from spreading virally. The content that evoked the strongest feelings mostly consisted of false accounts of violence attributed to the Rohingya, used to legitimise military repression, together with content created to dominate the online space and shape nationalist narratives.
What makes moderating online content so complex?
Social media platforms like Facebook struggle to filter hate speech across many linguistic and cultural contexts, especially in regions like the Asia Pacific (Sinpeng et al., 2021). Unable to grasp the nuances of regional languages or cultural allusions, algorithms frequently misunderstand or mishandle dangerous content. If local contexts are not taken into account, algorithms may unintentionally promote offensive content and hate speech may become uncontrollable (Sinpeng et al., 2021). Legal definitions of hate speech also vary greatly across Asia Pacific countries, and domestic laws, which are frequently used to stifle political dissent, are often either ambiguous or excessively broad; as a result, Facebook finds it difficult to apply consistent standards. At the same time, Facebook’s algorithm amplifies hate speech because it tends to promote ever more extreme or divisive content and favours material that evokes strong emotions.
How did Facebook respond to it?
Despite several warnings from civil society organisations and UN officials, Facebook’s response was tardy. There were neither adequate detection tools nor enough Burmese-speaking content moderators to identify hate speech in regional languages. As a result, harmful content remained online for long periods, propagating and inciting violence.
Only in response to the 2017 atrocities and the overwhelming international outrage did Facebook begin removing accounts linked to the military and funding local moderation teams. Because of Facebook’s failure to regulate and enforce its community standards, hate speech against the Rohingya, driven most notably by the military, found its way into the mainstream narrative.
As a result of Facebook’s failure to curb or regulate this new culture of online hate, anti-Rohingya sentiment was allowed to run wild on the platform. The hateful nationalism that Facebook helped spread, and the violence it inspired at the hands of Myanmar’s military and nationalist groups, closely resembles the “toxic technoculture” that Massanari (2017) discusses. Massanari uses the term to describe the legitimisation, promotion, or maintenance of harmful behaviour, such as hate speech, cyberbullying, and harassment, through technology and online social platforms. It denotes a poisonous atmosphere among internet users in which particular groups propagate divisive, typically hostile viewpoints that thrive due to inadequate platform management or regulation.
Facebook’s failure to police content or to hold the military accountable for its use of the site was another example of its role in the dissemination of hate speech against the Rohingya. Much like Reddit’s inaction during the #Gamergate controversy, Facebook’s delay allowed the military’s hate speech to reach more people on the platform despite constant complaints (Massanari, 2017).

However, after immense objections were raised against Facebook, in August 2018 it removed 18 accounts and 52 pages linked to the Tatmadaw, Myanmar’s military (Ellis-Peterson, 2018). The account of Tatmadaw commander-in-chief Min Aung Hlaing was among them. This came after a United Nations investigation accused the military of committing genocide and war crimes against the Rohingya minority in Rakhine State. In light of the report’s conclusion that the Tatmadaw’s actions amounted to the “gravest crimes under international law”, the military was condemned globally. In an attempt to prevent the site from being used to spread hate speech and incite further violence against the Rohingya, Facebook promptly removed the military-affiliated profiles.
How did it affect the world?
Facebook hate speech was not just offensive but also lethal. Dehumanising rhetoric was intensified to justify the mass murder and exile of the Rohingya. Victims described how attackers used phoney Facebook posts to portray the Rohingya as militants or threats.
The following are some of the consequences:
- In all, around 700,000 people fled to Bangladesh.
- Thousands were murdered, among them women and children.
- A stateless population left in refugee camps, their settlements flattened.
- Lasting damage to Myanmar’s reputation worldwide.
What do we understand from this?
Facebook’s role in the Myanmar crisis offers important lessons as well as practical suggestions for stopping the spread of hate speech on social media. First, powerful actors may use social media as a tool to spread false information and incite violence, as demonstrated by the way Myanmar’s military and nationalist factions targeted the Rohingya on Facebook; and because inflammatory content attracts more engagement, platform algorithms favour it and disseminate extremist material more widely. Second, Facebook’s failure to invest in Burmese-speaking moderators or localised content review exacerbated the problem, underscoring the need for context-sensitive moderation. Finally, immediate action is necessary because delayed responses can cause irreparable harm.
Some plausible solutions
Several steps have been proposed to keep hate from spreading through social media: using early-detection software to flag hate speech before it escalates into violence, making algorithms more transparent, and improving content moderation by employing local moderators and tailoring AI systems to understand cultural context. Preventing incitement to violence while maintaining the protection of human rights should be the top priority of legal and regulatory action. Media literacy education is also needed to equip users with the ability to detect and steer clear of hate speech and disinformation. Lastly, we can reduce the negative impact of social media on society only if we stand together.
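As a rough illustration of what “early detection” could mean in practice, the sketch below checks posts against locale-specific term lists before publication, rather than removing them after they have spread. This is a minimal, hypothetical example: real systems rely on trained multilingual classifiers and human review, and the term lists, locales, and threshold here are assumptions made purely for illustration.

```python
# Minimal, hypothetical sketch of proactive (pre-publication) hate-speech flagging
# using locale-specific term lists. Real systems use trained multilingual models
# and human moderators; the terms and threshold below are illustrative only.

import re

# Hypothetical per-locale lexicons of dehumanising or inciting terms.
LEXICONS: dict[str, set[str]] = {
    "en": {"vermin", "invaders", "exterminate"},
    "my": set(),  # placeholder: a Burmese-language list maintained by local moderators
}

def flag_for_review(text: str, locale: str, threshold: int = 1) -> bool:
    """Return True if the post should be held for human review before publication."""
    terms = LEXICONS.get(locale, set())
    tokens = re.findall(r"\w+", text.lower(), flags=re.UNICODE)
    hits = sum(1 for tok in tokens if tok in terms)
    return hits >= threshold

if __name__ == "__main__":
    print(flag_for_review("They are invaders and must go", "en"))  # True -> hold for review
    print(flag_for_review("The weather is lovely today", "en"))    # False -> publish
```

The point of the sketch is the ordering, not the method: flagging happens before content reaches the feed, and the lexicons are maintained per language and culture by people who understand the local context, which is precisely what was missing in Myanmar.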
References
Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297-326.
Choudhury, A. (2020). How Facebook is complicit in Myanmar’s attacks on minorities. The Diplomat. https://thediplomat.com/2020/08/how-facebook-is-complicit-in-myanmars-attacks-on-minorities/
DeVito, M. A. (2017). From editors to algorithms: A values-based approach to understanding story selection in the Facebook news feed. Digital Journalism, 5(6), 753-773.
Ellis-Peterson, H. (2018, August 27). Facebook removes accounts associated with Myanmar military. The Guardian. https://www.theguardian.com/technology/2018/aug/27/facebook-removes-accounts-myanmar-military-un-report-genocide-rohingya
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.
Mozur, P. (2018, October 15). A genocide incited on Facebook, with posts from Myanmar’s military. The New York Times.
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.
Sellars, A. (2016). Defining hate speech. Berkman Klein Center Research Publication, (2016-20), 16-48.
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award.