Myanmar army uses Facebook to spread hate speech
A recent United Nations investigation found that in 2017 the Myanmar military spread hate speech against the Rohingya across dozens of seemingly unrelated Facebook pages. The UN also characterised Facebook as an “effective tool” for spreading hate speech in the country.
In 2017, Myanmar’s army unleashed massive, inhuman violence against the Rohingya community, deliberately burning down villages and committing widespread sexual violence and torture, displacing more than 700,000 people. The army also weaponised Facebook, spreading disinformation and hate speech about the Rohingya on the platform: hundreds of military personnel, using fake accounts or posing as entertainment pages, posted anti-Rohingya content en masse.
Facebook has proved unable to detect offending posts in Myanmar effectively. Despite some 18 million active Facebook users in Myanmar, in 2015 only two people were reviewing questionable Burmese-language content on the platform. And because Myanmar’s online community largely uses a text encoding (the locally dominant Zawgyi scheme) different from the Unicode that Facebook’s content-review tools expect, Facebook often could not obtain correctly rendered or translated versions of local content.

Another reason Facebook has served as an online weapon is that it comes pre-installed, at no cost, on virtually every smartphone sold in Myanmar. On so widely available a platform, content cannot be effectively moderated, and hate speech spreads far and wide, sowing the seeds of incitement for the offline violence that follows.
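This encoding gap matters because automated filters match codepoints, not what appears on screen. The minimal Python sketch below illustrates the general problem using a composed-versus-decomposed Latin string as a stand-in for the Burmese Zawgyi/Unicode split (the word “café” is a harmless placeholder, not real flagged content): a blocklist keyed to one codepoint sequence misses visually identical text stored as another.

```python
import unicodedata

BLOCKLIST = {"café"}  # placeholder term, stored in composed (NFC) form

def naive_flag(text: str) -> bool:
    # Exact substring match: only catches text whose codepoints
    # are identical to the blocklist entry.
    return any(term in text for term in BLOCKLIST)

def normalized_flag(text: str) -> bool:
    # Normalising to NFC bridges composed/decomposed variants,
    # but cannot bridge a wholly different encoding scheme such
    # as Zawgyi, which needs a dedicated converter.
    return any(term in unicodedata.normalize("NFC", text) for term in BLOCKLIST)

# Same glyphs on screen, different codepoints: 'e' + combining acute accent
decomposed = "cafe\u0301"

print(naive_flag(decomposed))       # False: slips past the filter
print(normalized_flag(decomposed))  # True: caught after normalisation
```

Zawgyi is not a Unicode normalisation form, so even this normalisation step would not help for Burmese; dedicated converters such as Google’s open-source myanmar-tools library exist for exactly this reason.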
In response, Facebook has taken steps to improve its human-rights monitoring and violence-prevention capacity. In the aftermath of this violence, Facebook removed hundreds of pages linked to the Myanmar military and established manual and automated hate-speech monitoring capable of handling local languages.
Hate speech: a concept still being defined
The development of the Internet has broken through old restrictions on the dissemination and proliferation of speech, giving the exercise of the right to freedom of expression entirely new means. In the 1980s, neo-Nazis in the United States began using the Internet for hate-speech propaganda, and Nazis in Germany soon followed, using it to disseminate extremist content (Timofeeva, 2002). If freedom of expression on the Internet is not effectively and reasonably regulated, it may therefore contribute to the spread of hate speech and even lead to an increase in hate crimes.
So, in order to better manage the web, states began to define and regulate hate speech. Judith Butler (1997) describes how the state produces and delineates the range of acceptable public speech, distinguishing the speakable from the unspeakable. Hate speech has been defined as speech that “expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, and sexual orientation” (Parekh, 2012).
However, several issues require attention. First, hate speech is a concept with no fixed boundaries; it shifts constantly with social context, political environment, and cultural values. Under different cultural and contextual conditions, people’s understanding of what counts as hate speech varies, so we must avoid misjudgements and excessive restrictions on freedom of expression. Second, it has long been debated whether the state has the right to restrict speech at all, and to what extent such restrictions are justified. The scope of a state’s definition of hate speech is thought to reflect the boundaries of its freedom of expression (Cohen-Almagor, 2017).
The unstoppable spread of hate speech
In recent years, global cyberhate has trended upward, driven by factors including global political instability and the outbreak of the COVID-19 pandemic. It takes forms as diverse as anti-Asian rhetoric, gender and racial discrimination, incitement to civil unrest and violence, terrorism, and extremism. So what makes hate speech on the Internet so difficult to ban?
Hate speech on the Internet is wrapped inside a massive, anonymous, cross-border ocean of content. It appears not only in many languages but also as emoticons, pictures, videos, and other formats, making it extremely difficult for computers to identify quickly.
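Part of the difficulty can be shown in a few lines. The sketch below (using the invented placeholder term “hateword” rather than a real slur) shows how trivial obfuscation, such as a Cyrillic homoglyph or an invisible zero-width character, defeats exact keyword matching:

```python
BANNED = {"hateword"}  # placeholder blocklist term, not a real slur

def exact_match(text: str) -> bool:
    # Naive moderation: flag any text containing a banned term verbatim.
    return any(term in text for term in BANNED)

plain      = "this is hateword here"
homoglyph  = "this is h\u0430teword here"   # Cyrillic 'а' (U+0430) replaces Latin 'a'
zero_width = "this is hate\u200bword here"  # zero-width space splits the term

assert exact_match(plain)          # caught
assert not exact_match(homoglyph)  # renders identically, slips through
assert not exact_match(zero_width) # invisible character, slips through
```

Production systems therefore layer Unicode normalisation, transliteration, and machine-learned classifiers on top of keyword lists, and still struggle with images, video, and coded in-group language.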
Not only that, but many individuals and organisations exploit the transnational nature of the Internet to avoid regulation by their own governments. As early as 2000 there were reportedly more than 2,300 websites spreading hate messages, including more than 500 extremist sites created by Europeans on U.S. servers, beyond the reach of European anti-hate laws (Perine, 2000). The Internet is not amenable to single-state regulation: it erodes national barriers once maintained by geographic obstacles and customs inspectors, and there is no simple way to stop content from reaching Internet users wherever they are.
As public debate proliferates across vast and complex platforms and services, questions about who has the power, control and responsibility to monitor speech become increasingly important.
Different countries facing hate speech
The legal regulation of hate speech can be broadly divided into two paths: a conservative school, represented by European countries, which tends to punish hate speech, and a tolerant school, represented by the United States, which focuses more on protecting freedom of expression. In addition, some international civil-society organisations act as intermediaries, calling for and advocating particular definitions and regulations of hate speech.
This difference in the level of regulation and legal rules stems from different value choices, which are to a large extent historically determined. Conservative attitudes in Europe are closely linked to the social legacy of the Second World War. The war made the problem of hate speech ever more visible, and many countries faced intense cross-cultural and cross-ethnic conflicts, such as the racial hatred stoked by Nazi Germany and discrimination against post-war immigrants.
Many European countries have therefore incorporated hate speech directly into formal legislation (Brown, 2015). For example, in 2015 the German government reached an agreement with Facebook, Google and Twitter under which the three companies would remove hate speech within 24 hours of receiving a report. In the United Kingdom, criminal provisions against incitement to racial hatred were introduced in the Public Order Act as early as 1986.
The United States, as a liberal country, is equally shaped by its history. After the gagging of speech experienced under colonial rule, the First Amendment to the United States Constitution, enacted after the founding of the country, protects freedom of expression as an extremely important right. Freedom of expression is, we might say, an ideology closely tied to the democratic political system and values of the United States. Advocating a “tolerant” attitude, the United States neither endorses regulating hate speech on the Internet nor has any law specifically addressing it.
In 2019, the US White House said that the US would work with governments to combat cyberterrorism, but could not support the Christchurch Call initiated by New Zealand and France because of a “more absolutist stance” on free speech (Fouquet & Viscusi, 2019). Reality suggests, however, that this absolutism about freedom of expression is open to considerable challenge (Flew, 2021).
Tech companies and hate speech
Social media now dominate the media industry in place of traditional media, yet they clearly do not carry the truth-mediating responsibility that traditional media carry. On the contrary, they do not want to be regulated, and the law gives them a basis for this: Section 230 of the US Communications Decency Act of 1996 exempts providers and users of “interactive computer services” from being treated as the publishers of third-party content, shielding them from liability for it (Department of Justice, 2023). Social media companies see themselves as akin to telephone companies, responsible only for transmission, not for the content of the conversations carried over their lines. This view is clearly problematic, since social media shape public discourse in ways a telephone line never could.
“Social media started out by saying, ‘We’re not a media company, we’re a technology company,’” notes Nadine Strossen, a professor at New York Law School (China Knowledge@Wharton, 2016). “They knew they had the right and the ability to choose what to publish and what not to publish, just like traditional media. Content can be edited, but they intentionally say, ‘We give everyone an equal right to speak out.’”
Social media companies want people to feel they can say whatever they like on their platforms, which attracts an ever-growing user base and thus sustains corporate profits. Sarah T. Roberts, an assistant professor at UCLA’s Graduate School of Education and Information Studies, puts it this way (Xu, 2023): over the past few decades, Silicon Valley’s leadership has believed in free-market principles and conveyed the message that you can express yourself without limits on Facebook, Snapchat, or any other platform. But once people realise that content moderators exist, they know this is impossible.
While it may sound as though social media platforms are passive and inactive, this is not quite the case. Because most of the world’s major platforms sit within the comparatively permissive socio-cultural and legal system of the United States, the definition and regulation of online hate speech currently occupies a grey area, and in practice it is the platforms themselves that set the rules of global online speech, with company policy and platform law as the main instruments of enforcement. Some even criticise social media for holding more power than the state. There is some truth to this claim, since the nature, boundaries and values of speech currently rest in the hands of these technology companies.
Tech companies as actors
So why do platforms maintain rules for arbitrating speech when they are not obliged to act as gatekeepers? It has to do with their business purpose. They want to create an enjoyable environment for their users (platforms full of extreme content are not very enjoyable), and beyond that they must consider the preferences of advertisers: “If Facebook’s ad-targeting algorithms allow advertisers to imply discrimination on the basis of race, gender, or other factors, or if the targeting algorithms have the potential to disseminate hate speech, then there will be other advertisers who will be reluctant to appear on platforms that condone such advertising content” (China Knowledge@Wharton, 2018).
This in a way puts social media in a difficult position. On the one hand, they want to give users a better experience and keep them away from online spam and violence; on the other, doing so draws criticism and accusations of wielding too much power and censoring citizens’ free speech. On top of that, they face differing cultural and legal standards for hate speech from country to country, and must keep themselves compliant with the laws and regulations of every jurisdiction in which they offer their services.
Tech companies are unhappy with this situation and would like to cede some of their power to governments or users. Facebook’s founder, Mark Zuckerberg, wants the federal government to set clear content standards for Internet tech companies, while Twitter CEO Jack Dorsey wants to give users more control over content algorithms so they can better decide what they see.
Closing Thoughts
We can see that, through the Internet, “hate” has broken its old bounds and become a global issue, and we are living in an increasingly divided world. In practice, Internet technology companies have become the main actors in regulating hate speech, in constant friction with states, advertisers, and the public. There is no industry consensus on how to regulate it, and giants as big as Facebook have thus borne both the greatest pressure and a demonstration role. Within tech companies, hate speech has also moved from being a concept in the humanities and social sciences to a practical technical issue handled at the operational level. Tech companies will need to define hate speech ever more precisely in order to recognise and regulate it. From this perspective, hate speech is perhaps one of the best areas in which to observe the interaction between human society and technology.
Reference List
Brown, A. (2015). Hate speech law: A philosophical examination. Taylor & Francis.
Butler, J. (1997). Sovereign performatives in the contemporary scene of utterance. Critical Inquiry, 23(2), 350-377.
Cohen-Almagor, R. (2017). JS Mill’s Boundaries of Freedom of Expression: A Critique. Philosophy, 92(4), 565-596.
Department of Justice. (2023, May 8). Department of Justice’s review of Section 230 of the Communications Decency Act of 1996. https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996
China Knowledge@Wharton. (2016, December 15). Fake news, hate speech, social media abuse: How to defuse it? http://www.knowledgeatwharton.com.cn/article/9003/
Flew, T. (2021). Regulating platforms. Polity.
Fouquet, H., & Viscusi, G. (2019). Twitter, Facebook join Global Pledge to fight hate speech online. Bloomberg.com. https://www.bloomberg.com/news/articles/2019-05-15/macron-ardern-to-meet-twitter-facebook-google-on-hate-speech
Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz & P. Molnar (Eds.), The content and context of hate speech: Rethinking regulation and responses (pp. 37–56). Cambridge University Press.
Perine, K. (2000). CNN. https://edition.cnn.com/2000/TECH/computing/07/25/regulating.hatred.idg/index.html
China Knowledge@Wharton. (2018, November 1). Social media platforms enjoying the fruits but not taking responsibility? What should platforms do about hate speech? http://www.knowledgeatwharton.com.cn/article/9633/
Timofeeva, Y. A. (2002). Hate speech online: Restricted or protected? Comparison of regulations in the United States and Germany. Journal of Transnational Law & Policy, 12, 253.
Xu, L. (2023). 社交媒体是有围墙的花园，不必让科技公司决定我们的交流方式 [Social media is a walled garden; we need not let tech companies decide how we communicate]. Jiemian News. https://m.jiemian.com/article/9714322.html