Who closes the door of the Echo Chamber? Gender-Based Hate Speech Across Communities on Social Media Platforms

Introduction

As a member of various online communities, are you used to sharing your views, your daily life, or even discussing political and international news on social media? The internet is diverse, anonymous, and inclusive — a place where everyone can speak freely. But it can also be extreme, absurd, and even violent, causing real harm to people. In recent years especially, as algorithms and AI have been introduced, online conflicts have grown more and more intense.

If you’ve participated in social media platforms in the past few years, you might have noticed something interesting: different platforms — or even different communities within the same platform — are starting to show very distinct user profiles. One of the most obvious differences is based on gender. On some platforms or communities, the ratio of male to female users can be as extreme as 9 to 1. This might result from the platform’s original design or simply the way people use it.

Take Chinese social media platforms as an example. Unlike X or Facebook, which are more monopolized, China has a huge variety of platforms, and many of them have large, active user bases. Xiaohongshu (also known as “Rednote”) is one of them: a lifestyle-sharing platform used mostly by women. Scholar Geng Shaoqi (2024) points out that “Xiaohongshu connects people with similar interests using algorithmic recommendations based on big data… helping users get responses from others with the same backgrounds. Its highly interactive and ‘visible’ environment,” she argues, “gives women a sense of fulfillment through information sharing and emotional connection.”

The disagreement started with different ways of seeing the same thing.

From ABC NEWS: The video, showing a woman chained to the wall of a backyard shed in rural China, has been viewed 2 billion times. (Supplied)

In January 2022, sensational news from Fengxian, Jiangsu, China, made headlines around the world. A short video exposed a woman with a vacant expression, wearing only thin clothes in the cold, chained by the neck in an outbuilding of a run-down rural home. As more details were posted, the public learned that she had given birth to eight children, and that she was suffering from mental illness.

At first, the public was overwhelmed with sympathy for the woman — and anger toward the man who had locked her up. But arguments grew over the government’s constantly changing story. First, officials claimed it was a “legal marriage.” Then they shifted, saying the woman had a mental illness and had been taken in by a kind family.

Each official update made things worse, as people began to feel they were being lied to. Eventually, authorities admitted that the woman had in fact been trafficked multiple times.

For a poor woman, he said:

Instagram post calling attention to the chained woman case in Xuzhou, 2022.
Source: Instagram (@citizensdailycn)

But before the truth was fully revealed, public opinion had already split. On social media, waves of users came out to defend the man involved and the local authorities, who had lied in early statements under pressure. Some of those defenders argued things like: “She’s mentally ill, no one wanted her anyway — at least the man gave her food and shelter,” or “She should be grateful.” Apart from these extreme discourses, male-dominated spaces generally framed the case as just a local crime — not something that should be connected to broader gender issues.

For a poor woman, she said:

In contrast, female-dominated communities saw this case as a symbol of gender-based violence — something women in China are forced to worry about throughout their whole lives. Many of them connected the abuse to a deeper issue: the patriarchal system. For them, this woman was not just a victim of human trafficking — she was a mirror, reflecting society’s invisible discrimination and the way women are objectified in China. This kind of hidden risk is something countless women live with every day — and something most men simply cannot relate to, because most of them have never had to fear gender-based violence in the same way.

These kinds of gender-based thinking gaps, and the arguments they cause, have worsened over time. According to a 2021 report by Aim Sinpeng on Facebook’s regulation of hate speech in Asia, “Asia Pacific nations need to take into account a region’s cultural, religious and ethnic differences, and therefore might attempt to curb hate speech using eclectic legal measures that consider this diversity.” This helps explain why gender-related topics are especially sensitive on Chinese social media platforms. But it is not just a regional issue. Similar tensions can be found across global platforms, where cultural diversity, polarization, and online division often lead to serious consequences, even spilling into real-life social problems.

Consequences: Far Beyond Just Online

A man is assaulting a female part-time worker at a convenience store in CCTV footage captured on Nov. 4, 2023, in Jinju, South Gyeongsang Province. Yonhap

As we can see, gender-based violence is not just a thing of the past. With the rise of internet usage, it never really stopped — and now it has started to spill over into the real world, turning into real threats and harm offline.

And slowly, people began to ask:

Are these so-called “gender wars” just a natural result of different perspectives between men and women — or is something (or someone) deliberately fueling the fire?

But why?

From disagreement to division: how gender perspectives diverge

Maybe you can remember: five years ago or so, online disagreements were not always this intense.

Sure, gender-based arguments happened from time to time. But back then, you rarely saw hate speech like “men are all potential murderers!” So what changed? Why does every online fight now feel like a full-on war — with people treating strangers like sworn enemies, and those conflicts even bleeding into real life?

“A recent survey has reported that around one-third of the US population have been on the receiving end of some hateful behavior at least once in their life.”

—— Anti-Defamation League, 2020

Terry Flew (2014) once described social media as “interdependent, globalizing, and interacting.” But hate speech has destroyed that originally peaceful and diverse environment. Ordinary users gradually started avoiding heated topics, became afraid to share their thoughts, and grew reluctant to comment at all — staying safely in their comfort zones. And when that happens, what is left behind are the loudest, most extreme voices. That is how we end up in a vicious cycle, where online speech becomes ever more hostile.

So why has online speech become so extreme? And where did hate speech get so powerful, so damaging? To understand this, we have to talk about something called the “Echo Chamber.”

For this concept, scholars define it as:

“Social media may limit the exposure to diverse perspectives and favor the formation of groups of like-minded users framing and reinforcing a shared narrative — that is, echo chambers.”

——Cinelli et al., 2021

Obviously, the algorithm — the data engine behind the platform — plays a key role. Social media platforms track what you search, what you like, and how you interact, then use that data to serve you more of the same. Over time, the system refreshes your user profile again and again, building a digital version of “you.” People with similar views or interests — whether political opinions, identity, or just hobbies — get pushed into the same chamber. Inside that chamber, you feel supported and safe, because everyone agrees with you. Eventually, you stop seeing different views, outside perspectives disappear entirely, and polarization grows.
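That feedback loop can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the topic vectors, post names, and 0.3 update rate are all invented, and real recommender systems are vastly more complex), but it shows how repeatedly engaging with the top-ranked post narrows a once-diverse profile onto a single topic:

```python
# Toy sketch: interest-based recommendation narrowing a user's exposure.
# All names and numbers are illustrative, not any real platform's system.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def recommend(profile, posts, k=3):
    """Rank posts by similarity to the user's profile; return the top k."""
    return sorted(posts, key=lambda p: dot(profile, p["topics"]), reverse=True)[:k]

def update_profile(profile, post, rate=0.3):
    """Nudge the profile toward each post the user engages with."""
    return [(1 - rate) * p + rate * t for p, t in zip(profile, post["topics"])]

# Three topic dimensions: [gender debate, lifestyle, news]
posts = [
    {"id": "heated-thread", "topics": [1.0, 0.0, 0.0]},
    {"id": "recipe",        "topics": [0.0, 1.0, 0.0]},
    {"id": "world-news",    "topics": [0.0, 0.0, 1.0]},
    {"id": "another-rant",  "topics": [0.9, 0.1, 0.0]},
]

profile = [0.4, 0.3, 0.3]          # starts mildly diverse
for _ in range(5):                 # each round: engage with the top pick
    top = recommend(profile, posts, k=1)[0]
    profile = update_profile(profile, top)

# After a few rounds the profile collapses onto the first topic,
# so the feed keeps serving the same kind of content.
```

After five rounds the first component of the profile climbs toward 1 while the others decay toward 0: the chamber has closed.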

Let’s go back to the case of the chained woman. At first, you might be puzzled: how could a woman who is so clearly a victim become the target of gender-based hate speech? Now the answer should be obvious. Public discussion happens on platforms where AI and algorithms cannot fully recognize and filter emotionally charged or harmful language, especially when it comes in the form of satire or metaphor. In that context, once those comments are made, they get locked into echo chambers alongside neutral or reasonable views.

According to Cinelli et al. (2021), “This algorithmic operated distortion of information and perceived reality is difficult to detect, making it even more dangerous.” As more aggressive content circulates, the atmosphere within the chamber subconsciously becomes more aggressive and prone to hatred of specific groups, in turn affecting every member. And when these radical echo chambers are finally broken open — when opinions from different chambers collide — emotions spike fast. Debates become fights, and fights become wars.

This did not happen only in the single case of the “chained woman”; it is the result of gender antagonism that platform algorithms have amplified over and over since their inception. And it is one of the key reasons why gender-based arguments online feel so much more extreme today than they used to.

So, are Echo Chambers — or algorithms — really the culprit?

Echo chambers cannot create hate or cause harm on their own, of course. They are more like flowerpots: what really matters is what kind of seeds get planted in them. If hate speech is not deterred — if, instead, someone keeps watering and fertilizing it — then it will not just stay online but will start to shape how people think, feel, and act in the real world.

Moreover, if algorithms have no feelings and no intent, and are just sorting and recommending content, then:

What are they basing their decisions on? What rules are they following? Who wrote those rules?

Maybe the disagreement at the beginning was not intended as hate — but someone made it louder and pushed it to everyone. Who did that?

Why is it always the loudest, most extreme voices that get pushed to our eyes?

First, we must understand that the digital world is not neutral. Technology is made by human beings — and that means human power structures, ways of thinking, and biases can all show up in how systems are built. From the very first line of code, algorithms can reflect the interests of those who design them.

The more active users are, the more likely they are to see ads — and the more attractive the platform becomes to advertisers. This is how most free platforms make money.

As Fuchs (2009, as cited in Roiha & Malin, 2023) points out, on free platforms, the real “product” is not the content — it is the users themselves.

As I mentioned earlier, negative content is more likely to trigger strong emotions — and strong emotions get people to comment and engage. That is why emotional and hateful speech often performs better than calm discussion. According to Sinpeng (2021), “Content that is divisive, sensational, or inflammatory tends to generate more engagement.” Hence platforms end up rewarding extreme posts without intending to.

So, is the platform malicious? Not quite.

But its logic — to prioritize clicks, reactions, and watch time — creates an environment where anger spreads faster than empathy.
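To make that logic concrete, here is a toy ranking function. The weights and engagement counts are invented for illustration (no platform publishes its real formula), but the point holds: content is never inspected, only reactions are counted, so the inflammatory post rises to the top.

```python
# Hypothetical engagement-first ranking. Weights are illustrative only:
# comments and shares score heavily because they keep users on the platform.

def engagement_score(post):
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

feed = [
    {"id": "calm-analysis",  "likes": 120, "comments": 8,  "shares": 4},
    {"id": "angry-hot-take", "likes": 60,  "comments": 90, "shares": 40},
]

ranked = sorted(feed, key=engagement_score, reverse=True)
# The hot take wins (60 + 270 + 200 = 530 vs 120 + 24 + 20 = 164),
# even though nothing about its content was ever evaluated.
```

Notice that the calm post has twice the likes; it still loses, because the metric that matters is total reaction, not quality.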

“If you gaze long into an abyss, the abyss also gazes into you.”

— Nietzsche

Secondly, according to Sinpeng’s (2021) report, platforms often struggle to recognize hate speech — especially in regions with complicated political situations and cultural diversity. Beyond language barriers, people use puns, coded words, memes, or even made-up slang to attack others.

Additionally, hate speech can be political. In a region where gender inequality is already deep-rooted, algorithms may reflect that imbalance — not because they mean to, but because they are built on biased data and social context. Gelber (2019, as cited in Sinpeng, 2021) argues that speech can become oppressive when it takes place in a context where structural discrimination already exists.

In these cases, platforms — at once powerful players and profit-seekers — cannot be expected to regulate such speech fairly.

Take the “chained woman” case as an example again:

A comment like “She’s mentally ill — no one wanted her anyway” hides a toxic idea: that women are objects, valuable only if someone “wants” them — and that mental illness makes her worth less.

This kind of metaphorical hate speech does not use any dirty words, so the system usually fails to catch it.
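A toy blocklist filter makes this failure mode visible. This is my own simplified illustration (real moderation systems use machine-learned classifiers, not bare word lists), but the gap is the same: the coded comment carries a dehumanizing idea while containing no banned token.

```python
# Naive keyword moderation: flags a comment only if it contains a word
# from a blocklist. The tokens here are placeholders, not real slurs.

BLOCKLIST = {"slur1", "slur2"}

def naive_filter(comment):
    """Return True if the comment contains any blocklisted word."""
    words = {w.strip(".,!?'\u2019").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

direct = "you slur1!"
coded = "She's mentally ill, no one wanted her anyway."

naive_filter(direct)  # True: caught by exact word match
naive_filter(coded)   # False: dehumanizing idea, but no banned token
```

The second comment sails through because its harm lives in the implication, not in any individual word — exactly the metaphorical speech the paragraph above describes.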

Conclusion

Within this kind of system, users keep getting exposed to the same emotional content again and again, and over time their own views grow sharper. At the same time, platforms struggle to moderate effectively, especially in regions with complex political and cultural systems.

And that creates the perfect environment for hate speech to grow inside Echo Chambers.

With the help of algorithms, users are flooded with one-sided content, over and over.

References

Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118

Flew, T. (2021). Issues of concern. In Regulating Platforms (pp. 91–96). Cambridge, UK: Polity.

Geng, S. (2024). 面向女性用户的社交媒体类App界面设计研究——以小红书为例 [Interface design of social media apps for female users: A case study of Xiaohongshu]. 数字通信世界, (12), 65–67.

Anti-Defamation League. (2020). Online hate and harassment: The American experience 2021. Center for Technology and Society. https://www.adl.org/media/14643/download

Lomonaco, F., Taibi, D., Trianni, V., Buršić, S., Donabauer, G., & Ognibene, D. (2023). Yes, echo-chambers mislead you too: A game-based educational experience to reveal the impact of social media personalization algorithms. In G. Fulantelli, D. Burgos, G. Casalino, M. Cimitile, G. Lo Bosco, & D. Taibi (Eds.), Higher education learning methodologies and technologies online: HELMeTO 2022 (Vol. 1779, pp. 379–389). Springer. https://doi.org/10.1007/978-3-031-29800-4_26

Roiha, M., & Malin, H. (2023). Beyond the screen: Digital realities and embodied harm in the experiences of gender-based online hate speech. Dipòsit Digital de la Universitat de Barcelona. https://diposit.ub.edu/dspace/handle/2445/216831

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. The University of Sydney & The University of Queensland. https://www.facebook.com/communitystandards/recentupdates/

ABC News. (2022, March 20). Xuzhou chained woman scandal shines light on China’s human trafficking problem. https://www.abc.net.au/news/2022-03-20/xuzhou-chained-mother-china-reveals-human-trafficking-problem/100908110

Instagram. (2022, February 8). For a poor woman, he said… [Post]. Instagram. https://www.instagram.com/p/CacVQ9ar8a9/
