“Your Body, My Choice”: Gendered Hate and the Digital Persecution of Women

During Donald Trump’s successful 2024 U.S. presidential campaign, right-wing commentator Nick Fuentes posted the message: “Your body, my choice. Forever.” Within 48 hours the phrase had spread across mainstream social media platforms including Facebook, Instagram, TikTok, and X, gaining over 90 million views and more than 35,000 shares. According to Institute for Strategic Dialogue (ISD) data, use of the term spiked by 4,600%.

“My body, my choice” is a battle cry that originated in women’s struggles for bodily autonomy. The fight for girls’ and women’s rights has long been tied to this simple, empowering phrase. The slogan encapsulates women’s right to make decisions about their own bodies, especially when it comes to abortion, reproductive health, and end-of-life care; it embodies autonomy, self-determination, and control over deeply personal health choices. Fuentes’s post inverts the slogan into a threat, and the misogyny, extremism, and far-right hate speech it licenses has not only taken over cyberspace but has also spread to schoolyards. “We have received reports of some students using the phrase ‘your body, my choice’ offline against girls in school,” the ISD said.

This was not a lone instance of online trolling, nor merely a meme. It highlights a deeper, structural reality: in today’s politicized digital ecology, women’s voices, particularly those who support autonomy, oppose authority, or rebel, are met with algorithmically amplified hatred. Under the banner of “free speech,” platforms like X and Meta have drastically lowered the bar for moderating hate speech while turning a blind eye to the growing tide of gender-based hatred. In response to this wave of hate speech, many women have grown increasingly fearful about the future of their reproductive rights and of gender equality more broadly. Some American women have even begun to embrace the ideas behind South Korea’s 4B movement, a radical feminist stance that rejects marriage, childbirth, dating, and sex with men altogether.

How Should We Understand Hate Speech?

As Terry Flew (2021) explains, hate speech refers to expression that “expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, and sexual orientation.” It often exhibits three key characteristics—each of which becomes clearer when we examine the viral “Your body, my choice” incident.

  1. First, hate speech targets a specific group, regardless of any individual differences within it. In this case, all women—as a broad gender category—became the target of extreme hostility. It didn’t matter what their religion, race, or age was; their shared gender identity was enough to make them the object of collective attack.
  2. Second, it works through stigmatization, attributing socially disapproved characteristics to the targeted group. Take, for example, another viral comment during the incident from Jon Miller, a former contributor to conservative media outlet TheBlaze, who tweeted: “Women threatening sex strikes like LMAO as if you have a say”, a post that received 85 million views. The underlying message reduces women’s agency to a joke, ridiculing their capacity to make decisions about their own bodies.
  3. Finally, the result of this discursive violence is that the targeted group is framed as unwelcome, even as a legitimate object of hostility. In the aftermath of the “Your body, my choice” trend, women—particularly those expressing autonomy—were recast as enemies, open to ridicule, harassment, and abuse. Their presence online became something to challenge, not respect.

All of this raises deeply troubling questions: Why is it so often women who are targeted? Where exactly are the boundaries for moderating hate speech on platforms like X or TikTok? Who defines those limits—governments, ideology, or some abstract notion of ‘free speech’ and human rights?

Censorship That Can’t Censor Hate

According to a study by the Center for Countering Digital Hate (CCDH), Instagram failed to take action on 93% of abusive comments directed at female politicians. Researchers analyzed over half a million comments on the accounts of ten prominent women in politics, including Vice President Kamala Harris and Representative Alexandria Ocasio-Cortez, and found widespread use of racial slurs, death threats, and gender-based insults. A 2016 survey found that 41.8% of female parliamentarians worldwide reported receiving humiliating or sexually suggestive images via social media, while 44.4% said they had received threats of death, rape, physical assault, or abduction (Jankowicz et al., 2021). Beyond causing anxiety, these threats took a serious toll on the mental health of women in politics, ultimately deterring them from actively engaging in civic and public life. Most troubling of all, many of these vile remarks likely violated Instagram’s own community guidelines on violent threats, yet the platform did nothing. This dynamic creates a threatening atmosphere and directly undermines women’s political participation. In the politically charged climate that followed Trump’s re-election, these dynamics have only grown stronger.

Mark Zuckerberg recently outlined Meta’s evolving content moderation strategy and proposed several changes: easing some constraints on discussion of popular topics, refocusing enforcement on clearly unlawful or high-severity offenses, implementing a more tailored system for political material, and replacing third-party fact-checking with a “community notes” system.

Even if these changes might appear beneficial in theory, the move to permit additional speech by relaxing restraints on controversial issues reveals a conflict between expressed values and true platform motivations. In its official statement, Meta declared: “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.” This framing raises an important concern: in the name of free speech, are platforms abandoning their responsibility to protect marginalized communities from targeted abuse?

This may initially appear to be an attempt to allow more public discussion, to avoid over-policing political issues, and to placate right-wing critics who say large digital companies are “silencing” them. The repercussions, however, are far from neutral.

The risk is now considerably higher for those most exposed to hate speech, including women and gender minorities such as transgender people. In 2023, the FBI reported a record number of hate crimes targeting the LGBT community: over 2,800 incidents, accounting for nearly a quarter of all hate crimes in the United States. The report also noted that this figure is likely an underestimate, as many hate crimes go unreported. When hate speech is disguised as “political expression,” does it still get flagged by moderation systems? Will gendered slurs and stigmatizing language that could once be reported now be tolerated under the guise of legitimate debate?

Gendered Hate and Digital Platform Architecture

This growing tolerance for hate speech is often cloaked in the rhetoric of absolute free speech. As Katharine Gelber (2011) has argued, freedom of speech should not be elevated above the rights and safety of others: freedom of speech is not the freedom to cause harm. Gelber calls for a more balanced, context-sensitive approach to governance, one that takes seriously the real-world impacts of hate speech on vulnerable and marginalized communities. The neutrality of platforms has likewise come under scrutiny. While social media companies often present themselves as impartial arbiters of information, their algorithmic design and content moderation systems frequently reinforce systemic biases. These platforms often institutionalize and automate privileged male perspectives, both in terms of speech and of violence (Chemaly, 2019).

[Image: photo shared by @feminist on Instagram]

In practice, so-called “free speech” is selective. It tends to protect those who already hold discursive power while suppressing dissenting or marginalized voices. Rather than functioning as equal spaces for dialogue, digital public platforms have become battlegrounds of discourse, where the voices of the marginalized are drowned out by the coordinated hostility of online abusers. This is especially evident in the social media environment that has followed Trump’s re-election, where the line between “freedom of expression” and “freedom to attack” has become increasingly blurred. Beneath the façade of platform “neutrality,” gendered hate speech is not only tolerated but often amplified, eroding the very notion of these platforms as democratic spaces.

The Return of Gamergate 2.0: Sweet Baby Inc. and Political Harassment

Far from being a matter of occasional hate speech and knee-jerk trolling, contemporary gendered hostility is rooted in an extensive culture of digital violence. It includes coordinated disinformation, bot-operated harassment campaigns, and the algorithmic normalisation of misogyny online. Gendered disinformation, the distribution of manipulative messages intended to promote dangerous gender stereotypes, is used to marginalise women and gender minorities and to actively deny them a presence in public life (Veritasia et al., 2024).

This broader trend is perhaps best exemplified by the recent episode often summarized as “Gamergate 2.0.” Beginning in early 2024, the narrative consultancy Sweet Baby Inc. became a primary target of this backlash. Praised for its inclusive storytelling work on high-profile games like Alan Wake 2 and Marvel’s Spider-Man 2, the company has a long record of advocating for, and implementing, diversity, equity, and inclusion (DEI) in gaming. Its co-founder Kim Belair, along with female journalists at publications like Kotaku who criticized the campaign, became targets of mass online harassment, disinformation, and doxing, leading several to seek legal protection. Right-wing groups called for boycotts of games affiliated with Sweet Baby Inc., and a Steam curator group devoted to “detecting” the company’s involvement quickly ballooned to over 450,000 members. That this campaign unfolded in the run-up to the 2024 United States presidential election is worth noting, because it illustrates the complex interlinkage between gendered hate speech and broader political agendas.

Yet what is most troubling is not any individual campaign but the underlying platform infrastructures that enable and amplify these kinds of attacks. Contemporary content ranking and filtering systems appear designed to prioritize engagement and nominal neutrality over moral responsibility: even when inflammatory, emotional content exacerbates hatred and harms society, the algorithms keep amplifying it. The hateful slogan “Your body, my choice” became an Internet meme almost instantly, not despite the harm it caused to a specific group but because it was controversial, political, and heavily discussed, precisely the qualities that engagement-driven algorithms reward.
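To make that dynamic concrete, consider the following deliberately simplified sketch. It is not any platform’s actual code; the weights, field names, and numbers are invented for illustration. The point is that a purely engagement-driven ranker counts every like, share, and reply toward a post’s score, whether the engagement is supportive or abusive, so a post that provokes mass outrage ranks exactly like one that earns genuine approval.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    replies: int  # includes angry quote-replies and pile-ons

def engagement_score(post: Post) -> float:
    """Toy ranking function: the score rises with *any* interaction.

    Replies and shares get the heaviest (invented) weights because they
    keep users on the platform longest -- so a post that provokes outrage
    scores just like one that earns genuine approval.
    """
    return post.likes * 1.0 + post.shares * 2.0 + post.replies * 3.0

feed = [
    Post("Weekend hiking photos", likes=900, shares=40, replies=30),
    Post("Inflammatory slogan targeting women", likes=500, shares=2000, replies=5000),
]

# The abusive post tops the ranking purely on volume of reaction.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post.text}")
```

Nothing in this scoring function asks who is being harmed; controversy registers only as more engagement, which is precisely what pushes an abusive slogan to the top of the feed.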

These systems, built to keep users as engaged as possible and fueled by biased data, do not comprehend irony, context, or the veiled violence of language. As a result, it is hard to separate “political speech” from “gendered harassment,” and extreme content goes unchecked under the guise of public discussion. By continuing to use “free speech” as cover for systemic hatred, platforms become complicit, part and parcel of a culture of violence.

The Future of Platform Accountability

Stemming this structural challenge calls for reworking platform governance at its foundations. One major development is the European Union’s Digital Services Act (DSA), under which large platforms must disclose their algorithmic recommendation methods and content moderation procedures; it also introduces independent audits and procedures for users to challenge decisions. Concurrently, Australia’s eSafety framework has established a national regulatory agency that provides real-time reporting channels and redress for victims of online abuse.

But regulation alone is not sufficient. Platforms need to embrace proactive responsibility by incorporating structural justice and user protection into their technical design and operational frameworks. Concrete steps include:

  1. Increasing algorithmic transparency and explainability;
  2. Incorporating intersectional awareness into content evaluation frameworks;
  3. Establishing independent digital ethics councils;
  4. Implementing anticipatory governance systems capable of detecting and intervening before abuse reaches mass scale (see the sketch below).
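
To illustrate what the fourth point could mean in practice, here is a minimal sketch of anticipatory detection. It is a toy illustration, not a production system: the window length, threshold ratio, and mention counts are invented assumptions. The idea is simply to flag a phrase whose hourly mention count suddenly jumps far above its recent baseline, the kind of 4,600% spike the ISD measured for “your body, my choice”, so that human reviewers can intervene before the phrase reaches meme scale.

```python
from collections import deque
from statistics import mean

def make_spike_detector(window_hours=24, ratio_threshold=10.0):
    """Flag a sudden surge in hourly mention counts of a tracked phrase.

    window_hours:    hours of history forming the baseline (assumed value)
    ratio_threshold: multiple of the baseline that counts as a spike
    """
    history = deque(maxlen=window_hours)

    def check(hourly_count):
        baseline = mean(history) if history else 0.0
        history.append(hourly_count)
        # Flag only once a baseline exists and the jump is dramatic.
        return baseline > 0 and hourly_count >= ratio_threshold * baseline

    return check

# Simulated hourly mentions of a tracked phrase: quiet, then a viral surge.
check = make_spike_detector()
for hour, count in enumerate([12, 9, 15, 11, 14, 10, 13, 620]):
    if check(count):
        print(f"hour {hour}: {count} mentions -- escalate to human review")
```

A real system would need multilingual matching, detection of adversarial spellings, and human oversight, but even this crude signal relies only on information platforms already possess and could act on before a slogan goes viral.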

Above all, platforms must abandon the myth of neutrality. They are not merely passive channels of information but active shapers of public discourse. So long as “maximizing reach” remains the foundational priority, platforms will continue to supply the infrastructure for structural violence. A fairer, more ethical, and more democratic digital future depends on whether platforms can change course and treat human rights, not engagement, as the foundation of their politics. The task is not to amplify hatred, but to take responsibility for it.

References

Chemaly, S. (2019). Demographics, design, and free speech: How demographics have produced social media optimized for abuse and the silencing of marginalized voices. In S. J. Brison & K. Gelber (Eds.), Free speech in the digital age.

Duffy, C. (2024, November 11). “Your body, my choice”: Attacks on women surge on social media following election. CNN. https://edition.cnn.com/2024/11/11/business/your-body-my-choice-movement-election/index.html

eSafety Commissioner. (2015). eSafety. Australian Government. https://www.esafety.gov.au/

European Commission. (2024, February 23). Questions and answers on the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348

Farokhmanesh, M. (2024, March 14). The Small Company at the Center of “Gamergate 2.0.” Wired. https://www.wired.com/story/sweet-baby-video-games-harassment-gamergate/

FBI National Press Office. (2023). FBI releases 2022 crime in the nation statistics. Federal Bureau of Investigation. https://www.fbi.gov/news/press-releases/fbi-releases-2022-crime-in-the-nation-statistics

Flew, T. (2021). Regulating platforms: Communication policy and the logic of the platform economy. Communication Research and Practice. https://doi.org/10.1080/22041451.2021.1881056

Frances-Wright, I., & Ayad, M. (2024, November 8). “Your body, my choice:” Hate and harassment towards women spreads online. ISD. https://www.isdglobal.org/digital_dispatches/your-body-my-choice-hate-and-harassment-towards-women-spreads-online/

Gelber, K. (2011). Speech Matters: Getting Free Speech Right. University of Queensland Press.

Ingram, D. (2024, August 14). Instagram failed to act on 93% of abusive comments aimed at female politicians, study says. NBC News. https://www.nbcnews.com/tech/social-media/instagram-failed-act-93-abusive-comments-aimed-female-politicians-stud-rcna166477

Jankowicz, N., Hunchak, J., Pavliuc, A., Davies, C., Pierson, S., & Kaufmann, Z. (2021). Malign creativity: How gender, sex, and lies are weaponized against women online. Woodrow Wilson International Center for Scholars.

Kaplan, J. (2025, January 7). More Speech and Fewer Mistakes. Meta. https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/

Veritasia, M. E., Muthmainnah, A. N., & de-Lima-Santos, M.-F. (2024). Gendered disinformation: A pernicious threat to equality in the Asia Pacific. Media Asia, 1–9. https://doi.org/10.1080/01296612.2024.2367859
