Deepfake Is Not a Joke. It Is a Form of Violence.

Gendered Harm, Privacy Breakdown and Governance Failure in the Age of AI Face-Swapping

South Korean women protest against the creation and sharing of deepfake images.

Her Face Is Not Yours to Use

Deepfake technology uses artificial intelligence to replace one person’s face with another in a photo or video. When it is used to make sexual content, the result is highly realistic but entirely fake pornographic imagery. These images are made without the victim’s consent and fall under the category of image-based sexual abuse (Fido et al., 2022). 

In early 2024, explicit deepfake images of the American singer Taylor Swift went viral on X (formerly Twitter), reaching over 45 million views. The images quickly spread across platforms like Reddit and Instagram. But the issue goes far beyond celebrities. That same year, cases emerged in South Korea and Australia involving schoolgirls and female teachers. In South Korea, Telegram groups with over 220,000 members shared fake nude images, often alongside victims’ names and contact details. At Bacchus Marsh Grammar, near Melbourne, photos of about 50 schoolgirls were altered and circulated as explicit content. And over the last ten years, the number of child sexual abuse images online has increased by 830%, a rise fuelled by the widespread use of AI image generators (Bailey & Lundrigan, 2025). 

From celebrities to ordinary people, and from teenagers to children, these incidents show that deepfake porn is not just a matter of bad actors or technical misuse. It points to a broader failure to protect privacy, govern platforms and address gender-based harm. 

AI Is Not Neutral. It Reflects and Repeats Social Bias.

AI does not think or reason the way a human does. It learns from massive amounts of data, including what users click, share and view. Crawford (2021) describes AI as an extractive industry: it collects personal data, often without consent, and uses it to make predictions that benefit advertisers and platforms. Those predictions reflect existing inequalities. 

Some might ask: “If AI only produces content when someone gives it a prompt, then isn’t it the user’s fault?”

Yes, people who misuse AI to create harmful content must be held responsible. But that does not mean the tool itself is neutral. AI is not just a mirror of human choices. It is trained on biased data and built within systems that reward what gets attention, not what is fair or safe. Recommendation algorithms learn that images of women, especially celebrities, attract more clicks. That is why Taylor Swift was targeted: the algorithm had already labelled her face as high-value. Offenders used AI image generators such as Microsoft Designer to create the fake images and shared them in Telegram groups (Ortiz, 2024).

Crawford (2021) also points out that AI is trained on biased data. For instance, facial recognition tools perform worse on women and people of colour because most training datasets are dominated by white male faces. These biases affect who gets wrongly flagged and who is easier to abuse. Flew (2021) notes that AI systems are usually not transparent: users often do not know when their data is used for training or why certain content appears in their feed. Victims of deepfakes usually learn about the content only after it has gone viral, and by then removing it is very difficult. These systems do not simply reflect society. They actively reinforce its inequalities, which are built into both the training data and the structure of digital economies that prioritize virality over safety.

Privacy and Control: When Your Image Is No Longer Yours

AI learns from personal data that users give away without realizing it: selfies, face filters, cookies and facial recognition scans. These everyday activities generate data that can be stored, analyzed and reused. Zuboff’s idea of surveillance capitalism explains this model (as cited in Flew, 2021): human behaviour becomes raw material for predictions and profit. Platforms track what we do, predict what we might do next, and sell that data. The more we interact, the more data they gather. 

Suzor (2019) explains that users have limited power. Social media platforms are private spaces governed by complex terms of service that few people read. Once users accept them, they lose control of their content, and platforms can use or sell it without clearly informing them. This lack of transparency leads to harm. In Australia and South Korea, ordinary students and teachers had their images taken from social media and turned into deepfakes. They were not public figures, and most of them had no idea this could happen to them.

When victims report abuse, platforms often delay or offer no clear removal process. In many jurisdictions, platforms are not legally responsible for content users upload. They profit from user data but are rarely held accountable when it is abused. People are told to protect themselves by being careful online, but this advice ignores how little control users actually have. When platforms put engagement first, privacy becomes an afterthought. These failures create an environment where deepfake abuse can thrive.

Platform Governance: Too Little, Too Late

Platforms often respond too slowly to reports of deepfakes. Most rely on a reactive approach: they wait until something goes wrong before taking action. By then, it is already too late for victims. Flew (2021) explains that platform algorithms are built to increase engagement, which means they push shocking, emotional or viral content. Harmful content spreads quickly because the system rewards it: the platform notices the content is getting clicks and boosts it further.

In Taylor Swift’s case, X blocked searches only after the images had millions of views. In South Korea, Telegram groups continued operating until protests forced authorities and platforms to respond. Platforms are not just hosting content. They design the algorithms that decide what gets seen, and when a harmful post starts gaining views, the system sometimes promotes it further (Flew, 2021). This is not accidental. It is part of a design that values engagement over safety. Platforms profit from what goes viral, even when the content causes real harm. 

Deleting a viral post is not enough. These fake images are saved, copied and re-uploaded. Without proactive design changes, the same harm happens again. The problem is not just user behaviour; it is the system itself. If virality is rewarded and harm ignored, abuse becomes routine.

Deepfakes Disproportionately Target Women

Deepfake harm is not random. It follows patterns of inequality. Most deepfake porn targets women, and victims are often young, socially marginalized, or without digital protection. Algorithms learn what captures attention, and sexualized images of women are promoted more often. This creates a cycle in which harmful content is repeated and normalized. Sensity AI reports that over 96% of deepfake content online is pornographic, and nearly all of it targets women. Platforms are not only reflecting this problem; they are helping to create and spread it. Fido et al. (2022) found that public reactions to deepfake victims depend on who they are: if the target is a celebrity, people are less likely to treat the abuse as serious, while unknown victims often receive more sympathy. This shows that public empathy is shaped by social status. 

Many women who are targeted lack legal resources or social influence. They are left to prove the images are fake, and often they are blamed, ignored or silenced. Once a deepfake spreads, it is almost impossible to remove. This is not just a personal issue. It shapes how women are treated both publicly and privately. Control over their own image is taken away by digital platforms and society, which means their identities are shaped by content they did not create. Deepfake abuse reveals a broader system of digital gender violence, one that reduces women’s autonomy and spreads harmful ideas about whose privacy matters.

What Can Be Done: Three Areas for Action

Solving this issue takes more than individual effort. Real solutions require change in three areas: platform regulation, legal governance, and public education.

Platform Regulation

Many platforms follow their own rules without outside oversight. Flew (2021) argues that they need legal accountability, and two ideas are especially relevant here: 

  • The Duty of Care model says platforms should be required to protect users from foreseeable harm. 
  • The FAT principles (fairness, accountability, and transparency) should guide their systems. Platforms should explain how their algorithms work, how user data is used, and how users can report and challenge harmful content.

When designing and testing AI tools, developers must consider misuse. Watermarks or labels can help users recognise AI-generated content. Platforms can also add pop-up reminders to warn people before they upload or share sensitive material.

Legal Governance

Concerns about the over-criminalisation of social issues are understandable. However, it is equally important to acknowledge that, worldwide, the conviction rate for online violence, especially gender-based abuse, remains very low. Victims of digital violence often face legal systems that are slow to respond, lack resources, or fail to address harms rooted in structural gender inequality. 

Some countries have begun updating their laws. Australia, for example, introduced new criminal offences in 2024 for sharing sexually explicit deepfakes without consent (Attorney-General’s portfolio, 2024), and in the United Kingdom there have been growing calls to criminalise the creation of such material, not only its distribution (McGlynn, 2024). 

These legal developments are promising but still insufficient. In many jurisdictions, existing laws do not clearly address non-consensual, AI-generated content. As a result, many offenders escape accountability, and victims are left without access to justice. Some cases receive brief attention, only to quietly disappear without any consequences.

As technology evolves, the law must be enforced and regularly updated. Without clear, enforceable, and adaptable legal frameworks, platforms will continue to operate in legal grey zones, and harmful content will continue to circulate unchecked.

Public Education

Many people still do not know how deepfakes work or how to identify them. Schools, media, and online spaces must teach people to think critically about digital content. 

We also need to talk about inequality. Technology often reflects existing power structures, and deepfake porn is a gendered issue. We must teach people why women are targeted more often and how to support victims. Blaming victims is not the answer. We should focus on changing the systems that allow abuse. While Taylor Swift had fans who could defend her, many ordinary girls and women have no one. Schools, families, and communities need to offer legal, emotional, and social support.

If someone you know is targeted by deepfake pornography, The Conversation provides a helpful guide: What to do if you, or someone you know, is targeted with deepfake porn or AI nudes (Henry, 2024). And most importantly, do not hesitate to seek help from someone you trust. No one should go through this alone.

Protecting Dignity in the Digital Age

Deepfake technology shows how gender, power, and profit come together in harmful ways. It reminds us that digital violence is not rare or accidental. It is built into systems that prioritize clicks over care. Privacy and dignity should not be optional online, but right now they are not guaranteed. Changing this will require real effort. Platforms must be redesigned. Laws must be enforced. Education must be improved.

If we stay silent, the next victim could be anyone. The right to control our image must be defended. In an AI-driven world, protecting dignity is no longer optional. It is essential.

References

ABC News. (2024, June 12). Parent outraged over ‘incredibly graphic’ fake images of Bacchus Marsh Grammar students. https://www.abc.net.au/news/2024-06-12/bacchus-marsh-ai-images-fake-nude-students-photo-online/103967402

Ahn, Y., & Bae, S. (2024, September 25). AI is fuelling a deepfake porn crisis in South Korea. What’s behind it – and how can it be fixed? The Conversation. https://theconversation.com/ai-is-fuelling-a-deepfake-porn-crisis-in-south-korea-whats-behind-it-and-how-can-it-be-fixed-238217

Attorney-General’s portfolio. (2024, June 5). New criminal laws to combat sexually explicit deepfakes. https://ministers.ag.gov.au/media-centre/new-criminal-laws-combat-sexually-explicit-deepfakes-05-06-2024

Bailey, S. & Lundrigan, S. (2025, February 21). Our research on dark web forums reveals the growing threat of AI-generated child abuse images. The Conversation. https://theconversation.com/our-research-on-dark-web-forums-reveals-the-growing-threat-of-ai-generated-child-abuse-images-249067

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven, CT: Yale University Press, pp. 1-21.

Fido, D., Rao, J., & Harper, C. A. (2022). Celebrity status, sex, and variation in psychopathy predicts judgements of and proclivity to generate and distribute deepfake pornography. Computers in Human Behavior, 129, 107141.

Flew, T. (2021). Regulating platforms. Cambridge: Polity.

Henry, N. (2024, June 12). What to do if you, or someone you know, is targeted with deepfake porn or AI nudes. The Conversation. https://theconversation.com/what-to-do-if-you-or-someone-you-know-is-targeted-with-deepfake-porn-or-ai-nudes-232175

Lodhi, A. (2024, January 29). X blocks Taylor Swift searches: What to know about the viral AI deepfakes. Al Jazeera. https://www.aljazeera.com/news/2024/1/29/x-blocks-taylor-swift-searches-what-to-know-about-the-viral-ai-deepfakes

McGlynn, C. (2024, April 9). Deepfake porn: why we need to make it a crime to create it, not just share it. The Conversation. https://theconversation.com/deepfake-porn-why-we-need-to-make-it-a-crime-to-create-it-not-just-share-it-227177

Ortiz, S. (2024, January 30). Microsoft adds new Designer protections following Taylor Swift deepfake debacle. ZDNet. https://www.zdnet.com/article/microsoft-adds-new-designer-protections-following-taylor-swift-deepfake-debacle/

Suzor, N. P. (2019). ‘Who Makes the Rules?’. In Lawless: The secret rules that govern our lives (pp. 10-24). Cambridge, UK: Cambridge University Press.
