
With the rapid development of technology, social media has been flooded with hate speech and online abuse, which not only damages the Internet environment but also makes Internet governance more difficult.
Video from NBC News
In early 2025, an ordinary college student named Mary Kate Cornett became the target of a massive wave of online harassment after an anonymous post on X claimed that she was romantically involved with both her boyfriend and his father. A simple, fake story. But because it involved sex, power, and an attractive young woman, the post exploded, drawing over 1.5 million views. High-profile media figures and platforms, including ESPN commentator Pat McAfee and Barstool Sports, amplified the claim without verifying it. To make matters worse, Cornett’s personal information was leaked (a practice known as doxxing), and she and her family faced harassment and threats. Strangers made memes. Someone even launched a cryptocurrency coin using her initials (NBC News, 2025). It sounds ridiculous, but it ruined her life for weeks.
Cornett later released a video in which she strongly denied the rumors, calling the ordeal a “digital crucifixion.” Her family said they were planning legal action against those involved and the media outlets that spread the story. They also started a fundraising campaign to help victims of online harassment.
This case is a striking example of a multifaceted digital phenomenon in which personal reputations are undermined by unfounded rumors and malicious online behavior. At its core, this incident reveals several key trends in the digital era: the rapid spread of misinformation, the vulnerability of women’s reputations, and the weaponization of personal data in defamation campaigns.
So, what caused these problems?
The Reasons Behind
One of the main drivers of such online defamation is that, in an attention-driven world, controversial or scandalous stories are more attractive than others. Ambiguous claims, vulgar stories, and sensational headlines circulate rapidly and draw outsized attention. Additionally, in many online communities, particularly those built around entertainment and celebrity culture, there is a perverse incentive to participate in sensationalism. Its allure often overrides considerations of privacy and truth, leading to widespread participation in public shaming.

Figure 1. From Pixabay
The algorithm also plays a critical role. Social media platforms are designed to show us content that keeps us engaged. Posts that are sensational or controversial, whether true or not, often get more attention because the algorithms reward high interaction numbers. This setup unintentionally helps spread misinformation: when social media algorithms boost extreme content, such as misogynistic posts, they amplify harmful material (UCL, 2025).
Bias is also an important factor. Women are disproportionately targeted by online harassment, a phenomenon rooted in entrenched gender stereotypes and systemic sexism. One study pointed out that male-supremacist ideas and followings are growing, and found a clear link between the increasing abuse of women online and the recent rise and mainstreaming of anti-feminist and nefarious ideologies in the social media space (Marlet, 2023).
Finally, the absence of quick and effective recourse for victims also needs to be considered. Current mechanisms on social media platforms often fall short of swiftly addressing and mitigating the spread of false information, and the lack of robust policies and tools leaves users vulnerable to significant harm. A UNESCO study pointed out that although many platforms have put in place mechanisms for users to report abuse, these mechanisms are often not effective or responsive enough (UNESCO, 2023). A study by Equality Now also found that legal complexities and shortfalls in state and tech industry protections make it arduous and slow for victims to obtain help from online platforms (Equality Now, 2024). Targets of attacks face a legal labyrinth when attempting to seek redress for the harm caused.
The Negative Impacts on Individuals and Society
The consequences of such online defamation are profound and multifaceted. Victims often experience intense emotional distress, including anxiety, depression, and PTSD, and the toll extends beyond the individual targeted to their families. Research suggests that the modus operandi of social media platforms amplifies gendered disinformation and hate speech, with devastating effects on women’s lives and mental health (UNESCO, 2023). In Cornett’s case, the rumor quickly escalated into targeted harassment and even a swatting attack at her mother’s home, an act that could have turned deadly.
UNESCO found that women can also be professionally and reputationally damaged (UNESCO, 2023). Reputational damage on the Internet is rarely recoverable: once disinformation becomes part of someone’s digital footprint, it is nearly impossible to remove. Cornett’s name, photos, and personal details are now permanently entangled with a lie she had no part in creating, which affects her education, job prospects, and even future relationships.
At the societal level, the disproportionate targeting of women in these smear campaigns reinforces not only existing gender biases but also a culture of misogyny in the digital space. Social media algorithms significantly increase the visibility of misogynistic content, thereby embedding such harmful ideologies within digital culture (UCL, 2025). The cumulative effect of these dynamics is a hostile online environment that discourages women from participating.
How Do We Stop the Next Mary Kate Cornett Crisis?

Figure 2. From Pixabay
Individual-Level Actions
- Enhancing Digital Literacy
As the UCL study argues, combating this worrying trend requires wider pressure and continued advocacy for digital literacy training (UCL, 2025). It is essential to teach internet users how to critically evaluate online information: if people understand the tools and tactics used to manipulate digital content, they can better tell fact from fiction. Workshops, online courses, and public service campaigns can help people verify that information is real before they share it.
- Thinking Critically
The digital public square runs on engagement, and every like, share, comment, or repost either amplifies or mitigates harm. False narratives like the one targeting Mary Kate thrive on viral momentum. Being a responsible digital citizen doesn’t mean staying silent—it means thinking critically: Where did this information come from? Does it sound too outrageous to be true? Could this harm someone if it’s wrong? Even simple prompts to “think before sharing” can significantly reduce the spread of false information.
- Adopting Robust Privacy Practices
It’s important to use the privacy settings on social media: limiting who can see your personal information reduces the risk of being targeted by malicious actors. Be careful when sharing personal information online, because there is always a chance it will be misused later. One study recommends that information related to individuals’ identities be stored, transferred, and queried as little as possible (Eskola, 2012).
Platform-Level Strategies
- Reshaping the Algorithmic Incentives
Research has found that when algorithms boost extreme content, such as misogynistic posts, they amplify harmful material (UCL, 2025). It is therefore important for platforms to audit and adjust their algorithms to prevent the amplification of harmful content. By prioritizing content that promotes positive engagement and demoting material that spreads misinformation or misogyny, platforms can reduce the visibility of harmful posts.
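The demotion idea above can be made concrete with a minimal sketch. This is purely illustrative, not any platform’s real ranking system: the engagement weights, the `harm_score` field (assumed to come from some upstream classifier), and the penalty weight are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    harm_score: float  # 0.0 (benign) to 1.0 (likely harmful), from a hypothetical classifier

def engagement_only_rank(post: Post) -> float:
    # A pure engagement score rewards controversy, true or not.
    return post.likes + 2 * post.shares + 1.5 * post.comments

def adjusted_rank(post: Post, penalty_weight: float = 0.9) -> float:
    # Demote content in proportion to its estimated harm, so virality
    # alone can no longer carry a harmful post to the top of the feed.
    return engagement_only_rank(post) * (1 - penalty_weight * post.harm_score)

# Two posts with identical engagement: only the harm estimate differs.
viral_rumor = Post(likes=900, shares=500, comments=400, harm_score=0.8)
benign_post = Post(likes=900, shares=500, comments=400, harm_score=0.0)
```

Under engagement-only ranking the two posts tie; with the harm penalty, the rumor drops well below the benign post even though users interacted with it just as much.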
- Setting the Mechanism
Platforms can establish a transparent mechanism that shows the source of information and any existing disputes, and clearly labels problematic information. This helps people identify and address harmful behavior before it spirals out of control. One study suggests that a mechanism that matches rumors with counter-rumors on Twitter and displays the two sides together would help reduce the spread of rumors (Ozturk, Li & Sakamoto, 2015).
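A rumor/counter-rumor pairing mechanism could look something like the following sketch. The lookup table, post IDs, and label wording are hypothetical; the point is only that a disputed post is never shown without its refutation attached.

```python
# Hypothetical registry mapping disputed post IDs to their refutations,
# e.g. populated by fact-checkers or matched automatically.
refutations = {
    "post-123": "Fact-check: no evidence supports this claim.",
}

def render_post(post_id: str, text: str) -> str:
    """Show the post; if it is disputed, display its refutation alongside."""
    counter = refutations.get(post_id)
    if counter is None:
        return text
    return text + "\n[Disputed] " + counter

labeled = render_post("post-123", "Shocking claim about a student...")
plain = render_post("post-456", "An ordinary update.")
```

Readers of the disputed post see both sides of the information together, which is the display strategy Ozturk, Li & Sakamoto found helpful.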
- Combining AI Tools with Human Oversight in Content Moderation
Social media platforms should have stricter content moderation policies (Marlet, 2023). Although they use artificial intelligence (AI) tools to flag harmful content, these tools frequently struggle to comprehend context and nuance, which often leads to over-censorship or a failure to recognize harmful posts. Human content moderators, in turn, may lack the cultural background knowledge to make appropriate decisions, resulting in inconsistent enforcement of content policies. Consequently, combining AI tools with human oversight seems a valuable way to increase moderation effectiveness: AI tools are efficient at sifting through massive amounts of content, while human moderators contribute contextual understanding and cultural sensitivity.
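The division of labor described above can be sketched as a simple triage rule. This is a minimal illustration, not a real platform pipeline; the thresholds and the classifier’s probability scores are assumed values.

```python
def triage(harm_probability: float,
           remove_threshold: float = 0.95,
           allow_threshold: float = 0.10) -> str:
    """Route a post based on a classifier's estimated probability of harm."""
    if harm_probability >= remove_threshold:
        return "auto-remove"   # clear-cut violations: act immediately, at machine scale
    if harm_probability <= allow_threshold:
        return "auto-allow"    # clearly benign: no human time spent
    return "human-review"      # ambiguous context or nuance: route to a person

# Hypothetical classifier outputs for three posts.
routes = {
    "explicit threat": triage(0.99),
    "news report quoting a slur": triage(0.55),
    "cat photo": triage(0.02),
}
```

Only the ambiguous middle band, where context matters most (quotation versus endorsement, satire versus abuse), consumes human moderator time.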
- Support Networks for Victims
It’s also really important to set up support systems for people targeted by online defamation, including counseling services, legal assistance, and dedicated helplines. When victims feel supported by their community, they can better cope with the psychological impacts of online harassment. These findings underscore the potential of online communities and social media as valuable channels for abused women to disclose their situations, seek social support, obtain useful information, and receive emotional encouragement (Whiting et al., 2022).

Figure 3. From Pixabay
Governmental and Institutional Interventions
- Strengthening Legislation and Enforcement
Governments must ensure that existing data protection and defamation laws are substantial and adequately enforced. Equally important is that state actors recognize the need to regulate online harassment, including recommendation algorithms on social media platforms, and equip law enforcement bodies with sufficient resources and training to deal with cyber defamation and the misuse of personal data. In April 2025, Victoria’s Parliament passed comprehensive anti-vilification legislation, imposing penalties of up to five years’ imprisonment for serious offenses such as inciting hatred or making physical threats (Kolovos, 2025). Additionally, laws and regulations should be developed and implemented with input from survivors and women’s organizations (UN Women, 2025).
- International Cooperation
Since much of the harmful content circulates across borders, cooperation is essential. Governments, the technology sector, women’s rights organizations, and civil society should work together to share best practices, harmonize legal frameworks, and collaborate on investigations targeting transnational data theft and online harassment networks. UN Women is leading efforts to combat technology-facilitated gender-based violence by advocating for laws that protect women and girls, closing data gaps, and establishing community support networks(UN Women, 2025). These measures are designed to safeguard women’s rights to privacy, safety, and participation in the digital age.
- Encouraging Women’s Participation
As the UNESCO study notes, greater diversity among the designers of AI-based tools for detecting unwanted content is crucial (UNESCO, 2023). It is important to encourage women’s active participation and leadership in the technology sector: with women more involved in the development process, products and services will be more inclusive and better meet the needs of users of all genders.
- Imposing Restrictions and Penalties
Governments should also strengthen the supervision of platforms and penalize those that allow harmful content to proliferate or fail to adequately safeguard user data.
Challenges and Considerations
While these measures represent significant progress, challenges remain. Firstly, finding the right balance between free speech and regulation can be quite challenging. It requires careful consideration to create laws that effectively address misinformation while also respecting individuals’ rights to free expression.
Furthermore, achieving global cooperation is difficult. Legal protections against cyber-harassment are inconsistent globally, with many regions lacking comprehensive frameworks. A World Bank assessment revealed that legal protection against cyber harassment is weak worldwide, regardless of region or income level(Recavarren & Elefante, 2024).
In addition, women represent only 12% of machine learning researchers and 22% of all AI professionals (World Economic Forum, 2024), which means that empowering women to participate in science and technology will also require substantial investment of human resources and time.
Conclusion
At the end of the day, what happened to Mary Kate Cornett isn’t just a “one-off internet drama”—it’s a reflection of how messy, fast, and sometimes cruel our digital world has become. A fake story, a few viral posts, and suddenly someone’s real life gets flipped upside down. It’s scary how easy that is. And let’s be honest: it could happen to anyone, especially women.
But that doesn’t mean we’re powerless. If we start thinking twice before sharing rumors, check sources instead of just clicking “like,” and keep our personal info a bit more private—we’re already making progress. Platforms can definitely do better too. They built these viral machines; they should also be responsible for slowing them down when things get toxic. And governments? They need to update their rulebooks for the internet age and stop letting platforms play judge, jury, and algorithm.
Sure, fixing all this isn’t going to be easy. But if we all do a little—learn a bit more, speak up when something feels off, push for better rules—we might just make the internet a safer, kinder space for everyone. Especially for women, who deserve way more respect than the digital world currently gives them.
References
College student reflects on impact of viral online rumor that “ruined” her life. (2025, April 3). [Video]. NBC News. https://www.nbcnews.com/news/us-news/university-mississippi-student-speaks-about-viral-rumor-rcna199330
UCL. (2025, March 11). Social media algorithms amplify misogynistic content to teens. UCL News. https://www.ucl.ac.uk/news/2024/feb/social-media-algorithms-amplify-misogynistic-content-teens
Backe, E. L., Lilleston, P., & McCleary-Sills, J. (2018). Networked individuals, gendered violence: A literature review of cyberviolence. Violence and Gender, 5(3), 135–146. https://doi.org/10.1089/vio.2017.0056
Marlet, C. (2023, November 16). The impact of the use of social media on women and girls. Wo.Men.Hub. https://gwmh.org/project/the-impact-of-the-use-of-social-media-on-women-and-girls/
Recommending toxicity: The role of algorithmic recommender functions on YouTube Shorts and TikTok in promoting male supremacist influencers. (n.d.). DCU Anti-Bullying Centre. https://antibullyingcentre.ie/publication/recommending-toxicity-the-role-of-algorithmic-recommender-functions-on-youtube-shorts-and-tiktok-in-promoting-male-supremacist-influencers/
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking (Vol. 27, pp. 1–107). Strasbourg: Council of Europe.
Eskola, M. (2012). From risk society to network society: Preventing cybercrimes in the 21st century. Journal of Applied Security Research, 7(1), 122–150.
Ozturk, P., Li, H., & Sakamoto, Y. (2015). Combating rumor spread on social media: The effectiveness of refutation and warning. https://ieeexplore.ieee.org/abstract/document/7070103
Cognitive psychology analysis on the causes, mechanism and countermeasures of the spread of internet rumors. (2019). Revista Argentina de Clinica Psicologica. https://doi.org/10.24205/03276716.2020.346
Whiting, J. B., Davies, B. N., Eisert, B. C., Witting, A. B., & Anderson, S. R. (2022). Online conversations about abuse: Responses to IPV survivors from support communities. Journal of Family Violence, 38(5), 791–801. https://doi.org/10.1007/s10896-022-00414-5
Kolovos, B. (2025, April 2). A ‘Sam Kerr clause’ and long jail terms: Victoria passes tough new anti-vilification and social cohesion laws. The Guardian. https://www.theguardian.com/australia-news/2025/apr/02/victoria-anti-vilification-social-cohesion-laws-sam-kerr-ntwnfb
Recavarren, I. S., & Elefante, M. (2024, March 16). Protecting women and girls from cyber harassment: A global assessment. World Bank Blogs. https://blogs.worldbank.org/en/developmenttalk/protecting-women-and-girls-cyber-harassment-global-assessment
UNESCO. (2023, April 20). How to combat hate speech and gendered disinformation online? UNESCO provides some ideas. https://www.unesco.org/en/articles/how-combat-hate-speech-and-gendered-disinformation-online-unesco-provides-some-ideas
Equality Now. (2024, October 29). Lack of legal protections against doxing is putting women at greater risk of online stalking and harassment. https://equalitynow.org/press_release/lack-of-legal-protections-against-doxing-is-putting-women-at-greater-risk-of-online-stalking-and-harassment/
UN Women. (2025). FAQs: Digital abuse, trolling, stalking, and other forms of technology-facilitated violence against women. UN Women – Headquarters. https://www.unwomen.org/en/articles/faqs/digital-abuse-trolling-stalking-and-other-forms-of-technology-facilitated-violence-against-women
World Economic Forum. (2024, September 10). Will AI make the gender gap in the workplace harder to close? https://www.weforum.org/stories/2018/12/artificial-intelligence-ai-gender-gap-workplace/