The Cost of a Comment: Doxxing, Power, and Privacy in the Digital Age

Online privacy is breaking down. Every click, like, and search leaves a trace of personal data on platforms. Most of us give away our details without thinking, not because we want to, but because we are given little real choice: think of the "accept all cookies" banner. Social media, shopping apps, and cloud services all collect huge amounts of our data, including names, phone numbers, locations, and even what we look at or search for. This information is then used to profile our interests and sold on to other companies.

According to Flew (2021), almost every digital platform company has faced privacy concerns. Facebook is one of the most well-known cases. In 2018, the Cambridge Analytica scandal made headlines: it emerged that the firm had harvested data from around 87 million Facebook users through a third-party app.


The problem is not just that data is collected and sold. It's that we don't know where it goes, who sees it, or how it will be used. When platforms leak this data, it can become a weapon against us. In the worst cases, it ends in doxxing: searching out someone's private information and sharing it on the internet, usually without the person's permission and often alongside online attacks. Worse still, some victims are even attacked in real life.


Digital privacy is not just a personal problem; it has become a public concern. Suzor (2019) argues that platforms often work like black boxes, meaning we cannot know how our data is handled or who has access to it. Recently, a serious doxxing case in China caught the public's attention. What shocked people most was how the attacker obtained the victim's personal information.


Background: Who’s Involved?


This case centers on Xie Guangjun, a vice president at Baidu, and his daughter. Baidu is one of China's largest tech companies, widely known for its search engine. It also runs Baidu Netdisk, a cloud storage platform that offers users 2TB of free space to sync files from their phones or computers.

The controversy began when Xie's daughter posted on Weibo, a Chinese social media platform, about her idol's busy schedule. An ordinary netizen casually replied that if the idol had flown first class, the idol wouldn't have looked so tired. Although the comment didn't sound harsh or insulting, it made her furious; as a passionate admirer, she felt it was an attack on her idol. So she doxxed the commenter, revealing not only the commenter's name but also her husband's name and phone number, her home address, and even her workplace, and posted this sensitive data in the idol's fan club.

The doxxing unleashed a flood of harassment on the victim and her family. Strangers sent her offensive messages, and a few even contacted her workplace to complain about her. More seriously, she was pregnant at the time. The attacks not only harmed her physically and emotionally but also put her unborn baby's safety at risk.

Tracing the Leak


In this incident, the victim left a short comment on a digital platform. That space is public, so it felt like a safe place to speak freely. People do not expect serious harm just for sharing opinions. However, her private information, such as her name, address, and phone number, was later shared in a much more aggressive and hostile space. That shift broke the expected social rules of online privacy.


Things got worse when people online began to guess how the daughter had obtained this information. Some believed she had used Baidu Netdisk to get the victim's data. Unsurprisingly, Xie denied this, in a post visible only to his friends, saying the information came from another public platform. But doubts remained, because his daughter had a history of doxxing people. By the time this incident happened, the public had already identified at least 16 previous victims, and in those incidents the leaked personal details appeared in exactly the same format. This made many people conclude that the source was the same every time, and that the data came from inside the company, not from public sources.

When the Powerful Cross the Line


Online privacy should protect everyone. But in reality, it protects those with the most power. We often think about governments invading our privacy. Or we blame platforms that collect too much data. But sometimes, the threat is personal. It comes from people who are close to the system and know how to use it for their own interests, just like Xie’s daughter.


Power today is not only about money or title but also about data. Having access to information means having control over others. And when that access is used to punish, humiliate, or silence someone, the damage is huge. It’s not just a technical issue, it’s a social problem.


We’ve seen this before. In the Cambridge Analytica scandal, Facebook data was used to manipulate voters without their knowledge (Cadwalladr & Graham-Harrison, 2018). In South Korea, the Nth Room case showed how tech insiders helped run secret chatrooms where women were blackmailed using personal information (De Souza, 2020). In both cases, the platforms stayed silent for too long. The people harmed were ordinary users. The people protected were those with power.


This pattern keeps repeating globally. Some people get access and others get exposed. Those in charge are rarely punished. The rest of us are told to “be careful what you post.” But caution is not enough when the rules are uneven. Telling users to protect themselves, while ignoring those who abuse the system, is not safety. It’s avoidance.


I don't think most people understand how fragile their privacy is until it has already been leaked and it is too late to protect it. One person with connections can reach into your digital life and make it public. And when that happens, there is rarely a way to undo the harm. This is what Nissenbaum (2010) calls a breakdown of contextual integrity: information shared on one platform, in one context, is taken out of it and used somewhere else, which often causes harm.

In the Baidu case, the victim left a simple comment on a popular platform. That space felt casual and low-risk, a place to share opinions without fear. But her comment was taken out of that space and used against her. Her identity was tracked, matched with real-world details like her name and workplace, and then exposed to a group of easily influenced and impulsive fans. This was not about starting a discussion; it was about punishment.


Doxxing is not random; it's patterned. It's rooted in power imbalance, a structure in which powerful people are protected while others are left exposed. Until platforms take that imbalance seriously, privacy will keep breaking down.

Platforms Are Not Just Neutral Tools


What really bothers me is not just the doxxing, but the platform's silence when things go wrong. Platforms like Baidu operate like black boxes, meaning we don't know what happens inside. We don't know who can access our data, how it's being stored, or whether it's being misused (Suzor, 2019). And when something does go wrong, companies offer vague public statements and hope the news cycle moves on.


But this is not just about one employee or one impulsive fan. It's a sign of platform governance failure. Rules exist, but enforcement is weak, especially when the harm comes from someone who has power or is close to the company.

Let's be honest: platforms benefit from this silence. The less users know about how their data is handled, the fewer questions they ask. That is what Martin Tisné (2020) describes as the data delusion: the false idea that protecting individual data is enough when the systems themselves are built to be opaque and unaccountable.


This case also shows how platforms choose who to protect. When the person doing harm has status, companies often say less, do less, and hope for less backlash. But when the person harmed is an ordinary user? They’re left to deal with the mess alone.

The Algorithm Doesn’t Care Who Gets Hurt


We need to talk about what happens after private information is leaked. In many cases, it's not just people spreading it; platforms also encourage it. Recommendation algorithms, the systems that decide what content we see, are not neutral. They boost posts that attract attention, even if that attention is negative, driven by harm or hate. This is what Noble (2018) calls algorithmic oppression: automated systems, like search engines or recommendation tools, often make social problems worse instead of fixing them, pushing harmful content higher and ignoring context, even when those posts hurt people.
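To make this concrete, here is a minimal sketch of engagement-based ranking in Python. It is purely illustrative, not any real platform's code: the weights and the example posts are invented. The structural point is that a score built only from likes, shares, and comments rewards whatever draws attention, so a doxxing post that provokes outrage can outrank a helpful one.

```python
# Illustrative only: a toy engagement score, not any platform's real code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> int:
    # Hypothetical weights; real systems tune many more signals.
    # Note what is missing: nothing asks WHY the post gets attention.
    return post.likes + 3 * post.shares + 2 * post.comments

posts = [
    Post("Tips for protecting your privacy online", likes=120, shares=10, comments=15),
    Post("Here is that commenter's home address...", likes=90, shares=60, comments=200),
]

# Outrage drives shares and comments, so the doxxing post ranks first.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(p), "-", p.text)
```

Nothing in the score distinguishes curiosity from cruelty. That is the sense in which the algorithm doesn't care who gets hurt.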


In the Baidu incident, the personal information was first shared in a fan group. But it didn't stay there; it spread across different platforms. Not because any platform approved it, but because nothing in the algorithms was designed to stop it. The harmful content attracted engagement, so it stayed visible and was recirculated again and again.

How Can We Protect Digital Privacy?

In this case, it is clear that privacy cannot protect itself. If we want a safer online environment, change needs to come from three directions: governments, platforms, and us, the users.

At the government level, the first step is to create and enforce laws that are actually effective. A good data protection law places clear restrictions on how companies collect and use personal data. Europe has the General Data Protection Regulation (GDPR), and China has the Personal Information Protection Law (PIPL) (IPR, 2025). Both give users more rights, such as knowing who is using their data and making it easy to request deletion. However, a law no one enforces is just a set of words. When someone leaks data, or a platform fails to stop abuse, there should be real penalties (Flew, 2021).

Platforms should bear the largest share of responsibility for privacy failures and their consequences. They talk a lot about protecting our privacy, but when something goes wrong they act slowly, if at all. That needs to change. Platforms should run regular internal audits, structured reviews of who accesses what data and why; one toy example of such an audit rule is sketched below. They should also strengthen moderation tools for user-generated content, systems that detect and block harmful posts before they spread. Such tools exist, but they are weak or inconsistently applied, especially when the target is an ordinary user (Suzor, 2019).
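As a hedged illustration of what one internal audit rule might look like, here is a toy Python sketch that flags internal accounts looking up an unusual number of distinct users' records. Everything here is an invented assumption: the log format, the account IDs, and the threshold. A real audit system would baseline per role and time window and weigh many more signals.

```python
# Toy audit rule (assumptions throughout): flag internal accounts that
# look up many distinct users' records. Log format and threshold are invented.
from collections import defaultdict

# Each entry: (internal_account, user_record_accessed)
access_log = [
    ("emp_001", "user_42"), ("emp_001", "user_42"),
    ("emp_002", "user_07"),
    ("emp_003", "user_01"), ("emp_003", "user_02"), ("emp_003", "user_03"),
    ("emp_003", "user_04"), ("emp_003", "user_05"), ("emp_003", "user_06"),
]

distinct_lookups = defaultdict(set)
for account, record in access_log:
    distinct_lookups[account].add(record)

THRESHOLD = 5  # toy value; a real system would calibrate this per role
for account, records in sorted(distinct_lookups.items()):
    if len(records) > THRESHOLD:
        print(f"AUDIT FLAG: {account} accessed {len(records)} distinct user records")
```

The point is not the specific rule but the discipline: if access to user data were logged and reviewed this way, a pattern of 16 identically formatted leaks would be hard to miss.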

And then there's us. Most of us click "accept" without reading the terms. We don't always understand how much data we're giving away, and that's not entirely our fault: many apps bury their privacy risks in long documents full of legal language. So we need better digital literacy, the skills to understand how platforms work and how to protect ourselves online. As Tisné (2020) argues, individual control is not enough if the system itself is broken, but it's still a place to start. Next time, try not to click "accept all cookies" right away.

Protecting privacy is not just about avoiding harm. It’s about building a digital world that treats people with respect. We cannot fix that immediately. But we can demand more from those in power, from the tools we use every day, and even from ourselves.

Conclusion


This is more than a story about fan culture. It is a warning about the abuse of digital power. It shows how quickly our privacy can break down on the internet. When people with access to platforms want to harm others, it becomes a serious digital rights issue. We need more comprehensive policy, more transparency, and stronger systems to protect our private data. Platforms, governments, and users must work together to make the internet safer for everyone.

References

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

De Souza, N. (2020, April 20). The Nth Room case and modern slavery in the digital space. Lowy Institute. https://www.lowyinstitute.org/the-interpreter/nth-room-case-modern-slavery-digital-space

Flew, T. (2021). Regulating platforms. Polity Press.

IPR. (2025, January 27). How do the European Union’s GDPR and China’s PIPL regulate cross-border data flows? International Policy Review. https://ipr.blogs.ie.edu/2025/01/27/how-do-the-european-unions-gdpr-and-chinas-pipl-regulate-cross-border-data-flows/

Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Suzor, N. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press.

Tisné, M. (2020, July 2). The data delusion: Protecting individual data isn’t enough when the harm is collective. Luminate. https://luminategroup.com/posts/blog/the-data-delusion
