
When you open your eyes every morning, do you tap on Instagram, YouTube, or TikTok like I do? Maps tell us where to go, recommendation algorithms tell us what to read, and shopping sites remind us that “you might also like…”. These digital platforms have permeated every corner of our lives.
This life seems very “free”: we can watch whatever we want and buy whatever we want. But have you ever stopped to think: is the information we see really our own choice? Are those “privacy settings” really there to protect us, or are they just for show? When platforms collect our data, have they really asked for our consent?
Our generation, especially, has grown up on the Internet and is used to accepting the platforms’ rules by default. But slowly we are beginning to realize that many of our “choices” have already been arranged for us: algorithms determine what we see, and the platforms decide whether or not we can speak. We may be becoming a manipulated generation.
In this blog, I would like to talk with you about a few key questions that have long shaped our lives: are digital platforms actually protecting our privacy? Are privacy settings a safeguard or a ruse? Do users have any real say in how platforms are run? And are the algorithms that affect our lives every day really fair?
These questions may sound complicated or distant, but in fact they affect each and every one of us every day. Through a few real, close-to-life cases, I hope to offer a different perspective on our roles and situations in the digital world. Only when we recognize how these “invisible controls” operate can we really take the initiative to fight for the privacy, rights and respect we deserve.
The illusion of privacy: does clicking “Agree” mean I really agree?
On almost every digital platform, users must click “I agree” when registering in order to continue using the service, and this “click to agree” practice has become the default process of digital life. But this apparent “right to choose” is really a kind of pseudo-consent: users are pressured into accepting complex terms drafted by the platforms under conditions of severe information asymmetry (Nissenbaum, 2009). These terms are typically lengthy and difficult to understand, and often hide vague, open-ended authorizations.

According to the theory of Contextual Integrity proposed by Helen Nissenbaum, privacy is not simply the right to control information; rather, it requires that information flow in appropriate ways within the social context in which it was shared. If information is transferred to an inappropriate context, then even if the user has “consented,” it is still a privacy violation. For example, when someone shares a photo of themselves on social media, they want their friends to see it, not to have it used to train facial recognition systems or to target advertisements. When the platform repurposes that information for commercial gain without the user’s knowledge or ability to refuse, this constitutes a systematic invasion of privacy.
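To make the idea more concrete, here is a toy sketch in Python (my own simplification of the framework’s flow parameters, not code from Nissenbaum or any platform): the same piece of information can be fine in one flow and a violation in another.

```python
from dataclasses import dataclass

# Toy model of contextual integrity: an information flow is described by a few
# parameters and checked against the norms of the context it originated in.
# The parameter names and norms below are illustrative assumptions, not a
# faithful implementation of the full framework.

@dataclass(frozen=True)
class Flow:
    info_type: str               # e.g. "photo"
    sender: str                  # who passes the information on
    recipient: str               # who receives it
    transmission_principle: str  # the condition under which it flows

# A norm of the "sharing with friends on social media" context.
APPROPRIATE_FLOWS = {
    ("photo", "user", "friends", "shared voluntarily"),
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow violates contextual integrity when it matches no norm of its context."""
    key = (flow.info_type, flow.sender, flow.recipient, flow.transmission_principle)
    return key not in APPROPRIATE_FLOWS

ok = Flow("photo", "user", "friends", "shared voluntarily")
bad = Flow("photo", "platform", "ad system", "repurposed without consent")
print(violates_contextual_integrity(ok))   # False: matches the context's norms
print(violates_contextual_integrity(bad))  # True: same photo, inappropriate flow
```

The point of the sketch is simply that “consent” given in one context does not make every later flow of the same data appropriate.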
TikTok has been criticized by several governments for its data collection practices, especially its handling of underage users’ data. In 2019, the U.S. Federal Trade Commission (FTC) fined the company $5.7 million for illegally collecting personal information from children under the age of 13 (Wong, 2019). The investigation found that TikTok had collected children’s names, email addresses, location information, and facial recognition data without parental consent, which not only violated the Children’s Online Privacy Protection Act (COPPA) but also exposed how the platform used subtle means to “legally” obtain user data.

This is not an isolated case but a common problem with digital platforms: users lack the right to know, to choose and to refuse, while the platforms hold almost unlimited control.
The limits of privacy settings: what we need is structural protection
Many digital platforms offer so-called “privacy settings” that purport to give users control over their personal data, such as turning off personalized advertisements, refusing location tracking or restricting data sharing. While such settings appear to safeguard users’ privacy, they hide a number of problems. First, the settings interface is often complex and buried, making it hard for the average user to find the key options quickly. Second, the defaults are usually “on”, so users unknowingly consent to the collection and use of their data. Third, even when users go to the trouble of turning certain features off, they have no way of knowing whether the settings actually take effect.
More worryingly, many platforms continue to capture user data through third-party software development kits (SDKs), cross-site tracking, cookie synchronization, and other means. Most of these “privacy settings” are cosmetic rather than a good-faith effort to give users real control over their data.
In 2018, an Associated Press investigation revealed that Google continued to collect users’ location information through other services (e.g., Google Maps and weather apps) even after they had turned off the “Location History” feature. This behavior has been criticized as a “dark pattern”: a deliberately complex or ambiguous design that steers users toward options that benefit the platform at the expense of their privacy (Nakashima, 2018).
This case reflects a deeper problem: platforms shift the responsibility for privacy onto users rather than providing protection by design. Platforms should build in the principle of “privacy by default” from the earliest stages of product development, limiting unnecessary data collection and use, instead of leaving users to manage it through complicated settings. Only when privacy is the default, rather than an option buried in a menu, can users truly enjoy respect and control over their data.
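As a rough illustration of what “privacy by default” means in practice, here is a minimal Python sketch (the setting names are hypothetical, not any platform’s real configuration): every form of data collection starts switched off, and nothing is collected unless the user explicitly opts in.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Hypothetical account settings illustrating privacy by default."""
    personalized_ads: bool = False     # opt-in rather than opt-out
    location_tracking: bool = False
    third_party_sharing: bool = False

def may_track_location(settings: PrivacySettings) -> bool:
    # Collection is allowed only if the user has explicitly switched it on,
    # so a brand-new account shares nothing at all.
    return settings.location_tracking

print(may_track_location(PrivacySettings()))                        # False: default is no tracking
print(may_track_location(PrivacySettings(location_tracking=True)))  # True: the user opted in
```

The design choice is the whole argument: when the default is “off”, the burden sits with the platform to ask, not with the user to dig through menus.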
Do we really have digital rights?
On digital platforms, the “rights” of users are often superficial: in practice, users can only consume the platform’s services; they cannot participate in or contest its rules. Platform operations and decision-making are dominated by corporations, algorithm engineers, and business stakeholders, leaving little decision-making power to ordinary users, especially young people (Smith, 2019). Platforms’ terms of service are often complex, opaque and difficult to understand, making it hard for users to know and exercise their rights, and there is no easy way to complain or seek redress. This situation is widespread and leads to opaque and unfair rules.
In 2018, it was revealed that the personal data of more than 87 million Facebook users had been harvested by a third-party company, Cambridge Analytica, and used to target political advertisements. The data was not taken directly from users’ Facebook accounts; it was collected through apps such as a “personality test”, and users did not realize that they were consenting to share not just their own answers but their entire social graph (Cadwalladr & Graham-Harrison, 2018). This incident exposes a fundamental problem: despite owning an account, users have no real control over the platform’s operational and governance rules. Platform companies can change policies, tweak algorithms, and even remove users’ content at will, while users have almost no power to intervene in these decisions (Cadwalladr & Graham-Harrison, 2018). This highly unequal relationship between users and platforms further exacerbates the risk of privacy violations and data misuse.
At the same time, the incident shows that users’ “rights” on digital platforms are far from guaranteed. As the people the platforms are supposed to serve, users should have a stronger right to know, to control their data and to participate, rather than merely accepting the platform’s decisions passively. Platforms therefore need to reflect on this at a fundamental level: granting users more rights in legal terms is not enough; respect for users’ rights must also be built into the technology and the design.
The invisible silencing of teenagers: from users to the controlled
Although adolescents are among the Internet’s most active users, they usually have no say in the rules and policies of the platforms they use. While some platforms claim to protect minors, in practice they mostly restrict their freedom and choices. For example, many platforms prohibit users under the age of 13 from registering, so teenagers often sign up with a false age and thereby lose the protections they are entitled to. At the same time, platforms censor content too aggressively, removing much of teenagers’ normal expression, especially around sensitive topics such as sexuality and body awareness (Federal Trade Commission, n.d.). These practices treat adolescents as a “vulnerable group” without giving them any right to be informed, to receive explanations, or to participate in rule-making.
Some users have found that Instagram down-ranks content showing larger bodies, darker skin, or untidy surroundings, even though these reflect the real lives of many teenagers. The platform’s “algorithmic aesthetics” not only damages teens’ self-confidence but also drains their motivation to express their true selves (Richman, 2019). This suggests that adolescents are not only systematically excluded from rule-making; their behaviors, interests, and even values are invisibly shaped by the platform’s algorithms. Through this implicit censorship, adolescents are pushed to conform to platform-defined “standards” and cannot express themselves freely.

In this situation, teenagers not only lose their voice on digital platforms; their behavior and thinking are also unconsciously shaped by the platforms’ algorithms. Instead of protecting teenagers’ interests, the platforms’ algorithms and rules have deepened their vulnerability. Platforms should therefore rethink how they manage underage users and give teenagers more rights and a louder voice, so that they can express themselves more freely and safely in the digital world.
Are algorithms neutral? Who oversees the platforms?
Many platforms claim that “algorithms are neutral”, but this is not the case. Algorithms are ultimately human-written code: they embed the value judgments of their developers, the business goals of the platform, and the biases of the data they are trained on. Recommendation systems, for example, tend to favor whatever grabs attention, such as extreme views, polarizing conflict, and revealing imagery. This is driven not by “user interest” but by “platform interest”.
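To show how a value judgment hides inside a supposedly neutral formula, here is a deliberately simplified sketch in Python (invented numbers and field names, not any platform’s real ranking code): once the objective is “maximize engagement”, whatever provokes the strongest reaction rises to the top of the feed.

```python
# A deliberately simplified, hypothetical feed ranker. Choosing "predicted
# engagement" as the objective is itself a value judgment: content that
# provokes outrage scores higher and is shown first.

posts = [
    {"topic": "local news",      "predicted_clicks": 0.04, "predicted_outrage": 0.01},
    {"topic": "balanced debate", "predicted_clicks": 0.06, "predicted_outrage": 0.02},
    {"topic": "extreme take",    "predicted_clicks": 0.09, "predicted_outrage": 0.30},
]

def engagement_score(post: dict) -> float:
    # These weights are a design decision made by the platform's developers,
    # not a law of nature: they determine what the feed rewards.
    return 0.7 * post["predicted_clicks"] + 0.3 * post["predicted_outrage"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([post["topic"] for post in feed])  # the "extreme take" ends up on top
```

Nothing in this code “intends” to radicalize anyone, yet the choice of what to measure and reward quietly decides what we see.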
One study found that after users click on a few videos, YouTube’s recommendation system tends to push them toward channels with more extreme, conspiratorial, or violent content, producing filter bubbles and a drift toward polarization (Fisher & Taub, 2019). This raises the question: are we seeing what we want to see, or what the platform wants us to see?
Right now, most platforms do not disclose how their algorithms work, and ordinary users have no way of finding out. What we need is transparency about how algorithms operate, independent third-party oversight, and channels through which users can come together to voice their opinions and defend their rights.
We need more than choice. We need participation.
In this digital age, privacy, security and digital rights are no longer abstract technical topics; they are realities we all face every day. Digital platforms have brought convenience, but they have also quietly eroded many of the rights we are entitled to. From TikTok’s collection of children’s data, to Facebook’s leak of user information, to the algorithmic mechanisms we accept by default, our digital lives are increasingly being observed, predicted and manipulated. Privacy settings do not mean that users really have control; they have become a tool for platforms to shift responsibility onto users.
As a young person who grew up in the Internet age, I realize that digital rights will not be secured by technological and policy advances alone; we also need to stay alert to how platforms operate and to understand our own rights clearly. The digital space of the future should be a safe and fair environment, not a market where algorithms decide everything. We need to be brave enough to speak out, to join the discussion, and to shift from users to participants, so that together we can push for a more transparent and fair digital future.
References
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). The Cambridge Analytica files: The story of the data scandal that shook the world. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Federal Trade Commission. (n.d.). Children’s Online Privacy Protection Rule (COPPA). https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa
Fisher, M., & Taub, A. (2019, June 8). How YouTube radicalized Brazil. The New York Times. https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html
Nakashima, R. (2018, August 13). AP exclusive: Google tracks your movements, like it or not. Associated Press. https://apnews.com/article/828aefab64d4411bac257a07c1af0ecb
Nissenbaum, H. (2009). Respecting context to protect privacy: Why meaning matters. Proceedings of the IEEE, 96(1), 86–100. https://nissenbaum.tech.cornell.edu/papers/Respecting%20Context%20to%20Protect%20Privacy%20Why%20Meaning%20Matters.pdf
Richman, J. (2019, October 14). This is the impact of Instagram’s accidental fat-phobic algorithm. Fast Company. https://www.fastcompany.com/90415917/this-is-the-impact-of-instagrams-accidental-fat-phobic-algorithm
Smith, J. (2019). Who makes the rules. In J. Doe (Ed.), Digital rights and privacy: An overview (pp. 10–24). Cambridge University Press. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6688999078ABFE0821E84D76A055BE70/9781108481229c2_10-24.pdf/who_makes_the_rules.pdf
Wong, J. C. (2019, February 27). TikTok fined $5.7 million for violating children’s privacy. Wired. https://www.wired.com/story/tiktok-ftc-record-fine-childrens-privacy