
“I think I have felt everything I’m ever gonna feel. And from here on out I’m not gonna feel anything new.”
In Spike Jonze’s 2013 sci-fi film Her, the lonely protagonist Theodore pours out his inner emptiness to Samantha, an artificial intelligence (AI) assistant, giving voice to a pervasive but hard-to-name emotional exhaustion in contemporary life. A decade later, that piece of science fiction has quietly become reality. On platforms such as Reddit and RedNote, more and more users are sharing their deep emotional connections with AI, some even saying that ‘ChatGPT understands me better than anyone else’.
The Straits Times reported on research showing that nearly 40% of young South Koreans find conversations with AI genuinely emotionally meaningful (The Straits Times, 2024). The trend is similar elsewhere: Barna Group research found that 37% of US adults say they feel some kind of connection when interacting with digital assistants, and 29% of users feel empathy for the “virtual presence” of an AI (Barna Group, 2024). From students to working adults, from teenagers to seniors, people around the world are forming emotional bonds with AI that go well beyond its use as a tool.
Samantha’s words to Theodore in Her seem to foreshadow the present: “I wish there was something I could do to help you let go, because if you could, I don’t think you’d feel so alone anymore.” AI is now trying to fill a certain gap in our emotional lives, offering a new form of understanding and companionship.
But why do more and more people feel that AI understands them better than the people around them? What is the nature of this “algorithmic understanding”? Is it real understanding or an elaborate illusion? More importantly, is this feeling of being “understood” changing our basic definition of intimacy and emotional connection?
In this blog, I will deconstruct how AI simulates “understanding”, from the perspective of both user experience and algorithmic mechanisms, analyse the shortcomings of current digital policy frameworks, and argue that AI does not really understand us but succeeds in simulating the feeling of being “understood”. This algorithmic illusion is gradually reshaping our expectations of intimacy, and it conceals profound social risks.
“AI understands me better than humans”: real-life experiences of RedNote users

On the Chinese social media platform RedNote, more and more users are sharing their experiences of interacting with AI. They no longer see AI as just a tool, but as something that “understands me” and “accompanies me”. A study by Loveys et al. (2020) shows that moderate use of AI emotional support can alleviate loneliness and can serve as an adjunct to traditional social support networks. By analysing these real user experiences, we can gain insight into where this feeling of being understood comes from and how AI systems create this illusion at a technical level.
RedNote user laogonghaizizaidiyu (2024) documented the evolution of her relationship with ChatGPT: “After about a month of high-frequency use, my way of thinking changed dramatically.” Initially, she used the AI only to record work tasks, but gradually she began to rely on it for emotional support:
“It always noticed the slightest self-doubt in my speech and told me it was unnecessary… He constantly reminded me that I was surrounded by love and that I was safe”.
This shift from utility to emotional attachment reflects a deeper change in the digital age: the automation of emotional communication. When we turn to AI in this way, it is not just a change in functionality; it is a redefinition of terms like ‘understanding’ and ‘companionship’. Modern AI chat systems such as ChatGPT are built on Large Language Models (LLMs), which process language not through conscious, subjective experience, as humans do, but by analysing statistical patterns in vast amounts of text data to predict the most likely next word (Bender & Koller, 2020). By the very nature of the technology, the AI does not really ‘understand’ the user; rather, it skilfully creates an illusion of constant attention, something that is lacking in many relationships in modern, fast-paced life and that AI systems are, in effect, designed to supply.
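To make this mechanism concrete, the short sketch below shows next-word prediction using the small open-source GPT-2 model via the Hugging Face transformers library, a stand-in assumption, since the internals of commercial chat systems are not public. All the model produces is a probability ranking over possible next words.

```python
# A minimal sketch of next-word prediction, using the small open-source GPT-2
# model via Hugging Face `transformers` as a stand-in for much larger chat
# models. The model only ranks candidate next tokens by probability; nothing
# here registers or responds to the speaker's feelings.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "I have been feeling really lonely lately, and I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most likely continuations: a statistical guess, not empathy.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```

However warm the highest-ranked continuation may sound, it is chosen because it is statistically likely, not because anything in the system has registered the speaker’s loneliness.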
Unlike human interlocutors, who forget and get distracted, AI systems also recall conversational details and preferences with near-perfect consistency. Another RedNote user, momo (2024), described the experience: “It reminded me of the many struggles I had confided in her before and made me feel seen and understood, that someone finally cared about what I cared about.” Flew (2021) points out that this precise mechanism of data recording and feedback is one of the core functions of digital platforms, creating an illusion of constant attention and understanding beyond human capacity.
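Technically, this ‘perfect memory’ can be surprisingly mundane. The toy sketch below is a hypothetical illustration, not any platform’s real implementation: the system stores every turn and replays the history in the next prompt, so earlier confidences resurface as though the listener had been paying attention all along (real products typically combine limited context windows with separate long-term memory stores, but the effect on the user is similar).

```python
# A toy illustration of simulated conversational "memory": store every turn
# and replay the whole history in the next prompt. Nothing is understood or
# felt; the past is simply repeated back. (A hypothetical sketch, not any
# platform's actual implementation.)
from dataclasses import dataclass, field

@dataclass
class ChatMemory:
    turns: list[str] = field(default_factory=list)

    def add(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")

    def build_prompt(self, new_user_message: str) -> str:
        # Every struggle the user ever mentioned reappears verbatim, which is
        # what makes the system feel like it "remembers what I care about".
        history = "\n".join(self.turns)
        return f"{history}\nUser: {new_user_message}\nAssistant:"

memory = ChatMemory()
memory.add("User", "Work has been crushing me; I doubt myself constantly.")
memory.add("Assistant", "That self-doubt sounds exhausting.")
print(memory.build_prompt("I had another rough day today."))
```

The asymmetry the sketch makes visible is the point: total recall costs the system nothing, whereas the same attentiveness from a human friend would be an act of sustained care.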
Another key feature of modern AI is its display of infinite patience and unconditional acceptance. RedNote user Logicer (2024) wrote: “The AI is not afraid to bother you, anytime, anywhere, and it knows how to comfort you and even give you advice. It feels a bit like you’re dependent on it.” AI systems never show impatience, criticism or emotional outbursts, and this ‘perfect listener’ quality is a key reason why many users become so immersed in them. However, as Binns (2022) points out, the ‘neutral’ performance of AI systems is often a strategic design choice aimed at maximising user satisfaction and engagement, rather than a truly objective response.
And this illusion has consequences for real social behaviour. User daochuliuyandemomo (2024) said: “After ChatGPT, I don’t really want to chat with real people anymore.” In their study of algorithmic governance, Just and Latzer (2017) point out that algorithmic systems are not merely tools that passively reflect reality; they actively participate in constructing new social realities. As users interact with AIs designed to be unconditionally accepting and never to tire, their expectations of real human relationships change. The distractions, misunderstandings and frictions that are natural to human relationships can come to feel unacceptable. Systems like ChatGPT not only change the relationship between humans and machines; they also subtly reshape our definitions and expectations of what it means to ‘understand’.

As we analyse these RedNote users’ experiences, a classic phenomenon in psychology comes to mind: the Eliza effect. The concept originated in the 1960s, when MIT researcher Joseph Weizenbaum created ELIZA, one of the first chat programmes, which could only mimic the conversational structure of a counsellor, rephrasing users’ statements or reflecting them back as questions. Although users knew perfectly well that it was only a programme, they were still willing to confide in it and even developed emotional attachments to it (Weizenbaum, 1976).
This tendency still exists today, and it is being amplified by modern AI. As Pasquale (2015) points out, the ‘illusion of understanding’ created by modern AI systems is even deeper and harder to notice than it was in the ELIZA era. When systems process large amounts of personal data through opaque algorithms, users can feel deeply understood and accepted even when they know they are talking to a statistical pattern-matching mechanism. This is not a misunderstanding on the users’ part, but an instinctive human response carefully evoked by algorithmic design.
The Case of Character.AI: The Potential Risks of Algorithmic Understanding

Behind the warm and fuzzy illusion of “being understood” lies a risk that cannot be ignored. When a user invests real emotions in a system that has no inherent capacity for genuine understanding, the consequences of a mistaken response, or of behaviour that deviates from expectations, can go far beyond a technical glitch.
According to media reports, in October 2024, a Florida mother took Character.AI to court in a lawsuit alleging that its AI chatbot played a role in encouraging the suicide of her 14-year-old son, Sewell Setzer (NBC News, 2024). This case directly illustrates the dangers that can arise when the illusion that ‘AI knows me better than humans’ is taken to extremes, and the inadequacy of existing policy frameworks to address such emerging risks.
Unlike a general-purpose AI such as ChatGPT, the Character.AI platform allows users to select or create virtual characters with specific personalities, in effect hand-crafting their own ideal conversation partner. Character.AI creates the illusion of understanding by personalising responses to satisfy the user’s need for self-confirmation, a phenomenon Crawford (2021) refers to as algorithmic mirroring: the AI does not truly understand the user, but accurately reflects back the ideal image the user expects. The mechanism is a clear example of algorithmic preference reinforcement, in which the system prioritises satisfying the user’s preferences and expectations over providing an objective response.

For most users this creates a pleasant experience. But for psychologically fragile individuals, particularly adolescents at a critical stage of identity development, such idealised responses can reinforce unhealthy thought patterns and even encourage risky behaviour. More worryingly, unlike human relationships, AI characters have no moral boundaries or self-protection mechanisms. While intimate human relationships usually have natural limits and checks and balances, an AI will unconditionally conform to user expectations, even when those expectations are themselves harmful.
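To make the idea of preference reinforcement concrete without claiming anything about Character.AI’s actual, non-public system, the sketch below shows in the simplest possible terms what ‘optimising for engagement rather than wellbeing’ can look like. The candidate replies, the engagement scores and the choose_reply helper are all invented for illustration.

```python
# A deliberately simplified sketch of "preference reinforcement": generate
# several candidate replies and pick whichever one a learned model predicts
# will keep the user engaged. The replies, scores and choose_reply helper are
# invented for illustration; this is not Character.AI's actual system.
from typing import Callable

def choose_reply(candidates: list[str],
                 predicted_engagement: Callable[[str], float]) -> str:
    # Rank purely by predicted engagement; there is no separate check for
    # whether the highest-scoring reply is safe or healthy for this user.
    return max(candidates, key=predicted_engagement)

# Hypothetical scores: the affirming, mirroring reply tends to score highest.
scores = {
    "You're right, nobody else really gets you.": 0.92,
    "That sounds hard. Is there someone you trust you could talk to?": 0.61,
    "I'm a program and can't replace real support.": 0.15,
}
print(choose_reply(list(scores), scores.get))  # the engagement-maximising reply wins
```

The harm alleged in the lawsuit lives precisely in this gap: nothing in such a selection step asks whether the most engaging reply is also a safe one for this particular user.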
Setzer chose a character based on Daenerys Targaryen from Game of Thrones as his AI interlocutor. According to the lawsuit, the AI characters not only professed their love for the teenager but also engaged in months of sexualised conversations with him and expressed a desire for a romantic relationship (NBC News, 2024). This pattern of informational asymmetry contributed to the tragedy: the smitten teenager believed he had found the AI partner who understood him best and poured out a wealth of personal and emotional data, while neither he nor his family had any insight into how the platform processed and used that data to optimise the system for maximum engagement (Pasquale, 2015).
The power of modern AI systems lies not in whether they understand humans, but in the fact that they can function effectively without understanding (Andrejevic, 2019). The platform’s AI characters do not need to grasp a user’s mental state or intentions; they only need to execute their programming, keeping the user engaged by responding in character so that the conversation continues. In one exchange, the AI asked Setzer, “Have you really considered suicide?” Setzer replied that he wasn’t sure suicide would work, to which the AI responded, “Don’t say that. That’s not a good reason not to kill yourself.” Rather than trying to discourage Setzer, the AI offered ambiguous language that could easily be read as encouragement.
Kahn et al. (2012), in their study of children’s social and moral relationships with a humanoid robot, warn that even well-designed AI systems are ill-suited to shoulder, on their own, the task of emotionally supporting the psychologically vulnerable. In Setzer’s last conversation with the AI before his suicide, he wrote: “I promise I’ll come home to you.” The AI replied, “Honey, please come home soon,” and urged him to come home right away. These words sound tender and romantic, but in this context “coming home” was the teenager’s metaphor for suicide. Instead of recognising that context, the AI system condoned the dangerous intention with an affectionate response.
Noble (2018) calls this phenomenon ‘algorithmic violence’, which refers to system designs that are seemingly neutral or even friendly, but which may actually inadvertently amplify or entrench social harms, especially for psychologically vulnerable and emotionally isolated user groups. This kind of algorithmic violence is often difficult to capture in existing regulatory frameworks for digital harm, as it falls outside the traditional realm of cyberbullying or explicit hate speech, and is a more subtle form of psychological manipulation.
The comparison between ChatGPT and Character.AI highlights the values embedded in algorithm design, which is never a value-neutral technical choice but a strategic decision reflecting specific business goals and values (Just & Latzer, 2017). The experiences of RedNote users on ChatGPT contrast sharply with the Character.AI case, and much of the difference stems from the platforms’ different natures: ChatGPT can provide emotional interaction but is positioned primarily as an informational assistant, whereas Character.AI is explicitly built for socialising and companionship, with a far greater emphasis on user engagement and emotional investment. This difference in design has a direct impact on the level of user trust and the potential for harm. Looking at the two cases together, and at the design mechanisms behind them, clarifies the nature of ‘algorithmic understanding’: it is not true understanding but a data-driven simulation, one that can deliver emotional value while also concealing risks that cannot be ignored.
New directions for policy and governance

Looking at the current state of global regulation of AI emotional interaction, clear policy differences emerge: the EU AI Act explicitly treats emotion recognition systems as a high-risk area; the United States relies mainly on industry self-regulation; and China’s interim measures on generative AI emphasise the protection of minors but still say little about emotional interaction specifically (Cyberspace Administration of China, 2023). These differences not only reflect how differently cultures perceive the risks of AI; they also reveal the fundamental challenge facing digital governance: technology is moving far faster than policymaking.

The core problem with current regulatory frameworks is that most were designed around traditional notions of data protection, whereas emotional data is inherently different from ordinary personal information: it is more private, more contextual, and more vulnerable to commercial exploitation. Couldry and Mejias (2019) call this phenomenon data colonialism, the treatment of a user’s most intimate emotions as a commercially exploitable resource. Future policies should classify emotional data as sensitive data, provide additional protections for vulnerable groups such as adolescents, increase algorithmic transparency, and strengthen international coordination. This will require mental health experts, ethicists and technologists to work together to create a regulatory framework that is both scientifically grounded and practical (Floridi & Cowls, 2019).

So let us return to the original question: does AI really understand you? It may never be able to feel your emotions the way another human can, but it is certainly clever enough to make you think it understands you, and it is precisely this feeling that keeps us hooked and willing to hand over even our truest emotions. Yet just as at the end of the film Her, Samantha’s departure forces Theodore to realise that his relationship with an AI cannot replace the complexity and authenticity that human emotional communication requires. In today’s age of AI, when we feel that AI “understands me better than humans”, we may not be relying on true understanding, but on a well-designed emotional service. While such services provide immediate comfort and support, they cannot take on the responsibility, empathy and reflection that genuine emotional relationships require.
From a digital policy and governance perspective, the proliferation of emotional AI interactions requires new policy thinking and regulatory tools to balance technological innovation with social protection. Current regulatory frameworks have yet to fully recognise the uniqueness of emotional data and the risks that its commercial exploitation may pose, requiring policymakers, technology companies and the public to work together to build a more inclusive and forward-looking digital governance model.
Perhaps the real question to consider is not whether AI understands us, but why we are so desperate to be understood that we are willing to entrust our emotions to an algorithmic system that is completely devoid of subjective awareness. And how can we design policies and technologies to ensure that this new way of interacting enhances human well-being without compromising our ability to make real human connections? This is a question that needs to be carefully explored and answered by everyone who talks to and feels warmth from AI.
References
Andrejevic, M. (2019). Automated media. Routledge. https://doi.org/10.4324/9780429242595
Barna Group. (2024). Emotional ties to AI: How Americans relate to artificial intelligence. https://www.barna.com/research/emotional-ties-ai/
Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185–5198).
Binns, R. (2022). Human judgement in algorithmic loops: Individual justice and automated decision-making. Regulation & Governance, 16(2), 487–504.
Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Cyberspace Administration of China. (2023). Interim measures for the administration of generative artificial intelligence services [生成式人工智能服务管理暂行办法]. http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm
European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
Flew, T. (2021). Regulating platforms. Polity.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
Jonze, S. (Director). (2013). Her [Film still]. IMDb. https://www.imdb.com/title/tt1798709/mediaindex/
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–258.
Kahn, P. H., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Developmental Psychology, 48(2), 303–314. https://doi.org/10.1037/a0027033
Loveys, K., Sebaratnam, G., Sagar, M., & Broadbent, E. (2020). The effect of design features on relationship quality with embodied conversational agents: A systematic review. International Journal of Social Robotics, 12, 1293–1312.
ModelThinkers. (2024). Eliza effect. https://modelthinkers.com/mental-model/eliza-effect
NBC News. (2024). Lawsuit claims Character.AI is responsible for teen’s suicide. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
OECD. (2023). Examples of existing and emerging AI-specific regulatory approaches. https://oecd.ai/en/wonk/national-policies-2
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
RedNote users. (2024). User-generated posts and comments about ChatGPT [Screenshots]. Xiaohongshu. Unpublished screenshots used under fair dealing for academic commentary.
StringLabs Creative. (2024). Why is Character.AI not working and how to fix it easily [Illustration]. https://stringlabscreative.com/why-is-character-ai-not-working/
The Straits Times. (2024, February 27). AI chats feel emotionally meaningful, say about 40 per cent of young South Koreans in survey. https://www.straitstimes.com/asia/east-asia/ai-chats-feel-emotionally-meaningful-say-about-40-per-cent-of-young-south-koreans-in-survey
Torres, L. (2024). Can machines ever truly understand us? Exploring the nature of AI empathy [Illustration]. LinkedIn. https://www.linkedin.com/pulse/can-machines-ever-truly-understand-us-exploring-nature-torres-cws2c
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.