Artificial Intelligence – Another Constructed Myth?

You have probably heard of ChatGPT, or perhaps you are using it right now. In recent years, whether you like it or not, it has been playing an increasingly important role in our lives. You may know that artificial intelligence is very powerful and can help us search for information and solve problems, but do you know how these systems actually operate? How do they find the information we want so quickly in such a vast sea of data? Take ChatGPT as an example: do you know what “GPT” stands for? Many people may not. That is why I think that, to understand the myth the contemporary dominant class is constructing around artificial intelligence, we first need to understand what artificial intelligence really is.

The Definition of AI

On Wikipedia, artificial intelligence is defined as follows: “Artificial intelligence (AI) refers to the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making” (Wikipedia contributors, n.d.). In ChatGPT, “GPT” stands for “Generative Pre-trained Transformer”, which can be understood as dividing the core of such a system into three parts: reading information, processing information, and generating information. “Pre-trained” covers the reading and processing of information, “Generative” emphasizes the generation of information, and “Transformer” names the underlying neural-network architecture that makes both possible. Although the ChatGPT we use places more emphasis on output, the core concept remains the same. At the same time, it is important to note that the essence of AI technology is digital and algorithmic. In the eyes of AI, what we regard as knowledge is merely a stream of tokens, so AI cannot truly “understand” knowledge; it can only estimate how likely one token is to appear after another. In other words, it is an extremely powerful language imitator rather than a comprehender (Bender & Koller, 2020), and all the so-called algorithms serve to help it determine this likelihood more efficiently and accurately. This sounds very much like semiotics, doesn’t it? And indeed it is: as a binary machine it can only distinguish between 0 and 1, and all the so-called “truths” are, to it, merely symbols.
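To make the idea of “estimating how likely one token is to follow another” concrete, here is a minimal, purely illustrative Python sketch of a character-level bigram model. Real systems such as GPT use deep neural networks over subword tokens rather than raw character counts, so treat this only as a toy demonstration of the general principle; the corpus and names in it are made up for the example.

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; a real model is trained on vast amounts of text.
corpus = "the cat sat on the mat. the cat ate the rat."

# Count how often each character follows each other character.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_char_probabilities(prev: str) -> dict:
    """Estimate P(next character | previous character) from raw counts."""
    counts = follows[prev]
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

# The model does not "understand" cats or mats; it only knows that, in its
# training data, 'h' followed 't' more often than any other character did.
print(next_char_probabilities("t"))
```

The point of the sketch is only that the whole mechanism is statistical: the symbols carry no meaning for the machine, only frequencies.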

Now that you have some sense of how AI operates, it is time to get to the point: why do I say that artificial intelligence is another constructed myth? In medieval Europe, people believed that bloodletting could rid the sick of disease, so they followed their doctors’ advice and underwent bloodletting treatments (Greenstone, 2010). At that time everyone believed that bloodletting was good, but this “everyone thinks it’s good” was not scientific, nor a conclusion people reached through their own thinking. It was because the people around them, the rulers, and the noble class all said so; everyone therefore thought it was a good thing and followed suit, even though bloodletting often failed to cure patients and instead hastened their deaths. Those who criticized bloodletting were labeled heretics, much like someone advocating heliocentrism among geocentrists. During the British Industrial Revolution, large machines were invented, and it seemed that goods could be produced faster and in greater quantities. Rulers and capitalists praised this great invention, claiming that it promoted world progress and made the world a better place. But did the world really become better? People began to be treated as tools, were forcibly separated from the means of production, and embarked on a path of exploitation; compared with before, the happiness of workers even declined.

From these two historical stories it is not hard to see that, whether an invention is scientific or not, its outcome is often uncontrollable, and it is usually ordinary people who suffer. These are examples of myths that were once constructed. Now look at today’s AI technology from the same perspective. Although the technology and logic of AI may sound more reasonable than medieval bloodletting, it is in essence still a tool. The problem is not what it is, but who uses it, how it is used, and for what purpose. What I want to emphasize here is that AI technology, too, is gradually being narrated by today’s ruling class, that is, the capitalists, as a new myth.

Looking back at the history of AI, we find that the definition of AI has been constantly shifting: whatever is most advanced at the moment is called AI, and as time passes, once-advanced technologies (yesterday’s AI) come to be regarded as ordinary (Crevier, 1993). For instance, voice recognition was considered an AI technology decades ago because it was very difficult to achieve; people thought it was high-tech and called it an “AI” function. Today, voice recognition has become commonplace, and no one would call it AI anymore, because there are more advanced technologies that now carry the label, such as ChatGPT, which can answer almost any question. The definition of AI, then, is always relative. Why does such a phenomenon exist? Because AI is, at bottom, a human construction: those in power, or the capitalists, can give it any meaning they want, and take that meaning away just as easily.

Foucault’s Concept of “Rationality”

In fact, the core of this myth was already explained by Foucault in the last century: how rationality was constructed and became the dominant discourse of its era. I believe that AI is a typical representative of rationality in the present. In “Madness and Civilization” and “Discipline and Punish”, Foucault elaborated in detail how rationality, as a historical formation, gradually rose to become the dominant discourse: how madness was defined and asylums were then established to discipline the mad; how schools and prisons were established, where the concept of rationality was widely taught and reinforced, so that everyone came to recognize its supremacy. The essence of these phenomena is the process by which rationality, as a means of domination, was gradually constructed. Today’s AI technology can be called a typical manifestation of rationality, but what I want to emphasize here is not logical rationality itself, but the political nature of rationality as a dominant discourse, that is, “AI systems are built with the logics of capital, policing, and militarization—and this combination further widens the existing asymmetries of power” (Crawford, 2021).

Specifically, the emergence of AI as a new technological revolution does not only mean that, on the surface, we get to use more advanced tools; behind the scenes it implies a reorganization of entire industries and of labor relations. For instance, Amazon needed large numbers of sorting workers and customer-service staff just a few years ago, but it can now largely replace them with AI technology. Many workers will lose their jobs, those still employed will need to establish a new cooperative relationship with AI, and many factories will be restructured. In this era of major transformation, who holds the power to speak, the power to lead all these reorganizations? The capitalists, of course, the ones who held the dominant position in the previous era; they naturally have priority in shaping the next one. Will this core logic of the market economy solidify class distinctions and make the gap between rich and poor even wider and more entrenched? Have you noticed that the internal logic of AI systems is always framed as part of the “knowledge economy” or protected by “patents”, and kept from ordinary people? Its operation is carried out inside a black box (Pasquale, 2015), and this black-box operation gives the owners of AI technology almost unlimited room to maneuver. You will never know exactly what the capitalists have done, where the information produced by AI comes from, or whether it is right or wrong.

Perhaps these concepts and terms sound abstract, but they have long been present in our daily lives. For instance, the amount of information on the Internet today is enormous: how does AI screen and locate it? Beyond making search results more accurate at the technical level, this screening mechanism and its standards can also be shaped by human factors. The logic is easiest to see in search engines. On some search engines, when you enter a keyword, you are offered many recommended related terms, but the ranking of these terms can be “bought”: when a company wants to advertise its products, it pays the search-engine company so that its products appear more often and rank higher. Apply this thinking to artificial intelligence: when we chat with ChatGPT, perhaps the information we get is not the most relevant information but the information that has been paid for the most. Would we know? No one knows.
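To illustrate the point about paid ranking, here is a deliberately simplified Python sketch of how a ranking function could blend a relevance score with a paid boost. This is not how any particular search engine or chatbot actually ranks results, and all the field names and weights are invented for the example; it only shows why such a signal, if present, would be invisible to the user.

```python
# Hypothetical ranking: each candidate result has a relevance score and an
# optional paid boost. The user only ever sees the final order, never the knob.

results = [
    {"title": "Independent review",     "relevance": 0.92, "paid_boost": 0.0},
    {"title": "Sponsored product page", "relevance": 0.61, "paid_boost": 0.5},
    {"title": "Community forum thread", "relevance": 0.78, "paid_boost": 0.0},
]

def score(item, paid_weight=0.6):
    # Raising paid_weight quietly pushes sponsored items above more relevant
    # ones, without any visible change in how the results are presented.
    return item["relevance"] + paid_weight * item["paid_boost"]

for item in sorted(results, key=score, reverse=True):
    print(f'{score(item):.2f}  {item["title"]}')
```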

AI in Life

Therefore, technology remains in essence a political tool in the service of capital. For instance, in China, when I ask an AI about its attitude toward the Communist Party of China, the screening mechanism behind it complies with political regulations, so it will only give a positive response. When we ask for an evaluation of a certain celebrity, the celebrity’s agency need only spend money on public relations, and the internet will be filled with positive reviews; no one knows the celebrity’s true nature. The core of this logic is that AI is not only about the range of information users can obtain, but also about the material means and infrastructure behind it. We may say that information flows freely, but the infrastructure supporting these flows has an owner, and the territory accommodating them is governed by laws. The process of obtaining information will therefore never be free.

Here I want to mention a feeling I have had when using AI, one that also occurred when I used search engines: the “filter bubble”. Nowadays all kinds of services emphasize personalization, and we are pushed streams of information under the name of “personalized service”. For example, I often search for New Zealand travel on Instagram, and my homepage is always filled with photos and travel guides about New Zealand. AI shows the same tendency: when I want to start a new topic, it associates the new topic with the topics I have asked about many times before. This may be helpful as a personalized service, but at the same time it restricts my sources of information, because the essence of the system is not to provide me with better, broader, and more accurate information, but to provide information that makes me feel satisfied and happy, so that I will use it more and even pay for it. “Personalization has given us something very different: a public sphere sorted and manipulated by algorithms, fragmented by design, and hostile to dialogue” (Andrejevic, 2019).
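The filter-bubble dynamic can likewise be sketched in a few lines. The following toy Python example, with invented topics and numbers, shows the feedback loop in its simplest form: each click on a topic boosts that topic’s future rank, so the feed drifts toward what the user already engages with. Real recommender systems are far more sophisticated, but the loop is the same.

```python
from collections import Counter

# What the user clicked on before.
click_history = ["new zealand travel", "new zealand travel", "hiking"]

# Candidate posts the platform could show next, each with a neutral base score.
candidate_posts = [
    {"topic": "new zealand travel", "base_score": 0.5},
    {"topic": "local politics",     "base_score": 0.7},
    {"topic": "hiking",             "base_score": 0.6},
]

interest = Counter(click_history)

def personalized_score(post):
    # Every past click on a topic lifts that topic, so already-familiar content
    # overtakes content that might have been more relevant or more important.
    return post["base_score"] + 0.3 * interest[post["topic"]]

for post in sorted(candidate_posts, key=personalized_score, reverse=True):
    print(f'{personalized_score(post):.2f}  {post["topic"]}')
```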

In April 2025, ChatGPT received a major update introducing a “Memory” function, which enables it to remember the topics users frequently discuss and their speaking tone, and thereby to provide more personalized service (Canales, 2025). Setting aside privacy concerns, will this personalized service once again lead us into the predicament of the filter bubble? From the perspective of identity construction, will our identities be fixed by it? Under the current rule of capitalism, the most typical form of identity construction is consumerist: we are shaped into people who like to consume and want to spend money on the things we like. In fact, much of this consumption is unnecessary; through advertising, political propaganda, and subtle educational messages, we are told that spending money on the things we like is normal behavior. As AI technology develops under the drive of capital, will this identity also be embedded in the thinking logic of AI and then subtly construct our identities? Eventually each of us, as a living individual, could be treated as mere codes and labels, our identities solidified at that level and our true freedom taken away. “Facebook defines who we are, Amazon defines what we want, and Google defines what we think” (Ensmenger, 2013). Who we are is a persistent question that needs reflection.
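As for the “Memory” function, OpenAI has not published its implementation, so the following Python sketch is only a hypothetical illustration of the general pattern: remembered facts about the user are silently attached to every new request, so past interests shape every future answer, whatever the question.

```python
memory: list[str] = []  # facts the assistant has stored about the user

def remember(fact: str) -> None:
    """Store a fact gleaned from earlier conversations."""
    memory.append(fact)

def build_prompt(user_message: str) -> str:
    """Prepend everything remembered about the user to the new question."""
    context = "\n".join(f"- {fact}" for fact in memory)
    return f"Known about this user:\n{context}\n\nUser: {user_message}"

remember("often asks about New Zealand travel")
remember("prefers short, casual answers")

# Even an unrelated question now arrives wrapped in the stored profile.
print(build_prompt("What should I read about economics?"))
```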

Questions and the Future of AI

Finally, what I want to express through this article is a kind of concern and anxiety. Since the birth of the Internet, social media, and short video, we have been increasingly surrounded by fragmented information and have lost much of our motivation for in-depth thinking and for verifying complete information. Now that AI technology has arrived, we can obtain many of the answers we want directly from it, but does this also mean we will lose even more motivation to verify things actively, increasingly believing the answers AI gives instead of checking them and reflecting for ourselves? Or, because AI has enormous computing power and encompasses vast amounts of information, perhaps most of the time we simply lack the ability to verify and can only choose to believe it. That is why this article is titled “Another Constructed Myth”: AI is very likely to become a new tool with which the ruling class pursues its own interests while we, the users, remain oblivious; and even when AI makes mistakes, we often cannot detect them.

Another issue is that technological development makes our connections with the people around us increasingly weak. The rise of social media has already reduced real-life gatherings with friends; will the development of AI make our relationships with teachers increasingly distant? Would we rather trust AI than the teaching of our parents, making our relationships with people even more remote? In philosophy there is a concept called “intersubjectivity”, which holds that all meanings and concepts are born in the space of interaction between people: language, for example, has no meaning when used by one person alone; it only has meaning in communication. When we use AI technology extensively, will it weaken intersubjectivity and gradually make us lose our “human” characteristics?

The advancement of AI also means that we must hand over more personal information than before, and our privacy will be more exposed to tech giants and governments. It is difficult for us to know how they will handle that information, we cannot guarantee that they will keep their promises to protect it, and they could quietly sell it at any time. Many other problems are possible. I therefore hope that, through this article, everyone can stay vigilant and reflect on the fact that the development of AI technology is not all benefit and no harm. We need to keep criticizing and reflecting in order to truly make AI play a better role and make our lives better.

References:

Wikipedia contributors. (n.d.). Artificial intelligence. Wikipedia. Retrieved April 8, 2025, from https://en.wikipedia.org/wiki/Artificial_intelligence

Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185–5198). Association for Computational Linguistics.

Greenstone, G. (2010). The history of bloodletting. BC Medical Journal, 52(1), 12-14.

Crevier, D. (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. Basic Books.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Pasquale, F. (2015). INTRODUCTION: The need to know. In The black box society: The secret algorithms that control money and information (pp. 1–18). Harvard University Press.

Andrejevic, M. (2019). Automated media. Routledge.

Canales, K. (2025, April 5). ChatGPT’s memory upgrade will remember everything you’ve ever told it. Business Insider. https://www.businessinsider.com/chatgpt-memory-remember-everything-you-ever-told-it-2025-4

Ensmenger, N. (2013). Turing’s Cathedral: The Origins of the Digital Universe by George Dyson. IEEE Annals of the History of Computing, 35(1), 6-8.
