AI, Algorithms and Technological Determinism: Do we really have a choice?

In an era where AI and algorithms are forced on us without distinction, are we asking the right questions of technology?

Employees working at computers with a giant robot behind them. Artificial intelligence, automation, machine learning concept. Vector illustration. Image credit: Moor studio, iStock

If you take even a cursory glance at the news, you’re guaranteed to see something about AI. Whether it is Musk consolidating X (formerly Twitter) into xAI, or the more recent craze of converting images, whether of oneself or of famous and sometimes controversial photographs, into Studio Ghibli-style animation, AI is everywhere. Not only is it everywhere, it seems to be becoming more pervasive and personal.

Last year, Meta (parent company of Facebook, Instagram and WhatsApp) introduced its own AI chatbot across all of its products. Through a regular software update, ‘Meta AI’ was suddenly present on billions of devices worldwide (Kelly, 2024), offering to draft messages, provide information, generate images and give personal advice, among many other things. Users quickly learnt that these features cannot be disabled or removed, only hidden or deliberately ignored (Barbera, 2025).

Although it may seem that this is no big deal—platforms are free to add whatever features they like—what is a big deal is the lack of transparency and choice offered to users. This lack is present across a considerable number of online platforms, as well as in everyday life. It is a reality we have all seemingly gotten used to—and tacitly allowed—particularly as the pace of technological innovation has outstripped our capacity to even begin to understand how it may affect us and our lives. In light of all this change, the question remains: do we really have a choice, or has the choice already been made for us?

Technology and Choice

To explore the question of choice, we must examine a fundamental concept commonly discussed among scholars and academics in discourses on technology: technological determinism.

There is often a view that technology is neutral: that it is simply a tool that can be used however the user desires. But if we look closely, we can see that this position is untenable. Technology is never created in a vacuum. By design, it is embedded with the values prevalent in the social milieu of its time, and it can impose those values, often unnoticed, on entire populations (Weinberg, 2019). This process has been described as reverse adaptation, whereby “technology structures and even defines the ends of human activity” (Winner, 1978).

An example of this adaptation is the continuous-scroll feature on apps like Instagram and TikTok, which was deliberately implemented to prolong engagement, increasing the opportunity to extract data from users and expose them to ads. Millions of users have testified that they are unable to detach themselves, a compulsion we now call ‘doomscrolling’, which in turn has produced another novel term, ‘brain rot’ (Boyle, 2024). These phenomena have had considerable negative externalities for human behaviour, relationships and meaning.

This highlights the primary problem: most technology is produced in Western, industrialised nations, as they possess the resources and economic conditions necessary for its development. As a result, this technology is embedded with an efficiency calculus reflecting the values of capitalist consumer societies, which actively encourages addictive behaviours to maintain user engagement (Goulet, 1977). When Western technology is transferred to non-Western, pre-industrial societies, we see clearly how rapidly these embedded values impose themselves, significantly shifting patterns of behaviour.

A prominent example is provided by anthropologist Pertti Pelto (1973), who studied the effects of the snowmobile’s introduction to the Skolt Lapp population in Finland during the 1950s and 1960s. Pelto observed how a population with a deeply intimate relationship to nature and its inherent cycles was violently divorced from that relationship upon the introduction of the snowmobile—a simple technology that most of us would consider inconsequential. This ‘simple technology’ disrupted their culture, traditions, and essential relationships, introducing social maladies previously uncommon, such as depression, addiction, and violence.

Goulet (1977) describes the effects of such technological transfers in his book, stating, “Technology attacks the principle of cohesion which wove the value universe of pre-industrial societies into a unified fabric. It also undermines the view of nature such societies have, the meaning they assign to work, to time, to authority, and to the very purpose of life.” The Skolt Lapps lacked the means to assess the impact of this technology on their lives, and once the technology became prevalent, attempts to regulate it or even reverse its implementation became incredibly difficult and subject to criticism as regressive.

This dilemma around regulation often elicits the same argument from commentators: that technology is inherently a good thing; it has improved our lives, made us healthier and more productive, and enabled new opportunities and knowledge. To clarify, I am not suggesting these points lack validity; technology has very much enabled these things. But it seems that in the excitement of technological innovation and adoption, we have forgotten to ask ourselves a fundamental question: are we happier?

To this end, the academic Daniel Sarewitz (2009) writes, “While it is likely that a greater percentage of people living in the industrialized world today is free from abject poverty than was ever the case during the past several thousand years, we cannot know whether this group enjoys a higher quality of life than similarly fortunate people enjoyed in the past, or what the direct contribution of scientific and technological progress has been to the emotional, intellectual, and spiritual fulfillment of the average person”.

All of these considerations point us back to the fact that our technology has evolved beyond our capacity to regulate it and to make choices about how it should behave and affect our lives; we are trapped on a determined path.

AI and Algorithms

In 2016, two events occurred that had a significant bearing on social and political futures: the first was the United States presidential election, and the second was Brexit. Both led to a series of investigations, inquiries, and testimonials into how data, and the algorithms that manage it, were possibly manipulated to misinform electorates (Stewart, 2018). During this period, we became familiar with a reality that would underscore the inherent difficulty of attempting to regulate algorithms and artificial intelligence, commonly referred to as the black-box problem—itself a manifestation of technological determinism.

Algorithms fundamentally “are the rules and processes established for activities such as calculation, data processing, and automated reasoning” (Flew, 2021). Social media platforms utilise them to make sense of the vast ocean of available data and provide users with a feed of information they can easily digest. How effectively an algorithm makes sense of this data determines how long a user stays engaged and interacts with the platform, whilst simultaneously being marketed to through targeted advertising. Because these algorithms are central to the business model of platforms and are protected as intellectual property (Bagchi, 2023), we aren’t allowed to ‘look under the hood’.
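To make the idea concrete, a feed-ranking algorithm can be sketched, at its very simplest, as a scoring rule applied to candidate posts. The weights, field names, and scoring formula below are invented purely for illustration; real platform algorithms are proprietary and vastly more complex, which is precisely the black-box point.

```python
# Toy sketch of an engagement-ranking feed. Every weight and field
# here is hypothetical, not any real platform's model.

def score(post, user_interests):
    # Reward topical overlap with the user's inferred interests...
    relevance = 1.0 if post["topic"] in user_interests else 0.1
    # ...and boost posts that already attract interaction, since past
    # engagement is treated as a predictor of future engagement.
    popularity = post["likes"] + 2 * post["comments"]
    return relevance * popularity

def build_feed(posts, user_interests, limit=3):
    # Serve the highest-scoring candidates first.
    return sorted(posts, key=lambda p: score(p, user_interests), reverse=True)[:limit]

posts = [
    {"id": 1, "topic": "cooking",  "likes": 50,  "comments": 5},
    {"id": 2, "topic": "politics", "likes": 400, "comments": 120},
    {"id": 3, "topic": "cooking",  "likes": 10,  "comments": 40},
]
feed = build_feed(posts, user_interests={"cooking"})
print([p["id"] for p in feed])  # → [3, 2, 1]
```

Note that even this toy version surfaces a highly engaged but off-topic post (id 2) above a relevant one (id 1): the popularity term rewards whatever already drives interaction, a miniature version of the engagement incentives discussed above.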

Another aspect of the black-box problem is that, due to the vast amounts of information being processed in real time, engineers claim it is impossible to understand precisely how an algorithm is functioning or making decisions (Pasquale, 2015). This is particularly apparent with AI large language models (LLMs)—such as ChatGPT and Meta AI—which are essentially highly sophisticated algorithms built on neural networks that, through machine learning, effectively program themselves.

Larger LLMs, which have been given extensive resources often extracted from users and online databases, can display human-like intelligence, a capability often marvelled at and celebrated by engineers and technologists alike. However, here again the narrative of autonomous machine intelligence is challenged by Crawford (2021), who states, “AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures.”


A Creative Mafia

In early 2024, OpenAI released a voice feature for its ChatGPT platform, allowing users to hold live conversations with the chatbot. The featured voice at launch sounded strikingly similar to that of actor Scarlett Johansson, who had famously voiced an AI virtual assistant in the 2013 Spike Jonze film Her (Milmo, 2024). Johansson had repeatedly refused OpenAI permission to use her voice; the company denied the claims and stated it had hired another voice actor speaking in her natural tone. Naturally, without hard evidence, it is difficult to ascertain whether OpenAI went behind Johansson’s back. Nevertheless, this incident indicates that the force of innovation among AI companies—combined with their privileged access to data and information, protected by the black-box problem—is presenting creative industries with a non-choice dilemma.

Around the same time, OpenAI was negotiating deals with various publishers and platforms to obtain full access to their data, protecting itself against future claims of copyright infringement. Notable publications and media organisations, such as The Atlantic, the Associated Press, News Corp and Vox Media (Robertson, 2024), were among those involved. These deals have been criticised by commentators, who suggest the publishers were presented with a ‘devil’s bargain’ (Beres, 2024), alleging that OpenAI was behaving like the mafia: “they will come and ask nicely, maybe even offer you a couple of coins, but if you decline, they will simply take what they want.” (McNally, 2024)

Employing an ‘if you can’t beat them, join them’ strategy, publishers and platforms have faced criticism for capitulating to the power of the AI industry, thereby increasing its influence in a self-reinforcing feedback loop. However, not all publications are rolling over. The New York Times was among the first to file lawsuits against OpenAI and its technology partner, Microsoft (Allyn, 2025). The newspaper claims that OpenAI unlawfully used articles published by the NYT without consent to train its models. The prestigious newspaper hopes not only to gain compensation but also to establish a legal precedent protecting intellectual property from unconsented use by AI companies.

Taking Power Back

In 2023, the Writers Guild of America and SAG-AFTRA, the union representing actors who primarily work within the Hollywood system, went on strike. The strike, which required members not to work on or promote existing or upcoming projects, primarily aimed to secure better compensation for writers and actors, particularly in light of the proliferation of streaming services that had significantly shifted the economic landscape. However, key demands during the negotiations included stipulations that studios would refrain from employing AI to draft and write scripts for film and television, and that actors’ likenesses could only be replicated by AI with explicit consent and fair compensation whenever a digital likeness was used (Maddox, 2023). These strikes received significant popular support from the wider public, ultimately prompting studios to largely accept the stipulations put forward.

This is just one prominent example of people actively seeking to protect creative industries from what is frequently termed ‘AI slop’ (Mahdawi, 2025). There are many instances of users and consumers resisting the determinism of AI and algorithms, evident in phenomena such as the resurgence of films and television productions being shot on film stock (Desowitz, 2025) and protests against the use of AI-generated art in traditional media (Hibberd, 2024).

The unregulated rise of AI and algorithms does not have to be inevitable, as demonstrated by these examples of resistance and regulation. Although there have been attempts at regulating AI, notably the European Union’s risk classification system (EU Artificial Intelligence Act, 2024), greater questions must be posed about the nature of technology and its relationship to human prosperity. Goulet (1977) emphatically asserts that three values need to be integrated into the efficiency calculus animating technology: it should abolish misery, contribute to the protection of the ecosystem, and defend humanity from technological determinism. Weinberg (2019) concludes, “We should not reify technology but grapple with it in light of essential principles such as moderation, justice, social harmony, and cultural integrity.”

References