Photograph from Pixabay
Introduction
Imagine browsing social media one day and suddenly seeing a video of yourself that you never recorded, doing something you have never done. This sounds impossible, yet the emergence and popularization of artificial intelligence and machine learning have made it easy. Increasingly accessible AI tools and cheap generators keep lowering the cost of creating rumors and spreading misinformation, and the reach of the Internet and social media has made these platforms ideal channels for disseminating such false information. At the same time, the big tech companies that control these platforms are vigorously developing machine learning and artificial intelligence; Microsoft, for example, is a major investor in ChatGPT's creator OpenAI. Given the privacy agreements users accept, this raises the concern that social media platforms under these companies' control have become an important source of the data collected to train artificial intelligence. To tell the real from the fake and to protect personal privacy in an age of rapidly developing information technology and artificial intelligence, people need a proper understanding of AI and deepfakes.
Photograph from Pixabay
What is deepfake?
The word deepfake is a combination of “deep learning” and “fake,” and primarily relates to content generated by an artificial neural network, a branch of machine learning (Mirsky & Lee, 2021). A deepfake usually refers to a realistic image, video, or recording generated or manipulated by artificial intelligence and algorithms (Jacobsen & Simpson, 2023). Deepfakes are mostly used in the field of pornography: transferring a woman’s face, through AI or algorithmic techniques, into pornographic videos and photos, such as the AI-generated pornographic images of Taylor Swift that circulated massively on social media platforms. Deepfakes are also often used to falsify political information and news: people can create an entirely fake but realistic video or voice of a political figure to convey misinformation and fake news to the public, such as a robocall that appeared to use an AI voice resembling President Joe Biden.
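To make the “artificial neural network” behind face-swap deepfakes concrete: the classic pipeline described in the technical literature (e.g., Mirsky & Lee, 2021) trains one shared encoder together with a separate decoder per identity, and the swap happens when one person’s encoded face is decoded with the other person’s decoder. The snippet below is a minimal, illustrative sketch with random, untrained weights; all dimensions and variable names are hypothetical, and it shows only the wiring of the idea, not a working model:

```python
import numpy as np

# Illustrative sketch only: random linear layers stand in for a trained
# neural network. A real deepfake model would be trained on thousands of
# face images of each person.
rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # A random weight matrix standing in for trained parameters.
    return rng.standard_normal((in_dim, out_dim)) * 0.01

FACE, LATENT = 64 * 64, 128           # flattened 64x64 face, latent code size
encoder   = layer(FACE, LATENT)       # shared across both identities
decoder_a = layer(LATENT, FACE)       # would be trained only on person A's faces
decoder_b = layer(LATENT, FACE)       # would be trained only on person B's faces

face_a = rng.standard_normal(FACE)    # a stand-in input face of person A

latent  = np.tanh(face_a @ encoder)   # shared encoding of expression and pose
swapped = latent @ decoder_b          # decoded as person B: the "deepfake"

print(swapped.shape)                  # a 64x64 face, flattened: (4096,)
```

Because the encoder is shared, it learns identity-independent features (pose, expression, lighting), while each decoder learns to render one specific face; that division of labor is what makes the swap possible.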
Deepfake and artificial intelligence
Since the popularity of deepfakes rests mainly on the development of AI and algorithms, to understand deepfakes we first need to understand AI. Kate Crawford describes AI as knowledge distillation, an attempt to understand and build an intelligent entity (Crawford, 2021). At the same time, compared with “machine learning,” “neural networks,” and “algorithms,” which belong entirely to the field of computer technology, AI can be described as a “registry of power” created with these technologies: building AI at scale requires huge amounts of capital and data, and the final product serves the dominant interest groups and depends on political and social structures (Crawford, 2021). Once we grasp this, we can relate the basic components of AI (algorithms, hardware, and data) to capital, politics, and governance. It is also because of these complex, intersecting power structures that the data and algorithms behind AI become a black box that cannot be opened; the public may never learn the operating logic within it (Crawford, 2021). Building and training an AI model usually requires a large amount of data, and any publicly available data online may be turned into datasets for machine learning. For example, to improve facial recognition and language prediction algorithms, AI training datasets are filled with people’s selfies, news recordings, gestures, babies’ cries, and pedestrian images. Much of this data was originally personal information, and because of the opacity of AI systems and the ubiquitous terms of service and privacy agreements on the Internet, the information users upload to the major platforms may at some point no longer belong to them but to the big technology companies.
As Nicolas Suzor argues, the essence of these terms is to protect the rights and interests of companies and platforms, and the disconnect between user experience, social values, and legal reality in some ways exacerbates privacy problems and the privacy paradox (Suzor, 2019).
“When these collections of data are no longer seen as people’s personal material but merely as infrastructure, the specific meaning or context of an image or a video is assumed to be irrelevant.”
Kate Crawford
In this context, we can understand the concern about deepfakes. Deepfakes do have many positive potential applications, including virtual dressing rooms, assisting film production after an actor’s death, and giving people who have unexpectedly lost their voice the chance to speak in their own voice again (Murphy & Flynn, 2021). However, the event that made deepfakes famous came in 2017, when a Reddit user named “deepfakes” fabricated and published deepfake pornographic videos of celebrities using machine learning techniques (Mirsky & Lee, 2021). In fact, deepfake pornography created with AI and machine learning still accounts for the majority of the deepfake phenomenon to this day.
Photograph from Pixabay
Deepfakes are affecting the public, politics, and the world
Manipulating images and sounds is not new; before AI and machine learning spread, it was possible with other tools, such as Photoshop. But those tools have a much higher barrier to use, and it is difficult to produce convincing images quickly and in large numbers with them. AI, by contrast, can generate large amounts of realistic content in a short time, content that is difficult to recognize as fake with the naked eye. Deepfakes are now often used to create fake pornographic videos and fake news about celebrities and politicians, and they have caused significant damage. They have also been used as tools of identity theft, fraud, defamation, intimidation, and harassment against ordinary people, with targets often unable to prove the content is fake and potentially suffering financial and reputational damage.
The impact on politics
As early as 2018, many media outlets, including the BBC, published news about deepfakes to warn the public, yet deepfakes targeting politicians are still appearing. Some politicians use deepfakes as a tool to enhance their own image and increase voter support: in India, M Karunanidhi, the five-time chief minister of Tamil Nadu who died in 2018, was ‘resurrected’ with the help of AI and deepfake technology, appearing in a video in support of his son MK Stalin. Fabricating images of the dead raises ethical and moral concerns about deepfakes. Many more people use deepfakes to attack politicians and influence elections. In the United States, a phone call to voters made by an AI impersonating Joe Biden’s voice spread false information intended to influence the outcome of an election, advising voters not to vote in that Tuesday’s election; similar fake news stories appear frequently in elections around the world. These deepfakes challenge the operation of democratic systems and the politics of many countries, because such fake news undermines the public’s right to know, making it difficult for the public to distinguish true from false information accurately and quickly in the short time before an election, which can then affect voting results. Fake news and disinformation have become a kind of moral panic that creates distrust of traditional media (Hameleers & Marquart, 2023). Even more worrying, more and more celebrities blame deepfakes for unflattering information about themselves, or claim that their enemies are spreading false information to attack them, which further exacerbates this crisis of trust. In 2022, for example, during the Russia-Ukraine war, both sides frequently accused each other of spreading misinformation; in one case, a video of Putin’s hand moving oddly as he reached toward a microphone was dismissed by Ukraine as a deepfake.
Photograph: Alexandra Robinson/AFP via Getty Images
A weapon aimed at women
According to research, ‘96 percent of deepfake images are non-consensual porn’, and 100 percent of the targets and victims of these fake porn videos are women, because the relevant algorithms are trained only on images of women (Jacobsen & Simpson, 2023). Before AI became widespread, an abuser seeking online sexual abuse out of revenge needed real pornographic material of the victim before posting it online; the emergence of AI and deepfakes eliminated that step, so an abuser needs only an image of the victim’s face to carry out this online sexual abuse. When used in the field of pornography, the deepfake is a powerful weapon aimed only at women. Victims often feel the fear of being exposed when they go out, because they have lost agency over their bodies and can only be passively attacked (Jacobsen & Simpson, 2023). Although these videos and images are fake, deepfakes have caused real emotional harm to victims, and they are increasingly recognized as a new form of digital abuse and objectification of women (Bode et al., 2021). The recent mass circulation of fabricated deepfake pornographic images of Taylor Swift had a strong impact on society, and American politicians have even called for new laws to govern deepfakes. It can be seen that under the influence of patriarchal society and hegemonic elite systems, sex remains a powerful means of attacking women; even female celebrities with worldwide influence are no exception. And while celebrities have the ability to fight back when attacked by deepfakes, many more ordinary people can only absorb the damage.
Photograph: Imaginechina/SIPA USA/PA Images
How can we protect ourselves from deepfakes?
Protecting ourselves from becoming targets and victims of deepfakes is, in essence, protecting our personal information and privacy in the digital age. As long as we use the web and social media, it is difficult to completely avoid disclosing personal information, because platform terms of service and privacy policies are everywhere, even though most of the time they are secure. As individuals, we can be careful to verify sources when using the Internet and software. We can also switch personal social media accounts to private mode, which keeps our personal information from being open to the public and reduces the possibility of leaks. From the government’s perspective, properly limiting platforms’ rights within their own domains might help users better protect their privacy, but government intervention may itself involve another kind of privacy disclosure.
References
Bode, L., Lees, D., & Golding, D. (2021). The Digital Face and Deepfakes on Screen. Convergence: The International Journal of Research into New Media Technologies, 27(4), 849–854. https://doi.org/10.1177/13548565211034044
Crawford, K. (2021). Introduction. In Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (pp. 1–22). New Haven: Yale University Press. https://doi.org/10.12987/9780300252392-001
Hameleers, M., & Marquart, F. (2023). It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech. International Journal of Communication (Online), 17, 6291–6312. https://link.gale.com/apps/doc/A776056065/AONE?u=usyd&sid=bookmark-AONE&xid=b4f2b14b
Jacobsen, B. N., & Simpson, J. (2023). The tensions of deepfakes. Information, Communication & Society, 1–15. https://doi.org/10.1080/1369118x.2023.2234980
Mirsky, Y., & Lee, W. (2021). The Creation and Detection of Deepfakes. ACM Computing Surveys, 54(1), 1–41. https://doi.org/10.1145/3425780
Murphy, G., & Flynn, E. (2021). Deepfake false memories. Memory, 30(4), 1–13. https://doi.org/10.1080/09658211.2021.1919715
Suzor, N. P. (2019). Who Makes the Rules? In Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press. https://doi.org/10.1017/9781108666428