Introduction:
Have you ever seen Barack Obama call Donald Trump a “complete asshole,” heard Mark Zuckerberg brag about having “complete control over billions of people’s stolen data,” or watched Jon Snow apologise for the dismal ending of Game of Thrones? If so, you have seen a forgery produced by deepfake technology.
A deepfake is essentially the product of a deep learning model that synthesises new content by superimposing, merging or replacing text, images, audio and video clips (Gaur, 2022). As a new form of forgery, it lets creators tamper with the faces, voices and other characteristics of the people in a video, producing footage that is difficult to distinguish from the real thing (Gaur, 2022). While the technique was initially used mostly to create and distribute pornographic material, deepfakes have since moved into fields such as politics, advertising, science and medicine, which means that everyone is a potential target (Mirsky and Lee, 2021). The risks of deepfakes are therefore pervasive, insidiously and invisibly distorting the minds and misleading the behaviour of audiences.
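To make the “merging or replacing” concrete, the classic face-swap recipe trains a single shared encoder with one decoder per identity; at swap time, a face of person A is encoded and then decoded with person B’s decoder, so B’s face appears with A’s pose and expression. The following is a minimal PyTorch sketch of that idea, not any specific published implementation; the layer sizes, 64x64 crops and training details are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Shared encoder: compresses an aligned 64x64 face crop into a
    # latent vector capturing pose and expression.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One decoder per identity: reconstructs that person's face
    # from the shared latent code.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
# Training (not shown): reconstruct A's faces with decoder_a and B's
# faces with decoder_b, sharing the encoder (e.g. with an MSE loss).
# The swap: encode a frame of person A, decode with B's decoder, so
# B's face appears with A's pose and expression.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for an aligned face crop
swapped = decoder_b(encoder(face_a))
```

Real pipelines add face detection, alignment and blending of the swapped face back into the frame, but this shared-encoder, per-identity-decoder split is the core of the replacement step.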
On Christmas Day 2020, 25 minutes after Queen Elizabeth II delivered her Christmas message on the BBC, the UK broadcaster Channel 4 aired a fake news video spoofing the message, created with deepfake AI face-swapping technology. The Queen in the fake video closely resembled the real Queen in hairstyle, appearance and demeanour, and even tiny expressions were imitated perfectly. From the start of the speech, however, the AI version of the Queen spoke out of character: first she thanked Channel 4 for giving her the chance to say the things she could not say on the BBC; then she alluded to people’s behaviour during the COVID-19 pandemic and teased the British Prime Minister; next she handed her mobile phone to a member of staff and performed a dance on the spot; finally, she returned to the desk and sat down.
The fake video drew a barrage of scorn and criticism after it aired. Produced with AI technology, it posed a serious threat to the authenticity of the original news video. For the Queen, its production and dissemination violated her image rights and damaged her reputation; for Channel 4, the authenticity of its news and the authority of the organisation could be questioned as never before because of the AI face-swapped video.
This incident shows that AI is gradually becoming a powerful technology for creators to master, but it can just as easily become a tool for distorting facts and deceiving audiences. As the technology develops, fake videos will only become more realistic, and the future of information production should be a serious question for every producer and disseminator of information.
Deepfake Shakes Up Visual Truth
- Crisis of confidence caused by fake video
Confirmation bias means that people seeking to verify something uncertain prefer to believe it is true first and then look for evidence to corroborate that belief (Vatreš, 2021). Accordingly, an audience’s first reaction to deepfake information is not to weigh its authenticity but to judge whether it fits their pre-existing cognitive structures. Deepfakes on social media rarely exist in isolation. Platforms’ intelligent recommendation mechanisms reinforce audiences’ prior perceptions, and with this comes the group polarisation produced by informational segregation, in which the free flow of information is cut off (Giansiracusa, 2021). Once fake information is accepted first and matches the audience’s decoding logic, correcting the resulting viewpoint becomes far harder. Herein lies the threat of deep forgery: through repeated, myth-like communication it constructs a mimetic reality. The consequences can be damaging, fostering a general distrust of others and of society and, ultimately, a serious crisis of confidence.
- Visual forgery as a defence for liars
If a deepfake escapes the scrutiny of platforms and experts and the deception succeeds, a crisis of confidence follows. If the deepfake is identified as false, however, the creator can still get away with it by shifting the blame to the technology, claiming to be a victim, or pointing to the existence of deepfakes to dismiss evidence of any anomalous behaviour (Lim, 2023). Detected or not, deepfakes undermine the existing social media order. Highly convincing misinformation erodes trust in accurate information, damaging the credibility of the media while calling truth itself into question. Deepfakes thus become a defence for liars: any genuine evidence against them can be brushed aside as synthetic, and with that comes a fall in the cost and risk of deception and falsification.
- Accurate face-swapping dismantles visual objectivity
For the public, advances in photography and video technology made “seeing is believing” a means of discerning the truth (García-Ull, 2021). Deepfake face-swapping shatters this consensus on visual objectivity: the technology has matured to the point where anyone can be made to appear to say anything. Not only video but audio can be faked as well. Deepfakes crush the public’s collective sense of truth, continually opening new distances from established facts and blurring the real. Anyone with widely available image-altering tools can more easily spread misinformation or incite scandal and conflict (Gaur, 2022). Textual falsification is no longer a novelty; if video falsification becomes the norm, the public will no longer have any basis for objectivity. Using deepfakes for deception blurs the line between the real and the unreal and plays on the minds of audiences. When deep-rooted ideological problems collide with deepfakes, social cleavages can widen.
Digital Regulation of Deepfake
- Technical limitations imposed by laws and regulations
Firstly, the law should guarantee the public’s right to have negative forged information about themselves removed, and should order platforms to delete illegal content. Secondly, it should clarify the legal responsibility of those who produce abusive deepfakes, refine liability for the infringements deepfakes may cause, and ensure that infringements can be tracked and traced. For example, producers of deepfakes should be legally obliged to label forged content, and platforms and other intermediaries should be compelled to detect deepfakes, so as to guarantee the authenticity of the content users receive. Finally, penalties for platforms should be increased to curb the viral proliferation driven by sharing and recommendation mechanisms. Both the production and the dissemination of deepfakes need to be governed by a coherent body of laws and regulations that closes regulatory loopholes.
- Platform governance to avoid deepfake risks
Video streaming as a distribution form is more seductive, and thus more deceptive, than other forms of media manipulation, and platform sharing mechanisms enable fake information to spread rapidly (Lim, 2023). Because platforms provide the space in which deepfakes circulate, they inevitably bear a social responsibility to monitor them. Platforms should balance online anonymity against verifiable identity, raise the threshold of dissemination for low-credibility sources, and restrict users who repeatedly forward and disseminate fake content. Content suspected of being forged, together with the associated user data, should be entered into the platform’s algorithmic database and its recommendation weight reduced to prevent further spread (a minimal sketch of such downranking follows below). In addition, platforms should create and maintain communication norms for their user communities, using terms and conditions and platform policies to stem the spread of fake information. In terms of ethical oversight, the industry needs risk assessment and early warning for deepfake techniques, and technicians need to be trained in ethical awareness, in order to foster a healthy online environment.
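As a purely hypothetical illustration of the downranking just described (the field names, weights and thresholds are assumptions, not any real platform’s policy), a feed’s ranking score could be multiplied by a credibility factor that shrinks with each forgery strike against the source and collapses when the item itself matches the forgery database:

```python
from dataclasses import dataclass

@dataclass
class Item:
    base_score: float        # relevance/engagement score from the ranker
    source_strikes: int      # times this source was caught sharing forgeries
    flagged_as_forged: bool  # item matched the platform's forgery database

def credibility_weight(item: Item) -> float:
    # Each prior strike halves the source's weight; a direct match
    # against the forgery database suppresses the item almost entirely.
    weight = 0.5 ** item.source_strikes
    if item.flagged_as_forged:
        weight *= 0.05
    return weight

def ranking_score(item: Item) -> float:
    # Final ordering score: low-credibility sources and flagged items
    # are recommended less, slowing viral spread without hard deletion.
    return item.base_score * credibility_weight(item)

print(ranking_score(Item(1.0, 0, False)))  # 1.0    (clean source)
print(ranking_score(Item(1.0, 2, True)))   # 0.0125 (repeat offender, flagged)
```

The design point is that downranking is graduated: reach is throttled in proportion to evidence, rather than relying only on binary deletion.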
- Intelligent technology filters out fake content
The emergence of deep forgery has fused reality and the virtual ever more deeply. It permeates many real-life scenarios, demanding targeted measures for different forms of fusion and the effective use of intelligent technologies to filter fake content. For audio, AI algorithms are needed to reduce real-world noise, with deep learning models trained on dedicated datasets used to locate traces of forgery more accurately, according to how each forgery technique works (Mirsky and Lee, 2021). For video, detectors must weigh artefacts of differing quality. For simple splices, differences in eye colour or in the flicker rate of other features can be compared carefully, and missing detail or reflections in the teeth region can indicate whether footage is genuine (Giansiracusa, 2021). A toy illustration of the eye-colour check appears below.
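As a toy illustration of the eye-colour comparison mentioned above (not a production detector; the Haar cascade files ship with OpenCV, while the threshold value is an untuned assumption), one can locate both eyes in a frame and flag it when their mean colours diverge suspiciously, an inconsistency some early face-swaps exhibited:

```python
import cv2
import numpy as np

# Haar cascades bundled with OpenCV for face and eye detection.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def suspicious_eye_colour(frame_bgr, threshold=40.0):
    """Return True if the two detected eyes differ sharply in mean colour.

    Crude heuristic: crop each eye, average its BGR values, and flag
    the frame when the distance between the two means exceeds
    `threshold` (an assumed, untuned value).
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi_gray = gray[y:y + h, x:x + w]
        roi_colour = frame_bgr[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        if len(eyes) >= 2:
            # Keep the two largest detections as the actual eyes.
            eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
            means = [roi_colour[ey:ey + eh, ex:ex + ew].reshape(-1, 3).mean(axis=0)
                     for (ex, ey, ew, eh) in eyes]
            return float(np.linalg.norm(means[0] - means[1])) > threshold
    return False  # no face or too few eyes found: nothing to compare

# Usage: print(suspicious_eye_colour(cv2.imread("frame.png")))
```

Serious detectors instead train deep networks on large forgery datasets, but the sketch shows how individual physical inconsistencies become machine-checkable features.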
- Digital literacy to identify fake information
For individuals, the large-scale data collection behind deepfakes greatly increases the likelihood of being spoofed. On the one hand, the interactive nature of deep forgery increases users’ exposure to identity deception, covert surveillance and theft of recorded data. On the other, networked connectivity weakens the centralisation and control of security actors and can further accelerate the spread of fake information (Vatreš, 2021). The public needs to accept this reality, but acceptance need not be passive; it can mean adapting and adjusting with initiative. A return to value rationality is therefore essential, and it requires society as a whole to re-examine deepfakes ethically and morally and to use and deploy synthetic media responsibly. The public should protect their privacy, view and analyse online information rationally and critically, and receive digital literacy education. Such education not only helps people recognise the existence and power of deepfakes, but also makes them aware of media problems such as confirmation bias, filter bubbles and echo chambers, a humanistic concern that AI technology cannot replace.
Conclusion:
An important way in which deepfakes deconstruct the world is by destroying the real through deception, and social structures will be redefined in the process. The power of deepfake technology lies not in the sophistication of the synthesis but in its impact on truth. The current communications environment raises the risk of deepfakes crossing spatial and temporal boundaries. Cognitive biases and algorithmic techniques increase the likelihood that malicious forgeries will spread, and the ease of online sharing makes forged information difficult to eradicate once published, further entrenching information silos (Vatreš, 2021).
Although social media is regarded as an intermediary connecting netizens worldwide, moving social interaction from traditional ties of kinship and geography to online relationships, in reality every netizen is a node on the Internet, and relationships between people have fragmented as never before (Lim, 2023). Deepfake technology has merely amplified the power of media manipulation and, through continuous self-optimisation and iterative upgrading, become an effective tool of deception. But this does not mean deepfakes are an outright harmful technology. Their value to the entertainment industry is equally clear, and their future there is bright, provided the technology’s boundaries are set so that its applications are not abused. What matters is restoring the human subject and achieving both good governance and good use of the technology.
Reference list:
Gaur, L. (2022). DeepFakes: Creation, Detection, and Impact (1st ed., Vol. 1). Routledge. https://doi.org/10.1201/9781003231493
Giansiracusa, N. (2021). How Algorithms Create and Prevent Fake News: Exploring the Impacts of Social Media, Deepfakes, GPT-3, and More. Apress Media LLC.
García-Ull, F. J. (2021). DeepFakes: The Next Challenge in Fake News Detection. Anàlisi: Quaderns de Comunicació i Cultura, 64, 103–120. https://doi.org/10.5565/REV/ANALISI.3378
Lim, W. M. (2023). Fact or fake? The search for truth in an infodemic of disinformation, misinformation, and malinformation with deepfake and fake news. Journal of Strategic Marketing, 1–37. https://doi.org/10.1080/0965254X.2023.2253805
Mirsky, Y., & Lee, W. (2021). The Creation and Detection of Deepfakes. ACM Computing Surveys, 54(1). https://doi.org/10.1145/3425780
Vatreš, A. (2021). Deepfake Phenomenon: An Advanced Form of Fake News and Its Implications on Reliable Journalism. Društvene i Humanističke Studije (Online), 6(3(16)), 561–576. https://doi.org/10.51558/2490-3647.2021.6.3.561