Deepfakes: Your face and voice may have been stolen

Introduction

Have you heard of deepfakes? Have you ever used one? The term may sound unfamiliar, but if I ask instead whether you know about face-swapping and lip-synchronization, I suspect you are already well acquainted with them. Beyond face-swapping and lip-synchronization, other kinds of AI-based audio, video, and image manipulation also fall under the umbrella of deepfakes.

In social media apps like TikTok, face-detection lenses can show you what you will look like as you age. You can even swap your face with the hero or heroine of a movie or TV show using any of the countless free or paid apps and online tools; all you need to do is upload a photo of your face or turn on your phone's camera.

The Dalí Museum in St. Petersburg, Florida, collaborated with the advertising agency Goodby, Silverstein & Partners (GS&P) to use deepfake technology to "resurrect" Salvador Dalí. The deepfaked Dalí introduces the exhibition to visitors, giving them an intimate, personalized experience; he can even take photos with visitors and send them by text message.

As deepfake and artificial intelligence technology continue to develop, faked faces and voices are becoming ever more realistic. This not only enriches everyday life but also offers new ways for museums, educators and other industries to present their work. However, deepfakes also raise serious concerns about personal privacy. When we upload facial or voice data, that information is collected and stored by the application. Facial recognition now unlocks important assets such as bank accounts and door locks, so any leak of this sensitive data can have a huge impact on personal property and safety. Deepfakes therefore create new privacy and security problems, and how to regulate and protect users' private data deserves serious attention.

What are deepfakes?

Deepfakes, a portmanteau of "deep learning" and "fake," are hyper-realistic, digitally manipulated videos that depict people saying and doing things that never actually happened (Westerlund, 2019). Artificial intelligence techniques analyse a person's facial or vocal data so that the synthesized images, video, and audio look and sound real.

Deepfakes typically fall into five types: (1) face-swapping, (2) lip-synchronization, (3) puppet-mastering, (4) facial synthesis and attribute manipulation, and (5) audio-only deepfakes. Among these, puppet-master deepfakes mimic the expressions, or even the whole body, of the target person and animate them according to the producer's wishes (Masood et al., 2023).
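
To make the face-swapping category concrete, the sketch below shows, in Python with OpenCV, the face-detection and region-replacement steps that such pipelines start from. This is a deliberately naive illustration under stated assumptions: a production deepfake replaces the crude paste at the end with a trained encoder-decoder network that regenerates one identity with the other's pose and lighting, and the file names here are placeholders.

```python
# Naive sketch of the first stages of a face-swap pipeline, using
# OpenCV's bundled Haar cascade. A real deepfake replaces the crude
# paste step with a trained encoder-decoder network; "source.jpg" and
# "target.jpg" are placeholder file names.
import cv2

def detect_face(image):
    """Return the (x, y, w, h) box of the largest detected face, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])  # largest by area

def naive_face_swap(source_img, target_img):
    """Paste the source face over the target face box (no blending,
    no learned generator -- illustration only)."""
    src, dst = detect_face(source_img), detect_face(target_img)
    if src is None or dst is None:
        raise ValueError("no face found in one of the images")
    sx, sy, sw, sh = src
    dx, dy, dw, dh = dst
    face = cv2.resize(source_img[sy:sy + sh, sx:sx + sw], (dw, dh))
    out = target_img.copy()
    out[dy:dy + dh, dx:dx + dw] = face
    return out

swapped = naive_face_swap(cv2.imread("source.jpg"), cv2.imread("target.jpg"))
cv2.imwrite("swapped.jpg", swapped)
```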

Because deepfake content is forged and artificially created, it is difficult for victims to determine who is responsible for the invasion of their privacy, and deepfakes have become a growing threat.

How do deepfakes threaten personal privacy?

The growing use of data analytics frequently strains social norms around privacy. Misuse or disclosure of personal data may adversely affect an individual's social relationships, reputation, and financial status, and can even lead to civil liability, criminal penalties, or physical harm (Nissim & Wood, 2018).

Deepfakes can be exploited by unscrupulous individuals for personal and financial gain. In early February 2024, a finance officer at a Hong Kong-based multinational company was defrauded of HK$200 million (A$40 million) during a video conference call: the fraudster used deepfake technology to impersonate a senior official of the company so convincingly that employees did not realize it was not him.

Deepfakes also allow personal images and videos to be misused. Malicious actors can splice the faces of targeted individuals into fabricated content, damaging a person's reputation or mental health and even creating legal consequences. The wide circulation of pornographic "deepfake" images of Taylor Swift on social media in early 2024 made her one of the most prominent victims of deepfake pornography in recent years, and the victims of deepfake pornography are overwhelmingly women.

Is our sensitive personal information protected?

Rad and Christie (2024) point out that in June 2023 the Australian Government released a paper discussing safe and responsible AI in Australia, building on the National Science and Technology Council's paper Rapid Response Information Report: Generative artificial intelligence – Language models and multimodal base models. The government hoped to hear from industry about how to reduce the risks posed by the various uses of AI and how to support legal and ethical AI practices. However, there is currently no legislation in Australia that specifically addresses the use of deepfake technology or the creation and use of deepfake images, audio or video (Rad & Christie, 2024).

Currently, the primary legislation protecting facial information in Australia is the Privacy Act, which states that biometric information (including facial data) cannot be collected without an individual's consent. However, the law blurs the line between express and implied consent (Vladimirova, 2024). We live in a world of surveillance cameras, whether it is a facial-recognition camera in a shopping mall or someone's dash cam, and we do not always notice every "camera in operation" sign. This means you can be filmed without realizing it, and that filming is often treated as implied consent. Relying on implied consent leaves our facial data exposed to potential exploitation (Vladimirova, 2024).

In other words, we do not know whether the photos, voices and videos we post on public social media platforms are being saved elsewhere on the Internet, or whether they have already been stolen and used to synthesize deepfakes. According to Rad and Christie (2024), "the Privacy Act is limited in that it does not currently apply to small businesses (including individuals operating a business) unless the business has an annual turnover of more than AUD $3 million or one of the exceptions to the exemption apply". Clearly, there are loopholes in the Privacy Act.

When we open an app for the first time, a privacy statement pops up, but very few people actually read it. Kemp (2018) points out that 94% of Australians do not read all the privacy policies that apply to them. Worse, privacy notices can themselves undermine privacy: people are asked to "agree" to them without genuinely understanding how their information will be used, which suggests they do not have any real choice either.

At the same time, sharing personal information on social networks has become commonplace, and it is extraordinarily easy to find a person's photo or voice online. All you need is the person's name, or the username they might use, and a search on the major social media apps will likely turn up a photo or video they have posted. Even if you care deeply about privacy and delete all of your facial data from the Internet, you may still end up in databases without your knowledge, for example by appearing in other people's TikTok videos (Vladimirova, 2024). In the age of big data, it is therefore difficult to ensure that sensitive personal information such as one's face and voice is never collected or sold.

Fortunately, Australians whose images or videos have been maliciously altered can now seek help from eSafety.

“eSafety investigates image-based abuse which means sharing, or threatening to share, an intimate photo or video of a person online without their consent. This includes intimate images that have been digitally altered like deepfakes. We can also help to remove: online communication to or about a child that is seriously threatening, seriously intimidating, seriously harassing or seriously humiliating – known as cyberbullying. Illegal and restricted material that shows or encourages the sexual abuse of children, terrorism or other acts of extreme violence”

(eSafety Commissioner, 2022)

In addition, eSafety works with industry and users to reduce the harm caused by deepfakes. According to the eSafety Commissioner (2022), eSafety focuses first on education, aiming to raise Australians' awareness of deepfakes so that they can make informed choices. Second, eSafety promotes its 'Safety by Design' initiative, encouraging industry to develop methods for reviewing deepfakes, helping companies and organizations screen their products and services for safety issues, and supporting the development of deepfake-related policies, terms of service, and community standards that make deepfakes easier to identify and manage.

What can we do?

For users, deepfake videos can often still be spotted through careful analysis. They typically show the following tell-tale features: unnatural movements (eye movements, facial expressions, and so on); a lack of blinking; skin-colour anomalies (a person's skin tone may be inconsistent); strange lighting (unnatural light colours and odd background colours); and unsynchronized lip movements. Beyond that, users need to recognize that deepfakes will not be well regulated in the short term, so they should stay vigilant and adaptable, think critically, and use media software sensibly; the short sketch below shows how one such cue, skin-colour stability, could be checked automatically. Meanwhile, try not to upload your own face or voice to unofficial websites, and when posting photos and videos on social media, take care to protect your private information, for example by checking whether a photo or video shows personal ID, bank account details or other private information.
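
As a concrete illustration of the skin-colour cue, the following Python sketch (using OpenCV's bundled Haar cascade) measures how much the average hue of the detected face region drifts between sampled frames. The file name and threshold are illustrative assumptions; this is a toy heuristic under stated assumptions, not a validated forensic detector.

```python
# Toy heuristic: flag videos whose face-region hue drifts a lot between
# frames -- one crude proxy for the "skin colour anomalies" cue above.
import cv2
import numpy as np

def hue_drift_score(video_path, sample_every=5):
    """Mean absolute frame-to-frame change in average hue of the face box."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    hues, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:  # sample a subset of frames for speed
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, 1.1, 5)
            if len(faces):
                x, y, w, h = max(faces, key=lambda b: b[2] * b[3])
                hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
                hues.append(hsv[..., 0].mean())  # average hue of face region
        idx += 1
    cap.release()
    if len(hues) < 2:
        return 0.0
    return float(np.mean(np.abs(np.diff(hues))))

# A higher score suggests less stable skin tone. The threshold below is
# an illustrative guess, not a calibrated decision boundary.
if hue_drift_score("clip.mp4") > 5.0:
    print("hue instability: inspect this clip more closely")
```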

For regulators, the threat of deepfakes should be addressed through dedicated legislation, and there are already progressive initiatives in the United States. In California, for example, AB-602, which took effect in 2020, prohibits the use of synthesized human imagery in pornography without the consent of the person depicted, and AB-730 prohibits the distribution, "with actual malice," of manipulated audio or video of a political candidate within 60 days of an election (Halm et al., 2020).

Social media platforms and AI software developers, for their part, have a responsibility to use technological advances to reduce the harm caused by deepfakes. Platforms hold the most important control over the material that users can share and view online (Suzor, 2019). This requires them to put more effort into updating and enforcing policies for identifying and removing deepfake content to protect user privacy. At the same time, they need to develop more advanced techniques for recognizing AI-generated works and to label content that may be deepfaked, so that users are alerted; a toy sketch of such a label follows.
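
To show what such labelling might look like in practice, here is a minimal, hypothetical Python sketch of a platform attaching a synthetic-media flag to a post record. All field names and the confidence threshold are invented for illustration and are not any platform's real schema; real provenance metadata would more likely follow an industry standard such as C2PA.

```python
# Hypothetical sketch of a platform-side synthetic-media label. Field
# names and the 0.9 threshold are illustrative assumptions, not a real
# platform schema or standard.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SyntheticMediaLabel:
    detector_name: str       # which classifier or review process flagged it
    confidence: float        # detector score in [0, 1]
    reviewed_by_human: bool  # whether a moderator confirmed the flag

@dataclass
class Post:
    post_id: str
    media_url: str
    label: SyntheticMediaLabel | None = None

    def warning_badge(self) -> str | None:
        """Text a client app could show next to flagged media."""
        if self.label and (self.label.reviewed_by_human
                           or self.label.confidence >= 0.9):
            return "This media may be AI-generated or altered."
        return None

# Usage: a post flagged with high confidence gets a visible warning.
post = Post("p1", "https://example.com/v.mp4",
            SyntheticMediaLabel("video-forensics-v2", 0.95, False))
print(post.warning_badge())
```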

Conclusion

All in all, the future of deepfakes and personal privacy remains uncertain, and the tension between technological innovation and user privacy will be the biggest challenge. Deepfake technology itself is neutral; it will take the joint efforts of legislators, software developers, and users to harness AI for technological progress while holding to the essential premise of protecting personal privacy, dignity, and safety.

References

eSafety Commissioner. (2022, January 23). Deepfake trends and challenges – position statement. https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes

Halm, K. C., Kumar, A., Segal, J., & Kalinowski IV, C. (2020, February 28). Two new California laws tackle deepfake videos in politics and porn. Media Law Monitor, Davis Wright Tremaine. https://www.dwt.com/blogs/media-law-monitor/2020/02/two-new-california-laws-tackle-deepfake-videos-in

Kemp, K. (2018, May 14). 94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour. The Conversation. https://theconversation.com/94-of-australians-do-not-read-all-privacy-policies-that-apply-to-them-and-thats-rational-behaviour-96353

Masood, M., Nawaz, M., Malik, K. M., Javed, A., Irtaza, A., & Malik, H. (2023). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. Applied Intelligence. https://link.springer.com/article/10.1007/s10489-022-03766-z

Nissim, K., & Wood, A. (2018). Is privacy privacy? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. https://royalsocietypublishing.org/doi/10.1098/rsta.2017.0358

Rad, I., & Christie, A. (2024, January 24). Rolling in the Deepfakes: Generative AI, privacy and regulation. LexisNexis® Australia. https://www.lexisnexis.com.au/en/insights-and-analysis/practice-intelligence/2024/rolling-deepfakes-generative-artificial-intelligence-privacy-regulation

Suzor, N. P. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press. https://doi.org/10.1017/9781108666428

Vladimirova, M. (2024, March 4). Your face for sale: Anyone can legally gather and market your facial data without explicit consent. The Conversation. https://theconversation.com/your-face-for-sale-anyone-can-legally-gather-and-market-your-facial-data-without-explicit-consent-224643

Westerlund, M. (2019, November). The emergence of deepfake technology: A review. Technology Innovation Management Review. https://timreview.ca/article/1282
