AI Face-Changing and Facial Recognition Technology: A Challenge to Digital Privacy

We are living in an era in which digital privacy has become a crucial resource. If you are a social media user, you have probably suffered from digital privacy issues. Promotional messages arrive in your inbox every day, which means that your e-mail address and other personal information have been leaked. However, there is a more serious type of problem: the disclosure of biometric information, including your face and voice. You can simply reset your e-mail address to escape the annoying messages, but you cannot easily change your face or voice to escape a biometric data leak. Emerging AI technologies like facial recognition and AI face-changing bring convenience and high risk at the same time, and unfortunately there is limited knowledge and there are limited means to deal with the risk. For individuals, building awareness of AI face-changing and recognition technology and protecting ourselves is an ongoing task. For society as a whole, how to complete the regulation of AI facial technology and strike a balance between individuals and corporations is also a long-term challenge.

Here you might ask some questions. What is digital privacy exactly, why does it matter to me, and what can I do to protect myself, especially my biometric information? We live in a digital era, and digital privacy can be explained from two perspectives. One is access: a state in which one is not observed or disturbed by other people. The other is control: the right to have some say over how your personal information is collected and used. These two aspects span many contexts, such as technology platforms and business models or practices: the daily data you generate, surveillance, geolocation and sensor data, the clouds of data being gathered continuously on each person (Nissenbaum, 2010). We can generalize all of these things and actions as data tracking, collection, and trading. Why digital privacy and biometric information deserve such high importance is easy to explain. The prevalent adoption of facial recognition technology raises ethical questions about surveillance, consent, and misuse, and there is a rising tension between security and personal freedom (Wang, Wu, Zhou, & Fu, 2024). You can find countless cases of people all over the world falling victim to illegal and unethical AI face-changing hackers; some of the victims might be your friends or your boss. It is easy to get caught up in such a fraud but hard to defend your rights.

Biometric information leaks can cause serious financial losses, especially to large companies; even if a company has already set up its financial security system, hackers can deal it a crushing blow in an unexpected way. Imagine that you are a finance manager in a big company. One day the CFO makes a video call to you; you can see his face and hear his voice clearly, so you follow his instructions and transfer HK$200 million (about US$25 million) to the account he provides, but it is entirely a fraud powered by AI face-changing technology, and the company loses the money. This is not fiction; it is a real event that happened in Hong Kong at the beginning of 2024. Arup is a multinational engineering company, and hackers spent months collecting the biometric information of the CFO of Arup's Hong Kong branch from public speeches and interview videos. The hackers then used deepfake technology to generate a videoconferencing link sent to a target employee. During the conference, the target employee was the only real person present, and he was completely fooled by the face-changing technology. The anonymous employee later said that he had found the CFO's blinking frequency odd during the conference, but he did not realize it was a fraud. Fortunately, the hackers were not able to break into Arup's internal financial system. The event shocked the Hong Kong media, and the CFO whose biometric information was leaked decided to resign under the pressure of public opinion. Could Arup have avoided this fraud from the very beginning? In my view, they had opportunities to avoid this tragedy. The key is to raise alertness to AI face-changing technology. This does not mean that every employee needs to be an IT specialist, but every employee should check the legitimacy of a transaction before carrying it out: is it truly authorized? If you notice that your leader has a strange blinking frequency in a video conference, you should be sensitive enough to double-check the day's instructions through another channel. Besides, the company itself should invest in more IT defenses against such biometric hackers in advance.

AI face-changing hackers also target innocent individuals by creating and spreading illegal fake pornographic photos and videos; the hackers make profits and inflict emotional damage on the victims at the same time. But the victims are not entirely passive: they can fight back and stand up for their dignity and digital privacy even while the legal framework is still incomplete. Consider a recent case from China. In July 2024, China saw its first successful case of rights protection against AI face-changing. Ouyang is a young woman who had just turned 21 in 2024. One day a close friend told her that she had been seen on a porn website: there was a pornographic video in which Ouyang appeared. What is more, she discovered that personal details such as her ID number, home address, and even her college major had all been leaked. The pornographic videos and photos of her were all clearly priced, even though they were all fake. Ouyang's dignity and reputation were damaged and she nearly had a breakdown. However, thanks to Chinese digital regulations and laws, Ouyang did have the chance to fight back and have the hacker punished. After she called the police, they decided to start with real-name registration records to find out who the hacker was. By tracing the account on Douyin, it turned out that the hacker was Ouyang's ex-boyfriend, who had used deepfake technology to take revenge on her. In the end, Ouyang's ex-boyfriend received a written administrative penalty decision and was detained for 10 days. But the story was not over yet.

Ouyang won a phased victory, but unfortunately some doubts remained. The hacker was accused of spreading pornography, yet the case did not rise to the level of criminal responsibility: he will not go to jail or carry a lifelong criminal record. What is worse, after his 10 days of detention ended, he continued spreading and leaking Ouyang's personal information. When Ouyang turned to the police for help, they felt helpless too. Did Ouyang fight for her digital privacy? Yes, she did. Did she win the battle? Perhaps not yet. The severity of punishment for illegal AI face-changing in China is still being refined. Besides, Ouyang also faces strong pressure from people who mock the victims; some even join in spreading the fake videos and photos, causing secondary emotional harm to Ouyang, though she did nothing wrong at all. Ouyang is not the only woman to suffer from rampant AI face-changing hackers; in other contexts there are many similar cases in which women are the main victims, whether in Korea, America, or elsewhere. It is becoming a universal form of fraud, and until different countries publish regulations and articles of law to punish AI face-changing hackers, many women remain in a dilemma.

There is a cruel fact: many people in the digital era are not well equipped to deal with digital privacy risks. Most people know that digital privacy is important but do not have a deeper understanding of it; in other words, their knowledge of digital privacy remains superficial. Take Australia as an example: according to the Australian Community Attitudes to Privacy Survey 2023, 90% of Australians know why they should protect their personal information, but only 21% claim they have the corresponding privacy knowledge to protect themselves, and nearly 60% admit that they care about their personal data but do not know what to do about it. In my opinion, this cruel fact is one of the main reasons AI face-changing hackers can exploit this weak point. Your rights are infringed unknowingly, and once you become a victim, other people might mock you or put pressure on you, which is itself a symptom of lacking basic knowledge of digital privacy. It is both a challenge and an opportunity: once people improve their understanding of AI face-changing technology, there will be almost no living space left for the hackers, and the technology itself can truly release its potential. Your face and your voice are just like your ID number and home address: they are information, and they can create enormous value. Surfing the internet is sometimes risky, and you cannot let others make use of your information without your permission. That is a simplified definition of digital privacy.

Nowadays many challenges still face the progress of digital privacy, and there is real tension between financial interests and individuals (Wang, Wu, Zhou, & Fu, 2024). First, the basic social consensus about digital privacy and facial recognition technology is still at an initial stage regardless of context and region, and it is truly necessary for people to acquire digital privacy knowledge, especially concerning facial recognition technology. Second, the ability to achieve privacy often requires the privilege to create structure and make the choices that make such freedom possible (Warren & Brandeis, 1890). People marginalized by society have to compromise more and find it harder to fight for their digital privacy; some poor people are even forced to hand over their personal information to receive basic services in return (Marwick & boyd, 2018). It usually costs a great deal of time and money to protect one's digital privacy, especially biometric information; an influencer with millions of followers would get more attention and exposure to push forward his or her digital privacy protection, but what about ordinary people? Even if both influencers and ordinary people fight for their digital privacy, structural inequality persists. Third, legal construction always lags slightly behind the actual development of society, because legal institutions usually respond slowly to social changes, including changes in technology and social consensus (Friedman, 1968). That is why some victims hurt by AI face-changing hackers find it hard to protect their rights: society currently lacks corresponding and specific regulations and articles of law. Some scholars even advocate that facial recognition technology should be banned until robust and practical regulations and laws are established (Aniulis, 2022).
However, all the AI face-changing cases that emerge can serve as fertilizer and reference for society, both for social consensus and for legal construction. In my opinion, whatever the country or region, a specific definition of and articles of law against AI face-changing fraud should be established as soon as possible.

Weak awareness, lagging legal construction, and rampant misuse of AI face-changing and facial recognition: these three factors put people in the digital era in a serious dilemma when facing AI face-changing issues. But do not be too pessimistic about this situation. On the one hand, facial recognition technology does bring great convenience to the whole society. On the other hand, the regulatory framework will gradually adapt to AI technology, and we human beings are not merely passive users of facial recognition and face-changing technology.
