With the advent of the big-data era, data has become one of the most important assets for enterprises and individuals alike. For individuals in particular, what is at stake is not only how personal information is collected, used, processed, or shared in the digital environment, but also how individuals exist in the digital world. As the internet develops rapidly, data security and the boundaries of privacy matter more and more. There is no doubt that in the digital age information is accessible anytime and anywhere, which brings great convenience to enterprises and individuals. But as companies collect and store ever more personal and sensitive information, data privacy has become a serious social issue. New technologies expand our freedom, yet they also enable unprecedented breaches of privacy. National laws have yet to keep pace with the privacy needs created by new digital technologies, and respect for anonymous speech and individual autonomy must be balanced against legitimate concerns such as law enforcement.
What does privacy mean in the digital age?
Data privacy generally refers to a person’s ability to decide for themselves how, when, and to what extent personal information about them is shared or communicated with others. That information can be a person’s name, contact details, location, or behavior in the real world. Online privacy takes many forms, including data-sharing controls, privacy statements on websites, and data-transparency initiatives. Digital privacy is also a right intended to guarantee the protection of the personal data of users who access services via the internet. Privacy problems emerging from these electronically mediated contexts are often discussed without a clear sense that they may arise in different ways because of different architectures and protocols (Nissenbaum, 2018). In practice, however, almost every action we take in the virtual world can be tracked, and privacy often seems more an ideal than a reality. A user’s search history or the posts they “liked” on social media – everyone expects this information to be private, but it often isn’t.
Because people now use so many different mobile apps, our online activities leave digital “footprints” that can be used to identify us, such as employment records, geolocation data, shopping and entertainment preferences, social media posts, and healthcare and financial information. An analysis of data from more than 20,000 users found that each user has an average of 90 online accounts, and that in the United States an average of 130 accounts are linked to a single email address (Lord, 2020).
Imagine how it would feel to have all of your personal information, daily activities, preferences, and even your most private moments monitored, and to have that data sold across major commercial platforms. Who is really responsible for protecting our personal data?
What threats does facial recognition pose to digital privacy?
From unlocking an iPhone to automatically tagging faces in Facebook photos, facial recognition technology is becoming increasingly integrated into our daily lives. It is a way of identifying a person or confirming an individual’s identity using their face, and it is a category of biometric security. Facial recognition systems can identify people in photos, in videos, or in real time. The global facial recognition market is expected to reach $12.67 billion by 2028 (The Insight Partners, 2022).
The facial recognition market is growing rapidly, and companies are adopting the technology for a variety of reasons: identifying and verifying individuals to grant them access to online accounts, authorizing payments, tracking employee attendance, and targeting shoppers with specific ads. Others see it as an intrusive form of surveillance that can seriously damage individual freedom and, ultimately, society. These concerns are reflected in regulations around the world designed to protect people’s identifying data. One example is Europe’s General Data Protection Regulation (GDPR), which makes clear that citizens have the right to privacy and that violations carry consequences. Such concerns are not unreasonable: people do not like the feeling of being watched or the possibility that their personal information may be compromised. For example, if a home camera with facial recognition is compromised, hackers could gain access to data identifying anyone on the camera system. If that face data is sent to the cloud, it can be matched against massive databases using tools such as Amazon’s Rekognition software. This is naturally a matter of great concern.
It has been reported that SenseNets, a Shenzhen-based company operating a facial recognition system in China, leaked the personal information of 2.5 million people because its database was left unprotected (Toulas, 2019). The database included users’ ID numbers, addresses, photos, and locations over the previous 24 hours. A leak of this kind is very dangerous, and the company should bear greater responsibility; its website is currently down (Toulas, 2019).
Facial recognition technology raises several security and privacy problems. The first is the lack of consent. A fundamental principle of data privacy is that companies must tell users what identifying data they are collecting and obtain their consent, yet the most serious privacy concern with facial recognition is that it can be used to identify individuals without their consent. Unlike other biometrics, such as fingerprints, a face scan can be captured easily and covertly. The second problem is often described as “unencrypted faces”: unlike credit card details protected by a password, facial information cannot easily be changed once leaked, so a breach of facial recognition data greatly increases the potential for stalking, identity theft, and harassment. The third concern is inaccuracy, a common criticism of facial recognition. Research shows that accuracy varies across populations, with people of color and women having the highest rates of false positives (Najibi, 2020). This can lead to false arrests in criminal contexts, and innocent people who are wrongly identified may be denied services to which they are entitled.
Data leaks fuel cyber attacks
Cybercriminals use personal data for profit. Hundreds of thousands of usernames, passwords, pieces of personal information, and confidential documents are sold on the internet. Criminals buy this data to steal money and identities, commit crimes in victims’ names, and defraud their family members. In recent years, phone scams, bank card theft, fraudulent use of other people’s bank cards, and online consumer fraud have remained serious problems, most of them caused by the disclosure of personal information, which has become a major source of crime. ACMA member Samantha Yorke said Australians lost more than $25 million to SMS scams last year, and the impact on individuals and families can be devastating (Australian Communications and Media Authority, 2024).
Phishing scams are scams in which criminals use text messages to trick you into giving them your personal and financial information. Commonwealth Bank has issued a warning about a widespread text message scam posing as a “security alert” telling people to keep their accounts safe. The latest scam texts claim to come from Commonwealth Bank of Australia officials, ask people to click on suspicious links to “protect” their accounts, and claim their accounts are registered to the new CommBank app (Antrobus, 2024). A Commonwealth Bank spokesman said the link was not a legitimate communication from the bank and that the bank would never ask customers to log in and provide private information via a link in a text message (Antrobus, 2024). The bank advises customers to delete any such messages immediately. If you click through to the fake website, scammers can break into your bank or other accounts, or sell your information to other scammers.
Data brokers are another potential source of leaked personal information. A data broker collects information about you and then sells that data to other companies. The information collected can be wide-ranging, from your address and birthdate to your job, the number of children you have, and even your interests. Because data brokers hold increasingly detailed profiles of individuals, they store all sorts of sensitive information, and the public has grown concerned that disclosure of this data could put individuals at risk of fraud or identity theft (Rostow, 2017). Senator Elizabeth Warren introduced the Health and Location Data Protection Act, which would prohibit data brokers from selling Americans’ location and health data. Warren said data brokers collect and sell large amounts of personal data, often without users’ consent or knowledge, and profit from the location data of millions of Americans; this business practice is not regulated by federal law and poses a serious risk to Americans around the world (Elizabeth Warren, 2022).
How can we protect our personal information?
We should understand the following ways in which personal information is disclosed so that we can take proper protective measures and keep our personal information from being misused.
1. Do not use apps that collect information illegally.
During the Covid-19 pandemic, some illegal apps posed as Covid-19 monitoring tools and collected and used personal information without authorization and beyond their stated scope. Because such apps create a risk that personal information will be leaked or abused, the public needs to stay alert when using related apps.
2. Do not lend your ID or bank account to others.
Your bank account should be used only by you; renting out or lending bank cards and accounts is illegal. Renting or lending your identity documents may allow criminals to engage in illegal activities in your name, exposing you to serious legal risks as well as damage to your personal credit and disclosure of your personal information.
3. Set strong passwords for your accounts.
To make passwords easier to remember, some users reuse the same password across platforms or choose simple, weak passwords. Hackers can then use technical means to guess or crack those passwords. It is best to set a strong password, use a separate password for each important account, and change passwords regularly; a minimal password-generation sketch follows this list.
4. Do not click on fraudulent links such as phishing sites.
Criminals often use fake base stations, SMS, or social platforms to send fraudulent messages containing links to phishing websites, inducing users to click through and fill in account passwords, mobile phone numbers, verification codes, and other details, which leads to the disclosure of important account credentials; a simple link-checking sketch also follows this list.
5. Do not casually publish or disclose personal information on social media.
Many people enjoy posting about their daily lives on social platforms without realizing that they may be leaking personal information, for example by accidentally showing a ticket or passport and thereby revealing identity details. Also try to avoid using location features on social platforms, and avoid “talking too much” on them.
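To illustrate point 3 above, the sketch below generates strong, independent passwords. It is only a minimal example using Python’s standard secrets module; the length, character set, and account names are assumptions for illustration, and in practice a password manager will do this for you.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A separate password per account, so a breach of one site does not expose the others.
for account in ("email", "banking", "social media"):
    print(account, "->", generate_password())
```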
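For point 4, the following minimal sketch shows one way to sanity-check a link before clicking it: compare its hostname against a short allow-list of addresses you already know. The commbank.com.au hostnames are used purely as an illustration, and a check like this is no substitute for ignoring the link and typing the bank’s address yourself.

```python
from urllib.parse import urlparse

# Illustrative allow-list: only hostnames you already know belong to your bank.
TRUSTED_HOSTS = {"commbank.com.au", "www.commbank.com.au"}

def looks_trustworthy(url: str) -> bool:
    """Return True only for HTTPS links whose hostname is on the allow-list."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(looks_trustworthy("https://www.commbank.com.au/netbank"))          # True
print(looks_trustworthy("https://commbank.com.au.secure-login.example")) # False: lookalike domain
```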
In conclusion, almost every industry faces data security and data privacy issues. We cannot deny the power of technology; what we should do is prevent it from being abused. Technology is neutral, and the harm comes from the people behind it, so we must use laws and other means to limit the chances of it being turned to bad ends. Users need the right to know about and consent to the collection of their personal data, risks in the storage and processing of information must be controlled, and remedies must be in place. As legal norms continue to advance and be implemented, the network security market and industry will have more room to develop.
References
Antrobus, B. (2024, January 16). Major alert issued by Commonwealth Bank to ‘immediately delete’ new text scam claiming to be security alert. News.
Australian Communications and Media Authority. (2024, February 15). Five telcos breached for allowing SMS scams. https://www.acma.gov.au/articles/2024-02/five-telcos-breached-allowing-sms-scams
Elizabeth Warren. (2022, June 15). Warren, Wyden, Murray, Whitehouse, Sanders Introduce Legislation to Ban Data Brokers from Selling Americans’ Location and Health Data. https://www.warren.senate.gov/newsroom/press-releases/warren-wyden-murray-whitehouse-sanders-introduce-legislation-to-ban-data-brokers-from-selling-americans-location-and-health-data
Lord, N. (2020, September 29). Uncovering Password Habits: Are Users’ Password Security Habits Improving? Digital Guardian.
Najibi, A. (2020, October 24). Racial Discrimination in Face Recognition Technology. SITN.
Nissenbaum, H. (2018). Respecting Context to Protect Privacy: Why Meaning Matters. Science and Engineering Ethics, 24(3), 831-852. https://doi.org/10.1007/s11948-015-9674-9
Rostow, T. (2017). What Happens When an Acquaintance Buys Your Data?: A New Privacy Harm in the Age of Data Brokers. Yale Journal on Regulation, 34(2), 667-707.
The Insight Partners. (2022, February 25). Facial Recognition Market Size Worth $12.67Bn, Globally, by 2028 at 14.2% CAGR. https://www.prnewswire.com/news-releases/facial-recognition-market-size-worth-12-67bn-globally-by-2028-at-14-2-cagr—exclusive-report-by-the-insight-partners-301489784.html
Toulas, B. (2019, February 15). SenseNets Facial Recognition Company Leaks Out Personal Data of 2.5 Million Chinese Citizens. TechNadu.