Introduction
With the advent of the Fourth Industrial Revolution, technologies such as the Internet of Things, artificial intelligence, and big data are continuously integrating into daily life, moving us from the digital economy era into the era of the intelligent digital (ID) economy. In this transition, data has become both a key resource for social production and a national strategic asset, while intelligent algorithms, fueled by data, have emerged as new tools of production widely applied in social operations and economic strategy. The influence of data and algorithms on the allocation of societal resources has therefore grown increasingly significant, becoming a primary driving force of the ID era. However, alongside these advancements, data and algorithms also introduce non-negligible risks to social stability and national security, demanding greater attention to, and reflection on, digital policy and governance.
This paper therefore discusses the case of "big data price discrimination" arising from algorithmic recommendation systems to unveil the potential pitfalls of digital tools. It encourages a more open and equitable discussion of the public's right to know about digital policies, pushing for the formulation of more transparent digital policy.
Big Data Price Discrimination in E-commerce Platforms
Have you ever noticed, when booking hotels or flights on a mobile platform, that the price you see differs from what your friends see in the same app? A netizen from Zhengzhou, Henan, China, reported to journalists that while purchasing tickets for the same flight on the Feizhu platform, he saw three different prices on three different mobile phones, with a gap of up to 900 RMB. Users have also observed that repeatedly searching for the same flight can drive the displayed ticket price upward, and that the more tickets a user has purchased, the higher the prices shown, meaning long-time users of the software tend to see higher prices than new users. This phenomenon is not limited to airline-ticket platforms but is prevalent across online service platforms of all kinds (Xinhuanghe, 2024). It is a typical manifestation of big data price discrimination.
This price optimization strategy is not a new phenomenon. As early as the year 2000, Amazon displayed an early form of what is now known as big data price discrimination by conducting random price tests on its website, showing different prices for the same DVD to different customers, leading to 6,900 customers purchasing the product without knowledge of the price variations (CNN, 2000). This incident sparked a heated debate about price transparency and fairness.
These cases clearly demonstrate the central role of data and algorithms. Platforms use them for price optimization: algorithms analyze user behavior and personal information such as past purchase history, device type, search frequency, and geographic location to predict each user's psychological price point and dynamically adjust the displayed price accordingly (Azzolina et al., 2021). Although platforms describe this variable pricing as one of their marketing strategies, it is undeniably achieved by exploiting users' trust and the barriers that keep them from seeing other prices, thereby maximizing business profit. This places users at an ever more disadvantageous position inside the digital platform's information cocoon. In the process of big data price discrimination, platforms wield absolute control over the algorithms, collecting users' gender, preferences, and spending habits to deliver differentiated "personalized" service, further undermining consumer rights (Xue et al., 2022).
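The mechanism described above can be illustrated with a minimal sketch. Everything here is hypothetical: the feature names, thresholds, and weights are invented for illustration and do not describe any real platform's pricing model; they simply show how behavioral signals could be turned into a personalized price multiplier.

```python
# Illustrative sketch of how a platform *might* personalize a displayed price
# from behavioral signals. All feature names and weights are hypothetical.

def personalized_price(base_price: float, profile: dict) -> float:
    """Adjust a base fare using simple behavioral multipliers."""
    multiplier = 1.0
    # Repeated searches for the same item can signal urgency to buy.
    if profile.get("searches_for_item", 0) >= 3:
        multiplier += 0.05
    # A long purchase history can signal lower price sensitivity.
    if profile.get("past_purchases", 0) >= 10:
        multiplier += 0.08
    # Device type is sometimes treated as a proxy for spending power.
    if profile.get("device") == "premium_phone":
        multiplier += 0.03
    return round(base_price * multiplier, 2)

loyal_user = {"searches_for_item": 5, "past_purchases": 20, "device": "premium_phone"}
new_user = {"searches_for_item": 1, "past_purchases": 0, "device": "budget_phone"}

print(personalized_price(1000.0, loyal_user))  # the long-time user sees a higher price
print(personalized_price(1000.0, new_user))    # the new user sees the base price
```

The point of the sketch is that nothing about the product changes; only the inferred profile of the buyer does, which is exactly why the same flight can show three prices on three phones.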
In today’s age of increasingly sophisticated algorithmic tools, while users enjoy high-quality consumer experiences, they are increasingly unaware of how much of their personal data is collected by platforms, nor how their consumer behavior is captured to influence their purchasing decisions. Against the backdrop of widespread use of smart devices, algorithms pervade daily life, yet their opaque operation limits consumer freedom, infringes on personal privacy, and often leads users to unwittingly accept conditions that are not in their favor.
Therefore, understanding algorithms, how they work, and their impact on individuals and society becomes crucial. A fair and transparent algorithmic system requires the collective effort of governments, platform developers, and all sectors of society. Through standardized data protection regulations and clear policies on algorithm transparency, public understanding and control over algorithmic systems can be enhanced, ensuring that the personal interests of every user are protected.
What are Algorithms and Algorithmic Identity?
According to Merriam-Webster's dictionary, an algorithm is a systematic procedure for solving a problem or achieving a specific outcome; in digital contexts, it is the foundational mechanism that enables computers to analyze data and execute logical operations autonomously (Merriam-Webster, n.d.). Algorithms are now widely used for recommending content, processing information, predictive analytics, and automated decision-making. Among these, personalized recommendation algorithms are the systems users encounter most often. For instance, many have noticed that re-entering a ride-sharing app can show an increased price; that the more one refreshes an online shopping session, the more products of interest appear, extending the shopping duration; and that the more short videos one watches, the more closely the content aligns with one's preferences, heightening enjoyment and engagement through immediate dopamine stimulation. This happens because digital platforms collect users' browsing and search data, record their interest preferences, and use algorithms to analyze this historical behavior to predict what users will want next. This maximizes the user experience on the platform and strengthens dependence on it.
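The "predict what users will want next" step can be sketched in its simplest content-based form: score candidate items by how many tags they share with the user's viewing history. Real platforms use far richer models; the tags, items, and scoring rule here are invented purely for illustration.

```python
from collections import Counter

# Minimal sketch of a content-based recommender: rank candidate items by how
# often their tags appear in the user's history. All data is invented.

def recommend(history_tags, candidates, k=2):
    """Return the k items whose tags best match the user's historical tag counts."""
    weights = Counter(history_tags)  # how often each tag was consumed
    scored = [
        (sum(weights[t] for t in tags), item)
        for item, tags in candidates.items()
    ]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # highest score first
    return [item for _, item in scored[:k]]

history = ["cooking", "travel", "cooking", "pets"]
catalog = {
    "street-food tour":  ["cooking", "travel"],
    "stock-market tips": ["finance"],
    "cat compilation":   ["pets"],
}
print(recommend(history, catalog))  # items sharing the most-watched tags come first
```

Even this toy version exhibits the feedback loop discussed in the text: whatever the user already consumed is weighted more heavily, so the feed narrows toward past preferences.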
Algorithms, centered on big-data thinking and user needs, have led to the labeling, encapsulation, and "cocooning" of members of networked societies. This has given rise to the concept of "algorithmic identity": algorithms construct new online identities for users from analysis of their online behavior and personal information. In the price-discrimination case above, it is precisely these distinct algorithmic identities that lead platforms to generate different prices for different users. Notably, algorithms also automatically infer users' gender and race in digitized form and redefine users' actual racial and gender identities from the collected information, thereby reconstructing social identities and group boundaries in network societies (Cheney-Lippold, 2011). Algorithmic identity not only reflects the interaction between users and digital platforms but also shows how algorithms are gradually penetrating social and cultural life.
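A minimal sketch can make the idea of an inferred, behavior-derived identity concrete. The categories, event names, and thresholds below are entirely hypothetical; the point is only that the labels are assigned from behavior alone, regardless of who the person behind the account actually is.

```python
# Hedged sketch of "algorithmic identity": a platform assigns categorical
# labels from a stream of behavioral events. Categories and thresholds are
# invented for illustration, not taken from any real system.

def infer_identity(events: list) -> dict:
    """Derive inferred labels from behavioral events, ignoring self-reported identity."""
    profile = {"inferred_segment": "general", "inferred_spender": "low"}
    if events.count("viewed_luxury_item") >= 3:
        profile["inferred_spender"] = "high"
    if events.count("searched_parenting") >= 2:
        profile["inferred_segment"] = "parent"
    return profile

events = ["viewed_luxury_item"] * 3 + ["searched_parenting", "searched_parenting"]
print(infer_identity(events))
# The platform now treats this account as a high-spending parent,
# whether or not that matches the actual person.
```

In Cheney-Lippold's terms, the dictionary returned here is the user as far as the platform is concerned, and downstream systems (pricing, recommendations, advertising) act on it rather than on the person.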
As interactions between users and algorithms deepen, the boundary of private-data sharing grows increasingly blurred: users' online behavior and even emotional states are collected and analyzed. Can you be sure your emotionally charged posts are not being harvested and analyzed? Short of not posting at all, it is difficult to prevent already-published information from being collected. Although most platforms offer settings for controlling the sharing of private data and interest preferences, these options are often obscure and unfriendly, so users rarely notice or use them. Platforms monetize this data to optimize and precisely target advertising, earning substantial profits while users, unawares, surrender their data rights (Mallard, 2021). This constitutes a deprivation of user rights.
Driven by algorithms, people have shifted from actively seeking information to passively receiving targeted content. This shift demonstrates the powerful influence of algorithms in the reception and transmission of information. Algorithms are now determining what content users can see, which has a significant impact on shaping people’s thoughts and behaviors. Therefore, it is crucial to understand how algorithms utilize information flows to influence our cognition and decision-making, enabling users to maintain the ability to think independently while receiving various perspectives and information in the cluttered digital world.
The Impact of Algorithms
In the era of the digital economy, data is a crucial production resource, and maximizing its value is inseparable from the efforts of algorithms. Algorithms assist governments and businesses in making wiser decisions by efficiently processing data sets, thus maximizing benefit values and avoiding the pitfalls of decision-making errors. Therefore, the widespread application of algorithms greatly enhances the development of both the economy and society.
Algorithms primarily contribute to economic and social development by enhancing production efficiency, facilitating the intelligent upgrading of industries, and driving innovation in business models. As a core competitive advantage within industries, algorithms reduce decision-making costs and improve work efficiency. In power and energy companies, for example, algorithms reduce repetitive manual work and tackle complex, variable problems such as energy pricing and weather-forecast modeling ("Global Artificial Intelligence in Manufacturing Market Report," 2022). In business-model innovation, many companies now use algorithms to optimize recruitment and personnel management, cutting unnecessary costs and improving operational efficiency and market competitiveness (Lee et al., 2019).
While algorithms contribute to economic and social development, they also bring potential harms. Economically, the damage is evident in energy consumption and misuse: processing large datasets demands significant electrical power and continuous cooling, straining limited environmental resources, and training deep-learning models consumes vast computing resources, increasing carbon emissions and burdening environmental conservation. Socially, the harms manifest in a focus on cost reduction at the expense of workers' rights: businesses may cut workers' benefits, and lower-skilled jobs may disappear, posing risks to social stability. Ethically, algorithms can infringe on personal privacy, since data drives their operation: to refine models, massive amounts of user data are collected and analyzed without users' knowledge, violating their right to privacy and increasing the risk of cyber fraud (Crawford, 2021).
Algorithms have transcended their basic function as advanced data processing tools. In their operation, they rely on collected historical data, inevitably carrying and reflecting specific values and biases. This phenomenon is not limited to the realm of digital technology; the influence of algorithms is gradually penetrating human culture, shaping our values and affecting individual behavioral choices. In modern society, algorithms play a significant role in cultural production, information exchange, and the construction of social relationships. (Hallinan & Striphas, 2016)
Algorithm Governance Strategy
In algorithm governance, it is important not only to recognize the neutrality of the technology itself and support the innovation and progress of algorithms but also to ensure that the development of algorithms adheres to ethical, economic, and social norms and values. This is to guarantee that the advancement of algorithmic technology is both orderly and safe.
1. Market Transparency and Corporate Responsibility. In The Black Box Society, Frank Pasquale describes the "technological black box": users cannot see the inner workings of algorithms, and this lack of visibility impairs their right to be informed and exacerbates injustice in the marketplace (Pasquale, 2015). Platforms that apply algorithms to user data should therefore disclose how their recommendation systems work and how their pricing decisions are made, ensuring that all user groups receive equal treatment. Algorithmic transparency is both a primary means of protecting consumer rights and a duty of application platforms.
2. Advance Data Legislation and Strengthen Regulation and Penalties. In mitigating the harms caused by algorithms, governments need to clarify their principal responsibilities. In response to changes in the digital age, more precise data legislation should be proposed, and the supervision of enterprises should be strengthened, along with increased penalties for privacy intrusions made for profit. Platforms using algorithmic applications should present their privacy data collection policies to users in a clearer and more concise manner, tailored to the characteristics of their company’s products. The sense of responsibility in the algorithm application industry requires the joint efforts of both enterprises and governments.
3. Enhance Citizens' Independent Thinking and Promote Algorithm Awareness. It is crucial to improve public understanding of how algorithms operate and influence daily life, enabling citizens to make more informed choices when facing algorithm-driven decisions. A primary reason citizens' private data leaks is insufficient understanding of how algorithms function, so publicizing how algorithms are built and how they make decisions is key to addressing the issue. For instance, incorporating discussion of everyday algorithmic applications into school curricula, community events, and online courses can help users better comprehend the technology.
Conclusion
In sum, while enjoying the convenience and efficiency brought by algorithmic technology, we must also remain vigilant about its impacts on society, the economy, and ethics. As the price-discrimination example shows, algorithms can infringe on consumer rights without consumers' knowledge. Addressing increasingly complex digital challenges therefore requires a multi-layered, multi-stakeholder algorithm governance framework that integrates technological innovation, legal regulation, and ethical guidelines, ensuring the healthy development of algorithms while protecting everyone's fundamental rights and the public interest. Through the collective efforts of policymakers, technology experts, businesses, and the public worldwide, we can build a more open and transparent algorithmic future.
References
Azzolina, S., Razza, M., Sartiano, K., & Weitschek, E. (2021). Price discrimination in the online airline market: An empirical study. Journal of Theoretical and Applied Electronic Commerce Research, 16(6), 2282–2303. https://doi.org/10.3390/jtaer16060126
CNN. (2000, September 28). Amazon pricing flap. CNNMoney. https://money.cnn.com/2000/09/28/technology/amazon/
Cheney-Lippold, J. (2011). A new algorithmic identity. Theory, Culture & Society, 28(6), 164–181. https://doi.org/10.1177/0263276411424420
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392
Global Artificial Intelligence in Manufacturing Market Report 2022 to 2027: Growing Focus on Boosting Operational Efficiency of Manufacturing Plants Presents Opportunities. (2022, October 28). GlobeNewswire, NA. https://link.gale.com/apps/doc/A724301029/STND?u=usyd&sid=bookmark-STND&xid=b54e7ecf
Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117-137. https://doi.org/10.1177/1461444814538646
Lee, J., Suh, T., Roy, D., & Baucus, M. (2019). Emerging technology and business model innovation: The case of artificial intelligence. Journal of Open Innovation: Technology, Market, and Complexity, 5(3), 44. https://doi.org/10.3390/joitmc5030044
Merriam-Webster. (n.d.). Algorithm definition & meaning. https://www.merriam-webster.com/dictionary/algorithm
Mallard, G. (2021). Critical theory in the age of surveillance capitalism: How to regulate the production and use of personal information in the Digital age. Law & Social Inquiry, 47(1), 349–354. https://doi.org/10.1017/lsi.2021.80
Pasquale, F. (2015). The Black Box Society. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch
Pavolotsky, J. (2013). Privacy in the age of Big Data. Business Lawyer, 69(1), 217+. https://link.gale.com/apps/doc/A358314856/AONE?u=usyd&sid=bookmark-AONE&xid=1fced0ca
Xinhuanghe. (2024, February 2). Unveiling “Big Data Price Discrimination”: Endless Tactics Emerge, E-commerce and Travel Platforms Become Hotbeds of Complaints. Xinlang Website. https://news.sina.cn/sh/2024-02-02/detail-inafsnea3745953.d.html
Xue, L.-D., Liu, Y.-J., Yang, W., Chen, W.-L., & Huang, L.-S. (2022). A blockchain-based protocol for malicious price discrimination. Journal of Computer Science and Technology, 37(1), 266–276. https://doi.org/10.1007/s11390-021-0583-x