In an era when algorithms pervade nearly every field of society, people embrace the convenience of their personalized functions while, to some extent, overlooking the threats that lie behind them. This essay explores how algorithmic personalization benefits the consumer experience while also posing a potential problem of discrimination.
Section I Overview of Algorithmic Personalization
Algorithmic personalization refers to the practice of using big data and machine learning to analyze users’ behavioral data and preferences in order to provide personalized product recommendations, content displays and marketing strategies for different individuals. The emergence of personalized functions is an inevitable trend in the development of algorithms; as Terry Flew (2021) notes, “Algorithms can be improved over time by learning from repeated interactions with users and data how to respond more adequately to their inputs: well-designed algorithms evolve so as to be able to predict the outputs sought by users from the vast data sets they draw upon”.
To achieve algorithmic personalization, Internet companies collect massive amounts of user data, such as browsing records, search history, purchase records and geographic location, and feed these data into algorithms that analyze and mine them to build a profile and preference model of each user. Based on these models, a company then recommends different products, adjusts prices and displays different advertisements for different users, promoting the products and services that match each user’s needs and thereby improving conversion rates and revenue.
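To make this pipeline concrete, the following is a minimal sketch in Python, assuming a toy event log and catalog; the data, the weighting scheme and the function names (build_profile, recommend) are hypothetical illustrations of the profile-then-recommend flow described above, not any platform’s actual implementation.

```python
# Minimal sketch of a personalization pipeline: events -> profile -> ranking.
# All data and weights are invented for illustration.
from collections import Counter

def build_profile(events):
    """Aggregate a user's browsing/purchase events into category weights."""
    profile = Counter()
    for event in events:
        # Assumption: purchases signal stronger interest than views.
        profile[event["category"]] += 3 if event["type"] == "purchase" else 1
    total = sum(profile.values())
    return {cat: count / total for cat, count in profile.items()}

def recommend(profile, catalog, k=3):
    """Rank catalog items by how well they match the preference model."""
    scored = [(profile.get(item["category"], 0.0), item["name"]) for item in catalog]
    return [name for score, name in sorted(scored, reverse=True)[:k]]

events = [
    {"type": "view", "category": "electronics"},
    {"type": "view", "category": "electronics"},
    {"type": "purchase", "category": "books"},
]
catalog = [
    {"name": "laptop", "category": "electronics"},
    {"name": "novel", "category": "books"},
    {"name": "blender", "category": "kitchen"},
]
print(recommend(build_profile(events), catalog))  # ['novel', 'laptop', 'blender']
```

The same profile can just as easily drive price adjustment or ad selection, which is what makes the mechanism double-edged.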
Section II Positive Impact of Algorithmic Personalization on Consumer Behavior
It is widely acknowledged that algorithmic personalization has greatly streamlined purchase processes by improving the user experience, saving consumers time and money, and increasing the relevance of the products and services they are shown. Many e-commerce platforms, such as Amazon, Taobao and JD.com, have adopted this personalized pattern.
By applying algorithmic personalization, a website’s content layout and display can be optimized and customized to consumers’ behavioral habits and usage scenarios, creating a friendlier browsing and shopping environment and thus improving the overall user experience. Moreover, because algorithms synthesize data from many sources and intelligently surface the product information a consumer is likely to find interesting, consumers need to spend far less time and energy searching, which raises shopping efficiency and lowers cost. Furthermore, by analyzing data such as purchase history and browsing records, algorithms can grasp consumers’ needs and preferences more accurately, allowing the website not only to recommend the products that best meet those needs but also to identify further potential interests.
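One common way such “potential interests” surface is collaborative filtering: items liked by users with similar tastes are suggested to a user who has not yet seen them. The sketch below is a toy user-user collaborative filter; the users, items and ratings are entirely invented.

```python
# Toy user-user collaborative filtering over a tiny, invented rating table.
import math

ratings = {
    "alice": {"camera": 5, "tripod": 4, "lens": 5},
    "bob":   {"camera": 4, "tripod": 5, "drone": 5},
    "carol": {"novel": 5, "cookbook": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def potential_interests(user):
    """Items liked by similar users that `user` has not interacted with yet."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, rating in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(potential_interests("alice"))  # "drone" ranks first via bob's similar taste
```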
Section III Discrimination in Algorithmic Personalization
Despite all the convenience algorithmic personalization has brought us, one aspect still deserves attention: bias.
Kate Crawford (2021) recounts the story of Clever Hans in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Hans the horse appeared to arrive at right answers by stopping his tapping at the moment of a slight change in the questioner’s posture, breathing or facial expression; it was the questioner’s unconscious cues that made the horse seem intelligent. The case of Hans and his owner von Osten, Crawford argues, demonstrates how biases can subtly work their way into systems in complex ways, and how researchers themselves can get caught up in, and influence, the very phenomena they are attempting to observe and analyze impartially. The story of Hans serves as a warning that, in Crawford’s words, “you can’t always be sure of what a model has learned from the data it has been given. Even a system that appears to perform spectacularly in training can make terrible predictions when presented with novel data in the world”. In other words, AI systems and algorithms work in much the same way as Hans: they learn from the material people give them, but not only what people intend to teach. What else might they learn? No one can say precisely.
Given how unpredictable such systems can be, Terry Flew (2021) identifies several challenges posed by algorithms and the way they use big data to shape individual and social outcomes, among them “Bias, fairness and transparency. The data that inform decisions are often biased in terms of sampling practices or reflect societal biases”. By closely tracking details such as people’s consumption habits, income and spending, geographic location and demographic information, algorithms can readily be used to target people’s psychological weaknesses and deeper desires. In this way, algorithmic personalization steers and tempts different people toward different products and services, reinforcing social bias and discrimination against people of different genders, ethnicities, income levels and so on.
How does such discrimination arise? Since algorithms are designed and programmed by humans, developers may consciously or unconsciously embed their own biases, perceptions and values in an algorithm’s design and implementation, giving rise to algorithmic bias. The training data used by AI can also be biased: many AI systems build their models and make predictions by learning from large amounts of training data, and if those data contain inherent biases, for example sparse or homogeneous records for certain groups of people, the model will reflect those biases and so will the AI. Bias in historical data is another significant cause: if the training data carry biases inherited from the past or still present, such as certain groups having been treated unfairly, the algorithm will learn and replicate those biases as well.
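A minimal sketch, with entirely fabricated records, shows how the historical-data mechanism works: a naive model that “learns” each group’s past approval rate simply reproduces the historical disparity in its predictions.

```python
# Fabricated history: group A was approved 80% of the time, group B only 30%.
history = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20 +
    [{"group": "B", "approved": True}] * 30 + [{"group": "B", "approved": False}] * 70
)

def train(records):
    """'Learn' each group's approval rate from past decisions."""
    rates = {}
    for group in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in rows) / len(rows)
    return rates

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.3} -- the historical disparity becomes the prediction
```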
There are many documented cases of such discrimination caused by algorithmic personalization. In 2016, critics accused Amazon of excluding predominantly black neighborhoods in some cities from same-day delivery, likely because its algorithms factored in demographics (Ingold & Soper, 2016). In the same year, an investigation found that Facebook’s ad platform allowed advertisers to exclude viewers by race for housing and employment ads, a practice otherwise illegal under the Fair Housing Act (Angwin & Parris, 2016). Both cases reflect the real discrimination brought about by algorithmic personalization, which undermines social stability and may give rise to wider disorder.
Section IV The Impact of Discriminatory Algorithms on Consumer Behavior
Having seen how algorithmic personalization can produce a substantial share of social discrimination and bias, we cannot neglect its impact on consumer behavior.
Personalized recommendations do, on the one hand, facilitate consumers’ purchase processes, but they also limit choice and decision-making. Price discrimination occurs when e-commerce sites vary prices and offers for the same products based on user characteristics such as location, income level and browsing data. For example, in 2000 Amazon displayed different DVD prices based on users’ demographics, shopping histories and online behavior (Streitfeld, 2000). In 2013, McAfee, a company that develops antivirus software, applied differential pricing to its product subscriptions: existing customers attempting to renew were charged $79.99, while new customers were offered the same software at a discounted rate of $69.99 (Caillaud & De Nijs, 2014). Similarly, Auchan Direct, a French multinational retail group, offered free delivery to new customers while existing customers were required to pay extra delivery fees (Caillaud & De Nijs, 2014).
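The pricing logic behind such cases can be sketched in a few lines; the rule and numbers below are invented for illustration (they echo the McAfee figures above) and are not any retailer’s actual algorithm.

```python
# Hypothetical segment-based pricing: same product, different quotes per user.
BASE_PRICE = 69.99

def quote(user):
    """Return a price for the same subscription, adjusted per user segment."""
    price = BASE_PRICE
    if user.get("existing_customer"):
        price += 10.00          # loyalty penalty: renewals pay more
    if user.get("affluent_zip"):
        price *= 1.05           # location-inferred willingness to pay
    return round(price, 2)

print(quote({"existing_customer": True}))                         # 79.99
print(quote({"existing_customer": False, "affluent_zip": True}))  # 73.49
```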
There is no doubt that such algorithmic discrimination has negative effects on consumer behavior. If algorithms deliberately show or hide certain product information based on a consumer’s demographic characteristics, such as race, gender or age, consumers lose access to the full range of options and are forced to make decisions within a restricted set of choices. Moreover, when companies use algorithms to offer different prices to different groups of people, this price discrimination violates the principle of fair trade and makes some consumers pay higher prices without their knowledge.
Furthermore, because of feedback loops in the practical use of AI, the output data of an algorithmic system can become its input data, which exacerbates the intrinsic bias and keeps the cycle going. Under such circumstances, where algorithmic personalization generates discriminatory phenomena, consumer behavior is adversely affected and constrained, leading to a vicious spiral in consumption patterns.
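A toy simulation makes the loop visible: if the system ranks by past clicks and users can only click what is shown, a one-click head start in fabricated data grows into near-total dominance within a few rounds.

```python
# Feedback loop under toy assumptions: output (what is ranked first and shown)
# becomes input (what users can click), so a tiny initial skew snowballs.
clicks = {"A": 11, "B": 10}  # nearly balanced starting history

for step in range(5):
    top = max(clicks, key=clicks.get)  # rank by past clicks...
    clicks[top] += 100                 # ...and only the top result gets exposure
    share_a = clicks["A"] / (clicks["A"] + clicks["B"])
    print(f"step {step}: A's click share = {share_a:.2f}")
# A one-click head start grows to roughly a 0.98 share after five rounds.
```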
Section V Case Study: “Big Data Killing”
With the surge of e-commerce platforms, consumers have become aware that the prices of the same goods or services displayed in travel, shopping and taxi apps can differ from one person’s phone to another’s. This is the so-called “big data killing”. In the e-commerce market, “big data killing” refers to the practice of price discrimination whereby e-commerce platforms use big data to collect consumers’ information and analyze their consumption preferences, purchasing habits, income levels and other attributes through complex algorithms, so as to sell the same product or service to different consumers at different prices (Lei, Gao & Chen, 2021). In particular, platforms exploit users’ reluctance to switch away from their habitual apps, charging long-standing users higher fees without informing them.
During the Chinese New Year period, when people travel much more frequently than usual, the problem of big data killing once again stirred heated discussion. When booking the same hotel through travel apps, different accounts see different prices, with new users always quoted less; in taxi apps, premium members are quoted higher prices than new members and even find it harder to get a car. These are problems people have long hoped to see resolved. Just a few months ago, a netizen shared the experience of buying air tickets online: across three accounts buying the same flight in the same cabin, the price differed by as much as 900 yuan (Jin, 2024). According to a survey by the Beijing Consumers’ Association, 61.21% of respondents believe that big data killing of old users mainly takes the form of different discounts or offers for different users, and 45.76% perceive that the price of the same product or service rises automatically after repeated browsing (Jin, 2024).
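The netizen’s check can itself be expressed as a small audit script: query the same itinerary from several accounts and compare the quotes. get_quote below is a hypothetical stand-in for a platform request, and the prices are invented to mirror the 900-yuan spread in the report.

```python
# Cross-account price audit with fabricated responses.
def get_quote(account, flight):
    # Placeholder for a real price lookup; these figures are invented.
    fake_prices = {"new_user": 1200, "regular": 1650, "premium": 2100}
    return fake_prices[account]

flight = "same flight, same cabin"
quotes = {acct: get_quote(acct, flight) for acct in ("new_user", "regular", "premium")}
print(quotes)
spread = max(quotes.values()) - min(quotes.values())
print(f"price spread for identical flight: {spread} yuan")  # 900, as in the report
```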
Essentially, big data killing stems from merchants’ belief that they are free to exploit consumer trust and information asymmetry to extract excess profit; the practice infringes on consumers’ rights and interests, deviates from the principles of fairness and honesty, and violates the relevant legal provisions in China. Yet even though such behavior is illegal, merchants keep doing it. Why? Because big data killing is covert to a degree: unless consumers are very vigilant, spotting the implicit change is very difficult. Moreover, some platforms, relying on their informational advantage, defend the practice with excuses such as prices differing at different times and places or preferential offers for new users. In addition, the channels through which consumers can protect their rights are far from smooth, being time-consuming and laborious. Protecting those rights is made harder still by consumers’ lack of technical understanding of personalization systems and by the extreme difficulty of proving what they have experienced.
China has, however, introduced regulations to address the problem. Through more effective monitoring and disciplinary measures, platforms are required to fulfill their responsibilities substantively and efficiently. For consumers, complaint and reporting channels are to be opened, with the relevant departments expected to provide technical support that helps consumers find and submit evidence, while also disseminating knowledge that raises consumers’ awareness of the practice. Online platforms are likewise called on to treat every consumer honestly and equally, achieving mutual benefit and a win-win situation through positive interaction.
Section VI Countermeasures and Solutions
To enjoy the benefits of algorithmic personalization while solving the discriminatory problems it brings, several measures can be considered. AI training data can be made more diverse, covering a wide range of genders, ages, ethnicities, geographies and other characteristics. Systems can be audited regularly to assess whether their algorithms are unfair, with detection tools and metrics applied to quantify algorithmic bias, as sketched below. Additionally, teams that develop and maintain AI algorithms should favor people with different backgrounds, experiences and values.
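As one example of such a metric, the sketch below computes the demographic parity difference: the gap between groups’ rates of receiving a favorable outcome (here, being shown a discount). The audit log and the flagging threshold are invented for illustration.

```python
# Demographic parity difference over a fabricated audit log.
def demographic_parity_difference(outcomes):
    """outcomes: list of (group, got_favorable_outcome) pairs."""
    rates = {}
    for group in {g for g, _ in outcomes}:
        decisions = [fav for g, fav in outcomes if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return max(rates.values()) - min(rates.values()), rates

audit_log = ([("A", True)] * 45 + [("A", False)] * 5 +
             [("B", True)] * 20 + [("B", False)] * 30)
gap, rates = demographic_parity_difference(audit_log)
print(rates)                      # {'A': 0.9, 'B': 0.4}
print(f"parity gap = {gap:.2f}")  # 0.50; one common rule of thumb flags gaps > 0.1
```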
In short, algorithmic personalization can improve consumers’ shopping experience and optimize purchase processes, but the valid concern that this practice enables new forms of bias and discrimination should not be ignored. The roles of society, business and government in addressing these issues should be emphasized, and the authorities should put appropriate regulations in place.
References
Ahmed, R. A. E.-D., Shehab, M. E., Morsy, S., & Mekawie, N. (2015). Performance study of classification algorithms for consumer online shopping attitudes and behavior using data mining. 2015 Fifth International Conference on Communication Systems and Network Technologies (pp. 1344–1349). IEEE. https://doi.org/10.1109/CSNT.2015.50
Amazon.com Press Center. (2000, September). Amazon.com issues statement regarding random price testing [Press release].
Angwin, J., & Parris, T., Jr. (2016, October 28). Facebook lets advertisers exclude users by race. ProPublica. https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15, 209–227. https://doi.org/10.1007/s10676-013-9321-6
Caillaud, B., & De Nijs, R. (2014). Strategic loyalty reward in dynamic price discrimination. Marketing Science, 33(5), 725–742. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=00d616fc6303e2b1beb62015f2fd7f96750459c8
Chen, J. (2021). Economic thinking of big data killing in the Internet era. In J. Abawajy, K.-K. Choo, Z. Xu, & M. Atiquzzaman (Eds.), 2020 International Conference on Applications and Techniques in Cyber Intelligence (ATCI 2020) (Advances in Intelligent Systems and Computing, Vol. 1244). Springer. https://doi.org/10.1007/978-3-030-53980-1_142
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392
Flew, T. (2021). Regulating platforms. Polity. https://bookshelf.vitalsource.com/books/9781509537099
Ingold, D., & Soper, S. (2016, April 21). Amazon doesn’t consider the race of its customers. Should it? Bloomberg. https://www.bloomberg.com/graphics/2016-amazon-same-day/
Jin, X. (2024, February 29). 加强平台监管,防止“大数据杀熟” [Strengthen platform supervision to prevent “big data killing”]. People’s Daily Online. http://opinion.people.com.cn/n1/2024/0229/c1003-40185476.html
Lei, L., Gao, S., & Chen, R. (2021). How to solve the problem of big data killing: Evolutionary game in e-commerce market based on collaborative supervision of government and consumers. Journal of Systems & Management, 30(4), 664–675. https://xtglxb.sjtu.edu.cn/EN/abstract/abstract1156.shtml
Streitfeld, D. (2000, September 27). On the Web, price tags blur: What you pay could depend on who you are. The Washington Post. https://www.washingtonpost.com/archive/politics/2000/09/27/on-the-web-price-tags-blur/14daea51-3a64-488f-8e6b-c1a3654773da/
Wu, Z., Yang, Y., Zhao, J., & Wu, Y. (2022). The impact of algorithmic price discrimination on consumers’ perceived betrayal. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.825420