9,000,000 yuan in one day: AI live streamers become the cash cow
In order to satisfy commercial requirements, artificial intelligence live streamers have penetrated numerous industries, predominantly taking the role of broadcasters on the Douyin platform. Their formidable name in China is “virtual digital humans.” A considerable proportion of human broadcasters have been supplanted by virtual digital humans, and the commercial value of AI streamers is increasing at an accelerated rate. Some of them create vast wealth: a streamer known as “Xu Anyi,” for example, earned 9 million RMB in reward income in a single day. After three months of streaming on Douyin, he rose to become an industry-leading live streamer (Wen & Bi, 2022).
It is striking that individuals spend large sums of money on artificial intelligence broadcasters, because the object on which they are spending is not even real. At the forefront of the trend, the popularity and high income of virtual broadcasters have attracted considerable attention. However, one must question whether virtual broadcasters are simply a response to human needs or whether they may act as a mechanism to control humans.
Why are AI live streamers becoming popular?
Virtual digital humans can be classified into three distinct categories: entertainment virtual broadcasters, e-commerce virtual broadcasters, and media virtual broadcasters. Among them, entertainment live streamers receive the most attention, with Xu Anyi being a prime example. Based on big data, he was designed to embody what most women desire: attractive features, a soft voice, and considerate behaviour. The notions of the AI companion and the entertainment virtual live streamer are evidently similar; both aim to meet people’s emotional needs.
Nevertheless, e-commerce virtual live streamers possess the greatest long-term commercial value. They not only work nonstop for over a dozen hours a day without needing rest, but also demonstrate outstanding ability in multiple languages. This powerful potential has positioned them as the streamers of first choice for businesses, and it is one reason why leading companies across industries are competing to invest in and develop their own AI streamer projects.
What are the consequences behind the trend?
As the interaction between audiences and live streamers increases, Douyin’s database system carefully records each viewer’s behavioural data, including viewing duration, purchasing power, consumption decision habits, and more. As a result, Douyin continuously pushes content consistent with the consumption habits users have already formed. Meanwhile, studies have shown a positive correlation between how often someone watches AI broadcasters and how much they are willing to spend: people who watch AI broadcasters more often also spend more money.
This vast trove of data can be manipulated and transformed into various forms of information through complex algorithms and layout combinations. Through data collection, AI and algorithms become the entity that knows the user best: they can cultivate consumer preferences and continuously harvest them. In other words, algorithmic culture shapes user experiences and consumption patterns. This raises several concerns.
Concern 1: Impact of AI livestreamers on society
Biases in data algorithms
Data is the most important part of AI technologies such as machine learning. The foundation of AI fairness lies in data, because data integrity, completeness, scale, and quality are crucial for effective AI analysis and modelling. However, data also harbours hidden biases and prejudices: disproportionate representation of different groups, inherent discrimination and bias, lack of inclusivity, and problems with data accuracy and universality all contribute to data inequality. These biases within datasets can undermine the ethical integrity of AI applications.
Additionally, the issue of algorithmic discrimination warrants attention. Rights and accountabilities must be defined in collaboration between businesses and providers of virtual-human IP operations, avoiding the embedding of commercial values or prejudices in algorithms. Preserving the integrity and richness of AI livestreamers’ personas is crucial, guaranteeing their protection against any form of bias or discrimination.
Ethics and societal values
Because artificial intelligence lacks direct sensory capabilities and sensitivity to value judgments, prioritizing the development and management of ethical standards and norms is crucial in the creation and supervision of AI live streams. These standards function as vital benchmarks for evaluating the moral implications of AI technology’s progress and use. At the same time, ensuring that AI livestreamers align with scientific ethics can guide technological progress in a positive direction.
Concern 2: Consequences of ubiquitous AI algorithms
Authority in decision making
When users engage with Douyin, they are shown automated video recommendations curated by the platform’s algorithmic system. Users can choose to stay and watch, or swipe to skip and move on to the next video pushed by the platform. This raises the question: how does the algorithm filter the content recommended to the user?
The answer lies in data-driven algorithms that record users’ habits and analyse their behaviour. The system deconstructs user actions and preferences, employing computational methods to interpret cultural nuances and shape the user experience accordingly. Users’ exposure to content ultimately depends on the dynamic interplay between culture and algorithmic interpretation.
In essence, the underlying mechanism is data colonialism: the legacy of historic colonialism and a new form of colonialism distinctive of the twenty-first century (Couldry & Mejias, 2019). Users’ interactions with the platform inform the algorithm’s understanding of their preferences. For instance, if a user skips a video within a few seconds, the algorithm registers this feedback and adjusts its recommendations accordingly, gradually refining the user’s profile to better match their preferences. In this way, the algorithm continuously improves its recommendation system, tailoring content to individual users.
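The feedback logic described above can be illustrated with a minimal sketch. This is an assumption about how such a system might work, not Douyin’s actual algorithm; the category names and the skip threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical illustration: how watch/skip signals could update a
# per-category preference profile (not Douyin's real system).

SKIP_THRESHOLD = 3.0  # seconds watched below which a view counts as a skip

def update_profile(profile, category, seconds_watched, rate=0.1):
    """Nudge the category's preference score up on a watch, down on a skip."""
    signal = 1.0 if seconds_watched >= SKIP_THRESHOLD else -1.0
    profile[category] += rate * signal
    return profile

profile = defaultdict(float)
update_profile(profile, "ai_livestream", seconds_watched=45.0)  # watched: score rises
update_profile(profile, "cooking", seconds_watched=1.2)         # skipped: score falls
```

Each interaction nudges the profile, so over many videos the sketch converges toward whatever the user lingers on, mirroring the gradual refinement described above.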
The relationship between users and algorithms may be framed in terms of “mutual domestication” (Siles et al., 2020). The production of sophisticated recommendations leads to increased customer satisfaction, which produces more customer data that generates more sophisticated recommendations, and so on, resulting—theoretically—in a closed commercial loop in which culture conforms to its users more than it confronts them (Hallinan & Striphas, 2016).
Cognitive solidification
Humans naturally gravitate towards information that validates their existing beliefs, a tendency which algorithms exacerbate. By catering to this inclination, algorithms foster echo chambers (Sunstein, 2009), reinforcing societal divisions and diminishing interpersonal communication skills. Consequently, society becomes highly segmented, with individuals confined within their own ideological filter bubbles (Pariser, 2011), shielded from contrasting viewpoints while reinforcing their own biases.
In this context, when a user engages with an AI livestreamer for an extended period, Douyin records this preference and begins consistently recommending AI-livestreamer content to that user. Eventually, the user may find their Douyin feed dominated by such content. Through this process, Douyin’s filter bubble channels users into a single stream.
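A toy simulation makes this convergence visible. The recommender below is an assumption for illustration only: it mostly serves the most-engaged category, with a small amount of random exploration, and the simulated user engages only with AI livestreams.

```python
import random

# Toy simulation (not Douyin's real recommender): a system that favours the
# most-engaged category quickly fills the feed with that one category.

def recommend(engagement, categories, explore=0.1, rng=None):
    rng = rng or random.Random(0)
    if rng.random() < explore or not any(engagement.values()):
        return rng.choice(categories)               # occasional exploration
    return max(engagement, key=engagement.get)      # otherwise exploit the top category

categories = ["ai_livestream", "news", "sports", "music"]
engagement = {c: 0 for c in categories}
rng = random.Random(42)
feed = []
for _ in range(200):
    video = recommend(engagement, categories, rng=rng)
    feed.append(video)
    if video == "ai_livestream":    # the user engages only with AI livestreams
        engagement[video] += 1

share = feed.count("ai_livestream") / len(feed)
print(f"share of AI-livestream content in feed: {share:.0%}")
```

After a couple of hundred rounds the AI-livestream share dominates the feed even though the catalogue contains four categories, which is the filter-bubble dynamic in miniature.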
Despite these platforms’ original intention to expose users to the diverse facets of the world, algorithms have paradoxically led to information isolation, where individuals remain confined within their own limited sphere of knowledge, detached from the broader reality. This isolation exacerbates the divide between people in the real world, making it increasingly difficult to bridge. Moreover, it diminishes the ability to interact effectively with real-world human beings.
It is clear that, under the influence of algorithms, users relinquish some degree of autonomy in decision-making, as Douyin effectively dictates their content consumption choices. If decisions are seen as being made ‘by machines’, then who is responsible for the resulting outcomes (Olhede & Wolfe, 2018, p. 9)?
Algorithmic governance
Avoiding algorithm bias
To address bias in algorithmic data, intervention and prevention measures must be taken at the stage of creating AI broadcaster models. During data sampling, efforts should be made to avoid skewing toward specific demographic groups and to mitigate biases present in decision-making reference data; with crime data, for example, potentially racially biased records should be excluded. Reducing these biases decreases unfairness in AI algorithm models, ensuring and preserving fairness in the decision-making of AI broadcasters.
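One simple form the sampling intervention above can take is rebalancing, sketched below under stated assumptions: the record structure and the `group` field are hypothetical, and real bias mitigation usually involves more than balancing group counts.

```python
import random

# Hedged sketch: downsample every demographic group to the size of the
# smallest group so no group dominates the training sample.

def rebalance(records, group_key, seed=0):
    """Return a sample in which every group contributes equally."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    n = min(len(g) for g in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, n))
    return balanced

# A skewed sample: 90 records from group A, only 10 from group B.
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
balanced = rebalance(records, "group")
# each group now contributes 10 records, so the sample is 50/50
```

Downsampling is the bluntest option; reweighting or targeted collection of under-represented data achieves the same goal without discarding records.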
Strengthening humanistic ethical development
In the development and operation of artificial intelligence broadcasters, it is necessary to embed ethical frameworks to ensure alignment with societal values and minimize potential harm. This includes incorporating ethical frameworks advocating for improving human life, avoiding harm, and promoting fairness and justice. This is crucial because addressing ethical concerns not only prevents the misuse of artificial intelligence technology but also instills confidence in people’s use of such technology.
Diminishing the dominance of algorithms
Authority is increasingly expressed through algorithms (Pasquale, 2015), with the values and prerogatives that the encoded rules enact hidden within black boxes (Beer, 2017). Moreover, digital platforms and online services hold significant power over their users, with the discretion to make and enforce the rules as they see fit (Suzor, 2019, p. 17). For example, tech giants like Google guard their secret recipes fiercely and deploy strategies of obfuscation, shrouding the operation of their algorithms in secrecy to consolidate power and wealth (Crawford, 2021).
In response, emerging modes of platform governance, including self-governance, external governance, and co-governance (Gorwa, 2019), seek to disperse power and prevent unilateral control and manipulation of the digital realm. The governance triangle model, comprising the ‘firm,’ the ‘NGO’ (non-governmental organization), and the ‘state,’ serves as a heuristic for structuring the analysis of widely varying forms of governance (Gorwa, 2019).
This multi-party, mutually constrained approach is vital for achieving a sustainable internet ecosystem. Platform operators and AI developers bear responsibility for ensuring transparency and accountability throughout the design, deployment, and operation of AI live streaming technologies. Furthermore, there is a need to promptly update policies and regulations conducive to the sustainable development of internet products to enhance the mechanism for regulating artificial intelligence algorithms.
However, regulations alone are not sufficient. Alongside artificial intelligence governance, there is a need to continuously develop and innovate emerging technologies capable of supporting regulation. For instance, in addressing issues of data leaks and privacy infringements, technologies such as data anonymization and blockchain can be explored as potential solutions.
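As a concrete illustration of the anonymization idea mentioned above, the sketch below pseudonymises a viewer record before analysis. The record fields and salt are hypothetical; strictly speaking, salted hashing is pseudonymisation rather than full anonymisation, and stronger guarantees need techniques beyond this sketch.

```python
import hashlib

# Illustrative sketch only: replace the user ID with a salted hash and drop
# direct identifiers before the record enters an analytics pipeline.

SALT = b"server-side-secret"  # hypothetical salt kept out of the analytics system

def anonymise(record):
    token = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()[:16]
    return {
        "user_token": token,                    # stable pseudonym for aggregation
        "watch_seconds": record["watch_seconds"],
        # name and phone number are dropped entirely
    }

row = anonymise({"user_id": "u123", "name": "Li Hua",
                 "phone": "138-0000-0000", "watch_seconds": 47})
```

Because the same user ID always maps to the same token, aggregate behaviour can still be analysed, while the raw identity stays out of the dataset unless the salt is compromised.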
It is worth noting that Douyin has set a leading example in algorithm governance, continuously improving the systems and procedures that address issues arising from the use of AI broadcasters for live streaming. According to the “2022 Douyin Live Platform Governance White Paper,” Douyin initiated 41 special rectification campaigns against rule violations in 2022, establishing over 140 multidimensional security models and more than 1,000 safety rules (Douyin, 2022). These efforts reflect Douyin’s sense of social responsibility.
Conclusion
The prevalence of AI broadcasters on the Douyin platform must be viewed dialectically. On one hand, they reduce costs and increase efficiency, achieving maximum economic benefit with minimal labour cost. On the other hand, they reflect the potential negative consequences of excessive AI involvement in human life: biases in data algorithms, deviations in ethics and morals, excessive authority of AI algorithms in decision-making, and cognitive solidification. To address these issues, platforms, organizations, and governments should all bear corresponding social responsibilities, mutually supervising and governing one another. Only then can sustainable innovation in AI technology be collectively built.
References
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336-349. https://doi.org/10.1177/1527476418796632
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press. https://doi.org/10.12987/9780300252392
Douyin. (2022). 2022 Douyin live platform governance white paper. https://trendinsight.oceanengine.com/arithmetic-report/detail/783?source=undefined
Flew, T. (2021). Regulating platforms. Polity Press.
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914
Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117–137. https://doi.org/10.1177/1461444814538646
Olhede, S. C., & Wolfe, P. J. (2018). The growing ubiquity of algorithms in society: Implications, impacts and innovations. Philosophical Transactions of the Royal Society A, 376, 1–16.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Viking.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Siles, I., Espinoza-Rojas, J., Naranjo, A., & Tristán, M. F. (2020). The mutual domestication of users and algorithmic recommendations on Netflix. Communication, Culture & Critique. https://doi.org/10.1093/ccc/tcz025
Sunstein, C. R. (2009). Republic.com 2.0. Princeton University Press.
Suzor, N. P. (2019). Who makes the rules? In Lawless: The secret rules that govern our digital lives (pp. 10–24). Cambridge University Press.
Wen, M., & Bi, Y. (2022, July 12). Revealing the polarization of virtual anchors’ monetization. China Business Network. https://news.cnstock.com/industry,rdjj-202207-4928327.htm