Introduction
As emerging technologies develop, AI and algorithms are widely used in social media and many other applications. Many people may not even recognize that AI already permeates every part of their lives. Meanwhile, AI-generated content, such as images, videos, and deepfakes, spreads widely and is used across diverse domains.
AI reflects and produces social relations and understandings of the world, yet it is often reduced to a purely technical domain (Crawford, 2021). Because AI is made by humans, it is not neutral but cultural, political, and social. The conversation about AI and algorithms therefore spans ethical, social, technical, and regulatory issues. AI also gradually shapes and manipulates our behavior: as we spend more time and carry out more activities online, algorithmic decision rules and data-driven processes begin to influence our thoughts and behavioral patterns (Flew, 2021). The regulation of AI technology is thus both essential and complex.
Understanding Algorithms and AI-generated Content
Before diving into the regulation of AI, it is important to understand what AI means and what issues it raises. Flew defines algorithms as follows:
“Algorithms are the rules and processes established for activities such as calculation, data processing, and automated reasoning.”
(Flew, 2021, p. 79)
Algorithms are the essence of artificial intelligence, enabling machines to learn from data, make decisions, and improve over time. According to Flew (2021), algorithms are becoming increasingly important in the digital age and have a significant impact on individual, collective, and social well-being. Among the many uses of AI, generating content has become one of the most prominent.
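To make this definition concrete, the toy sketch below (written for this essay, not taken from Flew or any other cited source) shows an algorithm in exactly that sense: a fixed set of rules that processes data, makes a decision, and improves over time as it sees more examples.

```python
# Illustrative only: a minimal "algorithm" that learns a decision rule from data.
# The data points and threshold below are invented for this sketch.

# Each example: (hours a user spent on a post, did they share it? 1 = yes, 0 = no)
training_data = [(0.2, 0), (0.5, 0), (1.5, 1), (2.0, 1), (3.0, 1)]

weight, bias, learning_rate = 0.0, 0.0, 0.1

def predict(hours: float) -> float:
    """Decision rule: score how likely a post is to be shared."""
    return weight * hours + bias

# "Improve over time": repeatedly adjust the rule to reduce its error on the data.
for _ in range(1000):
    for hours, shared in training_data:
        error = predict(hours) - shared
        weight -= learning_rate * error * hours
        bias -= learning_rate * error

# "Make decisions": recommend the post if the learned score passes a threshold.
print("recommend" if predict(2.5) > 0.5 else "do not recommend")
```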
In the next section, I will focus specifically on the potential issues of AI-generated content and its regulation, taking Midjourney as a case study. Midjourney is an independent research lab focused on AI and new media, launched in March 2022 (Midjourney, 2024). It is known for its innovative projects, particularly its AI model that generates images from textual descriptions.
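Midjourney's model is proprietary and is normally used through its Discord interface rather than a public programming interface, so the short sketch below uses the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint as a stand-in, purely to illustrate what a text-to-image workflow of this kind looks like; it is an assumption-laden illustration, not Midjourney's actual pipeline.

```python
# Illustrative stand-in for a Midjourney-style workflow using an open-source
# text-to-image model (Stable Diffusion via the diffusers library).
# Requires: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # public checkpoint, not Midjourney's model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # assumes a GPU is available

# A textual description is the only input; the model returns a synthetic image.
prompt = "an astronaut riding a horse, oil painting"
image = pipe(prompt).images[0]
image.save("generated.png")
```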
Issues of Concern
Many AI tools like Midjourney are used to generate diverse media content, such as articles, images, videos, and audio. With the development of deep learning, AI has been widely applied in artistic fields to generate new images or transform existing ones. Moreover, deepfakes can produce images that audiences find hard to distinguish from human-made content. The risk deepfakes pose to media content creation is significant and multifaceted, including the spread of false information and the manipulation of public opinion (Partadiredja et al., 2020). Midjourney, for example, raises potential issues of authenticity, copyright, and ethics.
Misuse and Deepfakes
Partadiredja and colleagues (2020) conducted an illustrative experiment examining participants' ability to distinguish human-made from AI-generated content. The results show that participants struggled to tell the difference: only 45% answered correctly for images (Partadiredja et al., 2020).
Midjourney, one of the best-known AI image-generation tools, had reached 15 million users by 2023 (InfoGlobalData, 2023). In 2023, the artist Justin T. Brown used it to create images of politicians cheating on their spouses in order to warn of the dangers of AI-generated content. Through these fake images he wanted to show how AI could be weaponized to create scenarios that harm people. However, Midjourney banned Brown after the images were released (Bandara, 2023). The decision reflects the ongoing struggle between the creative freedom that AI tools provide and the responsibility to prevent their misuse, especially in sensitive contexts such as politics. The incident sparked discussion about the need for regulation and the ethical use of AI in image generation. The legitimate use of AI-generated content and rules against the spread of deepfakes are therefore central to addressing authenticity and misinformation.
Copyright
The main inputs of AI generation tools are training data sets (including public data, texts, images, videos, etc.) and the data collected and produced by the AI while serving users. Since the emergence of generative artificial intelligence, the intellectual property and copyright ownership of AI-generated content have been widely debated. Benhamou and Andrijevic (2022) argue that the use of generative adversarial networks (GANs) challenges traditional notions of authorship and creativity. GANs raise copyright issues particularly regarding the originality of the AI-generated output and the use of existing images as training input (Benhamou & Andrijevic, 2022).
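To see why the training input matters so much, the toy sketch below illustrates the GAN training step that Benhamou and Andrijevic (2022) describe: every image in the training set is repeatedly fed to the discriminator, so the collected works are directly reproduced inside the learning process. The tiny networks and the random-tensor "dataset" are placeholders invented for this illustration, not any real system's architecture or data.

```python
# Toy GAN training step: the "real" images stand in for works scraped from the web.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 28 * 28, 64, 32

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Placeholder for a batch of training images collected from the internet.
real_images = torch.rand(BATCH, IMG_DIM)

for step in range(100):
    # Discriminator: learn to tell real (collected) images from generated ones.
    fake_images = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: learn to produce images the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(BATCH, NOISE_DIM))),
                     torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```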
Related to this argument, three artists have launched a lawsuit against generative AI organizations including Midjourney (Vincent, 2023). These artists allege that generative AI tools have infringed the rights of “millions of artists” because the tools were trained on five billion images collected from the internet without the original artists’ consent (Vincent, 2023). The case illustrates that, in the input and output stages of machine training, the greatest copyright risk concerns reproduction and adaptation rights. Whether the data in an AI training database has been collected and used with the consent of all original authors is therefore a central consideration in regulating generative AI.
Ethical Considerations
The use of AI may raise ethical issues of fairness, transparency, privacy, and more. Ethical considerations become especially significant when decisions made by AI can lead to large-scale discrimination (Cath, 2018). Because most AI models (including Midjourney) are trained on large data sets, those data sets may contain biases about people, cultures, and ideas, which can perpetuate and amplify stereotypes or unfair representations in AI-generated images. Midjourney’s privacy policy describes the collection and use of users’ personal data, including sharing and data analysis. Midjourney applies safeguards to protect this data but cannot guarantee its absolute security (Midjourney, 2024), and it does not explain in detail how personal data are used and protected. Regulating the ethical issues of generative AI tools therefore involves many considerations.
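Returning to the point about biased training data, the small example below (with invented numbers) shows how a skew in a training set is reproduced, at scale, in whatever the model generates.

```python
# Toy illustration of how a skewed training set propagates into generated output.
# The "captions" and proportions below are invented for this example.
import random

random.seed(0)

# Hypothetical training captions for the prompt "a CEO": 80% depict men.
training_captions = ["man"] * 80 + ["woman"] * 20

# A model that simply samples from the learned (empirical) distribution
# reproduces the skew in everything it generates.
generated = [random.choice(training_captions) for _ in range(1000)]
share_women = generated.count("woman") / len(generated)
print(f"share of generated 'CEO' images depicting women: {share_women:.0%}")  # ~20%
```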
Regulation on Use of AI-generated Content
Having examined the potential issues of AI and AI-generated content, the next step is to consider regulatory strategies. In 2023, several AI leaders met with the US Congress to discuss the risks AI poses to society and the importance of AI regulation, and they suggested that governments establish a regulatory body to oversee AI products (Fireship, 2023). The meeting can be seen as an essential dialogue balancing innovation and responsibility in the AI domain, and it marks a significant and positive step towards addressing the complexity of AI technologies. However, there is still a long way to go before a systematic strategy can regulate all types of AI products at a global level.
According to Nitzberg and Zysman (2021), machine learning has enabled artificial intelligence systems that perform tasks faster and more accurately than humans, allowing AI to operate at an unprecedented scale. This scale requires new laws that better reflect the limitations, potential, and risks of contemporary AI. To ensure the fair use of algorithms across fields, issues of law and ethics, bias, fairness and transparency, accountability and agency, data and privacy governance, and impacts on human behavior must all be addressed (Flew, 2021).
Technical Regulation
Firstly, technical regulation is essential given the risks and challenges of widespread AI use. When designing regulatory policies, decision-makers need to consider the characteristics of AI, the domains in which it is used, and its potential impacts (Nitzberg & Zysman, 2021). For generative AI such as Midjourney, technical standards and certification are among the most significant parts of regulation, covering aspects such as data quality, model robustness, and security; a certification process can help ensure compliance with those standards. Explainability and interpretability mechanisms can improve algorithmic transparency, fairness, and accountability by providing insights into how AI systems make decisions, including a right to explanations for algorithmic decisions. They can also help protect users’ privacy and reduce the risk of copyright infringement.
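As one sketch of what an explainability mechanism can look like in practice, the snippet below applies a simple perturbation test: it removes one input feature at a time and records how much the model's score changes, producing a per-feature explanation of an automated decision. The scoring model and feature names are hypothetical and invented for this illustration.

```python
# Illustrative perturbation-based explanation of an automated decision.
# The scoring model and feature names are hypothetical.
baseline = {"account_age_days": 420.0, "posts_per_day": 3.0, "reports_received": 5.0}

def risk_score(features: dict) -> float:
    """A stand-in decision model: a higher score means more likely to be flagged."""
    return (0.02 * features["reports_received"]
            + 0.01 * features["posts_per_day"]
            - 0.0005 * features["account_age_days"])

original = risk_score(baseline)

# Explanation: how much does the decision change if each feature is zeroed out?
for name in baseline:
    perturbed = dict(baseline, **{name: 0.0})
    contribution = original - risk_score(perturbed)
    print(f"{name}: contributes {contribution:+.3f} to the score")
```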
Stakeholders Regulation
According to the literature, technical regulation involves establishing rules and norms that ensure the reasonable and responsible use of AI technology while balancing innovation and risk, as well as public and private interests (Cath, 2018). It is therefore important to combine technical regulation with stakeholder involvement. Stakeholders can collaborate to improve the effectiveness of AI governance by leveraging the variety of tools and levers that shape AI development and application. Stakeholder engagement brings a broad range of actors into the regulatory process, including technologists, ethicists, legal experts, industry representatives, and the public, ensuring that diverse perspectives and needs are considered.
Returning to Midjourney, it involves various stakeholders because of its broad impact on technology, society, and the economy. The 2023 statistics show that it has reached a wide audience and generated substantial revenue from AI (InfoGlobalData, 2023). First, developers and engineers are the individuals and teams directly involved in creating and refining the AI technology; they have a vested interest in the system’s development, deployment, and improvement. Users and consumers, meanwhile, encompass anyone who interacts with Midjourney, whether for creative work, information processing, entertainment, or other purposes; they are concerned with the system’s accessibility, usability, and the quality of its outputs. Regulation of generative AI and algorithms is therefore inseparable from the governance and consideration of these stakeholders.
Legal Regulation
The incident in which Midjourney banned an artist, discussed above, was widely debated on Reddit. It reflects growing public attention to the harms caused by the large-scale use of AI-generated content. However, users and platforms alone cannot fully regulate such content; synchronous support from governments and the law is also crucial.
Effective governance is critical to maximizing the benefits and mitigating the risks of AI, and stakeholders recognize the importance of directing attention to this area. AI governance involves the tools, solutions, and levers that influence the development and application of AI, such as promoting norms, ethics, and legislative measures (Butcher & Beridze, 2019). Because the internet crosses national borders, different countries have enacted different laws. The European Commission, for example, outlines an approach building on the EU’s General Data Protection Regulation that addresses socioeconomic change and establishes appropriate ethical and legal frameworks for AI. Similarly, Canada’s national AI strategy and the UK’s Centre for Data Ethics and Innovation focus on ensuring the responsible and ethical implementation of AI through legislative measures and regulatory frameworks (Butcher & Beridze, 2019).
Although several governments have introduced legislative measures to address ethical issues and promote responsible AI implementation, the challenges of AI regulation remain. Meeting them requires users, organizations, stakeholders, and governments around the world to collaborate.
Conclusion
As we have explored, the rapid advancement and integration of artificial intelligence tools such as Midjourney into various domains poses significant deepfake, copyright, and ethical challenges, underscoring the pressing need for comprehensive regulatory frameworks. AI governance should be iterative and contextual, involving both public and private interests, and international interoperability is needed to respect different national preferences (Nitzberg & Zysman, 2021). Without reasonable regulation, we risk not only the misuse of AI-generated content in critical areas such as the creative industries and decision-making but also the loss of broader societal acceptance of this emerging technology.
In the future, AI and algorithms will continue to bring revolutionary changes to many fields and will face more regulatory challenges. It is therefore crucial for governments, policymakers, technologists, and other stakeholders to engage in regulatory discussions that balance innovation with ethical and legal considerations. Understanding the potential risks of AI and developing regulatory strategies is essential if we are to shape a future in which AI benefits everyone.
Reference List
Bandara, P. (2023, June 29). Artist banned by Midjourney over fake “photos” of cheating politicians. PetaPixel. https://petapixel.com/2023/06/29/artist-banned-by-midjourney-over-fake-photos-of-cheating-politicians/
Benhamou, Y., & Andrijevic, A. (2022). The protection of AI-generated pictures (photograph and painting) under copyright law. The Human Cause, 198–217. https://doi.org/10.4337/9781800881907.00016
Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5–6), 88–96. https://doi.org/10.1080/03071847.2019.1694260
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Crawford, K. (2021). Introduction. In Atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1–22). New Haven: Yale University Press. https://doi.org/10.12987/9780300252392-001
Fireship. (2023, May 17). AI regulation is coming… [Video]. YouTube. https://www.youtube.com/watch?v=CDokUdux0rc
Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 79–86). Cambridge: Polity.
InfoGlobalData. (2023). Midjourney: A comprehensive overview of key statistics and data 2023. https://www.infoglobaldata.com/blog/midjourney-a-comprehensive-overview-of-key-statistics-and-data#link12
Midjourney. (2024). Midjourney community guidelines. https://docs.midjourney.com/docs/community-guidelines
Nitzberg, M., & Zysman, J. (2021). Algorithms, data, and platforms: The diverse challenges of governing AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3802088
Partadiredja, R. A., Serrano, C. E., & Ljubenkov, D. (2020). AI or human: The socio-ethical implications of AI-generated media content. 2020 13th CMI Conference on Cybersecurity and Privacy (CMI) – Digital Transformation – Potentials and Challenges (51275). https://doi.org/10.1109/cmi51275.2020.9322673
Vincent, J. (2023, January 16). AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit. The Verge. https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart