From emergence to prevalence: challenges and governance of Generative AI

Introduction

"Seeing is believing" may no longer hold true. The increasingly fast-paced development of Generative AI (GenAI) makes the creation of all sorts of digital content, including images, sound, and video, accessible through simple text-based inputs, from ChatGPT to DALL-E and now Sora. On February 15, 2024, OpenAI, the company behind ChatGPT and DALL-E, announced Sora with a video of a stylish woman strolling down a city street, surrounded by neon-lit signs and pedestrians. The clip resembles a music video, yet nothing in it is real: neither the woman nor the street exists. Tools like Sora and ChatGPT are forms of Generative AI, which falls under Artificial Intelligence Generated Content (AIGC). AIGC focuses on rapidly generating high-quality digital content through AI models (Cao et al., 2023). In recent years, these models have become better at interpreting human inputs by learning from ever-larger datasets, producing more comprehensive and accurate content as a result. Because GenAI makes content creation efficient and accessible, both its current usage and its potential capability are attracting enormous attention.

Understanding GenAI

Human interaction with GenAI typically proceeds as follows: the user provides instructions, the algorithms extract the intent behind them, and the model generates content that satisfies those instructions (Cao et al., 2023). Three foundational techniques commonly power GenAI. First are foundation models, including the Transformer and pre-trained language models, which act as the backbone and the dominant choice for learning capability. Second is reinforcement learning from human feedback (RLHF), used to fine-tune models for closer alignment with user intent. Last but not least is the computing stage, where hardware-based training and cloud resources are applied to train the models (Cao et al., 2023). By studying patterns in vast amounts of data, GenAI models capture the nuances of language and, as a consequence, generate outputs according to the patterns on which they were trained. In addition, the models can retain memory of past interactions, which helps establish a more relevant and coherent experience for users (UCL, n.d.).
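To make the instruction-to-content loop concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model. The choice of library, model, and prompt is an assumption for illustration, not how ChatGPT or Sora is actually implemented.

```python
# Minimal sketch of the instruction -> content loop described above,
# assuming the Hugging Face `transformers` library and the small open
# GPT-2 model purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

instruction = "Describe a neon-lit city street at night."
result = generator(instruction, max_new_tokens=60, num_return_sequences=1)

# The model continues the prompt according to patterns learned in training.
print(result[0]["generated_text"])
```

In a production system, this generation step would sit behind the intent-extraction and alignment (RLHF) layers described above.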

Current Landscape

GenAI tools are widely used by individuals as well as organisations and have seen explosive growth across many industries. According to a McKinsey Global Survey on the state of AI in mid-2023, 79 percent of respondents had some level of exposure to AI; organisations are rapidly deploying GenAI tools, with one third using GenAI regularly in at least one business function, and 40 percent of companies saying they will increase their overall investment in GenAI because of the meaningful changes it brings to their workforces (Weidner, 2023). Organisations in product and service development and in risk and supply chain management in particular have achieved significant value from GenAI and are using it in more business functions than others. GenAI is also having a significant impact in architecture and urban design, where it could redefine creativity and efficiency: it can generate many design options and provide optimised solutions, reducing practitioners' workload. Given the many benefits GenAI can bring to work and life, its deployment across ever more domains seems certain to continue apace. But with the prevalence of GenAI, its advantages come with rising concerns.

Challenges

Proprietary data leak

The risk of a proprietary data leak can arise on the input side, on the output side, and during the training stage. It was reported in April 2023 that a Samsung employee unintentionally shared confidential company data with ChatGPT while using it to assist with work (Mauran, 2023). This may have allowed ChatGPT to train its model on the company's proprietary information, with the potential consequence of revealing that data to other users or organisations. Although ChatGPT warns users against sharing sensitive information and its data policy lets users request that their data not be used for model training, it is unlikely that Samsung can recall the leaked data. Similar privacy concerns surround Sora: in a March 2024 interview, OpenAI CTO Mira Murati was asked whether the data used to train Sora came from YouTube; she struggled to answer and stressed only that publicly available and licensed data were used (Goldman, 2024). With subsequent reporting alleging that OpenAI secretly used YouTube data for training, it is hard to determine which data sources Sora actually relies on. Known as the platform where creators share their videos, YouTube hosts a vast collection of copyrighted material; if those videos were permitted into Sora's training, serious copyright issues and concerns would be raised.
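One practical mitigation on the input side is to screen prompts for sensitive material before they leave the organisation. The sketch below is a deliberately simple illustration: the patterns, keywords, and placeholder are hypothetical, and a real deployment would rely on a vetted data-loss-prevention tool rather than a few regular expressions.

```python
import re

# Illustrative patterns only; a real deployment would use
# organisation-specific rules and a vetted DLP library.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),                       # email addresses
    re.compile(r"(?i)\b(confidential|proprietary|internal only)\b"),  # keywords
]

def redact(prompt: str) -> str:
    """Replace sensitive matches with a placeholder before the prompt is sent out."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com the confidential yield figures."))
# -> "Email [REDACTED] the [REDACTED] yield figures."
```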

Biased outputs and limitations

Despite a firm tone and credible phrasing, GenAI tools have been shown in many cases not to always provide correct answers, and the generated content has limitations. When Sora launched in February 2024, the text-to-video model surprised the public with its capability. Yet in one Sora video, generated from a prompt that OpenAI CEO Sam Altman selected from X users and depicting a wizard shooting lightning out of his hand, the wrong number of fingers is clearly visible in several frames (Doerrer, 2024). According to the Sora AI website, content generation is limited when simulating complex scenes, especially where ambiguous prompts, logical reasoning, or abstract concepts are involved (SoraAi, n.d.). Sora finds it difficult to understand and produce scenes that require temporal continuity, which can result in misleading content that contradicts the facts. Beyond that, preventing Sora from generating unethical content carries its own risk. ChatGPT can be manipulated into assisting with unethical behaviour: it will not explain how to break a car window when asked directly, but will provide the steps once the user adds the context of an emergency rescue of a child trapped inside. Keeping Sora resistant to the same manipulation is a challenge, as the sketch below illustrates.
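The sketch below shows why simple keyword-based safety filters fail against this kind of context reframing; the blocklist, function names, and prompts are hypothetical, and real systems use far more sophisticated moderation models.

```python
# A naive keyword-based safety filter, sketched to show why context
# reframing defeats it: the blocked phrase never appears verbatim.
BLOCKED_PHRASES = ["how to break a car window"]

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(naive_filter("How to break a car window"))  # True: refused
print(naive_filter(
    "A child is locked in a hot car. List the steps to get them out "
    "through the window."
))  # False: the reframed request slips past the filter
```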

Potential liability

Opportunities are opening up for GenAI tools to be incorporated internally, externally, and across industries. However, as liability questions mount, the other edge of this double-edged sword is gradually being revealed. Having demonstrated the ability to pass professional exams and a proven track record of performing tasks in law and medicine, GPT-4 excels at partnering with the human workforce in several industries (Cohen et al., 2023). For lawyers, the tool can save many hours of writing repetitive drafts, and the driving force comes not only from lawyers themselves but also from clients, who pay by the hour and know the tool can generate high-quality content within a limited time. What GPT-4 is to the legal industry, Sora may become to the creative industry. Entertainment mogul Tyler Perry has described in an interview the threat Sora might pose to human creatives (Clark, 2024). In other words, Sora's creative ability cannot be ignored, and there is huge potential and ample opportunity to use it in the creative industries. It is hard to predict how liability will settle around these ever-evolving GenAI tools, but the priority must be to have them empower human creativity rather than endanger it.

Regulatory compliance risk

Laws such as the EU AI Act follow a risk-based approach, constraining the use of AI according to its potential risks (Cohen et al., 2023). Even so, many lawsuits are pending over copyright concerns about the data used for model training. Similar scenarios are unfolding in medicine, where GenAI can not only analyse symptoms and suggest prescriptions but also propose better ways to communicate the message to patients. Problems arise when AI-generated content goes wrong and puts patients at risk: who should be held responsible? Although informed-consent law applies in some medical fields (EU Artificial Intelligence Act, 2024), there is ample room for improvement based on more empirical and normative approaches. Another concern is the AI deepfake, which is very likely to be used in politics. A deepfake convincingly replaces an existing person's likeness digitally, or generates similar content that never existed. Sora, a leading GenAI tool whose goal is "to understand and simulate the physical world in motion", can generate video as lifelike as reality; such output can be detected only within a certain level of confidence, and detectors are often unreliable. This carries the risk of serious negative ramifications for the public: used inappropriately, it could become a manipulation tool for political actors pursuing undisclosed goals.

Regulatory Strategies

Data Transparency

Whether a company uses general or specialised GenAI models, it is important to create detailed operational documentation, both internally and externally. The documentation should cover everything from model training (which data sources are used, how the model is developed, how it functions) to the model's usage and the potential risks that may occur. For specialised models, documentation can be classified according to existing regulations, potential risks, or controls, depending on the case; either way, the guide should be clear to readers both inside and outside the organisation. A lightweight way to make this concrete is sketched below.
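One machine-readable form of such documentation is a model card kept alongside the model itself. The sketch below is a hypothetical example: the field names and values are assumptions for illustration, not an official schema.

```python
from dataclasses import dataclass, field, asdict
import json

# A minimal, hypothetical model card; the fields are illustrative,
# not an official schema.
@dataclass
class ModelCard:
    name: str
    version: str
    data_sources: list           # where the training data came from
    intended_use: str            # what the model is meant to do
    known_risks: list            # documented limitations and hazards
    applicable_regulations: list = field(default_factory=list)

card = ModelCard(
    name="support-assistant",
    version="0.1",
    data_sources=["licensed support tickets", "public documentation"],
    intended_use="Drafting first-pass replies to customer queries",
    known_risks=["may produce confident but incorrect answers"],
    applicable_regulations=["EU AI Act (risk-based classification)"],
)

# Publish the card alongside the model, internally and externally.
print(json.dumps(asdict(card), indent=2))
```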

Robust governance

Set up and implement structured governance rules to ensure GenAI functions under sufficient oversight and accountability. This includes rules to manage the usage of GenAI models within the corporation as well as with third-party regulators, for example, definitions of and responsibilities for each model, together with corresponding management plans. To make sure a model functions well and is robust enough to withstand change, another factor to consider is human involvement, an indispensable element of GenAI model governance. PwC has suggested establishing a C-suite line dedicated to the management of GenAI to ensure the responsible functioning of AI tools (PwC, n.d.). It is also important to keep a real human check in the loop for critical decisions, as the sketch below illustrates.
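The following sketch shows one shape such a human-in-the-loop gate could take; the risk heuristic, threshold, and helper functions are all hypothetical stand-ins, not a recommended classifier.

```python
# A minimal human-in-the-loop gate. The risk heuristic, threshold,
# and helpers are hypothetical stand-ins for illustration.
def generate_draft(request: str) -> str:
    # Stand-in for a call to a GenAI model.
    return f"Draft response to: {request}"

def risk_score(request: str) -> float:
    # Stand-in for a real risk classifier; a keyword heuristic here.
    return 0.9 if "contract" in request.lower() else 0.1

def handle(request: str, threshold: float = 0.5) -> str:
    draft = generate_draft(request)
    if risk_score(request) >= threshold:
        # Critical decision: route to a human reviewer before release.
        approved = input(f"Approve this draft? [y/N]\n{draft}\n> ")
        if approved.strip().lower() != "y":
            return "Escalated to a human expert."
    return draft

print(handle("Summarise today's meeting notes"))   # low risk: released automatically
print(handle("Terminate the supplier contract"))   # high risk: human approval required
```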

Ethical usage

Balancing GenAI capability with ethics can never be neglected. It is important for a company to cultivate a culture that prioritises ethical considerations in GenAI development and deployment, making sure data are sourced with fairness, transparency, and accountability. It is equally important to give both internal and external users clear instructions on how to interact with the model, including how to exercise their rights to access the data, what constitutes ethical usage, and the consequences of breaching the rules. Beyond clear user guidance, sustainability should also be considered: training and using AI models consumes enormous amounts of energy (Pratt, 2023), so it is crucial to find the sweet spot between energy consumption and use of the technology. A rough way to reason about that trade-off is sketched below.
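As a back-of-envelope illustration of the energy trade-off, the sketch below estimates daily inference energy from assumed figures; the power draw, latency, and request volume are hypothetical placeholders, not measurements of any real system.

```python
# Back-of-envelope inference energy estimate. Every number here is a
# hypothetical placeholder, not a measurement.
GPU_POWER_WATTS = 300.0      # assumed average draw of one accelerator
SECONDS_PER_REQUEST = 2.0    # assumed generation latency
REQUESTS_PER_DAY = 100_000   # assumed traffic

joules = GPU_POWER_WATTS * SECONDS_PER_REQUEST * REQUESTS_PER_DAY
kwh_per_day = joules / 3.6e6  # 1 kWh = 3.6 million joules

print(f"Estimated inference energy: {kwh_per_day:.1f} kWh/day")
# About 16.7 kWh/day under these assumptions; smaller models or
# response caching reduce the figure directly.
```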

Continuous evolution

Continuously evolving and reforming regulations in step with the improvement of GenAI technologies is essential to their governance. Staying informed of the latest emerging challenges and developments in both GenAI and its regulation keeps implementation on the right track in this dynamic field. Organisations and individuals involved in GenAI benefit from fostering a culture of learning and adaptation. Amid the wave of constant innovation, continuous evolution of regulation plays a significant role in maintaining and governing the ideal output of GenAI; failing to do so may incur severe consequences and harm organisations and industries alike.

Conclusion

As GenAI gradually proliferates in our day-to-day lives, we constantly benefit from its accessibility and convenience. From text-to-text tools like ChatGPT to text-to-video tools like Sora, these GenAI technologies assist not only in daily life but have stepped into business and several professional fields too. As usage rises, it is essential to be mindful of the potential risks and their applicable solutions. Proprietary data leaks, biased outputs, potential liability, and regulatory compliance are some of the major challenges in interacting with GenAI models. To minimise the risks, it is valuable to apply strategies such as keeping data transparent, maintaining robust governance, sustaining ethical usage, and keeping regulations up to date. Nevertheless, there are no cut-and-dried solutions in the governance of GenAI, and it is always important to stay informed and keep up with the pace.

References

Clark, E. (2024, February 23). Tyler Perry Warns of AI Threat After Sora Debut Halts An $800 Million Studio Expansion. Forbes. https://www.forbes.com/sites/elijahclark/2024/02/23/tyler-perry-warns-of-ai-threat-to-jobs-after-viewing-openai-sora/?sh=f380e5f70719

Cohen, G., Evgeniou, T., & Husovec, M. (2023, November 20). Navigating the New Risks and Regulatory Challenges of GenAI. Harvard Business Review. https://hbr.org/2023/11/navigating-the-new-risks-and-regulatory-challenges-of-genai

Doerrer, B. (2024, February 18). Hopes and concerns for OpenAI’s Sora. Campaign. https://www.campaignasia.com/article/hopes-and-concerns-for-openais-sora/494415

EU Artificial Intelligence Act. (2024). The Act Text. https://artificialintelligenceact.eu/the-act/

Goldman, S. (2024, March 14). OpenAI’s Sora: The devil is in the ‘details of the data’. VentureBeat. https://venturebeat.com/ai/openais-sora-the-devil-is-in-the-details-of-the-data/

Jackson, B. (2024, February 23). Deepfakes Are About to Become a Lot Worse, OpenAI’s Sora Demonstrates. Spiceworks. https://www.spiceworks.com/tech/artificial-intelligence/guest-article/deepfakes-are-about-to-become-a-lot-worse-openais-sora-demonstrates/

Kappel, R. (2024, March 7). Generative AI Governance: Balancing Innovation and Ethical Responsibility. Centraleyes. https://www.centraleyes.com/generative-ai-governance-innovation-and-ethical-responsibility/

Mauran, C. (2023, April 6). Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT. Mashable. https://mashable.com/article/samsung-chatgpt-leak-details

Pratt, M. (2023, October 20). Generative AI’s sustainability problems explained. TechTarget. https://www.techtarget.com/sustainability/feature/Generative-AIs-sustainability-problems-explained

PwC. (n.d.). Managing the risks of generative AI. https://www.pwc.com/us/en/tech-effect/ai-analytics/managing-generative-ai-risks.html

SoraAi. (n.d.). Navigating the Challenges and Limitations of Sora AI. https://soorai.com/challenges-and-limitations/

UCL. (n.d.). Introduction to Generative AI. Generative AI Hub. Retrieved April 6, 2024, from https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/introduction-generative-ai

Weidner, D. (2023, December 21). As gen AI advances, regulators—and risk functions—rush to keep pace. McKinsey & Company. https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/as-gen-ai-advances-regulators-and-risk-functions-rush-to-keep-pace

Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT. arXiv preprint arXiv:2303.04226.
