The Gun Went Off: Was It the Gun’s Fault? Ethical Challenges Raised by AI

Introduction

There is a famous variety show in China, Exciting Offer, which follows a group of law student interns at a prestigious law firm. One of their internship projects impressed me deeply: the interns were divided into two groups and held a heated debate over whether using AI to create literature constitutes infringement. The background of the issue was that an online writer, Xiao An, gave instructions to an AIGC (Artificial Intelligence Generated Content) tool to generate novels. In this way, Xiao An produced nine novels in just one month and earned a large profit. Today I will set aside the outcome of their debate on whether AI-generated fiction really constitutes infringement. What truly made me thoughtful were the ethical issues that AI raises and how we can deal with them.

Some Ethical Issues Raised by Artificial Intelligence

Ethical issues raised by AI include privacy, data security, attribution of responsibility, bias, and unemployment (Coeckelbergh, 2019).

The first and most obvious issue is privacy. One of the most frequently raised questions about the Internet and digital media is the loss of personal privacy (Flew, 2021). Think back: how often do you see ads on apps or websites that match your interests? In fact, this is the result of big data processing your browsing preferences, which is frightening. Sometimes I merely mention to a friend something I am inclined to purchase, and before long I receive an advertisement or introduction for the relevant product. This suggests that our personal privacy has been violated: our preference information has been collected and processed without our permission, whether through wiretapping or through data analysis. So in this age of big data, do we have any privacy at all?

This brings up another related issue: data security. We live in a digital age, and the ability to visualize and quantify each actor, event, and transaction may be the central idea of big data analytics (Cinnamon, 2017, p. 609). This raises concerns about data security. The most typical example is the data breach at Facebook revealed in 2018, which involved the illegal acquisition and misuse of the personal data of more than 87 million Facebook users (Isaac and Kang, 2019). The data included sensitive information such as users’ personal details, friend lists, and private messages. So in this data era, how to protect our data, and how platforms should be regulated, need further study.

(Gupta, n.d.)

There is also the issue of attributing responsibility. When an AI system makes a wrong or controversial decision, determining who is responsible can become ambiguous. Suppose a man takes a gun and kills an innocent person. Is that the fault of the gun? Clearly not: it is deliberate human wrongdoing. But hypothetically, what if the gun could decide when it fires? Would the answer be the same? Imagine a gun with an AI attached, one that determines for itself when its conditions are met and then fires automatically. Once it makes decisions of its own, it can be treated as an agent. So when the gun is in someone’s hand and it fires, is the person holding the gun responsible, or the AI program that decided to fire, or the producer who set the trigger conditions? All of this makes the attribution of responsibility ambiguous.

Last but not least are the issues of bias and unemployment. For example, AI is skewed toward higher-income groups, who are more likely to generate user data. AI also reflects social biases, such as racial bias in crime data (Flew, 2021). This has the potential to exacerbate social inequality and lead to unfair treatment of certain groups. In addition, the widespread use of AI may make some jobs disappear, worsening the unemployment problem. As college students about to enter the workplace, we worry that our majors will be replaced by AI in the near future. All we can do is keep learning, improve our competitiveness, and strive to keep our abilities at the forefront of AI development.

What Ethical Issue is Violated by AI’s Literary Creation in this Case?

Going back to the case mentioned at the beginning, the primary purpose of this topic in the variety show is to investigate the problem of responsibility that arises from Artificial Intelligence. Generally speaking, the subject of an infringement we normally address is, for the most part, the author. In this specific scenario, however, it is necessary to first determine what kind of identity Artificial Intelligence possesses. The debaters must discuss the relationship among the three parties involved: the Artificial Intelligence, the user, and the writer whose work was infringed.

In a sense, AI may be thought of as nothing more than a tool managed by a particular computer program. When the user provides it with information and requirements, it produces outcomes based on that input and certain criteria. So is the training data set the AI uses to generate a work public or private? Do works made by Artificial Intelligence arise from the AI’s own awareness, or are they imposed by its trainers? How should the obligations and responsibilities of AI be defined? These questions are contentious, and there are still holes in the legislation regarding them.

(What Is Real Artificial Intelligence: Characteristics of True AI, 2019)

As the user, Xiao An asked the AIGC tool to summarize the characters, plot synopses, plot elements, and writing styles of two existing novels. Based on this information, the tool output story outlines and major chapters for different types of novels, and Xiao An gave the AI her own adjustment requests after each round of feedback. Throughout the dialogue, Xiao An constantly instructed the AIGC to avoid similarities with the original novels, to use them only for reference, and never to copy. So in this case, how much of Xiao An’s content is really original? Can AI truly judge and understand what is borrowed and what is copied?

And obviously, the victim is the writer whose work has been infringed upon. Although the original author retains the copyright to the original work, another individual earns an enormous income by copying that work, which is manifestly unfair to the original author.

To sum up, the debaters need to consider that Artificial Intelligence is neither a natural person nor a legal person; can AI, then, be the subject of an infringement? And if it is unable to bear responsibility, who should be held accountable for the outcome of this situation? Thus the question of who is responsible for what comes to the forefront.

Where else is this Ethical Issue Likely to Arise?

Additionally, the same issue of responsibility may arise with autonomous cars, as well as in medicine, finance, and other related fields. Let me briefly discuss a Tesla driverless car case that has garnered increasing attention in recent times.

Tesla’s Autopilot system is implemented with Artificial Intelligence technology, using advanced sensors and software that enable Tesla cars to drive fully or semi-autonomously. However, according to the National Highway Traffic Safety Administration (NHTSA), since 2019 Tesla’s Autopilot technology has been involved in 736 crashes, resulting in 17 deaths (NHTSA, n.d.). These data raise concerns about the safety of self-driving technology. One of the most widely reported accidents occurred in 2019, when a Tesla owner driving on a highway suddenly “lost control” of his vehicle; the crash killed the driver and seriously injured two passengers. The surviving passengers then filed a lawsuit demanding compensation from Tesla, and the two sides went to court. At the heart of the dispute was whether the accident was caused by human error or by a defect in Tesla’s Autopilot system. With that in mind, I’d like you to guess who won the lawsuit.

(Krisher, 2023)

Four years after the accident, this world-famous case finally came to an end: Tesla won this lawsuit over life and death. The verdict delivered by the twelve-member jury of ordinary citizens shows that the general public now has a clearer understanding of the boundaries of power and responsibility in today’s intelligent driving. Drivers undoubtedly share the driving responsibility and must remain engaged and focused on the driving task. Of course, the driver is not the only one to blame; policymakers also need to work out liability and insurance issues (NHTSA, n.d.).

How Should We Deal with the Ethical Issues Arising from AI?

The development of Artificial Intelligence has emerged as a new driver of economic transformation, and AI has become pervasive in all facets of our lives and work in today’s society. But AI also raises ethical concerns that we need to address now. Doing so will require all industries to work together, yet they still face a great deal of resistance. In light of this, how should users, platforms, and governments respond?

As users, we certainly use Artificial Intelligence to make our lives easier. But once we begin using it, we must remain vigilant at all times. For example, we should refrain from divulging personal information on potentially harmful websites or applications, and from clicking on email links that come from unknown sources. Such practices help us reduce the likelihood of harm.

It is also the responsibility of platforms to do everything in their power to safeguard the rights and interests of the people who use their services. First, platforms must strengthen their protection of personal privacy and data security, to prevent AI systems from misusing and leaking personal data in ways that violate users’ rights. Beyond that, platforms need better supervision: they should guard against potential vulnerabilities and hackers, and eliminate at the earliest stage anything that could damage users’ rights and interests.

Finally, it is of the utmost importance for governments to adopt and enforce the relevant rules and regulations. In this respect, the debate in this variety show also helps fill a blank frontier area of the law. If our legislature can hear such a small voice from a seemingly entertaining show, and if we see relevant proposals over the next few years, it will prove to be a genuinely significant effort to advance the development of jurisprudence.

Without a doubt, considering things only from these perspectives is far from enough. To answer these problems, we must also confront the following hard questions: What is AI? What kind of politics does it propagate? Whose interests does it serve, and who bears the greatest risk of harm? Where should the use of Artificial Intelligence be constrained (Crawford, 2021)? Without a comprehensive understanding of these questions, it will be hard for us to carry out the practical remedies outlined above.

Conclusion

All in all, the development of AI cannot become an ethical shield. Technological development has its own internal trajectory, and our habit throughout human history is that technology runs ahead while ethics lags behind. Once ethics falls too far behind, tragedy and disaster follow. We cannot stop the rapid development of technology; we can only keep ethics in mind at all times, so that AI serves ethics rather than controls or even overrides it. That is why our laws are constantly being revised. Fortunately, we have quickly become acutely aware of the considerable tensions between ethics and technology, and have made efforts to address them.

I recall watching a news broadcast about the 2008 Wenchuan earthquake, a period of history that saddens all Chinese people. The host choked up several times while reporting the earthquake casualties. I think the emotion conveyed by the tears welling in her eyes is something that AI cannot yet experience.

References

Cinnamon, J. (2017). Social injustice in surveillance capitalism. Surveillance & Society, 15(5), 609–625.

Coeckelbergh, M. (2019). Artificial Intelligence: Some ethical issues and regulatory challenges. Technology and Regulation, 2019, 31–34. https://doi.org/10.26116/techreg.2019.003

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven, CT: Yale University Press, pp. 1–21.

Flew, T. (2021). Regulating platforms. Cambridge: Polity, pp. 79–86.

Isaac, M., & Kang, C. (2019, April 24). Facebook expects to be fined up to $5 billion by FTC over privacy issues. The New York Times. https://www.nytimes.com/2019/04/24/technology/facebook-ftc-fine-privacy.html

NHTSA. (n.d.). Automated vehicles for safety. National Highway Traffic Safety Administration. https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety

Image References:

Converted handguns fired more than “real” weapons in UK crimes. (2024, January 16). BBC News. https://www.bbc.com/news/uk-67895627

Gupta, D. (n.d.). 9 data security best practices for your enterprise. LoginRadius Blog. https://www.loginradius.com/blog/identity/data-security-best-practices/

Krisher, T. (2023, December 12). Tesla was running on Autopilot moments before deadly Virginia crash, sheriff’s office says. NBC4 Washington. https://www.nbcwashington.com/news/local/tesla-was-running-on-autopilot-moments-before-deadly-virginia-crash-sheriffs-office-says/3492662/

What is real artificial intelligence: Characteristics of true AI. (2019, October 8). Emarsys. https://emarsys.com/learn/blog/real-ai/
