An Introduction to the Legal Frameworks of Criminal Liability for Artificial Intelligence Systems (Scientific Article, Ministry of Science)
Scientific grade: Scientific Journal (Ministry of Science)
Abstract
With the development of technology, artificially intelligent entities have come into use in almost all areas of our daily lives. These entities, which currently serve humans and facilitate their work, are expected to reach a human level in the future and to be able to perform certain professions. These developments also raise numerous legal and criminal questions: What is the legal status of artificially intelligent entities? Who bears criminal liability for crimes arising from their use? And what is their role in criminal proceedings? The purpose of this study is to provide a general assessment of these questions in light of existing legal regulations. The methodology of the study is descriptive-analytical. We conclude that artificially intelligent entities currently occupy the position of "property," that they cannot be held liable for crimes resulting from their use, and that, although they make an important contribution to the criminal justice process, they cannot replace the subjects of the proceedings (judge, prosecutor, lawyer). Within this framework, current legal regulations are largely capable of resolving the problems that have arisen. However, if artificially intelligent entities reach the status of "humans" as fully independent and self-aware entities, fundamental changes to our legal system will be needed.

Keywords: Artificial Intelligence, Legal Person, Liability, Criminal Law

1. Introduction

The term "Fourth Industrial Revolution," which refers to the digitalization of the production chain, the coordination of machines, humans, and infrastructure, and the development of intelligent systems, was first introduced in 2011 at an exhibition held in Hanover, Germany.
With the Fourth Industrial Revolution, it is predicted that the use of industrial robots will increase rapidly, that automation will fully take over production processes, that digital technologies will have significant impacts on efficiency, productivity, income distribution, and the environment, and that half of the world's trade will be conducted through the interaction of intelligent entities. With significant advances in information technology, we are witnessing an expanding range of applications for artificial intelligence in various fields. Artificially intelligent entities are widely used in almost all areas, including military operations, the industrial sector, the service sector, insurance, medicine, and law. Translation programs, speech recognition systems, facial recognition systems, unmanned aerial vehicles used in military operations, and software that supports legal decisions by examining litigation data are only a small sample of the areas where artificial intelligence is actively used. The growing use of artificial intelligence saves time, labor, and money. However, its increasing presence in our daily lives also brings numerous legal challenges. Law, which can be regarded as a set of rules regulating human behavior, evolves much more slowly than technology. It is therefore difficult to keep legal regulations continuously updated in line with technological advances; as a result, questions such as the definition of artificial intelligence, its legal status, and responsibility for the results of decisions based on its predictions and inferences still await answers.

In the first part of this study, the concept of artificial intelligence, its types, and its historical development are explained to provide a conceptual framing of the subject.
In the second part, the views in the legal doctrine on the legal status of artificial intelligence are examined; in the third part, the assessment of artificial intelligence from the perspective of criminal law is analyzed. Finally, the conclusion addresses the challenges that the growing use of artificially intelligent entities may pose for their status in criminal law, together with related solutions and suggestions.

Definition and History of Artificial Intelligence

In studies of technological progress, the term artificial intelligence is used frequently, yet no single, universally accepted definition of it has been established. John McCarthy, who coined the term, defines it as "the engineering and science of building intelligent machines." Humans use their intelligence to solve the problems they encounter; artificial intelligence is a human-made machine that, depending on its level of development, acquires the ability to solve problems and to imitate human behavior. Sebastian Thrun, director of the Stanford University Artificial Intelligence Laboratory, describes artificial intelligence as "understanding something complex and making appropriate decisions," a definition now widely accepted in the field. Some authors treat artificial intelligence as a machine capability and define it as "the ability of a machine to learn from experience, adapt to new stimuli, and perform tasks similar to humans." In line with these explanations, artificial intelligence can be defined as follows: "Artificial intelligence refers to the imitation of human mental processes such as perception, reasoning, learning, and problem-solving by machines."
The goal of artificial intelligence technology, which simulates intelligence similar to human intelligence, is to create a robotic system of machines, computers, and software that acts similarly to human intelligence: it understands and classifies events, learns, evaluates, makes decisions based on the results of its evaluation, and implements the decisions it makes. Just as there is disagreement over the definition of artificial intelligence, there is no consensus on the definition of a robot. Neil M. Richards, a professor of law, defines a robot as a "biologically non-living system created to perform both physical and mental activities." This definition emphasizes two characteristics of robots: (1) being biologically non-living, and (2) the ability to perform physical and mental activities.

2. Methodology

This research has been conducted using a descriptive-analytical method. Its main goal is to examine the theoretical foundations, analyze the legal status, and identify challenges related to the criminal liability of artificial intelligence systems. It also takes a comparative look at other legal systems in an attempt to propose ways of strengthening Iran's legal framework in this field. The required information was collected through library research; the sources used include specialized books, scientific articles, research reports, domestic and international laws, and official documents related to the research topic. Data collected from these sources were examined using comparative analysis, content analysis, and a critical approach.
In the comparative analysis, the Iranian legal system was compared with the laws and regulations of leading jurisdictions such as the European Union, the United States, and Japan to identify its strengths and weaknesses. Content analysis was used to examine and interpret the data from the perspective of legal argument, and the critical approach was used to examine the limitations and shortcomings of the Iranian legal system in facing the challenges of artificial intelligence. The research proceeded in several stages: first, the background of the subject was studied and previous research on the criminal liability of artificial intelligence was analyzed; then the existing legal situation was examined and domestic and international laws relevant to artificial intelligence were analyzed; next, theoretical and practical challenges related to the criminal liability of artificial intelligence systems were identified; and finally, legal reforms and practical solutions appropriate to the conditions of Iran were proposed. This approach, a combination of comparative law methods and legal analysis, allows laws to be examined and reform proposals to be made on the basis of global experience, and it helps the researcher reach a comprehensive understanding of the current situation and offer practical solutions for amending laws and strengthening the Iranian legal system in this area.

3. Results and Discussion

Artificial intelligence as a technological phenomenon has created numerous challenges in the field of criminal law. Studies show that, in the current situation, these systems are legally regarded more as "objects" than as "persons." However, the expansion of the use of artificial intelligence and the possibility of its achieving a degree of self-awareness and decision-making autonomy call for a review of the fundamental concepts of criminal law.
One of the important results of this research is that artificial intelligence systems, in their current state, lack will, awareness, and criminal intent and cannot be held directly criminally liable. For this reason, when a crime involves these systems, criminal liability is attributed to their users, programmers, or producers. This shows that current laws are designed mainly for the liability of human individuals and are inadequate in the face of the complexities of artificial intelligence systems.

A comparative analysis of other legal systems showed that some jurisdictions, including the European Union, have addressed these challenges through the concept of "electronic personality." In this approach, advanced AI systems are granted a legal personality that allows them to be held accountable within a specific framework. However, this idea poses several philosophical and operational challenges and would require further study and the development of detailed rules in Iranian criminal law.

Another issue examined in this study is the role of AI in facilitating judicial processes. The findings show that AI can serve as an efficient tool for collecting evidence, analyzing data, and predicting the outcomes of lawsuits. However, there are concerns about algorithmic bias, privacy, and the potential violation of individuals' fundamental rights, which require detailed regulatory frameworks to prevent the misuse of this technology.

A further conclusion of the study is that, with the increasing use of AI in self-driving cars and other complex devices, the likelihood of crimes such as accidents caused by defects in algorithms increases. In these cases, current Iranian law lacks comprehensive provisions for determining criminal liability; self-driving cars need to be defined more precisely, and the liability of those involved in the design, production, and use of these devices needs to be determined.
Finally, the present study shows that current Iranian laws, although they cover some aspects of the use of AI, are not sufficient to manage the emerging challenges of this technology. To address these shortcomings, it is suggested that a comprehensive legal framework be developed in which new concepts such as "electronic personality," "producer liability," and "joint liability" are considered. The establishment of a specialized regulatory body to manage the legal challenges of AI also seems necessary. These measures can help strike a balance between exploiting the capabilities of AI and protecting fundamental human rights.

4. Conclusions and Future Research

The advancement of AI technology and the expansion of its applications have created complex issues for legal systems. The present study showed that AI, in its current state, lacks independent will and consciousness and cannot be recognized as criminally responsible. As a result, criminal liability for crimes related to it falls on users, producers, and programmers. However, if AI achieves self-awareness, a review of legal concepts will be necessary. Current Iranian laws are based mainly on traditional approaches and are not sufficient to face the challenges of AI. In leading jurisdictions, concepts such as electronic personality have been proposed that could serve as a basis for AI liability in the future, but they need to be adapted to national legal structures. AI also has great potential to improve judicial processes, but risks such as algorithmic bias and privacy violations must be managed. Future research could focus on exploring the possibility of granting legal personality to AI, developing regulations for self-driving cars, managing algorithmic biases, and conducting more comprehensive comparative studies, so as to help develop more rigorous legal frameworks for managing the legal challenges posed by this technology.