Study on the Granting of Legal Personality (Corporate) to Artificial Intelligence (scientific article, Ministry of Science)
Scientific rank: scientific journal (Ministry of Science)
Abstract
Artificial intelligence (AI) has become integral to many aspects of our lives. While early publicity highlighted its numerous benefits, concerns about its potential dangers have gained prominence in social, economic, legal, and other spheres. AI has developed so rapidly that its nature is now distinguishable from that of mere tools and goods; applying object-liability or product-liability rules to it therefore ignores its unique and genuine nature. As a means of confronting the associated risks, and particularly for liability purposes, some researchers have proposed granting legal personality to AI. They draw parallels between AI and companies, suggesting that an AI with a corporate-like personality could enjoy corresponding benefits, such as limited liability for its stakeholders. To solve the problem of AI's assets and property, they propose insurance or a requirement that AI hold minimum assets. As for possible abuse by stakeholders, they suggest disregarding the legal personality and applying civil or criminal liability to the stakeholders themselves. In this way, incentives for innovation would increase substantially and victims' losses would be compensated more effectively. In this article, we aim to explore and analyze the concept of AI as a legal entity, shedding light on the uncertainties, questions, advantages, and disadvantages surrounding the possible granting of legal personality to AI.
Keywords: Artificial Intelligence, Torts, Obligations and Rights, Legal Personhood
1. Introduction
Currently, AI plays an active role in all aspects of human life. Whether functioning as standalone software or integrated with hardware, AI has simplified and improved the way we live. Although AI is not a new technology, its recent advancements have captured significant attention. In fact, AI's capabilities are progressing much faster than anticipated, giving rise to unique traits such as independent decision-making and autonomy. As a result, many now compare AI to humans. This evolution was exemplified in 2017, when Saudi Arabia granted citizenship to a robot named Sophia. While largely symbolic, the event underscored the remarkable progress of AI technology.
Around the same time, the European Union proposed granting AI a legal status termed "electronic personality." Although this proposal faced strong opposition from researchers and experts, it moved the idea of granting legal personality to AI beyond the academic sphere into practical consideration. The basis for this proposal is that current rules regarding accountability and liability for the actions of artificial intelligence are inefficient and inadequate. Present regulations treat these systems merely as tools, ignoring their unique characteristics and potential for independent decision-making.
Proponents argue that granting legal personality to AI could facilitate innovation and investment in the field. The significant differences across various judicial systems, along with the international activities of AI systems, create considerable ambiguity regarding liability for their actions. This uncertainty breeds fear and concern among scientists, investors, and manufacturers. By granting legal personality to AI, stakeholders in AI production and development—including innovators and developers—would have a clearer understanding of the risks and legal consequences of their activities. This increased clarity could, in the long run, attract more investment and resources into the AI industry.
While the European Union's proposal was not explicitly rejected, the EU is currently exploring other methods to address these challenges. However, the discussion this proposal generated has led many researchers to advocate for granting AI legal personality. Given that corporations are considered one of the primary types of legal entities, they have been proposed as a model for bestowing legal personality on AI.
2. Methodology
This research gathers information from various sources, including books, articles, theses, research reports, and related materials found in scientific and research data centers, official and reputable websites, and libraries. The authors examine two key questions: first, whether there is a scientific basis for granting legal personality to AI; and second, assuming such a basis exists and legal personality is granted, whether AI possesses the capabilities necessary to function as a legal person. Accordingly, the analysis is organized into three sections. The first explores the nature of artificial intelligence; the second examines the rationale for granting legal personality; and the third discusses the advantages and challenges of granting legal personality to AI.
3. Results and Discussion
Proponents of granting legal personality to AI argue that treating AI solely as a product fails to ensure continuous innovation and protect the rights of victims. They define legal personality as a legal construct designed to benefit natural persons, allowing legal systems to allocate specific rights and obligations to this entity as deemed appropriate. Essentially, they view legal personality as a reflection of the legislator's will.
However, bestowing legal personality upon a non-human entity necessitates specific justifications or purposes; the mere appeal or simplification of complex issues is not sufficient. For AI to be considered for legal personality, it must possess certain characteristics, with one of the most critical being the ability to independently own property. The notion of AI holding assets raises several challenges. To address this, proponents suggest implementing insurance or mandating that AI possess a minimum amount of assets. This would require producers and stakeholders to pay insurance premiums or maintain those assets, ensuring that victims receive full compensation without having to deal with a natural person who may lack sufficient resources.
With this approach, AI could be held directly accountable for damages, similar to corporations, while stakeholders (including producers, developers, owners, and operators) would enjoy limited liability akin to corporate directors. Compensation would be confined to the assets of the company, shielding the individuals who created the AI from personal liability for its actions.
Nonetheless, this strategy faces several criticisms. A primary concern is who would be responsible for paying for the insurance or maintaining the minimum assets—the producer or the end-user? Additionally, given the potentially high costs involved, comprehensive insurance coverage may be impractical. There is also the risk that individuals might use AI as a means to evade responsibility.
To counter the potential misuse of legal personality as a shield against liability, proponents suggest applying the "piercing the corporate veil" principle. This principle allows for the dismissal of the legal personality of the AI, holding stakeholders civilly or criminally liable. However, legal systems typically prioritize the social benefits of limited liability and are reluctant to disregard legal personality to expose the identities of its managers. Even if legal systems permitted full application of this principle to AI, advanced, independent AI may not require human oversight or control post-deployment, complicating the identification of responsible parties.
Another challenge arises from the possibility that granting legal personality could discourage producers and users from monitoring their products, as they might feel less accountable for the system's performance. Finally, considering the transnational nature of AI, enforcing legal personality laws across different jurisdictions introduces numerous ambiguities. Granting legal personality to AI in one country while excluding it in another would effectively invalidate that legal personality.
4. Conclusions and Future Research
Granting legal personhood to AI must be grounded in sound reasoning, as it raises both practical and economic considerations. Legal personhood involves not only rights but also obligations. Any AI recognized as a legal person would engage in complex legal relationships that could lead to damages or liabilities. Without genuine accountability, however, such obligations would be meaningless. Additionally, some rights do not require legal personhood; historically, for example, temples enjoyed certain protections without such a designation, as animals do in some current legal systems. Thus, using companies as a model for AI personhood may not be ideal, as AI lacks the organizational structure, decision-making processes, and compliance capacities of corporations.
While granting AI legal personhood could theoretically foster innovation, it also introduces significant risks. AI’s “quasi-shareholders” might lack incentives for safe and cautious product development. Addressing AI-related liabilities requires an approach that aligns with the legal principles of compensation and deterrence. Lawmakers must carefully consider environmental, infrastructural, and technological factors before formally recognizing AI’s legal status. Ultimately, legal systems should first evaluate how existing liability frameworks, preventive measures, and adjustments in product classifications can effectively manage AI’s risks before venturing into the concept of legal personhood.
Therefore, granting personhood to AI could prove more harmful than beneficial. In terms of responsibility, it provides no real advantage, since stakeholders remain accountable for AI's actions. It is also crucial to maintain legal stability: algorithms cannot be recognized as legal entities in one jurisdiction while being ignored in others. Until a comprehensive international legal framework is established, managing AI risks should focus on leveraging existing liability laws, refining product classifications, and implementing robust preventive measures and controls, such as those proposed in the recent European Union AI Act.