International Law
Heidar Piri; Falah Mohammad
Abstract
This article focuses on how international criminal law responds to the challenges posed by AI-based weapon systems and to the criminal liability of individuals involved in international crimes committed with these weapons. Adopting an analytical-descriptive approach, the study addresses whether the existing framework of international criminal law can adequately meet the challenges arising from crimes committed with autonomous weapons. Given that various actors are involved in the production, design, programming, training, and deployment of these systems, the more pressing question is: who bears responsibility for international crimes involving military AI?
The authors argue that while attributing direct criminal liability to a specific individual in this context is complex, an interpretive expansion of Articles 25 and 28 of the Rome Statute, coupled with the theory of hierarchical responsibility, can establish liability for individuals or groups of individuals who play a decisive role in designing, developing, deploying, or commanding such systems. Thus, any human actor who occupies a particular point in the chain of supervision or command and exercises control over others may be held responsible.
Keywords: Artificial Intelligence, Autonomous Weapons, Criminal Liability, International Crimes, Armed Conflicts.
International Law
Meysam Haghseresht
Abstract
Introduction

Over the past decades, artificial intelligence (AI) has permeated nearly every aspect of our lives, including communication, health care, education, industrial production, leisure, culture, and even our relationships, bringing dramatic changes to each of these fields. From a strategic and organizational standpoint, the measures taken by political, economic, military, and regional and global institutions reflect a growing awareness of AI's vast potential, as well as the threats it may pose to society. AI holds the promise of helping humans maximize their time, freedom, and happiness; yet it also carries the risk of leading us toward a dystopian society. Striking a balance between technological advancement and the protection of human rights is therefore an urgent priority, one that will shape our future society. Currently, there is no standardized process for evaluating the impact of AI systems on human rights. A promising way forward is the use of AI human rights impact assessments, which can help AI developers (e.g., government agencies or businesses) anticipate and mitigate the human rights impact of AI systems, both before and after these systems are made available to the public. However, it is not always easy to grasp the range of ways AI can affect human rights. Public discussions often focus on issues such as privacy and discrimination, as these are more immediately understandable and relatable. In contrast, the impact on other rights can be harder to conceptualize, making it more difficult to identify exactly how violations might occur. Against this background, the present research examines the impact of AI on the right to health, evaluating both its positive and negative effects. Although access to AI can be justified under the right to development within the framework of international human rights, its negative effects on the right to health present a significant challenge.
Therefore, alongside acknowledging the potential harms, it is necessary to take measures that balance technological advancement with the protection of human rights. To address this challenge, the study first evaluated AI services against the fundamental components of the right to health, then explored the specific rights related to health, and finally analyzed the results. The research questions are as follows: What are the effects of AI on the right to health? And how can the negative effects of AI on the right to health be reduced?

Literature Review

Although some studies have discussed the impact of AI on human rights in general, only a few have focused on specific rights, such as the right to work. At the same time, there is valuable literature in the field of medicine addressing the impact of AI. The present research contributes to the discussion through its precise and focused analysis of the effects of AI on the right to health.

Materials and Methods

This research employed a descriptive method to examine the fundamental components of the right to health, as well as the related rights that have been influenced by AI. In addition, an analytical method was used to evaluate both the positive and negative effects of AI on the right to health.

Results and Discussion

The rights to health and technology have become more interconnected than ever, as AI increasingly permeates various aspects of human life. Despite concerns, the significant benefits of AI for human life and personality have prevented any halt in its progress. However, threats arising from the misuse of AI can be intentional, negligent, or accidental, or can stem from a lack of anticipation of and preparedness for its transformative impact on society. It is thus essential to address the root causes of these threats in order to ensure security and safety. The current analysis examined the extent and nature of AI's impact on the fundamental components of the right to health and related rights.
The findings point to both fear and hope. While AI offers many positive effects, gaps in its application raise significant concerns. It is therefore crucial to establish a regulatory framework for the development and use of AI and robotics that upholds and respects human dignity. Given the unique features of AI, monitoring systems for verification and continuous oversight must also be tailored accordingly. Decisions increasingly rely on these systems, yet there is often a lack of transparency, accountability, and safeguards regarding their design, function, and evolution over time. The inherent uncertainty surrounding AI adds to the complexity of this challenge. Moreover, the environmental impact of AI (e.g., pollution) contributes to serious risks to human health. Without adequate safeguards, oversight, and protection of human rights in the development and deployment of AI, the health and well-being of both current and future generations will be jeopardized. To advance AI responsibly and harness its benefits, policymakers must carefully consider its effects on the broad range of fundamental rights and freedoms protected by human rights instruments. Finally, ensuring equitable access to AI, grounded in the principle of non-discrimination, remains a vital concern.

Conclusion

AI in the health sector presents both opportunities and challenges for the right to health. On the one hand, it offers undeniable benefits such as improved diagnosis and treatment, more equitable access to healthcare services, and increased efficiency within health systems. On the other hand, serious concerns arise from algorithmic discrimination, violations of privacy, and reduced accountability. To address these risks, it is essential to develop comprehensive regulatory frameworks grounded in human rights principles. Such frameworks should ensure algorithmic transparency, data diversity, institutional accountability, and equitable access to technology.
Only by balancing innovation and ethics can we achieve a future in which AI enhances not only physical health but also human dignity and rights.
International Law
Anahita Seifi; Najmeh Razmkhah
Abstract
Artificial intelligence is the science of empowering machines to perform actions similar to human activities. In other words, it is a science and a set of computer technologies designed to think, reason, and imitate human behavior. As a new technology, artificial intelligence has influenced various aspects of human life, from the economy to health and employment.

Advocates of artificial intelligence frequently emphasize the capabilities of this technology. In their view, its development and expansion offer a powerful tool for addressing human problems and dilemmas. Rising temperatures, declining biodiversity, deforestation, floods, droughts, air pollution, and the accumulation of waste are all environmental problems that plague humanity and demand immediate, effective solutions. To this end, resorting to artificial intelligence and its capabilities in environmental care has been proposed as a scientific and technical response to these challenges. Its potential applications include agricultural management, measuring greenhouse gas emissions, managing and optimizing energy consumption, recycling waste, and strengthening and optimizing public transportation systems.

On the other hand, the processes of designing, producing, supplying, and using artificial intelligence have been associated with various problems of their own, such as high energy consumption, extensive use of rare metals, depletion of mineral resources, increased waste production, and environmental pollution. Given the growing reliance on artificial intelligence, these problems have cast serious doubt on the technology's promise.
These doubts have led environmental activists to ask whether this technology will provide a toolbox for a sustainable future for humanity. Concerns about the performance of artificial intelligence, on the one hand, and widespread global support for the technology, on the other, prompted the international community to respond by regulating the research, development, production, and supply of artificial intelligence. One such attempt was the preparation of the First Draft of the Recommendation on the Ethics of Artificial Intelligence in September 2020 by the United Nations Educational, Scientific and Cultural Organization (UNESCO). This draft, prepared in eight sections by UNESCO's international experts with the aim of creating an international framework for the ethical and legal issues raised by artificial intelligence systems, was approved at the 41st session of UNESCO's General Conference, held in November 2021, by the votes of the organization's 193 member states. Although the document is not binding, it is significant as the first international instrument that specifically addresses the ethical norms and human rights dimensions of artificial intelligence. Its drafters articulated four core values: first, respecting, promoting, and ensuring fundamental human rights; second, protecting the environment; third, protecting biodiversity; and fourth, living in peace and reconciliation. The Recommendation calls on all actors in the field of artificial intelligence to adhere to principles such as proportionality, safety, fairness, responsibility, and accountability. On closer examination, however, the text contains ambiguities and shortcomings, particularly in its treatment of environmental issues. These shortcomings raise several questions, such as: Has UNESCO's ethical draft been able to address the challenges in the environmental sector and provide effective regulations and solutions? And, considering the important and ever-increasing role of private companies active in the production and supply of artificial intelligence systems, have the authors of the draft succeeded in attributing responsibility, specifying methods of compensation for environmental damages, and securing commitment to the precautionary principle? This article addresses these subjects, questions, and ambiguities using an analytical-descriptive method.