No Machine Should Choose: Defending Human Dignity in the Age of Autonomous Weapons

September 19, 2024

As the technological landscape of modern warfare continues to evolve, autonomous weapons systems (AWS) have emerged as a prominent and controversial feature on today’s battlefields. These AI-driven technologies, often called “lethal autonomous weapons” (LAWs), can make critical decisions, such as identifying and engaging targets, without direct human intervention. Recent conflicts in Ukraine and Gaza have seen the deployment of these systems, turning both war zones into testing grounds for increasingly sophisticated and potentially game-changing weaponry. In Ukraine, autonomous drones have been used to strike Russian targets, while in Gaza, automated sentry guns and AI systems have played roles in Israel’s military operations. This rapid integration of AWS into active conflicts has raised pressing ethical, legal, and humanitarian concerns.

In response to these developments, the Vatican has renewed its call for a ban on lethal autonomous weapons. During an address at the United Nations in Geneva, Italian Archbishop Ettore Balestrero, the Vatican’s Permanent Observer to the UN, argued passionately for a moratorium on developing and deploying weapons systems that can make lethal decisions independently of human oversight. Balestrero emphasized that the human person, endowed with reason and moral judgment, possesses a unique capacity for ethical decision-making that cannot be replicated by any set of algorithms, no matter how advanced. His remarks echo Pope Francis’ consistent advocacy against technologies that remove the human element from critical moral decisions, particularly those involving life and death.

While the Vatican’s stance against LAWs is firm, it is valuable to deepen the discussion by examining the broader ethical, legal, and strategic implications of autonomous weapons. This exploration can strengthen the case for stringent regulation, and potentially for a context-dependent ban on LAWs, supporting the Vatican’s view that machines should not be entrusted with life-and-death decisions. In a world increasingly shaped by technological innovation, it is crucial to define clear moral boundaries for AI in warfare, prioritizing the dignity and sanctity of human life.

Moral Responsibility and Human Dignity

At the heart of the Vatican’s call for a ban on LAWs is a commitment to the principles of human dignity and moral responsibility. The Catholic Church teaches that every human being, created in the imago Dei, possesses inherent worth and a unique capacity for moral judgment that sets them apart from all other creatures. This theological foundation underpins the Vatican’s stance against the delegation of lethal decision-making to machines, which lack the moral and ethical reasoning required to weigh the gravity of taking a human life.

Archbishop Ettore Balestrero, in his address at the United Nations, articulated a critical distinction between a “choice” and a “decision.” While a choice might involve selecting between various options based on predefined criteria, a decision is a uniquely human act that involves ethical reflection, consideration of the broader consequences, and a sense of responsibility toward the dignity of the affected individuals. Balestrero emphasized that autonomous weapons, regardless of their technological sophistication, can only make choices—they are fundamentally incapable of making true decisions that involve ethical deliberation and respect for human life. This distinction highlights a key ethical deficiency in the use of LAWs: they cannot embody the moral weight that human operators bring to decisions involving lethal force.

Pope Francis has continually advocated for maintaining human oversight in all technological applications, particularly those that directly affect human life. During the Hiroshima conference on “AI Ethics for Peace,” the Pope underscored that true decisions require human wisdom, compassion, and an inherent respect for life that no machine can replicate. He poignantly stated, “No machine should ever decide to take the life of a human being,” encapsulating the Vatican’s concern that LAWs, by removing human judgment from the loop, not only dehumanize the act of killing but also undermine the ethical frameworks that guide the conduct of war.

These concerns go beyond theory; they underscore a broader ethical imperative to maintain human dignity in all contexts. Allowing machines to decide, rather than merely choose, in matters of life and death risks reducing human beings to mere data points in algorithms, stripping away the intrinsic value that defines our humanity. The Vatican’s call to ban LAWs is thus a plea to preserve the essential role of human conscience in warfare, ensuring that the grave responsibility of taking a life remains firmly in human hands.

Theological Considerations

Catholic teaching holds that every life is sacred from conception to natural death (CCC 2319), a belief that informs the Church’s broader ethical stance on issues ranging from bioethics to social justice. This foundational principle is fundamentally compromised when lethal decisions are delegated to autonomous systems that lack the capacity for empathy, moral reasoning, or understanding of the value of life.

Theologically speaking, warfare, while sometimes deemed necessary, is always tragic and requires stringent moral guidelines to minimize harm and protect innocent lives. The just war tradition, a critical element of Catholic moral teaching, emphasizes principles such as proportionality, discrimination between combatants and non-combatants, and the necessity of human judgment in making moral decisions on the battlefield (CCC 2309). Autonomous weapons, however, challenge these principles by introducing a layer of abstraction between the decision-maker and the battlefield, thereby risking the erosion of the human accountability that is essential for ethical conduct in war.

The use of LAWs thus risks transforming warfare into a dehumanized and impersonal activity, where the inherent value of human life is overshadowed by technological efficiency. This dehumanization is antithetical to the Catholic understanding of the human person and the moral responsibilities that come with the power to take life. As Pope Francis and other Church leaders have consistently warned, allowing machines to decide who lives and who dies is a step too far, one that violates the moral order and endangers the dignity of all humanity.

The Vatican’s ethical objections to LAWs thus extend beyond a simple policy position; they affirm the need to maintain the human touch in all aspects of life, especially in those as grave as the decisions of war and peace. By standing against autonomous weapons, the Vatican calls the global community to a higher moral standard that respects the sacredness of life and the unique role of human beings as moral agents in the world.

Compliance with International Humanitarian Law

The deployment of LAWs poses significant challenges to the established norms of International Humanitarian Law (IHL). IHL, which governs the conduct of armed conflict, rests on key principles such as accountability, discrimination, and proportionality. These principles are designed to minimize civilian harm and ensure that military actions are justified and conducted within ethical boundaries. However, the unique nature of LAWs, which can operate independently of human oversight (though they need not), raises critical questions about the compatibility of these technologies with IHL standards.

One of the primary concerns with LAWs is whether they can reliably distinguish between combatants and non-combatants—a fundamental requirement under IHL. While current autonomous systems may be capable of identifying targets based on predefined criteria, they lack the nuanced understanding that human operators bring to the battlefield. The principle of proportionality, which mandates that the harm caused by military action must not exceed the anticipated military advantage, is also at risk when machines rather than humans make decisions. LAWs, driven by algorithms that prioritize efficiency and target acquisition, may not adequately weigh the moral and ethical implications of their actions, leading to disproportionate and unintended harm to civilian populations.

Furthermore, the issue of accountability presents a significant legal challenge. In traditional warfare, human operators are accountable for their actions, and there are established legal mechanisms for addressing violations of IHL. However, when autonomous systems make lethal choices, the lines of accountability become blurred. Who is responsible when an autonomous drone mistakenly targets civilians—the developer, the military commander, or the state? The existing legal frameworks may be ill-equipped to address these questions, creating a potential legal vacuum where no clear responsibility can be assigned. This lack of accountability risks undermining the foundational principles of IHL and threatens the rule of law in armed conflict.

The inadequacy of current legal frameworks underscores the urgent need for international regulation of AWS. The lack of clear guidelines for using LAWs creates a significant gap in international law. Although some nations maintain that existing IHL is adequate, the rapid pace of technological advancement in autonomous systems outstrips these frameworks’ ability to ensure proper oversight and accountability. International cooperation, together with new treaties or agreements that specifically address the unique risks of LAWs, is therefore crucial to keeping these technologies within ethical and legal boundaries.

Accountability and Responsibility

Beyond IHL concerns, the deployment of LAWs also introduces a challenge to the concept of accountability in warfare. In traditional combat scenarios, human actors—whether soldiers, commanders, or state officials—bear responsibility for their actions, allowing for accountability in cases of misconduct or violations of international law. However, assigning responsibility becomes far more complex when autonomous systems execute lethal actions. As I explored in Designed for Death, this diffusion of accountability risks creating a legal gray area where neither the operators nor the designers of these systems are held fully accountable for the outcomes of their deployment.

The potential for a legal vacuum is troubling, given that autonomous systems can make choices that are unpredictable or unintended by their developers. For instance, an autonomous drone might target a location based on faulty or misinterpreted data—a choice a human operator might not make. When such errors occur, it is unclear who is responsible: the software engineers, the military personnel deploying the system, or the government authorizing it. This ambiguity could result in impunity, undermining trust in the legal and ethical standards that govern armed conflict.

From this follows a deeper concern: the transfer of moral responsibility from humans to machines. The decision to take a human life carries unique moral weight, traditionally reserved for individuals who can make judgments based on ethical considerations, empathy, and an understanding of human dignity. By abdicating this responsibility to machines, we risk not only legal impunity but also a moral disconnect that dehumanizes the act of killing. This shift has significant implications for how we conduct warfare and for the values we uphold as a global community.

Therefore, the Vatican’s call for a ban on LAWs is not only a plea for preserving human moral agency but also a call to address the legal and ethical challenges posed by these technologies. By advocating for clear lines of accountability and the reassertion of human control over lethal decisions, the Vatican reinforces the need for international cooperation and regulation in the development and use of autonomous weapons. 

A Nuanced Approach to Regulation

While the Vatican’s call for a complete ban on fully autonomous lethal weapons aligns with its ethical commitment to preserving human dignity and moral responsibility, achieving a universal ban presents significant challenges. The geopolitical landscape is marked by differing national interests, technological races, and varying interpretations of international law, which can complicate the path to consensus on such a ban. Recognizing these realities, a more nuanced approach that includes rigorous regulation and meaningful human control over autonomous weapons may serve as a practical and effective interim measure.

The principle of meaningful human control is vital in regulating LAWs, emphasizing human oversight in critical functions, especially those involving lethal force. Endorsed widely in scholarly circles, it promotes human involvement at every stage to ensure that lethal decisions are not solely made by machines. This principle aligns with the Vatican’s stance on maintaining human decision-making in warfare, asserting that humans, with their moral judgment, should be responsible for life-and-death decisions. Given the resistance to a total ban, a practical path forward could involve creating international norms that limit LAWs’ deployment to scenarios with sufficient human oversight. This strategy would address ethical concerns and establish accountability for states using these technologies. Incorporating meaningful human control into the design and operation of LAWs would help ensure their responsible use, in line with international humanitarian principles.

Proposals for Policy and Dialogue

To address the challenges posed by LAWs, a robust regulatory framework developed through international cooperation is essential. Key proposals include specific international treaties that define clear guidelines for deploying LAWs, outline usage conditions, and mandate transparency in their development. Such treaties would enforce compliance with international standards and prevent the misuse of autonomous weapons. Beyond treaties, confidence-building measures are vital to avoid an arms race and enhance global security. These could involve regular reporting on the development of LAWs, verification protocols, and the sharing of best practices for human oversight. Enhanced transparency is crucial to fostering a cooperative international environment where the ethical use of technology prevails over competitive pressures.

Effective regulation of LAWs demands a multi-stakeholder approach, involving technology developers, military strategists, ethicists, policymakers, and religious leaders. Such interdisciplinary dialogue is essential for crafting policies that balance legitimate security needs with the ethical imperatives stressed by the Vatican and other concerned parties. Including religious and moral perspectives helps counterbalance the technocentric narratives that often dominate debates over military innovation.

The Vatican’s advocacy against LAWs acts as a moral guide, urging the global community to adhere to fundamental principles in developing and using military technology. By embedding these ethical considerations into policy frameworks, we can strive for a future in which AI in warfare is regulated in ways that respect human dignity, accountability, and the preservation of peace.