Authors: Mitt Regan & Jovana Davidovic
This paper maintains that the just war tradition provides a useful framework for analyzing ethical issues related to the development of weapons that incorporate artificial intelligence (AI), or “AI-enabled weapons.” While the development of any weapon carries the risk of violations of jus ad bellum and jus in bello, AI-enabled weapons can pose distinctive risks of these violations. The paper argues that developing AI-enabled weapons in accordance with jus ante bellum principles of just preparation for war can help minimize the risk of these violations. These principles impose two obligations. The first is that before deploying an AI-enabled weapon a state must rigorously test its safety and reliability, and conduct a review of its ability to comply with international law. Second, a state must develop AI-enabled weapons in ways that minimize the likelihood that a security dilemma will arise, in which other states feel threatened by this development and hasten to deploy such weapons without sufficient testing and review. Ethical development of weapons that incorporate AI therefore requires that a state focus not only on its own activity, but also on how that activity is perceived by other states.
Introduction
Emerging interest in jus ante bellum as an element of the just war tradition reflects growing attention to “just preparation for war.” As Ned Dobos frames the issue, “When (if ever) and why (if at all) is it morally permissible to create and maintain the potential to wage war?” (Dobos, 2020, p. 2). We agree with Cecile Fabre that maintaining a standing army that is prepared to wage war if need be is morally justified because it enables a state to protect persons from violent infringements of their fundamental rights (Fabre, 2021). We argue, however, that jus ante bellum still requires a state to morally justify the particular ways in which it engages in such preparation. Harry van der Linden suggests that this requires that a state prepare for war in ways that minimize the risk of unjust resort to force—violations of jus ad bellum—and unjust use of force during war—violations of jus in bello (van der Linden, 2010, p. 7).
This essay examines what jus ante bellum requires of states regarding the development and deployment of weapons enabled by artificial intelligence (AI). We define these as weapons that use artificial intelligence and machine-learning models in the targeting process, which may include tasks such as object recognition, target identification, or decision support. We focus on the targeting process, and define AI-enabled weapon systems as those that use AI in that process, because the human-machine interactions in the targeting stages have the most consequential effects on war and on the ways in which norms of war may be violated (Ekelhof, 2018). To clarify, the targeting process consists of several steps at which humans and machines may interact in complex ways, with machines augmenting rather than displacing human judgment. But even when a human is the ultimate decision-maker at the final step, these interactions can powerfully shape that person’s understanding of the situation they confront, which in turn influences their decision whether to fire.1
We believe that, in light of increasing attention by several states to the potential for incorporating AI into weapon systems, a state is justified in investing in developing such systems in order to protect its population [see Boulanin and Verbruggen (2017) for a discussion of the current state of such efforts]. We argue, however, that jus ante bellum requires that before deploying these weapons a state must engage in a rigorous testing, evaluation, verification, and validation (TEVV) process, which we describe below. It must also carefully consider the appropriate delegation of tasks between machines and humans.2 Finally, it must engage in development of these weapons in ways that do not trigger a security dilemma that leads other states to deploy AI-enabled systems without engaging in these processes.
These requirements reflect concern that premature deployment of AI-enabled weapon systems, and the deployment of systems with an inappropriate delegation of authority between machines and humans, increase the risk of violations of jus ad bellum and jus in bello. The next section elaborates on these risks.