The Strategy Bridge


Respect for Persons and the Ethics of Autonomous Weapons and Decision Support Systems

Introduction

Last spring, Google announced it would not partner with the Department of Defense’s Project Maven, which sought to harness the power of artificial intelligence (AI) to improve intelligence collection and targeting. Google’s corporate culture, which one employee characterized as “don’t be evil,” attracted people who were opposed to any arrangement in which their research would be applied to military and surveillance purposes. As a result, Google had to choose between keeping these talented and skilled employees and losing potentially hundreds of millions of dollars in defense contracts. Google chose the former.[1] Later that fall, the European Union called for a complete ban on autonomous weapon systems.[2] In fact, several organizations and researchers working in artificial intelligence have signed a “Lethal Autonomous Weapons Pledge” that expressly prohibits development of machines that can decide to take a human life.[3]


The ethical problems associated with lethal autonomous weapons are not going to go away, as the development, acquisition, and employment of artificially intelligent systems challenge traditional norms associated not just with warfighting but with morality in general.[4] Among the many concerns associated with developing lethal autonomous weapon systems driven by artificial intelligence is that they will dehumanize warfare.[5] On the surface this seems like an odd case to make. War may be a human activity, but it rarely feels to those involved like a particularly humane one, bringing out the worst in humans more often than it brings out the best. Moreover, lethal autonomous weapons and decision support systems are often not only more precise than their human counterparts, but they also do not suffer from the emotions, such as anger, revenge, and frustration, that give rise to war crimes. So, if these systems can reduce some of the cruelty and pain war inevitably brings, then it is reasonable to question whether dehumanizing war is really a bad thing. As Paul Scharre notes, the complaint that respecting human dignity requires that only humans make decisions about killing “is an unusual, almost bizarre critique of autonomous weapons”; he adds, “There is no legal, ethical, or historical tradition of combatants affording their enemies the right to die a dignified death in war.”[6]

Scharre’s response, however, misses the point. He is correct that artificial-intelligence systems do not represent a fundamentally different way for enemy soldiers and civilians to die than the ways human soldiers are already permitted to employ. The concern, however, is not that death by robot represents a more horrible outcome than when a human pulls the trigger. Rather, it has to do with the nature of morality itself and the central role that respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.

Killing and Respect for Others

Immanuel Kant (Wikimedia)

Drawing on Kant, Robert Sparrow argues that respect for persons entails that, even in war, one must acknowledge the personhood of those with whom one interacts, including the enemy. Acknowledging that personhood requires that whatever one does to another is done intentionally, with the knowledge that the act affects another person.[7] This relationship does not require communication or even the awareness by one actor that he or she may be acted upon by another. It requires only that the reasons actors give for any act that affects another human being take into account the respect owed that particular human being. To make life-and-death decisions absent that relationship subjects human beings to an impersonal and pre-determined process, and subjecting human beings to such a process is disrespectful of their status as human beings.

Thus, a concern arises when non-moral agents impose moral consequences on moral agents. Consider, for example, an artificially intelligent system that renders legal judgments on human violators. It is certainly conceivable that engineers could design a machine that considers a larger quantity and variety of data than a human judge could. The difficulty with the judgment the machine renders, however, is that the machine cannot put itself in the position of the person it is judging and ask, “If I were in that person’s circumstances, would I have done the same thing?” It is the inability not only to empathize but also to employ that empathy to generate additional reasons to act (or not act) that makes the machine’s judgment impersonal and pre-determined.[8] Absent an interpersonal relationship between judge and defendant, defendants have little ability to appeal to the range of sensibilities human judges may have to get beyond the letter of the law and decide in their favor. In fact, the European Union has enshrined the right of persons not to be subject to decisions based solely on automated data processing. In the United States, a number of states limit the applicability of computer-generated decisions and typically ensure an appeals process in which a human makes any final decision.[9]

This ability to interact with other moral agents is thus central to treating others morally. Being in an interpersonal relationship allows all sides to give and take reasons regarding how they are to be treated by the other and to take up relevant factors they may not have considered beforehand.[10] In fact, what might distinguish machine judgments from human ones is the human ability to establish what is relevant as part of the judicial process rather than beforehand. That ability is what creates space for sentiments such as mercy and compassion to arise. This is why only persons—so far at least—can show respect for other persons.

So, if it seems wrong to subject persons to legal penalties based on machine judgment, it seems even more wrong to subject them to life-and-death decisions based on such judgment. A machine might be able to enforce the law, but it is less clear whether it can provide justice, much less mercy. Sparrow further observes that what distinguishes murder from justified killing cannot be expressed by a “set of rules that distinguish murder from other forms of killing, but only by its place within a wider network of moral and emotional responses.”[11] Rather, combatants must “acknowledge the morally relevant features” that render another person a legitimate target for killing. In doing so, they must also grant the possibility that the other person may have the right not to be attacked by virtue of their non-combatant status or some other morally relevant feature.[12]

The concern is not whether using robots obscures moral responsibility; rather, it is that the employment of artificial-intelligence systems obscures the good humans can do, even in war. Because humans can experience mercy and compassion, they can choose not to kill, even when, all things being equal, it may be permissible.

Acting for the Sake of Others: Justice, Fairness, and Autonomous Weapons

The fact that systems driven by artificial intelligence cannot have the kind of interpersonal relationships necessary for moral behavior accounts, in part, for much of the opposition to their use.[13] If it is wrong to treat persons as mere means, then it seems wrong to have a mere means in a position to decide how to treat persons. One problem with this line of argument, which Sparrow recognizes, is that not all employment of autonomous systems breaks the relevant interpersonal relationship. To the extent humans still make the decision to kill or act on the output of a decision support system, they maintain respect for the persons affected by those decisions.


However, even with semi-autonomous weapons, some decision-making is taken on by the machine, mediating, if not breaking, the interpersonal relationship. Here Scharre’s point is relevant. Morality may demand an interpersonal relationship between killer and killed, but, as a matter of practice, few persons in those roles directly encounter the other. An Islamic State fighter would have no idea whether the bomb that struck him was the result of a human or machine process; therefore, it does not seem to matter much which one it was. A problem remains, however, regarding harm to noncombatants. While, as a practical matter, they have no more experience of an interpersonal relationship than a combatant in most cases, it still seems wrong to subject decisions about their lives and deaths to a lethal artificial-intelligence system just as it would seem wrong to subject decisions about one’s liberty to a legal artificial-intelligence system. Moreover, as the legal analogy suggests, it seems wrong even if the machine judgment were the correct one.

This legal analogy, of course, has its limits. States do not have the same obligations to enemy civilians that they do towards their own. States may be obligated to ensure justice for their citizens and not be so obligated to citizens of other states. There is a difference, however, between promoting justice and avoiding injustice. States may not be obligated to ensure the justice of another state; however, they must still avoid acting unjustly toward that other state’s citizens, even in war. So, if states would not employ autonomous weapons on their own territory, then they should not employ them in enemy territory.[14]

Of course, while states may choose not to employ lethal autonomous weapons on their own territory in conditions of peace, the technology could reach the point where they would employ such systems under conditions of war precisely because they are less lethal. If that were the case, then the concern regarding the inherent injustice of systems driven by artificial intelligence could be partially resolved. It is not enough, however, that a state treats enemy civilians by the same standards it treats its own. States frequently use their own citizens as mere means, so we would want a standard for that treatment that maintains respect for persons.

As Arthur Isak Applbaum argues, “If a general principle sometimes is to a person’s advantage and never is to that person’s disadvantage, then actors who are guided by that principle can be understood to act for the sake of that person.”[15] So, to the extent systems driven by artificial intelligence make targeting more precise than human-driven ones and reduce the likelihood that persons will be killed out of revenge, rage, frustration, or just plain fatigue, their employment would not put any persons at more risk than if those systems were not employed. To the extent that is the case, states are arguably at least permitted, if not obligated, to use them. Because employing these systems under such conditions constitutes acting for the sake of those persons, it also counts as a demonstration of respect towards those persons, even if the interpersonal relationship Sparrow described is mediated, if not broken, by the machine.

“An Act of Compassion” (Paul Stivers)

Conclusion

What this analysis has shown is that the arguments for considering military artificial-intelligence systems, even fully autonomous ones, mala in se are on shakier ground than those that permit their use. It is possible to demonstrate respect for persons even in cases where the machine is making all the decisions. This point suggests it is possible to align effective development of artificial-intelligence systems with our moral commitments and to conform to the war convention.

Thus, calls to eliminate or strictly limit the employment of such weapons are off base. If done right, the development and employment of these weapons can better deter war or, failing that, reduce the harms war causes. If done wrong, however, these same weapons can encourage militaristic responses when non-violent alternatives are available, resulting in atrocities for which no one is accountable and desensitizing soldiers to the killing they do. Doing it right means applying respect for persons not just when employing such systems but at all phases of the design and acquisition process, to ensure their capabilities improve our ability not only to reduce risk but also to demonstrate compassion.


C. Anthony Pfaff is the Research Professor for the Military Profession and Ethics at the U.S. Army War College’s Strategic Studies Institute. A retired Army colonel, he served on the Policy Planning Staff at the State Department where he advised on cyber policy. The views expressed here are the author’s alone and do not reflect those of the Strategic Studies Institute, the U.S. Army War College, the U.S. Army, or U.S. Government.





Header Image: An autonomous weapon (ABC News/DOD Photo)


Notes:

[1] Scott Shane, Cade Metz, and Daisuke Wakabayashi, “How a Pentagon Contract Became an Identity Crisis for Google,” New York Times, May 30, 2018, available from https://nyti.ms/2LJhmyg, accessed June 28, 2018. It is worth noting that Google’s commitment to avoiding evil is less than consistent, given its proposed cooperation with the Chinese government to provide a censored search engine. See Rob Schmitz, “Google Plans for a Censored Search Engine in China,” All Things Considered, National Public Radio, August 2, 2018, available from https://www.npr.org/2018/08/02/635047694/google-plans-for-a-censored-search-engine-in-china, accessed August 6, 2018.

[2] “Thursday Briefing: EU Calls for Ban on Autonomous Weapons,” Wired, September 18, 2018, available from https://www.wired.co.uk/article/wired-awake-130918, accessed January 14, 2019.

[3] Future of Life Institute, Lethal Autonomous Weapons Pledge, https://futureoflife.org/lethal-autonomous-weapons-pledge/?cn-reloaded=1, accessed August 27, 2018.

[4] For the purposes of this discussion, the term AI systems will refer to military AI systems that may be involved in “life-and-death” decisions, specifically those systems that can select and engage targets without human intervention.

[5] Objections to their use tend to cluster around the themes that such weapons dehumanize warfare and introduce a “responsibility gap” that could undermine International Humanitarian Law (IHL). Moreover, even if one resolved these concerns, the application of such systems risks moral hazards associated with lowering the threshold to war, desensitizing soldiers to violence, and incentivizing a misguided trust in the machine that abdicates human responsibility.

[6] Paul Scharre, Army of None: Autonomous Weapons and the Future of War, New York: W.W. Norton and Company, 2018, pp. 287-288.

[7] Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs, Vol. 30, No. 1, 2016, p. 106. It is worth noting here that the Army Ethic acknowledges the importance of respect and includes as a principle, “In war and peace, we recognize the intrinsic dignity and worth of all people, treating them with respect.” See ADRP-1, 2-7. I owe this point to Michael Toler, Center for the Army Profession and Ethics.

[8] Eliav Lieblich and Eyal Benvenisti, “The Obligation to Exercise Discretion in Warfare: Why Autonomous Weapons Systems Are Unlawful,” in Nehal Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu, and Claus Kreß, eds., Autonomous Weapon Systems: Law, Ethics, Policy, Cambridge: Cambridge University Press, 2016, p. 266.

[9] Lieblich and Benvenisti, p. 267.

[10] The point here is not that the battlefield is a place to negotiate or that there must be some kind of interaction independent of a targeting process to justify the decision to use lethal force. Rather, the point is that, in any situation where a human being may be harmed, the decision to commit that harm is made by another human who can identify and consider the range of factors that would justify it. In this way, the person deciding to harm understands that it is another person whom he or she is harming and considers reasons not to do it. Autonomous systems may eventually be able to discern several relevant factors; however, that only entails they are considering reasons to harm, not reasons not to harm.

[11] Sparrow, p. 101.

[12] Sparrow, pp. 106-107.

[13] Sparrow, p. 108.

[14] Lieblich and Benvenisti, p. 267.

[15] Arthur Isak Applbaum, Ethics for Adversaries: The Morality of Roles in Public and Professional Life, Princeton, NJ: Princeton University Press, 1999, pp. 162-166. Applbaum refers to situations where someone is better off and no one is worse off as “avoiding Pareto-inferior outcomes.” Avoiding such outcomes can count as “fair” and warrant overriding consent.