The debate over the ethics of employing lethal autonomous weapons systems has now gone on long enough that the main lines of argument for and against are reasonably clear. In a recent report, the International Committee of the Red Cross has offered a helpful summary. As they point out, the arguments against employing lethal autonomous weapons systems are broadly of two types:
...objections based on the limits of technology to function within legal constraints and ethical norms; and ethical objections that are independent of technological capability.[1]
The objections based on the limits of technology are, the International Committee of the Red Cross concedes, contingent, relying on assumptions about technological capabilities that may or may not apply in the future. They are also contextually dependent. While it might be argued that we could never, for example, rely on lethal autonomous weapons systems to operate within legal and ethical constraints when targeting individual combatants in a crowded urban environment, it is much easier to imagine such a system operating within those limits while, say, providing top cover to defend a naval platform against incoming enemy strike aircraft.
Increasingly, then, it is the second set of objections—those independent of technological capability—that is gaining prominence among opponents of lethal autonomous weapons systems. These objections include the question of whether the use of autonomous weapons might lead to “a responsibility gap where humans cannot uphold their moral responsibility,” whether their use would undermine “the human dignity of those combatants who are targeted, and of civilians who are put at risk of death and injury as a consequence of attacks on legitimate military targets,” and the possibility that “further increasing human distancing—physically and psychologically—from the battlefield” could increase “existing asymmetries”[2] and make “the use of violence easier or less controlled.”[3]
Of these, it is the dignity objection that deserves the greatest focus, and it is worth highlighting just how awkward an objection it is. Of course, it does not look that way on the surface. For the contemporary West, the idea that humans have a fundamental dignity is part of what the Canadian philosopher Charles Taylor calls the social imaginary, “that common understanding which makes possible common practices, and a widely shared sense of legitimacy.”[4] As a consequence, the idea of human dignity “has come to count as the taken-for-granted shape of things, too obvious to mention.”[5] But as Remy Debes, editor of Dignity: A History, has recently argued, the concept of human dignity in the moralised sense employed by opponents of lethal autonomous weapons systems is a tenuous notion with remarkably shallow historical roots:
So, why think the concept of human dignity is tenuous? In the first place, it is very young. The term is not in any existing copy of the Magna Carta (1215). It does show up much later in the English Bill of Rights (1689), but not with a moralised meaning. People were not yelling ‘Liberté, égalité, dignité!’ during the French Revolution. And for all its fiery rhetoric about equality and ‘inalienable’ rights, the US Declaration of Independence does not speak of human dignity. Nor does the US Constitution.… You won’t find any moralised talk of human dignity in any of the old slave narratives. And it isn’t in the passionate abolitionist speeches, pamphlets and newspaper editorials of the 19th century. Ditto for suffrage. Mary Wollstonecraft, Sojourner Truth, Frederick Douglass, Susan B. Anthony, Jane Austen, Harriet Beecher Stowe: none used the term much, and almost never with its moralised meaning. In fact, until at least 1850, the English term ‘dignity’ had no currency as meaning anything like the ‘unearned worth or status of humans’, and very little such currency well into the 1900s. When the Universal Declaration of Human Rights (1948) used the terminology of human dignity to justify itself, this turned out to be a conceptual watershed. We have not been talking about human dignity for long.[6]
We largely owe our modern emphasis on human dignity to the practical philosophy of Immanuel Kant, which only began to gain significant traction outside the German-speaking world in the latter half of the 19th century. As Taylor explains, it is the notion of humans as rational agents that is central to Kant’s position:
… Kant shares the modern stress on freedom as self-determination. He insists on seeing the moral law as one which emanates from our will. Our awe before it reflects the status of rational agency, its author, and whose being it expresses. Rational agents have a status nothing else enjoys in the universe. They soar above the rest of creation. Everything else may have a price, but only they have ‘dignity’ (Würde). And so Kant strongly insists that our moral obligations owe nothing to the order of nature.[7]
Despite Kant’s clear influence, Debes warns us, “If we want some unambiguous historical basis for the way we talk about human dignity today, then we will be disappointed.” Whatever we assume about its status, dignity as a concept “is yet malleable.”[8]
One possible response to the dignity objection might be to challenge the claim that employing lethal autonomous weapons systems undermines the “unearned worth or status of humans” in some way that other weapons of war do not; no convincing explanation has yet been offered, beyond strong assertion, of how this dignity violation is supposed to work. Others might object to the application of so distinctly Western and individualistic an ethical concept as the basis for a global effort to rule out the use of lethal autonomous weapons systems. It is noteworthy that the written submissions by the Chinese to the UN Group of Governmental Experts tasked with evaluating this issue incorporate essentially all the objections outlined in the International Committee of the Red Cross report except the dignity objection.[9]
But the goal here is not to challenge the dignity objection directly, only to highlight its awkwardness. In particular, even if we accept as a given that lethal autonomous weapons systems violate human dignity when they kill someone, in practice such violations will often, perhaps usually, be invisible. Military forces will likely prefer weapons systems that are not always-autonomous but optionally-autonomous—that is to say, systems that can operate autonomously but which can also, when so desired, be remotely operated by a human operator. To the extent that this is correct, it will be difficult or impossible for an external observer to determine whether a given system is operating autonomously or being controlled by a remote human operator. Thus, when someone is killed by such a system—an uninhabited but optionally-autonomous armed aerial vehicle, for example—humankind will be in the awkward position of having to say that a person’s dignity might have been violated, but we just do not know. Someone will know—the person who decides whether the machine’s switch is flipped to autonomous mode or not—but for everyone else, the dignity violation will be invisible.
Of course, a right can be violated without anyone but the perpetrator knowing, and it is no less a violation for that. The peeping Tom who never gets caught is nonetheless a violator of his victim’s right to privacy. So the idea of violating a human’s dignity in death by a lethal autonomous weapons system is not negated by the invisibility of that violation. Still, when invisible violations are potentially the most common cases, this is at least awkward. An invisible dignity violation is an uncomfortably ethereal basis for regulating as crude and base a practice as the use of violence in war. If humankind wants to walk down this path, it should at least do so with its eyes open to how very different this is from existing constraints on military force.
The laws and ethics of war focus almost exclusively on preventing, or at least reducing, physical harm and suffering—they are a humanitarian but pragmatic response to a decidedly non-ideal form of human interaction. As Valerie Morkevičius has shown in her excellent new book on the Just War tradition, attempts to base an ethics of war on exquisite ideals are anomalies within the broad sweep of that tradition and, more importantly, almost inevitably end in pacifism. That may be fine in the philosopher’s study, but it will not do for the real world. We need rugged and realistic rules by which self-interested states might actually abide. “When the rules are pragmatic—when they reflect the ways wars are fought and won—they are more likely to be obeyed. At the very least, a common excuse for overriding the rules has been eliminated.”[10]
Deane-Peter Baker is a military ethicist working in the School of Humanities and Social Sciences at the University of New South Wales-Canberra, the academic partner to the Australian Defence Force Academy. He is also a Senior Visiting Fellow of the Centre for Military Ethics at King’s College London. From November 2018 to January 2019 he is being hosted by the Triangle Institute for Security Studies, a consortium of Duke University, UNC-Chapel Hill, and North Carolina State University. He is also a member of the International Panel on the Regulation of Autonomous Weapons. The views expressed here are his own and do not represent the position of the Panel.
Header Image: Artist’s conception of autonomous weapons. (Chesky_W/CNBC)
Notes:
[1] International Committee of the Red Cross, ‘Ethics and autonomous weapon systems: An ethical basis for human control?’ (April 2018), 8-9.
[2] It is not obvious why this is an ethical issue. Recall Conrad Crane’s memorable opening to a paper: “There are two ways of waging war, asymmetric and stupid.” It does not seem to be a requirement of ethics that combatants “fight stupid.” See Conrad Crane, “The Lure of Strike,” Parameters, vol. 43, no. 2 (2013), 5.
[3] International Committee of the Red Cross 2018, 9.
[4] Charles Taylor, A Secular Age (Harvard University Press, 2007), 172.
[5] Charles Taylor, Modern Social Imaginaries (Duke University Press, 2004), 22.
[6] Remy Debes, “Dignity is Delicate,” Aeon, 17 September 2018.
[7] Charles Taylor, Sources of the Self: The Making of the Modern Identity (Harvard University Press, 1989), 83.
[8] Debes 2018.
[9] The Campaign to Stop Killer Robots has declared China to be in support of a ban; however, the preeminent analyst of China’s approach to military artificial intelligence, Elsa Kania, has expressed her doubts.
[10] Valerie Morkevičius, Realist Ethics: Just War Traditions as Power Politics (Cambridge University Press, 2018), 225.