It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until U.S. policy makers know enough to draw realistic ethical boundaries, they are likely to focus on measures that balance competing obligations and pressures during this ambiguous development phase.
Membership in the profession of arms is a tightrope walk. Just warriors manage a delicate balance between respecting human life and taking it. This is no new phenomenon, but instead has been a fact about war from the beginning. We judge Achilles, but not for killing Hector; that was his soldierly duty. There was a hope, though, that even in death, Achilles might honor Hector’s life. This was not to be. In defiling Hector’s body, Achilles dehumanized his enemy and fell to one side of the tightrope.
Fifty years ago, Vermont Royster wrote that “it may seem cruel, this tradition of asking good and well-intentioned men to account for their deeds.” This accounting should not stop with the commanders at sea, but should extend to actions ashore, including how such incidents are handled and learned from.
The widening rift between the Pentagon and Silicon Valley endangers national security in an era when global powers are embracing strategic military-technical competition. As countries race to harness the next potentially offsetting technology, artificial intelligence, the implications of relinquishing their competitive edge could drastically change the landscape of the next conflict. The Pentagon has struggled—and continues to struggle—to make a solid business case for technology vendors to sell their products to the Defense Department. Making the economic case to Silicon Valley requires process improvement, but building a strong relationship will necessitate embracing the ethical questions surrounding the development and employment of artificial intelligence on the battlefield.
The concern here, however, is not that death by robot represents a more horrible outcome than when a human pulls the trigger. Rather, it has to do with the nature of morality itself and the central role that respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.
Navy culture builds on traditions of the sea and seafaring in a nearly unbroken line from the sailing fleets of the British Empire through today’s modern nuclear-powered ships of steel. One common saying is that the United States Navy is “over 240 years of tradition, unhampered by progress,” a simultaneous indictment of conservatism and a celebration of history and tradition. While the statement is not fully true, tradition is such a cornerstone of naval life that it serves as an unofficial fourth core value and the single most common rationale for any action. Sailors cite tradition in many ways and forms, often interchangeably with custom and routine.
Over the last decade, military theorists and authors in the fields of future warfare and strategy have examined in detail the potential impacts of an ongoing revolution in information technology. There has been a particular focus on the impacts of automation and artificial intelligence on military and national security affairs. This focus on silicon-based disruption, however, has meant that insufficient attention may have been paid to other equally profound technological developments. One of those developments is the field of biotechnology.
It is the objections independent of technological capability that are gaining prominence among opponents of lethal autonomous weapons systems. These objections include the question of whether the use of autonomous weapons might lead to a responsibility gap where humans cannot uphold their moral responsibility, whether their use would undermine the human dignity of those combatants who are targeted, and the possibility that further increasing human distance from the battlefield could make the use of violence easier or less controlled.
The strategic demands of a great power war with a peer adversary—the high-end conflict—will inevitably push decision-makers to the limits of what is ethically permissible. We have seen it in the two great wars of the 20th century. In the next great power war—and one hopes it never comes—western states will put their strategic and operational capabilities to the test. But such a war will also test the moral will of their citizens—the people in whose name the killing and dying will take place.
The professionalism of Western militaries is ripe for another discussion. The practitioners who make up the profession of arms—and those who study and teach them—owe it to their citizens, their governments, and themselves to shape their forces, and educate their professionals, in preparation for the future. It is their duty to ensure they are prepared to ethically and effectively achieve the military objectives their leaders lay before them, no matter the adversary or the context of the conflict.
Sentilles’s staccato collection presents as a meditation on the pulsing heritage that underscores life and death. In her preface, she acknowledges, “I began writing these pages after seeing two photographs.” One was an innocuous photograph of a man, Howard Scott, holding a violin, while the other was of an unidentified detainee from Abu Ghraib. With this juxtaposition, Sentilles sets out to unravel their complicated legacies and reveal their common thread: war.
In the aftermath of the German U-boat campaign in the First World War, many in Europe and the United States argued that submarines were immoral and should be outlawed. The British Admiralty supported this view, and as Blair has described, even offered to abolish its submarine force if other nations followed suit. While British proposals to ban submarines in 1922 and 1930 were defeated, restrictions were imposed mandating that a submarine could not attack a ship until that ship’s crew and passengers had been placed in safety. This reaction to the development of a new means of war is illustrative of the type of ethical and legal challenges that must be addressed as military organizations adopt greater human-machine integration.
Even a casual viewer of the recent Burns and Novick film, The Vietnam War, comes away with an understanding of the central theme of moral injury and the difficulty of the moral impacts of war on the individuals who fought and the society that sent them. While Jonathan Shay coined the term ‘moral injury’ in his seminal 1994 book Achilles in Vietnam, the issue has more recently become a prominent part of the public discourse, driven by concerns about PTSD, moral injury, and the return of veterans of the ‘Forever War’ in Iraq and Afghanistan, as well as an increasing awareness of the so-called military/civilian culture gap. Tim O’Brien’s reading from The Things They Carried at the end of the film is especially evocative because of the public moment we find ourselves inhabiting.
What do the ideas of narrative as doctrine, Stoicism, defeat, chivalry, and fighting for pay tell us about the development of military professionalism in the West? In his new volume, Soldiers and Civilization: How the Profession of Arms Thought and Fought the Modern World into Existence, Reed Robert Bonadonna addresses the role these and other developments in military history played in the development of military professionalism. His book is a fascinating and deep journey through military and intellectual history, which seeks to bring a historical and literary focus to a topic that tends to be dominated by social scientists such as Samuel Huntington or by ethicists rooted in the military practice such as Anthony Hartle. This volume appears unique in its focus and brings an important voice to the debate over the sources and nature of military professionalism in the West.
Officers need not be saints, but they must be people who are willing to confront the unavoidable ethical questions that run through the decisions they make and the example they set. An officer’s education and practical experience give her an instinct for prudence, but like other virtues that may be partly innate, it must be cultivated. Military officers should also teach prudence to those they instruct, lead, and advise. This is even more critical in times like these, when brinkmanship and imprudence amounting to impudence seem to be the order of the day.
To be a professional member of the military means to be obedient; to be disobedient is, therefore, unprofessional. However, the Nuremberg trials and the events of My Lai demonstrate that the concept of obedience is not that simple. Military members are expected to disobey manifestly illegal or immoral orders, so obedience cannot be an unconditional virtue.
Man, since creation, has had to kill and pillage in his quest for security and survival. Our complex characteristics such as greed, ambition, and lust have led us through the generations to bare the teeth and raise the spear against our own kind in order to keep land, power, and wealth. War, and the art of it, has therefore been a handy tool for man either to destroy or to rebuild.
In the wake of nearly every scandal and moral lapse in the military, we hear the same response: “This is a leadership issue.” This view is problematic, as it seems to assume that all ethical matters are reducible to leadership issues or that these scandals are a product of the personal morality of the leader in question. Responses like these ought to push us to ask: What is the connection and overlap between ethics and leadership in the military?
While the rapid advance of artificial intelligence and warbots has the potential to disrupt U.S. military force structures and employment methods, these technologies offer great promise and are worth the risk. This is especially true as the conceptual mechanisms for providing variable autonomy and direct accountability are already in place. Commanders will retain their central role in determining the level of variable autonomy given to subordinates, whether human or warbot, and will continue to be held accountable when things inevitably go wrong for either.