“Cry Havoc, and let slip the dogs of war.”
As early as 1599, Shakespeare’s turn of phrase for Antony in Julius Caesar tacitly acknowledged a 2,000-year-old human acceptance of autonomous war machines. What is a militarily employed dog other than, as DOD Directive 3000.09 defines autonomous weapons, “a weapon system that, once activated, can select and engage targets without further intervention by a human operator”? As modern-day ethicists agonize over autonomy’s ascendance, they ignore 2,600 years of wartime employment of autonomous, self-replicating killing machines that are, by popular opinion, still our best friend.
The first records of wartime dog deployment date to 600 B.C., when Anatolia’s King Alyattes set “his strongest dogs upon the (Cimmerian) barbarians, as if they were wild animals.” Contemporary Assyrian stone reliefs in Iraq show shield bearers joining combat alongside an armored mastiff. Since then, man’s oldest autonomous weapon has seen combat from Marathon to Kursk to Vietnam and Afghanistan. Contemporary U.S. military working dog programs have employed as many as 2,500 dogs for missions ranging from explosives detection to security.
We torture ourselves over trusting mechanical autonomous killing systems in which our control is virtually total. The lethality, the decision-making software, the fail-safes, and the specific purposes and points of employment are all determined precisely by our engineers and operators. The programming and engineering necessary to create an effective, high-functioning, kill-capable autonomous machine, while difficult, are instantly reproducible on an industrial scale once mastered. Meanwhile, our most trusted companion runs a base software combining indeterminate Pavlovian influences and ancient instinct. The training that suppresses or harnesses these non-operator inputs in pursuit of human objectives is difficult, imperfect, and unique to each animal, and even then carries the potential for override at critical moments. And yet, as long as we recognize the capabilities and limitations of either autonomous system, each can be our greatest companion. The oldest autonomous weapon in humanity’s arsenal patrols our cities, lives in our homes, and protects our children while the human operator is away. History has taught us not to be afraid.
And we should not be afraid now. Machines imbued with algorithmic autonomy are more controlled than the biological intelligence of a dog. Mere autonomy is a land mine, doing no more than it was explicitly designed to do; it is not the independence of intelligence, as with a dog, dolphin, or Skynet. Assuming this debate is new or undecided not only ignores our 2,600 years of canine combat, it also disregards the century-old, closed debate on mechanical autonomy. From the Hewitt-Sperry Automatic Airplane project of the First World War, to the 1944 ASM-N-2 Bat radar-guided glide bomb, to the loitering munitions of today, autonomous weapons are standard fare. Autonomous, non-human systems have been assumed and accepted for almost half of written history. The expansion of autonomous mechanical systems specifically has been an acknowledged and accepted goal for the last century.
The contemporary difference is that we are replacing our mechanical kill vehicles’ tendency toward suicide with reusability. This inability to see today’s developments as extensions of the past is best demonstrated by the strange reaction to Iran’s Yasir suicide drones in 2014. The Yasir was treated as a new or innovative military development, but anyone familiar with over-the-horizon weapons would recognize it as nothing more than a cheap, slow guided missile. The opportunities provided by capable remote systems and reusable drone platforms blind us, destroying the weapon taxonomy that allows us to understand and debate properly.
Further, we seek to expand this already-accepted autonomy with discrete target identification processes. Those who fear robots that can loiter in wait for specific people or aircraft forget that our modern fire-and-forget systems rely on rudimentary heat signatures, radio signals, or generic radar returns. Western “killer robot” developers pursue new, more conservative systems of discrimination, combining electro-optical, infrared, radar, and other imaging profiles to determine the use of force. We are moving closer to the ancient and advanced targeting systems of our war-fighting dogs, and away from the crudity of the autonomous killing machines of the past 100 years. The fears of the killer robot skeptics are not unique to autonomy; rather, they are the fears of misuse common to any conventional weapon system.
The question is not whether autonomy is appropriate, but how much we can train or design the systems to handle, and for what scope of tasks. Regional powers such as China and Russia are embracing that arc of history, bolstering their anti-access doctrines or mitigating their demographic death spirals. Any western campaign to stop killer robots will not change the march of warfare and weapons development. It will only hold us back, and cost lives. Dogs have been our allies at home and at war for thousands of years. Machines with autonomous kill capability have been in our arsenals for almost 100 years. Lethal autonomy is not inevitable; it is already ascendant and sanctioned, with half of recorded human history, and man’s best friend, behind it.
Matthew Hipple is an active duty naval officer and graduate of the Georgetown School of Foreign Service. A co-founder and former President of the Center for International Maritime Security, now he’s an OK dad and annoying husband. The views expressed in this article are those of the author and do not represent the official position of the Department of the Navy, the Department of Defense, or the U.S. Government.
Header Image: Dogs of War (June LaCombe)