It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until they know enough to draw realistic ethical boundaries, prudent U.S. policymakers are likely to focus on measures that balance competing obligations and pressures during this ambiguous development phase.
For those familiar with the traditional narrative of U.S. airpower history centered on the Air Corps Tactical School’s development of bomber doctrine followed by its application against Germany during World War II, Rid provides a jarring but useful counter-narrative focused on human-machine interactions.
The widening rift between the Pentagon and Silicon Valley endangers national security in an era when global powers are embracing strategic military-technical competition. As countries race to harness the next potentially offsetting technology, artificial intelligence, relinquishing a competitive edge could drastically change the landscape of the next conflict. The Pentagon has struggled, and continues to struggle, to make a solid business case for technology vendors to sell their products to the Defense Department. Making the economic case to Silicon Valley requires process improvement, but building a strong relationship will require embracing the ethical questions surrounding the development and employment of artificial intelligence on the battlefield.
The concern here, however, is not that death by robot represents a more horrible outcome than when a human pulls the trigger. Rather it has to do with the nature of morality itself and the central role respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.
While Russia and China are known for their lumbering civilian and military bureaucracies, both nations are nonetheless demonstrating that they can be nimble enough to accelerate certain technological developments, along with testing and evaluation. So far, both competitors have proven that they can take specific American elements and apply them to their own unique ecosystems. Even so, using American-style institutional and procedural concepts remains a novel idea for the top-heavy ministries tasked with such breakthrough technological developments in both countries.
The military holds an enduring interest in robotic capability and in teaming robots with humans. From the German remote-controlled boats of the First World War and the unmanned, tracked, explosive-laden Goliath vehicles of the Second, through to contemporary EOD robots and unmanned aerial and ground vehicles, military organizations have long sought to leverage robotic capability. At the height of the Iraq War in 2006, the U.S. military fielded over 8,000 robots in theater.
This article is the second of three examining aspects of human-machine teaming. In the first, I examined the rationale for human-machine teaming through ‘seven propositions’. This article examines key elements military organizations might adopt to integrate humans and machines more closely. I propose three areas upon which a competitive strategy for future operations might be built, and for each provide background, analysis, and possible applications of human-machine teams.
The information age, a phrase famously coined by Berkeley Professor Manuel Castells in the 1990s, described a tectonic shift in our culture and economy that we now generally take for granted. From our current vantage point, replete with ubiquitous pocket-sized personal computing and communications devices, it is hard to imagine a world in which we cannot convert our data or social networks into physical resources and access. We keep our data in the cloud and call upon it when we need it, regardless of where we are. We log into Airbnb, and somehow money we have never seen transfers to someone else who will never see it, and that becomes a room for an evening. The idea of a brick-and-mortar video store, such as the 1990s-staple Blockbuster Video, is hopelessly anachronistic in the era of Netflix.
At some point in the future, historians may look back on the current era as the dawn of a human-machine revolution or perhaps even the beginnings of the sixth revolution in military affairs. Williamson Murray notes in The Dynamics of Military Revolution that such things are rarely apparent in advance, becoming obvious only in retrospect and in the wake of remarkable battlefield success. While the societal, technological, political, and military ingredients of such a revolution are certainly present, whether this constitutes a revolution in military affairs will be left to future historical debate.
As the U.S. and China compete to innovate in this domain, the relative trajectories of U.S. and Chinese advances in artificial intelligence will impact the future military and strategic balance. China’s ability to leverage these national strategies, extensive funding, massive amounts of data, and ample human resources could result in rapid future progress. In some cases, these advances will be enabled by technology transfer, overseas investments, and acquisitions focused on cutting-edge strategic technologies.
In many ways, military forces using AI on the battlefield is not new at all. At a simplistic level, the landmine is perhaps a good starting example. The first known record of landmines dates to 13th-century China, and they emerged in Europe between roughly 1500 and 1600. Most landmines are not intelligent at all and apply a binary logic of “kill” or “don’t kill.” What landmines lack, and one of the primary reasons they are banned by most countries, is the ability to use just and discriminate force. As far as computers have come since the British used “The Bombe” to break the Enigma code, the human mind still has an advantage in determining the just and discriminate use of force and in thinking divergently about the second- and third-order effects resulting from the use of force. But, according to some, that advantage may not last for long.