It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until policy makers know enough to draw realistic ethical boundaries, prudent U.S. policy makers are likely to focus on measures that balance competing obligations and pressures during this ambiguous development phase.
For those familiar with the traditional narrative of U.S. airpower history centered on the Air Corps Tactical School’s development of bomber doctrine followed by its application against Germany during World War II, Rid provides a jarring but useful counter-narrative focused on human-machine interactions.
Greek mythologies, while not perfect analogies, provide ample cautions for military leadership faced with the prospect of future algorithmic warfare. Advanced military technologies named after Greek mythological characters—Harpy, Gorgon, Athena, Aegis, Talos, etc.—suggest an analogical construct reminiscent of ancient heroes who relied on the favor of the gods to tip the enigmatic scales in their favor.
The widening rift between the Pentagon and Silicon Valley endangers national security in an era when global powers are embracing strategic military-technical competition. As countries race to harness the next potentially offsetting technology, artificial intelligence, the implications of relinquishing their competitive edge could drastically change the landscape of the next conflict. The Pentagon has struggled—and continues to struggle—to make a solid business case for technology vendors to sell their products to the Defense Department. Making the economic case to Silicon Valley requires process improvement, but building a strong relationship will necessitate embracing the ethical questions surrounding the development and employment of artificial intelligence on the battlefield.
The concern here, however, is not that death by robot represents a more horrible outcome than when a human pulls the trigger. Rather it has to do with the nature of morality itself and the central role respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.
While Russia and China are known for their lumbering civilian and military bureaucracies, both nations are nonetheless demonstrating that they can be nimble enough to accelerate certain technological developments, along with testing and evaluation. So far, both competitors have proven that they can take specific American elements and apply them to their own unique ecosystems. Nonetheless, using American-style institutional and procedural concepts is still a novel idea for the top-heavy ministries tasked with such breakthrough technological developments in both countries.
Absent a clear understanding of which military problems emergent technologies are required to solve, there is, perhaps, too much confidence in their ability to reshape the character of the next war by enabling decisive battlefield advantage. More troublingly, predictions about machine-dominated warfare risk obscuring the human cost implicit in the use of violence to achieve a political objective. This article examines the integration challenge that continues to limit the military potential of available technology. It will then look specifically at why militaries should be cautious about the role artificial intelligence and autonomous systems are expected to play in future warfare.
Interconnectedness has allowed society to take great leaps forward, yet social media and the internet remain an ungoverned space for nefarious actors. Violent extremist organizations, criminal groups, and state actors have all taken advantage of the anonymity and access afforded by modern technology to plan, execute, and support operations, gaining relative superiority over traditional security structures. As adversaries become more technologically savvy, the United States and its allies must become more adept at leveraging these trends. Open source intelligence, especially when coupled with rapidly improving big data analysis tools that can comb through data sets previously too complex to yield meaningful results, has the potential to offset this growing problem, providing intelligence on enemy forces, partners, and key populations.
In the aftermath of the German U-boat campaign in the First World War, many in Europe and the United States argued that submarines were immoral and should be outlawed. The British Admiralty supported this view and, as Blair has described, even offered to abolish its submarine force if other nations followed suit. While British proposals to ban submarines in 1922 and 1930 were defeated, restrictions were imposed mandating that a submarine could not attack a ship until that ship's crew and passengers had been placed in safety. This reaction to the development of a new means of war is illustrative of the type of ethical and legal challenges that must be addressed as military organizations adopt greater human-machine integration.
The military holds an enduring interest in robotic capability and in teaming robots with humans. From the Germans' use of remote-controlled boats in the First World War, through the unmanned, tracked Goliath vehicles filled with explosives used in World War Two, to contemporary EOD robots and unmanned aerial and ground vehicles, military organizations have long sought to leverage robotic capability. At the height of the Iraq War in 2006, the U.S. military fielded over 8,000 robots in theater.
This article is the second of three examining aspects of human-machine teaming. The first examined the rationale for human-machine teaming through 'seven propositions'. This article examines key elements military organizations might adopt in integrating humans and machines more closely. It proposes three areas upon which a competitive strategy for future operations might be constructed, providing background, analysis, and possible applications of human-machine teams for each.
The information age, a phrase famously coined by Berkeley Professor Manuel Castells in the 1990s, described a tectonic shift in our culture and economy that we now generally take for granted. From our current vantage point, replete with ubiquitous pocket-sized personal computing and communications devices, it is hard to imagine a world in which we cannot convert our data or social networks into physical resources and access. We keep our data in the cloud and call upon it when we need it, regardless of where we are. We log into Airbnb, and somehow money we have never seen transfers to someone else who will never see the money, and that becomes a room for an evening. The idea of a brick-and-mortar video store, such as the 1990s-staple Blockbuster Video, is hopelessly anachronistic in the era of Netflix.
At some point in the future, historians may look back on the current era as the dawn of a human-machine revolution or perhaps even the beginnings of the sixth revolution in military affairs. Williamson Murray notes in The Dynamics of Military Revolution that such things are rarely apparent in advance, and only obvious in retrospect and in the wake of remarkable battlefield success. While certainly the societal, technological, political, and military ingredients of such a revolution are present, whether this consists of a revolution in military affairs will be left to future historical debate.
While it took centuries to move from Da Vinci’s vision to the Wright Brothers’ reality, the flash to bang on drones and beyond is rapidly shrinking. Whether we are still on the cusp or already tumbling down the rabbit hole, such technology will continue to combine in wonderful and terrible ways. We hope you enjoy this series as much as we enjoyed putting it together. More importantly, we hope it forces us to think about the future of warfare in new and uncomfortable ways.
As the U.S. and China compete to innovate in this domain, the relative trajectories of U.S. and Chinese advances in artificial intelligence will impact the future military and strategic balance. China’s ability to leverage these national strategies, extensive funding, massive amounts of data, and ample human resources could result in rapid future progress. In some cases, these advances will be enabled by technology transfer, overseas investments, and acquisitions focused on cutting-edge strategic technologies.
As the U.S. and China respectively prioritize advances in the same strategic technologies, innovations may take place simultaneously, and diffusion may occur almost instantaneously. As China has become a global leader in multiple critical technological domains—including unmanned systems, hypersonic weapons, artificial intelligence, and quantum information science—indigenous Chinese innovation, rather than simply its rapid expropriation and effective emulation of foreign advances, also has the potential to prove highly disruptive. Under these conditions, neither the U.S. nor China is likely to achieve or maintain an enduring technological advantage.
In many ways, military forces using AI on the battlefield is not new at all. At a simplistic level, the landmine is perhaps a good starting example. The first known record of landmines dates to 13th-century China, and they emerged in Europe somewhere between 1500 and 1600. Most landmines are not intelligent at all and apply a binary logic of "kill" or "don't kill." What landmines lack, and one of the primary reasons they are banned by most countries, is the ability to use just and discriminate force. As far as computers have come since the British used "The Bombe" to break the Enigma code, the human mind still has an advantage in determining the just and discriminate use of force and in thinking divergently about the second- and third-order effects resulting from the use of force. But, according to some, that advantage may not last for long.