"In the early twenty-first century the train of progress is again pulling out of the station—and this will probably be the last train ever to leave the station called Homo Sapiens. Those who miss this train will never get a second chance. In order to get a seat on it you need to understand twenty-first century technology, and in particular the powers of biotechnology and computer algorithms…those left behind will face extinction."
—Yuval Harari
The words of Yuval Harari from Homo Deus are reflected in a recent study which found that “the military is on the cusp of a major technological revolution as it enters the Robotic Age, in which warfare is conducted by unmanned and increasingly autonomous weapon systems, operating across all domains, and across the full spectrum of military operations. The question is not whether the future of warfare will be filled with autonomous, AI-driven robots, but when and in what form.” Key to realizing this shift in how war might be fought in future will be human-machine teaming.
The barriers to entry for this type of technology continue to drop. This is an advantage for small militaries. However, it also increases the probability that this technology will proliferate to non-state actors and developing states. In comparison to heavy conventional weapons, nuclear arms, and other technology associated with weapons of mass destruction, artificial intelligence and robotics will be cheaper and more broadly accessible. This accessibility is being driven by rapid growth in computer performance, falling prices for robotic systems due to wide commercial investment, and continuing advances in machine learning.
Human-machine teams will have applications beyond the battlefield and throughout what might be termed the enterprise of military organizations. Understanding this enterprise approach will ensure that the opportunities for human-machine teams can be exploited in institutional strategic planning, recruiting and training people, and conducting procurement and strategic logistics, as well as in the myriad battlefield roles currently being imagined.
This article is the first of three examining key aspects of human-machine teaming. It sets out the rationale for human-machine teaming through seven propositions. The second article examines three key areas where military organizations might pursue closer integration of humans and machines. The third and final article examines the principal challenges found in human-machine teaming. These challenges—at the strategic, institutional, and tactical levels—must be tackled by military organizations that aspire to generate advantage through such an approach.
Importantly, none of these articles proposes that the closer integration of humans and machines will result in any fundamental change to the nature of war. War will retain its enduring nature, with several continuities: a political dimension, a human dimension, and the existence of uncertainty, all within the context of a contest of wills. These articles remain focused on another aspect of Clausewitz’s examination of war—that of its changing character.
Throughout On War, Clausewitz highlights how failing to understand the character of war leads to disaster. In discussing the Prussian defeat in 1806, he chastises Prussian generals for misapplying Frederick the Great’s tactic, the oblique order, against a Napoleonic enemy waging a new type of warfare. Similarly, the integration of human-machine teams represents such a change in the character of war.
This article illuminates why such a shift may occur and why it is attractive to military institutions. It also seeks to provide a foundation for military organizations to undertake more detailed analysis of the personnel, equipment, training, education, doctrine, sustainment, and infrastructure issues that a move to an integrated human-machine force will entail.
The Imperative: Human-Machine Teaming On and Beyond the Battlefield
Gill Pratt, former DARPA Program Manager and CEO of the Toyota Research Institute, has argued that technological and economic trends are converging to deliver a Cambrian Explosion of new robotic capabilities. Many of the foundational technologies for robots, such as computing, data storage, and communications, have been progressing at exponential growth rates. Two more recent technologies—Cloud Robotics and Deep Learning—are likely to build upon these earlier technologies in what Pratt has described as a virtuous cycle of explosive growth. Cloud Robotics permits each individual robot to learn from the experiences of all robots, in turn leading to very fast growth of robot competence. Deep Learning algorithms are a way for robots to learn and generalize their associations based on very large (and often cloud-based) training sets that often include millions of examples.
Military developments in robotics, artificial intelligence, and augmentation will largely be based on these developments in civil society. The development of artificial intelligence and machine learning is an area of significant investment in many nations. Contemporary robots and machine learning are already changing the nature of work in society and how we conceive of shopping and entertainment. Advanced computing has also changed the character of mass marketing, warehousing, and civil logistics.
A range of applications for robotics and artificial intelligence provide a rationale for military institutions to consider the design of more integrated human-machine organizations. The list of applications, which might be titled the Seven Propositions, is far from exhaustive. But these propositions provide the purpose, or the why, for military establishments to develop their future human-machine forces.
Proposition 1. Military power can be enhanced by combining human potential and robotic and/or artificial intelligence capabilities. Population size and economic strength have traditionally been important determinants of a nation’s military potential. However, the application of large numbers of robotic systems and artificial intelligence—and possibly humans with wearable, mechanical and implantable augmentation—may change this calculus. While not discounting the impact of geography and strategic culture, the combination of humans, robots, and artificial intelligence offers countries with small, elderly, or declining populations the potential to generate military capability and mass well beyond what may have been their traditional capacity. Though such a scenario is speculative, it is possible that a technologically advanced country with a smaller population could build a significant advantage in military systems based on artificial intelligence and thereby field greater numbers of more capable robotic warfighters in highly capable human-machine teams.
Proposition 2. Lethal autonomous robots can reduce threats to humans in military forces. As automatic and autonomous systems become increasingly reliable and capable, militaries have become more willing to delegate decision making authority to them. Many military organizations will face increasing temptation to delegate greater levels of authority to a machine, or else face defeat at the hands of opponents who do. For some, this may even be an existential issue. The Russian Military Industrial Committee approved a plan that would have 30% of Russian combat power consist of remote-controlled and autonomous robotic platforms by 2030. Other countries facing demographic and security challenges are likely to set similar goals. And while the United States Department of Defense enacted restrictions on the use of autonomous and semi-autonomous systems wielding lethal force, nations and non-state actors hostile to Western nations may not exercise such self-restraint.
Proposition 3. Disruptive swarming technologies enable new operating methods. The new and interdisciplinary research areas of artificial life, artificial intelligence, complex adaptive systems, and particle swarm optimization appear to offer an opportunity for self-organized robot swarms to be used in future conflict. As conventional enemy forces move to lower-signature systems and operations, and non-state actors continue to hone non-linear and dispersed approaches, it becomes more challenging for land forces to cover ground. One potential solution for friendly forces, described by Robert Scales in the Future Warfare Anthology, is to saturate an operational area with small autonomous systems that will force an adversary—conventional or non-state—to move, be detected, and be targeted by friendly forces. As Trevor Dupuy has written, “The importance of new or imaginative ideas in military affairs, as opposed to simply new things, can best be gauged by the fact that new ideas have often permitted inferior military forces to overcome forces that were larger and better equipped.” While we would hope to apply new methods, new ideas may not always be generated by friendly forces.
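To give a concrete, purely illustrative sense of what particle swarm optimization means, the toy sketch below (hypothetical parameters, no connection to any military system) shows how simple agents, sharing only their own and the swarm's best-known positions, converge on a location with no central controller.

```python
import random

# Toy particle swarm optimization: agents search a 2-D area for a "target",
# guided only by their own best find and the swarm's best find (self-organization).
def distance_to_target(pos, target=(7.0, -3.0)):
    return (pos[0] - target[0]) ** 2 + (pos[1] - target[1]) ** 2

def pso(num_agents=30, steps=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(num_agents)]
    vel = [[0.0, 0.0] for _ in range(num_agents)]
    personal_best = [p[:] for p in pos]
    global_best = min(personal_best, key=distance_to_target)

    for _ in range(steps):
        for i in range(num_agents):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # Each agent blends inertia, its own memory, and the swarm's memory.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (personal_best[i][d] - pos[i][d])
                             + c2 * r2 * (global_best[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if distance_to_target(pos[i]) < distance_to_target(personal_best[i]):
                personal_best[i] = pos[i][:]
        global_best = min(personal_best, key=distance_to_target)
    return global_best

print(pso())  # converges near (7.0, -3.0) despite no central controller
```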
Proposition 4. Preservation of the force is a tactical and strategic necessity. As of 2017, most military organizations possess equipment worth tens, or hundreds, of billions of dollars. For example, high-end helicopters cost tens of millions of dollars each, and their annual sustainment adds significantly more. A high-quality quadcopter currently costs roughly $1,000; for the cost of a single high-end helicopter, an army or air force might acquire tens of thousands of drones. If the robotics market sustains current price-decline trends, that figure might in the future approach hundreds of thousands. This is obviously a simplistic comparison that does not consider roles and capabilities. But, in the future, drones could be cheaper than some ballistic munitions are today. How would an amphibious task group respond to an attack by swarms of cheap aerial kamikaze explosive drones? Some of the major platforms and strategies on which military forces currently rely might be rendered obsolete, or, at least, much more vulnerable.
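As a rough illustration of this cost asymmetry, the minimal sketch below works through the arithmetic; the helicopter and quadcopter prices and the future price decline are assumed figures for the sake of the calculation, not procurement data.

```python
# Illustrative cost arithmetic with assumed figures; not actual procurement data.
HELICOPTER_UNIT_COST = 30_000_000  # assumed price of one high-end military helicopter (USD)
QUADCOPTER_UNIT_COST = 1_000       # assumed price of one quality commercial quadcopter (USD)
FUTURE_PRICE_FACTOR = 0.10         # assumed future drone price as a fraction of today's

drones_today = HELICOPTER_UNIT_COST // QUADCOPTER_UNIT_COST
drones_future = HELICOPTER_UNIT_COST / (QUADCOPTER_UNIT_COST * FUTURE_PRICE_FACTOR)

print(f"Drones per helicopter today: {drones_today:,}")              # 30,000
print(f"Drones per helicopter if prices fall: {drones_future:,.0f}")  # 300,000
```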
Proposition 5. Robots might be employed in future as an ethical preference. Some experts in robotics have argued that lethal autonomous robots may be ethically preferable to human fighters. Elinor Sloan has written that while the prospect of lethal machines will be chilling for some, they are also unlikely to cause the excessive damage and suffering associated with nuclear and chemical weapons. A compelling argument is that wider use of robots holds the potential to reduce the number of humans killed in conflict. Additionally, it is possible that future autonomous robots will be able to act more humanely on the battlefield, as they do not need to be programmed with a self-preservation instinct and will not possess a shoot-first, ask-questions-later approach. The judgment of robots is unlikely to be affected by emotions such as fear or hysteria, and they may be able to process more incoming sensory information than humans without discarding or distorting it to fit preconceived notions. Despite these aspirations, incidents of algorithmic misbehaviour, such as the 2010 Wall Street Flash Crash and the 2016 Microsoft Tay chatbot rants, indicate there is still significant technological development required to minimise bias in artificial intelligence, and that human understanding and oversight of artificial intelligence will be an ongoing requirement.
Proposition 6. Human augmentation can make it safer for military personnel to do their job. Enhancements have a long history in the military, and the science and technologies underwriting human enhancements are quickly advancing. Unlike the purely mechanical approach of robotics, augmentation seeks to create a super-soldier from a biomedical direction, such as with drugs and bionics. For combat, as well as a range of non-combat functions, military organizations will require their people's soft organic bodies to perform more like machines. As Lin and Abney have noted, “In between robotics and biomedical research, we might arrive at the perfect future warfighter: one that is part machine and part human, striking a formidable balance between technology and our frailties.”
Proposition 7. It is likely future adversaries will use these technologies. As noted in the introduction, the barriers to entry for this type of technology are low and continue to drop. This means these technologies are attractive to smaller national military organizations as well as non-state actors. In recent operations in Mosul, the Islamic State deployed a range of unmanned ground and air vehicles—both armed and unarmed. No existing ethical or legal framework prevented this. In comparison to more expensive conventional capabilities, the low cost and accessibility of robotics and artificial intelligence will make them highly attractive capabilities. As Ian Morris writes in War: What Is It Good For?, “When robots with OODA loops of nanoseconds start killing humans with OODA loops of milliseconds, there will be no more debate.” Weapons and other artifacts of war will incorporate artificial intelligence because military organizations will fear that if they do not, their enemies will.
Conclusion
At some point in the future, historians may look back on the current era as the dawn of a human-machine revolution, or perhaps even the beginnings of the sixth revolution in military affairs. Williamson Murray notes in The Dynamics of Military Revolution that such things are rarely apparent in advance, and only become obvious in retrospect and in the wake of remarkable battlefield success. While the societal, technological, political, and military ingredients of such a revolution are certainly present, whether this constitutes a revolution in military affairs will be left to future historical debate.
There is little doubt, however, that adopting a closer integration of humans and machines in military organizations offers potential advantage. Such advantage can extend well beyond the battlefield if military institutions apply an enterprise approach to human-machine teaming. There is a range of ways they might do so; that will be the focus of my second article on human-machine teaming.
Major General Mick Ryan is an Australian Army officer. A graduate of Johns Hopkins University and the USMC Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. The views expressed are the author's and do not reflect the official position of the Australian Army, the Australian Department of Defence, or the Australian Government.
Header Image: Will Machines Ever Become Human? (Shutterstock)