The Strategy Bridge


Strategy, Ethics, and Trust Issues

In the aftermath of the German U-boat campaign in the First World War, many in Europe and the United States argued that submarines were immoral and should be outlawed. The British Admiralty supported this view and, as Blair has described, even offered to abolish its submarine force if other nations followed suit. While British proposals to ban submarines in 1922 and 1930 were defeated, restrictions were imposed mandating that a submarine could not attack a ship until that ship's crew and passengers had been placed in safety. This reaction to the development of a new means of war is illustrative of the type of ethical and legal challenges that must be addressed as military organizations adopt greater human-machine integration.

This article is the last of three examining the key aspects of human-machine teaming. In the first, I examined the rationale for human-machine teaming through seven ‘propositions’. The second article examined three forms of human-machine teaming that military organizations might adopt in a closer integration of humans and machines. This, the third and final article, examines the major challenges in human-machine teaming. These strategic, institutional, and tactical challenges must be addressed by military organizations in developing their institutional strategies to generate advantage through human-machine teaming.


These concerns, expressed by robotics and ethics experts such as Robert Sparrow, need to be considered as a complementary element of the technical aspects of human-machine integration. As a 2013 report on augmenting humans for war noted, “given a significant lag time between ethics and technology, it is imperative to start considering the issues before novel technologies fully arrive on the scene and in the theatre of war.” A review of the literature on this topic reveals that there are multiple challenges. They can broadly be classified as strategic, institutional, and tactical issues; each also contains ethical dilemmas that are considered throughout.

Strategic Issues. The use of robots and advanced artificial intelligence may lower the ‘barrier to entry’ for war. Advanced weaponry such as autonomous robotics could make it easier for one nation to engage in war or to adopt aggressive foreign policies that provoke other nations. If so, it could constitute a violation of jus ad bellum, or at least force its significant adaptation. This debate extends well beyond military organizations. It is of concern to wide sections of society: government, academia, and the broader community. In 2014, Steven Metz summed up the challenge for policy makers, writing:

It seems likely that a future president would find it easier to deploy a heavily or completely robotic unit and to keep it in the field for an extended time. This could help with deterrence and crisis containment. But by making it easier to use force, a robot centric military could also tempt a future president into conflicts and crises that the United States might otherwise avoid. This could have a number of adverse effects, including provoking terrorism attacks on Americans and embroiling the United States in quagmires. The Founding Fathers intentionally made it difficult for the United States to use force. Robots, like airpower, will erode this firebreak.

It is almost certain that any military organization that enhances its effectiveness with human-machine teaming will place stress on civil-military relations. In a world where robots, human-artificial intelligence teams, and augmented personnel serve in the military, government trust is essential. The military must assure the government, and broader society, that a highly lethal force capable of exceptionally rapid response times due to autonomous systems remains subject to national policy at all times. Eliot Cohen notes in Supreme Command that the military remains “the exceptional profession and a way apart.” The creation of more integrated human-machine teams in the military, within a less integrated polity, may serve to reinforce this difference. It is a challenge in civil-military relations that must be addressed.

Deeper human-machine teaming is likely to have an impact on strategy. Hew Strachan (2013) has written that:

...it is patently absurd to deny that the impact of new technology can be strategically significant…the steamship, the manned aircraft or the rocket, have triumphed over geography, changing the relationship between space and time, and thus have a geopolitical effect as well as a directly operational one.

The new technology of long-range aerial bombers had a strategic impact in the Second World War by ensuring the Western Allies made a contribution to a coalition war in which the Soviet Union otherwise saw itself as fighting a lone battle. Similarly, robots, artificial intelligence, human augmentation, and a deeper integration of humans and machines may have an impact on military strategy as well as on the development and implementation of foreign policy.

The application of artificial intelligence as a strategic decision-support tool may, as Ayoub and Payne have written, address some of the human fragility and bias that affect strategy development and implementation. Artificial intelligence is not subject to physical issues such as fatigue, and it can be built to account for other psychological dimensions of strategy such as cognitive load, risk taking and aversion, and bias. In mastering large amounts of data, challenging long-held human assumptions, and recognizing patterns readily missed by humans, artificial intelligence could improve human strategic decision-making. However, artificial intelligence is currently limited by its programming and the data sets available to it. Further, artificial intelligence may be ‘comfortable’ with strategies that breach the values, ethics, or strategic objectives of its human users. Therefore, while artificial intelligence will affect strategy and strategic decision-making, the degree to which it does must remain the decision of policy makers and senior military leaders.

Rules of engagement are a challenge for autonomous robots. This may appear to be a tactical issue, but in practice these rules originate in strategy and national policy. And while Isaac Asimov’s Laws appear to offer simple, programmable rules for autonomous robots, real rules of engagement are far more complex. This leaves room for contradictory imperatives that may result in undesirable or unexpected behavior in robots. A related challenge is unclear responsibility. To whom would blame be assigned for improper conduct or unauthorised use of force by an autonomous robot (whether by error or by intent): the designers, the robot manufacturer, the procurement staff, the robot controller or supervisor, the commander in the field, or the robot itself?


Some contend that human beings may not always be responsible for the behaviour of machines, because artificial agents have the capacity to learn as they operate. Matthias (2004) describes this as a ‘responsibility gap’, noting that “presently there are machines in development or already in use which are able to decide on a course of action and to act without human intervention. The rules by which they act are not set during the production process, but can be changed during the operation of the machine, by the machine itself.” However, as Johnson (2014) writes, “a responsibility gap will not arise merely from the technological complexity of artificial agents…whether or not there will ever be a responsibility gap depends on human choices not technological complexity.” Horowitz and Scharre (2015) have also written, “weapons themselves do not comply with the laws of war. Weapons are used by people in ways that comply with, or violate, the laws of war.” Humans, at least in the foreseeable future, must remain responsible for the actions of robots and autonomous systems.

Kelsey Drake | NY Times

Central to the issue of responsibility is deciding who, or what, makes the decision for a robot to kill. Some situations may develop so quickly, and require such rapid information processing, that we would want to entrust critical decisions to our robots and systems. And as Lin, Bekey, and Abney have noted, if human soldiers must monitor the actions of each robot as they occur, this may negate the very effectiveness for which the robot was designed in the first place.

Institutional Challenges. The integration of humans and machines into more tightly coupled warfighting systems will demand examination of how military organizations fight. Currently, as Ilachinski notes, unmanned systems of all kinds are typically integrated into operations from a manned-centric concept of operations (CONOPS) point of view. This is unnecessarily self-limiting, because it implicitly designs operations around the limits of human performance. Future warfighting concepts for a human-machine land combat force must move beyond this construct and be developed well in advance of major investment decisions.

The design and implementation of a secure network to connect humans and machines in this new integrated force will be an important foundation. While a tactical issue at first glance, this secure network must be designed at the institutional level to connect tactical organizations (land, joint, and coalition forces) and to link those deployed in the battlespace with strategic reach-back capabilities. It is a challenge of network architecture, and solutions are needed to ensure that the information flowing across a human-machine network is secure and assured.

Institutional culture will also be a challenge. For most of history, military organizations have seen themselves as human institutions. This has reinforced people-centric team cultures, from section (squad) to joint force level. Additionally, different occupational specialties often develop unique subcultures within a larger military force. These cultures and subcultures are powerful elements in developing cohesion and esprit de corps. But as Williamson Murray has written, military culture can also be a barrier to change and to the adoption of new ideas, techniques, and technologies. Changing organizational culture will need to be a focus for leaders at all levels in the institutional development of a more integrated human-machine team.


A new approach to education and training will be required in this more integrated force. Throughout history, and still today, training has focused on teaching humans to achieve military outcomes as individuals and as teams. In a more integrated human-machine joint force, this foundational approach to training is challenged. Similarly, education for military leaders currently seeks to develop their intellect in the art and science of war. If learning machines are added to this environment, both institutional and individual professional military education must adapt.

Part of any new training and education approach will be developing an understanding of common goals in human-machine teams. Humans and autonomous machines will need common goals and shared awareness if they are to work together effectively. Many commercial aircraft accidents associated with automation have occurred when the flight crew had one goal (for example, staying on the glide slope during an approach) and the flight management computer had another (such as executing a go-around). Deploying future autonomous systems will demand good human training, as well as increasing each machine’s awareness of what the human or humans are trying to achieve.

Finally, career development and management of military personnel will also need to be adapted to the new human-machine force. As robots potentially replace humans in many ‘dirty, dull, and dangerous’ functions, it is possible that many lower-ranking soldiers will be displaced. This will necessitate a change to the traditional career ‘triangles’ in which the mass of an army is found in the lowest ranks.

Tactical Challenges. A key tactical challenge in the future teaming of humans and robots is that personnel may face an increased cognitive load. For example, an infantry soldier might be responsible for several air and ground unmanned vehicles while also having to operate within a human team. Under normal circumstances this will be demanding; in combat, it will place a severe cognitive load on the soldier. Recruiting, training, education, assessment, and the development of intellectual capacity and resilience will need to be adapted to address this challenge. A key objective in building a more integrated human-machine force must be to reduce this cognitive load.

Lance Cpl. Frank Cordoba | U.S. Marine Corps

Consent by soldiers to the risks of working with robots and artificial intelligence is another issue that must be addressed. In October 2007, a semi-autonomous cannon deployed by the South African Army malfunctioned, killing nine soldiers and wounding fourteen others. This is both a legal and an ethical quandary; as one 2008 study asks, should soldiers be informed that an unusual or new risk exists, such as working with potentially defective robots?

Perhaps the most important challenge is establishing trust. How far can robots be trusted to team with humans in dangerous and austere environments? The degree to which military leaders should trust advanced analytics and artificial intelligence to make decisions about their people, and potentially about saving and taking lives, must be established. So too must the amount of trust people will place in objects implanted in their bodies to deliver improvements in performance. Finally, it is yet to be proven whether augmented humans will be trusted in teams composed of both augmented and non-augmented personnel.

Some academics have argued that delegating the decision to target and open fire to a machine violates human dignity, and that people have the “right not to be killed by a machine.” Prominent figures such as Stephen Hawking and Elon Musk have also warned publicly about the risks of artificial intelligence. Others, accepting that autonomous systems and artificial intelligence are likely to play a large role in war and society in the future, have proposed concurrent efforts to develop the technology and examine the ethics of these systems. As Amir Husain (2016) notes:

...if autonomous systems are to be a pillar of future supremacy, then now is the right time to present a framework within which autonomy can be enabled in an effective and technically viable, yet legal and moral, manner.

The theory and practice of mission command offers some insight into how these questions might be at least partially answered. Mission command, the practice of assigning a subordinate commander a mission without specifying how it is to be achieved, has established the foundational relationships between commanders and subordinates that might be adapted for trusted relationships with robots and artificial intelligence. But it does not fully address the issues that must be resolved in building a hybrid human-machine force.

So, considerable challenges remain in developing a more integrated human-machine force. The strategic challenges will not just affect the military; they will also have an impact on policy makers and political leaders. At the institutional level, resistance to change will tax military leaders. At the tactical level, ensuring that personnel are not cognitively overloaded and that they trust their robot and artificial intelligence partners will be important obstacles to overcome. Finally, considerable challenges in the legal and ethical fields must be examined in any move towards a more integrated human-machine force. While potential adversaries may not demonstrate similar concerns, the nature of Western democratic society demands that these are addressed in parallel with the technological aspects of human-machine integration.

Conclusion

Over the next decade, military organizations are unlikely to achieve Ray Kurzweil’s singularity, in which artificial intelligence and machines outperform the human mind. But technology has already reached a point where artificial intelligence, machines, and human-machine teaming open up a range of new and exciting possibilities. These will enrich society, but they will also profoundly challenge how military organizations think about the profession of arms and prepare their people for the intellectual, ethical, and physical rigors of 21st century warfare.

Mankind has agonised over these types of challenges before. For centuries, the application of new technologies in war has generated debate before, during, and after conflict. In 1139, Pope Innocent II led the Second Lateran Council in banning the use of the crossbow in war. For its time, the crossbow represented very advanced technology: it required minimal training and little strength, yet possessed unparalleled lethality. A lowly and hastily trained peasant could use the weapon, challenging the existing power structure in war. The Roman Catholic Church viewed the new weapon as a gross transformation of the character of war.

This reaction to the development of a new means of war is illustrative of the type of concerns that must be addressed if military organizations are to adopt far-reaching plans for human-machine integration. Each military organization, regardless of nation or service, will approach the challenges of human-machine teaming differently. Variations in military culture, national strategy, and societal expectations will ensure a multitude of solutions are derived by different military institutions. This variety of institutional strategies for human-machine teaming is a good thing. It will allow the sharing of lessons and cross-pollination of best practices, at least among Western military organizations. The central issue, however, is that military organizations must each possess realistic strategies that address the challenges described in this article if they are to successfully exploit the future of human-machine teaming.


Major General Mick Ryan is an Australian Army officer. A graduate of Johns Hopkins University and the USMC Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. The views expressed are the author's and do not reflect the official position of the Australian Army, the Australian Department of Defence, or the Australian Government.





Header image: Photograph by Philip Toledano | Rolling Stone