The Strategy Bridge


Guiding the Unknown: Ethical Oversight of Artificial Intelligence for Autonomous Weapon Capabilities

It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until U.S. policy makers know enough to draw realistic ethical boundaries, the prudent course is to focus on measures that balance competing obligations and pressures during this ambiguous development phase.

On the one hand, leaders have an ethical responsibility to prepare for potential threats from near-peer competitors such as China. But leaders also face a competing obligation to ensure increasingly autonomous systems do not spark or escalate an unnecessary conflict that would violate Americans’ understanding of the appropriate use of force. The following hypothetical scenarios illustrate some of the competing obligations and pressures U.S. technology experts suggest national security leaders must balance and address now.[1] 

Scenario 1: China Attacks With Weaponized Robots

The Pentagon recently released an artificial intelligence strategy warning that China and Russia are accelerating investments in this technology, investments that could “erode our technological and operational advantages” if U.S. innovation cannot also expedite efforts to refine and deploy artificial intelligence.[2] For example, Elsa Kania, a scholar with the Center for a New American Security who studies the Chinese use of artificial intelligence, warns that China’s military may someday deploy its artificial intelligence technologies in swarms of drones that could “target and saturate the defenses of U.S. aircraft carriers.”[3] Kania notes that improvements in this technology will likely allow drones to coordinate with one another autonomously and adapt to changes in their environment, enabling them to outmaneuver or negate countermeasures. U.S. policy makers therefore face pressure to accelerate the adoption of artificial intelligence technologies so that defensive measures can overcome the advantages of an adversary’s more autonomous systems in a future wartime scenario.

Brian Michelson, a U.S. Army colonel who has studied and written on the military applications of artificial intelligence, argues the debate over autonomy in weapons should include the question, “Would it be moral to cause more casualties for our forces by overly limiting our weapons capabilities?”[4] In other words, extreme caution or risk aversion is not necessarily the safe ethical or moral choice.

Scenario 2: Algorithms Trigger a Flash War

In 2010, the stock market suffered a trillion-dollar “flash crash” that illustrates the difficulties and risks that arise when algorithms interpret chaotic, real-world data. Regulators later assessed that algorithms in automated trading software began rapidly selling stock futures and then responded to the uptick in activity they themselves had generated by selling even more, on a day when external factors such as a European debt crisis had already made the market particularly sensitive.[5] Fortunately, the market recovered most of the roughly one-trillion-dollar loss, which CNN Money called the “largest one-day drop on record.”[6] A weapons system that similarly misreads data could trigger escalatory violence, tragically violating Just War and other international norms governing the reasons and methods for using military force. Think, for example, of the 1988 incident in which a U.S. guided-missile cruiser shot down an Iranian passenger jet that the ship’s Aegis targeting system had misidentified as a military aircraft.[7] Such an error is far harder to reverse than a stock market loss.
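
For readers who want a concrete feel for the mechanism, the toy Python sketch below models the kind of self-reinforcing loop regulators described, in which automated sellers react to trading volume they partly generated themselves. Every figure in it is illustrative; nothing here reflects the actual 2010 trading data.

```python
# Toy model of a self-reinforcing selling loop, loosely inspired by the
# 2010 flash crash. Every parameter here is illustrative, not historical.

def simulate_feedback_sell_off(steps=20, price=100.0, base_volume=1_000):
    """Each round, automated sellers react to the total volume they observe,
    including the volume their own orders created the round before."""
    algo_volume = 0
    history = []
    for step in range(steps):
        observed_volume = base_volume + algo_volume
        # A reaction coefficient above 1.0 makes the loop self-amplifying:
        # each unit of observed selling triggers slightly more new selling.
        algo_volume = int(1.1 * observed_volume)
        # Heavier selling pushes the price down in proportion to volume.
        price *= 1 - 0.001 * (algo_volume / base_volume)
        history.append((step, algo_volume, round(price, 2)))
    return history

if __name__ == "__main__":
    for step, volume, price in simulate_feedback_sell_off():
        print(f"t={step:2d}  algo_sells={volume:7d}  price={price:7.2f}")
```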

These hypotheticals underscore competing obligations: to develop autonomous capabilities to defend against threats, yet to ensure those capabilities do not themselves become Just War-violating threats. Such violations might include disproportionate violence or disregard for non-combatant safety stemming from algorithms that misinterpret data. According to Aristotle and other philosophers discussed below, attempts to avoid violations of Just War principles in isolation from the real-world drivers of greater autonomy would themselves be unethical.[8] At the same time, assuming that ethical considerations are too abstract or rigid to inform artificial intelligence development would, by defaulting to machine-driven decision making, endanger the intrinsic humanity of the very society policy makers aim to protect. But are there ethical principles to assist concerned policy makers who still have limited information about the true capabilities of artificial intelligence?

In ancient Greece, Aristotle described prudence as a virtue that takes competing obligations head-on, which is an approach to ethical decision-making that considers what is possible given real-world pressures rather than simply what avoids moral harm.[9] A present-day philosopher, J. Patrick Dobel, says prudence involves “finding a concrete ‘shape’ to moral aspirations, responsibilities, and obligations.”[10] Retired U.S. Army Lieutenant General James Dubik has similarly argued that strategic dimensions of Just War theory call for practical prudence—using sound judgment, assessing the specific facts of situations requiring decisions, and picking the best action given real world constraints.[11]

Three Ways to Apply Aristotelian Thinking to the Development of Autonomous Capabilities


Dobel’s book Public Integrity suggests three practical considerations that offer guideposts to policy makers applying prudence amid competing obligations: prudent leaders should prioritize ethical leadership capacity, research and development modalities, and engagement with society’s moral sensibilities as they oversee emerging autonomous capabilities.[12]

First, ethically-conscious policy makers will pursue technical training and input from diverse perspectives, thereby expanding their own capacity for ethical leadership. The reality is that revolutions in military technology often ride on the shoulders of risk-tolerant mavericks, and leaders will need at least a modicum of technical savvy to oversee them. For example, when Mason Patrick took command of the Army Air Service in the 1920s, he invested time to become a pilot himself and was then able to temper the influence of the brilliant but reckless Billy Mitchell, widely considered the “father of the U.S. Air Force.”[13] For national security leaders today, this aspect of prudence might look like investing time in accessible tech courses for non-technical personnel.

Similarly, prudent leaders will aggressively gather input from diverse operational, technical, and non-government perspectives, even when it is inconvenient. Dubik argues that war is too complex for a single mind to determine moral implications alone, and the same holds when attempting to overcome the unknowns and unverified assumptions of new, paradigm-bending technologies. Assuming computational tools will operate in noisy, real-world environments the same way they did with clean data sets in controlled development environments is problematic but perhaps all too common.[14] Particularly given the potential for flash crash scenarios when algorithms process and act on confusing data streams in real time, prudence requires pushing past convenient assumptions and soliciting dissenting opinions.
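
The sketch below illustrates, with deliberately simplified assumptions, the gap between those two environments: a model that performs well on the clean data it was trained on can degrade sharply once noise enters the inputs it must act on. The synthetic data, model choice, and noise level are all illustrative, not a depiction of any fielded system.

```python
# Illustrative only: a model validated on clean data can degrade badly
# once real-world noise enters the inputs it must act on.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# "Development environment": clean, well-structured synthetic data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clean_acc = accuracy_score(y_test, model.predict(X_test))

# "Operational environment": the same inputs corrupted by heavy sensor noise.
X_noisy = X_test + rng.normal(scale=3.0, size=X_test.shape)
noisy_acc = accuracy_score(y_test, model.predict(X_noisy))

print(f"Accuracy on clean test data: {clean_acc:.2f}")
print(f"Accuracy on noisy test data: {noisy_acc:.2f}")
```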

Second, ethical policy makers will employ what Dobel would call “modalities”—a combination of mindsets and methods—for prudent oversight. For oversight of artificial intelligence, two of his modalities are especially critical: aligning means with ends and following an iterative development process with application-specific ethical evaluations.[15] Aligning means to ends in the context of autonomous weapon capabilities involves weighing risks and rewards, philosophically extending the traditional Just War concepts of military necessity and proportionality to the interim development phase of new capabilities. 

Leaders can apply these modalities by evaluating specific artificial intelligence weapons applications rather than abstract categories of autonomous capability. One engineer advocates clarifying at the outset of any discussion the specific cognitive process a potential artificial intelligence application will improve or automate—in other words, what intelligent function that humans normally perform does the technology aim to replicate? Does it replace the humans who would otherwise observe and analyze data, or those who actually choose the response, potentially a return-fire response? He offers some incisive follow-on questions:

  1. What data is required and is clean data available to train the artificial intelligence application initially?

  2. What sort of computational model will the artificial intelligence application use to make sense of data?

  3. What will be missed by reducing the traditional role of humans who understand nuances and complexities in the process?[16]

Brian Michelson adds that attempting to achieve a perfect product before fielding may seem safe in the near term, but the resulting delays would likely heighten long-term risks. He recommends testing and learning from each new addition of autonomous capability to ensure means align with ends.[17] For example, an aerial drone program might iteratively test sensing and adapting flight patterns to avoid bad weather and expected air traffic before moving to the trickier responses to unexpected and potentially hostile aircraft.
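
One hypothetical way to encode that kind of staged discipline is a simple test-gating harness in which each increment of autonomy must pass its evaluation before the next, riskier increment is attempted. The stages, scores, and thresholds below are invented for illustration and stand in for real flight-test results.

```python
# Hypothetical staging harness: each increment of autonomy must pass its
# evaluation before the program moves on to the next, riskier increment.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CapabilityStage:
    name: str
    evaluate: Callable[[], float]   # returns a score between 0 and 1
    threshold: float                # minimum score required to proceed

def run_staged_tests(stages):
    for stage in stages:
        score = stage.evaluate()
        print(f"{stage.name}: score={score:.2f} (required {stage.threshold:.2f})")
        if score < stage.threshold:
            print(f"Halting: '{stage.name}' is not yet reliable enough to proceed.")
            return False
    print("All stages passed; capability cleared for the next review gate.")
    return True

if __name__ == "__main__":
    # Placeholder evaluation functions standing in for real test results.
    stages = [
        CapabilityStage("Avoid bad weather", lambda: 0.97, 0.95),
        CapabilityStage("Avoid expected air traffic", lambda: 0.96, 0.95),
        CapabilityStage("Respond to unexpected, possibly hostile aircraft",
                        lambda: 0.80, 0.99),
    ]
    run_staged_tests(stages)
```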

An artist’s concept for autonomous air weapons (Air Force Times)

The third and final consideration, engagement with society’s moral sensibilities, rests on the notion that policy makers bear responsibility for the moral health of their society. Prudent national security leaders will strive for transparent, public dialogue so that citizens can share responsibility for moral choices and cogently evaluate the military’s application and ultimate use of greater autonomy in weapon systems. As Dobel argues, “[t]rust of other citizens and trust in institutions are social resources and social capital that leaders and major institutions should work to create and sustain.”[18] Without these, “society’s capacity to act for common purpose declines.”[19]

Shouldering responsibility for social and institutional legitimacy, leaders can foster collaboration on critical issues such as methods to code safeguards, checks, and post-action reviews into artificial intelligence-enabled autonomy in weapons. A recent Executive Order articulates guiding principles for artificial intelligence development that include: “foster public trust and confidence in [artificial intelligence] technologies,” “drive technological breakthroughs in [artificial intelligence],” and “drive development of appropriate technical standards.”[20] The policy notes that smart minds in industry can help leaders solve critical technical challenges, including the capability to program comprehensive audits of code modifications and to secure weapons against both hacking and unauthorized insider changes. As one defense industry analyst suggests, “if the U.S. and its democratic allies win the [artificial intelligence] race, the Defense Department will deserve credit because of the unique way it collaborates with the private sector” to resolve these and other thorny issues.[21]
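
The Executive Order does not prescribe techniques, but one familiar building block for the kind of comprehensive, tamper-evident audit of code modifications described above is a hash-chained log, sketched below in simplified form. The record fields and verification rule are illustrative assumptions, not any Department of Defense standard.

```python
# Simplified sketch of a tamper-evident audit trail for code modifications:
# each record's hash incorporates the previous record, so any alteration
# (by an outside attacker or an insider) breaks the chain on verification.
import hashlib
import json
import time

def _hash_record(record, prev_hash):
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_change(log, author, file_changed, summary):
    prev_hash = log[-1]["hash"] if log else "GENESIS"
    record = {"time": time.time(), "author": author,
              "file": file_changed, "summary": summary}
    log.append({"record": record, "prev": prev_hash,
                "hash": _hash_record(record, prev_hash)})

def verify_chain(log):
    prev_hash = "GENESIS"
    for entry in log:
        if entry["prev"] != prev_hash or \
           entry["hash"] != _hash_record(entry["record"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    audit_log = []
    append_change(audit_log, "dev_a", "targeting/filters.py",
                  "Tighten altitude threshold for civilian aircraft")
    append_change(audit_log, "dev_b", "engagement/rules.py",
                  "Add post-action review hook")
    print("Chain intact:", verify_chain(audit_log))
    # Simulate an unauthorized after-the-fact edit to the first record.
    audit_log[0]["record"]["summary"] = "No change made"
    print("Chain intact after tampering:", verify_chain(audit_log))
```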

Conclusion

History warns against deferring attention to ethical issues until a national emergency pressures deployment of a new, little understood capability. As some ethicists have warned, “the moral implications of nuclear weapons were not publicly debated until after their first use, and many of the scientists who worked on the Manhattan Project later regretted ignoring those moral issues.”[22] Dobel’s ideas on prudence can help national security leaders avoid a similar mistake while balancing the obligation to maintain our competitive edge during the ambiguous phase of artificial intelligence development.


Gretchen Nutz is a security policy analyst for the Department of Defense. The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the Department of Defense or the United States Government.





Header Image: Autonomous Weapons (Shutterstock)


Notes:

[1] See, for example, Paul Scharre, “A Million Mistakes a Second,” Foreign Policy, September 12, 2018, accessed September 12, 2018, https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/.

[2] Department of Defense, Harnessing AI to Advance our Security and Prosperity: Summary of the 2018 Department of Defense Artificial Intelligence Strategy (Washington, DC: February 2019), 5.

[3] Ian Burrows, “Made in China 2025: Forget Cheap Goods, Think World’s Best Artificial Intelligence,” ABC News, October 5, 2018, https://www.abc.net.au/news/2018-10-06/china-plans-to-become-ai-world-leader/10332614.

[4] Brian Michelson, conversation with author, October 11, 2018.

[5] Ben Rooney, “Trading program sparked May 'flash crash',” CNN Money, October 1, 2010, http://money.cnn.com/2010/10/01/markets/SEC_CFTC_flash_crash/.

[6] Ibid.

[7] Andrew Ilachinski, “Artificial Intelligence & Autonomy: Opportunities and Challenges,” CNA (Arlington, VA: CNA, October 2017), 69-70, https://www.cna.org/cna_files/pdf/DIS-2017-U-016388-Final.pdf; citing Lt. Col. (U.S. Marine Corps, Retired) D. Evans, “Vincennes: A Case Study,” Proceedings Magazine, U.S. Naval Institute, Aug 1993: http://www.usni.org/magazines/proceedings/1993-08/vincennes-case-study.

[8] See for example Aristotle, Nicomachean Ethics, trans. by W.D. Ross, Blacksburg: Virginia Tech, 2001, Book VI, Section 12, 76.

[9] Aristotle, Nicomachean Ethics, trans. by W.D. Ross, Blacksburg: Virginia Tech, 2001, Book VI, Section 12, 76.

[10] J. P. Dobel, Public Integrity (Baltimore, Md.: Johns Hopkins University Press, 1999), 197, citing Thomas Aquinas, Summa Theologiae, Vol. 36: Prudence, part 2 of the 2nd part (2a2ae), ed. and trans. Thomas Gilby, O.P. (London: Blackfriars, 1967), Qu. 47, Art. 2, 5, 7; Josef Pieper, The Four Cardinal Virtues (Notre Dame, Ind.: Notre Dame University Press, 1966).

[11] James Dubik, Just War Reconsidered: Strategy, Ethics, and Theory (Lexington, Kentucky: University Press of Kentucky, 2016), 99.

[12]  J. P. Dobel, “Political Prudence” in Public Integrity (Baltimore, Md.: Johns Hopkins University Press, 1999), 193-211.

[13] Minnie Jones, “William 'Billy' Mitchell -- 'The father of the United States Air Force,’” Army.mil, January 28, 2010, https://www.army.mil/article/33680/william_billy_mitchell_the_father_of_the_united_states_air_force.

[14] “Clean” data is accurate, consistent, and structured such that a machine can make sense of it. See, for example, Willem Sundblad, “Data Is The Foundation For Artificial Intelligence And Machine Learning,” Forbes, October 18, 2018, https://www.forbes.com/sites/willemsundbladeurope/2018/10/18/data-is-the-foundation-for-artificial-intelligence-and-machine-learning/; Carten Cordell, “Talent and data top DOD’s challenges for AI, chief data officer says,” FedScoop, January 17, 2019, https://www.fedscoop.com/talent-data-top-challenges-ai-adoption-dod-official-says/; and DoD researcher working on command and control computational technology (name withheld), conversation with author, October 31, 2018.

[15] J. P. Dobel, Public Integrity, 205.

[16] DoD researcher working on command and control computational technology (name withheld), conversation with author, October 31, 2018.

[17] Brian Michelson, conversation with author, October 11, 2018.

[18] J. P. Dobel, Public Integrity, 208.

[19] Ibid.

[20] U.S. President, Executive Order 13859 of February 11, 2019, “Maintaining American Leadership in Artificial Intelligence,” Federal Register 84, no. 31 (February 14, 2019): 02544, https://www.govinfo.gov/content/pkg/FR-2019-02-14/pdf/2019-02544.pdf.

[21] Scott Bolick, “We Need More Pentagon and Tech Industry Collaboration, Not Less,” Nextgov.Com, October 29, 2018, https://www.nextgov.com/ideas/2018/10/we-need-more-pentagon-and-tech-industry-collaboration-not-less/152377/.

[22] Aaron M. Johnson and Sidney Axinn, “The Morality of Autonomous Robots,” Journal of Military Ethics 12, no. 2 (2013): 129; citing Leo Szilard, et al., “A Petition to the President of the United States,” July 17, 1945, accessed September 15, 2018, http://www.dannen.com/decision/45-07-17.html.