Autonomous Systems

Autonomous Systems in the Combat Environment: The Key or the Curse to the U.S.

The U.S. military has already begun to incorporate artificial intelligence into its operations. However, its use of autonomous machines remains relatively conservative compared to that of its adversaries. Although artificial intelligence assists in providing risk predictions and improving the time available to react to events, some believe artificial intelligence and autonomous systems will drastically distance humans from a direct combat role. Observations regarding the complexity of warfare, regardless of the technology, force scientists and military leaders to question the potential consequences of implementing artificial intelligence and autonomous systems in the next military conflict.

It’s Not the Plane, it’s the Payload: A 21st-Century Solution for Armed Overwatch

The rapid changes in aerial attack caused by loitering munitions pose a challenge to U.S. Special Operations Command as it embarks upon an acquisition program to provide armed overwatch for the next twenty to thirty years. The development of a common air-launched system for loitering munitions would give U.S. Special Operations Command increased flexibility in the key areas of covert light attack capability, partner forces training, and security assistance competition.

Guiding the Unknown: Ethical Oversight of Artificial Intelligence for Autonomous Weapon Capabilities

It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until they know enough to draw realistic ethical boundaries, prudent U.S. policy makers are likely to focus on measures that balance competing obligations and pressures during this ambiguous development phase.

Economics Sure, but Don’t Forget Ethics with Artificial Intelligence

The widening rift between the Pentagon and Silicon Valley endangers national security in an era when global powers are embracing strategic military-technical competition. As countries race to harness the next potentially offsetting technology, artificial intelligence, relinquishing that competitive edge could drastically change the landscape of the next conflict. The Pentagon has struggled, and continues to struggle, to make a solid business case for technology vendors to sell their products to the Defense Department. Making the economic case to Silicon Valley requires process improvement, but building a strong relationship will necessitate embracing the ethical questions surrounding the development and employment of artificial intelligence on the battlefield.

Respect for Persons and the Ethics of Autonomous Weapons and Decision Support Systems

The concern here, however, is not that death by robot represents a more horrible outcome than when a human pulls the trigger. Rather, it has to do with the nature of morality itself and the central role that respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.

Strategy, Ethics, and Trust Issues

In the aftermath of the German U-boat campaign in the First World War, many in Europe and the United States argued that submarines were immoral and should be outlawed. The British Admiralty supported this view and, as Blair has described, even offered to abolish its submarine force if other nations followed suit. While British proposals to ban submarines in 1922 and 1930 were defeated, restrictions on their use were imposed, mandating that a submarine could not attack a ship until that ship's crew and passengers had been placed in safety. This reaction to the development of a new means of war is illustrative of the type of ethical and legal challenges that must be addressed as military organizations adopt greater human-machine integration.