AI

The Psychology of Killing with Drones: #Reviewing On Killing Remotely

To date, moral injury remains a syndrome, that is, a group of symptoms lacking clear definition or cause. Phelps exemplifies a possible way ahead in On Killing Remotely. In terms of quantifiability, Phelps makes room for analyzing a new arena for moral injury without stretching the term past its breaking point. In terms of severity, Phelps clarifies that stakes can be high without involving immediate personal danger, thus opening up discussions of comparable scenarios with the potential to morally injure. In terms of technology, Phelps distinguishes between kinds of unmanned or remote aerial technology, sketching a taxonomy and noting the unique stressors of each tool or mission.

How to Describe the Future? Large-Language Models and the Future of Military Decision Making

Today, leaders across the world are seeing the early effects of another transformational technology: widely available large-language models. Viewed by some as a first step toward true artificial general intelligence, large-language models incorporate massive amounts of data from books and articles into training sets that allow them to recognize patterns between words and images. Large-language models will likely have a larger impact on the battlefield than autonomous drones because of their ability to automate the many aspects of staff work that prevent military leaders from focusing on tactics and strategy.

AI in Fiction and the Future of War

Fiction is a great way to explore the possibilities and risks of AI. Done right, fiction serves as a guide for the decisions we make. Unfortunately, many portrayals of AI in fiction look too far into the future, sometimes imputing capabilities that are unlikely to ever exist, and consequently fail to engage with the challenges we face in the near term. Better examples address issues we are going to face soon. Knowing which fiction fits which description helps us adjust our expectations accordingly.

Data Analytics in the Combatant Command: Improving the Approach to Decision-Making

To complete their missions, combatant commanders will, out of necessity, leverage data as a weapon system, as it constitutes the basis of information development within the commander's decision space. With new sensors, the amount of collected data continues to climb, making more data available for transformation into actionable information that supports decision-making. Given the enormous volume of data that presently exists and the supply of trained analysts within a command, the commander and staff are assumed to be capable of employing data analytics effectively to decisively support the planning and execution of operations within the area of responsibility. That perception is only partially true.

#Reviewing The Kill Chain

Christian Brose’s The Kill Chain: Defending America in the Future of High-Tech Warfare is a book about death. It is a book about Senator John McCain’s legacy after pursuing defense reform as Chairman of the Senate Armed Services Committee. It is a book that makes a case for the death of the current tradition of American power projection. Correspondingly, it is a book about the desired death of a defense acquisitions ecosystem that has, according to Brose, contributed to building a military ill-equipped for the 21st century.

Autonomous Systems in the Combat Environment: The Key or the Curse to the U.S.

The U.S. military has already begun to incorporate artificial intelligence into its operations. However, its use of autonomous machines could be described as quite conservative in comparison to that of its adversaries. Although artificial intelligence assists in providing risk predictions and improving the time available to react to events, some believe artificial intelligence and autonomous systems will drastically distance humans from a direct combat role. Observations regarding the complexity of warfare, regardless of the technology, force scientists and military leaders to question the potential consequences of implementing artificial intelligence and autonomous systems in the next military conflict.

Artificial Intelligence and the Manufacturing of Reality

Humans are and have always been vulnerable to being tricked, provoked, conditioned, deceived, or otherwise manipulated. Since at least the 1960s, the Soviet military and subsequent Russian organizations have recognized opportunities for exploiting this vulnerability. That is why the Soviets developed a formal research program—called reflexive control theory—to model how one could manipulate targets’ perceptions of reality…While the Russians weaponized reflexive control theory, Madison Avenue used similar logic to evoke emotion—and sell products to American consumers…The contemporary information environment and modern tools, including artificial intelligence, could slash the transaction costs of such manipulation.

Guiding the Unknown: Ethical Oversight of Artificial Intelligence for Autonomous Weapon Capabilities

It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until U.S. policy makers know enough to draw realistic ethical boundaries, the prudent course is to focus on measures that balance competing obligations and pressures during this ambiguous development phase.

From Platforms to Control: #Reviewing Thomas Rid’s Rise of the Machines for Its Macro-History of the U.S. Air Force

For those familiar with the traditional narrative of U.S. airpower history centered on the Air Corps Tactical School’s development of bomber doctrine followed by its application against Germany during World War II, Rid provides a jarring but useful counter-narrative focused on human-machine interactions.

Economics Sure, but Don’t Forget Ethics with Artificial Intelligence

The widening rift between the Pentagon and Silicon Valley endangers national security in an era when global powers are embracing strategic military-technical competition. As countries race to harness the next potentially offsetting technology, artificial intelligence, relinquishing their competitive edge could drastically change the landscape of the next conflict. The Pentagon has struggled—and continues to struggle—to make a solid business case for technology vendors to sell their products to the Defense Department. Making the economic case to Silicon Valley requires process improvement, but building a strong relationship will necessitate embracing the ethical questions surrounding the development and employment of artificial intelligence on the battlefield.

Respect for Persons and the Ethics of Autonomous Weapons and Decision Support Systems

The concern here, however, is not that death by robot represents a more horrible outcome than when a human pulls the trigger. Rather, it has to do with the nature of morality itself and the central role that respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.

Chinese and Russian Defense Innovation, with American Characteristics? Military Innovation, Commercial Technologies, and Great Power Competition

While Russia and China are known for their lumbering civilian and military bureaucracies, both nations are nonetheless demonstrating that they can be nimble enough to accelerate certain technological developments, along with testing and evaluation. So far, both competitors have proven that they can take specific American elements and apply them to their own unique ecosystems. Nonetheless, using American-style institutional and procedural concepts is still a novel idea for the top-heavy ministries tasked with such breakthrough technological developments in both countries.

Integrating Humans and Machines

The military holds an enduring interest in robotic capability and in teaming robots with humans. From the remote-controlled boats used by the Germans in the First World War and the unmanned, tracked Goliath robots filled with explosives used in the Second World War, through to contemporary EOD robots and unmanned aerial and ground vehicles, military organizations have long sought to leverage robotic capability. At the high point of the Iraq War in 2006, the U.S. military fielded over 8,000 robots in theater.

This article is the second of three that examine three aspects of human-machine teaming. In the first, I examined the rationale for human-machine teaming through ‘seven propositions’. This article examines key elements military organizations might adopt in a closer integration of humans and machines. It proposes three areas upon which a competitive strategy for future operations might be constructed, providing background information, analysis, and possible applications of human-machine teams for each.

Lombardi’s War: Formation Play-Calling and the Intellectual Property Ecology

The information age, a phrase famously popularized by Berkeley Professor Manuel Castells in the 1990s, described a tectonic shift in our culture and economy which we generally take for granted at present. From our current vantage point, replete with ubiquitous pocket-sized personal computing and communications devices, it is hard to imagine a world where we cannot convert our data or social networks into physical resources and access. We keep our data in the cloud and call upon it when we need it, regardless of where we are. We log into Airbnb, and somehow money we have never seen transfers to someone else who will never see the money, and that becomes a room for an evening. The idea of a brick-and-mortar video store, such as the 1990s-staple Blockbuster Video, is hopelessly anachronistic in the era of Netflix.

Building a Future: Integrated Human-Machine Military Organization

At some point in the future, historians may look back on the current era as the dawn of a human-machine revolution or perhaps even the beginnings of the sixth revolution in military affairs. Williamson Murray notes in The Dynamics of Military Revolution that such things are rarely apparent in advance, and only obvious in retrospect and in the wake of remarkable battlefield success. While the societal, technological, political, and military ingredients of such a revolution are certainly present, whether this constitutes a revolution in military affairs will be left to future historical debate.

数字化 – 网络化 – 智能化 (Digitization – Networking – Intelligentization): China’s Quest for an AI Revolution in Warfare

As the U.S. and China compete to innovate in this domain, the relative trajectories of U.S. and Chinese advances in artificial intelligence will impact the future military and strategic balance. China’s ability to leverage these national strategies, extensive funding, massive amounts of data, and ample human resources could result in rapid future progress. In some cases, these advances will be enabled by technology transfer, overseas investments, and acquisitions focused on cutting-edge strategic technologies.

How to Build a Virtual Clausewitz

In many ways, the use of AI by military forces on the battlefield is not new at all. At a simplistic level, the landmine is perhaps a good starting example. The first known record of landmines dates to 13th-century China, and they emerged in Europe somewhere between 1500 and 1600. Most landmines are not intelligent at all and apply a binary logic of “kill” or “don’t kill.” What landmines lack, and one of the primary reasons they are banned by most countries, is the ability to use just and discriminate force. As far as computers have come since the British used “The Bombe” to break the Enigma code, the human mind still has an advantage in determining the just and discriminate use of force and in thinking divergently about the second- and third-order effects resulting from the use of force. But, according to some, that advantage may not last for long.