Airpower Beyond the Last Red Button
Thinking about Causality: The Spear, the Arrow, and the Shuriken
“Make things as simple as possible, but not simpler.”
— Variant of Ockham’s Razor, attributed to Albert Einstein
For all its sound and fury, the controversy sparked by former Defense Secretary Panetta’s Distinguished Warfare Medal never rose high enough out of the gutters to deal with the complex issues raised by remote warfare. While the Secretary’s desire to recognize the crews of Remotely Piloted Aircraft (RPA) for their profound battlefield impact demonstrated great insight, the decoration itself was dead on arrival. For RPA crews, it amounted to a second-class Distinguished Flying Cross, further reinforcing a toxic ‘second-class citizen’ narrative; for most everyone else, implying ‘cubicle warriors’ had any equivalency to traditional combat veterans was a disgrace to those who served in the line of fire. The latter group worked itself into a lather condemning the former over a decoration the crews did not even want.
Because ‘most everyone else’ was deafeningly loud, and the RPA community and service leadership were almost entirely silent, the whole episode deteriorated into a procession of op-eds pouring out frustration about modern technological warfare upon effigies of drone operators.[1] This was no one’s finest hour, but dignity was not the only casualty. We also missed an opportunity to deal with the new realities of remote warfare. All sides arrived at the discussion with ill-considered assumptions about ‘who did what’ and ‘why things happen.’ Insofar as these assumptions diverged, we talked past each other.
So, for the RPA skeptics, the robot aircraft’s mechanics reduced the risk and skill requirements to such a degree that the ‘operators’ could not claim any special acumen for effects resulting from their actions, or at least not any acumen comparable to those who faced higher requirements to produce similar effects. If the RPA did something, it was the plane, not the crew, that ‘did it.’ For the RPA advocates, traditionally manned aircraft use robotic fly-by-wire systems to simplify the task of aircraft control and data-linked smart weapons to keep themselves out of the range of return fire; if risk reduction via datalink and computerized aircraft control do not invalidate the contribution of the F-16 pilot, neither should they invalidate the contribution of the MQ-9 crew. In this view, if either aircraft did something of significance, the crew or pilot ‘did it’ by mastering the technology at their command. A more mature debate would have synthesized these arguments to gain a better understanding of the relationship between humans and hardware.[2]
It is hardly unusual to find two competing theories of why something happens — the field of philosophy has been debating this problem of causality as long as there have been philosophers.[3] And ultimately, this is a question about why things happen — if a system is so completely scripted the human is merely a custodian, then perhaps the engineers and programmers actually ‘did it,’ but if the human aspect is decisive, then it is more correct to say the crew or pilot ‘did it.’ This is where the discussion could have become much more interesting, if only it had been less adolescent: if we know who (or what) ‘did it,’ we can calibrate our personnel and acquisition systems to get more of ‘it.’ In other words, strategy implies causality.
Understanding Cause
Understanding causality tells us where to focus our efforts to improve our own forces and disrupt our adversary’s forces. It is all the more important for airmen, as airpower implicitly includes systems theories about how our enemies work: Warden’s Five Rings,[4] the Air Corps Tactical School’s Industrial Web, and the Social Network theories behind the current SOF air campaign all rely on different assumptions about causality. The naval scholar Captain Wayne Hughes tells us, “to know tactics, know technology.”[5] We’d add that “to know strategy, know causality.” Only a strong theory of causality can tell the difference between a symptom and a cause. There are two sides to the causal coin: positive causality, or why what happened ended up happening, and negative causality, or why things that might have happened did not.
Positive Causality: Who killed Zarqawi? Positive causality wrestles with the question, “How did we get here?” Given that an event occurred, what were the prime movers along the preceding chain of events, and what was just along for the ride? Brookings Senior Fellow Peter Singer, author of Wired for War, describes an excellent example of this problem in the context of the Distinguished Warfare Medal controversy. From an interview with National Public Radio:[6]
Host: But, you know, explain for me exactly how — when a person distinguishes themselves if they’re a drone pilot, for example. I mean, how do you go above and beyond if you’re sitting at a computer, piloting a drone?
Singer: Well, you’re putting your finger on one of the controversies that surrounds this, and that’s what a lot of the spin around has been. But let’s use the case of the mission that got the leader of al-Qaida in Iraq, Zarqawi. So there was a team of unmanned aerial systems, drone operators, that tracked him down. It was over 600 hours of mission operational work that finally pinpointed him. They put the laser target on the compound that he was in, this terrorist leader, and then an F-16 pilot flew six minutes, facing no enemy fire, and dropped a bomb — a computer-guided bomb — on that laser. Now, who do you think got the Distinguished Flying Cross?
Host: Whoa. The…
Singer: The people who spent 600 hours, or the six-minute pilot? And so that’s really what we’re getting at. Actually, the drone operators, in that case, they didn’t get the medal, but they did get a nice thank-you note from a general. This is a true story, here.
Singer highlights a tension between two implicit theories of causality. Both theories would agree the RPA crews and the F-16 pilot were part of the causal chain that resulted in the elimination of Al-Qaeda in Iraq’s number one high-value target, along with a legion of special operators, intelligence troops, maintainers, and many others. However, the theories diverge on the issue of who was the critical (and hence laudable) link in the chain. The first implicit theory, naïve proximate causality, assigns the primary causal force to the link in the chain closest to the effect, the F-16 delivering bombs on target — in effect, the causal chain collapses to the ‘Last Red Button.’ Under this theory, it makes perfectly good sense to present a revered flying decoration to the pilot who removed the top enemy commander from the battlefield. The second implicit theory looks instead at fractional contributions to the outcome — six minutes vs. six hundred hours of work — this ‘Weight of Effort’ approach sees the F-16 as a punctuation mark to a task accomplished by the Predator crews. Our purpose here is not to determine which approach is correct, but rather to highlight that different approaches to causality have implications that ultimately come to bear on our priorities and strategies.
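To make the divergence concrete, here is a minimal sketch in Python of how the two implicit theories would apportion credit for the same chain of events. The hours roughly follow Singer’s telling; the credit rules and everything else are simplifying assumptions for illustration, not a claim about how credit ought to be assigned.

```python
# Two implicit theories of credit for the same causal chain (illustrative sketch).
# The hours follow Singer's telling: roughly 600 hours of RPA work against a
# six-minute F-16 sortie. Everything else is a simplifying assumption.

chain = [
    {"actor": "RPA crews (find, fix, track)", "hours": 600.0},
    {"actor": "F-16 pilot (finish)",          "hours": 0.1},  # six minutes
]

def last_red_button(chain):
    """Naive proximate causality: all credit to the last link before the effect."""
    return {step["actor"]: (1.0 if i == len(chain) - 1 else 0.0)
            for i, step in enumerate(chain)}

def weight_of_effort(chain):
    """Fractional contribution: credit proportional to each link's share of effort."""
    total = sum(step["hours"] for step in chain)
    return {step["actor"]: step["hours"] / total for step in chain}

print(last_red_button(chain))   # all credit to the F-16 pilot
print(weight_of_effort(chain))  # ~99.98% of the credit to the RPA crews
```

Neither rule is offered here as the right one; the point is that the choice of attribution rule, not the facts of the mission, determines who walks away with the credit.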
Negative Causality: Ops Successes, Intel Failures. The flip side of the causal coin is negative causality, which instead asks, “What didn’t happen that might have, and why?” This is a very common academic research problem: how do you count dogs that don’t bark? The intelligence and electronic warfare communities face a direct application of this problem — how does one prove a negative, especially if one’s job is to produce negative events by reducing risk? If an intelligence analyst or an Electronic Warfare Officer (EWO) does his or her job, the risk is fully mitigated, and the would-be SAMs or interceptors never manifest themselves. In this case, the strikers complete their mission and return home safely — an ops success. Conversely, if the intel troop or the EWO misses something, the risk may fully manifest and frustrate the mission — an intel failure. In the F-117 community, intel flight planners used threat knowledge and the technical characteristics of the aircraft to craft a mission profile that circumvented defenses. If the mission went well, one saw only the pilot conducting the strike. If it went poorly, then one would look to the intel planner for an explanation.
It is challenging to rigorously estimate the positive effects of these troops by what they prevented, because we would have to weigh all possibilities and determine how their actions prevented would-be calamities. Ironically, without a model that deals with things that never happened, only last-ditch maneuvers — interventions after the threat has fully manifested — are recognized. Conversely, the much preferable small, early, and low-risk planning interventions that prevent threats from manifesting are overlooked. This leads to a strange set of incentives: only the crew members who mitigate manifest threats are rewarded, while those who see far enough ahead to mitigate potential threats reduce the most risk but are the least likely to receive single-event recognition. Without rigorous thinking, we might ascribe the contributions of these highly effective flyers to luck or chance, but to explain what they did, we need the abstraction of risk mitigation. Because they did their jobs, risks were never given a chance to manifest; the uneventful flight was the direct result of good planning. But abstractions are difficult, and it is a quicker emotional return to celebrate the last-ditch hero shot than the well-planned but banal shot.[7] It is preferable to accomplish the mission without giving the adversary the opportunity to attack, but it’s hard to give credit for a job well done when that job produces, quite literally, nothing.
Counterfactuals and Abstraction: These non-occurrences are critically important to the formulation of strategy, as one gains victory not only through achieving all the things that did happen, but also through addressing all the things that didn’t. Therefore, we need to consider positive causality (how the things that did happen happened) and negative causality (what else could have happened and why it did not). The best way to do this, in my opinion, is the counterfactual: a thought experiment where we imagine how the world would be different if we changed one variable.[8] This helps us understand how important any given variable is in creating the outcome, and this method is extensively employed by the renowned analyst Ken Pollack.[9]
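As a minimal sketch of the mechanics (a toy model with assumed probabilities, not an analysis of any real mission), the counterfactual can be run as a pair of simulations: hold everything constant, change one variable, and compare the outcomes.

```python
import random

# Toy counterfactual (assumed probabilities, not real data): model the outcome,
# then re-run history with exactly one variable changed and compare.

def mission_succeeds(intel_quality, weapon_reliability, rng):
    """The strike succeeds only if the target is found AND the weapon functions."""
    return rng.random() < intel_quality and rng.random() < weapon_reliability

def success_rate(intel_quality, weapon_reliability, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(mission_succeeds(intel_quality, weapon_reliability, rng)
               for _ in range(trials)) / trials

factual        = success_rate(intel_quality=0.9, weapon_reliability=0.95)
counterfactual = success_rate(intel_quality=0.3, weapon_reliability=0.95)  # weaker intel, all else constant

print(f"factual: {factual:.2f}  counterfactual: {counterfactual:.2f}")
# The gap between the two runs estimates how much of the outcome rode on that one variable.
```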
This tool should prove uniquely helpful to airmen, as airpower achieves its greatest impacts when used in abstraction against deep elements of the enemy’s systems. Consider a hypothetical debate between a tank force commander and a bomber force commander in the Second World War. The tank force commander describes destroying ten enemy tanks in a force-on-force engagement. The bomber commander responds by describing how his crews bombed a tank factory and thereby destroyed a hundred tanks. “What were their serial numbers?” “They never had any, because they weren’t ever built.” “Well then, how do you know you blew up a hundred and not ten or a thousand?”
Both sides have points. Destroying tanks is an abstraction from bombing a tank factory — the number of tanks prevented depends heavily on the accuracy of mental models about the nature of German industry. The earlier the intervention falls in the causal chain, the better the payoff per intervention, and the more it makes sense to employ abstract airpower (deep strike against industrial capacity and force multipliers) along with concrete airpower (close support against fielded forces). However, abstraction is a risk: the costly Schweinfurt raid of the Second World War attempted to remove a key node in the German Industrial Web based on a flawed theory of logistics, and yielded disappointing results.
Still, the counterfactual method is more accurate than naïve proximate causality in assessing who did what. An action’s failure or excellence is then judged by the degree to which it comes to bear on the outcome. If a mission is successful because one player in the chain went well above and beyond, and it would have failed if that player had simply done what was expected of them, then the outcome turned on their performance. However, the more one steps back from the event, the more complex it becomes, with spiraling numbers of intricately intertwined causes. To manage this complexity, I propose three causal lenses: the Spear, which focuses on the ‘last-step player,’ the Arrow, which incorporates random chance, and the Shuriken, which sees impacts as the result of some irreducible loop of players.
Three Models of Causation
Tip of the Spear and Simple Proximate Causality: The first lens, simple proximate causality or “the tip of the spear,” is also our most familiar. This helps us understand events where most of the play is in the last step prior to the impact. In these cases, we should see large variations in the outcome based on choices made immediately beforehand. In situations where so much change in outcome hinges on this final player, it makes sense to calibrate our causal lens on them.[10]
An apt analogy to describe this lens is an infantry spear. Everything composing the spear is built to drive its tip into the target with as much force as possible. The idea of the ‘last-step player’ is key to this lens — the focal point of attention is the spear-point, the link in the chain closest to the desired effect. This lens is best scoped to simple processes where there is an unambiguous delineation of roles and a clear, deterministic link between the culminating final action and the desired effect. In situations with fluid roles, or a strong element of chance, this lens may lead to a deceptive over-simplification by crediting a lucky or glamorous player with the whole of a complex process.
Even with its last-step focus, this lens should not neglect the importance of the other steps in the chain that made the last step possible. For instance, consider a kicker on a football team whose 60-yard kick turned a loss into a win. The interview and MVP award will doubtless go to him, but he is remiss if he does not share the credit with the linemen who protected him, the offense that put him in position, the coaches and trainers who prepared him for the kick, and the fans who pay the bills for the game. A vast array of players ensures the last-step player has every advantage in their favor as they step into the situation. This lens should not devalue their contribution, only point out that in these sorts of situations the best economy of analytical effort is found in focusing on the last causal step.
This lens is the simplest, but its focus on the endgame may play to our predilection for tactical rather than systemic explanations. It also plays to our bureaucratic vices — since there is a concrete and easily quantifiable link between the last step and the impact, it lends itself to metricization. Most seductively, it implicitly advantages powerful communities within the institution, who can claim credit for the entire chain of events. Unfortunately, it yields wildly incorrect results if the lion’s share of variation is not found in that last step.
The classic example of this analytical failure is the Luftwaffe during the Battle of Britain, where the Germans ascribed almost all causal force to the martial spirit of their pilots. The British, by contrast, cared far more about building an integrated air defense system, and therefore spent much more time thinking about support functions and tactical integration. When faced with a failure to achieve operational objectives, Hermann Goering would berate his Luftwaffe pilots for cowardice. Conversely, RAF Fighter Command’s Hugh Dowding looked farther upstream in the causal chain, and found victory by “praying for radar.”[11] By over-relying on this lens, we run the risk of falling into Goering’s trap of blaming those closest to the outcome simply because they are at hand, and missing the deeper structural aspects of what’s going on.
Longbow Archers and Stochastic Causality: Our second lens recognizes that the world includes elements of randomness and holds that we can harness probabilistic processes to our advantage. At Agincourt and Crecy, English longbowmen darkened the sky with a “cloud of arrows” to cut down a generation of French knights.[12] At range, these longbows were an area-effect weapon — archers were not looking to snipe individual targets, but were rather using their high rate of fire to saturate the enemy battle lines. While no archer was individually any more accurate than another, in the aggregate they were accurate enough. Many things in warfare are similarly stochastic, involving both random and deterministic elements. The Longbow Archer serves as an analogy for these stochastic events, where there is a clear delineation between operations and support, but with randomness intervening between the culminating final action and the impact in question.
The B-17 and B-24 crews of the Eighth Air Force employed stochastic causality to great effect during the Second World War. In theory, precise formations and precision bombing aids would allow the bomber force to arrive together at the calculated release point and drop as one. In practice, formation management, navigation, and bombardiership were more complex, but the engagement profiles of the bombers were still relatively deterministic.[13] The bombs themselves, however, were subject to chance in their flight (random imperfections in the bomb’s construction or release mechanism, unpredictable variations in air density or currents, and so on). Managing the deterministic elements provided predictable results in the aggregate despite this element of randomness. Bombs were not truly random in their placement, but were unpredictable in a predictable way, centered on the target with a probabilistic error. These airmen accomplished their missions through the art of directing randomness toward concrete ends. They lived, died, and won in the aggregate — the angry clouds of exploding flak rounds the Germans sent back at them killed just as impersonally and probabilistically as their bombs did (and for this reason, the Eighth Air Force recognized these crews with ‘counter medals,’ which aggregated effects and risks over a number of sorties).[14]
The key concept in the Longbow Archer lens is the stochastic departure point, the link in the chain where the process transitions from predominantly deterministic to random forces. At this departure point, we should pool all the players doing the same things, and assign all that comes afterward to all of them as fractions of the aggregate effects. Links in the causal chain past this departure point are driven by stochastic forces, and hence cannot be assigned to individual agency.
This lens abstracts out to expected values over time for contributions, and is appropriate only for events that include an element of randomness. For instance, Electronic Warfare Officers generated these sorts of effects when they defended ground convoys against IED attack with jammers; Electronic Warfare is a game of probabilities. Stochastic explanations like these have been part of how airmen tell the airpower story for as long as there have been airmen, and the Longbow Archer lens is an essential element of our causal toolkit.
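A short simulation sketch of what the lens implies, with numbers that are assumptions for illustration rather than historical data: past the stochastic departure point no single crew’s result is predictable, but the pooled result is, and credit flows back to the pool as fractions of the aggregate.

```python
import random

# Stochastic departure point, sketched with assumed numbers (not historical data).
# Each of 100 crews flies a deterministic profile to the release point; past it,
# any single bomb's fate is random, but the aggregate is predictable.

def raid(num_crews=100, p_hit=0.05, rng=None):
    """Return the number of hits from one raid."""
    rng = rng or random.Random()
    return sum(1 for _ in range(num_crews) if rng.random() < p_hit)

rng = random.Random(1944)
hits = [raid(rng=rng) for _ in range(1000)]
average_hits = sum(hits) / len(hits)

print(f"average hits per raid: {average_hits:.1f} (expected: {100 * 0.05:.1f})")

# Under this lens no individual crew 'did it': past the departure point, which
# particular bomb hit was a draw from a shared distribution, so each of the 100
# crews is credited with a 1/100th fraction of the aggregate effect.
print(f"fractional credit per crew: {average_hits / 100:.3f} hits per sortie")
```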
The Shuriken and Complex Adaptive Causality: Military strategy deals with many scenarios not reducible to either direct causality or clearly stochastic processes. Enemies adapt and causes intertwine in complex ways, and supporting and supported forces can exchange places. For instance, in the Special Operations targeting cycle, intelligence would lead strikers who would raid a target, which would yield new intelligence and prime the next operation. In these cases, operations and intelligence were working in parallel like two pistons in an engine, and it would be difficult to determine which of the two was the support force. Because of this, the causal chain turns back on itself and becomes a causal cycle. This lens is concerned more with how fast that cycle runs than with who is the striking face of the cycle at any given point in time.
The operative analogy is the Japanese Hira Shuriken, or “Ninja Star,” which has a number of striking faces, all of which face the enemy multiple times during the weapon’s flight. If the weapon spins quickly, it will accurately close the distance to its target; the flight profile is more important than the terminal strike face. Similarly, both the F3EAD and F2T2EA targeting cycles are rooted in Col John Boyd’s OODA loop, where the imperative is to iterate faster than your adversary. Who kicks in the last door, finds the last piece of intel, or discovers the last hidden network connection matters far less than who does the most to accelerate everyone toward a culminating point. In the movie Zero Dark Thirty, an analyst named Maya badgers the intelligence community to act on a lead that she believes indicates Osama bin Laden’s hideout. The operators and intelligence professionals depicted in the film are absolutely heroic and essential, but replacing Maya with an average analyst would have slowed the cycle more than replacing any other player with an ‘average’ counterpart.[15] Therefore, she provides a disproportionate amount of the causal force for the resulting raid. It may not actually be possible to differentiate between the weight of these roles, however.
As the above diagram demonstrates, these targeting cycles neither begin nor end until the campaign itself draws to a close. In network-on-network warfare,[15] every lap around the cycle should diminish enemy capabilities and improve friendly situational awareness. The enemy attempts to return the favor, and the side that cycles through the loop with the most speed and precision gains the advantage over time. Since a key aspect of the network duel is the hidden nature of the enemy’s network, one may not even realize a culminating point has been reached until well after the fact. It is possible that even the adversary may not realize a fatal wound has been inflicted until they have actually bled out from it. Therefore, trying to figure out who is directly responsible for that culminating point is not particularly useful and likely impossible (especially in any timeframe that would be operationally useful). The only thing that is certain is that faster is better, provided you don’t fall off the cycle in the process.[16]
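A toy model, with every parameter assumed purely for illustration, shows why cycle speed dominates in a network duel: two otherwise identical sides erode each other a little on every lap, and the only difference between them is how fast their loops turn.

```python
# Toy network-on-network duel; every parameter is an assumption for illustration.
# Each side erodes the other a little on every lap of its targeting cycle; the
# two sides are identical except for how fast their loops turn.

def duel(blue_cycle=2, red_cycle=3, attrition_per_lap=0.05, days=365):
    blue, red = 1.0, 1.0                      # remaining network capability
    for day in range(1, days + 1):
        if day % blue_cycle == 0:
            red -= attrition_per_lap * blue   # blue's lap erodes red, scaled by blue's health
        if day % red_cycle == 0:
            blue -= attrition_per_lap * red
        if blue <= 0 or red <= 0:
            return day, max(blue, 0.0), max(red, 0.0)
    return days, blue, red

day, blue, red = duel()
print(f"day {day}: blue={blue:.2f}, red={red:.2f}")
# With identical per-lap effects, the shorter cycle compounds its advantage; the
# 'last' lap that collapses the enemy network is simply whichever one happened to
# be running when the cumulative erosion crossed zero.
```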
The key orienting concept for this lens is the irreducible loop, which is composed of the fewest steps required for the causal cycle to continue to operate. First-order success in a network duel belongs to the members of this irreducible loop, in proportion to the degree that they propelled the loop forward. This is typically an operations-intelligence cycle, but this lens also fits killbox/SCAR operations and AWACS-directed air battles well.[17] As cyber develops, this lens may be the most appropriate to that domain of dueling networks as well. This lens is best scoped to complex processes, where the delineation between operations and direct support is murky at best, and where there are indeterminate (and even reciprocal) links between a culminating final action and the impact in question.
On a tactical level, a multi-ship strike demonstrates this ‘irreducible loop’ concept. If one aircraft is firing and another one is providing the terminal guidance, then both are in the irreducible loop. It is inappropriate to try to reduce the ‘who did it’ question below the level of this loop. For this reason, when someone asks an AC-130 Gunship crew “who shoots the gun,” at least half the crew raises their hands. We should consider these sorts of deeply integrated actions as irreducible loops, though within these loops, there are often players who fill leadership roles, such as the Aircraft Commander on a Gunship. These players might rightly be considered primus inter pares, but the emphasis remains on the “equals” part of the phrase.
Conclusion: Applying the Lenses
These lenses are not solutions, but perspectives. An event can be considered from a number of these lenses, with each providing different insights. As heuristics, they are each calibrated to different questions, and these questions may overlap. A process may be deterministic on the tactical level, where one crew definitively performs one mission. It may also be stochastic on an operational level, where any of a number of crews may have yielded the same result, and the choice of crew was based on a random draw of who happened to be on alert.
Similarly, a particularly notable sortie might be performed in the larger context of a CAOC-integrated network duel, where different lenses would be appropriate to different levels of analysis. The core “implicit counterfactual” method applies across all lenses — the key diagnostic question is, “How different would the outcome have been if I had replaced this person or event with a random draw?” As an analogy, this method stretches out the causal chain like a rubber band, holding the outcome constant, and then pokes each step of it to figure out how much play each step has. The steps with the least margin for change, given the constant outcome, are the ones where those specific actions bore the greatest causal force.
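That diagnostic can be sketched directly. In the toy example below, the chain, the actors, and every probability are hypothetical; the mechanics are simply to replace each step in turn with an ‘average’ draw, hold everything else constant, and watch how much the expected outcome moves.

```python
# Sketch of the 'replace with a random draw' diagnostic. The chain and all the
# probabilities below are hypothetical, chosen only to show the mechanics.

chain = {
    "find/fix (RPA crews)":    {"actual": 0.95, "average": 0.40},
    "finish (strike pilot)":   {"actual": 0.99, "average": 0.95},
    "exploit/analyze (intel)": {"actual": 0.90, "average": 0.85},
}

def p_success(probs):
    """Mission succeeds only if every step in the chain succeeds."""
    result = 1.0
    for p in probs:
        result *= p
    return result

baseline = p_success(step["actual"] for step in chain.values())

for name in chain:
    # Counterfactual: this step performed by an average draw, everyone else unchanged.
    probs = [s["average"] if n == name else s["actual"] for n, s in chain.items()]
    swing = baseline - p_success(probs)
    print(f"{name:<26} drop in P(success) if replaced: {swing:.3f}")

# The biggest drop marks the step with the least 'play' in the rubber band: the
# one on which the outcome actually turned.
```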
We might imagine AFSOC Persistent Strike aviators holding a target under surveillance, waiting for a fleeting moment to strike. Given limited station time, three or four crews might rotate overhead during a strike window; all have to be equally prepared to execute the strike, but only one will be overhead when the target randomly presents a vulnerable moment. Therefore, the strike itself might be best understood through the Shuriken or the Spear, but the crews in the strike window should be understood through the Arrow — all were good enough to be selected for a crucial mission, and it was their shared coverage that enabled a random sample of their number to take the shot. Presumably, all of them would have similarly taken a successful shot if given the chance, as demonstrated by their selection for the task. Applying more than one lens thus provides additional perspectives on a given event.
The alternative to a good theory of causality is not the lack of a theory of causality, but a poor or ill-considered theory of causality. Unfortunately, such a theory of causality has made it remarkably difficult for airmen to explain and advance what air, space, and cyberspace do for the joint community and national objectives. We’ve spent the last decade disrupting threat networks from the air, but without the language of causality, we’ve analytically relegated these actions to the realm of support instead of claiming the mantle of airpower. A water-thin theory of causality leaves us all scrambling for the prize real estate on the “tip of the spear,” while a better theory of causality allows us to appreciate how airmen’s diverse contributions actually complement each other. General Arnold understood that if airmen are to tell the Air Force story, they must speak the language of abstractions, which is why he worked so hard to advance the field of Operations Research under RAND.[18] It’s time airmen reclaimed their inheritance by thinking past the ‘last red button’ to the world of analytical possibilities bequeathed to us by visionaries like General Arnold.
Dave Blair is a U.S. Air Force officer and a graduate of the United States Air Force Academy. He holds a PhD and a master’s degree from Georgetown, and a Master in Public Policy from the Harvard Kennedy School. The views expressed in this article are the author’s alone, and do not reflect those of the U.S. Air Force, the Department of Defense, or the U.S. Government.
Notes:
[1] In one particularly tragic irony, one local veterans’ organization had generously opened its doors to RPA crews to unwind after stressful shifts supporting troops under fire in the company of those who understood; after several weeks’ worth of unchecked bile in the comments section of their national parent organization, our crews ceased to darken their door.
[2] For further reading on this topic, I highly recommend MIT Professor David Mindell’s work on the social construction of technology. David A. Mindell, Our Robots, Ourselves: Robotics and the Myths of Autonomy (Viking, 2015).
[3] Graham White, “Medieval Theories of Causation,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Fall 2013 edition, http://plato.stanford.edu/archives/fall2013/entries/causation-medieval/.
[4] John A. Warden, The Air Campaign: Planning for Combat (iUniverse, 1998), http://books.google.com/books?hl=en&lr=&id=K8xEa7-dD_UC&oi=fnd&pg=PR2&dq=the+air+campaign&ots=z9OiJqnc4M&sig=gGuD2ce-pgjlCB-lXRYUO5P5FSY.
[5] Wayne P. Hughes Jr., Fleet Tactics and Coastal Combat, 2nd ed. (Naval Institute Press, 2010).
[6] “It’s Time to Recognize the Valor of Cyber Warriors,” The Brookings Institution, accessed March 2, 2013, http://www.brookings.edu/research/interviews/2013/02/19-cybersecurity-singer.
[7] As an example of this, Colonel Ray O’Mara’s MIT doctoral dissertation describes the cultural tension between navigators and gunners on B-17 crews around the idea of abstraction in combat. A navigator who did their job well would circumvent known defenses, thereby reducing the number of enemies able to engage the aircraft. Conversely, a gunner would defend against enemies who were able to engage the aircraft. The problem is that gunners would shoot down real aircraft with actual tail numbers, while navigators would prevent hypothetical aircraft from reaching the bomber. Even if the navigator prevented, hypothetically, 2.6 Messerschmitts per sortie from engaging the bomber, it’s hard to argue with ‘well, I never saw them’ from a crewmember who warded off an actual attacking Messerschmitt. Which is, of course, true — that’s the point. Raymond P. O’Mara, “The Socio-technical Construction of Precision Bombing: A Study of Shared Control and Cognition by Humans, Machines, and Doctrine During World War II” (Thesis, Massachusetts Institute of Technology, 2011), http://dspace.mit.edu/handle/1721.1/67754.
[8] To do this responsibly, the eminent military historians in the classic work What If? spell out a number of rules, such as ‘events should return to their historical flow unless directly prevented by the counterfactual chain.’ Robert Cowley, What If? (Pan Macmillan, 2001).
[9] Kenneth M. Pollack, Arabs at War: Military Effectiveness, 1948–1991 (Lincoln: Bison Books, 2004).
[10] In statistical terms, this is a ‘power-law’ distribution of outcomes, where most draws yield relatively low results, but a few extreme outliers account for the majority of events. In other words, the mode and median are significantly below the mean.
[11] Stephen Bungay, The Most Dangerous Enemy: The Definitive History of the Battle of Britain, Reissue (Aurum Press, 2010); Richard Overy, The Battle of Britain: The Myth and the Reality (W. W. Norton & Company, 2002). The Battle of Britain also highlights a second problem in dealing with causality in incentive structures. The Germans would reward pilots solely for outcomes, which resulted in a number of glory-hungry flight leads losing inordinate numbers of their Katschmarek wingmen. Conversely, the British would reward aircrew for making the right decisions, with less regard to outcome. This resulted in more sober flight discipline and better results in the aggregate.
[12] Quoting Walsingham. Anne Curry, The Battle of Agincourt: Sources and Interpretations (Boydell Press, 2000).
[13] O’Mara, “The Socio-technical Construction of Precision Bombing.”
[14] One difficulty here is determining the pool from which to assess ‘average.’ An average elite Special Operator is far from average, as compared to the general population of the military. So there is clearly some laudatory causal force that comes from being part of the sort of unit or capability who would be invited to the table for such an event. If extraordinary actions were so ordinary for an elite unit that any member could perform an extreme task on any given day, then the member might be ‘average’ by the extreme standard of the unit. This means that the unit’s standards were the causal factor in the outcome, and the distinctive accomplishment of the individual is in achieving and upholding the unit’s standards. Perhaps we might solve this problem through unit awards, since this shares the credit with the entire unit subject to these extreme standards.
[15] John Arquilla and David Ronfeldt, Networks and Netwars: The Future of Terror, Crime, and Militancy, 1st ed. (Rand Corporation, 2001).
[16] Aristotle’s admonition that it is “the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits” applies here. Nicomachean Ethics, 2nd edition (Indianapolis, Ind: Hackett Publishing Company, Inc., 1999).
[17] Strike Coordination and Reconnaissance.
[18] “History and Mission | RAND,” accessed November 30, 2015, http://www.rand.org/about/history.html.