Feedback Driven Decisions and the Evolution of Intelligence Analysis in the United States

“What the customer thinks he is buying, what he considers ‘value,’ is decisive–it determines what a business is, what it produces and whether it will prosper.”
Peter F. Drucker

The Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction recommended in 2005 that finished intelligence be made available to customers in a way that enables them—to the extent they desire—to more easily find pieces of interest, link to related materials, and communicate with analysts.[1] Despite this clear recommendation, today’s U.S. intelligence community all-source analysts will likely struggle to improve the overall quality of analysis without more consistent, substantive, and aggregated feedback from their customers. While the commission referenced this issue in its final report, feedback, and its importance in the intelligence cycle, was only nominally emphasized.

The true quality, or value, of intelligence analysis is influenced by an analyst’s ability to acquire, learn, and convey new information as knowledge to a customer in an effective manner. At the national level in the U.S., contemporary changes made by the analytic community to its methodology and delivery vehicles are predominantly based on an inherently insular learning model, rather than a process more akin to the military’s OODA loop, the cyclical process to observe, orient, decide, and act.[2] While the OODA loop serves as a helpful operational analogy, especially because it is driven by recurring feedback, there are other professional models more appropriate for meeting national objectives.

Diagram of the OODA Loop (Wikimedia)

Conceptually embracing assessment models commonly utilized in higher education and sales management, integrated with changes now occurring in the intelligence community’s digital infrastructure, may hold the key to solving aspects of this challenge. This would necessitate a cultural change on the analysts’ part and require consumers to improve the relationship by conditioning responsive feedback into a continuous, structured, measurable, and digital information exchange in line with existing marketplace practices. Otherwise, intelligence analysis will never evolve efficiently enough to align with its customers’ requirements.

Black Holes and Secret Doors—The Current Model

The modern American intelligence community exists in an exclusive environment where information requirements, processes, and production are necessarily compartmentalized not only from the public, but often from others operating within the community. Only intelligence personnel directly involved in a project are positioned well enough to gauge their own impact. Even then, attempting to do so is a challenging task because it is unlikely enough meaningful feedback exists upon which to base it. A common virtual customer response of “read with interest” is better than nothing, but it is hardly substantive. Even with direct briefings or engagements, it is difficult for analysts to comprehend the inner workings of a customer’s mind. Without feedback, analysts have to guess whether they answered the right question, prioritized the threat in a helpful manner, or provided reasonable opportunities for decision advantage.

The actual value of an intelligence report, which customers and academics rightly center on as well, is personal and lies in the eye of the beholder. What might have been drafted specifically for the National Security Council may also end up in the read book of an Assistant Secretary of Defense and their executive officers several days later. Each customer has different experiences, authorities, and requirements based on their mission. Most importantly, each expects a specific question to be answered, and that question differs from person to person. The variety of consumer interests inherently affects the utility of the intelligence report. A timely and helpful response for one might be late and meaningless for another. The diversity of the customer set, and the community’s ability to disseminate across it, naturally creates a variety of results. Digital automation widens that variety further. Digitally posted material can be pulled from across the enterprise and used in ways unseen by analysts.

Photograph of President Reagan in a briefing with National Security Council Staff on the Libya Bombing (Ronald Reagan Presidential Library/Wikimedia)

The President of the United States, often supported by the National Security Council and other senior advisors, makes decisions based on a compilation of information heavily affected by political interests, but has limited time to discuss the aggregated importance of finished intelligence with its producers. Some of those decisions are formalized into explicit policy that takes years to enact and gauge, while others reinforce the status quo, a far less observable activity. Regardless, Presidential Daily Briefs are highly compartmented, for obvious reasons, and only a select number of briefers gain an immediate sense of the president’s response as their customer. The amount of intelligence ingested directly varies by president, but every one also relies on key influencers who drive the need for, and importance of, the medium. If a Cabinet member, National Security Advisor, or confidant utilizes intelligence differently, or in greater volume, than the president, and plays a strong role in many of the policy decisions coming out of that office, does it matter who is being influenced more? Ultimately, those influencers are more likely to consider providing consistent feedback to the producer, thereby indicating its value.

A combatant commander, however, is far more likely than the commander in chief to consider tactical, operational, and strategic effects based on a unique mission and position, resulting in a different perception of similar information. Actionable intelligence, often aligned with quality in some circles, is a subjective term. Intelligence can lead to visibly public strike packages, diplomatic efforts, clandestine activities, or decisions that materialize into qualitative results long after national leaders are reassigned or retire.

From 2010-2012, I served as a Defense Intelligence Agency Branch Chief responsible for a staff operations team that managed analytic taskings. Members of my team had to perform an annual assessment of the number of intelligence production requests formalized across the analytic directorate. The number of tasks, or requests for information as they are commonly known, was staggering. Not only were these formally directed from a widening customer set, including national and operational levels, they also increased at an alarming rate every few years. Alarming, because customers were so voracious for intelligence products that management was forced to make difficult staffing decisions to respond to and disseminate them more efficiently. We were also one of the only groups in our analytic directorate able to measure the entirety of the requests, and even that was imperfect.

Some of the drivers behind these requests include military engagements on multiple fronts, an evolving acquisition community legally mandated to incorporate threat information to justify the expense of a new platform, and daily congressional and presidential needs. In response, the extensions analysts requested became standardized. This trend was experienced across our sister agencies and continues to this day. The number of taskers, themes, and scope of requests for information are inherently classified and contained within a relatively closed loop seen comprehensively by a small number of staff officers. The unremitting requests reinforce the customers’ interest in intelligence assessments and—rightly or wrongly—bolster the analyst’s personal sense of the importance of their job and the value of their product. If the finished intelligence lacked influence, or was of poor quality, why would the taskers continue at such a pace directly from the customers? Most would assume the numbers would go down, not up, if a customer were dissatisfied.

President George Bush at the Central Intelligence Agency (AP)

Some all-source intelligence products undergo internal evaluation against the analytic tradecraft standards described in Intelligence Community Directive 203 and are incorporated into an annual congressional assessment, but the picture it paints is insular in nature. The same can be said of products that are only rated by peers within the community. The dysphemism of a self-licking ice cream cone, a term used widely across the community, describes a process that serves only the producer, not the customer, of intelligence. Without substantive feedback from the customer, the analyst’s learning process is crippled. This further stymies the analyst’s ability to evolve in form and function.

While there are better methods today to determine whether a significant number of intelligence community members access a digitally posted product, there are far fewer means to determine the real impact of the product from the customer’s standpoint. Intelligence products do, however, go through an arduous peer review that requires an incredible amount of coordination across offices and agencies, and through a hierarchy of experts. This has long since become a default aspect of the intelligence production process. There are always a few exceptions, often predicated on time, but they are relatively rare compared to the period before the National Commission on Terrorist Attacks Upon the United States, also known as the 9/11 Commission, issued its final report in 2004. Unfortunately, without a standardized and disseminated feedback process, two areas will always suffer: the quality and the evolution of the product.

A Two-Body Problem—Comparable Models for a Better Future

In the private, consumer-based world, customer feedback is not only digitally standardized by product, but integral for each participant. Without qualitative comments mixed with quantitative ratings, producers cannot determine in real time whether their product meets customers’ requirements in quality, cost, or utility. Without this information, the only other metric is a change in sales, a far more macroscopic measurement. For the consumers—or clients, in some cases—comments and ratings relay details that give the producer enough information to positively affect their behavior. The relationship results in a reinforcing loop that can better achieve benefits, and improvements, for both parties. If the customer desired a more interactive experience, a push-pull scenario from each end of the relationship would be needed to mandate more feedback. If the customer thought the product was of poor quality, or lacked appropriate material, the requirement would be the same. Furthermore, responses would need to occur, and be assessed, in aggregate. There are hundreds of analytic products formalized each day across the whole of the community. Current models in the intelligence community often track internal markers that do not intrinsically align with customer interest.

Benchmarking, as defined by the University of Maryland Graduate School, tracks key markers of accomplishment to help students progress through a program or to demonstrate the program’s success. Direct and indirect assessments, however, help institutional personnel gather and analyze qualitative and quantitative information on students that can be used to improve the program itself. Assessments can be used to identify patterns in what students are learning and producing, and that information can help determine which aspects of a program are working well or need attention. The U.S. intelligence community has no equivalent benchmark, or standardized direct assessment tools predicated on customer feedback, to measure the overall effectiveness of its products. Most recommendations are built on internal evaluations, or on a small sampling of personal observations and interviews more akin to opinion-based comments—a type of indirect assessment—rather than a full complement of assessment tools used to inform decisions. An example of a benchmarking question might be: what percentage of customers rated the product at least 80% insightful? A sketch of how such a question could be answered from structured feedback follows below.
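
To make the idea concrete, the minimal sketch below shows one way such a benchmarking question could be computed from structured customer feedback records. The record fields, customer roles, and 80% threshold are illustrative assumptions for this article, not an existing intelligence community schema or tool.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FeedbackRecord:
    product_id: str        # identifier of the finished intelligence product
    customer_role: str     # e.g., "Under Secretary" or "Combatant Command staff"
    insight_score: int     # customer rating of insightfulness, 0-100
    comment: str = ""      # optional qualitative comment

def benchmark_insightful(records: List[FeedbackRecord], product_id: str,
                         threshold: int = 80) -> Optional[float]:
    """Share of respondents who rated the product at or above the threshold."""
    scores = [r.insight_score for r in records if r.product_id == product_id]
    if not scores:
        return None  # no feedback received: the "black hole" case described above
    return sum(score >= threshold for score in scores) / len(scores)

# Illustrative data: two of three respondents rated the product at least 80.
sample = [
    FeedbackRecord("PROD-001", "Under Secretary", 85),
    FeedbackRecord("PROD-001", "NSC staff", 90, "More opportunity analysis."),
    FeedbackRecord("PROD-001", "Combatant Command staff", 60),
]
print(f"{benchmark_insightful(sample, 'PROD-001'):.0%}")  # prints 67%
```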

In Conclusion—Feedback as a Foundation of Learning

Academic researchers have studied the importance of feedback as a foundation of learning for years. Dr. Natalie Saaris, writing for Actively Learn, an e-reading platform dedicated to improving reading comprehension and education, summarized effective feedback for deeper learning into four categories: task emphasis, specific improvement guidance, frequency of feedback, and feedback that inspires reflection about the cognitive process versus proficiency.

A future model that requires feedback in exchange for the intelligence product would need to align itself to this learning foundation and include benchmarks that measure key objectives by category, complementing the data generated by aggregated assessments. This type of model would need to be integrated with the U.S. intelligence community’s digital advancements in cloud computing services and improved electronic architecture. Recall that President Obama was famously pictured in 2012 using a new tablet-based intelligence product instead of a classic read book. The tablet created a unique platform for visual products, like videos, and positively affected the creativity and impact of the graphic production staff. There were second- and third-order effects, based on the president’s reaction to the capability, that rippled across the intelligence enterprise in terms of modernized information conveyance. These types of platforms are also well known for being able to incorporate digital feedback software to enable controlled data exchange between the two parties.

Most importantly, consumers of intelligence would need a facile and secure method to provide a response to the analysts, most likely delivered through an electronic interface and dissemination process. An example of aggregated output might read: “82% of the Under Secretaries rated the product insightful, and 40% commented on the need for more opportunity analysis and visuals.”
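
As a rough illustration of how such an aggregated statement might be generated from individual responses, the sketch below rolls per-customer ratings and tagged comments into a single sentence per customer role. The roles, tags, and rating cutoff are hypothetical; any real implementation would live inside the community’s accredited electronic architecture rather than a simple script.

```python
def summarize_feedback(entries, role, role_plural, topic, insight_cutoff=80):
    """Roll individual feedback entries for one customer role into one sentence."""
    subset = [e for e in entries if e["role"] == role]
    if not subset:
        return f"No feedback received from {role_plural}."
    insightful = sum(e["insight_score"] >= insight_cutoff for e in subset) / len(subset)
    flagged = sum(topic in e.get("tags", []) for e in subset) / len(subset)
    return (f"{insightful:.0%} of the {role_plural} rated the product insightful, "
            f"and {flagged:.0%} commented on the need for more {topic}.")

# Illustrative responses to a single (hypothetical) product.
responses = [
    {"role": "Under Secretary", "insight_score": 90, "tags": ["opportunity analysis and visuals"]},
    {"role": "Under Secretary", "insight_score": 85, "tags": []},
    {"role": "Under Secretary", "insight_score": 70, "tags": ["opportunity analysis and visuals"]},
]
print(summarize_feedback(responses, "Under Secretary", "Under Secretaries",
                         "opportunity analysis and visuals"))
# -> "67% of the Under Secretaries rated the product insightful, and 67% commented..."
```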

Fundamentally, the intelligence community’s analytic product dissemination needs to move to a more transactional framework. Accumulated consumer feedback delivered back to the producer would result in more rapid changes from the community, improve the client’s decision-making, and elevate the national security impact of both producers and consumers. Only a change of this magnitude would ultimately meet the true intent of the Commission’s 2005 recommendations.


Brian Holmes is the Dean of the Oettinger School of Science and Technology Intelligence at the National Intelligence University in Bethesda, MD. The School is the focus for science and technical analytic education, research and external engagement across the intelligence and national security communities. The views expressed are the author’s alone and do not represent the official policy or position of the National Intelligence University, the Department of Defense or any of its components, or the U.S. Government.




Header Image: President's Daily Brief (Jay Godwin/LBJ Presidential Library)


Notes:

[1] Full recommendation text is: "Recommendation 13 The DNI should explore ways to make finished intelligence available to customers in a way that enables them—to the extent they desire—to more easily find pieces of interest, link to related materials, and communicate with analysts."

[2] A detailed discussion of the concept of the OODA Loop by Col. John Boyd can be found in Frans Osinga, Science, Strategy and War: The Strategic Theory of John Boyd.