# Reviewing Prediction Machines: Cutting Through the AI Hype

Prediction Machines: The Simple Economics of Artificial Intelligence. Ajay Agrawal, Joshua Gans, and Avi Goldfarb. Boston, MA: Harvard Business Review Press, 2018.


There is no shortage of opinions about artificial intelligence (AI). Scour the blogs, and you’re bound to find references to both its promises and its perils. Frequently, the predictions are Janus-faced. Artificial intelligence will eliminate human jobs, and artificial intelligence will create human jobs. It heralds a new industrial revolution, and its impact will be constrained by its significant limitations. Such conflicting rhetoric appears in the military sphere, too. Artificial intelligence will lead to a post-strategy era; it is but a military enabler. Artificial intelligence will lift the fog of battle; it will cloud our understanding of battle. Artificial intelligence will lead to a more humane battlefield; artificial intelligence will eliminate humanity from the battlefield altogether.

Will Ferrell as Ricky Bobby (IMDb)


The cacophony can be deafening. Russian President Vladimir Putin’s pronouncement in 2017 that “[the leader in artificial intelligence] will be the ruler of the world,” or even Google CEO Sundar Pichai’s 2016 announcement of a new AI-first strategy, amplifies the discord by injecting anxiety into the current frenzy. Little wonder, then, that some have started applying the faux-wisdom of Will Ferrell’s Talladega Nights character Ricky Bobby to artificial intelligence, frantically proclaiming, “If you’re not first, you’re last.” But these trite soundbites and taglines do little to answer the fundamental questions of what artificial intelligence is and how it will affect us.

Ajay Agrawal, Joshua Gans, and Avi Goldfarb offer a valuable, fresh perspective, cutting through the hype in their recent Prediction Machines: The Simple Economics of Artificial Intelligence. All three authors are economists at the University of Toronto’s Rotman School of Management, and they all have experience nurturing artificial-intelligence start-ups at the Creative Destruction Lab. For this trio, the key to understanding artificial intelligence is to reduce it to a simple supply-and-demand curve.

Artificial intelligence is about generating predictions. “[It] takes information you have, often called ‘data,’ and uses it to generate information you don’t have.”[1] Historically, collecting and parsing data, constructing models, and employing the resident statistical expertise to offer intelligible interpretations demanded significant resources. But what happens if the cost of prediction falls substantially? According to the law of demand, when something becomes cheaper we consume more of it; hence, “Cheaper prediction will mean more predictions.”[2] Understanding the implications of “more predictions” is the challenge.

Agrawal, Gans, and Goldfarb tackle it admirably. The book is organized into 19 brief chapters, grouped into five parts. Because it’s written primarily for a business audience, the discussion focuses on identifying emerging opportunities in artificial intelligence “likely to deliver the highest return on investment.”[3] Hence, examples in the text tend to focus on C-suite strategies and business applications such as reducing customer churn or predicting credit card fraud.

Nonetheless, by successfully reframing artificial-intelligence tools as cheap prediction machines, the trio of economists offer several critical insights that are as applicable to military deliberations as they are to discussions in the boardroom. I summarize three below: predictions will be situation-specific, predictions will sometimes be wrong, and decisions will still require human judgement.

Predictions Will Be Situation-Specific

Unlike guesses, predictions require data. More data provide more opportunities to discover critical linkages, generating better predictions. In the past, analytic techniques such as multivariate regression constrained the amount of data that could be scoured for correlations. Consequently, these techniques relied on an analyst’s intuition or hypothesis, and they produced answers that were correct only on average, potentially never matching any actual outcome.[4] Not so with modern techniques in artificial intelligence, which feast on the immense data sets and complex interactions that would otherwise overwhelm classic statistical models. Data have therefore been likened to “the new oil”—without them, the machine of artificial intelligence would grind to a halt.[5] But, as Agrawal, Gans, and Goldfarb remind us, not all data are created equal.

Data must be tailored to the task at hand. Asking artificial intelligence to predict whether the pixels in an image (information we know) correspond to a cat (information we don’t know) will not necessarily help when trying to predict whether another group of pixels in another image correspond to a vehicle-borne improvised explosive device. Similarly, a system trained to play Go would still fail if asked to play the much simpler game of Tic-Tac-Toe. Despite the common objective of recognizing a specified object or winning a game, the data that support one prediction cannot simply be repurposed for another.

Regrettably, Agrawal, Gans, and Goldfarb may contribute to some of the confusion by occasionally oversimplifying the prediction problem. For example, in their discussion of autonomous driving, the authors identify only a single necessary prediction, “What would a human do?”[6] While framing the problem this way may help an engineer move beyond a rules-based programming decision tree, to be relevant the prediction demands additional nuance. For example, “What would a human do if a truck pulled out in front of him or her?” Only then can the data be searched for similar situations to generate a usable prediction. Without that nuance, the data Tesla collects from humans driving its electric vehicles could be deemed equally applicable to soldiers driving their tanks on a battlefield.

Not only are data specific to the prediction, but the problems to which we can apply artificial intelligence are also situation-specific. Building on Donald Rumsfeld’s oft-repeated taxonomy of known knowns, known unknowns, and unknown unknowns, the trio of economists add another category: unknown knowns.[7] For Agrawal, Gans, and Goldfarb, known knowns represent the “sweet spot” for artificial intelligence—the data are rich and we are confident in the predictions.[8] In contrast, neither known unknowns nor unknown unknowns are suitable for artificial intelligence. In the former, there are insufficient data to generate a prediction—perhaps the event is too rare, as may often be the case for military planning and deliberations. In the latter, the requirement for a prediction isn’t even specified, a situation exemplified by Nassim Nicholas Taleb’s black swan.[9] In the final case of unknown knowns, the data may be plentiful and we may be confident in the prediction, but the “answer can be very wrong” due to unrecognized gaps in the data set, such as omitted variables and missing counterfactuals that can contribute to problems of reverse causality.[10]

Consequently, current artificial-intelligence prediction machines represent “point solutions.”[11] They are optimized for known known situations with plentiful data relevant to specific, understood workflows. To understand how an artificial-intelligence tool may function within a specific workflow, the authors introduce the useful concept of an “AI canvas” that helps “decompose tasks in order to understand the potential role of a prediction machine,” the importance and availability of data to support it, and the desired outcome.[12] The most important element of the “AI canvas,” though, is the core prediction. Its identification and accurate specification for the task at hand are essential. Otherwise, the entire artificial-intelligence strategy can be derailed.
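
To make the canvas idea concrete, here is a minimal sketch in Python of how a single task might be decomposed. The field names are my paraphrase of the book’s categories, and the route-screening example is hypothetical, not drawn from the text.

```python
from dataclasses import dataclass

@dataclass
class AICanvas:
    """One row of an 'AI canvas' for a single task (field names paraphrase the book's categories)."""
    task: str            # the workflow task being decomposed
    prediction: str      # the core prediction the machine must generate
    judgement: str       # how humans weigh the relative payoffs of outcomes
    action: str          # what is done once the prediction is judged
    outcome: str         # the measure of success
    input_data: str      # data needed at prediction time
    training_data: str   # data needed to build the machine
    feedback_data: str   # data used to improve the machine after fielding

# Hypothetical military example: screening convoy routes for IED risk.
route_screening = AICanvas(
    task="Select tomorrow's convoy route",
    prediction="Likelihood of an IED emplacement on each candidate route",
    judgement="Relative cost of a delayed convoy versus a struck convoy",
    action="Choose a route and allocate route-clearance assets",
    outcome="Convoys arriving safely and on time",
    input_data="Recent route imagery, incident reports, traffic patterns",
    training_data="Historical routes labeled with IED incidents",
    feedback_data="Post-mission reports on each route actually driven",
)
```

Writing the core prediction down explicitly, as the canvas forces you to do, is what exposes whether the available data actually support it.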

Predictions Will Sometimes Be Wrong

Artificial-intelligence tools rely on available data to generate a prediction. Agrawal, Gans, and Goldfarb identify three types of necessary data: training, input, and feedback. The tool is developed using training data and fed input data to generate its prediction. Feedback data from the generated prediction are then used to further improve the algorithm.[13]
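
One way to picture how the three data types relate is a simple retraining loop. The sketch below is illustrative only, assuming a generic scikit-learn-style classifier and synthetic data rather than anything from the book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: historical examples with known outcomes.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)   # build the prediction machine

# Input data: a new case we need a prediction for.
X_new = rng.normal(size=(1, 4))
prediction = model.predict(X_new)

# Feedback data: the outcome actually observed once the prediction was acted on.
y_observed = np.array([1])
X_train = np.vstack([X_train, X_new])
y_train = np.append(y_train, y_observed)
model = LogisticRegression().fit(X_train, y_train)   # keep learning after fielding
```

The loop also makes the authors’ point visible: until the tool is fielded and feedback data start arriving, it is only as good as its initial training set.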

More and richer training data generally contribute to better predictions, but collecting data can be resource intensive, constraining the data available for initial training. Feedback data fill the gap, allowing the prediction machine to continue learning. But that feedback data must come from use in the real world. Consequently, the predictions of artificial intelligence are more likely to be wrong when the tool is first fielded. “Determining what constitutes good enough [for initial release] is a critical decision.”[14] What is the acceptable error rate, and who makes that determination?

Even plentiful data and a refined algorithm cannot compensate for flawed data: flawed data will still yield flawed predictions. Additionally, it’s important to remember that all data are vulnerable to manipulation, which can significantly degrade the tools of artificial intelligence. For example, feeding corrupt input data into a prediction machine could crash an artificial-intelligence tool. Alternatively, the input data could be subtly altered such that an artificial-intelligence tool continues to function while generating bad predictions.[15] By altering just a few pixels, imperceptible to the human eye, researchers at the Massachusetts Institute of Technology fooled one of Google’s object recognition tools into predicting that an image of four machine guns was actually a helicopter. Similarly, feedback data can be manipulated to alter the performance of an artificial-intelligence tool, as was observed with Microsoft’s failed Twitter chatbot, Tay.[16] Training data introduce their own vulnerabilities—an adversary can interrogate the algorithm, bombarding it with input data while monitoring the output in order to reverse-engineer the prediction machine. Once the inner workings are understood, the tool becomes susceptible to additional manipulation.[17]
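
The MIT demonstration relied on adversarial perturbations. The toy sketch below illustrates the general idea on a linear stand-in classifier, not the researchers’ actual method or Google’s model: a tiny, uniform nudge to every pixel, chosen in the direction the model is most sensitive to, is enough to flip the predicted label.

```python
import numpy as np

# A toy linear "image classifier" standing in for a deep network:
# score > 0 -> label "helicopter", otherwise "machine gun".
rng = np.random.default_rng(1)
w = rng.normal(size=784)             # one weight per pixel of a 28x28 "image"
x = rng.uniform(0.0, 1.0, size=784)  # the original image

def predict(image):
    return "helicopter" if image @ w > 0 else "machine gun"

# Fast-gradient-sign-style perturbation: push every pixel a tiny amount in the
# direction that moves the score across the decision boundary.
margin = x @ w
epsilon = 1.1 * abs(margin) / np.abs(w).sum()      # just enough per-pixel change
x_adv = x - epsilon * np.sign(margin) * np.sign(w)

print("per-pixel change:", round(epsilon, 4))      # a few hundredths at most
print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))   # the label flips
```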


Detecting flawed predictions, either due to inadequate learning or adversarial data manipulation, poses a significant challenge. It’s impossible to open the artificial-intelligence “black box” and identify “what causes what.”[18] While DARPA is trying to resolve this shortcoming, presently the only way to validate whether the predictions are accurate is to study the generated predictions. Agrawal, Gans, and Goldfarb suggest constructing a hypothesis to test for flawed predictions and hidden biases, and then feeding select input data into the prediction machine to test the hypothesis.[19] However, since “we are most likely to deploy prediction machines in situations where prediction is hard,” the authors acknowledge that hypothesis testing of these complex predictions may prove exceptionally difficult.[20] This challenge may be further exacerbated in military-specific scenarios due to the lethal outcomes that often characterize perplexing military problems.
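
A crude version of such a hypothesis test can be run against any black-box model: state what should not affect the prediction, then feed in paired inputs that differ only in that respect and compare the outputs. The sketch below uses a stand-in model and a hypothetical “irrelevant” feature purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothesis: the fielded model's predictions should not change when a feature
# we believe is irrelevant (here, column 3) is altered. Probe it with paired inputs.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - 0.5 * X[:, 3] > 0).astype(int)   # the "black box" secretly uses column 3
model = LogisticRegression().fit(X, y)

probes = rng.normal(size=(200, 4))
flipped = probes.copy()
flipped[:, 3] = -flipped[:, 3]                  # change only the supposedly irrelevant feature

disagreements = (model.predict(probes) != model.predict(flipped)).mean()
print(f"predictions changed on {disagreements:.0%} of probes")  # large -> hypothesis rejected
```

As the authors concede, when the prediction itself is hard, designing probes that meaningfully exercise the model is correspondingly difficult.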

Decisions Based on Predictions Will Still Require Human Judgement

For all the promise of more, better, and cheaper predictions, the decisions based on those predictions will still require human judgement. In fact, just as cheaper coffee increases the demand for cream and sugar (they are economic complements), we can expect the value of human judgement to rise as predictions generated by artificial intelligence become cheaper and more prevalent.[21]

Predictions are but an input into eventual decisions and associated actions. I could estimate the likelihood of my car breaking down in the next six months, an unexpected overseas relocation, or the costs of my kids’ college education (that I predict I’ll have to help finance), but these predictions may not alter my decision to purchase a new car, because they don’t determine the value I’ve assigned to the outcome of driving a new car. The process of assigning that value—the associated reward or payoff—is a distinctly human judgement, and one that varies among individuals.
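
The split between prediction and judgement can be written down directly: the machine supplies a probability, the human supplies the payoffs, and only together do they yield a decision. In the hypothetical sketch below, two people receive the same prediction but choose differently because they value the outcomes differently; all numbers are invented.

```python
# The prediction is the same for everyone; the judgement (payoff values) is not.
p_breakdown = 0.30   # machine-generated prediction: chance my old car fails this year

def decide(payoffs):
    """Choose the action with the higher expected payoff given the prediction."""
    keep = (1 - p_breakdown) * payoffs["keep_ok"] + p_breakdown * payoffs["keep_breakdown"]
    buy = payoffs["buy_new"]
    return "buy new car" if buy > keep else "keep old car"

# Two people, identical prediction, different judgements about the outcomes.
cautious_commuter = {"keep_ok": 0, "keep_breakdown": -10_000, "buy_new": -2_500}
thrifty_tinkerer  = {"keep_ok": 0, "keep_breakdown": -1_500,  "buy_new": -2_500}

print(decide(cautious_commuter))  # buy new car
print(decide(thrifty_tinkerer))   # keep old car
```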

In the past, these prediction and judgement inputs into our decisions were obscured because we often performed both simultaneously in our heads. However, outsourcing the prediction function to the new tools of artificial intelligence forces us “to examine the anatomy of a decision” and acknowledge the distinction.[22]

Herein lies an essential point and the crux of the argument put forward by Agrawal, Gans, and Goldfarb. For all the popular talk of artificial intelligence displacing humans, the three economists assert “prediction machines are a tool for humans,” and “humans are needed to weigh outcomes and impose judgement.” Humans decide what constitutes a best outcome based on the predictions.[23] Moreover, more predictions will yield more payoffs for humans to evaluate and more decisions for humans to make.[24]

Occasionally, an appropriate payoff based on a prediction generated by artificial intelligence can be predetermined and the resulting decision coded into the machine. In these cases, because the prediction dictates the decision, the task itself is ripe for automation. But more often, situations are complex and prediction is hard. As identified above, these are the situations where we are most likely to introduce prediction machines, and the residual uncertainty of the prediction can actually necessitate greater human judgement because the prediction, even if generated through artificial intelligence, may not always be correct.[25] Thus, rather than eliminating the human, artificial intelligence often places an even greater imperative on the human remaining within the system.

The human’s position within the system and his or her relationship to the task will likely change, though. Once the prediction function is automated and assigned to artificial intelligence, tasks previously deemed essentially human and their associated skills will likely be superseded by new tasks and new skills. Indeed, this evolution in human tasks and skills is common and has been observed during other periods of automation from 19th century paper-making to modern air combat. Agrawal, Gans, and Goldfarb provide one contemporary example of a school bus driver. While the driving portion of the human task may be amenable to artificial intelligence and automation, the other tasks the driver accomplishes, ranging from protecting the children from hazards at bus stops to exercising discipline over unruly schoolchildren, are not so easily automated.[26] As David Mindell, a professor at the Massachusetts Institute of Technology, observed in his book about automation, “Automation changes the type of human involvement required and transforms but does not eliminate it.”[27]

Still, the fact that human skills remain essential to the process does not necessarily dictate the same humans be retained in the process.[28] In their final chapter assessing the broader societal impacts of artificial intelligence on the future, Agrawal, Gans, and Goldfarb conclude “the key policy question isn’t about whether AI will bring benefits but about how those benefits will be distributed.”[29] As these tools become more prevalent, individuals will have to learn new skills, and in the process income inequality may be temporarily exacerbated. “Reward function engineers,” those who understand “both the objectives of the organization and the capabilities of the machines” and who can therefore provide the necessary judgement to help guide decisions based on the various predictions, will likely flourish.[30] It’s likely that within the military, strategic and operational planners, as well as subject matter experts, will serve as these essential reward function engineers.

Conclusion

Our current and near-future artificial-intelligence tools are idiot savants. Give them a problem and data for which they are trained, and they will perform remarkably; give them a problem for which they are ill-equipped, and they will fail stupendously. It doesn’t matter if the tool is designed for business or national defense.

Too often in the public discourse, artificial intelligence is portrayed as magical fairy dust that should be applied liberally to our most challenging problems. Agrawal, Gans, and Goldfarb’s Prediction Machines dismisses this fallacy. Although written for a business audience, its insights are not confined to the boardroom. Prediction Machines provides a compelling, fresh perspective to help us understand what artificial intelligence is and its potential impact on our world. The text is essential reading for those grappling to make sense of the field.

For Agrawal, Gans, and Goldfarb, artificial intelligence is simply a prediction machine—it uses information we possess to generate information we do not possess. This simple realization immediately refocuses contemporary discussions and guides fruitful development of artificial intelligence. It underscores the situation-specific nature of its data and tools. It discloses its fallibility. And it reveals the role of predictions in our decision process, not as determinants but rather as inputs that must be evaluated according to our uniquely human judgement. According to the three economists, that is the “most significant implication of prediction machines”—they “increase the value of judgement.”[31]

Those humans and their judgement may not always be apparent once the tools of artificial intelligence are released into the wild. But they are there. And it is our challenge to seek them out, because it is they, not the machines, who determine what’s best for all of us.[32]


Steven Fino is an officer in the United States Air Force. He is the author of Tiger Check: Automating the US Air Force Fighter Pilot in Air-to-Air Combat, 1950-1980. The opinions expressed here are his own and do not reflect the official position of the U.S. Air Force, the Department of Defense, or the U.S. Government.




Header Image: Benefits and Risks of Artificial Intelligence (Future of Life Institute)


Notes:

[1] Agrawal, Ajay, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Boston: Harvard Business Review Press, 2018), 24.

[2] Ibid., 14.

[3] Ibid., 3.

[4] Ibid., 33-34.

[5] Ibid., 43.

[6] Ibid., 14.

[7] Ibid., 59.

[8] Ibid.

[9] Ibid., 60.

[10] Ibid., 62-63.

[11] Ibid., 130.

[12] Ibid., 134.

[13] Ibid., 43.

[14] Ibid., 185.

[15] Ibid., 200.

[16] Ibid., 204.

[17] Ibid., 203-4.

[18] Ibid., 197.

[19] Ibid., 197-98. “Some in the computer science community call this ‘AI neuroscience.’”

[20] Ibid., 200.

[21] Ibid., 15, 19-20.

[22] Ibid., 74.

[23] Ibid., 94.

[24] Ibid., 83.

[25] Ibid., 91.

[26] Ibid., 149. The authors also provide a useful example of bank tellers and ATMs, pp. 171-72.

[27] Mindell, David A., Our Robots, Ourselves: Robotics and the Myths of Autonomy (New York: Viking, 2015), 10.

[28] Agrawal et al., 151.

[29] Ibid., 213. Italics in original.

[30] Ibid., 214.

[31] Ibid., 18.

[32] Mindell, 13, 15. Mindell similarly challenges his readers to ask, “Where are the people? Which people are they? What are they doing? When are they doing it? … How does human experience change? And why does it matter?” when investigating autonomy. Italics in original.