Christopher Paul and Marek N. Posard
In 2016, a third of surveyed Americans told researchers they believed the government was concealing what it knew about the “North Dakota Crash,” a conspiracy made up for the purposes of the survey by the researchers themselves.[1] The crash never happened, but the finding highlights the flaws humans carry with them in deciding what is or is not real. The internet and other technologies have made it easier to weaponize and exploit these flaws, beguiling more people faster and more compellingly than ever before. Artificial intelligence is likely to be used to exploit these inherent weaknesses of human nature at a scale, speed, and level of effectiveness previously unseen. Adversaries like Russia could use such manipulation to subtly reshape how targets view the world around them, effectively manufacturing their reality. If even some of our predictions are accurate, all governance reliant on public opinion, mass perception, or citizen participation is at risk.
One characteristic human foible is how easily we can falsely redefine what we experience. This tendency is captured by the Thomas theorem: “If men define situations as real, they are real in their consequences.”[2] Put another way, humans respond not only to the objective features of their situations but also to their own subjective interpretations of those situations, even when those interpretations are factually wrong. Other shortcomings include our willingness to believe information that is not true and a propensity to be swayed as easily by emotional appeals as by reason, as the “North Dakota Crash” falsehood demonstrates.[3]
Machines can also be taught to exploit these flaws more effectively than human operators can: artificial intelligence algorithms can test what content works and what does not, over and over again, on millions of people at high speed, until their targets react as desired.[4]
Consider the role of Russia in the 2016 British vote to withdraw from the European Union (Brexit). There is evidence that some 3,800 Russian-linked Twitter accounts sent more than a thousand tweets promoting a pro-Brexit vote on the day of the referendum.[5] Such tweets appear to have fanned the flames of the pro-Brexit camp on Twitter over time, with the anti-Brexit camp reacting only a few days before the vote.[6]
Emerging technologies, including generative adversarial networks, natural language processing, and quantum computing, could make such scenarios far more effective. In the future, for example, Russian actors could tailor the messages in these tweets to the characteristics and online behaviors of individual recipients, drawing on user data they buy legally from data brokers, data they purchase illegally from hackers, and data they collect themselves.
These opportunities are available today, and some may become even easier to exploit with artificial intelligence. In the future, for example, adversaries like Russia could query these data streams to tailor their messages and then test them on social media platforms to identify the most effective versions. Such adversaries could refine the tested messages accordingly and deploy them to users and to those in their social networks across a wide variety of online media (e.g., traditional social media platforms, augmented reality, virtual reality devices, or noninvasive brain-computer interfaces).
Currently, much of this is done manually, but artificial intelligence allows for a change in scale. Emerging technologies will allow this loop of iterative refinement to occur almost instantaneously, in real time, and at a scale affecting far more people than was seen during the Brexit vote.[7]
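To make that loop concrete, the hypothetical Python sketch below shows the iterative test-and-refine process in its simplest form: an epsilon-greedy routine that keeps showing competing message variants and shifts traffic toward whichever one draws the most engagement. The variant names and the simulated engagement rates are assumptions invented for this example, not a description of any actual adversary's tooling.

```python
import random

# Hypothetical message variants an operator might test (names are illustrative only).
VARIANTS = ["variant_a", "variant_b", "variant_c"]

def simulated_engagement(variant: str) -> int:
    """Stand-in for real-world feedback such as clicks or shares; purely simulated here."""
    assumed_rates = {"variant_a": 0.05, "variant_b": 0.12, "variant_c": 0.08}
    return 1 if random.random() < assumed_rates[variant] else 0

def epsilon_greedy_test(rounds: int = 10_000, epsilon: float = 0.1) -> dict:
    """Iteratively test variants, shifting traffic toward whatever appears to work."""
    shown = {v: 0 for v in VARIANTS}
    engaged = {v: 0 for v in VARIANTS}
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(VARIANTS)  # explore: occasionally try a random variant
        else:
            # exploit: reuse the variant with the best observed engagement rate so far
            choice = max(VARIANTS, key=lambda v: engaged[v] / shown[v] if shown[v] else 0.0)
        shown[choice] += 1
        engaged[choice] += simulated_engagement(choice)
    return {v: round(engaged[v] / shown[v], 3) if shown[v] else 0.0 for v in VARIANTS}

if __name__ == "__main__":
    # Traffic concentrates on the highest-engagement variant without any human in the loop.
    print(epsilon_greedy_test())
```

A real operation would substitute live platform metrics for the simulated feedback; the point is that the loop itself is simple, cheap, and fast enough to run continuously at scale.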
Manufacturing Reality Isn’t New
Humans are, and have always been, vulnerable to being tricked, provoked, conditioned, deceived, or otherwise manipulated. Since at least the 1960s, the Soviet military and its Russian successor organizations have recognized opportunities for exploiting this vulnerability. That is why the Soviets developed a formal research program—called reflexive control theory—to model how one could manipulate targets’ perceptions of reality.[8] The theory focuses on strategically transmitting information to targets in ways that subtly change the motives and logic of their decisions.[9] The end goal is to get people to do something by making them believe it is in their best interest, even when it is not. While the Russians weaponized reflexive control theory, Madison Avenue used similar logic to evoke emotion—and sell products to American consumers.
There are countless examples of the Russians using reflexive control. In October 1993, for example, Russian lawmakers occupied their own Parliament to advocate a return to communism. The authorities deliberately allowed the rebels to take over a police communications post, giving them access to a secure channel; the police then used that channel to transmit staged conversations between government officials about a plan to storm the occupied Parliament building. After hearing these messages, one of the rebel leaders, Parliament Speaker Ruslan Khasbulatov, called on the crowd of supporters to seize a local television station, the first step in a coup. By getting Khasbulatov to make this public call for violence, Russian authorities created a justification for storming the Parliament and arresting the dissidents.[10]
Similarly, the East Germans recognized the power of manufactured reality for maintaining internal control. Starting around the 1970s, the Ministry of State Security, known as the Stasi, expanded the scope of its work from the physical abuse of targets—such as torture or executions—to include a particular kind of psychological abuse. The Stasi called this technique Zersetzung, which loosely translates as “decomposition.” It was an organized, scientific effort to collect information about people and then use it in ways that destroyed their sense of self in private and public life. The Stasi broke into targets’ homes to rearrange their furniture, steal items of clothing, or turn off their clocks.[11] They would send compromising photos to loved ones, discredit people in their workplaces, remove children from dissident parents, or trick targets into believing they were mentally ill, a practice known today as gaslighting. Victims of decomposition struggled to understand why their lives were becoming unrecognizable.
But these Russian and Stasi tactics required careful research and execution to disrupt or manipulate targets one at a time. The contemporary information environment and modern tools, including artificial intelligence, could slash the transaction costs of such manipulation. The following methods enable a dramatic scaling up of the weaponization of information.
Big Data. By 2025, some predict, humans will produce around 463 exabytes of data each day, enough to fill 212 million DVDs. That personal data describes individuals in frightening detail. With access to such data from legitimate and illegitimate data brokers, artificial intelligence could combine and match an individual’s Amazon purchases, Google searches, tweets, Facebook photos, 401(k) account balance, credit history, Netflix viewing habits, and more.
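As a minimal illustration of how easily such records can be fused once they share an identifier, the hypothetical Python sketch below joins three invented data sources into a single profile. The field names, the email key, and the records are all assumptions made up for the example, not any broker's actual holdings.

```python
import pandas as pd

# Invented records from three hypothetical sources (illustrative only).
purchases = pd.DataFrame({"email": ["a@example.com"], "last_purchase": ["camping gear"]})
searches = pd.DataFrame({"email": ["a@example.com"], "top_search": ["border security news"]})
viewing = pd.DataFrame({"email": ["a@example.com"], "top_genre": ["political documentaries"]})

# A single shared identifier (here, an email address) is enough to stitch the sources together.
profile = purchases.merge(searches, on="email").merge(viewing, on="email")
print(profile.to_dict(orient="records"))
```

The resulting profile is only as good as the matching key, but identifiers like names, emails, phone numbers, and device IDs routinely serve that purpose.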
Precision Marketing and Microtargeting. Big data can help identify which individuals are like others—and which similarities matter—based on status updates, drafts of posted videos, facial-recognition data, phone calls, and text messages. Artificial intelligence will make this increasingly easy. If a manipulator knows enough about one person to send him or her messages that provoke or inspire, sending the same messages to similar individuals should produce similar results at scale.
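The “similar individuals” step can be illustrated with a simple lookalike search: score every user's behavioral features against a seed user already known to respond, then target the closest matches first. The users and feature values below are invented for the example; real microtargeting systems are far more elaborate, but the underlying similarity logic is the same.

```python
import numpy as np

# Invented behavioral features per user (e.g., interest scores scaled 0-1); illustrative only.
users = {
    "seed_responder": np.array([0.9, 0.1, 0.8]),
    "user_1": np.array([0.85, 0.2, 0.75]),
    "user_2": np.array([0.1, 0.9, 0.2]),
    "user_3": np.array([0.8, 0.15, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction of interests, near 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

seed = users["seed_responder"]
lookalikes = sorted(
    ((name, cosine(seed, vec)) for name, vec in users.items() if name != "seed_responder"),
    key=lambda pair: pair[1],
    reverse=True,
)
print(lookalikes)  # users most similar to the known responder are targeted first
```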
Shallowfakes, Deepfakes, and Social Bots. Shallowfakes are created by manually doctoring images, video, or audio. Deepfakes use artificial intelligence to superimpose images, video, or audio onto source files, subtly altering who is doing what. Artificial intelligence-driven “social bots” can carry on conversations as if they were actual people. Artificial intelligence will enable a dramatic rise in the number of such inauthentic “people” and make it even harder to tell human conversations from artificial ones.
Generative Adversarial Networks. One of the technologies that makes deepfakes so realistic is a class of machine-learning systems called generative adversarial networks (GANs). These networks pair two neural network models, a generator and a discriminator. The generator takes training data and learns how to recreate it, while the discriminator tries to distinguish the training data from the recreated data produced by the generator. The two artificial intelligence actors play this game repeatedly, each getting iteratively better at its job. At the moment, generative adversarial networks are being used to build deepfakes for fake porn videos and political satire.[12] But their power to manipulate should worry us for several reasons. First, they can draw on inputs like big data and precision marketing to scale the manufacturing of content like deepfakes. Second, the iterative competition between the generator and discriminator takes place at machine speed and yields increasingly convincing output. Third, the same logic that underpins generative adversarial networks can be applied to practices well beyond image and video synthesis.
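To make the generator-versus-discriminator game concrete, here is a minimal, toy-scale PyTorch sketch that pits the two networks against each other on one-dimensional data rather than images. It is meant only to illustrate the training logic described above, not how production deepfake systems are built.

```python
import torch
import torch.nn as nn

def real_samples(n: int) -> torch.Tensor:
    """Toy 'real' data the generator must learn to imitate: a Gaussian centered at 2.0."""
    return torch.randn(n, 1) * 0.5 + 2.0

# Two small networks: the generator maps noise to fake samples; the discriminator scores realness.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into labeling its output as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near 2.0, the "real" distribution.
print(generator(torch.randn(5, 8)).detach().squeeze())
```

Even in this toy setting, the generator improves simply because the discriminator keeps punishing anything it can tell apart from the real data, which is the same pressure that makes deepfakes progressively harder to detect.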
Conclusion
Artificial intelligence-driven social bots that now send you car or skin care advertisements could start chatting with you—actually, experimenting on you—to test what content elicits the strongest reactions. Your responses would be fed back into a generative adversarial network-like system in which you and countless others play the role of discriminator, all helping the artificial intelligence learn how to manipulate you more effectively with future stimuli. You, or others like you, could slowly be pushed toward changing your attitudes, preferences, or behaviors toward other groups or on foreign or domestic policy issues. Whoever is first to develop and employ such systems could easily prey on wide swaths of the public for years to come.
Defending against such massive manipulation will be particularly tricky given the current social media landscape, which allows for the easy multiplication of inauthentic individuals, personas, and accounts through bots and other forms of automation. There are several possible ways to protect against the use of artificial intelligence for experimentation on humans. Government regulation is one. The government could set standards of identity authentication for social media companies and enforce them by fining companies that fail to meet the standards, providing tax breaks for companies that do, or taxing companies for each new user account opened on their platforms. Further, the U.S. Securities and Exchange Commission could develop standards for how publicly traded social media companies report authentic, active users in their filings. In 2016, for example, the Securities and Exchange Commission raised concerns about the ways Twitter reported daily and monthly active users on its platform.[13] Standards for reporting authentic users would not only protect investors but also force transparency about who is behind which accounts on these platforms.
Alternatively, social media providers might take steps on their own initiative to limit multiple or inauthentic accounts. If these firms believe it is in their interest to do so, there are numerous ways to reduce the propagation of inauthentic accounts. Such measures might include more rigorous identity authentication and verification, or perhaps a nominal cost for opening each account. Legitimate users would be only minimally inconvenienced by such a cost, but bot herders might find the cumulative costs prohibitive, or the procedural hurdles of arranging payment for innumerable false accounts a barrier. Further, the resulting money trail would be one more way to identify inauthentic accounts or even trace them back to the perpetrators.
Combined, such steps would align costs and incentives against opening thousands of artificial intelligence-backed inauthentic accounts. Whatever new technologies evolve, the producers of inauthentic content will always need a way to distribute their manipulative messages. These distribution channels will become a first line of defense against the weaponization of our reality.
Unfortunately, current incentive structures give social media firms little impetus to take such steps, as ad revenues reward larger numbers of active accounts whether those accounts are managed by real people or through automation. However, one could imagine users who value interacting with authentic persons giving their business and traffic to social media companies that take steps to ensure the authenticity of accounts. Authenticity could become a valued brand attribute for some social media platforms and be incentivized by consumer behavior.
If a third of Americans do in fact believe the government is concealing something about the made-up “North Dakota Crash,” imagine how many more would believe in this fictitious event after wading through a deluge of bots, generative adversarial network outputs, and deepfakes targeting them with precision.
Christopher Paul is a senior social scientist and Marek N. Posard is an associate military sociologist at the nonprofit, nonpartisan RAND Corporation. Paul’s research has addressed information operations and the information environment for more than a decade. Posard’s research has focused on military sociology, survey methods, and experimental methods.
Header Image: Artificial Intelligence (NeedPix)
Notes:
[1] “What Aren’t They Telling Us? Chapman University Survey of American Fears,” Chapman University, October 11, 2016, https://blogs.chapman.edu/wilkinson/2016/10/11/what-arent-they-telling-us/.
[2] Daniel Chandler and Rod Munday, A Dictionary of Media and Communication (Oxford: Oxford University Press, 2011).
[3] Tapan K. Panda, Tapas K. Panda, and Kamalesh Mishra, "Does Emotional Appeal Work in Advertising? The Rationality Behind Using Emotional Appeal to Create Favorable Brand Attitude," IUP Journal of Brand Management 10, no. 2 (June 2013): 7-23.
[4] Cade Metz, “Finally, a Machine That Can Finish Your Sentence,” New York Times, June 7, 2019.
[5] Matthew Field and Mike Wright, “Russian Trolls Sent Thousands of Pro-Leave Messages on Day of Brexit Referendum, Twitter Data Reveals,” The Telegraph, October 17, 2018, https://www.telegraph.co.uk/technology/2018/10/17/russian-iranian-twitter-trolls-sent-10-million-tweets-fake-news/.
[6] Miha Grčar, Darko Cherepnalkoski, Igor Mozetič, and Petra Kralj Novak, “Stance and Influence of Twitter Users Regarding the Brexit Referendum,” Computational Social Networks 4, no. 6 (2017), doi:10.1186/s40649-017-0042-6.
[7] David Ingram, “Facebook Says 126 Million Americans May Have Seen Russia-linked Political Posts,” Reuters, October 30, 2017, https://www.reuters.com/article/us-usa-trump-russia-socialmedia/facebook-says-126-million-americans-may-have-seen-russia-linked-political-posts-idUSKBN1CZ2OI. This story reports that Russian operatives made about 80,000 Facebook posts over the two years prior to the 2016 U.S. election, and that these posts were seen by roughly 126 million Americans. All signs suggest this engagement took place with minimal automation, that is, these posts were typed by human operatives using human-curated false personas. Imagine what might be achieved with automated processes opening accounts and establishing personas, and with automated content generation and posting. At machine scale, the same level of engagement might be achieved in orders of magnitude less time (days or weeks instead of years) and at orders of magnitude less cost (computer time is always cheaper than human labor).
[8] Timothy Thomas, “Russia’s Reflexive Control Theory and the Military,” Journal of Slavic Military Studies 17, no. 2 (2004): 237-256.
[9] “Russia, Reflexive Control, and the Subtle Art of Red Teaming,” Red Team Journal, October 2016, https://redteamjournal.com/archive-blog/2016/10/13/russia-reflexive-control-and-the-subtle-art-of-red-teaming.
[10] Tom Balmforth, Robert Coalson, and Glenn Kates, “Who Was Who? The Key Players in Russia’s Dramatic October 1993 Showdown,” Radio Free Europe, October 2, 2018, https://www.rferl.org/a/russia-players-1993-crisis/25125000.html; Christian Kamphuis, “Reflexive Control,” Militaire Spectator, June 21, 2018, https://www.militairespectator.nl/thema/strategie-operaties/artikel/reflexive-control.
[11] Michael Stuchbery, “Why Germany Will Never Forget the Stasi Era of Mass Surveillance,” The Local Germany, February 8, 2019, https://www.thelocal.de/20190208/what-the-stasi-show-about-an-unforgotten-era-of-mass-surveillance.
[12] James Vincent, “AI Tools Will Make It Easy to Create Fake Porn of Just About Anybody,” The Verge, December 12, 2017, https://www.theverge.com/2017/12/12/16766596/ai-fake-porn-celebrities-machine-learning; CNN, “Why It’s Getting Harder to Spot a Deepfake Video,” CNN Business, June 11, 2019, https://www.cnn.com/videos/business/2019/06/11/deepfake-videos-2020-election.cnn.
[13] Twitter, Inc., “Form 10-K for the Fiscal Year Ended December 31, 2016,” U.S. Securities and Exchange Commission, February 27, 2017, https://www.sec.gov/Archives/edgar/data/1418091/000156459017012077/filename1.htm (accessed December 6, 2019).