“The freedom of the press makes its influence felt not only upon political opinions but also upon all the opinions of men. It modifies customs as well as laws.”
—Alexis de Tocqueville
In October 2019, select U.S. officials offered closed-door congressional testimony regarding their knowledge of events surrounding Russian interference in the 2016 presidential election. Dr. Fiona Hill, a former adviser on President Donald Trump’s National Security Council, testified that Russian disinformation very likely influenced the documents used to acquire a surveillance warrant on members of then-candidate Trump’s campaign. A January 2018 Wall Street Journal op-ed by Daniel Hoffman, the Central Intelligence Agency’s former Moscow station chief, appears to support her assessment.
If even partially accurate, this is a significant development. It would force the national security enterprise to revise its understanding of disinformation’s potential to shape the national consciousness—a conversation that until recently has been defined by references to social media bots and Internet trolls.
Reporting on disinformation generally focuses on either violent extremists or hostile states deploying carefully crafted lies to influence portions of the civilian population by distorting their perception of the truth. But this was not always the case. In fact, this emphasis on public opinion is a relatively recent phenomenon. How did we get to this point, why is disinformation so prevalent, and what should the world expect from it going forward? The following analysis explores these increasingly important questions and concludes that the skyrocketing volume, reach, and subtlety of disinformation from both states and non-state actors will make it ever harder to combat at the policy level.
Disinformation: A Short History
In its simplest form, disinformation is a lie spread deliberately in pursuit of an explicit—often political—objective. It should not be confused with misinformation: disinformation is purposeful and originates with the source of the lie, whereas those it misleads become the unwitting propagators of misinformation. In other words, misinformation can be the product of a successful disinformation campaign.
The practice has had its place in every civilization, but its fundamental purpose of achieving political change limited its use throughout antiquity and into the dynastic age, when autocratic figures controlled the distribution of information. Even prominent campaigns of the early 17th century, such as those waged by Habsburg supporters during the Thirty Years’ War, were localized attempts to consolidate domestic power by controlling one’s own information environment. Such intrastate politics are a far cry from interstate campaigns designed to erode the legitimacy of distant ruling parties by altering public perception. Targeted lies were rarely aimed at the general populations of foreign powers, for two reasons.
First, the ability of such information to travel rapidly through large swaths of citizenry was severely limited. Second, even if the disinformation gained traction, an autocracy possessed the means to smash murmurs of insurrection rather decisively. Instead, early disinformation campaigns usually sought to deceive specific officials or misdirect the general intelligence estimates of governments. In the fifth century BCE, the Athenian general Themistocles used disinformation to fool King Xerxes of Persia into a hasty withdrawal across the Hellespont, with devastating results. In the following century, King Philip II of Macedon was notorious for employing spies who fed the Athenian council false information regarding his true designs on Athens.
The infamous Boston Massacre of 1770 demonstrated the opportunistic nature of localized propaganda in the wake of major events. Paul Revere’s sensationalized—and likely plagiarized—engraving of the massacre, though influential, still failed to convince a plurality of colonists to take up arms against the British crown at the time. Even if it had, this would be another Habsburgian example of localized, internal disinformation. Later, during the Great Game of the 18th and 19th centuries in Central Asia, spies and commercial espionage agents littered the continent. Numerous empires sought to map Asia and feed their imperial archives—to use a term coined by Harvard University professor Thomas Richards. These empires became both the distributors and recipients of disinformation as competing colonial powers such as Tsarist Russia and Napoleonic France vied for influence in the unforgiving reaches of Afghanistan, India, and elsewhere.
Still, throughout this period the most effective disinformation remained aimed at the state and its associated functionaries, where it could garner the greatest return on investment. It was not until the crumbling of empires after the Great War that the practitioners of disinformation began to witness a shifting base of power. Throughout the 20th century, decolonization and democratization placed imperial power in the hands of the people, redirecting the focus of groups bent on coerced political transformation from the governors to the governed.
Around this same time there emerged Edward Bernays, an Austrian-American nephew of Sigmund Freud who is often referred to as the father of propaganda. Bernays pioneered the mass psychology movement that shaped propaganda campaigns in the 20th century. Unlike disinformation, propaganda itself is not necessarily harmful—it was conceptualized as a means of educating the public on issues with which they were relatively unfamiliar. This applies to advertisements for the best dish soap as much as to war posters. Consequently, propaganda aimed at the mass base collided with disinformation aimed at the state, giving birth to new forms of influence campaigns.
Government organizations such as the Communist International and its successors certainly played a role in casting a wide net of anti-American propaganda during the Cold War, one that infested even the deepest reaches of the Middle East. In the United States, a 1980s Soviet intelligence initiative known as Operation Denver (also Operation Infektion) spread disinformation claiming the Pentagon had engineered the human immunodeficiency virus (HIV) that causes Acquired Immune Deficiency Syndrome (AIDS). As a Wilson Center report explains, such efforts were most effective against recipients already predisposed to entertain conspiracy theories painting the U.S. Government as a malicious entity. In other words, these campaigns did not change minds; they simply fed them a diet to which they were already accustomed.
That said, we know from declassified documents such as the Vasili Mitrokhin archive and the Venona papers that even though the Soviet Union had an interest in altering the mass psychology of the United States, most of the political intrigue between the two powers involved compromising government agencies and high-ranking officials. In short order, however, the connectivity provided by social media and the Internet would link foreign governments and extremists to their target populations in previously unimaginable ways.
When Public Opinion Becomes a Proxy
The wave of constitutional thinking hatched in the post-colonial era empowered individual citizens throughout much of the world. But with this empowerment came additional risk in the form of two political realities that would shape disinformation operations in the coming century and beyond. The first was that the levers of political and military power once restricted to a centralized cabal of government officials were now, more or less, in the hands of a voting public. The second was that malicious actors no longer needed to engage a state directly through high-risk attacks on public officials or military targets in order to influence a political process. Disinformation campaigns tailored to a population’s biases and fears could drive them to think, act, and ultimately vote in a certain manner. Public opinion became a proxy for attacking the state.
Some have referred to this phenomenon as the CNN effect, a term coined within the Australian Defence Force in the early 1990s. Coupled with the rise of the Internet and skyrocketing access to mass media content, this movement ruptured long-held assumptions regarding the use of information to effect rapid political change.
As others have observed, the core challenge of population-centric disinformation is not so much the medium (social media, etc.) as the manner in which consumers process and evaluate the information to which they are exposed—especially information they may already be inclined to believe. Vladimir I. Lenin understood a century ago that telling people what they want to believe is often the most effective way to gain their favor, and the cognitive science seems to be on his side: human beings experience dopamine rushes when validating their beliefs. Confirmation bias thereby becomes an addiction for which the current information environment provides a limitless prescription. Since the turn of the century, the momentum of disinformation has been staggering.
A 2019 University of Oxford report entitled “The Global Disinformation Order” found that at least 26 countries are using state-sponsored online propaganda to stifle dissenting opinions and amplify existing social, political, and economic fissures. The number of countries with at least one government agency taking part in a coordinated disinformation campaign increased from 28 in 2017 to 70 in 2019. Even though the current disinformation debate has taken place in the shadow of the 2016 election, these numbers demonstrate that Russia holds no monopoly on efforts to influence public opinion, and this proliferation of actors makes false narratives all the harder to detect.
High-ranking Soviet defector Viktor Suvorov explained decades ago how the Soviet Union’s Chief Directorate of Strategic Deception in the General Staff shaped messaging campaigns to be pleasing to the ears of their recipients, a practice referred to throughout the Cold War as perception management. Russia expert Fiona Hill, mentioned above, confirmed in her congressional testimony that these techniques endure. Their persistence demands an entirely different framework for understanding and countering disinformation. Many recommended prescriptions involve the state or private sector screening and filtering information in an effort to make it pure—a process likely to prove impossible as technological proliferation accelerates.
Mitigation and Challenges
Experts and policymakers have proposed everything from reining in tech giants such as Google and Facebook to criminalizing the intentional distribution of false information. While there is merit in each of these seemingly well-intentioned proposals, there is also the risk that such means will not achieve the desired ends. New countermeasures will undoubtedly lead to new methods of circumvention, and even propaganda-detecting algorithms cannot expose disinformation-by-omission or biased reporting from areas to which journalists have limited access—as was the case with Russia’s “counterterrorism zones” during the second Russo-Chechen War. It is hard to fact-check information that cannot be independently verified, and a fact-check that goes unseen is useless.
Some studies have found that the oft-recommended practice of corporations or governments granting privilege to approved news sources could actually exacerbate the factors behind disinformation’s success, such as distrust in institutions and confirmation bias. There is also little recourse when otherwise credible sources inadvertently include elements of disinformation in their reporting, as intimated in the opening of this article. Individual citizens will still seek out information in the manner they deem necessary and process it according to their values, education, and life experiences—particularly in liberal democracies that pride themselves on notions such as intellectual liberty and the free exchange of ideas.
The good news is that the world is taking notice of these old methods draped in new skin. The bad news is that many of the educational institutions founded during the Cold War to study these challenges were dismantled or deemed anachronistic after the dissolution of the Soviet Union. Harvard University, Columbia University, and the University of California, Los Angeles each housed centers that have long since been decommissioned or folded into nonspecific Eastern European and Eurasian studies departments.
Throughout the previous century, members of the defense and intelligence communities were expected to possess a working knowledge of Soviet strategic communications and active measures. The institutions that remain, such as Harvard’s Davis Center for Russian and Eurasian Studies and its associated Journal of Cold War Studies, now contribute to what has become a niche field of interest in the post-Cold War environment.
If the world is indeed experiencing what some consider Cold War 2.0, then security professionals consumed by counterinsurgency thought for the last 20 years are entering the game of interstate deception as it plays out against the backdrop of a rapidly evolving technological landscape. These problems are compounded by the fact that the current environment is not binary; it involves numerous competitors, such as China and Iran, each vying aggressively for influence in the information domain. Consequently, achieving substantive change in the fight against disinformation will require more than whole-of-government efforts to control the distribution of data—it demands educational initiatives aimed at improving how information is processed upon reception.
Conclusion
Challenges associated with disinformation are constantly evolving, but they are not unpredictable. The nature, and therefore the intent, of disinformation remains unchanged. As highlighted in the introduction, French philosopher Alexis de Tocqueville alerted readers to the risk of a CNN effect nearly 200 years ago in his treatise on the American experiment, Democracy in America. Awareness of disinformation’s history, purpose, and nature is the best defense against its divisive effects.
Because the transfer of power from the state to its citizens created both liberty and liability, competing powers will have as much interest in shaping the perceptions of their opponent’s population as they will in weakening its government through diplomatic, military, or economic action. For better or worse, popular opinion is now as fragile and as mercurial as it is central to interstate competition, and the way we consume and interpret an increasingly dizzying amount of information is ground zero of the battle.
Western governments and corporations will seek ways to counter mounting threats related to disinformation, but they can neither eradicate its existence nor dictate how information is processed by its consumers. The fight against disinformation is a generational struggle that will be won only through education and long-term cultural shifts in the manner in which populations seek, consume, and validate information. In addition to reviving academic centers dedicated to studying disinformation, taking an occasional break from the 24-hour news cycle and picking up a dusty book is a good place to start pushing back against disinformation’s harmful effects on all of us.
Michael P. Ferguson is an officer in the U.S. Army with operational experience throughout Europe and the Middle East. He has a research background in Soviet and strategic studies, and he often writes on national security issues related to historical trends. The views expressed here are his own and do not reflect the policies or positions of the U.S. Army, the Department of Defense, or the U.S. Government.
Header Image: The entrance to the Kaspersky Building and the online disinformation war. (Kudryavtsev/AFP)