Social Engineering as a Threat to Societies: The Cambridge Analytica Case

Justin Sherman and Anastasios Arampatzis


Social engineering broadly refers to the psychological manipulation of people into acting in certain ways or divulging confidential information. It’s a technique that exploits our cognitive biases and basic instincts (e.g., trust) for the purpose of information-gathering, fraud, or system access. Sometimes referred to as “human hacking,” social engineering is a favorite tool of hackers worldwide. While it was historically practiced face-to-face, over the phone, or through printed writing, social engineering can now occur on societal scales through social media and other internet platforms. Platform designers may never intend for users’ data to be leveraged for exploitation and political manipulation, but the recent revelations regarding Cambridge Analytica’s use of Facebook data are only one indication of this global threat.

The Cybersecurity Context

Employee behavior has a serious impact on organizational cybersecurity –– which means, by extension, that social engineering does as well. The ways in which we frame cybersecurity and educate employees about it fundamentally shape an organization’s security. Leveraging cultural concepts can help different segments of an organization work towards effective information security, as can designing education around human cognitive biases. These principles fall under the umbrella of an information security culture, defined as the totality of patterns of behavior that contribute to the protection of an organization’s information.

Part of a security culture requires an awareness of social engineering. Understanding that hackers try to actively manipulate our behavior is essential to daily risk management and the development of cyber “instincts.” When employees do not see themselves as part of this effort, as Andersson and Reimers articulate, they will act in ways that ignore security interests.

Cognitive Exploitation

Social engineering techniques are based on specific attributes of human decision-making known as cognitive biases. These biases, sometimes called "bugs in the human hardware," are by-products of the brain taking shortcuts to quickly process information. They’re advantageous in an evolutionary sense, but they also leave us open to exploitation through social engineering. For instance, take representativeness –– our tendency to group similar-looking stimuli together. Each time we see a car, we don’t have to remember the specific color or make; our brain looks at the object, sees its four wheels, movement, and general shape, and says “car.” This is a bias social engineers can exploit in the cyber domain. We might receive many emails from Apple or Amazon, but we’re not necessarily going to look too closely at a fake one bearing the same logo; our brain simply registers “Amazon email,” and we click the link or type in our credit card number.
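To make the exploit concrete, consider that a fraudulent “Amazon” email often differs from a genuine one only in the details our pattern-matching brains skip, such as the exact sender domain. The sketch below is a minimal, illustrative Python example (the trusted-domain list and similarity threshold are hypothetical, not drawn from any real product) of the kind of check a machine can perform that our brains tend not to: flagging sender domains that merely resemble a trusted brand.

    from difflib import SequenceMatcher

    # Hypothetical list of domains the organization actually trusts.
    TRUSTED_DOMAINS = {"amazon.com", "apple.com", "paypal.com"}

    def lookalike_score(domain: str, trusted: str) -> float:
        """Similarity between 0 and 1; 1.0 means the strings are identical."""
        return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

    def flag_sender(address: str, threshold: float = 0.8) -> str:
        """Classify a sender domain as trusted, a suspicious lookalike, or unknown."""
        domain = address.rsplit("@", 1)[-1].lower()
        if domain in TRUSTED_DOMAINS:
            return "trusted"
        for trusted in TRUSTED_DOMAINS:
            # High similarity without an exact match is the classic lookalike
            # pattern our brains gloss over (e.g., "amaz0n.com").
            if lookalike_score(domain, trusted) >= threshold:
                return "suspicious lookalike of " + trusted
        return "unknown"

    for sender in ["orders@amazon.com", "security@amaz0n.com", "news@example.org"]:
        print(sender, "->", flag_sender(sender))

Real email security tools do far more than this, of course; the point is simply that the detail a defender needs to check is precisely the one the representativeness bias tells our brains to ignore.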

Attacks such as the one described above are used to steal employees' confidential information, but they’re not the only mechanism by which cognitive exploitation can occur. Some social engineers may call a company, for example, and use manipulation over the phone. This has proven quite effective as well; individuals who aren’t properly taught to combat such attacks will be unaware they’re even occurring.

Principles of Influence

Social engineering relies heavily on the six principles of influence established in Robert Cialdini’s Influence: The Psychology of Persuasion:

  1. Reciprocity – People tend to return a favor, hence the pervasiveness of free samples in marketing.

  2. Commitment and consistency – If people commit to an idea or goal (orally or in writing), they are more likely to honor that commitment because it’s now congruent with their self-image. Even if the original incentive or motivation is removed after they have already committed, people will continue to honor the agreement.

  3. Social proof – People will do things that they see others doing.

  4. Authority – People will tend to obey authority figures, even if they’re asked by those figures to perform objectionable acts.

  5. Liking – People are easily persuaded by others that they like.

  6. Scarcity – Perceived scarcity will generate demand. For example, by saying offers are available for a "limited time only," retailers encourage sales.

Other techniques of social engineering and manipulation include elicitation, the subtle and indirect gathering of information; framing, the presentation of information in a particular context to shape how it is interpreted; pretexting, the invention of stories and excuses to justify asking questions (also referred to as emotional positioning); and cold calling, gaining information through a seemingly random interaction. Gaslighting is another technique of particular interest to social engineers; it involves misdirection, persistent denial, and lying to confuse a target and disrupt their sense of reality.

While influence is complicated in any context, from marketing to so-called human hacking, the latter case gives the attacker a fundamental advantage: offense is one-to-many, since a single campaign can reach many targets at once, whereas defense is many-to-many, since every defender must counter every possible approach. It’s much more advantageous to be the attacker than the target.

 

Cambridge Analytica

Since the 2016 presidential election in the United States, many news agencies have discussed the possibility that social engineering was used as part of influence campaigns and how it may have affected voters. To this point, the recent revelations surrounding Cambridge Analytica and its use of Facebook user data don’t just raise important questions about data privacy and online consent; they also demonstrate the ease with which companies can design and execute social engineering campaigns against an entire society.

We already live in a world where subtle advertising is everywhere. Entertainment companies slip product images and references into everything from song lyrics to television shows. Some companies don’t just advertise their products but go so far as to work them into the plot itself. Social media companies buy our information from a host of different websites to provide customized advertisements for products we already want (whether we know it or not). Many online retail companies even adjust prices depending on our identity, in a largely legal process known as price discrimination.

It all comes down to knowing your targets and obstacles well enough to maneuver with the least effort and still accomplish your goal. This is true for any type of influence campaign, which is why Cambridge Analytica demonstrated that social engineering, in the cybersecurity sense, isn’t just a threat to customer information, business operations, or military secrets. Social engineering is a threat to political stability and free, independent discourse. The subtle advertising techniques currently used on social media platforms raise enough ethical questions as is; political manipulation and the spread of mis- and disinformation greatly amplify those existing problems.

The Threat to Global Societies

Is it possible that social engineering will fuel a regional war or uprising somewhat like the 2011 Arab Spring? Could foreign adversaries trick a nation-state’s citizens into voting against their national interest? If a leader wanted to manipulate their own citizens, would that be possible as well? The answer to all of these questions is yes. Social engineering through pervasive digital platforms is a serious threat.

Core to global ideas of democracy is the principle that power is derived from the people, for the people. Citizens may speak their minds and are provided forums in which to have open, protected, and free dialogue. Accountability, for government officials, corporations, and private citizens alike, is a similarly important principle. Mass data collection without accountability, however, puts these principles in jeopardy.

Do you want candidate A to be elected into office? Find out which populations are most susceptible to political belief A, and then target them with mis- and disinformation that pushes them in the desired direction. Leverage confirmation bias and exploit cognitive dissonance to encourage political extremism, and then use principles of framing and liking to reinforce those ideas within like-minded groups. In many cases, such targeted influence campaigns against specific populations can even make us think the ideas were our own.

This is precisely what happened with Cambridge Analytica and with Russian interference in the 2016 U.S. presidential election, where cognitive flaws such as confirmation bias were exploited to encourage certain voting behavior. It’s social engineering without the need for reconnaissance; the data is already there, just waiting to be analyzed.

To be sure, not every instance of disinformation or political manipulation is someone else’s fault. Not all social media bias is the platform’s fault either. As pointed out in Kartik Hosanagar’s 2016 Wired article, Facebook’s echo chambers are symptomatic of our own existing biases. “We deliberately choose actions that push us down an echo chamber,” Hosanagar explains. “First, we only connect with like-minded people and ‘unfriend’ anyone whose viewpoints we don't agree with, creating insular worlds. Second, even when the newsfeed algorithm shows cross-cutting content, we do not click on it.” Indeed, Facebook is following a demand for content, one that values self-verification, validation, and building networks of trust. The company is, arguably, just doing what its customers want.

So, the threat to society isn’t from organizations like Facebook or Google themselves, although there are certainly ethical problems to be addressed regarding their data-collection and advertising practices. The real threat, as evidenced by the Cambridge Analytica scandal and the U.S. indictments of the Russian Internet Research Agency, is how malicious actors can exploit these platforms.

For all of the harm that occurs on and through the internet, it has also aided political movements, provided platforms for otherwise-oppressed voices, and empowered checks on corrupt regimes. But large-scale social engineering disrupts all of these positive effects. Exploiting human trust, injecting mis- and disinformation into legitimate public discourse, and distorting perceptions of reality via gaslighting can push societies to the fringe. Truth is questioned more than ever before. Time and resources are taken away from legitimate media coverage and instead spent on refuting patently false claims. Political polarization appears to grow, whether as a product of true polarization, disproportionate media coverage of the extremes, or both.

The sharing and re-sharing of news articles on social media, and the planting of subtle advertisements, petitions, and political messages in unaccountable, secretive ways, enable political distortion. Trust in the political system is eroded, which can lead to the election of extremist political parties, as happened in Hungary and Austria, or to referendum outcomes that challenge political and economic structures, such as Brexit. We can imagine many other distorted outcomes that have yet to be realized.

Social engineering, in the cybersecurity sense, has direct implications for societies around the planet –– especially as it’s enabled by mass data-collection by the private sector. This is a threat to political stability that needs addressing.

Looking Forward

The key to counteracting social engineering is awareness, since social engineers target our inattention, our ignorance, and our fundamental biases. This awareness approach is twofold: first, we need to develop strategies and good practices to counter the social engineering itself; second, we need to develop sustainable policies to mitigate its effects.

In a cybersecurity context, it’s not as easy to mitigate social engineering as it is to mitigate software and hardware threats. On the software side, we can purchase intrusion detection systems, firewalls, antivirus programs, and other solutions to maintain perimeter security. Attackers will certainly break through at one point or another, but strong cybersecurity products and techniques are readily available.

When it comes to social engineering, we can’t just attach a software program to ourselves or our employees to remain secure. As Christopher Hadnagy points out in his book Social Engineering: The Art of Human Hacking, social engineering mitigation requires a comprehensive, human-oriented approach:

●      Learning to identify social engineering attacks: Knowing the signs of manipulation, and understanding the consequences of clicking a malicious link, are fundamental to self-protection. Proactive awareness is also essential to a dynamic defense, so employees should stay informed on the latest attackers and their favored techniques, in addition to training on the ways in which social engineering occurs (e.g., through simulations).

●      Creating a personal security awareness program: Security awareness in most companies is at the “failure” stage, mainly because security awareness is impersonal to employees. Trainers often just discuss malicious emails instead of showing what a phishing attack looks like from both the victim’s and the attacker’s computers. The goal should be to make employees think of security not just as a compliance requirement, but on personally and professionally relevant levels. And, because it is a “program,” education should always be ongoing.

●      Creating awareness of the value of the information that is being sought by social engineers: Humans have a built-in desire to help those we feel need it, but this is also how social engineers manipulate a target into disclosing information. Making information security contextually relevant is therefore critical. For those in accounting, educators could discuss the importance of confidentiality and trust. For those in business operations, educators could discuss sensitive data in the context of risk and profits. By increasing the value placed on information, employees will be more motivated to protect it.

●      Keeping software updated: With the interaction of internet-connected services, cloud platforms, software and hardware tools, internet of things devices, and other digital technologies, a hacker has many possible attack routes into a given system. In many cases, it’s not enough to have firewalls, intrusion detection systems, and email filtering; patching outdated and vulnerable software is essential (a brief illustrative check follows this list). Less secure technologies should be discarded in favor of stronger ones as well.

●      Developing outlines: Walking employees through potential social engineering scenarios is enormously important. As previously mentioned, this is best done through simulation and gamification and is assisted through the use of personally relevant framing.

●      Learning from social engineering audits: Hiring security professionals to “penetration test” people, policies, and physical perimeters for vulnerabilities can help inform security improvements and give a sense of real attacks.
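On the “keeping software updated” point above, even a lightweight, scheduled check can surface unpatched components before an attacker does. As a minimal illustration (assuming a Python environment managed with pip; the reporting format here is just one possibility), the following sketch lists installed packages for which newer releases exist:

    import json
    import subprocess

    def outdated_packages():
        """Ask pip which installed packages have newer versions available."""
        result = subprocess.run(
            ["pip", "list", "--outdated", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    for pkg in outdated_packages():
        # Each entry reports the installed version and the latest release.
        print(pkg["name"] + ": " + pkg["version"] + " -> " + pkg["latest_version"])

A report like this is only a starting point; the organizational habit of acting on it regularly is what an awareness program is meant to build.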

The aforementioned points, while focused on a culture of cybersecurity, can transfer over to social engineering in the political context. Awareness is not about a 40-, 60-, or 90-minute program once a year. It’s about creating a set of standards and a way of thinking that is used all the time; security is holistic and cultural rather than situation-specific. This should be the same approach we take to combat social engineering through online platforms.

We need to make social engineering a part of a broader, global cyber education. To combat social engineering on a societal scale, we must first educate people about the vulnerability of modern communication platforms (e.g., social media); the reasons they might be used for manipulation (e.g., customized advertising, Cambridge Analytica); and the ways in which that manipulation occurs (e.g., fake news). Awareness, once more, is key.

 

Conclusion

Of course, it can be difficult in today’s information age to determine what is real and what is meant to manipulate our beliefs. Online platforms such as Facebook enable malicious actors to socially engineer users on a large scale, at relatively low costs. Entire societies, rather than just groups of individuals, can be vulnerable.

Cybersecurity training often promotes critical thinking as the go-to defense against such social engineering threats, but we must achieve awareness of the issue before we can bolster identification and mitigation capabilities. This means we must recognize the bias of circulated information, such as within our bubbles of common thought, to paraphrase Christopher Hadnagy. As many recent elections have shown us, the information we read may not, in fact, be real. Thus, breaking echo chambers, getting offline, and engaging with other communities is essential as well. We might disagree with others’ political views, but that doesn’t mean we should allow malicious actors to subtly engineer our beliefs.


Justin Sherman is studying Computer Science and Political Science at Duke University, with a focus on cybersecurity, warfare, and governance. Justin is a Cyber Policy Researcher at the Department of Defense- and NSA-backed Laboratory for Analytic Sciences, where his work focuses on federal cybersecurity policy, industry security benchmarks, and national cyber strategy.

Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years of experience in managing IT projects and evaluating cybersecurity. He is an informatics instructor at AKMI Educational Institute.

The views expressed here are their own.



