Illusive Blog January 11, 2021

The Johari Window: How Known Unknowns Led to the Largest Cybersecurity Breach of National Security in U.S. History

By Ofer Israeli
Manufacturing, Attack Surface Reduction

“Therefore just as water retains no constant shape, so in warfare there are no constant conditions.” -Sun Tzu

This article offers a perspective on the recent SolarWinds breach that differs from the growing number of articles on the attacks. It also proposes a different approach to adversary detection: detecting the constants in a breach using the concept of active defense described in the new MITRE Shield framework. The idea is that blue teams should detect lateral movement and living-off-the-land techniques after the adversary has established a beachhead, instead of relying solely on detecting the initial attack using known knowns.

Introduction

Einstein, a $5 billion government program meant to operate as the cyber forward operating base (FOB) for our nation’s government networks, failed against the recent supply chain compromise of SolarWinds. The breach resulted in backdoors being pushed out to 18,000 SolarWinds customers in an update designed to give a hostile nation state access to any network on which it was installed.

These backdoors (collectively referred to now as SUNBURST) were subsequently installed on numerous government and commercial networks, including the U.S. Treasury, the National Nuclear Security Administration (NNSA), universities, and commercial companies such as Microsoft and FireEye.

But the failure of Einstein to detect the command-and-control (C2) traffic from the backdoors phoning home on the government networks it was designed to protect was not due to any inordinate sophistication. The tactics and techniques used by the adversaries are known and have been documented in the MITRE ATT&CK framework for years. What went wrong?

Einstein, a unified threat management (UTM) solution built by the Department of Homeland Security (DHS) and subsequently handed over to the Cybersecurity and Infrastructure Security Agency (CISA), was designed to fail. It used signatures of known malicious IP addresses and malware for its detection engine, was never connected to the National Institute of Standards and Technology (NIST), from which it was designed to retrieve the latest threat updates and signatures, and was deployed to only 5 of the 23 government agencies mandated to deploy it. It is possible this was a result of the GAO report, which, after testing Einstein against 489 different attacks, found it to be only 6% effective. [1]

The signature-based approach to intrusion detection systems (IDSs) dates back more than twenty years and includes the free, open-source projects Snort and Snort-Inline, as well as their predecessor, Shadow, an early pattern-based detection system built on tcpdump by the U.S. Navy in the late 1990s.

Signature-based detection systems, both open source and commercial, have now largely become antiquated. They are being replaced by technologies built on supervised and unsupervised machine learning models, which move us away from trying to detect so-called known knowns: known exploits, known shellcode, known payloads, and known IP addresses.

Unless we stop trying to detect known knowns (known exploit patterns such as payloads and C2 IP addresses) and move to a different methodology of active defense, in which we detect lateral movement after the beachhead is established, we will continue to repeat a flawed history. The fact of the matter is that trying to detect cybersecurity breaches based on known tools and IP addresses is like the military trying to detect enemies based on the bullets they use.

“Let’s move away from this archaic mindset of attempting to detect when the enemy has fired the first shot and instead move towards detecting what they do afterwards within a synthetic environment using active defense.”  – Alissa Knight

The fact is, today there are two types of organizations: those that have been breached and those that will be. It is no longer a question of if, but when; and when it happens, organizations attempting to detect known knowns will suffer the same fate as the victims of the SolarWinds compromise. Organizations must move to a posture of active defense, using deception technology to detect lateral movement and the other techniques used in living off the land.

Johari’s Window

Intelligence analysts in the U.S. intelligence community (IC) have long used analytical techniques described by two American psychologists, Joseph Luft and Harrington Ingham, in a concept named the Johari Window, created in 1955. [2] The Johari Window popularized the concepts of known knowns, known unknowns, and unknown unknowns, and was adopted by the U.S. IC to identify what it called informational blind spots: gaps in what we know and don’t know when analyzing data.

As its name suggests, the Johari Window is represented as a four-paned window: two of the panes represent what is known to one’s self, and the other two represent what is unknown to one’s self, including what others know about us that we do not.

For the purposes of this article it is not necessary to delve fully into the conceptual framework underpinning the Johari Window. However, the ideas of known knowns and known unknowns come from this concept, so it is worth taking some time to demystify them, at least at a superficial level.

Figure 1 shows the original Johari Window while Figure 2 shows a modified Johari Window applied to cybersecurity created by Amazon in its AWS Incident Response white paper. [3]

Figure 1: The Original Johari Window
Source: communicationtheory.org

Figure 2: The Johari Window modified for application to cybersecurity
Source: Amazon

In the modified Johari Window applied to cybersecurity, the window looks the same except for the names of the quadrants and how they are defined. While Amazon adapted the Johari Window for an AWS context and applied it to APN partners, here it is adapted to cybersecurity more generally.

Johari Window quadrants in incident response:

  1. Obvious: Known knowns. Vulnerabilities your organization and others are aware of.
  2. Internally Known: Known unknowns, meaning vulnerabilities known to you but not to other organizations. For example, a zero-day exploit that you are aware of because you discovered it, and that has not been published, discussed in any other forum, or exploited in the wild.
  3. Blind Spot: Unknown knowns. Vulnerabilities that others, including adversaries, are aware of but your organization is not. For example, a new vulnerability that has been discovered and is being exploited in the wild. The vendor has patched the vulnerability, but your organization is unaware of it and has not downloaded and applied the patch. A recent example is the Citrix vulnerability that was actively exploited in the wild because organizations had not applied the patch.
  4. Unknown: Unknown unknowns. Vulnerabilities that neither your organization nor others, including adversaries, are aware of. These are vulnerabilities that may yet be discovered and actively exploited, and for which no detections are available because no patterns or signatures exist.

Network Threat Detection

Detection of the tactics, techniques, and procedures (TTPs) used by adversaries can be performed using three separate types of detection mechanisms: (1) detecting known patterns/signatures (legacy intrusion detection systems and antivirus), (2) detecting deviation from expected behaviors (machine learning), and (3) detecting interaction with synthetic assets (deception technology) (Figure 3).

Figure 3: Three network threat detection techniques

Source: Knight Ink

The traditional approach to network threat detection began with the introduction of intrusion detection systems (IDSs), which used a database of patterns/signatures for already known exploits, malware, or known bad IP addresses. Signature detection systems were made popular in the open-source community with projects such as Shadow IDS and Snort IDS, the latter created by Martin Roesch, who later went on to found Sourcefire (later sold to Cisco). Commercial off-the-shelf (COTS) solutions such as ISS RealSecure, Top Layer, IntruVert, and other commercial IDS/IPS products took the same signature-based approach to threat detection.

However, pattern-based detection systems have historically been criticized for their high false positive rates. Packets that the IDS flagged as part of an attack because they matched a particular signature in the packet headers or payload often turned out to be innocuous. Pattern-based systems also choked under high load and were unable to keep up with wire-speed demands as internal networks began moving to 10 Gbps with an expectation of no packet loss.
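To make that limitation concrete, here is a minimal sketch of pattern-based payload matching. It is purely illustrative: the signature names and byte patterns are invented, and this is not how Snort or any other real IDS is implemented. It simply shows why a previously unseen backdoor is missed entirely while benign data that happens to contain a known pattern triggers a false positive.

    # Minimal, hypothetical sketch of signature-based detection (invented
    # signatures, not real IDS rules). A payload is flagged only when it
    # contains a byte pattern already present in the signature database.

    SIGNATURES = {
        "example-nop-sled": b"\x90" * 16,          # classic x86 NOP sled
        "example-c2-beacon": b"GET /beacon?id=",   # made-up C2 request pattern
    }

    def match_signatures(payload: bytes) -> list[str]:
        """Return the names of every signature found in the payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    # A novel implant with no published signature sails straight through ...
    print(match_signatures(b"opaque traffic from a backdoor nobody has seen yet"))  # []

    # ... while harmless data quoting a signature string is a false positive.
    print(match_signatures(b"this log line quotes 'GET /beacon?id=42' for debugging"))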

Active Defense

MITRE Shield is a matrix designed to help defenders instrument their networks around a new concept: defending a contested network through active defense. Shield is the brainchild of MITRE, the product of 10 years of analysis MITRE performed of adversarial lateral movement on its own networks. The idea behind active defense is that the defender uses deception technologies to create a synthetic world with which the adversary then interacts. This permits detection of the adversary’s beachhead on the network as an alternative to traditional network threat detection solutions that can produce false positives. If a user is attempting to authenticate with decoy credentials or is interacting with a decoy server, the activity simply cannot be legitimate.
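As a rough illustration of why decoy interaction is such a high-fidelity signal, the sketch below stands up a bare-bones decoy listener. It is a toy, not how any MITRE Shield technique or commercial deception platform is implemented, and the port and addresses are invented; the point is simply that a decoy serves no business purpose, so every connection to it is worth an alert.

    import socket
    from datetime import datetime, timezone

    DECOY_ADDR = ("0.0.0.0", 2222)   # hypothetical decoy "SSH" port; serves no real purpose

    def run_decoy_listener() -> None:
        """Accept connections on a decoy port and raise an alert for every one.

        Because no legitimate user or service has any reason to touch the decoy,
        a single connection is a high-confidence indicator of lateral movement.
        """
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(DECOY_ADDR)
            srv.listen()
            while True:
                conn, (src_ip, src_port) = srv.accept()
                with conn:
                    print(f"[ALERT] {datetime.now(timezone.utc).isoformat()} "
                          f"decoy touched from {src_ip}:{src_port}")
                    # In practice this alert would be forwarded to a SIEM/SOAR pipeline.

    if __name__ == "__main__":
        run_decoy_listener()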

Had the U.S. government employed active defense on its networks, a concept it has employed in its own weapons systems over the last century, instead of trying to detect known knowns, the adversaries would have been detected sooner, based on the techniques they used as they moved laterally around the network.

Recently, MITRE has begun to merge the ATT&CK model with Shield to paint a more complete picture of the relationship between the tactics and techniques used by adversaries and those adopted by defenders to detect them.

Shield’s tactics cover seven containers:

  • Channel, which contains techniques used to usher adversaries down a predefined path in the network, away from production systems, using decoys;
  • Collect, which includes techniques used to gather information about the TTPs adversaries employ to achieve their ultimate goal;
  • Contain, which includes techniques to relegate an adversary to a specific area of the network (a secure enclave) under the defender’s control, limiting their potential to move laterally;
  • Detect, which leverages network and endpoint detection and response (NDR and EDR) to lower the mean time to detection (MTTD) and mean time to response (MTTR) for an adversary;
  • Disrupt, which includes techniques that prevent the adversary from doing what they came onto the network to do, using tools that make the synthetic environment indistinguishable from production;
  • Facilitate, which covers techniques that present vulnerable systems for the adversary to focus on instead of production servers;
  • Legitimize, which covers the techniques used to add authenticity to the synthetic environments created by the deception technology, such as synthetic credentials, systems, and other “breadcrumbs” planted by that technology.

Understanding Deception Technology

Deception technology is a relatively new and growing product space designed to assist in the automated creation and management of decoy credentials, systems, processes, and other synthetic “breadcrumbs” across the network and its systems, distracting adversaries who have established a beachhead away from production systems.
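To give a feel for what a “breadcrumb” is, the sketch below plants a decoy credential where an attacker harvesting a compromised host would routinely look, and flags any later use of it. Everything here is invented for illustration (the account name, file path, and decoy host), and real deception platforms automate this across thousands of endpoints rather than doing it by hand.

    # Hypothetical sketch of planting a credential "breadcrumb" on an endpoint.
    # The decoy account exists for no business purpose; any later attempt to
    # authenticate with it is treated as evidence of an intruder harvesting
    # credentials from the host.
    import json
    from pathlib import Path

    DECOY_USER = "svc_backup_admin"        # invented decoy account name
    DECOY_PASS = "Autumn2020!"             # invented lure password (never a real secret)
    BREADCRUMB = Path.home() / ".deploy" / "legacy_backup.json"   # invented path

    def plant_breadcrumb() -> None:
        """Write a fake saved-connection file that lures the attacker to a decoy."""
        BREADCRUMB.parent.mkdir(parents=True, exist_ok=True)
        BREADCRUMB.write_text(json.dumps({
            "host": "backup01.corp.example",   # points at a decoy server, not production
            "username": DECOY_USER,
            "password": DECOY_PASS,
        }, indent=2))

    def is_decoy_logon(username: str) -> bool:
        """Flag any authentication attempt that uses the planted decoy account."""
        return username.lower() == DECOY_USER

    if __name__ == "__main__":
        plant_breadcrumb()
        print(is_decoy_logon("svc_backup_admin"))   # True -> raise an alert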

Deception technology is increasingly part of the arsenal of network security controls that CISOs use to detect an adversary early in a breach, as an alternative to legacy intrusion detection systems. The idea is akin to identifying the enemy once they are already there, instead of trying to detect them before they arrive.

Conclusion

It is clear that, at least in the case of the SolarWinds backdoor compromises, the U.S. government should not have been relying on the pattern-detection capabilities that Einstein was built upon.

Instead of attempting to detect known knowns, organizations should stop trying to detect the attacks themselves, as attacks will continue to come, evolve, and become harder to detect over time. They should assume the adversary is already on the network and use deception technology to create a synthetic environment for the adversary to interact with, one that detects the adversary’s presence.

Until we get away from the old way of thinking that we can detect and deter an attack at the initial point of entry, and accept that the threat is already inside the network or soon will be, we will continue to see headlines like SUNBURST.

Detection needs to happen as the adversary interacts with the environment once they are already there, through detection of the techniques used to live off the land and move laterally, so that we can finally detect and respond faster instead of not at all.

Organizations need a well-formed defensive strategy that includes good cybersecurity hygiene, such as a documented and regularly updated patch and vulnerability management program, routine annual risk assessments, regular penetration tests, and a regularly updated and maintained asset management system.

Indispensable to any cybersecurity program is a solution that is capable of leveraging deception to detect lateral movement early and capable of deploying decoy accounts, content, credentials, networks, personas, processes, and systems.

In addition to detecting lateral movement effectively and quickly, your layered defense model should include the ability to manage your attack surface, so that it is reduced to a manageable level.

The attack surface management capability should be able to automatically discover and map the environment, its “crown jewel” assets, and the attack paths that lead to them; identify the conditions exploited for lateral movement; and find shadow admin accounts, local admins, domain user credentials, and saved connections to the crown jewel assets it discovers.
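As a simplified illustration of attack path mapping, the environment can be modeled as a directed graph in which an edge means that credentials, admin rights, or a saved connection found on one host allow a hop to another; a breadth-first search then surfaces the shortest path from an ordinary workstation to a crown jewel. The hosts and edges below are invented, and this is a sketch of the general idea rather than any vendor’s discovery engine.

    from collections import deque

    # Invented example environment: an edge A -> B means credentials, admin
    # rights, or a saved connection found on A allow an attacker to hop to B.
    LATERAL_MOVEMENT_GRAPH = {
        "workstation-17": ["fileserver-2", "jump-host"],
        "jump-host":      ["sql-prod-1"],          # cached domain admin credentials
        "fileserver-2":   [],
        "sql-prod-1":     ["crown-jewel-db"],      # saved connection to the crown jewel
        "crown-jewel-db": [],
    }

    def shortest_attack_path(start: str, target: str) -> list[str] | None:
        """Breadth-first search for the shortest lateral-movement path."""
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for nxt in LATERAL_MOVEMENT_GRAPH.get(path[-1], []):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(shortest_attack_path("workstation-17", "crown-jewel-db"))
    # ['workstation-17', 'jump-host', 'sql-prod-1', 'crown-jewel-db']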

Deception technology should be used with caution. Many of the deception solutions on the market are agent-based and can be easy to identify, making it clear to the adversary that they are in a synthetic environment created by decoys.

Finally, the deception technology used should ideally be agentless and able to self-destruct, eliminating any traces of itself and limiting the evidence that a synthetic environment was ever created.

Sources

[1] The U.S. government spent billions on a system for detecting hacks. The Russians outsmarted it https://www.washingtonpost.com/national-security/ruusian-hackers-outsmarted-us-defenses/2020/12/15/3deed840-3f11-11eb-9453-fc36ba051781_story.html
[2] The Johari Window Model https://www.communicationtheory.org/the-johari-window-model/
[3] AWS Incident Response: https://d1.awsstatic.com/whitepapers/aws_security_incident_response.pdf
