Adversary-Based Threat Modeling

Julian Cohen
5 min read · Dec 24, 2019

Most threat models start with attack surface or critical assets. Those threat models are useless and lead to bad decision-making. In this post, I demonstrate how to develop more accurate and actionable threat models, based on our adversaries.

Process

  1. Determine our adversaries
  2. Understand our adversaries
  3. Build their playbooks from threat intelligence
  4. Design defenses for their playbooks
  5. Prioritize defenses based on adversary economics
  6. Predict future adversary evolution

Frameworks and Best Practices Don’t Work

Security teams often choose frameworks, best practices, and what feels most secure over what is actually necessary to defend against an adversary. Whether due to ignorance, poor critical thinking, or lack of information, security teams are making ineffective decisions. Adversary-based threat modeling is designed to help security teams make valuable defensive decisions and defend them with evidence.

Understanding Our Organization

Most threat models start with an inventory of critical assets or an attack surface map, but this is not how adversaries see our organization. You want to understand how adversaries view our organization in order to figure out how our organization falls onto adversary target lists. What industries is our organization in? What size is our organization in terms of employees and assets? Is our organization public or private? What countries does our organization operate in? What externally visible value does our organization have? Use the answers to these questions to determine who our adversaries are.
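As a minimal sketch (the field names and the attribute-to-adversary mapping here are illustrative assumptions, not from the article), the answers to these questions can be captured as an organization profile that maps to the generic adversary groups described below:

```python
# Hypothetical sketch: encode the organization-profile questions as data,
# then map attributes to generic adversary groups. The mapping rules are
# illustrative examples only.

ORG_PROFILE = {
    "industries": ["finance"],
    "employees": 5000,
    "public_company": True,
    "countries": ["US", "DE"],
    "external_value": ["customer PII", "payment data"],
}

def candidate_adversaries(profile):
    """Very rough mapping from organization attributes to adversary types."""
    adversaries = set()
    # Monetizable data tends to attract financially motivated criminals.
    if {"customer PII", "payment data"} & set(profile["external_value"]):
        adversaries.add("criminal enterprises motivated by money")
    # Valuable IP tends to attract state-sponsored espionage.
    if "intellectual property" in profile["external_value"]:
        adversaries.add("foreign intelligence agencies motivated by IP")
    return adversaries
```

The point is not the specific rules, which any real team would replace with their own intelligence, but that the profile makes the "who would target us" reasoning explicit and reviewable.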

Understanding Our Adversaries

The most important part of this threat model is understanding our adversaries. If you don’t understand our adversaries, you can’t defend against them.

Create a set of dossiers of our adversaries. They don’t have to be for specific groups (when you first start, you might not know specific adversaries). You can start with generic groups like “criminal enterprises from Eastern Europe motivated by money” and “foreign intelligence agencies from Asia motivated by intellectual property”.

Below is an example of the diamond model for Unit 78020 of the People’s Liberation Army. When creating dossiers of our adversaries, you don’t have to use the diamond model. The format isn’t important, the data is. You need to have the adversary’s goals, targets, constraints, resources, and techniques from a technical perspective and a political perspective.

Diamond Model for Unit 78020 of the People’s Liberation Army
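A dossier can be as simple as a structured record. The sketch below (the field names are mine, not a standard) captures the data the article calls out: goals, targets, constraints, resources, and techniques. The APT 1 values are drawn from the example in the next section.

```python
from dataclasses import dataclass, field

@dataclass
class AdversaryDossier:
    """Minimal adversary dossier. The diamond model or any other format
    works equally well -- the data matters, not the shape."""
    name: str
    goals: list = field(default_factory=list)        # e.g. money, IP
    targets: list = field(default_factory=list)      # industries, geographies
    constraints: list = field(default_factory=list)  # legal, political, budget
    resources: list = field(default_factory=list)    # tooling, infrastructure
    techniques: list = field(default_factory=list)   # observed TTPs

unit_61398 = AdversaryDossier(
    name="APT 1 (PLA Unit 61398)",
    goals=["intellectual property"],
    techniques=["phishing", "watering holes", "social engineering",
                "Poison Ivy", "public post-exploitation tools"],
)
```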

An Example (APT 1)

Unit 61398 of the People’s Liberation Army, also known as APT 1, is an old but well-understood adversary. We know that they used phishing and watering holes, that they used social engineering to convince users to download malware, that they used Poison Ivy, and that they used publicly available post-exploitation tools.

Adversary Playbook

Below is one intrusion kill chain for APT 1. When building playbooks of adversaries, you don’t have to use the intrusion kill chain or the courses of action matrix. The format isn’t important, the data is. You need to have all the technical information about how the adversary operates.

APT 1 Playbook or Intrusion Kill Chain
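One way to record such a playbook is as a mapping from kill chain phase to observed techniques. In this sketch the phase names follow the standard intrusion kill chain, and the techniques are the APT 1 observations above; the exact per-phase placement is my illustrative reading, not a claim from the article.

```python
# Hypothetical playbook: intrusion kill chain phases mapped to techniques
# APT 1 is known to have used. Per-phase placement is illustrative.
APT1_PLAYBOOK = {
    "reconnaissance":        ["target research on public sources"],
    "weaponization":         ["malicious email attachments"],
    "delivery":              ["phishing email", "watering hole"],
    "exploitation":          ["social engineering to run malware"],
    "installation":          ["Poison Ivy"],
    "command_and_control":   ["Poison Ivy C2"],
    "actions_on_objectives": ["public post-exploitation tools"],
}
```

Keeping the playbook as plain data makes it easy to diff against new intelligence as the adversary's techniques change.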

Defenses

Now, let’s turn our adversary’s playbook into a set of possible defenses.

Courses of Action Matrix
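The courses of action matrix pairs each kill chain phase with candidate defensive actions. A minimal sketch follows; the specific defenses listed are illustrative assumptions, not taken from the article's matrix.

```python
# Sketch of a courses of action matrix: kill chain phase -> action -> defense.
# The defenses named here are illustrative examples only.
COURSES_OF_ACTION = {
    "delivery": {
        "detect": "mail gateway alerting on suspicious attachments",
        "deny":   "attachment type blocking",
    },
    "exploitation": {
        "detect": "endpoint alerting on macro execution",
        "deny":   "2FA on externally exposed logins",
    },
    "installation": {
        "detect": "EDR alerting on new persistence mechanisms",
        "deny":   "application allowlisting",
    },
}

def defenses_for(matrix, phase):
    """All candidate defenses for one kill chain phase."""
    return list(matrix.get(phase, {}).values())
```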

High Cost for Adversary

Now that we have a set of defenses that would be effective against this adversary, we need to prioritize them based on how effective they will be. First, consider which phases will be the highest cost for the adversary to alter and which defenses will be the highest cost for the adversary to defeat.

Low Cost for Defender

Second, consider which phases will be the lowest cost for our team to prevent, detect, or respond to, and which defenses will be the lowest cost for our team to implement.

If you agree with these data points, then the obvious top priority is to turn on 2FA, and then to focus on the installation and actions-on-objectives phases.

Now you have a prioritized list of defenses for this playbook for this adversary. Follow this process for all our adversaries and their playbooks, tally up the results weighted by how likely each adversary is to target you, and you now have a prioritized list of effective defenses for our organization.
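The prioritization above can be sketched as a score: a defense ranks higher when it is expensive for the adversary to defeat and cheap for us to implement, weighted by how likely each adversary is to target us. The 1–10 cost scale and the ratio formula here are my own illustrative choices, not the article's.

```python
def prioritize(defenses, adversary_likelihood):
    """Rank defenses by adversary cost over defender cost, weighted by
    how likely each adversary is to target us.

    defenses: dicts with name, adversary, adversary_cost (1-10, cost for
    the adversary to defeat), defender_cost (1-10, cost to implement).
    Scores for the same defense across adversaries are summed."""
    scored = {}
    for d in defenses:
        weight = adversary_likelihood[d["adversary"]]
        score = weight * d["adversary_cost"] / d["defender_cost"]
        scored[d["name"]] = scored.get(d["name"], 0) + score
    return sorted(scored, key=scored.get, reverse=True)

ranked = prioritize(
    [{"name": "2FA", "adversary": "APT 1",
      "adversary_cost": 9, "defender_cost": 2},
     {"name": "application allowlisting", "adversary": "APT 1",
      "adversary_cost": 7, "defender_cost": 6}],
    {"APT 1": 0.8},
)
# With these invented numbers, 2FA scores highest: cheap for us,
# expensive for the adversary to defeat.
```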

Adversary Evolution

We can also predict future adversary behavior. Below we take the playbook used above, add the most commonly used defenses, and determine the cheapest, simplest successful change to the adversary’s playbook. Use adversary axioms during this process for accurate results.

Adversary Evolution
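One way to model that prediction (a sketch; the alternative techniques and cost numbers are invented for illustration): for each technique that is now commonly defended, a rational adversary switches to the cheapest alternative that is not yet defended.

```python
def predict_evolution(playbook, defended, alternatives):
    """For each defended technique in the playbook, pick the cheapest
    undefended alternative -- the change a cost-sensitive adversary
    would make first.

    defended: set of techniques that common defenses now cover.
    alternatives: technique -> list of (replacement, cost) options."""
    changes = {}
    for phase, techniques in playbook.items():
        for t in techniques:
            if t in defended:
                options = [(alt, cost)
                           for alt, cost in alternatives.get(t, [])
                           if alt not in defended]
                if options:
                    changes[t] = min(options, key=lambda o: o[1])[0]
    return changes

next_moves = predict_evolution(
    {"delivery": ["phishing email"]},
    defended={"phishing email"},
    alternatives={"phishing email": [("supply chain compromise", 8),
                                     ("phishing via SMS", 3)]},
)
# next_moves maps each defended technique to its cheapest undefended
# replacement under these invented costs.
```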

Adversary Sophistication and Targeting

During this threat modeling process, always consider how effective our defenses will be against each adversary:

  • Is an adversary bypassing off-the-shelf products and tools?
  • Is an adversary changing their tactics per target?
  • Does an adversary have prior knowledge of target systems?

Results

When you’ve completed the adversary-based threat model, you should have the following items:

  • A set of adversaries
  • Adversary capabilities, resourcing, motivation, and constraints
  • Historical and current adversary playbooks
  • A prioritized list of defenses
  • Prediction of future adversary behavior

Review

This threat modeling process is meant to be iterative. Our threat intelligence team should be constantly learning new things about our adversaries and our defense teams should be constantly reporting how effective our controls are. At the end of each iteration, ask these questions:

  • How confident are we in the accuracy of our results?
  • Are our results effective?
  • What could be missing from our results?
  • Do we have enough data and intelligence?
  • How do we get better data and intelligence for the next iteration?
  • How good is our situational awareness?
  • What events may change our results?
  • Do our results pass a sanity check?

This article is part 2 in my series on building security programs with adversary intelligence. Part 1 is Adversary Axioms and part 3 is Adversary-Based Risk Analysis.
