Playbook-Based Testing

This blog post introduces my new product, Automatic Playbook Testing.

Julian Cohen
10 min read · Feb 9, 2017

This work has been distilled into a three-part series on building security programs with adversary intelligence. Part 1 is Adversary Axioms, Part 2 is Adversary-Based Threat Modeling, and Part 3 is Adversary-Based Risk Analysis.

I am so excited to announce Automatic Playbook Testing, the new security product I have been working on for a long time. It is based on the ideas in this blog post, ideas I have been talking about for years.

I have given this presentation at RSA (pdf), NCC Group Open Forum, and Hushcon (pdf).

Slide 1: Title

Today we’re going to talk about why we should re-evaluate how we secure applications, networks, and organizations.

I’ll be suggesting that major parts of our security industry make significant shifts in how we think and operate in order for all of us to be more effective. I know that a lot of this is controversial and I know that most people will disagree with the majority of these points. I simply ask that you listen to these ideas with an open mind.

Slide 2: Biography

Here are some things about me that are supposed to convince you to trust my judgement.

Slide 3: Wubba lubba dub dub

Security is not special, it’s simply in its infancy. Let’s start with an anecdote from a parallel universe, the medical industry.

Slide 4: Tonsillectomies

In 1930, a doctor named J. Alison Glover grew suspicious of the rate and success of a common operation: the tonsillectomy, or removal of the tonsils.

He found that tonsillectomies were three times more commonly performed in wealthier neighborhoods, often without adequate cause.

He found that the procedure was often performed without sufficient regard to the possibility of the enlargement of the tonsils being temporary, physiological, or immunological.

He could not find any correlation between the number of operations and any environmental factor, concluding that the variation defied explanation except by differences of medical opinion among professionals.

He summarizes, “…one cannot avoid the conclusion that there is a tendency for the operation to be performed as a routine prophylactic ritual for no particular reason and with no particular result.” — J. Alison Glover

Slide 5: Medicine

Starting to sound familiar? It should, because this is what the security industry does.

Slide 6: We get three or four more of these, tops

A penetration test is when an authorized team of attackers performs a security assessment of a system. Here’s how our industry looks right now:

  • Penetration testers are our experts
  • We expect their methodologies to be built from experience and intuition
  • Defensive security programs are focused on fixing issues that they find
  • We end up in a continuous loop of discovering and fixing issues
  • Finally, organizations continue to get owned all the time

Slide 7: Penetration testing market

These are some of the results of an independent, objective study of the penetration testing market:

“…penetration testing companies leaves a lot to be desired…it’s a marketplace that’s shrouded in mystery and myth…it’s very difficult…to assess the marketplace…you need to be an expert yourself to buy an expert…” — A client of penetration tests

Slide 8: Penetration testing companies

When asked about the methodologies they use, penetration testers say “whatever’s available.”

When asked about the quality of other consultancies’ reports, penetration testers say they are “shocking,” “appalling,” “generally very hit and miss,” and that there are “a lot of bad ones.”

When asked about the quality of penetration testing reports, customers say “the quality varies immensely… the quality can be atrocious,” that there is “a great deal of variability,” and “some are so shocking, it’s hilarious.”

Slide 9: Penetration Testing Considered Harmful

In 2011, Haroon Meer gave a presentation at 44CON called Penetration Testing Considered Harmful, where he discussed the failure of the penetration testing industry in depth.

Haroon goes through a number of reasons why penetration tests are ineffective and he calls the industry a market for lemons.

The original economics paper by George Akerlof states that any market with asymmetric information will incentivize sellers to sell poor-quality goods over high-quality goods, eventually leaving only lemons in the market.

Buyers don’t know what they’re getting in the penetration testing market. For instance, if a buyer receives a report with a low number of findings, it could be because the testers did nothing or because there are actually a low number of vulnerabilities to be found. Often, the customer can’t tell the difference.

Even the highest quality consultancies might not be able to deliver a consistent, effective penetration test. The results of a test depend on a lot of factors:

  • Which penetration testers are available
  • Your penetration tester’s mood
  • The efficacy of your kick-off call and scope
  • Penetration testers focused on discovering cool vulnerabilities
  • Penetration testers focused on writing a “Nice Report”

Slide 10: Attacks

Penetration testing misses highly likely attacks because the vulnerabilities that our penetration testers discover are not the vulnerabilities that real attackers discover.

Slide 11: Attackers

In order to figure out what vulnerabilities real attackers find, we need to talk about attackers.

Almost everything you know about attackers is wrong. Defenders are constantly making bad assumptions about attackers, defenders rarely understand attackers, and defenders are not profiling attackers correctly.

Slide 12: Attacker fallacies

Here are some common fallacies about attackers:

Resourced Attackers — APT1 is often talked about as one of the largest cyber espionage organizations on the planet. If anyone has resources, it’s them, but Mandiant says that they use spear phishing to break into organizations. Resourced attackers aren’t necessarily sophisticated.

Motivated Attackers — APT28 is a massive group that operates out of Russia, but Mandiant has never observed an action taken by them outside of Russian business hours. They work strictly 9 to 5. Attackers are rarely as motivated as we assume.

Intelligent Attackers — This snippet describes a previously unknown exploit for Internet Explorer written by the Elderwood group. The exploit succeeded only 50% of the time, even on machines that were built specifically to be exploited by it. Sophisticated attackers aren’t always as sophisticated as we assume.

Slide 13: Attacker playbooks

These insights come from offensive expertise: not penetration testing, but real attackers operating under operational constraints. Just as penetration testers know the techniques to find the slickest bugs and the best findings to build a good-looking report, attackers know the techniques to scale their attacks to pop the most boxes in the shortest amount of time.

To achieve low overhead and scalability, attackers create playbooks.

Attackers that have multiple targets care about repeatability and scalability.

Repeatability — The capability to change the target and have the attack still work with the same success rate.

Scalability — The capability to launch the attack against multiple targets with minimal cost per additional target.

Here are just some of the factors that go into how playbooks are written; a minimal sketch of what a playbook might look like follows the list:

  • Who the attacker’s targets are
  • Required attack success rate
  • How fast attackers need to be successful (or convert their targets to assets)
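
To make repeatability and scalability concrete, here is a minimal sketch of a playbook represented as data and run against a target list. Every name, field, and number is an illustrative assumption, not taken from any real attacker's tooling.

```python
# A minimal, hypothetical sketch of an attacker playbook as data.
# Every name, field, and number here is an illustrative assumption,
# not taken from any real attacker's tooling.

playbook = {
    "name": "commodity-phishing",
    "required_success_rate": 0.05,  # only a few victims need to convert
    "steps": [
        {"action": "harvest_emails", "cost_per_target": "low"},
        {"action": "send_phish",     "cost_per_target": "low"},
        {"action": "deploy_malware", "cost_per_target": "low"},
    ],
}

def run(playbook, targets):
    """Repeatability: the same steps run unchanged against every target.
    Scalability: each added target costs only one more loop iteration."""
    return {t: [s["action"] for s in playbook["steps"]] for t in targets}

# Swapping or adding targets never changes the playbook itself, which is
# exactly what makes the attack cheap to repeat and scale.
print(run(playbook, ["corp-a.example", "corp-b.example"]))
```
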
Slide 14: Attacker economics

At penetration testing companies, all employees are treated as experts. Consultants are hard to fire because they are hard to replace. Information asymmetry means that customers don’t know if a penetration testing organization is being effective. Bad penetration testing organizations survive because some customers need to pay for tests to be compliant; these customers might not even read their reports.

Slide 15: Attacker efficiency

When attackers are not being sophisticated, but still being successful, that’s market efficiency at work.

Slide 16: Defensive complexity

I am not the first person to suggest that we’re looking at this problem the wrong way. In 2011, Peiter “Mudge” Zatko gave a presentation called If you don’t like the game, hack the playbook… where he introduces DARPA Cyber Fast Track.

Mudge says the complexity of attacks has generally not increased over time, while the complexity of defenses has skyrocketed.

Slide 17: Attacker cost

In 2011, Dino Dai Zovi gave a presentation called Attacker Math 101 (video) where he talks about how real attackers deal with different types of practical costs.

This cost graph describes the cost to exploit each of the major browsers and the permissions each vulnerability would yield (in 2011). Each node is an individual unit of difficult work, perhaps each requiring a different set of skills and tools. The Java path has the fewest nodes, affects every browser, and results in the highest privileges. Get all three for the price of one; it’s the obvious choice.
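
As a rough illustration of this kind of attacker math, here is a toy sketch that picks the cheapest path out of a cost graph like the one on the slide. The paths, work units, and payoffs are made-up assumptions, not the actual figures from the graph.

```python
# A toy version of the attacker math described above. The paths, work
# units, and payoffs are illustrative assumptions, not the actual
# figures from the 2011 cost graph.

paths = {
    # path: (units of difficult work, browsers affected, privilege level)
    "ie_memory_corruption": (3, 1, "user"),
    "flash_plus_sandbox_escape": (4, 1, "user"),
    "java_plugin": (2, 3, "system"),
}

def payoff_per_unit_work(path):
    work, browsers, privilege = paths[path]
    payoff = browsers * (2 if privilege == "system" else 1)
    return payoff / work

best = max(paths, key=payoff_per_unit_work)
print(best)  # java_plugin: fewest nodes, every browser, highest privileges
```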

Slide 18: SEA case study

Now, I want to show you some case studies of how real attackers conduct operations.

The Syrian Electronic Army used social engineering to successfully hijack the DNS records of The New York Times, Twitter, The Washington Post, The Associated Press, and The Financial Times. They built botnets with opportunistic known vulnerabilities and launched successful DDoS attacks against Al Jazeera, BBC News, Orient TV, and Al-Arabia TV. They used old techniques to deface hosted websites, targeting Israeli websites, anti-Assad websites, and American and British news websites, and did a lot of collateral damage to websites that were not news websites and contained no content related to Syria or Palestine.

Slide 19: ShadowCrew case study

ShadowCrew slowly stole credit card data from e-commerce websites. Because each SQL injection vulnerability they used was unique, the campaign scaled very poorly. For example, Albert Gonzalez of ShadowCrew was charged with breaking into only 13 organizations over a period of 18 months, stealing about 140 million credit card numbers and a couple million dollars.

Slide 20: FIN6 case study

FIN6 is a fairly large operation focused on stealing credit card information. Unfortunately, Mandiant and iSIGHT intelligence have very little reliable information about FIN6’s methods of initial compromise, but I would bet that FIN6 uses a handful of cheap methods, including exploit kits, mass scanning, and bundling their malware with free software. By a rough estimate, FIN6 made around $400 million through April 2016, and it is estimated to have been operating for the last 3–4 years.

Slide 21: Elderwood case study

Elderwood is known for their watering hole and spear phishing attacks from 2010 to 2014. They typically used new vulnerabilities in their attacks, mostly low-reliability Internet Explorer and Flash memory corruption vulnerabilities.

It’s important to note that we don’t know if they use low-reliability Internet Explorer bugs because that’s the best they can do or because that’s all that’s required to achieve their objectives. I would bet on the latter.

Slide 22: Cybersecurity market

We need a new security strategy. Gartner, here we come.

Slide 23: Intrusion Kill Chain

Finally, I want to introduce my six-step plan to solve information security, which is based on Lockheed Martin’s Intrusion Kill Chain.
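
For reference, the kill chain breaks an intrusion into seven phases; an attacker playbook is essentially a concrete, reusable implementation of these phases.

```python
# The seven phases of Lockheed Martin's Intrusion Kill Chain, for reference.
KILL_CHAIN = (
    "Reconnaissance",
    "Weaponization",
    "Delivery",
    "Exploitation",
    "Installation",
    "Command and Control",
    "Actions on Objectives",
)
```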

Slide 24: Attacker emulation

Identify Attacker Groups — Similar to a threat modeling exercise, but where the focus is on which attacker groups target assets your organization has access to rather than how attackers could get access to those assets. Which target lists are you most likely to end up on?

Profile Attackers — This is a real threat intelligence exercise, similar to what the best threat intelligence companies do, but with a focus on techniques and procedures.

Obtain Key Tactics — Determine what the key techniques and procedures are, what is least likely to change, and the cost factors associated with them.

Rebuild Attacker Playbook — Based on these key tactics, rebuild the playbook as closely as possible.

Replay Attacker Playbook — And play it against your organization.

Utilize Key Results — Now you have useful, actionable results; use them throughout your security organization.

When done with the proper intelligence about attacker groups, your results will be repeatable, precise, practical, and effective.
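
As a sketch of how the last three steps might fit together as tooling, here is a hypothetical harness that replays a rebuilt playbook and records which steps succeed. Every name in it is an assumption for illustration, not the actual design of Automatic Playbook Testing.

```python
# A hypothetical sketch of replaying a rebuilt attacker playbook.
# The names and structure are illustrative assumptions, not the
# actual design of Automatic Playbook Testing.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str                      # e.g. "spear_phish", "lateral_movement"
    execute: Callable[[], bool]    # returns True if the technique succeeded

def replay(playbook: List[Step]) -> Dict[str, bool]:
    """Run each step in order and record the results. A real attacker
    stops when a step fails, so the replay does too."""
    results: Dict[str, bool] = {}
    for step in playbook:
        results[step.name] = step.execute()
        if not results[step.name]:
            break  # the rest of the playbook is unreachable
    return results

# Replaying the same playbook on a schedule turns one-off penetration
# test findings into repeatable, trackable measurements over time.
```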

Slide 25: Threat intelligence

In the four case studies above, we covered attacker intelligence insights into application security, infrastructure security, lateral movement, client-side security, endpoint security, reconnaissance, and social engineering. These detailed, specific attacker tactics are what security teams should be using to understand attacker playbooks and design effective defenses.

Slide 26: Free business ideas

This is what we’re missing from the defense solution space and this is what I want to offer with Automatic Playbook Testing.

Slide 27: Conclusion

I would like to thank Justin Berman, Nicholas Arvanitis, Chris Sandulow, Adam Zollman, and Marc Budofsky for helping me to turn these ideas into a reality within a real organization.

Thanks to John Terrill and Chris Surage for dealing with me while I struggled to apply these ideas within an organization that would not budge.

Thanks to Jordan Wiens, Doug Britton, Ted Fair, Hudson Thrift, Tyler Nighswander, Alex Sotirov, Erik Cabetas, and Chris Rohlf without whom I would lack the perspective required to solve these problems.

Finally, thanks to Dino Dai Zovi, Dan Guido, and Brandon Edwards, for planting the seeds for this idea so long ago and giving me the courage to challenge the status quo in the security industry.
