Making Sense of Penetration Tests with RiskLens

June 19, 2020  Isaiah McGowan

Pen testers don’t have all the context necessary to provide reliable ratings for their findings. Despite producing useful technical results, they are set up to fail when sorting issues into a prioritized list.

What’s in a pen test? 

Every pen test has at least two outcomes:

  1. A list of technical flaws, how to abuse them, and how to resolve them
  2. Risk ratings associated with each issue

The notion is that recipients can take the list of issues, sort it by risk rating, and work through it from the top.

Penetration testing is just the start

Pen testers perform the first triage of the technical findings. Once they hand over a set of technical flaws and risk ratings, it’s up to risk analysts to make sense of the work. Step one is to ensure each issue is framed for analysis so that it includes a threat, an asset, and a loss effect.
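As a rough illustration (the class and field names here are my own, not the RiskLens data model), a finding framed for analysis might look like this:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A pen test finding framed for FAIR analysis."""
    threat: str       # who or what acts against the asset
    asset: str        # what is at risk
    loss_effect: str  # e.g., confidentiality, availability, financial loss

finding = Scenario(
    threat="criminal with physical access to an ATM",
    asset="ATM cash and dispensing software",
    loss_effect="direct financial loss",
)
```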

Using RiskLens and the FAIR model, we would approach this problem by evaluating findings in three dimensions (a minimal sketch of the first step follows the list):

  1. Stack-rank the findings based on the results of FAIR analyses
  2. Compare the current state of the findings to a resolved future state
  3. Prioritize resolutions so that remediation effort reduces the most risk in the shortest amount of time
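To make the first step concrete, here is a minimal sketch of a FAIR-style stack ranking. The uniform ranges and the two findings are placeholder assumptions, not RiskLens output; a real analysis would use calibrated estimates and skewed distributions:

```python
import random

def simulate_ale(lef_range, lm_range, trials=10_000):
    """Monte Carlo mean of annualized loss exposure (ALE = LEF x LM)."""
    total = 0.0
    for _ in range(trials):
        lef = random.uniform(*lef_range)  # loss events per year
        lm = random.uniform(*lm_range)    # dollars lost per event
        total += lef * lm
    return total / trials

# Stack-rank hypothetical findings by simulated ALE, highest first.
findings = {
    "finding A": simulate_ale((0.5, 2.0), (50_000, 250_000)),
    "finding B": simulate_ale((0.02, 0.1), (1_000_000, 4_000_000)),
}
for name, ale in sorted(findings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ~${ale:,.0f} expected loss per year")
```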

Proper ratings are based on frequency and magnitude

Time and again I’ve seen pen test results with “High” findings that really aren’t. More likely, you’ve experienced a situation where two findings should swap their ratings. This happens because testers don’t have all the details. Take the case of two findings:

  1. Medium – A set of ATMs needs a physical improvement to protect against jackpotting attacks
  2. High – A key mainframe has a flaw that allows a well-positioned, highly skilled attacker to access hundreds of thousands of Social Security numbers

It’s commonplace to see the mainframe finding ranked higher. That feels intuitive given the sensitivity of the data as well as its volume. However, that information alone does not tell the whole story. That’s where RiskLens comes into play.

The RiskLens analysis results below show the ATM issue carries the higher annual risk. What the pen tester doesn’t know is that this organization is experiencing ATM attacks today. Risk analysts can factor in the frequency and magnitude of those ongoing attacks for a clear look at the probable loss exposure over a year.

[RiskLens analysis results: annualized loss exposure comparison for the ATM and mainframe findings]

Additionally, the “well-positioned, highly skilled” requirements of the attack against the mainframe undercut its rating. In reality, the mainframe has a low susceptibility to the attack over a year compared to the ATMs, thanks to compensating controls and the difficulty of getting into position to launch the attack.
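Plugging purely illustrative numbers (not the actual analysis results) into the ALE arithmetic shows how frequent, moderate losses can outweigh a rare, large one:

```python
# ALE = loss event frequency (events/year) x loss magnitude ($/event)
# All figures below are made up for illustration.

atm_ale = 8 * 50_000              # frequent, moderate losses -> $400,000/year
mainframe_ale = 0.03 * 6_000_000  # rare, large loss -> $180,000/year

print(atm_ale > mainframe_ale)    # True: the ATM finding carries more annual risk
```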

Context is king…

Without proper context on the likelihood of attacks and the magnitude of the outcomes, it is difficult to judge which problem is worse. For pen testers, there are many confounding variables:

  • Fully enumerating environments is unlikely in a single engagement
  • White-hat testing a system or network can give a false sense of how easy a technical flaw is to exploit
  • Organizations may not (and often don’t) let pen testers carry out the full force of the attack described

These sorts of factors make it nearly impossible to prioritize findings properly based solely on comparing technical details.

What it boils down to is that findings are not assessed on their likelihood of occurrence and the magnitude of their outcomes. Until penetration testers have the broader context expected of risk practitioners, they will continue to provide great visibility with poor prioritization. The good news is that risk analysts can use RiskLens Cyber Risk Quantification to reprioritize penetration test findings and maximize risk reduction efforts.

Related:

Who (or What) Is Really a Cyber Threat (FAIR Institute Blog)

What Is Cyber Risk Quantification?