Qualitative vs. Quantitative Risk Analysis, Explained in 90 Seconds

By Rachel Slabotsky | January 11, 2019

NIST CSF, ISO 2700X and other standards say that cybersecurity risk, and its contributing factors, can be assessed in a variety of ways, including “quantitatively” or “qualitatively”. But what’s the difference? And which is the better form of risk measurement for your organization? Let’s explore the differences between qualitative and quantitative risk analysis.

Qualitative Risk Analysis

Analysts use ordinal rating scales (1–5) or assign relative ratings (high, medium, low or red, yellow, green) to plot various risks on a heat map, with Loss Event Frequency (or Likelihood) on one axis and Loss Severity (or Magnitude or Impact) on the other. But how do analysts decide where to place the risks relative to each other? Based on their experience in risk management or, as Jack Jones writes in his book Measuring and Managing Information Risk: A FAIR Approach, their “mental models”.

Purely qualitative analyses are inherently subjective, which makes prioritizing risks a challenge. First, how do you determine which red risk is the “most red”? Second, there is no systematic way to account for the accumulation of risk (e.g., does yellow times yellow equal a brighter yellow?). Finally, there is a tendency to gravitate toward the worst-case scenario for Loss Magnitude, since analysts are forced to choose a single discrete value (e.g., red, yellow or green) rather than assigning a value along a continuum. As a result, ratings are subject to bias, poorly defined models and unstated assumptions.

Quantitative Risk Analysis

Instead of mental models that vary by analyst, the quantitative approach runs on a standard model that any analyst can use to produce consistent results. At RiskLens, we use the FAIR model (Factor Analysis of Information Risk). FAIR takes the guesswork out of Loss Event Frequency and Loss Magnitude, the two main components of risk that are also leveraged in qualitative analysis.
The difference is that ranges or distributions are used to capture high and low ends of possible outcomes rather than discrete values.
The model breaks these two factors down into subcomponents that can be estimated based on information collected from subject matter experts in the company, then builds them back up into overall estimates of Frequency and Magnitude, with Magnitude expressed in terms of dollars and cents. Many iterations of these inputs are then run through a Monte Carlo engine. The result is not a simple two-axis heat map but a bell curve showing a range of probable outcomes.

The final product: analysts present decision-makers with a way to visualize risk that’s more accurate than plotting points on a heat map, uses financial terms that anyone in the business can understand, and is based on logical analysis that can be explained and defended.

Of course, if you’d still like to present your quantitative analysis on a heat map, there’s nothing stopping you. Here’s how: 4 Steps to a Smarter Heat Map.

RiskLens can introduce your business to quantitative risk assessment with the FAIR model. Contact us for a demo.

Related: Add Dollars and Cents to Your NIST CSF Reporting | How FAIR and ISO 2700X Go Together
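To make the idea concrete, here is a minimal Monte Carlo sketch of this approach. It is not the RiskLens engine or the full FAIR model: it simplifies by drawing Loss Event Frequency and per-event Loss Magnitude from plain uniform ranges (real analyses typically use calibrated distributions such as PERT), and the function name and example figures are illustrative only.

```python
import random
import statistics

def simulate_annual_loss(freq_low, freq_high, mag_low, mag_high,
                         iterations=10_000, seed=42):
    """Monte Carlo sketch: for each simulated year, draw a number of
    loss events from a frequency range, draw a dollar magnitude for
    each event from a magnitude range, and sum the annual loss."""
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(iterations):
        # Number of loss events in this simulated year
        events = round(rng.uniform(freq_low, freq_high))
        # Total dollar loss: one sampled magnitude per event
        annual_losses.append(sum(rng.uniform(mag_low, mag_high)
                                 for _ in range(events)))
    return annual_losses

# Hypothetical scenario: 1-4 loss events per year, $50k-$500k each
losses = sorted(simulate_annual_loss(1, 4, 50_000, 500_000))
print(f"median annual loss: ${statistics.median(losses):,.0f}")
print(f"90th percentile:    ${losses[int(0.9 * len(losses))]:,.0f}")
```

Summarizing the resulting distribution with percentiles (rather than a single point on a heat map) is what lets decision-makers reason about a range of probable outcomes in dollar terms.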