Defending FAIR to Skeptics in Your Organization

January 15, 2019  Cody Whelan

I have worked with an organization that, from top to bottom, runs a cyber risk analysis process that embodies everything I rail against as a FAIR practitioner.

Their “risk register” is full of nothing but control deficiencies, “risk statements” are broadened and expanded to the point that there could be no wrong answer, and assessments are conducted by those with no formal risk assessment training, leveraging ordinal and subjective scales.

To my dismay, this organization seemed to have only limited concern about this approach. On the contrary, they challenged me on the approach we take at RiskLens to quantitative risk analysis – the FAIR model – and whether that approach was defensible.

These were smart people. I wondered how they could make these basic mistakes...

Confusing control deficiencies with risks

Risks, or loss events, clearly articulate the bad thing we’re concerned about occurring, such as "a malicious external actor (Threat) exfiltrating (Effect) IP from a SQL database (Asset)". You can tie a frequency and a magnitude to this statement, but not to "unpatched SQL servers".
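
To make the distinction concrete, here's a minimal sketch in Python (the field names and numbers are hypothetical, not a RiskLens schema) of why a well-formed loss event is measurable while a control deficiency isn't:

```python
from dataclasses import dataclass

@dataclass
class LossEvent:
    """A well-formed risk: who does what bad thing to which asset."""
    threat: str        # e.g., "malicious external actor"
    effect: str        # e.g., "exfiltration" (a confidentiality loss)
    asset: str         # e.g., "IP in a SQL database"
    frequency: float   # estimated loss events per year
    magnitude: float   # estimated loss in dollars per event

scenario = LossEvent(
    threat="malicious external actor",
    effect="exfiltration",
    asset="IP in a SQL database",
    frequency=0.5,       # hypothetical: about once every two years
    magnitude=250_000,   # hypothetical per-event impact
)

# By contrast, "unpatched SQL servers" has no frequency or magnitude of
# its own -- it's a control deficiency that influences a loss event's
# likelihood, not a loss event itself.
```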

Writing broad and vague “risk statements”

Depending upon the purpose of a risk analysis, the work can be inclusive of many components of the risk landscape. Yet what happens far too often is that “risk statements” become so broad and vague that no two people could make the same set of assumptions about the loss event (i.e., the bad thing we’re concerned about occurring). This makes gathering data, and standing behind the results of an analysis, especially dicey. (See my blog post Assumptions in Risk Analysis Are a Powerful Thing.)

Scoring with subjective scales

Risk scores such as “medium,” or “yellow,” or “15” leave a lot of the interpretation to the experiences and biases of the assessors. To one person, a rating of “medium” perfectly encapsulates a given scenario, while to another the same scenario is much better captured by a “high.” This disparity in assessments is all too common unless these subjective scales are tied to explicit frequency and magnitude ranges.
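
One way to close that gap, sketched below with hypothetical anchor values rather than any prescribed scale, is to tie each ordinal label to explicit frequency and magnitude ranges:

```python
# Hypothetical anchors: each ordinal label maps to an explicit
# loss-event-frequency range (events/year) and a per-event magnitude
# range (dollars). Two assessors saying "medium" now mean the same thing.
RATING_ANCHORS = {
    "low":    {"frequency": (0.0, 0.1),  "magnitude": (0, 100_000)},
    "medium": {"frequency": (0.1, 1.0),  "magnitude": (100_000, 1_000_000)},
    "high":   {"frequency": (1.0, 10.0), "magnitude": (1_000_000, 10_000_000)},
}

freq_lo, freq_hi = RATING_ANCHORS["medium"]["frequency"]
print(f"'medium' means {freq_lo} to {freq_hi} loss events per year")
```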

I'm no psychologist, but from what I could see, the organization was suffering from these three mental roadblocks:

1. Sticking with established process

Many of these haphazard processes have been around so long that to alter them would seem almost sacrilegious.  It’s the way that we’ve always done it, and if it ain’t broke, don’t fix it.

2. Comfort Zone-itis

I would imagine that those who don’t see a problem with this approach are also those who are comfortable hiding behind the ambiguity it encourages. If nobody really knows what we’re analyzing, then our assessment is never really wrong.

3. Aversion to sticking your neck out

Scariest of all is probably sticking your neck out and advocating for something new. Nobody questions old processes. New processes or ways of thinking, though, are excellent fodder for examination and scrutiny. Those not up to the task are more likely to save their necks and trudge on with the old process.

Still, I accepted their challenge to show them that FAIR was a defensible way to measure risk – after all, it's a valid question and deserves a response. 

Here's how I presented it to them:

The FAIR model  

At its core, FAIR is a model that allows you to break down and critically think through risks. This may sound like a given, or even a small item, but without a model to reference, think back to, and help you diagnose the problem, what you end up relying on is the mental model that lives in your head. And how much do you want to bet that mental risk model is flawed in various ways, applied inconsistently from one problem to the next, and almost certainly different from the mental model of the person sitting next to you? I’d take that bet.

The FAIR model is also what allows for, and fosters, communication. Far too frequently, terms like Risk, Vulnerability and Threat are used interchangeably. As we rarely conduct risk analysis work completely by ourselves, the chances that you can diagnose and gather data on a problem where you don’t share a common lexicon become exceptionally remote. The beauty of FAIR is that when I say Threat Event Frequency, or Vulnerability, or even Risk, my fellow risk analysts know exactly what I’m talking about.
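
To illustrate that shared lexicon, here's a minimal sketch of the top of the FAIR factor tree using point estimates (a simplification; a real analysis uses ranges and simulation, as discussed below). The input numbers are hypothetical:

```python
def loss_event_frequency(tef: float, vulnerability: float) -> float:
    """Loss Event Frequency = Threat Event Frequency x Vulnerability.

    tef: threat events per year
    vulnerability: probability a threat event becomes a loss event (0..1)
    """
    return tef * vulnerability

def annualized_risk(lef: float, loss_magnitude: float) -> float:
    """Risk = Loss Event Frequency x Loss Magnitude (dollars per year)."""
    return lef * loss_magnitude

# Hypothetical inputs: 10 threat events/year, 20% succeed, $200k per loss
lef = loss_event_frequency(tef=10, vulnerability=0.2)
print(annualized_risk(lef, loss_magnitude=200_000))  # -> 400000.0
```

Because each term has one definition, two analysts plugging in the same estimates will always arrive at the same result.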

Risk Assessment Process

At RiskLens, we’ve developed a time-tested process that spans all risk assessment components:

  • Scoping: Ensure a firm understanding of the scenario’s Asset(s), Threat(s), Effect(s) and ultimately the Loss Event.
  • Data Gathering: Gather the data necessary to run a FAIR analysis.
  • Run/Refine: Run the analysis and review the results with a critical eye, ensuring they accurately reflect the problem being analyzed.
  • Finalize/Report: Finalize the results and prepare them for their audience and next steps.

Measurement Concepts

When performing quantitative risk analysis, it’s always helpful to keep in mind, and leverage, the following concepts that we teach in our FAIR fundamentals training course:

  • Probability vs. Prediction: The goal of quantitative risk analysis should not be to try to predict the future with 100% certainty; this is impossible. What is attainable is to leverage the information we have to develop probabilities about future events.
  • Probability vs. Possibility: Any event is technically possible, from the craziest sci-fi, Mission Impossible information security event to the earth and the moon colliding. Yet possibilities are not actionable. We cannot make well informed decisions based on possibilities. On the other hand, we can make well informed decisions based on probabilities: “There is a 30% chance we’ll experience a breach this year causing a $1.5 million impact. Should we accept this risk or mitigate it?”
  • Accuracy vs. Precision: When gathering data for a quantitative risk analysis, you should shoot for accuracy over precision. It is more likely that we can be accurate, or correct, about forecasting future events than precise, or exact. We do this by leveraging distributions to account for our level of uncertainty and the variance in our estimates (see the sketch after this list).
  • Objectivity vs. Subjectivity: Objectivity and subjectivity live on a spectrum. We cannot get completely away from providing inputs that are at least partially derived from our biases and experiences. What we can do, though, is drive more objectivity into the process by leveraging a sound and rigorous model.
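
As a small illustration of the accuracy-over-precision point, here's a sketch that turns a calibrated minimum / most likely / maximum estimate into a distribution. The triangular distribution is used here as one simple stand-in, and the estimate itself is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibrated estimate for loss events per year:
# at least 1, most likely 4, at most 12. The distribution expresses
# our uncertainty instead of a falsely precise point estimate.
samples = rng.triangular(left=1, mode=4, right=12, size=100_000)

low, high = np.percentile(samples, [5, 95])
print(f"90% credible range: {low:.1f} to {high:.1f} events/year")
print(f"Mean: {samples.mean():.1f} events/year")
```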

Monte Carlo

The last item is that we leverage Monte Carlo simulation to help us develop our results. A shout-out to my colleague David Musselwhite, who put together a fantastic video on Monte Carlo simulation: “Monte Carlo is a method for performing calculations when you have uncertainty about the inputs.” Remember, we cannot predict the future with 100% certainty, and because we’re shooting for accuracy rather than precision, using Monte Carlo simulation is the best way to develop our results.
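
Here's a minimal sketch of that idea, not the RiskLens engine: it draws Loss Event Frequency and Loss Magnitude from beta-PERT distributions (one common choice for calibrated min / most likely / max estimates) and reads percentiles off the simulated annual loss. All input estimates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def pert(low, mode, high, size, lam=4.0):
    """Sample a beta-PERT distribution built from a calibrated
    minimum / most likely / maximum estimate."""
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

TRIALS = 10_000

# Hypothetical calibrated estimates for one loss event scenario
lef = pert(0.1, 0.5, 2.0, TRIALS)                     # loss events per year
magnitude = pert(50_000, 250_000, 1_500_000, TRIALS)  # dollars per event

# Simplified annualized loss exposure per trial
annual_loss = lef * magnitude

for p in (10, 50, 90):
    print(f"{p}th percentile: ${np.percentile(annual_loss, p):,.0f}")
```

The output is a distribution of annualized loss exposure rather than a single point estimate, which is exactly what lets you make statements like the 30%-chance example above.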

What I was really saying to this organization was...

  • FAIR is more effective than the subjective model in your head that's now driving your analysis. 
  • The approach we've developed at RiskLens is time-tested and produces reliable and consistent results.
  • Leveraging a staple of mathematics and decision support, such as Monte Carlo simulation, should give you additional confidence in your results.

Still need help breaking through the roadblocks? Our team of experienced FAIR Consultants is here to help. Contact us today to schedule a free consultation.