The CXOWARE Blog

Welcome to the CXOWARE blog. We hope you’ll join us for lively and good-natured discussion about risk and risk issues! We’re risk geeks, plain and simple. We’re big advocates of the Factor Analysis of Information Risk (FAIR) framework for quantifying risk.

NIST 800-30 - Room for Improvement

By: Jack Jones

NIST’s risk assessment method follows a very logical process that should help analysts perform better, more consistent analyses. And clearly, a lot of thought and effort went into its development. That said, there are a few aspects of NIST’s method where opportunities for improvement exist. In this blog post I’ll cover two of those opportunities.

Overall Likelihood

The manner in which the NIST method determines Overall Likelihood of an event appears to result in inaccurate risk ratings in some instances. Specifically, the matrix it uses to combine the Likelihood of Threat Event Initiation or Occurrence (Threat Event Likelihood) with the Likelihood that Threat Events Result in Adverse Impacts (Vulnerability) overlooks a fundamental “law” -- the overall likelihood of an adverse outcome can’t exceed the likelihood of the event occurring in the first place.

To make the point, let’s look at an example. Let’s say that you’re trying to understand the risk associated with some malicious actor running off with sensitive information. You’ve brought the appropriate subject matter experts together and come up with a Threat Event Likelihood rating of “Low.” Now the team examines the controls and other factors that would drive the level of Vulnerability and comes up with an estimate of “High.” Using the matrix, you look up the result of combining those two values and arrive at an Overall Likelihood of “Moderate.” So what’s the problem here? Somehow we started off with a Threat Event Likelihood of “Low” but ended up with an Overall Likelihood of “Moderate.” How can we have a higher Overall Likelihood than the likelihood of the threat event itself?

Simply stated, logically, we can’t.

In order to be accurate, the Overall Likelihood values in the matrix can never be greater than the Threat Event Likelihood values -- even when Vulnerability is High or Very High. Given this upper likelihood limit, many of the Overall Likelihood values need to be adjusted. The table below provides an example of what an alternative table might look like.

[Table: an example alternative Overall Likelihood matrix in which no value exceeds the corresponding Threat Event Likelihood]

Of course, absent a quantitative underpinning, it’s ambiguous in both the existing matrix and any alternative matrix as to how much the Overall Likelihood should drop as Vulnerability decreases. Still, the fundamental logic seems indisputable regarding the need to limit Overall Likelihood values based on Threat Event Likelihood values.
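For anyone who prefers to see that constraint written down explicitly, here’s a minimal sketch in Python. The ordinal scale and the step-down amounts are my own illustrative assumptions, not values from 800-30; the only point is that the result is always capped at the Threat Event Likelihood.

```python
# A minimal sketch (not part of NIST 800-30) of a combination rule that never
# lets Overall Likelihood exceed Threat Event Likelihood.

LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]

# Illustrative assumption: each step down in Vulnerability drops the result
# one level below the Threat Event Likelihood ceiling.
VULN_PENALTY = {"Very High": 0, "High": 0, "Moderate": 1, "Low": 2, "Very Low": 3}

def overall_likelihood(threat_event_likelihood: str, vulnerability: str) -> str:
    """Combine the two ratings, capped at the Threat Event Likelihood."""
    ceiling = LEVELS.index(threat_event_likelihood)
    result = ceiling - VULN_PENALTY[vulnerability]
    return LEVELS[max(result, 0)]

# The example from this post: Threat Event Likelihood "Low" and Vulnerability
# "High" now yields "Low" rather than the "Moderate" the published matrix gives.
print(overall_likelihood("Low", "High"))  # -> "Low"
```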

Likelihood Definitions

The other opportunity for improvement has to do with the definitions within NIST’s Likelihood of Threat Event Initiation (adversarial) or Occurrence (non-adversarial) scales (tables G-2 and G-3 in 800-30, shown below).

[Tables G-2 and G-3 from NIST 800-30: assessment scales for Likelihood of Threat Event Initiation (adversarial) and Likelihood of Threat Event Occurrence (non-adversarial)]

The descriptions within the Likelihood of Threat Event Initiation (adversarial) scale are purely qualitative, using terms like “almost certain”, “highly likely”, etc. This is no better or worse than most other qualitative likelihood scales. Unfortunately, like many of those scales, it raises two concerns:

  • No time-scale is given. If something is rated “High” in likelihood, does that mean highly likely this year, this decade, or in our lifetime? Absent a timeframe reference, likelihood is wide open to interpretation and nearly meaningless. For example, it would be entirely legitimate to state that the likelihood of our sun going supernova is Very High -- eventually, but (knock on wood) it isn’t something we need to worry about in the near future.
  • Qualitative likelihood ratings using terms like “almost certain” are inherently upper-bounded at one occurrence. In other words, there’s no easy way to distinguish events that are likely to occur once from those that are likely to occur multiple times.

The descriptions within the Likelihood of Threat Event Occurrence (non-adversarial) scale appear to have the same time-scale problem noted above, but differ in that they include frequency verbiage such as “more than 100 times per year”, “between 10-100 times a year”, etc. The inclusion of a frequency range would seem to solve the second problem I noted above regarding being upper-bounded at one event, but it introduces (at least) one new problem.
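One way around both concerns is to anchor every rating to an explicit annualized frequency range. Here’s a minimal sketch of that idea: the “Very High” and “High” ranges echo the verbiage quoted above, the “Moderate” range matches the 1-to-10-per-year expectation used in the example that follows, and the “Low” and “Very Low” ranges are assumptions I’ve made up purely for illustration.

```python
# A minimal sketch, assuming each rating maps to an annualized frequency range.
# Only the first three ranges come from the frequency verbiage discussed in the
# post; the last two are illustrative assumptions, not values from 800-30.

FREQUENCY_RANGES = {
    "Very High": (100, float("inf")),  # "more than 100 times per year"
    "High": (10, 100),                 # "between 10-100 times a year"
    "Moderate": (1, 10),               # between 1 and 10 times per year
    "Low": (0.1, 1),                   # assumed: once a year to once a decade
    "Very Low": (0, 0.1),              # assumed: less than once a decade
}

def expected_events(rating: str, years: float = 1.0) -> tuple:
    """Return the (min, max) number of events expected over the given horizon."""
    low, high = FREQUENCY_RANGES[rating]
    return (low * years, high * years)

# With an explicit time horizon the rating is no longer open to interpretation,
# and nothing caps the expectation at a single occurrence.
print(expected_events("High", years=3))  # -> (30, 300)
```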

Let’s say that we’ve evaluated two scenarios -- one with an adversarial threat community and the other with a non-adversarial threat community. Our likelihood rating for the adversarial scenario is “High”, meaning that, by definition, the event is highly likely to occur but upper-bounded at one occurrence. Our likelihood rating for the non-adversarial scenario is “Moderate”, based on an expectation that the event will occur between 1 and 10 times per year. If the impact is the same for either scenario (“High”, for example), the resulting risk ratings are backwards in relation to one another. The overall risk rating using NIST’s matrices for the adversarial event will be “High” (based on an apparent maximum of one occurrence), while the non-adversarial risk rating (from an event expected as many as ten times per year) will be “Moderate”.
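To put rough numbers on why those ratings come out backwards, here’s a quick back-of-the-envelope comparison. The dollar figure and the use of a midpoint frequency are assumptions invented for this illustration, not anything taken from 800-30.

```python
# A rough illustration (all figures below are made-up assumptions) of why the
# two ratings are backwards: if a single loss event costs about the same in
# either scenario, the scenario that can occur many times a year carries more
# exposure than the one implicitly capped at a single occurrence.

single_event_loss = 250_000  # assumed "High" impact, same for both scenarios

# Adversarial scenario: "highly likely" but implicitly at most one occurrence.
adversarial_exposure = 1 * single_event_loss

# Non-adversarial scenario: "Moderate", 1 to 10 occurrences per year; take the
# midpoint (5.5) as a rough annualized expectation.
non_adversarial_exposure = 5.5 * single_event_loss

print(f"Adversarial (rated High):         ${adversarial_exposure:,.0f}")
print(f"Non-adversarial (rated Moderate): ${non_adversarial_exposure:,.0f}")
# The "Moderate" scenario carries the larger exposure -- the opposite of the
# ratings the matrices produce.
```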


It’s important to note that the NIST method isn’t unique in having these sorts of problems. You don’t have to look very far to find other qualitative frameworks with similar (and sometimes worse) issues. The problem is that these kinds of issues increase the odds of inaccurate risk measurements and, as a result, poorly informed decisions.

The FAIR ontology helps to avoid these kinds of issues by providing a logical framework for critically thinking through an analysis. It also, by the way, can be used very nicely as the risk analysis step within almost any of the popular risk assessment frameworks -- e.g., NIST, OCTAVE, ISO, etc.

About The Author