Consistency

By: Jack Jones

I’ll occasionally run into someone who claims qualitative risk assessment is better than quantitative because you’ll get greater consistency between analysts. Their point is that a key characteristic of data quality is repeatability -- i.e., in measuring something, you should expect to get the same results from different analysts. It’s a very important point.

So, let’s say we have four analysts (Don, Richard, Alex and Jack) each evaluating the loss exposure associated with the same risk scenario. Don and Richard are using a qualitative approach, while Alex and Jack are using a quantitative approach. Don and Richard evaluate the scenario independently of one another and both come up with a “Medium” risk rating. Hurray for consistency!

Now let’s say Alex and Jack independently evaluate the same scenario and come up with, OMG(!), different results. Alex says the loss exposure is $50k per year, while Jack says it’s more like $75k per year. Surely this must support the claim that quantitative analysis is less consistent than qualitative analysis -- Game, Set, and Match to Qualitative Analysis! Right?

Three points to consider:

  1. If someone is performing quantitative analysis and using discrete, deterministic values like those above, then they’re doing it wrong (Jack and Alex should know better!). Quantitative analysis of risk scenarios should ALWAYS be done using ranges and/or distributions in order to represent the uncertainty that exists (see the sketch after this list). We don’t know whether Alex’s $50k represents the worst case, the minimum, the average, or some other point between minimum and maximum. Likewise with Jack’s $75k. Even I might prefer a qualitative risk statement over a deterministic quantitative risk statement.
  2. The range of actual values that might underlie something labeled “Medium Risk” is undefined and large (i.e., imprecise), which inherently increases the odds of getting consistency from analyst to analyst. I mean, after all, if “High”, “Medium” and “Low” are expected to cover the entire spectrum of possible outcomes, how much less precise can you get? Here’s the deal, though: given a wide enough range (i.e., imprecision) in a quantitative analysis, we could certainly get a degree of repeatability equal to what we might get from a qualitative analysis.
  3. For all we know, Don’s “Medium” isn’t anywhere close to Richard’s. Medium is a word that, without definition (http://www.cxoware.com/underneath-it-all/), is largely ambiguous.
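
To make the first point concrete, here’s a minimal sketch of what range-based quantitative analysis looks like. It’s written in Python, and every input is invented for illustration -- none of these numbers come from the scenario above. Each analyst supplies a minimum, most-likely, and maximum estimate of annualized loss, and a simple Monte Carlo simulation turns that range into a distribution we can summarize with percentiles instead of a single deterministic number.

```python
import random

def simulate_annual_loss(low, mode, high, trials=10_000, seed=None):
    """Monte Carlo sample of annualized loss exposure (dollars),
    drawn from a triangular distribution over a min/most-likely/max range."""
    rng = random.Random(seed)
    return sorted(rng.triangular(low, high, mode) for _ in range(trials))

def summarize(name, samples):
    # Report rough percentiles rather than one point estimate.
    p10, p50, p90 = (samples[int(len(samples) * p)] for p in (0.10, 0.50, 0.90))
    print(f"{name}: 10th pct ${p10:,.0f} | median ${p50:,.0f} | 90th pct ${p90:,.0f}")

# Hypothetical inputs: two analysts with different but overlapping ranges.
summarize("Alex", simulate_annual_loss(20_000, 45_000, 120_000, seed=1))
summarize("Jack", simulate_annual_loss(30_000, 70_000, 150_000, seed=2))
```

Framed this way, the two analysts’ distributions overlap substantially, and the conversation shifts from “whose number is right?” to “why do our ranges differ?” -- a far more productive argument than Medium versus High.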

Of course, the qualitative folks out there may say that it’s entirely possible for one quantitative analyst to get results that are orders of magnitude different from what another analyst might get on the same scenario. Yes, that’s true. But if their understandings of the scenario, their assumptions, and/or their data are that different, then they would almost certainly get different qualitative results too (e.g., one analyst might say “High Risk” and the other “Medium Risk”).

Regardless, an extreme lack of precision, whether qualitative or quantitative, comes at a price in terms of usefulness. Prioritizing a portfolio of “High Risk” issues becomes especially soft and squishy, and describing the benefit of remediating a “High Risk” issue largely becomes a matter of how articulate a person is rather than whether they have a clue or can defend their analysis.

Bottom line -- repeatability is an extremely important consideration, but we have to understand the measurement context and constraints, as well as the usefulness of the values. The key is to strike the right balance between precision and utility.
