Trench warfare on the risk analysis front
I am ashamed to admit it, but it’s happened to me before. I failed to practice one of the core concepts that we as FAIR practitioners preach. I have been part of quantitative risk analyses that grew larger and more complicated while contributing little to no additional benefit or utility. To my fellow colleagues in the risk analysis trenches, this may bring back all too familiar haunting memories of analyses gone awry, and of needless amounts of work and headache. I’m of course talking about including everything but the kitchen sink in a risk analysis when all you needed was that sink.
To FAIR practitioners, this concern most closely ties back to the concept of Probability vs. Possibility. To ensure that fewer colleagues fall prey to the same pitfall, I’ll relive these moments, with the hope you’ll heed the warning: focus on what is probable, not what is possible.
Probability vs. Possibility
To level set on the pitfalls, let’s review the main premise behind the concept of Probability vs. Possibility. In the world of risk analysis (though this applies more broadly to life in general), we want to focus our time and effort on the more probable loss events rather than on what is infinitely possible. Anything is possible. A silly example we use during training sessions is to ask the question:
“Is it possible that an Alaskan Brown Bear will attack us in this office?”
Whereby I rhetorically answer, “Sure, anything is possible, but is it probable?”
As we’ve never held a session in Alaska, the answer is almost invariably “no”.
To drive home the point, I’ll follow up with the observation that, “I also don’t see anybody carrying around bear mace just in case of a chance encounter.”
Again, I know this is a silly example, but it never ceases to amaze me how well it gets the point across. Why would we spend our time gathering data and running an analysis on the frequency and impact of bear attacks if they are of absolutely no concern to us? I’ve seen this same dynamic play out numerous times in cyber risk analyses.
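To make the frequency-and-impact point concrete, here is a minimal Monte Carlo sketch of annualized loss exposure for two scenarios. The scenario names, frequencies, and loss ranges are entirely made up for illustration, and the simple binomial frequency draw is a stand-in, not the actual FAIR model. The takeaway: a merely possible scenario contributes almost nothing to expected loss, yet analyzing it costs just as much effort.

```python
import random

def simulate_annual_loss(event_freq, loss_min, loss_max, years=100_000, seed=42):
    """Estimate average annualized loss for one scenario via Monte Carlo.

    event_freq: expected loss events per year (approximated with a
                crude binomial draw, 10 trials per simulated year).
    loss_min/loss_max: per-event loss magnitude, drawn uniformly
                       (illustrative only, not a FAIR-calibrated distribution).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        # Number of events this simulated year.
        events = sum(1 for _ in range(10) if rng.random() < event_freq / 10)
        # Sum the loss magnitude of each event.
        total += sum(rng.uniform(loss_min, loss_max) for _ in range(events))
    return total / years

# Probable scenario: e.g. routine phishing-driven incidents (made-up numbers).
probable = simulate_annual_loss(event_freq=2.0, loss_min=10_000, loss_max=50_000)

# "Possible" scenario: vanishingly rare event (made-up numbers).
possible = simulate_annual_loss(event_freq=0.0001, loss_min=100_000, loss_max=500_000)

print(f"probable scenario annualized loss: ~${probable:,.0f}")
print(f"possible scenario annualized loss: ~${possible:,.0f}")
```

With these assumed inputs, the probable scenario dominates the exposure estimate by orders of magnitude, which is exactly the data a decision maker needs; the rare scenario mostly adds noise and work.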
What I’m hoping you take away from this is that it’s an all too common phenomenon to expand the scope of an analysis beyond what is probable and include everything possible, because more is often associated with better. I hate to break it to you, but this is very often a false equivalence.
By providing unnecessary information to your decision makers, you muddy the waters. You may assume you are providing them additional context, yet more often than not you are preventing them from focusing on the data that really matters (i.e., what is probable). Not to mention, you’ve created additional work for yourself and your colleagues in identifying and tracking down that data.
Heed my advice: focus on what is probable, not what is possible. And keep your analysis simple unless you absolutely need to expand it to satisfy the purpose of your analysis.