A question I don’t hear often enough from CIROs, CISOs, and other cybersecurity and risk executives is, “What framework or model did we use to come up with these risk analysis results?” Considering how much effort, time, and money ride on decisions made from those results, the silence has always struck me as odd.
Those executives assume their organizations are using the most rigorous and defensible means to assess risk. Is that actually the case? We all know what happens when we assume. If you, Ms. or Mr. Executive, have not asked that question, I recommend that you do. If you haven’t yet, this post is for you.
I’ve seen organizations spend more time pondering where to hold the next company retreat than they do on the very foundation on which they base all of their risk decisions. Yet what many organizations base their risk decisions on is nothing more than gut feeling and/or a mash-up of homegrown and industry frameworks.
Over my years of risk management experience, I’ve been guilty of, and witness to, risk analysis “frameworks” that fail in predictable ways: they foster no structured thinking that would build consistency into analysis work; they use the term “risk” inconsistently (which you would think would be foundational to any risk analysis framework); and they lead to subjective, qualitative results that can’t really be explained, just to name a few.
The Risk Management Stack
As we know from the risk management stack (see the graphic), we can only truly achieve effective risk management by ensuring that, at the foundation of our decisions, we base our logic and critical thinking on an accurate model of the problem space. Without it, all decisions are doomed from the start.
So, what is the accurate model of the problem space? If you’re reading this post, you should not be surprised to see me identify the FAIR standard as that accurate model. But let me tell you why.
At its most basic, FAIR provides us with a structured thought process for thinking through and breaking down a risk.
Defining the problem
From the start, we’re required to define the key aspects of any scope: what is the loss event, the asset(s), the threat(s), and the effect(s)? This not only defines the problem we’ll analyze; it also surfaces key assumptions that should, or should not, be considered as part of the analysis.
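To picture what such a scope definition captures, here is a minimal sketch as a data structure. The class name, field names, and example values are my own illustrative assumptions, not terminology mandated by the FAIR standard:

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisScope:
    """One FAIR-style analysis scope: a single loss event in context."""
    loss_event: str                 # the event whose risk we are analyzing
    assets: list[str]               # what is at risk
    threats: list[str]              # who or what acts against the assets
    effects: list[str]              # e.g. confidentiality, integrity, availability
    assumptions: list[str] = field(default_factory=list)  # surfaced, not hidden

# Hypothetical example scope:
scope = AnalysisScope(
    loss_event="Theft of customer PII from the CRM database",
    assets=["CRM database"],
    threats=["External cyber criminals"],
    effects=["Confidentiality"],
    assumptions=["Insider misuse is out of scope for this analysis"],
)
```

Writing the scope down this explicitly is the point: every field forces a decision, and the assumptions list keeps the exclusions visible instead of implicit.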
Understanding the problem
With the scope in hand, we can apply it to the various components that make up the FAIR model – a model that is an international standard from The Open Group, used by a wide variety of industries and organizations. FAIR not only provides the means to break down and discuss how risk is derived; it is also a roadmap for performing risk analysis. Once you know which inputs you need to derive the risk, you can go into your organization and acquire them.
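To make that concrete: at the top of the FAIR model, risk is factored into Loss Event Frequency (events per year) and Loss Magnitude (loss per event), and a common way to turn calibrated estimates of those inputs into a result is Monte Carlo simulation. The sketch below is a deliberately minimal illustration; the triangular distribution and every number in it are placeholder assumptions, not values or methods prescribed by the standard (real tooling uses calibrated ranges and richer distributions):

```python
import random
import statistics

def simulate_ale(lef, lm, trials=10_000, seed=42):
    """Monte Carlo estimate of annualized loss exposure (ALE).

    lef, lm: (min, most_likely, max) estimates for Loss Event Frequency
    (events/year) and Loss Magnitude (dollars/event).
    """
    rng = random.Random(seed)

    def draw(low, mode, high):
        # random.triangular takes its arguments as (low, high, mode)
        return rng.triangular(low, high, mode)

    # Each trial: how often did we lose this year, times how much per loss?
    samples = [draw(*lef) * draw(*lm) for _ in range(trials)]
    return statistics.mean(samples)

# Hypothetical calibrated estimates gathered from subject-matter experts:
lef = (0.1, 0.5, 2.0)               # loss events per year
lm = (50_000, 250_000, 1_500_000)   # dollars per loss event

print(f"Annualized loss exposure: ${simulate_ale(lef, lm):,.0f}")
```

Because the output lands in dollars per year, it plugs directly into the financial conversation with the rest of the business, which is the payoff discussed below.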
Nowhere is it more evident than in our own industry that terms you would think are foundational to what we do – risk, vulnerability, threat – are used with different meanings by different people, and sometimes interchangeably. Communication, not to mention gathering inputs for an analysis, becomes exceptionally difficult when we don’t share a common language. FAIR not only has standard definitions for those terms but requires that you use them consistently as part of conducting a FAIR analysis.
Financial terms
Lastly, by leveraging FAIR, you gain your seat at the decision table by presenting your results in financial terms (see a sample FAIR analysis report from a RiskLens case study on IP theft below). No longer will you have to spend extensive time explaining your cyber risk management decisions; your colleagues will understand immediately, because you will all be speaking the same language.
These components, along with a few others, are what provide the rigor, consistency, and defensibility of a sound risk analysis framework. Another question for you: does your organization’s current risk analysis framework provide these things? If not, I’d consider adding FAIR to it.
See the FAIR model in action in these case studies from RiskLens customers: