by: Jack Jones
NOTE: This post contains material that is part of a book on FAIR that I’m co-authoring with Jack Freund. Please recognize that this material should be considered DRAFT in terms of editorial fixing-up, so thoughtful feedback would be sincerely appreciated. Apologies in advance for this post’s lengthy nature. I hope you find it worth your time!
When people talk about “risk management”, it’s often within the context of a framework like ISO 2700n, COBIT, PCI, NIST, or something similar. Rest assured, these frameworks all offer significant value by providing checklists and descriptions of the things that should exist in an organization’s risk management program. Unfortunately, though, checking boxes does not equate to effective risk management, and it is unusual to see an organization’s risk management program operate effectively as a feedback system. As a result, you’ll very often see risk management programs that experience the same problems over and over again (even when the program has checked all of the requisite boxes in its risk management framework of choice). Picture Bill Murray in the movie Groundhog Day.
In this post, I’ll share some insights that can help you avoid (or get out of) the Groundhog Day infinite loop.
Breaking out of Groundhog Day
Let’s start with a couple of foundational concepts:
- The amount of risk an organization has today is a lagging indicator of its ability to manage risk up to that point.
In other words, the risk decisions that were made (or not made) in the past, and the organization’s ability to execute effectively against those decisions, got it to where it is today. This being the case, if we understand the factors that drive effective (or not so effective) risk management, then we can do some powerful root cause analyses to understand what brought us to where we are. But that’s not all. There’s a corollary to the first concept:
- If we understand our current risk management capabilities, we can infer what our future risk position is likely to be.
Essentially, we can turn the coin around and evaluate our current risk management capabilities to get some idea of what our future risk posture is likely to be -- unless we make changes.
But in order to do this, we first have to understand the risk management landscape as a feedback system.
A systems view of risk management
In this section I’ll build a picture of the risk management landscape element-by-element, and describe how these elements (should) work together as a feedback system. It will also become readily apparent where and how risk analysis fits into the system, and why it is so important for the system to function properly.
The first two elements in the picture are Risk and Risk Management. Keep in mind that, at the macro level, Risk equates to the aggregate loss exposure an organization has. In addition, remember our first foundational concept -- that the amount of risk an organization has is a function of its ability to manage risk up to that point. More on this to come...
Next, recall that Risk is a function of the threats, assets, controls, and impact factors (e.g., laws, etc.) that drive loss exposure.
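To make that decomposition concrete, here is a toy sketch of how risk factors combine into loss exposure, loosely following the FAIR factoring of risk into loss event frequency and loss magnitude. The function names and every number are hypothetical illustrations, not values from the book:

```python
# Toy decomposition of risk into its driving factors, loosely in the
# spirit of FAIR. All names and numbers here are hypothetical examples.

def loss_event_frequency(threat_event_frequency, vulnerability):
    """Loss events per year = threat events per year * probability that
    a threat event becomes a loss event (a function of controls)."""
    return threat_event_frequency * vulnerability

def annualized_loss_exposure(lef, loss_magnitude):
    """Expected annualized loss = loss events per year * average loss
    per event (driven by assets and impact factors such as laws)."""
    return lef * loss_magnitude

# Hypothetical scenario: 10 threat events/year, 20% of which succeed,
# with an average loss of $50,000 per event.
lef = loss_event_frequency(10, 0.20)          # 2.0 loss events/year
ale = annualized_loss_exposure(lef, 50_000)   # $100,000/year
print(f"LEF: {lef} events/yr, ALE: ${ale:,.0f}/yr")
```

The point is not the arithmetic, which is deliberately trivial, but that each factor (threats, controls, assets, impact) has an explicit place in the calculation rather than being mushed into a single rating.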
Risk Management is, as we’ve already mentioned, comprised of decisions and execution.
Those decisions revolve around the policies, human resources, processes and technologies that an organization chooses to implement, all of which are intended to achieve risk objectives. Now, let’s not fool ourselves. Rarely, if ever, do organizations set specific, measurable risk objectives. Oh, they may establish KRIs and such, but those are rarely tied to actual loss exposure values except in some of the more mature risk disciplines (e.g., credit risk, investment risk, etc.). As a result, KRI thresholds are typically pretty arbitrary and may not even remotely represent relevant loss exposure levels.
Although decisions in the form of policies, processes and technologies represent (albeit loosely) an intended level of risk, what an organization actually gets in terms of risk is a function of execution, within the context of those decisions.
Now, in order for decisions to have a reasonable shot at being the right decisions, the decision-makers must have good information regarding the value proposition of those decisions, the organization’s risk tolerance, how much risk is involved in the decision, the best policy, process and/or technology options, and the organization’s ability to execute consistently and effectively against the decision.
And in order for execution to take place consistently and effectively, the people responsible for execution have to be aware of the expectations, have the capability to execute (in terms of skills and resources), and be motivated to execute. All of which are dependent on the communication, support and enforcement applied to those decisions.
Great. Now we have a picture of the risk management landscape, but it’s still not a system. It needs a feedback loop. The illustration below shows just such a feedback loop. In this case the feedback is about control conditions (e.g., passwords, process compliance, etc.). This controls feedback loop is fine as far as it goes, but it doesn’t go very far, really. Knowing control conditions without a risk-based context doesn’t allow decision-makers to understand whether those conditions (both compliant and non-compliant) are relevant. They can (and often do) make assumptions in that regard, but they need to understand the level of risk in order to know whether controls need to be strengthened, left alone, or whether they can be relaxed.
In order to get that risk-based information, they need the complete picture of the things that create risk. This is the first point at which most organizations fall down. Oh, to be sure, the various risk assessments that are done provide risk ratings. Unfortunately, many times those ratings are inaccurate, grossly imprecise, or both.
To get the point across, let’s imagine that you’re in charge of running a company. People come to you regularly with ideas for initiatives that will (supposedly) drive more revenue into the organization. For example, Bob brings you an idea that he says will increase revenue by a Medium amount. Medium? Has Bob been drinking? How the heck are you supposed to make a decision based on that? But, okay, let’s imagine that’s how it’s done. Nobody quantifies revenue opportunities in your company. It’s all a matter of High, Medium, and Low. Of course, nobody bothers to define High, Medium, or Low very explicitly. It’s either left to your own imagination and assumptions, or they define it with something like, “High revenue is that revenue which is either highly likely to occur and/or be significant in impact to the bottom line.” Defining qualitative terms with other qualitative terms is usually not very helpful in clarifying matters. How effective do you imagine you’re going to be as a decision-maker? And how is this any different than the other side of the decision coin -- the risk side? Not very.
As for precision, you can’t get much less precise than High, Medium, and Low if those are supposed to represent the entire continuum of revenue opportunity or loss exposure.
This is the first, best place for FAIR, a quantitative risk management methodology, to affect an organization’s ability to manage risk by providing higher-quality risk intelligence to decision-makers.
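One way to see what “higher-quality risk intelligence” looks like in practice: instead of a single High/Medium/Low label, a FAIR-style analysis typically starts from calibrated ranges (minimum, most likely, maximum) and simulates a distribution of annualized loss. The sketch below uses triangular distributions from Python’s standard library as a stand-in for the PERT/beta distributions often used in practice; every input value is a made-up example, not real calibration data:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def simulate_annual_loss():
    # Hypothetical calibrated ranges (min, max, most likely):
    # loss event frequency between 0.5 and 4 events/year, most likely 1.
    lef = random.triangular(0.5, 4.0, 1.0)
    # Loss magnitude per event between $10k and $500k, most likely $50k.
    lm = random.triangular(10_000, 500_000, 50_000)
    return lef * lm

# Monte Carlo: many simulated years yield a loss exposure distribution.
losses = [simulate_annual_loss() for _ in range(10_000)]

print(f"Median annualized loss: ${statistics.median(losses):,.0f}")
print(f"90th percentile:        ${statistics.quantiles(losses, n=10)[-1]:,.0f}")
```

A decision-maker handed a median and a 90th percentile in dollars can compare the exposure to the cost of a proposed control; a decision-maker handed “Medium” cannot.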
Frankly, if the risk analyses performed within an organization were accurate, had a useful degree of precision, and were effectively communicated to decision-makers, then that organization would be in far better shape than the vast majority of organizations from a risk management perspective. But there’s still something missing. We’re still stuck in Groundhog Day.
When we get information about non-compliant control conditions through the controls feedback loop, we rarely see organizations go to the trouble of also determining WHY the controls aren’t the way they’re supposed to be. This is probably the main contributing factor behind the Groundhog Day phenomenon. We see organizations fix controls. We rarely see them fix the reasons for repeated control failures. Oh sure, if the control is important enough and if it’s non-compliant often enough, then some draconian threats will usually be made to “encourage” people to comply. Sometimes that’s enough. At least for a while. But often it isn’t effective over the long haul because the fundamental problem isn’t being addressed.
We need to add another feedback loop to our system. In this case a root cause analysis into execution failures. Something that goes well beyond the frequently superficial, “Our awareness program isn’t strong enough.”
For example, if your root cause analysis suggests that motivation is the reason that non-compliance is a problem for a particular control, go further. Evaluate whether motivation to comply is lacking because, just maybe, your risk management program hasn’t informed management that people don’t view compliance as important because management doesn’t enforce it. And maybe management isn’t enforcing it because the policy in question is the wrong policy for the organization. And maybe it’s the wrong policy because nobody has done a decent risk analysis to evaluate what the right policy might be. See? Not superficial. But if you get to the root of the problem, you have a much better chance of actually solving the problem and breaking out of Groundhog Day.
I would much prefer to do a little extra digging to get to the root of the problem and solve it once and for all, than to play whack-a-mole with the same issues repeatedly.
The definition of insanity...
...is doing the same things over and over again, expecting a different result. Given that definition, and the lack of effective feedback loops, I’d argue that many risk management programs are relatively insane.
Risk management metrics
Well, this post is over-long already so I won’t go into metrics today. Perhaps in another post. The book will, of course, treat the subject of metrics in-depth.