One of the reasons I love attending local FAIR Institute chapter meetings around the country is that I invariably get asked questions that prompt ideas for this blog site. For example, in a recent meeting a gentleman said that he believed executives would be skeptical of numbers in cyber risk assessment and prefer simpler red/yellow/green representations of risk.
Furthermore, he was concerned about the potential for someone to “game the numbers” to drive an agenda. Let me address each of these concerns in order.
Jack Jones is the creator of the FAIR model that powers RiskLens.
Yes, indeed. The first time or two that you present quantitative risk analysis results to executives, they probably will (and should) be skeptical of the numbers.
After all, they’ve likely never seen cyber/technology risk presented this way, and they may even have been told previously that it can’t be done. Or perhaps someone presented quantitative risk analysis results to them in the past but butchered the job for one reason or another (I've seen some incredibly amateurish attempts at quantitative risk analysis in our industry).
So of course, their skepticism is healthy and appropriate. The good news is that I have never had a negative reaction from an executive once I’ve explained that the analysis leveraged:
- An industry standard model (FAIR)
- Well-established computational algorithms like Monte Carlo
- Data gathered by their own subject matter experts (with calibration applied), as well as industry data when it exists
The fact that I can explain, in detail, how the analysis was performed and the assumptions underlying it has also always been warmly received.
The bottom line — we can show our homework, which isn’t possible when ordinal risk ratings are based on waving a wet finger in the air.
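To make the "show our homework" point concrete, here is a minimal sketch of the kind of Monte Carlo analysis described above. This is not the RiskLens implementation; the triangular distributions and all parameter values are hypothetical stand-ins for calibrated SME estimates:

```python
import random
import statistics

random.seed(7)  # reproducible sketch

# Hypothetical calibrated SME estimates as (min, most likely, max).
LEF = (0.1, 0.5, 2.0)               # loss event frequency, events/year
LM = (50_000, 250_000, 2_000_000)   # loss magnitude per event, USD

def one_trial():
    """Simulate one year of loss exposure."""
    # Sample an annual event frequency; a triangular distribution is a
    # simple stand-in for the calibrated ranges used in FAIR analyses.
    freq = random.triangular(LEF[0], LEF[2], LEF[1])
    # Convert the fractional rate into a whole number of events this year.
    events = int(freq) + (1 if random.random() < freq % 1 else 0)
    # Sum an independently sampled magnitude for each event.
    return sum(random.triangular(LM[0], LM[2], LM[1]) for _ in range(events))

losses = [one_trial() for _ in range(10_000)]
p50 = statistics.median(losses)
p90 = statistics.quantiles(losses, n=10)[8]  # 90th percentile
print(f"Annualized loss exposure: median ${p50:,.0f}, 90th pct ${p90:,.0f}")
```

Every number feeding the result is inspectable, which is exactly what makes the assumptions defensible in front of an executive.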
It is certainly true that executives like to keep things simple. But usually this is a function of something more than just “simplicity as a goal.”
Very often, the cyber and technology risk reporting executives have seen in the past has consisted of patching metrics, malware counts, and other security-centric techno-babble that isn’t meaningful to them. As a result, they simply defaulted to wanting to be told whether a topic is something they need to worry about — i.e., is it “red?”
In my experience, if cyber/technology risk information is put in front of them that they can wrap their heads around, they very often prefer numbers. Even when some of them still prefer red/yellow/green for the dashboard, being able to explain/justify those ordinal colors with quantitative analysis has always been appreciated.
Gaming the numbers
The FAIR model — just like any other analysis approach — can be gamed to drive an agenda. There are, however, two points you need to understand about gaming risk measurements:
- Because FAIR analysis (done right) involves clear scoping of the analysis, and an explanation of the underlying assumptions and sources of data, gaming becomes both difficult and dangerous. Gaming a FAIR analysis is a great way to lose all credibility, and perhaps your job.
- The easiest risk measurement to game is the ordinal wet finger in the air. After all, these are almost never based on a clearly defined scope or assumptions, and are so ambiguous in nature that it’s just too easy to fall back on the age-old shamanistic “I’m the SME, so trust me” argument.
Occasionally, even executives who are quantitatively inclined have some discomfort with the imprecision that can result when quantitative risk analyses rely on sparse data. The key to dealing with this is to remind them of two things:
- Risk measurement in a FAIR analysis provides a faithful representation of data quality, which is always an important risk parameter for them to be aware of (and is something they will never get from red/yellow/green risk ratings). There is often a very good explanation for why data are sparse, and very often there are options for improving data over time through better technology, process, policy, etc. — options that are at least now on the table for discussion.
- Data are what they are. By that I mean, the alternative red/yellow/green risk ratings are based on the same (or usually worse) data than the imprecise quantitative analysis. So, back to my point above, at least now they understand when and where and why imprecision exists.
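The "faithful representation of data quality" point can be demonstrated directly: run the same model with a well-researched (narrow) input range and a sparse-data (wide) one, and the output distribution visibly widens instead of hiding the uncertainty. A minimal sketch, with hypothetical USD figures:

```python
import random
import statistics

def spread(mag_range, trials=20_000, seed=1):
    """Return the p90 - p10 spread of simulated per-event loss magnitude."""
    rng = random.Random(seed)
    low, mode, high = mag_range
    samples = [rng.triangular(low, high, mode) for _ in range(trials)]
    q = statistics.quantiles(samples, n=10)
    return q[8] - q[0]  # 90th percentile minus 10th percentile

# Well-researched estimate vs. sparse-data estimate (min, most likely, max).
narrow = spread((200_000, 250_000, 300_000))
wide = spread((50_000, 250_000, 2_000_000))
print(f"narrow-data spread: ${narrow:,.0f}; sparse-data spread: ${wide:,.0f}")
```

The sparse-data run produces a much wider spread, so the imprecision is displayed to the decision maker rather than buried under a color.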
The bottom line is that executive concerns regarding quantitative analyses usually stem from past exposure to cyber-babble that wasn’t meaningful to them.
Any other concerns they may have about the numbers can be logically and effectively answered. These are smart people who just need clear and logical answers.