
Talking Cyber Risk Analysis to Skeptical Executives

by Jack Jones on Jun 12, 2017 11:02:45 AM

One of the reasons I love attending local FAIR Institute chapter meetings around the country is that I invariably get asked questions that prompt ideas for this blog site. For example, in a recent meeting a gentleman said he believed executives would be skeptical of numbers in cyber risk assessment and would prefer simpler red/yellow/green representations of risk.

Furthermore, he was concerned about the potential for someone to “game the numbers” to drive an agenda.  Let me address each of these concerns in order.




Skepticism

Yes, indeed.  The first time or two that you present quantitative risk analysis results to executives, they probably will (and should) be skeptical of the numbers. 

After all, they’ve likely never seen cyber/technology risk presented this way, and they may even have been told previously that it can’t be done.  Or perhaps someone presented quantitative risk analysis results to them in the past but butchered it for one reason or another (I've seen some incredibly amateurish attempts at quantitative risk analysis in our industry). 

So of course, their skepticism is healthy and appropriate.  The good news is that I have never had a negative reaction from an executive once I’ve explained what the analysis leveraged.
The fact that I can explain, in detail, how the analysis was performed and the assumptions underlying it has also always been warmly received. 

The bottom line — we can show our homework, which isn’t possible when ordinal risk ratings are based on waving a wet finger in the air.

Simpler?

It is certainly true that executives like to keep things simple.  But usually this preference is driven by something more than simplicity for its own sake. 

Very often, the cyber and technology risk reporting executives have seen in the past has consisted of patching metrics, malware counts, and other security-centric techno-babble that isn’t meaningful to them.  As a result, they simply defaulted to wanting to be told whether a topic is something they need to worry about — i.e., is it “red?” 

In my experience, if cyber/technology risk information is put in front of them that they can wrap their heads around, they very often prefer numbers.  Even when some of them still prefer red/yellow/green for the dashboard, being able to explain/justify those ordinal colors with quantitative analysis has always been appreciated.  
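To illustrate that last point, ordinal colors can be derived from quantitative results rather than substituted for them. The sketch below is purely hypothetical (it is not FAIR or RiskLens code, and the dollar thresholds are invented for illustration); in practice the bands would come from the organization's own risk appetite.

```python
# Hypothetical thresholds for illustration only -- in practice these
# would come from the organization's risk appetite statement.
THRESHOLDS = [
    (1_000_000, "red"),     # annualized loss exposure >= $1M
    (250_000, "yellow"),    # >= $250K
    (0, "green"),           # anything below $250K
]

def dashboard_color(annualized_loss_exposure: float) -> str:
    """Translate a quantitative loss estimate into an ordinal color,
    so the dashboard stays simple but the rating remains defensible."""
    for floor, color in THRESHOLDS:
        if annualized_loss_exposure >= floor:
            return color
    return "green"
```

The point is that the color is now the output of an explainable calculation, not the starting point.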

Gaming the numbers

The FAIR model — just like any other analysis approach — can be gamed to drive an agenda.  There are, however, two points you need to understand about gaming risk measurements:

  • Because FAIR analysis (done right) involves clear scoping of the analysis, and an explanation of the underlying assumptions and sources of data, gaming becomes both difficult and dangerous.  Gaming a FAIR analysis is a great way to lose all credibility, and perhaps your job.
  • The easiest risk measurement to game is the ordinal wet finger in the air.  After all, these are almost never based on a clearly defined scope or assumptions, and are so ambiguous in nature that it’s just too easy to fall back on the age-old shamanistic “I’m the SME, so trust me” argument.

Still…

Occasionally, even executives who are quantitatively inclined have some discomfort with the imprecision that can result when quantitative risk analyses rely on sparse data.  The key to dealing with this is to remind them of two things:

  • Risk measurement in a FAIR analysis provides a faithful representation of data quality, which is always an important risk parameter for them to be aware of (and is something they will never get from red/yellow/green risk ratings).  There is often a very good explanation for why data are sparse, and very often there are options for improving data over time through better technology/process/policy/etc., which at least now is on the table for discussion.
  • Data are what they are.  By that I mean, the alternative red/yellow/green risk ratings are based on the same (or usually worse) data than the imprecise quantitative analysis.  So, back to my point above, at least now they understand when and where and why imprecision exists.
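Both points above can be made concrete with a small Monte Carlo sketch. This is not the FAIR standard or any vendor's engine — just an illustration with made-up ranges, using uniform distributions as a crude stand-in for calibrated estimates. The takeaway is that sparse data forces wide input ranges, and the width of the resulting loss distribution makes that imprecision visible rather than hiding it behind a color.

```python
import random

def simulate_annual_loss(freq_low, freq_high, mag_low, mag_high,
                         trials=10_000, seed=42):
    """Monte Carlo over loss event frequency x loss magnitude
    (the top-level FAIR decomposition), with uniform ranges as a
    simplified stand-in for calibrated estimates."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        events = rng.uniform(freq_low, freq_high)    # events per year
        magnitude = rng.uniform(mag_low, mag_high)   # $ per event
        losses.append(events * magnitude)
    losses.sort()
    # Report a range, not a false point estimate: 10th/90th percentiles.
    return losses[trials // 10], losses[9 * trials // 10]

# Sparse data -> wide input ranges -> a visibly wide output range.
lo, hi = simulate_annual_loss(0.1, 2.0, 50_000, 500_000)
```

Running this with wide (sparse-data) ranges versus narrow (well-supported) ranges shows the output interval expanding accordingly — exactly the honesty about data quality that an ordinal rating cannot convey.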

The bottom line is that executive concerns regarding quantitative analyses are usually rooted in wanting to avoid cyber-babble that isn’t meaningful to them. 

Any other concerns they may have about the numbers can be logically and effectively answered.  These are smart people who just need clear and logical answers.

Related:

Presenting the Top 10 Risks to the Board [video]

An Executive's Guide to Cyber Risk Economics [eBook]

This post was written by Jack Jones

Jack Jones is the EVP of R&D and a Founder of RiskLens. He has worked in technology for over 30 years, the past 28 years in information security and risk management. He has a decade of experience as a Chief Information Security Officer (CISO) with three different companies, including a Fortune 100 financial services company. His work there was recognized in 2006 when he received the Information Systems Security Association (ISSA) Excellence in the Field of Security Practices award. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012, he was honored with the CSO Compass Award for leadership in risk management. Jones, who lives in Spokane, Washington, has served on the ISACA CRISC Certification Committee and RiskIT Task Force, as well as the ISC2 Ethics Committee. He is the author and creator of the Factor Analysis of Information Risk (FAIR) framework. He writes about that system in his book Measuring and Managing Information Risk: A FAIR Approach, which was inducted into the Cyber Security Canon in 2016, as a must-read in the profession.
