Gartner 2019 Debate: Quantitative vs. Qualitative Cyber Risk Analysis

August 20, 2019  Jack Freund

At the Gartner Security and Risk Management Summit this week, one session addressed quantitative versus qualitative cyber risk assessments and covered the pros and cons of each. The session (titled “Crossfire: Which works better, qualitative versus quantitative risk assessment?”) had four Gartner analysts discussing the state of risk assessments and whether the industry is ready to report cyber risk metrics quantitatively.

The discussion had two analysts in the “pro” quantitative camp and two in the “con.” Overall it was a fun and lighthearted discussion, and in the end the analysts agreed that while there’s still a place for qualitative methods as a communication tool, the quantitative approach to cyber risk is the rising trend.


Definitions

Qualitative cyber risk analysis uses ordinal (1-5) or relative (high-medium-low) scales to rate risks based on the expert opinion of an analyst.

Quantitative cyber risk analysis expresses risk in dollar terms as a range with associated probabilities, based on a standard model (such as the FAIR model that powers RiskLens) that produces consistent results from any analyst.
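To make the distinction concrete, here is a minimal Monte Carlo sketch in the spirit of the FAIR decomposition (loss event frequency combined with loss magnitude, each estimated as a range). It is illustrative only: the triangular distributions and dollar figures are hypothetical stand-ins for calibrated estimates, and this is not the RiskLens engine.

```python
import random

def simulate_annual_loss(freq_min, freq_likely, freq_max,
                         loss_min, loss_likely, loss_max,
                         trials=10_000):
    """FAIR-style sketch: sample loss event frequency and per-event loss
    from triangular ranges and return simulated annual loss totals ($)."""
    results = []
    for _ in range(trials):
        # Number of loss events in a year, drawn from the frequency range
        events = round(random.triangular(freq_min, freq_max, freq_likely))
        # Dollar loss per event, drawn from the magnitude range
        annual_loss = sum(
            random.triangular(loss_min, loss_max, loss_likely)
            for _ in range(events)
        )
        results.append(annual_loss)
    return sorted(results)

# Hypothetical inputs: 0.1-2 loss events/year, $50K-$2M per event
losses = simulate_annual_loss(0.1, 0.5, 2, 50_000, 250_000, 2_000_000)
p10, p50, p90 = (losses[int(len(losses) * p)] for p in (0.10, 0.50, 0.90))
print(f"Simulated annual loss: 10th pct ${p10:,.0f}, "
      f"median ${p50:,.0f}, 90th pct ${p90:,.0f}")
```

The output is a range of annualized loss exposure in dollars rather than a single color or label, which is the essence of the quantitative approach described above.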

For a deep dive into quantitative vs qualitative solutions download the FAIR Institute’s ‘Understanding Cyber Risk Quantification: The Buyer’s Guide’ by Jack Jones


That said, both sides raised points to expose the rough edges of each approach. For instance, one of the key points raised by the qualitative side was that they were not arguing against quantitative methods at all. Instead, they argued that a quantitative measure should not be the only thing used, and that it’s time to use a combination of quantitative and qualitative methods to communicate risk, with data sufficiently rolled up and contextualized into a narrative the board understands.

Indeed, the quant side opened by saying that, amongst the CISOs and other C-suite executives they talk to, boards all want quantitative risk measurement in dollars, just like the other areas of the company. The Gartner analysts indicated that 70% say qualitative measures are not helpful, and nearly the same share say they don’t understand qualitative measures anyway.

From our perspective at RiskLens, we see more stats that confirm the change: Some 30% of the Fortune 1000 and 75% of the Fortune 50 companies are represented in FAIR Institute membership.

The response from the Qual side was that they didn’t dispute that the boards they talk to want dollars. Instead, their argument was about how to come up with those values. As an example, they cited that 45% of those they spoke with didn’t know where their critical systems or data were, and as a result quantitative risk estimates would be inaccurate.

System and Data Inventory Roadblocks to Risk Analysis – FAIR Shows the Way Through

Systems and data inventory is a huge problem and something that is perennially on the SANS Top 20. This affects far more than information security, as asset inventory is a key component of a CIO’s financials. Product consolidations happen frequently because organizations don’t realize they have multiple products doing the same thing.

But, contrary to the argument from the Qual side, FAIR guides us through an information gathering and risk analysis scoping process that can account for incomplete or limited data.

For CISOs trying to get their arms around where their crown jewels are, here are some things RiskLens recommends:

First, start small and realize the snowball effect. Identify key contacts in the lines of business (and supporting IT managers) who can help you better understand the critical business processes and the technologies that support them. If you create a service that takes the list of key resources as an input, it will push the organization to maintain and improve that inventory in order to gain access to critical security, IT, and financial services (e.g., require the registration of vendors and systems in a central CMDB before approving purchase orders).
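As a hypothetical illustration of that CMDB gate (not a RiskLens feature, and the record fields are invented), a purchase-order workflow might simply refuse to proceed until the system and vendor are registered in the central inventory:

```python
# Hypothetical CMDB gate: purchase orders are approved only for systems
# and vendors already registered in the central inventory.
cmdb = {
    "payment-gateway": {"vendor": "AcmePay", "owner": "Payments LoB", "critical": True},
}

def approve_purchase_order(system_name: str, vendor: str) -> bool:
    record = cmdb.get(system_name)
    if record is None or record["vendor"] != vendor:
        # Forces the requester to register the asset first, which keeps
        # the inventory current as a side effect of normal purchasing.
        raise ValueError(f"{system_name} / {vendor} is not registered in the CMDB")
    return True
```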

Sometimes partner organizations will have information that can be used as a proxy. For instance, the business continuity team typically documents the location of critical systems and processes in the business continuity plan. If you start with just the systems you know and build maturity over time, you put a stake in the ground and get on a path to goodness.

Defending the Numbers 

Another criticism leveled by the Qual team was that even if you have the numbers, if they aren’t defensible you are better off using qualitative measures. There is a fundamental flaw with this argument, however: we are ALWAYS using some model to make decisions. Quite often, that model is unaided human judgement.

In truth, whenever you are challenged to defend your qualitative assessments, you are attempting to find data points to justify the rating you supplied. In a sense, when doing this, you are following the same process you would follow anyway with a quantitative risk assessment approach, just without the explicit attention paid to reducing error and bias in the assessment.

Specifically, doing good quantitative risk analysis requires the use of ranges of values, as these assessments are about events that may happen in the future. If that’s the case, then it’s unreasonable for anyone to place a single value on something that is predicted to happen. Indeed, even the Qual team admitted that range-based quantitative risk assessments are eminently defensible – another win for the quant side.
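As a hypothetical worked example of why ranges matter: a calibrated 90% range can be converted into a full distribution and then queried for probabilities, something a single point value cannot support. One common convention treats the range endpoints as the 5th and 95th percentiles of a lognormal; the dollar figures below are invented for illustration.

```python
import math
import random

def lognormal_from_90pct_range(low: float, high: float):
    """Treat a calibrated 90% range as the 5th/95th percentiles of a
    lognormal distribution and return its (mu, sigma) parameters."""
    z95 = 1.645  # standard-normal 95th percentile
    mu = (math.log(low) + math.log(high)) / 2.0
    sigma = (math.log(high) - math.log(low)) / (2.0 * z95)
    return mu, sigma

# Hypothetical analyst estimate: 90% confident a breach costs $100K-$2M
mu, sigma = lognormal_from_90pct_range(100_000, 2_000_000)
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
prob_over_1m = sum(s > 1_000_000 for s in samples) / len(samples)
print(f"P(loss > $1M) ≈ {prob_over_1m:.0%}")
```

A point estimate of, say, $500K cannot answer the question “how likely is a loss over $1M?”; a range-based estimate can, which is a large part of what makes it defensible.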

Moving from Qualitative to Quantitative Cyber Risk Analysis

The Qual team concluded by saying that they were not recommending that organizations don’t do quantitative risk analysis. Indeed, they think the time is right to find initial use cases, projects, and analyses to test the waters and build more mature risk analysis practices in your organization. They believe that a wholesale swap of Qual for Quant is detrimental to risk management success and maturity.

A long list of RiskLens customers who have successfully gone quant would disagree (for more, read our Case Studies). They’ve found that quantitative risk assessments have several advantages over qualitative assessments. Here are a few:

  • You cannot use qualitative risk assessments to determine how much risk-based capital to stow away in a rainy-day fund
  • You cannot effectively perform cost-benefit analysis without risk quantification
  • You cannot know if you are managing to the organization’s risk appetite with only qualitative verbal risk labels
  • You cannot effectively benchmark against other organizations, as their version of ‘high risk’ may not be the same as yours
  • You cannot ensure consistency of ratings across different analysts using qualitative scales
  • You cannot resolve conflicts when analysts or executives disagree on qualitative ratings without quantitative data to back them up
  • You cannot prioritize amongst all the high-risk things in your organization without using quantitative analysis to create a stratification
  • You cannot show trends amongst the high-risk items over time without quantitative data
  • You cannot distinguish between high-rated risk scenarios and high-rated control deficiencies without a solid quantitative risk model
  • You cannot aggregate multiple high-rated risks using qualitative ratings (see the sketch below)
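To illustrate the prioritization and aggregation points above, here is a minimal, hypothetical sketch: quantified scenarios can be summed trial by trial to preserve a full distribution of total loss, whereas there is no defensible arithmetic for adding two ratings of “High.” The scenario names and dollar ranges below are invented.

```python
import random

def aggregate(scenario_samples: list[list[float]]) -> list[float]:
    """Aggregate per-scenario simulated annual losses by summing them
    trial by trial, preserving the full distribution of total loss."""
    return [sum(trial) for trial in zip(*scenario_samples)]

# Hypothetical per-scenario simulations (dollars of annual loss)
trials = 50_000
ransomware = [random.triangular(0, 3_000_000, 400_000) for _ in range(trials)]
vendor_breach = [random.triangular(0, 1_500_000, 200_000) for _ in range(trials)]

total = sorted(aggregate([ransomware, vendor_breach]))
print(f"Aggregate 90th-percentile annual loss: ${total[int(trials * 0.9)]:,.0f}")
```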

Jack Freund, PhD, is Risk Science Director at RiskLens and co-author with Jack Jones of Measuring and Managing Information Risk: A FAIR Approach.