4 Most Surprising Results from Quantitative Risk Analysis

November 15, 2019  Taylor Maze

To run a FAIR quantitative risk analysis is to take a truly fresh look at all your assumptions about risk: the value of your assets, the strength of your controls and the real likelihood of loss. You'll probably have a few “head scratcher” moments along the way, when the results you expected do not quite align with what the analysis actually calculates.

Here are some of the surprise analysis results we hear about from RiskLens clients (and why they may not be so surprising after all).

1. You implemented FAIR Risk Analysis in your organization and your maturity scores have gotten worse, not better.

One observation we have made across our FAIR risk analyses is that when FAIR is first implemented in an organization, the organization’s performance on maturity scales and scores takes a nose dive. You probably do not want to hear this, but that is a good sign!

Prior to implementing FAIR, you likely did not have as much visibility into your risk environment. As a result, when we take a closer and more accurate view of organizational maturity using the FAIR model, your eyes are opened to the gaps that currently exist in the organization. This is a sign of progress – that we are taking the steps to better the organization and the industry overall.

2.  In general, the loss exposure that you calculated using FAIR is significantly less than what you estimated previously.

Contrary to maturity scores, as a result of implementing FAIR, organizations typically see risk levels decrease. This is primarily due to two things. The first is that without a structured and consistent model, there is a tendency to inflate risk measurements. If you think about it, this is not that surprising at all. In general, people are risk averse. This means that when they consider the potential risk associated with an event, they tend to think of the worst possible fallout from that scenario. Better to have it and not need it than need it and not have it, right?

As you probably have gathered, that is not the way FAIR works. Instead, we focus on the most likely scenario and use calibration to ensure we are considering the full range of potential outcomes while still identifying the most likely result. This method makes assumptions explicit and avoids the “sky is falling” approach to risk analysis that tends to overstate results.
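To make the calibration point concrete, here is a minimal sketch of working from a calibrated range (minimum, most likely, maximum) instead of a single worst-case number. The dollar figures are invented for illustration, and the triangular distribution is a stand-in to keep the example dependency-free; it is not RiskLens's implementation.

```python
# A minimal sketch: sample a calibrated range instead of assuming the worst case.
# All figures are illustrative placeholders, not client data.
import random

def sample_triangular(low, mode, high, n=10_000):
    """Draw n samples from a triangular distribution defined by a calibrated estimate."""
    return sorted(random.triangular(low, high, mode) for _ in range(n))

# Calibrated estimate of single-event loss magnitude (illustrative dollars).
losses = sample_triangular(low=50_000, mode=200_000, high=1_500_000)

print(f"Most likely (mode) estimate:  $200,000")
print(f"Simulated average loss:       ${sum(losses) / len(losses):,.0f}")
print(f"90th percentile ('bad year'): ${losses[int(0.9 * len(losses))]:,.0f}")
print(f"Worst-case-only estimate:     $1,500,000")
```

The gap between the simulated average and the worst-case-only number is exactly the inflation the paragraph above describes.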

The second reason is that other methods typically do not account for the different forms of loss. In FAIR, we carefully categorize loss into six types. By doing so we make sure that each potential loss is considered and that no loss is counted twice, which would inflate the level of risk. Without a systematic method, it is more likely that loss types will be double counted or not properly calibrated, as referenced above.
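For reference, FAIR's six forms of loss are productivity, response, replacement, fines and judgments, competitive advantage, and reputation. A toy sketch of tallying a scenario's loss by form, so that each form is considered exactly once, might look like this (the dollar figures are invented):

```python
# Toy tally of a single scenario's loss across FAIR's six forms of loss.
# Assigning every cost to exactly one form prevents double counting.
# Dollar figures are invented for illustration.
FORMS_OF_LOSS = (
    "productivity", "response", "replacement",
    "fines_and_judgments", "competitive_advantage", "reputation",
)

scenario_losses = {
    "productivity": 120_000,          # lost worker output during the event
    "response": 45_000,               # incident response and investigation hours
    "replacement": 10_000,            # assets replaced
    "fines_and_judgments": 0,
    "competitive_advantage": 0,
    "reputation": 30_000,             # estimated customer churn
}

# Every form is accounted for, and nothing is counted under two headings.
assert set(scenario_losses) == set(FORMS_OF_LOSS)
print(f"Total single-event loss: ${sum(scenario_losses.values()):,}")
```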

3.  High value assets may pose less risk than you thought. 

Similar to the observation above about scenarios in general, we hear that for Availability scenarios specifically, the anticipated level of risk was significantly higher than the actual results. This result, we have noted, stems primarily from a lack of clear understanding of the environment surrounding the specific asset in question. Generally, we have seen that IT anticipates a significant loss from an outage, due to the high visibility of the asset in question.

However, after discussing the outage with the business teams, it becomes evident there are numerous manual controls in place to provide workarounds in the event of an outage. As a result, there is a significantly less pervasive disruption of business processes, if any at all. This lack of disruption dramatically reduces the overall impact to the organization and as such, reduces the level of risk.

4.  Small, frequent losses may be more of a risk than you thought.

In FAIR analysis, we consider both the frequency and the impact of loss events, and we often find that clients tend to focus on impact over frequency. Here's an example from a FAIR analysis for a bank, run by my colleague Cody Whelan.

Fraudulent client transactions due to compromised customer credentials seem to be a concern to almost all financial institutions that we work with. Most, if not all, have robust fraud departments with any number of detection and recovery capabilities to safeguard their clients. Yet it may be due to these efforts that this scenario more or less flies under the radar as a concern. Let me elaborate…

The number of customer accounts at the bank that are compromised due to social engineering runs into the hundreds per year, which means there are just as many initiated fraudulent transactions. For each fraudulent transaction, the bank engages several teams: Fraud, Cyber Security, Wire Investigation. The teams thwart about one-third of the attempted compromises, but every successful fraudulent transaction costs the bank $15,000 in person hours. That is not much per incident, but multiply it by the number of incidents and it becomes death by a thousand cuts.
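Here is a back-of-the-envelope sketch of how those small losses accumulate. The inputs mirror the example above (hundreds of attempts per year, roughly two-thirds succeeding, about $15,000 per successful transaction); the exact figures are placeholders, not the client's data.

```python
# Rough annualized loss exposure for the fraudulent-transaction scenario.
# Inputs are illustrative placeholders based on the example in the text.
attempted_transactions_per_year = 300      # "hundreds per year"
success_rate = 2 / 3                       # roughly one-third are thwarted
cost_per_successful_transaction = 15_000   # person hours across Fraud, Cyber Security, Wire teams

successful = attempted_transactions_per_year * success_rate
annualized_loss = successful * cost_per_successful_transaction

print(f"Successful fraudulent transactions per year: ~{successful:.0f}")
print(f"Annualized loss exposure: ~${annualized_loss:,.0f}")   # about $3,000,000 with these inputs
```

Even with a modest per-event cost, the frequency drives the annualized loss exposure into territory that deserves attention.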

If you experience analysis results that you cannot quite wrap your head around, resources like the RiskLens team and the FAIR Institute members discussion board are there to help decipher the underlying reasons for the results. Happy analyzing!