Three “Surprises” from a FAIR Cyber Risk Analysis

July 24, 2019  Brock Krawczun

We recently conducted an engagement with a bank analyzing the risk associated with wire fraud. The outcome of the quantitative cyber risk analysis surprised some of the team members who went through the process. One of the biggest overall findings was that the loss exposure was significantly less than expected in each of the scenarios. In many of the scenarios the team feared most, our cyber risk analysis indicated much less cause for concern.

The analysis leveraged Factor Analysis of Information Risk, the FAIR model, to break down risk into its two key components:

(1) Frequency: how often loss events occur, and

(2) Magnitude: how bad the loss is when an event occurs
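The frequency-times-magnitude decomposition can be made concrete with a small Monte Carlo sketch. The distribution choices and parameters below are purely illustrative assumptions, not figures from the engagement: frequency is drawn as a small integer count of events per year, and each event's magnitude as a lognormal dollar loss.

```python
import random

random.seed(0)

def simulate_loss_exposure(trials=10_000):
    """Monte Carlo sketch of annualized loss exposure: for each
    simulated year, draw an event frequency and a per-event loss
    magnitude, then combine them. All parameters are hypothetical."""
    annual_losses = []
    for _ in range(trials):
        # Frequency: loss events in the simulated year (illustrative draw)
        events = random.randint(0, 4)
        # Magnitude: dollar loss per event (illustrative lognormal draw)
        loss = sum(random.lognormvariate(mu=11, sigma=1.2) for _ in range(events))
        annual_losses.append(loss)
    annual_losses.sort()
    return {
        "average": sum(annual_losses) / trials,
        "p90": annual_losses[int(trials * 0.9)],  # 90th-percentile year
    }

summary = simulate_loss_exposure()
```

Reporting a distribution (an average plus a tail percentile) rather than a single score is what lets stakeholders compare scenarios in dollar terms.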

Although the results may have challenged some of the team's predispositions, many of the bigger surprises emerged during the process of performing the risk analysis itself. Below are three key takeaways from the analysis that other organizations may find useful when adopting FAIR.

1. Risk Identification Intake Process

Prior to leveraging FAIR, the organization captured “findings” based on industry leading practice checklists or reports from employees across the enterprise. These findings were then loosely organized and delegated to teams to be addressed.

By using FAIR, the team identified some of the shortcomings of this process. There was no consistent method for defining possible loss events. In some cases, key descriptive elements were missing, or the scenario was highly unlikely to occur. For example, one finding described a scenario in which a threat actor would need to execute many complicated steps, several controls would have to fail, and the actor would need prior knowledge of the exact time to carry out those steps. Ultimately, the team committed to redesigning the method.

Moving forward, the team will require all “findings” to be organized into loss events with a clear asset, threat, and effect identified, following the FAIR model (see the FAIR model on one page). By changing the intake process to more clearly define these components, the team standardized loss events across functions and framed findings in context to enable appropriate prioritization.

2. Opportunities for Risk Reduction

After modeling the initial scenarios, we began to consider methods to decrease the loss exposure. Many assumed that remediating the previously identified findings would be the optimal solution, or that expensive investments in hiring or procuring new security technology would be required. Instead, we challenged the team to consider changes beyond the information security environment. For example, what business process changes or operational reviews might impact the scenario?

The results surprised many: most of the opportunities to reduce loss exposure were actually in this area of process improvements. In many cases, the initial remediation assumptions would cost more in time, effort, and money than adjusting processes associated with the scenario (e.g., the levels at which manual review is required). By taking this more comprehensive approach and asking which factor would make the biggest difference, the organization was able to reduce its loss exposure and attain a better return on investment.
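The kind of comparison described above can be sketched as exposure reduction per dollar spent. The option names and all dollar figures below are hypothetical placeholders, not numbers from the engagement:

```python
# Hypothetical comparison of two risk-reduction options; all figures
# are illustrative, not from the engagement described in this post.
options = {
    "new security tooling": {"cost": 500_000, "exposure_reduction": 600_000},
    "lower manual-review threshold": {"cost": 50_000, "exposure_reduction": 400_000},
}

def roi(option):
    """Reduction in annualized loss exposure per dollar spent."""
    return option["exposure_reduction"] / option["cost"]

# The cheaper process change wins on return, even though the tooling
# purchase reduces exposure by a larger absolute amount.
best = max(options, key=lambda name: roi(options[name]))
```

Framing options this way makes it visible when a modest process change outperforms a large technology purchase.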

3. Reconsidering Industry Frameworks (e.g., DREAD)

The fraud team was familiar with the DREAD threat framework, and existing processes therefore leveraged that scoring method. Consequently, there was initial reluctance to deviate from or ignore the DREAD scores. Despite this initial inertia, we collectively worked through a series of examples of the DREAD scores and highlighted some areas of concern, such as arbitrary or inconsistent scoring.

The team was surprised at how some of the high scores from DREAD were not validated as high risks in FAIR analysis. They realized that the DREAD framework could best be used to inform the frequency side of FAIR analysis, and not as a direct indicator of risk. (For more, read this blog post from the FAIR Institute: How to Use DREAD Analysis with FAIR).
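One way to picture this point: DREAD rates a threat on five 0-10 dimensions, but only the likelihood-oriented ones (Reproducibility, Exploitability, Discoverability) say much about how often an event might occur. The sketch below, with hypothetical scores and a hypothetical mapping, shows how a threat can carry a high overall DREAD score while signaling low frequency:

```python
# Hypothetical DREAD scores for one threat (0-10 per dimension).
dread = {
    "Damage": 9,
    "Reproducibility": 3,
    "Exploitability": 2,
    "Affected users": 8,
    "Discoverability": 4,
}

# Assumption: only these dimensions speak to likelihood; this mapping
# is an illustration, not a standard DREAD-to-FAIR conversion.
LIKELIHOOD_DIMENSIONS = ("Reproducibility", "Exploitability", "Discoverability")

def frequency_signal(scores):
    """Average the likelihood-oriented DREAD dimensions as a rough
    input to the frequency side of a FAIR analysis."""
    return sum(scores[d] for d in LIKELIHOOD_DIMENSIONS) / len(LIKELIHOOD_DIMENSIONS)

signal = frequency_signal(dread)
```

Here the impact-heavy dimensions (Damage, Affected users) inflate the total DREAD score, while the frequency signal remains low, which is why a high DREAD score need not translate into high risk once frequency is modeled separately.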

While the risk analysis for the bank ultimately quantified the loss exposure related to fraud scenarios, the biggest breakthrough came from building the analysis according to FAIR principles, which upended the organization's previous way of assessing risk. Many of these “surprises” prompted broader changes in how risk was assessed and managed throughout the organization.


RiskLens is the only cyber risk analytics platform purpose-built on FAIR. The Wall St. Journal recently reported that FAIR is “gaining traction” among organizations with sophisticated risk management operations because “the risk-based system can help companies better understand the costs of cyber threats.” Gartner recently named risk quantification one of the requirements for integrated risk management.