A common concern I hear from new RiskLens customers starting out with cyber risk quantification, as well as from some executives at our existing customers, is that the risk analysis process, and data gathering in particular, takes too long and is too burdensome on their resources.
They worry that identifying subject matter experts (SMEs), tracking down data points, and developing calibrated estimates would bog down their current process, and that the internal politicking required to get SMEs involved and engaged isn't worth the time or effort.
When I hear this, I have two responses:
First, anything worth doing is worth doing well.
And doing something well means spending time and applying the appropriate amount of effort. If we compare the rigorous, critical thinking process fostered by the FAIR model (which powers the RiskLens platform) to the shoot-from-the-hip, "this feels like a red/yellow/green risk" approach that is all too pervasive in our industry, then of course the former will seem like a lot of additional effort.
My other response is, I hear you.
I've worked with enough of our customers to know that many of them agree with my first response in sentiment, but they still have to implement the process and make it work within their own organizations.
So how do we help RiskLens customers balance those efficiency concerns while implementing a more rigorous process? Two things come to mind:
First, through the development of loss tables.
At a high level, loss tables aggregate data points on the Loss Magnitude side of the FAIR model: distributions for all six forms of loss, developed once so they can be used consistently across analyses and easily accessed by every risk analyst.
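As a minimal sketch of the idea, a loss table can be stored as calibrated minimum / most likely / maximum estimates for each of FAIR's six forms of loss, then sampled to produce a loss magnitude distribution. The structure and every number below are hypothetical illustrations, not the RiskLens format, and a triangular distribution stands in for the beta-PERT distribution typically used in FAIR analyses:

```python
import random

# Hypothetical loss table: calibrated (min, most likely, max) estimates
# in USD for each of FAIR's six forms of loss. Numbers are illustrative only.
LOSS_TABLE = {
    "productivity":          (10_000,  50_000,   200_000),
    "response":              (25_000,  75_000,   300_000),
    "replacement":           ( 5_000,  20_000,   100_000),
    "fines_and_judgments":   (     0, 100_000, 2_000_000),
    "competitive_advantage": (     0,  50_000, 1_000_000),
    "reputation":            (     0,  80_000, 1_500_000),
}

def sample_loss_magnitude(table, rng=random):
    """Draw one simulated total loss by sampling each form of loss.

    Uses a triangular distribution as a simple stand-in for the
    beta-PERT distribution commonly used in FAIR-based analyses.
    """
    return sum(
        rng.triangular(low, high, mode)  # signature: triangular(low, high, mode)
        for low, mode, high in table.values()
    )

if __name__ == "__main__":
    random.seed(1)
    samples = sorted(sample_loss_magnitude(LOSS_TABLE) for _ in range(10_000))
    print(f"median simulated loss: ${samples[len(samples) // 2]:,.0f}")
    print(f"90th percentile loss:  ${samples[int(len(samples) * 0.9)]:,.0f}")
```

Because every analyst samples from the same shared table, two analyses of different scenarios stay comparable on the loss side instead of each analyst re-estimating losses from scratch.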
Second, through development of baseline frequency and resistance strength figures.
The goal here is to develop, with SME input, a library of frequency estimates (for threat events) and resistance strength estimates (for controls) based on broad asset categories (e.g., web-facing applications, internal servers) and types of threat actors (e.g., external malicious, internal malicious).
The idea is that when you need to conduct an analysis of a specific web-facing application, and your concern is, for example, external malicious actors, you can easily access this information. You can feel comfortable running the analysis with the baseline frequency and resistance figures as-is, or use them as an efficient starting point for additional assumptions.
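The lookup described above can be sketched as a small library keyed by asset category and threat actor type. Again, the key names, field names, and all values here are hypothetical placeholders for whatever your SMEs actually calibrate:

```python
# Hypothetical baseline library keyed by (asset category, threat actor type).
# Each entry holds SME-calibrated (min, most likely, max) ranges: threat event
# frequency in events per year, and resistance strength as the fraction of the
# threat population the controls can resist. Values are illustrative only.
BASELINES = {
    ("web_facing_application", "external_malicious"): {
        "threat_event_frequency": (2.0, 6.0, 24.0),
        "resistance_strength":    (0.60, 0.75, 0.90),
    },
    ("internal_server", "internal_malicious"): {
        "threat_event_frequency": (0.1, 0.5, 2.0),
        "resistance_strength":    (0.50, 0.70, 0.85),
    },
}

def baseline_for(asset_category, threat_actor):
    """Return baseline estimates for an analysis, or None if none exist.

    Analysts can run with these figures directly, or treat them as a
    starting point and adjust for scenario-specific assumptions.
    """
    return BASELINES.get((asset_category, threat_actor))

if __name__ == "__main__":
    # Example: starting an analysis of a specific web-facing application
    # where the concern is external malicious actors.
    baseline = baseline_for("web_facing_application", "external_malicious")
    if baseline is not None:
        print("TEF range (events/yr):", baseline["threat_event_frequency"])
        print("RS range:             ", baseline["resistance_strength"])
```

The point of the structure is the key: when a new scenario maps onto an existing (asset category, threat actor) pair, the analyst skips the SME round-trip entirely and starts from the library.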
It's important to keep in mind that both of these items take time and effort up front; I won't lie to you about that. But I promise they will pay off in efficiency going forward, and lay a solid foundation for a sounder, more rigorous risk analysis process within your organization.
RiskLens can help your risk management team develop loss tables and threat libraries.