At RiskLens, we define risk as the probable frequency and probable magnitude of a future loss – in other words, how often losses are likely to happen and how much loss is likely to result. It’s the frequency side of the equation that can be like nailing Jell-O to the wall: how do you pick a number for future ransomware attacks if you’ve never – or hardly ever – been hit with ransomware?
The common reaction: Guess high, and hope that somehow garbage in won’t produce garbage out. We have a better way, following the guidance of FAIR (Factor Analysis of Information Risk): Make “calibrated” estimates.
It’s a technique for arriving not at an exact threat frequency but at a range we can say with 90% confidence contains the true value.
Here’s how we do calibration:
1. Start with the absurd
Remember, the purpose is to come up with a range, so we’re going to start ridiculously wide and narrow it down. Say we are trying to determine how many ransomware attacks hit a given server, when the reporting is not as robust as we need it to be. I make an absurd estimate of between one and 1,000 times a year.
2. Consider what you do know and break down the problem
Our organization hasn’t seen any ransomware. But we do know how many malware attempts we get in a given year: 500.
3. Challenge your assumptions
Time to play a betting game. Since ransomware is a form of malware, our ransomware attacks can’t outnumber our malware attempts. Would I bet, with 90% confidence, that we see no more than 500 ransomware attacks a year? I would take that bet, so 500 becomes my new maximum. (For more on the "equivalent bet" method, see this blog post from the FAIR Institute: No Data? No Problem.)
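Calibration is a skill you can check over time. Here’s a minimal sketch in Python (my own illustration with made-up numbers, not anything from the FAIR Institute post): log your past 90% ranges alongside what actually happened, and confirm that roughly nine out of ten outcomes landed inside.

```python
# Hypothetical calibration log: each entry is a past 90% range and the
# outcome that was eventually observed. All numbers are made up.
past_estimates = [
    # (low, high, actual)
    (1, 15, 4),
    (10, 200, 150),
    (0, 5, 7),     # a miss: the actual outcome fell outside the range
    (2, 50, 20),
    (1, 30, 12),
]

hits = sum(low <= actual <= high for (low, high, actual) in past_estimates)
print(f"Hit rate: {hits / len(past_estimates):.0%} "
      "(well-calibrated 90% ranges should land near 90% over many estimates)")
```

A hit rate well below 90% means your ranges are overconfident and should widen; well above it, and you can afford to tighten them.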
4. Consider what data may exist
I find that, according to the respected Verizon DBIR report, only about 3 percent of all malware is ransomware. Three percent of our 500 annual malware attempts works out to about 15; rounding that up generously, to allow for the DBIR’s mix not matching our environment, I feel comfortable reducing my maximum to 50 ransomware attacks a year on this server. But based on my experience as a trained FAIR analyst, I think I can do better...
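That narrowing step, spelled out as arithmetic (the variable names are mine, purely for illustration):

```python
# The narrowing step, as plain arithmetic. Variable names are illustrative.
annual_malware_attempts = 500    # observed by our own monitoring (step 2)
ransomware_share = 0.03          # roughly 3 percent, per the DBIR figure above

implied_rate = annual_malware_attempts * ransomware_share
print(implied_rate)              # 15.0; rounded up to a conservative maximum of 50
```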
5. Seek out subject matter experts
I check with the company’s Vulnerability Management team. They have only seen one ransomware infection on devices in the last five years. So now I reduce my maximum frequency for ransomware hitting this server to 15 per year.
6. Focus on accuracy rather than precision
To wrap up, I think an accurate range is one to 15 attacks per year. Our approach: it’s better to be accurate than precise, because what’s the point of being precise if you are precisely wrong? In a FAIR analysis we shoot for accuracy with a useful amount of precision.
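What can you do with a calibrated range once you have it? FAIR-style tools commonly ask for a minimum, most likely, and maximum value and fit a (Beta)PERT distribution over them. Here’s a minimal sketch assuming numpy; the most-likely value of 3 is my own placeholder, and this is not the RiskLens platform’s code:

```python
import numpy as np

rng = np.random.default_rng(42)

def pert_samples(low, mode, high, size, lam=4.0):
    """Draw from a (Beta)PERT distribution: a beta rescaled to [low, high]."""
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

# Calibrated range from the walkthrough: 1 to 15 attacks per year.
# The most-likely value of 3 is an assumption for illustration.
freq = pert_samples(low=1, mode=3, high=15, size=10_000)
print(f"median: {np.median(freq):.1f} attacks/year, "
      f"middle 90%: {np.percentile(freq, 5):.1f} to {np.percentile(freq, 95):.1f}")
```

The PERT keeps every sample inside the calibrated bounds while concentrating probability around the most likely value, which is the accuracy-over-precision trade-off in action: the bounds have to be right, the peak only roughly so.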
Next Steps...
Calibrated estimation of a loss event frequency range is just one step in a RiskLens analysis using FAIR techniques. Download a chart of the FAIR model to see the other components. The magnitude data, for instance, is typically collected from incident response or business continuity teams. All the data is fed into the RiskLens platform to ultimately produce a report on annualized loss exposure in dollar terms, in this case for the server at risk from ransomware. For more, see this blog post: What Does RiskLens Reporting Tell Me?
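As a closing illustration of how those pieces fit together, here is a rough Monte Carlo sketch (numpy, with hypothetical magnitude figures; not the RiskLens platform itself): frequency samples say how many loss events occur in a simulated year, magnitude samples say what each one costs, and many simulated years give a distribution of annualized loss.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 10_000  # number of simulated years

def pert_samples(low, mode, high, size, lam=4.0):
    # Same (Beta)PERT helper as in the earlier sketch, repeated for completeness.
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

# Frequency: the calibrated 1-to-15 range (most likely 3, my assumption).
# A Poisson draw turns each rate into a whole number of events that year.
events = rng.poisson(pert_samples(1, 3, 15, years))

# Magnitude: hypothetical lognormal losses with a median around $10,000.
# In practice, these figures come from incident response or business continuity teams.
annual_loss = np.array([rng.lognormal(np.log(10_000), 1.0, n).sum() for n in events])

print(f"Mean annualized loss exposure: ${annual_loss.mean():,.0f}")
print(f"90th-percentile year:          ${np.percentile(annual_loss, 90):,.0f}")
```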