In June 2020, Microsoft announced its biggest Patch Tuesday ever, with fixes for 129 vulnerabilities, followed by the second-biggest Patch Tuesday in July, covering 123 vulnerabilities. In the same timeframe, Adobe, VMware, SAP and other companies disclosed new vulnerabilities.
It seems that vulnerabilities dominate cybersecurity news, and well they might: an Apache Struts server left unpatched for a known vulnerability opened the way for the massive Equifax data breach of 2017. But this focus on vulnerabilities and patches is a distraction from real risk management, argues Jack Jones, creator of the Factor Analysis of Information Risk (FAIR™) model and co-author of the FAIR book, Measuring and Managing Information Risk.
In a 2018 article for Homeland Security Today, Jack put it this way:
“Many practitioners try to communicate cyber risk to their department and agency heads in terms of how many vulnerable systems and missing patches there are…
“In mature risk disciplines, risk is expressed in terms of the probable frequency of loss events and their impact, typically in economic terms like annualized loss exposure, or in terms of the effect on an organization’s mission…
“Vulnerabilities only matter because they, to some degree, increase the potential frequency and/or magnitude of future loss events…
The same caution goes for equating CVSS scores with risk, as FAIR book co-author Jack Freund wrote for the FAIR Institute blog: “CVSS provides a measure of exploitability, or how virulent or contagious a particular vulnerability may be. Despite practice to the contrary, it does not purport to measure risk.”
How the FAIR Standard Handles Vulnerability for Quantitative Cyber Risk Management
While the technical definition of Vulnerability dominates the industry, FAIR proposes a different way to consider these exploitable control gaps: how they impact the susceptibility of the organization to attempted cyber events.
The important distinction between the technical and the FAIR approaches is that, in addition to addressing the weakness introduced by the vulnerability, FAIR also considers how often that vulnerability is likely to be targeted, as well as the other mitigating controls in place. The ultimate goal of FAIR analysis is to go beyond the technical to answer the most important question of all: what does all of that mean for the bottom line?
In the FAIR standard, the basis of risk analysis in the RiskLens platform, Vulnerability is one of the factors on the left side of the model that figure into Loss Event Frequency.
The definition of Vulnerability is the probability that a threat event will become a loss event, expressed as a percentage. Since a FAIR analysis always considers a particular risk scenario, that might come down to “What percentage of the time would a cyber criminal threat actor break through the controls around our PII database?” In the model, Threat Event Frequency × Vulnerability = Loss Event Frequency.
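The relationship above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not the RiskLens implementation): it assumes min / most-likely / max estimates for Threat Event Frequency and Vulnerability and uses simple triangular distributions as a stand-in for the calibrated distributions a FAIR tool would actually fit.

```python
import random

def simulate_lef(tef_min, tef_ml, tef_max,
                 vuln_min, vuln_ml, vuln_max, trials=10_000):
    """Monte Carlo sketch of Loss Event Frequency = TEF x Vulnerability.

    Triangular distributions are a simple stand-in for the calibrated
    (e.g. PERT-style) distributions a FAIR tool would fit to the
    min / most-likely / max estimates gathered from SMEs.
    """
    results = []
    for _ in range(trials):
        tef = random.triangular(tef_min, tef_max, tef_ml)      # threat events per year
        vuln = random.triangular(vuln_min, vuln_max, vuln_ml)  # P(threat event -> loss event)
        results.append(tef * vuln)                             # loss events per year
    return results

# Hypothetical estimates: 10-50 attack attempts per year (most likely 20),
# Vulnerability 25-75% (most likely 50%)
lef = simulate_lef(10, 20, 50, 0.25, 0.50, 0.75)
avg_lef = sum(lef) / len(lef)
```

Each simulated year multiplies a sampled attempt frequency by a sampled probability of success, so the output is a distribution of loss events per year rather than a single point estimate.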
Learn more in these two blog posts from the FAIR Institute:
Gathering Vulnerability Data for FAIR Analysis with RiskLens
FAIR analysts gather that probability estimate from the subject matter experts (SMEs) who best know the asset to be analyzed, the strength of its controls, and the history of attempted and successful attacks. In FAIR analysis, the percentage is always expressed as a range to account for uncertainty. The RiskLens platform guides the interview process and data collection.
But what if the SME is uncertain? RiskLens has it handled – as Cary Wise, Partnership Lead, Professional Services Team explains in this short video:
One of the tips and tricks that I use to help customers come up with a good range when estimating vulnerability is thinking about the threat event.
Say we’re going to do a scenario over a breach of a database.
If you’re talking to SMEs and they really have no idea where they should say the vulnerability would be, oftentimes, I’ve gotten, “You want a range? How about zero to 100%.”
So, having to manage that conversation, one of the tips and tricks that I’ve used is to say, “When someone attempts to breach this database, would they be successful the majority of the time? Or would you be able to block them the majority of the time?”
When you are doing that, you are able to take it from zero to 100% and bring it down to zero to 50% or 50 to 100%. Then you can start talking about most likely, and bring down that range a little bit more.
If, worst case scenario, they have absolutely no idea, it’s a coin flip and they’re hard pressed on that zero to 100%, you can say, “If it really is a coin flip, then we can come up with a range around that, say 25-75%, most likely 50% because it’s a coin flip – and your confidence level will most likely be low.”
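The “coin flip” estimate from the transcript can be encoded as a distribution rather than a single number. The snippet below is a hypothetical sketch: it uses a triangular distribution over the 25-75% range (most likely 50%) as a simple stand-in for the calibrated distributions FAIR tools typically fit to SME ranges.

```python
import random

# Encode the SME's low-confidence "coin flip" estimate as a range:
# minimum 25%, most likely 50%, maximum 75%.
samples = [random.triangular(0.25, 0.75, 0.50) for _ in range(10_000)]

# The wide spread reflects the SME's low confidence; a narrower range
# (say 45-55%) would concentrate the samples tightly around 50%.
mean_vuln = sum(samples) / len(samples)
```

Feeding a range instead of a point estimate into the analysis carries the SME’s uncertainty through to the final loss-exposure numbers, which is the point of the interview technique described above.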