In a Harvard Business Review article, “Cyber Risk Is Growing. Here’s How Companies Can Keep Up,” former Homeland Security Secretary Michael Chertoff makes some on-target points about the need to improve cyber risk management – but also writes skeptically about the “trend toward quantifying financial impacts of cyber risk through models like Value at Risk,” which, “depending on what data drives these models, can present an overly rosy view of risk.”
We would like to take this as a teachable moment for Sec. Chertoff and the readers of HBR. As:
- the creator of Factor Analysis of Information Risk (FAIR™), the standard Value at Risk model for cybersecurity (recognized by the National Institute of Standards and Technology (NIST) Cybersecurity Framework and other authorities) and
- a leading aggregator and modeler of cybersecurity risk data (see our 2023 Cybersecurity Risk Report)
We give a thumbs-up to Chertoff’s statement that “we need to fundamentally change the way we measure performance” of cybersecurity programs, and agree that conventional practices such as maturity assessments and compliance attestations aren’t up to the challenge of today’s threat landscape.
But Chertoff goes on to list “Three Ways We Need to Improve Current Cyber Risk Measures” that we argue are already hugely improved by FAIR quantitative risk analysis, as practiced at RiskLens. The Three Ways:
“First, at the front end, we need to bring greater visibility to organizations’ inherent risk levels — essentially, ‘What are we being asked to defend?’”
RiskLens comparison risk assessments allow our enterprise platform users to establish a baseline of loss exposure in dollar terms, then compare how changes to security posture, data management and other factors shift that exposure. Here’s an example of what that looks like in practice on the RiskLens SaaS platform: a risk treatment analysis comparing identity and access management to encryption in a loss-of-confidentiality risk scenario, based on data from a client and relevant data curated by RiskLens (encryption was the clear winner for risk reduction in dollar terms).
Risk treatment analysis on the RiskLens enterprise platform.
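To make the idea of a dollar-terms treatment comparison concrete, here is a minimal sketch of how two risk treatments might be compared against a baseline by simulation. All names and parameters (event frequencies, loss magnitudes, the assumption that IAM reduces event frequency while encryption reduces loss size) are illustrative only — they are not RiskLens’s model or a client’s data; a real analysis would calibrate these inputs from curated loss-event data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000  # simulated years

def annual_losses(freq, median_loss, sigma=1.0):
    """Illustrative FAIR-style loop: Poisson event counts per year,
    lognormal loss magnitudes per event, summed into an annual total."""
    counts = rng.poisson(freq, N)
    return np.array([rng.lognormal(np.log(median_loss), sigma, n).sum()
                     for n in counts])

# Hypothetical inputs -- not calibrated figures.
baseline   = annual_losses(freq=0.6, median_loss=400_000)
iam        = annual_losses(freq=0.3, median_loss=400_000)  # IAM assumed to halve event frequency
encryption = annual_losses(freq=0.6, median_loss=60_000)   # encryption assumed to shrink loss size

for name, dist in [("Baseline", baseline), ("IAM", iam), ("Encryption", encryption)]:
    print(f"{name:>10}: mean annualized loss ${dist.mean():,.0f}")
```

Under these made-up assumptions, the treatment that attacks loss magnitude (encryption) shows the larger reduction in mean annualized loss — the same kind of dollar-terms ranking the platform analysis above produced.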
“Second, we need much greater transparency, accuracy, and precision around how we perform against likely threats and whether we do so consistently across the attack surface.”
Cyber risk quantitative analysis with RiskLens starts with loss event scenarios that incorporate explicit assumptions about the causal drivers of risk events, including the threat actors, assets at risk, attack methods and susceptibility to successful attack. Our clients routinely use the MITRE ATT&CK tool – strongly recommended by Chertoff in this article – to refine their risk scenarios. MITRE ATT&CK also suggests controls for mitigation efforts specific to attacks; RiskLens users can assess those controls for their cost-effectiveness in risk reduction, lifting risk analysis up from a purely technical exercise to business decision support.
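The scenario decomposition described above — explicit assumptions about threat actor, asset, attack method and susceptibility — can be sketched as a simple record. This is a hypothetical illustration of the structure, not the RiskLens data model; the technique ID shown (T1078, Valid Accounts) is a real MITRE ATT&CK technique, but the scenario values are invented.

```python
from dataclasses import dataclass

@dataclass
class LossEventScenario:
    """Hypothetical record of the explicit assumptions behind one loss event scenario."""
    threat_actor: str
    asset_at_risk: str
    attack_method: str      # e.g. a MITRE ATT&CK technique ID
    susceptibility: float   # estimated probability the attack succeeds, 0..1
    loss_effect: str

scenario = LossEventScenario(
    threat_actor="external cybercriminal",
    asset_at_risk="customer PII database",
    attack_method="T1078 Valid Accounts",  # MITRE ATT&CK technique
    susceptibility=0.25,                   # illustrative estimate
    loss_effect="loss of confidentiality",
)
print(scenario)
```

Making each assumption an explicit, named field is what lets analysts refine a scenario against ATT&CK and then test candidate controls against it.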
“Third, we need to plan for, and measure performance against, low probability high consequence events.”
Here, Chertoff questions the value of models to financially quantify cyber risk because they are 1) “dependent on data inputs” and 2) those inputs “can present an overly rosy view of risk” because they can’t account for tail-risk events that are low probability, high impact and (though he doesn’t mention it) presumably have no or low mitigating controls.
On models and data: FAIR creator and RiskLens co-founder Jack Jones commented, “Even the most superficial qualitative risk measurements are dependent on data. After all, how else do you choose between ‘high’ and ‘medium’ if not on some sort of data? The difference is, in risk quantification we have the opportunity to faithfully reflect uncertainty in our data and the analytic outcomes.”
We collect and carefully curate loss-event data from the best sources (Verizon and Zywave/Advisen, for instance) going back to 2005.
Data inputs in FAIR analysis, as conducted by RiskLens, are run through 50,000 simulated years in our computational engine using Monte Carlo methods. Our outputs produce “rosy” and not-so-rosy predictions alike, in the form of distributions that “reflect uncertainty” in full.
Monte Carlo simulation on the RiskLens platform
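For readers unfamiliar with the mechanics, here is a minimal sketch of what a Monte Carlo loss simulation of this kind looks like. The distributional choices (Poisson event frequency, lognormal loss magnitude) and every parameter value are illustrative assumptions for this example, not the RiskLens engine’s actual inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 50_000  # number of simulated years

# Hypothetical calibrated inputs (illustrative only):
LAMBDA = 0.5                           # expected loss events per year
MAG_MEDIAN, MAG_SIGMA = 250_000, 1.0   # median per-event loss, log-scale spread

def simulate_annual_losses(n_years: int) -> np.ndarray:
    """Return one total-loss figure per simulated year."""
    events_per_year = rng.poisson(LAMBDA, n_years)
    totals = np.zeros(n_years)
    for i, n in enumerate(events_per_year):
        if n:
            totals[i] = rng.lognormal(np.log(MAG_MEDIAN), MAG_SIGMA, n).sum()
    return totals

losses = simulate_annual_losses(N_YEARS)
print(f"Median annual loss: ${np.median(losses):,.0f}")
print(f"95th percentile:    ${np.percentile(losses, 95):,.0f}")
```

The output is not a single number but a full distribution of annual loss, so both the typical year and the ugly tail of the distribution are visible at once — which is what “reflecting uncertainty” means in practice.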
On “low probability/high consequence events”: “Our Assessment Top Risks reports allow risk practitioners to examine the deep tails of scenarios (events expected less than once in 100 years), as well as the probability of exceeding high loss thresholds for individual scenarios and aggregate scenarios,” says RiskLens Senior Data Scientist Benjamin Gowan.
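Those two tail measures — the probability of exceeding a high loss threshold, and the loss level expected to be exceeded roughly once in 100 years — fall straight out of a simulated loss distribution. The sketch below assumes the same illustrative Poisson/lognormal setup and made-up parameters as above; the $5M threshold is likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
# Illustrative simulated annual-loss distribution (assumed parameters).
n_events = rng.poisson(0.5, 50_000)
losses = np.array([rng.lognormal(np.log(250_000), 1.0, n).sum()
                   for n in n_events])

THRESHOLD = 5_000_000  # hypothetical "high consequence" loss threshold
p_exceed = (losses > THRESHOLD).mean()      # annual exceedance probability
one_in_100 = np.percentile(losses, 99)      # loss exceeded ~once in 100 years

print(f"P(annual loss > ${THRESHOLD:,}): {p_exceed:.3%}")
print(f"1-in-100-year loss: ${one_in_100:,.0f}")
```

Reading off a percentile deep in the tail is exactly the “once in more than 100 years” examination the Top Risks reports support, extended here only as a schematic illustration.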
So, yes, RiskLens clients could game out risk scenarios for the end of the world, but we find they are much more focused on the probable risks they face, using the analyses to make daily business decisions based on return on investment for cybersecurity. And we’re certainly aligned with the call to action that ends Sec. Chertoff’s Harvard Business Review article:
“We can turn risk into opportunity: if we can coalesce around mechanisms to measure cybersecurity performance with transparency, accuracy, and precision…There is no such thing as risk elimination, but through better measurement and incentivization, we can not only manage these technology risks, but turn them into opportunities for a more resilient economy.”