As part of our series on the newly proposed cyber risk management regulations for US banks, I wanted to build on the great insights my colleague Isaiah McGowan recently shared in a post, as well as bring to the forefront a concern grounded in past experience.
Call for cyber risk quantification in new proposed US federal banking regulations
As a reminder, the following is a list of proposed enhancements. Regulated entities shall:
In addition, there is another set of more stringent rules being considered for those systems that provide key functionality to the financial services sector. One of those rules would require regulated entities to quantitatively measure their ability to reduce aggregate risk to a minimal level.
The last bit of level-setting information I’d like to recall is the desire for the agencies (i.e., the Federal Reserve, the OCC, and the FDIC) to develop a consistent, repeatable methodology to support the ongoing quantification of cyber risk within covered entities.
My hope is that you can see the recurring quantification theme in the newly proposed enhancements outlined above. I don’t believe this to be an oversight on the part of the regulators, or cherry-picking on my part of those portions of the proposed enhancements that call for quantification. I truly believe that our industry (information risk management) is in the process of maturing. Understanding that the ways of old don’t really provide sufficient information to make sound risk-related decisions is part of that maturation process.
What qualifies as ‘quantification’?
As promising as these new enhancements are, I am wary that we as an industry will settle for a less rigorous form of “quantification”, one that on the surface gives the appearance of reliable, sound justification for our risk decisions, but falls drastically short when approached with a critical eye.
If you’ve been in our industry for any length of time you know exactly what I mean, whether it be multiplying ordinal scales or doing math with colors. There are inherent limitations to these approaches: assessments are more likely to be subjective rather than objective, they are inconsistent from analysis to analysis as well as from analyst to analyst, and they do not allow for effective prioritization.
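To make the ordinal-scale problem concrete, here is a minimal sketch (not drawn from any specific framework; the labels and 1–5 ranks are hypothetical) showing how multiplying ordinal ranks collapses very different scenarios into the same "risk score":

```python
# Illustrative sketch: why multiplying ordinal scale values produces
# misleading "risk scores". The labels and rank values are hypothetical.

# Ordinal labels mapped to ranks 1-5; the ranks are just positions on a
# scale and carry no units, so arithmetic on them has no real meaning.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def ordinal_risk_score(likelihood_label: str, impact_label: str) -> int:
    """Multiply ordinal ranks, as many heat-map methods do."""
    return LIKELIHOOD[likelihood_label] * IMPACT[impact_label]

# Two very different scenarios collapse to the same score:
a = ordinal_risk_score("rare", "severe")                # 1 * 5 = 5
b = ordinal_risk_score("almost_certain", "negligible")  # 5 * 1 = 5
print(a, b)  # both score 5, yet the scenarios demand different responses
```

A rare catastrophic event and a near-certain nuisance both score a 5, so the method offers no defensible basis for prioritizing one over the other.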
Some organizations use risk management frameworks such as the NIST CSF to rate themselves against a set of best practices, but soon realize that scoring does not equate to quantification and does not allow for effective decision-making, as illustrated in this recent article.
It’s my belief that we as an industry need to demand more from the word quantification, and from those who claim to practice it: when a regulated entity claims to be following the newly proposed enhancements around quantification, there should be no doubt that it is using a methodology that:
For all my foreboding, though, there is reason to be optimistic. As part of the newly proposed enhancements, regulators have identified Factor Analysis of Information Risk (FAIR) and Carnegie Mellon’s Goal-Question-Indicator-Metric process as notable quantification methodologies to build on. As a FAIR practitioner myself, I can attest firsthand that the FAIR model encapsulates and delivers on all of the quantification elements outlined above.
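To contrast with the ordinal approach, here is a minimal Monte Carlo sketch in the spirit of FAIR, though it is a simplification rather than the full FAIR ontology. It treats annualized loss exposure as loss event frequency times loss magnitude, each expressed as a calibrated range; every parameter value below is hypothetical:

```python
import random

# Illustrative Monte Carlo sketch in the spirit of FAIR (simplified; not
# the full FAIR taxonomy). Annual loss = number of loss events per year
# x loss magnitude per event, each drawn from a calibrated range rather
# than an ordinal label. All parameter values here are hypothetical.

random.seed(42)  # fixed seed for a repeatable illustration

def simulate_annual_losses(freq_min, freq_max, loss_min, loss_max, trials=10_000):
    """Return a list of simulated annual loss totals, one per trial."""
    results = []
    for _ in range(trials):
        # Number of loss events this year, drawn from the frequency range.
        events = random.randint(freq_min, freq_max)
        # Sum an independently drawn loss magnitude for each event.
        total = sum(random.uniform(loss_min, loss_max) for _ in range(events))
        results.append(total)
    return results

losses = sorted(simulate_annual_losses(freq_min=0, freq_max=4,
                                       loss_min=50_000, loss_max=500_000))
print(f"median annual loss: ${losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:    ${losses[int(len(losses) * 0.95)]:,.0f}")
```

The output is a distribution of dollar-denominated outcomes, so decisions can be framed in terms a business understands ("there is a 5% chance annual losses exceed $X") rather than as a color on a heat map.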
As a call to action to all of my fellow risk analysts, let’s ensure that what gets included as quantification in these new regulations is not the quantification of old, but what I would dare say is the “right kind of quantification”.