How Can We Show Reliable Peer Comparisons of Risk Profiles? Part 1

January 10, 2019  Isaiah McGowan

Not a week went by in 2016 when I didn’t hear cybersecurity or operational risk leaders say something akin to ‘I want to benchmark my risk against my peers’. What rarely accompanies that request is an acknowledgment of the core requirement for reaching this particular nirvana: the benchmark model must be logical and consistently applied.

Let’s 'Fermi-ize' the problem
Whenever we are faced with a seemingly intractable problem, we should break it down into more manageable subcomponents. In the discipline of risk management, peer comparison is one of those seemingly intractable problems, and it boils down to one cornerstone assumption:

If we know our risk score and our peer’s risk score(s), we can make determinations about our risk profile. 

Can we rely on this statement? To get that answer, Enrico Fermi, the renowned Italian physicist, would encourage us to ask: what must be true in order for this to be true? We can ask and answer this question over and over again to break the problem down into its subcomponents, then validate each component before continuing.

To begin making determinations about our own risk relative to our peers, the following must be true:

  • We must know our risk score.
  • We must know our peer’s score(s).
  • The scores must be comparable.

Current models do not meet the burden
Let’s focus on the third point, the lynchpin. Breaking it down further: what makes two risk profiles comparable?

Two profiles are comparable when they were measured using the same model, applied consistently over the same problem space. Commonly leveraged models in cybersecurity and operational risk do not meet this burden; they are rife with issues. If any one of the following problems exists, the results are incomparable:

  • Logic flaws exist in the underlying scoring mechanism, as in NIST 800-30; the model itself is broken and unreliable, so any result it produces is unreliable both as a measurement and as a basis for comparison.
  • Results that are open to interpretation because of subjective inputs cannot be normalized for comparison. Your ‘medium’ and your peer’s ‘medium’ may be (and likely are) different (see the sketch after this list).
  • Some models, such as NIST CSF, are not built for normalization, a necessary component of profile comparison.
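
To make the second problem concrete, here is a minimal, hypothetical sketch. The label-to-range mappings are invented for illustration; the point is only that two organizations can report the same ordinal label while meaning very different things, which is exactly what defeats normalization.

```python
# A minimal, hypothetical sketch of the normalization problem. The label-to-range
# mappings below are invented; in practice each organization defines its own.
OUR_SCALE = {                        # annualized loss ranges, in dollars
    "low": (0, 100_000),
    "medium": (100_000, 1_000_000),
    "high": (1_000_000, 10_000_000),
}
PEER_SCALE = {                       # the peer's notion of the same labels
    "low": (0, 25_000),
    "medium": (25_000, 250_000),
    "high": (250_000, 10_000_000),
}

our_rating, peer_rating = "medium", "medium"
print(our_rating == peer_rating)                         # True: the labels match...
print(OUR_SCALE[our_rating] == PEER_SCALE[peer_rating])  # False: the meanings do not
```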

Your organization’s scores and your peer’s scores would not be reliably measured in the same way. The game is rigged for failure. Any attempt to compare risk profiles scored using the best-known approaches falls flat.

FAIR can overcome these ambiguities
Factor Analysis of Information Risk (FAIR) does not suffer from these problems. That’s part of why we at RiskLens built our software on FAIR. The only way for the cybersecurity and operational risk disciplines to stand a chance of making reliable comparisons between peers is to leverage a model that:

  • Imposes sound logic in the relationship of model components.
  • Limits emotion and subjectivity at each stage of the process.
  • Supports normalized outputs (see the sketch after this list).
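
As a rough illustration of that last property, here is a minimal sketch of what normalized outputs make possible. It is not RiskLens software or the full FAIR ontology; it simply simulates annualized loss exposure from hypothetical frequency and magnitude ranges so that both results land in the same unit, dollars per year, and can sit on the same axis.

```python
# A minimal sketch, not RiskLens's implementation: Monte Carlo estimation of
# annualized loss exposure (ALE) from hypothetical frequency and magnitude ranges.
# Uniform sampling keeps the example short; real FAIR analyses typically use
# calibrated, skewed distributions.
import random

def simulate_ale(lef_range, lm_range, trials=10_000):
    """Estimate ALE from loss event frequency (events/year) and loss magnitude ($/event)."""
    losses = sorted(
        random.uniform(*lef_range) * random.uniform(*lm_range)
        for _ in range(trials)
    )
    return {
        "10th %ile": losses[int(trials * 0.10)],
        "median": losses[trials // 2],
        "90th %ile": losses[int(trials * 0.90)],
    }

# Both results are expressed in the same unit (dollars of annualized loss),
# so they can be compared directly. The input ranges are placeholders.
ours = simulate_ale(lef_range=(0.5, 4.0), lm_range=(50_000, 400_000))
peer = simulate_ale(lef_range=(1.0, 6.0), lm_range=(20_000, 250_000))
print("our ALE :", {k: f"${v:,.0f}" for k, v in ours.items()})
print("peer ALE:", {k: f"${v:,.0f}" for k, v in peer.items()})
```

The numbers are placeholders; the structural point is what matters: one model, quantitative inputs, and a common unit of output are what make two risk profiles comparable in the first place.
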
In future posts I will step through in more detail why the problem continues to elude us, the role of open vs. black-box models, and what’s required to solve the problem. Stay tuned...