Welcome to the CXOWARE blog. We hope you’ll join us for lively and good natured discussion about risk and risk issues!  We’re risk geeks, plain and simple. We’re big advocates of the Factor Analysis of Information Risk (FAIR) framework for quantifying risk.

Risk Rating Litmus Test

By: Jack Jones

One of the significant challenges the risk profession faces is prioritization. Much of what I see in the industry is tools and methods that spit out dozens or even hundreds of “High Risk” or even “Critical” findings from a single evaluation. As a result, one of the following typically happens:

  • Paranoid organizations cripple their operations and/or burn out their people by trying to aggressively remediate those findings, or
  • Non-paranoid organizations schedule remediation efforts for months or even years out.

In the first case, it’s common to see “committed” closure dates being missed and/or repeatedly pushed out. This drives auditors nuts (as it should), and sets the organization up for a big fall if a significant loss event occurs. Unfortunately, in both cases, there may be a handful of issues within the findings that truly are high risk or critical in nature, but because the organization hasn’t differentiated those, they get pushed out with the rest.

Setting aside for a moment the debate over quantitative vs. qualitative assessment, I have a simple “litmus test” I apply to audit or security findings that helps me perform crude prioritization. This test is based on a recognition that remediation efforts can and should be characterized in very practical terms and applied consistently. Consider the following descriptions:

  • Critical Risk findings: All hands on deck. Efforts extend into evenings and weekends. High-value business objectives may be postponed, extra resources brought in, and “costs be damned”.
  • High Risk findings: Remediation efforts begin immediately, bumping existing priorities and stealing existing resources.
  • Medium Risk findings: Remediation efforts scheduled and prioritized amongst other future work to be done.
  • Low Risk findings: Either no remediation or “opportunistic” remediation as a part of other activities.

As a risk professional, if I’m going to label a risk issue “Critical” or “High Risk” and cause the organization to react accordingly, I’d better have a REAL good reason — a reason based on loss exposure (the combination of loss likelihood and impact) vs. “exploitability” or “vulnerability”. Significant loss is either occurring right now, or it’s imminent. And forget about formal analysis for a moment — if my intuition is telling me that remediation for issue X doesn’t need to be started immediately, then I am implicitly characterizing it as Medium.
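The litmus test above can be sketched in code. This is a hypothetical illustration, not FAIR itself: the `loss_exposure` function, the dollar thresholds, and the `imminent` flag are all made-up assumptions standing in for a real organization's calibration — the point is only that the rating is driven by loss exposure (frequency times magnitude) rather than by raw exploitability.

```python
# Hypothetical triage sketch: rate a finding by loss exposure
# (loss event frequency x loss magnitude), not by "exploitability".
# Thresholds and figures are illustrative only, not FAIR-calibrated.

def loss_exposure(annual_frequency, loss_magnitude):
    """Expected annualized loss: events/year x dollars/event."""
    return annual_frequency * loss_magnitude

def rate(exposure, imminent=False):
    """Map an exposure estimate (dollars/year) to a response tier.

    The cutoffs below are invented for illustration; a real program
    would tie them to the organization's own risk appetite.
    """
    if imminent and exposure >= 1_000_000:
        return "Critical"   # all hands on deck, costs be damned
    if exposure >= 1_000_000:
        return "High"       # start now, bump existing priorities
    if exposure >= 50_000:
        return "Medium"     # schedule among other planned work
    return "Low"            # opportunistic remediation, if any

# A scanner's "critical" finding: trivially exploitable, but rarely
# attacked and cheap to recover from -- it rates Low by this test.
print(rate(loss_exposure(0.1, 20_000)))
# Significant loss occurring right now, large per-event magnitude.
print(rate(loss_exposure(12, 500_000), imminent=True))
```

Note that the interesting output is how often a tool's “critical” finding drops to Medium or Low once frequency and magnitude are considered at all, which is exactly the effect described in the anecdote that follows.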

Some time ago I had a conversation with a friend who was faced with hundreds of “critical” and “high risk” findings from a single security tool. We spent about 30 minutes categorizing the findings by common traits (e.g., exploitability, frequency of attack, and impact), and then another 30 minutes evaluating which type of response seemed most appropriate. At the end of the conversation there were zero Critical and just a couple of High Risk findings. Consider what this means to an organization from a resource utilization and remediation focus perspective. Also consider what it means in terms of the improved accuracy with which the organization’s risk posture is communicated to management and stakeholders. Finally, consider what it means regarding the accuracy of industry tools and common methods…

Keep in mind that even though this approach may not require detailed quantitative analysis, it does still require an ability to think numerically in terms of frequency and impact, to apply critical thinking skills, and to recognize the difference between what’s possible and what’s probable.

This post reflects my own opinions and positions, and does not necessarily reflect the opinions or position of my employer.

About The Author

Jack Jones
Jack Jones is the EVP of R&D and a Founder of RiskLens. He has worked in technology for over 30 years, the past 28 years in information security and risk management. He has a decade of experience as a Chief Information Security Officer (CISO) with three different companies, including a Fortune 100 financial services company. His work there was recognized in 2006 when he received the Information Systems Security Association (ISSA) Excellence in the Field of Security Practices award. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012, he was honored with the CSO Compass Award for leadership in risk management. Jones, who lives in Spokane, Washington, has served on the ISACA CRISC Certification Committee and RiskIT Task Force, as well as the ISC2 Ethics Committee. He is the author and creator of the Factor Analysis of Information Risk (FAIR) framework. He writes about that system in his book Measuring and Managing Information Risk: A FAIR Approach, which was inducted into the Cyber Security Canon in 2016, as a must-read in the profession.