Taking the Guesswork Out of Exception Mitigation for IT Audit

January 22, 2019  Taylor Maze

There are few things in life that are less fun than exception mitigation due to audit findings. In fact, I have compiled a list: root canals, a weekend trip with your mother-in-law (here’s hoping she doesn’t read my blogs), and 4:30 p.m. Friday meetings. The one thing that tops even that list, though, is having multiple required exception mitigations. As if one were not bad enough, how do you prioritize the auditor’s five different “high risk” audit findings? And how do you, or they for that matter, know that these findings are high risk?

The answer is FAIR (to all parties involved). Beyond excellent pun material, FAIR stands for Factor Analysis of Information Risk, a method for quantitatively analyzing risk so that it can be expressed in the most widely understood language there is: economic value, in dollars and cents.

If you’re in IT Risk and new to FAIR, here is a quick crash course:

A concern is only a risk if there is a loss event associated with it. Only events have both frequencies of occurrence and magnitudes of impact. Ergo, “the cloud” is not a risk. Cybercriminals exfiltrating customer PII from cloud hosted applications is a risk.

In order to quantify that risk and express it in dollars and cents, you gather data and estimates for the frequency of the event (how often it may happen) and its magnitude (what it costs if it does), run them through the RiskLens Cyber Risk Quantification platform (or do some mad statistical analysis in your head, if you’re a genius), and voila: you have your loss exposure. Easy as pie.
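To make the frequency-times-magnitude idea concrete, here is a minimal Monte Carlo sketch of how a loss exposure range can be computed from calibrated estimates. This is an illustration only, not the RiskLens platform: the triangular distributions are a rough stand-in for the PERT-style estimates FAIR analysts typically use, and all the input numbers are hypothetical.

```python
import random
import statistics

def simulate_ale(freq_low, freq_likely, freq_high,
                 mag_low, mag_likely, mag_high,
                 trials=10_000, seed=42):
    """Monte Carlo sketch of annualized loss exposure (ALE).

    Loss event frequency and loss magnitude are each modeled with a
    triangular distribution (min / most likely / max); each trial's
    loss is frequency * magnitude. Returns a mean and a 10th-90th
    percentile range rather than a single point value.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        freq = rng.triangular(freq_low, freq_high, freq_likely)
        mag = rng.triangular(mag_low, mag_high, mag_likely)
        samples.append(freq * mag)
    samples.sort()
    return {
        "mean": statistics.mean(samples),
        "p10": samples[int(trials * 0.10)],
        "p90": samples[int(trials * 0.90)],
    }

# Hypothetical finding: default passwords on privileged service accounts.
# Estimates: 0.1-2 loss events per year, $50K-$1.5M per event.
result = simulate_ale(0.1, 0.5, 2.0, 50_000, 250_000, 1_500_000)
print(f"ALE mean ≈ ${result['mean']:,.0f} "
      f"(range ${result['p10']:,.0f} to ${result['p90']:,.0f})")
```

The output is a range of annual loss values, which is exactly what makes two very different findings comparable on the same dollar scale.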

So how does that apply to audit findings? Utilizing the FAIR model and RiskLens Cyber Risk Quantification, you can determine how much risk is associated with each of the audit findings.

In order to do so, first you need to determine the event that is sparking the concern and high-risk designation. For example, if the finding is elevated-privilege service accounts with default passwords in place on an application housing sensitive data, then the event of concern is likely someone hacking into one of those service accounts and breaching the sensitive information in the application.

That sounds like a loss event, which means you can determine the annualized loss exposure (ALE). This value (or range of values, rather) takes into consideration both the frequency and magnitude of the event and expresses the risk in a way that can be compared apples to apples with other risks.

By doing this, you can easily determine which finding carries the most risk and begin mitigation there. Further, if you’re feeling squirrely, you can use these values to challenge the risk levels the auditors assigned to the findings in the first place. Check out this blog post Case Study: ‘High Risk’ Audit Finding Doesn't Hold Up to FAIR Analysis to see how one company did just that.
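Once each finding has an ALE, prioritization becomes a simple sort on a common dollar scale. A sketch, with entirely hypothetical findings and ALE figures:

```python
# Hypothetical mean ALE estimates (dollars per year) for five
# "high risk" audit findings from the same report.
findings = {
    "Default passwords on privileged service accounts": 480_000,
    "Shared admin credentials": 340_000,
    "Missing encryption on backup media": 210_000,
    "Unpatched internal web server": 160_000,
    "Stale quarterly access reviews": 95_000,
}

# Rank findings by loss exposure, highest first.
ranked = sorted(findings.items(), key=lambda kv: kv[1], reverse=True)
for name, ale in ranked:
    print(f"${ale:>9,}  {name}")
```

Note that all five findings carried the same “high risk” label from the auditor; expressed in dollars, they are anything but equal.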

You can also take the analysis a step further and run iterative or future-state analyses to determine the overall risk reduction from each mitigating control or procedure under consideration. By taking this additional step, you can calculate the ROI of the different potential mitigating activities and determine which is the best option for the organization as a whole.
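The ROI comparison reduces to simple arithmetic once you have a current-state ALE, a residual (future-state) ALE for each option, and the annual cost of the control. A sketch with hypothetical numbers and control names:

```python
def mitigation_roi(current_ale, residual_ale, annual_cost):
    """ROI of a mitigation: risk reduction net of cost, per dollar spent."""
    reduction = current_ale - residual_ale
    return (reduction - annual_cost) / annual_cost

# Hypothetical options for one finding:
# (current ALE, residual ALE after the control, annual cost of the control)
options = {
    "Rotate and vault service-account credentials": (480_000, 60_000, 40_000),
    "Deploy a privileged access management tool":   (480_000, 30_000, 150_000),
}

for name, (cur, res, cost) in options.items():
    print(f"{name}: ROI = {mitigation_roi(cur, res, cost):.1f}x")
# The cheaper credential-rotation option returns 9.5x;
# the PAM tool reduces more risk but returns only 2.0x.
```

The "best" option is not always the highest ROI: if the residual risk after the cheap control still exceeds the organization's appetite, the costlier control may be worth it, and the future-state analysis makes that trade-off explicit.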

Related:

Two Milestones for the FAIR Institute: 3,000+ Members. 30% Adoption Rate

Making Your IT Audit Job More Than Compliance

Warning: Potential (Happy) Side Effects to Implementing FAIR