How Do You Measure the Value of Cybersecurity Controls? (Part 2)

How do you know that a NIST control is ‘worth it’? Should you take the High baseline set of controls, or is the Moderate baseline good enough?

In the previous post, we reviewed NIST 800-53 r5 control AC-10 (Concurrent Session Control), which aims to reduce the likelihood that threat actors can maliciously re-use accounts while those accounts are also in use for normal work. In this post, we will use RiskLens Cyber Risk Quantification to perform two analyses, compare their results, and draw some conclusions. Let’s jump back in using RiskLens and NIST 800-53 Revision 5 (draft).

We can quickly scope the threats, assets and loss events because the control gives us context

Following the supplemental guidance for the control, we know we are focused on system accounts used to access critical systems; in our analysis, those are web applications. In Figure 1 you can see the remaining pieces of the scope in RiskLens: financially motivated hackers conducting web app attacks, and privileged insiders misusing system account credentials.

Figure 1

Gathering data to test our hypothesis is more straightforward than you think

Our goal is to determine whether the implementation cost of AC-10 is worth it. To support the decision, we need to understand how much risk we carry with a Moderate baseline (from NIST 800-53) without AC-10, and compare it to a High baseline with AC-10. That requires two analyses with the same scope, and we can work with the same people in the organization to gather all the data. Here are some common questions we could ask along the way:

  • How often are system account credentials used to access web applications from more than one source at the same time?
  • How often is that use tied back to insiders versus credentials compromised in web app attacks?
  • How many unique records are stored, processed or transacted by the web application?
  • Which lines of business would be affected by a breach?
  • What resources would be put into motion for crisis management?
  • Would we expect regulatory scrutiny, fines or penalties, or class action suits?

Of these questions, only the first one should change between the two analyses comparing the Moderate and High baselines.

Conducting the what-if analysis and comparing the results shows actionable data

Once we’ve gathered the data and input it into RiskLens, we can see our current state (Figure 2): approximately $198M of annualized loss exposure (ALE) at the 95th percentile. Remember, the only input (read: assumption) that changes from current to future state is Threat Event Frequency, based on the first two questions above. In the future state (Figure 3) we see an ALE of approximately $127M.

Why are we comparing the 95th percentile rather than the average? In truth, we care more about the full range; however, given such a wide range for a systemically applied control, the 95th percentile is more stable and captures the second mode near $125M. We’ll use it as a reasonable comparison point between the two analyses.

It stands to reason that we could reduce our ALE by $71M at the 95th percentile by implementing AC-10.
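To make the comparison concrete, here is a minimal sketch of the kind of Monte Carlo math a FAIR-based analysis performs under the hood. Every parameter below is an invented placeholder, not the data behind Figures 2 and 3, and this is not the RiskLens API (real analyses use calibrated ranges and richer distributions). The point is only that changing the Threat Event Frequency input shifts the ALE distribution, which we then compare at the 95th percentile.

```python
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 20_000  # number of simulated years

def ale_samples(tef_min, tef_max, loss_median, loss_sigma):
    """FAIR-style sketch: annual loss = number of loss events in a year
    times a per-event loss magnitude. Frequency and magnitude are both
    uncertain, so each simulated year draws fresh values."""
    lam = rng.uniform(tef_min, tef_max, TRIALS)   # threat event frequency
    events = rng.poisson(lam)                     # loss events this year
    mu = np.log(loss_median)                      # lognormal location
    return np.array([rng.lognormal(mu, loss_sigma, n).sum() for n in events])

# Placeholder inputs -- illustrative only. Only the frequency range
# changes between the two analyses, mirroring the post's setup.
current = ale_samples(tef_min=2, tef_max=6, loss_median=10e6, loss_sigma=1.0)
future  = ale_samples(tef_min=1, tef_max=3, loss_median=10e6, loss_sigma=1.0)

p95_current = np.percentile(current, 95)
p95_future  = np.percentile(future, 95)
print(f"ALE @ P95 (current): ${p95_current / 1e6:.0f}M")
print(f"ALE @ P95 (future):  ${p95_future / 1e6:.0f}M")
print(f"Reduction @ P95:     ${(p95_current - p95_future) / 1e6:.0f}M")
```

Halving the frequency range drops the 95th-percentile ALE substantially, which is exactly the comparison the two RiskLens analyses make with real, calibrated inputs.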

Figure 2


Figure 3

What’s the ROI on AC-10?

To dive into ROI we need to understand two key cost components: capital expenses (CapEx) and operating expenses (OpEx). With regard to CapEx, many organizations looking to do this sort of analysis will already have two things in place: a Privileged Access Management (PAM) solution and a team to run it. If PAM isn’t already applied to the systems under analysis, we can add costs to the ROI calculation commensurate with the asset owners putting some skin in the game. The second component, OpEx, is a matter of understanding what overhead AC-10 adds to the organization; the PAM team, the SOC, and Infrastructure are all likely to be affected in some way.

For our purposes, let’s call it $3M, which accounts for two things:

  1. We have a PAM program in place which reduces our cost of implementing AC-10
  2. Implementing AC-10 in our environment won’t be free

Finally, when thinking through cost within the ROI calculation, it’s important to limit how far into the future you forecast and compare. In this case, it’s reasonable to look at one year or three years. Beyond three years we are likely to see significant and unknowable changes to PAM, and it becomes increasingly difficult to forecast these types of threat events that far out.
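Putting the post’s round numbers together: a $71M reduction in ALE at the 95th percentile against the assumed $3M cost of AC-10. Treating the $3M as a one-time cost over the horizon is a simplifying assumption made here for illustration; a real analysis would split CapEx from recurring OpEx.

```python
ALE_REDUCTION_PER_YEAR = 71_000_000  # $ reduction at P95 (current vs. future)
CONTROL_COST = 3_000_000             # $ assumed AC-10 cost (one-time, simplified)

def roi(years: int) -> float:
    """Classic ROI: (benefit - cost) / cost over the forecast horizon."""
    benefit = ALE_REDUCTION_PER_YEAR * years
    return (benefit - CONTROL_COST) / CONTROL_COST

for horizon in (1, 3):
    print(f"{horizon}-year ROI: {roi(horizon):.1f}x")
```

Even over a single year the control pays for itself many times over under these assumptions, which is why the horizon choice matters less here than it would for a control with thinner margins.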

Analysts should look at the data in a few more ways before making recommendations. Figure 4 shows the ROI report; notice in the table the risk reduction at each data point in the range for the High baseline. Figure 5 shows a bar-chart comparison of the two analyses, and Figure 6 shows the overlapping density curves.


Figure 4


Figure 5


Figure 6

No matter how you slice and dice the reports, there’s a good case for spending the $3M and going with the High baseline, which includes AC-10. An analyst could speak truth to power on the value of a High baseline in these instances. Every GRC (nay, IRM) program manager should covet these sorts of results and bake them into a metrics program that tells the story of the value of controls implemented (or not) in the environment.

Can you really analyze an IT control by itself? What about the other layers?

It’s worth mentioning some thoughts on confounding variables due to compensating controls. Security is layered: it’s an onion, or a stack of Swiss cheese, or an Oreo cookie. Regardless of the metaphor, you cannot get away from the intertwined nature of IT controls.

If you’re going to evaluate a single control or a small subset of controls within a framework, such as NIST 800-53, you should account for noise contributed by confounding controls.

You can usually do this in one of two ways: work at a level in the environment and/or the FAIR model such that confounding controls are taken out of scope, or ensure that the noise contributed by other controls affects all analyses equally. In our example, we are doing the latter: compensating controls that reduce risk exist in both analyses. For instance, egress filtering or DLP controls that limit a malicious actor’s ability to get data out of the environment affect both scenarios equally.

These analyses are relatively straightforward for the Professional Services team at RiskLens. The RiskLens platform leads the industry in its capability to perform quantified analyses of the IT controls landscape; and we will continue to push the boundaries of IT controls evaluation using FAIR.

Related:

How Do You Measure the Value of Cybersecurity Controls? (Part 1)

What Metrics Matter in Risk Management? (Video)

Subscribe to our blog to learn more about how you can use RiskLens, powered by FAIR, to prioritize your IT controls landscape.