Automate Cyber Risk Analysis with RiskLens Data Helpers

April 21, 2020  Rebecca Merritt

Ahh, data helpers! If you are a RiskLens customer, you know they have been all the chatter this past year. Data helpers are going to help you do this and automate that! And that’s right – they honestly are a great improvement to the RiskLens platform – but you want to ensure you are building your data helpers out for success.

With data helpers, analysts can store data for repeated use in answering risk analysis workshop questions. Similar to loss tables, data helpers cannot just be thrown together in an arbitrary way. There needs to be context and thought behind them in order for them to be useful.

Data helpers can be created for any question in a RiskLens workshop and can hold multiple answers tied to different circumstances. For example, an analyst can input estimates for incident response effort depending on the criticality of the event. Then, when analyzing a given scenario, they simply determine which criticality applies and the underlying values are automatically used in the calculation.
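
If it helps to picture that behavior, here is a minimal sketch of the idea in plain Python (this is not the RiskLens interface or API – the criticality labels and person-hour estimates are made up for illustration):

```python
# Conceptual sketch only: a data helper acts like a lookup keyed by circumstance.
# The criticality labels and person-hour estimates below are invented for illustration.
incident_response_hours = {
    "low":      (2, 5),
    "moderate": (10, 40),
    "critical": (80, 200),
}

# In a workshop, the analyst only chooses the circumstance...
criticality = "moderate"

# ...and the stored estimates flow into the calculation automatically.
low_hours, high_hours = incident_response_hours[criticality]
print(f"Incident response effort: {low_hours}-{high_hours} person-hours")
```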




Now, this does not necessarily mean you need to go out and gather mountains of data – you most likely already have this data sitting on your desktop somewhere. You just need to determine how to put it into RiskLens so it makes sense and can be reusable, every analyst’s favorite word!

I’ve seen customers implement data helpers for all sorts of items that are useful for their organization and industry, but there are a few that are key regardless of what industry you’re in.

Let me walk you through the three most useful and common data helpers I’ve seen as a RiskLens consultant.

1. Primary Response Costs / Hours (Guided On and Native Workshops)

Where to start

Start with the effect you use most – is your organization focused on protecting data or keeping your systems online? Typically, you’ll see more work in one area than the other, so start there and expand as you go. It’s best to build out one effect, try it out, and adjust before moving on to the next.

How to build them

The first step is to pick which effect you’d like to focus on – let’s use confidentiality here. If we are worried about a breach of data, there are different types of breaches we can consider.

There are the smaller breaches – maybe under 100 records – that happen often. If one of these occurs, maybe 1 – 2 people get involved to investigate for 2 – 5 hours, and their average loaded wage is $75/hour.

  • Primary Response Costs: $150 – $750
    • 1 person * 2 hours * $75 = $150
    • 2 people * 5 hours * $75 = $750
  • Primary Response Hours: just the time above – 2 to 10 person-hours (1 person * 2 hours up to 2 people * 5 hours)

Continue on this path for medium, large, and critical breaches. You may already have this data but, if not, talk to the business once and you can reuse it for literally every analysis!
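
To make the arithmetic concrete, here is a rough Python sketch of how you might tabulate those ranges before entering them as a data helper. Only the small-breach row comes from the example above; the medium, large, and critical figures are placeholders, not guidance:

```python
# Rough sketch: derive Primary Response Cost ranges from people, hours, and a loaded wage.
# Only the "small" row comes from the example above; the other rows are placeholders.
LOADED_WAGE = 75  # dollars per hour

breach_tiers = {
    # tier:     (min_people, max_people, min_hours, max_hours)
    "small":    (1, 2, 2, 5),
    "medium":   (2, 4, 5, 20),     # placeholder estimates
    "large":    (4, 8, 20, 80),    # placeholder estimates
    "critical": (8, 15, 80, 200),  # placeholder estimates
}

for tier, (min_p, max_p, min_h, max_h) in breach_tiers.items():
    min_cost = min_p * min_h * LOADED_WAGE
    max_cost = max_p * max_h * LOADED_WAGE
    print(f"{tier:>8}: {min_p * min_h}-{max_p * max_h} person-hours, "
          f"${min_cost:,}-${max_cost:,} primary response cost")
```

Once you have a table like this, each row becomes one answer in the data helper.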

2. Secondary Loss Event Frequency (SLEF) (Guided On and Native Workshops)

Where to start

Look at prior analyses and understand how you typically use SLEF. This one can be relatively easy to break down.

How to build them

You need to approach it in a manner that works best for your organization. I’ve seen the following breakdown quite a bit:

  • 0 – 0% - No fallout forecasted for this event
  • 0 – 25% - Fallout is sometimes forecasted for this event
  • 100 – 100% - Fallout is always forecasted for this event

It’s up to you and your analyst team to determine how to break this out, but using ranges can allow you to account for various events.
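
If you like seeing that breakdown written out, here is a tiny Python sketch of the same idea – a lookup of minimum and maximum percentages per category. The labels simply mirror the list above, and nothing here is RiskLens-specific:

```python
# Sketch: SLEF categories mapped to min/max percentages, mirroring the breakdown above.
slef_categories = {
    "no fallout forecasted":        (0.00, 0.00),
    "fallout sometimes forecasted": (0.00, 0.25),
    "fallout always forecasted":    (1.00, 1.00),
}

# The analyst picks the category; the stored range feeds the analysis.
choice = "fallout sometimes forecasted"
low, high = slef_categories[choice]
print(f"SLEF range: {low:.0%} - {high:.0%}")
```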

3. Vulnerability (Native Workshop)

Where to start

Gain a high-level understanding of your IT landscape. Does your organization use any classification for assets, such as a tier system or a critical/high/medium/low rating? This may be a good way to break out your vulnerability data.

How to build them

This again varies based on your tier system. You will want to assess your key controls and determine what range seems appropriate based on the tier of your asset. This will help drive analysis work as you scope in various assets. Special note here – your organization may have goals for the security standards your assets should meet, but we all know that assets often fall short of those standards. One great thing about data helpers is that you can always override them. If you feel the data helper does not give you enough refinement, you can override it and update the values appropriately.
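
As a rough sketch of that tier-based approach (the tiers and percentages below are invented, and the override is shown as a simple function argument rather than anything platform-specific):

```python
# Sketch: vulnerability ranges keyed by asset tier; the tiers and numbers are invented.
vulnerability_by_tier = {
    "critical": (0.02, 0.10),  # tightest controls expected, lowest vulnerability
    "high":     (0.05, 0.20),
    "medium":   (0.15, 0.40),
    "low":      (0.30, 0.70),  # weakest controls expected, highest vulnerability
}

def vulnerability_range(tier, override=None):
    """Use the stored range unless the analyst overrides it for a specific analysis."""
    return override if override is not None else vulnerability_by_tier[tier]

print(vulnerability_range("high"))                         # helper's stored range
print(vulnerability_range("high", override=(0.10, 0.50)))  # override for an asset that falls short
```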

The beauty of data helpers is that as you see changes in the organization – maybe longer (or shorter) response times – you can update the data helper once and it will push to all of the analyses subscribed to it, helping you build out a repeatable, iterative process. How exciting!