
RiskLens Picks Up Where Ponemon Leaves Off

by Isaiah McGowan on May 11, 2016 1:17:26 PM
It should have caused blood to boil and budgets to rise. Instead, I was marginalized. I lost a measure of trust with my executive team.
That’s what happened the last time I used Ponemon’s record cost number to report on risk associated with a data breach. I shared that story previously; now let me show you how I should have done it.
 
The common error when using Ponemon
 
The biggest mistake I made was using Ponemon's number where it didn’t make sense. What do I mean? I ignored the unwritten advice of the Ponemon report:
Don’t use the record cost number to model the cost of a breach involving more than 100,000 records.
 
Abusing the Ponemon number was my worst mistake, but I made others, such as using a discrete number instead of a distribution. The end result of my work grossly overstated the cost associated with a breach at my organization. How could I have done it differently? How could I have quantified risk in a way that resonated with the Chief Credit Officer from the previous story?
 
Today, I would use FAIR
 
I should have used Factor Analysis of Information Risk (FAIR). In fact, the interaction I described last week was part of why I stumbled onto FAIR. Let’s rework the preparation leading up to my failed presentation, and then we’ll compare the results.
 
Today, I would start with RiskLens Cyber Risk Quantification (CRQ), powered by FAIR. I would create a simple analysis with a scope of cyber criminals attacking the key SQL database resulting in a breach of confidentiality.
 
Figure 1 - Assessment scope
 
Look at the results
After answering 19 questions, I would get a calculated result for Aggregate Loss Exposure that looks like this:
 
Figure 2 - Analysis loss results
 
The results in figure 2 are annualized. That means the application is factoring in the frequency of attacks and the condition of controls protecting the database. While these factors are critical to good risk analysis, we cannot include them when comparing our results to Ponemon.
We need to focus on the Single Loss Exposure (SLE) results.
When looking only at the cost of a data breach, it is important to weed out two things:
  1. The effect of preventative controls.
  2. The frequency of attacks.
Figure 3 shows a box-and-whisker plot of the Single Loss Exposure for all of the simulated attacks that resulted in a data breach. Figure 4 shows the summary data for the graph.
 
Figure 3 - FAIR results (SLE box-and-whisker plot)
Figure 4 - SLE summary
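To make the ALE/SLE distinction concrete, here is a minimal Monte Carlo sketch in Python. It is not the RiskLens engine; the attack frequency, control-failure probability, and lognormal loss parameters are all illustrative assumptions, with the loss magnitude tuned so the average lands near the figure 4 result.

```python
import random

random.seed(7)  # reproducible illustration

TRIALS = 10_000  # simulated years

def simulate_year():
    """One simulated year: draw an attack count, then a loss for each
    attack that defeats the controls. All parameters are illustrative
    assumptions, not RiskLens model values."""
    attacks = random.randint(0, 4)    # assumed attacks per year
    p_control_failure = 0.10          # assumed chance controls fail
    losses = []
    for _ in range(attacks):
        if random.random() < p_control_failure:
            # Assumed lognormal loss magnitude, centered near the
            # ~$17.9M average shown in figure 4
            losses.append(random.lognormvariate(16.7, 0.2))
    return losses

per_event_losses = []  # Single Loss Exposure: one value per breach event
annual_totals = []     # Aggregate Loss Exposure: total loss per year

for _ in range(TRIALS):
    year_losses = simulate_year()
    per_event_losses.extend(year_losses)
    annual_totals.append(sum(year_losses))

# ALE averages over all simulated years, including breach-free ones;
# SLE looks only at the breaches that actually occurred.
avg_ale = sum(annual_totals) / len(annual_totals)
avg_sle = sum(per_event_losses) / len(per_event_losses)
print(f"Average annualized loss (ALE): ${avg_ale:,.0f}")
print(f"Average single loss (SLE):     ${avg_sle:,.0f}")
```

The point of the sketch: ALE is diluted by years in which the controls hold or no attack lands, while SLE conditions on a breach actually happening, which is the only apples-to-apples basis for comparing against a per-breach Ponemon figure.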

 
Do a Ponemon-based calculation

Before we start comparing the values from figures 3 and 4, let’s take a quick look at what the original calculation would look like. We’ll use the 2015 Ponemon number. I’m using the 2015 number because the losses we calculated using RiskLens are based on costs our customers see today. Table 1 has the results.
Remember, I got in hot water claiming the risk could be calculated by multiplying the Ponemon number by the record count in our database.

Table 1 - Ponemon-based SLE calculation
2015 Ponemon cost per record breached: $217
Number of records in the database: 4,000,000
SLE using Ponemon: $868,000,000
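The arithmetic behind table 1 is a single multiplication. A two-line sketch makes the naive model explicit:

```python
# The naive calculation from table 1: a flat per-record cost
# multiplied by every record in the database.
ponemon_cost_per_record = 217      # 2015 Ponemon per-record figure
records_in_database = 4_000_000

sle_ponemon = ponemon_cost_per_record * records_in_database
print(f"Ponemon-style SLE: ${sle_ponemon:,}")  # $868,000,000
```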
 
Now compare the results
 
Both sets of data now have Single Loss Exposure results we can compare. Let’s focus on the values from figure 4 and table 1. Table 2 has the side-by-side comparison:
 
Table 2 - Side-by-side SLE comparison
RiskLens Average SLE: $17,900,000
Ponemon SLE: $868,000,000
 
The results are worlds apart. What happened? The unwritten advice from the Ponemon report comes into play: do not use the per-record cost when more than 100,000 records are at stake. This matters most with what Ponemon considers a ‘Mega Breach’ - the sort of breach, like Target or Home Depot, that makes the news. Those breaches aren’t included in Ponemon’s research. The big difference between the two calculations is scale.
RiskLens accounts for economies of scale in the soft and hard costs of loss events. Ponemon doesn’t. That makes the RiskLens approach valuable regardless of how many records are at stake.
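To see why a flat rate diverges so sharply from a scale-aware model, here is an illustrative sketch. The power-law curve and its exponent are my own assumptions, picked so the 4,000,000-record result lands near the $17.9 million average from figure 4; neither RiskLens nor Ponemon publishes this formula.

```python
# Illustrative only: a flat per-record cost versus an assumed
# sublinear (power-law) cost curve reflecting economies of scale.

FLAT_RATE = 217.0        # 2015 Ponemon per-record cost
BASE_RECORDS = 10_000    # size where the two models are calibrated to agree
SCALE_EXPONENT = 0.35    # assumed; picked so 4M records lands near $17.9M

def flat_cost(records: int) -> float:
    """Naive model: every record costs the same, no matter the breach size."""
    return FLAT_RATE * records

def scaled_cost(records: int) -> float:
    """Assumed model: marginal cost per record falls as breaches grow."""
    return flat_cost(BASE_RECORDS) * (records / BASE_RECORDS) ** SCALE_EXPONENT

for n in (10_000, 100_000, 1_000_000, 4_000_000):
    print(f"{n:>9,} records | flat: ${flat_cost(n):>13,.0f}"
          f" | scaled: ${scaled_cost(n):>12,.0f}")
```

Within the Ponemon report's 10,000-to-100,000-record range the two curves stay close; by 4,000,000 records the flat rate produces the $868 million figure while the sublinear curve stays in the tens of millions.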
Herein lies my big mistake: I did not account for the scale of my breach the first time I used Ponemon. Had I used the tools available to me with RiskLens, I would have accounted for economies of scale in my losses, among other important things. I could confidently have shown my audience a distribution of losses ranging from $11.9 million to $24.5 million, with an average of $17.9 million. It would have resonated with everyone without inciting fear. I would have opened the floodgates of conversation instead of inviting aggravation from a senior executive.
 
A final word on using Ponemon
 
Within these two posts, I hope you find some nuggets of wisdom to help you better use Ponemon’s numbers. The Ponemon report is accurate when modeling breaches of 10,000 to 100,000 records. However, it is not safe to assume that their reported per-record cost scales to larger breaches.
When used properly, the Ponemon report is useful. It can get you in the ballpark of understanding your risk, but it’s not enough.
You need a richer expression of risk than what the traditional Ponemon-based calculation can get you. Here are three things you need that Ponemon alone cannot provide:
  1. A distribution of loss that accounts for frequency of events.
  2. Results that express the value of controls protecting your data.
  3. Cost calculations that are appropriate for any size breach.
Where can I learn more?
 
For more information, I recommend two good reads and one great conversation. Finally, reach out to our team to schedule a demo. We live and breathe risk quantification and would like to help your organization do good analyses like the ones you’ve seen here.
 

Isaiah McGowan is a Senior Risk Consultant on the Customer Success team.
