
Case Study: Data Walking Out the Door. Data Masking Worth It?

by Cody Whelan on Mar 17, 2017 3:52:23 PM

The CISO knew he had a data leak, but he didn’t know how big. He suspected data masking was the solution, but he couldn’t make a business case for the investment. Those were the problems RiskLens Risk Consultant Cody Whelan and team set out to solve for this client. (No company names here; we respect our clients’ privacy.)

Read Cody’s notes to pick up the story:

Like many of the other customers we work with, this team had limited experience analyzing risk scenarios. They were thinking of risk basically as a scary event, and part of our job when we’re onsite is to draw the information out so we can quantify it.

As part of the scoping process, we first identified what assets were of most concern from a data leak perspective—in this case, data repositories and SharePoint holding personally identifiable information (PII) and contractual information from clients.  

Then we looked to identify the threat community. Their main concern going in was malicious external actors exfiltrating information.

Yet when we asked a few key questions, we quickly came to understand that the most likely concern was insiders, either accidentally sending out emails that contained PII or stealing contractual information, particularly as they left the company for other jobs. But the CISO had only a few confirmed cases of data leakage, so there wasn’t a lot of hard evidence to project from.

We also learned through our data gathering that if information did get sent to the wrong person, there were no real procedures in place to notify information security, and no DLP (data loss prevention) solution to catch it.

Next, we mapped their scenario to the FAIR ontology (see the chart below), using the RiskLens application. Although we were working with limited data, we could still forecast frequency and magnitude based on the components you see in the chart. (Get an explanation of the FAIR model and the FAIR ontology.)

[Figure: FAIR ontology]
We also put an estimate on their Secondary Loss Event Frequency (SLEF): in other words, how often a loss event ripples out to affect their customers, regulators, business partners, and so on. In the past they’d had limited secondary loss events, mainly because the company had reached out to clients and assuaged them, but they recognized they couldn’t count on that working forever.

On the right side of the chart, we could estimate the magnitude of their potential losses from data leakage (lost productivity, response time for the security team, fines and judgments, loss of reputation with customers) using the loss tables we’ve built from industry experience.
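For readers who want to see the mechanics behind the chart, here is a stripped-down, point-estimate sketch in Python. Every number in it is a hypothetical placeholder rather than one of the client’s calibrated estimates, and it is not how the RiskLens application computes results; it only shows how the frequency side and the magnitude side combine.

```python
# Illustrative only: a bare-bones, point-estimate version of the math behind the chart.
# All numbers are hypothetical placeholders, not the client's calibrated estimates.

# Frequency side: Loss Event Frequency = Threat Event Frequency x Vulnerability.
threat_event_frequency = 12    # e.g., mis-sent emails or departing employees per year
vulnerability = 0.25           # fraction of those events that actually expose data
loss_event_frequency = threat_event_frequency * vulnerability

# Magnitude side: a per-event loss table by loss form, as (low, high) dollar ranges.
loss_table = {
    "lost productivity":   (2_000, 10_000),
    "security response":   (5_000, 40_000),
    "fines and judgments": (0, 250_000),
    "reputation":          (0, 500_000),
}
low_magnitude = sum(low for low, _ in loss_table.values())
high_magnitude = sum(high for _, high in loss_table.values())

# Risk = frequency x magnitude, shown here as a simple range.
print(f"Loss event frequency: {loss_event_frequency:.1f} events/year")
print(f"Annualized loss exposure: ${loss_event_frequency * low_magnitude:,.0f} "
      f"to ${loss_event_frequency * high_magnitude:,.0f}")
```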

With the frequency and magnitude in hand, we could show the CISO his risk, or Annualized Loss Exposure (ALE) in FAIR terminology. We always show that as a distribution to account for the variance in exposure when modeling future events:

[Figure: Loss exposure distribution from data leakage]
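To make the “distribution” point concrete, here is a rough Monte Carlo sketch of how a curve like the one above can be generated: simulate many possible years, sampling how many loss events occur and what each one costs, and add secondary losses when an event ripples out to customers or regulators. The ranges and probabilities below are made up for illustration; the actual analysis used calibrated estimates in the RiskLens application.

```python
# Rough Monte Carlo sketch of an annualized loss exposure distribution.
# All ranges and probabilities are invented for illustration.
import random

def simulate_one_year() -> float:
    primary_events = random.randint(2, 12)  # leak events this simulated year
    primary_loss = sum(random.uniform(5_000, 150_000) for _ in range(primary_events))

    # Secondary Loss Event Frequency: assume ~10% of events reach customers or regulators.
    secondary_events = sum(1 for _ in range(primary_events) if random.random() < 0.10)
    secondary_loss = sum(random.uniform(50_000, 500_000) for _ in range(secondary_events))

    return primary_loss + secondary_loss

outcomes = sorted(simulate_one_year() for _ in range(10_000))
print(f"Average annualized loss exposure: ${sum(outcomes) / len(outcomes):,.0f}")
print(f"90th percentile: ${outcomes[int(0.9 * len(outcomes))]:,.0f}")
```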

Next, we modeled their future state with data masking in place. This means their staff could no longer accidentally send an email containing sensitive customer data to the wrong recipient, or accidentally lose a USB stick containing sensitive customer data.

There was a tremendous reduction in risk; in fact, releases would almost never occur. Average annualized loss exposure dropped from $2 million to $61,000 per year.

[Figure: Loss reduction with data masking]
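The future-state comparison works the same way as the current-state analysis: re-run the simulation with the control’s effect reflected in the inputs. Here is a minimal sketch of that idea, where the 2% residual release chance is a hypothetical placeholder standing in for data masking, not a figure from the client’s analysis.

```python
# Sketch of the before/after comparison: the same style of simulation as above,
# but with the chance that a leak event actually releases readable data driven
# down by data masking. The 2% residual figure is a hypothetical placeholder.
import random

def average_ale(release_chance: float, years: int = 10_000) -> float:
    total = 0.0
    for _ in range(years):
        attempts = random.randint(2, 12)  # mis-sent emails, lost media, etc.
        events = sum(1 for _ in range(attempts) if random.random() < release_chance)
        total += sum(random.uniform(5_000, 150_000) for _ in range(events))
    return total / years

print(f"Current state (no masking):  ${average_ale(release_chance=1.0):,.0f}")
print(f"Future state (data masking): ${average_ale(release_chance=0.02):,.0f}")
```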

So, at the end of the day, we were able to give the CISO a solid understanding of his forecasted data leak loss exposure, enough for him to go forward with confidence into an ROI discussion with any of his colleagues about the merits of implementing a data masking initiative.
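Stripped to its essentials, that ROI conversation starts from the risk reduction the analysis forecasts. The cost of the data masking initiative itself isn’t given here, so it stays the open variable in the sketch below.

```python
# The simplest ROI input from the analysis above: annual risk reduction.
# The cost of the data masking initiative is not given in this post.
current_average_ale = 2_000_000   # average annualized loss exposure today
future_average_ale = 61_000       # average annualized loss exposure with data masking

annual_risk_reduction = current_average_ale - future_average_ale
print(f"Annual risk reduction from data masking: ${annual_risk_reduction:,}")
```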

And another important deliverable: we showed the CISO’s team how they could run scenarios going forward with the RiskLens application, so they could keep quantifying risk as other investment decisions came up.

 


