The bank’s information security team was split on the need for a data retention policy. Half of the team saw no need to eliminate customer files: customers would likely open another account down the line, potentially in a different part of the business, such as a savings account, car loan, or personal loan.
The other half of the team felt it was extremely important to work out a system to purge files; the database held customer accounts that had been inactive for more than 10 years.
The team called in RiskLens to help settle the issue with the FAIR model for cyber risk quantification: Just how much risk were they running with the status quo on data retention?
Defining the Loss Event
Every analysis based on the FAIR model starts with a rigorous definition of the risk to be investigated, framed as a Loss Event with a Threat, an Asset, and an Effect. In this case, the loss event was:
“The risk associated with an external malicious actor (Threat) breaching sensitive customer personally identifiable information (PII) from the Online Banking Database (Asset), resulting in a loss of confidentiality (Effect).”
Download a copy of the FAIR model on one page to follow along with this case study.
Data Gathering for the Analysis
To gather the appropriate data, we first needed to clear up a few assumptions that had been floating around the room: How much data was really inside this database? Was it all in fact PII? What controls stood between the threat actor and the data of concern?
Given these unanswered questions, we began the data-gathering process, using the FAIR model to analyze each of the relevant factors behind the two key components used to quantify risk: probable Loss Event Frequency and probable Loss Magnitude.
On the Loss Event Frequency Side of the Model:
Threat Event Frequency
- The team estimated that attempts by an external malicious actor could occur anywhere from once every other year to three times a year.
- The organization had never seen a data breach from this asset but considered that industry-wide data breaches are trending up.
- The organization sees a lot of phishing campaigns, some of which are not caught by their email filtering system.
- To ensure that all possibilities were captured, the team chose a wider range to account for the unknown.
- The team discussed the controls in place to prevent an external malicious actor (threat) from breaching the PII data within the Online Banking Database and identified the following:
- Privileged access reviews are performed on a semi-annual basis
- Passwords are set to company policy
- Patching is performed, but not always in a timely manner; at times the team falls behind, or is unable to update because the database interfaces with many other applications.
- Firewalls, etc…
- Given the controls in place, the team estimated they were vulnerable (or susceptible) to roughly 5–15% of those attempts turning into a successful loss event.
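The frequency-side estimates above can be combined in a simple Monte Carlo sketch: Loss Event Frequency is Threat Event Frequency (the team’s 0.5–3 attempts per year) multiplied by vulnerability (5–15%). The uniform distributions and simulation count here are illustrative assumptions, not the distributions RiskLens actually uses.

```python
import random

def simulate_lef(n=100_000, seed=42):
    """Monte Carlo sketch: Loss Event Frequency = Threat Event Frequency x Vulnerability."""
    rng = random.Random(seed)
    lefs = []
    for _ in range(n):
        tef = rng.uniform(0.5, 3.0)     # attempts per year (team's estimate)
        vuln = rng.uniform(0.05, 0.15)  # share of attempts that succeed (team's estimate)
        lefs.append(tef * vuln)
    lefs.sort()
    return lefs[n // 2]  # median annual loss event frequency

median_lef = simulate_lef()
```

Under these assumptions the median works out to roughly one loss event every six years, which is why the team’s “never seen a breach from this asset” history is consistent with a non-trivial level of risk.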
On the Loss Magnitude Side of the Model:
Primary Loss Magnitude
- The only quantifiable primary loss the organization would see is the cost of responding to the event.
- The main teams involved would be Investigations and IT, although Management would be included on correspondence.
Secondary Loss Magnitude
The team went back and forth on how many files they actually had in the database. Some members thought they had only 5 million records; others believed an interface pulled in up to 15 million. Eventually we were able to query the database directly and find out the true count: it was in fact 15 million!
Then we began to calculate the potential secondary loss figures. Secondary loss is where the majority of the loss falls in any breach-related scenario. Among the losses the organization could expect:
- Response to customers, regulators, PR involvement, additional audits, etc.
- Credit monitoring for customers affected by the breach
- Fines and judgments
- Potential reputational damage
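Putting the two sides of the model together, annualized loss exposure is Loss Event Frequency multiplied by per-event Loss Magnitude. The sketch below extends the frequency estimates above with a hypothetical per-event magnitude range (the $5M–$40M band is an illustrative assumption, not the bank’s actual figures) and reads off percentiles of the simulated distribution.

```python
import random

def simulate_ale(n=100_000, seed=7):
    """Sketch: annualized loss exposure = Loss Event Frequency x Loss Magnitude.
    The per-event magnitude range is hypothetical, not the bank's figures."""
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(n):
        # Frequency side: TEF (0.5-3 attempts/yr) x vulnerability (5-15%)
        lef = rng.uniform(0.5, 3.0) * rng.uniform(0.05, 0.15)
        # Magnitude side: hypothetical total loss per event, dominated by
        # secondary losses (response, credit monitoring, fines, reputation)
        magnitude = rng.uniform(5e6, 40e6)
        annual_losses.append(lef * magnitude)
    annual_losses.sort()
    return {
        "p10": annual_losses[n // 10],
        "median": annual_losses[n // 2],
        "p90": annual_losses[9 * n // 10],
    }
```

Reporting a range of percentiles rather than a single number is what lets the team talk about risk as a distribution of probable outcomes instead of a worst-case guess.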
Running the Risk Quantification Analysis
With such a divergence of opinion on the team, we ran the analysis several times to settle the debate. Using the RiskLens tool, we modeled 15%, 25%, and 50% reductions in the database record count and compared the resulting reductions in loss exposure, which ranged from $5 million to $11 million.
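The shape of that comparison can be sketched as follows. The flat per-record cost is a hypothetical figure chosen for illustration; the actual analysis used simulated loss distributions, so savings do not scale strictly linearly the way they do here.

```python
# Record-count comparison sketch. COST_PER_RECORD is an assumed blended
# secondary-loss cost per record, not a RiskLens figure.
BASELINE_RECORDS = 15_000_000  # count confirmed by querying the database
COST_PER_RECORD = 2.20         # hypothetical cost per record, in dollars

def loss_exposure(records, cost_per_record=COST_PER_RECORD):
    """Total loss exposure under a flat per-record cost assumption."""
    return records * cost_per_record

baseline = loss_exposure(BASELINE_RECORDS)
for reduction in (0.15, 0.25, 0.50):
    remaining = int(BASELINE_RECORDS * (1 - reduction))
    savings = baseline - loss_exposure(remaining)
    print(f"{reduction:.0%} fewer records -> ${savings / 1e6:.1f}M less exposure")
```

Even this crude linear version makes the decision framing clear: every record purged takes its share of breach exposure off the table, so the retention debate becomes a cost-benefit comparison rather than a matter of opinion.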
Takeaways for the team from the FAIR analysis:
- The organization learned the accurate amount of data within their ‘crown jewel’ database.
- They were able to make effective comparisons across the record-reduction scenarios.
- They were able to provide defensible reporting to quantify the effectiveness of reducing the number of records in the database – supporting well-informed decision-making.