When it comes to the frequency half of the FAIR model, I consider Threat Event Frequency (TEF) king. Yes, in Factor Analysis of Information Risk (FAIR™), all of the ontology is important. But mistakes with Threat Event Frequency act as a multiplier.
We see this when analysts new to FAIR are working on Threat Event Frequency for web applications. A common use of RiskLens is to prioritize remediation efforts for web applications, and using the FAIR model provides a consistent and contextual way to identify what matters.
If TEF is incorrectly derived, the analysis can run aground. Fortunately, most mistakes made with Threat Event Frequency are simple to correct and easy to avoid.
First, let's do a quick recap of Threat Event Frequency: "Threat Event Frequency is the probable frequency, within a given time frame, that threat agents will act in a manner that may result in loss" (from the FAIR book, Measuring and Managing Information Risk). That's a mouthful, so let's unpack it: a threat event is any action by a threat agent that could result in loss, whether or not it succeeds, and TEF estimates how often those actions occur over a defined period.
Keep this definition in mind as we go over these common mistakes and how to avoid them.
Those who are just learning FAIR occasionally confuse Threat Event Frequency (TEF) with Loss Event Frequency (LEF), and this is one of the most-missed questions on the certification exam. There is a strong similarity between the definitions of TEF and LEF. The only difference is that the definition for TEF doesn’t include whether a threat agent’s actions are successful. TEF captures the number of attempts (by a threat actor) to cause harm to an Asset. A common example of a malicious threat event (where harm or abuse is intended) would be the attacker (Threat Actor) who unsuccessfully attacks a web server. Such an attack would be considered a threat event, but not a loss event.
--FAIR Terminology 101 – Risk, Threat Event Frequency and Vulnerability
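The distinction between TEF and LEF comes down to what you count: every attempt, or only the successful ones. A minimal sketch (the `ThreatEvent` record and the sample data are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class ThreatEvent:
    # Hypothetical record of a single attack attempt against an asset
    asset: str
    succeeded: bool  # did the attempt result in a loss event?

def tef(events):
    """Threat Event Frequency: every attempt counts, successful or not."""
    return len(events)

def lef(events):
    """Loss Event Frequency: only attempts that resulted in loss."""
    return sum(1 for e in events if e.succeeded)

events = [
    ThreatEvent("web-server", succeeded=False),
    ThreatEvent("web-server", succeeded=False),
    ThreatEvent("web-server", succeeded=True),
]
print(tef(events))  # 3 threat events
print(lef(events))  # 1 loss event
```

Three attacks against the web server yield a TEF of 3 even though only one succeeded; the failed attempts are threat events but not loss events.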
To avoid this, the analyst needs to use a consistent method to group contact events into threat events. Not doing so will overstate Threat Event Frequency and make it appear that significant cyber incidents are occurring more often than they actually are. Concepts such as Imperva's notion of a Battle Day can help. Just remember that for web applications, contact events will almost always need to be grouped into threat events. If you're not grouping, you're likely making a mistake.
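One consistent grouping method, in the spirit of the Battle Day idea, is to collapse all contacts from the same source on the same day into a single threat event. A minimal sketch, assuming a hypothetical contact log of (source, timestamp) pairs pulled from a WAF:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw contact log: (source IP, timestamp) pairs
contacts = [
    ("203.0.113.7", datetime(2023, 5, 1, 9, 15)),
    ("203.0.113.7", datetime(2023, 5, 1, 9, 16)),   # same actor, same day
    ("203.0.113.7", datetime(2023, 5, 2, 14, 3)),   # same actor, new day
    ("198.51.100.4", datetime(2023, 5, 1, 11, 0)),
]

def group_into_threat_events(contacts):
    """Group raw contacts into one threat event per (actor, day),
    so repeated probes in a single campaign aren't double-counted."""
    grouped = defaultdict(int)
    for source, ts in contacts:
        grouped[(source, ts.date())] += 1
    return grouped

threat_events = group_into_threat_events(contacts)
print(len(contacts))       # 4 raw contacts...
print(len(threat_events))  # ...collapse to 3 threat events
```

Four raw contacts collapse to three threat events; without grouping, the two back-to-back probes from the same actor would have inflated TEF.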
To avoid this, be sure to understand what your data is showing you. Is this a scan (simple contact) or an attack? If it's a scan, don't include it in the final Threat Event Frequency. Or include scans as part of an attack, but treat them as the opening move: just one move among many in a single attack.
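Filtering scans out before counting is a one-liner once events are categorized. A minimal sketch (the event records and category names here are hypothetical, not from any particular IDS):

```python
# Hypothetical categorized event records from an IDS
events = [
    {"category": "scan", "source": "203.0.113.7"},          # simple contact
    {"category": "sqli_attempt", "source": "203.0.113.7"},  # genuine attack
    {"category": "scan", "source": "198.51.100.4"},         # simple contact
]

# Categories we treat as genuine attack attempts (illustrative list)
ATTACK_CATEGORIES = {"sqli_attempt", "xss_attempt", "brute_force"}

def count_threat_events(events):
    """Count only genuine attack attempts toward TEF;
    bare scans are contact events, not threat events."""
    return sum(1 for e in events if e["category"] in ATTACK_CATEGORIES)

print(count_threat_events(events))  # 1 threat event, not 3
```

Of the three records, only the SQL injection attempt counts toward TEF; the two scans are contact events and would have tripled the estimate if counted.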
Learn more about using the attack chain for cyber risk analysis in these blog posts:
Business Email Compromise Risk: The What, Why and How to Quantify
Case Study: Quantify Cybersecurity Risk for Industrial Robots
To avoid this, add a "smell test" task to your workflow. A smell test is a simple step that adds critical thinking for quality control. Come back to the answer later, after you've gotten your head out of the data, and ask yourself if it still seems realistic. Does it make sense given the context of the risk scenario? If not, go over it once more: apply the definition to the data again, check whether you fell into one of these common pitfalls, and correct it accordingly.
If you can avoid these 4 mistakes, your organization will be able to build better plans for incident response based on correct quantification of the likelihood of a cyber event taking place. If you want to learn more about cyber risk quantification and the RiskLens platform, contact us and schedule a demo today.