# Threat Event Frequency is King

When it comes to the frequency half of the FAIR model, I consider Threat Event Frequency (TEF) king. Yes, in the FAIR model, the whole ontology is important. But mistakes with Threat Event Frequency act as a multiplier. We see this when analysts new to FAIR work on Threat Event Frequency for web applications. A common use of RiskLens is to prioritize remediation efforts for web applications. Using FAIR provides a consistent, contextual way to identify what matters, but if TEF is incorrectly derived the analysis can run aground. Fortunately, mistakes made with Threat Event Frequency are often simple and easy to avoid.

But first, a quick recap of what Threat Event Frequency is: "Threat Event Frequency is the probable frequency, within a given time frame, that threat agents will act in a manner that may result in loss." (pg. 29, Measuring and Managing Information Risk)

That's a mouthful, so let's unpack it:
- Probable Frequency - We're dealing with how often something will happen. In this context, how often bad guys attack our web application.
- Given Time Frame - We have to put a limit on how long we will count events. In RiskLens we always deal with annualized time, so one year.
- Threat Agent Acting - A threat agent can be many things: a person, a group (think the hacking group Anonymous), malware, etc. The important part is that the agent is acting in a malicious way against your web application. (I'm assuming the risk scenario context here is external hackers.)
- May Result in Loss - Just because an attack happened doesn't mean a loss occurred. But it's important to remember the intent to cause loss was there.
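Putting the "probable frequency" and "given time frame" pieces together, here's a minimal sketch of annualizing an observed threat event count. All the numbers are illustrative assumptions, not data from any real analysis:

```python
# Hypothetical sketch: annualizing an observed threat event count.
# Both numbers below are made-up examples for illustration.
OBSERVATION_DAYS = 90        # how long we watched the WAF logs
THREAT_EVENTS_OBSERVED = 12  # grouped threat events (not raw contacts)

# "Given time frame": FAIR/RiskLens works with annualized frequency,
# so scale the observed count to a one-year window.
tef_annualized = THREAT_EVENTS_OBSERVED * (365 / OBSERVATION_DAYS)
print(round(tef_annualized, 1))  # -> 48.7
```

In practice you'd feed a range (minimum, most likely, maximum) rather than a single point estimate, but the annualization step is the same.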
## Mistake #1 - Confusing Contact Frequency with Threat Event Frequency

This is by far the most common mistake analysts new to FAIR make. A great example is using security logs from a web application firewall (WAF). A WAF can be a great source of data to derive Threat Event Frequency, but it's important to understand that individual log entries are contact events, not threat events. To avoid this, the analyst needs a consistent method for grouping contact events into threat events. Not doing so will overstate Threat Event Frequency. Concepts such as Imperva's notion of a Battle Day can help. Just remember: web application contact events will likely need to be grouped into threat events. If you're not grouping, you're likely making a mistake.
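To make the grouping idea concrete, here's a hedged sketch. The log entries, field layout, and the grouping rule (same source, same calendar day counts as one threat event, in the spirit of a Battle Day) are all assumptions for illustration; a real WAF export and your own grouping convention will differ:

```python
from datetime import datetime

# Hypothetical WAF log entries: (timestamp, source IP).
# Real WAF exports will have different fields and formats.
contact_events = [
    ("2023-05-01T03:12:00", "198.51.100.7"),
    ("2023-05-01T03:12:04", "198.51.100.7"),
    ("2023-05-01T03:12:09", "198.51.100.7"),
    ("2023-05-02T14:30:00", "203.0.113.9"),
]

# One simple grouping convention (a Battle Day-style rule): all
# contacts from the same source on the same calendar day count as
# a single threat event.
threat_events = set()
for ts, src in contact_events:
    day = datetime.fromisoformat(ts).date()
    threat_events.add((day, src))

print(len(contact_events))  # -> 4 contact events...
print(len(threat_events))   # -> ...but only 2 threat events
```

Whatever rule you choose, the point is consistency: apply the same grouping method every time, or your TEF estimates won't be comparable across analyses.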
## Mistake #2 - Confusing Intent

Not all attacks look alike. In fact, not all attacks are actually attacks. Remember, in our definition above the threat agent is acting maliciously in a way that may result in a loss. Is a scan of a system malicious? Some may say it is, but under the TEF definition it is not; a scan alone also doesn't lead to a loss. To avoid this mistake, be sure to understand what your data is showing you. Is this a scan or an attack? If it's a scan, don't include it in the final Threat Event Frequency. Alternatively, treat the scan as part of an attack and call it the opening move.
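The first option (dropping scans before counting) can be sketched as a simple filter. The event records and their `kind` labels are hypothetical; in real data, classifying an entry as a scan versus an attack is its own analysis step:

```python
# Hypothetical pre-classified events. The "kind" labels are assumed
# for illustration; real log data needs its own classification pass.
events = [
    {"kind": "scan"},
    {"kind": "scan"},
    {"kind": "sqli_attempt"},
    {"kind": "credential_stuffing"},
]

# Drop scans entirely before counting threat events, since a scan by
# itself doesn't meet the TEF definition (no potential for loss).
attacks_only = [e for e in events if e["kind"] != "scan"]
print(len(attacks_only))  # -> 2 threat events, scans excluded
```

If you instead fold a scan into the attack it precedes (the "opening move" option), make sure it's counted once as part of that attack, not as a separate threat event.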
## Mistake #3 - Not Doing a Smell Test

You've crunched the numbers and the data says X. You put the results in and blissfully jump to the next question. Stop. You're not done; you've skipped an important step. Skipping a smell test, a critical look at the result in the context of the full risk scenario, is a mistake, one that allows #1 and #2 to creep into your analysis. To avoid this, add a "smell test" task to your workflow: a simple step that adds critical thinking for quality control. Come back to the answer later, after you've gotten your head out of the data. Does it still seem realistic? Does it make sense given the context of the risk scenario? If not, go over it once more. Apply the definition to the data again, check whether you fell for a common pitfall, and correct it accordingly.
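A smell test is ultimately a judgment call, but you can automate a first-pass plausibility check. The bounds below are placeholder assumptions, not FAIR guidance; you'd set them from the context of your own risk scenario:

```python
# A lightweight first-pass "smell test": flag TEF values that fall
# outside a plausibility range set from the scenario context.
# The default bounds here are illustrative assumptions only.
def smell_test(tef: float, low: float = 1.0, high: float = 100.0) -> bool:
    """Return True if the annualized TEF seems plausible for the scenario."""
    return low <= tef <= high

print(smell_test(48.7))    # -> True: within the expected range
print(smell_test(50_000))  # -> False: likely ungrouped contact events
```

A failed check doesn't mean the number is wrong, only that it deserves the critical second look described above before it goes into the analysis.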