Guarding the Galaxy Against Supervillains: A FAIR Risk Analysis

January 16, 2019  Tim Wynkoop

I frequently hear from clients that they'd like to perform a FAIR risk analysis on more than just information risk or cyber risk; they want to be able to perform more of an operational risk analysis. Thankfully, FAIR (that's Factor Analysis of Information Risk, the model that powers RiskLens) is flexible enough that you can do just that. I have enlisted the help of my friends at Marvel to show how easy it is to perform a FAIR analysis on just about anything, including operational risk. Just for fun, let's say: guarding the galaxy.

Download a copy of the FAIR model before we take off.

Here are the steps:

  1. Scope your analysis:  How much risk is associated with Earth being attacked by a supervillain who wants to destroy it?
  2. Define your Loss Event:  Total destruction of Earth.
  3. Discover your Threat Actors:  Malicious external actor (AKA evil supervillain)
  4. Choose a focus on C, I, or A (Confidentiality, Integrity, Availability):  Availability (if Earth isn't available, then we have a problem)

Learn more: How to Scope A Risk Analysis Using FAIR

Once you have completed your scoping, you are ready to perform the analysis.  I like to go one step further just to make sure I don't miss anything or fall victim to scope creep, so I define what each part of the FAIR model (at a minimum the left side, everything that rolls up to Loss Event Frequency) means in relation to this analysis.

Risk: How much risk is associated with Earth being attacked by a supervillain who wants to destroy it?

Loss Event Frequency (LEF):  How often is Earth made unavailable by a supervillain?

Threat Event Frequency (TEF):  How many times does a supervillain attempt to make Earth unavailable to its inhabitants?

Vulnerability (VULN):  What percentage of those attempts result in Earth becoming unavailable?
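To make the rollup concrete, here is a minimal sketch of how the left side of the FAIR model combines (the frequency and vulnerability values are placeholder assumptions for illustration, not data from the scenario):

```python
# Left side of the FAIR model: LEF = TEF x Vulnerability.
# Both inputs below are placeholder assumptions, not real data.
tef = 0.7    # Threat Event Frequency: supervillain attempts per year
vuln = 0.02  # Vulnerability: fraction of attempts that succeed

# Loss Event Frequency: how often Earth actually becomes unavailable
lef = tef * vuln
print(f"Loss Event Frequency: {lef:.3f} loss events/year")
```

With those assumed inputs, Earth would be lost about 0.014 times per year, or roughly once every 70 years.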

Now I would ask the question “What information do I know?”

Thanks to Marvel, I have some pretty good Threat Event Frequency numbers. Let’s count:

  • Johann Schmidt attempted to conquer the world with his weapons made from the Tesseract
  • Loki took over Asgard and then set his sights on Earth
  • The Dark Elves threatened the world in their attempt to obtain the dark matter
  • Hydra and the Winter Soldier attempted a world takeover
  • Ego attempted to take over the Universe
  • Ultron was created and attempted to cleanse the Earth
  • Dormammu attempted to assimilate the Earth into his dimension

I count at least 7 attempts.

Also, thanks to Marvel, I have some “controls” to consider when estimating Earth’s Vulnerability to attack:

  • Captain America
  • Thor
  • The Avengers
  • The Guardians of the Galaxy
  • Dr. Strange

Since Vulnerability in the FAIR model is expressed as a percentage, I would say our vulnerability is actually rather low, and it gets lower as the number of people with special abilities grows.
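Putting the pieces together, the FAIR math can be run as a simple Monte Carlo simulation, sampling Threat Event Frequency, applying Vulnerability, and multiplying by a loss magnitude. This is a rough sketch only; every input below is an illustrative assumption for this toy scenario, not RiskLens output or industry data:

```python
import random

# Toy FAIR-style Monte Carlo simulation. All inputs are assumptions
# made up for this supervillain scenario, not RiskLens data.
TRIALS = 10_000

# Threat Event Frequency: roughly 7 observed attempts spread over
# about a decade of movies, expressed as a range of attempts/year.
TEF_MIN, TEF_MOST_LIKELY, TEF_MAX = 0.3, 0.7, 1.5
VULN = 0.02              # assumed: the heroes stop almost every attempt
LOSS_MAGNITUDE = 450e9   # assumed loss per event, in dollars

def simulate_one_trial():
    # Sample TEF from a triangular distribution over the estimate range
    tef = random.triangular(TEF_MIN, TEF_MAX, TEF_MOST_LIKELY)
    # Loss Event Frequency = Threat Event Frequency x Vulnerability
    lef = tef * VULN
    # Annualized loss exposure for this trial
    return lef * LOSS_MAGNITUDE

results = [simulate_one_trial() for _ in range(TRIALS)]
average_risk = sum(results) / TRIALS
print(f"Average annualized risk: ${average_risk / 1e9:.1f} billion")
```

Tools like the RiskLens platform do the same kind of thing with far richer distributions and calibrated inputs; the point here is just that the model reduces to arithmetic you can reason about.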

So what does this all mean?  If you complete a FAIR analysis with the information you have, given this scenario, you may get results like these from the RiskLens platform: an average of $451 billion in annualized risk, based on the information gathered and industry data.

Given these facts regarding the Threat Events:

  1. Full annihilation has never occurred
  2. Earth has many protectors

These annualized numbers appear reasonable, mainly because, as #1 states, we have never seen a full-scale annihilation.  Keep in mind that sometimes assumptions must be made.

Learn more: Assumptions Are a Powerful Thing

To sum up, whether you are measuring cybersecurity risk or fictional attacks on Earth, the FAIR model helps you work through the problem and gives results that make sense for your business.  That is where RiskLens fits in.

For more on how to make sure your risk analysis makes sense in business terms, and how to fix it if it does not, check out these other blog posts:

Two Questions Every Risk Assessment Should Answer

How to QA Review Your FAIR Analysis in 5 Minutes

So go be a guardian of the galaxy, your galaxy, your organization.
