How to Effectively Leverage the FFIEC Cybersecurity Assessment Tool

by Jack Jones on Jan 6, 2016 10:31:25 AM

In June of 2015, the FFIEC released its Cybersecurity Assessment Tool (CAT) to “…help institutions identify their risks and determine their cybersecurity maturity.”  So, if the FFIEC released this as a free spreadsheet “tool”, why on earth would we at RiskLens bother to bake it into our Cyber Risk Maturity application?

Spreadsheets are not a “mature” solution

It seems a little bit… let’s say, logically inconsistent… to use a spreadsheet tool to gauge the security maturity of an organization.  Spreadsheets are notoriously hard to maintain control of, and the information contained within this tool is clearly sensitive in nature.

It’s partially there

Like many other checklist assessment frameworks, the FFIEC CAT is relatively binary in how it forces the user to characterize the condition of the elements it evaluates.  For example: “Audit log records and other security event logs are reviewed and retained in a secure manner.”  Essentially, the user has to characterize this as true or false, when very often the answer to these kinds of questions is, “sometimes” or “in part of the organization”.  Being able to recognize these partial states is critical in order to accurately portray and understand where opportunities exist for improvement.  In our version of the CAT, users rate each element of the framework as “Weak”, “Partial”, or “Strong”, enabling them to identify elements that have room for improvement and providing actionable insight.

Confidence matters, a lot

Regardless of whether someone claims that the condition of an element is True, False, Strong, Partial, Weak or Orange, the question has to be asked — “How much faith should anyone place in that claim?”  In our version of the CAT, users also have to state whether the claim of Strong, Partial, or Weak for an element is “Documented and Tested”, “Documented”, or “Undocumented”. 

  • Undocumented means that the rating is simply a claim of the person(s) completing the evaluation.
  • Documented means that the organization has policies and/or procedures that set specific expectations for that element (e.g., a policy stating that audit logs are to be reviewed and retained in a secure manner, and perhaps a documented log review process description).
  • Documented and Tested means that the organization has documented policies and/or procedures that support the claim AND the claim is supported through test results (e.g., by internal audit, a 3rd party, etc.).

Differentiating the answers by confidence level provides a much richer understanding of an organization's maturity.  In our version of the CAT, we combine the “Weak”, “Partial”, or “Strong” claim with the confidence level to provide this deeper understanding.
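To make the idea concrete, here is a minimal sketch of how a rating and a confidence level might be combined into a single element score.  The numeric weights are hypothetical illustrations chosen for this example; they are not the actual scoring used in the RiskLens application.

```python
# Hypothetical weights: a rating counts for less when the claim
# behind it is poorly supported.
RATING_SCORES = {"Weak": 0.0, "Partial": 0.5, "Strong": 1.0}

CONFIDENCE_WEIGHTS = {
    "Undocumented": 0.4,           # a bare claim by the evaluator
    "Documented": 0.8,             # policies/procedures set the expectation
    "Documented and Tested": 1.0,  # claim verified by audit or testing
}

def element_score(rating: str, confidence: str) -> float:
    """Discount an element's rating by how well the claim is supported."""
    return RATING_SCORES[rating] * CONFIDENCE_WEIGHTS[confidence]
```

Under these illustrative weights, a “Strong” control that is merely claimed (`element_score("Strong", "Undocumented")` = 0.4) scores lower than a “Partial” control whose condition has been documented and tested (0.5), which captures the point that confidence matters, a lot.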

The big picture

It’s fine and useful to evaluate the condition of controls, but the FFIEC CAT also has a set of elements that attempts to characterize the “inherent risk” an organization faces.  Essentially, the evaluated amount of “inherent risk” provides context to help gauge how mature an organization's controls ought to be.  These inherent risk elements include things like the services the financial institution provides, its technology landscape, etc.

Making a meaningful comparison between “inherent risk” and control conditions is tricky though, and the FFIEC CAT describes a rudimentary matrix-like approach for doing so.  In our version of the CAT, we combine these measurements graphically, which makes the comparison easier to digest. 
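The matrix-style comparison can be sketched as a simple lookup: given an institution's inherent risk level, check how its assessed maturity compares against a target.  The FFIEC CAT's maturity levels run from “Baseline” to “Innovative”; the target mapping below is an illustrative assumption for this sketch, not the FFIEC's official guidance.

```python
# CAT maturity levels, ordered from least to most mature.
MATURITY_ORDER = ["Baseline", "Evolving", "Intermediate", "Advanced", "Innovative"]

# Hypothetical minimum maturity expected at each inherent risk level
# (illustrative only -- consult the FFIEC CAT for its actual guidance).
EXPECTED_MINIMUM = {
    "Least": "Baseline",
    "Minimal": "Evolving",
    "Moderate": "Intermediate",
    "Significant": "Advanced",
    "Most": "Innovative",
}

def maturity_gap(inherent_risk: str, assessed_maturity: str) -> int:
    """Levels short of (negative) or beyond (positive) the expected minimum."""
    target = MATURITY_ORDER.index(EXPECTED_MINIMUM[inherent_risk])
    actual = MATURITY_ORDER.index(assessed_maturity)
    return actual - target
```

For example, `maturity_gap("Significant", "Evolving")` returns -2: the institution's controls sit two maturity levels below the expected minimum for its risk profile, which is exactly the kind of gap a graphical comparison makes easy to see.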

Good, but not enough

A final point is that, as useful as the FFIEC CAT may be, it’s a first draft and has a lot of room for improvement.  For example, it is missing some key elements that determine how mature a cybersecurity program really is (e.g., elements related to risk analysis). It lacks any analytic underpinning and treats all elements as equal in importance, and its maturity assignments (e.g., whether an element is reflective of a “Baseline”, “Evolving”, etc.) are in some cases very debatable.  By including the FFIEC CAT as just one of the assessment tools in our Cyber Risk Maturity suite, we are able to help organizations get a more complete and analytically sound understanding of their strengths and opportunities for improvement.

This post was written by Jack Jones

Jack Jones is the EVP of R&D and a Founder of RiskLens. He has worked in technology for over 30 years, the past 28 years in information security and risk management. He has a decade of experience as a Chief Information Security Officer (CISO) with three different companies, including a Fortune 100 financial services company. His work there was recognized in 2006 when he received the Information Systems Security Association (ISSA) Excellence in the Field of Security Practices award. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012, he was honored with the CSO Compass Award for leadership in risk management. Jones, who lives in Spokane, Washington, has served on the ISACA CRISC Certification Committee and RiskIT Task Force, as well as the ISC2 Ethics Committee. He is the author and creator of the Factor Analysis of Information Risk (FAIR) framework. He writes about that system in his book Measuring and Managing Information Risk: A FAIR Approach, which was inducted into the Cyber Security Canon in 2016, as a must-read in the profession.
