Assessing Cyber Risk in Legacy Systems with RiskLens

January 22, 2019  

Here's a quick look at how cyber risk analytics with the FAIR model and the RiskLens application can solve an everyday business problem. A RiskLens colleague and I recently helped a risk team in the information industry quantify the risk associated with a legacy server. In the process, the organization also gained some important insights into its IT environment and cybersecurity risk management in general.

Let’s set the scene:

An unsupported legacy server hosts a customer-facing application. The server has been out of support for a significant amount of time, and several known vulnerabilities have surfaced. The organization is divided on whether the risk is significant enough to justify migrating the application.

Scoping the problem:

In the scoping sessions with the client, we determined that the primary concern was an availability outage of the application resulting from exploitation of vulnerabilities in the unpatched server. Based on the industry and the type of application, we determined that the most likely attacker would be the General Hacking Community.
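
To make that framing concrete, here is a minimal sketch of the scoped scenario captured as a structured record; the field names and values are illustrative paraphrases, not RiskLens data structures.

```python
# Illustrative only: the scoped FAIR scenario as a simple record.
# Field names are not RiskLens structures; values paraphrase the case study.
from dataclasses import dataclass

@dataclass
class FairScenario:
    asset: str              # what is at risk
    threat_community: str   # the most likely attacker
    effect: str             # confidentiality, integrity, or availability
    method: str             # how the loss event would most likely occur

scenario = FairScenario(
    asset="Customer-facing application on an unsupported legacy server",
    threat_community="General Hacking Community",
    effect="Availability",
    method="Exploitation of known vulnerabilities in the unpatched server",
)
```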

Additionally, in our initial sessions, two main assumptions surfaced:

  1. This type of event had not occurred in the organization, but was a concern due to the unsupported server
  2. An additional concern (out of scope) was that an attacker could pivot from the application and wreak havoc on other systems in the network

Gathering data to populate the FAIR model’s Loss Magnitude side:

After determining the scope of the analysis, we mapped it to the FAIR model and began our data gathering sessions. Although there was limited data on past losses, by using the knowledge of the organization’s SMEs and the data points available, we were able to confidently determine a range for the potential Loss Magnitude of the event. The majority of the loss exposure was related to Response Costs, which is common in Availability scenarios.
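
As a rough illustration of what such estimates look like (the client's actual figures are not published), Loss Magnitude is typically captured as calibrated minimum / most likely / maximum ranges per FAIR loss form:

```python
# Illustrative ranges only; the client's actual Loss Magnitude data is not published.
# Each FAIR loss form is estimated as (minimum, most likely, maximum) USD per event.
loss_magnitude_per_event = {
    "response_costs":    (20_000, 60_000, 150_000),  # dominated the exposure here
    "productivity_loss": (5_000, 15_000, 40_000),
    "replacement_costs": (0, 2_000, 10_000),
}

# Quick sanity check on the worst-case single event under these assumed ranges.
worst_case = sum(high for (_low, _ml, high) in loss_magnitude_per_event.values())
print(f"Worst-case per-event loss (illustrative): ${worst_case:,.0f}")
```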


Filling out the Loss Event Frequency side of the FAIR model:

Following the Loss Magnitude sessions, we discussed the frequency of the event. In typical RiskLens fashion, we sat the clients down in front of a whiteboard to once again flesh out the scope of the scenario and how it would likely occur.

In this exercise, we discovered that one of the initial assumptions was incorrect: this event had happened before. More than once, as it turned out. The events had not been communicated broadly within the organization, which had led to the assumption that they had never occurred. By getting the right people in the room, we were able to correct that assumption. Based on the new information, we could apply the data at the highest level of the frequency side of the model (estimating Loss Event Frequency directly rather than decomposing it further) and calculate a probable Loss Event Frequency.
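
To illustrate why this mattered (the real event counts were not disclosed), even a handful of prior events over a known time window is enough to anchor a Loss Event Frequency estimate directly at the top of the model:

```python
# Illustrative figures only; the real event counts were not disclosed.
observed_events = 2      # prior availability outages surfaced by SMEs (assumed)
observation_years = 5    # period the SMEs could speak to with confidence (assumed)

lef_point_estimate = observed_events / observation_years  # 0.4 loss events per year

# In practice the estimate is expressed as a calibrated range rather than a point,
# e.g. 0.1 to 1.0 loss events per year with 0.4 as the most likely value.
lef_range = (0.1, 0.4, 1.0)
```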

Determining the Vulnerability of the server:

The final round of data gathering sessions was on Vulnerability. In discussing the control environment during this session, we determined that the server's ultimate Vulnerability was uncertain: on one hand the server was unsupported, but on the other it had multiple layers of controls in place. To account for this uncertainty and remain accurate, we used a relatively wide range for Vulnerability. Given the precision of the data gathered elsewhere in the analysis, we were comfortable with the Vulnerability range we determined.
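
For context, Vulnerability in FAIR is the probability that a threat event becomes a loss event, so when the frequency side is decomposed this way, a wide Vulnerability range widens the derived Loss Event Frequency accordingly. A rough sketch with made-up numbers:

```python
# Made-up numbers to show how a wide Vulnerability range propagates; these are
# not the values used in the analysis.
tef_low, tef_high = 1.0, 4.0      # threat events per year against the server
vuln_low, vuln_high = 0.05, 0.50  # probability a threat event becomes a loss event

lef_low = tef_low * vuln_low      # 0.05 loss events per year
lef_high = tef_high * vuln_high   # 2.0 loss events per year
print(f"Derived LEF range: {lef_low:.2f} to {lef_high:.2f} loss events per year")
```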

Additionally, in this session another assumption was challenged: that an actor could gain a foothold in the network from this server and application. Based on the segmentation in place, as well as additional controls, the SMEs realized that pivoting from the application to other key systems would actually be significantly more difficult than originally assumed. Although it remained a concern, the additional information provided clarity around the issue that had not previously been available.

Analysis results:

After we had conducted all of the workshops and gathered the necessary data, we used the RiskLens Cyber Risk Quantification (CRQ) tool to calculate the Annualized Loss Exposure (ALE) of the loss event.
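
The internals of the CRQ engine are not described in this post, but the idea can be sketched generically: a Monte Carlo simulation draws Loss Event Frequency and Loss Magnitude from their calibrated ranges many times and reports the resulting distribution of annual loss. The ranges and the modified-PERT draw below are illustrative assumptions, not the client's inputs or the RiskLens implementation.

```python
# A generic Monte Carlo sketch of Annualized Loss Exposure (ALE); not the
# RiskLens CRQ implementation, and all ranges are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def pert(low, mode, high, size, lam=4.0):
    """Modified-PERT (Beta) draw from a (minimum, most likely, maximum) range."""
    a = 1 + lam * (mode - low) / (high - low)
    b = 1 + lam * (high - mode) / (high - low)
    return low + rng.beta(a, b, size) * (high - low)

years = 10_000                                     # simulated years
lef = pert(0.1, 0.4, 1.0, years)                   # loss events per year
events = rng.poisson(lef)                          # realized event count each year
per_event_loss = pert(25_000, 75_000, 200_000, years)

annual_loss = events * per_event_loss              # simplification: one loss draw per year
print(f"Mean ALE: ${annual_loss.mean():,.0f}")
print(f"90th percentile annual loss: ${np.percentile(annual_loss, 90):,.0f}")
```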

Based upon the calculations, we discovered that both the probable Loss Event Frequency and the Loss Magnitude associated with the event were relatively low. In fact, the data showed that even if the server and application were permanently taken offline, there would be minimal impact to the organization.

Additionally, to determine whether migrating the application would be beneficial, we versioned the analysis and ran a "what if" analysis of the proposed migration. In under five minutes we were able to determine the ROI of the migration. As a result of both the initial analysis and the ROI figures, the organization decided to continue hosting the application on the outdated server, now knowing exactly what specific risk it was accepting by doing so.
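
A simplified version of that "what if" comparison, with entirely illustrative numbers, shows the shape of the ROI question: does the reduction in ALE justify the cost of the migration?

```python
# Entirely illustrative figures; the case study does not publish the real ones.
ale_current = 30_000          # ALE with the app on the legacy server (USD per year)
ale_after_migration = 5_000   # estimated ALE after migrating (USD per year)
migration_cost = 120_000      # one-time cost of the migration (USD)

annual_risk_reduction = ale_current - ale_after_migration  # 25,000 USD per year
payback_years = migration_cost / annual_risk_reduction     # roughly 4.8 years

# When the payback period is long relative to the value at stake, accepting the
# now-quantified risk can be the more defensible decision, as it was here.
print(f"Payback period: {payback_years:.1f} years")
```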

Although the loss exposure was not particularly significant, the data gained from the analysis could be applied to additional IT debt situations in the organization and shed light on the topic of cyber risk management overall. Further, throughout the course of the analysis we were able to provide clarity on some assumptions within the organization, which is just as valuable as the analysis itself.