There are two main types of risk assessment methodologies: qualitative risk assessment and quantitative risk assessment.
1. Qualitative Risk Assessment
Qualitative risk assessment is the process of rating the likelihood of a risk occurring and the impact it would have if it did, and using those ratings to gauge its severity.
The process involves recording the results in a risk assessment matrix, which helps risk professionals quickly identify the top risks – those falling in the highest likelihood and impact categories.
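As a rough illustration of how such a matrix works, here is a minimal sketch in Python. The 5x5 scale, the score thresholds, and the example register entries are all assumptions for illustration, not a standard prescribed by any framework:

```python
# Hypothetical 5x5 qualitative risk matrix: each risk gets a likelihood
# and impact rating (1-5), and the product places it in a severity band.
def rate_risk(likelihood: int, impact: int) -> str:
    """Map likelihood x impact (each 1-5) to a qualitative severity band."""
    score = likelihood * impact
    if score >= 15:
        return "High"    # top-right of the matrix: address these first
    if score >= 6:
        return "Medium"
    return "Low"

# Illustrative risk register entries: (name, likelihood, impact)
register = [("Fire hazard", 2, 5), ("Phishing", 4, 4), ("Printer outage", 3, 1)]
for name, likelihood, impact in register:
    print(f"{name}: {rate_risk(likelihood, impact)}")
```

Sorting or filtering the register by band then surfaces the top risks the matrix is meant to highlight.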
2. Quantitative Risk Assessment
According to the International Risk Management Institute (IRMI), risk quantification is “forecasting of loss frequency and severity to make risk financing decisions. Dependable estimates of the likelihood and dollar amount of loss-causing events allow an organization to take appropriate steps now and in the future to minimize their financial impact.”
In simple terms, risk quantification means assigning a monetary value to risk. For example, while performing a risk assessment, an assessor calculates an annual loss expectancy (the potential loss due to a risk in a year) of $1 million. This quantitative value gives risk professionals a clear picture of how large the loss could be if the risk materializes.
What gets measured gets managed. For a comprehensive risk management program, it is critical to effectively manage non-financial risks (NFR) as well. Quantifying NFR helps organizations better evaluate whether their risk exposure is aligned with their risk appetite and within their tolerance levels, whether they need to invest more to improve controls, how much investment is worthwhile, and more.
Value at Risk (VaR) is a way to quantify the risk of potential losses, i.e., the expected loss from a risk exposure. Factor Analysis of Information Risk (FAIR™) is one of the most widely used VaR models for cybersecurity and operational risk.
In the words of the FAIR Institute, “FAIR provides a model for understanding, analyzing and quantifying cyber risk and operational risk in financial terms.”
The model is based on the concept that risk is uncertain: the focus should be not on what is possible but on how probable a loss event is and how large the loss exposure would be. By enabling assessors to express the factors contributing to a risk in quantitative terms, such as numbers, percentages, and monetary values, it helps estimate the probable frequency and magnitude of loss, even across complex risk scenarios.
FAIR helps calculate total loss exposure, loss event frequency, loss magnitude, threat event frequency, susceptibility, and primary and secondary loss.
Annual Loss Expectancy (ALE) is derived from Single Loss Expectancy (SLE), the loss that could result from a single risk event. For example, consider the risk of a fire hazard. If the organizational infrastructure, including the office building, furniture, etc., is valued at $100,000 and a fire destroys 75% of it, the monetary loss to the organization is $75,000. So, in this example, the SLE is $75,000.
Once we know the SLE, the ALE can be calculated by multiplying it by the annualized rate of occurrence, i.e., how often the risk event is expected to occur per year. For example, if a risk event can occur 5 times in a year and the organization would lose $1,000 in each event, the ALE is $5,000.
In another scenario, suppose a single risk event can result in a loss of $100,000 but occurs only once every 5 years; the ALE would then be $20,000.
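The arithmetic above can be captured in a few lines. This is a minimal sketch of the standard SLE and ALE formulas using the figures from the text; the function names are my own:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x fraction of the asset lost in a single event."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, annual_rate: float) -> float:
    """ALE = SLE x annualized rate of occurrence (events per year)."""
    return sle * annual_rate

# Fire example from the text: $100,000 infrastructure, 75% destroyed
print(single_loss_expectancy(100_000, 0.75))      # -> 75000.0

# Scenario 1: $1,000 loss per event, 5 events per year
print(annual_loss_expectancy(1_000.0, 5))         # -> 5000.0

# Scenario 2: $100,000 loss per event, once every 5 years
print(annual_loss_expectancy(100_000.0, 1 / 5))   # roughly $20,000
```

Note that a rate below 1 (such as once every 5 years) works the same way as a rate above 1; the ALE simply spreads the infrequent loss across years.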
Here’s a quick look at these terms (as defined by the FAIR Institute):

Loss Event Frequency: The probable frequency that a threat action will result in loss within a given timeframe.

Threat Event Frequency: The probable frequency that a threat agent will act against an asset within a given timeframe.

Loss Magnitude: The probable magnitude of primary and secondary loss resulting from an event.

Vulnerability (Susceptibility): The probability of a threat event becoming a loss event. Or, in other words, the probability that an asset will be unable to resist a threat agent.

Threat Capability: The probable level of force (as embodied by the time, resources, and technological capability) that a threat agent is capable of applying against an asset. Or, in other words, how much damage a threat can cause to an asset.

Resistance Strength: The strength of a control compared to the threat capability. Or, in other words, the strength of a control to protect an asset from a threat.
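To show how these factors chain together, here is an illustrative sketch (not the official FAIR ontology computation, which is distribution-based): loss event frequency is the share of threat events that overcome controls, and expected annual loss exposure is frequency times magnitude. All input values are assumed for illustration:

```python
def loss_event_frequency(threat_event_freq: float, vulnerability: float) -> float:
    """Threat events per year x probability a threat event becomes a loss event."""
    return threat_event_freq * vulnerability

def loss_exposure(lef: float, loss_magnitude: float) -> float:
    """Expected annual loss: loss events per year x loss per event."""
    return lef * loss_magnitude

# Assumed inputs: 12 threat events/year, 25% overcome controls, $40,000 per loss
lef = loss_event_frequency(12, 0.25)   # 3 loss events per year
print(loss_exposure(lef, 40_000))      # -> 120000.0
```

In a full FAIR analysis each of these inputs would be a range or distribution rather than a single number, which is where the Monte Carlo technique described next comes in.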
Monte Carlo simulation is one of the most widely used methods for running a FAIR analysis. This modeling technique involves simulating a risk event, such as a supply chain disruption or a ransomware attack, many times and predicting the financial losses that could result from each scenario. The process generates a range of possible outcomes of a risk event along with their relative probabilities.
To simplify it further, the simulation runs many what-if analyses by assigning different values to each uncertain variable, producing a spread of results. It then aggregates those results, for example by averaging them, to arrive at an estimate. Ultimately, the technique provides a full range of potential outcomes, how likely each is to occur, which factors have the greatest impact, and by how much.
For example, consider the risk of a natural calamity. Monte Carlo simulations give risk teams a range of possible outcomes instead of single-point estimates. With the resulting probability distribution, organizations can quickly see their risk exposure at multiple levels and compare it against their risk limits to make better-informed decisions.
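A toy version of such a simulation can be sketched with the standard library alone. This is a hedged illustration, not MetricStream's or FAIR's implementation: annual event counts are drawn from a Poisson-like process (via exponential inter-arrival times), per-event losses from a triangular (min / most likely / max) distribution, and all parameter values are assumptions:

```python
import random
import statistics

def simulate_annual_loss(rate=2.0, low=10_000, mode=50_000, high=250_000,
                         trials=10_000, seed=42):
    """Return a list of simulated annual losses (one per trial)."""
    random.seed(seed)
    outcomes = []
    for _ in range(trials):
        # Count events in one year: sum exponential inter-arrival times
        events, t = 0, random.expovariate(rate)
        while t < 1.0:
            events += 1
            t += random.expovariate(rate)
        # Per-event loss drawn from a triangular(min, max, most-likely) spread
        loss = sum(random.triangular(low, high, mode) for _ in range(events))
        outcomes.append(loss)
    return outcomes

losses = sorted(simulate_annual_loss())
print(f"mean annual loss: ${statistics.mean(losses):,.0f}")
print(f"95th percentile:  ${losses[int(0.95 * len(losses))]:,.0f}")
```

Reading off the 95th percentile of the sorted outcomes is one way to express exposure "at multiple levels": it answers how bad the year could plausibly be, not just the average case.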
From a practical viewpoint, the decision to perform a qualitative or quantitative risk assessment depends on what the risk assessor is trying to assess and what they expect to learn. For example, consider the risk of a fire hazard faced by an organization. An initial risk assessment would involve a series of survey questions.
Most of these questions require a yes/no response and rely heavily on the expertise and knowledge of the assessor. Though qualitative risk assessments are subjective and can be influenced by an assessor’s perception and bias, they are important for understanding the severity and likelihood of a risk event. At the same time, while risk quantification is valuable, it depends heavily on the availability of reliable data and on the scale and maturity of the risk function. To truly understand and assess risks, organizations must use both qualitative and quantitative risk assessment methodologies.
"The deepest insights come from the widest perspectives. For true risk assessment, perform both qualitative and quantitative risk assessments to gain real visibility into the overall organizational and cyber risk posture. You may have heard it called a 360-degree view of risk."
- Patricia McParland, Senior Director, Product Marketing, MetricStream
MetricStream enables organizations to effectively plan, schedule, manage, and perform risk and control assessments. It equips risk professionals to evaluate inherent and residual risks both quantitatively and qualitatively using configurable assessment methodologies.
With the Danube release, MetricStream brought advanced risk quantification capabilities to the MetricStream Enterprise Risk Management and Operational Risk Management products. Risk teams can now express loss exposure in monetary terms. Associating a recognized standard and a financial value with each risk makes it simpler for all stakeholders to quickly and accurately grasp its relative importance and make better-informed decisions. Users can build custom models, use various factors and variables, and capture values for factors (e.g., threat event frequency) represented in a simple parent-child hierarchical format. A range of inputs (e.g., Min, Max, Most Likely, and confidence level) is already available to improve the accuracy of quantification. Earlier, MetricStream had rolled out risk quantification capabilities in the IT & Cyber Risk Management product.