- [Instructor] When we're able to gather quantitative data about our assets and risks, we can use that information to make data-informed decisions about risk. The process of using numeric data to assist in risk decisions is known as quantitative risk assessment. Security professionals performing quantitative risk assessment do so for a single risk-asset pairing. For example, they might conduct an assessment based upon the risk of flooding to a data center facility. As they conduct this assessment, they must first determine the values for several variables.
The first of these variables is the asset value, or AV. This is quite simply the estimated value, in dollars, of the asset. Risk assessors determining an asset's value have several options at their disposal. The original cost technique simply looks at invoices from an asset purchase and uses the purchase prices to determine the asset value. This is the easiest technique to perform because it simply requires looking at invoices. However, it is often criticized because the cost to actually replace an asset may be significantly higher or lower if asset prices have changed since purchase.
The depreciated cost technique is an accounting favorite. It begins with the original cost and then reduces the value of an asset over time as it ages. The depreciation technique uses an estimate of the asset's useful life and then gradually decreases the asset value until it reaches zero at the end of its projected lifespan. The replacement cost technique is the most popular among risk managers because it produces results that most closely approximate the actual costs that an organization will incur if a risk materializes.
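The straight-line depreciation just described can be sketched in a few lines of code. The dollar amounts and lifespan below are illustrative assumptions, not figures from this video:

```python
# Straight-line depreciation sketch: the asset's value declines linearly from
# its original cost to zero at the end of its projected useful life.
def depreciated_value(original_cost, useful_life_years, age_years):
    """Return the straight-line depreciated value of an asset."""
    if age_years >= useful_life_years:
        return 0.0
    return original_cost * (1 - age_years / useful_life_years)

# Hypothetical example: a $10,000 server with a 5-year useful life, now 2 years old.
print(depreciated_value(10_000, 5, 2))  # 6000.0
```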
The replacement cost technique goes out and looks at current supplier prices to determine the actual cost of replacing an asset in the current market and then uses that cost as the asset's value. We might use this technique to value a data center at $20 million because that is the amount of money that would be required to rebuild it after a disaster. The second variable that we must consider is the exposure factor, or EF. The exposure factor is based upon the specific risk considered in the analysis, and it estimates the percentage of that asset that will be damaged if a risk materializes.
For example, if we expect a flood might damage 50% of our data center, we'd set the exposure factor for that flood to 50%. The next quantitative risk assessment variable is the single-loss expectancy, or SLE. This is the actual damage we expect to occur if a risk materializes once. We compute the SLE by multiplying the asset value by the exposure factor. So, if we have a data center valued at $20 million and expect that a flood would cause 50% damage to the facility, we compute our SLE by multiplying these two numbers together and finding that a single flood would cost $10 million in damage.
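The SLE formula above can be sketched directly in code. The dollar figures come from the flood example in this video; the function name is my own:

```python
# SLE = asset value (AV) x exposure factor (EF)
def single_loss_expectancy(asset_value, exposure_factor):
    """Return the expected dollar loss from a single occurrence of a risk."""
    return asset_value * exposure_factor

# $20 million data center, 50% expected flood damage:
print(single_loss_expectancy(20_000_000, 0.50))  # 10000000.0
```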
That's the impact of the risk. The SLE only gives us an idea of impact. As you know, risk assessment must also consider the likelihood of a risk. That's where the annualized rate of occurrence, or ARO, comes into play. The ARO is the number of times each year that we expect a risk to occur. In the case of a flood, we might consult FEMA flood maps and determine that there is a 1% annual risk of flood in the vicinity of our data center. That's the same as saying that we expect 0.01 floods to occur each year, so our ARO is 0.01.
Finally, a risk analysis should incorporate both of these likelihood and impact values. We do this by computing the annualized loss expectancy, or ALE. This is the amount of money we expect to lose each year from that risk, and it's a good measure of the overall risk to the organization. We compute the ALE by multiplying the single-loss expectancy and the annualized rate of occurrence together. In the case of flood risk to our data center, the SLE was $10 million and the ARO was 0.01.
Multiplying these together, we get an annualized loss expectancy of $100,000. This means that we should expect to lose $100,000 each year from the risk of flooding to our data center. It is important to remember that in reality, this cost won't occur each year. What we'll really have happen is $10 million in damage each time a flood occurs. But since we expect that to happen only once every 100 years, it averages out to $100,000 a year.
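The full chain from the example can be sketched as follows, using the same figures as the transcript; the function name is illustrative:

```python
# ALE = single-loss expectancy (SLE) x annualized rate of occurrence (ARO)
def annualized_loss_expectancy(sle, aro):
    """Return the expected dollar loss per year from a risk."""
    return sle * aro

sle = 20_000_000 * 0.50   # $10 million in damage per flood
aro = 0.01                # one flood expected every 100 years
print(annualized_loss_expectancy(sle, aro))  # 100000.0
```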
That's how you perform a quantitative risk analysis. You should definitely memorize these formulas and be prepared to compute them on the exam if given a scenario. Quantitative techniques also help us assess our ability to restore IT services and components quickly in the event of a failure. We do this by looking at several time values. The values we use depend upon whether an asset is repairable or non-repairable, that is, whether we can fix it or whether it needs to be replaced.
For non-repairable assets, those that we cannot fix, our most important metric is the mean time to failure, or MTTF. This is the amount of time that we expect will pass before an asset fails. When using mean values, it's important to remember that these are averages. Some assets of this type will fail sooner than the MTTF, while others will last longer. Mean values are useful for planning purposes, but you shouldn't completely depend upon them.
If an asset is repairable, we look at two different values. The first is the mean time between failures, or MTBF. This is quite similar to the MTTF. It's simply the average amount of time that passes between failures of a repairable asset. The second value we track for repairable assets is the mean time to repair, or MTTR. This is the amount of time that an asset will be out of service for repair each time that it fails.
When we look at the MTBF and MTTR values together, we can get a good idea of the expected downtime for an IT service or component.
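One common way to combine these two values is the standard reliability formula availability = MTBF / (MTBF + MTTR). That formula and the sample numbers below are not stated in this video, so treat this as an illustrative sketch:

```python
# Availability and expected downtime from MTBF and MTTR (standard reliability
# formula, applied here with hypothetical numbers).
def availability(mtbf_hours, mttr_hours):
    """Fraction of time the asset is expected to be in service."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def expected_annual_downtime_hours(mtbf_hours, mttr_hours):
    """Expected hours of downtime per year (8,760 hours in a year)."""
    return (1 - availability(mtbf_hours, mttr_hours)) * 8760

# Hypothetical component: fails on average every 2,000 hours, takes 4 hours to repair.
print(round(availability(2000, 4), 4))                   # 0.998
print(round(expected_annual_downtime_hours(2000, 4), 1)) # 17.5
```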