Updated: Jun 9, 2020
Risk scores are commonly used to support risk-based decisions. They are usually derived from a semi-quantitative analysis of the underlying risk factors to produce a single value such as low, medium, or high.
This value is then used to rank options or to trigger additional actions, and as such can be extremely helpful in supporting decision making. However, if not implemented correctly, risk scores can introduce vulnerabilities that expose companies to unnecessary and avoidable risk.
In a recent discussion on LinkedIn, a person wrote about a situation where risk scores were used. With their permission, I have included an excerpt from that discussion:
"A firm with an ISO 27001 certification had both a gap with risk evaluation and risk estimation unrealized by the external auditor. First, its vendor risk management process held that firms with services that cost more need more oversight than firms with services that cost less. This is fine until one looked at why a service might cost less. In this case, the service requests for vulnerability patching a corporate firewall were costing less because they had been skipped for three years. Falsely, the system reported the firewall service was lower risk because it cost less -- in this case too little for the firm’s best interests. Next, risk computations themselves were done in a manner that sounded good but was mathematically flawed. By adding a score for Confidentiality to Integrity to Availability it was possible to rank the security needs of a service, product, software or vendor. But by adding rather than multiplying it became possible for 70% or more of all risks to all have the score of medium. Summing risk indicators presumes statistical independence that was not truly present. The result is a bell curve with 70% of the answers for any combination of inputs resulting in a medium risk score."
This story illustrates potential problems with the improper use of risk assessments, scores, and rankings. Here are five key problems:
1. Outcomes were not validated
The resultant scores were not validated to ensure that they would produce appropriate outcomes. In addition, the way the criteria of confidentiality, integrity, and availability were combined in the calculation was not implemented correctly and, as the excerpt notes, may not even be statistically valid. The decision to create a single-value score (most likely to facilitate the decision-making process) contributed to unintended outcomes.
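The arithmetic flaw described in the excerpt is easy to demonstrate. The short Python sketch below assumes hypothetical 1-5 rating scales for confidentiality, integrity, and availability, and evenly spaced band cut-offs on the sum; it enumerates every possible combination and shows how summing concentrates the results in the middle band:

```python
from itertools import product

SCALE = range(1, 6)  # hypothetical 1-5 rating for each of C, I, A

def band(total):
    """Map a summed C+I+A score (3..15) to evenly spaced bands."""
    if total <= 6:
        return "low"
    if total <= 11:
        return "medium"
    return "high"

# Enumerate all 5**3 = 125 possible (C, I, A) combinations.
counts = {"low": 0, "medium": 0, "high": 0}
for c, i, a in product(SCALE, repeat=3):
    counts[band(c + i + a)] += 1

total = sum(counts.values())
for name, n in counts.items():
    print(f"{name:>6}: {n:3d} of {total} ({n / total:.0%})")
```

With these assumed cut-offs, 85 of the 125 combinations (68%) land in "medium", close to the "70% or more" cited in the excerpt: summing independent factors concentrates outcomes toward the middle, almost regardless of the individual inputs.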
2. Risk scores were not calibrated
Risk scores were not calibrated and aligned with the risk attitude (appetite and tolerance) of the organization.
There are two aspects to this: (1) the scores themselves need to generate the right distribution of outcomes for the given inputs, and (2) the use of the score must be consistent with the organization's risk attitude. For example, choosing a high-risk option, even if it were free, would not be acceptable if the organization's risk tolerance is low.
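One way to work on the first aspect is to calibrate the band thresholds against the observed distribution of raw scores, so that each band covers a share of items the organization has chosen deliberately. A minimal sketch, assuming hypothetical historical scores and illustrative target shares of 50% low, 35% medium, 15% high:

```python
# Hypothetical raw scores from past assessments (assumed data).
history = [3, 4, 4, 5, 6, 6, 7, 7, 8, 8, 9, 10, 11, 12, 14]

def calibrated_cutoffs(scores, low_share=0.50, med_share=0.35):
    """Derive low/medium cut-offs so each band covers its target share."""
    ranked = sorted(scores)
    low_cut = ranked[int(len(ranked) * low_share) - 1]
    med_cut = ranked[int(len(ranked) * (low_share + med_share)) - 1]
    return low_cut, med_cut

def band(score, cutoffs):
    low_cut, med_cut = cutoffs
    if score <= low_cut:
        return "low"
    if score <= med_cut:
        return "medium"
    return "high"

cutoffs = calibrated_cutoffs(history)
print(cutoffs)  # thresholds derived from the data, not fixed a priori
print(band(9, cutoffs))
```

The point is not the particular method (a simple percentile cut here) but that the thresholds are chosen to match the distribution of outcomes the organization actually wants, and re-checked as new data arrives.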
3. Using single-variable scores produced sub-optimal results
Choosing among options using a single-variable ranking (e.g., a resultant score between 0 and 10) can often lead to a less-than-optimal selection. The primary concern is that a single value is not always sufficient to differentiate the available options. This appears in other domains as well, such as choosing an optimal portfolio of projects, investments, or process improvement initiatives.
Issues with single-variable ranking are well documented, and there are solutions to overcome them. These include real options, efficient frontiers, and multi-attribute ranking, among others. Often, simply plotting value against risk in a matrix is enough to produce a better result.
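As an illustration of the value-against-risk idea, the sketch below (with made-up option data) keeps every option that is not dominated, that is, where no other option offers at least as much value at no more risk. A single collapsed score discards exactly this information:

```python
# Made-up options: (name, value, risk). Higher value and lower risk are better.
options = [
    ("A", 8, 6),
    ("B", 7, 2),
    ("C", 5, 1),
    ("D", 6, 5),
    ("E", 9, 9),
]

def dominated(opt, pool):
    """True if some other option is at least as good on both axes
    and strictly better on at least one."""
    _, v, r = opt
    return any(
        o is not opt and o[1] >= v and o[2] <= r and (o[1] > v or o[2] < r)
        for o in pool
    )

frontier = [o[0] for o in options if not dominated(o, options)]
print(frontier)
```

Ranking by a collapsed score such as value minus risk would put B first and hide the fact that A, C, and E are also defensible trade-offs; the frontier preserves the set of choices a decision maker should actually weigh.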
4. Using risk scores in an automated process may be vulnerable to the "Automation Bias"
As risk-based thinking becomes more embedded in the organization, it is likely to also become more embedded in its decision support systems. Although not specifically stated in the above scenario, it is possible that the resultant risk score was used (or could be used) to automatically select the vendor.
The automation bias is defined as, "the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct."
Automating the selection process may result in: (1) decision makers abdicating their responsibility for the decision to a computer system, and (2) leaning too heavily on a score to tell them which decision to make.
As those who work in the safety field know, you cannot delegate safety (or decisions about it) to a computer system.
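One common safeguard is to let the system recommend but never finalize: anything scoring above the organization's tolerance is routed to a human approver rather than acted on automatically. A minimal sketch, with a hypothetical threshold and record format:

```python
RISK_TOLERANCE = 5  # assumed organizational threshold, set by policy

def route_decision(option, score):
    """Return a decision record; anything above tolerance waits for a person."""
    if score <= RISK_TOLERANCE:
        return {"option": option, "status": "auto-approved", "approver": "system"}
    return {"option": option, "status": "pending-human-review", "approver": None}

print(route_decision("vendor-A", 3))
print(route_decision("vendor-B", 8))
```

Even the auto-approval branch here is a judgment call: for safety-critical decisions, a human signature may be warranted regardless of the score.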
5. Using risk scores may not be ethical
Decision support systems use numerical values, which in some ways are no different from risk scores. However, the majority of these systems address situations of certainty, where decision analysis is effective and can be mechanized in terms of moral rules and conditions. When this is done, responsibility (and possibly accountability) is abdicated to a computer system. Doing so might be appropriate, except when decisions involve risk.
Risk-based decisions, due to their inherent uncertainty, fall into the category of ethical decisions that a company makes and cannot easily (if at all) be reduced to a set of rules. If the risk can be completely eliminated by removing the hazard, then rule-based decisions might be appropriate. However, should the hazard remain and uncertainty persist, the decision to proceed becomes an ethical choice.
Organizations should not transfer accountability for ethical decisions to an algorithm or a decision support system. Research is ongoing, and it may at some point become possible to implement ethical subroutines that can be appropriately regulated. As of now, however, these do not exist, and regulatory accountability remains a human one.
In the example above, the decision to pick a lower-cost (although higher-risk) option should be made by a person who can ensure that the decision aligns with the company's ethical standards and guidelines.