The paper discusses some of the limitations and considerations in using risk matrices. Because it draws heavily on worked examples and figures, I can only give a basic description of a few points and suggest you read the source (which explains the discontinuity between paragraphs in my summary, since I've jumped over a lot of supporting arguments).
It’s said that some organisations use one matrix for business risks and another for OHS, with mismatches in their allocation of descriptors for likelihood and consequence values introducing confusion. There’s no universally agreed definition of risk in OHS, and this further increases the difficulty of communicating the outcomes of risk assessments.
Risk assessment (RA) is said to be highly subjective, with individuals [and groups] being prone to systematically misperceive risk. Additionally, it’s stated that despite the widespread use and promotion of risk matrices (RMs), there “is limited scientific study to show if risk matrices improve risk making decisions” (p10).
RMs typically consist of an array of cells, presented as squares or rectangles, arranged in rows and columns that represent risk categories or levels. Regarding the design, they state that RMs with too few categories may suffer from range compression, where “risks with significant variation in likelihood and or consequence might become grouped into the same category” (p11).
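A minimal sketch of range compression, using hypothetical three-level bins of my own (not from the paper): two risks whose expected losses differ by roughly a thousandfold land in the same cell.

```python
# Hypothetical coarse bins for a small 3x3 matrix (illustrative values only,
# not taken from the paper).
def likelihood_bin(p):       # annual probability -> band 1..3
    return 1 if p < 0.01 else 2 if p < 0.5 else 3

def consequence_bin(cost):   # cost in dollars -> band 1..3
    return 1 if cost < 1_000 else 2 if cost < 100_000 else 3

risk_a = (0.02, 2_000)       # expected loss:     40 $/yr
risk_b = (0.45, 90_000)      # expected loss: 40,500 $/yr

cell_a = (likelihood_bin(risk_a[0]), consequence_bin(risk_a[1]))
cell_b = (likelihood_bin(risk_b[0]), consequence_bin(risk_b[1]))
print(cell_a == cell_b)  # True: ~1000x apart in risk, yet the same cell
```

With only three bands per axis, the bins have to span wide ranges, and everything inside a cell becomes indistinguishable.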
Regarding likelihood and consequence values in semi-quantitative and quantitative RMs, it’s said that values derived from injury statistics, epidemiological studies or other historical data may “be problematic as incident rates vary over time and data collection may be biased” and “The number of incidents and injuries within organisations is usually too low to provide a basis for quantification of risk” (p11).
In one example they note that if the numerical values of both likelihood and consequence are known then the quantitative measure of risk is also known and therefore “a Risk Matrix is not required to rank hazards as this will be self evident” (p11).
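To make that point concrete, here is a minimal sketch of my own (invented hazards and dollar figures, not the paper's example): once likelihood and consequence are both numeric, risk is simply their product and the hazards rank themselves.

```python
# Hypothetical hazards (illustrative values only): annual likelihood and
# cost of the consequence in dollars.
hazards = {
    "forklift collision": (0.05, 50_000),
    "chemical splash": (0.10, 10_000),
    "slip on wet floor": (0.30, 2_000),
}

# Quantitative risk = likelihood x consequence; sorting on that product
# ranks the hazards directly, so a matrix adds nothing here.
ranked = sorted(hazards.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, cost) in ranked:
    print(f"{name}: {p * cost:,.0f} $/yr")  # 2,500 then 1,000 then 600 $/yr
```

The ranking falls straight out of the arithmetic, which is the authors' point: with real numbers in hand, the matrix is redundant.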
For consequence values in quantitative matrices, it’s said these are often represented by ranges because they rely on conditional factors; however, the lack of point values is said to be a weakness in RMs. At the same time, establishing point values through accurate projections of likelihood and consequence is said to be impractical in most cases and beyond the expense, time and resources most organisations can commit.
For the use and interpretation of RMs, one challenge is that there can be a nonlinearity of points of equal risk in matrices (which the authors highlight by applying different but equal risk curves overlaid on a 5×5 matrix). They say that despite the risk curves representing equal risk, they don’t align with the cells or their boundaries and thus “render the plot of the likelihood and consequence estimations ambiguous” (p11).
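A rough sketch of that ambiguity, using a toy 5×5 matrix of my own (the rating layout is invented, not the authors' figure): points lying on a single iso-risk curve, where likelihood × consequence is constant, fall into cells with different qualitative ratings.

```python
import math

# Hypothetical 5x5 rating grid (rows = likelihood 1..5, cols = consequence
# 1..5); the layout is invented for illustration, not taken from the paper.
RATING = [
    ["L", "L", "L", "M", "M"],
    ["L", "L", "M", "M", "H"],
    ["L", "M", "M", "H", "H"],
    ["M", "M", "H", "H", "E"],
    ["M", "H", "H", "E", "E"],
]

def rate(likelihood, consequence):
    """Map continuous (0, 5] estimates to the cell they fall in."""
    return RATING[math.ceil(likelihood) - 1][math.ceil(consequence) - 1]

# Four points on the same iso-risk curve: likelihood x consequence = 6.
curve = [(1.2, 5.0), (2.0, 3.0), (4.0, 1.5), (5.0, 1.2)]
ratings = [rate(p, c) for p, c in curve]
print(ratings)  # a mix of "M" and "H" despite equal quantitative risk
```

The curve cuts across cell boundaries, so identical quantitative risks come out with different qualitative ratings depending on where along the curve they happen to be plotted.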
Because few people are aware of these structural factors in matrices, the authors say that users may inadvertently over- or under-estimate risk relative to the anticipated category.
Designers of RMs are said not to space risk levels evenly on their matrices, and “values are decided by placement on the matrix rather than being mathematically derived” (p12). How the low, medium and high categories are distributed on a matrix “has a dramatic effect on where the levels lie on a matrix” (p12).
As a practical measure it’s said that rounding up to the highest value seems reasonable (e.g. if a cell contains some ‘high’ area then categorise the assessment as high), but this rounding up is less useful for the lowest row and column, which still contain some ‘low’ data points. This can lead to overestimating risks for very low likelihood, high consequence events. They give the example of a meteorite strike, which has catastrophic consequence but negligible likelihood, yet many RMs “will indicate something greater than low risk and thereby prescribe some preventive action” (p13).
They then state that the ability to rank risks in order to prioritise corrective actions is a purported purpose of RMs, but even this is problematic. They give an example of a ‘medium’ assessed risk having a higher numerical risk value than a ‘high’ assessed risk, which may suggest that the medium risk should be prioritised first.
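A sketch of how that inversion can happen (hypothetical cells and dollar figures of my own, not the paper's worked example): because each category is a range, a risk sitting at the top of a ‘medium’ cell’s ranges can carry a larger expected loss than one sitting at the bottom of a ‘high’ cell’s ranges.

```python
# Hypothetical cell ranges (illustrative only): the underlying numbers
# inside a "high" cell can be smaller than those inside a "medium" cell.

# Risk A: rated HIGH (likelihood band 0.01-0.1/yr, consequence band $1M-$10M),
# but sitting at the bottom of both ranges.
high_risk = 0.012 * 1_100_000     # ~13,200 $/yr expected loss

# Risk B: rated MEDIUM (likelihood band 0.1-1/yr, consequence band $10k-$100k),
# sitting at the top of both ranges.
medium_risk = 0.9 * 90_000        # ~81,000 $/yr expected loss

print(medium_risk > high_risk)  # True: the "medium" risk is numerically larger
```

Prioritising by matrix rating alone would put risk A first even though risk B carries roughly six times the expected loss.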
Based on other research, it’s argued that high risk categories should have greater values than those in the low categories and that “small increases in likelihood or severity should not cause a jump in category from Low to High without going through an intermediate category” (p14). They note that equal quantitative risks should have the same qualitative risk rating. While this is said to be impossible to achieve for all risk values, it may be possible to align low and high categories while sacrificing a bit of consistency in the intermediate categories.
In another example they cite, one matrix is shown to be “unable to correctly rank two risks over 90% of the time. This does not promote accurate resource allocation”, and some 5×5 matrices may not match well with observed reality.
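The cited figure belongs to that specific matrix, but the general effect is easy to reproduce with a toy simulation (my own construction, reusing an invented 5×5 rating layout): draw random pairs of risks and count how often the matrix either ties them or orders them against their quantitative values.

```python
import math
import random

# Invented 5x5 rating layout for illustration (not the matrix from the paper).
RATING = [
    ["L", "L", "L", "M", "M"],
    ["L", "L", "M", "M", "H"],
    ["L", "M", "M", "H", "H"],
    ["M", "M", "H", "H", "E"],
    ["M", "H", "H", "E", "E"],
]
ORDER = {"L": 0, "M": 1, "H": 2, "E": 3}

def rating_rank(likelihood, consequence):
    """Qualitative rank of the cell a continuous (0, 5] estimate falls in."""
    return ORDER[RATING[math.ceil(likelihood) - 1][math.ceil(consequence) - 1]]

random.seed(1)
trials, failures = 20_000, 0
for _ in range(trials):
    a = (random.uniform(0.01, 5), random.uniform(0.01, 5))
    b = (random.uniform(0.01, 5), random.uniform(0.01, 5))
    qa, qb = a[0] * a[1], b[0] * b[1]          # true quantitative risks
    ra, rb = rating_rank(*a), rating_rank(*b)  # matrix ratings
    # Failure: the matrix ties the pair, or orders it against the numbers.
    if ra == rb or (qa > qb) != (ra > rb):
        failures += 1

print(f"misranked or tied: {failures / trials:.0%}")
```

Even this arbitrary layout fails to discriminate a substantial fraction of random pairs; the exact percentage depends entirely on the layout chosen, which is part of the authors' point about unprincipled design.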
They then talk about the subjective challenges in RMs. I’ll skip most of this, but it involves things like:
- people overestimating small probabilities and underestimating large ones
- a tendency for people to “move selections away from the lowest and highest measures of likelihood and populate cells towards the middle of the likelihood scale” (p14)
- exaggeration of loss particularly for people with a personal interest in the outcome; biasing selection of risk cells towards higher consequences
- promoting “reverse engineering”, where likelihood or consequences are modified to achieve a desired risk score
In concluding, they note that RMs have little scientific analysis demonstrating their value in improving risk-related outcomes. Lack of RM design principles may cause confusion through variations in the number of rows & columns, the direction of risk scaling and the values themselves.
They argue that “A shift of emphasis from the risk assessment stage to the risk control stage of a hazard management process may lead to better and more timely decision making and better use of resources” (p15).
Authors: Alexander Pickering and Stephen P. Cowley, 2010, Journal of Health & Safety Research & Practice.
Link to the LinkedIn article: https://www.linkedin.com/pulse/risk-matrices-implied-accuracy-false-assumptions-ben-hutchinson