Major Accident Indicators in High Risk Industries – A Literature Review

This 2016 paper reviewed the research and other material published up to that point on the following questions:

·        Q1: What does the literature state about relationships between indicators and major accident risk?

·        Q2: What does the literature state about the effects of the use of indicators?

·        Q3: What does the literature state about conditions in the surroundings or the context having an impact on the reliability of indicators?

174 documents were reviewed.

Results

Key findings from this review were that:

·        there is a lack of documented relationships between indicators and major accident risk,

·        a lack of demonstrated effects of the use of indicators, and

·        a lack of evidence about the factors that affect the reliability of indicators

Further, none of the 174 studies documented the effects of using indicators or their reliability as central themes within that research. As they argue, “This is somewhat surprising since demonstrating that indicators have an effect, and ensuring reliable indicator measurements can be seen as premises for using indicators” (p2).

The topic of validity as per Q1 was discussed as a central topic in several documents, "but without being substantiated through empirical studies".

Key proposed relationships between indicators and major accident risk are then discussed.

Q1: Relationships between indicators and major accident risk

Key findings here were:

·        An earlier conclusion by the UK HSE in 2007 appeared just as relevant in 2015: "Most performance indicators seem to have been developed in the absence of any underlying rationale or holistic model. There are some suggestions that the use of performance indicators leads to improvements in system safety, but no concrete evidence of this" (p5)

·        Only 3 papers addressed Q1 as a main question

·        One paper noted that “process safety indicators seem to provide information of safety of a process or a company, but that confirmation, based upon empirical research is needed”

The paper includes a table summarising key challenges drawn from the literature (image not reproduced here; see the original paper).

Q1B: Relationships through correlations

·        Because major accidents are rare, risk can’t adequately be measured in terms of the number of actual events. The relationship between an indicator and risk “must therefore be established through correlations between the indicator and intermediate measures like the number of mishaps or precursors to major accidents” (p6)

·        One study found correlations between the frequency of hydrocarbon leaks and factors of safety climate – based on data from 28k questionnaires from 2002 to 2008

·        Another concluded that correlations between safety performance and safety climate "as a set of organizational factors tend to be moderate or weak" (p6)

·        Many other studies have found no significant correlations

·        Another used safety culture survey data over a two-year period, finding potential safety factors or leading indicators. The following factors had significant correlations: hiring quality personnel, safety orientation, promotion of safety, formal learning systems, vessel responsibility, communication, problem identification, prioritization of safety, vessel feedback, empowerment, individual responsibility, anonymous reporting, and individual feedback

·        Another found that out of psychosocial risk indicators, installation age, weight or number of leakage sources, only psychosocial risk indicators were significantly correlated to hydrocarbon leaks

·        They discuss the debate over occupational accidents versus major accidents. Andrew Hopkins argued in 2009 that these accident types are so fundamentally different that occupational accidents can't serve as indicators of major accident risk, and that "the distinction between personal safety and process safety is really a distinction between different types of hazards" (p6). Examples raised include the 2005 Texas City and 2010 Macondo disasters

·        In contrast, Bellamy’s 2015 study argues the opposite: “occupational accidents and major accidents can be related to the same hazards and, given that they are, correlations between less serious accidents and process safety accidents may be found” (p6)

Q1C: Logically explained relationships (versus assumptions)

·        If major accident risk is modelled using a total risk analysis of potential incidents, then indicators that are logically connected to that risk can be estimated based on the effect of a change in a risk influencing factor. These are called risk indicators – examples include non-ignited hydrocarbon leaks and well kicks

·        Some authors argue that a challenge of many leading indicators is they’re associated with “organizational and managerial relations that are difficult to quantify and where the relation to major accident risk is less obvious” (p7)

·        Nevertheless, some use of risk models incorporating human and organisational factors have shown promise

·        They discuss how determining the strength of relationships between indicators and risk factors on the basis of risk analysis "is not the same as claiming that the indicator can predict a major accident" (p7)

·        Indeed, one author argued that the above is a common mistake regarding the use of leading indicators, stating that “There is no way to identify a metric that can reliably predict a particular future outcome; don’t even try” (p7)

·        Instead, leading indicators can help identify and avoid accident-prone situations

Q1D: Relationships through retrospective analysis of accidents

·        Another means is analysing the indicators by using existing accident or incident reports

·        One study drew on resilience engineering and related concepts to develop resilience-based early warning indicators, based on the Macondo event. It concluded that mapping predefined contributing factors against direct contributing factors shows that one can gain the necessary early warnings

·        Another author highlights that responding adequately to indicator warnings is made difficult because major accidents have multiple precursors and cues; knowing how to distinguish important signals from noise is the real challenge

·        Another study which evaluated accident reports revealed that the indicators in use at the time wouldn’t have helped predict the underlying causes of hydrocarbon leaks which occurred

·        They argue that in developing safety performance indicators, one must “find a balance between focusing on direct, reactive and result based indicators having enough data to be meaningful and concentrating on indirect, proactive and leading indicators with less direct safety relevance, but which can provide early warnings” (p7)

·        Here, focusing too strongly on validity “may reduce the chances of obtaining early warnings” (p7)

Q2: Effects of use of indicators (RQ2)

·        When discussing this question, they argue that it’s useful to distinguish between effect on result, effect on activities and processes, and unwanted and unintended effects

·        The effect of a change in an indicator value on the risk of major accidents is argued by some to be determinable based on risk influencing factors in a risk model. In these, the risk analysis is carried out on the basis of potential accidents and therefore “the effect is a demonstrated potential effect rather than an experienced effect” (p8)

·        The above has been used to measure risk via a “risk barometer”

·        It’s argued that there may always be empirical issues of validity and predictability of major accident risk indicators because of the huge rarity of the events

·        For effects on activities and processes, few papers provided evidence of efficacy. Instead, most authors were found to have discussed assumed effects and not demonstrable effects. Examples included: use of indicators supports assessments and priorities of risk (like maintaining a high consciousness of major risks); showing limits for acceptable operations; serving as a foundation for planning of inspections and revisions; increasing the understanding of safe system operation and hindering instability of systems

·        Unwanted and unintended effects of indicators include things like a drive of “optimizing the indicators and not the phenomena underlying them” (p8); e.g. managing the indicators rather than safety

·        One study used an example of how reducing the backlog in equipment maintenance could instead lead to quicker maintenance with reduced quality. Here, “the actual level of safety may be reduced in spite of the indicator value improving” (p8, emphasis added)

Q3: Conditions in the surroundings or context having an impact on the reliability of indicators

·        Issues here include different raters disagreeing in their assessments, or raters applying different standards at different times to gauge indicator performance – both of which affect the reliability of measures

Q3B: Event occurrence

·        Some papers argued that the occurrence of events affects the reliability of event-based indicators. The authors of this review argue that this statement isn't supported by concrete evidence

·        One study suggested that major hazard precursors “typically occur with an average frequency of one per installation per year” and thus “such a frequency is insufficient as a basis for incident based indicators on an installation level, and argues that a certain amount of data is required to ensure reliable predictions that can contribute to maintaining necessary focus” (p9)

·        Others argue that a major challenge is the insufficient data available to support reliable indicators – events which are counted must occur frequently enough to allow for statistical comparison and analyses

·        Variability in technical indicators occurs for multiple reasons, and this is amplified for indicators based on the number of events during a set time period. A change from one event in one period to two events in the next "does not necessarily imply a change in the actual condition; rather, such changes are often random" (p9)

·        If the reporting threshold – e.g. the time limits or the definition of what counts as a leak – is set high, few reports will occur. If definitions are lowered, more instances will be recorded, but with lower reliability. One author argued that indicator limits should target events of "medium to high severity"

·        Other issues relate to differences in reporting taxonomies and to individuals' perceptions of what the definitions make reportable

·        A further challenge is that indicators are used not only within a company or installation but also to compare across the industry; for such comparisons to be valid, indicators must be defined identically at each location

·        Other challenges include different contexts, cultures, different ways of working, differences in reporting etc. Therefore, “such comparison should only be done when it really makes sense, and that results related to indicators otherwise should stay on a local level” (pp9-10)
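The point about random year-to-year variation in low-frequency event counts (Q3B) can be illustrated with a small simulation. This is my own sketch, not a calculation from the paper: it assumes event occurrence follows a Poisson process at the cited rate of roughly one precursor event per installation per year, and asks how often two consecutive years would show different counts even though nothing has actually changed.

```python
# Assumed Poisson model (illustrative only): with a constant true rate
# of ~1 precursor event per installation per year, yearly counts still
# fluctuate, so a change from one event to two often reflects chance.
import math
import random

random.seed(0)

def poisson_sample(lam: float) -> int:
    """Draw a Poisson-distributed count (Knuth's algorithm)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

trials = 100_000
# Count simulated year-pairs where the indicator value changes
changed = sum(
    1 for _ in range(trials)
    if poisson_sample(1.0) != poisson_sample(1.0)
)
print(f"Fraction of year pairs with a changed count: {changed / trials:.2f}")
# Typically around 0.69 – the count differs in roughly 2 of 3 pairs
```

Even with a perfectly constant underlying rate, consecutive years show different counts in roughly two out of three comparisons, which supports the review's caution against reading a change from one event to two as a change in the actual condition.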

In concluding, they make the following remarks: (p11)

·        For the reliability of indicators, it is important to have well-defined reporting criteria,

·        defined lower limit values, and

·        a clear understanding of the indicator's purpose

·        Indicators should be communicated in a way that enables personnel and managers to take necessary action

·        This requires them to see the indicator as relevant, as something they can have an effect on, and as aligned with their management and decision timescales

Authors: Kilskar, S. S., Øien, K., Tinmannsvik, R. K., Heggland, J. E., Hinderaker, R. H., & Wiig, S. (2016, April). Paper presented at the SPE International Conference and Exhibition on Health, Safety, Security, Environment, and Social Responsibility. OnePetro.

Study link: https://doi.org/10.2118/179223-MS

Link to the LinkedIn article: https://www.linkedin.com/pulse/major-accident-indicators-high-risk-industries-ben-hutchinson
