This brief conference paper explores the unintended consequences of a safety performance metric that was apparently common in offshore oil & gas: a semi-quantitative metric used to classify and track the strength of corrective actions.
[Note: I’ve never heard of this metric, so did my best to describe it based on what I thought the paper was indicating. However, what’s interesting about this paper isn’t the specific metric but the consequences and gaming that can accompany metrics.]
The metric involved assigning points to each corrective action arising from accident investigations across three categories: 1) direct action, 2) supervision, and 3) management. The strength of the corrective action in each category was assigned, and a weighted average was used to calculate the overall strength. Two adjustment factors were then applied, which “checked if reporting managers were sensitive to near misses and if they actually implemented the corrective actions” (p2).
A final index score was calculated after the adjustment factors. In theory, the index was intended to generate a “number that was directly related to how well managers identified and solved operational problems” (p2) – with higher strength actions being more directly linked to higher prevention efforts.
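The paper doesn’t publish the actual formula, but the description above (per-category strengths, a weighted average, then two adjustment factors) can be sketched roughly as follows. All category weights, value ranges, and factor names here are my own illustrative assumptions, not the paper’s:

```python
# Illustrative sketch only: the paper does not give the real formula.
# All weights and factor names below are hypothetical assumptions.

CATEGORY_WEIGHTS = {
    "direct_action": 1.0,  # assumed: weakest category
    "supervision": 2.0,    # assumed: intermediate
    "management": 3.0,     # assumed: strongest ("lasting" rule changes)
}

def corrective_action_index(strengths, near_miss_factor, implementation_factor):
    """Weighted average of per-category strength scores, scaled by two
    adjustment factors (near-miss sensitivity and actual implementation)."""
    total_weight = sum(CATEGORY_WEIGHTS.values())
    weighted = sum(CATEGORY_WEIGHTS[c] * s for c, s in strengths.items())
    base = weighted / total_weight
    return base * near_miss_factor * implementation_factor

# Example: a mix of action strengths, fully implemented,
# with good near-miss reporting (factors of 1.0 = no penalty).
index = corrective_action_index(
    {"direct_action": 2, "supervision": 3, "management": 5},
    near_miss_factor=1.0,
    implementation_factor=1.0,
)
```

Under a scheme like this, management-category actions dominate the score, which matches the paper’s observation that managers were pushed toward management-level corrective actions (i.e., rule changes) to hit targets.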
Because long-term prevention was seen to require “corrective actions lasting forever” rather than fix-and-forget approaches, including management-focused actions among all of the corrective actions was seen as desirable.
However, after the trial of this new metric “it became clear that when this metric was used to define performance goals it had unanticipated consequences that cumulatively and insidiously caused more damage than the accidents it was intended to prevent” (p1).
Problems with the use of the metric:
1. Problems were said to begin when upper management set goals or performance standards based on the index. A corrective action that lasts into the future, and which relies on changing the behaviours of people not yet employed, is really a change in the rules by which a company governs itself. One company set a standard requiring managers to “produce an index number that could only be attained by producing rule changes for an unrealistically high percentage of all their recordable accidents” (p2).
The requirement to create a management corrective action, which translated into a new rule, for every recordable accident led to an untenable redundancy in rules that “increasingly became micromanagement in nature, and because rules must be enforced to actually be rules, the requirement amounted to forcefeeding the bureaucracy that ultimately suffocates any organization” (p2).
2. Problem 1 was compounded by another principle requiring managers to report five minor potential-harm events for every higher-potential event. The author argues that if the same logic were applied to fatal traffic accidents (~30k per year in the US), then law changes would also be required for 150k minor traffic events – a total of 180k “new laws, complete with enforcement provisions, every year in the USA alone, and all those accidents are already covered by existing [laws]” (p2).
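The author’s back-of-envelope arithmetic can be reproduced directly (a minimal sketch using only the figures cited in the text above):

```python
# Reproduce the author's traffic-accident analogy (figures from the text).
fatal_accidents = 30_000   # ~annual US fatal traffic accidents, per the paper
minor_per_serious = 5      # required ratio: 5 minor reports per serious event

minor_events = fatal_accidents * minor_per_serious    # 150,000 minor events
total_rule_changes = fatal_accidents + minor_events   # 180,000 "new laws"/year

print(total_rule_changes)
```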
3. Gaming the numbers. This is amplified when rewards are attached to achieving a certain number, leading to stretched definitions or slanted reports to hit a target, “sometimes at the expense of accurately identifying and fixing underlying problems” (p5). The author gives an example of bidding/tendering being influenced by the TRIFR scores of contractors, and how this can lead to metrics becoming “a game with high stakes and big rewards for contractors who develop creative accounting tactics to achieve whatever numbers are needed” (p5).
Likewise, it was assumed that an index metric based on corrective action effectiveness would eliminate this gamesmanship; however, managers quickly learnt how to report actions “that looked good, sounded good, and got high scores without making a significant difference in probability of accidents recurring” (p5).
Furthermore, “strong” corrective actions also began to be reported for weaker problems to reach goals.
4. As above, issues that were easier to report and assign corrective actions to became a focus of management – including reporting incidents that didn’t need to be reported, or assigning corrective actions without real positive benefit.
Modifying a JSA (job safety analysis) was found to quickly become the most popular corrective action in the management category. Predictably, “the requirement for producing an unrealistic number of new rules caused the JSA process to lose its effectiveness by becoming lengthy documents of micromanagement inviting scorn, derision, shortcuts, and disuse among the work force” (p5).
The author aptly states that trying to meet the performance targets of the metric “made managers almost desperate to find excuses to modify JSAs in order to meet their corrective action targets” (p5).
An example the author provided was an accident report submitted by a manager for a wasp nest that was discovered on a wellhead – which included corrective actions, assignments and accountabilities. The author doesn’t hold back in saying “This is bureaucratic nonsense, misusing accident reporting, and micromanagement that reduces overall efficiency and eventually causes more harm than a rare potential for wasp stings” (p5).
The author then provides some conclusions and recommendations on metrics, not covered here.
Author: Carl D. Veley. Presented at the SPE European HSE Conference and Exhibition, London, United Kingdom, 16–18 April 2013. Paper SPE 164961.
Study link: https://doi.org/10.2118/164961-MS
Link to the LinkedIn article: https://www.linkedin.com/pulse/unintended-consequences-promising-safety-management-ben-hutchinson