Analytical traps in accident investigations

This YouTube video from Johan Bergstrom is a great use of 7 minutes (link below). It covers three analytical traps in incident investigations.

I can’t do the video justice, so just watch it. In brief, the three traps are:

1. Counterfactual reasoning

Johan notes that counterfactual reasoning is “when the investigator is discussing a case that never happened … like a parallel universe”.

Phrases from reports like “did not initiate” or “did not have” are clear indications of the counterfactual trap. Another example is when investigators “hypothesise different scenarios”, signalled by keywords like “if”, “had”, “would likely”, “would have” and so on.

As Johan notes, this type of counterfactual statement “prioritises an analysis of what the system did not do and as a consequence it ignores an analysis of why it made sense for the system to act the way that it did when it did”.
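Since the trap has such recognisable linguistic markers, one could sketch a crude screening pass over draft report text. This is purely illustrative – the marker list is taken from the phrases quoted above, and the function name and matching approach are my own assumptions, not anything from the video:

```python
import re

# Markers drawn from the phrases quoted above ("did not", "if", "had",
# "would likely", "would have"). Word boundaries avoid matching e.g.
# "if" inside "shift". This is a heuristic, not a real NLP analysis.
MARKERS = ["did not", "would have", "would likely", "if", "had"]

def flag_counterfactuals(sentence: str) -> list[str]:
    """Return the counterfactual markers found in a sentence."""
    return [
        m for m in MARKERS
        if re.search(rf"\b{re.escape(m)}\b", sentence, re.IGNORECASE)
    ]

# Hypothetical report sentences for demonstration:
draft = [
    "The crew did not initiate the go-around.",
    "The crew continued the approach as briefed.",
]
flagged = {s: flag_counterfactuals(s) for s in draft}
```

A flagged sentence is of course not automatically wrong – the point is only to prompt the investigator to ask whether the analysis is describing what happened, or a parallel universe.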

2. Normative language

Normative language in accident analysis is when an investigator “puts his or her values into the analysis of other people’s performance” – for instance, saying that people “mismanaged” something or that their performance was “inappropriate”.

It measures human performance against “some norm or idea of what is appropriate behaviour”, and includes “speculations as to why the crew behaved in an ‘inappropriate’ manner”. These norms are often highly subjective notions.

The trap of normative language can result in an investigator sounding “more like a judge than a curious investigator”.

Worse, the norms used as a baseline are “typically defined in hindsight”.

3. Mechanistic reasoning

Mechanistic reasoning is the trap of believing that accidents are largely the result of malfunctioning components in otherwise well-functioning, reliable and safe environments.

This trap leads the analysis towards listing the apparently *functioning* components and excluding them one by one, until only the malfunctioning component/s remain. That narrowing of focus almost inevitably ends with “malfunctioning” human behaviour.

That is, human behaviour is framed as the malfunctioning component in an otherwise well-functioning sociotechnical system.

One reason this is problematic is that accidents can be “results of well-functioning components interacting in unexpected ways”.

Moreover, it gives us little insight into why it made sense for people to do what they did, given their exact “expectations, knowledge and focus of attention”.

There’s a bonus fourth trap – but I’ll leave that for you to discover in the video.

Video link: https://youtu.be/TqaFT-0cY7U