This is another study on Latent Error Detection (LED), something I posted on a while back. Individual LED (I-LED) describes a process where people spontaneously recover from errors by remembering to complete some missed step or action. E.g. suddenly wondering if you left the gas on at home when leaving for work and then going back to check.
In this paper, the authors look at I-LED in the context of naval aircraft maintenance, where people may suddenly wonder if they left a tool in the engine bay or whether an oil filler cap was replaced.
Drawing on their prior work, the authors note examples of naval air engineers who spontaneously detected errors and missed steps some time after completing a task, independently of any formal procedures.
I-LED was most effective when engaged via system cues that triggered recall within a time period of two hours (e.g. physical cues like particular tools in the workplace or posters with words and pictures). For instance, if maintenance personnel have done an oil filter change, a word or picture cue in the workshop of an oil filler cap (or even a strategically placed oil filler cap itself) may enhance the worker’s ability to remember if they replaced the cap or not.
Importantly, the authors note that despite the issues many people have now with labels of “human error”, they argue that it can “survive as a valid descriptor in systems safety but only if it is used carefully to highlight the need to analyse the causal effects of safety failures generated by the system and not by the individual” (p305). Further, latent error refers to “residual effects created when the required performance was not enacted as expected due to system-induced sociotechnical traps generated by the organisation, i.e. system failures that pass undetected and therefore lie hidden” (p305).
In this study, latent errors and recoveries were studied in two training squadrons from the Royal Navy. The impact of the I-LED methodology (which uses visual & auditory cues to enhance post-task error detection) was tested against a control group. The I-LED conditions tested included words only, pictures only, a combination of words and pictures, and a ‘Stop, Look, Listen’ (SLL) condition which (I think) combines all of these cues with a reflective review of past actions and the environment to identify errors.
I can’t describe all the ins and outs of the methodology or I-LED interventions.
Results:
This study found squadron operatives experienced 144 errors, where 46% were detected and 54% were missed. Supervisors experienced 270 errors, missing 76% and finding 24%.
Supervisors committed nearly twice as many errors as operatives and detected proportionally fewer of them, though it’s notable that supervisors generally had more complex tasks than the less experienced operatives. Previous findings from the authors indicate that I-LED may be more effective for simple and habitual tasks, which perhaps explains why operatives had higher error detection rates.
Operatives that used the SLL intervention had the highest error detection (73%), compared to the control group (33%), just words (45%), just pictures (44%) and 43% for combined pictures and words.
These results suggest that I-LED interventions add value for post-task error detection when targeted system cues (words, tools, pictures) are used and workers are immersed in the same environment where the work, and the errors, occurred.
The SLL intervention was particularly effective, believed to be in part because it builds in a period for operatives to stop and reflect on their environment and work.
Word cues in the environment (to prompt error recall) were said to be carefully chosen for their context. Even so, a slight negative effect was found for their use: the authors posit that immersing supervisors in a word-rich environment may have desensitised them to the word cues and rendered the intervention ineffective.
The authors discuss how I-LED may enhance risk management within safety-critical work, by shaping work environments to guide operators to engage locally with system cues and so detect latent errors in a timely manner. They argue that I-LED can be considered a safety-II inspired intervention that enhances the effectiveness of safety-I controls (using the S-I / S-II philosophy).
Further drawing on S-I / S-II to frame their discussion, they note that combining systemic changes via training and maintenance (mainly an S-I frame) with routine local application of I-LED during normal work (an S-II frame) may enhance everyday risk management.
Further, by tapping into the S-II focus on normal work, the benefit of I-LED is argued to be greatest during routine habitual tasks, where errors may still be frequent but perceived risk is lowest. Use of I-LED interventions may also enhance performance across the whole spectrum of work, not just safety.
Thus, if safety is created through effective interactions between risk management that matches human performance variability (using systemic approaches), then I-LED interventions may be able to play a role in aligning these capabilities.
Overall, if your work involves safety-critical activities and steps that may get omitted (and/or be difficult to verify after the fact), then considering I-LED may be worthwhile.
Authors: Justin R.E. Saward & Neville A. Stanton, 2018, Safety Science
Study link: https://doi.org/10.1016/j.ssci.2017.09.023
Link to the LinkedIn article: https://www.linkedin.com/pulse/individual-latent-error-detection-simply-stop-look-ben-hutchinson/?published=t