
Another interesting paper co-authored by the late Richard Cook. This paper discusses the usefulness of error.
It’s a larger post with a lot of quotes – I just can’t do a better or more succinct job of restating what they’ve already written.
Providing background, they argue that while some see “error” as a dead end, others, such as organisations, use error “as if it were a stable category of human performance [and they] apply the term to performances associated with undesired outcomes, tabulate occurrences of ‘error’, and justify control and sanctions through ‘error’” (p87).
That is, error is a form of social control and “useful” for organisations – just not useful in the way we generally hope.
An example was provided of a surgical patient sliding off a tilt table. Some findings were that: 1) no one person had been assigned responsibility for operating the table, 2) the control/display design was complicated and ambiguous, making it impractical for people to understand, and 3) the manufacturer seemed surprised by the event despite similar events having occurred elsewhere with this table.
The organisational response was to train the surgical team and post a warning sign, leaving the design issues in place along with “the prospect of clinicians being assigned blame for failing to operate the dangerous table correctly” (p88); this set the stage for further “errors”.
The usefulness of error was evident in this example. The source of the problem wasn’t removed or changed; instead a report was created (to inform the manufacturer), and the organisation reverted to a “classic ‘blame and train’ strategy for the operators” (p89). It’s said that this example also highlights how an organisation responds to adversity.
That is, “systems retain multiple contributing factors to adverse events” (p89), and efforts to create a safer and less opaque environment were stalled because:
- Identifying contributing factors is a “daunting task”, and pressures including time constraints, high stakes, practitioner expertise, inadequate information, ill-defined goals and more can make it difficult to discover dysfunction
- The organisation lacked the degrees of freedom and resources to effectively change
- The organisation lacked contingency plans to help it anticipate courses of action and learn properly from events; the easiest option was therefore to cope rather than adapt.
- E.g. “Management sought closure and followed a satisficing strategy (Simon 1996). The method that they used, root cause analysis, is a ‘good enough’ solution approach for them when dealing with tension between conflicting agendas like these” (p89)
They then discuss issues around the definition and construct of error. One view, from Rasmussen, is that it’s difficult to give a satisfactory definition of error; another view is that error may be academically interesting but not particularly useful for designing safe systems.
Error is said to be “slippery, controversial, and indistinct”, and further that “it is meaningless to talk about mechanisms that produce errors. Instead, we must be concerned with the mechanisms that are behind normal action” (p89).
Here it was further remarked that “Inventing separate mechanisms for every sing[l]e kind of ‘Human Error’ may be great fun, but is not very sensible from a scientific point of view” (p89).
However, that’s not to say error has no purpose or usefulness. They give the example of an organisational leader standing at a lectern saying something like “this will never happen again” to the media.
The surety of “never” comes from first stories: overly simplified accounts of apparent “causes”, biased by knowledge of the outcome. These frequently account for accidents by looking narrowly at the factors closest at hand, often invoking human error or similar terms.
First stories “appear to be attractive explanations for failure, but they lead to sterile responses that limit learning and improvement” (p89). First stories and associated error act as forms of social control.
They allow managers at the blunt end to “[cast] adverse events as anomalies” (p90). Identifying and correcting the errors therefore “gives the appearance of restoring the organization to ‘normal’ conditions” (p90). Quoting Reason, it’s said that managers, who have the largest degrees of decisional autonomy, blame most of the safety problems on the shortcomings of people at the frontline, which limits learning and understanding.
Other issues involve efficiency-thoroughness trade-offs (ETTO). Understanding requires considerable investment of time to be thorough, but this is limited by resource constraints. This results in “the construction of a [accident causal] reason rather than finding one” (p90).
Further, investigations that are extended could be seen as a sign of weak or indecisive leadership. Thus, there’s an overall pressure to resolve investigations quickly.
The usefulness of error
The authors contend that error is useful for organisations. Error operates as:
1. A defence against entanglement with accidents
Error in investigations is a valuable organisational defence. One PhD dissertation found that shipping companies could limit their liability by lodging accident causes as error; error thereby allowed the investigation to be constrained within narrow boundaries.
They argue that “This paradoxically makes error quite safe, at least as far as the larger organization is concerned. By directing attention to an isolated human failure, the organization avoids entangling itself in open-ended inquiry that might prove damaging, or costly, or even reveal characteristics that it wishes to keep hidden” (p90).
Treating error as a stochastic event minimises liability for companies, but this, of course, ignores the environmental and contextual factors which shape human perception and action. Further, quoting Rasmussen it’s said “A deeper analysis of accident causation indicates that the observed coincidence of multiple errors [sic] cannot be explained by a stochastic coincidence of independent events. Accidents are more likely caused by a systematic migration toward accident by a company operating in an aggressive, competitive environment” (p91).
In summary, the “best” and least-cost solution for an organisation is for an accident to be conceptualised as “flowing from a sporadic human error … unpredictable and unheralded” (p91). Cynically, they quip that “While the chronically intoxicated ship captain produces liability for the company, the unusually drunken one is a virtual godsend” (p91).
2. The illusion of control
If accidents flow from error then exerting control over individuals could help prevent accidents. This relates to a worldview that “Situating error in the individual raises the prospect of creating an orderly, rational world in which accidents are less likely” (p91).
If error isn’t largely due to individuals, then “the sources of accidents are harder to identify and the opportunities for control are harder to see” (p91).
The illusion of control is also revealed in other ways, such as via taxonomies of error. Taxonomies of errors “create the impression of understanding … toward control” and “serve as maps of the impoverished understanding of human performance” (p91).
3. A means for distancing
The narrow focus that comes from attributing accidents to human error has other advantages. It “serves as a way to distance individuals from the implications of overt failure at work” (p91).
That is, attributing accidents to personal characteristics helps “Others feel less at risk” since “error can be ascribed to a practitioner’s deeply seated, but personal, flaws” (p91).
Assigning the fault to an individual allows others to believe that the event has no relevance for them (distancing). E.g. that event occurred because Joe is inattentive or lazy, and I’m not lazy, so this isn’t relevant for me.
It’s noted that “Distancing limits and obscures the deeper examination of the sources of accidents” (p91).
4. A marker for failed investigations
They argue that the “most important value of ‘human error’ is that it provides an acceptable end point for adverse event investigation” (p91).
These often unstated/unwritten “stop rules” allow investigators to conclude confidently once they have traced back far enough to identify “a human with apparent freedom of action” (p91).
Discussing the findings, it’s said that:
- Error isn’t merely a technical cover for social features that organisations wish to hide or remain ignorant of. It also serves as a catch-all term for events that can’t be attributed to mechanical failures and the like
- The above is useful since it allows a free pass for design issues, which are “virtually absent” in many reports
- Error encompasses a variety of phenomena and is itself not specific
- Error is seen by many to be “a marker for an incomplete or failed investigation” (p92) and/or a marker that an investigation has ended too early
- “High rates of ‘human error’ point to a particular form of human error problem. This is not error by the practitioners who were involved in the accident, but rather error by the analysts who assessed the accident’s source and evolution” (p92)
- They argue “this use for error may ultimately be the greatest contribution of human error to the creation of high reliability systems” (p92)
- “Complexity alone may frustrate investigations in ways that lead to application of the label ‘error’” (p92)
In concluding, they argue that error remains useful not in spite of its misapplication but because of it. They state error should be taken seriously “not because it is an accurate assessment but because it is inaccurate” (p92).
Treating error as comparable to noise fails to recognise the value error holds within organisations. Thinking this way may also blind practitioners to a deeper understanding.

Authors: Cook, R. I., & Nemeth, C. P. (2010). “Those found responsible have been sacked”: Some observations on the usefulness of error. Cognition, Technology & Work, 12(2), 87–93.

Study link: https://doi.org/10.1007/s10111-010-0149-0
Link to the LinkedIn article: https://www.linkedin.com/pulse/those-found-responsible-have-been-sacked-some-error-ben-hutchinson