The Error of Counting “Errors”

For a brief paper (two pages), this one hits hard. The late Bob Wears discusses the problems with a myopic focus on “error”.

I’ll use a lot of direct quotes since I can’t put it any better than the author.

Fundamentally, he says we can look at human error in two ways.

1. See error as a cause of failures, or

2. See error as a consequence: a symptom of deeper problems in the organisation.

Of course, reality isn’t this black and white, and he draws this distinction for discussion purposes only.

He says that although “partisans” of safety emphasise the importance of adopting broader ideas from other domains about human performance and how it contributes to both success and failure, “the continued focus on pursuing “errors” shows just how deeply we as a profession remain trapped by old ideas” (p. ). [Note that he isn’t advocating abandoning the idea or language of error, but rather the pursuit of “error” as the output or end-point.]

He argues that the view of error as the main cause of accidents dominates healthcare. On this view, errors committed by frontline workers are the main unit of analysis, and preventing them is the main goal.

This view also holds that our “systems are basically safe; the chief threat comes from the inherent unreliability of people”, and, finally, that we should be “protecting the system from the erratic humans through selection, training, procedures, protocols, automation, and discipline”.

In this view, investigators are led to focus on finding places where people failed. These failures are often framed in counterfactual language; e.g. “if only they had noticed X”, “how could they not have seen Y”.

Importantly, this view becomes operationalised by “counting “errors” in work, points at which now, with our knowledge of the outcome, we can see that a zig instead of a zag could have avoided disaster”. The author then cites a contemporary healthcare paper that exemplifies this approach, showing it was still current at the time of writing.

He says that this view of people is “in a way reassuring”. That is, believing that our systems are basically safe allays concerns, and avoids uncomfortable and costly fundamental changes. Importantly, this belief is “also convincing, in the same way that optical illusions are convincing; it is easy to see “errors”, especially in hindsight”.

He counters with another view of human performance, where:

· “Human error” isn’t the cause of anything, and may not actually “exist as an objective, separable, and reliably identifiable construct”. Instead, error can be a symptom of deeper trouble in the system; to understand failure, we “must find how people’s assessments and actions made sense to them at the time”.

· Our systems are inherently hazardous, a facet driven by the realisation that they “embody fundamentally irreconcilable conflicts among goals that must be pursued simultaneously, like production, time, cost etc.”

· Human error isn’t necessarily random, but is “systematically connected to features of workers’ tools, tasks, artefacts, and working environments”. Thus, safety can be improved by understanding and influencing these connections.

Highlighting these alternate characteristics, he maintains that “underneath every simple and obvious story about error, there is a deeper, more complex story and that eliciting, sharing, and learning from that story, not the “error” story, is what leads to improvement”.

This appreciation requires a perspective that embraces local rationality: the understanding that people’s actions made sense to them at the time, given their context, worldview, focus of attention, constraints, and so on.

Moreover, human performance doesn’t occur in a vacuum; it is shaped to varying degrees by complex organisational webs of communities of practice, procedures, artefacts, and coworkers, which together act as a joint cognitive system “distributed over time and space and conditioned by organizational, professional, and institutional contexts”.

He argues that this perspective can be hard to hold, since it “implies that our occasional accidents are not aberrant events but rather normal outputs of our systems, caused not by defective components but rather by the unanticipated interaction and tight coupling of normally operating components”.

Indeed, allocating individual fault and aberration “with some soporific injunctions about better training” allows existing systems of practice to be preserved.

In any case, both the existing perspective and the alternative perspective may simply displace the issue elsewhere in the organisation.

For instance, instead of blaming an individual, we can move down the decomposition axis and blame their cognitive or emotional components, which has the effect of replacing the unreliable person with “unreliable mental components (eg, working memory, attention, effort, motivation, heuristics, biases)”.

Or blame can move up the decomposition axis, away from frontline workers to managers, units, or the whole organisation. Here, the model of “unreliable frontline workers is simply replaced, this time by one of the unreliable managers and executives”.

These displacements can be unproductive given that “everyone’s blunt end is someone else’s sharp end”. And while allocating responsibility to somebody else may be satisfying, it’s less likely to lead to sustained and meaningful improvement.

Moreover, the belief that getting control over error requires quantifying it is also likely to displace effective action. He argues this is for two reasons:

1. Identifying an error, or what someone should or could have done (counterfactuals), doesn’t in itself tell us why that action made sense at the time, nor reveal the upstream work factors that permitted it.

Hence, without explaining why it made sense, “it allows the same “error” trap that caught them to remain hidden in the work environment, where it can catch somebody else”.

2. The second reason is that ““errors” and injury bear only a loose relationship to one another”. That is, many errors never lead to harm, and many injuries occur without error.

He argues that the goal of (healthcare) safety should be to reduce harms, and focusing too heavily on errors “diverts energy and attention into sterile debates about preventability and into overly detailed, narrowly targeted “fixes” that treat symptoms but not the underlying problem”.

Wears remarks that “What labeling a fragment of behavior as an error really means is that we do not yet have a good enough understanding of the problem”.

Wears concludes: ““Error” counts are thus measures of ignorance, rather than measures of risk”.

Reference: Wears, R. L. (2008). The error of counting “errors”. Annals of Emergency Medicine, 52(5), 502-503.

Study link: https://doi.org/10.1016/j.annemergmed.2008.03.015

LinkedIn post: https://www.linkedin.com/pulse/error-counting-errors-ben-hutchinson
