Human Error: Trick or Treat?

This 2007 chapter from Hollnagel unpacked whether we really need the concept of “human error” (HE).

It’s a whole chapter, so I’ve skipped HEAPS.

Tl;dr according to Hollnagel:

·         “there is no need of a theory of ‘human error’ because the observed discrepancies in performance should be explained by a theory of normal performance rather than specific ‘error mechanisms’”

·         “human error” is best understood as a “judgment made in hindsight”

·         However, “We do need a way to account for the variability of human performance, not least in how this may affect the safety of socio-technical complexes. But we also need models and theories that can express human behavior”

First, he says that HE enjoys the unique position of being a phenomenon that, academically, belongs to one discipline but is used more by people from other disciplines.

Specifically, HE seems to fit the psychology domain, since it apparently refers to an aspect of human behaviour, yet it has only recently become a central concept in psychology. Instead, it’s mostly been commandeered by practice-oriented disciplines, like human factors engineering, accident analysis, and industrial safety.

Despite its use, it’s ambiguous and not simple to define.

“Human Error” as a Psychological Concept

He discusses the scientific basis of HE within psychology. It may be seen as idiographic or nomothetic. As idiographic, it may lead to ideas about error-proneness as an individual trait, or about risk-taking as a subjective disposition.

More common, though, is the nomothetic view of HE, where it’s described as a general trait and something that can be “treated statistically and described by a general theory”.

He argues that, on the common view of HE, it then follows that it’s treatable like other psychological phenomena, such as learning, remembering, and problem solving. But it can’t really be, because it’s of a different character.

He talks about learning by trial/failure, and its importance in cognitive processes and survival. If everything happens as it’s expected to and nothing ever fails, then “it is impossible to improve”.

He talks about the Fundamental Regulator Paradox, where the task of a regulator is to eliminate variation; but variation “is the ultimate source of information about the quality of its work”. Hence, the better a regulator works—reduces variability—the less information it receives on how to improve.
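The Fundamental Regulator Paradox can be made concrete with a toy simulation. This is purely my illustration (not from the chapter): a simple proportional regulator holds a noisy process near a setpoint, and the observed deviations are the only feedback it has about the quality of its own work. The better it regulates, the smaller and less informative that signal becomes.

```python
import random

def run_process(gain, steps=10_000, noise=1.0, seed=42):
    """Simulate a proportional regulator holding a noisy process at 0.
    Returns the variance of the observed deviations, i.e. the only
    'signal' the regulator gets about how well it is doing."""
    rng = random.Random(seed)
    state, deviations = 0.0, []
    for _ in range(steps):
        state += rng.gauss(0, noise)   # external disturbance
        deviations.append(state)       # what the regulator can observe
        state -= gain * state          # corrective action
    n = len(deviations)
    mean = sum(deviations) / n
    return sum((d - mean) ** 2 for d in deviations) / n

weak = run_process(gain=0.1)
strong = run_process(gain=0.9)
print(weak > strong)  # the better regulator leaves less variation to learn from
```

The stronger regulator drives the residual variance close to the raw noise floor, so almost nothing remains from which to judge, or improve, its own performance.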

He quips that while “failure is necessary for the survival … of the species”, this doesn’t “mean that ‘human error’ as such is necessary”.

He talks about the nature of normal variability in human performance, and the challenge of defining something as correct/normal, or incorrect/abnormal. He says that while such nomenclature is “warranted for technological systems such as car engines or computers, it is not justified in describing humans”.

So we can logically assume that if a mechanical system doesn’t work as intended, then something went wrong. But the logic doesn’t hold for humans, “if for no other reason than because it is difficult to define what is meant by correct performance”.

Definitions of HE

HE can mean different things to different people.

Quoting Woods et al.: “The label ‘human error’ is a judgment made in hindsight. After the outcome is clear, any attribution of error is a social and psychological judgment process, not a narrow, purely technical, or objective analysis” (Woods et al. 1994, p. 200).

He unpacks the differences in how HE is defined and conceptualised – see below.

·         In short, it can be a cause: e.g. the failure was due to human error

·         It can be the event or action itself, e.g. I forgot to check the water level

·         It can be an outcome, e.g. I left the key in the lock

Interestingly, Hollnagel says that the “growing interest in “human error” as a phenomenon in its own right did not come from psychology but from practitioners facing the problems of humans working in complex industrial systems”.

Consequences and Causes

Here it’s argued the law of reverse causality is relevant to the discussion. While the law of causality states that every cause has an effect, the reverse law says that “every effect has a cause”.

The belief that there is always a cause which can be observed relies on the law of reverse causality and the rationality assumption (that it’s logically possible to reason backwards in time from the effect to the cause). The rationality assumption, however, relies on a “deterministic world that does not really exist”.

He says that HE came to the fore after technical issues and reliability were largely resolved; e.g. once mechanical failures became rarer, attention turned to people again. This changed “drastically around the 1950s”, following the introduction of digital technology, computerisation, mechanisation, centralisation and automation.

However, these changes, which engineered reliability into systems, also “meant that humans were faced with a new world of work for which they were ill suited”. He says that, paradoxically, technology was once again limited by human performance (though no longer in tracking and regulation, but in monitoring and planning complex systems).

He also talks about how some of the logic of HE depends on the assumption of perfect systems. These don’t exist. He maintains that the concepts of “error” and “human error” “are only necessary if processes are flawless”.

But if we assume that processes and functions always have variability, then the variability itself is sufficient to account for the unwanted performance – no need for HE.

Hence, using this view (of which I’ve skipped heaps), he says “the need of ‘human error’ as an explanatory concept is an artifact of the underlying assumption of a flawless process. On the other hand, if we accept that cognition always is approximate, then there is no need for ‘error’ in cognition as such”.

Assumptions of HE

Here Hollnagel outlines three assumptions necessary for HE to be empirically justified:

1)      There needs to be a clearly specified performance standard or criterion against which a deviant response can be measured.

2)      There must be an event or an action that results in a measurable performance shortfall such that the expected level of performance is not met by the acting agent.

3)      There must be a degree of volition such that the person had the opportunity to act in a way that would not be considered erroneous.
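The three assumptions above can be read as a conjunction: the “human error” label is only empirically meaningful when all three hold. A minimal sketch, with field names that are my own illustration rather than Hollnagel’s terms:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObservedAct:
    # All fields are hypothetical illustrations, not Hollnagel's terminology.
    performance_standard: Optional[str]  # a clearly specified criterion, if any
    shortfall_measured: bool             # did performance measurably fall short?
    had_volition: bool                   # could the person have acted otherwise?

def qualifies_as_human_error(act: ObservedAct) -> bool:
    """All three of Hollnagel's assumptions must hold before the
    'human error' label is even empirically justified."""
    return (act.performance_standard is not None
            and act.shortfall_measured
            and act.had_volition)

# With no defined standard, the label cannot apply, whatever happened:
print(qualifies_as_human_error(
    ObservedAct(performance_standard=None,
                shortfall_measured=True,
                had_volition=True)))  # False
```

The point of the conjunction is that a missing standard, an unmeasured shortfall, or an absence of volition each independently defeats the attribution.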

He argues that these assumptions don’t always hold in reality, nor are they always readily observable or definable.

As he says throughout the work though, this isn’t to say we don’t need predictions and models of human performance – we do, but the models “should represent the essential (requisite) variety of human performance and of the joint cognitive system, rather than elegant but ephemeral theoretical constructions”.

Hollnagel unpacks some of these assumptions. For instance, regarding the ‘performance shortfall’, he says most definitions of HE don’t adequately define the predefined performance standards.

He also talks about pessimistic and optimistic views of errors. The pessimistic looks at the ‘design flaws’ of cognitive processing, whereas the optimistic views look at how these cognitive processes help us survive and flourish in a complex world, most of the time. E.g. satisficing and other heuristics.

Hence, “In the optimistic view, the shortcomings are not due to specific information-processing limitations but rather to the characteristics of the overall process. The muddling-through types of decision-making, or satisficing, have evolved to be effective in real-life situations”.
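Satisficing, one of the heuristics the optimistic view points to, is easy to show in code. This is my own sketch of Simon’s idea, not something from the chapter: take the first option that meets an aspiration level instead of exhaustively searching for the optimum.

```python
def satisfice(options, good_enough, score):
    """Return the first option whose score meets the aspiration level,
    rather than scanning everything for the true optimum."""
    for opt in options:
        if score(opt) >= good_enough:
            return opt
    return None  # nothing acceptable was found

# Hypothetical routes scored by convenience; we stop at the first acceptable one.
routes = [("scenic", 3), ("usual", 7), ("optimal", 9)]
pick = satisfice(routes, good_enough=6, score=lambda r: r[1])
print(pick)  # ('usual', 7): acceptable, found without exhaustive search
```

In real-life situations the cost of finding the true optimum is often higher than the benefit over “good enough”, which is exactly why such muddling-through strategies have evolved to be effective.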

THE CONSEQUENCES OF A REALISTIC APPRAISAL OF “HUMAN ERROR”

To this point, he maintains that there’s no real need for HE, either as theory or as method.

It’s also said to be an artefact from both an ontological and a pragmatic perspective.

While there’s no pragmatic need for HE, “there is an undeniable utility in using the term”.

Nevertheless, he agrees that abandoning HE doesn’t mean abandoning the study and description of human performance. But what’s studied isn’t HE, because it is “of little use both because it is a catchall category and because it confuses actions and causes”.

Rather, what’s needed isn’t a theory of HE but a consistent classification of performance manifestations.

Interestingly, Hollnagel then describes HAZOP, saying that classification of manifestations is something HAZOP does well, and its guidewords can be used, more or less, to describe human performance.

He says in comparison, most descriptions of HE don’t “separate in a consistent manner descriptions of manifestations (“error-as-action”) from descriptions of potential explanations (“error-as-cause”)”.
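To make the guideword idea concrete: below are the standard HAZOP guidewords, with illustrative readings (my own phrasings, not Hollnagel’s) as manifestations of human performance variability. Note the descriptions stay strictly at the “error-as-action” level, silent about causes.

```python
# Standard HAZOP guidewords, mapped to illustrative human-performance
# manifestations (the mappings are my own examples, not from the chapter).
HAZOP_GUIDEWORDS = {
    "NO / NONE":  "action omitted entirely",
    "MORE":       "action done too much, too far, or too long",
    "LESS":       "action done too little or too briefly",
    "AS WELL AS": "an additional, unintended action also performed",
    "PART OF":    "only part of the intended action completed",
    "REVERSE":    "action done in the opposite direction or order",
    "OTHER THAN": "a different action substituted for the intended one",
    "EARLY":      "action performed too early",
    "LATE":       "action performed too late",
}

def describe(guideword: str, task: str) -> str:
    """Compose a manifestation-only ('error-as-action') description,
    deliberately saying nothing about potential causes."""
    return f"{task}: {HAZOP_GUIDEWORDS[guideword]}"

print(describe("NO / NONE", "check the water level"))
```

The separation matters: “check the water level: action omitted entirely” describes what was observed, whereas “forgot because of fatigue” would smuggle in an explanation.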

Making “Human Error” Obsolete

Finally, he ties the key points together – I’ve skipped a lot of this, but:

·         He says moving forward requires acknowledging that “performance always varies and is never flawless”. Once we do this, “the need of a separate category for “human error” evaporates”.

·         “We need to understand better the variability of everyday performance and the consequences this may have vis-à-vis the activity to be performed (the task)”

·         “As a second step, but far beyond this level, we also need to consider the ways in which the variability may aggregate over individuals, to account for the nature of social interaction”

Ref: Hollnagel, E. (2007). Human error: Trick or treat? In Handbook of Applied Cognition (pp. 219–238).


Study link: https://doi.org/10.1002/9780470713181.ch9

LinkedIn post: https://www.linkedin.com/pulse/human-error-trick-treat-ben-hutchinson-8693c
