The ironies of ‘human factors’

This brief paper from Hollnagel & Dekker adopts Lisanne Bainbridge's idea of ironies in the context of contemporary human factors practice.

Can’t cover all the points. Highly recommend you check out Bainbridge’s original paper, though (link in comments).

An irony in this context is a "solution which expands rather than solves or eliminates a problem, thus making something worse rather than better". Bainbridge's paper explored these effects in relation to automation, but the current authors argue the idea also applies to how "human factors engineering relies on training, procedures, design and automation as its main approaches to managing human variability".

Their key points are that:

1) "'Human factors' tends to consider human agility or performance variability as a liability that should either be eliminated or brought under control", and

2) we should "recognise that variability is an indispensable asset, without which few of the common human factors solutions would ever work".

[** NB. I've seen this criticism of HF before – that it treats people as a factor or 'part' of a system, like a mechanical cog. I don't doubt one can find examples of this, but I'm not convinced that this is the modus operandi of the HF domain. Moreover, it would have been useful if the paper provided more examples of this mechanical perspective within contemporary HF, rather than relying on Fitts' excellent, but dated, work from 70 years ago.]

They briefly discuss Bainbridge’s paper. The first irony is that “the more advanced a control system is, the more crucial the contribution of the human operator would be”. Further, “human supervisory controllers cannot really be unskilled in their monitoring of whether the automation is carrying out its work correctly, because they wouldn’t know what they were looking at (or for)”.

Another irony is that the "designer who tried to eliminate the operator still leaves the operator to do the tasks which the designer could not think how to automate – often without adequate support".

Another irony is that “the automatic control system was introduced because it could do the job more reliably, or cheaper, than the operator, yet the operator was asked to monitor that it worked effectively”.

They argue that "Fundamentally, the ironies of human factors coagulate around the widely-shared (if not always explicit) view of humans as parts or components of work systems" – in effect, a mechanical, Newtonian view.

Next, they discuss the work of Paul Fitts, a really forward-thinking pioneer of human factors engineering. I've skipped most of this section, but note the calls for training, design and automation within the HF arena.

The ironies of training

Here they start with the point that people soon discovered that "training was not the perfect way to engineer the human factor and overcome the problems Fitts wanted to solve".

This results partially from the inherent performance variability within work. They argue that the Achilles heel of HF is that it adopted a machine analogy of human performance and hence, “humans were from the very beginning seen as inefficient, variable, and unreliable” in comparison to machines.

And because of these human 'imperfections', technology makes it possible to fully exploit new potentials to overcome them. Where the use of technology couldn't address these so-called imperfections, training was used. They very briefly discuss the history of Scientific Management, which saw performance variability as something that should be reduced.

Using a mythical analogy of making humans longer or shorter, they illustrate the idea of adjusting human capabilities to suit the environment, rather than adjusting the environment to suit human capabilities. Training can likewise be a way to extend or stretch human capabilities beyond what they could naturally or normally do.

On the latter, training does enable humans to perform some tasks they couldn't otherwise have performed, like flying a modern aircraft or operating a nuclear power plant. The Boeing 737 MAX provides another example.

They argue that “Pilots didn’t stand much of a chance in the face of an automation take-over and nose-dive by hidden software …This software was, ironically, meant to smoothen out the difference between the MAX and previous generations for certification purposes, but pilots (and airlines) were never made aware of its existence. It denied them the chance to build out their variability so that they might have stretched their capabilities to meet its potentially fatal automation surprise”.

The irony of procedures

They argue that from the first years of HFE, procedures have played an essential role. Despite this, there is "always a difference [between] what people actually do (Work-as-Done, or WAD) and what they are or were supposed to do (Work-as-Imagined or WAI)".

They talk about some stuff here which I've skipped, and then lead into following guidelines, standards and the like. They note that following standards "to the letter" is "assumed to compensate for human shortcomings and performance variability and result in work that is correct and flawless as long as work and workers comply with the standards".

Moreover, in investigations, departures from instructions, standards and the like are frequently used as an explanation for why something bad happened. The irony here, they say, is that "there is ample evidence that non-compliance, particularly in non-standard situations, can be safer than sticking with the rules".

The ironies of design

Simply put, improving performance characteristics of human-machine combinations can involve altering the human so that they fit the machine better, or modifying the machine to fit the human better.

They argue that some see design as telling stories about the future. An irony here, then, is "that the inevitable differences [between] WaI and WaD weaken the very basis from which the design is made. It is inevitable that a design cannot solve the problem it was supposed to address but possibly make it worse".

Further, "By creating more uncertainty it can ironically increase rather than reduce the need for human performance variability to fill out the gaps between design (WaI) and reality (WaD)".

They say that the ‘solution’ for HF, at least theoretically, isn’t to necessarily force real work to comply with WAI, nor constrain WAI so that it corresponds to WAD. Instead, “A solution is rather to try to understand what determines how work is done and to find effective ways of managing that to keep the variability of WaD within acceptable limits”, but not via ways to curtail human capabilities/adaptability.

In further support, socio-technical systems primarily function not because of design or anticipation of WAI, but “because human performance variability compensates for design deficiencies and the limits of our imagination about the working conditions it will encounter”.

Therefore, an irony is that by trying to eliminate human performance variability, even if this was possible, we may be amplifying the variability, by masking the “problems of imprecision, of the nuances and ‘messy details’ of what actual practice requires, or of variability and lack of speed”.

They go back to the point that HF has been “preoccupied with the legacy view of humans as a liability and source of error and variability”. But it’s this same variability that allows systems to function. And “Performance agility is the putty for the inevitable discrepancy between WaI and WaD”.

They further unpack the folly of trying to anticipate all work, e.g. overprescribing WAI, as it represents an "infinity of interpretations", and that, somewhere in the system, WAI will always be different from WAD.

WAD remains a “moving target because internal and external working conditions (demands, and resources) never are stable or fully predictable. The design requires that the user interprets the information in a limited number of (pre-) specified ways. Design, in other words, forces the human user to function as a finite state”.

Substitution Irony

They propose another irony to add to Bainbridge’s: the substitution principle. This is the assumption that artefacts, risk controls, practices etc. are “neutral in their effects and that their introduction into a system therefore only has intended and no unintended consequences”.

E.g. some actions are taken or controls implemented following a high-potential incident. The assumption is that those corrective actions will only have positive effects, or at worst neutral/null effects. They propose that nothing is truly neutral in complex environments with people.

E.g. "substitutability only works when parts are not interacting and when there is no appreciable tear and wear. If parts are interacting, they constitute a system with (inter-)dependencies, which almost by definition invalidates the substitution assumption."

Conclusion: worker-as-imagined

Wrapping up their arguments, they reiterate that HFE relies on training, procedures, design and automation as its “chief approaches to managing human variability”.

They propose a “worker-as-imagined” effect in play: where people have to “make up for the inevitable shortcomings of our own imagination as we design artifacts and attempt to automate more of the work that gets done with them”.

They again argue that HF may not have fully recognised the valuable and indispensable role of human agility or variability. Here, “To learn about its necessity, and appreciate the dynamic sacrifices and tradeoffs that get made in Work-as-Done all the time (Rasmussen 1997), the ability to somehow put oneself in the perspective of those who carry out the work is critical”.

Curiosity, then, can "replace judgment about what work is expected or 'should' be done". They briefly talk about some of their recent work which has used questions like those below, an approach they propose leads to a more compassionate learning opportunity.

These lead to better answers and learning, and help to avoid "the construction of a new project or design on some 'human-as-imagined': for by the time it is done and implemented, all that is left for us is to hurl accusations at the user about why they couldn't be more perfect, as some indeed did in the wake of the Boeing 737 MAX".

Finally, they note that such a humanistic approach “not only respects the humanity of everyone involved in the ergonomic enterprise; it also allows us early on to capture the systemic factors that contribute to human-machine breakdowns and failures”.

Ref: Hollnagel, E., & Dekker, S. W. (2024). The ironies of ‘human factors’. Theoretical Issues in Ergonomics Science, 1-11.

Study link: https://doi.org/10.1080/1463922X.2024.2443976

My site with more reviews: https://safety177496371.wordpress.com

LinkedIn post: https://www.linkedin.com/pulse/ironies-human-factors-ben-hutchinson-0nr8c
