Safe AF #10: Are safety myths – like most accidents due to human error – holding us back?

Are our safety myths – like most accidents being the result of human error – holding back genuine improvement in safety?

Can myths like these actually hamper learning and increase operational risk?

Today’s article is from Besnard, D., & Hollnagel, E. (2014). I want to believe: some myths about the management of industrial safety. Cognition, Technology & Work, 16, 13-23.

Make sure to subscribe to Safe AF on Spotify/Apple, and if you find it useful then please help share the news, and leave a rating and review on your podcast app. I also have a Safe AF LinkedIn group if you want to stay up to date on releases.

Spotify: https://creators.spotify.com/pod/show/ben0261/episodes/Ep-10-Are-safety-myths-holding-us-back-e35bsc0

Apple: https://podcasts.apple.com/au/podcast/ep-10-are-safety-myths-holding-us-back/id1819811788?i=1000717052597


Shout me a coffee (one-off or monthly recurring)

Transcription:

Are we stuck in a safety trap, constantly pointing fingers at human error, or adding more layers of ineffective protection? What if the very beliefs guiding our safety efforts are holding us back?

G’day everyone, I’m Ben Hutchinson and this is Safe AF, a podcast dedicated to a thrifty analysis of safety, risk and performance research. Visit safetyinsights.org for more research.

This paper is from Besnard and Hollnagel, 2014, titled “I Want to Believe – Some Myths about the Management of Industrial Safety” in Cognition, Technology and Work. The authors argue that many industrial safety practices are “littered with fragile beliefs, rendering safety management flawed and ineffectual.” They define a myth not merely as an assumption, but as an idea or story that many people believe but which is not true. These myths are deeply rooted in our culture, widely shared, and profoundly influence decisions and actions. The authors contend that acknowledging these myths is a crucial first step towards genuinely improving industrial safety.

Let’s explore six myths.

Myth one, human error is the largest single cause of accidents and incidents. This belief is pervasive. For instance, a 2010 report declared that human error is involved in over 90% of all accidents and injuries in the workplace. This idea has a long history, and it remains a fundamental part of many investigation methods, often marking the deepest point of analysis. However, the authors argue that this is a simplistic and counterproductive view. They highlight that labelling something human error is a judgment, often made with hindsight bias.

It implies wrongdoing and invites the search for a culprit. Crucially, it typically focuses only on sharp-end operators, the people directly involved with the process, ignoring the broader context and working conditions imposed by managers and the organisation. And if human error is the cause when an event goes wrong, what about the countless times human actions make things go right? Their revised statement: human error is an artefact of a traditional engineering mindset that treats humans like fallible machines, failing to consider the vital role working conditions play in shaping performance.

Myth two, systems will be safe if people comply with the procedures. We often assume that following a procedure not only gets the job done, but gets it done safely, and that any deviation from procedures automatically creates risk. This treats people as if they were machines, but it isn’t helpful. Procedures are inherently incomplete. They can’t cover every possible situation or fully describe every action. Humans constantly interpret and adapt procedures based on the situation and their experience. The authors draw on a major accident to highlight that rigid, blind compliance can actually be detrimental to safety and efficiency. Human flexibility is essential to compensate for the brittleness of procedures and actually contributes to safety. A revised statement is, “actual working situations usually differ from what the procedures assume, and strict compliance may be detrimental to practice. Procedures should be used carefully and intelligently.”

Myth three, safety can be improved by barriers and protection; more layers of protection result in higher safety. This seems intuitive, reflecting the defence-in-depth approach. However, the relationship between protection and risk is not straightforward. One reason is psychological. People often adjust their behaviour based on perceived risk. The authors tie this point to risk homeostasis, suggesting people maintain a certain level of comfortable risk. For example, some limited studies suggested that taxi drivers with ABS braking systems drove more aggressively in curves and actually had a slightly higher accident rate than those without ABS. However, this argument hasn’t aged well, because the risk homeostasis hypothesis, as advanced by Gerald Wilde, isn’t well substantiated empirically. Behavioural adaptation, though, is well supported, meaning people can and do change their behaviour in response to interventions and designs. The second reason is technical. Adding protection inherently increases a system’s complexity. More components and more couplings not only introduce new failure points, but also exponentially increase the number of combinations that can lead to unwanted outcomes. A revised statement is, “Technology is not neutral. Additional protection changes behaviour so that the intended safety improvements might not be obtained.”

Myth four, accidents have root causes and root causes can be found. Root cause analysis is a common umbrella term for a number of different techniques and approaches, assuming that system parts are causally related and effects propagate in an orderly fashion, allowing us to trace problems back to their origin. However, this method relies on assumptions that don’t really hold for complex systems: that events repeat, that outcomes are strictly bimodal (that is, correct or incorrect), and that cause-effect relations can be fully described. Human performance isn’t bimodal; it varies, it rarely fails completely, and humans can recover from failures. When an analysis points to human error as the root cause, it often overlooks how the same human flexibility makes things go right most of the time. The paper suggests that the preference for simple root cause explanations is due to a psychological desire for comfort and the satisfaction of tracing something unfamiliar back to something familiar. Instead, the authors argue that unwanted outcomes in complex systems don’t necessarily have clear, identifiable causes. A revised statement is, “human performance cannot be described as if it was bimodal. In complex systems, things that go wrong happen largely in the same way as things that go right.”

Myth five, accident investigation is a rational process. While investigations are serious undertakings, they are rarely purely rational. Practical constraints like deadlines and resource limitations often dictate the depth of analysis and the methods used, becoming a trade-off between efficiency and thoroughness. Moreover, every investigation method embodies assumptions about how accidents happen. Perhaps most challenging is the need to establish responsibilities. This need can heavily bias investigations, making finding a culprit more important than understanding the contributing or causal factors.

As Woods et al. put it, “attributing error is fundamentally a social and psychological process and not an objective technical one.” The paper suggests that accident investigation is a social process where causes are constructed rather than found. Indeed, investigators operate on a “what you look for is what you find” principle, meaning their chosen method and their existing worldviews direct what they see and don’t see. To overcome this, it’s essential to pursue what they call second stories: deeper analyses that go beyond the simplified first stories of apparent causes. A revised statement is, “accident investigation is a social process where causes are constructed rather than found.”

Myth six, safety first. Statements like “safety always has the highest priority” and “safety will never be compromised” are often made for communication purposes, expressing noble values and goals. The paper cites a 2004 assessment of the BP Texas City refinery, a site that had a low injury rate but later experienced a major explosion that killed 15 people. Before the explosion, employees ranked making money, cost and budget, and production as their top priorities, with major incident safety coming in fifth. Safety has undeniable financial implications.

Safety’s costs are immediate and tangible, while its benefits are often potential and distant. This leads to trade-offs, where safety is often only as high as affordable. Safety budgets, like any other constraint, are limited, and decisions involve prioritisation and feasibility, often trading safety against economy. A revised statement is “safety will be as high as affordable from a financial, risk acceptability and ethical perspective.”

So, some takeaways: these myths, shared across all layers of organisations and society, lead to flawed safety practices. They are resistant to change because they are rarely questioned. The authors propose a radical shift. Instead of defining safety as a property that a system has, safety should be seen as a process, something a company does. It’s dynamic, constantly negotiated and varies in response to changing conditions. More importantly, the goal of safety should shift from focusing on what goes wrong to also understanding and enabling what goes right. We spend immense effort preventing unsafe functioning but, relatively speaking, hardly any effort is directed towards bringing about safe and reliable functioning.

Measuring safety solely by a low number of negative outcomes is insufficient. It should be tied to indicators of an organisation’s dynamic stability: its ability to succeed under varying conditions and its capacities to respond, monitor, anticipate and learn. In a complex world with multiple interacting constraints, operating perfectly is impossible. The safety myths covered here support an unrealistic ideal of safety management. To successfully operate increasingly complex systems, we need to abandon these myths and adopt more sensible and sustainable assumptions about safety. Complex systems work because people learn to identify and overcome design flaws and functional glitches.

Quoting Dekker, people will finish the design in practice. People can adjust their performance to the current conditions. People interpret and apply procedures to match the situation. People can detect when something is about to go wrong and intervene before the situation becomes seriously worse. This means that systems work because people are flexible and adaptive, rather than because the systems have been perfectly thought out and designed.

But, of course, these same positive capabilities can also be our road to ruin, because people may adapt in ways that are locally optimised but more hazardous at a global, organisational level.
