
A 1998 paper from James Reason discussing complex system failures and cultures of safety (which he calls safety cultures, SC; note he often, but not always, uses the plural).
Way too much to cover, so worth checking out the original paper.
First, Reason points out the lack of a single definition of SC, but one offered is: “Shared values (what is important) and beliefs (how things work) that interact with an organization’s structures and control systems to produce behavioural norms (the way we do things around here)”.
He says organisational cultures don’t “spring up ready-made”, but adapt like organisms. Hence, SCs evolve “gradually in response to local conditions, past events, the character of the leadership and the mood of the workforce”.
There are at least two ways to view SC: 1) as something that the organisation is (beliefs, attitudes, values), and 2) as something that the organisation has (structures, practices, controls, policies etc.).
Both are essential, but he argues that the latter is “easier to manipulate than the former”. It’s hard to change the attitudes and beliefs of adults, but “acting and doing, shaped by organizational controls, can lead to thinking and believing”.
Ideal safety cultures are said to be the engine that drives the organisation towards countering operational hazards, and this may hold regardless of the leadership or commercial concerns of the day.
One element of SCs is “not forgetting to be afraid”. This is challenging, especially for industries with few accidents (aviation, nuclear). Weick has defined safety as a dynamic non-event. Quoting the paper, “safety is invisible in the sense that safe outcomes do not deviate from the expected, and so there is nothing to capture the attention. If people see nothing, they presume that nothing is happening, and that nothing will continue to happen if they continue to act as before. But this is misleading because it takes a number of dynamic inputs to create stable outcomes”.
With an absence of bad news/events, like in ultra-safe industries, “the right kinds of data” is apparently the best way to resensitise people to danger. This is said to make up an informed culture—an organisation informed on the ‘right’ intel, e.g. technical, human, organisational and environmental factors.
Pathways to Disaster
Next he discusses the Swiss Cheese metaphor (which I’ve skipped) and the difference between individual vs organisational accidents.
He says that active failures tend to be short-lived, arising largely from local triggering conditions and other specific local factors. Latent conditions, in contrast, “may lie dormant for many years until they are revealed by regulators, internal audits or by incidents and accidents”.
However, the holes in the Swiss Cheese model aren’t static (a limitation of a fixed image); they are dynamic and in continuous flux.
With all of the varied defences in modern, highly defended complex systems, how do they still fail? For one, multiple defences are a mixed blessing: “While they greatly reduce the likelihood of a bad accident, they also render the system as a whole more opaque to the people who manage and operate it”.
Also, the people charged with monitoring and operating highly complex technical systems are often disconnected from those systems (e.g. they can’t see the underlying software logic), and “Both this distancing effect and the rarity of bad events make it very easy not to be afraid”.
Further, complexity and tight coupling, per Perrow’s description, make complex systems not only opaque to their operators but also “almost impossible for any one individual to understand such a system in its entirety”.
Hence, if no one person has visibility over all of the gaps, then “no one person can be responsible for them. Some gaps will always escape attention and correction”.
Interestingly, Reason argues that because of the diversity and redundancy of defences distributed across complex systems and organisations, they are “only collectively vulnerable to something that is equally widespread”.
Reason suggests that culture is the only candidate that is equally widespread.
He says that the “universally accepted feature of culture is that its influence extends to all parts of an organization”.
He argues that culture has the “pervasive effects” that can create holes in defensive layers. Cultures can also allow gaps to remain unseen and uncorrected. Hence, “In a well-defended system only cultural influences are sufficiently widespread to increase substantially the probability of lining up a penetrable series of defensive weaknesses”.
Dangerous adaptations
Per Schein’s definition, culture is a “pattern of basic assumptions invented, discovered or developed by a group as it learns to cope with its problems of external adaptation and internal integration”.
However, these adaptations and goals aren’t only positive – they can also run “contrary to the pursuit of safety”.
He next talks about some similarities between many major accidents, which involve at least three traps (I’ve skipped the details here).
The push towards local traps is generated in part by the conflict between safety and production, in which culture plays a part. To remain competitive, organisations will push operations to the boundaries of acceptable performance to maximise gains (higher risk, higher reward).
Here Reason argues: “As the distance to the ‘edge’ diminishes so the number of local traps increases”. Local traps increase near the edge, as do the cultural pressures to get more work done with less.
A ‘safe culture’ is also one that prioritises a reporting culture, where an “intelligent and respectful wariness” is created by collecting, analysing and circulating intel.
He says that engineering a reporting culture isn’t easy, since it partially relies on people being willing to “confess their own slips, lapses and mistakes”. While a lot of different approaches can help here, “the single most important factor is trust”.
To generate the requisite trust, an organisation must first develop a just culture. So an effective reporting system depends crucially on how the organisation handles blame and punishment.
He takes aim at the utopian idea of ‘no blame’, saying “this is neither feasible nor desirable”. Further, a “culture in which all acts are immune from punishment would lack credibility in the eyes of the workforce”.
Reason says that a prerequisite for a just culture is that all members understand “where the line must be drawn between unacceptable behaviour, deserving of disciplinary action” [** though other authors argue that these delineations aren’t always so clear, given the role of power, privilege etc.]
Reason discusses the use of the substitution principle (asking whether the conduct would have been repeated by others in the same circumstances) and the use of peer evaluation.
Further, while things like procedural ‘violations’ may appear clear-cut (somebody didn’t follow the agreed process), reality isn’t always so simple. Procedures may be confusing, incomplete or inefficient. Hence, some processes are “rewritten on the hoof” by skilled workers who discover both a safe and a less labour-intensive means of doing a job.
Anyway, it’s argued that organisations shouldn’t typically use disciplinary actions against rule departures, since the “task of these decision makers should be to evaluate the erring technician’s conduct in the light of what was reasonable to do in the circumstances”.
Reason discusses a few other things, which I’ve skipped.
But in all, his version of a culture of safety involves:
· An informed culture
· A reporting culture
· A just culture
· [** He later added learning and flexible cultures, too]
He concludes with:

Ref: Reason, J. (1998). Achieving a safe culture: Theory and practice. Work & Stress, 12(3), 293–306.

Study link: https://www.raes-hfg.com/reports/21may09-Potential/21may09-JReason.pdf
LinkedIn post: https://www.linkedin.com/pulse/achieving-safe-culture-theory-practice-ben-hutchinson-2cyvc