
An interesting discussion paper from David Woods, in response to a paper from Andrew Hopkins discussing process safety indicators (see my article from a few weeks back).
** I haven’t done a good job of this – so I suggest you read the original paper. You might want a strong coffee.
Woods takes a bit of a detour, avoiding debates about definitions and focusing more on what types of intel an organisation actually needs, how it obtains that intel, and how organisations can become decoupled from effective and proactive oversight and management of risk.
First, he suggests that statements about organisations needing to develop and support mechanisms to create foresight about emergent risks “remain hopes rather than herald well specified action plans”.
He takes the position that a key difficulty with establishing leading indicators relates to a form of organisational failure – failures of foresight.
Hence, “It is not that data about leading indicators are unavailable, rather, what will be seen as clear warnings after the fact were discounted before the fact”.
Woods argues that Hopkins’ paper seems to suggest leaders will know a good indicator when they see it, but this may not be the case; failures to recognise the warning signs, the indicators, relate to breakdowns in sense-making.
He says that recognising organisational failures of foresight as common in complex settings reveals some patterns:
a) establishing foresight is extremely difficult cognitive work; it’s an unstable process given pressures from an organisation’s setting and management
b) the difficulties emerge from “basic dynamic and cyclic patterns in how adaptive systems behave”
c) measures of how and where a system is resilient or brittle provide a resource for developing and sustaining foresight
One universal finding is that hindsight wonderfully reveals the ‘clear signals that disasters loomed ahead’, whereas before the incident, these ‘warning signs’ were discounted by various actors as not being relevant at that time. He draws on the Columbia accident, where the CAIB report recognised the challenge that NASA had in balancing safety risks with intense production pressure.
This factor combined with other patterns, highlighting a “fragmented distributed problem solving process that was missing cross check mechanisms and unable to see the big picture, the result was an organization that could not see its own blind spots about risks”.
Further, he argues that “NASA was unable to revise its assessment of the risks it faced and the effectiveness of its countermeasures against those risks as new evidence accumulated”. Hence, the accumulation of risks became “essentially invisible to people working hard to produce under pressure, so that the organization was acting much more riskily than it knew or wanted”.
He says that this evolution toward failure leading up to the Columbia accident was driven by a mis-assessment that was resistant to change, e.g. the mis-assessment that foam strikes posed only a maintenance issue and not a flight risk. Moreover, this mis-assessment was compounded by NASA’s inability to re-evaluate and re-examine incoming evidence about its vulnerability.
This inability to update mindsets or models of emerging risk can be described as ‘distancing through differencing’. This highlights a process where people or groups focus on the differences between events, circumstances or data, real or imagined, rather than focusing on the potential similarities.
NASA was also under ‘Faster, better, cheaper’ pressure, leading management to embrace more and more risky decisions without realising this shift was happening. It manifested in weaknesses in space mission planning and disappearing cross-checks, which increased brittleness and “blocked the management from seeing signals that risk was becoming unacceptably high”.
Hence, organisations “need mechanisms to assess the risk that the organization is operating nearer to safety boundaries than it realizes – a means to monitor for risks in how the organization monitors its risks relative to a changing environment”.

Foresight is fragile
He talks here about how the pressures experienced by an organisation, like tightened production goals, naturally incentivise the downplaying of schedule disruptions. With less time and money, the additional pressure gradually leads to “a narrowing of focus on some goals and the supporting activities and information while obscuring the tradeoff with other goals”.
This type of tradeoff is called a sacrifice judgement: these occur when someone faces a trade-off in deciding whether acute production/efficiency goals should be temporarily relaxed to reduce risk as operations approach the safety boundaries.

Building foresight
Here he discusses some challenges for an organisation to build foresight mechanisms. One is the constant balance between production and safety.
Notably, if an organisation never sacrifices production to follow up on warning signs then it’s operating far too riskily. Conversely, an organisation is acting far too cautiously if it constantly halts or slows production to follow up on uncertain warning signs, at the expense of acute goals.
He argues that “it is precisely at points of intensifying production pressure that extra investments need to be made for safety in the form of proactive searching for side effects of the production pressure”, and that extra investments in safety “are most important when least affordable”.

Mal-adaptive dynamics
Here it’s noted that the difficulties of establishing foresight are not “simply patterns of human behavior but arise from basic dynamic and cyclic patterns in how adaptive systems behave in general”.
A challenge is that making a system operate more optimally in some ways in response to variations or uncertainties “will also make that system more brittle when exposed to novel events, variations or uncertainties that fall outside this design envelope”.
Woods proposes some questions about organisational behaviour in response to production/safety conflicts, including:
· “can an organization step outside itself and examine its own adaptive capacity in order to recognize when brittleness is on the rise”
· Can organisations “recognize when it has underestimated the potential for complicating factors and surprises to occur”
· Can organisations “recognize the actual sources of resilience it depends on, when these fall outside the formal description of how the system should work”
· Can an organisation “recognize that it is overly reliant on only a few sources of resilience (more precarious than it thought)”
Assessing system brittleness and resilience
It’s said that resilience/brittleness is a parameter of a system capturing how well the system can adapt to handle events that challenge its boundary conditions.
The challenges occur because:
· “plans and procedures have fundamental limits”
· The environment changes
· The system itself adapts given different pressures and expectations for performance
Adaptive capacity
Adaptive capacity refers to how a system stretches in response to changes in demands or loads. Measures of adaptive capacity are said to assess how the system is in some ways resilient, and in other ways brittle to different challenges.
A system’s capacity to stretch in response to demands is needed to avert major failure due to accumulating stresses/strains. The ability to stretch is provided via local adaptations, and these “are provided by people and groups as they actively adjust strategies and recruit resources so that the system can continue to stretch”.
Resilience is said to be a form of adaptive capacity, and is a “system’s potential for adaptive action in the future when information varies, conditions change, or new kinds of events occur, any of which challenge the viability of previous adaptations, models, or assumptions”.
Assessing resilient capacities “does not assess adherence to procedures in operations, rather it looks for gaps between the procedure system and the variations, uncertainties, events and complicating factors that can arise to challenge procedural work”.
Hence, the “search for how an organization is resilient or brittle leads us to look beyond conformance to standards and norms that management believes govern behavior in order to see how people demonstrate expertise beyond these norms – how they anticipate bottlenecks”.
This search leads efforts to consider people “not as a source of uncontrolled variance but as one normal source for local adaptive action which usually makes systems work despite gaps”.
Managing system resilience
Here it’s argued that once information about how and where an organisation is resilient and brittle is tracked over time, this intel can be used to supply the missing control feedback signals and buttress gaps in foresight.
For example, this intel may signal whether expected buffers are being depleted (resourcing, safety margins, staffing, expertise or more); it can assess whether margins are becoming more precarious, or whether processes are becoming more rigid as squeezes and deadlines tighten.
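To make the idea of tracking such signals concrete, here is a minimal, purely illustrative sketch – not something from Woods’ paper. It imagines a hypothetical BufferIndicator that records periodic readings of a buffer (say, spare staffing capacity) and flags a sustained decline; the class name, window size and threshold are all assumptions for illustration only.

```python
# Illustrative sketch only – a hypothetical way an organisation might track whether
# a buffer (e.g. spare staffing capacity or maintenance slack) is being depleted
# over successive reporting periods. Names, window and threshold are assumptions.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class BufferIndicator:
    name: str
    history: list[float] = field(default_factory=list)  # one reading per reporting period

    def record(self, value: float) -> None:
        self.history.append(value)

    def is_depleting(self, window: int = 4, drop_threshold: float = 0.15) -> bool:
        # Flag the buffer when the recent average has fallen well below the earlier average.
        if len(self.history) < 2 * window:
            return False  # not enough history to judge a trend
        earlier = mean(self.history[-2 * window:-window])
        recent = mean(self.history[-window:])
        return earlier > 0 and (earlier - recent) / earlier >= drop_threshold


# Usage: feed in periodic readings, then ask whether the buffer looks precarious.
staffing = BufferIndicator("spare staffing capacity")
for reading in [10, 10, 9, 9, 8, 7, 6, 5]:
    staffing.record(reading)
print(staffing.name, "depleting:", staffing.is_depleting())  # -> depleting: True
```

The point is not the arithmetic but the feedback loop: routinely recording where margins sit and asking whether they are trending toward the boundary, rather than relying on hindsight to reveal the warning signs.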

Finally, Woods ponders whether large, complex organisations can really convert the promise of resilience “into specific measures and indicators for organizational decision making prior to accidents”.
Without providing an exact answer, he instead offers some critical thoughts and questions:
· Can organisations look at themselves and determine the basic sources of resilience their systems depend on?
· Are these capacities limited or can they be made more adaptive?
· Are they at risk of depleting under certain circumstances?
· Does taking this ‘adaptive’ systems stance provide any answers to this dilemma of absence of foresight?
· Can organisations measure and track changes in how and where their systems are resilient and brittle?
· Do organisations even have the tools to adequately influence system resilience when “brittleness is on the rise?”
He concludes that specific, tested and pragmatic answers to these questions had, as of this paper’s date, not been established.
Nevertheless, “the adaptive stance provides an alternative diagnosis to the leading indicator dilemma and a promising direction for innovation and testing”.
Ref: Woods, D. D. (2009). Escaping failures of foresight. Safety science, 47(4), 498-501.
Study link: https://doi.org/10.1016/j.ssci.2008.07.030
My site with more reviews: https://safety177496371.wordpress.com
LinkedIn post: https://www.linkedin.com/pulse/escaping-failures-foresight-ben-hutchinson-mzyyc