
Really interesting 2012 article from Pate-Cornell about black swans, perfect storms and risk management.
Although Pate-Cornell isn’t likely a household name within safety, she’s one of the GOATs in risk analysis.
As usual, I’ve skipped a lot.
Her key thesis is that popularised concepts like black swans and perfect storms have “struck the public’s imagination” and are at times used indiscriminately for unthinkable or extremely unlikely events. More problematically, they may be used as “excuses to wait for an accident to happen”.
Some industries are said to ignore near misses as they tend to lean into the idea that some events are “so rare as to be unimaginable”.
She cites Taleb’s notion of black swans – outliers/extreme events so inconceivable or unlikely that they were effectively unthinkable. A key point is that such extreme events were “often arbitrarily characterized by Gaussian curves”, which are poorly suited to them.
Quantitative assessments in financial systems, based on Gaussian curves and the like, are “useless because they are likely to miss rare events that are not part of the database”. They also underplay fat tails (low probability / high consequence).
Perfect storms, by contrast, are a conjunction of very rare factors which combine at an unexpected time or in an unexpected way.
Aleatory vs Epistemic Uncertainty
Pate-Cornell covers an important distinction: between aleatory and epistemic uncertainty.
Perfect storms “involve mostly aleatory uncertainties (randomness) in conjunctions of rare but known events”, whereas black swans “represent the ultimate epistemic uncertainty or lack of fundamental knowledge”.
In short, aleatory uncertainty cannot be resolved simply with more information, since this uncertainty sits with randomness. Epistemic uncertainty, on the other hand, relates to limits of our knowledge (e.g. the unknown unknowns, and possibly also known unknowns).
[For those interested, there’s some really cool articles about epistemic and aleatory uncertainty, and John Downer talks a bit about this also – coining the concept of an ‘epistemic accident’.]
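To make the distinction concrete, here’s a minimal Python sketch (my own illustration, not from the paper). It uses a hypothetical component with a fixed per-demand failure probability: knowing that probability exactly still leaves run-to-run variation (aleatory), while not knowing its value is something more data genuinely shrinks (epistemic).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical component with a "true" per-demand failure probability.
p_true = 0.05

# Aleatory uncertainty: even with p_true known exactly, the number of
# failures in 100 demands still varies from trial to trial -- randomness
# that no amount of extra information removes.
print("failures in 100 demands (p known):", rng.binomial(n=100, p=p_true, size=5))

# Epistemic uncertainty: in practice we do NOT know p_true and must estimate
# it from data. A small sample leaves a wide range of plausible values; a
# larger sample narrows it. This is the uncertainty more data can reduce.
for n_obs in (20, 2000):
    observed = rng.binomial(n=n_obs, p=p_true)
    p_hat = observed / n_obs
    # crude 95% interval for the estimate (normal approximation)
    half_width = 1.96 * np.sqrt(max(p_hat * (1 - p_hat), 1e-9) / n_obs)
    print(f"n={n_obs}: estimated p = {p_hat:.3f} +/- {half_width:.3f}")
```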
Most major risk scenarios are said to involve both types of uncertainty. And while there are probably only a small number of truly unimaginable scenarios which can’t be foreseen upfront, there are often signals which can be observed or inferred. (Pate-Cornell gives the example of a new virus emerging, which was pretty relevant for COVID.)
As she puts it, “Reasoned imagination is thus an important part of risk assessment because it implies, first, anticipating by systematic analysis scenarios that have not happened yet”.

How to Think About These Two Kinds of Rare Events?
The idea of a perfect storm lies more in the eye of the beholder, because “For each “perfect” storm one can often imagine a worse one”. There may also be some “imperfect storms” that “do not combine worst-case values for all the factors involved, but are catastrophic nonetheless”.
The important part is that these events weren’t anticipated because “their conjunctions seem too rare to care about”.
The author gives some examples, like with the Fukushima nuclear plant accident. I’ve skipped a lot of context, but briefly a 14m tsunami hit the plant due to an earthquake. The plant’s seawall had a maximum height of 5.7m.
“Such a combination of events had not occurred in recent times, but had been recorded at least twice in history in that area, in the 9th and 17th centuries”. While historical events had been considered in setting the seawall height, they apparently didn’t go far enough back.
A large number of numerical calculations were also carried out for the nuclear reactor “under various conditions within a reasonable range”, combined with recent earthquake data. Nevertheless, a 2006 estimate by the Japanese authorities put “the probability that a tsunami in the Fukushima area could be more than 6 m high” at less than 10⁻² in the next 50 years.
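For a rough sense of why such a figure can look optimistic, here’s a back-of-the-envelope Poisson calculation. The inputs are assumptions for illustration only (two comparable events in roughly 1,200 years of record, loosely matching the 9th and 17th century mentions), not numbers from the paper.

```python
import math

# Illustrative numbers only: the article notes at least two comparable
# tsunamis in the historical record (9th and 17th centuries). Suppose,
# purely for this sketch, 2 events in roughly 1,200 years of record.
events = 2
record_years = 1200.0
rate_per_year = events / record_years            # ~1.7e-3 per year

# Simple Poisson model: chance of at least one such tsunami in 50 years.
horizon_years = 50
p_at_least_one = 1 - math.exp(-rate_per_year * horizon_years)
print(f"P(>=1 comparable tsunami in {horizon_years} years) ~ {p_at_least_one:.2f}")
# ~0.08 -- nearly an order of magnitude above a 10^-2-in-50-years figure,
# which is roughly the tension the article points at.
```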
Labelling Risks After the Fact
Perfect storms in the financial space aren’t often predicted by global statistics because “they are rare, and in an ever-changing world, may never have occurred quite in the same way”.
As such, “existing statistics may have lost their relevance”.
Pate-Cornell argues that among different metaphors of risk, whether the 2008 financial crisis was a black swan or perfect storm depends on one’s perspective – and isn’t even important anyway.
That is, “It may only be an excuse for failing to detect precursors and warning signals”. The nature of the financial market is such that decisions need to be made about being proactive and reducing risk—at some cost—or “reap the immediate benefits—with minor caveats—and rely on risk management after the fact to limit the damage”.
Proposing another metaphor, Pate-Cornell observes that “In reality, such market failures can be described as “brewing bubbles,” whose bursting was likely but the timing was unclear”.
Risk management approaches that successfully counter these types of uncertainty rely on “quick reactions to signals, improbable as they may look, either by acting immediately or gathering further data as quickly as possible”.
Risk Quantification Involving Rare Events Can Seldom be Based on Statistics Alone
Again I’ve skipped a lot here, but briefly, many people have tried to characterise the risks of low-probability/high-consequence events.
Nevertheless, “statistics alone (and frequencies) can characterize only randomness”. These techniques are said to be “helpful when a phenomenon is relatively stable, the sample size sufficient, and dependencies well understood”.
But they fall apart when used to “represent epistemic uncertainties when new or poorly known factors are at play (new economic structure, climatic conditions, or technologies)”. One then needs Bayesian probability to quantify and combine aleatory and epistemic uncertainties.
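As a sketch of what that combination can look like in practice (my own illustration with made-up numbers, not the paper’s method), a conjugate Gamma-Poisson update treats uncertainty about a rare event’s annual rate as epistemic, the event counts themselves as aleatory, and mixes both in the posterior predictive:

```python
import numpy as np

rng = np.random.default_rng(1)

# Epistemic uncertainty about an annual event rate, expressed as a Gamma prior.
# (Hypothetical prior: mean rate of 0.01/yr, deliberately vague.)
prior_shape, prior_rate = 0.5, 50.0

# Hypothetical observations: 1 event seen in 60 years of records.
observed_events, observed_years = 1, 60.0

# Conjugate update: Gamma prior + Poisson likelihood -> Gamma posterior.
post_shape = prior_shape + observed_events
post_rate = prior_rate + observed_years

# Posterior predictive for the next 50 years mixes both uncertainty types:
# sample a rate (epistemic), then sample a count given that rate (aleatory).
rates = rng.gamma(post_shape, 1.0 / post_rate, size=100_000)
counts = rng.poisson(rates * 50)
print("P(at least one event in 50 years) ~", (counts > 0).mean())
```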
Dependencies Are Key Risk Factors: The Role of External Events and Human Errors
Here the author observes that “A critical feature of the probability of a scenario is the level of dependence among the factors involved”. System dependencies are critical for analysing perfect storms, too.
It’s argued that dependencies within engineering systems are “major sources of failure”, including dependencies with external events, human performance, and subsystems.
Single-point failures are said to not usually be a problem in safety-critical systems if the components are robust enough. Redundancies are also often part of the solution, providing additional capacity.
Redundancies “may be of little worth, however, if their failures are highly dependent”, as the sketch below illustrates.
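A small illustration of the point, using assumed numbers and a beta-factor style common-cause model rather than anything specific from the paper:

```python
# Illustrative numbers: why redundancy helps less when failures are dependent.
p_channel = 1e-3            # per-demand failure probability of one channel

# Two fully independent redundant channels:
p_system_independent = p_channel ** 2                    # 1e-6

# Beta-factor style common-cause model (an assumption for this sketch):
# a fraction beta of each channel's failures comes from a shared cause
# that takes out both channels at once.
beta = 0.1
p_common_cause = beta * p_channel                        # 1e-4
p_independent_part = (1 - beta) * p_channel
p_system_with_ccf = p_common_cause + p_independent_part ** 2

print(f"independent redundancy : {p_system_independent:.1e}")
print(f"with common cause      : {p_system_with_ccf:.1e}")
# The dependent case is roughly 100x worse than the independence
# assumption suggests -- the redundancy buys far less than hoped.
```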
Human Errors Are Seldom “Black Swans”
While most accidents involve errors in some ways, “these behaviors, in turn, are often influenced by the structure, procedures, and culture of the organization”.
Pate-Cornell isn’t convinced that human performance is as uncertain and unpredictable as it is sometimes portrayed, noting that it should be accounted for in risk assessments. Errors often have their roots in management decisions, hiring practices, work schedules, incentives and more.
Therefore, “Including those factors in a risk analysis model requires linking the performance of the physical system to agents’ behaviors, negative or positive, rational or not, and these behaviors, in turn, to management factors in order to provide a more complete assessment of the failure risks”.
Risk assessment approaches also fail to capture the positive actions of people, who perform beyond expectations “and may save the day by unusually courageous, competent, or effective behavior”.
Some of the issues related to human performance are said to be “the direct results of incentives that can be anticipated given the rewards, yet are sometimes dismissed as “black swans” after the fact”.
For instance, things which shouldn’t be surprises are “cases in which managers set schedule or budget constraints that are too tight and may cause their agents to cut corners in ways that they would probably disapprove if they were aware of it”.
Black Swans vs Perfect Storms: Horses for Courses
Here it’s said that since black swans and perfect storms are different, so are the risk management approaches for each. Perfect storms require “a long-term observation of the records of threat scenario components and a careful evaluation of their marginal and conditional probabilities”.
After considering the possibilities of combinations, changes to design and risk management strategies can then be undertaken.
In contrast, a prior prediction of black swan events may not be possible, but “signals can emerge and have to be properly observed and interpreted to permit a fast reaction”.
Therefore, black swans require “reasoned imagination and updating of probabilities based on degrees of belief”. The author cites 9/11, where a director of the investigation commission called the misreading of precursors to these events a “failure of imagination”.
In both black swans and perfect storms, the first way to reduce risks is “the systematic observation and recording of near-misses and precursors”. But this, of course, requires an effective warning system, which in turn depends on costs and values.
Further, “When the signal is imperfect, deciding when to issue an alert involves managing a tradeoff between false positives and false negatives”. Hence, a new question emerges: “When to respond to the warning and at what level of risk given the quality of the signal, the lead time, and the consequences of an event?”
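One common way to frame that tradeoff is a simple expected-cost comparison; the sketch below uses hypothetical costs and signal probabilities, not figures from the paper.

```python
# Hypothetical costs and probabilities for an imperfect warning signal.

def expected_costs(p_event_given_signal: float,
                   cost_false_alarm: float,
                   cost_missed_event: float) -> tuple[float, float]:
    """Expected cost of alerting vs. waiting, given the signal."""
    cost_if_alert = (1 - p_event_given_signal) * cost_false_alarm
    cost_if_wait = p_event_given_signal * cost_missed_event
    return cost_if_alert, cost_if_wait

# A weak signal, a cheap false alarm, and a very costly missed event.
alert, wait = expected_costs(p_event_given_signal=0.02,
                             cost_false_alarm=1.0,
                             cost_missed_event=500.0)
print(f"expected cost if alert: {alert:.2f}, if wait: {wait:.2f}")
# Even a 2% chance justifies the alert here (0.98 < 10.0). The break-even
# probability is cost_false_alarm / (cost_false_alarm + cost_missed_event).
```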
The author also briefly discusses how to address these issues at the design stage. This is said to be difficult for decision makers given the ambiguity and nature of the uncertainties. Decision makers generally prefer “to face a lottery that is “firmer” (less epistemic uncertainties)”, and some are very explicit about it.
And therefore: “Black swans, since they are initially unknown, are not even on the radar screen of risk managers … But as soon as they begin to be perceived, they lead to decision problems that are dominated by epistemic uncertainty, and unless surprises accumulate, gradually involve more randomness”.
A European tradition, Pate-Cornell suggests, has been the precautionary principle, to “address the emergence of what they consider the equivalent of black swans, banning the activity or the product until enough information has been gathered, regardless of the potential benefits”. However, this approach has been questioned.
Risks of Failure of Nuclear Reactors
The failure of reactors was also discussed. The author refers to the classic WASH-1400 study which was undertaken “at a time when the risk could only be assessed based on the identification of events combinations that had not happened before but could be anticipated”.
This resulted in assessments which were “a mix of aleatory and epistemic uncertainties based on scenarios of ‘perfect storms’”. Some progress in techniques has subsequently occurred.
The Columbia disaster is also discussed. The heat shield damage was said to be a surprise to the agency, when it shouldn’t have been, since a 1990 study had highlighted the risk of tile loss.
It’s argued that “This type of heat shield failure had never happened at the time of the study, and for lack of statistical data, the risk was considered tolerable”, even though this scenario was covered in the 1990 study.
In any case, “The possibility was identified and the risk was assessed—but, of course, the event was not ‘predicted’”.
Conclusion
In sum:
· The differences between black swans and perfect storms depend on the beholder and probably make little difference in practice
· “Problems arise when these terms are used as an excuse for failure to act proactively”
· “Clearly, one cannot assess the risks of events that have really never been seen before and are truly unimaginable. In reality, there are often precursors to such events”

Ref: Paté‐Cornell, E. (2012). On “black swans” and “perfect storms”: Risk analysis and management when statistics are not enough. Risk Analysis: An International Journal, 32(11), 1823-1833.

LinkedIn post: https://www.linkedin.com/pulse/black-swans-perfect-storms-risk-analysis-management-when-hutchinson-yffgc