
This discussion paper by Sven Ove Hansson explores seven myths of the risk construct.
Way too much to cover, so just a few extracts.
The first myth of risk: risk must have a single, well-defined meaning
Risk has many common definitions. An example given is that lung cancer is one of the major risks that affect smokers.
In this sense, 1) risk = an unwanted event, which may or may not occur.
If the statement is reframed to “Smoking is by far the most important health risk in industrialised countries”, then risk takes on another definition:
2) Risk = the cause of an unwanted event, which may or may not occur.
If the smoking-related risks are quantified, and the statement is reframed to “There is evidence that the risk of having one’s life shortened by smoking is about 50%”, then risk represents:
3) Risk = the probability of an unwanted event, which may or may not occur.
Another variation of risk is also covered:
4) Risk = the statistical expectation value of unwanted events, which may or may not occur.
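As a minimal sketch (the notation here is mine, not the paper’s), this definition amounts to a probability-weighted sum of outcome severities:

```latex
% Expectation-value definition of risk (illustrative notation):
% n possible unwanted events, where event i occurs with probability p_i
% and has severity s_i (e.g. number of deaths).
\mathrm{Risk} \;=\; \mathbb{E}[\text{severity}] \;=\; \sum_{i=1}^{n} p_i \, s_i
```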
The author discusses a distinction between risk and uncertainty. Whereas for risk “probabilities are assumed to be known”, or at least knowable, in uncertainty the probabilities aren’t known (or may be unknowable).
From this perspective, “The probabilities of various smoking-related diseases are so well-known that a decision whether or not to smoke can be classified as a decision under risk”.
Hence, 5) risk = the fact that a decision is made under conditions of known probabilities (decision under risk).
As of this article’s publication date, Hansson says that definition 4, risk as a statistical expectation value, is the most common technical definition.
And while statistical expectation values have been calculated since the 17th century, their use as a definition of risk took off after the influential Rasmussen report (WASH-1400) on nuclear reactor safety.
The second myth of risk: the severity of risks should be judged according to probability-weighted averages of the severity of their outcomes
Next it’s argued that how ‘risk’ is used doesn’t matter too much, provided each specific use is well-defined.
One challenge is that defining risk as an expectation value tends to bring with it the view that the severity or acceptability of risks should also be measured as expectation values.
An advantage is that expectation values are “simple, operative, and mathematizable”.
But, problematically, it “often goes severely wrong when applied to real-life problems”.
A key reason is that it factors in only probabilities and utilities, whereas in real life there are often other factors which can and should influence appraisals of risk.
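As a minimal sketch of that limitation (made-up numbers, not from the paper), two risks can share an expectation value while differing in everything else that matters:

```python
# Toy example (invented numbers): identical expectation values can mask
# morally relevant differences between risks.

def expected_harm(outcomes):
    """Probability-weighted average severity: sum of p * severity."""
    return sum(p * severity for p, severity in outcomes)

# Option A: one certain death.
option_a = [(1.0, 1)]

# Option B: a 1-in-1000 chance of a disaster killing 1000 people.
option_b = [(0.001, 1000), (0.999, 0)]

# Both come to 1.0 expected death, yet they differ in who bears the
# risk, who chose it, and how the harm is distributed.
print(expected_harm(option_a))  # 1.0
print(expected_harm(option_b))  # 1.0
```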
For instance, risks are “inextricably connected with interpersonal relationships”, and “They do not simply exist: they are taken, run, or imposed”.
Hence, risk should include “person-related aspects such as agency, intentionality, consent, equity, etc”.
Analyses of risk that incorporate agency and responsibility will “be an analysis more in terms of the verb (to) risk than of the noun (a) risk”.
Major policy debates on risks have been fuelled by this tension between the noun and verb uses of risk. Risk analysts and experts tend to prefer the noun (the size of the risk, risk as an object), whereas members of the public often treat it more as a verb (who is taking, running, or imposing the risk).
Hence a one-sided focus on probabilities and outcomes, to the exclusion of other pertinent factors, is one reason why experts have trouble communicating effectively with the public.
Therefore, “Instead of blaming the public for not understanding probabilistic reasoning, risk analysts should learn to deal with the moral and social issues that the public rightly put on the agenda”.
The third myth of risk: decisions on risk should be made by weighing total risks against total benefits
This type of risk analysis is said to permeate most applications. It is “seductive: at first glance it may even seem to be obviously true”. After all, who can object to weighing the advantages and disadvantages and then determining which sum is greater?
But it lacks a moral dimension, because the advantages and disadvantages of risk decisions affect stakeholders differently. If one group receives most of the benefits while another bears most of the negatives, simply comparing totals isn’t sufficient.
E.g. “A disadvantage for one person is not necessarily outweighed by an advantage for another person, even if the advantage is greater than the disadvantage”.
So cost-benefit analyses don’t necessarily take personal costs seriously. Take pollution: the total economic advantages of a polluting industry may be positive for the overall population, but the health impacts on particular communities can be disproportionate.
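A toy illustration of the point (the numbers are invented, not taken from the paper):

```python
# Toy cost-benefit sums (invented numbers): a positive aggregate can
# coexist with one group being left strictly worse off.

groups = {
    "industry owners":    +80,  # profits
    "general population": +30,  # cheaper goods, employment
    "downwind community": -40,  # pollution-related health impacts
}

# Totals alone say 'approve': aggregate net benefit is +70.
print(f"aggregate net benefit: {sum(groups.values()):+d}")

# But the totals hide the distribution: the downwind community bears a
# concentrated harm that the other groups' gains do not compensate.
for group, net in groups.items():
    print(f"{group}: {net:+d}")
```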
Moreover, the poor or powerless tend to face greater imposed risks.
The fourth myth of risk: decisions on risk should be taken by experts rather than laymen
Here “The view that risk issues are for experts, so that the public had better just follow their advice, is seldom expressed directly, but is implicit behind much of the current debates”.
Some attempts at legitimising risk management have also advanced the idea that it should be based exclusively on science.
But this ignores a lot of other pertinent factors. I’ve skipped most of this section.
The fifth myth of risk: risk-reducing measures in all sectors of society should be decided according to the same standards
This myth is said to be an extreme form of #4. It “consists in the conviction that risk analysts should perform analysis with uniform methods for all parts of society, be it mammography, workplace safety, railway safety, or chemicals in the environment”.
The logic is that comparing risks across different sectors requires standardised measures, which can then be used to allocate resources for risk reduction. Some claim this is “the only rational way to decide on risks”.
But this type of approach only makes sense if risk issues could be treated in complete isolation from the rest of society. In reality, risk issues are dispersed across all of society, “where they are parts of various larger and more complex issues”.
Hence, standardising all areas to the same risk measures is insensitive to the factors that matter to different communities.
The sixth myth of risk: risk assessments should be based only on well-established scientific facts
A drive to use only well-established scientific principles and data in risk assessment has been advanced over time, for instance in chemical risk assessment. The author notes that few would object to such a demand at face value: why shouldn’t we rely only on robust and sound science?
Scientific knowledge begins with data from observations and experiments etc., then is critically assessed and incorporated into the scientific corpus. The corpus “Roughly speaking … consists of those statements which could, for the time being, legitimately be made without reservation in a (sufficiently detailed) textbook”.
But for a statement to be accepted, even for the time being, as part of the corpus, “the onus of proof falls squarely on its adherents”.
Likewise, those who advance an as yet unsubstantiated idea carry the burden of evidence. These are integral principles of science.
For many risk policy decisions, drawing on the corpus is a sensible approach, but “in the context of risk an exclusive reliance on the corpus may have unwanted consequences”.
For instance, a chemical may be suspected to be hazardous to health but lack sufficient evidence.
Nevertheless, “If sound science means good science, then all rational decision-makers should use sound science, combining it with the decision criteria they consider to be appropriate for the social purposes of the decision”.
However, this isn’t said to be the common application of ‘sound science’. Instead, it often refers to the “application of certain value-based decision criteria that are incorrectly depicted as being derivable from science”.
Hence, sound science can be weaponised into a “political slogan”.
The seventh myth of risk: if there is a serious risk, then scientists will find it if they look for it
“It is often implicitly assumed that what cannot be detected cannot be a matter of concern either”.
Therefore, the reasoning goes, if no ill-effects are found, we can reasonably assume that none exist.
But, clearly, this isn’t so. On the contrary, “surprisingly large health effects can be difficult or even impossible to detect in a human population, even in the rare cases where exposures and disease incidences are carefully recorded and analysed”.
An example is provided: assume 1000 persons are exposed to a chemical that increases lifetime mortality in coronary disease from 10% to 10.5%. Here, “Statistical calculations will show that this difference is in practice indistinguishable from random variations”.
An epidemiological study comparing an exposed and an unexposed group will have little chance of detecting the increase in harm, since “More generally, epidemiological studies cannot (even under favourable conditions) reliably detect excess relative risks unless they are greater than ten per cent”.
For risks that occur over a lifetime, like cancers, “lifetime risks are of the order of magnitude of about ten per cent. Therefore, even in the most sensitive studies, an increase in lifetime risk of the size 10⁻² (ten per cent of ten per cent) or smaller may be indistinguishable from random variations”.
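As a rough sanity check of those figures (my own back-of-the-envelope two-proportion comparison, not the paper’s calculation):

```python
# Back-of-the-envelope check: 1000 exposed persons, lifetime coronary
# mortality raised from 10% to 10.5% (the paper's example).
import math

n = 1000              # persons per group (assuming an equal-sized control group)
p0, p1 = 0.10, 0.105  # mortality: unexposed vs exposed

# Standard error of the difference between two observed proportions.
se = math.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)

# Expected z-score of the true 0.5-percentage-point difference, versus
# the ~1.96 needed for statistical significance at the 5% level.
z = (p1 - p0) / se
print(f"expected z = {z:.2f}")  # ~0.37, nowhere near 1.96

# In expectation: ~100 vs ~105 deaths. The 5 extra deaths are real,
# but a gap that small is easily produced by chance alone.
```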
Therefore, “It is a major reason why the inference from no known risk to no risk is a dangerous one” (emphasis added).
The author then discusses the precautionary principle, which seeks to help protect against these types of issues.
The author discusses a few more things that I’ve skipped.

Study link: https://link.springer.com/content/pdf/10.1057/palgrave.rm.8240209.pdf
LinkedIn post: https://www.linkedin.com/pulse/seven-myths-risk-ben-hutchinson-6cezc