Probability Times Consequence: Rational and Scientific, or Just Imprudent?

Norman Rubin
University of Waterloo Press
May 23, 1986

Originally presented at a Symposium on the Risks and Benefits of Energy Alternatives held at the University of Waterloo, May 20-23, 1986.

I’d like to begin with a few words of introduction, which might help prevent some misunderstanding of my position in this paper. Like most delegates to this conference, I was raised as a scientist and a quantifier; in my case, that led to a degree from M.I.T. just about twenty years ago. I am, in short, part of that minority in society which I and others often refer to as technocrats, the technically literate, or occasionally as “techno-twits”. I am also, personally, not particularly risk-averse, and if anybody needs proof of that, just take a look at the car that I drove to this conference!

As part of my training at M.I.T., I was taught the meaning of “risk”. I was taught that risk is a technical term, that it is basically a negative value of the technical term “expectation”, and that it is equal to the probability of an unfortunate event multiplied by its consequence. Thus, risk, as I was taught, is one of several English words (like work, colour, and charm) that scientists have borrowed or stolen and given very specific definitions. For example, I was also taught that work is the application of force through a distance, and is mathematically the force multiplied by (or integrated over) the distance.

It was made clear that the public, and the makers of dictionaries like Webster’s, continue to attribute other meanings to these words. So, for example, a Physics professor, who knows that work = force x distance, can still ignore that definition when talking to a teaching assistant who thinks that standing motionless in front of a class for thirty hours a week is too much work. In fact, the technical definition of work is obviously a nuisance in any discussion of human work, working conditions, labour relations, appropriate pay for doing work, etc.

In risk, as in work, it was made clear at M.I.T. (as it has been made clear at this conference) that the public definition and the technical definition are different. For example, as many speakers here have acknowledged, the high (estimated) consequences of potential catastrophes like nuclear meltdowns concern the public far more than the low (estimated) probabilities of those catastrophes. The technical definition obviously gives precisely the same weight to each.

But while the technical definition of work and the “normal person’s” definition of work were both acknowledged by my professors to be meaningful concepts, I was taught that the public’s definition of risk is irrational and emotional: in short, a “Bad Thing”, to be educated out of them by scientists like us.

So, like most “techno-twits”, I was taught (1) that the concept of risk is linear from zero to infinity, (2) that it is the product of probability and consequence, each of which is also linear from zero to infinity, (3) that probability and consequence are not only commensurable, but interchangeable, and (4) that anybody who disagrees is irrational and uneducated. Implicitly, if not explicitly, I was taught that public policy on hazards should be designed to minimize the product of probability times consequence, and that society should spend money avoiding hazardous activities only when it is cost-effective to do so, using that technical definition of risk.

The problem with all of this is simply that none of it is science, although it is taught in science classes. It is either religion or politics, but it is not science. In fact, I would suggest that it is precisely part of the current religion of science.

As religious dogma, the technical view of risk is unchallengeable. But as science (that is, as a guide to rational thought), the technical view of risk is a bit like Newtonian physics: it works well as a rule of thumb as long as all the numbers are “in the middle of the scale”. But when the numbers become extremely large or extremely small, it breaks down; in fact, it produces patent nonsense. So, for example, I myself multiply probability times consequence when I play poker, but only when I am betting amounts I can easily afford to lose.

In fact, I believe the behaviour of scientists as human beings generally violates their own stated definition of rational behaviour where risk is concerned. Two questions are illustrative here: (1) How does a scientist gamble? and (2) How does a scientist insure? I suspect that there are very few scientists who would bet their house, or their entire estate, on even a very good poker hand. Yet, if the probability of winning is over 50%, the effect of not betting, or of betting less, is (according to the linear definition) tantamount to throwing away large sums of money, which scientists do not usually do. Here the scientists throw out their linear risk-benefit calculations and agree with the public’s common-sense idea of prudence: betting the farm, even on a very good hand, is imprudent and irresponsible unless there’s no alternative. In short, the public and the scientist-gambler agree in this case that consequence (losing the farm, the house, the whole estate) is simply more important than probability (odds are, you won’t lose).
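A rough sketch of the arithmetic behind that intuition, with illustrative numbers that are not from the talk itself (a 60% chance of winning an even-money bet, a $100,000 estate, and a simple ruin test standing in for prudence):

```python
# Illustrative sketch (numbers assumed, not from the talk): an even-money bet
# with a 60% chance of winning, looked at two ways.

def linear_expectation(p_win: float, stake: float) -> float:
    """Probability-times-consequence view: the expected gain of the bet."""
    return p_win * stake - (1 - p_win) * stake

def prudence_view(stake: float, estate: float) -> str:
    """The 'don't bet the farm' view: any chance of total ruin is unacceptable."""
    if stake >= estate:
        return "unacceptable: one loss wipes out the whole estate"
    return "tolerable: a loss still leaves something to live on"

estate = 100_000.0
for stake in (100.0, 10_000.0, 100_000.0):
    gain = linear_expectation(0.6, stake)
    print(f"stake ${stake:>9,.0f}: expected gain ${gain:>7,.0f} -> {prudence_view(stake, estate)}")
```

The expectation column says the biggest bet is the best one; the last column is why almost nobody, scientist or not, makes it.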

Similarly, I suspect that many of you now hold insurance policies. Yet it is childishly simple to show that any insurance policy (unless it was acquired fraudulently) decreases your expectation and increases your “technical” risk. So, for example, if you have examined your estate and determined that it is, say, $50,000 smaller than the minimum amount you would consider “acceptable” for your spouse and children to have if you died tomorrow, you would probably respond by buying a $50,000 life insurance policy. But if there is statistically a one-in-a-hundred chance of your dying this year, the risk that the policy eliminates is exactly $500 (1/100 probability times $50,000 consequence), and the policy will surely cost more than that. Why would a scientist ever insure? Simply because, as in this case, the consequence (dying and leaving your heirs impoverished) is unacceptable, even though the probability is low. As human beings, even scientists understand that “It’s only one in a hundred” is not a satisfactory answer to an unacceptable consequence, if the risk is your own. The only satisfactory answer is to make it zero, to take it away, which is exactly what the insurance company does for a fee. Another policy, one that would pay the $50,000 on your death nine times out of ten, or even ninety-nine times out of a hundred, would not solve the problem, because the problem is neither probability, nor probability times consequence, but the presence of a non-zero chance of an unacceptable outcome.
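The same arithmetic, restated directly; the $750 premium is an assumed figure, chosen only because it exceeds the $500 the policy is technically worth:

```python
# Restating the insurance arithmetic from the text.
shortfall = 50_000.0   # gap between the estate and the "acceptable" minimum
p_death = 1 / 100      # one-in-a-hundred chance of dying this year
premium = 750.0        # hypothetical premium, assumed to be more than $500

technical_risk_removed = p_death * shortfall          # = $500
net_cost_in_expectation = premium - technical_risk_removed

print(f"risk the policy eliminates: ${technical_risk_removed:,.0f}")
print(f"net cost in pure expectation terms: ${net_cost_in_expectation:,.0f}")
# On the linear definition the policy is a losing proposition, yet people buy it,
# because it reduces the chance of the unacceptable outcome (impoverished heirs) to zero.
```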

So, the scientist’s behaviour, when gambling or insuring, generally departs from the “rational”, linear model in favour of the public’s “irrational” human aversion to unacceptable, catastrophic consequences, even unlikely ones. This general human attitude is well encapsulated by what I call The American Express Principle: “Never carry more than you can afford to lose”.

At several points during this conference, we have already heard the overwhelming majority of delegates from the nuclear industry agree that “the public doesn’t understand risk,” and more specifically that “the public understands consequences, but doesn’t understand probability.” I see the public’s attitude toward nuclear risks in a different way: (1) the public doesn’t put its faith in vanishingly small theoretical probabilities of inherently possible failures in new poison-containment systems like nuclear reactors, and (2) the academic/nuclear-industry “risk” community does not understand how and why the public values consequences (specifically, catastrophic consequences) as it does.

The basic reason for the public attitude, and the reason that it is ultimately more rational than minimizing estimated probability times estimated consequence, is survival. And by survival I do not primarily mean survival of the individual, but of the group: the family, the tribe, the village, the nation, the culture, the race, the species. We are, all of us, the direct descendants of a long line of survivors, and our genes and our culture reflect that evolution. There may have been tribes of ancients that hit on the idea of minimizing the product of probability times consequence. It would be fascinating to interview their descendants, but it is impossible, for they long ago became extinct. One can imagine a situation: a crop failure, and the tribe is hungry enough that they estimate that, on average, two of their fifty members will be weakened enough overnight that they will die. But they have found a field full of a new mushroom, one that looks quite a bit like a mushroom that is safe and nourishing. Their wise men estimate that there is only one chance in fifty that the mushroom is poisonous. Their choice is mathematically obvious, and leads eventually, inevitably, to extinction. Our ancestors fed the mushroom to the dog.
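Worked out as a probability-times-consequence calculation, the tribe’s choice looks like this, on the added assumption (not stated in the text) that a poisonous mushroom kills everyone who eats it:

```python
# Mushroom decision worked out as expected deaths (illustrative assumptions:
# the whole tribe of 50 eats, and a poisonous mushroom kills every eater).
tribe = 50
deaths_from_hunger = 2                          # expected deaths from doing nothing
p_poisonous = 1 / 50
expected_deaths_from_eating = p_poisonous * tribe   # = 1

print(f"expected deaths from hunger: {deaths_from_hunger}")
print(f"expected deaths from eating: {expected_deaths_from_eating:.0f}")
# Minimizing expected deaths says "eat" -- but one year in fifty, "eat" means
# the tribe ceases to exist. Repeat the gamble often enough and extinction is certain.
```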

If there is one thing that is clear, it is that our modern society has increased its power to threaten group survival, perhaps even the survival of the entire human species. And those few specific technologies that create the most alarm in the public are all credibly group-threatening: chemicals, nuclear energy, genetic engineering, and of course nuclear weaponry. I personally believe that all of these are in fact credibly species-threatening, although I would be content for today to have you delegates agree on the principle and agree to disagree on the technologies.

To take a locally less controversial example than nuclear energy: the experts in Washington are apparently convinced that building about 1,500 nuclear warheads this year, and avoiding a Freeze and a Comprehensive Test Ban, will make the world safer, presumably by lowering the probability of nuclear war. The public clearly favours a Freeze, and is apparently more eager than the experts to reduce the consequences of nuclear war (by decreasing, not increasing, the number of warheads).

One similarity between the nuclear weapons industry and the nuclear energy industry is that both are dependent on the spending of public (i.e., taxpayer) funds for their continued existence, so the views of the public should, at least in theory, be easier to impose than in a largely private endeavour like the chemical industry.

Given the human response to unacceptable consequences (“Don’t make it unlikely, take it away”), it is illustrative to compare the responses to two recent catastrophes: Bhopal and Chernobyl. First, who are the actors? At Bhopal, the main actors in risk decisions are corporate risk managers, acting on behalf of their shareholders and investors, and the insurers, who will pay the first several hundreds of millions of dollars of the settlements of the accident. At Chernobyl, the main actors are the Soviet and Ukrainian nuclear and economic technocracies, and the governments.

Second, what is the response? At Bhopal, it appears that a solution is being implemented, in all the plants that use MIC (methyl isocyanate), to eliminate completely the bulk storage of MIC by manufacturing it only at the moment it is needed. Unlike the other possible responses (stronger tanks, stronger buildings, backup cooling and drying systems, computer-controlled emergency response, and so forth), the chosen solution actually addresses the public’s concern (which in this case roughly coincides with the company’s and the insurer’s) by eliminating the possibility of an unacceptable consequence. The other, rejected solutions do not eliminate the unacceptable consequence, but make it less likely. In other words, a high-consequence problem has, it seems, received an appropriate, consequence-lowering solution.

At Chernobyl, it is too soon to see what the ultimate response will be, but the possibilities include: shutting down the RBMK-1000 reactors, slowing down the construction of nuclear stations in general or of nuclear stations near population centres, building better containment buildings, installing better/more engineered safety systems, and providing computer control of reactors. Except for the first two, which would be unpalatable to any nuclear industry, all the solutions address the probability of a catastrophe, not its consequences or its possibility. In fact, the catastrophic hazard in a large nuclear reactor is inherent. One simply cannot eliminate fission products and actinides (or decay heat production) from a nuclear plant the way one can eliminate bulk MIC storage from a pesticide factory.

Finally, it would be illustrative for the nuclear-dominated academic risk community to look at the other professional risk communities: the financial/corporate risk-takers (investors and corporate risk managers) and the insurance industry. For the former, the key risk issue is containing risk, that is, limiting consequences. Thus, it is the corporate risk managers in chemical companies who are leading the push for non-catastrophic processes (may it continue!). For the insurance industry, increasing consequences generally lead to higher rates, in a more-or-less linear fashion, until the consequences become unacceptable to the company. At that point, the insurers write a specific exclusion or limitation into the policy. In other words, they behave just like normal people faced with an unacceptable consequence: they say “Don’t tell me it’s unlikely. Don’t ask me how unlikely is unlikely enough. Take it away.”

Of course, among the occurrences insurers have refused to insure is the property damage from a nuclear plant accident. Your home-owner policies have a specific exclusion in this regard. As for the operator’s liability for property damage and bodily harm from a nuclear plant accident, the risk-makers and the insurers have insisted on nuclear-specific legislation to reduce their liability. In the U.S., it’s the Price-Anderson Act; in Canada, it’s the Nuclear Liability Act. It is worth asking whether or not the nuclear manufacturing industry, and its risk experts, would be willing to take the financial risks of nuclear power: those now borne by taxpayers, and those now borne by potential accident victims.

In closing, the risk-assessment community should be very slow to try to influence public policy or public opinion in this field until it has devised a model of risk that is at least half as subtle, sophisticated, and especially as survival-enhancing as the public attitudes that it criticizes. One approach could be to try to model group survival: program one group to minimize probability times consequence, and another to avoid all avoidable catastrophes, and see which one survives for more generations. If the “rational” group wipes itself out first, go back and change your definition of what is “rational”.
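That experiment can be sketched as a toy simulation; the tribe size, the recurring famine, and the one-in-fifty odds are all illustrative assumptions carried over from the mushroom example, and the point is the shape of the result, not the particular numbers:

```python
import random

def generations_survived(avoid_catastrophe: bool, tribe: int = 50,
                         p_poison: float = 1 / 50, hunger_deaths: int = 2,
                         max_generations: int = 10_000) -> int:
    """Toy model: each generation faces the mushroom choice from the text."""
    for gen in range(max_generations):
        if avoid_catastrophe:
            tribe -= hunger_deaths          # accept the small, survivable loss
        else:
            # "Rational" group eats: lower expected deaths, small chance of wipe-out.
            if random.random() < p_poison:
                return gen                  # extinction
        tribe += hunger_deaths + 1          # births more than replace routine losses
        if tribe <= 0:
            return gen
    return max_generations

random.seed(1)
trials = 1000
rational = sum(generations_survived(False) for _ in range(trials)) / trials
prudent = sum(generations_survived(True) for _ in range(trials)) / trials
print(f"minimize p x c:     ~{rational:,.0f} generations on average")
print(f"avoid catastrophes: ~{prudent:,.0f} generations on average")
```

On these assumptions, the probability-times-consequence group lasts a few dozen generations on average; the catastrophe-avoiding group is still there when the simulation stops.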

Any model of risk-aversion (like risk = probability x consequence) that is less subtle and less survival-enhancing than the public response that got our ancestors and us this far is not a positive contribution to the public-policy debate, and will continue to be ridiculed by the public. And rightly so.
