Engineering and Risk Analysis
“Choosing Our Pleasures and Our Poisons: Risk Assessment for the 1980s”
by William Lowrance
At the root of many contemporary concerns about technology is the question of risk. Most of the controversies over nuclear power, pesticides, and asbestos (to cite but a few examples) focus on the possibility of harm to humans and the environment, the issue of who might be harmed and how much, the matter of responsibility, and the relative merits of various steps to alleviate the risks. As such controversies have become part of the political scene, techniques for risk assessment have emerged and gained in sophistication. In “Choosing Our Pleasures and Our Poisons: Risk Assessment for the 1980s,” William W. Lowrance reviews the state of the art of this important new field. Originally prepared in 1980 as the American Association for the Advancement of Science contribution to the federal government’s Five Year Outlook for Science and Technology, Lowrance’s paper explains the concept of risk and shows how quantitative measures of risk are developed and employed. It stresses the need to be explicit in characterizing risks, suggests means of dealing with uncertainties, and places special emphasis on the need to treat risks comparatively so that minor ones do not displace more significant ones on the policy agenda. William W. Lowrance has been a senior fellow and director of the Life Sciences and Public Policy Program at the Rockefeller University in New York City since 1980. Prior to that he held policy, research, and teaching positions at Stanford University, Harvard, the U.S. State Department, and the National Academy of Sciences. A biochemist by training, he holds a Ph.D. from Rockefeller University and is the author of Modern Science and Human Values (Oxford University Press, 1985).
INTRODUCTION It takes only a few highly charged terms to evoke the risk-assessment milieu of the past decade: DDT, the pill, saccharin, “Tris,” asbestos, nuclear waste, Three Mile Island, smoking, black lung, Clean Air Act, Delaney clause, recombinant DNA, 2,4,5-T, “Reserve Mining versus EPA,” Teton Dam, DC-10….
This rash of accidents, disruptions and disputes has left the public and its leaders fearful that the world is awfully risky and that, although science can raise warnings, when crucial decisions have to be made, science backs away in uncertainty. Further, there is a feeling that as with medical catalepsy, in which the simultaneous firing of too many nerves draws the body into spasms, the body politic has been drawn into a kind of regulatory catalepsy by too many health scares, too many consumer warnings, too many environmental lawsuits, too many bans, too many reversals. A related complaint is that we are afflicted with excessive government intervention, often of a naive, trifling or naysaying sort. Among professional analysts as well as members of the public, there is a conviction that many risk-reduction efforts are disproportionate to the relative social burden of the hazards.
Public apprehensiveness has a number of causes. Is life becoming riskier? Not in any simple sense…. Many classical scourges have been conquered; infants get a healthier start in life; on average people live longer lives than ever before. The historical record of floods, hurricanes, typhoons, tornadoes, earthquakes and other geophysical disasters shows a relatively constant pattern of occurrence over the centuries…. What we are menaced by now are enormous increases in the physical and temporal scale and complexity of sociotechnical hazards. Of these, the most threatening are risks having low probability and high consequence, such as genetic disaster, nuclear war and global climate change. Too, alarm arises, in an almost paradoxical sense, because science has become so much better at detecting traces of chemicals and rare viruses and at identifying birth defects, diseases and mental stress. Often we know enough to worry but not enough to be able to ameliorate the threat.1 Warnings and accusations are amplified by the public media, often with unseemly haste. Worse, scientific hunches are announced as scientific fact, only to have to be withdrawn later. With all this, it would be surprising if the public’s sensibilities were not battered….
THE EVOLUTION OF MORTAL AFFLICTIONS In his 1803 Essay on Population Thomas Malthus observed of Jenner’s new vaccine: “I have not the slightest doubt that if the introduction of cowpox should extirpate the smallpox, we shall find … increased mortality of some other disease.” This general expectation holds true today if, in addition to disease, we include noninfectious threats. The communicable diseases of smallpox, diphtheria, typhus, cholera, tuberculosis and polio have been conquered. So have scurvy, pellagra and other nutritional deficiency diseases. Infant mortality has dropped dramatically. As the toll from these causes has lessened, mortality has shifted toward degenerative diseases, notably heart disease and cancer, which are attributable either to personal life style or to causative agents in the environment. While the causes of death have changed, the average age of onset of fatal illness has moved higher. Life span has lengthened. Put crudely, we die now of stroke and cancer in part because we live long enough to do so.
Thus at present in the United States the leading cause of death is heart disease, followed by cancer. The rest of mortality is accounted for by other diseases and by accidents, homicide and natural disasters (in that order).2 Within these gross statistics, however, there is great variability by age and socioeconomic status: Motor vehicles and other accidents kill the most children under 14; for black males between the ages of 15 and 24, homicide is the largest threat; cirrhosis of the liver is the fourth leading cause of death for people between 25 and 64.
In a recent analysis of the prospects for saving lives in this country, James Vaupel developed the concept of “early deaths.” (The definitional problem is fully treated in his report; for short, early death can be taken to refer to death before the age of 65.) Vaupel concluded:
The statistics indicate that the aggregate social losses due to death are largely attributable to early death and that the losses due to early death are immense, that the early dead suffer an egregious inequality in life-chances compared with those who die in old age, and that non-whites, the poor, and males suffer disproportionately from early death. Furthermore, statistics on the leading causes of death and statistics comparing non-whites and whites, males and females, current mortality with mortality earlier in this country, and the United States with Sweden and other countries suggest that early deaths could be significantly decreased.3
Extrapolation of life expectancy data has led to another provocative observation about survival. Some analysts now speculate that the human species is approaching a “natural” life span limit of about 85 years. These analyses have led James Fries to predict that “the number of very old persons will not increase, that the average period of diminished vigor will decrease, that chronic disease will occupy a smaller proportion of the typical life span, and that the need for medical care in later life will decrease.”4
Surely, coming to terms with these trends will lead us as a society to strive less to fend off full-lifetime mortality and to attend more to illness, accidents and quality of life. Among occupational diseases demanding attention, for instance, are the pneumoconioses: black lung disease, asbestosis and brown lung (textile dust) disease; among the most debilitating, lingering and painful conditions are arthritis, emphysema and allergies; among “life style” diseases, cirrhosis of the liver and the venereal diseases.
IMPROVEMENTS IN ASSESSMENT
Becoming More Comparative
As a society we find ourselves, relative to all previous human confrontation with mortal risk, in the enviable but emotionally unsettling situation of living longer and healthier lives than ever before; of not having to remain ignorant and vaguely apprehensive of hazards but of understanding many of their causes, likelihoods and effects; and of having now accumulated substantial experience in predicting, assessing, reducing, buffering and redressing harm. Blissfulness is prevented by our having too many options. If we still lived only on the margin of survival, we would not have the luxury of worrying about microwaves and hairdryers. If we lacked scientific understanding and the prospect of taking preventive action, we would be more fatalistic about Legionnaires’ disease and toxic shock syndrome. If we had not established the hurricane warning network and the national air traffic control system, we would not have to argue about their budgets.
Howard Raiffa made the central analytical point recently in congressional hearings:
We must not pay attention to those voices that say one life is just as precious as 100 lives, or that no amount of money is as important as saving one life. Numbers do count. Such rhetoric leads to emotional, irrational inefficiencies, and when life is at stake we should be extremely careful lest we fail to save lives that could have easily been saved with the same resources, or lest we force our disadvantaged poor to spend money that they can ill afford in order to gain a measure of safety that they don’t want in comparison to their other more pressing needs.5
To proceed in dealing with risks without making comparisons, both of import of threats and of marginal risk reduction effectiveness (and cost-effectiveness) of public programs, makes little sense. Yet surprisingly little sophisticated comparative work has been done.
In studies meant to be illustrative, Bernard Cohen, Richard Wilson and others have assembled catalogues of common risks.6 Cohen and Lee have calculated effects from different hazards upon life expectancy (for people at specified ages). They found that cigarette smoking reduces U.S. male life expectancy by six years on average. Being 30 percent overweight reduces life expectancy by about four years. Motor vehicle accidents cut off 207 days. And assuming that all U.S. electricity came from nuclear power and that the unoptimistic risk estimates published by the Union of Concerned Scientists are correct, nuclear accidents would claim 2 days from the life of an average citizen…. Although these studies are flawed in numerous ways, their most valuable lesson has been to illustrate how difficult it is to reduce complex social phenomena, such as cigarette smoking and nuclear power generation, to single scalar risk rankings.
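The Cohen-Lee reduction of diverse hazards to a single scalar can be sketched in a few lines. The figures below are just the ones quoted in the text, converted to a common unit (days of life expectancy lost); the ranking step illustrates both the appeal and the flattening effect of such scalar comparisons.

```python
# Illustrative ranking of hazards by loss of life expectancy (LLE),
# using only the figures quoted from Cohen and Lee above.
# Converting everything to "days lost" is exactly the kind of
# single-scalar reduction whose limits the essay goes on to note.

hazards = {
    "cigarette smoking (male)": 6 * 365,   # ~6 years, expressed in days
    "being 30% overweight": 4 * 365,       # ~4 years
    "motor vehicle accidents": 207,        # days
    "nuclear power (UCS estimates)": 2,    # days
}

# Rank from largest to smallest loss and print a small table.
for name, days in sorted(hazards.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:32s} {days:5d} days")
```

The four entries span more than three orders of magnitude, which is what makes scalar rankings rhetorically powerful; the essay's caution is that the scalar hides everything about how, to whom, and with what consent the risk accrues.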
Stimulated in part by the early contributions of Chauncey Starr, … assessors have attempted to compare technological hazard to natural hazard.7 For example, the so-called Rasmussen Report attempted to compare nuclear reactor accident risks to those of meteorite impacts and other natural hazards in order to provide some intuitive grounding.8 The difficulty is that reliable numbers are hard to compute, and because polls have shown that most people, including scientists, do not have a very accurate intuitive sense of the likelihood and magnitude of natural hazards, such grounding may not be very useful anyway.9
The next logical step has been to compare the relative impacts various risk-reduction measures make on longevity. Shan Pou Tsai and colleagues, for example, have examined the question of what gains in life expectancy would result if certain major causes of death were partially eliminated. They calculated that for a newborn child, reduction of cardiovascular disease by 30 percent nationally would add 1.98 years to life expectancy at birth; 30 percent reduction of malignant cancers would add 0.71 years; and 30 percent reduction of motor vehicle accidents would add 0.21 years. If such 30 percent causative reduction were to exert effect during the working years of 15 to 60, there would be gains of 1.43 years (cardiovascular), 0.26 years (cancer), and 0.14 years (motor vehicle accidents). “Even with a scientific breakthrough in combating these causes of death,” the authors concluded, “it appears that future gains in life expectancies for the working ages will not be spectacular.”
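The structure of a cause-reduction calculation like Tsai's can be shown with a toy cohort life table. Everything numerical below is invented for illustration (the actual study used real U.S. mortality data and proper life-table methods); the point is only the mechanism: shave 30 percent off one cause's contribution to age-specific death rates and see how little life expectancy at birth moves.

```python
# Toy cohort life-table sketch of the "cause-reduction" question:
# how much does life expectancy at birth gain if one cause's death
# rates fall 30 percent? All rates here are hypothetical.

def life_expectancy(qx):
    """Crude expectancy at birth: sum of year-by-year survival probabilities."""
    e, surv = 0.0, 1.0
    for q in qx:
        surv *= (1.0 - q)
        e += surv
    return e

# Hypothetical all-cause annual death probabilities by single year of age.
qx_all = [0.001 + 0.0001 * age for age in range(100)]

# Suppose the cause of interest contributes 40% of mortality at every age
# (an assumption for illustration); a 30% reduction in that cause then
# removes 0.40 * 0.30 = 12% of total mortality at each age.
frac = 0.40
qx_reduced = [q * (1 - frac * 0.30) for q in qx_all]

gain = life_expectancy(qx_reduced) - life_expectancy(qx_all)
print(f"gain in life expectancy at birth: {gain:.2f} years")
```

Even with a generous 40 percent cause-attribution, the gain is a couple of years at most, which is the qualitative shape of Tsai's "not spectacular" conclusion.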
Obviously the outcome of comparisons is heavily dependent on the way the boundaries of comparison are set…. In calculating the risks of coal, do we count deaths from train wrecks, air pollution or release of radioactive radon from the burning fuel? In assessing nuclear power, do we include terrorist abuse or nuclear weapons proliferation? In appraising solar sources, do we include health effects on copper and glass workers? There is no avoiding such analyses. The problem is to learn how to perform them with technical sophistication and to take due account of all relevant social considerations. Overreaching is hard to avoid. The consolation of most such ambitious studies has been that the process of assessment has itself sharpened the social debate and clarified technical-analytic needs.
That the general public is sophisticated enough to understand and endorse the idea of comparative risk assessment has been demonstrated in such situations as Canvey Island in Britain. Within an area of 15 square miles on that island in the Thames near London are oil refineries, petroleum tanks, ammonia and hydrogen fluoride plants and a liquefied natural gas facility. When a few years ago controversy arose as to whether Canvey’s 33,000 people were exposed to unusually high risks, a thorough government inquiry was conducted. Upon deliberation the residents passed a resolution that no further construction be accepted until the overall industrial accident risk on the island had been reduced to the average level for the United Kingdom. But they did not demand that their neighborhood be risk-free.11
That the same toleration for comparative approaches holds in the United States is evident in industrial areas, such as Ohio and New Jersey, where residents are demanding cleanup, but not closing, of industries. Similar moderation was shown by the voters of Maine, an environmentally sensitive state that has had to contend with cold winters as well as proposals for supertanker ports, who in their 1980 referendum rejected measures that would have restricted nuclear power.
If this country is to move toward more “rational” apportionment of risk-reduction and -management efforts, we must assure ourselves that there is reasonable parallel between the burden, in whatever terms, of particular risks and the avidity with which we defend against them, and that programs take into consideration age of onset of harm, degree of debilitation, longevity erosion and cost-effectiveness of ameliorative programs. Before any of this can be done, hazards have to be stated explicitly and goals of hazard reduction agreed upon.
Facing Hazards Explicitly
Comparative approaches are necessarily more quantitative, and they tend to force the revelation of specific consequences. As it dawns on social consciousness that even strict protection inevitably admits some residual harm, even if only by inducing exposure to the hazards of alternatives, little by little public officials have moved toward explicitness.
One of the most widely discussed test cases is that of DES (diethylstilbestrol, the growth hormone sometimes fed to beef cattle). The Food and Drug Administration (FDA) has formally proposed to allow beef producers to use this putatively carcinogenic but economically important agent, if they remove it from feed sufficiently in advance of slaughter that residual DES in marketed beef does not exceed a specified, extremely low concentration. In its proposal the FDA argued that “the acceptable risk level should (1) not significantly increase the human cancer risk and (2), subject to that constraint, be as high as possible in order to permit the use of carcinogenic animal drugs and food additives as decreed by Congress…. A risk level of 1 in 1 million over a lifetime meets these criteria better than does any other that would differ significantly from it.” The agency noted that further reduction “would not significantly increase human protection from cancer.”12 This proposal and similar ones are predicated on a conviction that the underlying carcinogen assessments are worst-possible-case overestimates of human risk. The DES standard is still under discussion. In March 1980 FDA Commissioner Jere E. Goyan stated that he would favor amending the food additives laws so that chemicals testing out under the level of one chance in a million would be permitted (the Delaney clause prohibits even minute traces of very weakly testing carcinogenic additives, a prohibition honored mostly in the breach, because of its absolutist nature).
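A ceiling like "1 in 1 million over a lifetime" becomes concrete when multiplied out over a population. The sketch below is back-of-envelope arithmetic only; the population and lifespan figures are assumptions supplied here for illustration, not taken from the essay.

```python
# What does a "1 in 1 million over a lifetime" risk ceiling imply
# in expected cases? Rough arithmetic; population and lifespan
# figures are illustrative assumptions.

risk_per_lifetime = 1e-6
population = 226_000_000        # roughly the 1980 U.S. census count
lifetime_years = 70             # nominal lifetime

expected_lifetime_cases = risk_per_lifetime * population
expected_cases_per_year = expected_lifetime_cases / lifetime_years

print(f"expected excess cases over a lifetime: {expected_lifetime_cases:.0f}")
print(f"expected excess cases per year: {expected_cases_per_year:.1f}")
```

A few hundred expected cases spread over seventy years, against a cancer toll of hundreds of thousands annually, is the sense in which the FDA could argue that further reduction "would not significantly increase human protection."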
One by one, as cases have developed-the 1979 Pinto lawsuit, the national review of earthwork dams, amendment of the Clean Air Act-there has been a tendency to require that an upper bound on the estimated actual hazard be stated.
Specifying Risk-Management Goals
Although industrial and legislative programs usually operate under guidelines mandating “reduction of harm” or “protection of consumers,” the degree of reduction or protection is often not specified (except when absolute protection is called for, which, usually being impossible, simply amounts to defaulting). Goal ambiguities may remain even when program objectives are spelled out. Different goals may come into conflict: reducing use of asbestos insulation, in order to protect miners and insulation installers, may have the effect of increasing fire hazard in buildings; forbidding black airmen who are sickle-cell-trait carriers to serve as Air Force pilots, to avoid the possibility of their becoming functionally impaired under emergency oxygen loss, conflicts with equal opportunity goals.
A recent RAND Corporation study for the Department of Energy (DOE), Issues and Problems in Inferring a Level of Acceptable Risk, lists types of risk-reduction goals that can be considered, such as minimization of maximum accident consequences, minimization of probability of most probable accident and so on. After describing ways in which goal choices can make a difference to programs, the report urges that “DOE and other agencies need to be self-aware in specifying risk-reduction goals, as well as in relating them to goals of other agencies and interested parties, and understanding their implications for the choice of energy alternatives.”13
Skeptics may be tempted to dismiss the topic, saying that we in this country do not have a consensus on social goals. Rebuttal to that too-simple dismissal is evidenced, for example, in the way our medical X ray protection practices, which are the result of decades of reassessment and improvement by industry, medicine and government, pursue goals: minimization of probability of damage (by decrease in frequency of use of diagnostic X rays, compensated for by more sensitive films), minimization of potentially irreversible damage to the human gene pool (special protection of gonads) and minimization of threat to infants in utero (again, special protection). The typically American goal of helping disadvantaged citizens underlies special health programs for minority groups. The goal of preserving maximum consumer choice can be seen as a goal of food quality programs.
Setting goals is not impossible, but setting realistically attainable goals is not easy. It is imperative that programs be tailored to goals more precise than “protection of all Americans against all harm.”
Weighing Risks in Context with Benefits and Costs
All decisions, indirectly or directly, rely on judgments of the sort Benjamin Franklin referred to as “prudential algebra.” Under the Toxic Substances Control Act, the Environmental Protection Agency (EPA) must protect the public against “unreasonable risk of injury”; under the stationary-sources provisions of the Clean Air Act, it must ensure “an ample margin of safety”; under the Safe Drinking Water Act, it must protect the public “to the extent feasible … (taking costs into consideration).” “Unreasonable,” “ample” and “feasible” are not defined in these laws. For the EPA the question is not whether to analyze but what form of analysis to use, taking what considerations into account. For all such risk reduction regimes, the day has passed when benefits and costs could be ignored.
Every segment of industry and government-food, energy, transportation-has to ask:
•Are there ways to take benefits and costs into consideration along with risk? Do existing policy and managerial rules allow consideration of all such factors? Should they?
•Which methodological approaches (cost-benefit analysis, decision theory, cost-effectiveness analysis, etc.) are appropriate?
•How should secondary, indirect and intangible effects be taken into consideration?
•Are formal, explicit, published analyses required to form the basis of decision, or should they be used as informational background only?
•What are the procedural rules by which definitions, analytic boundaries and conceptual assumptions are established?
•Should those reviewing a technological option be required to review the attributes of alternatives also?
After a decade of concentrating on the negative side of the ledger, society is now trying to learn how to measure benefits. The National Academy of Sciences (NAS) 1977 study of ionizing radiation (“BEIR II”) struggled with the issue of how to appraise the benefits of such applications as medical X rays.14 Its 1979 food safety policy report analyzed the benefits of saccharin and of food-safety policies regarding mercury, nitrites and aflatoxin (in peanut butter),15 and its 1980 report, Regulating Pesticides, described the methods available for estimating marginal gains in crop yield and benefit expected from a candidate pesticide.16
Several methods, usually referred to in shorthand as “risk-benefit” or “cost-benefit analysis,” are available for constructing a balance sheet of desirable and undesirable attributes. Analysis is thus a problem of handicapping what will happen (the odds of a destructive flood, the probable incidence of a disease) and comparing quantities that are rarely expressible in common-denominator terms (social cost of lives shortened, benefits of production, risks of genetic mutation).
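The "handicapping" the essay describes can be reduced to its bare skeleton: probabilities times consequences, netted against costs. The example below is a hypothetical flood-control levee; every number is invented, and the single dollar denominator is precisely the simplification the surrounding text warns is rarely legitimate.

```python
# "Prudential algebra" in miniature: an expected-value balance sheet
# for a hypothetical flood-control levee. All figures are invented;
# only the structure of the risk-benefit calculation is the point.

p_flood_per_year = 0.01          # annual probability of a damaging flood
damage_if_flood = 50_000_000     # dollars of damage if it occurs
levee_cost_per_year = 300_000    # annualized construction plus upkeep
residual_risk_factor = 0.10      # levee leaves 10% of the flood risk

expected_loss_without = p_flood_per_year * damage_if_flood
expected_loss_with = expected_loss_without * residual_risk_factor

net_annual_benefit = (expected_loss_without - expected_loss_with) - levee_cost_per_year
print(f"expected annual loss without levee: ${expected_loss_without:,.0f}")
print(f"net annual benefit of levee:        ${net_annual_benefit:,.0f}")
```

Everything contestable in a real dispute (whose damage counts, how lives and amenities are priced, how the probability was estimated) is hidden inside those four input lines, which is the essay's caution about granting such analyses formal, legalistic weight.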
With a few well-defined projects, for which goals and constraints are agreed upon by the major affected parties, for which health and environmental risks, costs and benefits are well known and understood (not only in magnitude but in social distribution, over both the near and long term), risk-benefit accounting has proven itself useful. Under such rare circumstances of certainty, commonsensical estimates as well as more formal analyses derived from operations research are applicable. The latter tend to be favored by specialists, technical or otherwise, who have been given a specific task to accomplish (the Army Corps of Engineers has pioneered in their use). The occasional “successful” application of such techniques-and, one suspects, also the all-embracing ring of their title-tempts legislators, administrators, managers and judges to call for their use.
The griefs of analysis could fill a large set of books. Most reviews conclude that such approaches are very useful for structuring discussion but are less useful, or even subject to misuse, when granted formal, legalistic weight. In their Primer for Policy Analysis, Edith Stokey and Richard Zeckhauser warned that:
Benefit-cost analysis is especially vulnerable to misapplication through carelessness, naiveté, or outright deception. The techniques are potentially dangerous to the extent that they convey an aura of precision and objectivity. Logically they can be no more precise than the assumptions and valuations that they employ; frequently, through the compounding of errors, they may be less so. Deception is quite a different matter, involving submerged assumptions, unfairly chosen valuations, and purposeful misestimates. Bureaucratic agencies, for example, have powerful incentives to underestimate the costs of proposed projects. Any procedure for making policy choices, from divine guidance to computer algorithms, can be manipulated unfairly.17
These and other critics respond to their own complaint by acknowledging that “prudential algebra” of one form or another must be resorted to, nevertheless.
All analytic approaches have difficulty with scientific uncertainties, with fair and full description of societal problems, with predicting all possible consequences, with placing a “price” on human life and environmental goods, with taking into account intangibles and amenities in general and with assessing the social costs of opportunities precluded.18…
Furthermore, formal analysis is still helpless to accommodate many major effects: the weapons-proliferation and terrorist risks of the spread of civilian nuclear power, the highly touted and ambipotent benefits and risks of recombinant DNA development, the opportunity costs from undue conservativeness in regulation of contraceptive and pharmaceutical development.
Defining “Negligible” and “Intolerable” and Setting Priorities
A disturbing feature of the 1960s and 1970s was that as each sector of manufacturing, or municipal governance, or research or purchasing found itself having to confront risk problems, each had to develop its own approach and work through hearings, scientific studies, economic reviews, lawsuits and insurance disputes. The social learning process was, unavoidably, painful. So were the disruption and unpredictability caused by the lack of defensible priorities. Industries and agencies found themselves so distracted by disputes over sensational cases that they could hardly pursue their main tasks, even if their charter was to reduce major risks: Neither “major” nor “minor” had been defined. Expressed in a metaphor of the time, smoldering barn fires had to be neglected while brushfires were fought.
Chastening has been accomplished. Now the challenge is to develop ways of keeping priorities clear: to avoid frittering away worry-capital on very small hazards, to prohibit unbearably large hazards and to concentrate decision-making attention on problems that affect large numbers of people in important ways. This admonition may appear an obvious one, but our failure to protect appropriate priorities is just what has set us up for the regulatory “overload” and disproportionateness we now labor under.
This concern was expressed in the 1980 NAS report, Regulating Pesticides:
A serious flaw in the current procedure is that those compounds that receive the most publicity or pressure-group attention may not necessarily be those that present the greatest public health or environmental hazards. The current procedure does not provide for a broad comparison of the hazards posed by the large number of registered pesticides. At the same time, outside pressures to regulate a specific compound rarely arise from careful evaluation of comparative risks of alternative pesticides. To the extent that external pressures are influential in determining the order in which the [Office of Pesticides Programs] evaluates compounds, the consequence may well be that considerable resources are devoted to regulation of minor, low-risk compounds while important high-risk ones remain unreviewed for periods longer than would otherwise be the case.19
Naturally, regulatory agencies do try to apply their most vigorous attention to the most important issues, but their problem is to set protectable priorities (ones that are buffered from sporadic undermining) so that all parties involved know the analytic and legal agenda and can allocate resources accordingly. OSHA has tried to do this with occupational carcinogens, as has EPA with chemicals regulated under the Toxic Substances Control Act. The new National Toxicology Program is taking over some of the priority-setting tasks and will try to rationalize them across agency lines. The Consumer Product Safety Commission bases its priorities in part on a “frequency-severity index” derived from a computerized sampling system of hospital emergency-room admissions.
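The general shape of a frequency-severity index can be sketched simply: injuries of each severity class, weighted and summed into one priority score per product. The severity weights and injury counts below are invented for illustration; the Commission's actual weighting scheme and national estimates are more elaborate.

```python
# A minimal sketch of a "frequency-severity index" of the kind derived
# from emergency-room sampling: injuries weighted by severity sum to a
# single priority score per product. Weights and counts are hypothetical.

severity_weight = {"treated_released": 10, "hospitalized": 200, "death": 2000}

injuries = {
    "bicycles":   {"treated_released": 500_000, "hospitalized": 30_000, "death": 1_000},
    "lawnmowers": {"treated_released": 60_000,  "hospitalized": 5_000,  "death": 100},
}

def fs_index(counts):
    """Weighted sum of injury counts across severity classes."""
    return sum(severity_weight[s] * n for s, n in counts.items())

# Rank products for regulatory attention by descending index.
for product, counts in sorted(injuries.items(), key=lambda kv: fs_index(kv[1]), reverse=True):
    print(f"{product:12s} index = {fs_index(counts):,}")
```

Such an index is exactly a "protectable priority" in the essay's sense: a published, mechanical ranking that all parties can inspect, even though the choice of severity weights is itself a value judgment.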
“Intolerable” and “unacceptable” are being invested with real-world connotations, as are “negligible” and “insignificant.” These boundary-setting adjectives gain meaning in two ways: as experts, insurers and others rank hazards in hierarchies by severity, incidence and overall social exposure (hazards at the top and bottom of lists thus becoming obvious candidates for prohibition or acceptance); and as public opinion, lawsuits and so on indicate endorsement of the ranking. This helps administrators and managers allocate attention to the difficult cases in the middle….
In a striking case recently, the FDA approved the hair-dye chemical lead acetate. While acknowledging that in high doses the material is carcinogenic to rodents, the agency concluded that human exposure is so small, especially relative to overall lead intake, as not to warrant prohibition.20
Risk ceilings also can be established. In this country and many others polychlorinated biphenyls (PCBs) have been banned from commerce because their carcinogenic potency is judged to be absolutely intolerable. From time to time, high-technology projects have been vetoed because their risks were unthinkably high: some macroengineering modifications of the environment and certain potentially disastrous recombinant DNA experiments are landmark examples. The issue may not only be whether the hazards are actuarially high, but whether the threat would have an intolerably disruptive effect, physically or psychologically, on the fabric of society….
Seeking Accommodation Between Technical and Lay Perceptions
It is evident that “the public” often views risks differently from the way technical analysts do. (Of course, consensus is also rare, even within relatively closed circles of experts.)
From what do these differences of opinion stem? First, science itself is, in effect, simply a matter of “voting”; the scientifically “true” is no more than what scientists endorse to be true. Empirical knowledge is developed systematically within the scientific community, subject to criteria of repeatability, controlled observation, statistical significance, openness and the other guides of western science. By itself, procedure guarantees nothing, though. Good science is science that “works”- science that can predict with consistency and generality and accuracy what will happen in the physical and social world. The weighing of facts remains subjective; perfect objectivity is a myth.
And second, judgments of hazards involve consideration not only of “size” of risks-likelihood and magnitude-but also of social value.21 This, of course, leaves much room for disagreement.
Researchers have speculated that people’s opinions about risks depend on many biasing factors, such as voluntariness of exposure, frequency of occurrence, amenability to personal control, reversibility, immediacy, bizarreness, catastrophic nature and so on.22
Social scientists such as Paul Slovic, Baruch Fischhoff and Sarah Lichtenstein have used polling techniques to survey risk perceptions and risk-taking proclivities. What they find, to neither their surprise nor ours, is that people have different perceptual biases. This research has concluded that human beings’ brains, whether expert or lay, get overloaded with risk information and have trouble comparing risks; that the media accentuate social reverberations in risk disputes; and that, in essence, people believe what they want to believe. Person-in-the-street interviews of technical people show them to be not much better than nontechnical people at guessing, for example, how many fatalities are incurred annually from tornadoes, contraceptives or lawnmowers.23
Many of these polling studies are open to criticism. They suffer from the usual shortcomings of questionnaire design and the generic weaknesses of polling. Often they ask about only a single hazard at a time, which, by failing to foster or force comparison and by allowing people to express self-contradictory views, provides little guidance for policymaking. They force people artificially to break down their views into components. And these studies are vulnerable to being assumed (not necessarily by their authors) to imply findings about “the public,” when in fact most of them have dealt with only small population samples. …
In the risk-assessment domain, as in others, we are being forced to realize that “the public” is a very elusive construct. No one person or group of people fully represents, or is representative of, all of our citizenry; and the “organized public” remains small and keeps changing in composition and opinion. For this reason, and others, the notion of “public participation” lacks conceptual shape. To oppose closed bureaucratic proceedings is usually legitimate, but it is a lot harder to devise proceedings that are not only open to the affected public but that encourage extensive “public” participation without just opening channels for special-interest lobbying. A recent Organization for Economic Cooperation and Development study of public participation, entitled Technology on Trial, concluded: “the general thrust of participatory demand would appear to be for a greater degree of public accountability; freer public access to technical information; more timely consultation on policy options; a more holistic approach to the assessment of impacts: all of which amounts, of course, to more direct public participation in the exercise of decision-making power.”24
In recent years both governmental and nongovernmental bodies have been taking steps to seek accommodation between lay perceptions and technical-analytic ones.25 Regulatory agencies have opened up their proceedings and have solicited public input. Professional organizations have explored perceptual issues. In 1979 the National Council on Radiation Protection and Measurement held a symposium resulting in a volume entitled Perceptions of Risk.26
If an attitudinal bias emerges, it can be incorporated into standards. In recognition of the public’s extraordinary concern about the catastrophic potential (as opposed to diffuse chronic risks) of nuclear reactors, for example, industry and its regulators have incorporated “risk aversiveness,” or disproportionate conservatism, into reactor safeguards.27…
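The arithmetic of “disproportionate conservatism” can be made concrete with a minimal sketch, not drawn from Lowrance’s text: it compares a frequent small hazard against a rare catastrophe of equal expected fatalities, weighting consequence magnitude with an exponent greater than one, one form of risk-aversion criterion discussed in the literature cited in note 27. All event names, probabilities and the exponent are hypothetical.

```python
# Hypothetical illustration of "risk aversiveness": weighting a potential
# catastrophe more heavily than its expected-value share. All numbers invented.

def expected_harm(p, n):
    # Conventional expected value: probability times consequence magnitude.
    return p * n

def risk_averse_harm(p, n, alpha=1.2):
    # One proposed criterion: penalize consequence magnitude with an
    # exponent > 1, so a rare large accident weighs more than many small ones.
    return p * (n ** alpha)

chronic = (1e-2, 10)        # frequent small accidents: p = 0.01, 10 fatalities
catastrophe = (1e-4, 1000)  # rare large accident: p = 0.0001, 1000 fatalities

# Both hazards have the same expected fatalities (0.1 per year)...
assert abs(expected_harm(*chronic) - expected_harm(*catastrophe)) < 1e-12

# ...but risk-averse weighting ranks the catastrophic hazard as worse.
print(risk_averse_harm(*chronic) < risk_averse_harm(*catastrophe))  # True
```

The design choice in this sketch, a power law on consequences, is only one way to encode aversion to catastrophe; the cited UCLA work (note 27) surveys several.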
It is worth surmising that what is under perceptual dispute in many cases is not only the hazard itself but the social “management” of it. Nowhere has this been more bluntly evidenced than in the overall conclusion of the President’s Commission on the Accident at Three Mile Island: “To prevent nuclear accidents as serious as Three Mile Island, fundamental changes will be necessary in the organization, procedures, and practices--and above all--in the attitudes of the Nuclear Regulatory Commission and, to the extent that the institutions we investigated are typical, of the nuclear industry.”28 One suspects, too, that risk opinions often may in effect be proxies for more deeply seated opinions about corporate bigness, bureaucratic inaction or erosion of personal control.
As though swatting at swarms of hazards on all sides, during the 1970s the Congress passed, inter alia, the Consumer Product Safety Act, the Fire Prevention and Control Act, the Occupational Safety and Health Act, the Federal Water Pollution Control Act, the Toxic Substances Control Act, the Mine Safety and Health Act, the (aircraft) Noise Control Act, the Federal Environmental Pesticide Act, the National
Earthquake Hazards Reduction Act, the Medical Device Amendments to the Food, Drug, and Cosmetic Act, the Safe Drinking Water Act, the Resource Conservation and Recovery Act and various Clean Air Act amendments. To ensure independence of control, Congress split off the Nuclear Regulatory Commission from the old Atomic Energy Commission. And it established the Environmental Protection Agency, the Occupational Safety and Health Administration, the Consumer Product Safety Commission, the National Fire Prevention and Control Administration and the Federal Emergency Management Agency to administer all the new laws.
The effect of this legislative crusade has been to bring tens of thousands of hazards into regulatory frameworks of many kinds, based on science, medicine, engineering, law and economics, that were, and still are, inadequate bases for decision.
The Congress has chosen a variety of roles for itself in risk assessment. It has established the regulatory agencies and overseen their work. With some issues, such as automobile emissions, it has insisted on reviewing the scientific and economic evidence in detail and on itself setting primary standards. With others, such as the arcane questions of recombinant DNA research, it has held hearings to establish a record but has refrained from instituting strong control. Occasionally, in response to constituent pressure or political opportunity, it has intervened precipitously in regulatory action, as it has repeatedly done with saccharin, directing the FDA to stay an action or requesting the NAS to conduct another study. In emergencies it has held high-level inquiries, as it did during the Three Mile Island accident.
Recently the Office of Technology Assessment, the General Accounting Office, and the Congressional Research Service have all gotten more involved in preparing risk-related reports for the Congress. Congressman Don Ritter and others have proposed mandating that cost-benefit analysis be used as the basis for regulatory action…. Prompted by such flaps as that over the questionable studies of health risks at Love Canal, legislators are considering establishing guidelines for scientific peer review of assessments used in regulation. Congressional concern over risk issues remains high, but it tends to focus on individual hazards rather than on a comparative high-risk-reduction agenda, and it tends to favor regulation as its best instrument.
Various Executive Branch sagas in risk decision making have been described elsewhere and will not be reviewed here. We should, however, notice several trends that go beyond the straightforward execution of regulatory mandates.
There is some movement toward interagency coordination of regulatory actions. The complexity of the administrative task is illustrated by the fact that the Interagency Review Group on Nuclear Waste Management had to be constituted from 14 major entities of government (the Departments of Commerce, Energy, Interior, State and Transportation; National Aeronautics and Space Administration, Arms Control and Disarmament Agency, Environmental Protection Agency, Office of Management and Budget, Council on Environmental Quality, Office of Science and Technology Policy, Office of Domestic Affairs and Policy, National Security Council and Nuclear Regulatory Commission).29 The Interagency Regulatory Liaison Group (Consumer Product Safety Commission, Environmental Protection Agency, Food and Drug Administration, Occupational Safety and Health Administration and Department of Agriculture) has developed coordinated guidelines on carcinogenicity assessment.30 A National Toxicology Program has been established to serve the needs of a number of agencies.
Fundamental research in support of regulatory work may be improving: The National Institutes of Health have become more involved in such matters as development of reliable and practical screening tests for carcinogens; the National Science Foundation now sponsors risk-related policy studies; the National Bureau of Standards conducts fire research for the benefit of many agencies. How to marshal such support effectively is still a challenge: The basic research agencies don’t have specific mission mandates, and the regulatory agencies lack strong fundamental research capabilities….
Thousands of tort cases are heard every year. For the present review, what is important are the ongoing debates over the role of the courts and the landmark decisions handed down by the high courts. One respected view of the role of the judiciary is that championed by Judge David Bazelon: “Courts cannot second-guess the decisions made by those who, by virtue of their expertise or their political accountability, have been entrusted with ultimate decisions. But courts can and have played a critical role in fostering the kind of dialogue and reflection that can improve the quality of those decisions.”31 Others disagree, believing that courts should be free to review the substantive evidence and logic of assessments and decisions. The extent of judicial intrusion into agency decision making will remain an issue.
Recent years have seen the courts interpreting legislative mandates (as to whether, for instance, regulation under the Clean Air Act must consider costs, or whether the FDA properly interpreted its mandate in banning laetrile) and refereeing territorial disputes between agencies. A crucial issue that continues to work its way up to the Supreme Court relates to the imperative for cost-benefit analysis in regulatory decisions. The recent case of Industrial Union Department, AFL-CIO versus American Petroleum Institute sidestepped the issue of whether OSHA must, under its statutes, base its decision (in this case, whether to tighten occupational exposure limits for benzene from 10 parts per million to 1 part per million) on formal, explicit, published cost-benefit analyses, the issue that many observers hoped the court would address.32 The justices have, however, agreed to hear an analogous case, on cotton dust. The legislative background from which the Supreme Court has to work does not provide much guidance.
Several recent developments exemplify the increasingly collective initiatives being taken by nongovernmental bodies. An impressive contribution has been made by the Food Safety Council, a non-profit coalition of industrial, consumerist and other members, which has developed and published a thorough review of the technical problems associated with food risk assessment and made proposals that are now under consideration by regulatory and other bodies.33 The American Industrial Health Council, a coalition of 140 companies and 80 trade associations, has developed concerted positions on regulatory issues and is now proposing structural and procedural reforms.34 In the aftermath of the Three Mile Island accident, the country’s electric utilities and nuclear industry pooled their interests and established a Nuclear Safety Analysis Center, associated with the Electric Power Research Institute, to serve as an industry-wide reactor performance clearinghouse. Some 35 major chemical firms have recently established the Chemical Industry Institute of Toxicology, a research center charged with performing state-of-the-art toxicological research and assessment of large-volume commodity chemicals (not proprietary products) for the benefit of the industry as a whole. The major U.S. automobile and truck companies have joined the EPA in establishing a Health Effects Institute to study the effects of motor vehicle pollution. 35
It is not yet possible to evaluate the promise of these new institutions. They deserve watching because they typify efforts to develop techniques, procedures, databases and focal centers for risk assessment outside of government. The question will be whether the work they produce is of high technical quality, whether they develop reputations for integrity and whether government and the courts can effectively accommodate the work of these hybrid institutions as alternatives to direct regulation and government-sponsored assessment.
SCIENTIFIC INTEGRITY AND AUTHORITY
Serious criticism is currently being leveled at the manner and quality with which scientific analysis is brought to bear on public hazards. Not to be interpreted as disaffection with science per se, this dismay reflects confidence that science can indeed help assess these problems, if it is properly applied.
Proposals are gathering for establishment of central authority structures to which technical disputes can be appealed. For example, the New York governor’s panel (chaired by Lewis Thomas) formed to review the Love Canal fiasco found that “only further questions and debates on scientific credibility have been the result” of the “inadequate research designs” and “inadequate intergovernmental coordination and cooperation in the design and implementation of health effects studies” at the dump; as a remedy it recommended establishment of a Scientific Advisory Panel responsible to the governor.36 Editorials have appeared in Science and elsewhere calling for reincarnation of the President’s Science Advisory Committee to referee such disputes…. In somewhat the same vein, the American Industrial Health Council has urged Congress to establish a Science Panel:
AIHC advocates that in the development of carcinogen and other federal chronic health control policies scientific determinations should be made separate from regulatory considerations and that such determinations, assessing the most probable human risk should be made by the best scientists available following a review of all relevant data. These determinations should be made by a Panel of eminent scientists located centrally somewhere within government or elsewhere as appropriate but separate from the regulatory agencies whose actions would be affected by the determinations.37
Two questions must be asked of such proposals: whether “scientific and technical determinations” can legitimately be separated from “political and social determinations” and whether centralization of authority assures higher quality science.
To the first, the answer is probably yes, to a considerable extent, as long as it is understood that the very process of defining the problem is subjective and that scientific assessments usually have to be conducted iteratively. For example, to view the problem of liquefied natural gas facilities as one of time-averaged risk is different from worrying about the potentially massive social disruption one large accident could cause. Complex issues, such as energy policy, have to go through many rounds of assessment, criticism, redefinition and reassessment.
To the second question, the answer is that communal scientific assessments do tend to gain critical analytic strength and social legitimacy over assessments made by individuals alone, but that pluralism and variety within the scientific community should be encouraged: recruiting more skilled policy-analytic scientists and engineers in industry, government and other organizations; appointing able advisory panels to many different administrative, legislative and managerial bodies; upgrading assessment work in academies, professional societies and trade organizations; and so on. Pluralism remains an essential safeguard against narrowness. Centralization and consistency are not always good in themselves. Besides, high-level bodies will always be limited to handling only a few contentious issues at a time. What they can do is raise warning flags about hazardous situations, draw attention to suspect scientific studies and help set the national agenda of assessment.
One of the more encouraging developments of the last few years has been a willingness of technical people, acting as professional communities, to review major assessments. When the original “Rasmussen Report” on reactor safety was issued, for example, it was subjected to detailed critique by a panel of the American Physical Society, by an ad hoc review group (the “Lewis Panel”) chartered by the Nuclear Regulatory Commission, by the Union of Concerned Scientists and by others. Currently the Society of Toxicology is reviewing the controversial “ED-01” effective-carcinogen-dose experiment performed by the National Center for Toxicological Research….
1. The overall urging of this essay is that bodies responsible for appraising public risk ask of their assessment efforts:
*Are risks, benefits and costs characterized as explicitly as possible?
*Are uncertainties and intangibles acknowledged and, where possible, quantified?
*Are programs oriented to agreed-upon societal goals?
*Do procedures guarantee that high-quality technical evidence is made available and used as the basis for decision?
*Are risks examined in a properly comparative context along with benefits and costs?
*Are precautions taken to prevent minor hazards from displacing larger ones on the protection agenda?
*Are the formality and legal bindingness of the analytic base appropriate?
2. Excerpts of well-regarded risk-assessment studies should be collected and published with commentary. (The NAS food safety study published several examples, and the NAS current review of some of its past projects, the “Kates study,” will provide more.) Critique should be made not only of analytic methodology but also of how boundaries of assessment were set, how assessors were chosen, how conflicts of interest and biases were dealt with, how findings were expressed and how the study groups maintained their relationships with patrons and clients.
3. The causal connection between environment and health deserves continued investigation. As part of this, baseline surveys like the “LaLonde Report” (Health of Canadians) or the 1980 California Health Plan should be developed for the United States; this would be an extension of the 1979 Report of the U.S. Surgeon General on Health Promotion and Disease Prevention.38 Then those determinants of health that are amenable to environmental influence should be evaluated.
4. The Office of Management and Budget, the Congressional Budget Office or others might direct or commission comparative evaluations of the marginal longevity gains and other benefits from the key regulatory programs.
5. Evaluation should be made of such longstanding risk-management regimes as food inspection programs, fire-prevention provisions of building codes, flood plain insurance, black lung insurance and the like, asking whether they accomplish their risk-spreading or risk-reduction goals.
6. As the nation contemplates deregulation, sectoral net-assessment of regulatory policies should be conducted and reviewed. Alternatives to regulation should be examined, especially hybrid nongovernmental-governmental approaches.39 In this regard the experiences of other countries, such as Sweden’s in food safety, should be reviewed.
7. High-level scientific leadership needs continual renewal. One function of an upgraded White House scientific advisory body should be to identify major risk issues needing attention (such as, for example, the underattended issues cited at the end of this paper). This body, or other groups, should consider setting up a watchdog commission like the United Kingdom’s Advisory Committee on Major Hazards to lead in the anticipation and assessment of important, long-term hazards.
8. There are many specific research needs, ranging from toxicology to policy analysis. Broad topics deserving attention include:
• Evaluation of the overall predictive usefulness of the toxicological gauntlet through which chemical products now are required to be run.40
• Improvement of epidemiology as an analytic complement to toxicological testing and continued development of the necessary databases.
• Refinement and comparison of such analytic techniques as cost-benefit analysis, decision theory and cost-effectiveness analysis.
• Evaluation of the validity of fault-tree and event-tree analysis as applied to nuclear reactors and other engineered structures.41
• Investigation of ways in which human error (maintenance error, operation error, emergency-response error) can be taken into account in probabilistic assessment of technological systems.
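To indicate what the fault-tree arithmetic behind these last two topics involves, here is a deliberately minimal sketch, not from Lowrance’s text: basic-event probabilities are combined through AND and OR gates under an assumption of independent events, with a human-error term included as one basic event. The event names and probabilities are entirely hypothetical; real assessments such as the Rasmussen study (note 8) are vastly more elaborate.

```python
# Minimal fault-tree sketch: combining independent basic-event probabilities.
# All events and numbers are hypothetical illustrations.

def p_or(*ps):
    # OR gate: the event occurs if any independent input event occurs.
    prob = 1.0
    for p in ps:
        prob *= (1.0 - p)
    return 1.0 - prob

def p_and(*ps):
    # AND gate: the event occurs only if all independent input events occur.
    prob = 1.0
    for p in ps:
        prob *= p
    return prob

# Hypothetical basic events for loss of a cooling system:
pump_fails = 1e-3       # mechanical failure per demand
backup_fails = 1e-2     # standby pump fails to start
operator_error = 5e-3   # human-error term of the kind discussed above

# Top event: both pumps fail, OR an operator error defeats the system.
loss_of_cooling = p_or(p_and(pump_fails, backup_fails), operator_error)
print(f"{loss_of_cooling:.6f}")  # prints 0.005010
```

Note that in this toy the human-error term dominates the hardware term by more than two orders of magnitude, which is precisely why the research need named above, folding maintenance, operation and emergency-response error into probabilistic assessment, matters.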
NOTES
1. Ian Burton, Robert W. Kates, and Gilbert F. White, The Environment as Hazard (New York: Oxford University Press, 1978).
2. National Safety Council, Accident Facts (National Safety Council, 425 North Michigan Avenue, Chicago, Ill. 60611, 1980).
3. James W. Vaupel, “The Prospects for Saving Lives: A Policy Analysis,” printed as pp. 44-199 of the U.S. House of Representatives, Subcommittee on Science, Research and Technology (of the Committee on Science and Technology), Hearings on Comparative Risk Assessment, Ninety-Sixth Congress, Second Session (14-15 May 1980).
4. James F. Fries, “Aging, Natural Death, and the Compression of Morbidity,” New England Journal of Medicine, vol. 303 (1980), pp. 130-35.
5. U.S. House of Representatives, Committee on Science and Technology, Subcommittee on Science, Research and Technology, Hearings on Comparative Risk Assessment, Ninety-Sixth Congress, Second Session (14-15 May 1980).
6. Bernard L. Cohen and I-Sing Lee, “A Catalog of Risks,” Health Physics, vol. 36 (1979), pp. 707-22; Richard Wilson, “Analyzing the Daily Risks of Life,” Technology Review (February 1979), pp. 41-46.
7. Chauncey Starr, Richard Rudman and Chris Whipple, “Philosophical Basis for Risk Analysis,” Annual Review of Energy, vol. 1 (1976), pp. 629-62.
8. N. Rasmussen et al., Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants (Washington, D.C.: Nuclear Regulatory Commission, 1975), document number WASH-1400 (NUREG-75/014).
9. Paul Slovic, Baruch Fischhoff and Sarah Lichtenstein, “Rating the Risks,” Environment, vol. 21 (1979), pp. 14ff.
10. Shan Pou Tsai, Eun Sul Lee and Robert J. Hardy, “The Effect of a Reduction in Leading Causes of Death: Potential Gains in Life Expectancy,” American Journal of Public Health, vol. 68 (1978), pp. 966-71. See also Nathan Keyfitz, “What Difference Would It Make If Cancer Were Eliminated? An Examination of the Taeuber Paradox,” Demography, vol. 14 (1977), pp. 411-18.
11. U.K. Health and Safety Executive, Canvey: An Investigation of Potential Hazards from Operations in the Canvey Island/Thurrock Area (London: Her Majesty’s Stationery Office, 1978).
12. U.S. Food and Drug Administration, “Chemical Compounds in Food-Producing Animals: Criteria and Procedures for Evaluating Assays for Carcinogenic Residues,” Federal Register, vol. 44 (1979), pp. 17070-114.
13. Steven L. Salem, Kenneth A. Solomon and Michael S. Yesley, Issues and Problems in Inferring a Level of Acceptable Risk, Report R-2561-DOE (Santa Monica, Calif.: RAND Corporation, 1980).
14. National Academy of Sciences/National Research Council, Advisory Committee on the Biological Effects of Ionizing Radiation, Considerations of Health Benefit-Cost Analysis for Activities Involving Ionizing Radiation Exposure and Alternatives (Washington, D.C.: National Academy of Sciences, 1977).
15. National Academy of Sciences/National Research Council, Committee for a Study on Saccharin and Food Safety Policy, Food Safety Policy: Scientific and Societal Considerations (Washington, D.C.: National Academy of Sciences, 1979).
16. National Academy of Sciences/National Research Council, Committee on Prototype Explicit Analyses for Pesticides, Regulating Pesticides (Washington, D.C.: National Academy of Sciences, 1980).
17. Edith Stokey and Richard Zeckhauser, A Primer for Policy Analysis (New York: W. W. Norton, 1978).
18. David Okrent and Chris Whipple, “An Approach to Societal Risk Acceptance Criteria and Risk Management,” #UCLA-ENG-7746 (Los Angeles: UCLA School of Engineering and Applied Science, June 1977); Baruch Fischhoff, “Cost Benefit Analysis and the Art of Motorcycle Maintenance,” Policy Sciences, vol. 8 (1977), pp. 177-202; Dan Litai, “A Risk Comparison Methodology for the Assessment of Acceptable Risk” (Ph.D. dissertation, Massachusetts Institute of Technology, 1980); David Okrent, “Comment on Societal Risk,” Science, vol. 208 (25 April 1980), pp. 372-75; Chauncey Starr and Chris Whipple, “Risks of Risk Decisions,” Science, vol. 208 (6 June 1980), pp. 1114-19.
19. National Academy of Sciences/National Research Council, Committee on Prototype Explicit Analyses for Pesticides, Regulating Pesticides (Washington, D.C.: National Academy of Sciences, 1980).
20. U.S. Food and Drug Administration, “Lead Acetate: Listing as a Color Additive in Cosmetics that Color the Hair on the Scalp,” Federal Register, vol. 45 (1980), pp. 72112-18. The decision was lauded in a New York Times editorial, 9 November 1980.
21. William W. Lowrance, Of Acceptable Risk: Science and the Determination of Safety (Los Altos, Calif.: William Kaufmann, Inc., 1976); and William W. Lowrance, “The Nature of Risk,” in Richard C. Schwing and Walter A. Albers, eds., Societal Risk Assessment: How Safe Is Safe Enough? (New York: Plenum Press, 1980), pp. 5-14.
22. Paul Slovic, Baruch Fischhoff and Sarah Lichtenstein, “Facts and Fears: Understanding Perceived Risk,” in Schwing and Albers, Societal Risk Assessment, pp. 181-216; Charles Vlek and Pieter-Jan Stallen, “Rational and Personal Aspects of Risk,” Acta Psychologica, vol. 45 (1980), pp. 273-300.
23. Baruch Fischhoff, Paul Slovic, Sarah Lichtenstein, Stephen Read and Barbara Combs, “How Safe Is Safe Enough? A Psychometric Study of Attitudes Toward Technological Risks and Benefits,” Policy Sciences, vol. 9 (1978), pp. 127-52; “Labile Values: A Challenge for Risk Assessment,” in Jobst Conrad, ed., Society, Technology and Risk Assessment (New York: Academic Press, 1980), pp. 57-66.
24. Organization for Economic Cooperation and Development, Technology on Trial: Public Participation in Decision-Making Related to Science and Technology (Paris: OECD, 1979).
25. Nancy E. Abrams and Joel Primack, “Helping the Public Decide: The Case of Radioactive Waste Management,” Environment, vol. 22 (April 1980), pp. 14ff.
26. Perceptions of Risk (Washington, D.C.: National Council on Radiation Protection and Measurement, 15 March 1980).
27. J. M. Griesmeyer, M. Simpson and D. Okrent, The Use of Risk Aversion in Risk Acceptance Criteria, #UCLA-ENG-7970 (Los Angeles: UCLA School of Engineering and Applied Science, October 1979).
28. President’s Commission on the Accident at Three Mile Island, The Need for Change: The Legacy of TMI (Washington, D.C.: U.S. Government Printing Office, 1979).
29. U.S. Interagency Review Group on Nuclear Waste Management, Report to the President (Washington, D.C.: U.S. Department of Energy, 1979).
30. U.S. Interagency Regulatory Liaison Group, “Scientific Bases for Identification of Potential Carcinogens and Estimation of Their Risks,” Journal of the National Cancer Institute, vol. 63 (1979), pp. 241-68.
31. David L. Bazelon, “Risk and Responsibility,” Science, vol. 205 (1979), pp. 277-80.
32. Industrial Union Department, AFL-CIO versus American Petroleum Institute (U.S. Supreme Court, decided 2 July 1980). Described in R. Jeffrey Smith, “A Light Rein Falls on OSHA,” Science, vol. 209 (1980), pp. 567-68.
33. Proposed System for Food Safety Assessment: Final Report of the Scientific Committee of the Food Safety Council (Washington, D.C.: Food Safety Council, 1980).
34. AIHC Recommended Alternatives to OSHA’s Generic Carcinogen Proposal (Scarsdale, N.Y.: American Industrial Health Council, 24 February 1978, OSHA Docket No. H-090).
35. Philip Shabecoff, “Health Institute to Study Motor Vehicle Emissions,” New York Times, 12 December 1980.
36. Governor of New York, Panel to Review Scientific Studies and the Development of Public Policy on Problems Resulting from Hazardous Waste, Report (8 October 1980).
37. “AIHC Proposal for a Science Panel” (Scarsdale, N.Y.: American Industrial Health Council, 26 March 1980).
38. Marc LaLonde, A New Perspective on the Health of Canadians (Ottawa: Health and Welfare Canada Working Document, 1975); Office of Statewide Health Planning and Development, California State Health Plan 1980-85 (Sacramento, 1980); and U.S. Surgeon General, Healthy People: The Surgeon General’s Report on Health Promotion and Disease Prevention, 1979, U.S. Department of Health, Education and Welfare Publication No. 79-55071 (Washington, D.C., 1979).
39. Michael S. Baram, Alternatives to Regulation for Managing Risks to Health, Safety, and Environment (a report to the Ford Foundation from the Program on Government Regulation, Franklin Pierce Law Center, White Street, Concord, N.H., 1 September 1980).
40. U.S. Congress, Office of Technology Assessment, Assessment of Technologies for Determining Cancer Risk from the Environment (Washington, D.C., 1981).
41. Isaac Levi, “A Brief Sermon on Assessing Accident Risks in U.S. Commercial Nuclear Power Plants,” in The Enterprise of Knowledge: An Essay on Knowledge, Credal Probability, and Chance (Cambridge, Mass.: M.I.T. Press, 1980). Also, enlightening analyses were presented in staff reports to the President’s Commission on Three Mile Island.