Author manuscript; available in PMC: 2015 Mar 12.
Published in final edited form as: Bioethics. 2001 Aug;15(4):364–370. doi: 10.1111/1467-8519.00244

THE CONCEPT OF RISK IN BIOMEDICAL RESEARCH INVOLVING HUMAN SUBJECTS

PETER H. VAN NESS
PMCID: PMC4357417  NIHMSID: NIHMS666908  PMID: 11697390

Abstract

An established ethical principle of biomedical research involving human subjects stipulates that risk to subjects should be proportionate to an experiment’s potential benefits. Sometimes this principle is imprecisely stated as a requirement that ‘risks and benefits’ be balanced. First, it is noted why this language is imprecise. Second, the persistence of such language is attributed to how it functions as a rhetorical trope. Finally, an argument is made that such a trope is infelicitous because it may not achieve its intended persuasive purposes. More importantly, it should be avoided because it masks the important role that chance plays in clinical research. Risk is the possibility of harm. As a precondition of harm it is unintended and undesirable in projects of biomedical research. It requires ethical vigilance. As a vehicle of chance, however, it is both intended and desirable. It requires methodological appreciation.


The concept of risk figures prominently in several areas of public health. Analytic epidemiologists search for the risk factors for diseases, risk assessors evaluate the hazards associated with environmental exposures, and economic decision analysts calculate the expected utility of alternative health programs. Each perspective contributes something helpful for understanding the concept of risk. Economists, for instance, customarily distinguish risk from uncertainty. Uncertainty characterizes situations in which many outcomes are possible and their likelihoods are unknown. Risk, in contrast, describes circumstances in which one knows the number of possible outcomes and the probabilities of each of them.1 Risk assessors and other decision analysts identify two components of risk: the probability that a certain adverse event will occur and a characterization of the consequences of that event.2 The literature on the ethics of biomedical research involving human subjects from the past 50 years provides another key insight into the concept of risk. This essay will seek to articulate it.
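
To make the decision-analytic vocabulary above concrete, the following sketch (not drawn from any of the cited sources) represents a risk as the pair of components just described: the probability that an adverse event occurs and a measure of the magnitude of its consequences. The event names, probabilities, and severity scale are hypothetical, chosen only to illustrate how the two components combine into a probability-weighted summary.

```python
# A minimal, hypothetical sketch of the decision-analytic characterization of risk:
# a known probability of an adverse event paired with a measure of its consequences.
from dataclasses import dataclass

@dataclass
class Risk:
    event: str          # description of the adverse event
    probability: float  # chance the event occurs (known, unlike under uncertainty)
    magnitude: float    # severity of the consequences, on some agreed scale

    def expected_harm(self) -> float:
        """Probability-weighted severity, one common summary of a risk."""
        return self.probability * self.magnitude

# Illustrative values only: a rare but serious event versus a common but mild one.
risks = [
    Risk("serious adverse reaction", probability=0.001, magnitude=100.0),
    Risk("transient discomfort", probability=0.30, magnitude=1.0),
]

for r in risks:
    print(f"{r.event}: expected harm = {r.expected_harm():.3f}")
```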

The concept of risk is invoked in many of the important ethical documents governing contemporary biomedical research. For example, the sixth article of the Nuremberg Code (1947) reads: ‘The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.’3 Implicit here is an appeal to the process of comparing potential harms and benefits. The same appeal is present in an earlier article, which requires that subjects be informed of ‘all inconveniences and hazards reasonably to be expected.’4 (It is assumed here that ‘hazard’ and ‘risk’ refer to the same concept.5) The Declaration of Helsinki of the World Medical Association (1964; most recent amendment, 1996) includes similar points. The fourth basic principle mandates that ‘the importance of the objective is in proportion to the inherent risk to the subject.’6 The seventh makes explicit the process of weighing potential harms and benefits: ‘Physicians should cease any investigation if the hazards are found to outweigh the potential benefits.’7

In 1978 the authors of the Belmont Report provided a more sustained reflection on the ethical principles that should guide biomedical research. They clarify a point about the concept of risk that had been unstated in previous writings:

Unlike ‘risk,’ ‘benefit’ is not a term that expresses probabilities. Risk is properly contrasted to probability of benefits, and benefits are properly contrasted with harms rather than risk of harms. Accordingly, so-called risk/benefit assessments are concerned with the probabilities and magnitudes of possible harms and anticipated benefits.8

A windfall is the semantic counterpart of a risk or hazard because it connotes a benefit that is at least partially due to chance. Although the authors of the Belmont Report cogently argue for the asymmetry of the two concepts, the pairing of ‘risks and benefits’ is still a common occurrence in biomedical contexts. In the ethics chapter of a widely used textbook on clinical trials, the author calls for a ‘careful assessment of predictable risks in comparison with foreseeable benefits.’9 Also, the International Guidelines for Ethical Review of Epidemiological Studies (1991) contains the following statement in its section on ethical principles: ‘Investigators must be able to demonstrate that the benefits outweigh the risks for both individuals and groups.’10 (This latter unmodified pairing of risks and benefits is the more egregious violation of the Belmont clarification.) Even the authors of the Belmont Report revert to the language of risks and benefits in a subsequent section of their report.11

It is not only grammatical scrupulosity that makes the continued usage of the asymmetrical pairing of ‘risks and benefits’ worthy of note. Rather the usage persists for a reason. It is especially evident in the context of biomedical research involving human subjects that benefits are intended and harms are not. If harm does come to an experimental subject in an ethically conducted clinical trial, for instance, it should be because of bad luck. It is commonly allowed that experimental subjects who are not especially vulnerable, e.g., not children or pregnant women, may with consent be subjected to ‘minimal risk.’ The Code of Federal Regulations concerning the protection of human subjects defines this phrase to mean ‘that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examination and tests.’12 Hence, in general, there is a predictably small probability that an adverse event will occur. Harm occurs to the subject for whom this generally small probability becomes a particularly acute misfortune.

The asymmetrical pairing of ‘risks and benefits’ is likely a consequence of the fact that benefits are intended but harms are not. In customary usage the notion of benefit connotes the presence of agency and intentionality while the concept of risk does not. (The English word ‘benefit’ is etymologically derived from the Latin ‘benefactum,’ the past participle of the verb ‘benefacere,’ meaning ‘to do a service.’)13 To preface ‘benefits’ with words like ‘potential’ or ‘possible,’ or even ‘expected,’ tends to qualify the researchers’ intent and ability to do good. Thus this asymmetrical language is something of a rhetorical trope that conveys connotations of benevolence and optimism – benevolence, because it communicates indirectly that benefits are intended and harms are not, and optimism, because it suggests that harms are a product of chance but benefits are not.

Recognition of the presence of chance and the absence of intentionality in the concept of risk helps to explain why the language of ‘risks and benefits’ persists in colloquial speech and in the ethical literature about biomedical research involving human subjects. The phrase serves certain rhetorical purposes. Yet is this explanation a good reason for persisting in this usage? In addition to the advantage of conceptual consistency (which many may regard as little more than an Emersonian hobgoblin14), there are two points that militate against the use of this colloquial phrase in biomedical contexts. One is practical but somewhat speculative; the other is more theoretical yet indubitably central to contemporary biomedical research.

First, by not qualifying the benefits associated with biomedical research as potential or expected, one accentuates them, and by doing so one might seem to enhance recruitment and lobbying efforts on behalf of such research. The effect of this linguistic ploy, however, might not be so felicitous. The psychologists Amos Tversky and Daniel Kahneman have proposed a theory about how the ‘framing of decisions’ influences people’s choices and, in particular, their choices among options that involve varying degrees of risk.15 On the basis of numerous experiments with human subjects they conclude that people are more risk-seeking when presented with decisions framed in terms of losses and more risk-averse when presented with decisions framed in terms of gains. When people are asked to choose between two treatment programs, they tend to be more willing to undergo risk when the relevant probability is stated in terms of the possible number of lives lost than when it is stated in terms of the possible number of lives saved. Apparently, when the message is framed in terms of a benefit, people respond in a way that seeks to conserve a contemplated benefit and avoid its possible loss. Alexander Rothman and Peter Salovey have applied the results of ‘prospect theory’ to public health efforts to promote healthy behavior.16 They correctly point out that the impact of the loss frame versus the benefit frame is very sensitive to the social context in which the message is presented, so one must be cautious in making general applications of Tversky and Kahneman’s theory. Still, this line of research suggests that a rhetorical deployment of the language of risks and benefits may have the opposite recruitment effect from what might naively be imagined.
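
The arithmetic behind such framing effects can be made explicit. The sketch below is an illustration rather than a reproduction of any experiment reported in the cited paper; it uses the widely discussed scenario in which 600 lives are at stake to show that the gain-framed options (stated as lives saved) and the loss-framed options (stated as lives lost) have identical expected values, so any systematic difference in how people choose among them is attributable to the frame, not to the numbers.

```python
# A minimal sketch of the expected-value equivalence behind a framing comparison.
# The scenario and numbers are the familiar 600-lives illustration, used here only
# to show that the framed options do not differ arithmetically.

def expected_lives_saved(outcomes):
    """outcomes: a list of (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

TOTAL_AT_STAKE = 600

# Gain frame: options stated in lives saved.
program_a = [(1.0, 200)]               # "200 people will be saved"
program_b = [(1/3, 600), (2/3, 0)]     # "1/3 chance all are saved, 2/3 chance none"

# Loss frame: the same options stated in lives lost.
program_c = [(1.0, TOTAL_AT_STAKE - 400)]       # "400 people will die"
program_d = [(1/3, TOTAL_AT_STAKE), (2/3, 0)]   # "1/3 chance nobody dies, 2/3 chance 600 die"

for name, program in [("A", program_a), ("B", program_b),
                      ("C", program_c), ("D", program_d)]:
    print(f"Program {name}: expected lives saved = {expected_lives_saved(program):.0f}")

# All four options have an expected value of 200 lives saved; majorities nonetheless
# tend to prefer A in the gain frame (risk-averse) and D in the loss frame (risk-seeking).
```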

A second reason for not concealing the qualified character of benefits associated with biomedical research on human subjects is that it misrepresents the nature and purpose of such research. The primary intended benefit of a clinical trial is the attainment of what the authors of the Belmont Report call ‘generalizable knowledge.’17 Any benefits that accrue to experimental subjects are certainly desired by compassionate researchers but, even so, they are epiphenomenal in the design, conduct, and evaluation of a clinical trial. The trial is directed toward attaining the medical and clinical knowledge that will empower the effective treatment of the population of patients that the experimental subjects statistically represent. The ethical obligation of the researchers to human subjects is well summarized by the traditional phrase ‘Primum non nocere.’ Their primary responsibility is to do no harm to the subjects, yet they cannot carry out their task of attaining generalizable knowledge if they accept the related obligation of doing good in the same way that a clinician does. Researchers allow experimental subjects to be subject to chance in a way that clinicians do not.

The concept of risk involves the idea of chance. A risk is a possible harm in which the probability of harm may be predictable to a certain degree but never controllable with certainty. The chance that pervades the risk to human subjects in biomedical research is also associated with the benefit that may accrue from it. It is unavoidable; in fact, its presence is desirable because of the way that it promotes the goal of securing generalizable knowledge. The random allocation of experimental subjects to comparison groups employs chance for the purpose of conducting a sound scientific study. Randomization helps to avoid selection bias or the differential assignment of subjects to comparison groups in a way that favors a particular outcome. It helps ensure that prognostic factors are evenly represented in the comparison groups and thereby militates against confounding the treatment/disease relationship. Finally, random assignment of subjects to treatment options provides a sound basis for the statistical analyses that contribute to the interpretation of study results.
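
A minimal sketch, assuming a simple (unrestricted) two-arm allocation, may help make this use of chance concrete. It is a hypothetical illustration rather than the procedure of any particular trial: real studies typically rely on audited randomization lists and restricted schemes such as permuted blocks or stratification, and the function name and subject identifiers below are invented for the example.

```python
# A hypothetical sketch of simple random allocation of subjects to two comparison
# groups: the assignment of each subject is left to chance, not to the investigator.
import random

def randomize(subject_ids, arms=("treatment", "control"), seed=None):
    """Assign each subject to an arm by an independent random draw."""
    rng = random.Random(seed)
    return {sid: rng.choice(arms) for sid in subject_ids}

# Invented subject identifiers, for illustration only.
assignments = randomize([f"S{i:03d}" for i in range(1, 11)], seed=42)
for sid, arm in assignments.items():
    print(sid, arm)
```

Seeding the generator here only makes the illustration reproducible; the substantive point is that chance, rather than clinical judgment, determines which comparison group any given subject joins.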

The randomized clinical trial is a paradigmatic case of what the philosopher Ian Hacking has called ‘the taming of chance.’18 Hacking reports that Enlightenment theorists of natural science regarded chance as an irrational and disruptive force that only vulgar and superstitious people invoke in explaining natural events. On that view, chance is precisely what scientific theories banish from explanations of how the world works. In the book of that title, Hacking recounts how nineteenth-century social scientists began articulating a new type of scientific law, one that incorporated probabilistic thinking into scientific generalities rather than banishing chance. The statistician Francis Galton contributed importantly to this development, while the sociologist Émile Durkheim and the philosopher Charles Sanders Peirce introduced statistical methodology into their respective disciplines. Of course, with the advent of quantum mechanics, probabilistic thinking eventually came to pervade even physics – the principal domain of Enlightenment determinism.

The persistence of the language of ‘risks and benefits’ is a symptom of the ambivalence people feel about chance. The authors of the Belmont Report cogently stated that biomedical research is a project in which human subjects become vulnerable to potential harms and available to potential benefits. They insisted, albeit inconsistently, on the symmetrical pairing of harms and benefits as similarly subject to chance. In this brief essay I have argued more strongly for this conceptual symmetry and for a linguistic practice that reflects it, in order to educate people about the nature of scientific research as an epistemological and a social process.19 At its best science is a social process in which people cooperate with intelligence, hope, and good will. Yet there are no guarantees. Chance cannot be banished; indeed, it should be embraced as an inevitable part of reality that in some ways may be tamed for the sake of human well-being. Included in the concept of risk is its natural inevitability. The poet Stéphane Mallarmé states it this way: ‘Toute Pensée émet un Coup de Dés (All thought emits a throw of the dice).’20 At least as regards the thoughtful planning of clinical trials, the inevitability of risk is a fact that warrants both methodological appreciation and ethical vigilance.21

References

  • 1.Pindyck RS, Rubinfeld DL. Microeconomics. 4th ed. Upper Saddle River, NJ: Prentice-Hall; 1998. p. 148n.
  • 2.The Presidential/Congressional Commission on Risk Assessment and Risk Management. Risk Assessment and Risk Management in Regulatory Decision Making. 1997;2. It has long been recognized that the assessment of the consequences of an adverse event is not merely a scientific matter; rather, it involves a complex network of social and cultural meanings. For a relevant anthropological account, see: Douglas M, Wildavsky A. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. Berkeley: University of California Press; 1982.
  • 3.The Nuremberg Code. 1947. Article 6.
  • 4.Ibid., Article 1.
  • 5.The Oxford English Dictionary. Oxford: Oxford University Press; 1971. ‘Hazard’ (substantive): definition 3; ‘Hazard’ (verbal): definition 1.
  • 6.World Medical Association. Declaration of Helsinki (amended 1996). JAMA. 1997;277:925.
  • 7.Ibid.
  • 8.The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: 1978. DHEW Publication No. (OS) 78-0012:15.
  • 9.Pocock SJ. Clinical Trials: A Practical Approach. New York: John Wiley & Sons; 1984. p. 101. For more precise usage, see: Levine RJ. Ethics and Regulation of Clinical Research. 2nd ed. New Haven: Yale University Press; 1988.
  • 10.Council for International Organizations of Medical Sciences (CIOMS). International Guidelines for Ethical Review of Epidemiological Studies. Geneva: 1991. p. 16.
  • 11.Op. cit., note 8:16–18.
  • 12.Code of Federal Regulations: Title 45-Public Welfare, Part 46-Protection of Human Subjects. 1991:102.i.
  • 13.The American Heritage Dictionary of the English Language. 3rd ed. Boston: Houghton Mifflin; 1996. ‘Benefit,’ p. 173.
  • 14.Emerson RW. Self-Reliance. In: Essays: First Series. Vol. 2 of The Works of Ralph Waldo Emerson. Boston: Houghton Mifflin; 1983. p. 58.
  • 15.Tversky A, Kahneman D. The Framing of Decisions and the Psychology of Choice. Science. 1981;211:453–458. doi: 10.1126/science.7455683.
  • 16.Rothman AJ, Salovey P. Shaping Perceptions to Motivate Healthy Behavior: The Role of Message Framing. Psychological Bulletin. 1997;121:3–19. doi: 10.1037/0033-2909.121.1.3.
  • 17.Op. cit., note 8:3.
  • 18.Hacking I. The Taming of Chance. Cambridge: Cambridge University Press; 1990.
  • 19.The German sociologist Ulrich Beck has provocatively claimed that the distribution of risks has become as important a factor in the functioning and meaning of contemporary society as the distribution of wealth. If the social distribution of hazards from technological and economic development has attained this new status, then how the concept of risk is conceived and deployed in biomedical research involving human subjects may be paradigmatic of a more extensive social situation. See: Beck U. Risk Society: Towards a New Modernity. London: Sage Publications; 1992. p. 19.
  • 20.Mallarmé S. Un coup de dés. In: Hartley A, translator and editor. Mallarmé. Baltimore: Penguin Books; 1963. p. 233.
  • 21.The author acknowledges with appreciation the assistance of Susan L. Katz and Stanislav V. Kasl. The research was partially supported by a grant from the National Institute of Mental Health: MH14235-24.
