Author manuscript; available in PMC: 2018 Jan 1.
Published in final edited form as: Account Res. 2016 Jun 13;24(1):1–29. doi: 10.1080/08989621.2016.1198978

The Role of Intuition in Risk/Benefit Decision-Making in Human Subjects Research

David B Resnik 1
PMCID: PMC5126729  NIHMSID: NIHMS831411  PMID: 27294429

Abstract

One of the key principles of ethical research involving human subjects is that the risks of research should be acceptable in relation to expected benefits. Institutional review board (IRB) members often rely on intuition to make risk/benefit decisions concerning proposed human studies. Some have objected to using intuition to make these decisions because intuition is unreliable and biased and lacks transparency. In this paper, I examine the role of intuition in IRB risk/benefit decision-making and argue that there are practical and philosophical limits to our ability to reduce our reliance on intuition in this process. The fact that IRB risk/benefit decision-making involves intuition need not imply that it is hopelessly subjective or biased, however, since there are strategies that IRBs can employ to improve their decisions, such as using empirical data to estimate the probability of potential harms and benefits, developing classification systems to guide the evaluation of harms and benefits, and engaging in moral reasoning concerning the acceptability of risks.

Keywords: risks, benefits, human subjects research, institutional review boards, intuition, reasoning

1. Introduction

One of the key principles1 of ethical research involving human subjects is that risks should be acceptable in relation to expected benefits (Emanuel et al 2000).2 The Department of Health and Human Services (2009) regulations, otherwise known as the Common Rule, state that for an institutional review board (IRB) to approve research it must determine that “Risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result (45 CFR 46.111a2).” Regulations in other countries and international ethical guidelines include similar statements (Australian National Government 2015, Canada Institutes of Health Research 2005, Council for the International Organization of Medical Sciences 2002, United Kingdom Department of Health 2011, World Medical Association 2013).

Although the idea that risks should be acceptable in relation to expected benefits has widespread acceptance, there is little guidance on how to interpret or apply this principle because regulations and guidelines do not clearly define “risks” and “benefits” nor do they say what makes risks acceptable (or reasonable or justified) in relation to expected benefits (Levine 1988, Kimmelman 2004, Rid et al 2010, Rid and Wendler 2011). Lack of regulatory guidance concerning risks and benefits may lead to inconsistent IRB decisions in the same or similar cases.3 For example, Green et al (2006) found significant variation in IRB risk/benefit evaluations of the same study reviewed at 43 different research sites. Ten IRBs gave the study expedited review because they judged it as minimal risk, 31 gave it full board review because they viewed it as more than minimal risk, one declared that the study was exempt from review, and one refused to approve the study on the grounds that it was too risky (Green et al 2006). Studies by Shah et al (2004) and Van Luijn et al (2002) have also documented variation in IRB risk/benefit assessments. For example, Shah et al (2004) reported that 48% of 188 responding IRB chairs judged a magnetic resonance imaging scan with no sedation to be minimal risk, 35% said it was a minor increase over minimal risk, 9% said it was more than a minor increase over minimal risk, and 8% said that they didn’t know its risk (Shah et al 2004).

Inconsistent IRB decision-making concerning risks and expected benefits is a significant ethical and practical concern for several reasons (Wendler et al 2005, Rid et al 2010, Rid and Wendler 2012). First, it suggests that some IRBs may provide inadequate protections for human subjects because they underestimate risks, overestimate expected benefits, or both. Second, it suggests that some IRBs may impede valuable research because they overestimate risks, underestimate expected benefits, or both. Third, if a study involves multiple research sites, inconsistent IRB risk/benefit decisions at different sites could delay final approval needlessly and waste resources (Silberman and Kahn 2011, Klitzman 2015).

One of the reasons why IRB risk/benefit decision-making often exhibits significant variation is that IRB members may evaluate risks and benefits based on intuition rather than reasoning from empirical data (Van Luijn et al 2002, Kimmelman 2004, Rid et al 2010, Pritchard 2011). For example, Van Luijn et al (2002) interviewed 53 IRB members concerning risk/benefit decision-making and found that only 12% assessed benefits and risks systematically, while 20% made assessments based on an overall impression or feeling of the balance of risks and expected benefits, and 10% made an assessment based on whether they would participate in a study or recommend that a family member do so. Other studies of IRBs have found that members often make risk/benefit decisions based on their personal experiences (Stark 2012) or gut feelings (Klitzman 2015).

Commentators on the ethics of human research have recognized for many years that oversight committees often do not make risk/benefit decisions by means of systematic reasoning. For example, the authors of the Belmont Report noted that “It is commonly said that benefits and risks must be ‘balanced’ and shown to be ‘in a favorable ratio.’ The metaphorical character of these terms draws attention to the difficulty of making precise judgments. Only on rare occasions will quantitative techniques be available for the scrutiny of research protocols (National Commission 1979, pp. 8-9).” While the authors of the Belmont Report understood that quantifying risk/benefit assessments may be difficult, they also held that oversight committees should “strive to assess risks and benefits systematically (National Commission, 1979, p. 9).” Since the publication of the Belmont Report over three decades ago, numerous authors have proposed methods for assessing risks and expected benefits of research systematically (Levine 1988, Meslin 1990, Chuang-Stein 1994, Martin et al 1995, Weijer 2000, National Bioethics Advisory Commission 2001, Weijer and Miller 2004, Wendler et al 2005, London 2006, Wendler and Miller 2007, Rid et al 2010, Rid and Wendler 2011, Shamoo and Ayyub 2011, Bernabe et al 2012a, 2012b, Kimmelman and Henderson 2016). Most of these methods attempt to reduce reliance on intuition in risk/benefit decision-making.

In this paper I will examine the role of intuition in IRB risk/benefit decision-making in human subjects research and argue that there are limits to our ability to reduce reliance on intuition in this process. The fact that IRB risk/benefit decision-making must sometimes involve intuition need not imply that it is hopelessly subjective or biased, however, since there are strategies that IRBs can employ to improve their decisions, such as using empirical data to estimate the probability of potential harms or benefits, developing a classification system to guide the evaluation of harms and benefits, and engaging in moral reasoning concerning the acceptability of risks.

2. What is Intuition?

Before beginning this inquiry, it will be useful to clarify what I mean by “intuition.” Philosophers and psychologists have traditionally understood intuition as a mental process in which one forms a belief or judgment immediately, without any conscious awareness of an inference process at work (Rorty 1967, Haidt 2001, Audi 2001, 2004, Kahneman 2011, Pust 2012).4 In layman’s terms an intuition is a gut feeling or hunch. Intuition is usually distinguished from reasoning, which involves forming beliefs or judgments as a result of conscious inference or deliberation (Kahneman 2011). Personal preferences or tastes are paradigmatic examples of intuitive judgments (Kahneman 2011). For example, suppose that John samples pizza from two different restaurants and determines that the pizza from Restaurant A is better than the pizza from Restaurant B. If he forms this judgment without any awareness of a reasoning process at work we could describe it as intuitive.

By contrast, suppose that Jane is an inspector who is charged with grading restaurants according to clearly defined sanitation standards developed by the health department. She inspects both restaurants and gives A an A+ rating and B a B rating. We could describe Jane’s ratings as resulting from reasoning, since she formed these judgments on the basis of observations of the restaurants and her interpretation of the rating standards as they apply to the restaurants. Examples of other types of reasoning processes include: solving mathematical problems, interpreting legal texts, analyzing arguments, and deliberating about moves in a chess game (Kahneman 2011).

3. Intuition in Psychology/Neuroscience

Psychologists and neuroscientists attempt to describe and explain the processes and mechanisms involved in forming intuitive beliefs and judgments. Many of our intuitions result from emotional responses and feelings, such as empathy, fear, jealousy, joy, disgust, anger, pleasure, and pain (Kahneman 2011, Greene 2013). Emotional responses may be triggered by many different factors, including social interactions, biological needs, mental and physical illnesses, and sensory perceptions. For example, the movie The Silent Scream has been particularly effective at persuading people to regard abortion as wrong, because it includes ultrasound video images of the abortion of a 12-week-old fetus (Greene 2013). The images elicit empathetic responses, because the fetus appears to make human-like movements, such as sucking its thumb and recoiling from the abortion instrument.

How one frames or describes a decision problem or action may also trigger emotional responses that impact judgment and belief. For example, suppose a cancer patient is deciding whether to participate in a clinical trial. Statement A says: “If you participate in this study, there is a 10% chance that you will be cured,” while statement B says: “If you participate in this study, there is a 90% chance that you will not be cured.” Although these statements are logically equivalent, people who are presented with statement A are more likely to choose to participate in the study than those who are presented with statement B, because statement A frames the decision optimistically, whereas statement B does not (Kahneman 2011, Greene 2013).

Intuitive judgments and beliefs may also result from subconscious inferences. For example, suppose that police officers A and B are interrogating a suspect and A declares after they are finished that “I think the suspect is guilty.” B says, “How do you know?” and A replies, “No reason, just a hunch.” Officer A may be unaware that his hunch has resulted from subconscious inductive inferences from his observations of the suspect’s mannerisms under questioning which indicate, based on his previous experience with other suspects, that this suspect is probably not telling the truth when he says that he is innocent. Much of what we label as “commonsense” or “professional judgment” may result from subconscious processes involving inductive or deductive inferences (Kahneman 2011).

Psychologists have conducted experiments to better understand some of these subconscious processes, also known as heuristics (Tversky and Kahneman 1974, Kahneman et al 1984). Some of these include: the availability heuristic, in which people estimate probabilities based on their ability to recollect or imagine similar events; and the anchoring heuristic, in which people fail to adjust initial probability estimates upwards or downwards in response to new evidence (Tversky and Kahneman 1974). Although heuristics can serve us well most of the time, they can lead to biases and errors. For example, a person who overestimates the probability of getting bitten by a shark due to vivid media coverage of a recent attack would be misled by the availability heuristic. A person who originally estimates the probability of being attacked by a shark while swimming at the beach as 1/100 but fails to revise this estimate downward, despite being aware of evidence that the probability is much lower than 1/100, would be misled by the anchoring heuristic.

Heuristics also play a role in intuitive beliefs or judgments formed on the basis of sensory perception, such as beliefs or judgements related to distance, depth, size, and shape. For example, we follow a heuristic that tells us that the clarity of a visual image is an indicator of the distance of the object. While this heuristic is accurate most of the time, it may mislead us when poor atmospheric conditions make nearby objects appear blurry or excellent atmospheric conditions make far away objects appear clear (Kahneman 2011).

According to Kahneman (2011), human beings have evolved two types of cognitive systems: one that is quick and useful (intuition) and one that is slower but more accurate (reasoning). Kahneman claims that the intuitive system guides most of our day-to-day cognition until it leads us astray and we turn to the reasoning system to rethink our judgments or beliefs (Kahneman 2011).

Psychologists and neuroscientists have also investigated the role of intuition in moral judgment. Those who follow Kohlberg’s (1981) approach to moral development hold that moral judgments are based on conscious reasoning. Kohlberg conducted studies that asked children at different ages how they would respond to specific moral dilemmas, including their reasons for making a particular choice. Younger children chose to act morally because they feared being punished for immoral acts, while older children chose to act morally because they felt they had an obligation to obey social conventions or rules. Adolescents offered justifications for their actions that demonstrated a grasp of moral principles and concepts (such as justice and universalizability) which transcend social conventions. Kohlberg (1981) concluded from these experiments that fully developed moral judgment involves reasoning.

Other psychologists, such as Haidt (2001, 2007) have challenged the rationalist approach. Evidence for the importance of intuition in moral judgment comes from experiments in which subjects make moral judgments but cannot clearly articulate their reasoning. Also, subjects may claim to accept moral principles that seem to have no impact on the judgments they make and they may offer reasoning that is confused or inconsistent. Haidt (2001, 2007) concludes from this evidence that we make moral judgments on the basis of intuition and that reasoning comes into play only afterwards, when we attempt to justify (or rationalize) our moral judgments. Other psychologists, such as Pizarro and Bloom (2003), Cushman et al (2006) and Feinberg et al (2012) have conducted experiments which tend to show that some moral judgments emanate from reasoning while others are due to intuition. In these experiments, subjects offer coherent justifications for their moral judgments (Cushman et al 2006). Pizarro and Bloom (2003) and Feinberg et al (2012) also found that moral intuitions can be shaped or tempered by reasoning processes, which indicates that reasoning can have an indirect effect on moral intuition.

Neuroscientists have contributed to this debate by attempting to identify regions of the brain, patterns of neuronal activity, and neurotransmitters associated with moral judgment (Greene 2013, Will and Klapwijk 2014). Some of the most interesting studies of the neurological basis of moral judgment focus on individuals with damage to specific areas of the brain (Greene 2013). For example, individuals with damage to the frontal cortex tend to have impaired moral judgment, suggesting that reasoning plays a key role in moral thinking, since the frontal cortex is largely responsible for reasoning and deliberation. However, individuals with damage to emotional centers of the brain also have impairments of moral judgment, suggesting that emotion also plays an important role in moral judgment (Greene 2013). Experiments also indicate that centers of the human brain associated with reasoning and emotion work together to influence moral decision-making (Greene et al 2001).

4. Intuition in Philosophy

In contrast to psychologists and neuroscientists, philosophers are interested not so much in describing how we form intuitions but in considering whether we should rely on intuition. Philosophers take a normative (as opposed to descriptive) approach to intuition.

One of the main arguments against relying on intuition is that it can be unreliable and biased (Goldman 1988, Rid et al 2010). As we have noted earlier, heuristics can lead people to make erroneous judgments concerning probability and risk. Emotional reactions can also bias empirical judgments (Kahneman 2011). Moral intuitions may be influenced by personal interests, emotions, racial or ethnic prejudices, and other factors that lead to erroneous or biased judgments (Rachels 1993, Greene 2013). Many philosophers have claimed that we should engage in moral reasoning to overcome our prejudices and biases and think more clearly about moral questions (Rachels 1993, Pojman 2005). Philosophers and scientists have developed systems of thought—such as deductive and inductive logic, statistics, decision theory, and the scientific method—which one can use to form judgments and beliefs based on reasoning (Giere et al 2005). Another argument against relying on intuition is that, unlike reasoning, intuition lacks transparency (Haidt 2001, Rid et al 2010). Since we are not aware of how we have formed intuitive judgments or beliefs, we may not be able to justify or explain them to other people. Lack of transparency can be a serious problem when one makes decisions that significantly impact society, because we expect people who make such decisions (e.g. government officials, elected representatives) to be publicly accountable (Daniels and Sabin 2002). One might argue that transparency is also important in IRB decision-making, because IRB decisions have significant impacts on human subjects, investigators, and institutions (Rid et al 2010, Schneider 2015).

There are, however, several arguments for relying on intuition. First, reliance on intuition might be justified when we do not have enough time or information to engage in reasoning. For example, a surgeon who is operating on a patient may not have enough time or information to determine whether an unusual piece of tissue he or she finds in someone’s abdomen while performing an appendectomy is benign or malignant. The surgeon may form an intuitive judgment that the tissue is probably malignant and decide to remove it to promote the patient’s health. Simon (1957, 1990) argues that when people make choices they must decide whether there is enough time and information to engage in reasoning or whether the best course of action is to follow intuition. In some cases, reasoning may need to be aborted because one is running out of time and a decision must be made.

Second, reliance on intuition might be justified for forming judgments or beliefs over which reasoning has little power, which I will call beliefs or judgments which are not judicable by reason. Judgments or beliefs concerning personal preferences, tastes, pleasure, pain, discomfort, or offensiveness would seem to fit into this category. For example, if John and Jane have a debate about whether restaurant A or B makes better pizza, they may not be able to decide this question by means of reasoning. Or suppose that John and Jane are arguing about which is more painful, a blood draw or a headache. John says that a headache is more painful because he hates headaches, whereas Jane says that a blood draw is more painful because she hates being stuck with a needle.5 Again, while John and Jane might be able to argue about this issue, reasoning would seem to have little power to convince them one way or the other.

Some philosophers, such as Hume (2000), Shaftesbury, Hutcheson, and Moore (2004), have argued that reasoning has little influence over moral beliefs or judgments, but I will not adopt that controversial position (Audi 2004). I will assume it is possible to form moral beliefs or judgments on the basis of reasoning and that scientists can investigate how often (and under what conditions) this occurs. It may turn out that empirical studies of moral cognition indicate that most of the time we rely on intuition to make moral choices, but I can accept this result as long as it is possible to use reason to form moral judgments and beliefs.

Third, reliance on intuition might be justified for forming beliefs or judgments which are taken to be self-evident. Philosophers have debated for centuries about whether there are some intuitive beliefs or judgments that provide the foundation for human knowledge (Audi 2010). Epistemological foundationalists hold that there are some self-evident beliefs that underlie all human knowledge (Fumerton 2010, Poston 2015). The regress argument is one of the main rationales for foundationalism. According to this argument, for a belief to be counted as knowledge, it must be justified. We justify beliefs in terms of other beliefs.6 Since justification cannot go on indefinitely, there must be some beliefs that are accepted without further justification, i.e. self-evident, intuitive beliefs that provide the basis for knowledge. For example, Descartes (1993) argued that his awareness of his own existence was a self-evident belief that provided the basis for his knowledge, while Hume (2000) claimed that beliefs generated by the senses (matters of fact) and tautologies (relations of ideas) provide the foundation for knowledge.

Coherentists challenge the notion that there are self-evident beliefs by claiming that beliefs are justified by virtue of their inferential connections to other beliefs in a system of belief (Murphy 2015).7 Sellars (1956), for example, argues that beliefs or judgments formed on the basis of sensory perception cannot be taken as self-evident, because the senses may be unreliable. The decision to rely on sensory perception must itself be justified by other beliefs or judgments. Foundationalists object that coherentism ultimately leads to circular reasoning, because one eventually assumes the very belief one is trying to prove, but coherentists reply that circularity need not be a problem as long as the circle is large enough (Fumerton 2010, Poston 2015, Murphy 2015).

The foundationalism vs. coherentism issue also arises in debates about moral knowledge.8 Moral foundationalists claim that all moral beliefs or judgments are based on some intuitive, self-justified moral beliefs or judgments (Ross 1930). For example, utilitarians view the principle of utility as foundational (Mill 1979), Kantians regard the Categorical Imperative as self-evident (Kant 1964), and natural rights theorists hold that some basic, human rights provide the basis for morality (Nozick 1974).9 Moral foundationalists also appeal to the regress argument to justify their view (Tramel 2015). Coherentists reject this approach (Stratton-Lake 2014, Tramel 2015). Rawls (1971) argues that we can use reflective equilibrium to achieve a coherence of our moral judgments. To use this method, we start with a set of intuitive moral judgments concerning right and wrong in particular situations and then develop principles that systematize those judgments. We may use the principles to revise our initial judgments and may revise our principles in light of new judgments. As we go back and forth between judgments and principles we eventually reach a point (i.e. reflective equilibrium) in which our principles and judgments cohere. Although the method systematizes intuitive moral judgments, it does not take any particular judgment as foundational. That is, judgments that we initially accept could later be rejected to achieve coherence. Other philosophers (e.g. Daniels 1996, Sayre-McCord 1996) have argued that Rawlsian coherence should include beliefs obtained from the natural and social sciences to ensure that morality is consistent with scientific knowledge.

Though I will not take a stand on the foundationalism vs. coherentism debate in epistemology, I will assume that intuitive beliefs or judgments are unavoidable in moral cognition. My argument for this position takes the form of a dilemma: either moral justification is foundationalist or it is coherentist. If justification is foundationalist, then we must rely on self-evident (intuitive) moral beliefs or judgments to provide the foundation for morality. If justification is coherentist, then following Rawls (1971), the coherent set of beliefs or judgments must include intuitive beliefs or judgments. Although intuitive moral beliefs or judgments would not be taken to be self-evident, they would nevertheless play an important role in our moral system of thought. Thus, on either horn of the dilemma reliance on intuitive moral beliefs or judgments is unavoidable at some point. While reasoning can—and should—play a role in moral discourse and decision-making, intuitive beliefs or judgments cannot be entirely eliminated from morality (Audi 2004).

5. Can We Reduce Reliance on Intuition?

As mentioned in the previous section, we often form intuitive judgments or beliefs during day-to-day activities, such as social interactions, work, or professional life. Though these beliefs or judgments may be useful, they lack transparency and may be unreliable or biased. If we want our moral decisions to be more reliable, unbiased, and transparent, then we need to consider whether we can reduce our reliance on intuition in moral cognition.10 As noted earlier, many have argued that IRBs should reduce their reliance on intuition.

To inform our discussion concerning this issue, it will be useful to distinguish between replaceable and irreplaceable intuitions. An intuitive belief or judgment is replaceable if it can be replaced by another belief or judgment obtained by cogent reasoning.11 The replacing belief or judgment might be equivalent to the one it replaces or it could be a more correct (i.e. accurate, reliable, unbiased) belief or judgment.

Intuitive beliefs or judgments concerning personal preferences, pain, and so on are not replaceable because, as argued earlier, they are not judicable by reason. However, intuitive empirical beliefs or judgments can be viewed as replaceable.12 For example, suppose you are asked to estimate the probability of being killed in an automobile accident each time you drive. You could take an intuitive guess, e.g. 1/10,000, or you could make an estimate based on statistical data provided by government agencies. Your intuitive guess could be replaced by a more accurate estimate based on empirical data. Even if your intuitive guess happens to be correct, we could still say that it is replaceable because it could have been obtained by cogent reasoning.

Likewise, intuitive mathematical beliefs can be replaced by beliefs obtained by mathematical reasoning. For example, if you intuitively estimate that 6144/6 = approximately 1000, your intuitive estimate could be replaced by the correct value, 1024. Even if you gave the correct answer the first time by intuition, your answer would be replaceable because it would be equivalent to one obtained by mathematical reasoning.

Intuitive moral beliefs or judgments are replaceable if they can be replaced by other moral beliefs or judgments obtained by cogent reasoning. Replacement of an intuitive belief or judgment may occur as a result of obtaining more information, engaging in further reflection, or both. For example, suppose a family is trying to decide whether to withdraw medical care from M, an elderly matriarch who is comatose and seriously ill as a result of a massive stroke that has damaged the frontal cortex of her brain. M is not expected to regain consciousness. M’s daughter arrives at the hospital room first and instructs the medical team to maintain all life support. Within two days, other family members arrive, including M’s youngest son, who produces a legally valid living will that M signed several years ago in which she stated that she would not want to be kept alive under these conditions. As a result of learning about the living will, the family holds a conference and decides to take M off all artificial life support, because this decision would honor her autonomous choices as expressed in the living will. The family used reasoning to replace its initial, intuitive judgment (i.e. “maintain all life support”) with a different one (i.e. “stop all artificial life support”).

In this example, the principle of respect for autonomy plays a key role in the family’s reasoning. However, some philosophers, known as casuists, argue that one can engage in moral reasoning without appealing to rules or principles. Proponents of the casuist approach to moral reasoning argue that people form moral judgments and beliefs not on the basis of principles or rules but on the basis of comparisons to cases (Jonsen and Toulmin 1990, Strong 2000). In the case discussed above, the family could make a decision concerning withdrawal of life support by considering other cases involving withdrawal of life support and then deciding whether their case is similar to those cases. If their case is similar to cases where they would regard withdrawal of life support as ethical, then they should withdraw life support; if it is similar to cases where they would regard withdrawal of life support as unethical, then they should not withdraw it.

Critics of casuistry argue that one must still appeal to moral principles or rules when comparing cases, since one must use a principle or rule for deciding whether cases are similar or different in relevant ways (Kuczewski 1998, Iltis 2000, Richardson 2000). For example, if the family decided to withdraw life support because their case was similar to another case involving a living will, then they would be assuming that the living will was a relevant point of similarity between the cases. Making a decision on the basis of a living will would be relevant, one might argue, because this would respect the autonomous choices the person made while they were still competent. Furthermore, the case-based approach would appear to assume a meta-principle used in moral reasoning, i.e. “cases that are morally similar in appropriate respects should be decided in the same way.” While cases can play an important role in interpreting and applying moral principles, moral reasoning is not totally devoid of rules or principles (Kuczewski 1998).

Given that moral principles or rules play an important role in moral reasoning, the question arises as to whether they must be accepted on an intuitive basis or whether they can be justified on the basis of other principles, judgments, or beliefs. Some secondary moral principles may be justified on the basis of more fundamental (primary) principles. For example, the moral principle “don’t assault other people” could be derived from a primary principle “don’t harm other people” because assault is a type of harm. Primary moral principles may or may not be replaceable, depending on one’s stance on the foundationalism vs. coherentism debate. If one takes a foundationalist approach to moral justification, then some primary moral principles are not replaceable (i.e. they are self-evident); whereas if one takes a coherentist approach, these principles are replaceable because they could be replaced as one revises the system of judgments/beliefs/principles over time.

To summarize this section, our ability to reduce our reliance on intuition depends on the nature of the belief or judgment in question. We can reduce our reliance on intuition concerning mathematical or empirical beliefs or judgments by engaging in reasoning or obtaining additional evidence. We can reduce our reliance on intuition concerning moral beliefs or judgments by engaging in reasoning, unless the beliefs or judgments are regarded as not judicable by reason or self-evident.

6. Risks and Expected Benefits in Human Subjects Research

With this account of intuition in mind, we can return to the main topic of this paper: can IRBs reduce their reliance on intuition in risk/benefit decisions? To gain some traction on this question, I will describe a series of steps that IRBs can use to make risk/benefit decisions systematically (i.e. by means of reasoning). The steps I describe below are based on the work of other authors (e.g. Levine 1988, Meslin 1990, Weijer 2000, National Bioethics Advisory Commission 2001, Weijer and Miller 2004, Rid et al 2010, Rid and Wendler 2011) and constitute an ideal decision-making process which IRBs may or may not follow in real life. The steps are as follows (a schematic sketch of the sequence appears after the list):

  • Step 1: Identify potential harms and benefits associated with the research.

  • Step 2: Ensure that the research includes appropriate measures to minimize potential harms and maximize potential benefits.

  • Step 3: Estimate the probability or likelihood of harms and benefits.

  • Step 4: Evaluate harms and benefits.

  • Step 5: Decide whether risks are acceptable in relation to expected benefits.

  • Step 6: If risks are not acceptable, require the investigator to modify the proposal to make risks acceptable or table or reject the proposal.

  • Step 7: If (when) the risks are acceptable and the proposal meets other review criteria (e.g. informed consent, confidentiality), accept the proposal.
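
The steps above describe a reasoning process rather than an algorithm, but the sketch below encodes the sequence as a single Python function to make the flow concrete. Everything in it is an illustrative assumption: the function name and inputs stand in for the outputs of Steps 1 through 4, and the comparison used at Step 5 is a deliberately crude placeholder, since, as argued later in this paper, there is no uncontroversial rule for weighing risks against expected benefits.

```python
# Schematic sketch of the seven-step review sequence described above.
# All names, inputs, and the decision rule are illustrative assumptions,
# not an actual IRB procedure or a method prescribed in this paper.
from __future__ import annotations


def review_protocol(harms_minimized: bool,
                    risks: list[tuple[float, float]],             # (probability, harm magnitude)
                    expected_benefits: list[tuple[float, float]], # (probability, benefit magnitude)
                    other_criteria_met: bool) -> str:
    # Step 2: measures to minimize harms and maximize benefits must be in place.
    if not harms_minimized:
        return "require modifications"
    # Steps 3-4 are assumed to have produced probability/magnitude pairs
    # for each potential harm and benefit identified in Step 1.
    total_risk = sum(p * m for p, m in risks)
    total_expected_benefit = sum(p * m for p, m in expected_benefits)
    # Step 5: decide whether risks are acceptable in relation to expected
    # benefits (placeholder comparison; in practice this is a moral judgment).
    if total_risk > total_expected_benefit:
        # Step 6: risks not acceptable.
        return "require modifications, table, or reject"
    # Step 7: approve if the other review criteria (consent, confidentiality,
    # and so on) are also satisfied.
    return "approve" if other_criteria_met else "address other review criteria"
```

The boolean and numeric inputs compress a great deal of qualitative deliberation; the sketch is only meant to show how the steps relate to one another, not to suggest that the decision can be automated.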

7. Identifying Potential Harms and Benefits

In the first step, IRBs identify the potential harms and benefits associated with the research. Investigators usually will provide IRBs with this information in the description of the proposed study. Investigators may obtain this information from the published literature, professional experience, or informed speculation. Investigators should describe potential harms and benefits in enough detail to allow IRBs to evaluate them and estimate their probability. IRBs may also identify potential harms and benefits that have not been discussed in the research proposal, based on their own review of the literature or professional or personal experience (Klitzman 2015).

The potential harms and benefits from human subjects research depend on many different factors, including: the study design and aims; the methods and procedures used in a study; prior research related to the study; the characteristics of the population targeted for enrollment; the qualifications of the research team; the resources available at the research site(s); and the hypotheses under investigation (Levine 1988). Potential harms to human subjects typically include (Levine 1988, Rid et al 2010):

  • Physiological harms, such as bleeding or bruising, nausea, dizziness, immune reactions, drug toxicity, exposure to radiation, hospitalization, disability, or death;

  • Psychosocial harms, such as adverse emotional reactions, or embarrassment, stigma, bias, or discrimination resulting from a breach of confidentiality;

  • Economic harms, such as loss of wages or employability;

  • Legal harms, such as disclosure of suspected child abuse to authorities by investigators;

  • Experiential harms, such as pain, discomfort, or inconvenience.

Members of the research team, identifiable third parties not involved in research, or communities impacted by research may also face potential harms (Levine 1988, Resnik and Sharp 2006, Resnik and Kennedy 2010, Klitzman 2015). Although the federal research regulations focus on risks to human subjects, many argue that IRBs should also address risks to members of the research team or third parties, because other sources of guidance concerning risks (such as the Belmont Report or ethical codes or principles) address these risks (Resnik and Sharp 2006).

Expected benefits to human subjects typically include (Levine 1988, Rid et al 2010):

  • Medical benefits, such as access to medical interventions or treatment under investigation or ancillary care provided by a study;

  • Informational benefits, such as information pertaining to one’s health, e.g. results of laboratory tests or physical exams;

  • Psychological benefits, such as enhanced self-esteem related to contributing to the advancement of human knowledge or public health.

Additionally, society may benefit from research as a result of the knowledge that is gained, which could be used to improve our understanding of disease, develop new treatments, or improve public health (Rid and Wendler 2011). Communities may also benefit from research via interventions developed during research or investments in public health infrastructure (Resnik and Kennedy 2010). Though most people consider money to be a benefit, guidance provided by federal agencies discourages IRBs from treating financial incentives for participation as benefits to subjects since doing so might encourage IRBs to approve risky research with substantial financial rewards (Wertheimer 2013).

Intuition probably does not play a significant role in the IRB’s identification of potential harms and benefits because these are usually spelled out in the research proposal or can be inferred from the description of the study (Klitzman 2015). If IRB members have questions pertaining to the identification of potential harms and benefits, they can ask the investigator additional questions or consult the published literature on the topic.

Intuition may come into play when investigators and IRB members have little empirical data that can be used to identify potential harms or benefits (Kimmelman 2004). For example, in 1971 Stanford University psychology professor Philip Zimbardo conducted an experiment in which 24 male college students agreed to play the roles of prisoners and guards in a mock prison setting. Zimbardo stopped the experiment after six days because the guards started acting brutally and sadistically toward the prisoners. While Zimbardo expected that the subjects would engage in role-playing behavior, he did not anticipate the extent of the guards’ aggressive behavior toward the inmates (Zimbardo 2008). This was a risk that probably would not have been identified, based on the available evidence. To identify this risk, an IRB would need to engage in speculation, which could involve imagination guided by intuition. However, since beliefs concerning potential harms or benefits are empirical, reliance on intuition in forming these beliefs can be reduced if the IRB has sufficient time or information to employ reasoning.

8. Minimizing Harm and Maximizing Benefit

The research proposal should contain information concerning steps that the investigator plans to take to minimize potential harms and maximize potential benefits. These measures depend on various aspects of the proposed study, including the design, methods, procedures, population, and so on. While a survey may require almost no measures to minimize harms and maximize benefits other than the safeguarding of confidential information, a Phase II clinical trial may require many different measures, such as: excluding participants who are too sick to tolerate the treatment or are not likely to benefit from it; regular observations of participants’ health; reporting of adverse events and unanticipated problems to the IRB and sponsor; appropriate drug dosing; training of research staff on ethics and safety issues; data and safety monitoring; provision of treatment for injuries or other health problems detected during the study; and referral of participants to health care providers to follow up on abnormal findings (Levine 1988). Intuition probably plays a minimal role in the IRB’s decision-making concerning steps used to minimize harms or maximize benefits, because most of these will be identified in the research proposal or can be inferred from the proposal (Levine 1988). If the IRB does not have enough information, it may form some intuitive beliefs concerning measures to minimize potential harms or maximize potential benefits, but since these beliefs are empirical, they could be replaced by beliefs based on more complete information.

9. Estimating the Probability of Potential Harms and Benefits

After identifying potential harms and benefits and ensuring that they are minimized or maximized (respectively), IRBs should attempt to estimate the probability (or likelihood) that different outcomes will occur. In thinking about probability, it will be useful to distinguish between objective estimates and subjective estimates (Weiss 2011). Objective estimates base the probability of an event on the observed frequency of the event (i.e. statistical probability) or the number of times the event can occur out of a set of possible occurrences (i.e. mathematical probability). For example, one could obtain a statistical probability for rolling a pair of dice and getting a seven by rolling them 100 times and counting how many times they add up to seven, or one could obtain a mathematical probability by dividing the number of outcomes that equal seven by the total number of possible outcomes (i.e. 6/36 or 1/6).
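
To make the dice example concrete, the short sketch below (a minimal illustration, not drawn from the sources cited) contrasts a statistical estimate obtained from 100 simulated rolls with the mathematical probability of 6/36.

```python
# Contrast a statistical probability (observed frequency of rolling a seven
# in 100 simulated rolls of two dice) with the mathematical probability
# (6 favorable outcomes out of 36 possible, i.e., 1/6).
import random

random.seed(0)  # fixed seed so the example is reproducible
rolls = 100
sevens = sum(1 for _ in range(rolls)
             if random.randint(1, 6) + random.randint(1, 6) == 7)

statistical_estimate = sevens / rolls  # varies with the sample of rolls
mathematical_estimate = 6 / 36         # exactly 1/6
print(statistical_estimate, mathematical_estimate)
```

With only 100 rolls the statistical estimate will typically differ somewhat from 1/6, which is the sense in which frequency-based estimates depend on the available data.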

Mathematical estimates of probability are not likely to be very useful in thinking about potential harms and benefits in human subjects research because they can be misleading. For example, if one wants to estimate the mathematical probability of dying as a result of taking an experimental drug, it would be the outcome (death) divided by the total possible outcomes (death or no death) or 1/2. This probability could be very misleading if one has evidence that the probability of death is significantly less or greater than 0.5. Thus, the probabilities that IRBs use in thinking about potential harms and benefits would be statistical probabilities or subjective ones.

A subjective estimate of the probability of an event is an individual’s personal judgment about the likelihood of the event. A subjective estimate may be an educated guess (Weiss 2011). For example, suppose that one is betting on a horse race and one does not have enough data to obtain a statistical estimate concerning the horse’s odds of winning a race because it has run in only three races. One might place the bet based on one’s personal judgment that the horse will win. This judgment might be informed by empirical evidence, such as the outcomes of the horse’s three races, and information about the horse’s jockey and trainer. Even though the probability estimate might be based on some empirical data, we would still call it a subjective estimate because it is a personal judgment, not a judgment based on statistical data or mathematical relationships. One of the criticisms of subjective probabilities is that they may be biased by one’s opinions, interests, political views, prejudices, and so on (Earman 1992). Proponents of the subjectivist approach counter that one can use Bayes’ theorem to update initial probability estimates in light of new evidence. Diverging subjective estimates of probability will eventually converge on the correct estimate of probability as a result of Bayesian updating (Howson and Urbach 1993). Critics of this approach argue that it can be difficult to overcome initial biases and that there is often not enough time for convergence to occur (Earman 1992).
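
One standard way to make the Bayesian point concrete is a Beta-binomial model, sketched below. The priors and data are hypothetical inventions for illustration: two reviewers start with very different subjective estimates of an adverse-event rate, update them on the same evidence, and end up with posterior estimates that are much closer together.

```python
# Bayesian updating with a Beta prior and binomial data (a common textbook
# model; the priors and counts below are invented for illustration only).

def beta_posterior_mean(prior_a: float, prior_b: float, events: int, n: int) -> float:
    """Posterior mean of an event rate given a Beta(prior_a, prior_b) prior
    and `events` adverse events observed among `n` participants."""
    return (prior_a + events) / (prior_a + prior_b + n)

# Reviewer A's prior corresponds to an initial guess of about 1 in 10;
# Reviewer B's to about 1 in 1000. Both then see the same hypothetical
# evidence: 2 adverse events among 500 participants in comparable studies.
for label, a, b in [("Reviewer A", 1.0, 9.0), ("Reviewer B", 1.0, 999.0)]:
    print(label, round(beta_posterior_mean(a, b, events=2, n=500), 4))
# Posterior means: roughly 0.0059 and 0.0020 -- far closer than the initial
# 0.1 versus 0.001, illustrating convergence under shared evidence.
```

Whether real reviewers update in this way, and whether enough shared evidence is available for convergence, is exactly what the critics cited above question.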

Which of these types of probability estimates would be or could be intuitive? Objective estimates of probability would not be intuitive because they would be based on reasoning (such as inferences from observed frequencies or mathematical relationships). Subjective estimates of probability might also involve reasoning if they are based on inductive inferences from the information one has available to develop a good guess. Only subjective estimates that do not involve conscious reasoning would be intuitive.

As noted earlier, evidence suggests that IRBs often estimate probabilities based on intuition (Van Luijn et al 2002, Stark 2012, Klitzman 2015). Many commentators (e.g. Weijer 2000, National Bioethics Advisory Commission 2001, Weijer and Miller 2004, Wendler et al 2005, Rid et al 2010, Rid and Wendler 2011, Shamoo and Ayyub 2011) have argued that IRBs should use inductive reasoning to estimate probabilities in order to reduce reliance on intuition. Investigators should provide IRBs with probability estimates in their research proposals. For example, an investigator could estimate the probability of a particular outcome, such as fainting during a glucose tolerance test, based on the frequency of the outcome reported in the scientific literature. If an investigator does not provide evidence pertaining to probabilities, an IRB could ask the investigator for this information or conduct its own inquiry to obtain an objective estimate.
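
As a minimal illustration of the kind of frequency-based estimate described above, the sketch below computes a probability from hypothetical counts (the numbers are placeholders, not data from any actual study of glucose tolerance testing), along with the familiar "rule of three" bound sometimes used when no events have been observed.

```python
# Frequency-based probability estimate from reported counts, plus the
# "rule of three" upper bound for the zero-event case. All counts below
# are hypothetical placeholders.

def frequency_estimate(events: int, n: int) -> float:
    """Point estimate of an outcome's probability: observed events / trials."""
    return events / n

def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper bound on a rate when 0 events occur in n trials."""
    return 3.0 / n

# Suppose the literature reported 6 fainting episodes among 1,200 tests...
print(frequency_estimate(6, 1200))        # 0.005, i.e., about 1 in 200
# ...and suppose no serious injuries were reported in those 1,200 tests.
print(rule_of_three_upper_bound(1200))    # 0.0025 approximate upper bound
```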

One problem that IRBs may face in reducing reliance on intuition is that empirical data may be unavailable, because there may be no published literature on the topic (Rajczi 2004). Lack of data can be a significant issue when IRBs are considering potential harms and benefits of novel therapies or interventions, or harms and benefits to research staff, identifiable third parties, or communities. When data on observed frequencies are not available, investigators and IRB members may have no alternative but to estimate probabilities subjectively (Kimmelman 2004). As noted earlier, these judgments may be informed by some empirical data or they could be intuitive judgments. In either case, the probability estimates of IRB members might radically diverge, which could present significant problems for IRB decision-making. For example, if one IRB member judges the probability of death for study participants to be 1/1000, whereas another judges it to be 1/10, this could lead to a serious disagreement concerning the acceptability of the study.

10. Evaluating Potential Harms and Benefits

After estimating the probabilities of potential harms and benefits, the IRB’s next task is to evaluate them. Evaluating harms and benefits involves moral cognition because it requires one to rate outcomes in terms of their value or worth (positive or negative). One must be able to determine, for example, the degree of harm of a headache, minor infection, or other harm related to a study; or the degree of benefit from access to medical care, the advancement of human knowledge, or some other benefit (Kimmelman 2004). Determining the degree of harm or benefit is important for risk/benefit decision-making, since a risk can be understood as a product of the probability and magnitude (or degree) of a harm, while an expected benefit can be understood as a product of the probability and magnitude of a benefit.13 For example, if a study involves a 1/1000 chance of death, we might regard it as more risky than one which involves a 1/2 chance of nausea (and no chance of death) because death is a catastrophic outcome even though it is highly improbable (Rid et al 2010).
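
The product formulation can be made concrete with a toy calculation. In the sketch below the numeric magnitude scores are invented purely for illustration; the paper's point is precisely that assigning such values involves moral evaluation, not just arithmetic.

```python
# Toy illustration of risk as probability x magnitude. The magnitude
# scores (1000 for death, 1 for nausea) are invented assumptions; choosing
# them is itself a moral evaluation, not an empirical fact.

def expected_harm(probability: float, magnitude: float) -> float:
    return probability * magnitude

risk_death = expected_harm(1 / 1000, 1000)  # 1.0
risk_nausea = expected_harm(1 / 2, 1)       # 0.5
print(risk_death > risk_nausea)             # True: the improbable but
                                            # catastrophic outcome dominates
```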

Since the evaluation of benefits and harms involves moral cognition, an important question to ask is whether IRBs can reduce their reliance on intuition when carrying out this task. If the intuitive judgments or beliefs formed by IRB members can be replaced by judgments or beliefs resulting from reasoning, then reliance on intuition can be reduced. In some cases, particular intuitive judgments or beliefs formed by IRB members may be based on some commonly accepted moral principles or values.14 In these cases, particular intuitive judgments or beliefs may be replaceable. For example, if IRB members agree that being able to function normally is an important value, they may be able to agree, by means of reasoning, that loss of a limb is much more harmful than nausea lasting two days. If IRB members agree that being cured of a serious disease is an important value, they may be able to agree, by means of reasoning, that treatment provided to cancer patients in a clinical trial is a significant benefit. Thus, intuitive evaluations of harms and benefits could be replaced by ones obtained by reasoning when IRB members agree on some basic moral principles or values. Of course, these basic principles or values may or may not be replaceable, depending on one’s stance on the foundationalism vs. coherentism debate. For example, a foundationalist might argue that beliefs or judgments concerning the value of life, happiness, health, or autonomy, are self-evident.

In some cases, the intuitive evaluation of harms or benefits may rest on judgments or beliefs concerning pain, discomfort, or personal preferences. In these situations, reliance on intuition may be unavoidable, because these judgments or beliefs are not judicable by reason. For example, suppose IRB member A asserts that the harm associated with a venipuncture is minimal while IRB member B states that this harm is more than minimal. There may be little that A and B can say to each other to justify their different evaluations. IRB member B might justify his judgment on the grounds that he hates being stuck with a needle and finds it to be very painful. If IRB member A does not despise needle-sticks as much as B, there is little that she can say to convince B to make a different judgment. Or suppose that IRB members A and B disagree about the value of research participation. IRB member A places considerable value on research participation because contributing to the advancement of science or public health enhances her self-esteem. B does not place as much value on research participation as A does because he does not view participation as enhancing his self-esteem. Their disagreement may come down to their intuitive judgments concerning personal preferences, which are not judicable by reason.

Some writers have argued that IRB members can reduce their reliance on intuition in evaluating harms and benefits by using a classification system for categorizing outcomes (Rajczi 2004, Shamoo and Ayyub 2011, Rossi and Nelson 2012). The system would be based on widely-shared values. Rid et al (2010) have developed a seven-point scale for classifying potential harms in terms of their duration and magnitude. The categories on the scale include: negligible (e.g. mild nausea), small (e.g. headache), moderate (e.g. insomnia lasting one month), significant (e.g. ligament tear with no permanent disability), major (e.g. permanent, disabling arthritis), severe (e.g. loss of a limb or paraplegia), catastrophic (e.g. permanent, severe dementia or death). Although Rid et al (2010) do not apply the seven-point scale to psychosocial harms to participants, or harms to research staff, third parties or communities, one could conceivably use their scale to classify these harms. One could also construct a scale for classifying benefits similar to the harm scale developed by Rid et al (2010).
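
For readers who want to see how such a scale might be represented in practice, the sketch below encodes the seven categories and examples listed above as a simple lookup table. The numeric ranks and the helper function are assumptions added here for illustration; the scale itself is qualitative.

```python
# The seven-point harm scale described above (after Rid et al 2010),
# encoded as an ordered lookup table. The numeric ranks are added here
# only for illustration; the published scale is qualitative.
HARM_SCALE = {
    1: ("negligible", "mild nausea"),
    2: ("small", "headache"),
    3: ("moderate", "insomnia lasting one month"),
    4: ("significant", "ligament tear with no permanent disability"),
    5: ("major", "permanent, disabling arthritis"),
    6: ("severe", "loss of a limb or paraplegia"),
    7: ("catastrophic", "permanent, severe dementia or death"),
}

def classify(harm_rank: int) -> str:
    """Return the category label for a reviewer-assigned rank, e.g. classify(2) -> 'small'."""
    return HARM_SCALE[harm_rank][0]
```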

While a classification system can be a very useful tool for reasoning about harms and benefits, some reliance on intuition may be unavoidable when IRB members implement the system because they may classify harms and benefits according to their judgments concerning pain or discomfort, or their personal preferences. Nevertheless, it would appear that using a classification system to categorize harms and benefits can reduce reliance on intuition because IRB members may appeal to the system to justify their evaluations of harms or benefits (Rid et al 2010).

Bernabe et al (2012b) and Musschenga et al (2007) have defended a proposal for completely eliminating intuitive judgments from the evaluation of harms and benefits. According to their proposal, IRB members could form judgments or beliefs concerning harms or benefits based on the values of the “reasonable person,” an idea which was first suggested by Levine (1988). IRB members could base their judgments on what a reasonable person would regard as harmful or beneficial. For example, to evaluate the harm associated with a venipuncture, an IRB member could think about the degree of harm that a reasonable person would assign to a venipuncture, not the degree of harm that he or she would assign to the procedure. Likewise, an IRB member could evaluate the benefit of research participation in terms of what the reasonable person would find to be beneficial.

Using the reasonable person standard to guide the evaluation of harms and benefits creates more problems than it solves, however. The “reasonable person” can be understood descriptively or normatively (Miller and Perry 2012). Under a descriptive interpretation, the “reasonable person” is understood as the statistically normal (or average) person within a population. To arrive at some characterization of the “reasonable person” one must specify the relevant population, which creates the potential for significant variation in harm/benefit evaluations. For example, the average person living in the U.S. might have very different opinions concerning research harms and benefits than the average person living in Ethiopia, Peru, or any other country. Harm/benefit evaluations might also vary according to race, ethnicity, gender, religious background, and culture.

Under a normative interpretation, the “reasonable” person is understood as a hypothetical individual who makes morally justified judgments concerning the evaluation of benefits and risks. However, this understanding of the “reasonable person” assumes prior acceptance of a moral viewpoint (i.e. a theory or set of principles) that describes how a reasonable person would make these judgments. Given the widespread disagreement among scholars and laypeople concerning moral issues and viewpoints (Gutmann and Thompson 1998), it is unlikely that a widely accepted, normative account of the reasonable person will be available to IRB members for the foreseeable future.

11. Determining Whether Risks are Acceptable in Relation to Expected Benefits

After working through the preceding steps, IRBs must decide whether the risks of research are acceptable in relation to expected benefits. To make this decision, IRBs must synthesize all of the findings from the previous steps to form an overall assessment of risks in relation to expected benefits. This decision involves moral cognition because risks and expected benefits are a product of probability estimates and moral evaluations. As we have already seen, IRB members often make intuitive judgments concerning the acceptability of risks in relation to expected benefits (Van Luijn et al 2002, Stark 2012, Klitzman 2015). The question we need to ask is whether IRB members can reduce their reliance on intuition at this stage of decision-making, keeping in mind that intuition may have already played a key role in prior stages.

Since determining the acceptability of risks in relation to expected benefits is a moral decision (Shrader-Frechette 1991, Hansson 2003), we can use the insights from the previous sections to understand the role of intuition at this stage. If the intuitive judgments formed by IRB members concerning the acceptability of risks can be replaced by judgments resulting from reasoning, then reliance on intuition can be reduced. To curtail the use of intuition, IRB members would need to base their decisions on some commonly accepted moral principles for managing risks. In thinking about these principles, it is important to keep in mind that a relatively small number of people may bear the risks of research, whereas many may benefit. For example, a new medication may be tested on several thousand human subjects but may help millions of patients if it is approved for marketing. Also, while the risk to research participants may be significant, direct, and well-established, the benefits to other people may be marginal, indirect, or speculative.

Thus, the moral principles used by IRB members should be able to address complex risk/benefit comparisons (Hansson 2003). One obvious candidate for such a principle would have a utilitarian formulation: “The risks of research are acceptable if and only if the expected benefits of research to all people in society are greater than the risks.”15 Utilitarianism is controversial, however. Critics of the theory object that it does not provide adequate protection for the rights or welfare of individuals (Rachels 1993, Pojman 2005). For example, utilitarian principles might support research, such as the hypothermia studies by the Nazis, which imposes a high risk of death on healthy human subjects in order to benefit society.16 Most IRB members would probably not approve such a study, even if the investigators would obtain the subjects’ consent (Klitzman 2015).

Another potential candidate is the principle of beneficence articulated by the National Commission (1979) in the Belmont Report: “(1) do not harm and (2) maximize possible benefits and minimize possible harms (p.2).” While the beneficence principle seems reasonable, it does not provide any useful guidance for deciding whether risks are acceptable. The first part of the principle (i.e. the injunction “do not harm”) is not very helpful because research, by its very nature, often involves some harm to human subjects (Rid et al 2010). What matters is not whether research causes some harm but whether the harms associated with research are acceptable in relation to potential benefits. The second part of the principle does not provide useful guidance because questions concerning the maximization of benefits and minimization of harms are distinct from questions concerning the acceptability of risks. For example, a study of hypothermia in healthy volunteers might minimize risks and maximize benefits but still be regarded as too risky.

A third candidate might be a Kantian principle. For a Kantian, the risks of a study would be acceptable in relation to the benefits if the risk/benefit comparison would conform to the Categorical Imperative. According to one version of this principle, one should always treat humanity as an end in itself, and never merely as a means to another end (Kant 1964). Requiring that researchers obtain informed consent from research subjects would help to ensure that participants are not treated merely as a means, since they would agree to take the risks associated with the study (Shamoo and Resnik 2015). However, requiring consent, by itself, has no implications for the acceptability of risks. An IRB might regard a study (such as a hypothermia experiment) as too risky even if the investigators will obtain informed consent from the subjects (Miller and Joffe 2009).

Another version of the Categorical Imperative might provide some guidance on the acceptability of risks. According to this version, one should act according to a rule that could become a universal principle for all rational beings (Kant 1964). One might argue that some types of studies are excessively risky because a rule that permitted such studies could not become a universal principle for all rational beings. For example, one could argue that the rule “It is acceptable to expose consenting healthy volunteers to a significant risk of death in research in order to benefit society” could not become a universal principle for all rational beings because rational beings would not impose such risks on others (Resnik 2012). However, one could argue that rational beings would be willing to expose consenting healthy volunteers to risky research in some circumstances, so it is not at all clear whether the Categorical Imperative would set an upper limit for risk acceptability. Moreover, we have said nothing about the acceptability of risks for studies that offer participants significant benefits, such as clinical trials. One could argue that it is acceptable to expose human subjects with a serious disease (such as cancer) to significant risks in research if the study offers them the prospect of effective treatment (Miller and Joffe 2009).

To summarize this section: to reduce their use of intuition in determining the acceptability of research risks in relation to expected benefits, IRB members need to be able to make decisions based on moral principles for managing risks. While some IRB members might make decisions based on principles associated with specific moral perspectives (such as utilitarianism or Kantianism), it is likely that not all IRB members will accept or use those principles. In the absence of well-established moral principles, many IRB members may make decisions based on intuition rather than reasoning (Rid and Wendler 2011). It may be difficult to scale back the use of intuition in this part of IRB risk/benefit decision-making, due to disagreements concerning moral theories or perspectives.

Moreover, while relying on moral principles to determine the acceptability of risks may reduce the use of intuition in decision-making, these principles might themselves be regarded as intuitively self-evident, depending on one’s stance on the foundationalism vs. coherentism debate. If the principles are regarded as self-evident, then further reduction in the use of intuition would not be possible.

12. Deciding Whether to Accept the Research Proposal

After completing the preceding steps, the IRB can make decisions concerning the overall acceptability of the research. As noted earlier, an IRB might require an investigator to modify a research proposal so that the risks will be acceptable. An investigator might need to take additional steps to decrease risks, increase expected benefits, or both, as a condition for approval. Since the decisions made at this stage of IRB review are based on those made at earlier stages, they are affected by the use of intuition at those earlier stages. For example, if an IRB has relied on intuition to form judgments concerning the probability of potential harms or benefits, intuition will affect the changes it requires an investigator to make as a condition of approval. Thus, to reduce the use of intuition in decisions about accepting a research proposal, an IRB must reduce its use of intuition at earlier stages.

13. Conclusion: Reducing the Use of Intuition in IRB Risk/Benefit Decision-Making

To sum up, several studies have shown that IRB members often rely on intuition in risk/benefit decision-making (Van Luijn et al 2002, Stark 2012, Klitzman 2015). Numerous writers have argued that IRBs should reduce the use of intuition in risk/benefit decision-making because intuition lacks transparency and can be unreliable or biased (Rid et al 2010). In this paper, I have argued that IRBs can reduce their use of intuition, with some limitations.

If IRBs are relying on intuition to form empirical judgments or beliefs, they can often reduce their use of intuition by obtaining additional data. Several authors (e.g. Rid et al 2010) have argued that IRBs should use empirical data to estimate probabilities concerning potential harms and benefits. I concur with this recommendation and would add that IRBs should use empirical data to identify potential harms and benefits and assess procedures for minimizing harms and maximizing benefits. However, there are some practical limitations to reducing the use of intuition in forming empirical judgments or beliefs pertaining to risks and expected benefits, because empirical data may not be readily available and IRBs may not have sufficient time or resources to obtain the data they need. To help overcome this practical limitation, institutions should ensure that IRBs have adequate resources (e.g. support staff) and manageable workloads.17

If IRBs are relying on intuition to form moral judgments or beliefs, they can reduce their use of intuition by engaging in moral reasoning. As argued earlier, IRBs form moral judgments or beliefs when they evaluate potential harms or benefits or decide whether risks are acceptable in relation to expected benefits. IRB members can engage in moral reasoning by reflecting on these decisions, gathering additional information, and offering arguments for their opinions, provided that they have enough time, given their workload. There are, however, some significant limitations to reducing the use of intuition in forming moral judgments or beliefs concerning risks and expected benefits. These limitations are generally philosophical, rather than practical, in nature.
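
Returning to the empirical recommendation above, the following minimal sketch shows the kind of data-driven probability estimate that can substitute for an intuitive guess. The adverse-event counts are hypothetical, and the simple Wald interval is used only because it is easy to state, not because it is the method any particular IRB should adopt.

# Hypothetical example: estimating the probability of a specific harm from
# prior safety data rather than intuition. All counts are invented for illustration.
import math

def adverse_event_rate(events, participants, z=1.96):
    """Point estimate and approximate 95% (Wald) confidence interval for the
    probability of a harm, based on previously observed data."""
    p = events / participants
    half_width = z * math.sqrt(p * (1 - p) / participants)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Suppose earlier trials of a similar procedure reported 12 moderate adverse
# events among 800 participants.
estimate, low, high = adverse_event_rate(12, 800)
print(f"Estimated probability: {estimate:.3f} (95% CI {low:.3f} to {high:.3f})")

# The estimate informs only the probability component of risk; evaluating the
# severity of the harm and deciding whether the resulting risk is acceptable
# still call for the moral reasoning (or intuition) discussed above.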

First, although IRB members may be able to agree on some basic moral values (such as life, health, or autonomy) which they can use to evaluate harms and benefits in some circumstances, situations may arise in which they must evaluate harms and benefits based on intuitive judgments or beliefs concerning pain, discomfort, personal preference, or other matters which are not judicable by reason. Some have proposed that IRB members could use a classification system to categorize harms (and, by extrapolation, benefits), which would help reduce the use of intuition in this context. However, IRB members might still rely on intuitive judgments or beliefs concerning pain, discomfort, or personal preferences when categorizing harms or benefits. Others have proposed that IRB members can eliminate the use of intuition in the evaluation of harms and benefits by forming judgments and beliefs on the basis of what a reasonable person would regard as harmful or beneficial, but there are problems with clearly articulating a widely shared understanding of the reasonable person.

Second, although some IRB members may justify their decisions concerning the acceptability of risks by appealing to moral principles for managing risks (e.g. utilitarian or Kantian principles), it is likely that not all IRB members will accept the same principles and some will make decisions on an intuitive basis, without any consideration of moral principles.

Third, basic moral values or principles may be regarded as intuitive, depending on one’s stance on the foundationalism vs. coherentism issue. Foundationalists claim that some moral beliefs or judgments (such as values or principles) are intuitively self-evident. Coherentists argue that there are no self-evident moral beliefs or judgments, which means that even basic moral values or principles should be justified in terms of their relationship to other beliefs or judgments. If one accepts a foundationalist approach to moral justification, then reasoning must stop at the level of basic, moral beliefs or judgments, and no further reduction in the use of intuition is possible. If one accepts a coherentist approach, then justification may continue as one seeks coherence of principles, beliefs, and judgments. Even though no single principle, belief, or judgment would be accepted as intuitively self-evident, the coherent system would not be devoid of intuitive beliefs or judgments.

None of the preceding need imply that IRBs should forego the attempt to reduce the use of intuition in making risk/benefit decisions. On the contrary, there are sound reasons for trying to reduce the use of intuitive judgment, because intuition lacks transparency and can be unreliable or biased. Wherever practical, IRBs should obtain empirical data pertaining to risks and benefits, use classification systems to guide the evaluation of potential harms and benefits, and engage in moral reasoning concerning the acceptability of risks. However, it is important to acknowledge that there are practical and philosophical limitations to reducing the use of intuition in IRB risk/benefit decision-making.

Acknowledgments

This research was supported by the intramural program of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS, NIH, or U.S. federal government. I am grateful to Sam Bruton, Jonathan Kimmelman, Joel Pust, Michael Resnik, and David Wendler for helpful comments and discussions.

Footnotes

1. I will use the term “principle” rather broadly in this paper to include any general rule for conduct or decision-making.

2. “Risk” is typically understood as a product of the probability (or likelihood) and magnitude (or severity) of a harm (Levine 1988, Rid et al 2010). Thus, risk includes an epistemic component, i.e. probability. Most regulations and guidelines simply refer to “benefits” without placing any epistemic qualifications on the term. However, as Levine (1988) points out, this way of referring to benefits is misleading, since benefits may also occur with some degree of probability. For the sake of consistency and clarity, in this paper I will refer to “risks” and “expected benefits.”

3. IRBs are also known as Research Ethics Boards (REBs) or Research Ethics Committees (RECs) outside the U.S.

4. Philosophers often characterize intuitions as beliefs, while psychologists describe them as judgments. To accommodate both viewpoints, I will consider intuitions to be beliefs or judgments. See Pust (2012), Kahneman (2011).

5. Many philosophers and scientists would say that pain is inherently subjective. See Resnik et al (2001).

6. The justification relationship could involve deductive, inductive or explanatory connections between beliefs. For example, I might be justified in believing that octagons have more sides than hexagons because this follows from the definitions of these objects (deductive); that John is married because he is wearing a gold ring on his left ring finger, and most people who wear gold rings on that finger are married (inductive); or that my car battery is dead because this belief explains why the radio and starter motor are not working (explanatory).

7. An important problem for coherentists is to specify what is meant by “cohere.” According to some coherentists, coherence consists of internal consistency of beliefs and external validation of beliefs via their practical utility or basis in reality (Sayre-McCord 1996).

8. The foundationalism vs. coherentism issue does not arise for those philosophers, known as non-cognitivists, who deny that we can have moral knowledge (Sayre-McCord 1996). For example, emotivists, such as Ayer (1952), claim that moral discourse expresses emotions but does not express judgments or beliefs. I will assume that we can have moral knowledge, however.

9. A moral principle can be viewed as a moral belief concerning a general rule for conduct.

10. The concept of “reduction” I have in mind here has nothing to do with reduction in the philosophy of science and the philosophy of mind. By “reduce” I mean “use less” or “scale down.” For more on reduction, see Van Riel (2014).

11. By “cogent” I mean “valid” (for deductive reasoning) or “good” (for inductive reasoning).

12. By “empirical” I mean beliefs related to what we observe in the world. Foundationalists argue that intuitive, self-evident beliefs concerning our relationship to the world underlie all of our empirical knowledge, but I will not consider that issue here. See Audi (2010).

13. See footnote 2. Some approaches to risk management include other factors that enter into the evaluation of risk, such as the degree to which an adverse outcome is within one’s control or the uncertainty related to the outcome (Kahneman 2011). Since most discussions of risk in human research ethics focus on the simple formula used here, I will stick to this approach rather than exploring avenues that are beyond the scope of this paper.

14. By “value” I mean an aim or goal that is morally worthwhile, such as happiness, health, autonomy, justice or social welfare. See Rawls (1971) and Nussbaum (2011).

15. Bernabe et al (2012a, 2012b) apply expected utility theory to IRB risk/benefit decision-making. One could argue that expected utility theory is a utilitarian approach to ethical decision-making because it holds that one should choose the action that maximizes overall expected utility, where an expected utility is a product of the probability of an outcome and its utility (e.g. value or worth).

16. The Nazi hypothermia experiments were conducted on non-consenting human subjects (i.e. concentration camp prisoners). The studies exposed human beings to extremely cold temperatures in order to collect data on how the body responds to hypothermia, presumably to develop treatments for this condition. Obviously, lack of consent was a serious moral problem with these experiments (Shamoo and Resnik 2015). But one might hold that such experiments would be morally questionable even if the subjects were consenting volunteers.

17. Workload is a function of how many actions an IRB is expected to review at a particular meeting. Institutions can reduce IRB workload by increasing the number of IRBs. For example, if an institution has only one IRB that typically reviews 10 new protocols, 10 renewals, 6 amendments, and 6 problem reports per month, it could divide this workload in half by creating a new IRB to handle half of these actions.

References

  1. Audi R. The Architecture of Reason. Oxford University Press; New York: 2001. [Google Scholar]
  2. Audi R. The Good in the Right. Princeton University Press; Princeton, NJ: 2004. [Google Scholar]
  3. Audi R. Epistemology: A Contemporary Introduction to the Theory of Knowledge. 3rd Routledge; New York: 2010. [Google Scholar]
  4. Australian National Government National Statement on Ethical Conduct in Human Research. 2015 Updated 2015. Available at: http://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/e72_national_statement_may_2015_150514_a.pdf. Accessed: June 30, 2015.
  5. Ayer AJ. Language, Truth, and Logic. 2nd Dover; New York: 1952. [Google Scholar]
  6. Bernabe RD, van Thiel GJ, Raaijmakers JA, van Delden JJ. The risk-benefit task of research ethics committees: an evaluation of current approaches and the need to incorporate decision studies methods. BMC Medical Ethics. 2012a;13:6. doi: 10.1186/1472-6939-13-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bernabe RD, van Thiel GJ, Raaijmakers JA, van Delden JJ. Decision theory and the evaluation of risks and benefits of clinical trials. Drug Discovery Today. 2012b;17(23-24):1263–1269. doi: 10.1016/j.drudis.2012.07.005. [DOI] [PubMed] [Google Scholar]
  8. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2005 Available at: http://www.pre.ethics.gc.ca/archives/tcps-eptc/docs/TCPS%20October%202005_E.pdf. Accessed: June 29, 2015.
  9. Chuang-Stein C. A new proposal for benefit-less-risk analysis in clinical trials. Controlled Clinical Trials. 1994;15(1):30–43. doi: 10.1016/0197-2456(94)90026-4. [DOI] [PubMed] [Google Scholar]
  10. Council for International Organizations of Medical Sciences. International Ethical Guidelines for Biomedical Research Involving Human Subjects. 2002 Available at: http://www.cioms.ch/publications/layout_guide2002.pdf. Accessed: August 17, 2015. [PubMed]
  11. Cushman F, Young L, Hauser M. The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychological Science. 2006;17(12):1082–1089. doi: 10.1111/j.1467-9280.2006.01834.x. [DOI] [PubMed] [Google Scholar]
  12. Daniels N. Justice and Justification: Reflective Equilibrium in Theory and Practice. Cambridge University Press; New York: 1996. [Google Scholar]
  13. Daniels N, Sabin JE. Setting Limits Fairly: Can We Learn to Share Medical Resources? Oxford University Press; New York: 2002. [Google Scholar]
  14. Descartes R. Meditations on First Philosophy. 3rd. Hackett; Cress DA (transl.). Indianapolis: 1993. [1637] [Google Scholar]
  15. Department of Health and Human Services. Protection of Human Subjects. 45 CFR 46. 2009. [Google Scholar]
  15a. Earman J. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. M.I.T. Press; Cambridge, MA: 1992. [Google Scholar]
  16. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? Journal of the American Medical Association. 2000;283(20):2701–2711. doi: 10.1001/jama.283.20.2701. [DOI] [PubMed] [Google Scholar]
  17. Feinberg M, Willer R, Antonenko O, John OP. Liberating reason from the passions: overriding intuitionist moral judgments through emotion reappraisal. Psychological Science. 2012;23(7):788–795. doi: 10.1177/0956797611434747. [DOI] [PubMed] [Google Scholar]
  18. Fumerton R. Foundationalist Theories of Epistemic Justification. Stanford Encyclopedia of Philosophy. 2010 Available at: http://plato.stanford.edu/entries/justep-foundational/. Accessed: August 23, 2015.
  19. Giere RN, Bickle J, Mauldin R. Understanding Scientific Reasoning. 5th Wadsworth; Belmont, CA: 2005. [Google Scholar]
  20. Goldman A. Epistemology and Cognition. Harvard University Press; Cambridge, MA: 1988. [Google Scholar]
  21. Green LA, Lowery JC, Kowalski CP, Wyszewianski L. Impact of institutional review board practice variation on observational health services research. Health Services Research. 2006;41(1):214–30. doi: 10.1111/j.1475-6773.2005.00458.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Greene J. Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Penguin Press; New York: 2013. [Google Scholar]
  23. Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD. An fMRI investigation of emotional engagement in moral judgment. Science. 2001;293(5537):2105–2108. doi: 10.1126/science.1062872. [DOI] [PubMed] [Google Scholar]
  24. Gutmann A, Thompson D. Democracy and Disagreement. Harvard University Press; Cambridge, MA: 1998. [Google Scholar]
  25. Haidt J. The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review. 2001;108(4):814–834. doi: 10.1037/0033-295x.108.4.814. [DOI] [PubMed] [Google Scholar]
  26. Haidt J. The new synthesis in moral psychology. Science. 2007;316(5827):998–1002. doi: 10.1126/science.1137651. [DOI] [PubMed] [Google Scholar]
  27. Hansson SO. Ethical criteria of risk acceptance. Erkenntnis. 2003;59:291–309. [Google Scholar]
  28. Hare RM. Moral Thinking: Its Levels, Method, and Point. Oxford University Press; Oxford: 1981. [Google Scholar]
  29. Howson C, Urbach P. Scientific Reasoning: The Bayesian Approach. Open Court; Chicago: 1993. [Google Scholar]
  30. Hume D. In: A Treatise of Human Nature. Norton DF, Norton MJ, editors. Oxford University Press; New York: 2000. [1739] [Google Scholar]
  31. Iltis AS. Bioethics as methodological case resolution: specification, specified principlism and casuistry. Journal of Medicine and Philosophy. 2000;25(3):271–284. doi: 10.1076/0360-5310(200006)25:3;1-H;FT271. [DOI] [PubMed] [Google Scholar]
  32. Jonsen AR, Toulmin S. The Abuse of Casuistry: A History of Moral Reasoning. University of California Press; Berkeley, CA: 1990. [Google Scholar]
  33. Kahneman D. Thinking, Fast, and Slow. Farrar, Straus, and Giroux; New York: 2011. [Google Scholar]
  34. Kahneman D, Slovic P, Tversky A, editors. Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press; Cambridge: 1983. [DOI] [PubMed] [Google Scholar]
  35. Kant I. In: Groundwork for the Metaphysics of Morals. Paton HD, editor. Harper and Rowe; New York: 1964. [1785] [Google Scholar]
  36. Kimmelman J. Valuing risk: the ethical review of clinical trial safety. Kennedy Institute of Ethics Journal. 2004;14(3):369–393. doi: 10.1353/ken.2004.0041. [DOI] [PubMed] [Google Scholar]
  37. Kimmelman J, Henderson V. Assessing risk/benefit for trials using preclinical evidence: a proposal. Journal of Medical Ethics. 2016;42(1):50–53. doi: 10.1136/medethics-2015-102882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Klitzman RL. The Ethics Police? The Struggle to Make Human Research Safe. Oxford University Press; New York: 2015. [Google Scholar]
  39. Kohlberg L. The Philosophy of Moral Development. One. Harper and Rowe; New York: 1981. [Google Scholar]
  40. Korsgaard C. The Sources of Normativity. Cambridge University Press; Cambridge: 1996. [Google Scholar]
  41. Kuczewski M. Casuistry and principlism: the convergence of method in biomedical ethics. Theoretical Medicine and Bioethics. 1998;19(6):509–524. doi: 10.1023/a:1009904125910. [DOI] [PubMed] [Google Scholar]
  42. Levine RJ. Ethics and the regulation of clinical research. 2nd Yale University Press; New Haven, CT: 1988. [Google Scholar]
  43. London AJ. Reasonable risks in clinical research: a critique and proposal for the integrative approach. Statistics in Medicine. 2006;25(17):2869–2885. doi: 10.1002/sim.2634. [DOI] [PubMed] [Google Scholar]
  44. Martin DK, Meslin EM, Kohut N, Singer PA. The incommensurability of research risks and benefits: practical help for research ethics committees. IRB. 1995;17(2):8–10. [PubMed] [Google Scholar]
  45. Meslin EM. Protecting human subjects from harm through improved risk judgments. IRB. 1990;12(1):7–10. [PubMed] [Google Scholar]
  46. Mill JS. Utilitarianism. Hackett Publishing Company; Indianapolis: 1979. [1863] [Google Scholar]
  47. Miller AD, Perry R. The reasonable person. New York University Law Review. 2012;97(2):323–392. [Google Scholar]
  48. Miller FG, Joffe S. Limits to research risks. Journal of Medical Ethics. 2009;35(7):445–449. doi: 10.1136/jme.2008.026062. [DOI] [PubMed] [Google Scholar]
  49. Moore GE. Principia Ethica. Dover; New York: 2004. [1903] [Google Scholar]
  50. Munthe C. The Price of Precaution and the Ethics of Risk. Springer; Dordrecht: 2011. [Google Scholar]
  51. Murphy P. Coherentism in epistemology. Internet Encyclopedia of Philosophy. 2015 Available at: http://www.iep.utm.edu/coherent/#H2. Accessed: September 6, 2015.
  52. Musschenga AW, Van Luijn HE, Keus RB, Aaronson NK. Are risks and benefits of oncological research protocols both incommensurable and incompensable? Accountability in Research. 2007;14(3):179–196. doi: 10.1080/08989620701455217. [DOI] [PubMed] [Google Scholar]
  53. National Bioethics Advisory Commission . Volume I: Report and Recommendations of the National Bioethics Advisory Commission. National Bioethics Advisory Commission; Bethesda, MD: 2001. Ethical and Policy Issues in Research Involving Human Participants. [Google Scholar]
  54. National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research . The Belmont Report. Department of Health, Education, and Welfare; Washington, DC: 1979. Available at: http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html. Accessed: June 17, 2015. [Google Scholar]
  55. Nozick R. Anarchy, State, and Utopia. Basic Books; New York: 1974. [Google Scholar]
  56. Nussbaum M. Creating Capabilities: The Human Development Approach. Harvard University Press; Cambridge, MA: 2011. [Google Scholar]
  57. Pizarro DA, Bloom P. The intelligence of the moral intuitions: comment on Haidt (2001). Psychological Review. 2003;110(1):193–196. doi: 10.1037/0033-295x.110.1.193. [DOI] [PubMed] [Google Scholar]
  58. Plato. The Republic. Grube GMA (transl.), Reeve CDC, editor. Hackett; Indianapolis: 1992. [380 BCE] [Google Scholar]
  59. Pojman LP. Ethics: Discovering Right and Wrong. 5th Wadsworth; Belmont, CA: 2005. [Google Scholar]
  60. Poston T. Foundationalism. Internet Encyclopedia of Philosophy. 2015 Available at: http://www.iep.utm.edu/found-ep/. Accessed: August 30, 2015.
  61. Pritchard IA. How do IRB members make decisions? A review and research agenda. Journal of Empirical Research on Human Research Ethics. 2011;6(2):31–46. doi: 10.1525/jer.2011.6.2.31. [DOI] [PubMed] [Google Scholar]
  62. Pust J. Intuition. Stanford Encyclopedia of Philosophy. 2012 Available at: http://plato.stanford.edu/entries/intuition/. Accessed: August 23, 2015.
  63. Quine WV. Theories and Things. Harvard University Press; Cambridge, MA: 1986. [Google Scholar]
  64. Rachels J. The Elements of Moral Philosophy. 2nd McGraw-Hill; New York: 1993. [Google Scholar]
  65. Rajczi A. Making risk-benefit assessments of medical research protocols. Journal of Law, Medicine & Ethics. 2004;32(2):338–348. doi: 10.1111/j.1748-720x.2004.tb00480.x. [DOI] [PubMed] [Google Scholar]
  66. Rawls J. A Theory of Justice. Harvard University Press; Cambridge, MA: 1971. [Google Scholar]
  67. Rawls J. Lectures on the History of Moral Philosophy. Harvard University Press; Cambridge, MA: 2000. [Google Scholar]
  68. Resnik MD. Choices: An Introduction to Decision Theory. University of Minnesota Press; Minneapolis, MN: 1987. [Google Scholar]
  69. Resnik DB. Limits on risks for healthy volunteers in biomedical research. Theoretical Medicine and Bioethics. 2012;33(2):137–149. doi: 10.1007/s11017-011-9201-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Resnik DB, Kennedy CE. Balancing scientific and community interests in community-based participatory research. Accountability in Research. 2010;17(4):198–210. doi: 10.1080/08989621.2010.493095. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Resnik DB, Rehm M, Minard RB. The undertreatment of pain: scientific, clinical, cultural, and philosophical factors. Medicine, Health Care, and Philosophy. 2001;4(3):277–288. doi: 10.1023/a:1012057403159. [DOI] [PubMed] [Google Scholar]
  72. Resnik DB, Sharp RR. Protecting third parties in human subjects research. IRB. 2006;28(4):1–7. [PMC free article] [PubMed] [Google Scholar]
  73. Richardson HS. Specifying, balancing, and interpreting bioethical principles. Journal of Medicine and Philosophy. 2000;25(3):285–307. doi: 10.1076/0360-5310(200006)25:3;1-H;FT285. [DOI] [PubMed] [Google Scholar]
  74. Rid A, Emanuel EJ, Wendler D. Evaluating the risks of clinical research. Journal of the American Medical Association. 2010;304(13):1472–1479. doi: 10.1001/jama.2010.1414. [DOI] [PubMed] [Google Scholar]
  75. Rid A, Wendler D. A framework for risk-benefit evaluations in biomedical research. Kennedy Institute of Ethics Journal. 2011;21(2):141–179. doi: 10.1353/ken.2011.0007. [DOI] [PubMed] [Google Scholar]
  76. Rorty R. Intuition. In: Edwards P, editor. Encyclopedia of Philosophy. Vol. 3. MacMillan; New York: 1967. pp. 204–212. [Google Scholar]
  77. Ross WD. The Right and the Good. Oxford University Press; Oxford: 1930. [Google Scholar]
  78. Rossi J, Nelson RM. Is there an objective way to compare research risks? Journal of Medical Ethics. 2012;38(7):423–427. doi: 10.1136/medethics-2011-100194. [DOI] [PubMed] [Google Scholar]
  79. Russell B. A priori justification and knowledge. Stanford Encyclopedia of Philosophy. 2014 Available at: http://plato.stanford.edu/entries/apriori/. Accessed: August 23, 2015.
  80. Sayre-McCord G. Coherentist epistemology and moral theory. In: Sinnott-Armstrong W, Timmons M, editors. Moral Knowledge? Oxford University Press; New York: 1996. pp. 137–189. [Google Scholar]
  81. Sellars W. Empiricism and the philosophy of mind. In: Feigl H, Scriven M, editors. Minnesota Studies in the Philosophy of Science. I. University of Minnesota Press; Minneapolis, MN: 1956. pp. 253–329. [Google Scholar]
  82. Schneider C. The Censor’s Hand: The Misregulation of Human-Subject Research. M.I.T. Press; Cambridge, MA: 2015. [Google Scholar]
  83. Shah S, Whittle A, Wilfond B, Gensler G, Wendler D. How do institutional review boards apply the federal risk and benefit standards for pediatric research? Journal of the American Medical Association. 2004;291(4):476–482. doi: 10.1001/jama.291.4.476. [DOI] [PubMed] [Google Scholar]
  84. Shamoo AE, Ayyub BM. Risk/benefit estimates in clinical trials. Drug Information Journal. 2011;45:669–685. [Google Scholar]
  85. Shamoo AE, Resnik DB. Responsible Conduct of Research. 3rd Oxford University Press; New York: 2015. [Google Scholar]
  86. Shrader-Frechette KS. Risk and Rationality: Philosophical Foundations for Populist Reforms. University of California Press; Berkeley, CA: 1991. [Google Scholar]
  87. Silberman G, Kahn KL. Burdens on research imposed by institutional review boards: the state of the evidence and its implications for regulatory reform. Milbank Quarterly. 2011;89(4):599–627. doi: 10.1111/j.1468-0009.2011.00644.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Simon HA. Models of Man. John Wiley; New York: 1957. [Google Scholar]
  89. Simon HA. Reason in Human Affairs. Stanford University Press; Stanford, CA: 1990. [Google Scholar]
  90. Stark L. Behind Closed Doors: IRBs and the Making of Ethical Research. University of Chicago Press; Chicago: 2012. [Google Scholar]
  91. Stratton-Lake P. Intuitionism in ethics. Stanford Encyclopedia of Philosophy. 2014 Available at: http://plato.stanford.edu/entries/intuitionism-ethics/. Accessed: July 22, 2015.
  92. Strong C. Specified principlism: what is it, and does it really resolve cases better than casuistry? Journal of Medicine and Philosophy. 2000;25(3):323–341. doi: 10.1076/0360-5310(200006)25:3;1-H;FT323. [DOI] [PubMed] [Google Scholar]
  93. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–1131. doi: 10.1126/science.185.4157.1124. [DOI] [PubMed] [Google Scholar]
  94. United Kingdom, Department of Health Governance Arrangements for Research Ethics Committees: A Harmonized Edition. 2011 Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/213753/dh_133993.pdf. Accessed: June 30, 2015.
  95. Van Luijn HE, Musschenga AW, Keus RB, Robinson WM, Aaronson NK. Assessment of the risk/benefit ratio of phase II cancer clinical trials by Institutional Review Board (IRB) members. Annals of Oncology. 2002;13(8):1307–1313. doi: 10.1093/annonc/mdf209. [DOI] [PubMed] [Google Scholar]
  96. Van Riel P. Scientific reduction. Stanford Encyclopedia of Philosophy. 2014 Available at: http://plato.stanford.edu/entries/scientific-reduction/. Accessed: September 21, 2015.
  97. Wendler D, Belsky L, Thompson KM, Emanuel EJ. Quantifying the federal minimal risk standard: implications for pediatric research without a prospect of direct benefit. Journal of the American Medical Association. 2005;294(7):826–832. doi: 10.1001/jama.294.7.826. [DOI] [PubMed] [Google Scholar]
  98. Weijer C. The ethical analysis of risk. Journal of Law, Medicine & Ethics. 2000;28(4):344–361. doi: 10.1111/j.1748-720x.2000.tb00686.x. [DOI] [PubMed] [Google Scholar]
  99. Weijer C, Miller PB. When are research risks reasonable in relation to anticipated benefits? Nature Medicine. 2004;10(6):570–573. doi: 10.1038/nm0604-570. [DOI] [PubMed] [Google Scholar]
  100. Weiss PA. Introductory Statistics. 9th Pearson; Upper Saddle River, NJ: 2011. [Google Scholar]
  101. Wendler D, Miller FG. Assessing research risks systematically: the net risks test. Journal of Medical Ethics. 2007;33(8):481–486. doi: 10.1136/jme.2005.014043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Wertheimer A. Is payment a benefit? Bioethics. 2013;27(2):105–116. doi: 10.1111/j.1467-8519.2011.01892.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. Will GJ, Klapwijk ET. Neural systems involved in moral judgment and moral action. The Journal of Neuroscience. 2014;34(32):10459–10461. doi: 10.1523/JNEUROSCI.2005-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. World Medical Association Declaration of Helsinki. 2013 Available at: http://www.wma.net/en/30publications/10policies/b3/index.html. Accessed: June 27, 2015.
  105. Zimbardo P. The Lucifer Effect: Understanding How Good People Turn Evil. Random House; New York: 2008. [Google Scholar]
