Author manuscript; available in PMC: 2015 Jun 26.
Published in final edited form as: Kennedy Inst Ethics J. 2010 Mar;20(1):75–98. doi: 10.1353/ken.0.0307

Extending Clinical Equipoise to Phase 1 Trials Involving Patients: Unresolved Problems

James A Anderson, Jonathan Kimmelman
PMCID: PMC4482670  CAMSID: CAMS4750  PMID: 20506695

Abstract

Notwithstanding requirements for scientific/social value and risk/benefit proportionality in major research ethics policies, there are no widely accepted standards for these judgments in Phase 1 trials. This paper examines whether the principle of clinical equipoise can be used as a standard for assessing the ratio of risk to direct-benefit presented by drugs administered in one category of Phase 1 study—first-in-human trials involving patients. On the basis of the supporting evidence for, and architecture of, Phase 1 studies, the article offers two provisional conclusions: (1) the risks of drug administration in such trials cannot generally be justified on therapeutic grounds, but only by appeal to the social value of the research; and (2) a framework for adjudicating the ratio of risk to social value must be developed.


Notwithstanding requirements for scientific/social value and risk/benefit proportionality in major research ethics policies, there are no widely accepted standards for judgments concerning risk, benefit, and value in Phase 1 trials. This gap in the research ethics literature is troubling given the critical role played by “Phase 1” studies in the translation of basic research into clinical application. The need to address this problem is underscored by a recent proliferation of policy initiatives designed to spur translational clinical research (Kimmelman 2007; 2009; AAMC 2006; FDA 2004; 2006; Zerhouni 2003).

Faced with this situation, an obvious starting point for assessing risk, benefit, and value presents itself: perhaps a principle applied elsewhere in research ethics—e.g., the principle of clinical equipoise—might be extended to Phase 1 trials. Briefly, the principle of clinical equipoise establishes two conditions to be met at the start of a trial: (1) “there must be honest, professional disagreement among expert clinicians about the preferred treatment” (hereafter called the “first requirement”), and (2) “the trial must be designed in such a way as to make it reasonable to expect that, if it is successfully conducted, … the results … should be convincing enough to resolve the dispute among clinicians” (hereafter called the “second requirement”) (Freedman 1987, p. 144).

The proposed extension of these requirements to Phase 1 trials involving patients is attractive for a number of reasons. First, there is the inherent appeal of the principle itself. Many regard clinical equipoise as a cornerstone in the moral foundation of late phase trials involving human subjects (Miller and Weijer 2007; Evans and London 2006; Djulbegovic, Cantor, and Clarke 2003; Weijer, Shapiro, and Glass 2000; Ashcroft 1999). If the principle can be extended, the normative foundation of early phase research is thereby secured. Second, researchers and reviewers are already familiar with this concept. Why start from scratch if an “off-the-shelf” concept will do? Third, clarifying whether clinical equipoise can be extended to early-phase trials will facilitate the proper application of component analysis (Freedman et al 1992; Weijer and Miller 2004)—one of the most influential frameworks for risk analysis in clinical research—to Phase 1 trials involving patients, thereby clarifying the ethical analysis of risk in this context. Finally, some features of both the theory and practice of early phase research appear to be best explained by a commitment to something like clinical equipoise.1 Many clinical researchers and ethicists claim that Phase 1 trial enrollment can be a legitimate therapeutic option for certain patients; accordingly, these commentators would tend to see the risks of such well-designed trials as having a justification in direct-benefits (Miller and Joffe 2008; Markman 2006; Agrawal and Emanuel 2003; Eisenhauer et al. 2000; ASCO 1997). Enrollment in studies involving potent or unpredictable interventions also typically is restricted to patients who lack alternative treatment options—the “treatment refractory.” Both of these features suggest a prior commitment to the core of clinical equipoise: ensuring that patients are not disadvantaged, relative to the standard of care available outside the study, by their participation.

In the following discussion, we examine the proposed extension of the principle of clinical equipoise to Phase 1 trials by identifying and assessing a variety of challenges confronting the use of the principle’s first requirement as a standard for assessing the ratio of risks to direct-benefits when new drugs are administered to patient-volunteers in Phase 1 trials. We conclude that the use of the first requirement as a normative standard faces major difficulties for at least one category of Phase 1 studies—namely, first-in-human trials involving patient-volunteers. We hope that our investigation will motivate further clarification of the problems involved in the ethical appraisal of Phase 1 studies and the articulation of an alternative framework that does the same work shouldered by clinical equipoise in later phase trials.

FOCUSING THE QUESTION

According to the standard interpretation, the principle of clinical equipoise is supposed to resolve moral problems associated with randomized controlled trials (RCTs). Presumably, the feasibility and coherence of extending clinical equipoise to Phase 1 studies will hinge in part on what makes Phase 1 studies distinct from RCTs. Yet it is far from clear that “Phase 1” picks out a sufficiently homogenous class of trials to make such a comparison meaningful. One commentator describes the term “Phase 1” as so diffuse as to be “nearly useless,” because it obscures important differences of purpose and design among a variety of trial designs (Piantadosi 2005, p. 224). Others also have drawn attention to the heterogeneity encompassed by this term (see, e.g., Joffe and Miller 2006; Agrawal and Emanuel 2003).

Given the varied purpose and design of the studies contained in this category, it is clear that we cannot extend the principle of equipoise generically to all Phase 1 studies. For example, clinical equipoise is not the appropriate framework for evaluating the risks of administering drugs in Phase 1 trials involving healthy volunteers, because healthy volunteers are not in a position to benefit directly from these drugs. For the purposes of this paper, then, we center our analysis on first-in-human dose escalation or dose finding trials in patient-volunteers (hereafter, “FIH trials”). We direct our analysis to this category for a number of reasons. First, the appropriateness of the proposed extension is especially unclear for this category of study. On the one hand, patients who enter these trials typically have unmet medical needs; it is an open question whether clinical equipoise can be extended to these trials because participation could present these patients with a risk/direct-benefit ratio consistent with the standard of competent care available outside of the study. On the other hand, there are also good reasons for skepticism because the risks and benefits of agents are most uncertain at this stage in the development process. Second, FIH trials involving patients represent a “stereotyped” category of Phase 1 studies, although it should be noted that they account for only one quarter of all National Cancer Institute-sponsored Phase 1 studies (Horstmann et al 2005). Third, we think that the many translational research initiatives previously catalogued are directed in large part toward promoting this type of trial, since FIH studies represent a critical link between basic and clinical science. Finally, further investigation of the extension of clinical equipoise to evaluating the drug administration component of FIH trials would help establish how far back into the drug development process the principle can or should provide moral guidance.

THE QUESTION

A salient feature of clinical equipoise is that it simultaneously provides both a standard of acceptable risk/direct-benefit to volunteers and a standard of scientific and social value.2 Again, in this paper we restrict our analysis to the former standard. Accordingly, our task is to examine whether the first requirement of the principle of clinical equipoise is the appropriate normative standard for assessing the ratio of risk to direct-benefit for administration of drugs in FIH trials.

There is a range of challenges confronting an affirmative answer to this question. Some commentators would object to the question itself, arguing that clinical equipoise fails to provide the appropriate normative standard for clinical research of any kind, let alone FIH trials (see, e.g., Veatch 2007; Miller and Brody 2003). Clearly we disagree. To quote one set of commentators, we believe that “[d]espite these objections, [the principle of] clinical equipoise remains the most widely accepted ethical justification for randomized controlled trials” (Daugherty et al. 2008, p. 1373). It is important to note, furthermore, that many more commentators defend the principle where trials involve interventions for patients with life-threatening conditions, precisely the kinds of patients recruited to many FIH trials (Meropol 2007; Committee on Strategies 2001). For the purposes of this paper, then, we will presuppose that clinical equipoise provides the appropriate standard of risk/direct-benefit for (at least) RCTs. As discussed later, however, our analysis is relevant to those who have advocated alternative frameworks for the ethical analysis of risk and direct-benefit in clinical research.

In the rest of this paper, we consider three problems confronting the extension of the first requirement of clinical equipoise to FIH trials that stem from the characteristics of FIH studies themselves. (1) FIH studies do not involve randomization; (2) preclinical evidence is too weak to justify claims of therapeutic warrant;3 and (3) the balance of risks and direct-benefits posed by FIH trial participation is unfavorable in comparison with the standard of care available outside the study.4

Objection 1: FIH Trials are Single-Arm Studies

The principle of clinical equipoise was first developed to address the moral tensions surrounding the random allocation of treatments, and many subsequent defenders of the principle view it as directed specifically toward resolving the types of uncertainties that arise in the context of RCTs (Djulbegovic 2007). The problem is that FIH studies typically do not involve randomization. FIH studies are rarely designed to compare an experimental treatment against standard therapy. With this in mind, it is unclear that it even makes conceptual sense to extend the principle of clinical equipoise to the ethical evaluation of FIH studies: the existence of (the state of) clinical equipoise—i.e., “honest, professional disagreement among expert clinicians about the preferred treatment”—requires at least two arms.

We find this objection unpersuasive. Although clinical equipoise originally was proposed as a solution to the RCT dilemma,5 this historical fact in and of itself does not show that the principle cannot be generalized. The tension between research and practice is, perhaps, most clearly exemplified in RCTs. But this tension pervades clinical research of all kinds. The principle of clinical equipoise is widely supposed to provide a satisfying resolution to this tension because it requires that the possible benefits and risks of clinical studies be judged by comparison with the standard of competent care available outside the study.6 When the study under review is an RCT, the relevant comparison is explicitly contained within the protocol itself (or should be). When the study under review does not involve a control arm, the institutional review board (IRB) must look outside of the protocol to the clinical context, judging the acceptability of the protocol on the basis of existing evidence and, perhaps, the testimony of relevant experts. Various proponents of clinical equipoise have made similar arguments (see, e.g., Freedman, Fuks, and Weijer 1992; London 2007). Their conclusions are the same as ours: the scope of the principle of clinical equipoise is not restricted to RCTs.

Objection 2: The Evidentiary Basis for Therapeutic Warrant

According to one of the most influential frameworks for the ethical analysis of benefits and harms in clinical research—component analysis—protocol reviewers must begin by distinguishing between therapeutic and nontherapeutic procedures (Freedman, Fuks, and Weijer 1992; Weijer and Miller 2004). Therapeutic procedures are those offered with therapeutic warrant, whereas nontherapeutic procedures are offered purely to answer the scientific question under study. This distinction is crucial for present purposes because, according to component analysis, these types of procedures are subject to independent moral standards, and only therapeutic procedures are subject to the requirements of clinical equipoise (Freedman, Fuks, and Weijer 1992; Weijer and Miller 2004).

According to Paul Miller and Charles Weijer (2004, p. 570), a procedure is therapeutically warranted if and only if it is “administered on the basis of evidence sufficient to justify the belief that [it] may benefit research subjects.” Although Miller and Weijer do not explicitly mention risk in this definition, we suspect that they are using “benefit” to refer to a “favorable balance of risks and benefits.” In order to avoid confusion, however, we will be explicit: for present purposes a procedure is therapeutically warranted if and only if it is carried out on the basis of evidence sufficient to justify the belief that it may present research subjects with a favorable balance of risk and direct-benefit. The problem for the proposed extension of clinical equipoise is that the drugs administered in FIH trials are not therapeutically warranted because they are administered on the basis of evidence that is insufficient to justify such a belief.

Imagine that you are sitting on an IRB reviewing a FIH trial protocol. According to component analysis, you must begin by distinguishing the therapeutic study procedures from the nontherapeutic study procedures. This task is carried out by determining whether the drug to be studied in this protocol satisfies the standard of therapeutic warrant: is the evidence sufficient to justify the claim (implicit or explicit in the protocol) that the drug presents research subjects with a favorable balance of risks and direct-benefits? You examine the evidence, and find yourself troubled by the fact that the only evidence available to support this claim is preclinical.

Of course, this is not necessarily a problem. The use of animal data in medical management decisions is not inconsistent with standards of competent medical practice. The canons of evidence-based medicine, for example, articulate a hierarchy of evidence that includes “physiologic studies” in the penultimate tier (Guyatt et al. 2000). According to evidence-based medicine, the use of physiologic studies becomes suspect only when higher quality forms of evidence are available. FIH studies—assuming they enroll patients who lack validated, alternative disease management options—would appear to fulfill these conditions.

But these points also raise questions about how evidence standards for launching FIH trials should be calibrated when superior forms of evidence are absent and patients lack alternative disease management options. The logic of the previous paragraph would seem to suggest that prevailing evidential standards in FIH trials are unnecessarily restrictive. For example, preclinical studies come in two varieties: in vitro studies (performed in cells or cell extracts), and in vivo studies (performed in tumor-bearing, live animals). Conventionally, in vivo data are required before FIH trials can be initiated, although there is a “minority view” that maintains in vivo data are not necessary (Eisenhauer, Twelves, and Buyse 2006, p. 18). If evidence standards diminish when higher forms of evidence are not available and patients lack alternatives, there appears to be no basis for insisting on in vivo data if in vitro data are promising. Accordingly, an important question facing those who would justify the risks associated with drug administration in FIH trials on therapeutic grounds is why the evidentiary standard should be set at the level of in vivo preclinical studies.

On the other hand, duties of nonmaleficence are primary in medicine, and this raises problems for the view that preclinical data provide a sufficient evidentiary basis for therapeutic action. From the duty of nonmaleficence, it follows that, as an intervention’s risk increases, so too should the evidential support for applying the intervention (Kimmelman et al. 2009). However, the evidence base supporting FIH trials tends to be weak. First, experience demonstrates that encouraging outcomes in preclinical studies only rarely translate into favorable clinical outcomes in FIH studies. In cancer—as with many other diseases—preclinical models often show profound pathological and physiological differences from human disease. Second, there is a growing literature showing a prevalence of flawed methodologies in preclinical studies. Practices aimed at reducing bias that are routine in clinical research—random allocation, a priori statement of hypothesis, blinded treatment allocation and outcome assessment—are applied only sporadically in preclinical research (Philip et al. 2009; Perel et al. 2007; MacLeod et al. 2004; Bebarta, Luyten, and Heard 2003). There are additional concerns about publication bias (Benatar 2007; MacLeod et al. 2005; Gladstone, Black, and Hakim 2002) and about the reproducibility of preclinical studies (Lowenstein and Castro 2009). Third, such worries about evidence are compounded when one considers that preclinical studies often provide limited information about the conditions under which a drug’s administration can elicit a therapeutic response—e.g., the appropriate schedule and route of drug delivery, whether the drug should be combined with another, and whether the drug is only active in certain disease subtypes or at a specific point in a disease process.7 Last, there is a crucial asymmetry in the quality of evidence used to assess the risks and benefits of participation. Risks in FIH cancer studies involve “hard” endpoints such as death and grade 3 or 4 toxicities, and animal models have proven reasonably reliable in predicting their occurrence (Newell et al. 2004; 1999; Clark et al. 1999). At any rate, toxicities of some sort are an almost certain outcome in trials aimed at testing safety and dose. For reasons noted above, however, preclinical studies are much less reliable for predicting therapeutic benefits. Moreover, what limited information exists about concordance between therapeutic outcomes in preclinical studies and Phase 1 trials is based on surrogate endpoints, such as tumor response, which may not correlate with clinically meaningful benefit (King 2000; Miller and Joffe 2008; Fleming and DeMets 1996).8 Assessments of risk and benefit, thus, are made on qualitatively different evidential bases, the former solid, and the latter less so. Correlatively, participants can be relatively certain about the risks they will face, while the prospect of clinical benefit is far from clear.

These considerations lead to a response to those who would defend therapeutic justification by referring to sliding scales of evidence for patients lacking treatment alternatives: evidence-based medicine only says that clinical decisions can be based on lower tier evidence when higher tier evidence is not available, not that they should be made on this basis. Given that the quality of evidence concerning benefit is typically lower than that concerning risk, and that the risks of FIH trials can be high, it follows that the administration of drugs in FIH trials will rarely be justified on therapeutic grounds even when higher forms of evidence are not available.

Of course, one can envision exceptions to this general claim where there is a record of strong concordance between a particular preclinical system and human patients, where preclinical studies have been well designed and executed, and where a drug is not expected to cause major toxicity. In general, however, it seems likely that it is only later in the development process—after Phase 1 and perhaps Phase 2 studies are completed—that reliable evidence will become available about risk, benefit, and the conditions needed to elicit therapeutic properties. At this point, evidence can be sufficient to justify claims that risks are offset by therapeutic benefits. And it is at this point, furthermore, that the social conditions associated with clinical equipoise—namely “current or imminent disagreement in the clinical community”—become a real possibility (Djulbegovic 2007).

In sum, this objection raises important reasons for skepticism about extending the first requirement of clinical equipoise to FIH trials, but it does not rule it out. Evidentiary thresholds for justified medical action remain to be worked out in translational research settings (as they do in clinical care). Perhaps the best we can do here is to acknowledge that there may be exceptional circumstances in which preclinical evidence could, in principle, justify a claim that risks are justified by the prospect of therapeutic benefit.

Objection 3: Balance of Risks and Therapeutic Benefits Is Unfavorable Relative to the Standard of Competent Care

Historically, the principle of clinical equipoise was intended primarily to ensure that research subjects who are also patients are not disadvantaged therapeutically by participation in RCTs. This is why the first requirement of clinical equipoise demands that a trial be initiated only if “there [is] honest, professional disagreement among expert clinicians about the preferred treatment”: as long as a significant minority of expert clinicians believes on the basis of good evidence that the treatment(s) under study is/are preferred for the population in question, patients are not disadvantaged by participation because the treatment(s) is/are consistent with competent care.9

Therapeutic warrant is a necessary but insufficient condition for clinical equipoise. Therapeutic warrant requires only that an intervention be administered on the basis of evidence sufficient to justify a belief that it may present research subjects with a favorable balance of risks to direct-benefits simpliciter. Clinical equipoise, by contrast, requires that an intervention be administered on the basis of evidence sufficient to convince at least a significant minority of expert clinicians that it may present research subjects with a favorable balance of risks to direct-benefits relative to the standard of competent care available outside the study. But only a therapeutically warranted procedure is a bona fide therapeutic procedure, and only a therapeutic procedure is amenable to evaluation via the principle of clinical equipoise. The previous objection raised concerns about the evidential justification for claims of therapeutic warrant in this context. The current (and final) objection is that, even if we set aside the epistemic concerns discussed in the previous objection, there are reasons to believe that typical FIH studies cannot satisfy the first requirement of clinical equipoise.

Some of these reasons are more convincing than others. One oft-heard argument is that FIH trials are designed primarily to evaluate safety, not efficacy. The idea is that this feature of the design of FIH trials entails that participants will not benefit from participation. However, the fact that a trial is not designed with the primary goal of evaluating efficacy does not mean, in and of itself, that participants will not benefit therapeutically by participation. In any case, FIH trials generally are designed with the secondary goal of evaluating promise of efficacy (Piantadosi 2005, p. 224; Eisenhauer, Twelves, and Buyse 2006, Chapter 3), and various trial design reforms are intended to enhance the therapeutic benefits of participating in Phase 1 studies (Eisenhauer, Twelves, and Buyse 2006, Chapter 6)—albeit with limited success (Koyfman et al. 2007).

Another argument against the applicability of clinical equipoise to FIH studies is that the risk-benefit balance of such studies is too unfavorable to count as therapeutic (relative to the standard of competent care available outside of the study). One recent estimate put the probability of tumor response for FIH cancer studies at 5 percent and the rate of toxic death at 0.25 percent (Horstmann et al. 2005; Roberts et al. 2004). Some commentators view this risk-benefit ratio as too unfavorable to ground a claim that the risks are therapeutically justified (Annas 1996). But, as we already have noted, many oncologists, translational researchers, and ethicists disagree. They view this therapeutic index as favorable enough to cohere with care standards in oncology, and thus they defend the claim that well designed Phase 1 cancer studies are a reasonable therapeutic option for treatment refractory patients (Miller and Joffe 2008; Markman 2006; Agrawal and Emanuel 2003; Eisenhauer et al. 2000; ASCO 1997).

We cannot resolve this difference of opinion here, but we can point to some principled reasons for concern about extending the principle of clinical equipoise to FIH trials. Much of the debate about risk and benefit in Phase 1 trials has centered on whether they have a favorable risk-benefit balance in the aggregate (Agrawal and Emanuel 2003), or on whether it is ethical for physicians to offer individual patients enrollment as a treatment option (Miller and Joffe 2008). The problem is, even if both of these questions are answered in the affirmative, an FIH trial may nonetheless fail to satisfy the conditions of clinical equipoise because the principle is concerned with a different question: when an IRB assesses the risks in an FIH trial, will any patient receive interventions in a way that falls below the standard of competent care?10 Answering this question requires attending to the aims and architecture of FIH trials.

By definition, FIH trials are aimed primarily at determining the optimal dose and conditions for subsequent trials (Piantadosi 2005, p. 226). This goal dictates a study architecture that, in the words of one authority, can “provide information about the shape, steepness, and location of the dose response function” (Piantadosi 2005, p. 227). In order to discover which dosage is optimal for testing in subsequent studies, different doses must be tried. Typically, Phase 1 researchers attempt to maximize the therapeutic benefit-risk balance by beginning trials at a relatively safe dose. If the initial dose is not toxic, dosage is escalated in new patient cohorts until major safety concerns are encountered (Eisenhauer, Twelves, and Buyse 2006, Chapter 3).11
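To make this cohort-based architecture concrete, the following is a minimal sketch, in Python, of one widely used escalation rule (the conventional "3+3" rule). It is offered purely as an illustration of the design logic described above; the dose ladder and the observe_toxicities function are hypothetical placeholders, not features of any particular protocol.

```python
# A minimal sketch of the conventional "3+3" cohort-escalation rule.
# The dose ladder and observe_toxicities() are hypothetical placeholders:
# in a real trial, dose-limiting toxicities (DLTs) are observed, not simulated.
import random

def observe_toxicities(dose, n=3):
    """Placeholder: number of dose-limiting toxicities among n patients at `dose`."""
    return sum(random.random() < min(0.005 * dose, 0.6) for _ in range(n))

def run_three_plus_three(dose_ladder):
    """Escalate through cohorts of 3 until excessive toxicity is observed."""
    for i, dose in enumerate(dose_ladder):
        dlt = observe_toxicities(dose)           # first cohort of 3 at this dose
        if dlt == 1:
            dlt += observe_toxicities(dose)      # expand the cohort by 3 more patients
        if dlt >= 2:                             # 2 or more DLTs: stop escalating
            return dose_ladder[i - 1] if i > 0 else None  # previous dose taken as MTD
    return dose_ladder[-1]                       # ladder exhausted without excess toxicity

print("estimated MTD:", run_three_plus_three([10, 20, 40, 80, 160]))
```

Even in this simplified form, the escalation logic makes the structural point visible: the rule cannot locate a maximum tolerated dose without administering doses below it and, ordinarily, at least one cohort of doses at or above the threshold of excessive toxicity.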

It may be ethical for physicians to offer individual patients enrollment as a treatment option—pending resolution of other, previously identified, concerns—precisely because it is not known in advance which dose level will prove subtherapeutic, optimal, or toxic. But this does not mean that the trial is consistent with the conditions of clinical equipoise because it is known in advance that some patient-participants will receive doses that are subtherapeutic or toxic; only one of the doses tried will be optimal. Defining the boundary for optimal dosage requires a study design that exposes some patient-volunteers to doses that have an unfavorable risk/direct-benefit ratio in comparison with the standard of competent care outside the study—i.e., no treatment.12 And this is true, furthermore, even if the risk-benefit balance presented by study participation is favorable in the aggregate.

At this point, we anticipate a number of objections. First, it might be claimed that the relevant standard of competent care outside the study is not “no treatment.” Indeed, some commentators have argued—in the context of cancer Phase 1 trials—that the risks and benefits of study participation should be compared against the risks and benefits posed by nonvalidated anticancer agents that treatment refractory patients might receive outside a study (Miller and Joffe 2008). The risk-benefit balance associated with subtherapeutic and supertherapeutic dosing could, in many circumstances, be comparable with that posed by these agents because nonstandard interventions can carry considerable risk and burden. The problem with this suggestion, however, is that such “treatments” are by definition nonstandard, and it is unclear how nonstandard practices could provide a stable and meaningful benchmark for comparison.13 Nor is it clear whether any risks would be excluded under such a nonstandard standard. Instead, because there is no established effective therapy for treatment refractory patients, the relevant standard for evaluating risks and benefits in FIH trials is “no treatment.”

Second, defenders of extending the principle of clinical equipoise to the drug administration component of FIH trials might acknowledge the necessity of “under-” and “over-” dosing at least some patients. However, they might claim that even suboptimal dosing can have a risk-benefit balance that is comparable to “no treatment.” As we will discuss, we agree that there may be some circumstances in which this objection holds. However, we speculate that these will be exceptional for FIH trials. Because subtherapeutic levels of drugs often cause at least some side effects, their administration involves burden. Patients in lower dose cohorts, thus, likely will experience modest toxicities and burdens without compensatory therapeutic benefit. Conversely, patients in the last cohort often experience serious and in some circumstances life-threatening toxicities beyond those needed for therapeutic effect. By definition, these doses have an unfavorable risk-benefit balance.

A third objection might point to the variety of novel Phase 1 dose escalation strategies designed to reduce the number of patients receiving subtherapeutic doses (Babb and Rogatko 2001; Eisenhauer et al. 2000; Simon et al. 1997; Daugherty et al. 1998), or to the fact that many newer agents do not require escalation to toxic doses (Parulekar and Eisenhauer 2004). The problem with this objection is that, although these design reforms may reduce the ethical concerns identified, no design reform eliminates the fact that FIH Phase 1 study objectives require that some patients or cohorts receive doses above, and others below, the optimal therapeutic level of an intervention.14 That many patients will receive active, and tolerable, doses in middle cohorts, furthermore, does not “purchase” the therapeutic justification for individuals on either side of the dose optimum.15

Consider how this situation contrasts with well designed RCTs. If an RCT begins with an honest null hypothesis, there should be no expectation at the outset of the trial that any patient-volunteer in any arm of the study will be allocated to a treatment that is excessively toxic, insufficiently active, or otherwise inferior. Prospectively, then, such trials begin with the belief that no patients will be disadvantaged by enrollment, even though retrospectively, it may turn out that patients allocated to one arm were disadvantaged relative to patients in the other arm and/or outside the study. Note that RCTs that fail to refute the null hypothesis still achieve their scientific objective of testing a hypothesis: nonsuperiority of the newer intervention may not be the desired result, but it is an informative one. By contrast, in a Phase 1 trial it is known in advance—not discovered—that at least one dose tested will be demonstrably inferior to one of the other doses tested. Prospectively, then, Phase 1 trials begin with the knowledge that at least one patient will be disadvantaged relative to the other patients in the study and/or outside it, although the identity of the patient cannot be determined until the trial is run. This knowledge follows from the aims and architecture of FIH trials: a Phase 1 study that fails to show demonstrable superiority of one dose does not achieve its scientific objectives and provides less than secure footing for subsequent drug development.

In sum, then, even if it is ethical for physicians to offer individual patients enrollment as a treatment option and the trial promises a favorable risk-benefit balance in the aggregate, given the aims and architecture of FIH trials it should be difficult for an IRB to accept the claim that all patients in an FIH trial will receive interventions in a way that is consistent with the standard of competent care available outside of the trial—although there may be exceptions, as discussed subsequently. Thus, the purpose and design of FIH trials raises important challenges for the proposed extension of clinical equipoise and, for that matter, any framework that proposes to justify the risks of FIH trial enrollment on therapeutic grounds (see, e.g., Wendler and Miller 2007).

DISCUSSION

At this point we can summarize our findings. We have identified two credible objections to the extension of the first requirement of the principle of clinical equipoise to FIH trials. The first centers on the quality of preclinical evidence and whether the evidential standards for launching FIH trials support extension of the principle. The second objection is that the aims and architecture of FIH trials of necessity require that some patient-volunteers receive inactive and/or toxic levels of a new drug.

The objection from evidence requires that proponents of clinical equipoise articulate a basis for defining a reasonable threshold for evidence where patients lack treatment alternatives. It also raises important questions concerning the epistemic function of the principle of clinical equipoise, questions that are beyond the scope of the current paper. However, with respect to our present question—whether clinical equipoise is the appropriate normative standard for assessing the ratio of risk to direct-benefit for administration of drugs in FIH trials—this objection is inconclusive; although it suggests that IRBs should be unlikely to approve (even well-designed) FIH studies if these trials must satisfy the first requirement of clinical equipoise, it does not entirely rule out this possibility. Again, given the nontrivial risks posed by participation in FIH trials and ignorance of the conditions needed to elicit therapeutic action, it is unlikely that preclinical evidence alone can credibly justify judgments of favorable risk-benefit balance. In specific cases, however, IRBs may be reasonably convinced that patients who enter FIH trials will receive interventions with therapeutic justification.

The “aims and architecture” challenge presents a more serious problem for the proposed extension of clinical equipoise to FIH trials. It is difficult to imagine ways in which FIH trials could simultaneously achieve their objective—e.g., the reliable identification of optimal dosage for Phase 2 trials—and meet the first condition of clinical equipoise. Nevertheless, we can imagine circumstances in which, because departure from optimal dosing imposes no more than minimal risk, burden, or disadvantage, the “aims and architecture” challenge does not present an insuperable barrier to extending the first requirement of clinical equipoise. This might occur when there are sound reasons to expect that the test drugs will have a very broad therapeutic index, or when interventions are tested over a very narrow dose range. We also note that trial designs that maximize efficiency of information gain while minimizing the number of patients receiving subtherapeutic and/or intolerable doses will at least depart to a lesser extent from the principle of clinical equipoise than other designs. We suggest that applying such designs would help discharge ethical obligations to minimize risk.

Our analysis has several implications. First, we have identified an outer edge for the first requirement of clinical equipoise. No one could credibly claim that this requirement should be extended to Phase 0 and/or feasibility studies involving administration of study drugs in a patently nontherapeutic manner. The task now is to determine where, during clinical development, a protocol’s therapeutic justification emerges: do risks of administering drugs in Phase 1 trials that involve combinations with established modalities have a more credible claim of consistency with the first requirement of clinical equipoise? Do non-FIH Phase 1 trials conducted in pediatric populations involve a narrow enough dose range to neutralize the “aims and architecture” argument? Does clinical equipoise justify the risks of administering cytotoxic drugs in well-designed Phase 2 studies? The tension here is that, although drugs are administered at levels believed—on the basis of preclinical and Phase 1 studies—to have an optimal risk-benefit balance, inferences about toxicity are more reliable at this point than are inferences about clinical impact.

Second, our analysis suggests that at the level of review, the risks of administering drugs in FIH trials cannot be justified by appeals to the prospect of therapeutic benefit. Although others have made similar claims (Ross 2006), our approach of analyzing study architecture bypasses the need to resolve otherwise irreconcilable differences of opinion about the risk-benefit balance presented by FIH trials. It also avoids the problems posed by comparing the risks and benefits presented by an experimental intervention with a poorly defined and unstable category of nonstandard interventions aimed at disease management.

The conclusion that risks in FIH trials should not be justified by appeal to therapeutic benefits may seem counterintuitive and troubling. Many FIH trials involve considerable risk. In oncology, 10 to 14 percent of subjects experience grade 3 or 4 toxicities (Roberts et al. 2004; Horstmann et al. 2005). In movement disorder trials involving surgical delivery of an agent to the brain, the risk of hemorrhage leading to permanent neurological deficits is on the order of 1 to 2 percent (Kimmelman et al. 2009). A recent analysis of outcomes for patients in Phase 1 and 2 trials involving autologous stem cell transplantation for the treatment of progressive multiple sclerosis reported 5 percent mortality from early conditioning procedures (Saccardi et al. 2006). If such risks are not justified by corresponding therapeutic benefits, the ethical justification of risks in FIH trials will hinge on demonstrating the prospect of significant social benefits. We suggest that our findings place demands on Phase 1 researchers to define carefully the knowledge value of their studies and to use study designs that will be highly informative.

Our analysis further suggests that IRBs need workable frameworks for evaluating risk/social-value tradeoffs. At present, there are no widely accepted standards for balancing social value and risk (Kimmelman 2010; London 2005). Existing proposals, furthermore, are typically too restrictive for FIH studies—e.g., the standard for demarcated research risks proposed by Alex London (2006). The lack of standards raises concerns that risk evaluation for FIH trials is arbitrary and opaque. To protect FIH trial participants and the integrity of drug development, then, there must be a normative standard for assessing when a FIH study possesses sufficient research value.

Of course, even if the first requirement of clinical equipoise is inappropriate in this context—because it is unworkable—the second requirement may provide the normative standard required for determining when risks are justified by social value. Recall that the second requirement of the principle of clinical equipoise states that a “trial must be designed in such a way as to make it reasonable to expect that, if it is successfully conducted, … the results … should be convincing enough to resolve the dispute among clinicians” (Freedman 1987, p. 144). Clearly, our present analysis suggests that the second requirement is not the appropriate standard if the “dispute among clinicians” is defined in terms of a state of clinical equipoise. Even if this “dispute” is defined in more appropriate terms—e.g., as a dispute among clinical investigators about a drug’s clinical promise—however, we believe there are independent reasons for believing that the second requirement cannot provide the needed framework.

Although we leave the elaboration of our position to future work, if our working hypothesis proves correct, an alternative framework for assessing risk/social-value tradeoffs will have to be devised. Our present analysis suggests three desiderata for any such alternative. First, if current or imminent disagreement in the expert community—i.e., a state of clinical equipoise—is not the appropriate standard for initiating a FIH trial, an alternative framework must provide another standard. Specifically, it must say something about the location and nature of uncertainty warranting trial initiation. Second, an alternative framework must spell out what counts as an informative FIH trial result—one that can justify the considerable risks of drug administration in FIH trials. Should FIH trials be designed in order to provide adequate data for testing in Phase 2 studies, or should something more—e.g., data on pharmacodynamics (that is, the intervention’s effect on a biological target)—be sought? Third, an alternative framework must provide guidance concerning morally preferable trial designs by mandating maximum efficiency of information gain while minimizing the number of patients receiving subtherapeutic and/or intolerable doses. Satisfying these desiderata would take us a long way toward a workable and morally defensible standard for risk/social-value tradeoffs in FIH trials.

CONCLUSION

Given the critical role played by Phase 1 studies in the translation of basic research into clinical application, it is imperative to develop standards for judgments of risk, benefit, and value in Phase 1 trials. Currently, no widely accepted standards of this kind exist. In this paper, we examined whether the first requirement of clinical equipoise is the appropriate normative standard for assessing the ratio of risk to direct-benefit of drug administration in FIH trials.

We identified two credible challenges to the extension of the first requirement of clinical equipoise to FIH trials. The first centered on the quality of preclinical evidence: preclinical evidence is likely too weak to justify the belief that drugs administered in FIH trials present research subjects with a favorable balance of risks and direct-benefits. The second centered on the aims and architecture of FIH trials: the objectives and design of FIH trials ensure that some patient-participants in FIH trials will be presented with an unfavorable balance of risks and direct-benefits relative to the standard of competent care available outside the study.

Although much work remains to be done before firm conclusions can be drawn, our analysis supports the contention that the first requirement of clinical equipoise probably is not the appropriate normative standard for the justification of the risks posed by drug administration in FIH trials. Given that “ought implies can,” and that most FIH trials cannot satisfy the first requirement, it stands to reason that FIH trials generally should not be held to this standard. Failure to meet this standard, in other words, should not lead one to conclude that FIH trials are ethically impermissible. Rather, our analysis suggests that the ethical permissibility of FIH trials will turn not on the question of therapeutic justification, but on the question of whether risks of drug administration are outweighed by the social value of the knowledge to be gained.

Acknowledgments

This work was funded by Canadian Institutes of Health Research (CIHR) grants MOP 68835 and FRN 102823 and a post-doctoral fellowship (awarded to JAA) from the Research Institute of the Montreal Children’s Hospital. Without suggesting their endorsement, we thank Stan Shapiro, Kathleen Glass, Charles Weijer, Abe Fuks, and an anonymous referee for helpful feedback.

Footnotes

1

We are aware of at least one set of commentators who have endorsed the extension of the principle of clinical equipoise to Phase 1 trials involving patients (Miller and Weijer 2003).

2

Although much of the literature and debate around clinical equipoise has focused on the former feature, we note that many accounts emphasize that clinical equipoise furnishes a basis for establishing scientific and social value as well. This aspect of clinical equipoise has been emphasized most clearly in the debate over placebo controls. Various commentators have argued against the use of placebo controls in trials of second-generation treatments because, given the existence of an established effective first-generation treatment, comparison with placebo is clinically irrelevant and, therefore, lacks scientific and social value (Freedman 1990; Weijer 1999; 2003; National Placebo Working Committee 2005; Anderson 2006; Djulbegovic 2007). As this argument implies, the principle of clinical equipoise indexes scientific and social value to clinical relevance, a feature of the principle that is explicit in its second requirement.

3

Note: therapeutic warrant is an insufficient but necessary condition for (a state of) clinical equipoise; therapeutic warrant is not the same as clinical equipoise. This point is further clarified later in the text.

4

For the purposes of this paper, our concept of risk is based on (dis)utilities rather than events. If two patients have a 1 percent probability of dying from an intervention, an event-based definition would consider the risk each faces as equivalent. However, if one of those patients is expected to live only two months and the other two years, a (dis)utility-based definition would consider the risk to the second patient to be significantly greater, because more potential years of life are lost.
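To illustrate the footnote's example with a simple expected-loss calculation (treating expected remaining life-time lost as the relevant disutility, which is our own simplifying assumption):

\[
0.01 \times 2\ \text{months} = 0.02\ \text{months of expected life lost}
\quad\text{versus}\quad
0.01 \times 24\ \text{months} = 0.24\ \text{months of expected life lost},
\]

so, on this metric, the same 1 percent mortality risk is twelve times greater for the patient with two years of expected survival.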

5

The “RCT dilemma” refers to the moral tension at the heart of RCTs. There are varying formulations of this dilemma. In its original formulation, however, the dilemma was stated in terms of the tension between a physician’s commitment to serving the best interests of each of his/her patients (duty of care), and his/her commitment to sound medical research and, thus, to the RCT (Fried 1974; Marquis 1983; Miller and Weijer 2003). These two commitments were seen to be incompatible because the random treatment assignment characteristic of RCTs was taken to be inconsistent with the duty of care. Clinical equipoise is supposed to resolve this dilemma by ensuring that randomization is consistent with the duty of care precisely because the expert community is uncertain or in conflict concerning which treatment is best.

6

Of course, there is ongoing debate concerning whether this standard should be local or global, but the central point remains important and uncontroversial (at least according to supporters of the principle): judgments concerning value and risk necessarily are comparative in nature and the relevant comparison class is always found in the context of clinical practice.

7

In cancer, for example, FIH trials often are performed in patients with advanced solid tumors. Subsequent studies might narrow the indication to a particular disease subtype, such as ovarian cancer (Eisenhauer et al. 2006, Chapter 3).

8

We note that, in cancer studies, toxicity itself often is used for this purpose: potential efficacy is inferred from toxicity. But it is an open question whether and when toxicity is an effective biomarker for potential efficacy, whereas toxicity is a direct and reliable measure of harm and burden.

9

Technically (in our view), clinical equipoise does not require that a minority of experts actually believe that the therapeutic procedures in question are consistent with the standard of care. It is sufficient that the existing evidence would justify such a belief if a minority of experts did hold it. This view is shared by other defenders of the principle (see, e.g., Miller and Weijer 2003, pp. 101–2).

10

The distinction between this question (which is a question that IRBs must answer) and what we might call the question of enrollment (which the enrolling physician must answer) has been developed most explicitly by Miller and Weijer (2003, 2006). According to them, equipoise applies at the level of IRB review, but plays no direct role at the level of enrollment.

11

In oncology, the maximum tolerated dose (MTD) typically is reached in several steps. The starting dose is set at one-tenth the lethal dose for 10 percent of rodents in a toxicology study (LD10) in order to strike a favorable balance between risk and potential benefit under conditions of uncertainty. Risks are minimized by providing a reasonable margin of safety. But potential benefits also are reduced correspondingly. Given that the optimal dose is typically the MTD, past experience suggests that one-tenth the LD10 will be subtherapeutic. But the relationship between rodent and human response can vary, and one-tenth the LD10 may, on occasion, turn out to be the MTD. If it is not the MTD, however, a larger dose is administered to the next cohort. This process is repeated until the MTD is reached.
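As a purely hypothetical numerical illustration of the starting-dose convention described above (setting aside the interspecies scaling applied in practice):

\[
\text{starting dose} \;=\; \tfrac{1}{10} \times \mathrm{LD}_{10}, \qquad
\mathrm{LD}_{10} = 50\ \mathrm{mg/m^2} \;\Rightarrow\; \text{starting dose} = 5\ \mathrm{mg/m^2},
\]

with each subsequent cohort receiving a higher dose until the MTD is reached.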

12

By contrast, doses at or very close to the optimal dose—once discovered—may well present a risk/direct-benefit ratio comparable with the standard of competent care outside the study—i.e., no treatment.

13

Further complicating the notion of using nonstandard treatment as a standard is the fact that many FIH cancer studies enroll patients with different cancer types; the risks and benefits associated with nonstandard treatments for patients with different cancer types seem likely to vary.

14

Newer, Bayesian designs do not eliminate this tension. Briefly, these designs use outcomes from each patient to adjust a predicted dose-response function. For example, investigators might define the MTD as the dose at which 20 percent of patients experience dose limiting toxicities, and begin the study with a prior belief that a certain dose level will be the MTD. As evidence on toxicity accumulates after each patient is given the drug, the team would adjust their dose response curve and use this to provide a better prediction of the MTD. If toxicity observed at a given dose is less than expected from the predicted curve, investigators would update their dose-response curve, thereby increasing their estimate of the MTD. Dose escalation would continue until investigators achieve a prespecified confidence that their estimate of MTD is correct. Our aims and architecture argument applies to Bayesian designs because: (1) the goal of adjusting dose-response curves requires that patient-volunteers be administered different doses, of which only one will be optimal; (2) for safety reasons, adaptive studies almost always begin well below the initial estimated MTD. Thus, they almost always begin at a level that is believed to be subtherapeutic. As a matter of practice, moreover, patient-volunteers are sometimes deliberately given drug doses below the estimated MTD to enable pharmacokinetic studies. For reviews of Bayesian designs in the setting of Phase 1 oncology, see Eisenhauer et al. (2000); Le Tourneau, Lee, and Siu (2009); and Paoletti et al. (2006).
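For readers unfamiliar with these designs, the following is a minimal Python sketch of the kind of updating step described above, using a one-parameter power model of the sort employed in continual reassessment methods; the dose levels, skeleton probabilities, prior, and 20 percent toxicity target are illustrative assumptions rather than the specification of any actual trial.

```python
# A minimal sketch of a Bayesian (continual-reassessment-style) dose-toxicity update.
# All numbers below are illustrative assumptions, not any trial's actual parameters.
import numpy as np

doses = [1, 2, 3, 4, 5]                                # hypothetical dose levels
skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])    # prior guesses of toxicity probability
target = 0.20                                          # MTD defined as ~20% dose-limiting toxicity

grid = np.linspace(-3, 3, 601)                         # grid over the model parameter a
prior = np.exp(-grid**2 / (2 * 1.34**2))               # ~N(0, 1.34^2) prior, unnormalized
prior /= prior.sum()

def posterior(prior, outcomes):
    """Update belief about a given (dose_index, toxicity 0/1) observations."""
    like = np.ones_like(grid)
    for d, tox in outcomes:
        p = skeleton[d] ** np.exp(grid)                # power model: p_d(a) = skeleton_d^exp(a)
        like *= p if tox else (1.0 - p)
    post = prior * like
    return post / post.sum()

def recommended_dose(post):
    """Dose whose posterior-mean toxicity probability is closest to the target."""
    p_mean = np.array([(skeleton[d] ** np.exp(grid) * post).sum()
                       for d in range(len(doses))])
    return int(np.argmin(np.abs(p_mean - target)))

# Example: three patients at dose level 0 without toxicity, one toxicity at level 1.
post = posterior(prior, [(0, 0), (0, 0), (0, 0), (1, 1)])
print("next recommended dose level (0-indexed):", recommended_dose(post))
```

The sketch shows only the updating and dose-recommendation step; real designs add safety constraints (for example, no skipping of dose levels) and a prespecified stopping rule, and, as noted in the footnote, they still require that different cohorts receive different, mostly non-optimal doses.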

15

Similarly, that some fraction of patients in a placebo-controlled trial receives a promising or therapeutically active drug does not “purchase” the therapeutic justification for the placebo arm.

Contributors

James A. Anderson, Ph.D., is a Postdoctoral Fellow, Clinical Trials Research Group, Biomedical Ethics Unit, Faculty of Medicine, McGill University, Montreal, Quebec, Canada.

Jonathan Kimmelman, Ph.D., is Assistant Professor, Clinical Trials Research Group, Biomedical Ethics Unit/Social Studies of Medicine, Faculty of Medicine, McGill University, Montreal, Quebec, Canada.

References

  1. AAMC. Association of American Medical Colleges. Promoting Translational and Clinical Science: The Critical Role of Medical Schools and Teaching Hospitals. Report of the AAMC’s Task Force II on Clinical Research 2006 [Google Scholar]
  2. Agrawal Manish, Emanuel Ezekiel J. Ethics of Phase 1 Oncology Studies: Reexamining the Arguments and Data. JAMA. 2003;290:1075–82. doi: 10.1001/jama.290.8.1075. [DOI] [PubMed] [Google Scholar]
  3. Anderson James. The Ethics and Science of Placebo-Controlled Trials: Assay Sensitivity and the Duhem-Quine Thesis. Journal of Medicine and Philosophy. 2006;31:65–81. doi: 10.1080/03605310500499203. [DOI] [PubMed] [Google Scholar]
  4. Annas George. Questing for Grails: Duplicity, Betrayal and Self-Deception in Postmodern Medical Research. Journal of Contemporary Health Law and Policy. 1996;12:297–324. [PubMed] [Google Scholar]
  5. ASCO. American Society of Clinical Oncology. Critical Role of Phase I Clinical Trials in Cancer Treatment. Journal of Clinical Oncology. 1997;15:853–59. doi: 10.1200/JCO.1997.15.2.853. [DOI] [PubMed] [Google Scholar]
  6. Ashcroft Richard. Equipoise, Knowledge, and Ethics in Clinical Research and Practice. Bioethics. 1999;13:314–26. doi: 10.1111/1467-8519.00160. [DOI] [PubMed] [Google Scholar]
  7. Babb James S, Rogatko Andre. Patient Specific Dosing in a Cancer Phase I Clinical Trial. Statistics in Medicine. 2001;20:2079–90. doi: 10.1002/sim.848. [DOI] [PubMed] [Google Scholar]
  8. Bebarta Vik, Luyten Dylan, Heard Kennon. Emergency Medicine Animal Research: Does Use of Randomization and Blinding Affect the Results? Academic Emergency Medicine. 2003;10:684–87. doi: 10.1111/j.1553-2712.2003.tb00056.x. [DOI] [PubMed] [Google Scholar]
  9. Benatar Michael. Lost in Translation: Treatment Trials in the SOD1 Mouse and in Human ALS. Neurobiology of Disease. 2007;26:1–13. doi: 10.1016/j.nbd.2006.12.015. [DOI] [PubMed] [Google Scholar]
  10. Clark Diana L, Andrews Paul A, Smith David D, et al. Predictive Value of Preclinical Toxicology Studies for Platinum Anticancer Drugs. Clinical Cancer Research. 1999;5:1161–67.
  11. Evans Charles H Jr, Ildstad Suzanne T, editors. Committee on Strategies for Small-Number Participant Clinical Research Trials. Small Clinical Trials: Issues and Challenges. Washington, DC: National Academy Press; 2001.
  12. Daugherty Christopher K, Ratain Mark J, Emanuel Ezekiel J, et al. Ethical, Scientific, and Regulatory Perspectives Regarding the Use of Placebos in Cancer Clinical Trials. Journal of Clinical Oncology. 2008;26:1371–78. doi: 10.1200/JCO.2007.13.5335.
  13. Daugherty Christopher K, Ratain Mark J, Minami Hironobu, et al. Study of Cohort-Specific Consent and Patient Control in Phase I Cancer Trials. Journal of Clinical Oncology. 1998;16:2305–12. doi: 10.1200/JCO.1998.16.7.2305.
  14. Djulbegovic Benjamin. Articulating and Responding to Uncertainties in Clinical Research. Journal of Medicine and Philosophy. 2007;32:79–98. doi: 10.1080/03605310701255719.
  15. Djulbegovic Benjamin, Cantor Alan, Clarke Mike. The Importance of the Preservation of the Ethical Principle of Equipoise in the Design of Clinical Trials: Relative Impact of the Methodological Quality Domains of the Treatment Effect in Randomized Controlled Trials. Accountability in Research. 2003;10:301–15. doi: 10.1080/714906103.
  16. Eisenhauer Elizabeth A, Twelves Christopher, Buyse Marc. Phase 1 Cancer Clinical Trials: A Practical Guide. Oxford: Oxford University Press; 2006.
  17. Eisenhauer EA, O’Dwyer PJ, Christian M, Humphrey JS. Phase I Clinical Trial Design in Cancer Drug Development. Journal of Clinical Oncology. 2000;18:684–92. doi: 10.1200/JCO.2000.18.3.684.
  18. Evans Emily L, London Alex John. Equipoise and the Criteria for Reasonable Action. Journal of Law, Medicine and Ethics. 2006;34:441–50. doi: 10.1111/j.1748-720X.2006.00050.x.
  19. Fleming Thomas R, DeMets David L. Surrogate End Points in Clinical Trials: Are We Being Misled? Annals of Internal Medicine. 1996;125:605–13. doi: 10.7326/0003-4819-125-7-199610010-00011.
  20. FDA. Food and Drug Administration. Critical Path Report, Innovation or Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products. 2004.
  21. FDA. Food and Drug Administration. FDA Guidance for Industry, Draft Guidance, INDs—Approaches to Complying with CGMP During Phase 1. 2006.
  22. Freedman Benjamin. Equipoise and the Ethics of Clinical Research. New England Journal of Medicine. 1987;317:141–45. doi: 10.1056/NEJM198707163170304.
  23. Freedman Benjamin. Placebo-Controlled Trials and the Logic of Clinical Purpose. IRB: Ethics and Human Research. 1990;12(6):1–6.
  24. Freedman Benjamin, Fuks Abraham, Weijer Charles. Demarcating Research and Treatment: A Systematic Approach for the Analysis of the Ethics of Clinical Research. Clinical Research. 1992;40:653–60.
  25. Fried Charles. Medical Experimentation: Personal Integrity and Social Policy. Amsterdam: North Holland; 1974.
  26. Gifford Fred. Pulling the Plug on Clinical Equipoise: A Critique of Miller and Weijer. Kennedy Institute of Ethics Journal. 2007;17:203–26. doi: 10.1353/ken.2007.0020.
  27. Gladstone David J, Black Sandra E, Hakim Antoine M. Toward Wisdom from Failure: Lessons from Neuroprotective Stroke Trials and New Therapeutic Directions. Stroke. 2002;33:2123–36. doi: 10.1161/01.str.0000025518.34157.51.
  28. Guyatt Gordon H, Haynes R Brian, Jaeschke Roman Z, et al. Users’ Guides to the Medical Literature XXV. Evidence-Based Medicine: Principles for Applying the Users’ Guides to Patient Care. JAMA. 2000;284:1290–96. doi: 10.1001/jama.284.10.1290.
  29. Horstmann Elizabeth, McCabe Mary S, Grochow Louise, et al. Risks and Benefits of Phase 1 Oncology Trials, 1991 through 2002. New England Journal of Medicine. 2005;352:895–904. doi: 10.1056/NEJMsa042220.
  30. Joffe Steven, Miller Franklin. Rethinking Risk-Benefit Assessment for Phase 1 Cancer Trials. Journal of Clinical Oncology. 2006;24:2987–90. doi: 10.1200/JCO.2005.04.9296.
  31. Kimmelman Jonathan. Ethics at Phase 0: Clarifying the Issues. Journal of Law, Medicine, and Ethics. 2007;35:727–33. doi: 10.1111/j.1748-720X.2007.00194.x.
  32. Kimmelman Jonathan. Gene Transfer and the Ethics of First-in-Human Research: Lost in Translation. Cambridge: Cambridge University Press; 2010.
  33. Kimmelman Jonathan, London Alex John, Ravina Bernard, Ramsay Tim, et al. Launching Invasive, First-in-Human Trials Against Parkinson’s Disease: Ethical Considerations. Movement Disorders. 2009;24:1893–901. doi: 10.1002/mds.22712.
  34. King Nancy. Defining and Describing Benefit Appropriately in Clinical Trials. Journal of Law, Medicine & Ethics. 2000;28:332–43. doi: 10.1111/j.1748-720x.2000.tb00685.x.
  35. Koyfman Shlomo A, Agrawal Manish, Garrett-Mayer Elizabeth, et al. Risks and Benefits Associated with Novel Phase 1 Oncology Trial Designs. Cancer. 2007;110:1115–24. doi: 10.1002/cncr.22878.
  36. Le Tourneau Christophe, Lee J Jack, Siu Lillian L. Dose Escalation Methods in Phase I Cancer Clinical Trials. Journal of the National Cancer Institute. 2009;101:708–20. doi: 10.1093/jnci/djp079.
  37. London Alex John. Does Research Ethics Rest on a Mistake? The Common Good, Reasonable Risk and Social Justice. American Journal of Bioethics. 2005;5(1):37–39. doi: 10.1080/15265160590927750.
  38. London Alex John. Reasonable Risks in Clinical Research: A Critique and a Proposal for the Integrative Approach. Statistics in Medicine. 2006;25:2869–85. doi: 10.1002/sim.2634.
  39. London Alex John. Clinical Equipoise: Foundational Requirement or Fundamental Error? In: Steinbock Bonnie, editor. The Oxford Handbook of Bioethics. New York: Oxford University Press; 2007. pp. 571–96.
  40. Lowenstein Pedro R, Castro Maria G. Uncertainty in the Translation of Preclinical Experiments to Clinical Trials. Why Do Most Phase III Clinical Trials Fail? Current Gene Therapy. 2009;9:368–74. doi: 10.2174/156652309789753392.
  41. Macleod Malcolm R, O’Collins Tori, Howells David W, Donnan Geoffrey A. Pooling of Animal Experimental Data Reveals Influence of Study Design and Publication Bias. Stroke. 2004;35:1203–8. doi: 10.1161/01.STR.0000125719.25853.20.
  42. Macleod Malcolm R, O’Collins Tori, Horky Laura L, et al. Systematic Review and Meta-analysis of the Efficacy of FK506 in Experimental Stroke. Journal of Cerebral Blood Flow and Metabolism. 2005;25:713–21. doi: 10.1038/sj.jcbfm.9600064.
  43. Markman Maurie. “Therapeutic Intent” in Phase 1 Oncology Trials: A Justifiable Objective. Archives of Internal Medicine. 2006;166:1446–48. doi: 10.1001/archinte.166.14.1446.
  44. Marquis Don. Leaving Therapy to Chance. Hastings Center Report. 1983;13(4):40–47.
  45. Meropol Neal J. A Renewed Call for Equipoise. Journal of Clinical Oncology. 2007;25:3392–94. doi: 10.1200/JCO.2007.11.9503.
  46. Miller Franklin, Brody Howard. A Critique of Clinical Equipoise: Therapeutic Misconception in the Ethics of Clinical Trials. Hastings Center Report. 2003;33(3):19–28.
  47. Miller Franklin G, Joffe Steven. Benefit in Phase 1 Oncology Trials: Therapeutic Misconception or Reasonable Treatment Option? Clinical Trials. 2008;5:617–23. doi: 10.1177/1740774508097576.
  48. Miller Paul, Weijer Charles. Rehabilitating Equipoise. Kennedy Institute of Ethics Journal. 2003;13:93–118. doi: 10.1353/ken.2003.0014.
  49. Miller Paul, Weijer Charles. Trust Based Obligations of the State and Physician Researchers to Patient-Subjects. Journal of Medical Ethics. 2006;32:542–47. doi: 10.1136/jme.2005.014670.
  50. Miller Paul, Weijer Charles. Equipoise and the Duty of Care in Clinical Research: A Philosophical Response to Our Critics. Journal of Medicine and Philosophy. 2007;32:117–33. doi: 10.1080/03605310701255735.
  51. National Placebo Working Committee. Final Report of the National Placebo Working Committee on the Appropriate Use of Placebos in Clinical Trials in Canada. Ottawa, Canada: Canadian Institutes of Health Research; 2005. [accessed 8 March 2010]. Available at http://www.cihr-irsc.gc.ca/e/25139.html.
  52. Newell DR, Burtles SS, Fox BW, et al. Evaluation of Rodent-Only Toxicology for Early Clinical Trials with Novel Cancer Therapeutics. British Journal of Cancer. 1999;81:760–68. doi: 10.1038/sj.bjc.6690761.
  53. Newell DR, Silvester J, McDowell C, Burtles SS. The Cancer Research UK Experience of Pre-Clinical Toxicology Studies to Support Early Clinical Trials with Novel Cancer Therapies. European Journal of Cancer. 2004;40:899–906. doi: 10.1016/j.ejca.2003.12.020.
  54. Paoletti Xavier, Baron Benoît, Schöffski Patrick, et al. Using the Continual Reassessment Method: Lessons Learned from an EORTC Phase I Dose Finding Study. European Journal of Cancer. 2006;42:1362–68. doi: 10.1016/j.ejca.2006.01.051.
  55. Parulekar Wendy R, Eisenhauer Elizabeth A. Phase I Trial Design for Solid Tumor Studies of Targeted, Non-Cytotoxic Agents: Theory and Practice. Journal of the National Cancer Institute. 2004;96:990–97. doi: 10.1093/jnci/djh182.
  56. Perel Pablo, Roberts Ian, Sena Emily, et al. Comparison of Treatment Effects Between Animal Experiments and Clinical Trials: Systematic Review. BMJ. 2007;334:197. doi: 10.1136/bmj.39048.407928.BE.
  57. Philip Maria, Benatar Michael, Fisher Marc, Savitz Sean I. Methodological Quality of Animal Studies of Neuroprotective Agents Currently in Phase II/III Acute Ischemic Stroke Trials. Stroke. 2009;40:577–81. doi: 10.1161/STROKEAHA.108.524330.
  58. Piantadosi Steven. Clinical Trials: A Methodologic Perspective. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc; 2005.
  59. Roberts Thomas G, Goulart Bernardo H, Squitieri Lee, et al. Trends in the Risks and Benefits to Patients with Cancer Participating in Phase 1 Clinical Trials. JAMA. 2004;292:2130–40. doi: 10.1001/jama.292.17.2130.
  60. Ross Lainie. Phase I Research and the Meaning of Direct Benefit. Journal of Pediatrics. 2006;149:S20–S24. doi: 10.1016/j.jpeds.2006.04.046.
  61. Saccardi R, Kozak T, Bocelli-Tyndall C, et al. Autologous Stem Cell Transplantation for Progressive Multiple Sclerosis: Update of the European Group for Blood and Marrow Transplantation Autoimmune Diseases Working Party Database. Multiple Sclerosis. 2006;12:814–23. doi: 10.1177/1352458506071301.
  62. Simon Richard, Freidlin Boris, Rubinstein Larry, et al. Accelerated Titration Designs for Phase I Clinical Trials in Oncology. Journal of the National Cancer Institute. 1997;89:1138–47. doi: 10.1093/jnci/89.15.1138.
  63. Veatch Robert. The Irrelevance of Equipoise. Journal of Medicine and Philosophy. 2007;32:167–83. doi: 10.1080/03605310701255776.
  64. Weijer Charles. Placebo-Controlled Trials in Schizophrenia: Are They Ethical? Are They Necessary? Schizophrenia Research. 1999;35:211–18. doi: 10.1016/s0920-9964(98)00127-3.
  65. Weijer Charles. The Ethics of Placebo-Controlled Trials. Journal of Bone and Mineral Research. 2003;18:1150–53. doi: 10.1359/jbmr.2003.18.6.1150.
  66. Weijer Charles, Miller Paul B. When Are Research Risks Reasonable in Relation to Anticipated Benefits? Nature Medicine. 2004;10:570–73. doi: 10.1038/nm0604-570.
  67. Weijer Charles, Shapiro Stanley H, Glass Kathleen C. Clinical Equipoise and Not the Uncertainty Principle Is the Moral Underpinning of the Randomized Controlled Trial. BMJ. 2000;321:756–58. doi: 10.1136/bmj.321.7263.756.
  68. Wendler David, Miller Franklin G. Assessing Research Risks Systematically: The Net Risks Test. Journal of Medical Ethics. 2007;33:481–86. doi: 10.1136/jme.2005.014043.
  69. Zerhouni Elias. The NIH Roadmap. Science. 2003;302:63–72. doi: 10.1126/science.1091867.