Learning Health Systems. 2017 Dec 5;2(1):e10048. doi: 10.1002/lrh2.10048

The regulation of clinical research: What's love got to do with it?

John D Lantos
PMCID: PMC6508771  PMID: 31245575

Abstract

The central philosophical pillar of the system of research regulation in the United States today is that clinical investigators cannot and should not be trusted to protect the interests of the people whom they recruit to participate in research. That distrust of researchers is coupled with a starry‐eyed idealism about the trustworthiness of clinicians. In my opinion, the distrust of researchers and the complacency about clinicians are both misplaced. The result of these twin errors is that people are overprotected in research studies and inadequately protected in clinical care. Patients outside of research studies are exposed to many types of risks from innovative therapy and from practice variation. Researchers who try to study these risks in ways that could reduce them are hampered by burdensome regulations.

We need a fundamental theoretical and conceptual change. The change would require us to acknowledge 2 things. First, research can be done in a way that does not harm (and might help) current patients. Second, researchers as moral agents can balance their moral obligations to patients with their obligations to science just as clinicians balance their fiduciary obligations to patients with other interests.

Keywords: ethics, IRB, minimal risk, neonatology, OHRP, oxygen‐saturation, regulation, research


Abbreviations

SUPPORT: Surfactant, Positive‐Pressure, Pulse Oximetry Randomized Trial

FiO2: fraction of inspired oxygen

1. THE REGULATION OF CLINICAL RESEARCH

1.1. The heart of research regulation

It should not be controversial to claim that our system of research regulation in the United States today is based on a deep distrust of researchers and the entire research enterprise. This distrust may be warranted because of past research abuses, including Tuskegee, the Human Radiation Experiments, the Guatemala syphilis studies, and the many other unethical studies that were copiously documented by Beecher in 1966. As a result of these examples of researchers who put the goals of science ahead of the interests of their research subjects, we have put in place a system of research regulation that is based on the idea that clinical investigators cannot and should not be trusted to protect the interests of the people whom they recruit to participate in research. It is likely, though difficult to prove, that research subjects are safer as a result of this system. It is also possible that an unintended consequence of the meticulous and sometimes intrusive oversight is that some ethically justifiable research is delayed or cannot be done at all.1, 2, 3

In comparison to the distrust of researchers, our current system of oversight and regulation treats clinicians as much more trustworthy. This is true even as studies show that, overall, the public's trust in physicians has declined.4

The systems of regulation may not reflect the views of the general public. According to a Pew Research Center study, public trust in medicine and in research was about the same: for both, about 40% of the public had “a great deal of confidence in the people running these institutions.”5 Yet those systems demand meticulous oversight of the researcher. Researchers must be certified annually as understanding the ethics of research. Their projects must be reviewed by institutional review boards (IRBs) before they begin, and any change to a protocol must be submitted to the IRB. By contrast, clinicians are entrusted to decide which therapies to provide and to explain the risks, benefits, and treatment choices that patients face. Together, clinicians and patients are thought capable of arriving at optimal treatment choices without any outside oversight.

The contrast between the 2 is highlighted in situations in which a clinician wants to study the safety and efficacy of 2 widely available treatments. That physician could prescribe either of those treatments without any oversight. But if he wanted to study them, he would no longer be trusted to independently oversee the choices that his patients made. Instead, he would be seen as having a serious and irreducible conflict of interest that could only be mitigated by IRB oversight. Only the IRB, not the investigator, is empowered to decide exactly what he can and cannot say in describing the study and seeking the patient's informed consent.

In my opinion, the distrust of researchers and the complacency about clinicians are both misplaced.

The result of these twin errors is that people are overprotected in research studies and inadequately protected in clinical care. From an ethical perspective, or a public health perspective, or a purely economic perspective, we would be better off thinking about a continuum between clinical care and comparative effectiveness research and demanding a similar level of oversight for both.

To understand the roots of our current system of research regulation, I will first review the ethical arguments that have buttressed this system for the last 5 decades. I will then present some of the counterarguments. Finally, to illustrate the dangers of our current overregulation of research and underregulation of clinical practice, I will discuss 2 recent controversies in research ethics.

1.2. Deep suspicions have deep roots

The view that researchers cannot be trusted but that clinicians can comes from a particular understanding of the psychological motivations of each group of professionals. In 1980, Churchill outlined the argument that has become conventional wisdom about researchers. He began by noting that clinicians have (or ought to have) one goal, the best interests of the patient, while researchers have (or ought to have) another, the pursuit of generalizable knowledge. He wrote, “The acknowledged goal of the physician‐patient relationship is healing or the health of the patient. The scientific investigator cannot claim this goal or the moral authority which goes with it.”6 He saw the 2 relationships as mutually exclusive: “The two relationships have lives of their own which, by their very nature, compel or urge to certain priorities and inclinations to perceive and act in certain ways.”

Churchill's language is strong. The researcher is “compelled” and “urged” to pursue knowledge. These words suggest that the researcher has no choice, that he is driven by a force as powerful as an addiction. It is this view of the researcher that has led to the mandate for strict oversight. The researcher's urges are thought to be so compelling that he cannot be independently accountable for his actions. As Clancy Martin noted, writing about addiction in another context, “It is almost impossible for the addict to learn, to understand, and to remember that he cannot have his drug.” Following Churchill, this is how we see the researcher. Researchers cannot be trusted to think rationally, to make moral judgments, or to distinguish between competing norms and goals.7

Contemporary debates about the risks of research, and our system of close monitoring by IRBs, suggest that Churchill's view of the moral psychology of the clinical investigator is the common view today. The investigator is seen as an idealistic utilitarian, working hard to generate knowledge that will help future patients but willing, in the process, to heedlessly and unreflectively sacrifice the interests of present patients.

Miller and Brody echo Churchill's view and elaborate on the ways in which the clinician's loyalties permit trust while the researcher's undermine it. They write,

Patients should understand that when they enter a physician‐patient relationship, what defines this specific sort of relationship is the overriding commitment of the physician to that individual patient's benefit. Research participants form a different sort of relationship with the professionals in charge. Failing to see the difference between these two sorts of relationships (however much they might be blurred or overlap in particular settings) creates a fundamental problem for protecting patients or subjects from exploitation.8

Churchill, Miller, and Brody, by contrast, imagine that the fiduciary responsibilities that the clinician assumes and accepts are so powerful as to serve as an adequate protection of the patient's interests. A key question, then, is whether this represents an unrealistic oversimplification of the motives and the moral psychology of both researchers and clinicians. I believe it does. I believe that researchers can act out of a powerful sense of fiduciary responsibility, seeking what is best for their patients precisely by studying the efficacy of the treatments that they prescribe. Furthermore, clinicians can compromise their fiduciary responsibilities if they exude and communicate unwarranted confidence in their knowledge of what is best.

1.3. An alternative view: Researchers as patient advocates

Most clinician‐researchers do not see themselves as urged or compelled in this way. Instead, they see well‐designed human subject research as entirely harmonious with uncompromised loyalty to the best interests of patients. They see these interests and commitments as complementary rather than competing.

Fost, for example, believes that it is unethical to recommend unproven therapies to patients. He imagines that in conversation with a patient about enrolling in a clinical trial, he would say, “My own conscience tells me it would not be responsible to give (an unstudied treatment) to you in an uncontrolled way, because neither you, nor I, nor future patients would ever know whether it helped or hurt.”9

Barrington echoes this view. He notes, “I have a fiduciary obligation to provide optimal treatment. I also have a moral obligation to know what the optimal treatment is. I also, simultaneously, have a moral obligation as a researcher to keep trying to find out what the best treatments may be.”10

This view, too, has deep roots. Katz was a pioneer of research ethics and one of the earliest advocates for patient empowerment, informed consent, and shared decision‐making. In 1969, he suggested that research and therapy overlapped in complex ways: “The multiple purposes of medical practice, caring for patients, advancing science, improving the health of the community, nations, and future generations cannot be separated clearly in most decisions that physician investigators have to make. Instead, more often than not, all these purposes are present in every decision.” Katz concluded that “research and therapy, pursuit of knowledge and treatment, are not separate but intertwined.”11

Toulmin also believed that the researcher was not uniquely conflicted. Instead, he suggested that all professionals had conflicting obligations, “The possibility of internal conflicts of obligation has been built into the practice of medicine ever since the time of Hippocrates, whose oath had the physician swear to serve, not merely his immediate patient, but also ‘the art.’ At the present time, this conflict shows up in the moral quandaries attending the conduct of random clinical trials.”12

Passamani sees these overlapping roles as a way to “protect physicians and their patients from therapies that are ineffective or toxic.” Grunberg and Cefalu posit that the alternative to such clinical research is not individualized (and thus better) patient care but merely the pretense of an omniscience that physicians do not and cannot possibly have.13

1.4. How these theories play out in the real world

A recent controversy about a study of oxygen therapy for premature babies illustrates these competing views.14 The debate has implications for the development of learning healthcare systems.15 It focuses on the ways that we think about and inform patients about the risks of different treatments or the risks of studies to evaluate those treatments.

The controversy focused on a particular type of research, namely, research on therapies that are in widespread use and about which there is known variation in physicians' practices. This type of research has been called “research on medical practices” or “comparative effectiveness research.” In such studies, all treatments that are offered in a trial are also readily available outside the trial and, thus, all of the treatments are available to patients whether they are in a study or not. Often, it is unclear when or how doctors decide to use one or another of these therapies, both of which are considered to be within the standard of care. Comparative effectiveness research is designed to determine whether there are clear differences between such therapies.

In these situations, there are, broadly speaking, 2 ways to think about risk. One way is to focus on the measurable physical or psychological risks that are associated with the procedures, interventions, or drugs that are used for the patients who become research subjects in the actual trial. Those risks can be compared between the 2 arms within the trial. Such trials also allow comparison of risks between all patients in the trial (regardless of which treatment they receive) and patients who are not in the trial who are receiving the study treatments in a nonrandom way.16 The other way to think about risk focuses on the less quantifiable risks that are associated with being involved in a research protocol itself: the fact that the patient's doctor is also a researcher, that treatment will be assigned at random, and that the intervention will be provided according to a pre‐defined protocol as opposed to by the choice of the doctor.17 This approach to deciding on therapy can threaten the trust that a patient has in his or her doctor. Loss of trust is a very different sort of risk than is the risk of physical or psychological harm that is directly caused by the study interventions themselves.

In assessing the importance of these 2 types of risks, it is tempting to simply combine them and to think of them as both contributing equally to the sum total of risks associated with research. Of late, however, it has become necessary to disentangle them. The rise of comparative effectiveness studies and our increasing knowledge of the degree to which widespread practice variation is the norm suggest that the 2 types of risk might pull in opposite directions. That is, an attempt by a clinical investigator to honestly disclose the truth about practice variation and to quantify the relative risks of randomization could lead to decrements in trust. On the other hand, deceptively withholding information about the true degree and nature of a doctor's uncertainty could lead to increased trust and psychological well‐being.

But here's the problem: if increased trust requires decreased transparency and honesty, then it is not necessarily a good thing. Similarly, decreased trust that results from a physician's greater honesty about uncertainty might be a good thing. Patients should be wary of unstudied treatments. False reassurance, in such situations, does not empower patients or promote autonomy.

We ought to require full disclosure about what we know and do not know for both research and for clinical treatment.

The Surfactant, Positive‐Pressure, Pulse Oximetry Randomized Trial (SUPPORT study) randomized extremely premature babies to 2 target levels of oxygen saturation. Researchers wanted to determine whether different oxygen levels were associated with different rates of retinopathy, chronic lung disease, or neurodevelopmental problems.18 There was widespread uncertainty about the effect of different oxygen levels on all of these outcomes.19 There was well‐known practice variation at the time when the study was implemented.20 Researchers believed that because all of the oxygen levels that were used in the study were within the range of oxygen levels that were recommended by experts at the time, the risks associated with enrollment in the study were minimal.21 Federal regulators and many other critics of the study disagreed.22

As it turned out, there were differences in retinopathy and mortality between the 2 arms of the study. Babies in the lower oxygen arm had less retinopathy (8% vs. 17%) but higher mortality (20% vs. 16%). Rates of lung disease and neurodevelopmental impairment were not different.23 Compared to babies who were eligible for the study but not enrolled, babies in the study had lower rates of severe retinopathy (13.3% vs. 24.1%) and mortality (17.6% vs. 24%).24 Investigators believed that these results confirmed their view that being in the study was less risky than not being in the study and that the finding of lower mortality in the high‐oxygen arm was an unexpected and very important additional discovery.
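To make the magnitude of these differences concrete, the reported rates can be recast as relative risks. The arithmetic below is only a rough sketch based on the rounded percentages quoted above, not on the trial's raw counts (which appear in the cited reports). Within the trial, comparing the lower‐oxygen arm to the higher‐oxygen arm:

$$\mathrm{RR}_{\text{retinopathy}} \approx \frac{0.08}{0.17} \approx 0.47, \qquad \mathrm{RR}_{\text{mortality}} \approx \frac{0.20}{0.16} = 1.25$$

Comparing enrolled babies to eligible but unenrolled babies:

$$\mathrm{RR}_{\text{severe retinopathy}} \approx \frac{0.133}{0.241} \approx 0.55, \qquad \mathrm{RR}_{\text{mortality}} \approx \frac{0.176}{0.240} \approx 0.73$$

On these rough figures, enrollment was associated with roughly half the rate of severe retinopathy and about a quarter lower mortality than nonenrollment, which is the comparison the investigators pointed to in arguing that being in the study was not the riskier choice.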

The federal Office for Human Research Protections (OHRP) claimed in its 2013 Determination Letter that neither the actual physical risks nor the better outcomes associated with participation in the study were the primary focus of its analysis of risk. Instead, it opined, such research is inherently risky because of the nature of the relationship that the clinical investigators had with the study subjects, that is, the “risk of randomization”: taking the choice of therapy out of the hands of the individual clinician (who was presumably acting in the patient's best interests) and submitting it to the researcher (who was presumably acting in the best interests of the pursuit of knowledge). Echoing Churchill, Miller, and Brody, the OHRP wrote, “Ultimately, the issues in this case come down to a fundamental difference between the obligations of clinicians and those of researchers. Doctors are required, even in the face of uncertainty, to do what they view as being best for their individual patients. Researchers do not have that same obligation.”25 Bioethicists Macklin and Shepherd agreed, “It is the doctors, not the researchers, who have a fiduciary obligation and long‐standing ethic to pursue the patient's best interests above all other considerations.”26 Annas, an expert in health law, echoed this view, “A physician must be guided by a fiduciary obligation to the patient. A researcher has no such obligation.”27 By this view, babies in the SUPPORT study were at higher risk than those not in the study, regardless of their actual outcomes, because the nature of their relationship with the clinician‐investigators put them at risk.

Such relational concerns about the risks to research subjects rest on a vague, subjective fear, rooted in often inaccurate normative presumptions, about a type of risk that is unquantifiable and therefore can never be judged minimal.

Now, of course, nobody could know the outcome of the study when it was designed. All they could do was make their best, educated guess about the reasonably foreseeable risks that might accrue to participants as a result of being in the study. At the outset, then, the researchers were of the opinion that there were no increased risks to babies in the study compared to babies who were not in the study. They said that in the consent form. As it turned out, they were correct. One might ask whether their confidence in this outcome was warranted, or whether, instead, the consent form should have said that study participation had higher risks than nonparticipation. That would not have been honest (they did not believe it to be true) and, as it turned out (though nobody could have known this for sure at the outset), would have been inaccurate.

To some, it was simply inconceivable that clinicians did not know the best level of oxygen to use. Carome stated this deep‐seated belief succinctly, “It is inconceivable that in 2005, highly trained, expert neonatologists providing routine individualized care outside the research context did not adjust FiO2 levels to achieve different oxygen saturation levels — in different babies and at different times for the same baby — within the broad range of 85‐95% based on important clinical indicators of tissue oxygenation.” He imagined that such decisions would follow “consultations with parents regarding balancing of specific risks”28 thereby allowing for parent preference and clinician knowledge—both acting in the best interests of the baby—to guide the choice of oxygen level.

Although Carome found this “inconceivable,” it was, in fact, the standard approach to the treatment of premature babies outside the study protocol. Each neonatal intensive care unit determined a target oxygen saturation and then adjusted the levels of oxygen provided to keep babies within the target range, just as they did on the study protocol.

1.5. Is the clinician really so unconflicted?

This dichotomous view of the moral psychology of clinician and investigator assumes that patients get the best care when their doctor exercises individual clinical judgment. That assumption is undermined by robust data on the randomness of practice variation. From studies by Wennberg and others,29 we now know that clinical practice varies widely between doctors, between hospitals,30 and between regions of the country.31 Furthermore, the variation has no plausible basis in evidence about improved outcomes.32 These variations can be studied to determine which treatments lead to the best outcomes at the lowest cost. Many healthcare systems now take this approach. They analyze electronic health record data for quality assessments, quality improvement initiatives, and system redesign projects, both across the system and within individual medical centers. Such activities are generally retrospective, but they can be combined with prospective studies to yield more precise information. They raise the question, however, of whether they require the sort of regulation that we currently mandate for research. Such activities also illustrate the ways in which our current dichotomized view of the loyal, patient‐focused clinician and the conflicted, research‐oriented investigator is destined to crumble.
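As a concrete illustration of what such retrospective analyses involve, here is a minimal sketch of tabulating practice variation and outcomes from electronic health record extracts. It is not from the article; the table layout and the column names (center, treatment, outcome) are hypothetical, and real EHR analyses require far more careful cohort definition:

```python
# A minimal, hypothetical sketch of quantifying practice variation from
# EHR-style records. Column names and data are invented for illustration.
import pandas as pd

# Illustrative records: each row is one patient encounter.
records = pd.DataFrame({
    "center":    ["A", "A", "A", "B", "B", "B", "C", "C"],
    "treatment": ["X", "X", "Y", "Y", "Y", "Y", "X", "Y"],
    "outcome":   [1, 0, 0, 1, 1, 0, 0, 1],  # 1 = adverse event
})

# Share of patients receiving treatment X at each center. Wide variation
# here, absent outcome evidence to justify it, is the kind of unexplained
# practice variation that Wennberg and others documented.
usage_by_center = (
    records.groupby("center")["treatment"]
           .apply(lambda s: (s == "X").mean())
)

# Adverse-event rate by treatment, pooled across centers. This comparison
# is confounded (sicker patients may get one treatment more often); it is
# precisely the question a randomized trial answers properly.
risk_by_treatment = records.groupby("treatment")["outcome"].mean()

print(usage_by_center)
print(risk_by_treatment)
```

Such a tabulation can surface unexplained variation, but, as the comments note, pooled outcome comparisons from observational data are confounded by indication, which is exactly why the prospective, randomized comparisons discussed above remain necessary.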

1.6. Problems with research regulation today

One of the ironies of our current approach is that it allows the clinician tremendous latitude to withhold important information from patients. The physician who assures her patient that “Doctor knows best” in a situation in which there is significant professional uncertainty about what is best misleads her patient. Imagine a neonatologist caring for a patient at the time of the SUPPORT study. The consensus among experts was that we did not know which oxygen target was best. A doctor who assures her patient that she does, in fact, know what is best is deceiving either herself, her patient, or both. Respect for patient autonomy and the obligation to inform the patient of treatment options would demand that a responsible and nonpaternalistic practitioner inform the patient of the disagreement in the professional community that led to the need for a rigorous trial. Failure to do so will result in 2 things. First, clinical trials will continue to be viewed as ethically problematic compared to treatment based on clinical judgment. Second, we will be left with retrospective data from the “natural experiments” of idiosyncratic practice variation rather than the better data that we might derive from rigorously designed clinical trials.

The view that clinician‐investigators have divided loyalties and that “pure” clinicians do not is also naïve about the demands on clinicians. As Wendler noted, “Clinicians have a number of appropriate interests that compete with providing the best care possible, including earning a living, helping other patients, conserving the resources of the institutions where they work, and training new clinicians.”33

Interestingly, patients and research subjects seem to be ahead of the regulators in understanding these issues. In both domains, they are pushing the boundaries of the permissible. With the help of social media, wearable devices, and other methods of organizing and communicating, patients and research subjects are taking matters into their own hands. Disease‐specific websites and advocacy groups have humbled leading hospitals in the United States and around the world by organizing social media campaigns to force doctors to disclose more than they otherwise would have and to change hospital policies and clinical practices.34, 35 Furthermore, citizens are developing their own independent research enterprises.36, 37

There are many risky innovations in modern medicine. They should be studied; studying them will make them safer, and they can be studied in ways that do not increase risk. If they are not studied, then we will never know which are the safest and the most effective. We need to improve the rigor with which patients are informed about the risks of unstudied and nonvalidated therapies that are part of routine clinical practice. This could be done by, for example, mandating that patients who are eligible for comparative effectiveness studies but who choose not to enroll be given the same information as those who choose to enroll, with an accurate description of the risks of both choices.

1.7. What's love got to do with it?

The debate between these 2 views of the clinical investigator has implications for the way we regulate research. If the clinical investigator is viewed as reflective, trustworthy, and capable of maintaining his primary commitment to the patient's well‐being, even when the patient is enrolled in a research study, then assessment of the risks of research will focus only on the risks of the study procedures themselves. If, on the other hand, the investigator is seen as unable to disentangle his conflicting loyalties and as inevitably prioritizing the goals of research over the goals of patient care, then careful and constant oversight will be necessary.

Our current system of research regulation reflects the latter view. It treats researchers as incapable of making even simple moral judgments about study design, patient enrollment, informed consent, or even data analysis.

Ironically, then, researchers are overseen by committees made up primarily of other researchers. (IRBs must also include at least one nonscientific person, but, since IRBs make decisions by majority vote, the balance of power will always lie with the researchers.) Thus, people who are not thought capable of supervising themselves when they do research become IRB members who are thought to be capable of learning and applying rules for the responsible conduct of research.

The irony of this sort of self‐governance by people who are not thought capable of governing themselves illustrates the absurdity of the sharp distinction between clinical research and medical practice. The distinction blurs, rather than sharpens, the classification of people whose interests are at risk because of innovative medical practices. It reflects a view that research subjects need protection in ways that patients do not and that doctors take their fiduciary responsibilities seriously in a way that researchers cannot.

This curious view of the dichotomy between research and practice leads to a system in which patients are exposed to many types of risks from innovative therapy and from practice variation. Researchers who try to study the risks of such clinical care are engaged in activities that often cause no reasonably foreseeable risk to patients. Nevertheless, they are hampered by burdensome regulations. Many people have proposed changes to the regulations to carve out exceptions for areas of low‐risk research. The fundamental change needed, however, is a theoretical and conceptual one. Unless we acknowledge that research can be done in a way that does not harm (and might help) current patients and will improve the care of future patients, and that researchers might be able to balance their moral obligations to those patients with their obligations to the scientific work that they are doing, merely fiddling with the regulations and definitions, in ways designed to carve out the most uncontroversial low‐risk studies, will not address the central tension in clinical research ethics today.

FUNDING SOURCE

No external funding was received for this manuscript.

FINANCIAL DISCLOSURE

The author has no financial relationships relevant to this article to disclose.

CONFLICT OF INTEREST

The author has no potential conflicts of interest to disclose.

Lantos JD. The regulation of clinical research: What's love got to do with it? Learn Health Syst. 2018;2:e10048. https://doi.org/10.1002/lrh2.10048
