Abstract
Clinicians have good moral and professional reasons to contribute to pragmatic clinical trials (PCTs). We argue that clinicians have a defeasible duty to participate in such research when it takes place in usual care settings and does not involve substantial deviation from their ordinary care practices. However, a variety of countervailing reasons may excuse clinicians from this duty in particular cases. Because there is a moral default in favor of participating, clinicians who wish to opt out of this research must justify their refusal. Reasons to refuse include that the trial is badly designed in some way, that the trial activities will violate the clinician’s conscience, or that the trial will impose excessive burdens on the clinician.
Introduction
Substantial scholarship emphasizes the need for pragmatic clinical trials (PCTs) to provide high-quality evidence to guide real-world treatment decisions, and the importance of embedding that research into contexts that are representative of the patients and clinical settings for whom or within which those decisions are relevant. PCTs can differ from traditional explanatory trials in various ways (Sugarman and Califf 2014). Often pragmatic research involves system-wide commitments. The goal is to see whether and how interventions actually work in ordinary clinical care, rather than within the stylized settings common to explanatory trials (Loudon et al. 2015). In response to this need, funders have launched initiatives aimed at building the capacity to implement large-scale embedded PCTs, including the National Institutes of Health (NIH) Pragmatic Trials Collaboratory, the National Institute on Aging IMbedded Pragmatic Alzheimer’s Disease (AD) and AD-Related Dementias Clinical Trials (IMPACT) Collaboratory, and related efforts within the Patient-Centered Outcomes Research Institute (PCORI). These efforts share features with the aspirational model of a “learning health system” (Committee on the Learning Health Care System in America and Institute of Medicine 2013; Faden et al. 2013; Kass et al. 2013), in which research is embedded into ongoing clinical care and the knowledge from that research is in turn incorporated into care processes to facilitate a cycle of continuous improvement.
Embedding research into “ordinary” clinical care requires, by definition, at least some level of involvement by clinicians who provide care within those settings. The extent and implications of this involvement will vary according to the specifics of a given PCT. While some studies will merely examine clinicians’ ordinary practices, others may require them to substantially change those practices for the sake of a study. Nevertheless, even if a particular study does not require clinicians to substantially change their activities, it may still affect clinicians’ interests in diverse—and potentially harmful—ways. Participating in research, even about ongoing clinical care, might be understood to conflict with an imperative to do what is best for one’s patients (Grady and Wendler 2013). Clinicians might also fear that the research could illegitimately reflect on their professional competence, training, judgment, or something else they regard as private or proprietary—e.g., an analysis of pain-management treatments might show that a doctor is much quicker to prescribe strong medications than comparably situated colleagues, which, absent the proper context, may invite suspicion of the clinician’s motives or abilities. Moreover, participation may affect clinician-patient relationships, as patients may not welcome the idea that they could be involved in research simply by receiving health care (Largent et al. 2011).
How should we understand clinicians’ obligations to contribute to pragmatic research? We argue that clinicians have a defeasible duty to participate in pragmatic research that takes place in usual care settings and does not involve substantial deviation from their ordinary care practices. This obligation is grounded upon clinicians’ duties to provide skillful care to patients, to further develop and extend clinical practice, and to maintain a useful body of medical knowledge on behalf of the society. Correspondingly, clinicians ought to cooperate with efforts to systematically evaluate the relative benefits, burdens, and risks of biomedical and behavioral health interventions within ordinary care settings to support generalizable knowledge about effective care (Califf and Sugarman 2015). They may also have reasons to be direct or indirect subjects of research (Smalley et al. 2015). However, we are not arguing for a duty to be an investigator—that is, to be primarily responsible for the design, conduct, analysis, and reporting of pragmatic research.
Our argument diverges from the traditional view, deriving from explanatory research, in which research is clearly distinguished from clinicians’ “ordinary” work. Within the traditional view, research often tests novel therapies (with potentially unknown risks), follows a strict protocol, and collects data beyond what is essential for the patients’ health. Protections for participants’ interests are made explicit, and usually their consent is required. In contrast, pragmatic research is designed to involve minimal changes to clinical delivery processes. It characteristically compares two or more real-world treatments, often uses flexible protocols, and aims to integrate data collection into clinical care processes (Loudon et al. 2015; Weinfurt 2017). In many cases, patients’ experience might not differ from what they regard as normal health care, and clinicians may not even be aware of a study until after the fact. A particular trial can be more or less pragmatic, but the focus on assessing the delivery of an intervention in a usual care setting is a distinctive feature of a PCT. Arguing that clinicians have reasons to contribute to pragmatic clinical research does not necessarily mean that clinicians must do a different kind of job, such as to be the primary investigator, but instead that they have reasons to join the cooperative effort of extending medical knowledge.
In what follows, we begin by making the case for a duty to participate in pragmatic research. We then explore a variety of possible excuses from this duty. We argue that clinicians have a general duty to participate in pragmatic research. While clinicians may have legitimate reasons to refuse to participate in specific studies, the burden rests on clinicians to justify their refusal.
Why clinicians should participate
The idea that clinicians have some kind of duty to participate in (well-designed, ethically sound) PCTs can find support from a variety of arguments. This duty could be grounded in considerations that range from general duties as members of the moral community, which would be shared by all the society’s members, to specific professional duties. For simplicity, we treat just two kinds of argument for the claim that clinicians have a duty to participate in pragmatic research: one argument from a duty of expert care for patients, and another argument from fairness to fellow clinicians.
First, clinicians’ duties to their patients give some support to a duty to participate in PCTs. Clinicians have special duties of care and beneficence to their patients. This duty is not just to current patients, but also to those in the future. It requires the clinician not merely to provide aid, but to do it skillfully and knowledgeably. Clinicians need to know how to best realize the aims of health in general so that they may apply this knowledge to the particular medical needs of their individual patients. Insofar as their patients trust them to provide competent care, they have a duty to do so. Where it is possible to know that some kind of care is bad or less effective than an alternative, choosing to remain ignorant of a better option violates this trust. These kinds of considerations underwrite the common requirement that clinicians engage in continuing medical education.
However, this argument does not go far enough. It shows that clinicians’ fiduciary duties to their patients require them to know about good care, but it does not immediately require them to contribute to research about that care. Insofar as systematic research is the best source of high-quality evidence about care, clinicians have a duty to be informed by it. Yet it may be that we can justify only an imperfect duty to contribute to research. Many of our moral duties allow for some flexibility among options for satisfying the moral standard. One ought to try to give aid to sick people, for example, but not everyone needs to do so in the same way. (Indeed, it would be counterproductive if we all tried to.) These “imperfect” duties are distinctive in that some contribution is required, but not a specific action (Hill 1971; Kant 2012; Campbell et al. 2017).
A second argument starts differently. As Alex London (2021) has recently argued, clinical research is one of the many social institutions of public life. The purpose of our social institutions is to allow some members of society to specialize in providing or securing some basic interests we have as a society. The securing of these basic interests is a public, common good. For example, we create police forces to maintain order on behalf of the society in general. The good that a police force aims at—namely, law and order—is a public good, but it is administered by a group of professionals who have expert training and skills to do this well. A failure to do it well, whether by incompetence or abuse, is a violation of the duty they have to the society.
Similarly, the health care system is a social institution that seeks to preserve and promote a public good: the health of the members of the society. Within this social institution, medical professionals perform a specialized task on behalf of broader society, and they correspondingly are granted certain powers to provide useful care. Biomedical research is a related (and overlapping) social institution, and its goal is generalizable medical knowledge. This knowledge is a public good in that it enables medical professionals to pursue the aims of health well. A failure to cultivate medical knowledge constitutes a failure to serve the moral ends for which the social institution exists, insofar as the basic health of all members of a society is a matter of moral concern for it.
This framework provides moral force to a social imperative to conduct research about health care (Harris 2005; Tinetti and Basch 2013; Gelinas et al. 2016; Campbell et al. 2017; London 2021). This imperative requires different things of different people based on their roles within the society, ranging from securing funding or political support to fostering specialized skills in conducting clinical trials to advance knowledge about effective care processes. Care performed without adequate evidence of its effectiveness harms patients insofar as it makes them worse off, whether by depriving them of a chance to have better health, or by exposing them to unnecessary risks. The social institution of biomedical research serves the society’s collective and individual interests by trying to prevent these mistakes or inefficiencies.
PCTs are therefore essential to promoting effective clinical care, for they can provide evidence in a reasonably efficient way about how clinicians and/or health systems can realize the aims of health. As the name implies, PCTs extend knowledge about practice, rather than about the underlying biological mechanisms that are the domain of some traditional biomedical research. But their epistemic rationale is similar to that of traditional randomized controlled trials (RCTs), in that both seek to systematically develop generalizable knowledge. Unlike explanatory trials, which seek evidence about the efficacy of an intervention, PCTs seek evidence about the quality of care, where what makes the care good may vary across several dimensions. One intervention might be better than another because it has better health outcomes. Another might be better because it imposes fewer burdens on patients than alternatives with the same outcomes. Still another might be better because it is significantly less expensive than alternatives with the same outcomes. Stakeholders (patients, clinicians, institutions, payers) need information about these matters to make decisions about practice. Acquiring this information requires careful study, and that is what pragmatic research is for.
Because PCTs are characteristically about “real-world” activities, their usefulness depends on them being implemented in settings where their results will ultimately be applied. Consequently, in order to get the “real-world” evidence about the effectiveness of an intervention, “ordinary” (i.e., non-research-focused) clinicians need to deliver it. Therefore, if “ordinary” clinicians systematically refuse to participate in pragmatic research, then it will not accomplish its goals (Dember et al. 2019).
Pragmatic research partially satisfies the social purpose of biomedical research, in that it helps clinicians provide safe, effective, and efficient care. “Ordinary” clinicians are essential to this kind of research, for it is about their clinical activities. Therefore, “ordinary” clinicians have a moral reason to be involved in pragmatic research.
This second argument, as with the first, goes only so far. It implies that “ordinary” clinicians have an imperfect duty to contribute to pragmatic research. It does not necessarily mean that they must therefore participate in PCTs. Yet imperfect duties can be shaped by particular roles and circumstances into more perfect duties. The point of positing an imperfect duty is not to give complete freedom in how that duty is exercised, but to require some further explanation for any particular action that satisfies it. Several possible actions may fulfill an imperfect duty, but when we consider the duty in a particular context, we can often identify a more specific action that is required. For example, a duty of beneficence is a characteristic moral duty had by all (Ross 1930; Herman 2021). However, a duty of beneficence in a health care setting takes on particular forms, and many rules and procedures of clinical care are given ethical justification by it (Beauchamp and Childress 1994). A clinician’s general duty of beneficence gets specified, often in considerable detail, by the particular clinical setting. The right clinical action might be done out of a beneficent motive, but it must conform to a variety of further rules of clinical practice and institutional policy, and these are not optional from the clinician’s point of view.
Similarly, a clinician who has an imperfect duty to contribute to research may have the nature of that contribution shaped by a particular situation. Pragmatic research is an excellent candidate for the kind of research in which the clinician should participate. One reason to think that an individual’s contribution should be participation is that it is fair. Some “ordinary” clinicians have to participate in pragmatic research for it to occur at all, but everyone in the society benefits from (informative) pragmatic research, for it realizes the public good of medical knowledge, which in turn contributes to providing better medical care. Someone who, without good reason, consistently relies on the contributions of others, without contributing himself, acts unfairly, and thus unjustly (Rawls 1999, 96). It could be that a clinician happens to work in a health system that does not conduct pragmatic research. Insofar as such clinicians lack the opportunity, they have not failed a duty to participate, even if they should still learn from research conducted elsewhere. But the case at hand is when a clinician does work in a health system that is conducting PCTs. In such a context, fairness to the others who would benefit from the research—whether other clinicians in that health system, clinicians who work in other health systems, current patients, future patients, and other stakeholders—requires more of the clinician. The fairness in question here is not merely a matter of relieving the relative burdens on other members of the profession, but rather a matter of “doing one’s part” in a larger scheme of social cooperation. Recent scholarship has argued that even patients have some duty to participate (Schaefer et al. 2009). If a patient has some moral duty to participate in research, then surely a clinician does as well.
Therefore, from a particular clinician’s point of view, the ways of satisfying an imperfect duty to contribute are not entirely optional or voluntary. When a PCT seeks to study an intervention as it would generally function in the course of clinicians’ ordinary care, they have a good reason to participate in the study. Thus, clinicians generally need a reason not to participate in pragmatic trials, rather than an additional special reason in favor of participating.
For some clinicians these reasons may be unnecessary, for involvement in clinical research, including PCTs, is simply a condition of their employment. Learning healthcare systems will characteristically have many trials running at any given time, so their employees will almost necessarily be involved with them. The moral and social reasons to participate may have less motivating force than conditions of employment. However, our argument implies that more “traditional” (i.e., not “learning”) healthcare systems may also reasonably expect their clinicians to participate in PCTs. If the clinicians readily agree to participate, then so much the better. Our argument also implies that clinicians have good reasons to agree to participate upon being invited or asked to be involved, or even studied, as part of a PCT.
Exceptions to participating
Various exceptions to the prima facie duty to participate may apply. However, the burden of justification in a dispute about a particular case shifts to the clinician who wants to refuse. We consider four kinds of plausible objections or excuses: 1) an excuse that implies that the trial will make patients worse off; 2) an excuse that doubts the social value of the trial, even if patients will not be worse off; 3) personal moral objections to participating in the research; and 4) non-moral personal objections. At root, each excuse is a reason that an individual might give, but it can be helpful to distinguish between reasons that generalize and those that do not, and between reasons that are moral and those that are not (see the table below). The kinds of excuses described below are intended to be merely representative rather than exhaustive.
Representative types of excuses
|  | Moral | Non-moral |
|---|---|---|
| General | The trial will leave patients worse off. | The trial is unlikely to answer the question it intends to, even if patients end up no worse off. |
| Personal | A clinician’s conscience will not permit participation. | Given the clinician’s current professional skills or capacities, the trial will be an excessive burden. |
Trial design and quality of care
Ethical clinical research requires a well-designed trial intended to answer an important question (Emanuel et al. 2000). If the study is not designed well, then clinicians would have a reason to refuse to participate in it. If the clinicians at an institution, in their informed professional judgment, believe that a proposed investigational intervention is inappropriate care, or if they reasonably believe that participating will jeopardize their ability to provide appropriate care to their patients, then they have a reason to refuse to participate in the research. Similarly, if the clinicians reasonably believe that the study is not going to supply useful evidence on an important question regarding clinical care, either because the study’s methods are poorly designed to answer the question or because the research question itself is not relevant to real-world treatment decisions, then they have a reason to refuse. Such assessments represent general excuses from clinician participation in research. These assessments are, at root, matters of professional judgment. They would imply that the clinicians believe the study should not occur, not merely that a particular clinician should be excused from it.
However, clinicians who are not clinical investigators are often not experts on trial design and research practice. Consequently, the decision about the quality of a research proposal is not typically within the sole purview of clinicians. Rather, administrators and their delegates—most notably research ethics committees, but also sponsors and grant administrators—should have the expertise to evaluate research proposals and/or to establish relevant policies for the governance and conduct of that research. The investigators’ expertise, together with a non-conflicted sponsor’s willingness to support the study and the other ethical and regulatory constraints on it, supplies good reason to believe that a study (especially at the pre-enrollment phase) may be sound. Researchers may, of course, be mistaken, and other features of the proposed study may undermine its value. But for this reason to work as an objection to participating, the clinicians need to be able to point to the relevant flaws.
While there may be both moral and non-moral general excuses, disentangling the two types may be difficult, insofar as bad research is prone to be unethical by adding risks and burdens with poor prospects of social or personal benefit. We might accept that research that does not diminish patient well-being (and might even increase it), yet does not answer a useful question, is still less bad than poorly designed research that leaves patients worse off. However, according to standard analyses of trial design, both kinds of studies would be ethically dubious. The process for assessing the reasonableness of general excuses for clinicians not to participate will be similar for both moral and non-moral reasons, so we will consider them together in this section.
For example, suppose a study proposes an intervention that reduces patients’ engagement with the clinic, favoring instead some at-home management strategies. Clinicians within an institution may believe that the proposed intervention is unacceptably risky or that it does not give them enough quality contact with the patients to provide the care they deem adequate. Moreover, the clinicians may believe that clinic visits will be the only viable way to adequately assess the condition of their patients, thereby partially defeating the goals of the study. The clinicians would then have a reason—though not necessarily a decisive reason—to refuse to participate. To be clear, the clinicians may be wrong about the study; the home-management strategy may be just as good, or even better. The fact that a study on the question is being proposed is already compelling (but not quite decisive) evidence that clinical equipoise exists. The mere fact that other clinicians disagree is at least some reason to discount one’s own beliefs about the quality of an intervention (Freedman 1987; Fried 2016; Campbell et al. 2017; London 2021).
Specifically, an individual clinician or group of clinicians would need to be able to demonstrate not merely that a proposed intervention is likely to be no better than the status quo, but that it is plausibly worse for patients. Alternatively, the clinician(s) would have to show that the research proposal will not answer the question it intends to resolve. But again, these claims would represent objections to a specific study, and not to pragmatic research in general.
The scope of this form of general objection is further restricted by our focus on pragmatic research. Unlike explanatory trials, which may involve asking clinicians to make (potentially extensive) changes to clinical workflows or usual care practices, pragmatic research is often deliberately designed not to meaningfully disrupt clinical workflows, so as to best inform how the intervention under study might perform in “real-world” situations. To be sure, some PCTs may require some clinicians to alter their current care practices, such as a cluster-randomized trial comparing several commonplace interventions that standardizes each clinical site to one intervention, which may differ from the site’s prior care practices. From the clinicians’ point of view, however, such a trial may not differ meaningfully from a shift in treatment prompted by a change in clinic policy or in the supply of hospital resources. In such cases, clinicians would therefore not have a good reason to object based on concerns for the safety and well-being of their patients. Though they may believe that the study will not supply good evidence to resolve the question, they may lack the expertise to substantiate such a claim. Moreover, as PCTs characteristically compare two or more real-world treatments, they generally track closely with usual care (Loudon et al. 2015). Therefore, if clinicians object to the study procedures as such, they may be implicitly objecting to procedures common in usual care. As trial designs hew closer to usual care, some of the traditional rules for ethical trial conduct lose some force because the research/care distinction gets harder to detect (Morain et al. 2019). Risks that are present in usual care, for example, do not suddenly become especially noteworthy simply because the care is now the object of research.
None of this is to say that clinician doubts about the quality of a study are necessarily “bad” objections, for it is certainly possible to imagine cases in which design or conduct problems would give clinicians a valid reason to object. Indeed, the justification for soliciting stakeholder input during the design phase of a study partly depends on the capacity of a variety of stakeholders, including clinicians, to improve trial design. For example, clinicians might know non-medical facts about their patients or clinical settings that could be relevant to the design or implementation of a specific PCT and that might not be evident to an outside planner, ranging from details of the local culture or of the patients who would be enrolled, to such features as the (in)feasibility of proposed data collection approaches given current health system workflows. Insisting that clinicians participate over their professional objections risks discounting their professional judgment about appropriate care for their patients.
Conscientious objection
Refusals to participate might also arise from individual objections to the research. These objections can take several forms. The most morally fraught is conscientious objection, which, as a matter of ongoing debate, deserves special attention. To some, conscientious objection is incompatible with professional duties (Savulescu 2006; Giubilini 2014; Hughes 2018; Savulescu and Schuklenk 2018). When the medical profession has judged that an intervention is a valid component of appropriate care, then a clinician who refuses to provide that care is failing a professional duty. Qualms of conscience—the argument goes—are not good enough to excuse this failure. Being a member of the profession permits exercise of professional judgment, but conscientious objections illegitimately impose private constraints on medical care. To be clear, this argument depends on whether assertions of conscience are moral claims, rather than professional judgments couched in moralized terms. As noted above, clinicians typically think they are morally obliged to provide high-quality care, so knowingly providing bad care is morally bad. Yet there is a difference between making a professional judgment about the quality of care and having a moral objection to the care itself. For example, physicians who refuse to perform abortions typically do not argue that abortions are contraindicated, but that they are morally wrong.
One proposed solution to conflicts between professional norms and clinician conscience simply advises conscientious clinicians to evade the dilemma by refusing positions that would create the conflict or by leaving the profession altogether. Yet this solution is inadequate, for it implies that the profession’s norms are not subject to moral critique by its members. Moreover, professional norms do not arise by majority vote among members of the profession. Rather, they develop in complex ways over time, and often in ways that allow members to update their beliefs and practices slowly. New information does not get added to professional standards instantly, and the profession’s coherence depends on leaving some room for its members to dissent from the consensus at times.1
Furthermore, professional duties are not co-extensive with the moral law, and professional norms may even conflict with moral norms. A quick survey of historical attitudes and practices among physicians provides abundant evidence for this claim. For example, standard (or at least acceptable) care has included practices such as the forced sterilization of criminals, the forced sterilization of “feebleminded” people (Holmes 1927), and the widespread prescribing of highly addictive and destructive drugs. These treatments were believed to be good (or at least acceptable) care at the time. Today each one would be regarded not just as bad practice, but as morally wrong.
One does not cease to be a moral individual when one becomes a medical professional. Insofar as conscience is an individual’s means of apprehending the moral truth, asking someone to violate their conscience, whether in general or as a condition of continuing in a profession, requires them to act in a way they believe to be wrong. Even if the individual’s conscience is mistaken, the requirement to act against conscience implies that they should choose to do what they believe to be wrong—i.e. to will to act wrongly. An institution that demands this must clear a very high bar of moral justification.
When clinicians refuse on grounds of conscience, relevant parties may need to seek negotiated accommodations to resolve conflicts. It is not the case that one side must completely “win”; often there are ways to adjust policies to minimize the moral burdens without giving any one participant veto power. Consider, for example, a clinician who has been trained to follow a rigid model of informed consent. A PCT he will be involved in plans to use a form of modified consent. The IRB has determined that the research clearly satisfies the criteria for a waiver or alteration of consent. The clinician vehemently opposes the plan, thinking that the protocol is unethical. The researchers can supply the clinician with abundant scholarship explaining how modified consent in this case does not violate any plausible ethical standards. Nevertheless, it might still be appropriate for the researchers to give the clinician some time to absorb and assimilate this new information. Finding a way to accommodate his scruples in the short term may be the best way to respect his judgment. Moreover, it may be that he knows more about his patients and institution than the researchers do, and his complaints may turn out to have merit, even if they are not decisive. Skeptical critics can be valuable contributors.
The point of this section is not to argue that conscientious objections are a decisive trump card against professional or institutional requirements. Rather, it aims to show that a conscientious objection can be a legitimate reason to refuse to participate in certain institutional practices. We protect conscientious exemptions because doing so preserves the moral standing of individuals. Refusing to do so risks treating individuals as something less than fully competent moral agents. Moreover, refusing to hear conscientious objections undermines the possibility of internal critique and revision of institutional norms. And insofar as health system administrators and researchers exercise authority over clinicians, they have a duty not to produce unnecessary moral distress, including by making demands that they know will prompt conscientious objections.
Non-moral personal objections
Another kind of person-specific objection may arise. It could be that the research protocol requires a clinician to perform some task that she does not think she is capable of performing well. For example, a comparison of surgery techniques may require a surgeon to perform a procedure using equipment that she cannot manipulate properly. She may object to participating not because she is opposed to the new technique, but because she does not think that she can safely or successfully perform it. This is not a moral objection to the technique, and it does not depend on ideological beliefs. Yet it is personal in a similar way, for it asserts a reason to refuse that does not imply a generalizable objection to the research or to the procedure in question. Note that the surgeon may be mistaken about whether she can perform the surgery, and it may be possible to get alternative tools. The trial might also be redesigned to remove the barrier. Her objection could be conditional. Even so, her inability seems like a legitimate basis from which to seek an exemption.
A different kind of personal objection might arise if a PCT requires the clinician to conduct activities that, for some reason, are so time-consuming for her, as compared to otherwise similarly situated clinicians, that they undermine the quality of care she can provide to her patients (Grady and Wendler 2013). She may have a personal objection—one that does not generalize against the research as such. It may be that the investigational intervention would result in worse clinical practice by her, even if it would not have this effect with other clinicians.
A somewhat less compelling reason for refusing to participate is the added burdens that the research may place on individual clinicians, even if they do not compromise care itself. Pragmatic interventions might take more time. Or they might require clinicians to ask questions they fear may cause some of their patients to dislike or distrust them, even if doing so will not directly compromise clinical care. If these kinds of practical matters cannot be addressed in the trial design, then they could be a reason to refuse to participate. Health system administrators and research sponsors would do well to realize that the additional burdens of research can be a legitimate reason to opt out in some cases. There is evidence from research on physicians’ views about comparative-effectiveness research that clinicians are concerned about the added burdens that research requires, and especially about the amount of time the additional study activities might take (Topazian et al. 2016). It is not illegitimate for clinicians to point out their own limitations and extra-professional commitments. For example, the COVID-19 pandemic has forced renewed attention to the problems of clinician burnout and moral distress, as institutions have demanded much of their employees, but the contours of the problem have been studied for a long time (Čartolovni et al. 2021). Objections based on the clinician’s own well-being are not arbitrary or gratuitous.
A second kind of burden, one that adds normative constraints to the ordinary practical human limitations, is that research may distort fiduciary clinician/patient relationships in unhelpful ways. By design, pragmatic research is supposed to minimize switching between “research” and “clinic” modes, and while clinicians would not acquire new obligations of care by participating in the research (Belsky and Richardson 2004; Resnik 2009; Richardson 2012), the research may require that they weigh additional factors when determining what to do with a particular patient. Clinicians may judge that the trial activities are damaging the rapport they have with their patients. For example, a clinician might reasonably believe that some of his patients are afraid of research and that if they find out that they are being studied, they will not trust him as much. It is not unreasonable for a clinician to refuse this burden. Some trial designs may be possible to conduct without affecting ordinary clinical activities at all. Others may be able to mitigate risks to the clinical relationship with relatively little effort. However, if we judge that these kinds of effects on clinical relationships need to be mitigated when they arise, then we are already conceding that, at least at the margins, concerns about risks to the clinical relationship are a legitimate kind of objection to clinicians’ participation in the research. There could be some balancing between these concerns and the needs of the study, but when we judge that there are competing interests, it is at least possible that the clinical needs could override the research needs, thereby giving clinicians a valid reason to opt out.
A different form of this relational burden can arise as well. Sometimes the trial itself poses no special burdens to the clinicians’ work, but the long-term effects of the research activities nevertheless do. For example, PCTs sometimes reveal medical information that is unrelated to the research question (Morain et al. 2020a). A study may reveal that prior clinical care received by a patient fell short of the normative standard of care. In such a case, the study has increased the burden of care for a patient. The new information allows the clinician to care for the patient better, but the relational costs of having to explain to the patient how the clinician knows about the issue may be quite high (Richardson 2008; Morain et al. 2020b). The researcher in a pragmatic trial may have no relationship with the patients he is studying, and the clinician ends up having to bear the relational burdens of any results or effects of the researcher’s findings. This possibility is particularly salient when researchers follow the emerging (and broadly defensible) norm to return individual research results (Clayton 2008). A clinician may believe that it would be unfair or unprofessional to pass these burdens on to future clinicians.
The “burdens” objection assumes that the study itself, or the health care system in which it is being conducted, cannot be reformed to ease some of the clinicians’ extra responsibilities. Overworked clinicians may have legitimate grounds to refuse to participate in research, but the health system (and the society in general) can seek to relieve these burdens by increasing the number of clinicians or otherwise excusing them from some of the additional constraints on their time. These changes may be difficult to implement quickly, and in some cases the burdens arise from society-wide health care policy. Even so, it may be that systematic reforms of other burdensome aspects of health care could free clinicians to participate in research, and it may be worth thinking about the large-scale priorities here. Furthermore, the research design itself can minimize the disruption to ordinary clinical activities. This kind of sensitive design underscores the value of engaging clinicians (and other stakeholders) early in the research process. Given these possibilities, the burdens exception is contingent in a way that distinguishes it from the moral and professional considerations described above. Moreover, the solution is reasonably straightforward, even if not easy, for it does not depend on controversial claims about the nature of clinical practice or the appropriate range of clinicians’ moral beliefs.
It is possible that the “burdens” objection could be abused both by clinicians and those who would recruit them for research. Clinicians might just claim without justification that the research is too time-consuming or difficult and so an illegitimate claim on their effort. Nothing about our argument suggests obvious ways to handle bad-faith objections, or bad-faith requirements, for that matter.
Implausible objections
We will not treat in detail bad reasons to refuse to participate in research. For example, clinicians cannot morally refuse on the grounds that participating might reveal that they are incompetent or unnecessary (Lynn et al. 2007). Moreover, mere worries about privacy or confidentiality are not sufficient (Piasecki and Dranseika 2021), unless perhaps there is clear evidence that the research project has not satisfied standard protections for these concerns.
Furthermore, the fact that clinicians have not explicitly authorized their research activities (such as through an informed consent process) does not excuse them from the duty to participate (Gelinas et al. 2016). Though consent may still be essential to ethical trial design and conduct (Campbell et al. 2017), the requirement to obtain consent is distinct from the question of the duty to participate. A duty to participate may imply that clinicians (and patients) ought to consent, given the opportunity. Some pragmatic research can proceed under a waiver or modification of consent (Morain and Largent 2021), but even if it does not, clinicians would still have good reasons to give their consent.
Clinicians might also discount the value of participating in pragmatic research on their own activities because they believe they or their current patients will not benefit from the research themselves. Traditional research involves additional burdens on clinicians’ work, but it also may offer the promise of distinctive benefits. For instance, clinicians may volunteer themselves for their current patients’ well-being (e.g., as an experimental drug might offer hope for a cure), for the well-being of future patients, or for personal benefit (e.g., prestige, income, and professional advancement). Pragmatic research does not tend to offer these benefits, particularly to the clinicians who are not the investigators, so a clinician may believe that the balance of burdens and benefits favors opting out. Yet just as the benefits of pragmatic research to clinicians and their current patients may be less salient, so too may be the burdens. These burdens are characteristically less disruptive insofar as the research is studying care in ordinary clinical settings.
Conclusion
We have argued that there are several reasons to believe that clinicians have a general duty to participate in research, and we have examined various ways in which this general duty can be overridden by features of specific situations. While we acknowledge clinicians may have legitimate reasons to refuse to participate, we argue that they must be responsive to the reasons in favor of participating. Moreover, nothing about our argument justifies compelling clinicians to participate, even as a condition of continued employment. It is possible—indeed, common—for some action to be morally required while also being the kind of thing one may not be legitimately coerced to do.
Importantly, evidence suggests that most clinicians are happy to endorse research about usual care (Topazian et al. 2016). Consequently, we might expect that objections to pragmatic research will generally pertain to the details of the particular design and implementation of the study in question, rather than representing a more fundamental rejection of the research enterprise in general. Looking ahead, empirical research to explore clinicians’ complaints about, objections to, and experiences within specific studies could help inform strategies by which these objections can be mitigated, while also suggesting additional types of objections that might merit greater deference. Relatedly, pragmatic research often requires the involvement not only of clinicians, but also of other medical staff, including administrative personnel and information technology staff. While some research-related burdens on non-clinical staff might parallel those of clinicians, considerations of duties may differ for these groups. Further scholarship on non-clinician duties would also be valuable.
In fact, designing studies to minimize burdens and to anticipate other kinds of objections—including conscientious objections—seems like a good strategy in general. Research has costs, and not just monetary costs. Often those costs are more than worth it. But they should be acknowledged and mitigated wherever possible. Doing the right thing can be hard and inconvenient, but that is no justification for making it harder than it needs to be.
Footnotes
1. For example, Henry Beecher’s famous article on research ethics sparked a revolution in the field (Beecher 1966). At least at first, the research he was criticizing was judged permissible, and his dissent highlighted the need for more ethical oversight. Frances Kelsey was the FDA medical reviewer who kept thalidomide mostly out of the US (FDA), even as it was commonly prescribed in other wealthy nations. Indeed, norms of research ethics have developed as responses to bad practices that were not exactly secrets. See London (2021), ch. 2. The appropriateness of robot-assisted surgeries, or of machine-learning image diagnostics, is still being worked out. Less ethically fraught are cases where new research suggests a valuable change in standard care, but this information is not disseminated or confirmed immediately. Knowledge production and dissemination take time, and it is often not unreasonable for clinicians to take a “wait and see” approach to revisionary practice.
References
- Beauchamp TL, and Childress JF. 1994. Principles of Biomedical Ethics. 4th ed. New York: Oxford University Press.
- Beecher HK. 1966. Ethics and clinical research. New England Journal of Medicine 274(24): 1354–1360. doi: 10.1056/NEJM196606162742405.
- Belsky L, and Richardson HS. 2004. Medical researchers’ ancillary clinical care responsibilities. BMJ 328(7454): 1494–1496.
- Califf RM, and Sugarman J. 2015. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clinical Trials 12(5): 436–441. doi: 10.1177/1740774515598334.
- Campbell MK, Weijer C, Goldstein CE, and Edwards SJL. 2017. Do doctors have a duty to take part in pragmatic randomised trials? BMJ 357: j2817. doi: 10.1136/bmj.j2817.
- Čartolovni A, Stolt M, Scott PA, and Suhonen R. 2021. Moral injury in healthcare professionals: A scoping review and discussion. Nursing Ethics 28(5): 590–602. doi: 10.1177/0969733020966776.
- Clayton EW. 2008. Incidental findings in genetics research using archived DNA. The Journal of Law, Medicine & Ethics 36(2): 286–212. doi: 10.1111/j.1748-720X.2008.00271.x.
- Committee on the Learning Health Care System in America, and Institute of Medicine. 2013. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Edited by Smith M, Saunders R, Stuckhardt L, and McGinnis JM. Washington, DC: National Academies Press.
- Dember LM, Lacson E, Brunelli SM, Hsu JY, Cheung AK, Daugirdas JT, Greene T, Kovesdy CP, Miskulin DC, Thadhani RI, Winkelmayer WC, Ellenberg SS, Cifelli D, Madigan R, Young A, Angeletti M, Wingard RL, Kahn C, Nissenson AR, Maddux FW, Abbott KC, and Landis JR. 2019. The TiME trial: A fully embedded, cluster-randomized, pragmatic trial of hemodialysis session duration. Journal of the American Society of Nephrology 30(5): 890–903. doi: 10.1681/ASN.2018090945.
- Emanuel EJ, Wendler D, and Grady C. 2000. What makes clinical research ethical? JAMA 283(20): 2701–2711. doi: 10.1001/jama.283.20.2701.
- Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, and Beauchamp TL. 2013. An ethics framework for a learning health care system: A departure from traditional research ethics and clinical ethics. Hastings Center Report 43(s1): S16–S27. doi: 10.1002/hast.134.
- FDA. Frances Oldham Kelsey: Medical reviewer famous for averting a public health tragedy. U.S. Food and Drug Administration. https://www.fda.gov/about-fda/fda-history-exhibits/frances-oldham-kelsey-medical-reviewer-famous-averting-public-health-tragedy.
- Freedman B. 1987. Equipoise and the ethics of clinical research. New England Journal of Medicine 317(3): 141–145. doi: 10.1056/NEJM198707163170304.
- Fried C. 2016. Medical Experimentation: Personal Integrity and Social Policy. New edition. Oxford University Press.
- Gelinas L, Wertheimer A, and Miller FG. 2016. When and why is research without consent permissible? Hastings Center Report 46(2): 35–43. doi: 10.1002/hast.548.
- Giubilini A. 2014. The paradox of conscientious objection and the anemic concept of ‘conscience’: Downplaying the role of moral integrity in health care. Kennedy Institute of Ethics Journal 24(2): 159–185. doi: 10.1353/ken.2014.0011.
- Grady C, and Wendler D. 2013. Making the transition to a learning health care system. Hastings Center Report 43(s1): S32–S33. doi: 10.1002/hast.137.
- Harris J. 2005. Scientific research is a moral duty. Journal of Medical Ethics 31(4): 242–248. doi: 10.1136/jme.2005.011973.
- Herman B. 2021. The Moral Habitat. Oxford: Oxford University Press. doi: 10.1093/oso/9780192896353.001.0001.
- Hill TEJ. 1971. Kant on imperfect duty and supererogation. Kant-Studien 62(1–4): 55–76. doi: 10.1515/kant.1971.62.1-4.55.
- Holmes OW. 1927. Buck v. Bell, 274 U.S. 200.
- Hughes JA. 2018. Conscientious objection, professional duty and compromise: A response to Savulescu and Schuklenk. Bioethics 32(2): 126–131. doi: 10.1111/bioe.12410.
- Kant I. 2012. Groundwork of the Metaphysics of Morals. Edited by Korsgaard CM. Translated by Gregor M and Timmermann J. 2nd ed. Cambridge: Cambridge University Press.
- Kass NE, Faden RR, Goodman SN, Pronovost P, Tunis S, and Beauchamp TL. 2013. The research-treatment distinction: A problematic approach for determining which activities should have ethical oversight. Hastings Center Report 43(s1): S4–S15. doi: 10.1002/hast.133.
- Largent EA, Joffe S, and Miller FG. 2011. Can research and care be ethically integrated? Hastings Center Report 41(4): 37–46. doi: 10.1002/j.1552-146X.2011.tb00123.x.
- London AJ. 2021. For the Common Good: Philosophical Foundations of Research Ethics. Oxford University Press.
- Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, and Zwarenstein M. 2015. The PRECIS-2 tool: Designing trials that are fit for purpose. BMJ 350: h2147. doi: 10.1136/bmj.h2147.
- Lynn J, Baily MA, Bottrell M, Jennings B, Levine RJ, Davidoff F, Casarett D, Corrigan J, Fox E, Wynia MK, Agich GJ, O’Kane M, Speroff T, Schyve P, Batalden P, Tunis S, Berlinger N, Cronenwett L, Fitzmaurice JM, Dubler NN, and James B. 2007. The ethics of using quality improvement methods in health care. Annals of Internal Medicine 146(9): 666–673. doi: 10.7326/0003-4819-146-9-200705010-00155.
- Morain SR, and Largent EA. 2021. Public attitudes toward consent when research is integrated into care—any “ought” from all the “is”? Hastings Center Report 51(2): 22–32. doi: 10.1002/hast.1242.
- Morain SR, Joffe S, and Largent EA. 2019. When is it ethical for physician-investigators to seek consent from their own patients? The American Journal of Bioethics 19(4): 11–18. doi: 10.1080/15265161.2019.1572811.
- Morain SR, Weinfurt K, Bollinger J, Geller G, Mathews DJ, and Sugarman J. 2020a. Ethics and collateral findings in pragmatic clinical trials. The American Journal of Bioethics 20(1): 6–18. doi: 10.1080/15265161.2020.1689031.
- Morain SR, Mathews DJH, Weinfurt K, May E, Bollinger JM, Geller G, and Sugarman J. 2020b. Stakeholder perspectives regarding pragmatic clinical trial collateral findings. Learning Health Systems. doi: 10.1002/lrh2.10245.
- Piasecki J, and Dranseika V. 2021. Balancing professional obligations and risks to providers in learning healthcare systems. Journal of Medical Ethics 47(6): 413–416. doi: 10.1136/medethics-2019-105658.
- Rawls J. 1999. A Theory of Justice. Oxford University Press.
- Resnik DB. 2009. The clinical investigator-subject relationship: A contextual approach. Philosophy, Ethics, and Humanities in Medicine 4: 16. doi: 10.1186/1747-5341-4-16.
- Richardson HS. 2008. Incidental findings and ancillary-care obligations. The Journal of Law, Medicine & Ethics 36(2): 256–211. doi: 10.1111/j.1748-720X.2008.00268.x.
- Richardson HS. 2012. Moral Entanglements: The Ancillary-Care Obligations of Medical Researchers. Oxford University Press. doi: 10.1093/acprof:oso/9780195388930.001.0001.
- Ross WD. 1930. The Right and the Good. Oxford: Clarendon Press.
- Savulescu J. 2006. Conscientious objection in medicine. BMJ 332(7536): 294–297. doi: 10.1136/bmj.332.7536.294.
- Savulescu J, and Schuklenk U. 2018. Conscientious objection and compromising the patient: Response to Hughes. Bioethics 32(7): 473–476. doi: 10.1111/bioe.12459.
- Schaefer GO, Emanuel EJ, and Wertheimer A. 2009. The obligation to participate in biomedical research. JAMA 302(1): 67–72. doi: 10.1001/jama.2009.931.
- Smalley JB, Merritt MW, Al-Khatib SM, McCall D, Staman KL, and Stepnowsky C. 2015. Ethical responsibilities toward indirect and collateral participants in pragmatic clinical trials. Clinical Trials 12(5): 476–484. doi: 10.1177/1740774515597698.
- Sugarman J, and Califf RM. 2014. Ethics and regulatory complexities for pragmatic clinical trials. JAMA 311(23): 2381–2382. doi: 10.1001/jama.2014.4164.
- Tinetti ME, and Basch E. 2013. Patients’ responsibility to participate in decision making and research. JAMA 309(22): 2331–2332. doi: 10.1001/jama.2013.5592.
- Topazian R, Bollinger J, Weinfurt KP, Dvoskin R, Mathews D, Brelsford K, DeCamp M, and Sugarman J. 2016. Physicians’ perspectives regarding pragmatic clinical trials. Journal of Comparative Effectiveness Research 5(5): 499–506. doi: 10.2217/cer-2016-0024.
- Weinfurt K. 2017. Pragmatic elements: An introduction to PRECIS-2. In The Living Textbook. NIH Collaboratory. doi: 10.28929/092.
