Author manuscript; available in PMC: 2018 Feb 1.
Published in final edited form as: Bioethics. 2017 Feb;31(2):77–86. doi: 10.1111/bioe.12325

In Defense of a Social Value Requirement for Clinical Research

David Wendler, Annette Rid
PMCID: PMC5267934  NIHMSID: NIHMS833762  PMID: 28060427

Abstract

Many guidelines and commentators endorse the view that clinical research is ethically acceptable only when it has social value, in the sense of collecting data which might be used to improve health. A version of this social value requirement is included in the Declaration of Helsinki and the Nuremberg Code, and is codified in many national research regulations. At the same time, there have been no systematic analyses of why social value is an ethical requirement for clinical research. Recognizing this gap in the literature, recent articles by Alan Wertheimer and David Resnik argue that the extant justifications for the social value requirement are unpersuasive. Both authors conclude, contrary to almost all current guidelines and regulations, that it can be acceptable across a broad range of cases to conduct clinical research which is known prospectively to have no social value. The present paper assesses this conclusion by critically evaluating the ethical and policy considerations relevant to the claim that clinical research must have social value. This analysis supports the standard view that social value is an ethical requirement for the vast majority of clinical research studies and should be mandated by applicable guidelines and policies.

Keywords: research ethics, clinical research, social value, research policy, exploitation, moral integrity of researchers

1. BACKGROUND

Ethical analyses of clinical research attempt to identify the conditions under which it can be acceptable to expose participants to risks and burdens in order to evaluate medical treatments and interventions. Many analyses conclude that clinical research is ethically acceptable only when it has social value, in the sense that the data to be collected have the potential to improve health. According to this ‘social value requirement’ (SVR), clinical research that exposes participants to risks and burdens, but lacks social value, is unethical no matter what other positive features it might possess.

Most guidelines for clinical research endorse the SVR.1 In the words of the Nuremberg Code, clinical research is acceptable only when it has the potential to yield “fruitful results for the good of society.”2 Some version of the SVR is also codified in many national regulations. For example, regulations from Kenya maintain that “clinical research must be valuable, meaning that it evaluates a diagnostic or therapeutic intervention that could lead to improvements in health or well-being.”3

Despite this widespread endorsement, there have been surprisingly few analyses of what constitutes socially valuable research,4 and almost no systematic analysis of whether in fact social value is a necessary condition for ethically acceptable clinical research. With this latter gap in mind, Alan Wertheimer has evaluated two possible justifications for what he calls a “universal and robust” SVR, and found both of them wanting.5 Wertheimer concludes that it may be acceptable to conduct clinical research that is known prospectively to have no social value, provided study participants consent and are not exploited, and the research is privately funded. Similarly, David Resnik has argued that social value is an ethical requirement for research only when it poses more than minimal risk to non-consenting subjects or is supported by public resources.6

To assess these conclusions, the present paper critically evaluates the ethical arguments and policy considerations relevant to the SVR. This analysis reveals that Wertheimer and Resnik are mistaken in rejecting the SVR for most clinical research studies. Evaluation of a broad range of ethical and policy considerations provides strong support for the standard view: social value is an ethical requirement for clinical research generally and should be mandated by applicable guidelines and policies.

2. SCOPE OF ANALYSIS

2.1 Defining social value

Clinical research is a subset of research with human participants that focuses on evaluating methods to prevent, treat or cure illness and disease, or on generating the knowledge necessary to develop such methods. We leave to others the question of whether social value is an ethical requirement for research involving human participants more generally, such as economics research that involves playing computer games to learn how humans make decisions under conditions of uncertainty.

We assume the standard conception of social value, according to which clinical research has social value to the extent that it collects data which can be used, typically in conjunction with data from other studies, to improve health.7 Such data are gained, for example, by evaluating preventive, diagnostic, therapeutic or palliative interventions, or by conducting pathophysiological studies that are necessary to develop such interventions. This includes studies that identify methods which do not work and thereby point the way to more promising approaches.

The standard conception of social value does not exclude the possibility that clinical research may be socially valuable in ways that do not involve improvements in health. For example, clinical research can provide rewarding careers for scientists, employment for citizens, and a sense of fulfillment for participants. We will bracket these and other types of potential social benefit and assume that clinical research has social value only to the extent that it collects data which can be used to improve health.

2.2 Requiring social value

First, the SVR holds that the results of clinical research studies must have the potential to improve health, not that they in fact do so. At least some studies turn out to have essentially no social value at all. For example, some clinical trials end up recruiting so few participants that they yield no useful information. These trials do not violate the SVR, provided there is sufficient reason to believe ex ante—at the time the trials are initiated—that they will yield data which can be used to improve health.

Second, as we understand it, the SVR does not mandate that the social value of all clinical trials must exceed some threshold for significance or importance. It does not, for example, mandate that all studies must have significant social value. Instead, the requirement is that all studies must have sufficient social value. This means that studies without at least some social value cannot be justified.8 In addition, the level of social value that is required for a given study depends on the resources invested in the study and the extent to which it poses “net risks” to participants (where net risks refer to those risks that are not offset by potential benefits for participants).9

Our understanding of the SVR in this respect differs from those of both Wertheimer and Resnik. Wertheimer argues that the SVR requires significant social value because “any version of SVR worth taking seriously should not be so weak or broad as to exclude virtually nothing”.10 Yet our conception of the SVR does not require significant social value and still excludes some studies. Specifically, it excludes studies whose social value is not sufficient to justify the given level of net risk to participants or the associated research costs—for example, certain studies of “me-too” drugs that have no social value and pose significant net risks to participants, or very costly studies that have little social value. Resnik understands the SVR as requiring ‘substantial’ public benefit. He points out that proponents of an SVR object to studies that have no social value, even when they earn money for the sponsoring company and provide adequate compensation to participants. Resnik then suggests that proponents of an SVR are likely to object to these studies when they produce only “marginal public benefits.”11 On our understanding, however, the production of marginal benefits does not necessarily violate the SVR. This depends on whether the level of benefits is sufficient in light of the net risks to participants and the cost of research.

Third, the SVR applies to studies, not to the enrollment of specific individuals in these studies. Imagine it is known ex ante that the enrollment of a specific individual will neither contribute to, nor detract from, the social value of a given study. As we understand it, the SVR does not preclude enrollment of this individual.

Fourth, we do not consider here whether, in some cases, the SVR should be further specified regarding who should benefit, or in what ways. For example, some guidelines require that research in low-resource settings must be responsive to the health needs of the communities in which it is conducted12—in other words, that it must have what some commentators call local social value.13

Fifth, we do not examine how the SVR should be implemented and enforced. Thus, we take no stance on the question of whether the social value of a given study should be assessed by regulators or research ethics committees, research advocacy groups or funders, or someone else, nor on what the responsible party should do in response to studies that lack social value.

3. ETHICAL AND POLICY ARGUMENTS FOR THE SOCIAL VALUE REQUIREMENT

The following sections examine eight ethical and policy arguments that, taken together, provide strong support for requiring social value for all clinical research.

3.1 Protecting participants who cannot consent

There is considerable debate over when it is acceptable to expose individuals who cannot consent to net risk research procedures that are performed purely for research purposes and therefore do not offer them a prospect of clinical benefit. For example, under what conditions (if any) is it acceptable for healthy children to undergo a purely research lumbar puncture in order to establish normal levels for a protein in the cerebrospinal fluid?

Commentators have offered a number of ethical justifications for net risk research with individuals who cannot consent.14 For present purposes, the important point is that these justifications all require the research to be socially valuable.15 For example, some argue that it can be acceptable to enroll children in research involving net risks because it teaches them the value of altruism.16 As mentioned, there are a number of ways in which clinical research might benefit others. It might help to advance investigators’ careers or realize a profit for the sponsoring company. However important these goals may be in other regards, they seem ill-suited to teaching children the value of altruism. In contrast, children can plausibly learn to appreciate the value of altruism by enrolling in studies that are designed to identify new methods for promoting the health of future patients.

Others argue that essentially everyone has realized benefits to their health as the result of prior net risk research, and everyone—including individuals who cannot consent—therefore has an obligation to participate in such research.17 This argument makes sense only to the extent that the research in question has the potential to improve health for future individuals.

Still others maintain that participating in research with the potential to improve health offers the opportunity to contribute to valuable activities that may benefit others. Doing so thereby promotes the interest we all have in living a better life overall—even when the research involves some net risks and the participants cannot consent.18 In contrast, it is less clear that helping a company profit from a study of questionable social value contributes to a better life overall. This striking overlap of justifications provides strong support for the claim that social value is necessary to ensure the ethical appropriateness of net risk clinical research with participants who cannot consent. Moreover, given that essentially all clinical research studies include some net risk procedures, it can generally be assumed that the SVR applies to all clinical research involving participants who cannot consent.19

3.2 Ensuring the acceptability of high-risk research with competent adults

Respect for autonomy generally requires us to allow competent adults to make their own decisions and lead their own lives, even when they engage in risky activities with little or no social value. For example, we should respect the decisions of competent adults to participate in reality TV shows that pose some net risks and have arguably no social value. With this in mind, imagine that a researcher proposes to enroll competent adults in a clinical trial that has little, if any social value, but has the potential to earn the sponsoring company a profit—perhaps the study of a “me-too” drug that is so similar to already approved treatments in terms of side effects, route of administration, cost and so on, that it has no potential to benefit future patients. Imagine further that the sponsor proposes to offer the participants fair compensation. On what grounds might guidelines or regulations regard this study as inappropriate?

Respect for autonomy is important. At the same time, society rightly limits the activities of even competent adults, especially when it comes to inappropriate and high-risk activities. Reality TV shows that involve competent adults agreeing to swim across rivers and hike in the wilderness without a compass are permitted, but we would not permit a show involving Russian Roulette. Imagine that a TV station proposes a reality TV show, “Firing Squad”, in which competent adults are lined up against a wall. An executioner, equipped with a gun that contains one bullet in its 20 chambers, aims at the group and pulls the trigger.

This show is inappropriate, even if the contestants provide their voluntary informed consent and they are not exploited (meaning that the show’s sponsors do not profit unfairly from their participation). A number of considerations could support this view: it seems inappropriate to be entertained by possible executions; the show might be traumatic to those who watch it; it might trigger copycat incidents; and it might promote the idea that human life has little value. But, even if these concerns are addressed—imagine the game is played in private and not televised—there are still strong moral reasons to oppose it. This will sound puzzling to those who assume that morality, at least when it comes to competent adults, is exhausted by transactional fairness and respect for their autonomy. It will likewise puzzle those who assume that respect for the autonomy of competent adults has overriding value.20

However, our view is that a plurality of values should guide our moral judgment, and that no single value—including respect for autonomy—is overriding in all cases. “Firing Squad” is inappropriate because it poses significant net risks to the contestants with no redeeming value, so that even competent adults who are adequately compensated should not be invited to participate. This view is not peculiar to research, but applies to other activities as well. For example, entrepreneurs should not hire workers to produce fancy dresses under high-risk conditions, even if the work is limited to competent adults who are informed of the risks and offered a fair wage. The ethics of these cases is not exhausted by what competent adults will agree to. It includes, in addition, what offers it is appropriate to make to competent adults.

One might respond that it is possible to imagine circumstances in which offers to work in these conditions might be acceptable. Imagine, for instance, the following conditions: it is not possible for the garment factory owner to make adequate safety provisions for her workers without going out of business; working in the factory is the only opportunity the workers have for employment; and society is unable to feed the children of individuals who are not employed.

It seems plausible to argue, in these circumstances, that it might be acceptable for the owner to offer employment in a factory that poses high risks and manufactures products of no social value. With this in mind, one might attempt to develop a similar argument in support of clinical research that has no social value and poses high net risks to competent consenting participants. For example, enrollment in such research might offer the participants financial compensation and/or access to clinical care that is not otherwise available to them and, given their circumstances, these potential benefits might justify even high research risks from the participants’ perspective. Although there could be some cases like this, the nature of clinical research suggests that they offer at best a weak argument against an SVR. Participation in clinical research is typically short-term, infrequent, limited to individuals who satisfy strict inclusion and exclusion criteria, and the clinical care provided as part of the research is rarely comprehensive. This suggests that research enrollment, unlike employment, typically does not provide participants with a credible opportunity to meet their and their dependents’ most basic needs. Thus, even the desperate circumstances that might justify offering employment in a high-risk factory typically do not justify offering enrollment in high net risk research that offers no social value.

In contrast, activities that pose significant net risks, but have important social value, are rightly considered ethical, even praiseworthy. For example, competent consenting adults often are praised for assuming high net risks when fighting fires or safeguarding national security.21

These considerations suggest that it is unethical to offer participation in an activity that exposes competent adults to high net risks and has no social value, even when the participants give their informed consent and are not exploited. It follows that social value is a necessary ethical requirement on high net-risk research and should be mandated by applicable guidelines and regulations.

3.3 Maintaining researcher integrity

Clinical research does not involve participants merely facing risks; research risks are not analogous to the risk of being attacked by a bear while on a hike. Clinical investigators actively and intentionally expose participants to risks, frequently by invading their bodies to inject investigational agents or remove samples, or by asking sensitive health-related questions. These features raise the question of what constitutes appropriate behavior on the part of investigators.

As Alan Wertheimer points out, there is no comprehensive account of appropriate investigator behavior.22 Wertheimer also highlights the fact that the point of ethical guidance for clinical research is typically understood as protecting participants’ rights and interests, not preventing investigators from acting inappropriately. However, the absence of a comprehensive account of appropriate investigator behavior, and the general emphasis on protecting participants, do not imply that limits on investigator behavior are exclusively grounded in protecting participants.

Imagine a study that involves participants undergoing a lumbar puncture without anaesthetic merely to see how painful it is. Further imagine that participants are fully informed and paid for their participation. This study is not clearly problematic in the way that a study involving high net risks, analogous to the “Firing Squad” TV show, would be. In particular, given the relatively moderate net risks of a lumbar puncture, it is not clear that this study is unethical in virtue of exposing participants to excessive risks. Yet, clinical research ethics is not just about what happens to participants; it is also about what investigators as moral agents do to participants.23 Investigators should not insert needles into participants’ spinal columns and intentionally inflict pain without a good reason. This raises the question: When can it be acceptable for investigators to interact with participants in ways that expose them to net risks?

Individuals’ rights to bodily integrity and privacy place strong ethical claims against investigators inserting a needle into participants’ bodies or asking sensitive questions purely for research purposes. However, by giving valid consent, competent adults waive their rights against being so treated. Hence, if there are limits on how investigators can treat consenting adults in the context of net risk research, these limits must have some source other than participants’ rights.

While we do not have a complete (or even partial) account of agent-centered limitations in clinical research, it seems clear that these limits go beyond prohibiting investigators from violating participants’ rights. Investigators also need positive reasons to justify their studies. In the above lumbar puncture study, there needs to be a positive reason why investigators insert needles into participants’ backs, inflicting pain and exposing them to net risks. Absent the potential to benefit participants clinically, the potential to learn something important for improving health provides a clear and strong justification. In this case, an investigator can say: “Yes, I am inserting a needle into your spinal column for research purposes, but I do this as part of an effort to gain information that has the potential to improve the health of future patients and which cannot be obtained in a way that is less invasive or poses lower risks.” By contrast, other possible benefits of conducting clinical trials do not seem to yield a compelling justification for investigators’ behavior. For example, recognizing that this does not constitute a complete argument, to our ears at least “I am doing this lumbar puncture to satisfy the requirements to earn my degree” does not provide a sufficient justification for inserting needles into the spinal columns of research participants.

3.4 Avoiding participant deception

Society benefits from clinical research and takes active steps to promote it, primarily through efforts to advance and encourage the view that clinical research is socially valuable. Empirical data suggest that these efforts have been successful. In a recent global survey, 84% of 12,009 respondents indicated that individuals who participate in clinical research make a valuable contribution to science.24

Empirical studies also find that the potential to help future patients is an important reason why many individuals enroll in clinical research, especially in studies that offer no prospect of clinical benefit.25 Relying on this motivation to enroll individuals in research that lacks social value involves a kind of deception or fraud. It involves investigators relying on participants’ false belief that the study can help future patients in order to get them to enroll in studies that conflict with their clinical interests. The SVR protects participants against this fraudulent behavior by ensuring that the results of net risk studies have the potential to benefit future patients. Moreover, to the extent that society, by promulgating and encouraging the view that clinical research is socially valuable, is responsible for individuals’ false belief, the SVR protects society from becoming complicit in this fraud.26

A different way to avoid participant deception would be simply to inform prospective participants that a given study lacks social value. However, it seems unlikely that investigators would adopt this approach. They likely would be unwilling to admit that their studies lack social value and likely would be concerned that candor in this regard would deter individuals from enrolling. This suggests that an approach which permitted these studies, but mandated full disclosure regarding their lack of social value, would be unlikely to be implemented scrupulously by investigators and sponsors who have a strong incentive to enroll participants. Moreover, as we discuss below, informing participants that some studies lack social value may undermine public trust in clinical research in general. These considerations suggest that the SVR is a key safeguard against deception in clinical research that poses net risks.

3.5 Safeguarding against exploitation

The SVR has been cited as an important protection against the exploitation of research participants.27 In response, Alan Wertheimer has argued that the SVR actually increases the potential for exploitation.28 To make this argument, Wertheimer appeals to his own account, according to which exploitation occurs when a specific transaction places an unfair balance of risks and benefits on one or more of the parties to the transaction.29 On this account, research participants are exploited when they receive an unfair share of the benefits of a study, given the risks and burdens they assume, and the extent to which others benefit from their participation.30 Since the SVR mandates that clinical trials must have the potential to benefit future patients, and there might be millions of them, Wertheimer concludes that it actually increases the chances that participants will be exploited.

Wertheimer’s argument makes sense, but only to the extent that one focuses on individual research studies.31 Imagine a world in which only one clinical trial will ever be conducted. In that world, the chances that participants will be exploited increase if the results benefit many others rather than not benefitting anyone else—meaning that the trial has no social value. But, of course, we do not live in that world. To assess the relationship between the SVR and exploitation in the real world, we need to consider clinical research as a cooperative enterprise that is designed, through the conduct of numerous studies over many years, to improve health. In this world, the potential for exploitation depends not simply on the distribution of benefits and burdens within a given study, but on the distribution of benefits and burdens across the enterprise as a whole. For individuals who are denied access to the benefits of clinical research—for example, to health interventions that are based on the results of prior clinical trials—the fact that a study has social value for others would increase the potential for exploitation, as Wertheimer claims. In contrast, as part of a collective program to which everyone contributes, and from which everyone benefits, an SVR can reduce the potential for exploitation. It can ensure that clinical research generates benefits for all those who contribute to it. For example, enrollment in a study that has social value provides an opportunity for those who have benefited from the research participation of others before them to do their fair share.

Unfortunately, mechanisms to ensure equitable access to the benefits of clinical research do not exist in all countries, let alone beyond national boundaries, and only a small number of people participate in research studies. This raises concern that, in many places, the SVR represents a less than ideal safeguard against exploitation. This consideration is important because Wertheimer offers a more straightforward approach. He writes: “If payment should be regarded as a benefit, then subjects are not exploited if the payment is sufficient to adequately compensate them for the risks and burdens of participation even if the research is entirely lacking in social value.”32

While this argument makes sense in theory, it would require what strikes us as an extremely unlikely change in practice. In particular, compensating participants for the risks and burdens they face is not sufficient to avoid exploitation. The level of compensation would also need to take into account the extent to which others benefit from the participants’ involvement and, conversely, the extent to which these others have contributed to the success of the research. For example, a series of clinical trials can yield a successful intervention and large profits for a company, to whose development a wide range and large number of individuals have contributed—from the basic scientists who began developing it, the individuals who built their equipment, and the janitors who cleaned their offices, to the investigators who conducted the clinical trials, the IT specialists and statisticians who helped them manage and analyze the collected data, the sponsors who funded them, and the participants who enrolled in their trials. Determining who has a claim to a share of the profits that result from the intervention and how much of a claim they have would be incredibly complicated. However, the contributions of research participants can be crucial and their fair share of the profits may be significant.

One of us has argued that those who make a contribution to the development of a new medical intervention ethically have a claim to share in the profits that it yields. Moreover, among those who make a contribution, it is important to distinguish between those who make a ‘project-specific’ contribution which shapes its revenue generating properties and those who make ‘background’ contributions which may be necessary to the completion of the project, but do not shape its revenue generating properties.33 For example, investigators often make a project-specific contribution whereas janitors typically do not. For present purposes, the important point is that participants sometimes make project-specific contributions, in which case they might have a claim to receive hundreds of thousands of dollars to ensure a fair transaction from an intervention that ends up generating billions of dollars in profit.

It seems unlikely, at least in the foreseeable future, that participants will receive this level of compensation. Sponsors would be reluctant to pay participants at this level. Moreover, doing so would change in substantial ways how we understand the role of research participants, what it means to obtain their voluntary informed consent, and so on. And uncertainty about these changes in turn is likely to increase sponsors’ reluctance to address the potential for exploitation through payment. This suggests that an SVR represents an important component of what currently appears to be the best, albeit imperfect, approach to addressing concerns about exploiting research participants.

3.6 Stewardship of public resources

Public officials have an obligation to exercise stewardship of public resources by spending them in ways that promote socially valuable goals. This obligation is not specific to research, but applies to public expenditures in general. As mentioned, there are different socially valuable goals that might be promoted by research, such as employment or economic activity. However important they may be, these goals can be realized in other ways. In contrast, clinical research is vital to developing and evaluating interventions that have the potential to improve health. The importance of this goal, and the absence of alternative ways to achieve it, provides strong reason to insist that the approximately one third of all clinical research studies which are publicly funded satisfy an SVR.34

Granting this, Wertheimer points out that “corporations are permitted to spend their resources as they see fit, subject to specific constraints such as not marketing drugs that have not been approved by the regulating agency.”35 This view implies that private companies may use their own resources to fund clinical trials that have no social value, provided the risks are not excessive and participants give voluntary informed consent. However, this argument overlooks the numerous public contributions to privately funded research. For example, the early phase studies on which many company studies are based are often publicly funded. In addition, much of the infrastructure on which clinical trials depend, such as the methods that are used in clinical research, traces to public support and funding. Governments also help to train the investigators and clinicians who are needed to run clinical trials, and contribute to the review, approval and reimbursement of the products that are developed through commercial studies. The magnitude of these contributions varies widely and will be low for some studies, as well as difficult to estimate in any precise way. Nonetheless, there remains at least some public support for essentially every privately funded study.

Of course, essentially every private activity relies on public funding or publicly funded resources in some way. Companies rely on the safe space secured by government funded police and military, as well as the roads and infrastructure developed with public funds. Arguably, companies and entrepreneurs who rely on these resources in order to make a profit should compensate the government in return, and frequently they do so in the form of taxes or user fees. Pharmaceutical companies rely on these same generic resources, developed and publicly funded to suit a broad range of uses and widely accessible, in order to develop their products.

Yet pharmaceutical companies also rely on public funding that is specifically tailored to their projects. For example, the products they test and market are frequently developed and first evaluated in publicly funded laboratories. This form of reliance on public funding is very different from relying on the clean air and safe roads that are accessible to the general public for a wide range of uses. Here the government is spending its resources to develop a specific product that is then provided to a company for its exclusive clinical testing and marketing. Similarly, the examples of government support cited earlier—public support for the infrastructure on which clinical trials depend, training of investigators and clinicians, and support for the groups and institutions that review and approve the products developed through commercial studies— are tailored to clinical research. Taken together, these examples suggest that the vast majority of clinical trials do not involve companies simply taking advantage of public resources that are available to everyone. Instead, they are taking advantage of governmental support that is tailored to the promotion of clinical research conducted by private companies.

One might argue that governments can provide this more specific support to pharmaceutical companies without insisting on any particular conditions in return.36 Whether this is right depends on whether specific yet unconditional public support of private institutions is consistent with the obligation to exercise proper stewardship of public resources. Is it appropriate to spend significant amounts of public money in ways that benefit a narrow range of companies without insisting on any public benefit in return? The answer to this question is not entirely clear. However, it does seem clear that requiring clinical research to have social value makes the provision of these benefits consistent with the appropriate stewardship of public resources. This suggests that as a matter of public policy, and absent a clear analysis of what proper stewardship of public resources requires, it makes sense to mandate an SVR.

3.7 Promoting public trust

Clinical research provides significant benefits to society. Generation and preservation of these benefits depends on public trust in and support of clinical research.37 With the SVR in place, potential participants can be assured that they will be invited to face the risks and burdens of clinical research only when the studies have the potential to improve health. In this way, an SVR provides an important protection for public trust in clinical research and hence, indirectly, for the enterprise of clinical research itself.

In response, Resnik argues that a “lack of public benefit from a single study involving human subjects is not likely to have a noticeable impact on the public’s trust in science.”38 However, as Wertheimer points out, the extent to which public trust in clinical research depends on an SVR is “an empirical question that requires much more investigation.”39 The question we face, then, is one of which policy to adopt absent robust data on the importance of an SVR for public trust in the research enterprise.

The potential costs of not requiring social value could be enormous if, in fact, clinical research depends significantly on public trust. In particular, some clinical trials are necessary for achieving fundamentally important goals related to promoting health. For example, identifying effective ways to treat malaria and prevent HIV infection depends on clinical trials. Hence, to the extent that allowing clinical trials which have no social value might undermine trust in and support for these trials, the consequences could be devastating. To see this, imagine that sponsors and researchers are permitted to conduct clinical research that has no social value, provided the net risks are low or moderate and competent adults provide their voluntary informed consent. We can expect that at least some research will be conducted for frivolous reasons, for no reason, and even for harmful reasons. For example, some research might aim to develop a deadly toxin or study interventions solely to undermine the products produced by a competitor. In the long run, communities may come to regard clinical research with suspicion and be unwilling to participate.

One might hope that permitting both valuable studies and studies that have no social value will not necessarily undermine trust in clinical research in general. For example, investigators and funders might be required to make clear whether a given study has social value, perhaps by placing in bold letters on consent forms: “This study does not have any potential to help improve health. Instead, it is being conducted to make money for Company X.” This approach might lead to the public having trust in and supporting those studies that have social value, while regarding studies that have no social value in a different light—for example, as a means for employment or earning a profit.40 However, this approach is likely to fail because the average person typically is not in a position to judge whether a given research study justifies the net risks and burdens it poses. Although it is possible that the public is willing to trust sponsors and others to tell them which studies are socially valuable, there is a significant risk that the public will lose trust in all clinical research. This would have devastating costs.

How do these costs compare to the potential benefits of dispensing with the SVR? Presumably, the SVR blocks some studies that competent adults want to conduct and others are willing to participate in. This is a cost, although likely a very modest one. Beyond this, there are few costs to endorsing an SVR. Even if we did not require that research have social value, we would still need a system to ensure that participants’ rights and interests are protected. Given that implementation of the SVR likely can be incorporated into this system with few additional costs, it seems unlikely—although an open empirical question—that the costs of requiring social value as a matter of policy outweigh the assurance this provides in maintaining public trust.

3.8 Cases versus policies

The argument to this point does not preclude the possibility that there are some studies to which the above arguments in support of an SVR do not apply. In particular, the arguments for requiring social value do not apply to studies that meet all of the following conditions: 1) enroll only competent adults who understand that the study has no potential social value; 2) pose no greater than moderate net risks; 3) are consistent with conditions on appropriate investigator behavior; 4) compensate participants commensurate with any net risks and burdens they face and the extent to which their contribution benefits others; 5) are done in a sufficiently private way; 6) do not rely on any specific public investments, including prior studies that were conducted using public funds, and publicly funded review, approval or reimbursement mechanisms.

Are there studies that satisfy all of these conditions? We are not sure. If such studies exist, social value may not be a requirement for them. However, to the extent that the SVR is part of public policy, it is not clear that these studies matter. Public policies are never fully appropriate for every single case that falls under them. In addition, depending on how complicated it would be, implementation of a system to allow valueless studies might lead to many false positive mistakes—that is, valueless studies that are conducted but should not have been conducted. This provides further reason for the relevant policies to require social value for all clinical research studies.

4. TWO POTENTIAL OBJECTIONS

Critics might argue that the present analysis establishes only that social value—combined with other standard requirements, such as informed consent41—is a sufficient condition for ethical clinical research. We have not demonstrated that every possible way of governing clinical research that does not involve an SVR is ethically problematic. Hence, we have failed to support the standard view that social value is a necessary requirement for ethical clinical research.

We admit to not having canvassed every possible way of governing research without social value and proven that all of them are problematic. If this standard for proving necessity is applied, we have failed to meet it. However, we are not sure—at least in applied ethical reasoning—that this standard is ever met. In contrast, we believe we have shown that, given the world we live in, and the way that clinical research is conducted in that world, failure to enforce an SVR would lead to problematic studies and overall worse research outcomes. It is in this sense that we conclude that the SVR is necessary for ethical research.

Critics might also argue that our arguments blend ethics and policy, and that it is therefore unclear whether social value is required as a matter of ethics or as a matter of good policy. In our view, it is both, although it can be difficult to draw a clear distinction between the two. Even the argument that most clearly appeals to policy considerations—cases versus policies—has moral salience, insofar as it highlights the moral importance of balancing false positive and false negative errors when setting public policy.

5. CONCLUSION

The present analysis identifies eight ethical and policy considerations that together provide strong support for an SVR. Mandating that clinical research must have social value is important for protecting participants who cannot consent, preventing inappropriate research that poses high net risks, and promoting appropriate investigator behavior. Absent an alternative approach, an SVR also provides some protection against participant deception and participant exploitation. Moreover, an SVR helps to ensure proper stewardship of public resources and promotes public trust and support for clinical research, thereby helping to secure the conditions necessary to continue to improve health. Taken together, these considerations provide strong support for the claim that social value is an ethical requirement for clinical research and should be mandated by applicable guidelines and policy.

Acknowledgments

We thank Benedict Rumbold, Benjamin Sachs, Seema Shah, two anonymous reviewers and audiences at the Philosophy & Medicine Colloquium at King’s College London and the 12th World Congress of Bioethics for helpful comments on earlier versions of this paper.

Funding: The present work was funded in part by intramural research funds of the US NIH Clinical Center. However, the views expressed are the authors’ own. They do not represent the position or policy of the US NIH, the PHS, or the DHHS. Annette Rid received funding from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7/2007–2013) under REA grant agreement n° 301816.

Biographies

David Wendler is a senior investigator and Head of the Section on Research Ethics in the Department of Bioethics at the US NIH Clinical Center. He is a philosopher trained in the philosophy of science, metaphysics, and epistemology. His current research focuses on clinical trials and clinical care with individuals who are unable to give informed consent.

Annette Rid is a Senior Lecturer in Bioethics and Society in the Department of Global Health & Social Medicine, King’s College London, and an elected Fellow of the Hastings Center. Trained in medicine, philosophy and bioethics in Germany, Switzerland and the United States, Annette’s research interests span research ethics, clinical ethics and justice in health and health care.

References

  • 1.World Medical Association. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. 2013. Available at: http://www.wma.net/en/30publications/10policies/b3/ [Accessed 2 October 2016]; Council for International Organizations of Medical Sciences (CIOMS). International Ethical Guidelines for Biomedical Research Involving Human Subjects. Available at: http://www.cioms.ch/publications/guidelines/guidelines_nov_2002_blurb.htm [Accessed 2 October 2016]
  • 2.The Nuremberg Code. In: Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10. Vol. 2. Washington, DC: US Government Printing Office; 1949. pp. 181–182. Available at: https://history.nih.gov/research/downloads/nuremberg.pdf [Accessed 2 October 2016]
  • 3.National Council for Science and Technology. Guidelines for Ethical Conduct of Biomedical Research Involving Human Subjects in Kenya. Nairobi: 2004. (NCST No. 45). Available at: https://www.healthresearchweb.org/files/Kenya_Guidelines_Ethical_conduct_of_research_involving__human_subjects.pdf [Accessed 4 October 2016]
  • 4.Freedman B. Scientific Value and Validity as Ethical Requirements for Research: A Proposed Explication. IRB. 1987;9:7–10; Karlawish JH. Clinical Value: The Neglected Axis in the System of Research Ethics. Account Res. 1999;7:255–264. doi: 10.1080/08989629908573956; Grady C. Thinking Further about Value: Commentary on a ‘Taxonomy of Value in Clinical Research’. IRB. 2001;24:7–8; Resnik D. Social Benefits of Human Subjects Research. J Clin Res Best Pract. 2008;4:1–7; Shaw D, Elger BS. The Relevance of Relevance in Research. Swiss Med Wkly. 2013;143:w13792. doi: 10.4414/smw.2013.13792; Habets MG, van Delden JJ, Bredenoord AL. The Social Value of Clinical Research. BMC Med Ethics. 2014;15:66. doi: 10.1186/1472-6939-15-66.
  • 5.Wertheimer A. The Social Value Requirement Reconsidered. Bioethics. 2015;29:301–308. doi: 10.1111/bioe.12128.
  • 6.Resnik D. Examining the Social Benefits Principle in Research with Human Participants. Health Care Anal. 2016. doi: 10.1007/s10728-016-0326-2. Unlike Wertheimer, Resnik does not limit his analysis of what he calls a ‘strong social benefits principle’ to clinical research. However, this matters little for present purposes because his arguments clearly apply to clinical research.
  • 7.Emanuel EJ, Wendler D, Grady C. What Makes Clinical Research Ethical? JAMA. 2000;283:2701–2711. doi: 10.1001/jama.283.20.2701.
  • 8.Rid A, Wendler D. A Framework for Risk-Benefit Evaluations in Biomedical Research. Kennedy Inst Ethics J. 2011;21(2):141–179. doi: 10.1353/ken.2011.0007.
  • 9.We also believe that there are upper limits on net risks to participants that cannot be justified even by tremendous social value; however, we cannot pursue this question here. Ibid.
  • 10.Wertheimer, op. cit. note 6: 302.
  • 11.Resnik, op. cit. note 7: 5 (preprint).
  • 12.CIOMS, op. cit. note 1.
  • 13.Shah S, Wolitz R, Emanuel EJ. Refocusing the Responsiveness Requirement. Bioethics. 2013;27:151–159. doi: 10.1111/j.1467-8519.2011.01903.x.
  • 14.Wendler D. The Ethics of Pediatric Research. Oxford, New York: Oxford University Press; 2010. pp. 77–106.
  • 15.Many guidelines and regulations also allow waivers or modifications of the requirement to obtain the informed consent of competent adults when the research meets specified conditions, typically including being low risk and involving little, if any interaction with the participants. There has been surprisingly little discussion of what justifies research without consent under these conditions. Although we will not discuss the issue here, we suspect that any successful justification for waiving or modifying informed consent—just like any successful justification for research with participants who cannot consent—will have to cite at least in part the social value of the research.
  • 16.Bartholome W. The Ethics of Non-Therapeutic Clinical Research on Children. In: National Commission, editor. Appendix to Report and Recommendations: Research Involving Children. 1977. pp. 3.1–3.22; Ackerman TF. Moral Duties of Parents and Nontherapeutic Clinical Research Procedures Involving Children. Bioethics. 1980;2:94–111. doi: 10.1007/BF00915263.
  • 17.Harris J, Holm S. Should We Presume Moral Turpitude in our Children? Small Children and Consent to Medical Research. Theor Med Bioethics. 2003;24:121–129. doi: 10.1023/a:1024651013837; Brock DW. Ethical Issues in Exposing Children to Risks in Research. In: Grodin MA, Glantz LH, editors. Children as Research Subjects: Science, Ethics and Law. Oxford, New York: Oxford University Press; 1994. pp. 81–101.
  • 18.Wendler, op. cit. note 10: 154–165.
  • 19.Resnik makes a similar argument, although he claims that the SVR applies only to research that poses more than minimal net risks. Resnik, op. cit. note 7. In our view, this position is not tenable because any level of net risk to participants who cannot consent, however small, requires justification.
  • 20.For example, Resnik briefly considers whether risk imposition justifies an SVR and concludes that ‘placing restrictions on consensual risk-taking in research is paternalistic’. This response seems to assume both that paternalism is always ethically inappropriate and the justification for requiring social value must trace to the (paternalistic) protection of participants. Resnik, op. cit. note 7: 7 (preprint).
  • 21.Those offering high-risk employment still have an obligation to reduce the risks where possible, suggesting that the risks of even highly valuable research should be reduced as well. Rid & Wendler, op. cit. note 10.
  • 22.Wertheimer, op. cit. note 6.
  • 23.Nagel T. Mortal Questions. Cambridge, UK: Cambridge University Press; 2012.
  • 24.Center for Information and Study on Clinical Research Participation. Perceptions and Insights Study: Public and Patient Perceptions of Clinical Research Report on Public Perceptions. 2015 Available at: https://www.ciscrp.org/download/2015-perceptions-insights-study-public-perceptions/?wpdmdl=5740 [Accessed 2 October 2016]
  • 25.Stunkel L, Grady C. More than the Money: A Review of the Literature Examining Healthy Volunteer Motivations. Contemp Clin Trials. 2011;32:342–352. doi: 10.1016/j.cct.2010.12.003.
  • 26.Wertheimer grants that ‘to the extent that prospective participants are altruistically motivated to contribute to the generation of medical knowledge, … social value is required to warrant their confidence.’ However, he suggests – wrongly, in our view – that social value considerations are irrelevant for most participants because they are motivated by self-interest. Wertheimer, op. cit. note 6: 307.
  • 27.Emanuel et al., op. cit. note 7.
  • 28.Wertheimer, op. cit. note 6.
  • 29.Wertheimer A. Exploitation. Princeton, NJ: Princeton University Press; 1999.
  • 30.Wertheimer A. Rethinking the Ethics of Clinical Research: Widening the Lens. Oxford, New York: Oxford University Press; 2010.
  • 31.Wertheimer’s argument also depends on his account of exploitation. For present purposes, we bracket possible criticism of his view and instead focus on showing that an SVR is a compelling way of avoiding exploitation—on Wertheimer’s own account – given current research practices.
  • 32.Wertheimer, op. cit. note 6.
  • 33.Johnson R, Wendler D. Challenging the Sanctity of Donorism: Patient Tissue Providers as Payment-Worthy Contributors. Kennedy Inst Ethics J. 2015;25:291–333. doi: 10.1353/ken.2015.0021.
  • 34.Chakma J, Sun GH, Steinberg JD, Sammut SM, Jagsi R. Asia’s Ascent – Global Trends in Biomedical R&D Expenditures. N Engl J Med. 2014;370:3–6. doi: 10.1056/NEJMp1311068.
  • 35.Wertheimer, op. cit. note 6. Resnik pursues a similar line of argument in Resnik, op. cit. note 7.
  • 36.Resnik argues along these lines when he writes that ‘… scientists and institutions have numerous options for compensating the public for its investments in the science.’ However, he does not consider that private investigators and sponsors are taking advantage of public funding that specifically contributes to their enterprise, and that this can justify instituting an SVR as a matter of public policy as we argue here. Resnik, op. cit. note 7: 8 (preprint).
  • 37.London AJ. A Non-Paternalistic Model of Research Ethics and Oversight: Assessing the Benefits of Prospective Review. J Law Med Ethics. 2012;40:930–944. doi: 10.1111/j.1748-720X.2012.00722.x.
  • 38.Resnik, op. cit. note 7.
  • 39.Wertheimer, op. cit. note 6.
  • 40.Moreover, if successful, this approach would also address the potential for participant deception discussed above (section 3.4) without insisting on an SVR.
  • 41.Emanuel et al., op. cit. note 7.
