Abstract
Scholars and policymakers continue to struggle over the meaning of the word “vulnerable” in the context of research ethics. One major reason for the stymied discussions regarding vulnerable populations is that there is no clear distinction between accounts of research vulnerabilities that exist for certain populations and discussions of research vulnerabilities that require special regulations in the context of research ethics policies. I suggest an analytic process by which to ascertain whether particular vulnerable populations should be contenders for additional regulatory protections. I apply this process to two vulnerable populations: the cognitively vulnerable and the economically vulnerable. I conclude that a subset of the cognitively vulnerable require extra protections while the economically vulnerable should be protected by implementing existing regulations more appropriately and rigorously. Unless or until the informed consent process is more adequately implemented and the distributive justice requirement of the Belmont Report is emphasized and operationalized, the economically disadvantaged will remain particularly vulnerable to the harm of exploitation in research.
Keywords: vulnerable populations, federal regulations, economically vulnerable, cognitive impairment, mission creep, research ethics
Introduction
A fundamental assumption underlies modern clinical research ethics: certain categories of people are presumed to be more likely than others to be misled, mistreated, or otherwise taken advantage of as research participants. These populations are deemed “vulnerable,” a status that generates a duty for researchers, review committees, and regulators to provide special protections [1]. While scholars struggle to reach consensus about the meaning of the word “vulnerable” in the context of research ethics, policymakers are faced with the challenge of determining which vulnerable populations require additional regulatory protections from research abuses, such as those provided to children, prisoners, and pregnant women and their fetuses [2].1 When it is recognized that there is a research vulnerability or potential for abuse in a specific population, such as economically disadvantaged, international, minority, or critically ill populations or groups with disabilities or strong hierarchical pressures, there is often an inference that these populations require additional regulatory protections in response [4-6]. As the research community continues to acknowledge more people and situations that are vulnerable to research abuses, it is increasingly important to distinguish between research vulnerabilities in general and research vulnerabilities that require additional regulatory protections.
But how should policymakers make the move from identifying vulnerable populations in research to determining which vulnerable populations need extra regulatory protections? I will argue that the restriction of research with vulnerable populations should always be justified by establishing which vulnerabilities cannot be addressed by the general protections in place for all research participants. For the sake of this discussion, I will consider existing regulations as those that operationalize the three Belmont Principles in U.S. policy: (1) “Respect for persons,” operationalized in the requirement for an appropriate informed consent process; (2) “Beneficence,” operationalized in the requirement for a fair risk-benefit ratio;2 and (3) “Justice,” operationalized in the requirement for an equitable selection of subjects [7].
While this point may seem obvious, I will show that by steering regulatory discussions away from populations that are vulnerable in research and toward identifying the particular vulnerabilities from which existing protections cannot shield people, a much smaller and more precisely defined group of vulnerable populations emerges as candidates for additional regulations.
My argument is not to deny that many, if not all, research populations have qualities that can make them vulnerable to research abuses, nor to deny that the current regulations are often implemented inadequately to protect against these abuses. Rather, I argue that at least one candidate group, and potentially many others, is already protected from these harms under current policies, and that our response to its continuing vulnerability should be to implement our current regulations with more rigor and consistency, not to make more regulations.
There are three important benefits of this distinction. First, it can enable a more focused assessment of the relationship between potential research harms and existing regulations. Populations that are more susceptible to research harms often illuminate where current regulations are insufficiently prioritized or implemented, because it is these regulations that researchers can most easily manipulate or neglect to achieve unethical ends.
The second important implication of this discussion is that it can prevent the “mission creep” that already seems to be undermining the efficacy and efficiency of an already overburdened review process [8]. Research ethics policies and the institutions that implement them are often accused of overregulating researchers and underprotecting research participants. While the initially benevolent intention of delineating groups for additional protections [9] was that of protecting those who are unable to protect themselves, like children, these regulations have also led many to argue that the increased burdens discourage important research from taking place [10-12]. Additional regulations have been shown to dissuade researchers from pursuing research on already underserved populations, which can increase the injustice of the current system [13, 14]. Thus, care is needed before the burdens of regulation are increased any further for certain populations.
Finally, and most importantly, an overly easy conflation of research vulnerabilities and increased protections threatens the already precarious balance in research between respecting autonomy and protecting those who cannot protect themselves. The Belmont Report itself acknowledges this uneasy tension: “Respect for persons incorporates at least two ethical convictions: first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection. The principle of respect for persons thus divides into two separate moral requirements: the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy” [7].
“Respect for persons” leaves researchers and research regulators with the challenging question of which vulnerabilities should entail extra protections and which should maintain respect for autonomous choices, even in a diminished state. Otherwise, some who lack control in their lives in many ways could be made to relinquish this control even further, raising the specter of new protections that threaten to burgeon out of control, often with restrictive and paternalistic consequences for those populations [15-17]. Distinguishing contingent weaknesses in the application of our current protections from situations where these protections cannot sufficiently protect people would stem the widespread tendency to use the vulnerabilities of those with a narrowed set of life and research options to take the decision to participate or not out of their hands completely.
To demonstrate how this simple suggestion can frame regulatory discussions in a more straightforward and fruitful manner, I will apply it to two vulnerable populations often suggested as candidates for additional regulatory protections: the cognitively vulnerable and the economically vulnerable. Following Kenneth Kipnis's useful taxonomy of research vulnerabilities, I will define a cognitively vulnerable person as lacking “the capacity to deliberate about and decide whether or not to participate in the study” [18]. I will define an economically vulnerable person as one who is “seriously lacking in important social goods that will be provided as a consequence of his or her participation in research” [18]. These social goods may include money, housing, medical care, childcare, burial benefits, opportunities to benefit the community, and even just medical information.3 I hope that this demonstration will suggest a similar model for assessing whether other populations are or are not suitable for additional protections.
Cognitive Vulnerability: Additional Regulations?
In the case of those who are cognitively vulnerable, the potential research harm is that because they lack the capacity to deliberate and decide, they can be enrolled in studies without their informed consent. People can be cognitively vulnerable in many different ways and for different reasons. The causes can differ from mental illness to brain injury, from diseases like Alzheimer's to an innate cognitive deficit, from lack of education to cultural differences. As a result of these diverse causes, cognitive vulnerability can be mild or severe, temporary or permanent, contingent or necessary.
I have argued that those who require additional protections are those for whom the currently required protections, like the informed consent process, cannot function properly even if they follow all the existing requirements. If the researcher either intentionally or inadvertently omits necessary information about the study, provides misleading information, or does not inform the research subject in a manner, language, or at a level that he or she can understand, then the subject is not adequately informed, but this does not mean that he or she cannot be adequately informed. For example, educational, cultural, temporal, and/or language barriers can often impede the ability of potential subjects to understand the information provided to them. In these cases, the cognitive vulnerabilities are contingent upon the appropriateness of the informed consent process to their challenges. By acquiring cultural information, hiring an interpreter, testing an informed consent process beforehand, and taking the time to educate the potential participant on the background needed to understand his or her choice, researchers can overcome these contingent barriers to comprehension. There are numerous and increasing resources for those who wish to improve the informed consent process for these populations from reputable sources, including the National Cancer Institute, the Centers for Disease Control, and the Agency for Healthcare Research and Quality. While some researchers do this already, the time, money, and prioritization of these activities are currently lacking in much of the research world. Empirically, it has been found that the informed consent process often does not function properly in these types of cases; but there is no theoretical reason why it cannot [19-22]. These adaptations are necessary components of a successful informed consent process as already required by existing regulations, and they should be emphasized as such.
Even if the researcher makes his or her best effort to put the information in an accessible form, this may not be enough for a subset of those who are cognitively vulnerable. If a potential participant's cognitive vulnerability is severe (adaptations cannot improve it sufficiently), permanent (waiting until an appropriate time is futile) and necessary (no medical, psychological, or educational recourse can alleviate it) then the informed consent process cannot function properly. This may be the case in situations of brain injury, coma, persistent dementia, and/or mental illness, but the burden of proof is on the researcher to show that this is the case [23]. Thus, when (and only when) this vulnerability is severe, permanent, and necessary should it be a prime candidate for additional protections, since existing requirements for a proper informed consent process cannot be fulfilled with this population.4
Economic Vulnerability: Additional Protections?
In their article, “Clinical Research with Economically Disadvantaged Populations,” Denny and Grady summarize the particular harms to which economically disadvantaged (or as they abbreviate them, ED) populations are vulnerable [25]. They discuss these harms under two categories: “vulnerability to impaired decision making” and “vulnerability to exploitation.” I will address these two harms in turn, analyzing the former as a worry about threats to the informed consent process in economically vulnerable populations (paralleling the discussion of cognitively vulnerable populations above) and the latter as a worry about the potential injustices to which research is more likely to subject these populations.
Research harm 1: Decisional Impairment
While many focus on the fact that historically abused study subjects have predominantly been poor, uneducated, and lacking in access to medical care (as in the infamous cases at Tuskegee, Willowbrook, and Holmesburg Prison), in all of these cases participants were intentionally deceived by researchers. This was a breach of the requirement for appropriate informed consent, and it would have been a breach even if the research subjects had been rich, educated white men. The relevance of their vulnerabilities lies in the researchers’ presumption of unaccountability and in the unwillingness or negligence of the higher echelons of the scientific community to call these researchers to task for their breach of existing research ethics standards, not in a lack of sufficient standards.5
It is clear that researchers who disregard regulations cannot be better controlled by adding more regulations for them to disregard. Rather, this history suggests that IRBs should be more vigilant in oversight and accountability in cases where economically vulnerable groups are involved. This may require something more akin to a Data and Safety Monitoring Board (DSMB), such as a Community Advisory Group or other governing body, to keep tabs on the research and to ensure that the regulations are being followed and that marginalized and dissenting voices are being heard.
The second proposed source of impaired decision making among the economically vulnerable is the potential participant's incapacity to comprehend the information. It would be difficult to argue that the African-American men who were harmed by the Tuskegee study, or the parents of the cognitively impaired children at Willowbrook, or the large number of ED research participants in developing countries were incapable of making choices in their own best interests. If some of them are mentally ill, either before the study or during it as a result of disease, then they require the special protections appropriate to the mentally ill (as suggested above), not protections for the economically vulnerable.
Further, it may be argued that due to the educational disadvantages that often come with economic disadvantages, these populations are not capable of understanding the study, even if they have the capacity. In the research ethics literature, one often finds associations between or even equating of economic vulnerabilities and educational vulnerabilities [25, 26]. Denny and Grady in particular discuss low education levels as the cause of decisional impairment in economically vulnerable populations. “The ED may be vulnerable to impaired decision making if low education levels lead them to enroll in research without fully understanding the risks” [25].
This connection between educational deprivations and economic deprivations can become pernicious when regulatory responses to those who are vulnerable due to their lack of education (cognitively vulnerable) become indistinct from regulatory responses to those who are vulnerable due to economic deprivations. The intuitive appeal of this conflation is rooted in an accurate sociological and political observation, namely, that those who are economically disadvantaged in a society are frequently without adequate access to educational opportunities as well. While noting that the same populations often come to manifest these two vulnerabilities, it is important not to elide the differences between the two vulnerabilities vis-à-vis the research enterprise.
If economically disadvantaged subjects do not understand their options because they lack education (or cultural competence or language fluency), then the appropriate response should be for researchers to recognize a cognitive vulnerability that is neither severe, permanent, nor necessary, and to educate themselves and their potential subjects so that effective communication can take place. Not only is this requirement already part of the existing informed consent regulations (if suitably implemented), but this vulnerability is also a product of a population's lack of education, not its economic disadvantages per se. Especially in our current economy, it is easy to see that those who are poor are not necessarily those who are uneducated.
There is one more possible reason why economic vulnerability is so often linked to worries about the capacity to give informed consent. Denny and Grady worry that in the case of economically vulnerable populations, “Clinical research may offer services or other goods so attractive that they impair decision making, causing the ED to irrationally disregard research risks” [25]. Here I will introduce the complex and contested concept of “undue inducement.” While never defined in U.S. federal regulations, Ezekiel Emanuel defines this concept as a situation where “individuals are offered some good that, against their better judgment, makes them assume substantial risks of harm that compromise their welfare” [26]. Denny and Grady articulate this type of inducement as “decision-impairing inducement” [25]. While these goods are most often understood as financial compensation, other goods of research, such as medical care, can provide this inducement as well. The key question is whether this type of inducement is undue, i.e., whether it renders the choice involuntary by impeding the reasonable judgment of particular populations.
So does high compensation prevent ED populations from making rational choices about whether to participate in research? Since I have already distinguished economic disadvantages from educational disadvantages, and both are conceptually distinct from inherent cognitive deficits, it should not be assumed that economically vulnerable populations are inherently incapable of weighing the situation and choosing the option that is in their best interest. This argument is supported by empirical evidence that the amount of compensation does not significantly affect the accuracy of people's risk assessment, even if it does affect their willingness to participate [28, 29].
By increasing the amount of compensation, researchers are not impairing the judgment of the potential subjects but rather shifting the weight of risks versus benefits for a particular population towards a different judgment. There is no reason to assume that if, at a certain amount of money or other incentives, people choose to take more risks, then they are making irrational judgments. Economically vulnerable people are motivated to take dangerous jobs, give blood, and join the military for money every day, and it is seen as an autonomous choice among their limited options. Why should research be any different?
Up to this point my position on undue inducement closely resembles that of Ezekiel Emanuel. Emanuel argues that “[o]ffering a poor person with no other options a good salary for reasonable work neither compromises voluntariness nor qualifies as an undue inducement even if it constitutes an irresistible offer because of the person's poverty” [27]. He argues that paying people to undertake research falls under the same category. While Emanuel's argument is convincing in denying undue inducement (i.e., the person can voluntarily choose “a good salary for reasonable work”), he does not recognize that impairing the informed consent process is not the only harm this situation can cause.
Research harm 2: Exploitation
Many suggest that economically vulnerable populations require additional protections not only because they are susceptible to impaired decision making, but also because of the potential for exploitation [26]. Unlike the former harm, which impairs the informed consent process, exploitation can take place even if the participant is fully informed, competent, and educated enough to understand the information, and voluntarily consents to the research. While the term “exploitation” is contentious in the literature, many agree that it turns on the unfairness of the proportion of risks and benefits to which an individual or a population is exposed [30, 31].
According to Alan Wertheimer's famous analysis, Party A exploits Party B when B receives an unfairly low level of benefits as a result of B's interactions with A [31].6 Exploitation usually, but not always, implies a power differential between those who unfairly benefit and those who unfairly sacrifice. There are several important implications of this definition. First, it limits the scope of potentially exploitative interchanges between actors. While life situations can be unfair, with some facing deprivations and health challenges that others do not, these are not necessarily cases of exploitation. Further, in order for an interaction to be just, it need not be equal; it need only be proportional. Consider a common example: if a person is stranded in the middle of a river and someone comes along and offers to sell that person a space on his boat for $25 (the cost of a seat on the boat in normal circumstances), the transaction is fair. If the boatman will only sell the seat for $1,000 because he knows that the stranded person has no other choice, this is exploitation.
In the case of research, there are two ways that a research subject can be exploited. The first is the exploitative potential of a subject desperate for benefit. Like the case of the boatman, a person may be willing to sacrifice to an unreasonable level in order to obtain a benefit. Similarly, a researcher, fearing low participation rates for a risky study, could entice poorer participants by offering desperately needed social goods. Insofar as the incentive can shift the balance between risks and benefits so that a person will consent to an excessively risky study, then it can be exploitative.7
Once again, the existing standards of research should be sufficient to eliminate this type of exploitation. Here, the regulatory standard that protects subjects is not the requirement of respect for persons through adequate informed consent, but the requirement of beneficence through a balanced risk-benefit ratio. Any trial that potentially poses extensive risk must be screened by an IRB specifically mandated to evaluate the balance of risks and benefits of a trial independently of any compensation [9]. If there are not enough direct benefits of a research program to balance out the risks, a review committee is required to prohibit it. The studies to which an economically vulnerable subject may be attracted are thus already screened, and any “excessively risky” study (the worry of exploitative undue inducement) should not even be an option. No matter how much money or other goods are offered, research subjects should never have the option of enrolling in an excessively risky study.8
The vulnerability of those who face economic deprivations illuminates just how crucial this IRB mandate is. Insofar as IRBs do not follow this mandate and take compensation into account in weighing the acceptability of research risks, they put economically vulnerable people at risk of exploitation. IRBs should insist that any study that passes review have an acceptable risk-benefit ratio even if no compensation were offered.
The other type of exploitation occurs when a particular research population bears most of the burden of research while another population reaps most of the benefits. Although the risks and benefits of research to the research population itself may have a satisfactory risk-benefit ratio, and the research population may have adequately consented to the study, if another population not represented in the study disproportionately benefits, this study is exploitative.
The question in this case is not whether members of economically vulnerable populations can make reasonable judgments about participating in research in the face of high levels of compensation, but rather, whether the situation of choice set up by the researchers is unjust or unfair at a societal level. In the current U.S. health care system the innovative treatments and interventions that result from research are usually utilized first, and more prevalently, by the more well-to-do in our society. Thus, if the more economically impoverished members of the U.S., and especially those abroad, constitute the largest number shouldering the burden of research while the more economically privileged receive most of the benefits of research, this constitutes an exploitative situation.
Here existing regulations once again enter the picture. The Belmont Report demands that research be just: the selection of the subject population must be appropriate, and the benefits and burdens of research to both individuals and classes of persons must be distributed equitably. The former demand is operationalized in regulation 45 CFR 46.111, which requires equitable subject selection. This requirement was augmented further in 1994 by the “NIH Guidelines on the Inclusion of Women and Minorities as Subjects in Clinical Research” [32].
There are two major weaknesses in the existing regulations surrounding justice, which can be clearly seen in the case of economically vulnerable populations. One problem is that while risks and benefits to individuals are well evaluated by existing regulations and IRBs, risks and benefits to populations are not [33, 34]. The second problem is that although the Belmont Report subdivides the principle of justice into a justice of subject selection and a justice of the distribution of research benefits among populations, the latter ethical requirement does not manifest in any corresponding regulatory requirements.
Here, additional regulations are required, but not as additional protections for any particular vulnerable group. Rather, my discussion has brought out how the ethical requirement for distributive justice should be manifested in regulatory requirements for how researchers choose their research populations. They can either recruit from the populations that will receive the benefits of the research directly, or recruit from other populations while taking on a corresponding obligation to make sure that the benefits of the research accrue to them. For example, regulations could require that research in the developing world be accompanied by corresponding obligations to provide developing world populations with access to the benefits of the research. If researchers cannot fulfill this obligation, then they cannot do research with these populations [16, 30, 35].
Rather than add further regulations for economically vulnerable populations to “protect” them from choosing the best of their limited options, their autonomous choice should be respected but with better implementation of the “justice” principle as already stated in the Belmont Report. We should ensure that the communities from which researchers recruit participants, whether rich or poor, will proportionately receive any benefits that accrue from the research, whether in the form of access to resulting interventions or treatments or, in cases of a negative outcome, at least to information. Rather than additional restrictions on research with economically disadvantaged populations, this would require additional regulations regarding distributive justice in general, which should be applied across the board but will specifically benefit economically vulnerable populations.9
Conclusion
In this paper, I suggested an analytic process by which to ascertain whether particular vulnerable populations need further protections or better implementation of existing ethical principles and regulatory requirements. I applied this process to two vulnerable populations: cognitively and economically vulnerable populations. I concluded from this process that a subset of the cognitively vulnerable (those who are severely, permanently, and necessarily impaired) require additional regulations akin to those introduced for children, prisoners, and pregnant women and their fetuses.
The rest of the cognitively vulnerable, as well as the economically vulnerable, should be protected from the research harms to which they are more vulnerable by better implementation of existing regulations. I have suggested several ways in which these vulnerabilities indicate that current regulations should be reinforced and strengthened. Unless or until the informed consent process is more adequately implemented, the IRB review process is more carefully conducted, and the Belmont Report's distributive justice principle is operationalized, the economically disadvantaged will remain particularly vulnerable to the harm of exploitation in research.
Acknowledgments
The author would like to thank Ann Jeschke for her tireless editing and formatting assistance. The author also acknowledges the financial support from grant UL1 RR024992 from the NIH-National Center for Research Resources.
Footnotes
1. This topic is the focus of a Target Article and accompanying Open Peer Commentaries in the August 2004 issue of the American Journal of Bioethics [3].
2. This is also frequently construed as minimizing harms and maximizing benefits or as a ratio to which a “reasonable person” would agree.
3. I will label this vulnerability “economic” as opposed to Kipnis's “allocational” because it is plausible to consider education a social good, and I want to distinguish economic from educational vulnerability for reasons that will emerge later.
4. Designating cognitive incapacity in the research context is beyond the scope of this paper, but extensive work has been done on this topic. See the 2002 issue of the American Journal of Geriatric Psychiatry for a representation of these discussions [24].
5. Although the federal regulations requiring informed consent were not adopted until 1974, both the Nuremberg Code and the Declaration of Helsinki were in place long before, and both had specific requirements for adequate informed consent.
6. Other notions of exploitation refer to instrumentally using other people or taking advantage of people in a way that undermines their autonomy.
7. Although many argue that compensation should be distinguished from the benefits of a study (manifest most clearly in the separate sections for benefits and compensation in informed consent forms), this does not entail that those consenting to the studies recognize the distinction.
8. This, of course, presupposes that the review mechanisms work properly, which is not always the case. But adding more protections to compensate for the failings of existing ones, rather than providing better oversight to make sure that protocols are evaluated properly in the first place, seems counterproductive. It also implies that if one should not be allowed to enroll in a risky trial for money, one should not be allowed to enroll in it for altruism's sake either; the same threshold should apply to both.
9. This suggestion may also have the benefit of reducing the price difference between local and international human subjects research, since research funders would also have to allocate funds for resource distribution. This may result in a more consistent approach to ethical research in both local and international contexts, a balance that is severely lacking at present.
References
- 1. Levine Carol, Faden Ruth, Grady Christine, Hammerschmidt Dale, Eckenwiler Lisa, Sugarman Jeremy. The limitations of “vulnerability” as a protection for human research participants. American Journal of Bioethics. 2004;4(3):44–49. doi: 10.1080/15265160490497083.
- 2. Ruof Mary C. Vulnerability, vulnerable populations, and policy. Kennedy Institute of Ethics Journal. 2004;14(4):411–425. doi: 10.1353/ken.2004.0044.
- 3. McGee Glenn, editor. American Journal of Bioethics. 2004;4(3).
- 4. Moreno Jonathan, Caplan Arthur L., Wolpe Paul Root. Updating protections for human subjects involved in research. Journal of the American Medical Association. 1998;280(22):1951–1958. doi: 10.1001/jama.280.22.1951.
- 5. Rose Susan L., Pietri Charles E. Workers as research subjects: a vulnerable population. Journal of Occupational and Environmental Medicine. 2002;44(9):801–805. doi: 10.1097/00043764-200209000-00001.
- 6. Stineman Margaret G., Musick David W. Protection of human subjects with disability: guidelines for research. Archives of Physical Medicine and Rehabilitation. 2001;82(12S):9–14.
- 7. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont report: ethical principles and guidelines for the protection of human subjects of research. US Government Printing Office; Washington, DC: 1978.
- 8. White Ronald F. Institutional Review Board mission creep: the Common Rule, social science, and the nanny state. Independent Review. 2007;11(4):547–564.
- 9. U.S. Code of Federal Regulations. Protection of human subjects. 2009. 45 CFR 46. http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html. Accessed Jan. 9, 2013.
- 10. Sun M. Inmates sue to keep research in prisons. Science. 1981;212(4495):650–651. doi: 10.1126/science.7221550.
- 11. Epstein Steven. Inclusion: the politics of difference in medical research. University of Chicago Press; Chicago: 2007.
- 12. Mastroianni Anna, Kahn Jeffrey. Swinging on the pendulum: shifting views of justice in human subjects research. Hastings Center Report. 2001;31(3):21–28.
- 13. Keith-Spiegel Patricia, Koocher Gerald P. The IRB paradox: could the protectors also encourage deceit? Ethics & Behavior. 2005;15(4):339–349. doi: 10.1207/s15327019eb1504_5.
- 14. Martinson Brian C., Anderson Melissa S., De Vries Raymond. Scientists behaving badly. Nature. 2005;435:737–738. doi: 10.1038/435737a.
- 15. Edwards SJL, Kirchin S, Huxtable R. Research ethics committees and paternalism. Journal of Medical Ethics. 2004;30(1):88–91. doi: 10.1136/jme.2002.000166.
- 16. Garrard E, Dawson A. What is the role of the research ethics committee? Paternalism, inducements, and harm in research ethics. Journal of Medical Ethics. 2005;31(7):419–423. doi: 10.1136/jme.2004.010447.
- 17. Miller Franklin G., Wertheimer Alan. Facing up to paternalism in research ethics. Hastings Center Report. 2007;37(3):24–34. doi: 10.1353/hcr.2007.0044.
- 18. Kipnis Kenneth. Vulnerability in research subjects: a bioethical taxonomy. In: Ethical and policy issues in research involving human participants. Vol. 2. National Bioethics Advisory Commission; Bethesda: 2001. p. G-1.
- 19. Paasche-Orlow Michael K., Taylor Holly A., Brancati Frederick L. Readability standards for informed-consent forms as compared with actual readability. New England Journal of Medicine. 2003;348(8):721–726. doi: 10.1056/NEJMsa021212.
- 20. Joffe Steven, Cook E. Francis, Cleary Paul D., Clark Jeffrey W., Weeks Jane C. Quality of informed consent in cancer clinical trials: a cross-sectional survey. Lancet. 2001;358(9295):1772–1777. doi: 10.1016/S0140-6736(01)06805-2.
- 21. Marshall Patricia A. Informed consent in international health research. Journal of Empirical Research on Human Research Ethics. 2006;1(1):25–42. doi: 10.1525/jer.2006.1.1.25.
- 22. Sudore Rebecca L., Landefeld C. Seth, Williams Brie A., Barnes Deborah E., Lindquist Karla, Schillinger Dean. Use of a modified informed consent process among vulnerable patients. Journal of General Internal Medicine. 2006;21(8):867–873. doi: 10.1111/j.1525-1497.2006.00535.x.
- 23. DuBois James M., Beskow Laura, Campbell Jean, Dugosh Karen, Festinger David, Hartz Sarah, James Rosalina, Lidz Charles. Restoring balance: a consensus statement on the protection of vulnerable research participants. American Journal of Public Health. 2012;102(12):2220–2225. doi: 10.2105/AJPH.2012.300757.
- 24. Jeste Dilip V., editor. American Journal of Geriatric Psychiatry. 2002;10(2).
- 25. Denny Colleen C., Grady Christine. Clinical research with economically disadvantaged populations. Journal of Medical Ethics. 2007;33(7):382–385. doi: 10.1136/jme.2006.017681.
- 26. Stone T. Howard. The invisible vulnerable: the economically and educationally disadvantaged subjects of clinical research. Journal of Law, Medicine & Ethics. 2003;31(1):149–154. doi: 10.1111/j.1748-720x.2003.tb00065.x.
- 27. Emanuel Ezekiel J. Undue inducement: nonsense on stilts? American Journal of Bioethics. 2005;5(5):9–13. doi: 10.1080/15265160500244959.
- 28. Bentley JP, Thacker PG. The influence of risk and monetary payment on the research participation decision making process. Journal of Medical Ethics. 2004;30(3):293–298. doi: 10.1136/jme.2002.001594.
- 29. Halpern Scott D., Karlawish Jason H. T., Casarett David, Berlin Jesse A., Asch David A. Empirical assessment of whether moderate payments are undue or unjust inducements for participation in clinical trials. Archives of Internal Medicine. 2004;164(7):801–803. doi: 10.1001/archinte.164.7.801.
- 30. Emanuel Ezekiel J., Wendler David, Killen Jack, Grady Christine. What makes clinical research in developing countries ethical? The benchmarks of ethical research. Journal of Infectious Diseases. 2004;189(5):930–937. doi: 10.1086/381709.
- 31. Wertheimer Alan. Exploitation. Princeton University Press; Princeton: 1999.
- 32. Varmus Harold. NIH guidelines on the inclusion of women and minorities as subjects in clinical research. Federal Register. 1994;59(59):14508–14513.
- 33. Weijer Charles, Emanuel Ezekiel J. Protecting communities in biomedical research. Science. 2000;289(5482):1142–1144. doi: 10.1126/science.289.5482.1142.
- 34. Weijer Charles, Goldsand Gary, Emanuel Ezekiel J. Protecting communities in research: current guidelines and limits of extrapolation. Nature Genetics. 1999;23:275–280. doi: 10.1038/15455.
- 35. Benatar Solomon R. Reflections and recommendations on research ethics in developing countries. Social Science & Medicine. 2002;54(7):1131–1141. doi: 10.1016/s0277-9536(01)00327-6.
