Author manuscript; available in PMC: 2019 Mar 1.
Published in final edited form as: J Law Med Ethics. 2018 Mar 27;46(1):52–63. doi: 10.1177/1073110518766008

Avoiding Exploitation in Phase I Clinical Trials: More than (Un)Just Compensation

Matt Lamkin 1, Carl Elliott 1
PMCID: PMC6047746  NIHMSID: NIHMS975320  PMID: 30026654

In the early 1960s, two prank comedy pioneers approached a stranger on the street and made him an unusual offer. Posing as the hosts of a radio program called “Job Opportunities,” Jim Coyle and Mal Sharpe explained that they needed an employee for a new tourist attraction.1 In this attraction, the employee would be confined to a flame-filled pit where he would try to fight off bats, snakes, and maniacs. “What we’re trying to do, really, is create a living hell,” Coyle explained. “Have people pay admission; they look down in the pit; they see you down there; the flames are all around you. There will be four maniacs with you and you’ve got to control them.” Then Sharpe asked the prospective employee, “Have you ever worked with maniacs before?” “No, never,” the man said.

In exchange for spending twelve hours a day fighting maniacs, the employee would be paid $46 a week, plus one meal a day — bat meat, which the employee would be expected to grill in the flames. The job would carry some risks, Coyle explained. “I had an employee before, and I will tell you this directly and honestly, he was a little careless and incautious — I gave him specific instructions — and he perished,” Coyle says. “Now I want you to understand this before we get any further. He did perish.” The man was undeterred. “Yeah, I’d like to try it,” he said, sticking with his decision even when Sharpe reminded him that the death index for the job was 98%. “In other words,” Sharpe said, “if you took this job the odds would be 98% in favor of your perishing.” The man replied, “It’s a chance. I like to take chances.”

Part of what made this offer to fight maniacs unfair is the same thing that made it funny. It is not just that Coyle and Sharpe offered the man an absurdly dangerous job, but that the job paid practically nothing. Even if the job of maniac-fighter could be made reasonably safe, fairness would demand a higher wage. Surely it would be better for Coyle and Sharpe to offer the man $4,600 a week rather than only $46.

To most of us, this sounds like common sense. But when it comes to paying subjects for taking part in medical research, many ethicists argue precisely the opposite. The dominant view is that it is better to keep payment low, because larger sums might tempt prospective subjects to take risks to their health.2 On this view, it would be better for Coyle and Sharpe to pay the man $46 because a larger sum might constitute an “undue influence” and undermine the voluntariness of his consent.

The issue is most acute in Phase I clinical trials, which are typically done to determine whether experimental drugs are safe. Subjects in Phase I trials must often check into a clinical trial site for several weeks, where their diet, vital signs, and health status will be closely monitored as they are given an experimental drug. Sometimes they must undergo invasive procedures, such as endoscopies or lumbar punctures. Although most Phase I trials are relatively safe, some of them have resulted in disaster, such as the notorious TeGenero TGN1412 study at Northwick Park Hospital in England, which sent six healthy, paid volunteers into multisystem organ failure in 2006.3 Since subjects in Phase I trials get no medical benefit from the studies, their primary motivation for enrolling is the payment.4 IRBs typically attempt to keep payment to subjects low, on the grounds that money might unduly influence the subjects, yet the subjects themselves naturally feel they deserve to be paid well.5

While the argument for low payment may sound unfair, there is a certain logic to it. Many research studies involve serious risks and discomforts, and prospective subjects are often financially desperate. If a desperate subject reluctantly chooses to enroll in a high-paying but risky study in order to stave off financial ruin, it is tempting to characterize the choice as less than fully voluntary. Accordingly, research ethicists introduced the concept of “undue influence” to capture the notion that high payment constitutes a threat to voluntary consent.

Yet the view that high payment undermines voluntariness has proved confusing in theory and unworkable in practice. Considerable effort has failed to yield a shared, coherent understanding of the effect of payment on voluntariness, leaving IRBs and researchers with little guidance regarding how large an offer is “undue.”6 More fundamentally, the mandate to avoid undue influence gives rise to a different kind of problem: keeping payments low increases the potential for subjects to be exploited, by offering them inadequate compensation in return for the burdens they assume.7

Other scholars have suggested that the relationship between researchers and paid research subjects is fundamentally a market transaction in which a subject exchanges his or her labor for money.8 Market transactions are generally governed by the law of contracts, which offers a perspective that is diametrically opposed to the standard view in research ethics. Contract law disavows the notion that “excessive” compensation can undermine voluntariness, viewing a transaction as improper only when one party gains too little from it. In this view, compensation can be too low — rendering the transaction suspect — but it can never be too high. From this perspective, raising the amount of payment to research subjects is more likely to make the transaction fair.

Although the contract law approach presents a more coherent alternative to voluntary consent, merely raising compensation is not sufficient to protect research participants from exploitation — just as the “living hell” scenario might remain exploitative even if the pay were much higher. Research subjects may be exploited not just by inadequate pay, but by being exposed to excessive risks, treated disrespectfully, subjected to degrading conditions, deprived of the medical treatments they help make possible, or denied compensation and medical care for research-related harms. Yet many of these concerns are largely unaddressed by research guidelines and are generally ignored in the ethics literature.

What is needed is a richer, more expansive account of research ethics that looks beyond the voluntariness of subjects’ consent to protect participants from exploitation. In the account of exploitation we offer here, it is not enough for research oversight bodies to embrace the view from contract law that higher compensation promotes the welfare of research subjects. They must also require study sponsors to provide health insurance and compensation to cover injuries incurred through their participation. In addition, they must implement far more effective systems to minimize risks to subjects. More broadly, sponsors must conduct research in ways that accord participants dignity and respect.

Research Ethics: “Excessive” Offers Threaten Voluntariness

Voluntary consent has long been considered a foundational requirement for ethical research recruitment. The Nuremberg Code, the Declaration of Helsinki, and the Belmont Report all flatly condemn coercion, or obtaining consent through “an overt threat of harm.”9 But when the Belmont Report was issued in 1978, it introduced an additional concern of a very different nature: “undue influence,” which the authors framed as offering “an excessive, unwarranted, inappropriate or improper reward or other overture in order to obtain compliance.”10

Faden and Beauchamp built on this foundation in their early work, A History and Theory of Informed Consent.11 In their view, an offer of a reward or incentive in exchange for research participation compromises the voluntariness of consent if the subject finds the offer both “unwelcome” and difficult to resist.12 To illustrate the potential for unwelcome offers to undermine autonomy, the authors offer the example of a financially desperate woman named Mary. Researchers offer Mary $25 per day in exchange for her participation in research involving repeated, painful medical procedures.13 Mary is terrified of participating in the research, but feels she must accept because she badly needs the money. In Faden and Beauchamp’s analysis, Mary’s consent is not sufficiently voluntary; her desperation and the researcher’s offer place her substantially under the researcher’s control.

This conception of the way payment affects voluntariness has been enormously influential, finding its way into the guidelines governing research on human subjects and spawning a vast literature seeking to define the circumstances under which offers may unduly induce research participation.14 However, supporters of this conception of voluntariness have been unable to identify the precise contours of undue influence. Not every inducement that influences an individual’s decision to participate in research is improper; the ethical issue only arises when the influence is “undue.”15 Yet the Belmont Report itself “does not specify when or for what reasons an offer should be considered excessive, unwanted, inappropriate or improper.”16 More than three decades later, many observers lament the lack of progress on this issue in the literature.17

The widespread adoption of “undue influence” has distorted the debate over paid studies in three important ways. First, by framing the ethical issue as one of payment rather than selection of subjects, it locates the problem solely in the size of the paycheck while excluding the wider circumstances that are often far more important to the subject’s decision — in particular, the background conditions of poverty or desperation that might lead someone to accept a bad offer. In the example offered by Faden and Beauchamp, for instance, it is surely Mary’s financial desperation that is the problematic issue, rather than the size of her $25 payment. This kind of desperation appears repeatedly in the interviews of 178 Phase I trial subjects conducted by Cottingham and Fisher, who found that many subjects dismissed their concerns about study risks because of their extreme financial need. “I guess the desperation far outweighed the concerns,” one subject said. “You know, when someone’s desperate, like they are not even gonna think twice, so I guess that’s where I was at.”18

Second, the insistence on avoiding undue influence by lowering payment undermines the core ethical principle of justice in the selection of subjects, which demands that those who bear the risks and burdens of research should be in a position to share in its benefits.19 Requiring that payments be kept low virtually ensures that the subject population in paid trials will be disproportionately made up of people with lower incomes.20 This concern is borne out by a substantial body of research indicating that paid trials largely attract minority men with low incomes and low rates of health insurance.21 Indeed, drug companies and contract research organizations (CROs) have repeatedly drawn the ire of critics by targeting homeless people and undocumented immigrants to serve as research subjects.22 As a result of these recruiting practices, the very people who test drug safety are less likely to have access to medications the research may help produce.23

Third and most importantly, framing the problem as “undue influence” concentrates attention solely on issues surrounding the voluntariness of a subject’s consent while ignoring the question of whether the offer is fair. Not every ethical concern can be shoehorned into a worry about autonomy. The fact that a competent subject has voluntarily accepted an unfair offer does not make the transaction ethically sound. Competent subjects can make fully informed, rational, and voluntary decisions to enter into exploitative studies simply because they are desperate and taking part in the study is their least bad option.

Forced to operate within this narrow framework, many commentators have struggled to reconcile a desire to protect vulnerable subjects with a conceptual apparatus focused on the voluntariness of the subjects’ consent. Oversight bodies have been unable to offer meaningful guidance as to what amount of compensation constitutes an undue influence.24 The Office for Human Research Protections’ IRB Guidebook frankly acknowledges that “[f]ederal regulations governing research with human subjects contain no specific guidance for IRB review of payment practices.”25 Indeed, OHRP admits that “[o]n a practical level, it is probably impossible for an IRB to determine what amount of money or type of reward would unduly influence a particular individual to accept a given degree of risk.”26 Thus it is left to IRB members themselves to interpret when an offer is excessive, improper, or undue.27

The absence of effective guidance has produced mass confusion among researchers and IRBs. Most research entities have no written policies regarding payment practices, and most written policies that do exist “do not describe how investigators or IRBs should determine when money is ‘undue.’”28 A 2012 survey of IRB members and research ethics professionals found that 80% believed that merely offering payment “constitutes undue influence simply because it motivates someone to do something they otherwise would not.”29 Nearly two-thirds (65%) believed this kind of payment constituted not just undue influence, but full-blown coercion.30

A rule that no one knows how to abide by — including the entity promulgating it — cannot be effective, and is probably defective. In our view, the difficulty encountered in crafting standards for avoiding undue inducement is a symptom of flaws in the underlying conceptual premises.

Contract Law: Higher Compensation Enhances Fairness

When subjects are paid to participate in research, they enter into agreements that broadly resemble employment contracts.31 Contract law requires voluntariness as a condition of a valid agreement, and accordingly the law imposes protections against coercion and excessive pressure. Unlike research guidelines, however, which warn against offering participants too much, contract law looks with skepticism only at agreements in which one side seems to gain too little.32

Although contract law recognizes “undue influence” as undermining voluntariness, that legal concept does not apply to overly generous offers. Rather, the essence of this legal claim is “excessive pressure,” including such features as “discussion of the transaction at an unusual or inappropriate time,” “extreme emphasis on untoward consequences of delay,” and “the use of multiple persuaders by the dominant side against a single servient party.”33 In other words, the contract law view of undue influence refers to situations in which one party obtains another’s agreement by badgering or wearing down the other or by taking advantage of the other’s weakness of mind. But inducing agreement by simply offering the other party very generous compensation would never signal excessive pressure.

On the contrary, in some cases contract law views compensation that is too low as evidence that a party may not have entered into an agreement voluntarily. For example, the “unfairness” of an agreement can serve as potent evidence that a contract was the result of improper threats.34 Likewise courts consider “inadequacy of consideration” — meaning one party received too little value from the transaction — an important factor in deciding whether to invalidate a contract as “unconscionable.”35 But nowhere does the law suggest that “excessive” consideration threatens voluntariness.36 Rather, the more compensation a party receives, the less likely a court is to invalidate the agreement on the grounds of involuntariness.

Accordingly, contract law would clearly bless as voluntary the consent given by Mary in the scenario Faden and Beauchamp describe as a paradigmatic case of undue influence. Under their account, Mary’s consent is not sufficiently voluntary because she finds the offer unwelcome and cannot easily resist it because she needs the money.37 By contrast, contract law makes no distinction between “welcome” and “unwelcome” offers. Nor does it ask whether offers are “irresistible” (unless the offer actually represents a veiled threat, such as the occasion in The Godfather when Don Corleone made Johnny Fontane “an offer he couldn’t refuse”). Absent the kind of badgering that constitutes “excessive persuasion,” the mere offer of money — present in nearly every contract — clearly could not qualify as undue influence.

Nor would Mary’s desperate financial straits undermine the conclusion that Mary’s consent was voluntary. While a court may invalidate a contract on the basis of a party’s “economic duress,” this doctrine only comes into play if the defendant has committed a “wrongful act,” such as “the assertion of a claim known to be false” or “a bad faith threat to breach a contract.”38 “Merely being put to a voluntary choice of perfectly legitimate alternatives” — such as an opportunity to participate in research in exchange for money, or to decline — “is the antithesis of duress.”39 The researcher’s offer of payment to Mary in exchange for her research participation clearly does not constitute a “wrongful act” that would invalidate her consent, even if her refusal to accept would leave her destitute.

Most importantly for present purposes, the larger the benefits offered to Mary in exchange for her consent, the less likely a court would be to find duress or undue influence.40 Rather than undermining Mary’s voluntariness, courts generally view larger offers as enhancing the fairness of agreements, making it more likely that an individual willingly chose to enter into the contract. It is low payment that is more likely to raise questions of unfairness.

The approach to compensating research subjects suggested by contract law provides IRBs with guidance that is far more coherent than the dominant view in research ethics. Rather than grappling with what level of payment might cause subjects to accept an offer that they do not “want to want,” IRBs should adopt a much easier rule of thumb that higher pay is better for subjects than lower pay. And while raising compensation may not be sufficient on its own to ensure fair selection of research participants (as discussed below), discarding concerns about undue inducement would remove a requirement that seems to all but ensure that participating in phase I trials is attractive primarily to financially desperate people.

Exploitation: More Than (Un)Just Compensation

Some scholars have understandably embraced a view of compensation similar to that of contract law, arguing that subjects should be paid whatever the market will bear.41 Yet although abandoning concerns about excessive payments would benefit research subjects, it is by no means sufficient to protect them from exploitation. For evidence one need look no further than the poor industrial working conditions that prevailed in the United States in the early twentieth century, when the market operated largely free from government interference. This laissez-faire approach to the labor market was endorsed by the United States Supreme Court in Lochner v. New York, in which the Court struck down a state law that limited the number of hours bakers were allowed to work.42 The Court concluded this regulation violated due process by interfering with citizens’ “freedom of contract.” In the years that followed, the Court rejected multiple additional attempts by federal and state governments to protect workers, including laws imposing minimum wages, restricting the labor of children, and enshrining workers’ rights to join labor unions.43 This unfettered freedom of contract produced sweatshops, child labor, and unsafe working conditions, not to mention subsistence wages. The regulations that today govern wages, hours, and workplace safety were put in place specifically to combat the exploitative conditions that prevail when parties are left “free” to contract regarding the terms of labor.

In our view, protecting research subjects requires an understanding of exploitation that extends well beyond what subjects are paid. Exploitation typically involves taking unfair advantage of another person, often someone in a position of vulnerability.44 For example, few would dispute that the Public Health Service study at Tuskegee was exploitative: Public Health Service doctors used the offer of burial insurance to lure impoverished, uneducated black men with syphilis into a deceptive study where they would get no treatment for a dangerous illness.45 Although exploitation can involve coercion and threats, a person might also willingly — even eagerly — agree to an exploitative offer, simply because the unfair offer is still superior to her other choices.46 This helps explain why many poor people are willing to enroll in paid clinical trials, irrespective of their potential risks and discomforts.47

Wertheimer calls transactions such as these “mutually advantageous exploitation,” in order to distinguish them from cases of “harmful exploitation.”48 Using the threat of involuntary commitment to coerce a mentally ill patient into a dangerous study would be “harmful exploitation.”49 By contrast, paid clinical trials can represent “mutually advantageous exploitation.” Rather than being coerced to participate, subjects join because they expect to benefit from the transaction — even if they are being taken advantage of.

Whether a mutually advantageous transaction counts as exploitative depends on whether the transaction is fair. Price gouging, for instance, is widely considered exploitative; a truck driver who offered to tow an injured, stranded driver from an isolated snowbank for a price of $10,000 would be taking unfair advantage of the driver.50 But the ethics of many other mutually advantageous transactions — such as paid clinical trials — are highly contested.

Some bioethicists who concede that some trials are potentially exploitative mistakenly believe that the remedy is simply to increase payment. Just as fairness demands that workers in dangerous jobs be rewarded with higher payment, they argue, so too should research subjects be rewarded for longer, riskier, and more unpleasant studies. As Emanuel writes, “So when one is tempted to charge ‘undue inducement’ because of too many poor people enrolling and the possibility of exploitation, the response should be to increase the inducement.”51

Yet merely increasing payment does not ensure that an offer is fair. Just as there are many ways for sweatshop labor to be unfair to workers apart from low wages, there are many ways for research studies to be exploitative apart from inadequate compensation. For instance, a research study might be exploitative because it exposes vulnerable subjects to conditions that are excessively dangerous, demanding, painful, or degrading. Simply paying those subjects more money would not be sufficient to ensure that they are being treated fairly. The solution, rather, is to fix the conditions that make the transaction unfair.

In our view (as described below) many research studies in the United States are currently exploitative and would remain so even if compensation were increased. Making those studies fair will require significant changes to the current oversight system and the structure of many research studies.

Compensating Subjects for Research Harms

First, what constitutes a fair bargain for research subjects is not easily determined in advance. It depends crucially on how well or poorly the study goes. If participation entails three weeks of inconvenience and a few unpleasant medical procedures, then $6,000 for a Phase I trial might be a fair bargain. But if a subject suffers a devastating injury that results in an enormous hospital bill and prevents him or her from ever working again, $6,000 seems grossly inadequate. To prevent exploitation of poor subjects, sponsors must guarantee fairness in such worst-case scenarios by promising to pay for medical expenses for injured subjects and compensating subjects for suffering and lost income.

In nearly every developed country, such arrangements are the norm.52 It is almost universally agreed that research sponsors have an ethical obligation to take care of injured subjects, and sponsors in most countries are required to buy insurance or agree to indemnify injured research subjects before research can begin. The lone exception is the United States, where sponsors have no legal obligation to compensate injured subjects, even if the research that produced the injury was dangerous, deceptive, or medically worthless.53 While there are no surveys of private sponsors, a 2005 study found that only 16% of academic medical centers in the United States had a policy obligating them to pay the medical bills of subjects injured in their trials.54 Not a single center compensated injured subjects or their families for lost wages or suffering. A 2012 study found that only 3.8% of American research institutions guaranteed compensation for injured subjects, while over 51% refused to pay any compensation whatsoever.55 Just over 8% allowed for the possibility of compensation at the discretion of the institution, while 36.9% offered compensation only with certain conditions (such as a prior agreement requiring the research sponsor to pick up the bill).56

Understanding the Risks of Research Participation

Second, in order for research subjects to make an informed decision about what constitutes a fair bargain, they have to know what sort of risks they are taking. Ensuring comprehension among participants would be challenging under the best of circumstances in a country in which nearly half of all adults have only marginal health literacy.57 But the circumstances of Phase I trials make sound decision-making especially difficult. According to sociologist Jill Fisher, the way Phase I trials are conducted and discussed by trial staff (for instance, calling injuries “AEs” rather than injuries) leads subjects to perceive trials as much less risky than they are. Fisher calls this “the banalization of risk.”58

But the deeper problem is that our research oversight system makes it very difficult to get good information about the risks of Phase I trials. No agency monitors and tabulates injuries in these trials. The results of Phase I trials are rarely published; in fact, federal regulations do not even require these trials to be registered on ClinicalTrials.gov.59 The failure to register these trials appears to violate the Declaration of Helsinki, which requires that “[e]very research study involving human subjects must be registered in a publicly accessible database before recruitment of the first subject.”60

It is no easier to get reliable information about whether a particular clinical investigator or trial site is reputable. The FDA inspects only about 1% of clinical trial sites, and the Office for Human Research Protections does not oversee the privately funded trials that are most likely to offer subjects payment.61 As a result, it is very hard for prospective subjects to judge whether the risk they are taking is significant or negligible.

Of course, the very nature of Phase I trials means there will always be some uncertainty regarding risk, and what little evidence is available suggests that in the aggregate Phase I trials are relatively safe.62 But for a prospective research subject weighing a particular clinical trial, what is needed is not aggregate data, but rather information specific to the trial in question. Do certain classes of drugs have poorer safety records than others? Is it more dangerous to enroll in a trial of a new biologic? How much riskier are first-in-human trials than later-stage trials? In a properly regulated workplace, these questions would not be so difficult to answer.

Some bioethicists would rather leave risk assessment in the hands of oversight bodies, rather than research subjects. Emanuel has suggested that if an IRB approves a trial, then it is by definition safe enough to satisfy any concerns about subjects being tempted into taking excessive risks. He writes, “This means inducing a person to enroll in an approved trial, even from poor judgment because of a high incentive, cannot lead to excessive risks and is not an ethical worry.”63

Yet virtually every research scandal involving American institutions over the past three decades has involved studies that were previously approved by IRBs: the death of Jesse Gelsinger at the University of Pennsylvania; the schizophrenia treatment withdrawal study at UCLA; the Fred Hutchinson Cancer Research Center blood cancer scandal; the psychosis challenge studies at Yale, Cincinnati, NIMH and elsewhere; and the scandals involving Dan Markingson and Robert Huber at the University of Minnesota, to name only a few.64 In fact, in many cases (such as those at Minnesota and the Fred Hutchinson Cancer Research Center) the IRB did not fully concede substantial wrongdoing even after the scandal emerged.65

Paid studies are no different. Many notorious recent clinical trial disasters have taken place in paid Phase I trials: the death of Nicole Wan at the University of Rochester Medical Center in 1996; the death of Ellen Roche in a hexamethonium study at Johns Hopkins University in 2001; the suicide of Traci Johnson in a duloxetine trial at Eli Lilly laboratories in 2004; the TGN1412 trial at a Parexel trial site in Northwick Park Hospital in 2006; the death of Walter Jorden in an antipsychotic study at CRI Worldwide in New Jersey in 2007; and the BIA 10-2474 trial at the Biotrial laboratory in France, which left one person dead and five others hospitalized in 2016. All of these studies were approved by IRBs or research ethics committees that judged the payment appropriate for the risks.66

To make this point is not to suggest that decisions about the appropriate level of risk should simply be left in the hands of informed research subjects. Nor is it to suggest that IRBs should be any less vigilant about assessing risk. Our point is simply that IRBs cannot always be trusted to ensure that subjects are never enrolled in unduly risky studies. In fact we would argue, along with many others, that the current oversight system is far too porous and conflict-ridden to warrant the trust that many bioethicists appear to believe it deserves.67

Protecting Subjects from Degrading Treatment

Third, arguably some transactions are exploitative not because workers are underpaid for the risks they assume, but because the transaction itself is degrading. Desperate people will endure all sorts of degradations in exchange for a paycheck, from racial insults to sexual humiliation, and paying them well does not mean they are not being exploited.

A prominent defender of this view is Ruth Sample, who argues that exploitation involves “interacting with another being for the sake of advantage in a way that degrades or fails to respect the inherent value in that being.”68 Depriving a person of fair benefits would constitute one such failure of respect, of course, but so would many other kinds of transactions. Sample offers the case of an impoverished black man whose desperation leads him to a job as a waiter at an all-white country club, where he is expected to tolerate racist comments by the clientele. On her account, this man is being exploited, and simply raising his wages is not sufficient to remedy the exploitation. Sample’s case is fictional, but real-life examples are not hard to find. At the 2012 South by Southwest technology conference, for instance, a marketing agency came under fire for equipping homeless people to wander around the conference asking for donations while wearing mobile wireless devices and T-shirts bearing slogans such as “I’m Clarence, a 4G Hotspot.”69

As this example suggests, transactions often look more degrading when they involve the instrumental use of other people. If prostitution is degrading, it is at least in part because the prostitute is paid to allow her body to be used instrumentally.70 The same is true for research subjects, of course, who often allow their bodies to be used instrumentally, but it is also true for other jobs. People are paid for permitting themselves to be painted by artists, examined by medical students, or displayed in front of a seafood restaurant wearing a lobster costume. Whether such transactions are degrading or benign depends on many things, such as the social position the transaction occupies. For instance, the test pilots in the Mercury space program have always been seen as heroic pioneers, despite their own worries that the first space flights required no work that could not be done by a monkey.71 Yet serving as a paid research subject is often seen as a job unworthy of anyone except the truly desperate.72 “There’s a kind of stigma in this line of work that echoes the social shunning that lepers have to deal with, albeit nowhere near the severity,” writes Robert Helms, the editor of the jobzine Guinea Pig Zero. “I bear no illusions about the economy of my flesh as I wander through this meat-rack of a world, and so I call myself a guinea pig.”73

The low social status of research participation is often less a function of the way a study is designed — the key focus of research guidelines and IRB oversight — than of the conditions under which it is done. Guinea Pig Zero began issuing report cards on research sites in the 1990s based on how the sites treated research subjects.74 When research sites received poor grades, it was often because of bad food, cold showers, incompetent nurses, and unnecessary rectal exams. Some research sponsors were late in paying subjects and excluded them from trials when they complained. Even worse was the treatment of research subjects by the contract research organization SFBC International, which was forced to close its Miami trial site when, among other things, investigative journalists reported that the 700-bed facility was located in a seedy former motel that had been cited as a fire hazard by the county housing board.75 When the story broke in 2005, SFBC officials responded by contacting several foreign-born research subjects who had spoken to the press and threatening to report them to immigration authorities.76

It may well be that none of this treatment violates the guidelines governing human subjects research. But when subjects are treated as unworthy of the kind of respect that researchers presumably would want for their own family members, this both reflects and perpetuates the reality that research participation is not a noble sacrifice for the advancement of science, but a last resort for desperate people. Arguably such degrading conditions are inherently exploitative, irrespective of whether requirements related to informed consent and minimizing risks are followed to the letter.

Conclusion

Properly protecting research subjects requires a number of difficult changes. First among them is the recognition that subjects are typically exploited by being paid too little rather than too much. Research guidelines would do a more effective job of treating subjects fairly if they were to jettison the concept of “undue influence” and replace it with instructions to avoid exploitation.

Second, research institutions and/or sponsors should be required by law to pay the medical bills of subjects injured in their trials, and to compensate injured subjects or their families for lost wages or suffering. Presidential bioethics commissions dating back to the time of the Tuskegee study have repeatedly called for such guarantees, yet those calls have gone unheeded.77 Research subjects cannot be protected from exploitation as long as subjects are drawn disproportionately from uninsured populations, and then left to their own devices to deal with the fallout from any harms they suffer through their participation.

Third, prospective subjects need accurate information about the risks of the kinds of trials they are asked to join. Data must be systematically collected to get an accurate gauge of how often subjects are injured in Phase I trials, how serious those injuries are, which types of trials are more likely to injure subjects, and in which trial sites injuries have occurred. The same logic should apply to all clinical trials, of course, but since Phase I trials are not even required to be registered on ClinicalTrials.gov, the absence of information about them is more acutely problematic.

Finally, the research oversight system must be strengthened dramatically. In the United States, the current oversight system relies almost completely on the vigilance of Institutional Review Boards to protect subjects. Yet if anything is clear from the research scandals over the past thirty years, it is that flawed IRB oversight is often to blame. Some IRBs are sloppy and incompetent; some are compromised by financial conflicts of interest; almost none monitor the kinds of conditions on the ground that concerned the writers for Guinea Pig Zero. Yet there is virtually no meaningful oversight of IRBs themselves.

Existing guidelines have failed to prevent the exploitation of disadvantaged populations in clinical research, and in the case of undue influence have contributed to it. Avoiding a system that relies on the poor and uninsured to produce benefits for people with access to quality care requires looking beyond a narrow set of formalities to ensure that subjects are treated fairly and with dignity.

Acknowledgments

We were invited to contribute a paper on themes addressed at the public conference, “The Future of Informed Consent in Research and Translational Medicine: A Century of Law, Ethics & Innovation,” which was supported by the National Institutes of Health (NIH), National Human Genome Research Institute (NHGRI), and National Cancer Institute (NCI) grant R01HG008605 on “LawSeq: Building a Sound Legal Foundation for Translating Genomics into Clinical Application” (Susan M. Wolf, Ellen Wright Clayton, Frances Lawrenz, Principal Investigators).

Footnotes

Note

The authors have no conflicts to declare.

References

1. Coyle and Sharpe, “Maniacs in a Living Hell,” The Insane (But Hilarious) Minds of Coyle & Sharpe, available at <https://www.youtube.com/watch?v=_GG9zSr9CkY> (last visited August 29, 2017).
2. See, e.g., Office for Human Research Protections, Protecting Human Research Subjects: Institutional Review Board Guidebook (Washington, D.C.: U.S. Government Printing Office, 1993): at 33, Chapter 3, Section B.
3. E. W. St. Clair, “The Calm After the Cytokine Storm: Lessons from the TGN1412 Trial,” Journal of Clinical Investigation 118, no. 4 (2008): 1344-1347.
4. L. Almeida, B. Azevedo, T. Nunes, M. Vaz-da-Silva, and P. Soares-da-Silva, “Why Healthy Subjects Volunteer for Phase I Studies and How They Perceive Their Participation,” European Journal of Clinical Pharmacology 63, no. 11 (2007): 1085-1094; T. Monahan and J. Fisher, “‘I’m Still a Hustler’: Entrepreneurial Responses to Precarity by Participants in Phase I Clinical Trials,” Economy and Society 44, no. 4 (2015): 545-566.
5. R. Abadie, The Professional Guinea Pig (Durham, N.C.: Duke University Press, 2010): at 54-57; C. Elliott, “Guinea-Pigging,” The New Yorker, January 7, 2008: 36-41; M. Cottingham and J. Fisher, “Risk and Emotion Among Healthy Volunteers in Clinical Trials,” Social Psychology Quarterly 79, no. 3 (2016): 222-242.
6. See, e.g., R. Nelson, T. Beauchamp, V. Miller, W. Reynolds, R. Ittenbach, and M. Luce, “The Concept of Voluntary Consent,” American Journal of Bioethics 11, no. 8 (2011): 6-16, at 9 (“The federal regulations governing research in the United States require investigators to ‘minimize the possibility of’ undue inducement, but they do not define, analyze, or explain this notion…”).
7. H. Lynch, “Protecting Human Research Subjects as Human Research Workers,” in I. G. Cohen and H. F. Lynch, eds., Human Subjects Research Regulation: Perspectives on the Future (Cambridge: MIT Press, 2014): Chapter 21, at 332 (“The bigger problem for research subjects is that the regulatory system in some ways encourages their exploitation by effectively precluding high payments.”).
8. See, e.g., D. B. Resnik, “Research Participation and Financial Inducements,” American Journal of Bioethics 11, no. 2 (2001): 54-56, at 55.
9. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report (Washington, D.C.: Government Printing Office, 1979): at 8.
10. Id.
11. R. Faden and T. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986): Chapter 10, “Coercion, Manipulation, and Persuasion,” at 337-381.
12. Id., at 358-360.
13. Id., at 358.
14. See, e.g., Council for International Organizations of Medical Sciences (CIOMS) and the World Health Organization (WHO), International Ethical Guidelines for Biomedical Research Involving Human Subjects (Geneva: 2002): at 46; Department of Health and Human Services (DHHS), “Protection of Human Subjects of Research (‘The Common Rule’),” 45 C.F.R. § 46.116; Food and Drug Administration, DHHS, “Payment to Research Subjects – Information Sheet,” available at <http://www.fda.gov/RegulatoryInformation/Guidances/ucm126429.htm> (last visited May 2, 2017).
15. Faden and Beauchamp, supra note 11, at 360.
16. E. Largent, C. Grady, F. Miller, and A. Wertheimer, “Misconceptions About Coercion and Undue Influence,” Bioethics 27, no. 9 (2013): 500-507.
17. For example, see P. Appelbaum, C. Lidz, and R. Klitzman, “Voluntariness of Consent to Research: A Conceptual Model,” Hastings Center Report 39, no. 1 (2009): 30-39; R. Nelson and J. Merz, “Voluntariness of Consent for Research: An Empirical and Conceptual Review,” Medical Care 40, no. 9, Supplement (2002): V-69-V-80; Nelson et al., supra note 6.
18. M. Cottingham and J. Fisher, supra note 6.
19. World Medical Association, Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects (2000): Article 17 (“[m]edical research involving a disadvantaged or vulnerable population or community is only justified if…there is a reasonable likelihood that this population or community stands to benefit from the results of the research.”). See also Belmont Report, supra note 9, at 6; 45 C.F.R. § 46.111(3); E. Emanuel, D. Wendler, and C. Grady, “What Makes Clinical Research Ethical?” JAMA 283, no. 20 (2000): 2701-2711, at 2704-2705.
20. See, e.g., T. Lemmens and C. Elliott, “Guinea Pigs on the Payroll: The Ethics of Paying Research Subjects,” Accountability in Research: Policies and Quality Assurance 7, no. 1 (1999): 3-20, at 13; J. Fisher, Medical Research for Hire: The Political Economy of Pharmaceutical Clinical Trials (New Brunswick, N.J.: Rutgers University Press, 2009): at 151-152; A. Iltis, “Payments to Normal Healthy Volunteers in Phase 1 Trials: Avoiding Undue Influence While Distributing Fairly the Burdens of Research Participation,” Journal of Medicine and Philosophy 34, no. 1 (2009): 68-90, at 72.
21. Fisher, supra note 20, at 130. See also L. Stunkel, M. Benson, L. McLellan, N. Sinaii, G. Bedarida, E. Emanuel, and C. Grady, “Comprehension and Informed Consent: Assessing the Effect of a Short Consent Form,” IRB: Ethics & Human Research 32, no. 4 (2010): 1-9, at 6; Iltis, supra note 20, at 72; M. Cottingham and J. Fisher, supra note 6; J. Fisher and C. Kalbaugh, “Challenging Assumptions About Minority Participation in U.S. Clinical Research,” American Journal of Public Health 101, no. 12 (2011): 2217-2222.
22. For example, see C. Elliott, “The Best-Selling, Billion-Dollar Pills Tested on Homeless People,” Medium, 2014, available at <https://medium.com/matter/did-big-pharma-test-your-meds-on-homeless-people-a6d8d3fc7dfe> (last visited May 2, 2017); D. Evans, M. Smith, and L. Willen, “Big Pharma’s Shameful Secret,” Bloomberg Markets, December 2005: 36-62; L. Cohen, “Lilly’s ‘Quick Cash’ to Habitués of Shelters Vanishes Quickly,” Wall Street Journal, November 14, 1996: A1.
23. C. Elliott and R. Abadie, “Exploiting a Research Underclass in Phase 1 Clinical Trials,” New England Journal of Medicine 358, no. 22 (2008): 2316-2317.
24. Nelson et al., supra note 6, at 9.
25. Institutional Review Board Guidebook, supra note 2, at 33.
26. Id., at 34.
27. Largent et al., supra note 16, at 500.
28. N. Dickert, E. Emanuel, and C. Grady, “Paying Research Subjects: An Analysis of Current Policies,” Annals of Internal Medicine 136, no. 5 (2002): 368-373, at 372.
29. E. Largent, C. Grady, F. Miller, and A. Wertheimer, “Money, Coercion, and Undue Inducement: A Survey of Attitudes About Payments to Research Subjects,” IRB 34, no. 1 (2012): 1-8.
30. Id.
31. See Lynch, supra note 7; Lemmens and Elliott, supra note 20, at 13-14.
32. H. Lynch, “Human Research Subjects as Human Research Workers,” Yale Journal of Health Policy, Law & Ethics 14 (2014): 123-193, at 157 (“[F]ear of undue inducement plays no role whatsoever in the legal regulation of wages paid to workers. In fact, worklaw imposes no ceiling – explicit or implicit – on how much workers may be paid…”).
33. Johnson v. International Business Machines Corp., 891 F. Supp. 522, 531 (N.D. Cal. 1995).
34. “The fairness of the resulting exchange is often a critical factor in cases involving threats.” American Law Institute, Restatement (Second) of Contracts (1979), at § 176, comment a. Courts are also more likely to invalidate a contract because of a party’s incapacity when the terms of the agreement appear unfair to the impaired party. Id., at § 15(2).
35. Id., at § 208 and § 208, comment c. “Although inadequate consideration is not sufficient on its own to invalidate a contract, gross disparity in the values exchanged may be an important factor in a determination that a contract is unconscionable.” Id., at § 208, comment c. See also E. A. Posner, “Contract Law in the Welfare State: A Defense of the Unconscionability Doctrine, Usury Laws, and Related Limitations on the Freedom to Contract,” Journal of Legal Studies 24, no. 2 (1995): 283-319, at 304 (noting that the doctrine of substantive unconscionability “condemns contracts involving substantial disparities between the contract price and the market price” and that the doctrine “dovetail[s] with the conventional view that courts should strike down involuntary contracts (as… price disparity is strong evidence of bargaining abuse)”).
36. Lynch notes some interesting historical examples of laws that have limited wages — including, for example, some “passed by some state legislatures in the South after emancipation for reasons that were decidedly anti-worker.” As Lynch notes, “[i]n none of these examples, however, is payment limited out of fear that workers will suffer from undue inducement.” Lynch, supra note 32, at 160.
37. Faden and Beauchamp, supra note 11, at 360.
38. Steinman v. Malamed, 185 Cal. App. 4th 1550, 1559 (2010); A. Wertheimer, Coercion (Princeton, N.J.: Princeton University Press, 1990): at 41 (noting that in contract law “there is generally no duress if A’s proposal is not wrongful – even if it serves to create B’s dilemma.”).
39. Johnson, supra note 33, at 529.
40. Restatement (Second) of Contracts, supra note 34, at § 176, comment a (in considering whether a contract should be invalidated on the basis of duress or undue influence, “the fairness of the resulting exchange is often a critical factor.”).
41. Resnik, supra note 8, at 55.
42. 198 U.S. 45 (1905).
43. Adkins v. Children’s Hospital, 261 U.S. 525 (1923); Hammer v. Dagenhart, 247 U.S. 251 (1918); Adair v. United States, 208 U.S. 161 (1908).
44. A. Wertheimer, “Exploitation in Clinical Research,” in E. Emanuel, C. Grady, R. Crouch, R. Lie, F. Miller, and D. Wendler, eds., The Oxford Textbook of Clinical Research Ethics (New York: Oxford University Press, 2008): at 202.
45. J. H. Jones, “The Tuskegee Syphilis Experiment,” in The Oxford Textbook of Clinical Research Ethics, supra note 44, at 90.
46. Wertheimer, supra note 44, at 203.
47. Cottingham and Fisher, supra note 6.
48. A. Wertheimer, Rethinking the Ethics of Clinical Research (New York: Oxford University Press, 2011): at 201.
49. Office of the Legislative Auditor, State of Minnesota, A Clinical Drug Study at the University of Minnesota Department of Psychiatry: The Dan Markingson Case (2015) [hereinafter Legislative Auditor Report], available at <www.auditor.leg.state.mn.us/sreview/Markingson.pdf> (last visited August 29, 2017).
50. Wertheimer, supra note 44, at 205.
51. E. Emanuel, “Undue Inducement: Nonsense on Stilts?” American Journal of Bioethics 5, no. 5 (2005): 9-13.
52. Presidential Commission for the Study of Bioethical Issues, Moral Science: Protecting Participants in Human Subjects Research (June 2012): at 8, 15, available at <https://bioethicsarchive.georgetown.edu/pcsbi/sites/default/files/Moral%20Science%20June%202012.pdf> (last visited August 29, 2017).
53. Id.
54. The Lewin Group, Task Order Proposal No. 2: Care/Compensation for Injuries in Clinical Research, draft of the final report prepared for the Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation (Falls Church, VA: The Lewin Group, May 18, 2005), Contract No. HHS 100-03-0005. See also C. Elliott, “Justice for Injured Research Subjects,” New England Journal of Medicine 367, no. 1 (2012): 6-8.
55. D. B. Resnik, E. Parasidis, K. Carroll, J. M. Evans, E. R. Pike, and G. E. Kissling, “Research-Related Injury Compensation Policies of U.S. Research Institutions,” IRB 36, no. 1 (2014): 12-19.
56. Id.
57. L. Tamariz, A. Palacio, M. Robert, and E. N. Marcus, “Improving the Informed Consent Process for Research Subjects with Low Literacy: A Systematic Review,” Journal of General Internal Medicine 28, no. 1 (2013): 121-126.
58. J. Fisher, “Feeding and Bleeding: The Institutional Banalization of Risk to Healthy Volunteers in Phase I Pharmaceutical Clinical Trials,” Science, Technology, & Human Values 40, no. 2 (2015): 199-226.
59. C. A. van den Bogert, P. C. Souverein, C. T. M. Brekelmans, S. W. J. Janssen, G. H. Koëter, H. G. M. Leufkens, and L. M. Bouter, “Non-Publication Is Common Among Phase 1, Single-Center, Not Prospectively Registered, or Early Terminated Clinical Drug Trials,” PLOS One 11 (2016). See also Public Health Service Act, 42 U.S.C. § 282(j) (2007).
60. Declaration of Helsinki, supra note 19, at 35.
61. Office of Inspector General, Department of Health and Human Services, The Food and Drug Administration’s Oversight of Clinical Trials (September 2007), available at <https://oig.hhs.gov/oei/reports/oei-01-06-00160.pdf> (last visited August 29, 2017).
62. E. J. Emanuel, G. Bedarida, K. Macci, N. B. Gabler, A. Rid, and D. Wendler, “Quantifying the Risks of Non-Oncology Phase I Research in Healthy Volunteers: Meta-Analysis of Phase I Studies,” BMJ 350 (2015): 1-9.
63. E. Emanuel, “Ending Concerns about Undue Inducement,” Journal of Law, Medicine & Ethics 32, no. 1 (2004): 100-105, at 104.
64. R. F. Wilson, “The Death of Jesse Gelsinger: New Evidence of the Influence of Money and Prestige in Human Research,” American Journal of Law & Medicine 36, no. 2-3 (2010): 295-325; H. Edgar and D. J. Rothman, “The Institutional Review Board and Beyond: Future Challenges to the Ethics of Human Experimentation,” Milbank Quarterly 73, no. 4 (1995): 489-506, at 493; D. Wilson and D. Heath, “Patients Never Knew the Full Danger of Trials They Staked Their Lives On,” Seattle Times, March 11, 2001, at A1; A. Michaud, “UC Drops Controversial Psychoses Tests: Critics Contend Studies Unethical,” Cincinnati Enquirer, May 6, 1999, at 1; A. Malhotra, D. A. Pinals, C. M. Adler, I. Elman, A. Clifton, D. Pickar, and A. Breier, “Ketamine-Induced Exacerbation of Psychotic Symptoms and Cognitive Impairment in Neuroleptic-Free Schizophrenics,” Neuropsychopharmacology 17, no. 3 (1997): 141-150; S. M. Southwick, J. H. Krystal, J. D. Bremner, C. A. Morgan, A. L. Nicolaou, L. M. Nagy, D. R. Johnson, G. R. Heninger, and D. S. Charney, “Noradrenergic and Serotonergic Function in Post Traumatic Stress Disorder,” Archives of General Psychiatry 54, no. 8 (1997): 749-758; Legislative Auditor Report, supra note 49, at 14; J. Baillon, “U of M Drug Study Criticism Grows,” KMSP TV, May 19, 2014, available at <www.fox9.com/health/1647039-story> (last visited August 29, 2017); C. Elliott, “U Owes Mistreated Psychiatric Subjects an Apology,” Minneapolis Star Tribune, October 7, 2015, available at <http://www.startribune.com/u-owes-mistreated-psychiatric-subjects-an-apology/331165371/> (last visited August 29, 2017); C. Elliott, “Why Research Oversight Bodies Should Interview Research Subjects,” IRB: Ethics and Human Research 39, no. 2 (2017): 8-13.
65. Legislative Auditor Report, supra note 49, at 20; M. Lamkin and C. Elliott, “Involuntarily Committed Patients as Prisoners,” Richmond Law Review 51 (2017): 101, at 114-115.
66. E. Rosenthal, “New York Seeks to Tighten Rules on Medical Research,” New York Times, September 27, 1996, at B4; E. Rosenthal, “British Rethinking Rules After Ill-Fated Drug Trial,” New York Times, April 8, 2006, at A1; G. Kolata, “Johns Hopkins Admits Fault in Fatal Experiment,” New York Times, July 17, 2001, at A16; J. K. Wall and J. Tuohy, “Suicide Brings Changes to Lilly Drug Trials,” Indianapolis Star, February 11, 2004, at A1; C. Elliott, supra note 22; K. Messmer and J. Blasingim, “An Analysis of Specific Phase I Safety Issues,” Regulatory Rapporteur 14, no. 2 (2017): 23-27.
67. See O. K. Obasogie, “Prisoners as Human Subjects: A Closer Look at the Institute of Medicine’s Recommendations to Loosen Current Restrictions on Using Prisoners in Scientific Research,” Stanford Journal of Civil Rights and Civil Liberties 6 (2010): 41, at 75-77; Evans et al., supra note 22, at 39; T. Lemmens and P. Miller, “The Human Subjects Trade: Ethical and Legal Issues Surrounding Recruitment Incentives,” Journal of Law, Medicine & Ethics 31, no. 3 (2003): 398-419.
68. R. J. Sample, Exploitation: What It Is and Why It’s Wrong (Oxford: Rowman & Littlefield Publishers, 2003): at 57.
69. J. Wortham, “Use of Homeless as Internet Hot Spots Backfires on Marketer,” New York Times, March 12, 2012, at B1.
70. C. Overall, “What’s Wrong With Prostitution? Evaluating Sex Work,” Signs: Journal of Women in Culture and Society 17, no. 4 (1992): 705-724.
71. T. Wolfe, The Right Stuff (New York: Farrar, Straus and Giroux, 1979): at 100.
72. R. Abadie, The Professional Guinea Pig: Big Pharma and the Risky World of Human Research Subjects (Durham, N.C.: Duke University Press, 2010): at 7-8; M. Murrmann, “I Was a Teenage Guinea Pig!” Washington City Paper, November 22, 1996, available at <http://www.washingtoncitypaper.com/news/article/13011942/i-was-a-teenage-guinea-pig> (last visited August 29, 2017).
73. R. Helms, ed., Guinea Pig Zero: An Anthology of the Journal for Human Research Subjects (New Orleans, LA: Garrett County Press, 2005): at 207.
74. Id.
75. Evans et al., supra note 22.
76. D. Evans and M. Smith, “Three Drug Testers Claim SFBC Threatened Them,” Seattle Times, November 20, 2005, available at <http://www.seattletimes.com/business/three-drug-testers-claim-sfbc-threatened-them/> (last visited August 29, 2017).
77. E. Pike, “Recovering From Research: A No-Fault Proposal to Compensate Injured Research Participants,” American Journal of Law, Medicine & Ethics 38, no. 1 (2012): 7-62.