Author manuscript; available in PMC: 2016 Mar 1.
Published in final edited form as: Sci Technol Human Values. 2015 Mar 1;40(2):199–226. doi: 10.1177/0162243914554838

Feeding and Bleeding: The Institutional Banalization of Risk to Healthy Volunteers in Phase I Pharmaceutical Clinical Trials

Jill A Fisher 1
PMCID: PMC4405793  NIHMSID: NIHMS680258  PMID: 25914430

Abstract

Phase I clinical trials are the first stage of testing new pharmaceuticals in humans. The majority of these studies are conducted under controlled, inpatient conditions using healthy volunteers who are paid for their participation. This article draws on an ethnographic study of six phase I clinics in the United States, including 268 semistructured interviews with research staff and healthy volunteers. In it, I argue that an institutional banalization of risk structures the perceptions of research staff and healthy volunteers participating in the studies. For research staff, there are three mechanisms by which risk becomes banal: a perceived homogeneity of studies, Fordist work regimes, and data-centric discourse. For healthy volunteers, repeat study participation contributes to the institutional banalization of risk both through the process of desensitization to risk and the formation of trust in the clinics. I argue that the institutional banalization of risk also renders invisible ethical concerns about exploitation of underprivileged groups in pharmaceutical research.

Keywords: pharmaceuticals, clinical trials, risk, phase I, healthy volunteers

Introduction

Science and technology studies (STS) scholars have become increasingly interested in the cultural politics of pharmaceuticals (e.g., Clarke et al. 2010; Dumit 2012; Greene 2007; Lakoff 2005; Pollock 2012; Sismondo 2004). As part of this STS focus, the clinical development of pharmaceuticals has been a central topic, with scholars examining everything from the business of conducting clinical trials (e.g., Fisher 2008; Mirowski 2011; Petryna 2009) to researchers’ engagement with novel therapies (e.g., Hedgecoe 2004; Keating and Cambrosio 2011) and the experiences of trial participants (e.g., Epstein 1996; Fisher 2009b; Morris and Balmer 2006). With STS’s history of “laboratory studies” (e.g., Knorr-Cetina 1999; Latour and Woolgar 1979), clinical trials provide a fertile area of inquiry into the interplay of science and culture, especially given the diversity of study types that constitute drug development.

Pharmaceuticals undergo three phases of testing on humans before they can be approved for use in the United States and most other countries.1 Primarily enrolling healthy volunteers, phase I clinical trials assess the safety profile of new drugs and help establish doses that are appropriate for patients. Phase II trials are small-scale efficacy trials using patients with the target disease to determine whether the company’s continued investment in the product is worthwhile. Phase III trials are much larger studies in which hundreds or thousands of affected patients are enrolled to determine whether the investigational drug is more efficacious than a placebo or “noninferior” to a treatment already on the market. If phase III studies indicate that a product is safe and efficacious, the US Food and Drug Administration (FDA)—or other countries’ regulatory bodies—will approve it for clinical use. The FDA also frequently requires phase IV postmarketing studies for additional safety or efficacy data about the product after it is prescribed to patients.

This article focuses on phase I clinical trials. In lay terms, these safety studies evaluate the negative side effects—“adverse effects”—produced in participants and the doses at which they occur (Corrigan 2002b). This information helps pharmaceutical companies select a therapeutic dose of their product that will not have unduly burdensome side effects. In a classic “dose escalation” study design, for example, each cohort of eight to twelve healthy volunteers is given a higher dose of the investigational drug until a preestablished stopping point is reached or the severity or frequency of the adverse effects compels the investigators to halt the study prematurely (Chapman 2011). In essence, phase I studies are designed to produce adverse effects in at least some of the participants in each study.

Drawing upon an ethnographic study of six phase I clinics in the United States, this article describes how the structure of phase I studies deemphasizes the risks to research participants, resulting in what I call “the institutional banalization of risk.” Examining this process from the perspective of both research staff and healthy volunteers, it illustrates how—in spite of the risks to healthy volunteers—the clinical testing of investigational drugs gets constructed as a banal activity. For research staff, this process is enabled by a perceived homogeneity of phase I studies, a rigid Fordist production model that dictates workflow in the clinic, and a discursive transformation of risks and harms to participants into abstract data points. For healthy volunteers, their own patterns of participation, especially their tendency to enroll serially in phase I trials, contribute to a desensitization to the risks and the formation of trust in research clinics. I argue that the routinization of phase I labor—for both the research staff and participants—renders invisible not only risk of harm to healthy volunteers but also ethical concerns about exploitation of underprivileged groups in pharmaceutical research.

Conceptual Framework and Background

Within dominant research ethics frameworks, it is essential to balance the risks and benefits to research participants, and it is one of the duties of research ethics boards to ensure that potential risks are appropriate and justified by expected benefits (Faden and Beauchamp 1986). Like much of principle-based ethics (De Vries 2004; Fox and Swazey 1984), however, assessments of risks and benefits are often calculated in a vacuum, with ethics boards carefully avoiding creating too many impediments to the research enterprise (Fisher 2013). For example, a lack of symmetry in the distribution of risks and benefits allows individual research participants to suffer the risks while others (locally or globally) might enjoy the benefits (see Belmont Report 1979). Indeed, much of the history of medical research includes egregious cases of the poor and disenfranchised being exploited (Briggs 2002; Hornblum 1998; Reverby 2009; Shah 2006).

Bringing an STS perspective to an analysis of phase I clinical trials allows for alternative constructions of risks and benefits. Power is central to my understanding of risk. Typical uses of the language of risk make it seem as though risk plays out on a level playing field. Risk, however, is not a neutral appraisal of potential dangers but is instead mediated by social position. Beck (1992) warns that in modern technological systems, there are always risk winners and risk losers. The effects of harm are uneven, with those in positions of authority careful to limit their responsibility for reparations and those who are already disadvantaged experiencing greater suffering. As Beck argues, the modern risk system is set up so that there are some “countries, sectors, and enterprises which profit from the production of risk, and others which find their economic existence threatened together with their physical well-being” (Beck 1992, 110, italics in original). Importantly, calculations of acceptable risk are deeply political, and as van Kammen and Oudshoorn (2002) illustrate in their analysis of contraceptive technologies, differential degrees of risk are viewed as acceptable for some members of society based on their gender, race/ethnicity, or socioeconomic position. Transferring this conceptual framework to the context of phase I clinical trials, healthy volunteers can be constructed as potential risk losers. Pharmaceutical companies will profit from their trial participation, and any harms that occur are compounded by healthy volunteers’ social disadvantage, especially their lack of health insurance to care for any physical injuries or their limited resources to bring suit against a powerful industry.

When the broader context of clinical trials is brought into focus, the concepts of “risks” and “benefits” expand and contract to conform to the expectations of the researchers and participants. Notably, as I will illustrate with the data from my ethnographic study, risk is often disregarded or normalized—or “misrecognized” (Bourdieu 1977)—because of its everydayness. To make sense of this process, I trace what I call an “institutional banalization of risk” that occurs in the phase I context. By this I mean that the phase I enterprise is organized to render the risks of participation insignificant and unproblematic. Banalization is aided by the organizational structure of clinical trials and the pool of participants recruited as healthy volunteers. To say that phase I clinical trials are characterized by an institutional banalization of risk does not imply that the risks of participation are trivial. To the contrary, it emphasizes the importance of understanding the mechanisms by which individuals come to ignore or underestimate the risks of phase I studies. I describe these components in more detail below, but will first ask: what are the risks to healthy volunteers of phase I trials?

Calculating the risks of phase I trials is no easy task. Not only are different risks associated with different classes of investigational drugs, but phase I studies themselves are diverse and involve different types and degrees of risk. In practice, the “phase I” moniker refers to all clinical trials that test the safety of products, measure the pharmacokinetics or pharmacodynamics of the drug,2 or compare modes of drug administration. True phase I clinical trials are “first-in-human” (FIH) studies, which are literally the first time investigational drugs are given to human subjects, who are usually healthy volunteers.3 In spite of the implied linearity in the phases of drug development, phase I studies are conducted throughout the drug development process, even after FDA approval, to provide additional data about the safety profile of marketed drugs (Derendorf et al. 2000). For example, non-FIH phase I studies include those that test a drug in single or multiple doses; investigate its cardiac, hepatic, or renal effects; measure its interaction with other (usually marketed) drugs; and assess the effects of food on the action of drugs. Additionally, some phase I trials are bioequivalence studies designed to prove that generic drugs are metabolized, absorbed, and excreted in the same way as brand name drugs (Hayden 2007) or that racially or ethnically diverse bodies similarly process the drugs (Kelly and Nichter 2012). All of these study types are phase I trials and are conducted on healthy volunteers.

Additionally, every type of pharmaceutical is likely to be tested in some way on healthy volunteers, including oncology and HIV/AIDS medications, and there is no evidence to suggest that there is less risk from the investigational drug in phase I trials enrolling healthy volunteers compared to those enrolling patients. Trying to isolate the “signal” from the “noise” (Lakoff 2007), pharmaceutical companies prefer healthy volunteers in phase I trials so that investigators do not have to adjudicate whether symptoms are caused by the drug or by an underlying disease. In sum, myriad clinical trial designs with different associated risks (in terms of magnitude and probability) are all part of the phase I world.

It may be difficult to assess the overall risk of participation in phase I studies, but the risks to healthy volunteers are not merely hypothetical (Stein 2003). Participants will likely experience one or more of the following adverse effects, depending on the type of drug being tested and the dose given: headaches, diarrhea, constipation, dizziness, nausea, vomiting, skin reactions, and other symptoms of this nature. More serious adverse effects that are relatively common include allergic reactions, anemia, depression, liver problems, impaired kidney function, seizures, and severe arrhythmias. Although death and serious harm are not common in phase I, they have occurred. Two deaths of healthy subjects received considerable media attention. Ellen Roche, a 24-year-old, died during an asthma-related study at Johns Hopkins University in 2001, and Traci Johnson, a 19-year-old, committed suicide during an antidepressant study at an Eli Lilly facility in Indianapolis in 2004. More recently, in 2006, six healthy volunteers enrolled in a study in London experienced serious, near-fatal illnesses when the investigational drug caused rapid multiple organ failure (Wood and Darbyshire 2006). In spite of—or perhaps because of—these sensational cases of phase I trials gone awry, phase I investigators are keen to demonstrate the low probability of harm to healthy volunteers. For example, meta-analyses of published phase I trials consistently indicate that fewer than 1 percent of healthy volunteers experience serious drug-related adverse effects (Kumagai et al. 2006; Sibille et al. 2006), and investigators routinely frame participation as safer than many blue-collar occupations (Kupetsky-Rincon and Kraft 2012).

Part of the institutional banalization of phase I risk occurs because of the structure and execution of these clinical trials. Unlike other clinical trials, phase I studies are conducted almost exclusively with “confinement” requirements, meaning that healthy volunteers must stay overnight in the facility during some portion of the trial. In part, the confinement controls for as many variables as possible: researchers can dictate the exact times at which doses are given and the food and beverages consumed, and they can enforce any restrictions prohibiting the use of other medications or products. It also allows research staff to monitor participants through scheduled procedures or informal observations. In this respect, phase I facilities resemble a total institution (Goffman 1961), one in which the risks could appear to be managed and controlled.

Another aspect of the institutional banalization of risk in phase I stems from the routine labor of conducting and participating in clinical trials. In his analysis of the 2006 phase I disaster in London mentioned previously, Hedgecoe (2014) uses the lens of organizational deviance to explain sociologically how such serious harm could occur in spite of robust regulation and oversight. One of Hedgecoe’s assertions is that “the everyday culture of a work group accommodates and normalizes risk as part of the practical effort to get work done” (p. 66). Because dramatic or unexpected adverse events are infrequent, the more common “side effects” of investigational drugs are seen by researchers and healthy volunteers as rather mundane.

The everyday work of phase I trials structures and normalizes experiences of risk for healthy volunteers as well. Healthy volunteers enroll in phase I studies almost exclusively for the income they can earn (Tolich 2010).4 Payments vary dramatically, based on the geographic location of the clinic, the length of the study, and the procedures involved, but a fair estimate is that the average study pays between US$2,000 and US$4,000. Additionally, most healthy volunteers are serial participants in phase I clinical trials, regularly seeking income from study participation (Weinstein 2001). As a result of this wage structure, Cooper and Waldby (2014) argue that clinical trial participation is embodied labor, a form of work representative of flexible postindustrial, post-Fordist economies. Seen in this light, the phase I healthy volunteer becomes an independent contractor participating in clinical trials for remuneration on an occasional or routine basis. Not recognized as such by research institutions or government agencies, however, healthy volunteers are not protected as workers or entitled to labor rights (Lemmens and Elliott 2001; Sunder Rajan 2007). In his ethnography of white anarchist “professional guinea pigs” in Philadelphia, Abadie (2010) found that healthy volunteers conceive of their participation as work and approach it from an activist perspective, even advocating for unionization of participants. Healthy volunteers and research staff both focus on what they need to do to earn their income: for volunteers, this means enrolling in phase I trials and consuming the investigational drugs.

For the healthy volunteers, a final critical component of the institutional banalization of risk comes from their relationship to the larger social structure. The majority of US healthy volunteers are economically and politically disadvantaged minority men. There are regional differences in the racial and ethnic groups, with an overrepresentation of African Americans in phase I clinics in the Northeast and parts of the Midwest and Latinos in clinics in the Southwest (Fisher and Kalbaugh 2011). Most healthy volunteers are unemployed, seasonally employed, or self-employed (Motluck 2009). In some instances, trial participation is simply seen as an easier way to earn an income because it is less demanding and more flexible, and participation in phase I trials becomes a chosen way of life (Abadie 2010). Some serial participants have additional difficulty finding wage employment because they have a history of incarceration or do not have permission to seek work in the United States. Thus, in the broader context of a disappearing social safety net, deindustrialization, gross wage disparities, and little job security—all characteristics of structural violence (e.g., Bourgois 1995; Farmer 2004; Scheper-Hughes and Bourgois 2003)—the risks of phase I trials are overshadowed by the need to earn an income. For this reason, the choices and explanatory frameworks of healthy volunteers could add important insight into how the institutional banalization of risk is mediated by the on-the-ground practices of those involved in phase I research.

Methods

The study was conducted at six dedicated phase I inpatient clinics in the United States from November 2009 through October 2010. The clinics were selected to represent as diverse a range of facilities as possible. For geographic coverage, two facilities were located in the East, two in the Midwest, and two in the West. One facility was owned and operated by a large pharmaceutical company, one was part of an academic medical center, two were owned and operated by contract research organizations, and two were independent phase I clinics. The facilities varied in size, with the smallest being a 16-bed unit and the largest a 300-bed clinic.

In addition to observations of clinic activities (from informed consent procedures to blood draws and dosing), the study included 268 semistructured interviews with 33 clinic staff and 235 healthy volunteers. The staff occupied various roles in the six facilities, including site directors, principal investigators (MDs), recruiters, study nurses, and phlebotomists. Healthy volunteers were predominantly male (73.2 percent). Roughly one-third were non-Hispanic whites (37.4 percent), another third were non-Hispanic blacks (34.9 percent), and just over one-fifth were Hispanic (21.8 percent; see Table 1 for volunteers’ demographic breakdown by facility). Healthy volunteers had a wide range of experience participating in phase I trials, with about one-third being first-time participants (30.6 percent), while others claimed to have participated in over 100 studies. Interviewees were representative of the overall demographics of the research staff and volunteers at the time of each site visit. The vast majority of interviews were conducted in English, but 28 were conducted in Spanish.

Table 1.

Demographics of Healthy Volunteers Interviewed.

Phase I facilities by location: East (Clinics 1 and 2), Midwest (Clinics 3 and 4), West (Clinics 5 and 6).

                                      Clinic 1   Clinic 2   Clinic 3   Clinic 4   Clinic 5   Clinic 6   Total
Total subjects                        42         29         36         46         47         35         235
Sex/gender
 Male, percent (n)                    88.1 (37)  79.3 (23)  75.0 (27)  52.2 (24)  74.5 (35)  74.3 (26)  73.2 (172)
 Female, percent (n)                  11.9 (5)   20.7 (6)   25.0 (9)   47.8 (22)  25.5 (12)  25.7 (9)   26.8 (63)
Race/ethnicity
 White, non-Hispanic, percent (n)     19.0 (8)   17.2 (5)   44.4 (16)  78.3 (36)  38.3 (18)  14.3 (5)   37.4 (88)
 White, Hispanic, percent (n)         4.8 (2)    0 (0)      0 (0)      0 (0)      44.9 (21)  74.3 (26)  20.9 (49)
 Black, non-Hispanic, percent (n)     61.9 (26)  79.3 (23)  52.8 (19)  17.4 (8)   8.5 (4)    5.7 (2)    34.9 (82)
 Black, Hispanic, percent (n)         0 (0)      3.4 (1)    0 (0)      0 (0)      2.1 (1)    0 (0)      0.9 (2)
 Asian, percent (n)                   11.9 (5)   0 (0)      0 (0)      4.3 (2)    2.1 (1)    2.9 (1)    3.8 (9)
 Native American, percent (n)         0 (0)      0 (0)      0 (0)      0 (0)      2.1 (1)    2.9 (1)    0.9 (2)
 Biracial, percent (n)                2.4 (1)    0 (0)      2.8 (1)    0 (0)      2.1 (1)    0 (0)      1.3 (3)
First-time participants, percent (n)  11.9 (5)   17.2 (5)   72.2 (26)  32.6 (15)  38.3 (18)  8.6 (3)    30.6 (72)

Interviews with research staff focused on their perceptions of phase I studies, their experiences with different types of healthy volunteers, and changes in participation trends over time (especially in light of the 2008 US economic downturn). Interviews with healthy volunteers explored their experiences of participating in phase I trials, their perceptions of risks, how they evaluate different types of studies or procedures, and how they explain their participation in studies to others. All interviews were transcribed in full and coded using Atlas.ti. The identities of phase I clinics and staff are confidential, and all healthy volunteers were anonymous. The Vanderbilt University institutional review board reviewed and approved the research protocol.

Experiences of Adverse Effects in Phase I Trials

When I arrived at one of my phase I field sites in the late summer of 2010, a receptionist ushered me to a small waiting area in the spartan administration suite of the clinic. She told me that the medical director had been delayed by some medical issues that had occurred in a current study. I had not been there long when a woman who was seated in a nearby cubicle offered me some tea or coffee. After I declined, she picked up her cell phone and made a call. Without any small talk, she launched into a frantic description of the hallucinations and nightmares that the healthy volunteers had almost universally experienced after being dosed with the study drug that morning. It slowly dawned on me that the woman did not actually work at the clinic; she was a representative from the pharmaceutical company sponsoring the study.

Later, when I met the medical director, he confirmed that the woman had come to observe the dosing because the clinic had previously reported these same adverse effects to the sponsor in earlier cohorts and the company wanted one of its own employees to witness the effects of the investigational drug. I asked him about the drug effects, and he told me that the vast majority of healthy volunteers had experienced sleep paralysis—an often frightening conscious state in which one experiences vivid hallucinations and cannot move or speak—as a result of the study drug. He wondered aloud whether the sponsor would continue development of the product or would decide the adverse effects were too serious to warrant further investment. He also confided that many of the volunteers needed to be reassured that these side effects of the drug would be short term.

My own interactions with the group of healthy volunteers in this study were shaped by the effects of the trial drug. Instead of moving about the clinic, they stayed in their beds most of the day, many feeling that they could not quite shake the soporific effects of the drug. Most were nonetheless eager to be interviewed and wanted to talk about their side effects. One participant in the study was an African American woman in her early twenties and a mother of two young children. It was her first time participating in a study, and she was in the clinic with her father, a veteran healthy volunteer in his forties who had encouraged her to enroll in a study. During the interview, I asked her whether she had experienced any side effects. After yawning noisily, she said,

Yeah, but the only thing really was sort of a hallucination. I had a vivid, I had a really crazy dream, but that was it. I dreamed that they were sticking IVs in my cheeks, and that was a side effect [of the drug], vivid dreams. They was sticking IVs in my cheeks and my dad was actually coming to do it. I was like, “Dad, you’re not sticking no IV in my cheek. You better send me home.”

Now more fully awake, she was not that concerned about having had an adverse effect because her father’s experiences in studies reassured her. She explained, “I have a lot of faith in my dad. He’s been doing this for however many years he’s been doing it, and he took a study before and it has to be okay.”

Given the low probability of dramatic adverse effects in phase I studies, it is surprising that I was present at a clinic when healthy volunteers experienced something as unusual as sleep paralysis. Typically, I heard about adverse effects through interviews with research staff and healthy volunteers. The latter often laughed about the side effects as they recalled their experiences, and the men seemed especially prone to exaggerate the stories to make them more comical or frightening. For example, a white man in his late fifties who had participated in more than ten studies wanted to share his “study stories” with me. In one Alzheimer’s disease phase I trial, he and other participants suffered so much vertigo and vomiting during the study that the clinic got permission from the sponsor to give the participants Benadryl® to minimize the vomiting at subsequent doses. He remembered,

It was the first study I’ve been in where there was a lot of sickness … I mean it was a pretty strong, strong dose … It was rough. We dosed 26 [people] and 18 got sick … It was not a good experience … That’s really the only time I’ve actually seen physical illness. I’m not privy to other side effects [i.e., those detected through laboratory results], but that was an experience. Yeah, it was [laughs], “You guys better go back to the lab and rethink that one.”

Most healthy volunteers’ adverse effects are short term. The effects tend to occur shortly after participants are dosed with the investigational drug and subside within hours. In a few instances, however, the effects lasted beyond the study, and participants reported that they were surprised to find that they were nonetheless released from the phase I clinic and not monitored by the research team. One such example came from an African American participant in her early thirties who had participated in more than thirty-five studies. She had several experiences of worrisome adverse effects, but a Ritalin® study made her change her perceptions of the risks of drugs that act on the brain. She reported, “I was administered 500 times the regular dosage of Ritalin, and it was crazy. [Laughs] And I felt the after-effects for several days, although they released me the same day.” When I asked her how she felt during the study, she responded,

I felt like I wasn’t in control of anything. My emotions were up and down. I felt like I took some kind of street drug because I was speeding everywhere and then I was so slowed down. I felt sad. I was crying uncontrollably. I was yelling. Someone being around me would think that I had Tourette’s syndrome because it was just coming out of my face and I couldn’t stop it. And I was emotional. I was pensive. I wanted to think about everything. I wanted to analyze everyone and everything. I was angry because I felt like everybody was laughing at me and judging me. Then I became sexual in this environment, it was bad. Really bad. Really bad. And it was embarrassing … When they let me go home, I told them I didn’t wanna leave. I didn’t feel right … My mom was nervous when I got home. I couldn’t sleep. I couldn’t sit down. I just needed her to stay on the phone. I begged her not to hang up, and she was so scared, like, “You don’t ever do anything like that again because I don’t know what’s wrong with you and you’re acting really strange,” and I felt strange.

Remarkably, many healthy volunteers will continue to participate in phase I studies even after they experience temporary adverse effects. The return to health or normalcy perhaps gives them the sense that no serious lasting harms can come from participating in clinical trials. A structural explanation for this phenomenon is that the institutional banalization of risk shapes volunteers’ perceptions and expectations of phase I trials beyond their individual experiences.

Research Staff and Risk in Phase I Clinics

Based on my observations of phase I clinics and the information I gleaned from interviews, research staff are generally caring and committed to the safety and comfort of healthy volunteers.5 At the same time, moments of what could be read as staff callousness rose to the surface when healthy volunteers were less willing to enroll in studies with nearly certain adverse effects. During my fieldwork, one such study had a high probability that participants would experience several hours of flu-like symptoms after dosing, and the clinic staff were frustrated with the high rates of attrition when participants would exercise their right to withdraw from the study. This occurred in the middle of a study that required multiple confinement periods, during each of which participants would be dosed with the investigational drug. The clinic found that many participants who had experienced fever, chills, and nausea in the first confinement period either officially withdrew or failed to check in for subsequent confinements. In another instance, a clinic was a victim of its own success at informing prospective participants about a dose-escalation study in which higher doses of the investigational drug led to larger numbers of healthy volunteers vomiting during the study. The recruiter informed participants so well during the screening visit that only five of the eight participants checked in on the morning of the study. When two of those participants did not pass all the intake laboratory work and had to be dropped from the study, the sponsor decided to postpone the study and requested that the clinic send the participants home. After dismissing the healthy volunteers (and promising to pay them US$200 because they showed up in “good faith” to complete the study), the nurse manager walked the hallways of the clinic fuming about the cancelled study. I asked her whether the participants who had not shown up that morning had called in or simply not turned up. With clear irritation in her voice, she said that most of them called with pretty transparent excuses but that ultimately they failed to come in because “they didn’t want to barf.” The nurse manager (at least at that moment) was not sympathetic to the fact that the negative effects of the drug would dissuade healthy volunteers from participating.

Analytically, I interpret reactions like the nurse manager’s as stemming from the institutional banalization of risk that occurs in the phase I context. Research staff members are especially prone to minimize or discount the risks of study participation because they perceive that risks are not only managed but also controlled or eliminated. Within this framework, likely adverse effects cease to be true “risks” and are not seen as indicators that participants’ long-term health could be jeopardized by their participation in phase I trials. What is particularly interesting here is that in their interactions with healthy volunteers, the staff are not downplaying the possibility of adverse effects per se. Indeed, my observations of the informed consent processes that occur for phase I studies suggest that staff prepare healthy volunteers well for those side effects when they are an expected part of the study. At the same time, longer-term risks are all but ignored or dismissed. For example, when death is listed as a risk on an informed consent document, it is chalked up to a legal requirement and often dismissed because those staff members have never witnessed it occur. An examination of the work practices of research staff reveals three primary components that contribute to the banalization of risk: (1) the perceived homogeneity of phase I clinical trial design, (2) the institutionalization of Fordist production processes, and (3) the discursive transformation of risks and harms into abstract data points.

Perceived Homogeneity of Phase I Clinical Trial Design

Phase I clinical trials, as described previously, are designed to answer different research questions about the safety and/or administration of pharmaceuticals. On the surface, then, it might appear contradictory to depict phase I studies as having homogeneous research designs. Yet, when examined from the perspective of which tests and procedures need to be performed and at what intervals, clear patterns of study conduct emerge regardless of the scientific goals of specific studies. Key events in most protocols are drug dosing, blood and urine collection, electrocardiograms (EKGs), physical examinations, and meals. For example, a white female physician at a phase I unit told me, “Actually the protocols are fairly uniform. There are like three, four, five kinds of master protocols in a sense. And an FIH-single dose repeat is an FIH-single dose repeat.” Adding to the impression of homogeneity, clinics have standardized screening procedures to verify that prospective volunteers qualify for studies, and only rare additions based on unusual inclusion or exclusion criteria would create deviations in the procedures or workflow.

Phase I protocols are so similar that there is often a sense among research staff that the investigational drugs are interchangeable when it comes to their daily responsibilities. In speaking with one white male staff member who was responsible for setting up protocols in an electronic data capture system used by the clinic, I learned that the similarities between studies could trigger programming errors because the differences among protocols are hard to notice:

Even if it looks the same, you have the same type of blood draws, same thing every time, it’s not [exactly] the same. Even though you compare your T&E’s, [that is] your time and event schedules, and they look identical … Well, every time I read a protocol to break it down to [the software application], … I’ll walk out, take a walk around the building real quick, just kinda clear my head, come back and start the new protocol with a fresh start.

Most research staff members, however, are not responsible for noting minor differences among protocols, and as a result, they make few attempts (and have no incentives) to differentiate between studies. Instead, they focus on following their daily schedules that dictate the clinical tasks for which they are responsible, such as administering doses or drawing blood (note that the schedule is, of course, generated by the electronic systems in which the protocols are inputted).

The perceived homogeneity of phase I trials is the primary factor underlying the industry’s reference to these studies, as well as to many of the clinics that conduct them, as “feed ’em-and-bleed ’ems.” The label emphasizes the two activities that structure most days in the clinics. The first is the task of feeding participants three meals per day according to the protocols, with fairly standardized diets based on calories, fat, or other restrictions. The second is the monitoring of participants by collecting and analyzing their blood at frequent intervals. Blood is the principal source of safety data because it contains information about the pharmacokinetics (PK) of the investigational drug. Colloquially, research staff (and subsequently healthy volunteers) refer to days in which the participants receive a dose of a drug followed by hourly blood draws as “PK days” to signal the large volume of blood collection. One white physician referred to clinical trials that are designed primarily to generate pharmacokinetic data as the “bread and butter” of the industry. He explained, “I call them bread and butter because to a guy like me, we do them, but I can’t say I’m hugely interested on a scientific level.” In other words, references to “feeding and bleeding” and PK studies within the industry emphasize the mundane nature of phase I work.

As these examples indicate, there are few exceptions that radically differentiate phase I protocols. Additionally, not all clinics have the capability or expertise to conduct protocols that include more invasive procedures such as lumbar punctures or endoscopy. Using the industry argot, a white administrator from a small commercial phase I unit compared his facility to academic sites and some of the larger ones in the industry by saying, “They have esoteric testing or procedures that most ‘feed ’em-and-bleed ’ems’ [like us] don’t.” Even when sites have access to advanced technologies and specialist practitioners, few phase I protocols call for those resources. A white male physician at an academic unit with high-tech capabilities explained, “We’re not a huge ‘feed ’em-and-bleed ’em’ … [but even so] we’ll do sort of basic studies.” In other words, the feeding and bleeding always predominate. Without the occasional lumbar puncture or endoscopy tube, it is easy to understand how studies begin to blur.

Institutionalization of Fordist Production Processes

Additionally, the institutional banalization of risk may be augmented through the clinics’ “Fordist” labor processes. Phase I studies require a high level of efficiency and cooperation among research staff because the schedule of clinical events is highly structured, with narrow windows of time in which doses must be given, blood collected, EKGs administered, food consumed, and so on. If research staff miss any of these windows because, for example, they fall behind on work or a participant fails to be in the right place at the right time, staff must document the “protocol deviation” and report the occurrence to the sponsor. To avoid this problem, clinics mobilize a Fordist production model, with an assembly line approach to workflow and clinic design, to try to ensure that the studies run smoothly and efficiently (Fisher 2009a).

Every stage of the research process is designed for throughput. Facilities are specifically designed to usher prospective volunteers rapidly through the screening process. Unlike in later-phase clinical trials, prospective healthy volunteers are often scheduled to screen for studies in groups of ten, twenty, or more. Even the informed consent process is designed for maximum efficiency, with most clinics providing information about the studies in group settings during which a research staff member reads parts of the informed consent form to prospective volunteers and answers any questions that they might have about the study. After consent forms are signed, the prospective volunteers then queue up to have their height and weight recorded, vitals checked, blood drawn, and urine sampled as part of the formal screening for studies. Prospective volunteers move through the screening clinic quickly because research staff members are assigned to each station (like workers on an assembly line) to collect the necessary data or bodily fluids. As prospective volunteers finish procedures at the last screening station, they will often find themselves back in the facility’s reception room.

Unlike the screening area, the rest of the phase I facility cannot be designed solely for maximum throughput of participants. Not only does the facility contain clinical spaces and administrative offices, it must also include shared sleeping quarters for participants, areas for recreation, dining, and bathing, as well as laundry facilities and other dormitory-like features. In spite of the diversity of activities the facilities must accommodate, all six clinics in my study nonetheless incorporated rigorous Fordist processes to facilitate study conduct. One key feature at five of the clinics was a procedure area designed for participants to be processed en masse.6 The procedure areas were spaces in which chairs were lined up in rows or circles so that staff could quickly collect samples or conduct procedures. In some cases, volunteers were even assigned specific chairs (usually by their study ID) to avoid confusion about who was due for a procedure, and staff could draw blood from participants by literally moving down the line from one to the next. Additionally, procedures are semiautomated at many clinics by the barcoding of participants, who are scanned in and out of procedures so that the timing of these events is precisely captured.

The Fordist organizational processes are intended to facilitate the efficient functioning of the phase I clinic. At the same time, however, they also lend the appearance that these processes are controlling the risk to healthy volunteers. On one hand, volunteers’ risk could be managed by the high degree of scheduled interactions with staff and by spaces that allow staff not only to conduct procedures efficiently but also to monitor volunteers easily. If a participant’s health or safety were seriously compromised by an investigational drug, the staff would likely observe the problem immediately and take corrective action to safeguard the participant. On the other hand, the workflow and space of the clinic do not alter the inherent risks of study participation. Instead, the staff’s focused attention on the minute-by-minute scheduling and administration of procedures as well as their management of participants routinizes their work and diverts their attention away from the differential risks to volunteers of consuming investigational drugs.

Discursive Transformation of Risks into Data

Risk is not only pushed to the background by the quotidian concerns that occupy the research staff’s time and energy. Actual harms to participants are also normalized by their transformation into abstract data points that must be gathered through the clinical trial process. This transformation is primarily discursive, manifesting in how staff talk about and label problems that occur during the course of studies. The primary example of this can be witnessed in how staff view adverse effects, which are simply referred to as “AEs.” Rather than seeing adverse effects as harms resulting from investigational drugs, research staff members perceive AEs to be routine and rather mundane: headaches, gastrointestinal changes, drowsiness, and so on. Not only do AEs usually fail to be spectacular, they are generally short term as well, adding to research staff’s impression that they are simply banal reactions that must be recorded and reported for any volunteers who experience them.

Even if these routine AEs are expected and temporary, the erasure of harm to healthy volunteers is striking. Clinics take healthy individuals with no symptoms, give them an investigational drug, and produce symptoms in those individuals as data, but harm remains absent from the discussion. Considering this from the perspective of how harm is socially constructed, it is clear that for the clinics, “harm” applies only to extreme drug reactions. In other words, by framing symptoms that participants develop as “AEs,” research staff are contributing to a banalization of risk through a data-centric discourse.

Language is important for structuring perception. Not only does the term adverse event transform the experiences of participants into mere data points, but the abbreviated “AE” also masks the harm because “adverse” is no longer even articulated. Other terms in the industry also minimize staff’s perceptions of the risks of phase I trials. Returning to the expression “feed ’em-and-bleed ’ems” used to describe the clinics as well as the studies, it is noteworthy that a critical word is absent from this phrase. Without administering the investigational drug, staff would have little reason to feed participants or collect blood from them. The more appropriate term might be “dose ’em-and-bleed ’em,” especially for those PK days when, following the dose, volunteers could expect frequent blood draws. Obviously, what makes the industry’s term work is its cadence and rhyme, but it also obscures the intentional production of harms and represents the clinic as almost risk free.

Healthy Volunteers and Phase I Study Risk

Although it might not be a surprise that research staff do not focus on the risks of phase I participation as they engage in their daily work, it is more difficult to understand how healthy volunteers who are subjected to that risk can express so little concern about being harmed. The exceptions to this were a handful of first-time participants I interviewed who were preoccupied with the possibility that unexpected serious injuries would result from the study. Healthy volunteers who are new to phase I clinical trials are rare,7 but even among that small group, this was a minority view. The framework of the institutional banalization of risk provides insights into healthy volunteers’ perceptions as well, especially in the context of serial participation. My ethnographic and interview data indicate that the institutional banalization of risk manifests in two ways for healthy volunteers: (1) desensitization to the risks and (2) trust in research clinics.

Desensitization to Phase I Risks

As with the repetition of many other activities in life, serial participation in phase I studies leads to a desensitization to the risk, and the vast majority of healthy volunteers are rather cavalier about the risks of participating. In reflecting on the risks of phase I trials, healthy volunteers rely on past experiences. For example, when I asked an Asian immigrant in his early forties about his perception of the risk of participating in clinical trials, he replied, “The reason why I don’t too much think about the risk [is] because I’ve done it so many times. If something was supposed to happen to me, it would have done happened already.” Similarly, a Latina participant in her thirties told me, “I start … in 2001 with one pill, and nothing happened … and I see nothing happened and I continue to [participate] and I have [done] like five studies.”

Some healthy volunteers acknowledge that when they first started participating, they had concerns about the risks. For example, an African American in his late thirties participating in his eighth study told me that he was nervous during his first study. When I asked him what made him nervous, he explained,

’Cause there was a consent form, and they gave you all the side effects that could happen and the most serious side effects was death. I was like, oh my God! You know I was freaking out, but my friend assured me, “No, it’s nothing. They have to say that ’cause it’s all procedural.” I was like, “Yeah, but they said death, brother! Like did you understand that? This is serious; anything can happen. I might have an adverse effect nobody else had. You know they tested it in lab rats, but we’re the first humans to test this drug.” And I was like wow. But I did it, and everything worked out … [laughs] I’m still here.

Often any anxiety first-time participants experience melts away once they get through their first study. Emphasizing this point, a Latino man in his twenties said, “The first time I have a little bit worry because I never did one before, but now nothing to worry about it, I think [it’s] safe. [laughs] I feel safe.” Other participants, even those enrolled in their first study, seem unconcerned about the risks. For instance, a Native American woman in her thirties who was a first-time participant admitted, “I guess I’m just kind of naïve to the fact that it would happen, just because I feel like I’m pretty healthy and I guess I’m just not really considering the possibilities that could happen.”

The desensitization to risk is possible because many healthy volunteers will participate in studies with no or only very minor side effects from the investigational drug. For example, a white male in his midfifties explained, “I don’t think there’s anything personally dangerous about it. I’ve never seen anybody [pauses]; some people may have side effects of nausea. Usually, the only side effect I ever have is sleepiness. And that’s about it.” Likewise, an African American man in his midforties was dismissive when I asked him about the risks of participation. He said, “I never had experienced too many adverse events in four years. I’ve experienced like two … I don’t feel nothing; I feel just the way when I came in. I’m going home tomorrow. I have no headache, I’m not dizzy, I’m walking straight, and my teeth didn’t fall out yet.” Another first-time participant—an African American man in his early forties—also made light of the side effects:

And the dosage was so minimal! I’m like, I already know I can get more side effects from a shot of Patrón [tequila] than I can get from what I just took on this study. [laughs] I think I’ve had more party days in college dorms worse than I’ve had from this pill they just gave me … I know I have perfect health, and I’m in control of me, so I can’t let that [the possible risks] scare me off, you know?

Healthy volunteers, like the staff, also perceive some degree of homogeneity among studies. They talk about the similarities in the risks that are listed in informed consent forms. For example, a white man in his thirties explained, “I feel like I’ve been to so many [that] it’s just like the same consent everything, every time. So they just tell you the same thing, just different medications … but the rest of it’s all the same.” Importantly, the combination of reading the same list of side effects without experiencing any of them further desensitizes them to the risks. In the words of a white man in his thirties,

I look at side effects a little, and like I’ve seen the same side effects on every one of the studies I’ve done, but I personally haven’t had a lot of side effects from it. Like the first couple I did, I got like the rashes they showed [in the consent form], but I don’t remember anything going wrong in the last couple that I’ve done. So it’s been all good.

Volunteers’ Trust in Research Clinics

Contributing to an institutional banalization of risk is serial participants’ trust that the general research oversight system and specific phase I clinics can protect them from harm. Some of this trust manifests in a generic way, such as in the following articulation by a white transwoman in her forties: “Here I feel very safe. You know, I don’t feel my health’s gonna be compromised in any way.” In other cases, volunteers trust the high level of monitoring, perceiving that it minimizes their risk. A Latino in his forties stated, “I mean, they are always monitoring us. I mean, it’s not like they are going to give us something that we can’t handle and we are going to die. [O sea, siempre nos están monitoreando. O sea, no nos van a dar algo para que nosotros no aguantemos y nos vamos a morir.]” Another healthy volunteer—an Asian immigrant in his fifties—pointed to both research oversight and clinic procedures as protection for participants:

I’m kind of trusting in that they have to adhere to the IRB, whatever it is, you know. So, I kind of have faith in the process they’re doing, and whatever they catch, whenever they catch something. I know that it’s part of protocol to let it be known to everybody, like immediately, you know what I mean? So, if something happens to a prior group … a side effect, they’ll round everybody up … So, I kind of go in with a sense of security that everybody’s doing what they’re supposed to do.

In other words, even when healthy volunteers acknowledge the possibility for harm, they indicate that such risks are mitigated by the clinics, their staff, and research ethics boards.

Saying that healthy volunteers are both trusting and desensitized to the risks of phase I trial participation is not the same as saying that they are indiscriminate about the studies in which they will participate. Some serial participants will not enroll in studies at clinics that have a reputation for unprofessional staff or dangerous facilities. Some also become savvy about phase I trial design and actively manage their study participation in ways that they perceive as reducing their risk of harm. For example, many healthy volunteers will not participate in studies that require invasive procedures such as lumbar punctures, whereas others are concerned about the type or dose of investigational drug. Many of the African American volunteers in my sample claimed that they would refuse to participate in studies for AIDS or psychotropic medications.

In part, volunteers’ selectivity is based on their perceptions of those procedures, drugs, or illnesses, but it can also be based on their own negative experiences in clinical trials. For example, some of the healthy volunteers who avoided psychotropic drugs did so because they had participated in schizophrenia studies that resulted in many side effects, including frequent nightmares. What is interesting is that even when serial participants experience side effects, they are likely to see those specific studies as exceptions and become even more desensitized to risk. For example, a white man in his late thirties explained that his perception of the risk is based on how long the side effects last, pointing to his current study as an example: “The side effects [in this study] hit me really hard. I looked like death warmed over, and so I spent most of the day curled up in a ball, waiting for the day to go by … but you know, I know that by this time tomorrow, I will be right as rain.”

Ultimately, however, the primary concern about phase I clinical trials that serial participants wanted to discuss in interviews was the risk of failing to qualify for a study when they wanted (or felt they needed) to participate. As they come to rely on phase I participation for income, they focus on getting into the next study instead of thinking about the risks. For example, an African American man in his forties explained that he refused to participate in vaccine trials, not because of the risk of the vaccine, but because of its effects on his future participation: “Now a pill, you can’t catch nothing with a pill. When they inject you with something, they [are] telling you they [are] giving you something … With like a vaccine or something, then you [are] shut down for quite a few months, so you actually losing money.” How healthy volunteers discuss the risk of exclusion from participation is a more complex topic than can be covered here, but it indicates that the direct physical risks of participation are not healthy volunteers’ primary concern.

Conclusion

The management and communication of risk are considered key features of the ethics of human subjects research, yet risk is framed in that realm as objective and measurable instead of as deeply contextual and contested. This article has examined how the risk of participation in phase I clinical trials becomes banal for research staff and healthy volunteers. Through organizational and discursive practices, research staff perceive the risks of phase I studies as minimal, and healthy volunteers become desensitized to the risks and develop trust in the clinics as they enroll in new studies without having experienced harm in previous ones. These findings illustrate in large part how risk gets constructed (and reconstructed) in particular ways that are divorced from the conversations about the ethics of early-phase drug studies that take place outside the clinics among academics and research ethics boards.

Risk begins to disappear for research staff as they engage in a process of naturalizing the particular (efficient) mode of knowledge production in phase I trials. I compared the organizational practices to Fordist production processes. When revisiting that analogy from the perspective of ethics and risk, however, some differences between traditional production lines and the workflow of the phase I unit become stark. In the factory, the workers are at risk for unintentional harm or injury through repetitive stress or strain or as a result of the machinery itself. In contrast, the risk of harm or injury in phase I studies is to the healthy volunteers and stems from intentional exposure to an investigational drug. Within discussions of the ethics of phase I research, this element of producing side effects—even if only temporary—in otherwise healthy individuals is overshadowed by debates about how much it is appropriate to pay volunteers (e.g., Dunn and Gordon 2005; Largent et al. 2012; Stones and McMillan 2010).

The serial participation of savvy healthy volunteers in clinical trials creates a different structure of engagement with the putative risks and benefits of enrolling in phase I studies. Payment, of course, is central to why healthy volunteers consent to phase I studies, raising the serious issue of how clinical trial participation both exploits and reproduces social inequalities in US society (Fisher 2009b). Perceptions of risk are nonetheless important because healthy volunteers make risk assessments that weigh the risks of study enrollment against the economic need that motivates their interest in clinical trials. When the risks of participation are effectively erased through the process of desensitization, healthy volunteers’ personal risk-benefit calculus is skewed toward trial participation because they perceive it as safer than it actually is. This echoes Cerulo’s (2006) finding of a widespread cultural valorization of best-case scenarios instead of attention to or planning for the disasters that can and do occur. An STS analysis of the institutional processes that trivialize phase I risk indicates that the informed consent process is always already flawed for serial participants in ways that the ethics literature has not considered. It should trigger profound ethical concern that research staff and healthy volunteers routinely construct phase I clinical trial participation as low risk when it is the healthy volunteers—due to their social disadvantage—who will be the “risk losers” should long-term harm result.

Acknowledgments

I am grateful for the assistance of Dulce Medina and Irma Beatriz Vega de Luna in conducting interviews with Spanish-speaking healthy volunteers.

Funding

The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by grant number 1R21CA131880 from the National Cancer Institute. Its contents are solely the responsibility of the author and do not necessarily represent the official views of the National Cancer Institute, National Institutes of Health.

Author Biography

Jill A. Fisher is an Assistant Professor of Social Medicine in the Center for Bioethics at the University of North Carolina at Chapel Hill. She is the author of Medical Research for Hire: The Political Economy of Pharmaceutical Clinical Trials (Rutgers University Press, 2009) and editor of Gender and the Science of Difference: Cultural Politics of Contemporary Science and Medicine (Rutgers University Press, 2011).

Footnotes

1. Pharmaceutical clinical trials are conducted throughout the world, with a concentration of clinics in North America, Western Europe, and Asia. In spite of media and scholarly attention to clinical trials in the developing world (e.g., Petryna 2007; Shah 2006), the United States remains the dominant site for clinical trials worldwide (Borfitz 2011).

2. In lay terms, pharmacokinetics measures what the body does to a drug after it has been consumed, including how it is absorbed, metabolized, and excreted. Pharmacodynamics measures the effects of a drug on the body.

3. These studies used to be called “First-in-Man” clinical trials, but the industry has largely abandoned that term in favor of a gender-neutral alternative since 1993, when the Food and Drug Administration lifted its ban on the enrollment of “women of childbearing potential” in early-phase clinical trials (Corrigan 2002a; Fisher and Ronald 2010). In spite of the change in US regulations and terminology, these trials are still filled almost exclusively with male volunteers (Batchelor 2002).

4. Of course, financial motivation is multifaceted, with healthy volunteers putting their study stipends to a variety of purposes. In my forthcoming book, I describe a taxonomy of financial motivations grouped into the following categories: necessary income, investment in the future, and mass consumption. Nonfinancial motivations, including altruism and social and lifestyle factors, also shape participation, especially serial participation.

5. Some phase I clinics not included directly in my sample have a reputation for hiring unprofessional staff who do not treat healthy volunteers as well. Because serial participants frequent many clinics, their accounts gave me “indirect access” to clinics that had refused my request for a visit (see Monahan and Fisher 2014 on the method of indirect access). Notably, the clinics with the worst reputations among healthy volunteers were the most likely to deny my request to include them in the study.

6. The sixth facility, which was the smallest clinic, did not have a procedure area. Instead, participants were asked to remain in bed when it was time for procedures so that research staff could come to them at designated times.

7. My sample had a slightly larger representation of first-time participants than expected, in large part because one facility had, shortly before my visit, successfully launched a campaign to increase the number of healthy volunteers in its database (see Table 1).

Declaration of Conflicting Interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

1. Abadie R. The Professional Guinea Pig: Big Pharma and the Risky World of Human Subjects. Duke University Press; Durham, NC: 2010.
2. Batchelor S. Feds May Track Sex Differences in Drug Reactions. Women’s eNews. September 15, 2002. Accessed October 1, 2014. http://womensenews.org/story/medicine/020915/feds-may-track-sex-differences-in-drug-reactions
3. Beck U. From Industrial Society to the Risk Society: Questions of Survival, Social Structure and Ecological Enlightenment. Theory, Culture & Society. 1992;9(1):97–123.
4. Belmont Report. 1979. Accessed April 7, 2012. http://ohsr.od.nih.gov/guidelines/belmont.html
5. Borfitz D. Canada Scrambles to Reboot Sagging Clinical Trials Market. CenterWatch Monthly. 2011;18(12):1, 10–14.
6. Bourdieu P. Outline of a Theory of Practice. Translated by Nice R. Cambridge University Press; Cambridge, UK: 1977.
7. Bourgois P. In Search of Respect: Selling Crack in El Barrio. Cambridge University Press; Cambridge, UK: 1995.
8. Briggs L. Reproducing Empire: Race, Sex, Science, and U.S. Imperialism in Puerto Rico. University of California Press; Berkeley: 2002.
9. Cerulo KA. Never Saw It Coming: Cultural Challenges to Envisioning the Worst. University of Chicago Press; Chicago: 2006.
10. Chapman AR. Addressing the Ethical Challenges of First-in-Human Trials. Journal of Clinical Research and Bioethics. 2011;2(4):113.
11. Clarke AE, Mamo L, Fosket J, Fishman J, Shim J. Biomedicalization: Technoscience, Health, and Illness in the U.S. Duke University Press; Durham, NC: 2010.
12. Cooper M, Waldby C. Clinical Labor: Tissue Donors and Research Subjects in the Global Bioeconomy. Duke University Press; Durham, NC: 2014.
13. Corrigan OP. First in Man: The Politics and Ethics of Women in Clinical Drug Trials. Feminist Review. 2002a;72(1):40–52.
14. Corrigan OP. A Risky Business: The Detection of Adverse Drug Reactions in Clinical Trials and Post-marketing Exercises. Social Science and Medicine. 2002b;55(3):497–507. doi: 10.1016/s0277-9536(01)00183-6.
15. Derendorf H, Lesko LJ, Chaikin P, Colburn WA, Lee P, Miller R, Powell R, Rhodes G, Stanski D, Venitz J. Pharmacokinetic/Pharmacodynamic Modeling in Drug Research and Development. The Journal of Clinical Pharmacology. 2000;40(12):1399–418.
16. De Vries RG. How Can We Help? From Sociology in to Sociology of Bioethics. Journal of Law, Medicine, & Ethics. 2004;32(2):279–92. doi: 10.1111/j.1748-720x.2004.tb00475.x.
17. Dumit J. Drugs for Life: How Pharmaceutical Companies Define Our Health. Duke University Press; Durham, NC: 2012.
18. Dunn LB, Gordon NE. Improving Informed Consent and Enhancing Recruitment for Research by Understanding Economic Behavior. JAMA. 2005;293(5):609–12. doi: 10.1001/jama.293.5.609.
19. Epstein S. Impure Science: AIDS, Activism, and the Politics of Knowledge. University of California Press; Berkeley: 1996.
20. Faden RR, Beauchamp TL. A History and Theory of Informed Consent. Oxford University Press; New York: 1986.
21. Farmer P. An Anthropology of Structural Violence. Current Anthropology. 2004;45(3):305–25.
22. Fisher JA. Practicing Research Ethics: Private-sector Physicians & Pharmaceutical Clinical Trials. Social Science & Medicine. 2008;66(12):2495–505. doi: 10.1016/j.socscimed.2008.02.001.
23. Fisher JA. Bleeding and Feeding: Unpacking the Banality of Healthy Human Testing of Investigational Pharmaceuticals. Paper presented at the Annual Meeting of the Society for Social Studies of Science (4S); October 28–31, 2009a; Washington, DC.
24. Fisher JA. Medical Research for Hire: The Political Economy of Pharmaceutical Clinical Trials. Rutgers University Press; New Brunswick, NJ: 2009b.
25. Fisher JA. Expanding the Frame of Voluntariness in Informed Consent: Structural Coercion and the Power of Social and Economic Context. Kennedy Institute of Ethics Journal. 2013;23(4):355–79. doi: 10.1353/ken.2013.0018.
26. Fisher JA, Kalbaugh CA. Challenging Assumptions about Minority Participation in U.S. Clinical Research. American Journal of Public Health. 2011;101(12):2217–22. doi: 10.2105/AJPH.2011.300279.
27. Fisher JA, Ronald LM. Sex, Gender, and Pharmaceutical Politics: From Drug Development to Marketing. Gender Medicine. 2010;7(4):357–70. doi: 10.1016/j.genm.2010.08.003.
28. Fox RC, Swazey JP. Medical Morality is Not Bioethics—Medical Ethics in China and the United States. Perspectives in Biology and Medicine. 1984;27(3):336. doi: 10.1353/pbm.1984.0060.
29. Goffman E. Asylums: Essays on the Social Situation of Mental Patients and Other Inmates. Anchor Books; New York: 1961.
30. Greene JA. Prescribing by Numbers: Drugs and the Definition of Disease. Johns Hopkins University Press; Baltimore, MD: 2007.
31. Hayden C. A Generic Solution? Pharmaceuticals and the Politics of the Similar in Mexico. Current Anthropology. 2007;48(4):475–95.
32. Hedgecoe A. The Politics of Personalized Medicine: Pharmacogenetics in the Clinic. Cambridge University Press; New York: 2004.
33. Hedgecoe A. A Deviation from Standard Design? Clinical Trials, Research Ethics Committees, and the Regulatory Co-construction of Organizational Deviance. Social Studies of Science. 2014;44(1):59–81. doi: 10.1177/0306312713506141.
34. Hornblum AM. Acres of Skin: Human Experiments at Holmesburg Prison. Routledge; New York: 1998.
35. Keating P, Cambrosio A. Cancer on Trial: Oncology as a New Style of Practice. University of Chicago Press; Chicago: 2011.
36. Kelly K, Nichter M. The Politics of Local Biology in Transnational Drug Testing: Creating (Bio)Identities and Reproducing (Bio)Nationalism through Japanese “Ethnobridging” Studies. East Asian Science, Technology and Society. 2012;6(3):379–99.
37. Knorr-Cetina K. Epistemic Cultures: How the Sciences Make Knowledge. Harvard University Press; Cambridge, MA: 1999.
38. Kumagai Y, Fukazawa I, Momma T, Iijima H, Takayanagi H, Takemoto N, Kikuchi Y. A Nationwide Survey on Serious Adverse Events in Healthy Volunteer Studies in Japan. Clinical Pharmacology and Therapeutics. 2006;79(2):71.
39. Kupetsky-Rincon EA, Kraft WK. Healthy Volunteer Registries and Ethical Research Principles. Clinical Pharmacology and Therapeutics. 2012;91(6):965–68. doi: 10.1038/clpt.2012.32.
40. Lakoff A. Pharmaceutical Reason: Knowledge and Value in Global Psychiatry. Cambridge University Press; New York: 2005.
41. Lakoff A. The Right Patients for the Drug: Managing the Placebo Effect in Antidepressant Trials. BioSocieties. 2007;2(1):57–71.
42. Largent EA, Grady C, Miller FG, Wertheimer A. Money, Coercion, and Undue Inducement: Attitudes about Payments to Research Participants. IRB: Ethics & Human Research. 2012;34(1):1–8.
43. Latour B, Woolgar S. Laboratory Life: The Construction of Scientific Facts. Princeton University Press; Princeton, NJ: 1979.
44. Lemmens T, Elliott C. Justice for the Professional Guinea Pig. American Journal of Bioethics. 2001;1(2):51–53. doi: 10.1162/152651601300169095.
45. Mirowski P. Science-mart: Privatizing American Science. Harvard University Press; Cambridge, MA: 2011.
46. Monahan T, Fisher JA. Strategies for Obtaining Access to Secretive or Guarded Organizations. Journal of Contemporary Ethnography. 2014. doi: 10.1177/0891241614549834.
47. Morris N, Balmer B. Volunteer Human Subjects’ Understandings of their Participation in a Biomedical Research Experiment. Social Science & Medicine. 2006;62(4):998–1008. doi: 10.1016/j.socscimed.2005.06.044.
48. Motluck A. Perils of the Professional Lab Rat. New Scientist. 2009;27(18):40–43.
49. Petryna A. Clinical Trials Offshored: On Private Sector Science and Public Health. BioSocieties. 2007;2(1):21–40.
50. Petryna A. When Experiments Travel: Clinical Trials and the Global Search for Human Subjects. Princeton University Press; Princeton, NJ: 2009.
51. Pollock A. Medicating Race: Heart Disease and Durable Preoccupations with Difference. Duke University Press; Durham, NC: 2012.
52. Reverby SM. Examining Tuskegee: The Infamous Syphilis Study and Its Legacy. University of North Carolina Press; Chapel Hill: 2009.
53. Scheper-Hughes N, Bourgois PI. Violence in War and Peace: An Anthology. Blackwell; Malden, MA: 2003.
54. Shah S. The Body Hunters: How the Drug Industry Tests its Products on the World’s Poorest Patients. The New Press; New York: 2006.
55. Sibille M, Donazzolo Y, Lecoz F, Krupka E. After the London Tragedy, is it Still Possible to Consider Phase I Is Safe? British Journal of Clinical Pharmacology. 2006;62(4):502–3. doi: 10.1111/j.1365-2125.2006.02740.x.
56. Sismondo S. Pharmaceutical Maneuvers. Social Studies of Science. 2004;34(2):149–59.
57. Stein CM. Managing Risk in Healthy Subjects Participating in Clinical Research. Clinical Pharmacology & Therapeutics. 2003;74(6):511–12. doi: 10.1016/j.clpt.2003.08.007.
58. Stones M, McMillan J. Payment for Participation in Research: A Pursuit for the Poor? Journal of Medical Ethics. 2010;36(1):34–36. doi: 10.1136/jme.2009.030965.
59. Sunder Rajan K. Experimental Values: Indian Clinical Trials and Surplus Health. New Left Review. 2007 May–Jun;45:67–88.
60. Tolich M. What If Institutional Review Boards (IRBs) Treated Healthy Volunteers in Clinical Trials as Their Clients? Australasian Medical Journal. 2010;3(12):767–71.
61. van Kammen J, Oudshoorn N. Gender & Risk Assessment in Contraceptive Technologies. Sociology of Health & Illness. 2002;24(4):436–61.
62. Weinstein M. A Public Culture for Guinea Pigs: U.S. Human Research Subjects after the Tuskegee Study. Science as Culture. 2001;10(2):195–223. doi: 10.1080/09505430120052293.
63. Wood AJ, Darbyshire J. Injury to Research Volunteers: The Clinical Research Nightmare. New England Journal of Medicine. 2006;354(18):1869–71. doi: 10.1056/NEJMp068082.