Author manuscript; available in PMC: 2011 Sep 7.
Published in final edited form as: J Empir Res Hum Res Ethics. 2010 Sep;5(3):43–56. doi: 10.1525/jer.2010.5.3.43

Why do we pay? A national survey of investigators and IRB chairpersons

Elizabeth Ripley 1, Francis Macrina 1, Monika Markowitz 1, Chris Gennings 1
PMCID: PMC3168552  NIHMSID: NIHMS312831  PMID: 20831420

Abstract

The principle that payment to participants should not be undue or coercive is the consensus of international and national guidelines and ethical debates; however, what this means in practice is unclear. This study determined the attitudes and practices of IRB chairpersons and investigators regarding participant payment. One thousand six hundred investigators and 1,900 IRB chairpersons received an invitation to participate in a web-based survey. Four hundred fifty-five investigators (28.3%) and 395 IRB chairpersons (18.6%) responded. The survey was designed to gather the considerations that govern payment determination and the practical application of these considerations in hypothetical case studies. The survey asked best-answer, multiple-choice, and open-text questions. Short hypothetical case scenarios were presented, and participants were asked to rate factors in the study that might impact payment and then determine their recommended payment. A predictive model was developed for each case to determine the factors that affected payment. Although compensation was the primary reason given to justify payment by both investigators and IRB chairpersons, the cases suggested that, in practice, payment is often guided by incentive, as shown by the impact of anticipated difficulty recruiting, inconvenience, and risk in determining payment. Payment models varied by type of study. Ranges of recommended payments by both groups for different types of procedures and studies are presented.

Keywords: participant payment, Institutional Review Boards, ethics


Payment for participation in research is not a new concept. For instance, in the 1820s Beaumont compensated a patient with an incompletely healed gunshot wound to the stomach with food, lodging, clothing, and $150 to study his stomach contents for a year (Lederer, 1995). The positive and negative impacts of payment for study participation have been shown in actual survey (Coogan & Rosenberg, 2004; Doody et al., 2003; Gilbart & Kreiger, 1998; Little & Davis, 1984; Parkes et al., 2000; Perneger, Etter, & Rougemont, 1993), HIV risk reduction (Deren et al., 1994), and substance abuse studies (Festinger et al., 2005; Fry & Dwyer, 2001). The ethical debate regarding paying participants is also not new, and laws and guidelines establish the concept that payment should not be coercive or undue, as has been extensively discussed (Grady, 2005; Permuth-Wey & Borenstein, 2009; Ripley, 2006). Payment practices vary for similar national studies (Grady, 2005), but little empirical data exist to determine how investigators choose payment and how IRB members review payment.

A study by Fry et al. (2005) surveyed 70 organizations and several listservs in Australia regarding payment practices. A common reason for not providing reimbursement for participants in research was that it was unnecessary (46%). Thirty-three percent thought that payment was an inducement, and 22% thought that it might compromise participants' willingness to consider the risks and benefits. Twenty-five percent thought that reimbursement could potentially target economically vulnerable populations and could compromise scientific integrity by skewing the types of individuals enrolled. Three percent thought that payment could harm participants, and 6% noted they did not have the funds. When asked the rationale for deciding to pay, 15% responded that reimbursement for direct costs was necessary. In reporting this study, Fry et al. (2005) called for empirical research to identify how ethics committees and investigators make decisions about participant payment practices. Data from two single-institution United States (US) studies by Ripley et al. (2008) and Ripley, Macrina, & Markowitz (2006) showed variable opinions regarding payment among and between investigators and IRB members, but there was consensus that risk was a major determinant of payment. These studies looked at pragmatic answers to payment questions at one institution. However, there are no US national data to show how investigators and IRB members actually determine and approve payment for research.

This study was a national survey of NIH-funded principal investigators and IRB chairpersons designed to identify considerations that govern payment determination and practical application of these considerations in case studies.

Methods

The study was approved by the Institutional Review Board (IRB) at Virginia Commonwealth University (VCU). Two independent national web-based surveys were conducted: one of IRB chairpersons and one of Principal Investigators (PIs) with active NIH funding.

Sample Size

For a survey with multiple questions and variable sample proportions across those questions, a sample size of 400 for each survey was adequate: assuming the worst-case sample proportion of 50% at a 95% confidence level, the calculated sampling error is 4.9% (Decision Support Systems, 2010).
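The 4.9% figure follows from the standard normal-approximation margin of error for a proportion; a minimal sketch (illustrative only, not the authors' calculation tool):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the normal-approximation confidence interval
    for a sample proportion p based on n respondents (z = 1.96 for 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case proportion p = 0.5 with n = 400 respondents:
e = margin_of_error(0.5, 400)
print(f"{e:.1%}")  # 4.9%
```

The worst case p = 0.5 maximizes p(1 - p), which is why it is the conventional planning assumption when the true proportions are unknown.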

Participants and Sampling

NIH-funded researchers were selected in order to obtain contact information for investigators across the country. Many NIH-funded researchers also have experience with non-NIH-funded research, and this population of investigators was considered to have enough background to answer questions regarding NIH-funded, industry-funded, and unfunded research. The Computer Retrieval of Information on Scientific Projects (CRISP, http://crisp.cit.nih.gov/; now retired and replaced with the NIH RePORTER system) website database was queried, and e-mail addresses of 2,824 PIs for studies involving human subjects were obtained by identifying abstracts that included human subjects projects. Both biomedical and social/behavioral researchers were included. Contact information, including e-mail addresses, for 5,805 IRB chairpersons was obtained through a Freedom of Information Act request to the Office for Human Research Protections.

Using random.org (RANDOM.ORG, 2010), a random ordering list was generated for both the IRB chairpersons and investigators. Participants were selected by starting at the top of this list.

Assuming a response rate of 50%, an initial 800 individuals in each cohort were sent e-mail invitations, and non-responders were sent two reminder e-mails. The original response rate was inadequate to provide the needed participants, and a second sample of 800 individuals from each group was sent invitations. Because of the lower IRB response rate, an additional 300 IRB chairpersons were sent mail invitations to complete the online survey. Figure 1 shows the sampling method and response rates. Other than knowing that non-responders were either NIH investigators or IRB chairpersons, no other information was collected on these individuals.

FIG. 1. Study Design.

Surveys

The surveys were piloted locally with Virginia Commonwealth University (VCU) IRB members and investigators (Ripley et al., 2008; Ripley et al., 2006) and nationally using a web-based survey of Western IRB (WIRB) members. Both surveys in this study asked similar questions. IRB members who noted they were investigators also received the investigator-specific questions, and likewise investigators who were IRB members received the IRB-specific questions. Therefore, the order of questions varied but not the wording. The surveys asked best-answer, multiple-choice, and open-text questions.

Hypothetical Survey Cases

Different cases were developed in order to obtain payment considerations for a variety of studies, ranging from a survey study to an inpatient pharmacokinetic study of healthy volunteers (see Table 1). Because of space and concern about participants' time constraints, the case scenarios were short, providing only basic information about the study: a brief description of the goal(s), number of visits, duration of the study, specific procedures required, study population, funding source, risks, and frequency of the risks. Box 1 shows an example of one of the cases and the questions asked for each case. We acknowledge that for an actual study, investigators and IRB chairpersons have much more detailed information as a basis for their decisions, and IRB chairpersons approve payment as submitted by the investigator rather than select payment. After reading each case scenario, participants were asked to rate risk, inconvenience, and difficulty recruiting as minimal, medium, or high, and to rate funding source, patient income, therapeutic/non-therapeutic trial, and involvement of healthy volunteers as important or not important in determining payment. They were then asked to select the appropriate payment: no payment, payment for parking, a movie ticket or small gift from a catalog, a monetary payment, a prorated monetary payment, or a prorated payment with a completion bonus. Participants who selected monetary payments were asked to provide dollar amounts. For purposes of analysis and modeling, no payment and parking only were assigned a $0 value and a movie ticket or small gift a $15 value; these were collectively considered token payments. The other values were the monetary amounts stated by the survey participant.
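The $0/$15 coding step described above can be sketched as a small helper; the choice labels and function name here are our own illustration, only the dollar values come from the text:

```python
# Dollar values assigned to token payment choices for modeling,
# per the coding described in the Methods (labels are illustrative).
TOKEN_VALUES = {
    "no payment": 0.0,
    "parking only": 0.0,
    "small gift": 15.0,   # movie ticket or catalog gift
}

def payment_value(choice, stated_amount=None):
    """Return the dollar value used in the payment models: token
    choices get fixed values; monetary choices use the respondent's
    stated amount."""
    if choice in TOKEN_VALUES:
        return TOKEN_VALUES[choice]
    if stated_amount is None:
        raise ValueError("monetary choice requires a stated amount")
    return stated_amount
```

A respondent choosing "prorated" with a stated total of $70 would thus contribute 70.0 to the model, while "parking only" contributes 0.0.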

TABLE 1.

Hypothetical Case Scenarios: Cases were designed to cover a variety of types of studies and study factors. Key elements are shown below.

| Case | Lupus Registry | Substance Abuse Survey and Urine Drug Screen | Twin DNA and Survey | Cancer Treatment Trial | Healthy Volunteer Pharmacokinetic Study | Hypertension Placebo Trial |
| Study Population | Patients | Substance Abusers | Twins | Patients | Healthy | Patients |
| Pilot | No | Yes | No | No | No | No |
| Therapeutic | No | No | No | Yes | No | Yes |
| Funded | Yes | No | Yes | Yes | Yes | Yes |
| Placebo | NA | NA | NA | No | NA | Yes |
| Factorial | No | No | No | No | Yes – Inconvenience | Yes – Risk and Frequency of Risk |

BOX 1. Sample of Case Scenario Format. Example given is for a survey of substance abusers. The questions that follow this case are similar to those for all cases.

Dr. Jones is recruiting patients who are substance abusers for a survey of factors influencing continued drug abuse. The surveys will ask questions regarding obtaining drugs, employment, depression, financial status, and past criminal charges. The surveys will be done in a private place. The surveys will take 2 hours to complete. A urine drug screen will also be obtained. Surveys will be coded and the key kept locked in the investigator's office. Three months later, the participants will fill out the same surveys and will have another urine drug screen performed. She hopes to use this study as preliminary data for an R01 submission. There is no direct funding for the current study; however, she does have some discretionary funds which she could use.

For the above case, please rank each factor:

Minimal Medium High
Risk to the participant
Inconvenience to the participant
Anticipated difficulty recruiting/retaining participants

For the above case, which of the following influences your decision on appropriate payment?
Important Not Important

Patient Participant
Income of Participant
Budget/Funding Source for the Study
Other Influence

Which payment schedule do you feel is most appropriate? (If choosing money payments, please fill in the amount you would suggest.)
a. No payment
b. Payment for parking or bus fare
c. A gift from a catalog or movie tickets
d. Money
 1. At completion of study $_____ (suggested amount).
 2. Prorated $_____ (suggested amount) for each completed set of surveys and drug screen.
 3. Prorated with a completion bonus $_____ (suggested amount) for the baseline set of surveys and drug screen, and $_____ for the final set and drug screen.

To determine whether, for similar types of studies, alterations in inconvenience, risk, or frequency of risk had an impact on payment recommendations, several cases had a factorial design which varied these elements. The levels of variation were determined by the research team after discussion with other researchers and staff at VCU. Each respondent received only one level. These levels are discussed in the Results section.

Statistical Analysis

Descriptive statistics were used where appropriate and are reported as % or mean ± SD. Participants rated the importance of various reasons for payment and factors for determining payment as not important, somewhat important, important, very important, or extremely important. For comparison, these responses were then dichotomized as not important (not important and somewhat important) and important (important, very important, and extremely important). The dichotomized data were then compared using Pearson's chi-square test to determine whether there was a significant tendency to consider a reason for payment or factor for determining payment “important” or “not important” (two-sided test). An item that was not significant meant that an equivalent proportion of respondents considered it important as considered it not important; such items were considered neutral.
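The dichotomize-and-test step above amounts to a Pearson chi-square goodness-of-fit test against an even 50/50 split. A minimal sketch (the 300/100 counts are hypothetical, not survey data; for one degree of freedom the chi-square survival function reduces to the complementary error function, so only the standard library is needed):

```python
import math

def dichotomized_chi_square(n_important, n_not_important):
    """Pearson chi-square goodness-of-fit test (df = 1) of whether
    dichotomized 'important' vs. 'not important' counts split evenly."""
    total = n_important + n_not_important
    expected = total / 2
    chi2 = sum((obs - expected) ** 2 / expected
               for obs in (n_important, n_not_important))
    # For df = 1: P(X >= chi2) = erfc(sqrt(chi2 / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical example: 300 of 400 respondents rate a factor important.
chi2, p = dichotomized_chi_square(300, 100)
print(chi2, p)  # chi2 = 100.0; p far below 0.05, so the item is "important"
```

A non-significant result (e.g., a 200/200 split gives chi2 = 0, p = 1) corresponds to the "neutral" items in Table 4.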

Model Analysis

Summary statistics were produced for each case. A predictive model was developed for payment using demographic data (sex, geographic region, and survey [IRB, investigator]), how the participants in each group rated risk, inconvenience, and difficulty recruiting (minimal, medium, high), and factors including whether the study was therapeutic, patient income, and funding source. A log-linear model was used to accommodate the constraint of a nonnegative mean payment, using a generalized linear model with a log link. The variance of the mean was assumed to be constant. A model-building strategy was determined for all cases. Since the data were taken from two separate surveys, an indicator for survey (IRB vs. investigator) was included in the model using a factor-effects parameterization; the resulting model can be interpreted as the average model between the two surveys. An initial model was built using study risk, inconvenience to participants, and anticipated difficulty recruiting as categorical variables with potential pairwise interactions.

Variables significant at the 0.10 level by likelihood ratio statistics were retained in the model. Next, other variables corresponding to the case were included using a backward stepwise procedure. Finally, the categorical variables for risk, inconvenience, and difficulty recruiting were changed to scores (1, 2, and 3) for convenience of interpretation when appropriate (i.e., when the categorical estimates were monotonic) and to increase statistical power. Model factors with p ≤ 0.05 were counted as significant.
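Under the log link described above, each coefficient acts multiplicatively on the predicted payment: moving a 1–3 score up one step multiplies the expected payment by exp(coefficient). A minimal sketch of this interpretation (the intercept and coefficient are illustrative values, not the paper's fitted estimates):

```python
import math

def predicted_payment(intercept, coef, score):
    """Expected payment under a log-link GLM with a single 1-3
    score predictor (1 = minimal, 2 = medium, 3 = high)."""
    return math.exp(intercept + coef * score)

# Illustrative coefficients only (not fitted values from the survey data):
b0, b1 = 3.0, 0.4
low = predicted_payment(b0, b1, 1)   # e.g., minimal inconvenience
med = predicted_payment(b0, b1, 2)   # e.g., medium inconvenience
# Each one-step increase multiplies predicted payment by exp(b1):
print(round(med / low, 4), round(math.exp(b1), 4))  # both print 1.4918
```

This multiplicative reading is why converting the monotonic categorical estimates to 1–3 scores yields a single, easily interpreted effect per factor.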

For cases with factorial elements, significant differences between assigned levels and participant-designated levels are presented. Assigned levels are used in the model analysis. SAS (version 9.2) was used for the case model analysis and JMP (version 8) for demographic and descriptive statistics.

Results

Respondents

Participant demographics are shown in Table 2. Both IRB chairpersons and investigators represented institutions evenly distributed across the country. As expected, a large percentage of IRB chairpersons were also investigators, and about one-third of investigators reported having served on an IRB panel. IRB chairpersons had served on IRB panels longer than investigators had. 54.1% of IRB chairpersons and 51.5% of investigators were on an IRB primarily reviewing biomedical studies. The number of human research studies that investigators had conducted during the past five years varied widely. The majority of all respondents had worked on a research project that paid participants. The most common role on research projects was principal investigator for both investigators and IRB chairpersons; 16.7% of IRB chairpersons were primarily consultants on research projects.

TABLE 2.

Characteristics of Respondents.

Investigator Survey
IRB Chair Survey
N % N %
Gender
 Male 222 51.2 202 61.2
 Female 212 48.9 128 38.8
Age (years) 47.9 ± 10.3 54.4 ± 9.8
Academic Rank
 Clinical Instructor 5 1.1 4 1.5
 Associate Professor 108 24.3 57 21.1
 Assistant Professor 112 25.2 25 9.3
 Professor 170 38.2 73 27.0
 Does not apply 20 4.5 91 33.7
 Other 30 6.7 20 7.4
Geographic Region of Institution
 Northeast 109 26.2 78 23.6
 South 105 25.2 109 33.0
 Midwest 116 27.9 78 23.6
 West 86 20.7 65 19.7
# of Human Subject Studies in past 5 years 23.9 ± 164.9 10.8 ± 26.0*
Most Common Role on Study
 Principal Investigator 332 74.9 116 52.3
 Sub-Investigator 99 22.4 58 26.1
 Consultant 3 0.7 37 16.7
 Coordinator 9 2.0 11 5.0
# of months on IRB Panel 38.0 ± 31.7† 85.6 ± 80.5
Worked on a study which paid participants (yes) 429 95.3 186 68.1
* For investigators and IRB chairs who are also investigators.
† For investigators who are also IRB members.

Reasons to Pay, Factors that Influence Payment, and Methods for Determining Payment

Table 3 shows the perceived primary purpose for paying participants. There is significant variability in the importance that individuals place on different reasons and factors for paying participants. Figure 2 shows the percentage of respondents who selected each level of importance for risk, inconvenience, and difficulty recruiting. To determine a consensus as to the importance of these reasons and factors, the responses were dichotomized to “important” (extremely, very, or important) or “not important” (somewhat or not important). Table 4 shows whether each reason or factor was rated as important, not important, or neutral (as many rated it important as rated it not important). Both investigators and chairs rated compensation (e.g., for time, clinic visits, and procedures) as the primary purpose for paying participants. However, the ratings of importance for the reasons and factors showed both consensus and disagreement. For example, on the dichotomized scale, inconvenience was rated by most individuals as important, and the income of the participant was rated as not important by most. For investigators, there was no agreement on whether the number needed to recruit was an important factor in determining payment, and for IRB chairpersons, appreciation as a reason to pay was also neutral. Note that anticipated difficulty recruiting was rated as important by both investigators and IRB chairpersons as a factor determining payment, while incentive was an important reason for paying for investigators but not for IRB chairpersons. The direction of the effect (e.g., more or less payment for more risk) could not be determined from this single question but was evaluated in the case studies.

TABLE 3.

Primary Purpose for Paying Study Participants.

Investigator
IRB Chair
N % N %
Reimbursement for study expenses (e.g. travel, parking) 36 8 44 12.8
Compensation (e.g. time, clinic visits, procedures) 316 69.8 220 64.1
Appreciation 43 9.5 36 10.5
Incentive for enrolling 43 9.5 37 10.8
Other 15 3.3 6 1.8

FIG. 2.

FIG. 2

Variability of opinions regarding the importance of risk, inconvenience and difficulty recruiting on determining payment for a study. The percentage selecting each level of importance for both Investigators (Inv) and IRB Chairs (IRB).

TABLE 4.

Importance of Reasons for Paying Participants and Factors in Determining Payment.

Investigator IRB Chair
Reasons for Paying
 Inconvenience Important Important
 Reimbursement Important Important
 Appreciation Important Neutral
 Incentive Important Not Important
Participant-Related Factors for Determining Payment
 Risk Important Important
 Inconvenience Important Important
 Income Not Important Not Important
 Demographics of Participants Not Important Not Important
Study-Related Factors for Determining Payment
 Budget Important Not Important
 Anticipated Difficulty Recruiting Important Important
 Funding Source Not Important Not Important
 N needed Neutral Not Important

Note: Responses were given as not important, somewhat important, important, very important, or extremely important. Results were dichotomized to Important (important, very important, or extremely important) and Not Important (not important and somewhat important). These dichotomized data were then compared using Pearson's chi-square test to determine if there was a significant tendency to consider the factor “important” or “not important” (two-sided test). Neutral items are those for which an equivalent proportion of respondents considered the item important as considered it not important.

Several payment methods have been suggested in the literature. The Market Model is based on the number of participants needed and how hard it will be to recruit and retain subjects, and allows for incentives and completion bonuses. The Wage Payment Model pays a standard wage for the time commitment of the study but allows augmented payment for particularly uncomfortable procedures. The Reimbursement Model is determined by subjects' expenses and can include payment for lost wages or other expenses incurred; it aims to make the study cost-neutral for the participant. These first three methods were defined by Dickert and Grady (1999). A fourth method, described by Saunders and Sugar (1999), is the Fair Share Model, which determines participant payment as a percentage of the profit made by the investigator.

For this study, participants were given these descriptions of the published payment methods and asked which model they primarily used to determine payment amounts. For IRB chairpersons, the Reimbursement Model (41.6%) was the most frequent. The Market Model was chosen by 20.4% and the Wage Payment Model by 16.7%. Twenty-one percent responded that none of the models fit their method for determining payment. For the investigators, 32.7% chose the Market Model, 26.0% the Wage Payment Model, and 22.8% the Reimbursement Model; 18.6% responded that none of the models fit their method. None of the IRB chairpersons or investigators chose the Fair Share Model.

IRB Review of Payment and Institutional Guidelines

34.6% of IRB chairpersons and 46.4% of investigators who had served on an IRB said that their IRB panel sometimes questions the amount proposed for payment. 10.5% of IRB chairpersons and 12.3% of investigators on IRB panels said that their panel always questions the amount. Both agreed that the most common reason for the IRB to question the payment was that the payment was not clearly described in the protocol (42% IRB chairpersons and 34.1% investigators on IRB panels). Thirty percent of investigators who sat on panels thought that the payment being too high was the most common reason for questioning, while IRB chairpersons less commonly (13%) considered this the main problem.

When asked about institutional written guidelines regarding payment for participation (other than federal regulations), 66.4% of IRB chairpersons and 36% of investigators said that their institution had additional guidelines. Highlighting an educational need, 10.9% of IRB chairpersons and 29.8% of investigators were not sure. Standard payment scales for similar types of research activities are not common. 85.0% of IRB chairpersons said their institution did not have one compared to 58.4% of investigators (28.3% of investigators were not sure). There was agreement that the payment was required to be in the consent form (88% of IRB chairpersons and 92.4% of investigators on a panel). Interestingly, 16.5% of investigators and 28.3% of IRB chairpersons said this information is located in the benefits section of the consent.

Payment Recommendations for Specific Procedures

Respondents were asked to consider a therapeutic trial as a study for participants with a particular disease or condition in which the research involves a possible treatment but may also include a placebo arm, and a non-therapeutic study as one that does not offer treatment for a particular disease or condition that the participant has. The latter can include pathophysiology studies, epidemiology studies, and surveys, and can involve healthy volunteers. Following these descriptions, respondents were asked to state how much, if anything, a participant should be paid for specific types of study activities in both a therapeutic and a non-therapeutic study. The mean ± SD dollar payments are shown in Table 5. The standard deviations are high, perhaps reflecting the common lack of guidelines, but in general, by mean dollar amount, investigators tended to pay more than IRB chairpersons, and recommended payments for therapeutic trials were lower than for non-therapeutic studies.

TABLE 5.

Recommended $ Payments for Different Research Activities (Mean ± SD).

Investigator IRB Chair
Clinic Visit
 Therapeutic 24.2 ± 24.10 25.69 ± 36.79
 Non-Therapeutic 27.31 ± 24.71 25.67 ± 19.30
Blood Sample
 Therapeutic 19.83 ± 56.13 15.48 ± 17.16
 Non-Therapeutic 21.54 ± 56.05 16.56 ± 15.33
Urine Sample
 Therapeutic 11.95 ± 29.54 9.39 ± 11.49
 Non-Therapeutic 13.61 ± 30.00 10.41 ± 11.41
Overnight Stay
 Therapeutic 132.41 ± 149.59 107.11 ± 87.84
 Non-Therapeutic 142.27 ± 145.14 115.84 ± 96.54
1-Hour Questionnaire
 Therapeutic 21.59 ± 14.63 19.10 ± 14.63
 Non-Therapeutic 23.60 ± 16.82 20.53 ± 18.82
X-Ray
 Therapeutic 29.43 ± 38.01 22.13 ± 26.37
 Non-Therapeutic 35.09 ± 48.78 26.44 ± 32.53

Overwhelmingly, investigators thought that it was at least sometimes acceptable to offer a bonus for completion of a study (89.7% with 11.5% indicating it was always acceptable). In contrast, 10.2% reported that it was not acceptable (rarely or never acceptable) to offer a bonus for completion of a study. IRB chairpersons agreed, with 82.1% noting it was at least sometimes acceptable; of those, 7.1% thought it was always acceptable.

Case Scenarios and Modeling

Several hypothetical cases were presented to the participants. Participant-determined considerations for each case were modeled to evaluate which significantly affected payment amount. Table 6 shows the components that were significant in the model for each case. Where significant, the direction of the effect on payment (increase or decrease) is given. For example, in the hypertension trial, as inconvenience increased, the payment increased, and respondents who rated funding source as important (this is an industry-funded study) recommended higher payment than those who did not. Table 7 shows the percentage of individuals who chose each type of payment and the dollar amounts as mean ± SD. The model is presented as analyzed using data from both surveys. Significant findings from the secondary analysis using all investigators and all IRB members are described in the text below for each case.

TABLE 6.

Significant Base Changes in Payment with Each Significant Factor.

Model Component Lupus Registry Substance Abuse Survey and Urine Drug Screen Twin DNA and Survey Cancer Treatment Trial Healthy Volunteer Pharmacokinetic Study Hypertension Placebo Trial
Increase Risk Borderline Decrease p = 0.08 Increase p = 0.0005 Increase p = 0.07 Increase p = 0.03 Increase p = 0.07
Increase Inconvenience Increase p = 0.03 Increase p < 0.0001 Decrease p = 0.02 Increase p = 0.0082 Increase p = 0.08
Increase Difficulty Recruiting Increase p < 0.0001 Increase p < 0.0001 Increase p < 0.0001 Increase p = 0.05
Important Funding Source Increase p = 0.004 Increase p = 0.02
Important Healthy Volunteer Increase p < 0.0001 Increase p = 0.004
Important Patient Income Increase p = 0.0018
Male Respondents Increase p = 0.05

Note: Using a log-linear multivariate model, individual recommended payment was modeled using selections for model components. For each model component, the significant change in payment as it was rated higher (choices: minimal, medium, high) or rated as important (choices: important or not important) is shown as either an increase or a decrease. Blank cells were not significant in the model for that case. For example, for the lupus registry case, an increase in inconvenience, an increase in difficulty recruiting, and rating patient income as important were all associated with an increase in payment.

TABLE 7.

Percentage of Participants Choosing Each Type of Payment and the Mean ± SD Dollar Amounts of Payment for Each Case.

% of respondents
$ Mean ± SD
Lupus Registry Substance Abuse Survey and Urine Drug Screen Twin DNA and Survey Cancer Treatment Trial Healthy Volunteer Pharmacokinetic Study Hypertension Placebo Trial
No Payment 32.5% 3.8% 7.9% 15.5% 1.1% 3.6%
Parking 6.7% 13.6% 12.4% 23.1% 1.9% 14.9%
Small Gift ($15) 23.2% 15.7% 33.3% 3.4% 1.1% 3.9%
Single Payment 27.5% 11.3% 36.9% 6.8% 54.9% 9.6%
$18 ± 10 $80 ± 71 $48 ± 43 $496 ± 1402 $374 ± 365 $158 ± 138
Prorated 10.2% 33.8% 9.5% 51.3% 13.2% 29.8%
Total amount $36 ± 33 $71 ± 44 $65 ± 55 $1685 ± 6219 $1029 ± 1949 $182 ± 121
Prorated with Completion Bonus 21.7% 27.9% 38.3%
Total amount 0 $107 ± 79 0 0 $821 ± 738 $249 ± 335

One case involved an NIH-funded registry of lupus patients. It required a phone follow-up every six months with medical record review and standard-of-care labs drawn as clinically indicated. A majority of individuals chose no payment. The increase in payment with inconvenience was higher for IRB survey respondents. When ratings were compared among respondents who were IRB members only, investigators only, or both, there was no difference in payment for similar chosen levels of risk or inconvenience. However, those who were IRB members only, or both investigators and IRB members, rated inconvenience as minimal more commonly than those who were investigators only.

Another case was an unfunded study of substance abusers and factors influencing continued drug abuse. It required a two-hour survey, including questions on obtaining drugs, employment, depression, financial status, and past criminal behavior, at baseline and three months. A urine drug screen was done at baseline and at three months. The average suggested total payment was a flat rate of $81. The degree of risk had borderline significance (p = 0.074), suggesting decreased payment with increased risk. IRB members who primarily reviewed social/behavioral studies rated the risk level of this study higher than biomedical reviewers did. Respondents who were IRB chairpersons, or both chairpersons and investigators, tended to rate inconvenience as minimal more often than investigators did, but tended to pay more as the rating of inconvenience increased. Consideration of difficulty recruiting did not impact payment.

In a case involving a survey study of twins looking at the influence of genetic factors on athletic ability, the twins were asked to complete a 30-minute survey and provide a cheek swab for DNA. The majority recommended token payment: no payment, parking, or a small gift. Investigators rated difficulty recruiting higher than did IRB members or those who were both.

For a cancer treatment trial sponsored by NIH, patients were randomized to an experimental treatment or standard chemotherapy. They had clinic visits at least every other week for six months, then bimonthly for two years. Clinically relevant laboratory work and x-ray results were obtained. There appeared to be confusion about the calculation of prorated payments in this case. Respondents were asked to set payment for each visit but may not have calculated the total number of visits for the study; for example, one individual chose a prorated payment that totaled $120,000 for the entire study. With this payment omitted, both inconvenience and difficulty recruiting were associated with increased payment.

A three-level factorial design was used for a pharmaceutical company–sponsored pharmacokinetic study of healthy individuals. Three levels of inconvenience were developed, but the case study presented to each respondent described only one of the three. The stated risks were held constant while the inconvenience varied across three levels: 12 hours of telemetry monitoring with 10 blood draws (minimal), 48 hours of monitoring with 20 blood draws (medium), and 96 hours of monitoring with 30 blood draws (high). Most respondents considered the study highly inconvenient no matter which inconvenience level they had been assigned (p < 0.0009). Most individuals recommended payment, although the amounts varied widely. When the respondents' own ratings of inconvenience were used, none of risk, inconvenience, or difficulty recruiting was significant; when the assigned levels were used, risk and difficulty recruiting were associated with increased payment. Those who ranked funding source as an important factor in determining payment chose higher payment. When the model was evaluated by respondent type, IRB members tended to suggest higher payments as risk increased compared with investigators or those who were both. IRB members assigned the medium-inconvenience case recommended the highest payment.

An industry-sponsored, placebo-controlled hypertension trial also used a factorial design, with three levels of risk and of frequency of risk. Frequency of risk did not correlate with the risk level selected by the respondent, but risk selection did follow the levels pre-assigned by the research team. Using the designated risk level in the model, inconvenience, but not risk, was significant in payment determinations: as inconvenience increased, payment increased. Those who thought the funding source was important also recommended higher payments.

Discussion

How paying research participants might constitute undue influence or coercion has stimulated considerable ethical debate. It has even been questioned whether it is possible for payment to be coercive, since payment itself cannot cause harm (Emanuel, 2005; Grady, 2001). These debates have centered on general considerations of whether it is ethically appropriate to pay research participants rather than on how these payments are determined. Little empirical data exist on how investigators determine payment, how IRBs review and approve these payments, and what underlying principles govern those choices. Prior studies at a single United States institution (Ripley et al., 2006; Ripley et al., 2008) showed a spectrum of opinions and attitudes held by both of these stakeholders. This is the first nationwide survey to gather opinions and practices from investigators and IRB chairpersons. The surveys presented here asked direct questions about payment, such as the importance of different factors and reasons for payment, and also presented study elements that respondents rated and then used to determine a suggested payment. The cases were used as a method to move beyond conceptual considerations and general approaches to actual payment practice. Our results show that there is sometimes a seeming contradiction between payment practice and payment considerations.

Inconvenience as an important factor in determining payment is consistent with the stated primary purpose of paying participants: compensation. The case studies show a range of inconvenience ratings within each group (investigators and IRB chairpersons), but overall there were differences between the groups. IRB members rated inconvenience lower than investigators did in the registry and substance abuse studies. Across the case studies, increased payment was seen with increased inconvenience ratings for the lupus registry, cancer treatment trial, and pharmacokinetic trial. For the twin DNA testing and survey case, increased inconvenience was actually associated with decreased payment; however, for this case a majority of individuals chose no payment, parking, or a small gift ($15), which may have skewed the model.

If the payment of participants is intended to compensate for cost to the participant, the assumption would be that it should be financially neutral for the individual unless cost is incorrectly estimated. Cost-neutral payment should not be seen as coercive or undue. If compensation is the primary purpose for paying participants, then calculating the true cost of participation is essential. Federal regulations require informing participants of additional costs to them due to their participation as an additional element of consent (Department of Health and Human Services, 2009). However, consent forms usually do not list costs such as missed work, child care, additional co-pays, phone calls, or Internet use. One argument against specifying and compensating some of these costs is the lack of a standard way to calculate them. For example, the cost of missed work differs between an hourly worker and a salaried professional, and co-pays vary widely with insurance plans. Logistically and ethically, it may be difficult to accept paying different individuals different amounts for participation in the same study (Grady, 2001). Recommendations have been made to compensate for time at the rate of an unskilled laborer, with the understanding that this would overpay or underpay certain individuals. Even so, there are no data on whether or how investigators calculate the cost of participation or how these calculations affect payment decisions. Getz et al. (2008) examined 10,038 unique phase 1–4 protocols conducted between 1999 and 2005 and showed that the number of unique procedures and the frequency of procedures per protocol increased at annual rates of 6.5% and 8.7%, respectively. Yet there is no evidence that payment to participants has increased to match the increased inconvenience and probable increased time requirements.
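The difficulty of standardizing such a cost calculation can be made concrete with a minimal sketch. All figures below are hypothetical illustrations, not values drawn from the study; the point is only that the identical visit produces different out-of-pocket costs for different participants:

```python
# Hypothetical sketch: the out-of-pocket cost of one study visit
# varies with each participant's wage, insurance, and family situation.
# All numbers are illustrative assumptions, not survey data.

def visit_cost(hours_missed_work, hourly_wage, copay, travel, child_care=0.0):
    """Out-of-pocket cost of a single study visit for one participant."""
    return hours_missed_work * hourly_wage + copay + travel + child_care

# The same 3-hour visit costs these two hypothetical participants
# different amounts, which is the crux of the standardization problem.
hourly_worker = visit_cost(hours_missed_work=3, hourly_wage=12.0,
                           copay=30.0, travel=10.0, child_care=25.0)
professional = visit_cost(hours_missed_work=3, hourly_wage=60.0,
                          copay=10.0, travel=10.0)

print(hourly_worker)  # 101.0
print(professional)   # 200.0
```

A truly cost-neutral payment would therefore have to vary by individual, which is exactly the logistical and ethical difficulty Grady (2001) identifies.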

How participants themselves calculate cost has received remarkably little attention. Czarny et al. (2010) asked healthy volunteers about their willingness to participate in four hypothetical studies and the payment they thought participants should receive. One was an investigational drug study requiring one night in the hospital, two clinic visits, and 10 blood draws, with a drug that had a few non-serious side effects in animal studies. The survey respondents recommended a geometric mean payment of $456 (median $500; range $100–$3,000). The other cases in that study also showed wide variation. The investigators tried to estimate the relative weights of medical risk and logistical burden: participants stated that they counted risk in determining payment, yet their monetary choices tracked more closely with apparent logistical burden. Further studies to determine actual cost as calculated by investigators and participants, and whether that cost influences payment by the former and willingness to participate by the latter, are essential to a fact-based discussion of the ethics of payment.

An incentive encourages someone to act, in this case to participate. Incentive was not seen as the primary purpose of payment, although it was considered an important reason for payment by investigators but not by IRB chairpersons. Clearly there is some tension between the perceived primary purpose of payment and what investigators may feel is necessary for recruitment. While not specifically addressed in this study, payment for difficulty recruiting logically incentivizes participation, and the cases echo this, with increasing difficulty recruiting tied to increased payment. Grady (2005) has noted that offering payment to research participants as an effort to enhance recruitment is common. Although investigators and IRB members have not previously endorsed this motive in principle, this study demonstrates that it can be endorsed in practice. Payment of a completion bonus, which was also considered acceptable by both groups, is clearly an incentive to complete the study. Paying as an incentive would allow a cost-neutral study to provide money beyond cost compensation to an individual. Increasing incentives to participate could be considered coercive or undue if not accompanied by sufficient informed consent information for the prospective participant to adequately weigh individual cost and benefit (value). Again, data on how investigators and participants actually calculate individual cost are lacking.

Risk should be minimized by the investigator, reviewed by the IRB, and fully described in the consent. Risk varies by study type; some studies carry no or minimal risk while others carry a higher risk of harm. Concern has been raised that paying as an incentive could override concern about risk and coerce an individual into participating (Grady, 2001; Grady, 2005). Menikoff (2001) has observed that society pays individuals for risk, citing high-risk jobs such as firefighting and reality show participation, and notes that as long as the risk is clear, payment correlated with risk is consistent with societal standards. A study of pharmacy students showed that payment did not overshadow their assessment of risk in deciding to participate, and Cryder et al. (2010) showed that potential participants associated higher payments with increased perceived risk. The potential impact of payment on risk assessment was seen in a study by Casarett, Karlawish, and Asch (2002), in which 75% of individuals thought that paying people to participate in a clinical trial would prevent them from thinking carefully about risk; however, only 19.7% thought it would influence their decision.

In the current study, when asked directly, both IRB chairpersons and investigators noted risk as an important factor in determining payment. The cases show, in application, risk as an important payment factor for several studies. For the substance abuse study, risk was primarily rated low or medium, and payment only tended to decrease as rated risk increased (p = .07). For higher-risk studies like the hypertension and pharmacokinetic studies, increased risk was associated with increased payment. Perhaps surprisingly, given that IRB members are charged with ensuring that risks are minimized and that participants are protected and afforded adequate information, this tendency to pay for risk was stronger for IRB members than for investigators.

The ability of IRB members to accurately assess risk has been questioned, particularly for new or experimental methods (Hirshon et al., 2002; McWilliams et al., 2003; Shah et al., 2004). It may be straightforward for IRBs to determine the risk of a blood draw or an x-ray because these are well defined and their approximate risks have been quantified by studies. The risks of new biomedical procedures or medications may be harder to quantify (London, 2005). For social behavioral research, fewer studies have quantified issues such as emotional, financial, or even community risk. IRBs also judge risk and benefit for average participants, not for individual participants. There has been concern that IRB members overestimate potential risk or choose a higher risk rating even when the potential for harm is low (London, 2005). The current study did not show a difference in risk ratings among IRB members, investigators, or those in both roles. Nor did IRB chairpersons and investigators who primarily review social behavioral protocols rate risk higher, with one notable exception: IRB chairpersons (but not investigators) rated the risk of the substance abuse trial higher than did those who primarily review biomedical studies. Whether this is due to increased knowledge of past harm in similar studies, knowledge of general risk in this population, a better understanding of non-physical risks (such as financial, reputational, and emotional risks), or increased protectiveness of this population is unclear.

Payment has traditionally not been considered a study benefit, and FDA regulations specifically state that payment is not a benefit (Food and Drug Administration, 2009). Interestingly, a relatively large percentage of IRB chairpersons (28.3%) and investigators (16.5%) said that payment information is located in the benefits section of the consent. Treating payment as a benefit contradicts the majority-perceived primary purpose of payment as compensation. Payment issues can arise with any study, including a study on payment. Because of budget constraints and study methodology, individual payment to participants in this study was not possible, whether as reimbursement, compensation for time, or an incentive to participate. As a small token of appreciation, participants were given a choice of a $2 donation to either the American Heart Association or the National Kidney Foundation. Two issues arose. The first came from a potential participant who wanted the $2 mailed to her; she was offered the choice of not completing the survey, or completing it and choosing one of the organizations or no organization. The second was an administrative dilemma. Although the study budget had been reviewed by the university’s sponsored programs office, the NIH review panel, and the NIH grant administrator, when we attempted to send the donated money to the organizations we were informed that federal grant funds could not be donated to a charity. After much discussion, the decision was made to purchase items from the organizations to meet the donation amounts. Healthy-eating cookbooks for children and adults were purchased and donated to an adolescent obesity research program, and renal diet cookbooks were purchased and given to dialysis patients. This compromise met the participants’ expectation that the organizations receive money and, as an added benefit, provided useful information to other individuals.

This study has several limitations. The cases provided a variety of study types and funding sources but could not cover all activities or types of studies. Given time constraints for respondents, it was impossible to provide the entire protocol for each case study, although the cases did describe the types of activities, number of visits, and risks; in actual practice, investigators and IRB panels would have the entire protocol and consent form to review in determining appropriate payment. The modeling used individual responses rating certain elements to determine payment; it is understood that other factors can play a role in determining payment for an actual study. The respondents represented individuals from across the country. The response rate, while in keeping with other survey response rates (Yetter & Capaccioli, 2010), compelled us to increase the number of individuals receiving invitations in order to reach our sample size, and it is not possible to determine whether respondents differed from non-respondents.

Paying research participants continues to be a topic of ethical debate and discussion in the literature and at national meetings. However, empirical data on payment beyond its impact on study participation, particularly for social behavioral tools such as surveys and questionnaires, have been limited, and how payments are determined has not previously been examined. This study shows that agreement on the various factors and reasons for paying participants is not unanimous and that investigators and IRBs consider a variety of factors for any particular study. Modeling payment permits a comprehensive look at the factors determining a specific payment and the tendency of those factors to correlate with higher or lower payment.

The cases show that key factors and payment levels vary with study type. This speaks to the need for payment assessment for each trial and may make standard payment scales difficult to develop or calculate. For example, using $21.50 as the mean payment for a blood sample and four samples during the study, it would be easy to calculate $86 as an appropriate payment. But if other factors were involved, such as increased inconvenience (perhaps multiple or long visits) and a high anticipated difficulty recruiting, how would these be incorporated into a standard payment? These elements would need to be quantified and added to or subtracted from the calculated cost. Our models for this study allow prediction of payment recommendations from individual ratings of risk, inconvenience, difficulty recruiting, and, for some studies, funding source. The models point out the necessity of considering the multiple factors that affect payment setting and show that the impact of each factor differs across study types. These models need to be tested with other respondents and similar types of studies. Further, we need data on how investigators and IRBs arrive at an appropriate payment. In theory, compensation is the primary purpose of payment, but do investigators fully acknowledge and compensate for the true costs to the participant? Until we have those data, this study can provide estimates of acceptable ranges for different procedures from the perspectives of investigators and IRB chairpersons.
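The arithmetic above can be sketched as a simple function. The $21.50 mean blood-sample payment comes from the survey data reported in the text; the multiplicative adjustment weights for inconvenience and difficulty recruiting are invented for illustration only, since how to quantify them is precisely the open question:

```python
# Hypothetical sketch of a per-procedure payment calculation with
# adjustments for inconvenience and anticipated difficulty recruiting.
# The $21.50 base rate is the survey's mean per-blood-draw payment;
# the 50% and 25% premium weights are arbitrary illustrations.

MEAN_BLOOD_SAMPLE = 21.50  # mean recommended payment per blood draw (from survey)

def suggested_payment(n_samples, inconvenience=0.0, difficulty=0.0):
    """Base procedure payment scaled by illustrative factor weights.

    inconvenience, difficulty: ratings on a 0-1 scale (hypothetical).
    """
    base = n_samples * MEAN_BLOOD_SAMPLE
    return base * (1 + 0.5 * inconvenience + 0.25 * difficulty)

print(suggested_payment(4))                                      # 86.0
print(suggested_payment(4, inconvenience=1.0, difficulty=1.0))   # 150.5
```

Even this toy version shows the difficulty: the premiums would need empirical grounding, and our case results suggest they would differ by study type rather than being fixed constants.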

Best Practices

This study shows that individuals consider different factors in determining payment for different studies. These factors should be explored and discussed by all investigators and IRB reviewers who determine payment, or its acceptability, on a study-by-study basis. Elements that should be considered as reasons for payment are risk, inconvenience, and the need for an incentive (e.g., to increase recruitment or as a bonus to aid retention). The funding source should also be considered, as budget limitations do affect the ability to pay participants. Investigators, when proposing a payment, should specify its intent: is it compensation, incentive, or both? Is the compensation adequate? Perhaps, rather than worrying about most payments being too high and potentially coercive, IRBs and investigators should be asking whether the compensation is adequate and whether the participant has complete information about additional costs. This will require further evaluation to determine which activities and procedures should receive compensation and what constitutes fair value.

Research Agenda

This study helps lay a foundation for understanding why and how investigators pay participants and how IRB members evaluate these payments. It is a first step in moving beyond the theoretical ethical debate over the potential harms and benefits of payment. Future studies need to determine the actual impact of payment on individual participants, taking into consideration differences among participant populations, including vulnerability and cultural differences. Further work to elucidate the impact of payment on willingness to participate should be seen in the context of other factors influencing participation. Applying an economics model of willingness to participate, as has been done for willingness to pay for healthcare procedures, would help tease out the impact of payment in the context of the individual’s assessment of the value of the study and the personal cost of participation. The actual cost of participating in different types of studies, or in particular study activities, should be evaluated from the perspectives of both the investigator and the participant.

Although not the primary purpose of this study, differences in interpreting studies with regard to risk, inconvenience, and difficulty recruiting were pervasive. These differences may lead to disputes between IRB reviewers and investigators, and further work to elucidate and resolve them should be conducted. The variation in ratings, in the importance assigned to those ratings, and in payments emphasizes the need to understand how IRBs and investigators approach the calculation of study payment.

Educational Implications

A key finding of this study was that approximately 30% of investigators and 10% of IRB chairpersons were uncertain whether their institutions had payment guidelines beyond federal regulations or standard payments for similar types of studies. These numbers suggest that education regarding national and local attitudes, guidelines, and regulations about payment would be helpful. Further discussion of payment construed as a benefit should also be conducted at institutions that favor this approach.

Acknowledgments

The project described was supported by Grant Number 5R21NR010399–02 from the National Institute of Nursing Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NIH. The authors would like to acknowledge the statistical assistance of Hanan Hammouri in analyzing portions of the survey responses. We would also like to acknowledge the Survey Evaluation and Research Laboratory (SERL) at Virginia Commonwealth University, particularly Andy Hollins and Mark Williams, for hosting the web-based survey and assisting with survey development.

Biographies

Elizabeth Ripley is Professor of Medicine in the division of nephrology at Virginia Commonwealth University (VCU). Her Master’s degree is in clinical research and biostatistics. She is Senior Chairperson for the IRB panels at VCU and is interim program director for the Clinical Research Center at VCU. Her research is in the area of research ethics and scientific integrity. Dr. Ripley was the PI and was responsible for all aspects of this study.

Francis L. Macrina is the Edward Myers Professor and Vice President for Research at Virginia Commonwealth University (VCU). He has served on the VCU IRB, and teaches the Scientific Integrity course at VCU. He has studied the impact of scientific integrity training on F32 grantees. Dr. Macrina assisted with the study design and interpretation.

Monika Markowitz is Director of the Office of Education and Compliance Oversight in the Vice President’s Office for Research at Virginia Commonwealth University (VCU). Her Master’s degrees are in nursing and religious and biomedical ethics and she has a Ph.D. in biomedical ethics. Dr. Markowitz assisted with the study design and interpretation.

Chris Gennings is Professor in the Department of Biostatistics at Virginia Commonwealth University (VCU). She is Director of the Research Incubator within the Center for Clinical and Translational Research at VCU. Dr. Gennings analyzed and modeled the responses from the surveys.

References

  1. Casarett D, Karlawish J, Asch D. Paying hypertension research subjects. Journal of General Internal Medicine. 2002;17(8):650–652. doi: 10.1046/j.1525-1497.2002.11115.x.
  2. Coogan P, Rosenberg L. Impact of a financial incentive on case and control participation in a telephone interview. American Journal of Epidemiology. 2004;160(3):295–298. doi: 10.1093/aje/kwh190.
  3. Cryder C, London A, Volpp K, Loewenstein G. Informative inducement: Study payment as a signal of risk. Social Science Medicine. 2010;70(3):455–464. doi: 10.1016/j.socscimed.2009.10.047.
  4. Czarny MJ, Kass NE, Flexner C, Carson KA, Myers RK, Fuchs EJ. Payment to healthy volunteers in clinical research: The research subject’s perspective. Clinical Pharmacology and Therapeutics. 2010;87(3):286–293. doi: 10.1038/clpt.2009.222.
  5. Decision Support Systems, LP. Researcher’s toolkit: Sample error calculator. 2010. Retrieved July 2, 2010 from http://www.dssresearch.com/toolkit/secalc/error.asp.
  6. Department of Health and Human Services, Office of Human Subjects Research. Protection of human subjects. Code of Federal Regulations Title 45, Part 46. 2009. Retrieved July 2, 2010 from http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm#46.101.
  7. Deren S, Stephens R, Davis WR, Feucht TE, Tortu S. The impact of providing incentives for attendance at AIDS prevention sessions. Public Health Reports Hyattsville. 1994;109(4):548–554.
  8. Dickert N, Grady C. What’s the price of a research subject? Approaches to payment for research participation. New England Journal of Medicine. 1999;341(3):198–203. doi: 10.1056/NEJM199907153410312.
  9. Doody MM, Sigurdson AS, Kampa D, Chimes K, Alexander BH, Ron E, Tarone RE, Linet MS. Randomized trial of financial incentives and delivery methods for improving response to a mailed questionnaire. American Journal of Epidemiology. 2003;157(7):643–651. doi: 10.1093/aje/kwg033.
  10. Emanuel E. Undue inducement: Nonsense on stilts? American Journal of Bioethics. 2005;5(5):9–13. doi: 10.1080/15265160500244959.
  11. Festinger DS, Marlowe DB, Croft JR, Dugosh KL, Mastro NK, Lee PA, DeMatteo DS, Patapis NS. Do research payments precipitate drug use or coerce participation? Drug and Alcohol Dependence. 2005;78(3):275–281. doi: 10.1016/j.drugalcdep.2004.11.011.
  12. Food and Drug Administration. Protection of human subjects. Code of Federal Regulations Title 21, Part 50. 2009. Retrieved July 2, 2010 from http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=50.
  13. Fry C, Dwyer R. For love or money? An exploratory study of why injecting drug users participate in research. Addiction. 2001;96(9):1319–1325. doi: 10.1046/j.1360-0443.2001.969131911.x.
  14. Fry CL, Ritter A, Baldwin S, Bowen KJ, Gardiner P, Holt T, Jenkinson R, Johnston J. Paying research participants: A study of current practices in Australia. Journal of Medical Ethics. 2005;31(9):542–547. doi: 10.1136/jme.2004.009290.
  15. Getz K, Wenger J, Campo R, Seguine E, Kaitin K. Assessing the impact of protocol design changes on clinical trial performance. American Journal of Therapeutics. 2008;15(5):450–457. doi: 10.1097/MJT.0b013e31816b9027.
  16. Gilbart E, Kreiger N. Improvement in cumulative response rates following implementation of a financial incentive. American Journal of Epidemiology. 1998;148(1):97–99. doi: 10.1093/oxfordjournals.aje.a009565.
  17. Grady C. Money for research participation: Does it jeopardize informed consent? American Journal of Bioethics. 2001;1(2):40–44. doi: 10.1162/152651601300169031.
  18. Grady C. Payment of clinical research subjects. Journal of Clinical Investigation. 2005;115(7):1681–1687. doi: 10.1172/JCI25694.
  19. Hirshon JM, Krugman SD, Witting MD, Furuno JP, Limcangco MR, Perisse AR, Rasch EK. Variability in institutional review board assessment of minimal-risk research. Academic Emergency Medicine. 2002;9(12):1417–1420. doi: 10.1111/j.1553-2712.2002.tb01612.x.
  20. Lederer S, editor. Subjected to Science: Human Experimentation in America before the Second World War. Baltimore, MD: Johns Hopkins University Press; 1995.
  21. Little RE, Davis AK. Effectiveness of various methods of contact and reimbursement on response rates of pregnant women to a mail questionnaire. American Journal of Epidemiology. 1984;120(1):161–163. doi: 10.1093/oxfordjournals.aje.a113865.
  22. London A. Undue inducements and reasonable risks: Will the dismal science lead to dismal research ethics? American Journal of Bioethics. 2005;5(5):29–32. doi: 10.1080/15265160500245105.
  23. McWilliams R, Hoover-Fong J, Hamosh A, Beck S, Beaty T, Cutting G. Problematic variation in local institutional review of a multicenter genetic epidemiology study. Journal of the American Medical Association. 2003;290(3):360–366. doi: 10.1001/jama.290.3.360.
  24. Menikoff J. Just compensation: Paying research subjects relative to the risk they bear. American Journal of Bioethics. 2001;1(2):56–58. doi: 10.1162/152651601300169121.
  25. Parkes R, Kreiger N, James B, Johnson KC. Effects on subject response of information brochures and small cash incentives in a mail-based case-control study. Annals of Epidemiology. 2000;10(2):117–124. doi: 10.1016/s1047-2797(99)00047-2.
  26. Permuth-Wey J, Borenstein A. Financial remuneration for clinical and behavioral research participation: Ethical and practical considerations. Annals of Epidemiology. 2009;19(4):280–285. doi: 10.1016/j.annepidem.2009.01.004.
  27. Perneger TV, Etter JF, Rougemont A. Randomized trial of use of a monetary incentive and a reminder card to increase the response rate to a mailed health survey. American Journal of Epidemiology. 1993;138(9):714–722. doi: 10.1093/oxfordjournals.aje.a116909.
  28. RANDOM.ORG. True random number service. 2010. Retrieved July 2, 2010 from http://www.random.org/
  29. Ripley EBD. A review of paying research participants: It’s time to move beyond the ethical debate. Journal of Empirical Research on Human Research Ethics. 2006;1(4):9–20. doi: 10.1525/jer.2006.1.4.9.
  30. Ripley EBD, Macrina F, Markowitz M. Paying clinical research participants: One institution’s research ethics committees’ perspective. Journal of Empirical Research on Human Research Ethics. 2006;1(4):37–44. doi: 10.1525/jer.2006.1.4.37.
  31. Ripley EBD, Macrina FL, Markowitz M, Byrd L. To pay or not to pay: How do we determine participant payment for clinical trials? Journal of Clinical Research Best Practices. 2008;4(3):1–11.
  32. Saunders CA, Sugar AM. What’s the price of a research subject? New England Journal of Medicine. 1999;341(20):1550–1551. doi: 10.1056/NEJM199911113412016.
  33. Shah S, Whittle A, Wilfond B, Gensler G, Wendler D. How do institutional review boards apply the federal risk and benefit standards for pediatric research? Journal of the American Medical Association. 2004;291(4):476–482. doi: 10.1001/jama.291.4.476.
  34. Yetter G, Capaccioli K. Differences in responses to web and paper surveys among school professionals. Behavior Research Methods. 2010;42(1):266–272. doi: 10.3758/BRM.42.1.266.
