Abstract
The purpose of this study was to examine the effects of two incentive conditions (a $10 pre-incentive only vs. a $2 pre-incentive plus a $10 promised incentive) on response rates, sample composition, substantive data, and cost-efficiency in a survey of college student substance use and related behaviors. Participants were 3,000 randomly selected college students invited to participate in a survey on substance use. Registrar data on all invitees were used to compare response rates and respondent characteristics, and web-based data collected from participants were used to compare substantive findings. Participants randomized to the pre-incentive plus promised incentive condition were more likely to complete the survey and less likely to give partial responses. Subgroup differences by sex, class year, and race were evaluated among complete responders; only sex differences were significant. Men were more likely to respond in the pre-incentive plus promised incentive condition than in the pre-incentive only condition. Substantive data did not differ across incentive structures, although the pre-incentive plus promised incentive condition was more cost-efficient. Research on survey methods in college student populations is warranted to support the most scientifically sound and cost-efficient studies possible. Although substantive data did not differ, altering the incentive structure could yield cost savings, better response rates, and more representative samples.
Keywords: incentives, college students, survey response, cost, substance use, web-based, alcohol, drugs
1.1 Introduction
Across survey modes and target populations, researchers are facing declining survey response rates (e.g., Cantor et al., 2008; Curtin, Presser, & Singer, 2005; McCluskey & Topping, 2011; Van Horn, Green, & Martinussen, 2009; Singer & Ye, 2012). As a result, studies require more resources in the form of monetary incentives, administrative time spent tracking non-respondents, and attempts at refusal conversion. Unfortunately, these declining response rates are currently paired with tighter budgets for research from a variety of funding streams, including the federal government and universities (e.g., Atkinson & Stewart, 2011; Collins, 2011). Therefore, it is more important than ever to run cost-efficient research studies with high-quality sampling designs that use resources wisely. The current study was designed to test the differences between two incentive structures for survey responders in a web-based survey of substance use among college students.
Cost-efficiency means balancing costs with data quality, in an effort to maximize quality with available resources. In this study, we operationalize high-quality data based on three criteria. First, a representative sample is an important component of survey quality to ensure that the data obtained reflect unbiased prevalence and trend estimates (Singer & Ye, 2012). Second, a high response rate is important for statistical power, precise estimation, and the credibility of the study (Van Horn et al., 2009). Third, accurate data in the area of substantive interest are obviously essential in order to make correct inferences and draw meaningful conclusions.
As substance use among college students is a major public health concern, examining the most cost-efficient survey designs is an important component of conducting high-quality research. Monetary incentives are an effective tool for increasing survey response across a variety of modes (Van Horn et al., 2009; Singer & Ye, 2012). However, little research regarding the effects of incentives on non-response or sample composition in web-based surveys is available (Singer & Ye, 2012; for exceptions see Couper, 2008; Göritz, 2006). Little is known about the effects of incentives among college students, except that among prospective students lotteries have very limited effectiveness (Porter & Whitcomb, 2003). Previous work on incentives, largely in surveys via postal mail, distinguishes between pre-paid incentives, given to those invited to a survey before they respond, and promised incentives, which are guaranteed to be paid only if the individuals participate in the survey. For example, Cantor et al. (2008) reviewed the literature and concluded that prepaid incentives (as little as $1 to $5) led to higher response rates, while promised incentives in the range of $5 to $25 tended not to increase response rates in telephone surveys. Similarly, Singer and Ye (2012) conclude from their review that monetary, pre-paid incentives increase response rates more than promised incentives, although they acknowledge that little work regarding incentives in internet surveys has been conducted. Therefore, this study was designed to compare the data quality and cost-efficiency of two respondent incentive designs.
In our case, the substantive area of interest was college student substance use. Substance use behaviors are an important area of public health concern (Hingson & White, 2010), such that tracking the frequency of various types of use and predicting which individuals are most at risk of experiencing negative consequences are areas of critical importance. College students are an important population to study, given their high rates of substance use. Alcohol use among college students is associated with negative consequences for individuals and communities (Hingson et al., 2005). Cigarette use has been declining among youth and young adults (Johnston et al., 2012), although the serious health risks resulting from smoking have led to continued research interest in the area (e.g., Dierker et al., 2007). Marijuana use is on the rise among youth and young adults (Johnston et al., 2012) and also can lead to serious health and social consequences (Hall & Babor, 2000; Lee et al., 2010). Finally, nonmedical use of prescription medications among college students is now at its highest level in the past two decades (Johnston et al., 2012; McCabe et al., 2007). Accurate data regarding these behaviors are necessary to support prevention and intervention efforts.
1.1.1 Aims
This study aimed to compare two incentive structures (with otherwise identical sampling and data collection procedures) on three domains: response rates (Aim 1), sample representativeness (Aim 2), and substantive data (Aim 3). In addition, the costs of the two conditions were examined to determine the more cost-efficient strategy for survey data collection in this population (Aim 4). Condition 1 was a $10 prepaid incentive provided to all invited respondents in the initial mailing. Condition 2 was a $2 prepaid incentive provided to all invited respondents in the initial mailing and a $10 promised incentive delivered in a second mailing to those who participated in the survey. We hypothesized that Condition 2 would yield a higher response rate and be more cost-efficient than Condition 1. Based on prior research (e.g., Singer & Ye, 2012), we expected that a $2 prepaid incentive would be enough to get students’ attention and establish the survey’s credibility, since students already had an ongoing relationship with the organization conducting the survey. Because even non-respondents received the prepaid incentive in Condition 1, we hypothesized that the cost per respondent would be lower in Condition 2. Finally, because respondents in Condition 2 received a total of $12 rather than $10, we hypothesized that the response rate would be higher in that condition (Church, 1993; Singer et al., 1999).
1.2 Material and Methods
1.2.1 Participants
Participants were part of the Student Life Survey (SLS; e.g., Boyd, McCabe, & d’Arcy, 2003a, 2003b; McCabe, Teter, & Boyd, 2005, 2006; McCabe, 2008), an ongoing biennial survey of a random sample of undergraduate students at a large university in the Midwest. As in other surveys across modes and populations (Cantor et al., 2008; Curtin, Presser, & Singer, 2005; McCluskey & Topping, 2011; Van Horn, Green, & Martinussen, 2009; Singer & Ye, 2012), response rates have declined over time, from 68% in 1999, when the only incentive offered was entry into a drawing for cash and prizes (McCabe, 2002), to 54% in 2009 with a $10 pre-incentive and eligibility for a drawing for cash and prizes. As a result, an experimental manipulation was planned for 2011.
1.2.2 Procedures
Contact information for a random sample of 3,000 students was drawn from the Registrar’s Office. Selected students received a mailed pre-notification letter inviting them to participate and informing them that they would receive an email containing a link to the web-based survey. Students were randomly assigned to condition: with the letter, 2,000 students received a $10 incentive (Condition 1) and 1,000 students received $2 plus a promised $10 incentive for completion (Condition 2). However, we note that it is possible that students saw the invitation letters of other students and compared the incentive structures. Up to four reminder emails were sent to non-responders.
1.2.3 Measures
Past 30-day frequency of alcohol use was measured with the question, “On how many occasions (if any) have you had alcohol to drink (more than just a few sips) during the past 30 days?” Past 30-day frequency of marijuana use was assessed by asking, “On how many occasions in the past 30 days have you used marijuana or hashish (hash)? Do not include drugs used under a doctor’s prescription.” Past 30-day frequencies of nonmedical use of prescription stimulant and pain medication were measured with the following item: “Sometimes people use prescription drugs that were meant for other people, even when their own doctor has not prescribed it for them. On how many occasions in the past 30 days have you used the following types of drugs, not prescribed to you? …Stimulant medication (e.g., Ritalin®, Dexedrine®, Adderall®, Concerta®, methylphenidate); Pain medication (i.e., opioids such as Vicodin®, OxyContin®, Tylenol® 3 with codeine, Percocet®, Darvocet®, morphine, hydrocodone, oxycodone).” The response options for all four measures were 1=none, 2=1–2 occasions, 3=3–5 occasions, 4=6–9 occasions, 5=10–19 occasions, 6=20–39 occasions, and 7=40+ occasions. A “rather not say” option was also offered for all substances; these responses were coded as missing. Past 30-day frequency of cigarette use was assessed by asking, “How many cigarettes have you smoked in the past 30 days?” Response options were 1=none, 2=less than one cigarette per day, 3=1–5 cigarettes per day, 4=about ½ pack per day, 5=about 1 pack per day, 6=about 1½ packs per day, 7=2 or more packs per day. Substance use measures were largely based on measures from Monitoring the Future (Johnston et al., 2012).
1.2.4 Plan of Analysis
To address the first study aim, examining response rates from the two conditions, t-tests were used to compare complete and partial response rates in the overall samples and for each subgroup (i.e., by sex, class year, and race). In addition, a logistic regression analysis was used to predict survey response based on sex, class year, race, and condition, as well as interactions of condition by each of the other variables. To address the second study aim, examining sample representativeness, chi-square tests were used to assess whether, among complete responders, there were differences by sex, class year, and race. To address the third study aim, assessing differences in substantive findings, t-tests were used to compare frequencies of alcohol, cigarette, marijuana, nonmedical stimulant, and nonmedical pain medication use reported by condition. Finally, the survey designs were compared based on total incentive and mailing costs.
1.3 Results
1.3.1 Response Rates
The first study aim was to examine the response rates in Conditions 1 and 2, shown in Table 1. Complete responders were participants who took the survey through to the end and hit “submit” (although they were allowed to skip questions along the way). Partial responders were participants who logged in and began the survey but never reached the final page to click “submit.” Differences between conditions were examined with t-tests for the overall sample and for each subgroup (i.e., by sex, class year, and race). Based on the total sample of responders in each condition, Condition 2 had a higher complete response rate than Condition 1, 47.5% vs. 42.3% (t(2998)=−2.71, p<.01), and a lower partial response rate, 2.3% vs. 3.7% (t(2998)=2.04, p<.05). With a couple of exceptions, results for subgroups were in the same direction, although some comparisons may not have reached statistical significance due to small subgroup sample sizes (see Table 1).
Table 1.
Response Rates by Condition and Demographic Subgroups.
Overall | Condition 1 | Condition 2 | ||||
---|---|---|---|---|---|---|
N | % | N | % | N | % | |
Total | ||||||
Invited | 3000 | 2000 | 1000 | |||
Complete | 1321 | 44.0 | 846 | 42.3 | 475 | 47.5** |
Partial | 97 | 3.0 | 74 | 3.7 | 23 | 2.3* |
Mean % Complete (SD) | 95.2 | (19.2) | 94.23 | (21.0) | 97.03 | (15.2) |
Women | ||||||
Invited | 1495 | 1019 | 476 | |||
Complete | 745 | 49.8 | 497 | 48.8 | 248 | 52.1 |
Partial | 54 | 3.6 | 41 | 4.0 | 13 | 2.7 |
Mean % Complete (SD) | 95.4 | (18.6) | 94.5 | (20.4) | 97.2 | (14.2) |
Men | ||||||
Invited | 1505 | 981 | 524 | |||
Complete | 576 | 38.3 | 349 | 35.6 | 227 | 43.3** |
Partial | 43 | 2.9 | 33 | 3.4 | 10 | 1.9+ |
Mean % Complete (SD) | 95.0 | (19.9) | 93.8 | (21.8) | 96.8 | (16.2) |
Freshmen | ||||||
Invited | 462 | 320 | 142 | |||
Complete | 224 | 48.5 | 140 | 43.8 | 84 | 59.2** |
Partial | 19 | 4.1 | 15 | 4.7 | 4 | 2.8 |
Mean % Complete (SD) | 94.5 | (20.2) | 92.9 | (23.0) | 97.3 | (13.8) |
Sophomores | ||||||
Invited | 660 | 456 | 204 | |||
Complete | 296 | 44.8 | 196 | 43.0 | 100 | 49.0 |
Partial | 18 | 2.7 | 12 | 2.6 | 6 | 2.9 |
Mean % Complete (SD) | 96.1 | (17.2) | 95.9 | (17.6) | 96.4 | (16.6) |
Juniors | ||||||
Invited | 696 | 469 | 227 | |||
Complete | 303 | 43.5 | 192 | 40.9 | 111 | 48.9* |
Partial | 31 | 4.5 | 25 | 5.3 | 6 | 2.6+ |
Mean % Complete (SD) | 93.9 | (21.2) | 92.4 | (23.1) | 96.6 | (16.8) |
Seniors | ||||||
Invited | 1182 | 755 | 427 | |||
Complete | 498 | 42.1 | 318 | 42.1 | 180 | 42.2 |
Partial | 29 | 2.5 | 22 | 2.9 | 7 | 1.6 |
Mean % Complete (SD) | 95.9 | (18.5) | 95.0 | (20.5) | 97.5 | (14.0) |
Asian | ||||||
Invited | 434 | 295 | 139 | |||
Complete | 208 | 47.9 | 132 | 44.7 | 76 | 54.7+ |
Partial | 16 | 3.7 | 13 | 4.4 | 3 | 2.2 |
Mean % Complete (SD) | 94.9 | (19.8) | 93.2 | (22.9) | 98.0 | (11.9) |
Black | ||||||
Invited | 128 | 82 | 46 | |||
Complete | 40 | 31.3 | 23 | 28.0 | 17 | 37.0 |
Partial | 6 | 4.7 | 5 | 6.1 | 1 | 2.2 |
Mean % Complete (SD) | 91.0 | (24.7) | 88.5 | (26.7) | 95.0 | (21.2) |
Hispanic | ||||||
Invited | 110 | 72 | 38 | |||
Complete | 48 | 43.6 | 32 | 44.4 | 16 | 42.1 |
Partial | 6 | 5.5 | 4 | 5.6 | 2 | 5.3 |
Mean % Complete (SD) | 95.7 | (16.5) | 94.5 | (19.5) | 98.2 | (7.1) |
White | ||||||
Invited | 2020 | 1338 | 682 | |||
Complete | 909 | 45.0 | 579 | 43.3 | 330 | 48.4* |
Partial | 58 | 2.9 | 43 | 3.2 | 15 | 2.2 |
Mean % Complete (SD) | 95.8 | (18.1) | 95.0 | (19.5) | 97.1 | (19.2) |
Multi-Racial | ||||||
Invited | 120 | 83 | 37 | |||
Complete | 53 | 44.2 | 34 | 44.6 | 16 | 43.2 |
Partial | 5 | 4.2 | 4 | 4.8 | 1 | 2.7 |
Mean % Complete (SD) | 94.0 | (22.1) | 93.7 | (22.1) | 94.5 | (22.8) |
Other/Not Indicated | ||||||
Invited | 188 | 130 | 58 | |||
Complete | 63 | 33.5 | 43 | 33.1 | 20 | 34.5 |
Partial | 6 | 3.2 | 5 | 3.8 | 1 | 1.7 |
Mean % Complete (SD) | 92.0 | (26.0) | 90.4 | (28.5) | 95.8 | (19.2) |
+p<.10,
*p<.05,
**p<.01,
***p<.001.
p-values indicate whether there were t-test differences by condition.
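The overall complete-response comparison can be reproduced from the counts reported in Table 1. The sketch below is an illustrative reconstruction, assuming the t-tests were run on 0/1 completion indicators for all invitees (as the reported df of 2998 implies); it is not the study's actual analysis code.

```python
# Reconstruct the overall complete-response t-test from Table 1 counts:
# Condition 1: 846 of 2,000 invitees complete (42.3%);
# Condition 2: 475 of 1,000 invitees complete (47.5%).
import numpy as np
from scipy.stats import ttest_ind

cond1 = np.array([1] * 846 + [0] * (2000 - 846))  # 0/1 completion indicators
cond2 = np.array([1] * 475 + [0] * (1000 - 475))

# Pooled-variance two-sample t-test, df = 2000 + 1000 - 2 = 2998.
t, p = ttest_ind(cond1, cond2)
# t is approximately -2.71 with p < .01, matching the reported t(2998) = -2.71.
```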
A logistic regression analysis was used to predict response to the survey, with sex, class year, race, and condition as dummy-variable predictors (Table 2). Men had lower odds of responding. There were no overall response differences by class year. Students identifying as Black or as Other/Not Indicated were less likely to respond to the survey overall. Students in Condition 2 had significantly greater odds of responding to the survey.
Table 2.
Results from Multiple Logistic Regression Analysis Predicting Response.
Complete Response | |
---|---|
OR [95% CI] | |
Male Sex | 0.63 [0.54, 0.72]*** |
Class Year | |
Freshman | 1.23 [0.99, 1.54] |
Sophomore | 1.04 [0.94, 1.15] |
Junior | 1.02 [0.84, 1.24] |
Race | |
Asian | 1.14 [0.93, 1.41] |
Black | 0.53 [0.36, 0.78]** |
Hispanic | 0.91 [0.61, 1.34] |
Multi-Racial | 0.95 [0.65, 1.38] |
Other/Not Indicated | 0.67 [0.49, 0.93]* |
Condition 2 | 1.27 [1.09, 1.48]** |
*p<.05,
**p<.01,
***p<.001.
N = 3000. Reference groups were female sex, senior class year, and white race.
Interactions of condition by subgroup (i.e., sex, class year, and race) were also examined. The only significant difference to emerge was that freshmen students were especially likely to respond to the survey if they were assigned to Condition 2. None of the condition by sex or condition by race interactions was significant.
1.3.2 Sample Representativeness
The second study aim was to examine the sample representativeness of the two conditions. Sample characteristics are shown for the target sample (all invited), all respondents, Condition 1 respondents, and Condition 2 respondents in Table 3. Chi-square tests were used to determine whether, among complete responders, there were differences by condition on sex, class year, and race. Significant differences emerged only for sex. There was a greater percentage of men in Condition 2 (47.8%) than in Condition 1 (41.3%), χ2(1, 1321) = 5.29, p<.05, which more closely approximated the sex composition of the target sample (50.2%). There were no significant differences between the conditions on class year or on race.
Table 3.
Sample Characteristics for Target Sample, All Respondents, Condition 1, and Condition 2.
Target Sample | All | Condition 1 | Condition 2 | |
---|---|---|---|---|
Respondents | Respondents | Respondents | ||
% | % | % | % | |
Sex | ||||
Men | 50.2 | 43.6 | 41.3 | 47.8 |
Women | 49.8 | 56.4 | 58.7 | 52.2 |
Class Year | ||||
Freshmen | 15.4 | 17.0 | 16.5 | 17.7 |
Sophomore | 22.0 | 22.4 | 23.2 | 21.1 |
Junior | 23.2 | 22.9 | 22.7 | 23.4 |
Senior | 39.4 | 37.7 | 37.6 | 37.9 |
Race | ||||
Asian | 14.5 | 15.7 | 15.6 | 16.0 |
Black | 4.3 | 3.0 | 2.7 | 3.6 |
Hispanic | 3.7 | 3.6 | 3.8 | 3.4 |
White | 67.3 | 68.8 | 68.4 | 69.5 |
Multi-Racial | 4.0 | 4.0 | 4.4 | 3.4 |
Other/Not Indicated | 6.3 | 4.8 | 5.1 | 4.2 |
N | 3000 | 1321 | 846 | 475 |
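The sex-composition chi-square reported above can be checked directly from the complete-responder counts in Table 1 (Condition 1: 349 men, 497 women; Condition 2: 227 men, 248 women). The sketch below assumes the test was run without a continuity correction, which reproduces the reported statistic.

```python
# Chi-square test of sex by condition among the 1,321 complete responders.
from scipy.stats import chi2_contingency

counts = [[349, 497],  # Condition 1: men, women
          [227, 248]]  # Condition 2: men, women
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
# chi2 is approximately 5.29 with dof = 1 and p < .05,
# matching the reported chi-square(1, 1321) = 5.29.
```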
1.3.3 Substantive Findings
The third aim of the study was to examine whether there were differences in the substantive findings based on condition. In this case, the main substantive findings of interest were the frequencies of use of various types of substances, including alcohol, cigarettes, marijuana, nonmedical use of prescription stimulants, and nonmedical use of pain medications. Means and standard deviations for all five types of substance use are shown in Table 4. Based on t-tests, there were no significant differences by condition for any of the substance use estimates.
Table 4.
Substance Use Estimates in the Past 30 Days by Condition.
Overall | Condition 1 | Condition 2 | |
---|---|---|---|
M (SD) | M (SD) | M (SD) | |
Alcohol frequency | 2.65 (1.42) | 2.64 (1.43) | 2.66 (1.40) |
Cigarette frequency | 1.13 (0.49) | 1.14 (0.51) | 1.13 (0.46) |
Marijuana frequency | 1.42 (1.12) | 1.41 (1.13) | 1.42 (1.12) |
Stimulant medication (nonmedical) frequency | 1.05 (0.33) | 1.05 (0.35) | 1.05 (0.29) |
Pain medication (nonmedical) frequency | 1.02 (0.21) | 1.02 (0.19) | 1.03 (0.23) |
M (SD) = mean (standard deviation) values on scales of 1=none to 7=40+ occasions or 7=2+ packs per day (cigarettes only). There were no significant differences by condition for any of the substance use estimates, based on t-tests.
1.3.4 Cost per Condition
The final aim was to examine the cost-efficiency of the two approaches. Conditions 1 and 2 differed in the incentive structure and therefore in the total cost of the survey administration. Condition 1 was a $10 prepaid incentive provided to all invited respondents in the initial mailing. The cost per respondent in Condition 1 was $27.78. Condition 2 involved a $2 prepaid incentive provided to all invited respondents in the initial mailing and a $10 promised incentive delivered in a second mailing to those who participated in the survey. Condition 2’s cost per respondent was $21.23.
The first mailing was less expensive because it was sent to all invited students and therefore involved less tracking. The second mailing was more expensive because respondents had to be identified as responders; therefore, more administrative work was required. Condition 2 yielded a savings of $6.55 per respondent.
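The incentive portion of this arithmetic can be laid out from figures given in the text. The sketch below computes incentive dollars only; the reported per-respondent figures ($27.78 and $21.23) additionally include mailing and tracking costs that are not itemized here, and it assumes the $10 promised incentive went to complete responders.

```python
# Incentive-only cost arithmetic for the two conditions (mailing and
# administrative costs, included in the paper's totals, are excluded here).
complete_1, complete_2 = 846, 475            # complete responders per condition

# Condition 1: $10 prepaid to all 2,000 invitees.
incentives_1 = 2000 * 10                     # $20,000
# Condition 2: $2 prepaid to all 1,000 invitees, plus $10 promised,
# assumed paid to each complete responder.
incentives_2 = 1000 * 2 + complete_2 * 10    # $6,750

per_respondent_1 = incentives_1 / complete_1  # ~$23.64 in incentives alone
per_respondent_2 = incentives_2 / complete_2  # ~$14.21 in incentives alone
```

The gap between these incentive-only figures and the reported totals reflects the mailing and tracking costs described above.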
1.4 Discussion
This study compared the response rates, samples, substantive findings, and costs resulting from two incentive conditions in the same study. These comparisons indicated that Condition 2, which involved a $2 prepaid incentive and a $10 incentive promised upon completion, was preferable to Condition 1, which involved only a $10 prepaid incentive. Participants in Condition 2 had a higher response rate: they were more likely to give complete responses and less likely to give partial responses. The proportion of men in Condition 2 was significantly higher than in Condition 1 and therefore closer to that of the target population. There were no substantive differences by condition, suggesting that the differential incentives did not affect the responses participants gave. Finally, Condition 2 was also less expensive than Condition 1. In this study, offering a small prepaid incentive combined with a promised incentive was clearly a more cost-effective design than offering only a prepaid incentive. However, total (complete plus partial) response rates were 49.8% in Condition 2 and 46.0% in Condition 1, both well below earlier rates (68% in 1999 with only a drawing for cash and prizes [McCabe, 2002] and 54% in 2009 with a $10 pre-incentive and a drawing for cash and prizes). This historical trend is troubling and may become problematic for research relying on survey reports of substance use behaviors, although in this study frequency estimates did not vary by incentive condition.
There are several limitations of the current study. First, the role of a prepaid incentive is largely to establish the legitimacy of the study to foster trust and engagement with the organization conducting the survey, but these functions vary across college and non-college populations, where potential participants have differing levels of connection to the sponsor. Second, there is less heterogeneity in a college student sample with respect to age and education, whereas such differences are important factors in surveys of general populations. Third, these findings are based on web-based surveys; similar studies should be done using face-to-face or other survey modes. Fourth, not all survey studies (especially those on a smaller scale) may have budgets available to offer cash incentives. Therefore, additional means of increasing response rates (e.g., course credit) should be explored and empirically examined. Finally, the administrative costs for staffing time vary and were not included in the survey costs presented above.
In this study, response rates in both experimental conditions were low, less than 50%. This reflects the historical decline in response rates, in particular among college students. Further techniques to reduce non-response, including additional reminder emails, text messages, and phone calls, should be considered, and these may affect the relative advantage of the incentive structures. Future studies could examine how promised incentives compare to using study funds for targeted refusal conversion (Brick et al., 2005), to explore whether further cost savings and greater representativeness could be achieved by focusing incentive resources primarily on those who would otherwise not respond and may require incentives for participation (Curtin et al., 2005). Results of this study suggest that the combination of pre-paid and promised incentives is worth considering, although additional survey research is needed to stem the tide of decreasing response rates in web-based substance use studies.
Research Highlights.
Participants randomized to the pre-incentive plus promised incentive condition were more likely to complete the survey and less likely to give partial responses.
Substantive data did not differ across incentive structure, although the pre-incentive plus promised incentive condition was more cost-efficient and yielded more representative samples.
Survey research on substance-using populations is warranted to support the most scientifically sound and cost-efficient studies possible.
Acknowledgments
This study was funded by the University of Michigan for the Student Life Survey (SLS) and University of Michigan’s Survey Research Center with a grant to M. Patrick. The development of this manuscript was supported by research grants R03AA018735, R01DA024678, and R01DA031160 from the National Institutes of Health. The National Institutes of Health had no role in the study design, collection, analysis or interpretation of the data, writing of the manuscript, or the decision to submit the paper for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
- Atkinson R, Stewart LA. University research funding: The United States is behind and falling. The Information Technology & Innovation Foundation; 2011. Available at: http://www.itif.org/files/2011-university-research-funding.pdf. [Google Scholar]
- Boyd CJ, McCabe SE, d’Arcy H. A modified version of the CAGE as an indicator of alcohol abuse and its consequences among undergraduate drinkers. Substance Abuse. 2003;24(4):221–232. doi: 10.1023/a:1026059913654. [DOI] [PubMed] [Google Scholar]
- Boyd CJ, McCabe SE, d’Arcy H. Ecstasy use among college undergraduates: gender, race and sexual identity. Journal of Substance Abuse Treatment. 2003;24(3):209–215. doi: 10.1016/s0740-5472(03)00025-4. [DOI] [PubMed] [Google Scholar]
- Brick JM, Montaquila J, Hagedorn MC, Roth SB, Chapman C. Implications for RDD design from an incentive experiment. Journal of Official Statistics. 2005;21:571–589. [Google Scholar]
- Cantor D, O’Hare B, O’Connor K. The use of monetary incentives to reduce non-response in random digit dial telephone surveys. In: Lepkowski JM, Tucker C, Brick JM, de Leeuw E, Japec L, et al., editors. Advances in Telephone Survey Methodology. New York: Wiley; 2008. [Google Scholar]
- Church AH. Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly. 1993;57:62–79. [Google Scholar]
- Collins FM. Witness appearing before the Senate Subcommittee on Labor – HHS – Education Appropriations. Department of Health and Human Services, National Institutes of Health; 2011. Available at: http://www.nih.gov/about/director/budgetrequest/fy2012budgetrequest.pdf. [Google Scholar]
- Couper MP. Designing effective web surveys. New York: Cambridge University Press; 2008. [Google Scholar]
- Curtin R, Presser S, Singer E. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly. 2005;69:87–98. [Google Scholar]
- Dierker LC, Donny E, Tiffany S, Colby SM, Perrine N, Clayton RR Tobacco Etiology Research Network (TERN) The association between cigarette smoking and DSM-IV nicotine dependence among first year college students. Drug and Alcohol Dependence. 2007;86:106–114. doi: 10.1016/j.drugalcdep.2006.05.025. [DOI] [PubMed] [Google Scholar]
- Göritz AS. Incentives in web studies: Methodological issues and a review. International Journal of Market Research. 2006;46:327–345. [Google Scholar]
- Hall W, Babor TF. Cannabis use and public health: Assessing the burden. Addiction. 2000;95:485–490. doi: 10.1046/j.1360-0443.2000.9544851.x. [DOI] [PubMed] [Google Scholar]
- Hingson R, Heeren T, Winter M, Wechsler H. Magnitude of alcohol-related mortality and morbidity among U.S. college students ages 18–24: Changes from 1998–2001. Annual Review of Public Health. 2005;26:259–279. doi: 10.1146/annurev.publhealth.26.021304.144652. [DOI] [PubMed] [Google Scholar]
- Hingson RW, White AM. Magnitude and prevention of college alcohol and drug misuse: U.S. college students aged 18–24. In: Kay J, Schwartz V, editors. Mental Health Care in the College Community. New York: Wiley; 2010. [Google Scholar]
- Johnston LD, O’Malley PM, Bachman JG, Schulenberg JE. College students and adults ages 19–50. II. Ann Arbor: Institute for Social Research, The University of Michigan; 2012. Monitoring the Future national survey results on drug use, 1975–2010. [Google Scholar]
- Jones S. The Internet Goes to College: How students are living in the future with today’s technology. 2002 Available at: http://www.pewinternet.org/~/media/Files/Reports/2002/PIP_College_Report.pdf.pdf.
- Lee CM, Neighbors C, Kilmer JR, Larimer ME. A Brief, Web-based Personalized Feedback Selective Intervention for College Student Marijuana Use: A Randomized Clinical Trial. Psychology of Addictive Behaviors. 2010;24:265–273. doi: 10.1037/a0018859. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCabe SE, Teter CJ, Boyd CJ. Illicit use of prescription pain medication among college students. Drug and Alcohol Dependence. 2005;77:37–47. doi: 10.1016/j.drugalcdep.2004.07.005. [DOI] [PubMed] [Google Scholar]
- McCabe SE, Teter CJ, Boyd CJ. Medical use, illicit use, and diversion of abusable prescription drugs. Journal of American College Health. 2006;54:269–278. doi: 10.3200/JACH.54.5.269-278. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCabe SE, West BT, Wechsler H. Trends and college-level characteristics associated with the nonmedical use of prescription drugs among U.S. college students from 1993 to 2001. Addiction. 2007;102:455–465. doi: 10.1111/j.1360-0443.2006.01733.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCabe SE. Misperceptions of non-medical prescription drug use: A web survey of college students. Addictive Behaviors. 2008;33:713–724. doi: 10.1016/j.addbeh.2007.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCabe SE. Gender differences in collegiate risk factors for heavy episodic drinking. Journal of Studies on Alcohol. 2002;63:49–56. [PubMed] [Google Scholar]
- McCluskey S, Topping A. Increasing response rates to lifestyle surveys: a review of methodology and ‘good practice’. Perspectives in Public Health. 2011 Mar;131(2):89–94. doi: 10.1177/1757913910389423. [DOI] [PubMed] [Google Scholar]
- Porter SR, Whitcomb ME. The impact of lottery incentives on student survey response rates. Research in Higher Education. 2003;44:389–407. [Google Scholar]
- Singer E, Ye C. The use and effects of incentives in surveys. In: Massey DS, Tourangeau R, editors. The Future of Surveys, Special Issue of the Annals of the American Academy of Political and Social Science. 2012. [Google Scholar]
- Singer E, Gebler R, Raghunathan T, Van Hoewyk J, McGonagle K. The effect of incentives in interviewer-mediated surveys. Journal of Official Statistics. 1999;15:217–230. [Google Scholar]
- Van Horn PS, Green KE, Martinussen M. Survey response rates and survey administration in counseling and clinical psychology: A meta-analysis. Educational and Psychological Measurement. 2009;69:389–403. [Google Scholar]