Field Methods. Author manuscript; available in PMC 2018 Jan 31.
Published in final edited form as: Field Methods. 2016 Oct 17;29(3):221–237. doi: 10.1177/1525822X16671701

The Effects of a Delayed Incentive on Response Rates, Response Mode, Data Quality, and Sample Bias in a Nationally Representative Mixed Mode Study

Katherine A McGonagle 1, Vicki A Freedman 1
PMCID: PMC5791756  NIHMSID: NIHMS915812  PMID: 29398975

Abstract

This article describes the results of an experiment designed to examine the impact of the use and amount of delayed unconditional incentives in a mixed mode (push to web) supplement on response rates, response mode, data quality, and sample bias. The supplement was administered to individuals who participate in the U.S. Panel Study of Income Dynamics, the longest running national household panel in the world. After 10 weeks of data collection, individuals who had not yet completed the interview were sent a final survey request and randomly assigned to one of three treatment conditions: no incentive, US$5, and US$10. The impact of the incentives on response rates and mode, effects on data quality, and sample bias are described. The implications for the use of incentives in mixed mode surveys and directions for future research are discussed.

Introduction

Monetary incentives are commonly used in household surveys, both as a token of appreciation for participants’ time and as a means of encouraging participation. Often, respondents are offered payment on completion of their interview; in other cases, they are provided with an unconditional incentive when invited to participate. Much of what is known about the benefits of providing incentives on response comes from single-mode surveys, including those administered by interviewers on the telephone and in person (Cantor et al. 2008; Laurie and Lynn 2009; Singer et al. 1999) and self-administered mail surveys (Church 1993; Edwards et al. 2002). In these modes, small unconditional incentives provided with the survey request appear to be more effective than promises to pay upon survey completion (e.g., Adua and Sharp 2010; Dillman et al. 2009), and there is generally a positive association between incentive amount and response rate (Fumagalli et al. 2013; Rodgers 2002). Moreover, several studies find differential responsivity to monetary incentives by sociodemographic characteristics of sample members, including socioeconomic status (McGonagle et al. 2013; Ryu et al. 2006) and education (Petrolia and Bhattacharjee 2009), raising the possibility that the use of incentives may alter the mix of characteristics in the responding sample.

Although interest is growing in the use of the Internet to conduct household surveys, both as a single mode and as an option in mixed mode designs, evidence on the optimal design for the use of incentives in such surveys continues to be limited. A meta-analysis of web surveys found positive effects of prepaid incentives of various kinds on response rates (Göritz 2006), and several recent studies show that a prepaid cash incentive sent with the study invitation increased response rates in a community sample (Messer and Dillman 2011) and in college student samples (Millar and Dillman 2011; Parsons and Manierre 2014; Patrick et al. 2013). Most studies of prepaid incentives in web surveys have provided the incentives at the start of a study, typically with the study invitation. Consequently, questions remain about the most effective way to design incentive strategies for web surveys, particularly those mixed with another mode.

Mixed mode studies typically offer an alternative (often paper) to those who are not regular web users or opt not to answer online. In such studies, using a prepaid incentive with the invitation, when followed by a delayed paper questionnaire, may help “push” the respondent to answer by web (Messer 2012; Messer and Dillman 2011; Millar and Dillman 2011). Web is often the preferred mode over mail in such studies because it is less costly (no postage or data entry), responses may be obtained more quickly, and data quality may be higher since range checks and skips can be programmed. Use of prepaid incentives closer to the end of fieldwork may also be an efficient tool for increasing cooperation by targeting individuals who were initially invited to respond by web but have not yet responded after multiple requests.

The impact of delayed, prepaid incentives on data quality for individuals who are induced to respond late in the field period is unknown. The majority of studies examining the influence of prepaid incentives on data quality are based on nonweb modes and yield mixed findings (see Singer and Ye 2013). Some studies find no effects on data quality (e.g., Cantor et al. 2008); others show improvements (e.g., Goldenberg et al. 2009; Medway and Tourangeau 2015); and still others show decrements (e.g., Jäckle and Lynn 2008). How delayed prepaid incentives influence response rates, mode of response, and response bias in mixed mode studies also remains understudied (Singer and Ye 2013).

This article describes the results of an experiment designed to examine the impact of the use and amount of delayed unconditional incentives in a mixed mode (push to web) supplement on response rates, response mode, data quality, and response bias. The supplement was administered to household heads and spouse/partners who participate in the U.S. Panel Study of Income Dynamics (PSID), a long-running national household panel study.

Methods

Sample

The sample of individuals included in this experiment was drawn from families that participated in the 2013 wave of the PSID. The PSID is a longitudinal study of a nationally representative sample of U.S. families that began in 1968. Families in the PSID were interviewed annually from 1968 to 1997, and have been interviewed biennially since 1997. The main interview lasts about 75 minutes on average and collects a variety of data on economic, health, and social behaviors using telephone as the primary mode (see McGonagle et al. 2012, for more information).

Initial Invitation

Individuals and, if partnered, their spouses/partners who completed the 2013 wave of the PSID were invited by mail to complete a 20-minute supplemental study recalling various childhood experiences. Approximately 13,000 individuals aged 19 years and older were invited to participate. Individuals were initially assigned to one of two conditions to complete the survey: web or choice. Individuals initially assigned to web (73%) reported in 2013 that they had connected to the Internet through a computer or laptop at home at some time in the past year. Remaining individuals (27%) were assigned to a condition called choice.

For both web and choice groups, the study invitation included the web address of the survey and provided log-in credentials that were unique and randomly generated. The invitation letter sent to the choice group also stated that the survey could be completed on a paper questionnaire that would be mailed to their address after two weeks. To encourage the use of the web to complete the survey, no mention of a paper questionnaire was made in the initial invitation letter sent to the web group. Study invitations were mailed to individuals (and not families), so that within couples both individuals received their own unique log-in credentials. Study members were told they would receive a US$20 check upon completion of the survey. The active study period occurred between May 2014 and October 2014 and responses were collected through February 2015.

Follow-up Protocol

After two weeks of nonresponse, several steps were taken to increase cooperation to the survey request. Altogether, approximately 10 reminders were sent alternating between regular mail and e-mail (the latter for the 66% who had provided an e-mail address). Reminders sent via regular mail included the web address of the survey with log-in credentials and reiterated that a US$20 postpaid incentive would be sent when the study was completed. E-mail reminders included similar information but excluded log-in credentials. The choice group was sent a paper questionnaire with a postage-paid return envelope after two weeks of nonresponse and again four weeks later. To encourage the use of the web to complete the survey, the mailings sent to the web group did not include a paper questionnaire but mentioned that one could be requested by calling the study’s toll-free number. Telephone reminder calls started after six weeks of nonresponse, and approximately 80% of the sample was successfully contacted or left a message.

Design of Delayed Prepaid Incentive Experiment

Ten weeks after the study began, a delayed prepayment experiment was launched. The experiment consisted of a mailing sent to 2,473 individuals who had not yet completed the interview. The mailing included a final reminder from the study director to complete the study, the paper questionnaire, and a postage-paid return envelope. Individuals were randomly assigned to one of the three treatment conditions: no incentive, US$5, or US$10. In the incentive treatment conditions, a U.S. bill in the amount of US$5 or US$10 was paper clipped to the top of the paper questionnaire to ensure visibility. A letter signed by the study director highlighted the impending end of the study period, explained how to access the web instrument, and provided a reminder that an additional US$20 postpaid incentive would be sent upon completion of the study. Individuals in families with children eligible for an impending supplement on child development (N = 5,037) were excluded to avoid overlapping survey requests.

Individuals in married/cohabiting couples in which one spouse/partner had already responded (n = 342) were sent the mailing but were not eligible for random assignment to a treatment condition. With this final mailing, the web group was sent its first copy of the paper questionnaire and the choice group was sent its third copy of the questionnaire. At the start of the experiment, the response rate was approximately 65% for the eligible subgroup, and about 25% of this group had completed the survey by paper (about 9% of those initially assigned to the web group and 75% of those initially assigned to the choice group).
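
As a concrete illustration of this assignment scheme (a minimal sketch, not the study's actual fielding code): because spouses/partners were jointly assigned, randomization can be drawn once per family and inherited by its members. The data frame and column names below (fam_id, condition) are hypothetical.

```python
import numpy as np
import pandas as pd

def assign_delayed_incentive(sample: pd.DataFrame, seed: int = 2014) -> pd.DataFrame:
    """Assign eligible nonrespondents to US$0 / US$5 / US$10 at the family level,
    so that both members of a couple share a condition (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    fam_ids = sample["fam_id"].unique()
    # One random draw per family; individuals inherit their family's condition.
    fam_condition = pd.Series(
        rng.choice(["US$0", "US$5", "US$10"], size=len(fam_ids)),
        index=fam_ids,
    )
    out = sample.copy()
    out["condition"] = out["fam_id"].map(fam_condition)
    return out

# Usage (hypothetical): nonrespondents has one row per eligible individual.
# nonrespondents = assign_delayed_incentive(nonrespondents)
```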

Outcomes

We examine the effects of the experiment on four main outcomes: response rate, defined as the number of complete interviews divided by the number of eligible reporting units in the sample; mode of response (web or paper); data quality, as assessed by item nonresponse across the 292 items in the survey; and sample bias as indicated by a comparison of sociodemographic characteristics of respondents before and after the incentive experiment.
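
For concreteness, the sketch below computes these outcomes from a hypothetical person-level file; the column names (completed, mode, n_items_missing, condition) are illustrative rather than the study's actual variable names.

```python
import pandas as pd

N_ITEMS = 292  # total number of items in the supplement

def summarize_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Response rate, web share among completes, and mean item missing data rate,
    by treatment condition (hypothetical column names)."""
    completes = df[df["completed"] == 1]
    return pd.DataFrame({
        # Complete interviews divided by eligible sample members
        "response_rate": df.groupby("condition")["completed"].mean(),
        # Share of completed interviews answered on the web (vs. paper)
        "web_share": completes.groupby("condition")["mode"]
                              .apply(lambda m: (m == "web").mean()),
        # Item nonresponse: items left missing out of 292, averaged over completes
        "item_missing_rate": completes.groupby("condition")["n_items_missing"]
                                      .mean() / N_ITEMS,
    })
```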

Statistical Methods

Response rates and item missing data rates are analyzed using a weighted multilevel mixed generalized linear regression model that adjusts for dependency of observations within couples by including a random effect for families that include spouses/partners (SAS Institute 2011). This model properly adjusts standard errors to account for the joint assignment of spouses/partners to experimental groups. We estimate a main effects model (with just treatment effects) and then test whether treatment effects vary by sociodemographic characteristics (age, years of completed education, gender, marital status, and income) by introducing interactions between receiving any incentive amount (US$5 or US$10) versus no incentive and each characteristic of interest. Sociodemographic characteristics of respondents are compared using weighted χ2 tests adjusted for the dependency of couple-based observations.
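
The article's models were fit in SAS as weighted multilevel logistic regressions with a family random effect; the sketch below is only a rough Python analogue under stated assumptions, substituting cluster-robust (family-level) standard errors for the random effect. All column names (completed, condition, weight, fam_id, age_group) are hypothetical.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_main_and_interaction(df):
    """Approximate the paper's models: completion regressed on treatment condition,
    then an any-incentive x covariate interaction (illustrative only)."""
    # Main effects: US$5 and US$10 each contrasted with US$0 (the reference level).
    main = smf.glm(
        "completed ~ C(condition, Treatment(reference='US$0'))",
        data=df,
        family=sm.families.Binomial(),
        freq_weights=df["weight"],  # survey weight (hypothetical column)
    ).fit(cov_type="cluster", cov_kwds={"groups": df["fam_id"]})

    # Interaction: any incentive (US$5 or US$10) x age group, per the paper's test
    # of differential treatment effects by sociodemographic characteristics.
    df = df.assign(any_incentive=(df["condition"] != "US$0").astype(int))
    interaction = smf.glm(
        "completed ~ any_incentive * C(age_group)",
        data=df,
        family=sm.families.Binomial(),
        freq_weights=df["weight"],
    ).fit(cov_type="cluster", cov_kwds={"groups": df["fam_id"]})
    return main, interaction
```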

Results

Sample Characteristics

Sociodemographic characteristics of the sample eligible for the experiment, by treatment condition, are shown in Table 1. As would be expected due to random assignment to the conditions, the three groups are of approximately equal size and there are no statistically significant differences among them with respect to any of the characteristics shown in Table 1.

Table 1.

Sociodemographic Characteristics (%) across Randomly Assigned Treatment Conditions.

Sociodemographic Characteristics    US$0 (N = 820)    US$5 (N = 839)    US$10 (N = 814)
Age
 <40 33.8 30.0 26.3
 40–59 38.1 43.8 44.7
 60+ 28.0 26.2 29.0
Years of education
 ≤12 44.1 43.6 45.9
 13–15 23.4 24.1 25.1
 16+ 22.3 20.5 19.6
 Missing 10.2 11.7 9.5
Gender of respondent
 Female 45.6 47.8 49.2
 Male 54.4 52.2 50.8
Couple status
 In a couple 44.6 52.2 47.5
 Single 55.4 47.8 52.5
Individual is PSID respondent
 Yes 77.9 74.1 76.3
 No 22.1 25.9 23.7
Low-income oversample
 Yes   8.5   8.6 10.0
 No 91.5 91.4 90.0
Initial mode assignment
 Paper 24.2 25.7 26.1
 Web 75.8 74.3 73.9

Note: PSID = Panel Study of Income Dynamics.

Incentive Effect on Response Rates

We estimate two main effects models for the overall sample and for the web and choice subsamples. The first model includes separate parameters for US$5 and US$10 (vs. US$0), and the second model collapses these categories into a single variable indicating either US$5 or US$10 (vs. US$0). Table 2 documents the effects of the delayed incentive on response rates. The proportion of individuals in the experiment completing the interview is higher among those receiving US$5 (23.9%) or US$10 (26.3%) than among those receiving no incentive (14.0%). Receiving either incentive (US$5 or US$10; 25%) yields a significantly higher completion rate than receiving no incentive (US$0; 14%), whereas the US$5 and US$10 conditions do not differ significantly from each other.

Table 2.

Survey Completions by Treatment Condition and Initial Mode Assignment.

Initial Mode Assignment    US$0 (%)    US$5 (%)    US$10 (%)    Any Incentive (US$5 or US$10) (%)
Total 14.0 23.9**   26.3*** 25.0*** 
Web (n = 1,705) 15.8 26.4** 29.7** 27.8**  
Choice (n = 768)   9.5 16.3     20.9*   18.6*    

Note: N = 2,473. Weighted generalized linear model adjusted for dependency of observations from couples. Difference of US$5 versus US$10 is nonsignificant for all groups.

* Difference versus US$0 is significant at p < .05. ** Difference versus US$0 is significant at p < .005. *** Difference versus US$0 is significant at p < .0001.

Incentive Effect on Response Rate by Initial Assignment

As shown in Table 2, both the US$5 and US$10 incentives have a statistically significant positive effect on response rates for those initially assigned to web (26.4% and 29.7% compared to 15.8% for US$5, US$10, and no incentive, respectively). For those initially assigned to choice, only the US$10 incentive is associated with a significantly higher response rate compared to no incentive (20.9% compared to 9.5% for US$10 and no incentive, respectively). There is no statistically significant difference in the response rate between the US$5 and US$10 conditions for either the web or the choice group, and the size of the effects of the incentives does not differ significantly between the web and choice groups. Moreover, we found no significant difference in the proportion of survey completions on paper between those receiving any incentive (78.3% for US$5 or US$10) and those receiving no incentive (79.0%).

Incentive Effect on Response Rate by Sociodemographic Characteristics

Older individuals were more responsive to the US$5 and US$10 incentives than were younger individuals. The incentive increased response rates by 15.0 percentage points among individuals younger than age 40 years and by 10.7 percentage points among those aged 40–59 years, but by 22.5 percentage points among those aged 60 years and older (p < .05 for differences by age). There were no other statistically significant interaction effects for any of the other characteristics in Table 1, indicating that the delayed unconditional incentives otherwise had a uniformly positive effect across a variety of sociodemographic characteristics.

Incentive Effect on Data Quality

Overall item missing data rates among the surveys completed after the implementation of the experiment were quite low, at 3.7% on average (see Table 3). There were no statistically significant differences in average item missing data rates between the incentive conditions and the no-incentive condition, or between the two incentive conditions, either in the total sample (3.2% and 4.3% for US$5 and US$10, respectively, compared to 3.6% for no incentive) or within response mode. Average item missing data rates for individuals responding by paper were slightly higher than for those responding by web (4.3% and 1.1%, respectively; p = .10 for difference); there were no other statistically significant differences between the modes in the magnitudes of the effects of the incentives.

Table 3.

Average Item Missing Data Rates by Treatment Condition and Response Mode.

Response Mode    Total (%)    US$0 (%)    US$5 (%)    US$10 (%)    US$5 or US$10 (%)
Total 3.7 3.6 3.2 4.3 3.8
Web (n = 94) 1.1 2.6 0.8 1.4 1.0
Paper (n = 346) 4.3 3.8 4.2 4.7 4.5

Note: N = 440. Weighted generalized linear model adjusted for dependency of observations from couples.

Sample Bias before and after Treatment

Table 4 provides information on the sociodemographic characteristics of the total eligible sample and respondents and nonrespondents before and after the incentive experiment. Four main results are noteworthy. First, the sociodemographic characteristics of the overall eligible sample (column A) and study respondents before the experiment (column C) are fairly similar although there are minor differences: In particular, study respondents before the experiment are slightly older, slightly more likely to be in a couple, and slightly less likely to be from the low-income oversample. The sizes of these differences are small.

Table 4.

Sociodemographic Characteristics (%) of Eligible Sample and Nonrespondents and Respondents before and after the Experiment.

Sociodemographic Characteristics    A    B    C    D    E    E1    E2    F
Column key: A = Total Eligible Sample (n = 7,230); B = Nonrespondents before Experiment (n = 2,473); C = Respondents before Experiment (n = 4,757); D = Nonrespondents after Experiment (n = 2,033); E = Respondents after Experiment (n = 440); E1 = Respondents Sent US$0 (n = 92); E2 = Respondents Sent US$5/US$10 (n = 348); F = All Respondents (n = 5,197).
Age
 <40 22.6 30.1 19.6 33.4 18.4 26.0 16.3 19.5
 40–59 39.8 42.2 38.8 42.4 41.6 33.7 43.9 39.1
 60+ 37.6 27.7 41.6 24.2 39.9 40.3 39.8 41.4
Years of education
 ≤12 39.4 44.5 37.3 44.8 43.4 34.9 45.8 37.8
 13–15 23.0 24.2 22.5 23.9 25.3 28.4 24.5 22.7
 16+ 29.4 20.8 32.8 20.1 23.2 33.3 20.4 32.1
 Missing   8.2 10.5   7.3 11.2   8.0   3.3   9.4   7.3
Gender of the respondent
 Female 52.5 47.5 54.6 44.5 57.9 58.8 57.7 54.9
 Male 47.5 52.5 45.4 55.5 42.1 41.2 42.3 45.1
Couple status
 In a couple 56.6 48.2 60.0 46.3 54.7 50.6 55.8 59.6
 Single 43.4 51.8 40.0 53.7 45.3 49.4 44.2 40.4
Individual is PSID respondent
 Yes 73.0 76.1 71.7 76.5 74.8 77.1 74.1 72.0
 No 27.0 23.9 28.3 23.5 25.2 22.9 25.9 28.0
Low-income oversample
 Yes   6.2   9.0   5.1 10.1   5.3   5.0   5.4   5.1
 No 93.8 91.0 94.9 89.9 94.7 95.0 94.6 94.9
 Initial assignment to web 74.8 74.7 74.8 72.5 82.3 85.0 81.6 75.4
 Response mode is web N/A N/A 74.5 N/A 21.6 21.0 21.7 70.1

Note: PSID = U.S. Panel Study of Income Dynamics. N/A = not applicable.

Second, not unexpectedly, before the incentive experiment, there were significant differences between study respondents and nonrespondents. These differences persisted but became slightly less marked after the experiment. Before the experimental mailing, study respondents (column C) were older, more educated, more likely to be female, in a couple, and less likely to be from the low-income oversample compared to study nonrespondents (column B; all comparisons are statistically significant at p < .05). Respondents to the experiment (column E) were also significantly older, more likely to be female, in a couple, and less likely to be from the low-income oversample compared to nonrespondents (column D; comparisons are statistically significant at p < .05) but did not differ significantly in years of completed education or whether the individual was the PSID respondent.

Third, respondents in the no-incentive and treatment (US$5 or US$10) subgroups (columns E1 and E2) were very similar to each other and to nonrespondents (column D), with one exception: Respondents in the no-incentive condition tended to have more years of education compared to both nonrespondents and respondents who were sent an incentive. In other words, the delayed prepaid incentive brought in a disproportionate number of less-educated respondents, who were less likely to have responded before the final mailing.

Discussion

We conducted a delayed, prepaid incentive experiment in a mixed mode supplement embedded in a national panel survey. Unlike prior work in this area, we drew on respondents in an ongoing nationally representative panel, which offered the distinctive benefit of including individuals across a broad range of sociodemographic characteristics and allowed us to test a variety of differential effects of the delayed incentives.

Several key results emerged. First, we found that including US$5 or US$10 doubled incremental response rates when added to the first mailing of a paper questionnaire for a group pushed to web and to a third mailing for a group given a choice of responding via web or paper questionnaire. The incentive was equally effective in improving response rates in both groups and across various socioeconomic groups but was more effective for older than for younger adults. Second, examination of item missing data rates across all survey questions showed that data quality was not meaningfully affected by the incentive. Finally, the endgame did not introduce or exacerbate sample bias and may have slightly reduced a small preexisting bias with respect to education.

We found that a mailing that included US$5 sent to individuals who had been unresponsive to as many as 10 prior requests for their participation nearly doubled their rates of participation compared to including no incentive. Increasing the amount to US$10 yielded no significant additional benefits, leading us to conclude that a relatively modest US$5 included in a mailing at the end of a study can be highly beneficial. It is important to note that these findings occurred in the context of a long-standing panel study, and generalization to different types of study designs, such as those making a first contact with new study members, is unclear. Two additional limitations should be noted.

First, individuals in families with children under age 18 years living at home were not included in the experiment, making the generalizability of findings to this group uncertain. The age-specific findings that we report must be interpreted in this context; that is, the experiment was more effective for older adults than younger adults without children. However, given the lack of differential effects of the incentives across a large number of sociodemographic characteristics demonstrated here and evidence regarding the utility of incentives in studies of families with children (e.g., Fomby et al. 2015), there is reason to believe that similar effects would emerge for parents.

A second limitation relates to the use of postal mail, which limits our ability to definitively distinguish nonresponse from noncontact. Consistent efforts to maintain updated contact information for PSID families (Schoeni et al. 2013) have resulted in extremely high sample location rates: More than 90% of the approximately 20% of families that require tracking each wave are successfully located. This suggests that noncontact rates are low and, consequently, that this limitation is not likely to be a significant weakness.

Despite these limitations, our study has implications for investigators interested in enhancing response rates in mixed mode studies through the use of incentives. This study showed that a substantial proportion of nonrespondents initially assigned to web may still be willing to respond, especially when a small incentive is attached. In response to receiving a final mailing with a paper questionnaire, log-in instructions, and an incentive of US$5 or US$10, 28% of the group initially assigned to web responded (vs. 16% with no incentive). Moreover, this crossover effect is potentially sizable, with 76% of those responding sending back a paper questionnaire.

Our findings also highlight the critical importance of initial group assignment. Our use of “having connected to the Internet through a computer or laptop at home at some time in the past year” may have underestimated the preference for paper among some respondents. Why individuals initially assigned to web preferred responding by paper is unclear. Existing research shows that even in households with an Internet connection, some individuals are reluctant to complete survey requests through the Internet for reasons such as lack of skill or “web proficiency” (Stern et al. 2009; Stern et al. 2014). Moreover, in the current study, some of the content was sensitive, which raises the additional possibility that some individuals were reluctant to complete the survey on the Internet due to privacy concerns. It may simply be that the tangible presence of a paper questionnaire made the request salient and its completion convenient. More research pinpointing who should initially be assigned to web and who should be given a choice is needed. This study suggests, however, that even when individuals are initially assigned to the wrong mode, an incentivized paper copy can remedy participation reluctance, even after a 10-week delay.

Our findings also highlight the need to better understand the mechanisms that lead to incentive-induced cooperation. In the current study, increased motivation was evidenced by both the positive impact of delayed incentives on response and their lack of detrimental effect on data quality. The exact mechanism through which motivation to respond is increased remains unclear, however. Existing research, based largely on incentives that are provided or promised with the initial request, suggests that incentives work by eliciting a sense of reciprocity, heightening the salience of the study and topic, and compensating for a lack of interest in the study (see Laurie and Lynn 2009; Singer et al. 1999; Singer and Ye 2013).

Our finding that incentives bring in a disproportionate number of less-educated individuals suggests that delayed prepaid incentives may reduce bias when the responding sample underrepresents those with low educational attainment. An examination of the survey responses themselves, to determine whether the incentives influenced the substance of what was reported, may be a valuable next step in this research. In any case, additional experimental research is needed to uncover the mechanisms through which delayed incentives work and how they differ from the mechanisms related to prepaid incentives sent with the initial survey request.

Finally, our study sets the stage for gleaning new insights about panel study participants’ behavior. In the context of an ongoing panel study, the evidence on the effectiveness of incentives and the relative dearth of other levers to reduce nonresponse make their use alluring. Yet we know little about the potential long-term effects of various incentive strategies in longitudinal studies (see Laurie and Lynn 2009; Singer and Ye 2013). An important question is whether there are conditioning effects of using delayed incentives for late respondents in subsequent waves. Because this experiment was embedded in an ongoing panel study, it is possible to examine patterns of response in the 2015 wave of the PSID among those who were invited to participate in the incentive experiment in the current study. We believe this is an important question for future research.

In conclusion, our findings confirm and extend prior research documenting the positive effect of prepaid incentives on response rates. We demonstrate their positive influence late in the field period among panel study members who had declined numerous prior survey requests. Moreover, we find no negative impact of a delayed, prepaid incentive on data quality or response bias. As panel studies increasingly use mixed mode designs and expand incentive strategies to address rising rates of nonresponse, additional research is needed to better understand the motivational characteristics of respondents across a range of sample characteristics, including factors underlying response propensity and determinants of mode preference. This information will inform the design of study protocols by sharpening the definition of initial mode assignment and optimizing the use of incentive strategies to enhance cooperation and reduce field effort in mixed mode studies.

Acknowledgments

The authors thank Shonda Kruger-Ndiaye and Carissa Scurlock for their contributions to this project. An earlier version of this article was presented at the Sixth Conference of the European Survey Research Association, Reykjavik, Iceland, July 14–17, 2015.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the National Institute on Aging P01 AG029409.

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

1. Adua L, Sharp JS. Examining survey participation and response quality: The significance of topic salience and incentives. Survey Methodology. 2010;36:95–109.
2. Cantor D, O’Hare B, O’Connor K. The use of monetary incentives to reduce nonresponse in random digit dial telephone surveys. In: Lepkowski JM, Tucker C, Brick JM, de Leeuw E, Japec L, Lavrakas PJ, Link MW, Sangster RL, editors. Advances in telephone survey methodology. New York: Wiley; 2008. pp. 471–98.
3. Church AH. Incentives in mailed surveys: A meta-analysis. Public Opinion Quarterly. 1993;57:62–79.
4. Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: The tailored design method. 3rd ed. Hoboken, NJ: John Wiley; 2009.
5. Edwards P, Roberts L, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I. Increasing response rates to postal questionnaires: Systematic review. British Medical Journal. 2002;324:1183. doi: 10.1136/bmj.324.7347.1183.
6. Fomby P, Sastry N, McGonagle KA. Effectiveness of a time-limited incentive on participation by hard-to-reach respondents in a panel study. Paper presented at the 6th Conference of the European Survey Research Association; Reykjavik, Iceland. July 14–17, 2015.
7. Fumagalli L, Laurie H, Lynn P. Experiments with methods to reduce attrition in longitudinal surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society). 2013;176:499–519.
8. Goldenberg KL, McGrath D, Tan L. The effects of incentives on the consumer expenditure interview survey. Joint Statistical Meetings Proceedings. 2009:5985–99.
9. Göritz AS. Incentives in web studies: Methodological issues and a review. International Journal of Internet Science. 2006;1:58–70.
10. Jäckle A, Lynn P. Respondent incentives in a multi-mode panel survey: Cumulative effects on nonresponse and bias. Survey Methodology. 2008;34:105–17.
11. Laurie H, Lynn P. The use of respondent incentives on longitudinal surveys. In: Lynn P, editor. Methodology of longitudinal surveys. New York: Wiley; 2009. pp. 205–33.
12. McGonagle KA, Schoeni RF, Couper MP. The effects of a between-wave incentive experiment on contact update and production outcomes. Journal of Official Statistics. 2013;29:1–17. doi: 10.2478/jos-2013-0022.
13. McGonagle KA, Schoeni RF, Sastry N, Freedman VA. The Panel Study of Income Dynamics: Overview, recent innovations, and potential for life course research. Longitudinal and Life Course Studies. 2012;3:268–84. doi: 10.14301/llcs.v3i2.188.
14. Medway RL, Tourangeau R. Response quality in telephone surveys: Do prepaid cash incentives make a difference? Public Opinion Quarterly. 2015;79:524–43.
15. Messer BL. Pushing households to the web: Results from webmail experiments using address based samples of the general public and mail contact procedures. PhD dissertation. Pullman: Washington State University; 2012.
16. Messer BL, Dillman DA. Surveying the general public over the Internet using address-based sampling and mail contact procedures. Public Opinion Quarterly. 2011;75:429–57.
17. Millar M, Dillman DA. Improving response to web and mixed-mode surveys. Public Opinion Quarterly. 2011;75:249–69.
18. Parsons NL, Manierre MJ. Investigating the relationship among prepaid token incentives, response rates, and nonresponse bias in a web survey. Field Methods. 2014;26:191–204.
19. Patrick ME, Singer E, Boyd CJ, Cranford JA, McCabe SE. Incentives for college student participation in web-based substance use surveys. Addictive Behaviors. 2013;38:1710–14. doi: 10.1016/j.addbeh.2012.08.007.
20. Petrolia DR, Bhattacharjee S. Revisiting incentive effects: Evidence from a random sample mail survey on consumer preferences for fuel ethanol. Public Opinion Quarterly. 2009;73:537–50.
21. Rodgers W. Size of incentive effects in a longitudinal study. Paper presented at the Annual Meeting of the American Association for Public Opinion Research; St. Petersburg, FL. May 16–19, 2002.
22. Ryu E, Couper MP, Marans RW. Survey incentives: Cash vs. in-kind; face-to-face vs. mail; response rate vs. nonresponse error. International Journal of Public Opinion Research. 2006;18:89–106.
23. SAS Institute. Base SAS® 9.3 procedures guide. Cary, NC: SAS Institute; 2011.
24. Schoeni RF, Stafford FP, McGonagle KA, Andreski P. Response rates in national panel surveys. The Annals of the American Academy of Political and Social Science. 2013;645:60–87. doi: 10.1177/0002716212456363.
25. Singer E, Gebler N, Raghunathan T, Van Hoewyk J, McGonagle K. The effect of incentives in telephone and face-to-face surveys. Journal of Official Statistics. 1999;15:217–30.
26. Singer E, Ye C. The use and effects of incentives in surveys. The Annals of the American Academy of Political and Social Science. 2013;645:112–41.
27. Stern MJ, Adams A, Elsasser S. Digital inequality and place: The effects of technological diffusion on Internet proficiency and usage across rural, suburban, and urban counties. Sociological Inquiry. 2009;79:391–417.
28. Stern MJ, Bilgen I, Dillman DA. The state of survey methodology: Challenges, dilemmas, and new frontiers in the era of the tailored design. Field Methods. 2014;26:284–301.
