Abstract
We describe an experiment in which a time-limited incentive was offered to 200 respondents drawn at random from a sample of 594 hard-to-reach respondents; those offered the incentive were asked to complete all survey components of the study during a three-week winter holiday period. Sample members were primary caregivers of children included in the 2014 Child Development Supplement to the U.S. Panel Study of Income Dynamics. The incentive provided $50 to caregivers who completed a 75-minute telephone interview and whose eligible children each completed a 30-minute interview. Results indicate that the incentive was an effective and cost-efficient strategy for increasing short-term response rates among hard-to-reach respondents, with no negative impact on final response rates.
Survey researchers must balance limited fieldwork periods and finite budgets against the goal of achieving high response rates with a variety of respondent types, including those who are not easily motivated to participate or who are otherwise hard to reach. This tension between scientific objectives and the circumstances of fieldwork is compounded in panel studies, where the value of respondents’ participation comes from repeated observations collected during fixed intervals. Thus, in the context of a panel study, cost-efficient strategies to complete interviews early in fieldwork may be particularly useful when they increase response rates and decrease hard-to-reach respondents’ perceived burden by reducing subsequent contact attempts and providing a longer rest period between waves of data collection.
Time-limited monetary incentives may provide one such strategy. These conditional incentives (Olsen 2005) are offered for a few days or weeks early in the fieldwork period and provide payments to respondents who complete the study prior to a targeted date. Time-limited incentives differ from standard monetary incentives in that they require respondents’ swift participation and are intended to generate a boost in productivity over a short time horizon (LeClere et al. 2012). Thus, time-limited incentives may enhance fieldwork efficiency by appealing to participants’ willingness to complete a task promptly when there is a measurable gain to doing so. This approach potentially complements more widely used strategies targeted at panel study sample members, such as prepaid incentives intended to tap norms of reciprocity and appeals to participants’ irreplaceability (Arzheimer and Klein 1999, Martinez-Ebers 1997, McGonagle, Schoeni and Couper 2013, Pedersen and Nielsen 2016).
Time-limited incentives are attractive for a variety of reasons. First, their early implementation can produce a completed interview with a respondent who might otherwise require substantial resources to complete the survey later (Singer and Ye 2013). Repeated contact attempts over the full course of fieldwork are an effective but potentially costly method to recruit hard-to-reach respondents and to diminish nonresponse bias in representative samples (Legleye et al. 2013, Lynn and Clarke 2002, Westrick and Mount 2008). Strategies to achieve higher response rates with hard-to-reach respondents over a shorter active fieldwork period are therefore generally desirable (Brown and Calderwood 2014, McGonagle and Freedman forthcoming) and are particularly valuable in studies with a hard end date where fieldwork extensions are infeasible.
Second, when offered in supplemental projects on panel studies like the one described here, time-limited incentives introduced early in the field period may enhance interest in the study topic and provide a rest period prior to requests for participation in a subsequent interview round. Finally, time-limited incentives may capitalize on seasonal opportunities to reach respondents. Relevant to the experiment we describe, the winter holiday season in the United States (roughly from the week before Christmas to the weekend after New Year’s Day) is marked by school closures and reduced work hours in many occupations. While this period is also characterized by travel away from home, it represents one of the few times outside of weekends when parents and children might be away from work and school at the same time, providing a unique opportunity to schedule back-to-back interviews with multiple participants in household-based studies. Further, an added incentive provided during the holiday season may be particularly attractive to less affluent families experiencing financial strain associated with purchasing gifts and planning family celebrations, and it may help to maintain interviewer morale by providing a new tool for participant recruitment.
Some risks counterbalance these advantages. First, cognitive evaluation theory predicts that respondents who do not participate in time to receive the incentive may feel discouraged from participating once the offer expires (Deci, Koestner and Ryan 1999), thus contributing to selective final nonresponse. However, the current literature suggests that time-limited incentives are effective at producing a brief rise in fieldwork productivity with no observed negative effects on final response rates across a variety of survey modes, including web, telephone, and in-person interviews with adults participating in cross-sectional and panel studies (Brown and Calderwood 2014, Coopersmith et al. 2014, LeClere et al. 2012). Nevertheless, the effectiveness of time-limited incentives has not been evaluated in the context of a sample of children and their caregivers; nor has their usefulness during a traditionally slow period for data collection been assessed.
A second potential risk is that time-limited incentives offered early in fieldwork may be cost-inefficient if they are paid mostly to respondents who were likely to eventually participate even in the absence of an additional incentive. Potential respondents’ underlying propensity to participate can only be estimated indirectly at the end of fieldwork based on actual eventual participation. Respondents who have a high propensity to participate but who are inclined to defer until the end of the fieldwork period may be motivated to complete their interview early in response to an added incentive, but the incentive is an ill-used resource if recruiting the respondent later in fieldwork would yield the same outcome. One method to target the hardest-to-reach respondents in the sample is to estimate their probability of nonresponse from what is known about them at the outset of fieldwork. In the context of panel studies, considerable information is usually available from previous waves of data collection such as indicators of frequent contact attempts, perceived resistance to participation, and other characteristics associated with nonresponse. This information can be used to identify those with the lowest likelihood of completing the survey, thus allowing time-limited incentives to be targeted at those respondents from whom they might draw the largest gain in ultimate fieldwork productivity.
Current study
We present the results of an experiment to provide a $50 time-limited monetary incentive to caregivers participating with their children in the 2014 Child Development Supplement to the U.S. Panel Study of Income Dynamics. We describe the immediate and long-term effectiveness of the incentive by comparing participation rates among respondents in the experimental and control groups immediately after the target date and at the end of fieldwork 16 weeks later. Further, we assess whether the incentive was equally effective with families that had a higher or lower likelihood of predicted nonresponse based on their participation experience in the previous panel wave. Finally, we ask whether the incentive was cost-efficient as measured by whether it brought in respondents who otherwise would not have completed the interview at all, rather than only those who might have finished later in the absence of any additional incentive.
Context
The Panel Study of Income Dynamics is a longitudinal study of income and employment dynamics in households in the United States that began in 1968 with a sample of approximately 4,800 families. The study has a genealogical design and recruits adult children of respondents to participate when they split off from their parents to form their own households. A refresher sample added in 1997, consisting of about 500 households headed by immigrants who had entered the United States after 1968, has kept the sample representative of approximately 97 percent of children in US households. Interviews for the main study are conducted in odd-numbered years by telephone with one respondent per household, usually the household head or the head’s spouse or partner. Respondents provide information about themselves, their spouse or cohabiting partner, and all other family members living together in a family unit. In 2013, the study covered nearly 10,000 families and 25,000 individuals (see McGonagle et al. 2012 and Panel Study of Income Dynamics 2015 for more details).
The 2014 PSID Child Development Supplement (CDS-2014) gathered in-depth information on sample children from PSID families who participated in the 2013 PSID main interview. The CDS-2014 sample included children aged 0 to 17 years in 2013 and their primary caregivers (usually the child’s mother). CDS-2014 included a 10-minute coverscreen telephone interview with a knowledgeable adult in the household, which collected basic demographic information used to define eligibility for the study components; a 75-minute telephone interview with the primary caregiver; and a 35-minute telephone interview with adolescents aged 12–17. The adolescent interview comprised two components: a 20-minute interviewer-administered segment and a 15-minute segment of sensitive questions administered by interactive voice response (IVR) technology. The study also included other components to assess children’s cognition, time use, and anthropometry and to request consent to obtain administrative records; however, participants were not required to complete these additional components to earn the time-limited incentive. CDS-2014 is the successor to the original Child Development Supplement, which was conducted over three waves between 1997 and 2007 with a cohort of children who were between 0 and 12 years of age at the first wave.
The eligible sample for CDS-2014 comprised 5,200 children in 2,880 families from among the 3,300 original families who were asked to complete the coverscreen. The final response rate among screened families was 88%. CDS-2014 was fielded over a 26-week period between late October 2014 and late April 2015. The time-limited incentive was available from December 13, 2014 to January 4, 2015, a three-week period beginning at the end of the seventh week of fieldwork.
Sample selection
The incentive was intended for families that were expected to have a relatively low probability of participation in the study. To identify these hard-to-reach families, we used information from the 2013 PSID main interview fieldwork experience (i.e., paradata) to estimate the probability that the family had not yet completed the CDS-2014 interview by December 10. Completion was measured by whether the primary caregiver and all eligible children had finished their interviewer-administered telephone interviews. Predictors included the number of telephone calls made to the household before the 2013 interview was completed; the number of breakoffs occurring during that interview; whether the respondent demonstrated any resistance to participating in that interview; whether the household had not participated in the 2011 PSID interview; and the family’s original sampling stratum (that distinguished the original low-income oversample and the immigrant refresher sample). The unit of analysis was the household in which CDS-2014 children resided in 2013. The strongest predictors of CDS-2014 completion status (results not shown) were the number of telephone calls made to the household during fieldwork in 2013 and membership in the low-income oversample or immigrant refresher sample, both of which predicted higher odds that a household’s interview was incomplete as of December 10. The other indicators had a statistically nonsignificant association with completion status. Sample characteristics and estimated probabilities of being incomplete are summarized in Table 1.
Table 1.
Sample characteristics by incentive group assignment

| | Offered incentive | Not offered incentive |
|---|---|---|
| N | 200 | 394 |
| Criteria for sample selection | | |
| Probability of being incomplete at beginning of incentive period | Range: .87–.99; Mean: .93 | Range: .87–.99; Mean: .93 |
| Number of calls placed at 2013 Core interview | 51.9 | 52.5 |
| Number of breakoffs in 2013 Core interview | .76 | .77 |
| Resistance in 2013 Core interview | .19 | .23 |
| Household did not complete 2011 interview | .07 | .05 |
| Sampling stratum | | |
| Low-income | .40 | .40 |
| General population | .43 | .43 |
| 1997 immigrant refresher | .18 | .17 |
| Sample attributes during CDS-2014 fieldwork | | |
| Coverscreen interview complete at beginning of incentive period | .39 | .30* |
| Selected for supplemental in-person visit | .52 | .52 |
| Number of children 12–17 expected in household | .39 | .39 |

\* Group differences significant at p<.05.
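To make the estimation step concrete, the sketch below fits a logistic regression of completion status on the paradata predictors described above. It is illustrative only: the paper reports neither its model specification nor its estimation software, and the data frame and column names here are hypothetical.

```python
# A minimal sketch of the nonresponse-propensity model described above,
# assuming a household-level pandas DataFrame with hypothetical column
# names; the paper does not publish its specification or software.
import pandas as pd
import statsmodels.api as sm

def fit_incompleteness_model(households: pd.DataFrame):
    """Estimate P(CDS-2014 interviews incomplete as of Dec 10) from 2013 paradata."""
    X = sm.add_constant(households[[
        "n_calls_2013",         # calls placed before the 2013 interview was completed
        "n_breakoffs_2013",     # breakoffs during the 2013 interview
        "any_resistance_2013",  # 1 = respondent showed resistance in 2013
        "missed_2011",          # 1 = household did not complete the 2011 interview
        "stratum_low_income",   # original low-income oversample
        "stratum_immigrant",    # 1997 immigrant refresher sample
    ]])
    y = households["incomplete_dec10"]  # 1 = any eligible interview unfinished
    result = sm.Logit(y, X).fit(disp=False)
    households = households.assign(p_incomplete=result.predict(X))
    return result, households
```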
We then divided the sample into two groups: families in which all eligible members had completed all of their CDS-2014 interviews by December 10 (N=547) and families in which at least one interview was not started, or was started but remained incomplete (N=2,368). Families in the latter group whose predicted probability of being incomplete fell in the top quartile of the distribution were defined as eligible for the time-limited incentive (N=594). From the eligible sample, we randomly assigned 200 families to the experimental condition and the balance to the control group.
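Continuing the illustration above, a sketch of the eligibility cut and random assignment; the frame, column names, and seed remain hypothetical.

```python
# Top-quartile eligibility cut and random assignment of 200 families,
# continuing the hypothetical `households` frame from the previous sketch.
import numpy as np

incomplete = households[households["incomplete_dec10"] == 1]           # N=2,368 here
threshold = incomplete["p_incomplete"].quantile(0.75)                  # top-quartile boundary
eligible = incomplete[incomplete["p_incomplete"] >= threshold].copy()  # N=594 here

rng = np.random.default_rng(seed=20141210)  # seed is illustrative
treated = rng.choice(eligible.index.to_numpy(), size=200, replace=False)
eligible["offered_incentive"] = eligible.index.isin(treated).astype(int)
```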
Table 1 describes characteristics of the experimental and control groups separately. Group differences on the criteria from the 2013 PSID Core interview used to select families into the hard-to-reach classification, and on their characteristics in the CDS-2014 sample, are statistically insignificant at p<.05, with one exception: by the beginning of the incentive period, approximately 39 percent of families in the incentive group had completed the initial coverscreen interview (which must be completed in order to launch the remaining interview modules), compared to 30 percent of families in the control group. To the extent that completing the coverscreen interview early indicates greater willingness to participate in the remainder of the study than deferring it, the incentive group may have had a higher underlying probability of eventual completion. However, controlling for this characteristic in the analyses that follow did not change the overall results.
Criteria
To earn the time-limited incentive, all eligible members of a family were required to have completed their interviewer-administered telephone interview by January 4, 2015. However, adolescents were not required to complete the IVR portion of the telephone interview, and families selected for the supplemental home visit were not required to complete the in-home components. Respondents also were not required to submit signed linkage consent forms or saliva samples to earn the time-limited incentive.
Implementation
A brief letter describing the incentive and the requirements to receive it was mailed to the household head of families in the experimental group from Survey Research Operations at the Institute for Social Research on December 12, 2014. The letter encouraged participants to call the study’s toll-free telephone number to schedule their interview during the incentive period. Interviewers also introduced the incentive to eligible primary caregivers by telephone. When all interviews in the family were complete, the $50 incentive was paid by check to the primary caregiver in combination with the standard $60 incentive she/he received for participating. The study protocol was unchanged for families in the control group.
Results
We address three questions to assess the effectiveness of the time-limited incentive. First, did it contribute to a short-term gain in productivity? Second, did it have a negative effect on final participation rates by hard-to-reach families? Third, was it cost-efficient in terms of capturing cases that otherwise would not have participated?
In response to the first question, Column 1 of Table 2 shows that at the end of the incentive period, 25 percent of families in the experimental group (N=50) had completed their telephone interviews, compared to 8 percent of families in the control group (N=31). The hazard rate for completing telephone interviews during the incentive period was about 3.95 times higher among families in the incentive group than in the control group (p<.01). On the second question, Column 2 of Table 2 shows that among families whose interviews remained incomplete when the time-limited incentive was withdrawn at the end of week 10, the share that eventually completed was somewhat higher in the incentive group than in the control group, but the difference was not statistically significant in a chi-square test (hazard ratio=1.27, ns).
Table 2.
Short-term and final completion of the PSID 2014 Child Development Supplement by incentive status

| | Completed during time-limited incentive period | Completed in post-incentive period | Final completion rate |
|---|---|---|---|
| Offered time-limited incentive (N=200) | 50 (25%) | 63 (42% of remaining cases) | 113 (56.5%) |
| Not offered time-limited incentive (N=394) | 31 (7.9%) | 132 (36.4% of remaining cases) | 163 (40.9%) |
| Hazard ratio (time-limited incentive/no time-limited incentive) | 3.95 (p<.01) | 1.27 (ns) | 1.72 (p<.01) |
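Hazard ratios like those in Table 2 can be obtained from a proportional hazards model with the treatment indicator as the sole covariate. The sketch below illustrates this on simulated completion times using the `lifelines` package (an assumption; the authors do not name their estimation method or software, and the data here are toy values).

```python
# Hazard ratio of interview completion, incentive group vs. control,
# illustrated on toy data; exp(coef) on offered_incentive is the hazard ratio.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
offered = np.repeat([1, 0], [200, 394])
# Toy completion times in weeks, with a faster completion hazard when offered.
t = rng.exponential(scale=np.where(offered == 1, 20.0, 35.0))
completed = (t <= 19).astype(int)   # observed only within remaining fieldwork
t = np.minimum(t, 19.0)             # censor at end of fieldwork (~19 weeks after Dec 10)

df = pd.DataFrame({"weeks": t, "completed": completed,
                   "offered_incentive": offered})
cph = CoxPHFitter().fit(df, duration_col="weeks", event_col="completed")
cph.print_summary()
```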
The incentive could have been a long-term detriment to participation if eligible respondents became less likely to participate once the incentive was no longer available. Further, if the incentive had been effective only in reaching those families that were most likely to participate anyway, the residual families in the experimental group would have had a lower likelihood of participation in the post-incentive period than those remaining in the control group. Instead, the nonsignificant hazard ratio indicates that, conditional on not having completed by the end of the incentive period, the two groups were equally likely to complete their telephone interviews by the end of fieldwork. Overall, by the end of fieldwork 56 percent of families in the treatment group had completed their telephone interviews, compared to 41 percent of families in the control group. Combining the incentive and post-incentive periods, the hazard ratio for completion in the incentive group relative to the control group was 1.72 (p<.01; see Table 2). We note that there were no reports from field interviewers that respondents expressed disappointment or frustration when the time-limited incentive expired.
Figure 1 summarizes the impact of the incentive on productivity during the three-week holiday period in which it was available and in the post-incentive period. The figure compares the Kaplan-Meier completion rates for the experimental group (solid line) and the control group (dashed line) from the beginning of the incentive period to the end of fieldwork. The vertical line marks the end of the tenth week, when the incentive was withdrawn. Consistent with the information in Table 2, the experimental group experienced a bump in productivity relative to the control group throughout the incentive period, an advantage that persisted in the first few days after the incentive was withdrawn. (Field interviewers were permitted to provide the incentive to families who had some outstanding study components at the official end date and completed those components soon after. Those cases are not included in our assessment of the incentive’s effectiveness during the planned incentive period.) Thereafter, the slopes of the completion curves in the two groups were roughly parallel, consistent with the statistically insignificant hazard ratio of completion in the post-incentive period described above.
Figure 1.
Kaplan-Meier completion rates, week 7 to end of fieldwork
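For readers who wish to reproduce curves like those in Figure 1, the sketch below plots one minus the Kaplan-Meier survival function by group, reusing the toy `df` from the previous sketch; `lifelines` and `matplotlib` are again our assumed tools.

```python
# Kaplan-Meier completion curves by incentive group, on the toy data above.
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

ax = plt.subplot(111)
for flag, grp in df.groupby("offered_incentive"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["weeks"], event_observed=grp["completed"],
            label="offered incentive" if flag == 1 else "control")
    kmf.plot_cumulative_density(ax=ax)   # 1 - S(t): share completed by week t
ax.axvline(3.0, color="gray", linestyle=":")  # incentive withdrawn (end of week 10)
ax.set_xlabel("Weeks since start of incentive period")
ax.set_ylabel("Cumulative share of families completed")
plt.show()
```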
Finally, we take two approaches to assessing the cost-efficiency of the incentive. First, we ask whether the incentive attracted only those who were likely to complete the study eventually anyway. In both the incentive and control groups, we distinguish families with a lower initial predicted probability of non-completion as of December 10 (below .94) from those with a higher predicted probability (.94 and above). Figure 2 shows that in the control group, 33 percent of those with a higher probability of being incomplete as of the seventh week of fieldwork ultimately completed their telephone interviews, compared to 49 percent of those with a lower probability. In the incentive group, 56 percent of families with high initial probabilities of being incomplete eventually completed their interviews, 23 percentage points more than their counterparts in the control group. In fact, in the incentive group, families with high and low initial probabilities of being incomplete were about equally likely to finish their interviews by the end of fieldwork (56 and 57 percent, respectively). We conclude that the incentive was effective in recruiting those with the highest initial probabilities of not completing the study.
Figure 2.
Predicted probability of completion by initial probability of non-response (error bars represent 95% confidence intervals)
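The subgroup comparison behind Figure 2 amounts to four completion proportions with confidence intervals. The sketch below computes normal-approximation (Wald) 95% intervals; the paper does not state how its error bars were computed, and the cell counts here are hypothetical, chosen only to roughly match the percentages reported in the text.

```python
# Completion rates with 95% Wald intervals for the four Figure 2 subgroups.
# Cell counts are invented for illustration; only the percentages are from
# the text.
import numpy as np

def completion_ci(completed: int, n: int):
    p = completed / n
    half = 1.96 * np.sqrt(p * (1 - p) / n)  # normal-approximation half-width
    return p, p - half, p + half

cells = {  # (group, initial p(incomplete)): (completed, n) -- hypothetical
    ("control", "high"): (65, 197),   # ~33%
    ("control", "low"):  (97, 197),   # ~49%
    ("offered", "high"): (56, 100),   # ~56%
    ("offered", "low"):  (57, 100),   # ~57%
}
for key, (c, n) in cells.items():
    p, lo, hi = completion_ci(c, n)
    print(f"{key}: {p:.2f} [{lo:.2f}, {hi:.2f}]")
```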
Our second approach is to ask how many interviews might have been completed with the families randomly assigned to the incentive group if the incentive had not been offered. As Table 2 shows, 50 families in the incentive group completed their interviews by the end of the incentive period, and 113 families, or 56.5 percent, had finished by the end of fieldwork. In comparison, 41 percent of families in the control group ultimately finished their interviews. Put another way, the share of cases completed in the experimental group by the end of fieldwork was about 38 percent higher than in the control group (.565/.41 = 1.38). Applying this ratio to the number of completed cases in the incentive group, we estimate that 82 of the 113 completed cases in the experimental group would eventually have finished even without the holiday incentive (113/1.38 = 82), implying that the true gain from the incentive was 31 families (113 − 82 = 31). If we spread the cost of the $50 incentive paid to the 50 caregivers whose families participated during the incentive period over the 31 families whom we expect would not have completed by the end of fieldwork without the incentive, the price to acquire each gained interview was about $81, or $31 more than the per-family incentive payment ($50 × 50 recipients = $2,500; $2,500/31 gained interviews ≈ $80.65). This added cost is relatively low, given that the money invested in the experimental group saved time and effort in contacting hard-to-reach families over the remaining course of fieldwork, especially at the end of the field period, when interviewers otherwise would have been deploying endgame strategies to capture these hard-to-reach families.
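Written out as a short calculation, the counterfactual arithmetic above is:

```python
# The cost-efficiency arithmetic from the paragraph above, made explicit.
completed_offered = 113          # final completions in the incentive group
final_rate_offered = 0.565       # 113/200
final_rate_control = 0.41        # control group's final completion rate

ratio = final_rate_offered / final_rate_control       # ≈ 1.38
expected_anyway = round(completed_offered / ratio)    # ≈ 82 would have finished
true_gain = completed_offered - expected_anyway       # ≈ 31 gained interviews

total_paid = 50 * 50                                  # $50 paid to 50 families
print(f"cost per gained interview: ${total_paid / true_gain:.2f}")  # ≈ $80.65
```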
Discussion
Time-limited incentives offer a strategy to complete survey interviews with hard-to-reach respondents early in fieldwork and potentially to capitalize on fieldwork periods that might otherwise be relatively unproductive. In panel surveys and supplemental studies, early fieldwork completion also potentially carries the bonus of reducing perceived respondent burden and providing a longer rest period between contacts from field interviewers.
Using an experimental design, we offered a $50 time-limited incentive during the three-week U.S. winter holiday period to a random treatment group among primary caregivers of children in 594 hard-to-reach families that were eligible for the 2014 PSID Child Development Supplement. Our assessment of the incentive’s effectiveness is favorable. It produced a statistically significant increase in completed interviews during a traditionally quiet time of year and had no negative impact on the target group’s final response rate. It was effective in recruiting the most challenging of the hard-to-reach families and was cost-efficient. This study adds to the broader research evidence on the positive effects of incentives (e.g., Laurie and Lynn 2009, McGonagle, Schoeni and Couper 2013, Singer et al. 1999) by demonstrating the effectiveness of an offer of a conditional monetary amount within a discrete time window for hard-to-reach respondents.
We note four caveats to this positive assessment. First, CDS-2014 included several other components beyond the telephone-based survey interview that was tied to the time-limited incentive. These included linkage consent forms, saliva samples, and, for half of the sample, an in-home visit and time diaries. Families in the experimental group were no more likely to complete these components than were families in the control group; that is, the incentive had no spillover effect motivating families to complete the additional study components. Second, because a time-limited incentive is by its nature offered only for a short period during fieldwork, it is not scalable to large numbers of participants without also increasing the number of interviewers employed during the time-limited interval: given fixed personnel resources, interviewers are constrained in the number of cases they can realistically complete during the incentive period. Hence, time-limited incentives may be most effective with specific subsamples that would otherwise require disproportionate effort throughout the remainder of the fieldwork period, rather than as a general strategy. Third, in the context of a panel study, the offer of a bonus incentive in one wave may condition panel respondents to expect similar incentives in the future. This is of particular concern in PSID, where some primary caregivers interviewed in CDS-2014 were also expected to be the household respondent in the next wave of the core panel study. To offset this concern, we emphasized the distinctive nature of CDS-2014 to respondents, and we will assess whether the incentive had any apparent impact on participation at the subsequent wave of the core study. Finally, as with any selectively offered incentive, time-limited incentives introduce differential compensation among respondents who complete the same study components, an issue that may affect the perceived fairness of the incentive structure (in particular, by excluding the most cooperative respondents from the reward) and raise the likelihood of objections from an Institutional Review Board.
Despite these caveats, we conclude that the previously-documented gains from time-limited incentives extend to settings where the participation of multiple respondents in a hard-to-reach household must be coordinated over a short time horizon, and that these incentives may provide a cost-efficient boost to productivity during fieldwork periods that might otherwise see a short-term decline in participation.
Acknowledgments
Funding acknowledgement: This work was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (R01HD052646).
Contributor Information
Paula Fomby, Institute for Social Research, Survey Research Center and Population Studies Center, University of Michigan.
Narayan Sastry, Institute for Social Research, Survey Research Center and Population Studies Center, University of Michigan.
Katherine A. McGonagle, Institute for Social Research, Survey Research Center, University of Michigan.
References
- Arzheimer Kai, Klein Markus. The Effect of Material Incentives on Return Rate, Panel Attrition and Sample Composition of a Mail Panel Survey. International Journal of Public Opinion Research. 1999;11(4):368–377.
- Brown Matt, Calderwood Lisa. Can Encouraging Respondents to Contact Interviewers to Make Appointments Reduce Fieldwork Effort? Evidence from a Randomized Experiment in the UK. Journal of Survey Statistics and Methodology. 2014;2(4):484–497.
- Coopersmith Jared, Vogel Lisa Klein, Bruursema Tim, Feeney Kathleen. Effects of Incentive Amount and Type on Web Survey Response Rates. Paper presented at the Annual Conference of the American Association for Public Opinion Research; Anaheim, CA. 2014.
- Deci Edward L, Koestner Richard, Ryan Richard M. A Meta-Analytic Review of Experiments Examining the Effects of Extrinsic Rewards on Intrinsic Motivation. Psychological Bulletin. 1999;125(6):627–668. doi:10.1037/0033-2909.125.6.627.
- Laurie Heather, Lynn Peter. The Use of Respondent Incentives on Longitudinal Surveys. In: Lynn P, editor. Methodology of Longitudinal Surveys. New York: Wiley; 2009.
- LeClere Felicia, Plummer Sheldonn, Vanicek Jennifer, Amaya Ashley, Carris Kari. Household Early Bird Incentives: Leveraging Family Influence to Improve Household Response Rates. American Statistical Association Joint Statistical Meetings, Section on Survey Research. 2012:4156–4165.
- Legleye Stéphane, Charrance Géraldine, Razafindratsima Nicolas, Bohet Aline, Bajos Nathalie, Moreau Caroline. Improving Survey Participation: Cost Effectiveness of Callbacks to Refusals and Increased Call Attempts in a National Telephone Survey in France. Public Opinion Quarterly. 2013;77(3):666–695.
- Lynn Peter, Clarke Paul. Separating Refusal Bias and Non-Contact Bias: Evidence from UK National Surveys. Journal of the Royal Statistical Society: Series D (The Statistician). 2002;51(3):319–333.
- Martinez-Ebers Valerie. Using Monetary Incentives with Hard-to-Reach Populations in Panel Surveys. International Journal of Public Opinion Research. 1997;9(1):77–86.
- McGonagle Katherine A, Schoeni Robert F, Sastry Narayan, Freedman Vicki A. The Panel Study of Income Dynamics: Overview, Recent Innovations, and Potential for Life Course Research. Longitudinal and Life Course Studies. 2012;3(2):188. doi:10.14301/llcs.v3i2.188.
- McGonagle Katherine A, Schoeni Robert F, Couper Mick P. The Effects of a Between-Wave Incentive Experiment on Contact Update and Production Outcomes in a Panel Study. Journal of Official Statistics. 2013;29:261. doi:10.2478/jos-2013-0022.
- McGonagle Katherine A, Freedman Vicki A. The Effects of a Delayed Incentive on Response Rates, Response Mode, Data Quality and Sample Bias in a Nationally Representative Mixed Mode Study. Field Methods. Forthcoming. doi:10.1177/1525822X16671701.
- Olsen Randall. The Problem of Respondent Attrition: Survey Methodology Is Key. Monthly Labor Review. 2005 February;128:63–70.
- Panel Study of Income Dynamics. PSID Main Interview User Manual: Release 2015. Ann Arbor, MI: Institute for Social Research, University of Michigan; 2015.
- Pedersen Mogens Jin, Videbæk Nielsen Christian. Improving Survey Response Rates in Online Panels: Effects of Low-Cost Incentives and Cost-Free Text Appeal Interventions. Social Science Computer Review. 2016;34(2):229–243.
- Singer Eleanor, Van Hoewyk John, Gebler Nancy, Raghunathan Trivellore, McGonagle Katherine A. The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys. Journal of Official Statistics. 1999;15:217–230.
- Singer Eleanor, Ye Cong. The Use and Effects of Incentives in Surveys. The ANNALS of the American Academy of Political and Social Science. 2013;645(1):112–141.
- Westrick Salisa C, Mount Jeanine K. Effects of Repeated Callbacks on Response Rate and Nonresponse Bias: Results from a 17-State Pharmacy Survey. Research in Social and Administrative Pharmacy. 2008;4(1):46–58. doi:10.1016/j.sapharm.2007.02.002.


