Author manuscript; available in PMC: 2013 Sep 6.
Published in final edited form as: Eval Health Prof. 2011 Mar 16;34(4):464–486. doi: 10.1177/0163278710397791

A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates Among Nursing Home Providers

Melissa Clark 1, Michelle Rogers 1, Andrew Foster 2, Faye Dvorchak 1, Frances Saadeh 1, Jessica Weaver 1, Vincent Mor 1
PMCID: PMC3764450  NIHMSID: NIHMS506011  PMID: 21411474

Abstract

An experiment was conducted to maximize participation of both the Director of Nursing (DoN) and the Administrator (ADMIN) in long-term care facilities. Providers in each of the 224 randomly selected facilities were randomly assigned to 1 of 16 conditions based on the combination of data collection mode (web vs. mail), questionnaire length (short vs. long), and incentive structure. Incentive structures varied the amount paid to an individual for completing the questionnaire and an additional amount paid to each individual if both members of the pair completed: (a) $30 individual/$5 pair/$35 total; (b) $10 individual/$25 pair/$35 total; (c) $30 individual/$20 pair/$50 total; and (d) $10 individual/$40 pair/$50 total. Overall, 47.4% of eligible respondents participated; both respondents participated in 29.3% of facilities. In multivariable analyses, there were no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure. Making incentives contingent on participation by both providers at a facility was not an effective strategy for significantly increasing response rates.

Keywords: surveys, response rate, incentives, nursing home

Introduction

Survey research is essential to the assessment of clinical practice and the implementation of evidence-based care in health services research. Few medical and public health journals are willing to publish surveys of health professionals with response rates below 50% (James, Ziegenfuss, Tilburt, Harris, & Beebe, 2011), despite emerging evidence that response rates are poorly correlated with response bias (Groves & Peytcheva, 2008). A recent survey of editors of scientific journals found that approximately 90% believe a study’s response rate is somewhat or very important in publication decisions (Carley-Baxter et al., 2009). Unfortunately, response rates among health professionals have been steadily declining (Cull, O’Connor, Sharp, & Tang, 2005; Cummings, Savitz, & Konrad, 2001; Hill, Fahrney, Wheeless, & Carson, 2006). Optimizing response rates without adversely influencing response bias is therefore a growing concern among health services researchers.

Compounding the problem of declining response rates, quality improvement and cost containment initiatives in health care increasingly involve interdisciplinary teams of providers. As these initiatives expand, health services researchers will need to collect data from multiple members of a health care team, so studies will require participation from more than one individual in an organization or health care facility to address their aims. This increases the likelihood of low overall response rates if not all of the eligible individuals in an organization participate.

To date, relatively few studies in general medical settings have explicitly attempted to enroll more than one respondent from an organization. In one of the few recent examples, Ward, Teno, Curtis, Rubenfeld, and Levy (2008) conducted a national survey of nurse managers and physician directors about their perceptions of cost constraints, resource limitations, and rationing in U.S. intensive care units. They found that the physician or nurse responded in 63% of the facilities, but both providers responded in only 17% of the facilities.

Unlike other medical settings, long-term care is an area of health services research in which the perspectives of multiple providers have been more likely to be included. Providers in these studies have most often included physicians (e.g., Medical Directors), nurses (e.g., Directors of Nursing [DoNs]), or administrators with response rates for these providers varying greatly depending on the sampling frame, study size, and mode of data collection. For example, in recent studies of samples drawn from professional membership lists, Colon-Emeric and colleagues (2005) reported response rates of 40% for Medical Directors and 48% for DoNs for a mailed national survey about barriers to providing osteoporosis care. On the other hand, Shirts and colleagues (2009) reported response rates of 16% for physicians and 11% for nurse practitioners for an Internet survey about laboratory testing. Response rates varied similarly when participants were recruited from specific nursing homes (Boyce, Bob, & Levenson, 2003; Jogerst, Daly, Dawson, Peek-Asa, & Schmuch, 2006; Resnick, Manard, Stone, & Castle, 2009; Young, Inamdar, Barhydt, Colello, & Hannan, 2009).

While several studies have included multiple long-term care providers, only a limited number of investigators have reported the combined response rate for these providers. Responses to a mailed survey from the Administrator or DoN were received for 90% of 409 facilities in one state (Daly & Jogerst, 2005; Jogerst et al., 2006). In a study of four facilities in which physicians, pharmacists, nurse practitioners/physician assistants, and nurses were asked to complete a mailed questionnaire, the facility rates ranged from 56% to 93% (Handler et al., 2007). However, in a study of 300 facilities in New York State, both the Medical Director and the DoN responded in only 17% of the facilities (Young et al., 2009).

DoNs and Administrators (ADMINs) play especially important roles in the leadership of long-term care facilities. Unlike physicians, these two providers are continually onsite, yet they differ in their experiences and knowledge. Perspectives from both individuals are often important in understanding the clinical and administrative issues faced by long-term care institutions, and low response rates for both respondents prevent investigators from assessing these differing perspectives. Therefore, we conducted an experiment to assess ways to maximize the likelihood of obtaining completed questionnaires from both the DoN and the ADMIN using a nationally representative sample of U.S. nursing homes.

We focused on three design features that may affect response rates among nursing home providers: mode of data collection, questionnaire length, and incentive structure. In a systematic review of studies to improve physician response rates, VanGeest, Johnson, and Welch (2007) concluded that mail and telephone strategies were more effective than fax or web approaches. With some exceptions (e.g., Guise, Chambers, Valimaki, & Makkonen, 2010), studies conducted on the web have had greater nonresponse bias than surveys conducted in other modes (Cummings et al., 2001; Kellerman & Herold, 2001; Leece et al., 2004; Shih & Fan, 2008). In addition, investigators have cautioned against widespread implementation of Internet surveys until concerns about the ability to obtain representative samples are addressed (Ahern, 2005; Braithwaite, Emery, De Lusignan, & Sutton, 2003). However, use of the Internet as a mode of data collection has grown in recent years, and Internet data collection offers a number of potential benefits, including lower cost per respondent, quicker turnaround time, and low respondent burden (Couper & Miller, 2008; Wyatt, 2000). We were specifically interested in comparing mail versus Internet data collection among nursing home providers because previous studies suggest that nursing homes have limited information technology infrastructure (Poon et al., 2006) and because nursing home providers may lack Internet access during normal business hours.

Most studies of health professionals have found that longer surveys yield lower levels of participation, particularly among physicians (Asch, Jedrziewski, & Christakis, 1997; Thran & Hixon, 2000). In a comparison of surveys of varying length, Jepson, Asch, Hershey, and Ubel (2005) found a threshold of 1,000 words at which response rates dropped off. Therefore, we were interested in response rate differences for a survey intended to be completed within 5–10 min versus one intended to be completed within 20–30 min.

Findings about incentives to health professionals for participation in survey research have been mixed. For example, a review by VanGeest and colleagues (2007) concluded that modest incentives are associated with improved physician response, although increments over $1 yield little further improvement. Although studies of nurses have been more limited, those that are available have shown that any monetary incentive significantly improves response rates over no incentive (Camunas, Alward, & Vecchione, 1990; Odon & Price, 1999; Ulrich et al., 2005). However, Flanigan, McFarlane, and Cook (2008) have cautioned that an overly large incentive may be perceived as a payment and thereby deter many physicians. Unfortunately, little is known about ideal monetary incentive levels for health care professionals such as nurses, whose incomes are not commensurate with physicians’ and who may therefore value larger monetary incentives more than physicians do.

It is important to consider how incentive payments may be used to increase the likelihood of more than one individual from a facility participating in a study, particularly for health professionals other than physicians. A simple application of economic theory suggests that joint compliance of two individuals in a health care organization can be increased by offering payments that are conditional not only on an individual’s own participation but also on the participation of both individuals. In such a case, an individual’s participation will depend both on the expectation that the other individual will participate and on the extent to which he or she values payments to the other individual. If the first individual thinks the other person is likely to participate and is altruistic toward that person, the first will have a strong incentive to participate. If the first individual thinks the other person is unlikely to participate, there will be no additional incentive for the first to participate. The situation is further complicated if one individual is subordinate to the other, because supervisory and administrative relationships may themselves create an incentive to participate. Of course, these effects depend on the premise that incentives do not create perverse behavioral effects and are large enough to actually influence behavior. While economic experiments frequently provide payments that are contingent on the behavior of others (as in the classic prisoner’s dilemma), to our knowledge, no studies have used such jointly contingent payments as an incentive to increase survey response rates.

In order to compare responses to individual and jointly contingent incentives, we constructed four incentive structures that varied the amount an individual was compensated if he or she completed the survey and an additional amount per individual if his or her colleague also completed the survey. We first divided the total possible compensation into two groups: one group could receive up to $35 while the other could receive up to $50. Within these two groups, we made further divisions based on the amount an individual could receive for completing the questionnaire ($10 vs. $30) as well as the amount the individual would receive if his or her colleague returned the questionnaire ($5, $25, and $40). If an individual’s behavior is influenced both by the amount he or she might earn and by the effect of the behavior on his or her colleague, we anticipated that response rates would be highest in the two conditions in which each individual could earn up to $50, with the highest rate in the condition in which both the individual and pair compensation were maximized ($30 individual/$20 pair/$50 total).
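As a concrete illustration, the four incentive structures described above can be expressed as a simple payoff function. This is a sketch for exposition only; the structure labels and dollar amounts come from the text, but the code itself is hypothetical and not part of the study:

```python
# The four incentive structures: (payment if the individual completes,
# additional payment to that individual if the colleague also completes).
INCENTIVE_STRUCTURES = {
    "a": (30, 5),   # $30 individual / $5 pair  / $35 total
    "b": (10, 25),  # $10 individual / $25 pair / $35 total
    "c": (30, 20),  # $30 individual / $20 pair / $50 total
    "d": (10, 40),  # $10 individual / $40 pair / $50 total
}

def compensation(structure, self_completed, colleague_completed):
    """Total payment to one individual: nothing unless he or she completes,
    plus the pair bonus only if the colleague completes as well."""
    individual, pair_bonus = INCENTIVE_STRUCTURES[structure]
    if not self_completed:
        return 0
    return individual + (pair_bonus if colleague_completed else 0)
```

Under structure (d), for example, an individual who completes alone earns only $10 but earns $50 if the colleague also completes, so most of the payoff depends on the colleague’s behavior; under structure (a), nearly all of the payoff depends on the individual’s own behavior.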

Method

Sample

The study was reviewed and determined to be exempt by the Brown University Institutional Review Board. The sampling frame was the 2008 Online Survey Certification and Reporting (OSCAR) system. The OSCAR contains nursing home facility-level aggregate information collected by the Centers for Medicare and Medicaid Services (CMS) as part of the annual inspection and certification process for nursing homes, including measures of performance for all Medicare/Medicaid certified facilities. Thus, except for a few nonparticipating, exclusively private pay facilities, all licensed nursing homes in the United States are included in the OSCAR database. Due to substantive research questions that were to be addressed as part of a larger project, eligible facilities for this study were all U.S. nursing homes with at least 30 beds in the 48 contiguous states (i.e., excluding Alaska, Hawaii, and Washington, DC; n = 15,059).

Also to address the substantive areas of interest in the larger project, it was important to include sufficient numbers of facilities in each of the following categories: (a) states with more versus fewer nursing homes (above vs. below the median number of homes); (b) type of ownership (free-standing, for-profit vs. free-standing, nonprofit vs. hospital-based); (c) facility size (small [30–120 beds] vs. large [>120 beds]); and (d) percentage of non-White residents (≤10% in facility vs. >10%). Therefore, the eligible facilities in the OSCAR were allocated to 19 strata based on these characteristics. Although there was a potential for 24 strata, all hospital-based facilities were combined regardless of state size and ownership type, based on the aims of the larger project. A total of 285 facilities were randomly selected from the 19 strata (n = 15 facilities per stratum). Next, each selected facility was contacted by telephone to obtain the names and contact information of the ADMIN and DoN. By design, 224 (8–10 per stratum) of the 285 facilities were randomly selected for this study. The remaining 61 facilities were used as part of another study to assess the resources required to obtain accurate and complete contact information for nursing home providers (data available upon request).

Our intention was to test the effects of three study design features independently rather than in combination: data collection mode (two groups), questionnaire length (two groups), and incentive structure (four groups). Because we also wanted equal allocation within each condition, the 224 facilities were randomly allocated to the 16 possible combinations of these features (see Table 1). The ADMIN and DoN at each facility were assigned to the same condition. Data collection mode was web versus mail. In the web condition, participants were mailed a cover letter explaining the study that included a web link and a unique username and password for completing an online questionnaire. In the mail condition, participants were mailed a similar cover letter along with a paper copy of the questionnaire and a self-addressed return envelope. Questionnaire length was either a 5- to 10-min version (short) that included only a minimal set of critical items addressing the substantive research aims or a 20- to 30-min version (long) that included all items of interest. The four incentive structures were based on the amount paid if the individual completed the questionnaire plus an additional amount per individual if the pair completed: (a) $30 individual/$5 pair/$35 total; (b) $10 individual/$25 pair/$35 total; (c) $30 individual/$20 pair/$50 total; and (d) $10 individual/$40 pair/$50 total. Regardless of condition, the total compensation amount per individual was computed and mailed at the end of the study. The questionnaires for the ADMIN and DoN were distinct, each including the questions most relevant to that provider’s responsibilities and expertise.
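One way such a facility-level randomization could be implemented is sketched below. This is hypothetical code, not the study’s actual procedure: each facility receives one of the 16 conditions (14 facilities per condition), and both providers at a facility inherit the facility’s assignment.

```python
import random

NUM_CONDITIONS = 16
PER_CONDITION = 14  # 224 facilities / 16 conditions

def allocate_facilities(facility_ids, seed=42):
    """Randomly assign facilities to conditions 1..16, 14 per condition.
    Both the ADMIN and DoN at a facility share the facility's condition."""
    ids = list(facility_ids)
    random.Random(seed).shuffle(ids)
    return {fid: (i // PER_CONDITION) + 1 for i, fid in enumerate(ids)}

assignments = allocate_facilities(range(224))
```

Shuffling once and slicing into equal blocks guarantees exactly 14 facilities per condition, which matches the equal-allocation goal stated above.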

Table 1.

Facility Allocation by Study Design Conditions

| Condition | Data Collection Mode | Questionnaire Length | Incentive if Individual Responded | Incentive if Colleague Responded | Total if Both Responded | Number of Facilities (Number Eligible) |
|---|---|---|---|---|---|---|
| 1 | Mailed | Short (5–10 min) | $30 | $5 | $35 | 14 (13) |
| 2 | Mailed | Short (5–10 min) | $10 | $25 | $35 | 14 (14) |
| 3 | Mailed | Short (5–10 min) | $30 | $20 | $50 | 14 (14) |
| 4 | Mailed | Short (5–10 min) | $10 | $40 | $50 | 14 (12) |
| 5 | Mailed | Long (20–30 min) | $30 | $5 | $35 | 14 (11) |
| 6 | Mailed | Long (20–30 min) | $10 | $25 | $35 | 14 (11) |
| 7 | Mailed | Long (20–30 min) | $30 | $20 | $50 | 14 (13) |
| 8 | Mailed | Long (20–30 min) | $10 | $40 | $50 | 14 (13) |
| 9 | Web-based | Short (5–10 min) | $30 | $5 | $35 | 14 (13) |
| 10 | Web-based | Short (5–10 min) | $10 | $25 | $35 | 14 (14) |
| 11 | Web-based | Short (5–10 min) | $30 | $20 | $50 | 14 (13) |
| 12 | Web-based | Short (5–10 min) | $10 | $40 | $50 | 14 (13) |
| 13 | Web-based | Long (20–30 min) | $30 | $5 | $35 | 14 (12) |
| 14 | Web-based | Long (20–30 min) | $10 | $25 | $35 | 14 (12) |
| 15 | Web-based | Long (20–30 min) | $30 | $20 | $50 | 14 (14) |
| 16 | Web-based | Long (20–30 min) | $10 | $40 | $50 | 14 (13) |

The study was conducted over 8 weeks. After 2 weeks, telephone/e-mail/fax contacts were initiated with nonrespondents. The types and schedule of contacts varied throughout the study period and were determined by information available about the participant and/or facility. For example, participants at some facilities provided e-mail addresses while others did not have, or were unwilling to provide, an e-mail address. A mailed reminder letter was sent in Week 6 to nonrespondents, regardless of type of previous contact. If information was mailed, e-mailed, or faxed to a participant, we waited at least 14 days between contacts. If we spoke with, or left voicemail for, the participant or the participant’s assistant, we waited at least 6 days between contact attempts.

Analysis

Our analyses included three outcomes of interest: response rates, number of participant contact attempts, and item nonresponse. Response rates were defined as the number of returned questionnaires divided by the number of eligible respondents (The American Association for Public Opinion Research, 2008). In addition to computing response rates for the DoN and ADMIN separately, we computed two other facility-level response rate measures. We calculated the participation rate for at least one of the two eligible respondents at a facility (individual response rate) as well as the participation rate for both eligible respondents at a facility (facility response rate). For the individual response rate, a facility was considered complete if at least one respondent at a facility returned the questionnaire. For the facility response rate, a facility was considered complete if we received returned questionnaires from both the ADMIN and DoN at the facility.
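These four response rate measures can be computed directly from per-facility completion indicators. The following is a minimal sketch using a hypothetical data layout, not the study’s analysis code:

```python
def response_rates(facilities):
    """facilities: list of (don_returned, admin_returned) booleans, one
    pair per facility where both providers were eligible.  Returns the
    DoN rate, ADMIN rate, individual rate (at least one responded), and
    facility rate (both responded)."""
    n = len(facilities)
    don      = sum(d for d, a in facilities) / n
    admin    = sum(a for d, a in facilities) / n
    at_least = sum(d or a for d, a in facilities) / n   # individual rate
    both     = sum(d and a for d, a in facilities) / n  # facility rate
    return don, admin, at_least, both
```

For example, four facilities with patterns (both, DoN only, ADMIN only, neither) yield rates of 0.50, 0.50, 0.75, and 0.25, respectively; by construction the facility rate can never exceed the individual rate.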

We first used Pearson chi-square tests to compare response rates by each study design feature (i.e., mode of data collection, length of questionnaire, and incentive structure). Second, we computed multivariable logistic regression models to assess the relationships between the study design features and the likelihood of responding to the questionnaire, controlling for type of provider, type of facility, size of facility, and percentage of non-White residents in a facility. Third, to further explore the potential effect of incentive structure on response rates, we calculated the probability of an individual completing the survey conditional on completion by his or her colleague and compared these probabilities across the different incentive structures.
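The conditional probabilities described above can be estimated as simple conditional proportions. Here is a sketch using the same hypothetical per-facility layout as above, conditioning the DoN’s completion on the ADMIN’s:

```python
def conditional_completion(facilities):
    """facilities: list of (don_returned, admin_returned) booleans.
    Returns P(DoN completes | ADMIN completed) and
            P(DoN completes | ADMIN did not complete)."""
    given_yes = [d for d, a in facilities if a]
    given_no  = [d for d, a in facilities if not a]
    p_given_yes = sum(given_yes) / len(given_yes)
    p_given_no  = sum(given_no) / len(given_no)
    return p_given_yes, p_given_no
```

Computing these proportions separately within each incentive structure, and the analogous pair for the ADMIN conditional on the DoN, reproduces the layout of Table 4.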

Fourth, we computed the mean number of contacts per study design feature. Fifth, we computed multivariable linear regression models to assess the relationships between the study design features and the number of contact attempts per participant, controlling for type of provider, type of facility, size of facility, and percentage of non-White residents in a facility. Finally, we used Pearson chi-square tests to compare the percentage of participants who refused to answer or left two or more questions blank (i.e., item nonresponse) by each study design feature.

Results

Overall, 426 of 448 nursing home providers were eligible for the study. The 22 individuals who were not eligible were no longer employed at the facility (n = 14; 63.6%), were employed at a facility that did not provide long-term care (n = 6; 27.3%), were out of the office during the study period (n = 1; 4.5%), or were serving in both the DoN and the ADMIN positions (n = 1; 4.5%). This resulted in 205 facilities for which both the DoN and the ADMIN were eligible. By the end of the study, a total of 202 respondents completed the questionnaire. A total of 51 individuals explicitly refused participation. Reasons for refusal included too busy (n = 24; 47.1%), company policy did not allow participation in surveys (n = 8; 15.7%), not interested (n = 6; 11.8%), facility was in receivership or being sold (n = 4; 7.8%), new in position (n = 1; 2.0%), and unknown (n = 8; 15.7%). The remaining individuals were nonrespondents at the end of the study period.

Response Rates

As shown in Table 2, 47.4% of eligible respondents completed the questionnaire (45.3% DoN; 49.5% ADMIN). At least one individual completed the questionnaire in 67.3% of facilities and both respondents completed the questionnaire in 29.3% of facilities. In univariable analyses, neither the individual response rate (i.e., at least one person in a facility) nor the facility response rate (i.e., both ADMIN and DoN completed) differed by mode of data collection, questionnaire length, or incentive structure.

Table 2.

Response Rates by Design Feature

| Design Feature | Director of Nursing (DoN) | Administrator (ADMIN) | Facility: At Least One Respondent^a | Facility: Both DoN and ADMIN^a |
|---|---|---|---|---|
| Overall response rate | 45.3% | 49.5% | 67.3% | 29.3% |
| Data collection mode | | | | |
| Mailed questionnaire | 51.9% | 47.1% | 69.3% | 31.7% |
| Web-based questionnaire | 43.5% | 47.2% | 65.4% | 26.9% |
| Length of survey | | | | |
| Short (5–10 min) | 43.6% | 50.5% | 66.0% | 29.3% |
| Long (20–30 min) | 47.1% | 48.6% | 68.7% | 29.3% |
| Incentive^b | | | | |
| $30/$5/$35 | 43.4% | 54.9% | 67.4% | 30.6% |
| $10/$25/$35 | 40.4% | 53.7% | 66.7% | 29.4% |
| $30/$20/$50 | 47.3% | 54.6% | 70.4% | 33.3% |
| $10/$40/$50 | 50.0% | 35.2% | 64.7% | 23.5% |

^a Includes facilities in which both DoN and ADMIN were eligible.

^b Amount per individual/amount if both complete/total amount per individual.

Table 3 provides the results of the multivariable relationships between the study design features and the likelihood of responding to the survey, controlling for facility characteristics. None of the design features were significantly associated with responding to the survey. Only facility characteristics were associated with the likelihood of both individuals participating in the study. Both individuals were more likely to participate in facilities with fewer minority residents and less likely to participate if they worked in free-standing, for-profit facilities versus hospital-based facilities. However, none of the facility-level characteristics were associated with at least one individual from a facility responding to the survey.

Table 3.

Multivariable Relationships Between Study Design Features and Likelihood of Director of Nursing (DoN) and/or Administrator (ADMIN) Responding to Survey Controlling for Facility Characteristics

| Characteristic | At Least One Respondent From Same Facility, AOR [95% CI] | Both DoN and ADMIN From Same Facility, AOR [95% CI] |
|---|---|---|
| Design features | | |
| Data collection mode | | |
| Mailed questionnaire | 1.17 [0.65, 2.12] | 1.20 [0.64, 2.23] |
| Web-based questionnaire | Reference | Reference |
| Length of survey | | |
| Short (5–10 min) | 0.90 [0.49, 1.62] | 0.97 [0.52, 1.80] |
| Long (20–30 min) | Reference | Reference |
| Incentive^a | | |
| $30/$5/$35 | 1.09 [0.47, 2.53] | 1.56 [0.62, 3.89] |
| $10/$25/$35 | 1.11 [0.49, 2.55] | 1.45 [0.58, 3.59] |
| $30/$20/$50 | 1.32 [0.58, 3.01] | 1.72 [0.71, 4.14] |
| $10/$40/$50 | Reference | Reference |
| Facility characteristics | | |
| Type of facility | | |
| Free-standing, for-profit | 0.92 [0.39, 2.18] | 0.37 [0.15, 0.90] |
| Free-standing, nonprofit | 1.29 [0.53, 3.12] | 0.46 [0.19, 1.10] |
| Hospital based | Reference | Reference |
| Size of facility | | |
| Small | 1.02 [0.56, 1.87] | 0.71 [0.38, 1.34] |
| Large | Reference | Reference |
| Percentage of non-White residents in facility | | |
| ≤10% | 1.56 [0.87, 2.84] | 1.91 [1.02, 3.67] |
| >10% | Reference | Reference |

Note: AOR = adjusted odds ratio.

^a Amount per individual/amount if both complete/total amount per individual.

To further explore the potential effect of incentive structure on response rates, we calculated the probability of an individual completing the survey conditional on completion by his or her colleague (see Table 4). Having a colleague participate was associated with an increase of roughly 20 percentage points in the probability of an individual participating, regardless of incentive structure. However, there were some differences across incentive structures in the probability of responding by provider type. Response rates were highest for DoNs and lowest for ADMINs in the condition in which each individual could earn an additional $40 if both responded.

Table 4.

Probability of Responding to Survey Conditional on Colleague Responding, by Incentive Structure^a

| Incentive Structure^b | ADMIN Given DoN Did Not Respond | ADMIN Given DoN Responded | DoN Given ADMIN Did Not Respond | DoN Given ADMIN Responded |
|---|---|---|---|---|
| Overall | 39.6 | 63.8 | 33.7 | 57.7 |
| $30/$5/$35 | 42.9 | 71.4 | 27.3 | 55.6 |
| $10/$25/$35 | 43.3 | 71.4 | 26.1 | 53.6 |
| $30/$20/$50 | 42.9 | 69.2 | 33.3 | 60.0 |
| $10/$40/$50 | 28.0 | 46.2 | 43.8 | 63.2 |
| Incentive to individual | | | | |
| $30 | 42.9 | 70.2 | 30.4 | 57.9 |
| $10 | 36.4 | 57.5 | 36.4 | 57.5 |
| Incentive if both completed | | | | |
| $5 | 42.9 | 71.4 | 27.3 | 55.6 |
| $20–25 | 43.1 | 70.2 | 29.8 | 56.9 |
| $40 | 28.0 | 46.2 | 43.8 | 63.2 |
| Total incentive | | | | |
| $35 | 43.1 | 71.4 | 26.7 | 54.6 |
| $50 | 35.9 | 57.7 | 39.3 | 61.2 |

^a Among facilities for which both individuals were eligible.

^b Amount per individual/amount if both complete/total amount per individual.

Contact Attempts

The mean number of contact attempts per participant after the initial contact letter was 5.7 (SD = 4.0). The number of contact attempts was slightly higher for web (M = 6.2, SD = 3.8) versus mail (M = 5.2, SD = 4.2) administration; for the long (M = 5.9, SD = 4.0) versus short (M = 5.6, SD = 4.0) version of the questionnaire; and for the $10/$40/$50 incentive structure (M = 6.4, SD = 3.8) versus the $10/$25/$35 (M = 5.7, SD = 3.7), $30/$20/$50 (M = 5.4, SD = 4.4), and $30/$5/$35 (M = 5.3, SD = 3.8) conditions. However, none of these differences was statistically significant at p < .05 in univariable analyses.

Table 5 provides results of the multivariable regression analyses for the relationship between the study design features and number of contact attempts per nonrespondent. Individuals assigned to the mail mode required fewer follow-up contacts than those assigned to the web while those assigned the $10/$40/$50 incentive structure required more contacts than those in the other incentive structures. None of the respondent or facility characteristics were associated with number of follow-up contacts.

Table 5.

Multivariable Relationships Between Study Design Features and Number of Contact Attempts to Nonrespondents Controlling for Respondent and Facility Characteristics

| Characteristic | Parameter Estimate | Standard Error | t Value | p Value |
|---|---|---|---|---|
| Intercept | 6.98 | 0.71 | 9.77 | <.0001 |
| Design features | | | | |
| Data collection mode | | | | |
| Mailed questionnaire | −0.96 | 0.39 | −2.47 | .01 |
| Web-based questionnaire | Reference | | | |
| Length of survey | | | | |
| Short (5–10 min) | −0.28 | 0.39 | −0.72 | .47 |
| Long (20–30 min) | Reference | | | |
| Incentive^a | | | | |
| $30/$5/$35 | −1.14 | 0.55 | −2.06 | .04 |
| $10/$25/$35 | −0.72 | 0.55 | −1.31 | .19 |
| $30/$20/$50 | −1.07 | 0.54 | −1.98 | .04 |
| $10/$40/$50 | Reference | | | |
| Respondent characteristics | | | | |
| Type of respondent | | | | |
| Administrator | −0.21 | 0.39 | −0.55 | .58 |
| Director of nursing | Reference | | | |
| Facility characteristics | | | | |
| Type of facility | | | | |
| Free-standing, for-profit | −0.28 | 0.58 | −0.49 | .63 |
| Free-standing, nonprofit | −0.07 | 0.58 | −0.12 | .91 |
| Hospital based | Reference | | | |
| Size of facility | | | | |
| Small | −0.12 | 0.39 | −0.31 | .76 |
| Large | Reference | | | |
| Percentage of non-White residents in facility | | | | |
| ≤10% | −0.55 | 0.39 | −1.44 | .15 |
| >10% | Reference | | | |

^a Amount per individual/amount if both complete/total amount per individual.

Data Quality

Only two surveys were returned with fewer than 50% of the items completed. Among the remaining completed questionnaires, we compared the percentage of participants who refused to answer or left two or more items blank by each study design feature. Overall, 19.3% of participants did not answer at least two questions. Nonresponse to two or more items was slightly higher for those in the mailed versus web condition (23.8% vs. 14.3%, p = .09). As expected, item nonresponse of ≥2 was greater for the long versus short version of the questionnaire (30.0% vs. 8.8%, p = .0001) and was also higher for the $10/$40/$50 condition (31.1%) compared with the other conditions ($30/$5/$35 = 11.8%; $10/$25/$35 = 16.0%; $30/$20/$50 = 19.6%; p = .10).

Conclusions

We conducted an experiment to assess ways to maximize the likelihood of obtaining completed questionnaires from both the DoN and ADMIN at long-term care facilities using a national sample of nursing homes. We tested the effects of three study design features (i.e., mode of data collection, length of questionnaire, and incentive structure) on response rates. In addition, we assessed the number of follow-up contact attempts to nonrespondents and data quality in each of the design conditions.

Overall, 45% of the DoNs and 50% of ADMINs responded to the survey. At least one of the two individuals responded in 67% of the facilities. However, both providers responded to the survey in only 29% of the facilities. While this is higher than the facility response rate in some other studies of health care providers (Ward et al., 2008; Young et al., 2009), it is less than ideal for studies in which the perspectives of more than one individual are desired. Unfortunately, none of the design features that we tested were associated with higher response rates. Nursing home providers in our sample were equally likely to complete the questionnaire over the Internet and by mail. To achieve comparable response rates, participants assigned to the web condition required more follow-up contacts on average than those in the mail condition, suggesting that mailed questionnaires may be preferable when surveying nursing home providers. However, investigators should be cautious because we found more item nonresponse to the mailed versus web questionnaires. In addition, although we did not do a formal cost analysis of the resources required for the two modes, our process data indicate that almost half of the participants requested the information be resent at least once and this was slightly more likely in the mailed condition. Therefore, investigators who intend to survey nursing home providers should consider using a mixed mode data collection to exploit the advantages while lessening the disadvantages of each mode.

Contrary to several previous studies of physicians (Asch et al., 1997; Thran & Hixon, 2000), response rates of DoNs and ADMINs did not differ by length of the questionnaire. Nursing home providers were equally likely to complete a survey intended to take 5–10 min and one intended to take 20–30 min. We do not have data to indicate why our findings differ from those of other studies showing that response rates among health professionals are higher for shorter questionnaires. One reason may be that nursing home providers receive fewer requests for survey participation than physicians do. Those who agree to participate may therefore be willing to spend more time completing a survey on a topic they consider interesting and valuable to the long-term care industry.

While we hypothesized that data collection mode and survey length might influence the likelihood of both individuals in a facility completing the survey, we predicted that the incentive structure would have the greatest effect on response rates. There is a relatively large literature in experimental economics “games” conducted in laboratory settings that has explored individual behavior in circumstances in which a person’s incentive was dependent on payments received by other individuals (i.e., partners) participating in the game. For the most part, this literature demonstrates that subjects do respond to variation in incentives both to the individual and to the partner in expected directions (Dal Bo, Foster, & Putterman, In Press; The Handbook of Experimental Economics, 1995). However, in our study, neither the individual nor the facility-level response rate was associated with incentive structure. Offering higher compensation contingent on completion by both members of the pair was not an effective strategy for increasing the likelihood of participation by both providers at a facility.

There are at least three potential explanations for why we did not observe incentive effects in this study comparable to those observed in laboratory settings. First, there may have been substantial variation across facilities in how individuals value payments to their colleagues (i.e., altruism) and on expectations about the colleague’s behaviors (i.e., beliefs). This variation is important because the relative attractiveness of different payment schemes we provided differs based on altruism and beliefs. In particular, in our study, the highest individual and joint participation should have been observed for the $30/$20/$50 condition if individuals were not altruistic and did not believe their colleagues would participate. Participation should have been highest and equal for the $10/$40/$50 and $30/$20/$50 conditions if individuals were not altruistic but did expect their colleagues to comply. Finally, the highest participation rate should have been in the $10/$40/$50 condition if individuals were sufficiently altruistic and expected their colleagues to participate. Unfortunately, there is no obvious way to control for participant beliefs and altruism in the context of survey data comparable to what is done in a laboratory setting. As a result, response rates may not have been systematically higher in any one incentive structure when we combined data from all facilities.
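The predictions above can be made concrete with a simple expected-payoff sketch. This is a purely illustrative model, not part of the study: assume a respondent weights a colleague's payment by an altruism parameter and believes the colleague will participate with some probability. The expected gain from participating is then the individual payment plus the pair bonus, received (and conferred on the colleague) only when the colleague also completes.

```python
def participation_gain(individual, pair_bonus, p_colleague, altruism):
    """Expected utility gain from participating, under a hypothetical model:
    the respondent earns `individual` dollars for completing, plus (with
    probability `p_colleague`) the `pair_bonus` for both self and colleague,
    with the colleague's bonus weighted by `altruism`."""
    return individual + p_colleague * pair_bonus * (1 + altruism)

# The four incentive structures as (individual payment, pair bonus):
conditions = {
    "$30/$5/$35":  (30, 5),
    "$10/$25/$35": (10, 25),
    "$30/$20/$50": (30, 20),
    "$10/$40/$50": (10, 40),
}

# Pessimistic, non-altruistic respondent (p = 0, altruism = 0):
# only the individual payment matters, favoring the $30 conditions.
# Optimistic, non-altruistic respondent (p = 1, altruism = 0):
# the two $50-total conditions tie at a gain of 50.
# Optimistic and altruistic respondent (p = 1, altruism > 0):
# $10/$40/$50 dominates, since participation confers $40 on the colleague.
for label, (ind, pair) in conditions.items():
    print(label, participation_gain(ind, pair, p_colleague=1, altruism=0.5))
```

Because the model's parameters (beliefs and altruism) were unobserved and likely varied across facilities, pooling all facilities could plausibly wash out the differences the model predicts, consistent with the null result reported above.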

Second, individuals in our study may not have been able to easily map the incentive structures into clear economic payments because of their relative complexity. Data from experimental economics indicate that in the face of complexity, individuals do not necessarily adopt strategies that maximize expected payments (Kagel & Roth, 1995). For example, an individual may find it difficult to form beliefs about a colleague's behavior given the incentive structure and thus may assume the worst (i.e., that she or he will not participate), thereby ignoring any incentives that depend on the other individual's actions. In our study, some individuals facing the complexity of the incentive structures may have assumed the best while others assumed the worst with regard to their colleagues' behaviors, which may partly explain the lack of consistent findings we observed. Unfortunately, it was not feasible in our study to address the problem of complexity by giving individuals an opportunity to practice the "game," as is done in laboratory settings.

Third, individuals in our study may not have viewed the study incentives as compensation for services in a traditional economic sense. For example, participants may have viewed completion of the survey as more or less part of their job responsibilities and thus something for which they were already being compensated. If this were the case, the key issues determining participation were other demands on their time and how the decision to complete the survey would be viewed by their supervisors and colleagues. Individuals may have been wary of being perceived as someone who pursues private gain during work hours and thus may have been ambivalent about higher levels of payment. For some individuals, the smaller incentive may have distinguished our request for participation from other unsolicited requests while at the same time reducing ambivalence about receiving additional compensation for job-related responsibilities. Given the sample size, participation among individuals who preferred small payments may have been balanced by participation among individuals who were motivated by larger compensation, resulting in no differences by incentive structure.

In further considering how individual perceptions may have been affected by the design of our study, it is important to note that an individual could infer the participation status of his or her colleague from the total compensation amount received. The fact that an individual's behavior would be known may have been an important motivator for some individuals, particularly in work environments with hierarchical role responsibilities. In many nursing home facilities, the DoN is supervised by the ADMIN. We found that DoNs were most likely to participate in the condition in which individuals would be compensated the most ($40) if both members of the pair participated; this was not the case for ADMINs. DoNs may therefore have felt more compelled to participate because their behavior would be known to, and benefit, their supervisor. We were unable to explore this further because we did not include a condition without a jointly contingent incentive (i.e., an additional amount paid only if both complete).

Interestingly, we found that the number of follow-up contacts to nonrespondents was significantly higher in the $10/$40/$50 condition than in the $30/$5/$35 and $30/$20/$50 conditions. Our process data, as well as information from interviewer debriefings, suggest that more time was also spent explaining the study and encouraging individuals in the $10/$40/$50 condition to participate relative to the other incentive structures. This suggests that promising larger incentives contingent on both members of the facility participating may have aroused suspicion and skepticism among some participants. This is similar to studies among physicians in which enclosing too large an incentive deterred some participants (Flanigan et al., 2008). Taken together, our data suggest that providing contingency-based incentives is not likely to maximize response rates among multiple members of a health care organization. Rather, incentives that are directed to the individual and viewed as a token of appreciation may have the best results. Incentives directed to the individual also allow for prepaid rather than promised incentives, which have been demonstrated to be particularly effective for increasing response rates among physicians (Berk, Edwards, & Gay, 1993; James et al., 2011; VanGeest, Wynia, Cummins, & Wilson, 2001). Including the total incentive with the initial study materials is not possible if the compensation amount is contingent on the behaviors of other study participants.

We found that the likelihood of both individuals responding from the same facility was associated with facility-level characteristics. In facilities with more minority residents, both respondents were less likely to participate in the survey. Similarly, compared to respondents working in hospital-based facilities, those working in free-standing, for-profit facilities were less likely to participate. However, there were no differences in the number of contact attempts to nonrespondents by these facility-level characteristics. We do not have data to determine why these facility-level characteristics were associated with response rates. However, facilities with more minority residents typically have fewer resources, less staffing, and more deficiencies related to care quality (Fennell, Feng, Clark, & Mor, 2010; Smith, Feng, Fennell, Zinn, & Mor, 2007). Respondents in these facilities may therefore be burdened with heavy clinical and administrative responsibilities, have little operating margin, and/or be more suspicious of studies asking about their facility. Similarly, studies have reported lower quality of care in for-profit nursing homes compared to nonprofit facilities (Comondore et al., 2009), and for-profit facilities have received more negative media attention related to care of residents (Harrington, 2001). As such, providers in these facilities may be more concerned about how their responses will be presented and interpreted, despite promises of confidentiality of results.

There are some important study limitations. First, we did not assess the simultaneous effect of the study design features; rather, we tested the effect of each design feature independently. As such, we are unable to comment on whether a specific combination of design features, such as a short mailed questionnaire with the $30/$20/$50 incentive structure, would significantly improve the likelihood of both the ADMIN and the DoN completing the questionnaires. Our choice of sample size balanced our interest in detecting design feature differences with a meaningful impact on practice against budget constraints. We were well aware that we would be unable to detect subtle differences within specific combinations of design features, and the ex post standard errors support this. Future studies utilizing a full factorial design with larger sample sizes should be considered to assess the simultaneous effect of these design features. Second, due to time constraints, our data collection period was only 8 weeks. It is unknown whether a longer data collection period with more follow-up contacts would have substantially increased response rates. Third, our sample was limited to nursing home providers; the extent to which the results are generalizable to other health care professionals is therefore unknown. Fourth, a full cost analysis was beyond the scope of this project but should be pursued in future work to determine the most effective and efficient approaches to increasing response rates among health care professionals, including nursing home providers.
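To illustrate why subtle design-feature differences would have been hard to detect at this sample size, consider a rough two-proportion power calculation. The proportions below (25% vs. 35% facility-level pair response) are hypothetical values chosen for illustration, not estimates from the study, and the calculation assumes an even split of the 224 facilities on a single design feature.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_power(p1, p2, n_per_arm, z_crit=1.96):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation), with z_crit = 1.96 for alpha = 0.05."""
    p_bar = (p1 + p2) / 2.0
    se_null = sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)
    se_alt = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return norm_cdf((abs(p1 - p2) - z_crit * se_null) / se_alt)

# 224 facilities split evenly on one design feature: 112 per arm.
# Power to detect a hypothetical 25% vs. 35% difference in the
# facility-level pair response rate falls well below the usual 80%.
print(two_proportion_power(0.25, 0.35, n_per_arm=112))
```

Within specific combinations of design features (16 cells of roughly 14 facilities each), the arms shrink further and power degrades accordingly, consistent with the limitation noted above.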

Our study design also did not allow us to explore potential organizational barriers that may have affected response rates. For example, in establishment surveys (e.g., surveys of businesses, organizations, and institutions), gatekeepers have increasingly challenged survey efforts by protecting employees, including health care professionals, from unwanted intrusions on their time (Fisher, Bosley, Goldenberg, Mockovak, & Tucker, 2003; Flanigan et al., 2008). In our study, 16% of respondents indicated that company policy did not allow participation in surveys. We do not know, however, how many additional potential respondents did not participate because receptionists did not give them the study materials and/or transmit messages about the study. We also do not know the extent to which the topic of the survey may have affected response rates. The surveys were about the effects of state policies on the ways that nursing homes address key clinical and organizational challenges. While we assumed that this topic would be highly salient to nursing home providers, they may have considered it unimportant, burdensome, time-consuming, or intrusive.

Despite these limitations, our study is an important contribution to the literature about improving response rates for surveys of health professionals. Our findings suggest the need to continue exploring novel study design features to find optimal approaches to surveying health professionals. In particular, more studies are needed to find ways to maximize response rates among multiple members of organizations, particularly as health care becomes increasingly dependent on interdisciplinary teams of providers. In doing so, investigators should consider studies that test the extent to which contingency incentives are perceived to be coercive, particularly in settings in which there are hierarchical roles among potential participants (Singer & Bossarte, 2006).

Acknowledgements

A version of this paper was presented at the 2010 annual meeting of the American Association of Public Opinion Research, Chicago, Illinois.

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interests with respect to the authorship and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research and/or authorship of this article: The National Institute on Aging Grant (1P01AG0 27296-01A1).

References

  1. Ahern NR. Using the Internet to conduct research. Nurse Researcher. 2005;13:55–70. doi: 10.7748/nr2005.10.13.2.55.c5968. [DOI] [PubMed] [Google Scholar]
  2. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. Journal of Clinical Epidemiology. 1997;50:1129–1136. doi: 10.1016/s0895-4356(97)00126-1. [DOI] [PubMed] [Google Scholar]
  3. Berk ML, Edwards WS, Gay NL. The use of a prepaid incentive to convert nonresponders on a survey of physicians. Evaluation & the Health Professions. 1993;16:239–245. doi: 10.1177/016327879301600208. [DOI] [PubMed] [Google Scholar]
  4. Boyce BF, Bob H, Levenson SA. The preliminary impact of Maryland’s medical director and attending physician regulations. Journal of the American Medical Directors Association. 2003;4:157–163. doi: 10.1097/01.JAM.0000066022.74526.CF. [DOI] [PubMed] [Google Scholar]
  5. Braithwaite D, Emery J, De Lusignan S, Sutton S. Using the Internet to conduct surveys of health professionals: a valid alternative? Family Practice. 2003;20:545–551. doi: 10.1093/fampra/cmg509. [DOI] [PubMed] [Google Scholar]
  6. Camunas C, Alward RR, Vecchione E. Survey response rates to a professional association mail questionnaire. Journal of the New York State Nurses Association. 1990;21:7–9. [PubMed] [Google Scholar]
  7. Carley-Baxter LR, Hill CA, Roe DJ, Twiddy SE, Baxter RK, Ruppenkamp J. Does response rate matter? Journal editors use of survey quality measures in manuscript publication decisions. Survey Practice. 2009 Nov. Available at: http://surveypractice.org/2009/10/17/editors-decisions. [Google Scholar]
  8. Colon-Emeric CS, Casebeer L, Saag K, Allison J, Levine D, Suh TT, Lyles KW. Barriers to providing osteoporosis care in skilled nursing facilities: perceptions of medical directors and directors of nursing. Journal of the American Medical Directors Association. 2005;6:S61–S66. doi: 10.1016/j.jamda.2005.03.024. [DOI] [PubMed] [Google Scholar]
  9. Comondore VR, Devereaux PJ, Zhou Q, Stone SB, Busse JW, Ravindran NC, Guyatt GH. Quality of care in for-profit and not-for-profit nursing homes: systematic review and meta-analysis. British Medical Journal. 2009;339:b2732. doi: 10.1136/bmj.b2732. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Couper MP, Miller PV. Web survey methods. Public Opinion Quarterly. 2008;72:831–835. [Google Scholar]
  11. Cull WL, O’Connor KG, Sharp S, Tang SF. Response rates and response bias for 50 surveys of pediatricians. Health Services Research. 2005;40:213–226. doi: 10.1111/j.1475-6773.2005.00350.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Cummings SM, Savitz LA, Konrad TR. Reported response rates to mailed physician questionnaires. Health Services Research. 2001;35:1347–1355. [PMC free article] [PubMed] [Google Scholar]
  13. Dal Bo P, Foster A, Putterman L. Institutions and behavior: Experimental evidence on the effects of democracy. American Economic Review. 2010;100:2205–2229. doi: 10.1257/aer.100.5.2205. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Daly JM, Jogerst GJ. Association of knowledge of adult protective services legislation with rates of reporting of abuse in Iowa nursing homes. Journal of the American Medical Directors Association. 2005;6:113–120. doi: 10.1016/j.jamda.2005.01.005. [DOI] [PubMed] [Google Scholar]
  15. Fennell ML, Feng Z, Clark MA, Mor V. Elderly Hispanics more likely to reside in poor-quality nursing homes. Health Affairs (Millwood). 2010;29:65–73. doi: 10.1377/hlthaff.2009.0003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Fisher S, Bosley J, Goldenberg K, Mockovak W, Tucker C. A qualitative study of nonresponse factors affecting BLS establishment surveys: Results. American Statistical Association; 2003. Proceedings of the Survey Research Methods Section. Retrieved September 27, 2010, from http://www.bls.gov/osmr/pdf/st030230.pdf. [Google Scholar]
  17. Flanigan TS, McFarlane E, Cook S. Conducting survey research among physicians and other medical professionals—A review of current literature. 2008 Retrieved September 27, 2010, from http://www.amstat.org/sections/srms/Proceedings/y2008/Files/flanigan.pdf.
  18. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias. Public Opinion Quarterly. 2008;72:167–189. [Google Scholar]
  19. Guise V, Chambers M, Valimaki M, Makkonen P. A mixed-mode approach to data collection: Combining web and paper questionnaires to examine nurses’ attitudes to mental illness. Journal of Advanced Nursing. 2010;66:1623–1632. doi: 10.1111/j.1365-2648.2010.05357.x. [DOI] [PubMed] [Google Scholar]
  20. Handler SM, Perera S, Olshansky EF, Studenski SA, Nace DA, Fridsma DB, Hanlon JT. Identifying modifiable barriers to medication error reporting in the nursing home setting. Journal of the American Medical Directors Association. 2007;8:568–574. doi: 10.1016/j.jamda.2007.06.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Harrington C. Regulating nursing homes: Residential nursing facilities in the United States. British Medical Journal. 2001;323:507–510. doi: 10.1136/bmj.323.7311.507. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Hill CA, Fahrney K, Wheeless SC, Carson CP. Survey response inducements for registered nurses. Western Journal of Nursing Research. 2006;28:322–334. doi: 10.1177/0193945905284723. [DOI] [PubMed] [Google Scholar]
  23. James KM, Ziegenfuss JY, Tilburt JC, Harris AM, Beebe TJ. Getting physicians to respond: The impact of incentive type and timing on physician survey response rates. Health Services Research. 2011;46:232–242. doi: 10.1111/j.1475-6773.2010.01181.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Jepson C, Asch DA, Hershey JC, Ubel PA. In a mailed physician survey, questionnaire length had a threshold effect on response rate. Journal of Clinical Epidemiology. 2005;58:103–105. doi: 10.1016/j.jclinepi.2004.06.004. [DOI] [PubMed] [Google Scholar]
  25. Jogerst GJ, Daly JM, Dawson JD, Peek-Asa C, Schmuch G. Iowa nursing home characteristics associated with reported abuse. Journal of the American Medical Directors Association. 2006;7:203–207. doi: 10.1016/j.jamda.2005.12.006. [DOI] [PubMed] [Google Scholar]
  26. Kagel JH, Roth AE. The handbook of experimental economics. Princeton University Press; Princeton, NJ: 1995. [Google Scholar]
  27. Kellerman SE, Herold J. Physician response to surveys. A review of the literature. American Journal of Preventive Medicine. 2001;20:61–67. doi: 10.1016/s0749-3797(00)00258-0. [DOI] [PubMed] [Google Scholar]
  28. Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P, Guyatt GH. Internet versus mailed questionnaires: A randomized comparison (2) Journal of Medical Internet Research. 2004;6:e30. doi: 10.2196/jmir.6.3.e30. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Odon L, Price JH. Effects of a small monetary incentive and follow-up mailings on return rates of a survey to nurse practitioners. Psychological Reports. 1999;85:1154–1156. doi: 10.2466/pr0.1999.85.3f.1154. [DOI] [PubMed] [Google Scholar]
  30. Poon EG, Jha AK, Christino M, Honour MM, Fernandopulle R, Middleton B, Kaushal R. Assessing the level of healthcare information technology adoption in the United States: A snapshot. BMC Medical Informatics and Decision Making. 2006;6:1. doi: 10.1186/1472-6947-6-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Resnick HE, Manard B, Stone RI, Castle NG. Tenure, certification, and education of nursing home administrators, medical directors, and directors of nursing in for-profit and not-for-profit nursing homes: United States 2004. Journal of the American Medical Directors Association. 2009;10:423–430. doi: 10.1016/j.jamda.2009.03.009. [DOI] [PubMed] [Google Scholar]
  32. Shih TH, Fan X. Comparing response rates from web and mail surveys: A meta-analysis. Field Methods. 2008;20:249–271. [Google Scholar]
  33. Shirts BH, Perera S, Hanlon JT, Roumani YF, Studenski SA, Nace DA, Handler SM. Provider management of and satisfaction with laboratory testing in the nursing home setting: Results of a national internet-based survey. Journal of the American Medical Directors Association. 2009;10:161–166. doi: 10.1016/j.jamda.2008.08.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Singer E, Bossarte RM. Incentives for survey participation when are they “coercive”? American Journal of Preventive Medicine. 2006;31:411–418. doi: 10.1016/j.amepre.2006.07.013. [DOI] [PubMed] [Google Scholar]
  35. Smith DB, Feng Z, Fennell ML, Zinn JS, Mor V. Separate and unequal: Racial segregation and disparities in quality across U.S. nursing homes. Health Affairs (Millwood). 2007;26:1448–1458. doi: 10.1377/hlthaff.26.5.1448. [DOI] [PubMed] [Google Scholar]
  36. The American Association for Public Opinion Research . Standard definitions: Final dispositions of case codes and outcome rates for surveys. 5th ed. Author; Lenexa, KS: 2008. [Google Scholar]
  37. Thran SL, Hixon JS. Physician surveys: Recent difficulties and proposed solutions; Paper presented at the ASA 2000 Proceedings; 2000. [Google Scholar]
  38. Ulrich CM, Danis M, Koziol D, Garrett-Mayer E, Hubbard R, Grady C. Does it pay to pay? A randomized trial of prepaid financial incentives and lottery incentives in surveys of nonphysician healthcare professionals. Nursing Research. 2005;54:178–183. doi: 10.1097/00006199-200505000-00005. [DOI] [PubMed] [Google Scholar]
  39. VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: A systematic review. Evaluation & the Health Professions. 2007;30:303–321. doi: 10.1177/0163278707307899. [DOI] [PubMed] [Google Scholar]
  40. VanGeest JB, Wynia MK, Cummins DS, Wilson IB. Effects of different monetary incentives on the return rate of a national mail survey of physicians. Medical Care. 2001;39:197–201. doi: 10.1097/00005650-200102000-00010. [DOI] [PubMed] [Google Scholar]
  41. Ward NS, Teno JM, Curtis JR, Rubenfeld GD, Levy MM. Perceptions of cost constraints, resource limitations, and rationing in United States intensive care units: Results of a national survey. Critical Care Medicine. 2008;36:471–476. doi: 10.1097/CCM.0B013E3181629511. [DOI] [PubMed] [Google Scholar]
  42. Wyatt JC. When to use web-based surveys. Journal of the American Medical Informatics Association. 2000;7:426–429. doi: 10.1136/jamia.2000.0070426. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Young Y, Inamdar S, Barhydt NR, Colello AD, Hannan EL. Preventable hospitalization among nursing home residents: Varying views between medical directors and directors of nursing regarding determinants. Journal of Aging and Health. 2010;22:169–182. doi: 10.1177/089826430353346. [DOI] [PubMed] [Google Scholar]
