Abstract
Background and Aims
Nursing home research may involve eliciting information from managers, yet response rates for Directors of Nursing have not been studied recently. As part of a more extensive study, we surveyed all nursing homes in three states in 2018 and 2019 to update guidance on how to survey these leaders effectively. We focus on response rates as a measure of non‐response error and on comparisons of nursing homes' characteristics to their population values as a measure of representation error.
Methods
We surveyed Directors of Nursing (DONs) or their designees in nursing homes serving adult residents with at least 30 beds in California, Massachusetts, and Ohio (N = 2389). We collected contact information for respondents and then emailed survey invitations and links, followed by three email reminders and a paper version. Nursing home associations in two of the states contacted their members on our behalf. We compared response rates across waves and states. We also compared the characteristics of nursing homes based on whether the response was via email or paper. In a multivariable logistic regression with adjustments for multiple comparisons, we used characteristics of the survey and of the nursing homes to predict whether a home's DON responded.
Results
The response rate was higher for the first wave than for the second (30% vs 20.5%). The highest response rate was in Massachusetts (31.8%), followed by Ohio (25.8%) and California (19.5%). Nursing home characteristics did not vary by response mode. Additionally, we did not find any statistically significant predictors of whether a nursing home responded.
Conclusion
A single‐mode survey may provide a reasonably representative sample at the cost of sample size. Switching modes, however, can increase the sample size with little apparent risk of biasing the sample.
Keywords: non‐respondents, nursing administration research, nursing homes, occupational health, survey methodology
1. INTRODUCTION
Along with Administrators and Medical Directors, Directors of Nursing (DONs) are part of the top management team in individual nursing homes and play a critical role in operations, resident experience, and patient safety. 1 Their most common job responsibilities include finance, human resources, clinical care, regulatory compliance, and staffing. 2 Research on nursing homes often requires learning from the management team, as they have information on the structure, functioning, and culture of their nursing homes. 3 , 4 In qualitative studies, DONs provided more detailed information about workplace policies, programs, and practices than executives or other managers. 5 , 6 They also recognized the need to invest in worker health, but viewed this need as disconnected from the need to improve quality of care. 5 , 6
Earlier research reported on data collection methods in nursing homes in which researchers sought responses from both the administrator and the DON. 7 , 8 These studies generally had response rates over 50%. They evaluated tying incentive payments to joint responses from the DON and administrator, varying survey modes, and the number of follow‐ups needed. 3 , 8 , 9 However, the most recent trials of survey methods among Directors of Nursing were conducted several years ago, most recently in 2009 to 2011, and covered patient safety topics. 7 , 8 , 9 Response rates have declined over time for other healthcare professionals 10 and may have fallen for DONs as well. While additional leadership surveys in nursing homes have been conducted since then, most of their topics also related to patient safety. Previous research has demonstrated an association between the salience of the survey topic and response rates; at the outset, we did not know how salient worker health was likely to be to participants. 11 More information is needed on how to survey DONs, particularly in the increasingly marketized environment of U.S. nursing homes, which are also facing greater regulatory requirements, more care transitions, and higher patient acuity. 12 , 13 , 14 While there has been extensive work on survey methods for physicians and executives, there is less work on establishment surveys, where the establishment is the unit of interest. 15 , 16 , 17 , 18 , 19
The findings presented here are part of a study of DONs designed to validate a measure of Total Worker Health, the Workplace Integrated Safety and Health (WISH) Assessment, and to assess the role of working conditions in work‐related injury and patient safety. 20 As a part of this more extensive study, we conducted a survey of DONs in two randomly selected groups of nursing homes in three different states, resulting in a complete census of nursing homes in each state when the two waves are combined. In this paper, we focus on response rates as a measure of non‐response error and on comparisons of nursing homes' characteristics to their population values as a measure of error in the representation domain. We do not consider measurement error in this study. We also describe other lessons learned from data collection with this unique respondent population.
We use the total survey error (TSE) framework to assess the representation of our sample of nursing home DONs in three U.S. states. 21 Briefly, the TSE framework decomposes survey error into several sources across two domains: (1) Representation and (2) Measurement. Representation is decomposed into:
Coverage error: differences in the sampling frame from its intended target population.
Sampling error: differences in the selected sample (ie, those invited to take the survey) from the overall sampling frame.
Nonresponse error: differences in the respondents from the overall selected sample. 21
Random deviations in any of the above sources will result in unbiased estimates with increased variance (thus, poorer statistical precision or power). In contrast, any systematic difference will result in biased estimates.
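This distinction can be summarized with the usual mean‐squared‐error decomposition (a textbook identity, included here only as a reminder):

$$
\mathrm{MSE}(\hat{\theta}) \;=\; \underbrace{\bigl(\mathrm{E}[\hat{\theta}] - \theta\bigr)^2}_{\text{systematic (bias)}} \;+\; \underbrace{\mathrm{Var}(\hat{\theta})}_{\text{random}},
$$

so random deviations inflate only the variance term, while systematic differences enter through the squared bias.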
In practice, it is often not possible to decompose error along these theoretical sources. Instead, surveys rely on auxiliary quality indicators, such as response rates and comparisons of respondent characteristics (eg, demographics) against population values on theoretically relevant variables, as measures of potential error or bias. For example, a low response rate creates the potential for large non‐response bias. However, recent work has found that non‐response bias is not necessarily correlated with the response rate and should be investigated separately, as we do later in this paper. 22 , 23 Conversely, small or negligible differences between respondents and the target population on auxiliary information suggest that the outcome is likely not biased. Typically, observed differences between respondent and population distributions on auxiliary variables are adjusted for through weighting (eg, post‐stratification); thus, adjustment errors (eg, adjusting for irrelevant variables) may similarly produce error.
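For the non‐response source specifically, the bias of a respondent mean under a deterministic response model has a standard closed form (a textbook survey‐sampling expression; the quantities below are not estimated in this study):

$$
\bar{y}_R - \bar{Y} \;=\; \frac{M}{N}\,\bigl(\bar{Y}_R - \bar{Y}_M\bigr),
$$

where the $N$ selected units split into $R$ respondents and $M$ nonrespondents with means $\bar{Y}_R$ and $\bar{Y}_M$. A low response rate (a large $M/N$) therefore produces substantial bias only when respondents and nonrespondents actually differ on the outcome, which is why response rates and non‐response bias need to be examined separately.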
2. METHODS
2.1. Human subjects review and consent
The study was approved by the Institutional Review Board of the Harvard T.H. Chan School of Public Health (IRB 18‐1245). Our survey included a consent page, but signatures were not required because the participants' primary risk was identification. The consent page would have been the only link to their names.
2.2. Population
We drew the population of nursing homes from Medicare's provider files available on Nursing Home Compare in August of 2018. 24 We then excluded nursing homes with fewer than 30 beds, those that served only pediatric patients, and those that closed between August and the start of data collection. Risks are systematically different in pediatric homes, and quality data are less reliable for homes with fewer than 30 beds. 25 , 26 , 27 We chose California, Massachusetts, and Ohio for the study because we were able to obtain workers' compensation data (needed for another part of the project) for each state and had a positive initial contact with at least one nursing home association in each state. Our resources only allowed for a final sample of 500 to 600 nursing homes total.
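To make the exclusion logic concrete, here is a minimal pandas sketch; the column names (bedcount, pediatric_only, closed_before_fielding) and the toy rows are hypothetical stand‐ins for the actual Provider Info schema:

```python
import pandas as pd

# Toy stand-in for Medicare's Provider Info extract (illustrative rows only).
homes = pd.DataFrame({
    "provider_id": ["A1", "A2", "A3", "A4"],
    "state": ["CA", "CA", "MA", "OH"],
    "bedcount": [120, 24, 60, 90],
    "pediatric_only": [False, False, True, False],
    "closed_before_fielding": [False, False, False, True],
})

# Study exclusions: at least 30 certified beds, serves adult residents,
# and still operating when data collection began.
eligible = homes[
    (homes["bedcount"] >= 30)
    & ~homes["pediatric_only"]
    & ~homes["closed_before_fielding"]
]

print(eligible.groupby("state").size())  # per-state denominators
```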
2.3. Survey
We designed the survey to collect information about the work environment in the context of validating the WISH instrument and linking the workplace environment to worker injuries and patient safety. 20 The survey took about 20 minutes to complete. After completion, we gave respondents a $20 Amazon gift card. We increased this amount to $25 in the second wave with the goal of increasing the response rate.
2.4. Data collection
There were two waves of data collection. We conducted the first wave in the fall of 2018 (35% of the total sample). We completed the second wave in the winter and spring of 2019. Between when we selected the population and began the survey, 7 (1.9%), 4 (0.4%), and 10 (1.1%) nursing homes in Massachusetts, California, and Ohio, respectively, had closed.
We attempted to collaborate with nursing home associations in each state. In Massachusetts, a nursing home association representing 91% of Massachusetts nursing homes as members collected names and email addresses, sent out the survey electronically, and mailed paper copies to non‐respondents for the first wave. In Ohio, we collaborated with a nursing home association that promoted the survey in newsletters and in‐person events. We also contacted a nursing home association in California, but they declined to participate. The research team directly distributed all the California and Ohio survey materials as well as survey materials for the second wave in Massachusetts.
2.5. First wave
We selected a stratified (by state) simple random sample of 828 nursing homes from the list of nursing homes certified by the Centers for Medicare and Medicaid Services. 28 We called sampled nursing homes to obtain the name and email address of the DON. If the position was vacant, we requested contact information for someone similarly positioned (such as the Assistant DON). In late September 2018, our survey research team emailed the initial invitations and survey links for Ohio and California. Our collaborator emailed them for Massachusetts. We sent email reminders to non‐responders approximately 21 days after the initial invitation and conducted telephone follow‐up to ascertain whether the intended respondents received emails. We were unable to offer to complete the survey via phone because of funding constraints. We obtained corrected email addresses and resent the survey invitations for any bounce‐backs to the initial invitations. We sent a second email reminder approximately 21 days after the first follow‐up and a final email 1 month later (January 2019). This email modified the subject line to include the study name and omit reference to Amazon to reduce the chance of being coded as spam. We mailed a paper version of the survey to non‐respondents in late January, using the DON's name for each facility. The paper version included an introductory letter, a copy of the survey, and a stamped, addressed return envelope. In Massachusetts' first wave, the mailing envelope included branding from the participating nursing home association (the return envelope had the host institution's address). For other states, the mailing envelope included branding for the host institution (Harvard T.H. Chan School of Public Health).
2.6. Second wave
We included all eligible nursing homes not sampled in the first wave in the second wave. We obtained contact information by calling nursing homes during February 2019 and March 2019 and sent the initial email in mid‐March. We followed the same reminder process as with the first wave except that reminders were spaced 7 to 9 days apart. Additionally, rather than responding to bounced emails on a rolling basis, we re‐sent email invitations and links in two groups. While both groups were sent three reminder emails, one of the groups received the paper survey after the initial email rather than after the third email. The second group received their final reminder in mid‐May. Survey responses were closed on June 30, 2019. Figure 1 gives a graphical illustration of the survey timeline.
FIGURE 1. Survey timeline.
2.7. Response rate calculations
We calculated response rates for all states and waves as the number of respondents divided by the number of eligible nursing homes sent surveys, as suggested by the American Association for Public Opinion Research. 22 We excluded nursing homes that closed during survey administration from both the numerator and the denominator, as currently operating was a criterion for eligibility. Some DONs work at multiple facilities. We counted facilities as non‐respondents if they did not have a response, even if their DON responded for a separate facility.
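As a concrete illustration, a short pandas sketch of this calculation follows; the frame and its column names are hypothetical, with one row per sampled nursing home:

```python
import pandas as pd

# Toy frame: one row per sampled home (illustrative values only).
sample = pd.DataFrame({
    "state": ["CA", "CA", "MA", "MA", "OH", "OH"],
    "wave": [1, 2, 1, 2, 1, 2],
    "responded": [True, False, True, True, False, True],
    "closed_during_fielding": [False, True, False, False, False, False],
})

# Homes that closed during fielding leave both the numerator and denominator.
fielded = sample[~sample["closed_during_fielding"]]

print(f"overall response rate: {fielded['responded'].mean():.1%}")
print(fielded.groupby(["state", "wave"])["responded"].mean())  # Table 1 layout
```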
2.8. Data analysis
For our primary analysis, we analyzed the correlates of receiving a response to the survey after controlling for characteristics that might have affected whether the DON responded. The outcome was a binary variable for whether there was a response. We used a multivariable logistic regression with standard errors adjusted for clustering by DON to account for situations where a single DON worked at multiple facilities. We report the results as odds ratios with simultaneous 95% confidence intervals (using a Bonferroni correction to obtain a familywise error rate of 5%). We also estimated marginal effects (changes in the predicted probability of response as the predictors change) to provide a better sense of magnitude. Predictors that we hypothesized could conceptually affect survey response were included in the model. Analyses were performed using Stata, version 16.1 (StataCorp LLC, College Station, Texas).
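Although the analyses were run in Stata, the same specification can be sketched with Python's statsmodels; the variable names and synthetic data below are hypothetical, and only a subset of the paper's predictors is shown:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis frame (one row per nursing home).
rng = np.random.default_rng(0)
n = 500
nh = pd.DataFrame({
    "responded": rng.integers(0, 2, n),
    "state": rng.choice(["CA", "MA", "OH"], n),
    "wave": rng.choice([1, 2], n),
    "for_profit": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "don_id": rng.integers(0, 480, n),  # some DONs cover several homes
})

# Logistic regression with standard errors clustered by DON.
fit = smf.logit(
    "responded ~ C(state) + C(wave) + for_profit + rural", data=nh
).fit(cov_type="cluster", cov_kwds={"groups": nh["don_id"]})

# Bonferroni-adjusted simultaneous intervals: split alpha across the
# k non-intercept coefficients, then exponentiate to odds ratios.
k = fit.params.size - 1
ci = fit.conf_int(alpha=0.05 / k)
ci.columns = ["lower", "upper"]
print(np.exp(pd.concat([fit.params.rename("odds_ratio"), ci], axis=1)))

# Average marginal effects (changes in predicted response probability).
print(fit.get_margeff().summary())
```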
2.9. Measures
Each state had different levels of collaboration and different regulations, so we included state indicators (0/1 dummy variables) based on each nursing home's location. We also included wave indicators because of differences in the survey process between waves.
Additionally, we included variables that might be related to the respondent's availability. Larger nursing homes are likely to have more personnel, which could either positively or negatively affect the amount of time a DON has available to answer a survey. Additionally, personnel who work in for‐profit chains may have to seek administrative approval to fill out an organizational survey, and so may be less likely to respond. 29 To address these concerns, we controlled for the number of federally certified beds and ownership status, defined as for‐profit or not‐for‐profit (both obtained from Medicare). To control for area characteristics that might affect the availability of the DON and other differences between rural and urban nursing homes in the United States, 30 we used the most recent (2010) Rural‐Urban Commuting Area codes to assign each nursing home a rurality score based on its county. 31 These codes (1‐10) broadly sort areas into Metropolitan areas (1‐3), Micropolitan areas (4‐6), Small towns (7‐9), and Rural areas with primary flow outside a small or large urban area or cluster (10). We classified nursing homes as more rural if they were not Metropolitan, as sketched below. Additionally, we included the Medicare staffing rating: the number of hours worked by licensed staff (RNs, LPNs/LVNs) each day at the nursing home per resident, adjusted for resident needs. It is a relative ranking within each state. 28 Nursing homes with higher staffing levels per resident might have DONs with relatively more time to respond to the survey.
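A minimal sketch of the rurality indicator referenced above, derived from the primary RUCA codes:

```python
# RUCA primary codes: 1-3 metropolitan, 4-6 micropolitan,
# 7-9 small town, 10 rural. The analysis collapses these into a
# binary flag: anything outside a metropolitan area counts as rural.
def is_rural(ruca_code: int) -> bool:
    return ruca_code >= 4

assert not is_rural(2)  # metropolitan core: not rural
assert is_rural(7)      # small town: rural for this analysis
```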
We also included the quality of resident care and health inspection ratings to see whether nursing homes with better quality indicators were more likely to respond, which would indicate potential selection on quality. The quality rating is a risk‐adjusted combination of 17 different quality measures for both long‐term and short‐stay residents. 28 The health inspection rating is based on the scope and severity of citations identified on the most recent three inspections and those identified on complaint or facility incident investigations over the last 3 years. 28 Both ratings are relative within each state and have the following categories: much below average, below average, average, above average, and much above average. We believe these factors might affect whether a DON responds because individuals in relatively lower ranked facilities may be more hesitant to respond for fear of looking bad, or because lower rankings make their positions more stressful.
Nineteen homes were missing data for at least one of the Medicare measures for all of 2018 and were dropped from the regression analysis. The other results did not differ when these homes were excluded.
3. RESULTS
3.1. Final sample
The final denominators are shown in Figure 2. After exclusions, 1108 nursing homes remained in California, 374 in Massachusetts, and 907 in Ohio. Each of the two survey waves was a random sample within each state; by the end of the second wave, every open nursing home meeting the inclusion criteria had been offered participation in the survey.
FIGURE 2. Number of nursing homes included in the survey. This figure describes the selection process in each state. Initial lists were pulled from Medicare's Nursing Home Compare website on August 23, 2018. Final census numbers are listed in bold.
3.2. Response rates
The overall response rate was 23.8% (569/2389). The following sections discuss the unadjusted and adjusted response rates based on the categories described in the methods section.
Response Rates by State. In California, the overall response rate was 19.5%. In Massachusetts, the overall response rate was 31.8%, and in Ohio, it was 25.8% (see Table 1). There were 63 (5.7%), 3 (0.8%), and 12 (1.3%) nursing homes in California, Massachusetts, and Ohio, respectively, that refused the survey by declining to provide an email address (not shown). We were unable to obtain some DON email addresses despite repeated phone calls; these facilities received only the paper version of the survey: 79 (7.1%) in California, 18 (4.8%) in Massachusetts, and 34 (3.7%) in Ohio. The percentage decrease in the response rate in the second wave was similar across states, despite not partnering with the state association in Massachusetts or Ohio for that wave. Table S1 shows the breakdown of nonresponses.
TABLE 1.
Response rates by state and wave
| State | First wave | Second wave | Total |
|---|---|---|---|
| California | 26.0% (79/304) | 17.0% (137/804) | 19.5% (216/1108) |
| Massachusetts | 34.1% (84/246) | 27.3% (35/128) | 31.8% (119/374) |
| Ohio | 31.3% (87/278) | 23.4% (147/629) | 25.8% (234/907) |
3.3. Response rates by mode
Overall, 59% of responses were completed via the emailed link, and 41% were completed via paper. There were no differences in the characteristics of those who responded to the emailed link vs the paper survey, as shown in Table S2.
3.4. Adjusted response rates
The estimated odds of responding to the survey are shown in Table 2. Compared to nursing homes in California, those in Massachusetts (OR 1.63, 95% CI [1.00, 2.66]) had higher odds of responding, as did those in Ohio (OR 1.22, 95% CI [0.82, 1.80]). Full estimates using marginal effects are shown in Table S3. All else equal, DONs in Massachusetts were 9 percentage points more likely to respond to the survey than DONs in California (95% CI [0.02, 0.15]). DONs at for‐profit nursing homes had lower odds of responding, but the OR was not statistically significant (OR 0.78, 95% CI [0.50, 1.22]). The number of beds was not statistically significantly related to whether the DON responded to the survey, and there was no clear pattern in the ORs. The same was true for the quality rating and staffing rating. DONs in non‐metropolitan nursing homes had higher odds of responding, but again, the estimate was not statistically significant (OR 1.49, 95% CI [0.90, 2.46]). DONs in nursing homes surveyed in the second wave had lower odds of responding than those surveyed in the first wave (OR 0.74, 95% CI [0.52, 1.05]). While none of the ORs for the health inspection rating was statistically significant, there was a pattern in the estimates: the marginal effects ranged from −2 percentage points (95% CI [−0.08, 0.05]) for homes rated much below average to 7 percentage points (95% CI [−0.02, 0.15]) for those rated much above average, compared with those rated average.
TABLE 2.
Association between survey features, nursing home factors, and response to survey: results of logistic regression (N = 2370ᵃ)
| Variable | Odds ratio | Simultaneous 95% confidence intervalᵇ |
|---|---|---|
| State | ||
| California | Reference | |
| Massachusetts | 1.63 | [1.00, 2.66] |
| Ohio | 1.22 | [0.82, 1.80] |
| For profit (not‐for‐profit is reference) | 0.78 | [0.50, 1.22] |
| Number of beds | ||
| 30‐49 | 0.93 | [0.53, 1.65] |
| 50‐99 | Reference | |
| 100‐149 | 0.88 | [0.60, 1.31] |
| 150‐199 | 1.04 | [0.61, 1.78] |
| 200 or more | 0.86 | [0.34, 2.17] |
| Rural (Metropolitan is reference) | 1.49 | [0.90, 2.46] |
| Survey wave | ||
| Wave 1 | Reference | |
| Wave 2 | 0.74 | [0.52, 1.05] |
| Ownership change in previous 12 months (No change is reference) | 0.78 | [0.30, 2.06] |
| Health inspection ratingᶜ (2018) | | |
| Much below average | 0.91 | [0.56, 1.48] |
| Below average | 0.96 | [0.61, 1.51] |
| Average | Reference | |
| Above average | 1.14 | [0.73, 1.78] |
| Much above average | 1.42 | [0.81, 2.46] |
| Quality ratingᶜ (2018) | | |
| Much below average | 0.83 | [0.24, 2.88] |
| Below average | 1.54 | [0.70, 3.39] |
| Average | Reference | |
| Above average | 1.26 | [0.70, 2.26] |
| Much above average | 1.07 | [0.63, 1.82] |
| Staffing ratingᶜ (2018) | | |
| Much below average | 1.33 | [0.83, 2.11] |
| Below average | 1.27 | [0.77, 2.09] |
| Average | Reference | |
| Above average | 1.22 | [0.82, 1.81] |
| Much above average | 1.42 | [0.70, 2.85] |
| Constant | 0.29 | [0.13, 0.63] |
ᵃ There were 2346 clusters (distinct DONs).
ᵇ The 95% confidence intervals were adjusted using a Bonferroni correction to obtain a familywise error rate of 5%.
ᶜ Rankings are relative within state.
3.5. Linking DONs to nursing homes
In Ohio, 16 DONs worked at multiple nursing homes in the same capacity. When we realized this, our survey team contacted the relevant DONs to confirm which facility their response covered. No DON completed a survey for more than one facility. As stated in the methods section, we did not drop any of these facilities from the analysis because the facilities were the unit of response. Very few DONs worked at multiple nursing homes in the same capacity in Massachusetts and California. Additionally, there was turnover in some facilities between when we originally obtained contact information for the DON and when we sent out the survey.
3.6. Respondents' titles
The survey included questions asking whether the respondent was the DON and, if not, what the respondent's title was. Of the 569 responses, 480 (84.4%) reported being the DON, 42 (7.4%) did not report a title, and 43 (7.6%) reported a title other than DON. The most common substitutes were Administrators (13) and Assistant/Interim Directors of Nursing (11).
4. DISCUSSION
As part of an ongoing study, we surveyed the DONs of all adult nursing homes in three states. From the survey, we learned several important lessons for working with this population that might prove useful to others in the field. There is some potential for these lessons to carry over to other institutional settings, but further work is needed in that area.
Response rates in our study were about half to two‐thirds lower than those cited in other surveys of DONs. 3 , 8 This difference might be due to differences in survey administration methods, the topic of the survey, or specific features of the states in our study, such as whether there was pending legislation or overlapping survey efforts from a professional organization. Starting from publicly available information, we compiled DON email addresses ourselves. We also began with emails and then sent paper surveys to non‐respondents. Other studies in this population used both modes concurrently or used paper only, although, in other populations, sequential administration appears to be no worse than concurrent. 3 , 8 There is some evidence from surveys of physicians that using multiple modes (paper and web) results in higher response rates, although others have shown no difference. 15 , 19 In our analysis, the paper respondents were characteristically similar to the web respondents, and the overall sample composition was similar to the population on known factors. These results mirror those obtained from physician samples. 15 , 19
The implications of these findings are twofold. First, for research or evaluation that requires a quick turnaround from data collection to reporting, or that has few resources, a single‐mode web survey may produce a reasonably representative sample at the cost of sample size (and statistical precision or power). Second, for research or evaluation that requires a great deal of statistical precision or power to detect smaller effect sizes, switching modes can increase the sample size without biasing the overall sample. It thus avoids the need to calibrate or post‐stratify the sample to population characteristics, which would otherwise reduce the marginal gain in precision (because of the variability of the weights) and increase the complexity of the analysis.
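The precision cost of weighting can be made concrete with Kish's approximation for the effective sample size (a standard survey‐sampling result, not a quantity computed in this study):

$$
n_{\mathrm{eff}} \;\approx\; \frac{n}{1 + \mathrm{cv}(w)^2} \;=\; \frac{\bigl(\sum_i w_i\bigr)^2}{\sum_i w_i^2},
$$

where $\mathrm{cv}(w)$ is the coefficient of variation of the post‐stratification weights. Equal weights give $n_{\mathrm{eff}} = n$; any variability in the weights shrinks the effective sample size and erodes the precision gained from the extra responses.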
Other differences are that the surveys with higher response rates were related to patient safety, and they were completed several years ago. Worker health, well‐being, and safety may not have been perceived by DONs as being as exciting or relevant as patient safety. The differences in response rates could also be due to features of the particular states in our sample, compared with other papers that used national random samples. Furthermore, we were restricted in the number of contacts and reminders we could send from both a budgetary and a human subjects standpoint. Our overall refusal rate was 3.3%, roughly in line with refusals in previous studies. 8
While we did find some potentially interesting associations in our multivariable analysis, such as a negative relationship between for‐profit status and response, and positive relationships between rurality and response, and between health inspection rating and response, few of the estimated associations were statistically significant. These associations would be good candidates for future studies to evaluate the extent of selection bias in survey responses.
In terms of methods, we also learned some practical lessons. Response rates were highest in Massachusetts and lowest in California, aligning with the strength (or absence) of our collaborations. In addition to being fielded at a different time of year, the first wave had reminders spaced out differently (3 weeks vs 7‐9 days) because of the holiday period, and we made follow‐up phone calls in between reminders. The overwhelming majority of our calls went to voicemail or were left as messages with front desk staff. When we did reach DONs, almost no one recalled receiving an email about the survey. We suspect the emails were filtered as spam by mail servers, regarded as spam by recipients, or otherwise left unopened. Removing common triggers, such as "Amazon", or sending emails individually rather than as part of a bulk mailing, might reduce these problems. Even so, the first wave had a much higher response rate than the second wave, despite spanning the holiday period. Additionally, sending new emails with survey links while on the phone with potential respondents would help minimize confusion and ensure respondents have access to the survey link.
Turnover among DONs was another hurdle, primarily for maintaining the list of potential respondents so that researchers could use names and email addresses in communications. Given the challenge of maintaining the list, having a design where small batches of surveys could be sent as email addresses are collected would be beneficial, as was done by Clark and colleagues. 8 However, this strategy increases the work of sending reminders and might draw out the timeline. Additionally, we recommend adding questions to ask DONs directly if they work at multiple facilities and the names of those facilities.
We acknowledge several limitations to this research. The nursing homes in our sample are from three states and are not a random sample of all U.S. nursing homes. While these states do not differ considerably from the rest of the United States, our findings are not generalizable to the entire population of U.S. nursing homes. The original study was not designed to test differences in survey methodologies and may be underpowered to detect effects of the survey's features. Additionally, coverage error can occur in dynamic populations if, for example, the characteristics of nursing homes today differ from when the sampling frame was assembled and used. Each state had a different level of collaboration from professional organizations but also a different environment, so we cannot attribute the differences in response rates solely to collaboration. However, our results are suggestive of differences that might affect response rates. We hope that our results will assist others who are planning surveys of nursing home leadership.
5. CONCLUSION
Overall, we found differences in response rates by state and by features of the survey administration. We did not find any statistically significant differences between respondents who answered via the initial email and those who answered via the paper follow‐up. There were some associations between nursing home characteristics and response, but they were generally not statistically significant. Importantly, we did not find quality indicators to be associated with response.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
AUTHOR CONTRIBUTIONS
Conceptualization: Jessica A. R. Williams, Mary G. Vriniotis, Leslie I. Boden, Jamie E. Collins, Daniel A. Gundersen, Gregory R. Wagner, Jeff N. Katz, Glorian Sorensen.
Data Curation: Jessica A. R. Williams, Mary G. Vriniotis, Glorian Sorensen.
Formal Analysis: Jessica A. R. Williams, Mary G. Vriniotis, Daniel A. Gundersen.
Funding Acquisition: Glorian Sorensen.
Methodology: Jessica A. R. Williams, Mary G. Vriniotis, Leslie I. Boden, Jamie E. Collins, Daniel A. Gundersen, Gregory R. Wagner, Jeff N. Katz, and Glorian Sorensen.
Project Administration: Jessica A. R. Williams, Mary G. Vriniotis.
Resources: Glorian Sorensen.
Supervision: Glorian Sorensen.
Visualization: Jessica A. R. Williams, Mary G. Vriniotis.
Writing – Original Draft Preparation: Jessica A. R. Williams, Mary G. Vriniotis, Daniel A. Gundersen.
Writing – Review & Editing: Jessica A. R. Williams, Mary G. Vriniotis, Leslie I. Boden, Jamie E. Collins, Daniel A. Gundersen, Gregory R. Wagner, Jeff N. Katz, Glorian Sorensen.
Jessica A. R. Williams had full access to all the data in the study and takes complete responsibility for the integrity of the data and the accuracy of the data analysis.
ETHICAL APPROVAL AND INFORMED CONSENT
The study was approved by the Institutional Review Board of the Harvard T. H. Chan School of Public Health (IRB 18‐1245). Our survey included a consent page, but signatures were not required because the participants' primary risk was identification. The consent page would have been the only link to their names and thus no names were recorded.
Supporting information
Table S1. Response rates by survey wave.
Table S2. Characteristics of respondents by response mode.
Table S3. Marginal effects for the association between survey features, nursing home factors and response to survey: results of logistic regression (N = 2370).
ACKNOWLEDGEMENTS
We are grateful to the Massachusetts Senior Care Association and the Ohio Health Care Association for their help in revising and promoting the survey. We would also like to thank the DFCI Survey and Data Management Core, especially Ruth Lederman, who did an excellent job managing our survey.
Williams JAR, Vriniotis MG, Gundersen DA, et al. How to ask: Surveying nursing directors of nursing homes. Health Sci Rep. 2021;4:e304. 10.1002/hsr2.304
Funding information National Institute of Occupational Safety and Health, USA, Grant/Award Number: U19 OH008861
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
REFERENCES
- 1. Resnick HE, Manard B, Stone RI, Castle NG. Tenure, certification, and education of nursing home administrators, medical directors, and directors of nursing in for‐profit and not‐for‐profit nursing homes: United States 2004. J Am Med Dir Assoc. 2009;10:423‐430.
- 2. Siegel EO, Young HM, Leo MC, Santillan V. Managing up, down, and across the nursing home: roles and responsibilities of directors of nursing. Policy Polit Nurs Pract. 2020;13:214‐223.
- 3. Castle NG, Wagner LM, Ferguson JC, Handler SM. Safety culture of nursing homes: opinions of top managers. Health Care Manag Rev. 2011;36:175‐187.
- 4. Faghri PD, Kotejoshyer R, Cherniack M, Reeves D, Punnett L. Assessment of a worksite health promotion readiness checklist. J Occup Environ Med. 2010;52:893‐899.
- 5. Okechukwu C, Kelly E, Bacic J, Depasquale N, Hurtado D, Kossek E, Sembajwe G. Supporting employees' work‐family needs improves care quality: evidence from the Work, Family, and Health Study. Paper presented at: VI International Conference of Work and Family; July 2015; Barcelona, Spain.
- 6. Okechukwu C, Kelly E, Bacic J, et al. Supporting employees' work‐family needs improves care quality: evidence from the work, family, and health study. Soc Sci Med. 2016;157:111‐119.
- 7. Clark M, Rogers M, Foster A, et al. A randomized trial of the impact of survey design characteristics on response rates among nursing home providers. Eval Health Prof. 2011;34:464‐486.
- 8. Clark MA, Roman A, Rogers ML, Tyler DA, Mor V. Surveying multiple health professional team members within institutional settings: an example from the nursing home industry. Eval Health Prof. 2014;37:287‐313.
- 9. Colón‐Emeric CS, Casebeer L, Saag K, et al. Barriers to providing osteoporosis care in skilled nursing facilities: perceptions of medical directors and directors of nursing. J Am Med Dir Assoc. 2005;6:S61‐S66.
- 10. Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res. 2009;9:160.
- 11. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324:1183.
- 12. Castle NG, Ferguson JC. What is nursing home quality and how is it measured? Gerontologist. 2010;50:426‐442.
- 13. Coleman EA. Falling through the cracks: challenges and opportunities for improving transitional care for persons with continuous complex care needs. J Am Geriatr Soc. 2003;51:549‐555.
- 14. Harrington C, Jacobsen FF, Panos J, Pollock A, Sutaria S, Szebehely M. Marketization in long‐term care: a cross‐country comparison of large for‐profit nursing home chains. Health Serv Insights. 2017;10:1178632917710533.
- 15. Beebe TJ, Jacobson RM, Jenkins SM, Lackore KA, Rutten LJF. Testing the impact of mixed‐mode designs (mail and web) and multiple contact attempts within mode (mail or web) on clinician survey response. Health Serv Res. 2018;53(Suppl 1):3070‐3083.
- 16. Kaputa SJ, Thompson KJ, Beck JL. An embedded experiment for targeted non‐response follow‐up in establishment surveys. J R Stat Soc A. 2019;182:1371‐1391. doi:10.1111/rssa.12448
- 17. Klabunde CN, Willis GB, Mcleod CC, et al. Improving the quality of surveys of physicians and medical groups: a research agenda. Eval Health Prof. 2012;35:477‐506.
- 18. Morrison RL, Ridolfo H, Sirkis R, Edgar J, Kaplan R. Smartphone usage in establishment surveys: case studies from three U.S. federal statistical agencies. Paper presented at: Fifth International Workshop on Business Data Collection Methodology; 2018; Lisbon, Portugal.
- 19. Weaver L, Beebe TJ, Rockwood T. The impact of survey mode on the response rate in a survey of the factors that influence Minnesota physicians' disclosure practices. BMC Med Res Methodol. 2019;19:73.
- 20. Sorensen G, Sparer E, Williams JAR, et al. Measuring best practices for workplace safety, health, and well‐being: the workplace integrated safety and health assessment. J Occup Environ Med. 2018;60(5):430‐439. doi:10.1097/JOM.0000000000001286
- 21. Baker R, Brick M, Keeter S, Kennedy C. Evaluating Survey Quality in Today's Complex Environment. Washington, DC: American Association for Public Opinion Research; 2016. https://www.aapor.org/Education-Resources/Reports/Evaluating-Survey-Quality.aspx. Accessed November 19, 2020.
- 22. American Association for Public Opinion Research. Response Rates - An Overview. 2020. https://www.aapor.org/Education-Resources/For-Researchers/Poll-Survey-FAQ/Response-Rates-An-Overview.aspx.
- 23. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta‐analysis. Public Opin Q. 2008;72:167‐189.
- 24. Centers for Medicare & Medicaid Services. Provider Info. Washington, DC: Data.Medicare.gov; 2020.
- 25. Harrington C, Carrillo H, Blank BW, O'Brian T. Nursing Facilities, Staffing, Residents and Facility Deficiencies, 2004 Through 2009. San Francisco: University of California, San Francisco; 2008. http://ualr.edu/seniorjustice/uploads/2008/12/Nursing%20Home%20Facilities,%20Staffing,%20Residents,%20and%20Facility%20Deficiencies%202001%20Through%202007.pdf. Accessed November 18, 2015.
- 26. Mor V. Public reporting of long‐term care quality: the US experience. Eurohealth. 2010;16:14.
- 27. Thomas KS, Wysocki A, Intrator O, Mor V. Finding Gertrude: the resident's voice in minimum data set 3.0. J Am Med Dir Assoc. 2014;15:802‐806.
- 28. Medicare.gov. About Nursing Home Compare Data. Washington, DC: Nursing Home Compare; 2020. https://www.medicare.gov/nursinghomecompare/Data/About.html. Accessed August 13, 2020.
- 29. Kotejoshyer R, Zhang Y, Flum M, Fleishman J, Punnett L. Prospective evaluation of fidelity, impact and sustainability of participatory workplace health teams in skilled nursing facilities. Int J Environ Res Public Health. 2019;16(9):1494. doi:10.3390/ijerph16091494
- 30. Lutfiyya MN, Gessert CE, Lipsky MS. Nursing home quality: a comparative analysis using CMS nursing home compare data to examine differences between rural and nonrural facilities. J Am Med Dir Assoc. 2013;14:593‐598.
- 31. USDA Economic Research Service. Rural‐Urban Commuting Area Codes. Washington, DC: USDA; 2020. https://www.ers.usda.gov/data-products/rural-urban-commuting-area-codes/.