Author manuscript; available in PMC: 2022 Oct 1.
Published in final edited form as: Med Care. 2021 Oct 1;59(10):907–912. doi: 10.1097/MLR.0000000000001627

Comparing Web and Mail Protocols for Administering HCAHPS Surveys

Floyd J Fowler Jr 1, Philip S Brenner 2, J Lee Hargraves 3, Paul D Cleary 4
PMCID: PMC8570265  NIHMSID: NIHMS1723034  PMID: 34334736

Abstract

OBJECTIVES:

Compare results of using web-based and mail (postal) Hospital CAHPS (HCAHPS) data collection protocols.

RESEARCH DESIGN:

Patients who had been hospitalized in a New England hospital were surveyed about their hospital experience. Patients who provided email addresses were randomized to one of three data collection protocols: web alone, web with postal mail follow-up, and postal mail only. Those who did not provide email addresses were surveyed using postal mail only. Analyses compared response rates, respondent characteristics, and patient reported experiences.

SUBJECTS:

For an 8-week period, patients discharged from the study hospital to home.

MEASURES:

Measures included response rates, characteristics of respondents, six composite measures of patient experience, and two ratings of the hospital.

RESULTS:

Response rates were significantly lower for the web only protocol than the mail or combined protocols, and those who had not provided email addresses had lower response rates. Those over 65 were more likely than others to respond to all protocols, especially for the mail only protocols. Respondents without email addresses were older, less educated, and reported worse health than those who provided email addresses. After adjusting for respondent differences, those in the combined protocol differed significantly from the mail (postal) only respondents on two measures of patient experience; those in the web only protocol differed on one. Those not providing an email address differed from those who did on one measure.

CONCLUSIONS:

If web-based protocols are used for HCAHPS surveys, adjustments for mode of data collection are needed to make results comparable.

Keywords: Web surveys, HCAHPS, response rates, measuring patient experience

INTRODUCTION

The Consumer Assessment of Healthcare Providers and Systems (CAHPS) is a program of survey instruments and protocols, sponsored by the Agency for Healthcare Research and Quality.1–3 Established in 1995, CAHPS surveys ask about patients’ experiences with getting medical care in a variety of settings. The instruments and data collection protocols are standardized to facilitate comparisons of results across patient care settings.

The Centers for Medicare & Medicaid Services (CMS) uses the results from the Hospital CAHPS4–6 (HCAHPS) surveys as one basis for adjusting reimbursement to hospitals.7 Hospitals are required periodically to administer HCAHPS surveys to a sample of their patients. HCAHPS survey protocols are the focus of this study.

The most commonly used methods for conducting CAHPS surveys have been a combination of postal mail surveys and telephone interviews. There is a great deal of pressure to reduce the costs of CAHPS survey administration. As more and more patients use the web and provide their email addresses to their health care providers, it becomes more feasible to use the web as one mode for carrying out CAHPS surveys.8–11 CMS, which sponsors 11 different CAHPS surveys,12,13 is currently assessing the feasibility of web administration for its major survey programs, including HCAHPS.

Research comparing postal mail and web approaches to surveying patients has typically found that those who respond to web surveys are different from those most likely to respond to mail and phone surveys.11 In addition, response rates to web surveys tend to be much lower than to mail or phone surveys.11,14–18 These findings raise concerns that using web surveys will result in data that are not representative of the target population. Although there is not a consistent association between response rates and the representativeness of the sample,19–21 the average risk of error due to nonresponse does rise as response rates fall.22 When those who do not respond to an email request are contacted by postal mail, there tends to be an increase in responses.15,23

Previous studies of the effects of administration mode on HCAHPS surveys also have found differences in response to mail and web surveys.8,24 A more recent study in a group of outpatient primary care facilities found that response rates were about 20 percentage points higher by mail than by the web alone, but the demographic profiles of respondents to the two modes were similar. A limitation of that 2017 study was that patients in the practices studied were relatively well educated and there was little representation of racial or ethnic minorities.9 With increasing use of email invitations to complete web surveys, it is important to regularly reassess these issues.

To investigate how response rates and response patterns are affected by web administration of CAHPS surveys among hospitalized patients, in the fall of 2019 we assessed web and mail approaches to conducting HCAHPS surveys. In this paper, we report how the different protocols compare with respect to response rates, the characteristics of those who respond, and the reports patients gave about their hospital experiences. Throughout, “mail” data collection refers to postal mail, and “web-based” data collection refers to having respondents complete surveys by following a URL on the Internet.

METHODS

Sampling and Data Collection Protocols

For a period of about 8 weeks, patients discharged from a hospital to their homes were sampled and assigned to one of 4 data collection groups. The hospital routinely solicits and stores patient email addresses. Prior to the study, 65% of patients had provided an email address to the hospital and 35% had not. The sample was stratified by whether or not an email address was provided to the hospital. Those who had provided an email address were randomly assigned to one of the three data collection protocols:

  1. A traditional CAHPS postal mail protocol (labeled MAIL ONLY): They were mailed a cover letter, a paper questionnaire, and postage paid envelope in which to return the survey. About 4 weeks after the initial mailing, non-respondents were sent a reminder letter and second questionnaire.

  2. A web-based protocol (labeled WEB ONLY): Selected patients were sent an email explaining the survey with an embedded link to the HCAHPS survey. Non-respondents were sent two follow-up email reminders.

  3. A web-based protocol with a mail follow-up to non-respondents (labeled WEB-MAIL): Selected patients were sent an email asking that they complete the survey, parallel to the initial WEB ONLY protocol; the difference was that after two email contacts, non-respondents were sent a paper questionnaire in the mail.

  4. A fourth group consisted of those who had not provided an email address. This sample of patients was sent a paper questionnaire using the same protocol as the MAIL ONLY group who had provided email addresses.

The survey instruments were offered in Spanish and in English in both the paper and web forms.

Measures

The respondent characteristics we measured included their demographic characteristics (age, gender, education, and ethnicity) plus patient ratings of their physical and mental health. In addition to respondent reports, we had information about age and gender from the hospital’s records.

The measures of hospital experience derived from responses to the HCAHPS survey include 6 multi-item composites: communication with nurses, communication with doctors, responsiveness of staff, communication about medications, discharge experience and experience with respect to cleanliness and quiet. The composite scores were calculated as the percentage of respondents who give the most positive response to the questions in a composite. In addition, respondents rated the hospital (from 0 to 10) and reported how likely they would be to recommend the hospital to others. The analysis focuses on the percentages giving the hospital a 10 rating and the percentages saying they would “definitely” recommend the hospital. The complete questions are included in the appendix.
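The top-box composite scoring described above can be sketched as follows. The item names and responses here are hypothetical (the actual HCAHPS items are in the appendix); the sketch averages, across the items in a composite, the share of respondents giving the most positive answer.

```python
# Hypothetical responses to a 3-item composite, coded on the HCAHPS
# "Never/Sometimes/Usually/Always" scale; "Always" is the top-box answer.
TOP = "Always"

responses = [
    # one dict per respondent: item -> answer (illustrative data only)
    {"courtesy": "Always",  "listened": "Always",    "explained": "Usually"},
    {"courtesy": "Always",  "listened": "Usually",   "explained": "Always"},
    {"courtesy": "Always",  "listened": "Always",    "explained": "Always"},
    {"courtesy": "Usually", "listened": "Sometimes", "explained": "Always"},
]

def composite_top_box(responses, items):
    """Average, across items, of the share of respondents giving the
    top-box answer -- one common way to score a CAHPS composite."""
    per_item = [
        sum(r[item] == TOP for r in responses) / len(responses)
        for item in items
    ]
    return sum(per_item) / len(per_item)

score = composite_top_box(responses, ["courtesy", "listened", "explained"])
print(f"{score:.0%}")  # average top-box share over the three items
```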

Analysis

Response rates, using AAPOR RR1,25 were calculated as the percentage of all sampled patients who returned a response by mail or via the Internet. We included those for whom the contact information was incorrect in the denominator, so the results reflect both our ability to contact patients using the information available to the hospital and their willingness to respond. Because we had age and gender based on patient records, we were also able to calculate response rates by those two variables. For education, race/ethnicity, and ratings of physical and mental health, we looked at the characteristics of respondents by protocol to compare rates at which patients with various characteristics responded.
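For a single-mode protocol with no partial completes, AAPOR RR1 reduces to completed surveys divided by all sampled cases, with cases that had bad contact information kept in the denominator as described above. A minimal sketch, using the WEB ONLY counts reported in Table 1:

```python
def rr1(completes: int, sampled: int) -> float:
    """AAPOR Response Rate 1: completed surveys over all sampled cases,
    keeping cases with bad contact information in the denominator."""
    return completes / sampled

# WEB ONLY protocol: 167 web returns from 800 sampled patients (Table 1)
rate = rr1(completes=167, sampled=800)
print(f"{rate:.0%}")  # matches the 21% reported in Table 1
```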

The analyses address two core questions. First, for the patients who provided an email address, we assess how the results vary across the three data collection protocols. Z-tests for proportions are used to compare demographic variables between the data collection protocols using two-sample tests or, in one set of analyses, to data from the sampling frame using one-sample tests. In addition to descriptive statistics, we also calculated hospital experience composite scores and ratings, adjusting for differences in the characteristics of those who responded to the three protocols: age, gender, education, race, self-reported physical health, and self-reported mental health. The question is whether one would expect to get systematically different results depending on which of the three data collection protocols was used. Logistic regression models were estimated predicting the highest “top box” value for each experience score or rating, controlling for demographic characteristics. We present and compare the adjusted percentages from these models for each hospital experience score and rating.
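One common way to produce adjusted percentages from such logistic models is marginal standardization: fix the protocol at each value in turn, predict a top-box probability for every respondent in the pooled sample, and average those predictions. The paper does not report its model coefficients, so the coefficients, covariates, and protocol effects below are purely illustrative.

```python
import math

def logistic(x: float) -> float:
    """Inverse logit: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical fitted coefficients for P(top-box answer); illustrative
# values only, not the study's actual estimates.
COEF = {
    "intercept": -0.3,
    "protocol_web_only": 0.4,  # vs. MAIL ONLY reference category
    "protocol_web_mail": 0.2,
    "age_65_plus": 0.3,
    "college_grad": -0.1,
}

def predict(row: dict, protocol: str) -> float:
    """Predicted top-box probability with protocol fixed to one value."""
    x = COEF["intercept"]
    x += COEF["protocol_web_only"] * (protocol == "WEB ONLY")
    x += COEF["protocol_web_mail"] * (protocol == "WEB-MAIL")
    x += COEF["age_65_plus"] * row["age_65_plus"]
    x += COEF["college_grad"] * row["college_grad"]
    return logistic(x)

# Pooled respondents (covariates only; protocol is overridden above).
pooled = [
    {"age_65_plus": 1, "college_grad": 0},
    {"age_65_plus": 0, "college_grad": 1},
    {"age_65_plus": 0, "college_grad": 0},
]

# Adjusted percentage for each protocol: average predicted probabilities
# over the *same* pooled covariate distribution.
for proto in ["WEB ONLY", "WEB-MAIL", "MAIL ONLY"]:
    adj = sum(predict(r, proto) for r in pooled) / len(pooled)
    print(f"{proto}: {adj:.0%}")
```

Because every protocol is evaluated against the same covariate distribution, differences in the adjusted percentages reflect the protocol terms rather than differences in who responded.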

Second, we compare the results of those who did and did not provide an email address by comparing the responses of the no email patients (who responded via postal mail) with the patients who provided an email address and were assigned to the postal MAIL ONLY protocol. One key question is whether leaving out those who do not provide an email address, for example so the survey could be done entirely on the web, would systematically affect the results of an HCAHPS survey.

RESULTS

Response rates and respondent characteristics

Table 1 shows that among those providing an email address, the response to mail with a paper questionnaire (MAIL ONLY) is better than the response to email requests to do the survey on the web. The WEB ONLY approach had only a 21% response rate. The MAIL ONLY approach got a 33% response rate. When the WEB-MAIL combination protocol was used, about the same number responded via the web as in the WEB ONLY protocol. When mail follow-up was added to the web-based approach, the response to the combined approach was about the same as to the MAIL ONLY approach (33%). Thus, using the combination did not increase the overall response rate, but adding the mail option to the web protocol raised the response rate to the level of the MAIL ONLY protocol.

Table 1.

Responses to Survey Request by Data Collection Protocol

PROTOCOL Initial Samples Paper returns Web returns Final response rate
Patients who provided email addresses
WEB ONLY 800 NA 167 21%***
WEB-MAIL 800 107 153 33%
MAIL ONLY 800 264 NA 33%
Patients who did not provide email addresses
MAIL ONLY 800 206 NA 26%**
*** Differed from MAIL ONLY (Email provided), p<.001
** Differed from MAIL ONLY (Email provided), p<.01

Those who did not provide their email addresses to their providers responded at a somewhat lower rate to the mail protocol than those who did provide an email address: 26% vs 33%.
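The two-sample z-test for proportions used for comparisons like this one can be sketched as follows, applied to the MAIL ONLY returns in Table 1 (264/800 among those providing email addresses vs. 206/800 among those who did not):

```python
import math

def two_sample_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sample z-test for proportions with a pooled standard error.
    Returns the z statistic and the two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# MAIL ONLY: email provided (264/800) vs. no email provided (206/800)
z, p = two_sample_z(264, 800, 206, 800)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < .01, consistent with Table 1
```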

Table 2 presents response rates by age and gender of selected patients. The response rates across data collection protocols do not differ significantly by gender. However, those over 65 responded at a higher rate than those who are younger for all protocols. This pattern is more pronounced for the MAIL ONLY protocols than it is for the protocols that used the web. Those in the sample who did not provide email addresses were more likely to be male and to be 65 or older than those who did provide email addresses (data not shown).

Table 2.

Response Rates by Data Collection Protocol, Gender and Age

PROTOCOL Response Rate Men Response Rate Women Response Rate Under 65 Response Rate 65 or Older
Patients who provided email addresses
WEB ONLY 22% (N=300) 22% (N=461) 19% (N=542) 29%** (N=219)
WEB-MAIL 34% (N=296) 35% (N=445) 32% (N=505) 40%** (N=236)
MAIL ONLY 37% (N=302) 31% (N=479) 27% (N=521) 47%*** (N=260)
Patients who did not provide email addresses
MAIL ONLY 29% (N=373) 25% (N=381) 19% (N=391) 36%*** (N=363)
** Differed from Under 65, same protocol, p<.01
*** Differed from Under 65, same protocol, p<.001

Table 3 presents four self-reported respondent characteristics by protocol. Since those providing email addresses were randomized to the three protocols, we expected the results to be similar across protocols, and that is the case. The only notable difference is that those responding in the WEB ONLY protocol were more likely than those in other protocols to rate their mental health as excellent or very good.

Table 3.

Characteristics of Respondents by Data Collection Protocol

PROTOCOL % College Grad % White, Non-Hispanic % Rated Physical Health Excellent or Very Good % Rated Mental Health Excellent or Very Good
Patients who provided email addresses
WEB ONLY (N=163) 47% 80% 57% 82%*
WEB-MAIL (N=248) 52% 85% 55% 74%
MAIL ONLY (N=256) 52% 86% 51% 71%
Patients who did not provide email addresses
MAIL ONLY (N=200) 18%*** 80% 31%*** 52%***
* Differed from MAIL ONLY (Email provided), p<.05
*** Differed from MAIL ONLY (Email provided), p<.001

Those who did not provide email addresses and responded by MAIL ONLY were very different from the MAIL ONLY respondents who did provide email addresses: they were much less likely to be college graduates and rated their physical and mental health much lower. Since the two groups had the same data collection protocol, we can conclude that the main explanation for these differences is that the characteristics of those not providing email addresses differed from those who did with respect to formal education and self-assessed physical and mental health.

Hospital Experiences

Table 4 presents data from the six composite measures of patient hospital experiences. It presents adjusted percentages who gave top scores to items in each composite, derived from logistic models that control for demographic differences between the groups.

Table 4.

Adjusted+ Percentage of Respondents Who Gave Top Scores to Questions about Patient Experience by Data Collection Protocol

PROTOCOL Communication Nurses Communication Drs. Responsiveness Communication about Meds Discharge Process Clean/Quiet Experience
Patients who provided email addresses
WEB ONLY (N=163) 78% 69% 70%** 57% 86% 43%
WEB-MAIL (N=248) 75% 75% 66%* 53% 79% 49%**
MAIL ONLY (N=256) 70% 75% 56% 51% 81% 37%
Patients who did not provide email addresses
MAIL ONLY (N=200) 68% 66% 53% 54% 75% 47%*
* Differed from MAIL ONLY (Email Provided), p<.05
** Differed from MAIL ONLY (Email Provided), p<.01
+ Adjusted for age, gender, education, race, self-reported physical health, and self-reported mental health.

For those who provided email addresses, the results are generally consistent across protocols. However, those in the WEB ONLY group were significantly more positive in their perception of Responsiveness than the MAIL ONLY respondents (top scores 70% vs 56%, p<.01). Those responding via the combination WEB-MAIL protocol were different from the MAIL ONLY respondents on top scores for two composites: Responsiveness (66% vs 56%, p<.05) and Clean/Quiet Experience (49% vs 37%, p<.01).

When comparing those who did and did not provide email addresses who responded by MAIL ONLY, there is one statistically significant difference: a higher percentage of top scores for Clean/Quiet Environment from those who did not provide email addresses (47% vs 37%, p<.05).

Finally, the HCAHPS survey includes two overall assessments of the hospital: a rating of the hospital on a scale from 0 to 10 and a question about how strongly respondents would recommend the hospital. The results, in Table 5, show there are no statistically significant differences for these measures based either on data collection protocol or whether or not an email address was provided.

Table 5.

Adjusted+ Overall Assessments of Hospitals by Data Collection Protocol

PROTOCOL Percentage Giving Hospital Top Rating (10) Percentage Saying They Would Definitely Recommend Hospital
Patients who provided email addresses
WEB ONLY (N=163) 53% 76%
WEB-MAIL (N=248) 48% 79%
MAIL ONLY (N=256) 46% 78%
Patients who did not provide email addresses
MAIL ONLY (N=200) 49% 75%
+ Adjusted for age, gender, education, race, self-reported physical health, and self-reported mental health.

DISCUSSION

The key reason for creating the initial CAHPS survey instruments was so that patient experiences with ambulatory care could be compared across health care plans. Later versions were developed for assessing experience with different providers, such as hospitals. Having standardized survey methods for sampling and collecting data, which forms the foundation of the CAHPS program, allows for comparing groups with greater accuracy. These results indicate that the reports of patient experience across data collection protocols are generally similar. However, even after adjustments are made for differences in the characteristics of respondents by protocol, there are some substantive differences that would distort comparisons. Moreover, those who did not provide email addresses, and therefore would have to be surveyed in some mode other than web, differed from the other patients in age, gender, education, and self-assessments of health; they, too, had a substantive difference in answers from those who provided email addresses that adjustments for known respondent differences did not eliminate. Thus, leaving this group out of a survey would also distort comparisons.

The development of sophisticated and efficient web surveys provides a data collection option that was not available when the CAHPS program started. The fact that the majority of patients now provide their email addresses to their providers in many settings means that an easy, cost-effective way to gather patient experience data is now widely available.

Collecting data via the web saves the costs of printing survey instruments, the cost of mailing survey instruments to respondents and return postage, and the cost of data entry. In contrast, once a research assistant spends a day or two programming the online survey, the costs of data collection via the web are minimal. If a survey is done using only the web, the cost difference compared to a postal mail survey is substantial, with differences increasing as the sample size increases. If a combination of web and postal mail is used, the savings are more modest. The majority of those sampled will end up being sent a mail survey, and more people will respond by mail than via the web. Again, the larger the sample, and the more people who can be induced to respond via the web (and therefore do not need to be sent a mail questionnaire), the greater the savings. However, if web-based surveys systematically provide different estimates than the predominant mail and telephone protocols, a central goal of the CAHPS program, collecting comparable data across institutions, is threatened.

A study similar to this one compared mail and web-based approaches to surveying ambulatory patients and found few differences by protocol in the characteristics of respondents or their reported patient experiences.9

The results of the study reported herein are different. Older respondents were overrepresented in all the data collection groups, but that was particularly the case for respondents who did not provide email addresses (who responded by mail). The important issue, however, is that the patient experience reports were significantly different for one or two composites (depending on the protocol) compared with the mail respondents, even after adjustments were made for a basic set of patient characteristics. Those who did not provide email addresses were quite different from those who did in education, ratings of physical and mental health, and age. Their adjusted reports of patient experience differed significantly on one measure.

Doing surveys using the web alone produces considerably lower response rates than postal mail, and it leaves out those who do not provide email addresses, who are quite different in important ways from those providing email addresses. The combination of mail and web approaches addressed the response rate issue and the issue of including those who do not provide email addresses. However, in this study, the substantive results were not strictly comparable to those from the mail only protocol. Elliott and colleagues8 also found mode differences in answers to HCAHPS surveys. They suggested adjusting estimates for these mode effects. The study described herein was conducted at a time when email and the Internet are more widely used, in a hospital with a high proportion of email addresses for patients.

Mode of data collection can affect results either because different people respond or because the way people answer the questions is affected by the mode. Because both modes used were self-administered and the differences appeared in only one or two measures, the effect of mode of data collection on answers is probably not the source of the differences observed. Klausch and his colleagues found no evidence of measurement differences when web and paper survey results were compared.26 More likely, it is something about the mix of who responds to the protocols, beyond the demographic characteristics and self-ratings of health for which we adjusted, that is leading to the differences. Nevertheless, the overall conclusion from this study, like Elliott’s, is that further adjustments must be made for mode of data collection if HCAHPS data collected all or in part by web are to be compared with data based on mail surveys. However, because of the potential for patient characteristics and setting to influence the extent and type of mode effects observed, additional studies are needed to help us learn how to adjust for the effects of data collection protocols on results.

Supplementary Material

Supplemental Data File (.doc, .tif, pdf, etc.)

ACKNOWLEDGEMENTS:

We acknowledge Kathryn Bell at the Center for Survey Research for contributions to the project in managing the data collection and overseeing the creation of data files for the analysis. We thank Joan Kelly, Sara Pontillo Johnson, Tina Bennett, and Alan Friedman of Yale New Haven Health for their help planning and facilitating data collection for this project.

This work was supported by a cooperative agreement, U18 HS016978, with the Agency for Healthcare Research and Quality.

Footnotes

All the listed authors contributed significantly to the work and to the preparation of this manuscript. None has any conflicts of interest to report.

Contributor Information

Floyd J Fowler, Jr, Center for Survey Research, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125.

Philip S. Brenner, Department of Sociology, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125.

J. Lee Hargraves, Center for Survey Research, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125.

Paul D. Cleary, Anna M.R. Lauder Professor of Public Health, Department of Health Policy and Management, Yale School of Public Health, PO Box 208034, New Haven, CT 06520-8034.

REFERENCES

  • 1.Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71(5):522–554. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Cleary PD. Evolving concepts of patient-centered care and the assessment of patient care experiences: Optimism and opposition. J Health Politics Policy & Law. 2016;41(4):675–696. [DOI] [PubMed] [Google Scholar]
  • 3.Darby C, Crofton C, Clancy CM. Consumer assessment of health providers and systems (CAHPS®): Evolving to meet stakeholder needs. Am J Med Qual. 2006;21(2):144–147. [DOI] [PubMed] [Google Scholar]
  • 4.Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients’ perspective: An overview of the CAHPS hospital survey development process. Health Services Research. 2005;40(6):1977–1995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.O’Malley AJ, Zaslavsky AM, Hays RD, Hepner KA, Keller S, Cleary PD. Exploratory factor analyses of the CAHPS® Hospital Pilot Survey responses across and within medical, surgical and obstetric services. Health Serv Res. 2005;40(6):2078–2095. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67(1):27–37. [DOI] [PubMed] [Google Scholar]
  • 7.Elliott MN, Beckett MK, Lehrman WG, et al. Understanding the role played by Medicare’s patient experience point system in hospital reimbursement. Health Aff. 2016;35(9):1673–1680. [DOI] [PubMed] [Google Scholar]
  • 8.Elliott M, Brown J, Lehrman W, et al. A randomized experiment investigating the suitability of speech-enabled IVR and web modes for publicly reported surveys of patients’ experience of hospital care. Medical Care Research and Review 2013;70(2):165–184. [DOI] [PubMed] [Google Scholar]
  • 9.Fowler F, Cosenza C, Cripps L, Edgman-Levitan S, Cleary P. The effect of administration mode on CAHPS response rates and results: A comparison of mail and web-based approaches. Health Serv Res. 2019;54:714–721. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Converse P, Wolfe E, Huang X, Oswald F. Response Rates for Mixed-Mode Surveys Using Mail and E-Mail/Web. American Journal of Evaluation. 2008;29(1):99–107. [Google Scholar]
  • 11.Bergeson SC, Gray J, Ehrmantraut LA, Laibson T, Hays RD. Comparing web-based with mail survey administration of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey. Primary Health Care: Open Access. 2013;3:132. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.CMS. Consumer Assessment of Healthcare Providers & Systems. https://www.cms.gov/Research-Statistics-Data-and-Systems/Research/CAHPS. Accessed 11/18/2020.
  • 13.Parast L, Mathews M, Elliott MN, et al. Effects of push-to-web mixed mode approaches on survey response rates: evidence from a randomized experiment in emergency departments. Survey Prac. 2019;12(1). [Google Scholar]
  • 14.Manfreda KL, Vehovar V. Internet surveys. In: DeLeeuw E, Hox J, Dillman D, eds. International Handbook of Survey Methodology. New York: Psychology Press; 2008:264–284. [Google Scholar]
  • 15.Messer B, Dillman D. Surveying the General Public Over the Internet Using Address-Based Sampling and Mail Contact Procedures. Public Opinion Quarterly 2011;75(3):429–457. [Google Scholar]
  • 16.Link M, Mokdad A. Alternative Modes for Health Surveillance Surveys: An Experiment with Web, Mail, and Telephone. Epidemiology. 2005;16(5):701–704. [DOI] [PubMed] [Google Scholar]
  • 17.Shih T, Xitao F. Comparing Response Rates from Web and Mail Surveys: A Meta-Analysis. Field Methods 2008;20(3):249–271. [Google Scholar]
  • 18.Dillman DA, Smyth JD, Christian LM. Internet, Mail and Mixed Mode Surveys: The Tailored Design Method. 4th ed. New York, NY: John Wiley; 2014. [Google Scholar]
  • 19.Kohut A, Keeter S, Doherty C, Dimock M, Christian L. Assessing the Representativeness of Public Opinion Surveys. Washington, DC: Pew Research Center;2012. [Google Scholar]
  • 20.Groves R Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opinion Quarterly 2006;70(5):646–675. [Google Scholar]
  • 21.Groves R, Peytcheva E. The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis. Public Opinion Quarterly 2008;72(2):167–189. [Google Scholar]
  • 22.Brick JM, Tourangeau R. Responsive survey designs for reducing nonresponse bias. Journal of Official Statistics 2017;33(3):735–752. [Google Scholar]
  • 23.Medway R, Fulton J. When More Gets You Less: A Meta-Analysis of the Effect of Concurrent Web Options on Mail Survey Response Rates. Public Opinion Quarterly 2012;76(4):733–746. [Google Scholar]
  • 24.Elliott MN, Zaslavsky AM, Goldstein E, et al. Effects of survey mode, patient mix, and nonresponse on CAHPS Hospital Survey scores. Health Serv Res. 2009;44(2):501–518. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. American Association for Public Opinion Research;2016. [Google Scholar]
  • 26.Klausch T, Hox JJ, Schouten B. Measurement effects of survey mode on the equivalence of attitudinal rating scale questions. Sociological Methods & Research 2013;42(3):227–263. [Google Scholar]
