Author manuscript; available in PMC: 2015 May 5.
Published in final edited form as: Clin Ther. 2014 May 5;36(5):689–696.e1. doi: 10.1016/j.clinthera.2014.04.004

Evaluating the Psychometric Properties of the CAHPS Patient-Centered Medical Home Survey

Ron D Hays A, Laura J Berman B, Michael H Kanter C, Mildred Hugh D, Rachel R Oglesby E, Chong Y Kim F, Mike Cui G, Julie Brown H
PMCID: PMC4087122  NIHMSID: NIHMS594144  PMID: 24811752

Abstract

Purpose

To evaluate the reliability and validity of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Patient-Centered Medical Home Survey.

Methods

We conducted a field test of the CAHPS patient-centered medical home (PCMH) survey with 2,740 adults from 6 sites of care affiliated with a west-coast staff-model health maintenance organization; data were collected by mail (n = 1,746), phone (n = 672), and web (n = 322).

Findings

An overall response rate of 37% was obtained. Internal consistency reliability estimates for 7 multi-item scales were as follows: access to care (5 items, alpha = 0.79), communication with providers (6 items, alpha = 0.93), office staff courtesy and respect (2 items, alpha = 0.80), shared decision-making about medicines (3 items, alpha = 0.67), self-management support (2 items, alpha = 0.61), attention to mental health issues (3 items, alpha = 0.80), and care coordination (4 items, alpha = 0.58). The number of responses needed to obtain reliable information at the site-of-care level was generally acceptable (< 300 for 0.70 reliability), except for the self-management support and shared decision-making about medicines composites. Item-scale correlations supported distinct composites, except that access to care and shared decision-making about medicines overlapped with the communication with providers scale. In a multiple regression model, communication with providers, shared decision-making about medicines, care coordination, and access to care were significantly uniquely associated with the global rating of the provider (dependent variable).

Implications

This study provides further support for the reliability and validity of the CAHPS PCMH survey, but refinement of the self-management support and shared decision-making scales is needed. The survey can be used to provide information about the performance of different health plans on multiple domains of health care, but some of the survey items warrant further improvement.

Keywords: CAHPS PCMH survey, patient experience measure

Introduction

Patient-centered medical homes are emerging as an integral part of the delivery of health care in the United States.1 The Consumer Assessment of Healthcare Providers and Systems (CAHPS®) patient-centered medical home (PCMH) survey was developed to enable evaluation of patient care experiences in sites of care at different stages of implementation of the medical home model of care delivery (from sites considering adoption of some features of the medical home model to fully recognized medical homes).

Scholle et al.2 reported results that provided initial support for the reliability and validity of the survey, but that study was based on an east-coast sample and a 25% response rate. The objective of the present study was to conduct an independent assessment of the reliability and validity of the CAHPS PCMH survey, including an evaluation of the hypothesized multi-item scales. We used data collected from members of a health maintenance organization located on the west coast of the U.S.

Methods

The CAHPS PCMH survey includes the CAHPS clinician and group survey core measures of access to care, communication with providers, and office staff courtesy and respect.3,4 The PCMH survey also includes a 3-item shared decision-making about medicines scale, a 2-item self-management support scale, and a 3-item behavioral/whole person (attention to mental health issues) scale. Two separate items were recommended to assess care coordination (got test results, and provider informed and up-to-date about care from specialists) because analyses did not support a multi-item scale. Subsequently, a 5-item care coordination scale was developed based on analysis of the CAHPS Medicare survey.5 The patient experience measures in the CAHPS PCMH survey are summarized in the Appendix. The survey also includes questions assessing patient demographic characteristics (gender, age, race/ethnicity, education) and self-rated health.

We conducted a field test with adults 18 and older from 6 sites of care affiliated with a staff model health maintenance organization (i.e., care is provided in facilities owned by the organization, which employs the providers of care and has a high degree of control over the care delivered) located on the west coast of the U.S. Two of the sites were located in the north and the other four in the south. The two northern facilities achieved PCMH recognition in 2012. Two of the four southern facilities obtained PCMH recognition in 2009 and 2010, while the other two facilities did not have PCMH recognition but were otherwise similar in their medical practices.

The study included a random selection of adult members (18 years or older) with one or more visits to a primary care provider in the prior 12 months. Members who had completed any patient experience survey within the prior 3 months (northern facilities) or prior 6 months (southern facilities) were excluded from this study.

Two different data collection approaches were used (see Table 1). A web-mail protocol was used for 2,000 members with email addresses in the north: data collection began by web (3 email invitations), and after 4 weeks non-respondents were sent the survey by mail. Both English- and Spanish-language CAHPS surveys were administered, depending on language preference. A sample of 1,714 members with email addresses in the north was selected for the mail-phone protocol.

Table 1.

Summary of Study Design

Protocol: Web-Mail (North, English) | Mail-Phone (North, English) | Mail-Phone (South, English) | Mail-Phone (South, Spanish) | Overall
Sampled: 2,000 | 1,714 | 1,714 | 2,093 | 7,432
Respondents: 774 | 946 | 828 | 192 | 2,740
Response rate: 39% | 55% | 48% | 9% | 37%

The second approach was a mail-phone protocol. Data collection was begun by mail (2 mailings), and after 4 weeks non-respondents were called for a phone interview. The mail-phone protocol was used for 1,714 English-preferring members in the south. This protocol was also used for 2,093 members in the south with a Spanish-language preference.

Paper and electronic study invitations were sent by the RAND Survey Group using the health plan’s letter templates and leadership names/signatures. Participants were told that the survey would take less than 20 minutes on average to complete and no financial compensation was offered for completion. All data collection was conducted by RAND and began in September 2012 and ended in February of 2013.

Analysis Plan

We estimate internal consistency reliability6 for 7 multi-item scales, unadjusted mean score differences, reliabilities and intraclass correlations at the site-level7 for the six study locations, number of responses needed for target levels of reliability,8 item-scale correlations for the scales, and correlations among the scales. Because the CAHPS PCMH survey is used to compare providers rather than patients, the site-level intraclass correlations are of utmost importance and are used to estimate the number of responses needed to obtain reliable information. The site-level reliabilities and intraclass correlations are estimated by partitioning between versus within site variance in one-way ANOVAs.9
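
As an illustration only, the following minimal sketch (in Python rather than the SAS programs actually used) computes coefficient alpha for one multi-item scale and the site-level intraclass correlation and reliability of site means from a one-way ANOVA; the data frame and column names are hypothetical.

```python
# Minimal sketch, not the authors' code: `items` holds the item responses for one
# scale (rows = respondents); `scores` and `site` are a scale score and site label.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha (Cronbach, 1951) for a set of items."""
    items = items.dropna()
    k = items.shape[1]
    sum_item_var = items.var(ddof=1).sum()        # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

def site_icc_and_reliability(scores: pd.Series, site: pd.Series):
    """One-way ANOVA estimates: between-site intraclass correlation and
    the reliability of site means (between vs. within site variance)."""
    d = pd.DataFrame({"y": scores, "site": site}).dropna()
    groups = [g.to_numpy() for _, g in d.groupby("site")["y"]]
    k, n_i = len(groups), np.array([len(g) for g in groups])
    N, grand = n_i.sum(), d["y"].mean()
    ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)     # adjusted average site size
    icc = (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)
    site_reliability = (ms_between - ms_within) / ms_between
    return icc, site_reliability
```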

We also regress the CAHPS 0–10 global rating of the provider (where 0 is worst possible provider and 10 is best possible provider) on the CAHPS scales, controlling for age (18–24, 25–34, 35–44, 45–54, 55–64, 65–74), education (8th grade or less, some high school, high school graduate, some college, college graduate), self-rated general health (poor, fair, good, very good), and self-rated mental health (poor, fair, good, very good).
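
A minimal sketch of an equivalent adjusted model in Python (statsmodels), not the SAS model actually fitted; the data frame `survey` and all column names (the 0–10 rating, the scale scores, and the categorical covariates) are hypothetical. Standardizing the rating and scale scores beforehand (z-scores) would yield standardized coefficients of the kind reported in the Results.

```python
# Sketch only: hypothetical column names in a pandas DataFrame `survey`.
import statsmodels.formula.api as smf

formula = (
    "provider_rating ~ access + communication + office_staff + shared_dm + "
    "self_mgmt_support + attention_mh + care_coord + "
    "C(age_group) + C(education) + C(sr_general_health) + C(sr_mental_health)"
)
fit = smf.ols(formula, data=survey).fit()
print(fit.rsquared_adj)   # adjusted R-squared
print(fit.params)         # coefficients for each scale and covariate level
```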

CAHPS items assess a variety of aspects of care and are answered only if they apply to a given respondent. Because of this structured missing data, estimating correlations of items with scales using listwise deletion of cases would be based on a small and unrepresentative subset of the sample. Hence, for the item-scale correlation matrix only, we imputed missing item responses using other items in the matrix and a single Markov-chain Monte Carlo imputation (maximum likelihood estimates of the covariance matrix obtained with the expectation-maximization algorithm). The median fraction of missing data was 0.03.
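
The imputation and item-scale correlation step could be approximated as sketched below. This is a sketch of the idea only: scikit-learn's chained-equations imputer stands in for the single MCMC/EM-based imputation actually used, and `survey`, `item_columns`, and `scale_items` (a dict mapping scale names to their item columns) are hypothetical.

```python
# Sketch of the general idea only; not the procedure used in the study.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import pandas as pd

items = survey[item_columns]                       # items with structured missingness
completed = pd.DataFrame(
    IterativeImputer(random_state=0).fit_transform(items), columns=item_columns
)
# Scale scores as the mean of their constituent (imputed) items,
# then correlate every item with every scale score.
scale_scores = {name: completed[cols].mean(axis=1) for name, cols in scale_items.items()}
item_scale_r = pd.DataFrame(
    {name: completed.corrwith(score) for name, score in scale_scores.items()}
)
```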

Analyses were conducted using SAS® version 9.2.

Results

The overall study response rate was 37% (2,740 completed surveys out of 7,432 eligible), with 1,746 completes by mail, 672 by phone, and 322 by web. Response rates varied from 9% for Spanish-language members in the south to 55% for English-language members in the north (mail-phone protocol).

The majority of the 2,740 respondents were female (62%), 55 years or older, and had at least some college education. Self-rated general health and mental health were fair or poor for 17% and 12% of the sample, respectively (Table 2).

Table 2.

Demographic Characteristics of the Sample

Gender
 Female (n = 1,692, 62%)
 Male (n= 1,048, 38%)
Age (missing = 19)
 18–24 (n = 82, 3%)
 25–34 (n = 144, 5%)
 35–44 (n = 310, 11%)
 45–54 (n = 484, 18%)
 55–64 (n = 701, 26%)
 65–74 (n = 641, 24%)
 75+ (n = 359, 13%)
Race/ethnicity (missing = 90)
 Hispanic (n = 431, 16%)
 Non-Hispanic White (n = 1,890, 71%)
 Non-Hispanic Black (n = 109, 4%)
 Non-Hispanic Asian (n = 88, 3%)
 Non-Hispanic Native Hawaiian or other Pacific Islander (n = 8, < 1%)
 Non-Hispanic American Indian or Alaska Native (n = 7, < 1%)
 Non-Hispanic Other/2 or more races (n = 117, 4%)
Education (missing = 36)
 8th grade or less (n = 82, 3%)
 Some high school (n = 152, 6%)
 High school graduate (n = 583, 22%)
 Some college (n = 1,152, 43%)
 4 year college graduate (n = 334, 12%)
 More than 4 year college graduate (n = 401, 15%)
Self-rated general health (missing = 30)
 Excellent (n = 301, 11%)
 Very Good (n = 897, 33%)
 Good (n = 1,063, 39%)
 Fair (n = 377, 14%)
 Poor (n = 72, 3%)
Self-rated mental health (missing = 24)
 Excellent (n = 703, 26%)
 Very Good (n = 950, 35%)
 Good (n = 737, 27%)
 Fair (n = 280, 10%)
 Poor (n = 46, 2%)

Note: Response categories represent the items as administered in the survey.

Internal consistency reliability estimates for the 7 multi-item scales (overall sample and median estimate within the 6 sites) were as follows: access to care (5 items, alpha = 0.79, median = 0.83), communication with providers (6 items, alpha = 0.93, median = 0.92), office staff courtesy and respect (2 items, alpha = 0.80, median = 0.81), shared decision-making about medicines (3 items, alpha = 0.67, median = 0.69), self-management support (2 items, alpha = 0.61, median = 0.62), attention to mental health issues (3 items, alpha = 0.80, median = 0.80), and care coordination (4 items, alpha = 0.58, median = 0.47).

Table 3 provides mean score differences (scales scored on a 0–100 possible range, with a higher score representing a more positive experience with care) for the 6 sites participating in the study. The differences shown are relative to the site with the least positive score on each scale. There were no significant differences between sites on shared decision-making and self-management support. Location 5 had the lowest (least positive) scores on 4 of the scales, location 6 on 2 scales, and location 4 on 1 scale.

Table 3.

Means (Standard Errors) for CAHPS Scale Differences by Location (n = 2,740)

Location 1 (N = 845) Location 2 (N = 230) Location 3 (N = 875) Location 4 (N = 204) Location 5 (N = 336) Location 6 (N = 250)
Region North South North South South South
PCMH recognized? Yes No Yes Yes No Yes
Data Collection Web-Mail Mail-Phone Mail-Phone Mail-Phone Mail-Phone Mail-Phone
Scales
Access 7.2b (0.75) 5.5b (1.65) 11.1a (0.66) 0.0c (2.03) 5.0b (1.31) 0.4c (1.75)
Communication 4.0a,b (0.61) 1.3b,c (1.49) 4.9a (0.58) 1.0b,c (1.65) 0.0c (1.21) 2.5a,b,c (1.28)
Office Staff 4.1a,b (0.57) 1.3b (1.47) 6.6a (0.52) 2.3b (1.44) 4.0a,b (0.94) 0.0c (1.34)
Shared Decision Making 5.0a,b (1.27) 7.6a (2.53) 7.9a (1.13) 4.7a,b (2.92) 0.0b (2.27) 7.0a (2.29)
Self-Management Support 3.9a (1.44) 3.5a (2.68) 5.9a (1.40) 5.8a (2.92) 0.0a (2.12) 2.0a (2.65)
Attention Mental Health 10.7a (1.40) 6.2a,b (2.55) 10.3a (1.37) 1.8b (2.61) 0.0b (1.99) 3.7b (2.33)
Care coord. 4.7a,b (0.79) 1.1b,c (1.74) 7.1a (0.72) 0.1c (1.90) 0.3c (1.43) 0.0c (1.78)

Note: Numbers are differences between the lowest (least positive) scoring location on the scale and the other 5 locations (0–100 possible score on scales). Sites that share a superscript on a composite (row) do not differ significantly.

Estimated reliabilities and intraclass correlations at the level of the 6 locations were as follows: access to care (0.931 and 0.029), communication with providers (0.783 and 0.008), office staff courtesy and respect (0.873 and 0.015), shared decision-making about medicines (0.590 and 0.006), self-management support (0.150 and <.001), attention to mental health issues (0.829 and 0.011), and care coordination (0.872 and 0.015). The number of responses estimated to obtain reliabilities of 0.70, 0.80 and 0.90 per site are reported in Table 4. The number of responses for reliability of 0.70 ranged from 79 to 396 except for the self-management support scale, which had essentially no reliability at the site-level.

Table 4.

Number of Responses per Site Needed for 0.70, 0.80 and 0.90 Reliability

Scale Reliability = 0.70 Reliability = 0.80 Reliability = 0.90
Access to Care 79 135 303
Communication with Providers 295 506 1,139
Office Staff Courtesy and Respect 153 263 591
Shared Decision-Making about Medicines 396 679 1,527
Self-management Support 5,979 10,250 23,063
Attention to Mental Health issues 218 374 842
Care Coordination 156 268 603

Note: Estimates are derived using Spearman-Brown formula8 from intraclass correlations reported in the text.
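
For reference, the Spearman-Brown projection underlying Table 4 can be written as follows: if $\rho$ is the site-level intraclass correlation, the reliability of a site mean based on $n$ responses is

\[ R_n = \frac{n\rho}{1 + (n - 1)\rho}, \qquad \text{so} \qquad n = \frac{R_n(1 - \rho)}{\rho(1 - R_n)}. \]

For example, using the access to care intraclass correlation of 0.029 and a target reliability of 0.70 gives $n = 0.70(1 - 0.029)/(0.029 \times 0.30) \approx 78$, in line with the 79 reported in Table 4 (the small difference reflects rounding of the intraclass correlation).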

Sample sizes for each of the CAHPS PCMH survey items are given in Table 5. The variation in sample size reflects the differential applicability of the content represented by the items. The smallest sample size (n = 222) and largest fraction of missing data (0.93) were observed for item 16 (getting an answer to a medical question after regular office hours).

Table 5.

Item-Scale Correlations for Hypothesized Scales (n = 2,740)

Item n before imputation Access Communication Office Staff Shared Decision-Making Self-Management support Attention Mental Health Care Coord.
6 1,356 0.54* 0.41 0.29 0.29 0.16 0.08 0.25
9 2,089 0.54* 0.37 0.30 0.24 0.17 0.09 0.25
14 1,098 0.46* 0.44 0.32 0.30 0.13 0.13 0.24
16 222 0.07* 0.07 0.04 0.06 0.05 0.04 0.07
18 2,696 0.37* 0.35 0.28 0.26 0.13 0.10 0.20
19 2,720 0.46 0.79* 0.33 0.47 0.24 0.15 0.36
20 2,717 0.45 0.87* 0.32 0.52 0.26 0.16 0.36
22 2,351 0.46 0.84* 0.31 0.53 0.27 0.16 0.37
24 2,708 0.45 0.71* 0.33 0.50 0.31 0.20 0.40
25 2,723 0.40 0.82* 0.33 0.52 0.26 0.15 0.36
26 2,719 0.45 0.76* 0.36 0.49 0.27 0.16 0.35
47 2,693 0.39 0.36 0.68* 0.27 0.17 0.10 0.25
48 2,705 0.31 0.34 0.68* 0.21 0.13 0.07 0.21
31 1,449 0.31 0.57 0.25 0.53* 0.31 0.27 0.35
32 1,434 0.26 0.39 0.17 0.50* 0.33 0.31 0.34
33 1,431 0.29 0.46 0.22 0.45* 0.32 0.24 0.36
37 2,694 0.19 0.31 0.17 0.35 0.44* 0.30 0.43
38 2,675 0.15 0.22 0.11 0.33 0.44* 0.41 0.34
41 2,702 0.11 0.15 0.07 0.30 0.34 0.65* 0.24
42 2,709 0.14 0.20 0.09 0.31 0.39 0.68* 0.27
43 2,712 0.10 0.14 0.08 0.26 0.33 0.62* 0.23
36 1,342 0.14 0.21 0.13 0.22 0.27 0.15 0.20*
40 2,441 0.16 0.28 0.13 0.33 0.29 0.21 0.25*
46 2,740 0.07 0.11 0.07 0.1 0.27 0.18 0.19*
28–29 2,395 0.43 0.47 0.31 0.37 0.26 0.16 0.20*

Item-scale correlations provide support for the placement of the items in the hypothesized scales, with 5 exceptions. One access item (16. When phoning this provider’s office after regular office hours, how often did you get an answer to your medical question as soon as you needed?) correlated very weakly with the access to care scale. Two other access items (14. Getting an answer to a medical question during regular office hours; 18. Seeing a provider within 15 minutes of the appointment time) correlated about as highly with the communication scale as with the hypothesized access to care composite. Finally, two of the shared decision-making about medicines items (items 31 and 33) correlated as highly with the communication with providers scale as with their hypothesized scale.

As shown in Table 6, correlations among the seven scales ranged from 0.09 (office staff courtesy and respect with attention to mental health issues) to 0.57 (communication with providers with shared decision-making about medicines). The regression of the global rating of the provider item on the scales yielded an adjusted R-squared of 70% (n = 1,431, df = 26, F = 128.22, p < 0.0001), with 4 of the 7 composites significantly uniquely associated with the global rating (standardized regression coefficients followed by zero-order correlations): communication with providers (B = 0.65, p < 0.01; r = 0.80), shared decision-making about medicines (B = 0.12, p < 0.01; r = 0.56), care coordination (B = 0.12, p < 0.01; r = 0.56), access to care (B = 0.04, p = 0.02; r = 0.41), self-management support (B = 0.03, NS, p = 0.10; r = 0.30), office staff courtesy and respect (B = −0.02, NS, p = 0.15; r = 0.30), and attention to mental health issues (B = 0.01, NS, p = 0.58; r = 0.18).

Table 6.

Correlations among Scales

SCALE Access to Care Communication with Providers Office Staff Courtesy and Respect Shared Decision-Making about Medicines Self-Management Support Attention to Mental Health Issues Care Coordination
Access 1.00
Communication 0.47 1.00
Office Staff 0.37 0.38 1.00
Shared Decision-Making 0.31 0.57 0.20 1.00
Self-Management Support 0.19 0.31 0.17 0.38 1.00
Attention Mental Health 0.14 0.19 0.09 0.31 0.42 1.00
Care Coordination 0.32 0.44 0.25 0.42 0.36 0.24 1.00

Note: All pairwise correlations are significant at p < 0.0001.

Discussion

This study provides further information about the reliability and validity of the CAHPS PCMH survey. The study was limited to a west coast sample of health maintenance organization members. In addition, the overall response rate was lower than desired (37%). Despite these limitations, the study provided an important opportunity to evaluate the survey on a large sample of 2,740 respondents in a different location and with a different system of care than the original published evaluation of the measure.2

Internal consistency reliabilities for the scales exceeded those in the previously published PCMH field test2 except for the office staff courtesy and respect scale (coefficient alpha was 0.80 here versus 0.85 in the previous study; χ2 (1 df) = 14.47, p < 0.001).10 Site-level reliability estimates indicate that 0.70 reliability can be achieved with fewer than 300 completes per site for 5 of the 7 scales; shared decision-making required about 396 completes. The self-management support scale did not discriminate among the 6 sites in this study. Its site-level reliability was also suboptimal in the previous study, but strong stakeholder support for including it led Scholle et al.2 to retain it.

The getting an answer to a medical question after regular office hours question (item 16) correlated weakly with the access to care scale in this study, but it correlated reasonably well with the access scale (r = 0.53) in the Scholle et al.2 study. Only 8% of the respondents in this study reported phoning the provider’s office after hours. In this system of care, medical questions after regular hours are directed to the “advice nurse” line; physician groups in other systems of care handle after-hours medical questions in various ways (e.g., an answering service, or an “on-call” arrangement with physicians in nearby medical groups), so few patients would call the provider’s office knowing that the office is closed. Refinement of the gate question to capture after-hours calls may be needed.

The saw the provider within 15 minutes of appointment time question (item 18) correlated as highly with the communication scale as with the access scale. Of the access items, this item also had the lowest correlation with the access scale score in the prior study.2 The got an answer to a medical question during regular office hours question (item 14) likewise correlated as highly with the communication scale as with the access scale in this study.

Two of the shared decision-making questions (items 31 and 33) correlated as highly with the communication with providers scale as with the other items in the shared decision-making scale. The shared decision-making scale also correlated 0.57 with the communication with providers scale, the largest correlation among the 7 CAHPS scales. Nevertheless, the shared decision-making scale had the second strongest unique association in the multiple regression of the global rating of the provider item on the CAHPS scales (tied with care coordination). Shared decision-making about medications is one aspect of communication with providers, but it has reduced applicability because not all patients take medications.

Concerns about ceiling effects for the CAHPS communication scale have prompted debate about its value in the calculation of overall performance by organizations such as the National Committee for Quality Assurance (NCQA). As noted by Quigley, Martino, et al.11 and supported in this study, the CAHPS communication scale is psychometrically sound and has the strongest relationship to overall ratings of the provider of care. These findings support continued use of the communication with providers scale for health plan and other accreditation efforts. Moreover, a recent study suggested that it may be important to focus on individual communication items for quality improvement efforts and that the showing respect for what patients say item is especially important for specialty care.12

Among the four sites of care in the south, there were no clear differences in scores between the sites with and without PCMH recognition, except on access to care, where the non-PCMH sites scored significantly higher. The scale score differences by site show that one of the non-PCMH sites (location 5) had the lowest scores on 4 of the 7 CAHPS scales. However, the other non-PCMH site (location 2) scored relatively well. These results indicate that PCMH recognition is not necessarily associated with higher CAHPS PCMH scores. For practices that are part of a larger system of care, administering the CAHPS PCMH survey at the system level may suffice, as results at the practice-site level may add little information.

Conclusions

The results reported here support the CAHPS PCMH survey scales in general, but the performance of the self-management support and shared decision-making about medicines scales was suboptimal. Further refinement of these scales, to improve reliability and to distinguish shared decision-making from communication with providers, is recommended. Additional evaluation of the existing scales and of any modifications to them needs to be performed. The 37% response rate in this study exceeded the 25% response rate obtained in the Scholle et al. study.2 Despite this improvement, the response rate for Spanish-language respondents was extremely low (9%). Future work is needed to enhance participation rates in this important subgroup of the overall population.

Appendix. CAHPS Patient-Centered Medical Home Adult Survey Content

Access to Health Care (5 items)
  • 6. Getting appointments for urgent care

  • 9. Getting appointments for routine care

  • 14. Getting an answer to a medical question during regular office hours

  • 16. Getting an answer to a medical question after regular office hours

  • 18. Saw provider within 15 minutes of appointment time

Communication with Providers (6 items)
  • 19. Provider explanations easy to understand

  • 20. Provider listens carefully

  • 22. Provider gives easy to understand information

  • 24. Provider knows important information about medical history

  • 25. Provider shows respect for what you have to say

  • 26. Provider spends enough time with you

Courteous and Helpful Office Staff (2 items)
  • 47. Clerks and receptionists were helpful

  • 48. Clerks and receptionists treat you with courtesy and respect

Shared Decision-Making about Medicine (3 items)
  • 31. Provider talked about reasons to take a medicine*

  • 32. Provider talked about reasons not to take a medicine*

  • 33. Provider asked what you thought was best for you regarding medicine*

Self-management Support (2 items)
  • 37. Provider talked with you about specific goals for your health*

  • 38. Provider asked you if there were things that make it hard for you to take care of your health*

Attention to Mental Health Issues (3 items)
  • 41. Talked about feeling sad or depressed*

  • 42. Talked about worry or stress in your life*

  • 43. Talked about personal or family problem/alcohol or drug use*

Care Coordination (4 items)
  • 28–29. Got test results as soon as needed*

  • 36. Provider seemed informed and up-to-date about care you got from specialists.*

  • 40. Talked about prescription medicines you are taking.+

  • 46. Got help managing care, tests, or treatment.+

* Items in the CAHPS PCMH survey added beyond the CAHPS clinician and group survey core.

+ Items in the CAHPS Medicare survey.

Footnotes

Conflict of interest statement

The authors have no conflicts of interest in conjunction with this paper.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributor Information

Ron D. Hays, Email: drhays@ucla.edu.

Laura J. Berman, Email: Laura.J.Berman@kp.org.

Michael H. Kanter, Email: Michael.H.Kanter@kp.org.

Mildred Hugh, Email: Mimi.Hugh@kp.org.

Rachel R. Oglesby, Email: Rachel.R.Oglesby@kp.org.

Chong Y. Kim, Email: Chong.Y.Kim@kp.org.

Mike Cui, Email: mcui@rand.org.

Julie Brown, Email: julieb@rand.org.

References

  • 1. Bitton A, Martin C, Landon BE. A nationwide survey of patient-centered medical home demonstration projects. J Gen Intern Med. 2010;25:584–592. doi: 10.1007/s11606-010-1262-8.
  • 2. Scholle SH, Vuong O, Ding, et al. Development of and field test results for the CAHPS PCMH survey. Med Care. 2012;11:S2–10. doi: 10.1097/MLR.0b013e3182610aba.
  • 3. Dyer N, Sorra JS, Smith SA, et al. Psychometric properties of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) clinician and group adult visit survey. Med Care. 2012;50:S28–34. doi: 10.1097/MLR.0b013e31826cbc0d.
  • 4. Hays RD, Chong K, Brown J, et al. Patient reports and ratings of individual physicians: An evaluation of the DoctorGuide and CAHPS® provider level surveys. Am J Med Qual. 2003;18(5):190–196. doi: 10.1177/106286060301800503.
  • 5. Hays RD, Martino S, Brown J, et al. Evaluation of a care coordination measure for the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Medicare survey. Med Care Res Rev. 2013. doi: 10.1177/1077558713508205. Epub ahead of print.
  • 6. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
  • 7. Hays RD, Wang E, Sonksen M. General Reliability and Intraclass Correlation Program (GRIP). Proceedings of the 3rd Annual Western Users of SAS Conference; 1995. pp. 220–223.
  • 8. Clark E. Spearman-Brown formula applied to ratings of personality traits. J Educ Psychol. 1935;26:552–555.
  • 9. Hays RD, Revicki DR. Reliability and validity (including responsiveness). In: Fayers P, Hays RD, editors. Assessing Quality of Life in Clinical Trials: Methods and Practice. 2nd ed. Oxford: Oxford University Press; 2005. pp. 25–39.
  • 10. Feldt LS, Woodruff DJ, Salih FA. Statistical inference for coefficient alpha. Appl Psychol Meas. 1987;11:93–103.
  • 11. Quigley DD, Martino SC, Brown JA, et al. Evaluating the content of the communication items in the CAHPS® Clinician and Group Survey and supplemental items with what high-performing physicians say they do. Patient. 2013;6(3):169–177. doi: 10.1007/s40271-013-0016-1.
  • 12. Quigley DD, Elliott MN, Burkhart Q. Specialties differ in which aspects of doctor communication predict overall physician ratings. J Gen Intern Med. 2013. doi: 10.1007/s11606-013-2663-2. Epub ahead of print.
