Abstract
Objectives:
To investigate whether content from patient narratives explains variation in patients’ primary care provider (PCP) ratings beyond information from the closed-ended questions of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Clinician and Group Survey and whether the relative placement of closed- and open-ended survey questions affects either the content of narratives or the CAHPS composite scores.
Methods:
Members of a standing Internet panel (N = 332) were randomly assigned to complete a CAHPS survey that was either preceded or followed by a set of open-ended questions about how well their PCP meets their expectations and how they relate to their PCP.
Results:
Narrative content from healthier patients explained only an additional 2% of the variation in provider ratings beyond that explained by the CAHPS composite measures. Among sicker patients, narrative content explained an additional 10% of the variation. The relative placement of closed- and open-ended questions had little impact on either the narratives or the CAHPS scores.
Conclusion:
Incorporating a protocol for eliciting narratives into a patient experience survey results in minimal distortion of patient feedback. Narratives from sicker patients help explain variation in provider ratings.
Keywords: CAHPS, patient narratives, patient comments, patient experience
Information about patient experience is a common component of public performance reports on hospitals and medical practices and is increasingly included in certification and value-based purchasing programs (1). These high-stakes programs typically use patient experience measures derived from standardized tools such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys (2). At the same time, there is growing interest in using patient narratives alongside survey scores to help clinicians understand what they can do to improve care and to inform patients about differences in the care delivered by available providers (3–7). In fact, a recent survey of outpatient health-care providers found that over 40% reported using narrative comments as a basis for implementing measures to improve care (8).
Interest in narratives as a means of conveying information about patient experience raises 3 important questions about the relationship between patient narratives and scores from standardized closed-ended surveys. First, given the added cost of collecting, analyzing, and reporting narratives, how much value do narratives add to our understanding of how patients evaluate their health-care providers beyond information gathered by closed-ended survey questions? On one hand, the content of patient narratives may overlap with standardized surveys to such a degree that narratives provide little additional information. On the other hand, these surveys may omit aspects of patient experience that are expressed in narrative accounts. Consistent with this notion, a recent study of online narrative hospital reviews found that over half of such reviews mention aspects of care that are not reflected in the CAHPS Hospital Survey (HCAHPS; 9), and a separate study found that the comments written by patients responding to the HCAHPS survey help to predict overall hospital ratings beyond the numerical scores derived from closed-ended questions (10). It is unclear whether these findings apply outside of inpatient settings.
Second, do narratives have differential value for representing the experiences of patients who have had relatively more complex interactions with the health-care system compared with healthier patients? For example, sicker patients are likely to have more complicated and frequent encounters with health-care providers than patients who are healthy. Reducing these experiences to a single response on a closed-ended scale may obscure nuanced interactions that have both positive and negative aspects (7).
Third, does adding a narrative elicitation protocol into a patient survey (such as CAHPS) influence the information derived from either the narratives or the closed-ended responses? There are logistical benefits to embedding a narrative elicitation protocol within extant surveys, including a ready-made sampling frame and the ability to link narratives to quantitative metrics. However, it might be necessary to decouple the 2 if placing open-ended questions at the end of a standardized patient survey results in patients discussing fewer topics or providing narratives that are less detailed or engaging. In contrast, asking patients to articulate their experiences in narrative form prior to answering closed-ended questions about that experience may encourage patients to think deeply about their experiences with their providers but could also influence their responses on subsequent closed-ended scales.
This article presents findings from an experimental study aimed at developing and testing a protocol for rigorously eliciting patient narratives in the context of the CAHPS Clinician and Group (CG-CAHPS) survey. We report our findings related to the development and performance of the elicitation protocol elsewhere (11,12). Our focus here is on the implications of including this protocol in a CAHPS survey. In sum, we address the following 3 questions:
To what extent does the evaluative information in patient narratives account for variation in patients’ global ratings of their provider beyond the information derived from closed-ended CAHPS survey responses?
Does this explanatory potential differ for sicker versus healthier patients?
Are patients’ answers to closed- and open-ended questions affected by the relative placement of these questions in an integrated survey?
These are not the only ways in which narratives could have value or could influence other forms of feedback regarding patient experience; however, these 3 foci are crucial areas for investigation, given the expanding use of CAHPS survey results in public reports on health-care system performance and in provider payment arrangements, many of which are closely tied to the global rating score.
Methods
Participants
Data were collected from 332 members of a standing Internet panel (the KnowledgePanel) of over 60,000 households recruited and maintained by the research firm GfK; the panel is representative of the American population in terms of demographics and health status (13). A random sample of panelists was invited to participate; those who agreed (59.5%) were screened to ensure that they had some contact with a health-care provider in the past year. To investigate whether the association between patient narratives and responses to closed-ended CAHPS questions differed between patients with simple versus more complex health-care experiences, we used stratified random sampling to recruit approximately equal numbers of participants who (a) reported having a “serious or life-threatening” health event in the past year (n = 90), (b) reported having a chronic health problem that required regular medical monitoring (n = 121), and (c) had neither of these types of health problems in the past year (n = 113).
Study Design and Procedures
Within each health stratum, participants were randomly assigned to complete a version of the CG-CAHPS survey that was either preceded or followed by a series of 5 open-ended questions designed to elicit a narrative account of patients’ experiences with the provider they saw the most over the past year. Eighty percent of the surveys were completed online and 20% by phone. Data collection occurred from May 2014 to June 2014. All procedures were approved by the institutional review boards at RAND, Yale University, and the University of Wisconsin–Madison. Informed consent was obtained from all participants.
Closed-Ended CAHPS Questions
Responses to closed-ended survey questions were combined to create 3 composite measures that captured experiences with provider communication (6 items, α = 0.91), access to care (5 items, α = 0.41), and office staff (2 items, α = 0.85). Appendix A contains additional detail on these composite measures. Participants also rated their provider on a 0 to 10 scale, with 0 being the worst and 10 being the best (global provider rating).
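For readers who wish to reproduce this type of scoring, the sketch below illustrates one way to compute a composite score and its internal consistency (Cronbach's alpha). It assumes that each composite is the mean of its items, which is standard practice for CAHPS composites but is not spelled out above, and the response data shown are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-4 (never-always) responses to the six provider-communication
# items for five respondents.
communication_items = np.array([
    [4, 4, 4, 3, 4, 4],
    [3, 3, 4, 3, 3, 3],
    [4, 4, 4, 4, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [4, 3, 4, 4, 3, 4],
])

composite = communication_items.mean(axis=1)  # per-respondent composite score
print(composite, round(cronbach_alpha(communication_items), 2))
```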
Elicitation Protocol
Participants responded to 5 open-ended questions about what they look for in a provider and the staff in his or her office, how well the provider and staff measure up to expectations, examples of good and bad experiences with their provider and the staff over the past 12 months, and how they relate to their provider (exact item wording is in Appendix B). This 5-question protocol has been shown to yield coherent narratives that accurately represent the balance of positive and negative experiences that patients have with their health-care providers and capture with reasonable fidelity the depth and nuances of patients’ experiences (11). The protocol is effective at capturing experiences from patients across a range of health status and sociodemographic characteristics (12).
Coding of Patient Narratives
Responses to the 5 open-ended questions were aggregated to create a single narrative for each patient. Two independent coders determined the number of positive and negative statements pertaining to 10 aspects of care that were identified via both inductive and deductive analytic approaches to the qualitative data (12)—provider communication, time spent during office visits, access to care, office staff, emotional rapport between the provider and the patient, perceived thoroughness of the provider, perceived technical competence of the provider, shared decision-making, provider practice style, and care coordination. We calculated the percentage of patients who mentioned each aspect of care and who mentioned anything negative about each aspect. For each aspect of care, we also quantified the extent to which a patient’s narrative conveyed negative experiences with care by dividing the number of lines in the narrative that conveyed negative experiences about a given aspect of care by the total number of lines in the narrative that pertained to that aspect of care. Our focus on negative experiences is consistent with prior research showing that patients’ overall evaluations of their health care are more strongly based on negative aspects of care than on positive aspects (10,14).
The coders also assessed the scope, salience, evaluative balance, and coherence of each narrative. Scope was quantified as the proportion of the 10 aspects of care mentioned in the narrative. Salience was operationalized as narrative length, the total number of lines of transcribed text. Evaluative balance was the number of lines in the narrative that conveyed a positive assessment of the provider divided by the number of lines that conveyed a negative assessment. Overall coherence of the narrative was assessed by coding and then averaging 5 facets of coherence identified in the literature on narratives related to health: statement of expectations for care, emotional expressivity, substantive expressivity, completeness of storyline, and the extent to which the narrative conveys a clear chronology (15). Each of these facets was assessed on a 0 to 3 scale, with higher numbers indicative of greater coherence (12).
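The negativity proportions and narrative-quality indicators described above can be computed directly from the line-level codes. The sketch below illustrates these calculations under the assumption that each coded line is tagged with an aspect of care and a valence; the data structure, example codes, and facet ratings are hypothetical and are not drawn from the study data.

```python
from collections import Counter

# Hypothetical line-level codes for one narrative: each transcribed line is
# tagged with the aspect of care it addresses and the valence of the statement.
coded_lines = [
    {"aspect": "provider communication", "valence": "positive"},
    {"aspect": "provider communication", "valence": "negative"},
    {"aspect": "access to care", "valence": "negative"},
    {"aspect": "office staff", "valence": "positive"},
    {"aspect": "emotional rapport", "valence": "positive"},
]

# Hypothetical 0-3 ratings for the five coherence facets.
coherence_facets = {"expectations": 2, "emotional expressivity": 1,
                    "substantive expressivity": 2, "completeness": 1, "chronology": 2}

N_ASPECTS = 10  # aspects of care in the coding scheme

def negativity_by_aspect(lines):
    """Negative lines about an aspect divided by all lines about that aspect."""
    totals, negatives = Counter(), Counter()
    for line in lines:
        totals[line["aspect"]] += 1
        negatives[line["aspect"]] += line["valence"] == "negative"
    return {aspect: negatives[aspect] / totals[aspect] for aspect in totals}

def narrative_qualities(lines, facets):
    """Scope, length, evaluative balance, and overall coherence of one narrative."""
    positives = sum(l["valence"] == "positive" for l in lines)
    negatives = sum(l["valence"] == "negative" for l in lines)
    return {
        "scope": len({l["aspect"] for l in lines}) / N_ASPECTS,
        "length": len(lines),
        "evaluative_balance": positives / negatives if negatives else float("nan"),
        "coherence": sum(facets.values()) / len(facets),
    }

print(negativity_by_aspect(coded_lines))
print(narrative_qualities(coded_lines, coherence_facets))
```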
To ensure interrater reliability in the use of the coding scheme, all narratives were coded by both coders; disagreements were resolved through discussion. Interrater reliability, calculated using Cohen’s kappa, ranged from 0.65 to 0.79, meeting conventional standards of acceptable reliability (16,17).
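As an illustration, interrater agreement of the kind reported here can be computed with a short function. The coder labels below are hypothetical, and the function implements standard (unweighted) Cohen's kappa.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters coding the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum(freq1[c] * freq2[c] for c in set(rater1) | set(rater2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes: two coders classifying ten narrative lines as
# positive (P), negative (N), or neutral (0).
coder_a = ["P", "P", "N", "0", "P", "N", "P", "0", "N", "P"]
coder_b = ["P", "P", "N", "P", "P", "N", "0", "0", "N", "P"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # ~0.68 for these hypothetical codes
```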
Statistical Analysis
We estimated a series of multiple linear regression models to investigate the predictive utility of evaluative information contained in patient narratives. In model 1, we predicted the global provider rating using only the 3 CAHPS composite measures. In model 2, we predicted the global rating solely from the indicators derived from the narratives. In model 3, we combined CAHPS and narrative measures to predict the global rating. We tested each model separately among healthier and sicker patients (ie, those who had a recent serious or ongoing chronic illness), calculated the percentage of variation in the global provider rating explained by each model (R²), and tested the statistical significance of changes in R² across models.
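The nested-model comparison can be sketched as follows, using ordinary least squares and an incremental-F test for the change in R². The data below are simulated placeholders rather than the study data, and the estimation details of the original analysis may differ.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Simulated placeholders for the three CAHPS composites, the seven narrative
# negativity indicators, and the 0-10 global provider rating.
cahps = rng.normal(size=(n, 3))
narrative = rng.uniform(size=(n, 7))
rating = (8 + cahps @ np.array([1.5, 0.2, 0.5])
          - narrative @ np.array([1.0, 0.8, 0.0, 0.4, 1.5, 1.8, 0.0])
          + rng.normal(size=n))

X_reduced = sm.add_constant(cahps)                       # model 1: CAHPS composites only
X_full = sm.add_constant(np.hstack([cahps, narrative]))  # model 3: CAHPS + narrative
m_reduced = sm.OLS(rating, X_reduced).fit()
m_full = sm.OLS(rating, X_full).fit()

# Incremental-F test for the change in R^2 when the narrative indicators are added
# (equivalently: m_full.compare_f_test(m_reduced)).
df_num = X_full.shape[1] - X_reduced.shape[1]
df_den = n - X_full.shape[1]
F = ((m_full.rsquared - m_reduced.rsquared) / df_num) / ((1 - m_full.rsquared) / df_den)
p = stats.f.sf(F, df_num, df_den)
print(f"R2 {m_reduced.rsquared:.2f} -> {m_full.rsquared:.2f}, "
      f"F({df_num}, {df_den}) = {F:.2f}, P = {p:.4f}")
```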
Next, we conducted a series of t tests to assess the effect of the placement of the narrative elicitation protocol in the survey on the content and other qualities of the narratives. These tests compared participants who completed the elicitation protocol before versus after the CAHPS closed-ended questions on the percentage of narratives that contained any mention of each of the 10 aspects of care as well as the scope, salience, evaluative balance, and coherence of the narratives. We conducted these tests separately among sicker and healthier patients.
Finally, we conducted t tests to assess the effect of the placement of the elicitation protocol in the survey on the CAHPS composite scores, as well as on the strength of the correlation between each composite measure and the global provider rating. These tests were also conducted separately among sicker and healthier patients. The regression analyses and t tests had statistical power of 0.80 to detect a medium-sized effect at an alpha of .05 (18).
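For illustration, the placement comparisons could be carried out as shown below. The group values are hypothetical, and the Fisher r-to-z procedure shown for comparing correlations across arms is one conventional choice rather than a method specified in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical composite scores for respondents randomized to complete the
# narrative elicitation before versus after the closed-ended CAHPS items.
before = rng.normal(loc=3.61, scale=0.62, size=57)
after = rng.normal(loc=3.57, scale=0.59, size=56)

# Independent-samples t test comparing composite means across the two arms.
t_stat, p_val = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, P = {p_val:.2f}")

# One conventional way to compare a composite's correlation with the global
# rating across the two arms: Fisher r-to-z transformation.
def compare_correlations(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2 * stats.norm.sf(abs(z1 - z2) / se)

print(f"P = {compare_correlations(0.83, 57, 0.67, 56):.3f}")
```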
Results
Sample Characteristics
Table 1 presents a comparison of the characteristics of sicker and healthier patients. Not surprisingly, sicker patients were significantly (P < .05) older and had significantly more visits with their health-care provider in the past year.
Table 1.
Characteristics of Healthier (n = 113) and Sicker (n = 219) Patients.
| Characteristics | Healthier (%) | Sicker^a (%) | χ² | P Value |
|---|---|---|---|---|
| Gender | | | 2.69 | .10 |
| Male | 37.2 | 46.6 | ||
| Female | 62.8 | 53.4 | ||
| Age, years | | | 19.30 | .001 |
| Younger than 35 | 26.6 | 10.5 | ||
| 35-44 | 11.5 | 11.4 | ||
| 45-54 | 20.4 | 13.7 | ||
| 55-64 | 20.4 | 29.7 | ||
| 65 or older | 21.2 | 34.7 | ||
| Race/ethnicity | | | 3.07 | .38 |
| White, non-Hispanic | 81.4 | 76.3 | ||
| Black, non-Hispanic | 11.5 | 11.4 | ||
| Other, non-Hispanic | 1.8 | 5.7 | ||
| Hispanic | 5.3 | 6.6 | ||
| Education | | | 5.46 | .14 |
| High school degree or less | 31.9 | 40.3 | ||
| Some college | 29.2 | 33.2 | ||
| 4-year college degree or more | 38.9 | 26.5 | ||
| Length of relationship with provider | | | 5.50 | .14 |
| Less than 1 year | 23.2 | 19.5 | ||
| 1-3 years | 26.8 | 17.7 | ||
| 3-5 years | 14.3 | 18.6 | ||
| More than 5 years | 35.7 | 44.2 | ||
| Doctor visits in past 12 months | | | 63.03 | <.0001 |
| 1 | 39.3 | 7.9 | ||
| 2-3 | 46.4 | 43.3 | ||
| 4-9 | 11.6 | 35.8 | ||
| 10 or more | 2.7 | 13.0 |
^a Had a recent serious or chronic illness.
Descriptive Data on Narrative Content
Table 2 presents descriptive data on the narrative content. The only significant difference (P < .05) between sicker and healthier patients was that sicker patients were more likely than healthier patients to mention care coordination. Because none of the healthier patients and very few of the sicker patients made negative comments about shared decision-making, practice style, or care coordination, we omitted indicators of negative commentary about these aspects of care from the regression models described below.
Table 2.
Descriptive Data on the Content of Narratives of Healthier (n = 113) and Sicker (n = 218) Patients.
| Aspect of Care | Healthier: Any Mention, % | Healthier: Any Negative Commentary, % | Healthier: Proportion of Commentary That Is Negative, Mean (SD) | Sicker^b: Any Mention, % | Sicker^b: Any Negative Commentary, % | Sicker^b: Proportion of Commentary That Is Negative, Mean (SD) |
|---|---|---|---|---|---|---|
| Provider communication | 69.0 | 3.5 | 0.03 (0.17) | 62.1 | 9.1 | 0.07 (0.23) |
| Time spent during office visits | 53.1 | 15.9 | 0.14 (0.33) | 42.9 | 16.9 | 0.14 (0.32) |
| Access to care | 42.5 | 15.0 | 0.12 (0.30) | 42.9 | 10.0 | 0.08 (0.26) |
| Office staff | 47.8 | 14.2 | 0.12 (0.31) | 51.6 | 16.0 | 0.13 (0.31) |
| Emotional rapport | 66.4 | 4.4 | 0.03 (0.16) | 62.6 | 4.1 | 0.03 (0.17) |
| Perceived thoroughness | 15.9 | 1.8 | 0.02 (0.13) | 21.5 | 3.7 | 0.03 (0.17) |
| Perceived technical competence | 47.8 | 3.5 | 0.03 (0.18) | 58.0 | 7.3 | 0.05 (0.21) |
| Shared decision-making | 6.2 | 0.0 | 0.00 (0.00) | 12.8 | 2.3 | 0.02 (0.15) |
| Practice style | 13.3 | 0.0 | 0.00 (0.00) | 18.3 | 0.9 | 0.01 (0.10) |
| Care coordination | 0.9 | 0.0 | 0.00 (0.00) | 6.4^a | 0.0 | 0.00 (0.00) |
Abbreviation: SD, standard deviation.
^a P value from χ² test of healthier versus sicker patients <.05.
^b Had a recent serious or chronic illness.
Predicting Global Provider Ratings
Table 3 presents the results of the regression models predicting healthier patients’ global provider ratings. In model 1, which included the CAHPS composite measures as predictors, provider communication and office staff were significant predictors of healthier patients’ global ratings and the model as a whole accounted for 63% of variation in ratings. In model 2, the only significant predictor of healthier patients’ global provider ratings was the extent to which their narratives contained negative commentary about provider communication. As a whole, model 2 accounted for 17% of variation in global ratings. In model 3, which combined the predictors from models 1 and 2, only the CAHPS provider communication and office staff measures were significant predictors. This model explained 65% of variation in the global rating, 3% more than was explained by model 1, F (7, 96) = 0.83, P = .56.
Table 3.
Regression Models Predicting Global Provider Rating From CAHPS Composites and Evaluative Information From Patient Narratives: Healthier Patients (n = 107).
| Predictor | Model 1: β Coefficient (Standard Error) | Model 1: P Value | Model 2: β Coefficient (Standard Error) | Model 2: P Value | Model 3: β Coefficient (Standard Error) | Model 3: P Value |
|---|---|---|---|---|---|---|
| CAHPS composites | ||||||
| Provider communication | 1.67 (0.18) | <.0001 | - | - | 1.61 (0.20) | <.0001 |
| Access to care | −0.26 (0.16) | .87 | - | - | −0.14 (0.17) | .42 |
| Office staff | 0.61 (0.18) | .001 | - | - | 0.72 (0.19) | <.0001 |
| Evaluative information from narrative^a | | | | | | |
| Provider communication | - | - | −2.56 (0.96) | .009 | −0.97 (0.73) | .19 |
| Time spent during office visits | - | - | −0.42 (0.44) | .34 | −0.33 (0.30) | .27 |
| Access to care | - | - | 0.48 (0.45) | .29 | 0.06 (0.32) | .86 |
| Office staff | - | - | −0.37 (0.49) | .45 | 0.15 (0.38) | .69 |
| Emotional rapport | - | - | −0.50 (1.17) | .67 | 0.17 (0.79) | .83 |
| Perceived thoroughness | - | - | −0.52 (1.38) | .71 | 0.06 (0.94) | .95 |
| Perceived technical competence | - | - | −0.06 (0.79) | .94 | −0.64 (0.54) | .24 |
| R² | 0.63 | | 0.17 | | 0.65 | |
Abbreviation: CAHPS, Consumer Assessment of Healthcare Providers and Systems.
^a Number of lines devoted to negative commentary as a proportion of the total lines devoted to all commentary about a particular aspect of care.
The results look quite different for sicker patients (Table 4). In model 1, the CAHPS provider communication and office staff measures were again significant predictors, with the model as a whole accounting for 50% of variation in provider ratings. But for sicker patients, the measures derived from the narratives accounted for 37% of the variation in provider ratings when considered on their own (model 2); 5 of the 7 indicators were significant predictors. In model 3, the CAHPS provider communication and office staff measures were significant predictors, but so were indicators of negative commentary about access to care, the emotional rapport between the provider and patient, and the perceived thoroughness of the provider. As a whole, this model explained 60% of the variation in sicker patients’ global ratings, 20% more than was explained by model 1, F (7, 194) = 6.74, P < .0001.
Table 4.
Regression Models Predicting Global Provider Rating From CAHPS Composites and Evaluative Information From Patient Narratives: Sicker^a Participants (n = 205).
| Predictor | Model 1: β Coefficient (Standard Error) | Model 1: P Value | Model 2: β Coefficient (Standard Error) | Model 2: P Value | Model 3: β Coefficient (Standard Error) | Model 3: P Value |
|---|---|---|---|---|---|---|
| CAHPS composites | ||||||
| Provider communication | 1.51 (0.18) | <.0001 | - | - | 0.99 (0.19) | <.0001 |
| Access to care | 0.21 (0.13) | .10 | - | - | 0.24 (0.12) | .05 |
| Office staff | 0.53 (0.15) | <.0001 | - | - | 0.60 (0.15) | <.0001 |
| Evaluative information from narrative^b | | | | | | |
| Provider communication | - | - | −1.34 (0.51) | <.0001 | −0.27 (0.43) | .54 |
| Time spent during office visits | - | - | −1.04 (0.29) | <.0001 | −0.42 (0.24) | .09 |
| Access to care | - | - | 0.30 (0.35) | .40 | 0.58 (0.29) | .04 |
| Office staff | - | - | −0.90 (0.30) | .003 | −0.39 (0.26) | .14 |
| Emotional rapport | - | - | −2.03 (0.56) | <.0001 | −1.42 (0.46) | .002 |
| Perceived thoroughness | - | - | −1.94 (0.61) | .002 | −1.92 (0.51) | <.0001 |
| Perceived technical competence | - | - | −0.24 (0.47) | .61 | 0.27 (0.39) | .49 |
| R² | 0.50 | | 0.37 | | 0.60 | |
Abbreviation: CAHPS, Consumer Assessment of Healthcare Providers and Systems.
^a Had a recent serious or chronic illness.
^b Number of lines devoted to negative commentary as a proportion of the total lines devoted to all commentary about a particular aspect of care.
Effects of the Placement of the Elicitation Protocol on Narrative Content and Quality
Table 5 shows the effect of the placement of the elicitation protocol on the percentage of narratives that contained any mention of each of the 10 aspects of care. Among healthier patients, placement had no effect on narrative content. Among sicker patients, those who responded to the narrative elicitation protocol after completing the CAHPS questions were significantly less likely to mention office staff than were those who responded to the elicitation protocol before completing the CAHPS questions. Table 5 also shows that the placement of the elicitation protocol had no effect on the scope, length, evaluative balance, or overall coherence of healthier patients’ narratives. Among sicker patients, placing the protocol after versus before the CAHPS questions resulted in narratives that were significantly shorter.
Table 5.
Effect of the Placement of the Narrative Protocol Relative to CAHPS Questions on Narrative Responses.
| Content/Quality of Narrative | Healthier (n = 113): Elicitation Before CAHPS | Healthier (n = 113): Elicitation After CAHPS | Healthier (n = 113): P Value | Sicker^a (n = 219): Elicitation Before CAHPS | Sicker^a (n = 219): Elicitation After CAHPS | Sicker^a (n = 219): P Value |
|---|---|---|---|---|---|---|
| Aspect of care discussed, % | ||||||
| Provider communication | 71.9 | 66.1 | .51 | 67.6 | 56.5 | .09 |
| Time spent during office visits | 49.1 | 57.1 | .40 | 47.7 | 40.0 | .15 |
| Access to care | 36.8 | 48.2 | .23 | 47.7 | 40.0 | .15 |
| Office staff | 49.1 | 46.4 | .78 | 58.6 | 44.4 | .04 |
| Emotional rapport | 70.2 | 62.5 | .39 | 62.2 | 63.0 | .90 |
| Perceived thoroughness | 15.8 | 16.1 | .97 | 23.4 | 19.4 | .48 |
| Perceived technical competence | 49.1 | 46.4 | .78 | 60.4 | 55.6 | .47 |
| Shared decisions | 7.0 | 5.4 | .72 | 13.5 | 12.0 | .75 |
| Practice style | 12.3 | 14.3 | .76 | 14.4 | 22.2 | .14 |
| Care coordination | 0.0 | 1.8 | .32 | 5.4 | 7.4 | .55 |
| Other qualities of narrative, mean (SD) | ||||||
| Overall coherence | 1.02 (0.53) | 1.11 (0.54) | .39 | 1.13 (0.55) | 1.03 (0.58) | .18 |
| Scope^b | 0.36 (0.18) | 0.36 (0.21) | .94 | 0.40 (0.19) | 0.36 (0.20) | .09 |
| Length (number of lines) | 7.05 (7.50) | 8.63 (11.09) | .38 | 10.42 (10.24) | 7.74 (8.41) | .04 |
| Evaluative balance | 0.84 (0.25) | 0.77 (0.29) | .23 | 0.80 (0.28) | 0.82 (0.25) | .53 |
Abbreviations: CAHPS, Consumer Assessment of Healthcare Providers and Systems; SD, standard deviation.
^a Had a recent serious or chronic illness.
^b Proportion of 10 aspects of care that were discussed in the narrative.
Effects of the Placement of the Elicitation Protocol on CAHPS Composite Scores
Mean CAHPS composite scores of sicker and healthier patients were unaffected by the placement of the elicitation protocol in the survey (Table 6). However, placement did affect the correlation between scores on the provider communication measure and participants’ global ratings of their providers. In particular, for both sicker and healthier patients, the association between scores on the provider communication measure and the global provider rating was stronger among those who completed the CAHPS questions after responding to the elicitation protocol.
Table 6.
Effect of the Relative Placement of the Narrative Protocol on CAHPS Composite Means and the Correlation of the CAHPS Composites With the Global Provider Rating.
| CAHPS Composite | Healthier (n = 113): Elicitation Before CAHPS | Healthier (n = 113): Elicitation After CAHPS | Healthier (n = 113): P Value | Sicker^a (n = 219): Elicitation Before CAHPS | Sicker^a (n = 219): Elicitation After CAHPS | Sicker^a (n = 219): P Value |
|---|---|---|---|---|---|---|
| Provider communication | ||||||
| Mean (SD) | 3.61 (0.62) | 3.57 (0.59) | .69 | 3.61 (0.52) | 3.58 (0.60) | .62 |
| Correlation with global rating | 0.83 | 0.67 | .05 | 0.74 | 0.56 | .02 |
| Access to care | ||||||
| Mean (SD) | 3.16 (0.80) | 3.20 (0.69) | .74 | 3.23 (0.72) | 3.05 (0.67) | .06 |
| Correlation with global rating | 0.42 | 0.52 | .51 | 0.38 | 0.37 | .94 |
| Office staff | ||||||
| Mean (SD) | 3.46 (0.62) | 3.54 (0.61) | .53 | 3.65 (0.57) | 3.55 (0.70) | .24 |
| Correlation with global rating | 0.48 | 0.51 | .84 | 0.49 | 0.54 | .62 |
Abbreviations: CAHPS, Consumer Assessment of Healthcare Providers and Systems; SD, standard deviation.
^a Had a recent serious or chronic illness.
Discussion
The increasing use of patient experience scores for high-stakes purposes such as provider compensation (2,6) makes it more important than ever for clinicians to understand why patients rate them as they do. Open-ended narratives can be a rich source of information about the particular experiences on which patients base their ratings of their health-care providers. Our findings demonstrate that it is feasible to incorporate protocols for eliciting patient narratives into large-scale patient surveys with minimal distortion of the feedback from either the open- or closed-ended responses.
Our findings also demonstrate that narratives—particularly those of patients with more complex interactions with the health-care system—help to explain variation in the ratings that patients assign clinicians. The difference in the predictive utility of narratives for sicker and healthier patients is not explained by differences in how often or extensively these participants discussed particular aspects of care but by how important those aspects were in shaping their evaluations of their providers. Although we tested only for differences between healthier and sicker patients, it seems likely that narratives will have greater predictive power for other types of respondents who may be considered medically complex and thus require ongoing care from multiple providers, including those with multiple comorbidities, cognitive and mental health issues, and problems with substance abuse (19–21).
Among sicker patients, statements about the thoroughness of the provider and the emotional rapport between the patient and provider were especially useful in predicting patients’ overall provider ratings. This is noteworthy given that these topics are not covered by the CAHPS survey. Although patients’ perspectives on whether care is appropriately thorough may differ from clinical standards or similar assessments by other professionals, it is clearly an aspect of care that is salient to them and that underlies their evaluation of their providers. As perceptions of provider thoroughness are likely to have a basis in concrete features of the patient–provider interaction, it would be worthwhile to understand what patients are cueing in on to make judgments about thoroughness so that particular provider behaviors or patient expectations can be addressed if necessary.
Although these findings can be used to support the case for collecting patient narratives more systematically, measuring the statistical influence of narratives on global provider ratings is just one way of assessing the value of narratives for explaining the nuances of patient experience. The tests used here reduce a large amount of rich detail and nuance to a set of numerical indicators, ignoring much of the information in the verbatim narratives. Moreover, the CAHPS global provider rating is a relatively limited metric for assessing the quality of patient–physician interactions, albeit one that is often heavily weighted in provider compensation arrangements.
These findings also need to be considered in light of certain methodological limitations of the study. Although participants were drawn from an Internet panel that is sociodemographically representative of the overall population, our sample size was relatively small, making it infeasible to examine whether narratives are more or less important for various subsets of patients. Moreover, the responses to the elicitation questions were given by respondents who knew that their commentary would be kept confidential. Were these same questions asked of patients whose answers would be reported back to clinicians or incorporated into public websites, the content of the responses might differ (though it is not clear whether it would be enriched or impoverished).
Conclusion
It has been argued that patient narratives may improve health-care quality beyond what standardized survey scores can accomplish by better informing consumer choice and enhancing clinicians’ understanding of health-care interactions that patients consider to be problematic (7,22). Our results demonstrate that a rigorously designed protocol for eliciting patient narratives can be incorporated into patient experience surveys with minimal distortion of patient feedback and that the information contained in the narratives offered by sicker patients is useful for understanding variation in provider ratings beyond the information derived from closed-ended CAHPS questions. Because patients with more serious and complex conditions arguably represent the most challenging test of any health-care system, better understanding their experiences seems an essential prerequisite for improving health system performance.
Author Biographies
Steven C Martino is a senior behavioral scientist at the RAND Corporation. Dr. Martino’s research focuses on the measurement and reporting of information on healthcare quality and on identifying and reducing healthcare disparities.
Dale Shaller is principal of Shaller Consulting Group, a health policy analysis and management consulting practice based in Stillwater, Minnesota. His research has focused primarily on the collection and use of patient experience measures for public reporting to consumers as well as private feedback reporting to clinicians for improvement.
Mark Schlesinger is professor of Health Policy and a fellow of the Institution for Social and Policy Studies at Yale University. He is also a past editor of the Journal of Health Politics, Policy and Law. Dr. Schlesinger’s research explores the determinants of public opinion about health and social policy, the influence of bounded rationality on medical consumers, and the role of nonprofit organizations in American medicine.
Andrew M Parker is a senior behavioral scientist at the RAND Corporation and director of the RAND Center for Decision Making under Uncertainty. His research applies core concepts in behavioral decision research to understanding decision-making behavior in complex real-world situations.
Lise Rybowski is president of The Severyn Group, Inc., in Ashburn, Virginia. Her research over the past 20 years has focused on the effective use of patient experience and other quality measures in public reports and strategies for improving patients’ experiences with care.
Rachel Grob is director of National Initiatives and Associate Clinical Professor at the Center for Patient Partnerships and a senior scientist in the Department of Family Medicine and Community Health at the University of Wisconsin–Madison. Her work focuses on eliciting, synthesizing, and amplifying patients’ voices and experiences to improve health and health care.
Jennifer L Cerully is a behavioral and social scientist at the RAND Corporation. Her research interests include building the science of reporting healthcare quality data to consumers, reducing stigma and other barriers to mental health care, and improving health-related decision-making.
Melissa L Finucane is a senior social and behavioral scientist at RAND Corporation and a senior fellow at the East-West Center. Her research focuses on assessing and addressing environmental health risks in the context of complex adaptive systems.
Appendix A
Consumer Assessment of Healthcare Providers and Systems Composite Measures
Provider communication (6 items): Respondents were asked how often in the past 12 months (1 = never, 4 = always) their provider (a) explained things clearly, (b) listened carefully, (c) showed respect, (d) provided easy-to-understand instructions, (e) knew their medical history, and (f) spent enough time with them.
Access to care (5 items): Respondents were asked how often in the past 12 months (1 = never, 4 = always) they (a) received routine care as soon as they needed, (b) received urgent care as soon as they needed, (c) got timely answers to questions when they called their provider’s office during routine business hours, (d) got timely answers to questions when they called their provider’s office after routine business hours, and (e) saw their provider within 15 minutes of their appointment time.
Office staff (2 items): Respondents were asked how often in the past 12 months (1 = never, 4 = always) their provider’s office staff (a) were helpful and (b) treated them with courtesy and respect.
Appendix B
Five-Question Narrative Elicitation Protocol
What are the most important things that you look for in a health-care provider and his or her staff?
When you think about the things that are most important to you, how do your provider and his or her staff measure up?
Now we’d like to focus on anything that has gone well in your experiences with your provider and his or her staff over the past 12 months. Please explain what happened, how it happened, and how it felt to you.
Next we’d like to focus on any experiences with your provider and his or her staff that you wish had gone differently over the past 12 months. Please explain what happened, how it happened, and how it felt to you.
Please describe how you and your provider relate to and interact with each other.
Footnotes
Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by 2 cooperative agreements (2U18HS016980 and 1U18HS016978) from the Agency for Healthcare Research and Quality (AHRQ) to RAND and Yale University, respectively, and by a grant (1R21HS021858) from AHRQ to Yale University.
References
- 1. Anhang Price R, Elliott MN, Zaslavsky AM, Hays RD, Lehrman WG, Rybowski L, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71:522–54.
- 2. Anhang Price R, Elliott MN, Cleary PD, Zaslavsky AM, Hays RD. Should health care providers be accountable for patients’ care experiences? J Gen Intern Med. 2015;30:253–6.
- 3. Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Harnessing the cloud of patient experience: using social media to detect poor quality healthcare. BMJ Qual Saf. 2013;22:251–5.
- 4. Lagu T, Greaves F. From public to social reporting of hospital quality. J Gen Intern Med. 2015;30:1397–9.
- 5. Lagu T, Lindenauer PK. Putting the public back in public reporting of health care quality. JAMA. 2010;304:171–2.
- 6. Schlesinger M, Grob R, Shaller D. Using patient-reported information to improve clinical practice. Health Serv Res. 2015;50:2116–54.
- 7. Schlesinger M, Grob R, Shaller D, Martino SC, Parker AM, Finucane ML, et al. Taking patients’ narratives about clinicians from anecdote to science. N Engl J Med. 2015;373:675–9.
- 8. Emmert M, Meszmer N, Sander U. Do health care providers use online patient ratings to improve the quality of care? Results from an online-based cross-sectional study. J Med Internet Res. 2016;18:e254.
- 9. Bardach N, Lyndon A, Asteria-Peñaloza R, Goldman LE, Lin GA, Dudley RA. From the closest observers of patient care: a thematic analysis of online narrative reviews of hospitals. BMJ Qual Saf. 2016;25:889–97.
- 10. Huppertz JW, Smith R. The value of patients’ handwritten comments on HCAHPS surveys. J Healthc Manag. 2014;59:31–47.
- 11. Grob R, Schlesinger M, Parker AM, Shaller D, Barre LR, Martino SC, et al. Breaking narrative ground: innovative methods for rigorously eliciting and assessing patient narratives. Health Serv Res. 2016;51:1248–71.
- 12. Grob R, Schlesinger M, Martino SC. Collecting and reporting patient narratives to better understand patients’ experiences. Paper presented at: The AHRQ Research Conference; October 6, 2015; Crystal City, VA.
- 13. Chang L, Krosnick JA. National surveys via RDD telephone interviewing versus the Internet: comparing sample representativeness and response quality. Public Opin Quart. 2009;73:641–78.
- 14. Otani K, Kurz RS, Burroughs TE, Waterman B. Reconsidering models of patient satisfaction and behavioral intentions. Health Care Manage Rev. 2003;28:7–20.
- 15. Reese E, Haden CA, Baker-Ward L, Bauer P, Fivush R, Ornstein PA. Coherence of personal narratives across the lifespan: a multidimensional model and coding method. J Cogn Dev. 2011;12:424–62.
- 16. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
- 17. Fleiss JL. Statistical Methods for Rates and Proportions. New York, NY: John Wiley and Sons; 1981:165–8.
- 18. Cohen J. A power primer. Psychol Bull. 1992;112:155–9.
- 19. de Jonge P, Huyse FJ, Stiefel FC. Case and care complexity in the medically ill. Med Clin North Am. 2006;90:679–92.
- 20. Peek CJ, Baird MA, Coleman E. Primary care for patient complexity, not only disease. Fam Syst Health. 2009;27:287–302.
- 21. Valderas JM, Starfield B, Sibbald B, Salisbury C, Roland M. Defining co-morbidity: implications for understanding health and health services. Ann Fam Med. 2009;7:357–63.
- 22. Ranard BL, Werner RM, Antanavicius T, Schwartz HA, Smith RJ, Meisel ZF, et al. Yelp reviews of hospital care can supplement and inform traditional surveys of the patient experience of care. Health Aff. 2016;35:697–705.
