Author manuscript; available in PMC: 2022 Nov 1.
Published in final edited form as: Med Care. 2021 Nov 1;59(11):950–960. doi: 10.1097/MLR.0000000000001629

Shadow Coaching Improves Patient Experience with Care, But Gains Erode Later

Denise D Quigley 1, Marc N Elliott 1, Mary E Slaughter 1, Q Burkhart 1, Alex Y Chen 2, Efrain Talamantes 3, Ron D Hays 4
PMCID: PMC8516705  NIHMSID: NIHMS1723035  PMID: 34387621

Abstract

Background.

Healthcare organizations strive to improve patient care experiences. Some use one-on-one provider counseling (“shadow coaching”) to identify and target modifiable provider behaviors.

Objective.

We examined whether shadow coaching improves patient experience across 44 primary-care practices in a large urban Federally Qualified Health Center.

Research Design.

Seventy-four providers with “medium” (i.e., slightly below average) overall provider ratings received coaching and were compared to 246 uncoached providers. We fit mixed-effects regression models with random effects for provider (level of treatment assignment) and fixed effects for time (linear spline with a knot and “jump” at coaching date), patient characteristics and site indicators. By design, coached providers performed worse at selection; models account for the very small (0.2 point) regression-to-the-mean effects. We assessed differential effects by coach.

Subjects.

46,452 patients (from 320 providers) who completed Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS®) surveys.

Measures.

CAHPS overall provider rating and provider communication composite (scaled 0-100).

Results.

Providers not chosen for coaching had a non-significant change in performance during the period when selected providers were coached. We observed a statistically significant 2-point (small-to-medium) jump among coached providers after coaching on the CAHPS overall provider rating and provider communication score. However, these gains disappeared after 2.5 years; effects differed by coach.

Conclusions.

Shadow coaching improved providers’ overall performance and communication immediately after being coached. Regularly planned shadow coaching ‘booster’ sessions might maintain or even increase the improvement gained in patient experience scores, but research examining additional coaching and optimal implementation is needed.

Keywords: coaching, patient experience, CAHPS, provider performance, spline models

Introduction

Good provider communication is crucial to the doctor-patient relationship and influences patient adherence to care plans and clinical outcomes.1–9 Patient experience is most often measured using the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) surveys,10–12 the national standard for collecting, tracking and benchmarking patient care experiences across settings, including ambulatory care.13–29 With CAHPS scores publicly available for individual providers and at the group level, provider groups can identify specific, modifiable provider behaviors that they can target to improve patient care experiences.30–33

Healthcare organizations generally use one or more of six methods of changing physician behavior: education/training, feedback, engaging physicians as leaders in change efforts, administrative changes, financial incentives, or financial penalties.34,35 Substantial time and resources are invested in communication training for clinicians (such as continuing medical education); however, little research has tested the effect of such training on patient experience scores, and the studies that have done so provide little evidence of its effectiveness.36,37

Some groups and practices use individualized feedback or one-on-one provider counseling (“shadow coaching”) based on patient experience scores to target modifiable provider behaviors.38–44 Shadow coaching refers to the process of commissioning coaches to observe providers in real-time encounters at the point of care and provide structured, specific feedback (vs. general feedback) to encourage targeted behaviors.43 Although there are various coaching models and philosophies,40,41,44 shadow coaching is usually conducted during a half or full day in order to observe several patient encounters.42,45 Prior work has examined coaching,46–52 compliance training for nursing staff,53–55 and coaching using simulated patient encounters.56 Witherspoon and White (1996)57 suggest coaching has four different functions: coaching for skill enhancement, increased performance, development, and strategic planning. The first three functions are relevant to our study; the fourth, coaching for strategic planning, is a form of executive coaching and is not addressed here. In addition, three terms are often associated with coaching but differ from it: (1) managing, which means making sure that people do what they already know how to do; (2) training, which is introduced when people need to learn something new; and (3) mentoring, which involves advising, guiding and counseling by an expert and can include a component of coaching.49 Coaching differs slightly from managing, training or mentoring in that its optimal use leads to the increased utilization of a person’s current skills and resources without counseling or advising.

Shadow coaching, a type of collaborative or peer-assisted learning,58,59 is a distinct type of coaching in which peers, who often have a similar knowledge level to those they coach,60,61 enter an equal, non-competitive, voluntary relationship, establishing goals or preferred outcomes, observing, and providing feedback to improve task performance and support changes that amplify participants’ strengths and capacity.58,62–65 Sessions usually occur in dyads.66 Mutual trust between recipients and coaches is essential for successful peer-coaching relationships.67–70

Some studies of shadow coaching have tracked its impact on outcome measures, providing evidence that coaching, shadow or otherwise, helps build and maintain competencies among physicians, nurses and other staff, and increases compliance with practice guidelines.47,52,56 Despite shadow coaching’s primary objective of improving individual provider behaviors and the associated patient interactions and experiences, research on its effectiveness and its specific impact on patient experiences is relatively sparse.71–73 We examine whether shadow coaching improves patient experience scores and whether improvements are sustained.

Methods

Setting.

The study was conducted in a large, urban Federally Qualified Health Center (FQHC) in California with nearly 1 million patient visits annually. Six years prior to this study, the FQHC’s chief medical officer implemented a company-wide quality monitoring system based on the overall provider rating and provider communication composite of the Clinician and Group CAHPS (CG-CAHPS) visit survey 2.0 (https://www.ahrq.gov/cahps/surveys-guidance/cg/visit/index.html).14 Provider communication was chosen because, among the CAHPS composites, it has the highest correlation with the overall rating of care.21

Shadow coaching was introduced as part of quality monitoring with the goal of improving patient care experiences. Every 6 months, in January and July, the FQHC calculated every provider’s average 6-month score on the CAHPS overall provider rating (scored on a 0-100 possible range; a higher score is better). Providers with an average score between 45 and 89 in the 6 months prior to coaching were identified as “medium performers” and selected for coaching.
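Purely as an illustration of this selection rule, the sketch below computes each provider’s trailing 6-month average rating and flags the 45-89 band; the file and column names are hypothetical, not the FQHC’s actual system.

```python
import pandas as pd

# Hypothetical survey-level data: one row per completed CG-CAHPS survey,
# with the 0-100 overall provider rating already computed.
surveys = pd.read_csv("cahps_responses.csv", parse_dates=["survey_date"])

# Keep the most recent 6 months of responses.
window_end = surveys["survey_date"].max()
window = surveys[surveys["survey_date"] > window_end - pd.DateOffset(months=6)]

# Average each provider's rating over the window and flag "medium performers"
# (6-month average between 45 and 89, inclusive).
six_month_avg = window.groupby("provider_id")["overall_rating_0_100"].mean()
medium_performers = six_month_avg[six_month_avg.between(45, 89)].index.tolist()
print(f"{len(medium_performers)} providers selected for shadow coaching")
```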

Intervention.

Eight full-time providers at the FQHC, identified as high performing based on patient experience and other performance indicators, were selected to shadow other providers during 4 or more patient encounters over a half to full day. Coaches attended a one-day coaching seminar given by the SullivanLuallin Group.42,74–76 Provider assignments were made based on geography; coaches were essentially assigned regions to minimize their commuting time. Medical director coaches were not permitted to coach providers who reported to them. These shadow coaches observed providers and, after the observation, provided verbal feedback about strengths and areas for improvement, with a focus on patient-provider interactions. Coach feedback was based on their own experiences as high-performing physicians and on broader insights into what makes for high-quality patient-provider communication derived from the coaching seminar. This initial feedback was followed by a written coaching report from the coach to the provider summarizing the comments and recommendations from the coaching session.42 The primary goal of the shadow coaching session was to identify and target areas of patient-provider interaction that a provider could improve when interacting with and caring for their patients, with a focus on provider communication. Coaching occurred from March 2015 to August 2018.

The FQHC in our study implemented shadow coaching with the aforementioned critical components: the program established a CG-CAHPS score above 90 as the goal for all providers; participation in shadow coaching was voluntary; and coaching included both (1) self-evaluation in an immediate feedback conversation with the coach after the observation of patient encounters and (2) a written coaching report that both encouraged current behaviors that were the participant’s strengths and recommended beneficial new behaviors.

Measures.

The CAHPS surveys that include the overall provider rating and the communication items were completed either by adult patients or by parents of children who are patients. The communication composite (4 items) assesses the frequency of explaining things in a way that is easy to understand, listening carefully to the patient, showing respect for what the patient says, and spending enough time with the patient5 using a 4-point Never/Sometimes/Usually/Always response scale.
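The article reports these measures on a 0-100 scale (see Table 2) but does not spell out the item-level transformation. A common CAHPS scoring convention, assumed here purely for illustration, linearly rescales each response and averages the four items:

$$\mathrm{score}(r) = \frac{r-1}{3}\times 100,\qquad r\in\{1=\text{Never},\ 2=\text{Sometimes},\ 3=\text{Usually},\ 4=\text{Always}\},$$

$$\text{Communication composite} = \frac{1}{4}\sum_{k=1}^{4}\mathrm{score}(r_k),$$

where $r_k$ is the patient’s response to communication item $k$.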

Data and Analysis.

The sample included 320 providers from 44 practices with 46,452 completed CG-CAHPS visit surveys. We used a pre-post design to compare the performance trajectories of providers who were coached vs. those who were not over the interval from January 2012 to June 2019. The dependent variables were the CG-CAHPS overall provider rating and provider communication composite.

We compared trends for coached and uncoached providers before and after coaching. The actual coaching date was used for coached providers; the mean coaching date was assigned to uncoached providers. We fit mixed-effects regression models with random effects for provider (the level of treatment assignment) and fixed effects for time (a linear spline with a single knot and a possible “jump” at the coaching date),77,78 patient characteristics (adult/child, age, gender, race/ethnicity, language, general health status, education), and practice indicators. This spline model allows for a gradual change in scores over time, allows the slope to change at the time of coaching (i.e., at the knot), and also allows for a possible vertical discontinuity, or jump, in measured scores immediately after the date of the intervention. Allowing the trajectory to change at the time of coaching independently for the coached and uncoached groups addressed the possible threat of regression to the mean from performance-based treatment assignment (a very small effect of 0.2 points).
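As a reading aid, a minimal sketch of the fixed-effects specification implied by this description (using our own symbols, which do not appear in the article) is

$$Y_{ij} = \beta_0 + \beta_1 C_i + \beta_2 t_{ij} + \beta_3 C_i t_{ij} + \beta_4 P_{ij} + \beta_5 C_i P_{ij} + \beta_6 (t_{ij})_+ + \beta_7 C_i (t_{ij})_+ + \boldsymbol{\gamma}^{\top}\mathbf{x}_{ij} + \delta_{s(ij)} + u_i + \varepsilon_{ij},$$

where $Y_{ij}$ is the score from survey $j$ about provider $i$; $C_i$ indicates a coached provider; $t_{ij}$ is time relative to the (actual or mean) coaching date; $P_{ij}=1$ if $t_{ij}>0$; $(t)_+=\max(t,0)$; $\mathbf{x}_{ij}$ are the patient case-mix adjusters; $\delta_{s(ij)}$ are practice fixed effects; $u_i$ is the provider random effect; and $\varepsilon_{ij}$ is residual error. Under this parameterization, $\beta_5$ corresponds to the “differential immediate change (i.e., jump) at coaching for coached” and $\beta_7$ to the “differential change in slope at coaching for coached” reported in Table 2.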

The spline model allows us to detect two different forms of intervention effects, each of which represents a departure in the intervention group from what would have been expected in the absence of an intervention. The first effect, referred to as “Differential immediate change (i.e., jump) at coaching for coached,” captures an immediate change in scores in the intervention group (coached providers), relative to the control group (uncoached providers), at the time of the intervention (coaching for the coached providers). This shift or jump at the time of coaching occurs at the spline knot in the model (the coaching date). The null hypothesis for this effect is no differential change in score between the coached and uncoached providers at the time of coaching, i.e., no instantaneous effect of the intervention at the time of the intervention. A significant positive coefficient indicates an instantaneous positive change for coached providers relative to any change for uncoached providers at coaching. The second intervention effect, labeled “Differential slope change at coaching for coached,” captures any change in the slope for coached providers at the time of coaching relative to any change in the slope for uncoached providers. The null hypothesis for this effect corresponds to no differential slope change and hence no gradual effect of the intervention. A significant positive value for this effect indicates that the trajectory of the outcome after coaching for coached providers increases relative to that of uncoached providers.
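The authors do not report their estimation software. Purely as an illustration, a mixed model of this form could be fit in Python with statsmodels, assuming a long-format dataset with hypothetical column names (rating, coached, years since coaching, site, provider_id, and case-mix covariates):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per completed survey. "years" is time
# relative to the provider's coaching date (mean coaching date for uncoached).
df = pd.read_csv("cahps_long.csv")

df["post"] = (df["years"] > 0).astype(int)        # jump indicator at the knot
df["years_post"] = np.maximum(df["years"], 0.0)   # hinge term: slope change after the knot

# coached * (years + post + years_post) expands to the eight spline/jump terms
# in the main block of Table 2; site enters as fixed effects and provider as a
# random intercept (the level of treatment assignment).
model = smf.mixedlm(
    "rating ~ coached * (years + post + years_post) + C(site)"
    " + C(age_group) + male + C(race_eth_lang) + C(health_status) + C(education)",
    data=df,
    groups="provider_id",
)
fit = model.fit()
print(fit.summary())
```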

Including practice fixed effects and provider random effects in the models allowed mean performance to vary by provider and practice. As a sensitivity analysis, we fit two additional models per dependent variable to assess whether the intervention effects differed by two provider characteristics: specialty (5 categories) and provider type (2 categories). In these models, we interacted our intervention variables (and all supporting lower-order interactions) with the provider characteristic and then conducted joint tests of whether the intervention effects varied across the provider characteristic categories. We found no evidence of differences in intervention effects by specialty or provider type for either outcome.

Lastly, we assessed differential effects by coach. Of the eight coaches, four coached more than one provider for whom we had patient experience data. Indicator variables for these four coaches were created and interacted with our main study variables of interest in place of a single coached indicator (along with all supporting lower-order interactions). We conducted joint tests to determine whether our main study variables of interest differed among the four coaches, with a null hypothesis of no difference among coaches in either the jump at coaching or the slopes after coaching.

Results

The FQHC identified 74 “medium-performing” providers (of 320) based on CAHPS scores and shadowed their interactions with patients (i.e., coached providers). About half (52%) of the FQHC’s providers specialized in family medicine, followed by pediatrics (25%). The majority of providers were doctors with MD, DO or DDS degrees (63%). Similar proportions of coached and uncoached providers were medical doctors (vs. nurse practitioners/medical assistants), and specialties were similar, except that more coached providers specialized in internal medicine (22% vs. 10%) and more uncoached providers were pediatricians (28% vs. 15%). The median patient age category was 35-44, with 18% aged 0-17 and 9% aged 65 or older; 37% were male and 69% were Hispanic. Half (50%) of the adult patients and 71% of the parents of child patients had not attended college (Table 1).

Table 1.

Provider and Patient Characteristics, Overall and By Coached Status

All Providers Coached Providers Uncoached Providers
Provider Characteristics
Total Providers (N) 320 74 246
Provider Type % N % N % N
 MD/DO/DDS 62.5 200 63.5 47 62.2 153
 Nurse Practitioner/Medical Assistant 27.5 88 36.5 27 24.8 61
 Missing 10 32 0 0 13 32
Provider Specialty
 Family Medicine 51.9 166 51.4 38 52.0 128
 Internal Medicine 12.8 41 21.6 16 10.2 25
 Infectious Disease  2.5 8  2.7 2  2.4 6
 Pediatrics 25.0 80 14.9 11 28.1 69
 Women’s Health  7.5 24  9.5 7  6.9 17
 Missing  0.3 1 0 0  0.3 1
Total Patient Surveys (N) 46,452 20,720 25,732
Mean SD Mean SD Mean SD
Number of patient surveys per provider 145.2 157.4 280.0 157.7 104.6 133.1
Patient Characteristics
Age (years) % SE % SE % SE
 0-17 18.0 0.18 14.3 0.24 21.0 0.25
 18-24  6.3 0.11  7.3 0.18  5.5 0.14
 25-34 14.2 0.16 15.6 0.25 13.1 0.21
 35-44 13.5 0.16 14.3 0.24 12.9 0.21
 45-54 17.6 0.18 17.3 0.26 17.8 0.24
 55-64 21.2 0.19 21.6 0.29 20.8 0.25
 65+  9.2 0.13  9.8 0.21  8.7 0.18
Male 37.4 0.22 35.7 0.33 38.8 0.30
Race, ethnicity, and language of survey
 Hispanic and surveyed in Spanish 29.1 0.21 27.2 11.9 28.0 17.6
 Hispanic and surveyed in English 40.2 0.23 41.8 0.34 38.9 0.30
 Non-Hispanic White 12.6 0.15 11.2 0.22 13.7 0.21
 Asian/Pacific Islander  6.6 0.11  7.7 0.19  5.6 0.14
 Other race or ethnicity 11.6 0.15 11.0 0.22 12.1 0.20
General health status
 Excellent 19.1 0.18 18.0 0.26 20.1 0.24
 Very good 26.6 0.20 26.3 0.30 26.8 0.27
 Good 30.1 0.21 33.1 0.32 31.2 0.28
 Fair 18.5 0.18 18.8 0.27 18.3 0.24
 Poor  3.8 0.09  3.9 0.13  3.7 0.11
Education of adult patients (N=38,120)
 8th grade or less 14.2 0.17 13.6 0.25 14.7 0.24
 Some high school 12.8 0.16 13.0 0.24 12.7 0.22
 High school diploma 23.2 0.21 23.6 0.31 22.8 0.28
 Some college or 2-year degree 31.6 0.23 31.3 0.34 31.8 0.31
 4-year college degree 10.9 0.15 11.0 0.23 10.9 0.21
 More than 4-year college degree  7.3 0.13  7.5 0.19  7.2 0.17
Education of parent for child patients (N=8,332)
 8th grade or less 12.8 0.34 12.6 0.57 12.9 0.43
 Some high school 13.1 0.35 13.2 0.59 13.1 0.43
 High school diploma 25.8 0.45 27.4 0.77 24.9 0.55
 Some college or 2-year degree 31.7 0.48 30.7 0.80 32.2 0.60
 4-year college degree 10.3 0.31 10.4 0.53 10.2 0.39
 More than 4-year college degree  6.3 0.25  5.6 0.40  6.6 0.32

NOTE: SD = standard deviation. SE = standard error. MD/DO/DDS = Medical Doctor/Doctor of Osteopathic Medicine/Doctor of Dental Surgery.

Patient Experience Trends Before and After Coaching.

Model results (Table 2) indicate that uncoached providers (n = 246) had a non-significant change in performance during the period that the selected providers were coached, as expected. Among coached providers, we identified a statistically significant two-point jump on the 0-100 scale for the CAHPS measures (overall provider rating 2.0 with standard error (SE) 0.6 and communication score 1.9 with SE 0.6) at the time of coaching (labeled as “Differential immediate change (i.e., jump) at coaching for coached” in Table 2). Differences of 1, 3, and ≥5 points for CAHPS measures are considered small, medium, and large, respectively.79 The change in scores for the uncoached providers at the mean time of coaching of coached providers (labeled as “Immediate change at coaching for uncoached” in Table 2) was non-significant for both the overall provider rating and communication score, −0.3 and −0.8, respectively. Slopes from the spline model before and after coaching for both the uncoached and coached providers are also shown in Table 2. There was a significant increase in overall provider rating in the time before coaching for both the uncoached and coached providers (slope before coaching was 0.4 (SE 0.1) for uncoached and 0.6 (SE 0.2) for coached providers); however, the slopes after coaching were not significantly different from zero (i.e., a flat line). For provider communication, the slope before coaching for the coached providers (but not for the uncoached providers) was significant; the estimate was 0.4 (SE 0.2).
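The post-estimation slopes in Table 2 are linear combinations of the spline coefficients. For example, combining the rounded coefficients for the overall provider rating (our arithmetic; the small discrepancy from the reported −0.3 reflects rounding of the displayed coefficients):

$$\text{coached post-coaching slope} = 0.4 + 0.1 + (-0.1) + (-0.8) = -0.4 \approx -0.3 \ \text{points per year},$$

that is, the pre-coaching slope for uncoached providers, plus the coached providers’ pre-coaching slope difference, plus the slope change at coaching for uncoached providers, plus the differential slope change for coached providers.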

Table 2.

Linear Spline Model Results with a Single Knot and Possible “jump” at Coaching Date, By Case-Mix Adjusted Measure

Case-mix Adjusted Overall Provider Rating (n=46,089) Case-mix Adjusted Provider Communication (n=46,440)
Estimate SE p-value Estimate SE p-value
Independent Variables (Fixed Effects)
 Intercept 80.4 1.3 <.0001 *** 79.5 1.4 <.0001 ***
Coached indicator −0.8 0.8 0.287 −0.3 0.8 0.7414
Pre-coaching slope for uncoached providers  0.4 0.1 0.0033 **   0.3 0.1 0.0667
Difference in pre-coaching slope between coached and uncoached providers  0.1 0.2 0.5151   0.1 0.2 0.6762
Immediate change at coaching for uncoached −0.3 0.4 0.4686 −0.8 0.5 0.1084
Differential immediate change (i.e., jump) at coaching for coached  2.0 0.6 0.0007 ***   1.9 0.6 0.0022 **
Slope change at coaching for uncoached −0.1 0.2 0.7822   0.0 0.3 0.8722
Differential change in slope at coaching for coached −0.8 0.3 0.0185 * −0.7 0.4 0.0358 *
Post-estimation
Uncoached providers slope in pre-coaching period  0.4 0.1 0.0033 **   0.3 0.1 0.0667
Uncoached providers slope in post-coaching period  0.3 0.2 0.0777   0.3 0.2 0.1309
Coached providers slope before coaching  0.6 0.2 0.0003 ***   0.4 0.2 0.0235 *
Coached providers slope after coaching −0.3 0.2 0.0738 −0.3 0.2 0.0635
Random Effects
Variance Components
 Provider 20.9 2.3 <.0001 *** 24.5 2.6 <.0001 ***
 Residual 257.7 1.7 <.0001 *** 290.0 1.9 <.0001 ***
Square Root of variance component
 Provider  4.6   4.9 (0.0)
Case mix adjusters (Fixed Effects)
Patient Age (Ref: 18-24)
 0-17  0.5 0.5 0.2523   1.5 0.5 0.0021 **
 25-34  0.5 0.4 0.1463   1.2 0.4 0.0013 **
 35-44  2.2 0.4 <.0001 ***   3.1 0.4 <.0001 ***
 45-54  4.1 0.4 <.0001 ***   5.2 0.4 <.0001 ***
 55-64 5.25 0.4 <.0001 ***   6.6 0.4 <.0001 ***
 65+ 5.58 0.4 <.0001 ***   7.2 0.5 <.0001 ***
Male 0.01 0.2 0.9532   0.4 0.2 0.0361 *
Patient race/ethnicity/language
(Ref: White, English Survey)
 Hispanic, Spanish survey  2.3 0.3 <.0001 *** −0.0 0.3 0.9343
 Hispanic, English survey  1.5 0.3 <.0001 ***   1.1 0.3 <.0001 ***
 API  2.0 0.4 <.0001 ***   3.0 0.4 <.0001 ***
 Other  0.6 0.3 0.069   0.0 0.3 0.9834
Patient General Health Status (Ref: Poor)
 Excellent 12.5 0.4 <.0001 *** 13.8 0.5 <.0001 ***
 Very good  9.2 0.4 <.0001 *** 11.3 0.5 <.0001 ***
 Good  7.2 0.4 <.0001 ***   9.1 0.4 <.0001 ***
 Fair  5.3 0.4 <.0001 ***   6.1 0.5 <.0001 ***
Respondent education (Ref: 8th grade or less)
 Some high school  0.0 0.3 0.9115   0.1 0.3 0.743
 High school diploma −0.3 0.3 0.2607 −0.1 0.3 0.6359
 Some college or 2-yr degree −1.7 0.3 <.0001 *** −1.5 0.3 <.0001 ***
 4-year college degree −2.6 0.4 <.0001 *** −2.2 0.4 <.0001 ***
 More than 4-year college −3.1 0.4 <.0001 *** −2.4 0.4 <.0001 ***
Site (coefficients not shown) ** **
 Omnibus test (43 df) 0.0011 0.0037

NOTE: * p<0.05, ** p<0.01, *** p<0.001. SE = standard error. The slope before coaching describes change in effect from 2.5 years prior to coaching up to the coaching date. The slope after coaching describes change in effect from the day after the coaching date to 2.5 years post coaching.

Figure 1, Panel A shows the adjusted overall provider rating before and after coaching (in 6-month intervals) for the uncoached providers, the coached providers, and the estimated trend had the coached providers not been coached (that is, the predicted patient experience trend in the absence of coaching). Figure 1, Panel B shows the same results for the adjusted provider communication composite. Note that after coaching, the improvement gains in patient experience scores faded significantly, by roughly 40% a year, disappearing after 2.5 years. That is, we calculate the ratio of the differential post-coaching slope change to the differential jump at coaching (the coefficient on coached indicator × years since coaching × post-coaching indicator divided by the coefficient on coached indicator × post-coaching indicator), which is [(−0.8/2.0) × 100], or 40% per year for the overall provider rating and 37% for provider communication.
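As a check on the arithmetic, the annual fade rate and the implied time to full erosion follow directly from the Table 2 coefficients (our calculation, using the rounding in the text):

$$\text{annual fade rate} = \frac{|\text{differential slope change}|}{\text{differential jump}} = \frac{0.8}{2.0} = 40\%\ \text{(overall rating)},\qquad \frac{0.7}{1.9} \approx 37\%\ \text{(communication)};$$

$$\text{time to full erosion} \approx \frac{2.0}{0.8} = 2.5\ \text{years}.$$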

Figure 1. Adjusted Over Time Trend, Before and After Coaching, By Measure

In the models controlling for individual coaches, we found that providers coached by Coaches B and D had more than a 2-point jump in provider rating at the time of coaching (labeled as “Coach B’s (or Coach D’s) immediate change (i.e., jump) at coaching compared to uncoached” in Table 3), with jumps equal to 2.6 (SE 0.9) and 2.8 (SE 0.8), respectively. This was also seen in provider communication, where again providers coached by Coaches B and D had a significant jump at the time of coaching compared with providers of other coaches and with uncoached providers. Coach B’s providers improved by 2.7 (SE 0.9) and Coach D’s by 3.2 (SE 0.9) in provider communication scores at the time of coaching. The joint (3 df) test of whether the jumps among the four coaches differed was not significant for the provider rating (F = 2.46, p = 0.06) but was significant for provider communication (F = 4.09, p = 0.007). There were no significant differences among the coaches in their slopes after coaching (p > 0.05 for each); therefore, we do not report these slopes by coach. Figure 2, Panel A shows estimated regression lines for the two high-impact coaches (Coaches B and D) and Panel B for the low-impact coaches (Coaches A and C). A significant jump at coaching is seen in provider rating for providers coached by Coach B (p = 0.005) and Coach D (p < 0.001). These differences hold for up to one and a half years after coaching for providers coached by Coach D (p = 0.002 at six months; p = 0.04 at one and a half years) and for up to six months after coaching for Coach B (p = 0.03). For provider communication (data not shown), a significant jump is also seen after coaching for providers coached by Coach B (p = 0.006) and Coach D (p < 0.001). These differences hold for up to one and a half years after coaching for providers coached by Coach D (p < 0.001 at six months; p = 0.008 at one and a half years) and for up to six months after coaching for Coach B (p = 0.02).

Table 3.

Linear Spline Model Results with a Single Knot and Possible “jump” at Coaching Date including Differential Coach Effects, By Case-Mix Adjusted Measure

Global Rating (N=45,735) Provider Communication (N=46,085)
Independent Variables (Fixed Effects) Estimate (SE) Estimate (SE)
 Intercept 80.5 (1.3) *** 79.3 (1.4) ***
 Coach A 0.1 (1.2) 0.7 (1.3)
 Coach B 0.2 (1.4) 0.5 (1.6)
 Coach C 0.0 (1.7) 1.9 (1.8)
 Coach D −2.5 (1.2) * −2.5 (1.3)
Pre-coaching slope for uncoached providers 0.4 (0.1) ** 0.3 (0.1)
Coach A’s difference in pre-coaching slope compared to uncoached providers 0.6 (0.3) 0.6 (0.4)
Coach B’s difference in pre-coaching slope compared to uncoached providers 0.4 (0.4) 0.0 (0.4)
Coach C’s difference in pre-coaching slope compared to uncoached providers 0.3 (0.4) 1.1 (0.4) **
Coach D’s difference in pre-coaching slope compared to uncoached providers −0.4 (0.3) −0.6 (0.3)
Uncoached provider immediate change at coaching −0.3 (0.4) −0.7 (0.5)
Coach A’s immediate change (i.e., jump) at coaching compared to uncoached 0.1 (0.9) −0.1 (1.0)
Coach B’s immediate change (i.e., jump) at coaching compared to uncoached 2.6 (0.9) ** 2.7 (0.9) **
Coach C’s immediate change (i.e., jump) at coaching compared to uncoached 1.4 (1.1) 0.0 (1.1)
Coach D’s immediate change (i.e., jump) at coaching compared to uncoached 2.8 (0.8) *** 3.2 (0.9) ***
Slope change at coaching for uncoached −0.1 (0.2) 0.0 (0.3)
Coach A’s slope change at coaching compared to uncoached −0.7 (0.6) −0.7 (0.6)
Coach B’s slope change at coaching compared to uncoached −1.3 (0.5) ** −1.0 (0.5)
Coach C’s slope change at coaching compared to uncoached −0.8 (0.6) −1.0 (0.7)
Coach D’s slope change at coaching compared to uncoached −0.5 (0.5) −0.2 (0.5)
Post-estimation 0.3 (0.4) 1.1 (0.4) **
Uncoached providers slope in pre-coaching period −0.4 (0.3) −0.6 (0.3)
Uncoached providers slope in post-coaching period −0.3 (0.4) −0.7 (0.5)
Coach A’s slope before coaching 0.1 (0.9) −0.1 (1.0)
Coach B’s slope before coaching 2.6 (0.9) ** 2.7 (0.9) **
Coach C’s slope before coaching 1.4 (1.1) 0.0 (1.1)
Coach D’s slope before coaching 2.8 (0.8) *** 3.2 (0.9) ***
Coach A’s slope after coaching −0.1 (0.2) 0.0 (0.3)
Coach B’s slope after coaching −0.7 (0.6) −0.7 (0.6)
Coach C’s slope after coaching −1.3 (0.5) ** −1.0 (0.5)
Coach D’s slope after coaching −0.8 (0.6) −1.0 (0.7)
Random Effects
Variance Components
 Provider 21.2 (2.3) *** 24.8 (2.7) ***
 Residual 256.5 (1.7) *** 288.7 (1.9) ***
Square Root of variance component of provider 4.6 5.0
Case Mix Adjustors (Fixed Effects)
Patient age (Ref: 18-24)
 0-17 0.6 (0.5) 1.5 (0.5) **
 25-34 0.6 (0.4) 1.3 (0.4) ***
 35-44 2.2 (0.4) *** 3.1 (0.4) ***
 45-54 4.1 (0.4) *** 5.2 (0.4) ***
 55-64 5.3 (0.4) *** 6.6 (0.4) ***
 65+ 5.6 (0.4) *** 7.2 (0.5) ***
Male 0.0 (0.2) 0.4 (0.2) *
Patient race/ethnicity and survey language (Ref=White, English survey)
 Hispanic, Spanish survey 2.2 (0.3) *** −0.1 (0.3)
 Hispanic, English survey 1.5 (0.3) *** 1.1 (0.3) ***
 API 1.9 (0.4) *** 3.0 (0.4) ***
 Other 0.5 (0.3) 0.0 (0.3)
Patient General Health Scale (Ref: Poor)
 Excellent 12.4 (0.4) *** 13.7 (0.5) ***
 Very good 9.1 (0.4) *** 11.3 (0.5) ***
 Good 7.2 (0.4) *** 9.0 (0.4) ***
 Fair 5.3 (0.4) *** 6.1 (0.5) ***
Respondent education (Ref: 8th grade or less)
 Some high school 0.0 (0.3) 0.1 (0.3)
 High school diploma −0.4 (0.3) −0.2 (0.3)
 Some college or 2-year degree −1.7 (0.3) *** −1.5 (0.3) ***
 4-year college degree −2.6 (0.4) *** −2.2 (0.4) ***
 More than 4-year college degree −3.1 (0.4) *** −2.5 (0.4) ***

NOTE: * p<0.05, ** p<0.01, *** p<0.001. SE = standard error. The slope before coaching describes change in effect from 2.5 years prior to coaching up to the coaching date. The slope after coaching describes change in effect from the day after the coaching date to 2.5 years post coaching.

Figure 2. Adjusted Over Time Trend, Before and After Coaching, By Impact

Discussion

Studies consistently show that effective communication between clinicians and patients is a critical determinant of overall patient experience.21,80–86 Organizations engage in several ways to change provider behavior; many provide communication training for clinicians. Generally, provider training includes 4 to 8 hours of instruction and takes place during regular office hours,87–91 requiring clinicians to be relieved of their responsibilities. However, very little research has studied the effect of this type of communication training, with limited evidence of its effectiveness in improving patient experiences.36,88,91–93 Often, any initial improvement from such communication training dissipates as providers return to overloaded schedules and variable patient expectations.43

Shadow coaching, on the other hand, has proven more effective, with some studies providing evidence that coaching, shadow or otherwise, helps build and maintain competencies among physicians, nurses and other staff, and increases compliance with practice guidelines.47,52,56,89 Ravitz et al (2013)56 studied 26 physicians and found improvements in physician-patient communication 4-6 months after an intervention that included four weekly videotaped sessions with feedback and coaching. Yusuf et al (2018)52 reported significant improvement in the nurse-physician dynamic after 3 months of observation and coaching, with results sustained at 12 months. Our study is consistent with these studies in finding an improvement in providers’ overall performance and communication immediately following shadow coaching, as captured by the CAHPS measures.

Our study is the first to our knowledge to evaluate improvement sustainability 2.5 years after shadow coaching. Although the peer shadow coaching used in our study incorporates many features that are consistent with the literature on successful behavior change (a learner-centered approach, immediate feedback, written recommendations on what skills to practice and behaviors to engage in), we found that the gains in patient experience scores for coached providers faded significantly (by about 40% a year) and disappeared after 2.5 years. This may correspond to providers gradually slipping back into their previous habits of interacting with patients, as there is some evidence that clinician behavior lapses when feedback ends.94 More generally, erosion related to physician behaviors and coaching95 is a specific instance of a general phenomenon and has been found in other behavioral change interventions (e.g., smoking cessation, alcohol use, anger management, weight loss programs). Engaging providers in the coaching process is critical,96,97 but program implementation also needs a “booster” component to sustain desired behaviors.95–99 Structural and environmental factors, which may have led to the need for coaching in the first place, may also have been at least partly responsible for the erosion of improvement. Organizational structure may obstruct providers seeking to change their behaviors.100,101 Less than full engagement by providers in the organization-wide mission to provide high-quality care experiences could stifle improvement efforts. Making structural changes that allow physicians to have more time with patients could foster and encourage physicians to focus on improving provider-patient interactions. Targeted quality improvement efforts aimed at easing providers’ overloaded schedules may allow more time for physicians to try out and incorporate new behaviors. Addressing such systemic changes could allow providers to maintain and even further improve their interactions with patients.

We also considered whether coaches differed in effectiveness. We found evidence that two coaches were more effective than the other two, initially and 18 months later, but without clear evidence of differences in the rate at which gains eroded. These two effective coaches provided additional follow-up at 3 and 6 months to review CAHPS scores with the provider and touched base on what was and was not working among the recommended behavioral modifications; the other coaches did not provide any follow-up. Such reinforcement after coaching may solidify new learning and lead to more enduring change, consistent with the limited research indicating that skill reinforcement after training helps change physician performance.36,102,103 Regularly planned annual shadow coaching ‘boosters’ might maintain or even increase the improvement in patient experience scores that otherwise erodes.

These findings have practical implications for implementing a coaching program. First, the two most effective coaches spontaneously added follow-up engagement with those they coached, suggesting that such follow-up should be added as a required component. Second, evidence that gains erode suggests the importance of adding “refresher” or booster coaching sessions. Booster sessions have been shown to be important for other behavioral interventions, such as drug use prevention programs.104,105 These additions would ensure a more consistent and persistent coaching effect on provider behaviors. Finally, future research should examine the most effective pairing of coaches with those coached, considering supervisory relationships, medical specialty, and other factors. Research examining the impact of additional coaching, and work that investigates and identifies mechanisms that sustain changes in provider behavior, are also needed.

Limitations.

We studied one large FQHC that used CAHPS data to determine providers’ eligibility for shadow coaching, so our findings may not generalize to all settings, but they are instructive given the limited research evaluating the effect of shadow coaching on patient experiences. We also were unable to evaluate all providers who were coached, as patient experience surveys were not available for some providers before and after coaching. Our findings support the current literature on the effectiveness of shadow coaching and extend it by investigating the sustainability of improvements and the differential effects of coaches.

Conclusion

Practices use patient experience scores as a metric for patient-centeredness and for improving provider-patient interactions. Overall ratings of providers and provider communication scores can be improved by implementing peer shadow coaching that targets modifiable provider behaviors. Shadow coaching ‘booster’ sessions might maintain or increase the improvement in patient experience scores that otherwise erodes. Attention should also be paid to the faithful implementation of all aspects of shadow coaching to ensure its effectiveness.

Acknowledgements:

We acknowledge the time and support of Pearl Kim and the health plan staff that assisted with obtaining the patient experience data used in this study. These findings were presented at an Invitational Research Meeting Sponsored by the U.S. Agency for Healthcare Research and Quality on October 7th, 2020 on Advancing Methods of Implementing and Evaluating Patient Experience Improvement Using CAHPS® Surveys.

Funding:

This work was supported by a cooperative agreement from the Agency for Healthcare Research and Quality (AHRQ) [Contract number U18 HS025920].

Footnotes

Conflicts of Interest: All authors report no conflicts of interest.

References

1. Ranjan P, Kumari A, Chakrawarty A. How can Doctors Improve their Communication Skills? Journal of clinical and diagnostic research: JCDR. 2015;9:JE01–4.
2. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71:522–54.
3. Fortuna KL, Lohman MC, Batsis JA, et al. Patient experience with healthcare services among older adults with serious mental illness compared to the general older population. Int J Psychiatry Med. 2017;52:381–98.
4. Kripalani S, Weiss BD. Teaching about health literacy and clear communication. J Gen Intern Med. 2006;21:888–90.
5. Quigley DD, Elliott MN, Farley DO, Burkhart Q, Skootsky SA, Hays RD. Specialties differ in which aspects of doctor communication predict overall physician ratings. J Gen Intern Med. 2014;29:447–54.
6. Quigley DD, Martino SC, Brown JA, Hays RD. Evaluating the content of the communication items in the CAHPS® clinician and group survey and supplemental items with what high-performing physicians say they do. Patient. 2013;6:169–77.
7. Roberts MJ, Campbell JL, Abel GA, et al. Understanding high and low patient experience scores in primary care: analysis of patients’ survey data for general practices and individual doctors. BMJ. 2014;349:g6034.
8. Setodji CM, Quigley DD, Elliott MN, et al. Patient Experiences with Care Differ with Chronic Care Management in a Federally Qualified Community Health Center. Popul Health Manag. 2017;20:442–8.
9. Slatore CG, Feemster LC, Au DH, et al. Which patient and clinician characteristics are associated with high-quality communication among veterans with chronic obstructive pulmonary disease? Journal of health communication. 2014;19:907–21.
10. Davies E, Shaller D, Edgman-Levitan S, et al. Evaluating the use of a modified CAHPS survey to support improvements in patient-centred care: lessons from a quality improvement collaborative. Health Expect. 2008;11:160–76.
11. Friedberg MW, SteelFisher GK, Karp M, Schneider EC. Physician groups’ use of data from patient experience surveys. J Gen Intern Med. 2011;26:498–504.
12. Quigley DD, Mendel PJ, Predmore ZS, Chen AY, Hays RD. Use of CAHPS® patient experience survey data as part of a patient-centered medical home quality improvement initiative. J Healthc Leadersh. 2015;7:41–54.
13. Drake KM, Hargraves JL, Lloyd S, Gallagher PM, Cleary PD. The effect of response scale, administration mode, and format on responses to the CAHPS Clinician and Group survey. Health Serv Res. 2014;49:1387–99.
14. Dyer N, Sorra JS, Smith SA, Cleary PD, Hays RD. Psychometric properties of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Adult Visit Survey. Med Care. 2012;50 Suppl:S28–34.
15. Elliott MN, Kanouse DE, Edwards CA, Hilborne LH. Components of care vary in importance for overall patient-reported experience by type of hospitalization. Med Care. 2009;47:842–9.
16. Evensen CT, Yost KJ, Keller S, et al. Development and Testing of the CAHPS Cancer Care Survey. J Oncol Pract. 2019;15:e969–e78.
17. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67:27–37.
18. Hays RD, Berman LJ, Kanter MH, et al. Evaluating the psychometric properties of the CAHPS Patient-centered Medical Home survey. Clin Ther. 2014;36:689–96 e1.
19. Hays RD, Chong K, Brown J, Spritzer KL, Horne K. Patient reports and ratings of individual physicians: an evaluation of the DoctorGuide and Consumer Assessment of Health Plans Study provider-level surveys. Am J Med Qual. 2003;18:190–6.
20. Hays RD, Mallett JS, Gaillot S, Elliott MN. Performance of the Medicare Consumer Assessment of Health Care Providers and Systems (CAHPS) Physical Functioning Items. Med Care. 2016;54:205–9.
21. Hays RD, Martino S, Brown JA, et al. Evaluation of a Care Coordination Measure for the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Medicare survey. Med Care Res Rev. 2014;71:192–202.
22. Hays RD, Shaul JA, Williams VS, et al. Psychometric properties of the CAHPS 1.0 survey measures. Consumer Assessment of Health Plans Study. Med Care. 1999;37:MS22–31.
23. Morales LS, Weech-Maldonado R, Elliott MN, Weidmer B, Hays RD. Psychometric Properties of the Spanish Consumer Assessment of Health Plans Survey (CAHPS). Hispanic Journal of Behavioral Sciences. 2003;25:386–409.
24. Rothman AA, Park H, Hays RD, Edwards C, Dudley RA. Can additional patient experience items improve the reliability of and add new domains to the CAHPS hospital survey? Health Serv Res. 2008;43:2201–22.
25. Schmocker RK, Cherney Stafford LM, Siy AB, Leverson GE, Winslow ER. Understanding the determinants of patient satisfaction with surgical care using the Consumer Assessment of Healthcare Providers and Systems surgical care survey (S-CAHPS). Surgery. 2015;158:1724–33.
26. Solomon LS, Hays RD, Zaslavsky AM, Ding L, Cleary PD. Psychometric properties of a group-level Consumer Assessment of Health Plans Study (CAHPS) instrument. Med Care. 2005;43:53–60.
27. Weech-Maldonado R, Morales LS, Spritzer K, Elliott M, Hays RD. Racial and ethnic differences in parents’ assessments of pediatric care in Medicaid managed care. Health Serv Res. 2001;36:575–94.
28. Weidmer BA, Brach C, Slaughter ME, Hays RD. Development of items to assess patients’ health literacy experiences at hospitals for the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Hospital Survey. Med Care. 2012;50:S12–21.
29. Weidmer BA, Cleary PD, Keller S, et al. Development and evaluation of the CAHPS (Consumer Assessment of Healthcare Providers and Systems) survey for in-center hemodialysis patients. Am J Kidney Dis. 2014;64:753–60.
30. Sweeney SM, Hemler JR, Baron AN, et al. Dedicated Workforce Required to Support Large-Scale Practice Improvement. J Am Board Fam Med. 2020;33:230–9.
31. Berwick DM. A user’s manual for the IOM’s ‘Quality Chasm’ report. Health Aff (Millwood). 2002;21:80–90.
32. Goldstein E, Cleary PD, Langwell KM, Zaslavsky AM, Heller A. Medicare Managed Care CAHPS: A Tool for Performance Improvement. Health care financing review. 2001;22:101–7.
33. Patwardhan A, Spencer CH. Are patient surveys valuable as a service improvement tool in health services? An overview. J Healthc Leadersh. 2012;4:33–46.
34. Eisenberg J. Doctors’ decisions and the cost of medical care. Ann Arbor, MI: Health Administration Press; 1986.
35. Fineberg HV. Clinical evaluation: how does it influence medical practice? Bull Cancer. 1987;74:333–46.
36. Brown JB, Boles M, Mullooly JP, Levinson W. Effect of clinician communication skills training on patient satisfaction. A randomized, controlled trial. Ann Intern Med. 1999;131:822–9.
37. O’Leary KJ, Darling TA, Rauworth J, Williams MV. Impact of hospitalist communication-skills training on patient-satisfaction scores. J Hosp Med. 2013;8:315–20.
38. Associates D. Improving Care Team Communication & Patient Experience. Saint Paul, MN: Regions Hospital; 2015.
39. Associates D. Improving Physician Communication & Patient Experience. Robbinsdale, MN: North Memorial Medical Center; 2015.
40. Physician Coaches Improve the Patient Experience. Becker’s Hospital Review Web site. 2013. (Accessed April 20, 2020, at https://www.beckershospitalreview.com/hospital-physician-relationships/physician-coaches-improve-the-patient-experience.html.)
41. The Big Benefits of Shadow Coaching for Improving the Patient Experience. 2019. (Accessed July 23, 2020, at http://baird-group.com/articles/the-big-benefits-of-shadow-coaching-for-improving-the-patient-experience.)
42. Luallin MD. The shadow coach: high-touch help for low-scoring providers. MGMA Connex. 2005;5:31–2.
43. Sullivan KW. How outliers become superstars: what shadow coaches do. J Med Pract Manage. 2012;27:344–6.
44. Wolever R, Moore M, Jordan M. Coaching in Healthcare. In: Bachkirova T, Spence G, Drake D, eds. The SAGE Handbook of Coaching. Thousand Oaks, CA: SAGE Publications; 2016.
45. Mayberry D, Hanson M. Let’s Talk: A guide for transforming the patient experience through improved communication: MN Community Measurement; 2013.
46. Hayes E, Kalmakis KA. From the sidelines: coaching as a nurse practitioner strategy for improving health outcomes. J Am Acad Nurse Pract. 2007;19:555–62.
47. Poe SS, Abbott P, Pronovost P. Building nursing intellectual capital for safe use of information technology: a before-after study to test an evidence-based peer coach intervention. J Nurs Care Qual. 2011;26:110–9.
48. Sargeant J, Lockyer J, Mann K, et al. Facilitated Reflective Performance Feedback: Developing an Evidence- and Theory-Based Model That Builds Relationship, Explores Reactions and Content, and Coaches for Performance Change (R2C2). Acad Med. 2015;90:1698–706.
49. Schwellnus H, Carnahan H. Peer-coaching with health care professionals: what is the current status of the literature and what are the key components necessary in peer-coaching? A scoping review. Med Teach. 2014;36:38–46.
50. Sherman RO. Leading a multigenerational nursing workforce: issues, challenges and strategies. Online J Issues Nurs. 2006;11:3.
51. Watling CJ, LaDonna KA. Where philosophy meets culture: exploring how coaches conceptualise their roles. Med Educ. 2019;53:467–76.
52. Yusuf FR, Kumar A, Goodson-Celerin W, et al. Impact of Coaching on the Nurse-Physician Dynamic. AACN Adv Crit Care. 2018;29:259–67.
53. Buchanan MO, Summerlin-Long SK, DiBiase LM, Sickbert-Bennett EE, Weber DJ. The compliance coach: A bedside observer, auditor, and educator as part of an infection prevention department’s team approach for improving central line care and reducing central line-associated bloodstream infection risk. Am J Infect Control. 2019;47:109–11.
54. Nelson JL, Apenhorst DK, Carter LC, Mahlum EK, Schneider JV. Coaching for competence. Medsurg Nurs. 2004;13:32–5.
55. Wise T, Gautam B, Harris R, Casida D, Chapman R, Hammond L. Increasing the Registered Nursing Workforce Through a Second-Degree BSN Program Coaching Model. Nurse Educ. 2016;41:299–303.
56. Ravitz P, Lancee WJ, Lawson A, et al. Improving physician-patient communication through coaching of simulated encounters. Acad Psychiatry. 2013;37:87–93.
57. Witherspoon R, White RP. Executive coaching: A continuum of roles. Consult Psychol J Pract Res. 1996;48:124–33.
58. Ladyshewsky RK. Building cooperation in peer coaching relationships: Understanding the relationships between reward structure, learner preparedness, coaching skill and learner engagement. Physiotherapy. 2006;92:4–10.
59. Secomb J. A systematic review of peer teaching and learning in clinical education. J Clin Nurs. 2008;17:703–16.
60. Blase J, Hekelman FP, Rowe M. Preceptors’ use of reflection to teach in ambulatory settings: An exploratory study. Acad Med. 2000;75:947–53.
61. Gingiss PL. Peer coaching: building collegial support for using innovative health programs. J Sch Health. 1993;63:79–85.
62. Driscoll J, Cooper R. Coaching for clinicians. Nurs Manag (Harrow). 2005;12:18–23.
63. Grant A, Passmore J, Cavanagh M, Parker H. The state of play in coaching today: A comprehensive review of the field. Inter Rev Ind Organ Psych. 2010;25:125–67.
64. Ladyshewsky R. Peer-assisted learning in clinical education: A review of terms and learning principles. J Phys Ther Educ. 2000;14:15–22.
65. Zeus P, Skiffington S. The coaching at work toolkit: A complete guide to techniques and practice. North Ryde, Australia: McGraw-Hill Book Company; 2002.
66. Hekelman F, Flynn S, Glover P, Galazka S, Phillips JJ. Peer coaching in clinical teaching. Eval Health Prof. 1994;17:366–81.
67. Cox E. Individual and organizational trust in a reciprocal peer coaching context. Mentor Tutor Part Learn. 2012;20:427–43.
68. Gattellari M, Donnelly N, Taylor N, Meerkin M, Hirst G, Ward JE. Does ‘peer coaching’ increase GP capacity to promote informed decision making about PSA screening? A cluster randomised trial. Fam Pract. 2005;22:253–65.
69. Sabo K, Duff M, Purdy B. Building leadership capacity through peer career coaching: a case study. Nurs Leadersh (Tor Ont). 2008;21:27–35.
70. Waddell DL, Dunn N. Peer coaching: the next step in staff development. J Contin Educ Nurs. 2005;36:84–9; quiz 90–1.
71. Fustino NJ, Moore P, Viers S, Cheyne K. Improving Patient Experience of Care Providers in a Multispecialty Ambulatory Pediatrics Practice. Clin Pediatr (Phila). 2019;58:50–9.
72. Godfrey MM, Andersson-Gare B, Nelson EC, Nilsson M, Ahlstrom G. Coaching interprofessional health care improvement teams: the coachee, the coach and the leader perspectives. J Nurs Manag. 2014;22:452–64.
73. Sharieff GQ. MD to MD Coaching: Improving Physician-Patient Experience Scores: What Works, What Doesn’t. J Patient Exp. 2017;4:210–2.
74. A Better Care Experience with A.I.M. SullivanLuallin Group, 2021. (Accessed February 15, 2021, at https://sullivanluallingroup.com/.)
75. Clinician Resources. SullivanLuallin Group, 2021. (Accessed February 15, 2021, at http://www.sullivanluallingroup.com/shadow-coaching/?option=com_content&view=article&id=125:shadow-coaching-motivators&catid=2:uncategorised&Itemid=244.)
76. Use Shadow Coaching to Improve Medical Practice Performance. Physicians Practice, 2013. (Accessed February 15, 2021, at https://www.physicianspractice.com/healthcare-careers/use-shadow-coaching-improve-medical-practice-performance.)
77. de Boor C. A Practical Guide to Splines: Revised Edition: Springer-Verlag; 2001.
78. Marsh LC, Cormier DR. Spline Regression Models. Thousand Oaks, CA: SAGE Publications, Inc.; 2002.
79. Quigley DD, Elliott MN, Setodji CM, Hays RD. Quantifying Magnitude of Group-Level Differences in Patient Experiences with Health Care. Health Serv Res. 2018;53:3027–51.
80. Carter WB, Inui TS, Kukull WA, Haigh VH. Outcome-based doctor-patient interaction analysis: II. Identifying effective provider and patient behavior. Med Care. 1982;20:550–66.
81. Frederickson L. Exploring information-exchange in consultation: the patients’ view of performance and outcomes. Patient Educ Couns. 1995;25:237–46.
82. Hall JA, Irish JT, Roter DL, Ehrlich CM, Miller LH. Satisfaction, gender, and communication in medical visits. Med Care. 1994;32:1216–31.
83. Hall JA, Roter DL, Katz NR. Meta-analysis of correlates of provider behavior in medical encounters. Med Care. 1988;26:657–75.
84. Mishler EG, Clark JA, Ingelfinger J, Simon MP. The language of attentive patient care: a comparison of two medical interviews. J Gen Intern Med. 1989;4:325–35.
85. Roter DL, Stewart M, Putnam SM, Lipkin M Jr., Stiles W, Inui TS. Communication patterns of primary care physicians. JAMA. 1997;277:350–6.
86. Rowland-Morin PA, Carroll JG. Verbal communication skills and patient satisfaction. A study of doctor-patient interviews. Eval Health Prof. 1990;13:168–85.
87. Faulkner A, Argent J, Jones A, O’Keeffe C. Improving the skills of doctors in giving distressing information. Med Educ. 1995;29:303–7.
88. Joos SK, Hickam DH, Gordon GH, Baker LH. Effects of a physician communication intervention on patient care outcomes. J Gen Intern Med. 1996;11:147–55.
89. Maiman LA, Becker MH, Liptak GS, Nazarian LF, Rounds KA. Improving pediatricians’ compliance-enhancing practices. A randomized trial. Am J Dis Child. 1988;142:773–9.
90. Roter DL, Hall JA, Kern DE, Barker LR, Cole KA, Roca RP. Improving physicians’ interviewing skills and reducing patients’ emotional distress. A randomized clinical trial. Arch Intern Med. 1995;155:1877–84.
91. Stein TS, Kwan J. Thriving in a busy practice: physician-patient communication training. Eff Clin Pract. 1999;2:63–70.
92. Lewis CC, Pantell RH, Sharp L. Increasing patient knowledge, satisfaction, and involvement: randomized trial of a communication intervention. Pediatrics. 1991;88:351–8.
93. Verby JE, Holden P, Davis RH. Peer review of consultations in primary care: the use of audiovisual recordings. Br Med J. 1979;1:1686–8.
94. Freeborn DK, Shye D, Mullooly JP, Eraker S, Romeo J. Primary care physicians’ use of lumbar spine imaging tests: effects of guidelines and practice pattern feedback. J Gen Intern Med. 1997;12:619–25.
95. Etchegary C, Taylor L, Mahoney K, Parfrey O, Hall A. Changing Health-Related Behaviors 5: On Interventions to Change Physician Behaviors. In: Parfrey P, Barrett BJ, eds. Clinical Epidemiology: Practice and Methods, Methods in Molecular Biology, vol 2249. New York, NY: Humana; 2021:613–30.
96. Chiaburu DS, Marinova SV. What predicts skill transfer? An exploratory study of goal orientation, training, self-efficacy, and organizational supports. International Journal of Training & Development. 2005;9:110–23.
97. Kirwan C, Brichall D. Transfer of learning from management development programmes: Testing the Holton model. International Journal of Training & Development. 2006;10:252–68.
98. Sustainability Model and Guide. Michael Smith Foundation for Health Research, 2010. (Accessed May 4, 2021, at https://ktpathways.ca/resources/sustainability-model-and-guide.)
99. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:2012.
100. Bokhour BG, Fix GM, Mueller NM, et al. How can healthcare organizations implement patient-centered care? Examining a large-scale cultural transformation. BMC Health Serv Res. 2018;18:168.
101. Friedberg MW, Safran DG, Coltin KL, Dresser M, Schneider EC. Readiness for the Patient-Centered Medical Home: structural capabilities of Massachusetts primary care practices. J Gen Intern Med. 2009;24:162–9.
102. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700–5.
103. Greco PJ, Eisenberg JM. Changing physicians’ practices. N Engl J Med. 1993;329:1271–3.
104. Ellickson PL. Project ALERT, A Smoking and Drug Prevention Experiment: First-Year Progress Report. Santa Monica, CA: RAND Corporation; 1984.
105. Ellickson PL, Bell RM, Thomas MA, Robyn A, Zellman GL. Designing and Implementing Project ALERT: A Smoking and Drug Prevention Experiment. Santa Monica, CA: RAND Corporation; 1988.
