Journal of Patient Experience. 2016 Apr 7;3(1):17–19. doi: 10.1177/2374373516636736

Evaluations of Neurologists by Their Patients and Residents Are Inversely Correlated

Michael R Dobbs, Jonathan H Smith
PMCID: PMC5513626  PMID: 28725827

Abstract

Objective and Background:

We hypothesized that evaluation scores for attending neurologists by patients and residents would parallel one another. Additionally, we hypothesized that provider productivity would also be associated with performance evaluations by patients and residents.

Methods:

In a university neurology department, we collected individual Clinician and Group Consumer Assessment of Healthcare Providers and Systems patient satisfaction scores and standardized resident evaluation scores (n = 22 faculty members). We performed bivariate analysis of doctor–patient satisfaction versus resident evaluation scores.

Results:

Attending neurologists with higher patient satisfaction received lower resident evaluation scores (P < .05). A disproportionate number of neurologists with low evaluations appeared not to meet clinical productivity targets.

Conclusion:

Finding a significant inverse correlation was surprising. Perhaps what is valued by patients in their physician is not what residents value in teachers; that deserves further study. Attending physicians who spend their energy on the patient experience may not have sufficient time to devote to teaching, and vice versa. That neurologists with low evaluation scores appear more likely not to meet productivity targets supports this idea.

Keywords: patient experience, patient satisfaction, resident education, clinical faculty

Background and Purpose

Teaching hospitals have missions that include providing high-quality patient care and teaching resident physicians to become independent practitioners. Successful teaching physicians, it follows, should exhibit excellence in patient care and trainee education. Teaching physicians, therefore, should strive to receive high marks both in patient satisfaction and in resident teaching evaluations.

Based on a limited body of literature, gender, race, and academic rank may bias trainee ratings of faculty in graduate medical education settings (1,2). Of note, a large cross-sectional study of graduate medical education in the Netherlands identified time spent on teaching, as opposed to patient care, as being associated with more favorable odds of receiving high ratings from trainees (3). Conversely, increasing time spent on patient care relative to teaching was associated with lower evaluation scores. However, this association has neither been replicated nor been evaluated from the patient satisfaction perspective. We hypothesized that the same attending neurologist behaviors and values that produce high patient satisfaction would also produce high ratings by residents, and thus that patient satisfaction scores would correlate positively with resident evaluation scores.

Methods

In the department studied, faculty are evaluated by residents monthly using a university-mandated 7-item scale that assesses patient care, interpersonal and communication skills, practice-based learning and improvement, medical knowledge, professionalism, and overall teaching effectiveness (Appendix 2). Each item is rated on 4 verbal descriptors, ranging from strongly agree to strongly disagree; these are numerically converted to scores of 1 to 4 and the composite is scaled out of 10 total points.
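
As a concrete illustration, the conversion might look like the following minimal Python sketch. The exact numeric mapping (strongly agree taken as 4 through strongly disagree as 1) and the use of the item mean as the composite are assumptions for illustration, not details taken from the instrument.

# Minimal sketch of the resident evaluation score conversion described above.
# Assumptions: "strongly agree" maps to 4, "strongly disagree" to 1, and the
# composite is the item mean rescaled to a 10-point maximum.
LIKERT_TO_SCORE = {
    "strongly agree": 4,
    "agree": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def composite_resident_score(responses):
    """Convert one resident's item responses to a composite score out of 10."""
    numeric = [LIKERT_TO_SCORE[r.lower()] for r in responses]
    return 10 * sum(numeric) / (4 * len(numeric))

# Example: a mostly favorable 7-item evaluation
print(composite_resident_score(
    ["strongly agree", "agree", "strongly agree", "agree",
     "strongly agree", "strongly agree", "agree"]
))  # ~8.9 on the 10-point scale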

In a university neurology department, we collected individual Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CGCAHPS) Press Ganey patient satisfaction survey scores as well as standardized resident evaluation scores. The CGCAHPS question domains are listed in Appendix 1.

Faculty productivity was treated as a dichotomous variable according to whether the faculty member met a work relative value unit (wRVU) target. The target was defined as accruing the expected wRVUs relative to benchmark standards, adjusted for clinical distribution of effort.
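
The exact benchmark and effort adjustment used in the study are not specified; the dichotomization might be sketched as follows, assuming the target is a specialty benchmark wRVU figure prorated by each faculty member's clinical fraction of effort.

# Illustrative only: benchmark_wrvu and clinical_effort_fraction are assumed
# inputs, not values reported in the study.
def met_productivity_target(actual_wrvu, benchmark_wrvu, clinical_effort_fraction):
    """Dichotomize productivity: True if actual wRVUs meet the effort-adjusted target."""
    target = benchmark_wrvu * clinical_effort_fraction
    return actual_wrvu >= target

print(met_productivity_target(3200, benchmark_wrvu=5000, clinical_effort_fraction=0.6))  # True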

We chose to focus on the overall doctor rating by patients (% top box). The term “top box” refers to the percentage of patients selecting the most positive response on a specified measure, such as overall doctor rating. We performed a bivariate analysis of overall doctor rating versus composite resident evaluation scores using JMP version 10 statistical analysis software (SAS Institute, Inc, Cary, North Carolina, 1989-2007). We then performed exploratory analysis looking at the association of clinical productivity as a function of evaluation scores.
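
The study used JMP; the two steps described above (computing % top box per provider and fitting a simple linear regression against the composite resident score) could be reproduced with a sketch such as the following. The sample data are invented, and the assumption is that the overall doctor rating is collected on a 0 to 10 scale with 10 as the most positive response.

import numpy as np

def percent_top_box(ratings, top_value=10):
    """Share of patients giving the most favorable overall doctor rating, as a percentage."""
    ratings = np.asarray(ratings)
    return 100 * np.mean(ratings == top_value)

# One % top box value and one composite resident score per attending (made up).
top_box = np.array([62.0, 74.5, 80.1, 92.9, 70.3])
resident_score = np.array([9.4, 9.0, 8.8, 8.4, 9.1])

# Ordinary least squares fit: top_box = intercept + slope * resident_score
slope, intercept = np.polyfit(resident_score, top_box, deg=1)
print(f"top_box = {intercept:.1f} + {slope:.2f} x resident_score")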

Results

The faculty cohort (n = 22) comprised 5 pediatric and 17 adult neurologists, including 6 women and 16 men; there were 9 professors, 5 associate professors, and 8 assistant professors. On average, there were 26 patient satisfaction scores (range 8-86) and 37 resident scores (range 12-64) per faculty member. The median overall doctor rating was 74.55 on a 100-point scale (10th percentile = 62.13, 90th percentile = 92.93). The median resident score was 9.045 on a 10-point scale (10th percentile = 8.405, 90th percentile = 9.592).

Attending neurologists who scored higher on overall doctor rating scored lower on resident evaluation composite scores (P < .05), as seen in Figure 1. The estimated simple linear regression equation is: patient % top box score = 191.9 − 12.87 × composite resident evaluation score. Scores did not correlate with gender or academic rank.
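
Read literally, the fitted equation implies that each additional point on the 10-point composite resident scale corresponds to a predicted drop of roughly 13 percentage points in % top box; at the median resident score of 9.045, the predicted value is 191.9 − 12.87 × 9.045 ≈ 75.5, close to the observed median overall doctor rating of 74.55.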

Figure 1. Patient satisfaction versus resident evaluation scores.

Separation of the line-of-fit graph into quadrants using a focusing matrix technique showed 4 distinct groups: (1) high patient satisfaction and low resident evaluation (n = 4), (2) high patient satisfaction and high resident evaluation (n = 2), (3) low patient satisfaction and high resident evaluation (n = 6), and (4) low patient satisfaction and low resident evaluation (n = 10). When clinical productivity is overlaid, it appears that a disproportionate number of neurologists in the negatively balanced quadrant did not meet productivity targets (6 of 10, P > .05).
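
The quadrant assignment could be sketched as below. The cut points used to split the graph are not stated explicitly in the study; the sample medians of the two scores are assumed here as the quadrant boundaries, and the scores shown are invented for illustration.

import numpy as np

def focusing_matrix(top_box, resident_score):
    """Label each faculty member by quadrant of the patient/resident score plane."""
    top_box = np.asarray(top_box, dtype=float)
    resident_score = np.asarray(resident_score, dtype=float)
    hi_patient = top_box >= np.median(top_box)
    hi_resident = resident_score >= np.median(resident_score)
    labels = []
    for p, r in zip(hi_patient, hi_resident):
        if p and r:
            labels.append("high patient / high resident")
        elif p:
            labels.append("high patient / low resident")
        elif r:
            labels.append("low patient / high resident")
        else:
            labels.append("low patient / low resident")
    return labels

# Invented scores for 4 faculty members, for illustration only.
print(focusing_matrix([62.1, 74.6, 92.9, 70.3], [9.4, 9.0, 8.4, 8.8]))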

Conclusion

We expected a positive correlation of resident and patient scores; finding a significant inverse correlation was surprising. An obvious conclusion is that what is valued by patients in their physician is not what residents value in their attending teachers. That deserves further study. It is possible that attending physicians who focus their energy on the patient experience tend not to have sufficient time to devote to resident education, and vice versa. Some physicians scored high in both areas (n = 2), and some scored low in both (n = 10).

Using a 4-quadrant focusing matrix allowed us to explore the characteristics of the teaching physicians who made up the data set. The 4 groups might be described as patient centered (high patient and low resident), positively balanced (high in both areas), resident centered (low patient and high resident), and negatively balanced (low in both areas). The results are consistent with the findings reported by Arah et al, suggesting that the allocation of time between patients and trainees is an important covariate in determining satisfaction (3). While on the surface our results imply that efforts to maximize patient and trainee satisfaction are mutually exclusive, we feel that further studies are needed to clarify this relationship. Studies investigating how university clinicians balance teaching and patient care may yield strategic insights into how individuals can excel in both clinical and educational domains (ie, what do the 2 physicians in our balanced group do differently?). That there were relatively few positively balanced (n = 2) and so many negatively balanced (n = 10) neurologists is concerning. We can hypothesize that the negatively balanced neurologists might represent individuals who are struggling for some reason. While there is no statistically significant difference in our small sample, the observation that this subgroup appears more likely not to meet productivity targets further supports this idea. It makes sense that a faculty physician who has trouble meeting productivity targets might not expend as much effort on resident teaching or the patient experience.

With more development, the technique we have described might be used to identify potentially at-risk faculty members for development opportunities. It could also be used as supporting documentation for decisions on promotions and tenure as well as in setting performance bonus structures.

Patient satisfaction comprises the patient’s rational and emotional reactions to their health-care experience. Likewise, resident evaluations of attending physicians are based on both rational and emotional perceptions. As teaching physicians, we should take time to consider the emotional aspects of our mission to both our patients and our trainees. Like it or not, we are setting the example for our trainees. We wonder what would happen to ratings if an entire department concentrated its efforts either on patient centeredness or on resident training; there may be such departments available for study. We hypothesize that if resident education became a department’s main focus rather than patient care, resident ratings would be high and patient satisfaction ratings would be very low. On the other hand, we hypothesize that if all faculty in a department were to focus on patient centeredness, patient satisfaction would be higher while resident ratings would remain the same.

This technique deserves further testing to better understand how other covariates (eg, adult versus pediatric neurologist) may affect variation in satisfaction scores. The authors suggest exploring data over time within a program, comparing data among departments in the same specialty, and testing with much larger data sets from multiple specialties and multiple hospitals.

Author Biographies

Michael R Dobbs is a professor of neurology and associate chief medical officer at the University of Kentucky.

Jonathan H Smith is the adult neurology residency director at the University of Kentucky. He is a practicing neurologist and an assistant professor of neurology.

Appendix 1

CGCAHPS question domains

  1. Access to care

  2. Provider communication

  3. Test results

  4. Office staff

  5. Overall provider rating

Appendix 2

Resident of faculty evaluation questions

  1. This faculty member provided an appropriate level of graduated responsibility

  2. This faculty member was easily accessible to provide supervision when needed

  3. This faculty member effectively explained his/her clinical reasoning and decision making

  4. This faculty member effectively guided my development of clinical reasoning and decision making

  5. This faculty member modeled professional behavior

  6. Overall, this faculty member was an effective teacher

Footnotes

Authors’ Note: Dr Dobbs and Dr Smith contributed to study concept and design, acquisition of data, and analysis and interpretation.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

  1. Ramsbottom-Lucier T, Gillmore GM, Irby DM, Ramsey PG. Evaluation of clinical teaching by general internal medicine in out-patient and in-patient settings. Acad Med. 1994;69:152–4.
  2. McOwen KS, Bellini LM, Guerra CE, Shea JA. Evaluation of clinical faculty: gender and minority implications. Acad Med. 2007;82(10 Suppl):94–6.
  3. Arah OA, Heineman MJ, Lombarts KM. Factors influencing residents’ evaluations of clinical faculty member teaching qualities and role model status. Med Educ. 2012;46:381–9.
