Global Pediatric Health. 2019 Jan 24;6:2333794X18822996. doi: 10.1177/2333794X18822996

The Association Between Pediatric Faculty Factors and Resident Physician Ratings of Teaching Effectiveness

Nicholas M Potisek 1, Laura Page 2, Aditee Narayan 3, Kenya McNeal-Trice 4, Michael J Steiner 4
PMCID: PMC6348494  PMID: 30719494

Abstract

Background. Faculty factors not inherently related to teaching effectiveness can influence teaching ratings. No studies have focused on pediatric faculty, who possess unique differences from general medical faculty. Methods. We designed a retrospective observational study to compare faculty teaching ratings with measured factors across 3 academic pediatric institutions. Results. Our study included 196 faculty members. The majority (76%) of variation in teaching effectiveness ratings was not accounted for by any measured variable, but 24% was attributed to measurable factors. Increased resident exposure (sequential r2 = .10, P < .0001) was significantly associated with higher teaching effectiveness ratings. Variation between resident ratings of pediatric faculty teaching can be partially explained by measured factors not necessarily related to teaching effectiveness. Conclusions. The identification of faculty factors that significantly contribute to rating variation can enhance interpretation of these ratings.

Keywords: evaluation, teaching effectiveness

Introduction

Effective clinical teaching is difficult to assess because it is complex and multifactorial.1 Resident physician evaluations of faculty often serve as the primary measure of individual faculty teaching effectiveness. Departmental leaders use trainee evaluations of clinical educators to assess faculty teaching activities and clinical skills.2 Thus, academic promotion and success can be affected by trainee evaluations.3 Trainee evaluations have also been used to assess how changes to faculty roles improve clinical education.4 Therefore, identifying and understanding factors that influence faculty evaluations is important.

To be an effective clinical teacher, a faculty member must impart knowledge and skills, assess trainees’ abilities, and ensure patients receive quality care.5,6 Ideally, measurement of clinical teaching effectiveness would be based on Kirkpatrick’s 4-level model including not only learner satisfaction but also resident behavior and patient clinical outcomes.7 However, in academic institutions, multiple faculty members, advanced practitioners, nurses, staff, and other residents contribute to an individual resident’s education. Determining the incremental educational benefit contributed by an individual faculty member on resident knowledge, behavior, or patient outcomes is challenging. Thus, programs rely on tools that measure resident satisfaction to evaluate faculty clinical teaching effectiveness.8-10

Cognitive and noncognitive faculty attributes may influence residents’ perceptions of clinical teaching effectiveness.5,11,12 Studies in internal medicine or combined specialties have found that factors not inherently related to teaching effectiveness can also influence teaching scores, including type of rotation and age of faculty.13-19 However, these results may not apply to pediatrics, as several aspects of the field may uniquely affect teaching scores. First, pediatrics is one of a few specialties where more than half of academic faculty are female20 and an even larger proportion of pediatric residents are female.21 Compared with other specialties, pediatric residents may have different emotional intelligence, personality traits, and learning styles, which could influence factors associated with effective teaching.22-24 Distinct challenges to the pediatric learning environment must also be considered. For example, pediatric faculty must balance the need for progressive autonomy in trainee responsibility with the supervisory expectations of concerned parents. They must also teach trainees how to care for patients across a broad spectrum from neonates to young adults, each stage with its own required skills. Studying pediatric faculty, who differ in these ways from faculty in other medical fields, may allow us to understand how faculty factors are specifically associated with teaching effectiveness ratings in pediatrics.

The objectives of our study were to (1) identify modifiable and nonmodifiable faculty factors associated with pediatric resident evaluations of faculty teaching effectiveness and (2) determine the relative proportion of the teaching effectiveness score attributed to these factors. We hypothesized that factors not clearly associated with teaching effectiveness would explain an important amount of variation in teaching scores and that resident exposure would also be associated with the teaching effectiveness scores.

Methods

We performed a retrospective, observational study of faculty teaching effectiveness ratings across 3 academic pediatric departments from July 1, 2013, to June 30, 2014. Each of the children’s hospitals included has a “hospital within a hospital” structure, has 144 to 190 inpatient beds, admits 10 000 to 13 000 patients annually, and has its primary care clinics located outside of the hospital. After finishing a clinical rotation, resident physicians at each site complete an anonymous faculty evaluation based on their experience on the rotation. The evaluation asks the resident to rate the overall teaching effectiveness of the faculty members on a Likert-type scale of 1 to 5, with 5 being the most effective. Resident physicians are expected to complete evaluations of their primary faculty preceptors during the course of the rotation. The primary outcome of overall teaching effectiveness is similar in wording and intent at each institution, with no descriptive anchors. Completed faculty teaching effectiveness ratings were collected from all resident education experiences over the 1-year period, including block and longitudinal experiences. Faculty members included in the study were employed by the departments of pediatrics and had a minimum of 3 ratings completed. Individual teaching effectiveness ratings for a given faculty member were collected for the full year and then averaged to create a mean overall teaching rating for each faculty member. This mean teaching effectiveness score was our primary outcome variable.
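To make the construction of the primary outcome concrete, here is a minimal sketch in Python (pandas); the data frame and column names are hypothetical placeholders, not the authors’ actual data pipeline.

```python
import pandas as pd

# Hypothetical input: one row per completed evaluation, with a faculty
# identifier and the 1-to-5 overall teaching effectiveness rating.
evals = pd.DataFrame({
    "faculty_id": ["A", "A", "A", "B", "B"],
    "rating":     [5, 4, 5, 3, 4],
})

# Keep faculty with at least 3 ratings over the year, then average the
# individual ratings to form the mean overall teaching rating.
n_ratings = evals.groupby("faculty_id")["rating"].transform("size")
eligible = evals[n_ratings >= 3]
mean_rating = eligible.groupby("faculty_id")["rating"].mean()
print(mean_rating)  # faculty B is excluded (only 2 ratings)
```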

Faculty and Rotation Factors

Faculty member characteristics potentially associated with teaching effectiveness ratings were identified through a review of the current medical education literature.13-19 After determining the potential measurable factors, the authors selected faculty factors to include as independent variables: gender, decade of age, race/ethnicity, specialty, primary clinical setting, fellowship training, additional degrees, and any formally held educational administrative role (Table 1). Decade of age was used instead of academic rank because, in informal work with residents, the academic rank of faculty was often not known. Similarly, race, ethnicity, and gender were not self-identified but instead characterized as the residents’ perception of a faculty member’s race, ethnicity, and gender. A formal education administrative role was defined as holding the position of program director, associate program director, medical student pediatric clerkship director, fellowship director, or rotation director. In this study, a program director is someone who oversees the entire residency program, while associate program directors also have a significant advisory role for the residency program. A medical student pediatric clerkship director leads and oversees medical students on their pediatric rotation. Rotation directors oversee the monthly or longitudinal clinical experiences a resident physician must complete during pediatric training. Fellowship directors oversee individual pediatric subspecialty curricula (such as infectious disease or rheumatology) for pediatric fellows who have already completed residency training. Rotation evaluations are separate from faculty evaluations and are also distributed at the end of each resident rotation. Rotation evaluations over the same time period were collected, summed, and then divided by the number of evaluations to create an average annual rating of the clinical rotation. The average annual rating of the clinical rotation where the faculty member primarily taught was also included as a measured factor.

Table 1.

Overview of Measured Faculty Characteristics.

Gender: male, female
Decade of age (in years): 30-40, 41-50, 51-60, 61-70
Race/ethnicity: African American, Asian, Latino, white, other
Specialty: by division
Primary clinical setting: outpatient, inpatient, both
Fellowship training: yes, no
Additional degrees (eg, MPH, PhD): yes, no
Educational administrative role: program director, associate program director, clerkship director, fellowship director, rotation director
Overall level of resident exposure: clinical, mentorship, conference, scholarship, social

Additionally, each faculty member was assigned a score for “overall exposure to residents,” which was measured across 5 domains: clinical, mentorship, conference, scholarship, and social. Clinical exposure was defined as time providing direct patient care with residents. Mentorship was defined as actively counseling residents on career development. Conference was defined as time contributed to core resident conferences. Scholarship was defined as supervising residents in projects. Social exposure was defined as time spent participating in resident-related events that were not strictly educational, such as graduation ceremonies, welcome parties, book clubs, and resident retreats. At each institution, 3 physicians in the education office independently rated the exposure factors for each faculty member over the past 1 to 2 years. Members of the education office were chosen to assess resident exposure because of their familiarity with faculty members’ involvement with residents and their ability to compare across faculty. To ensure exposure ratings were interpreted consistently across institutions, instructions on rating resident exposure were developed and distributed to each education office (see the appendix). Each exposure variable was rated from “none to low exposure” to “highest exposure,” and the 5 variables were summed to create an overall exposure variable. Disagreements in ratings were resolved by group consensus.
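As a concrete illustration of the scoring, the sketch below sums the 5 domain ratings into the overall exposure variable; the 0-2 point levels follow the appendix rubric, while the function name and example values are hypothetical.

```python
# Each of the 5 exposure domains is rated 0 (none to low), 1 (medium),
# or 2 (high) by the education office panel, then summed to 0-10.
DOMAINS = ("clinical", "mentorship", "conference", "scholarship", "social")

def overall_exposure(ratings):
    """Sum the five 0-2 domain ratings into an overall 0-10 score."""
    for domain in DOMAINS:
        if ratings[domain] not in (0, 1, 2):
            raise ValueError(f"{domain} rating must be 0, 1, or 2")
    return sum(ratings[domain] for domain in DOMAINS)

# Hypothetical consensus ratings for one faculty member
print(overall_exposure({"clinical": 2, "mentorship": 1, "conference": 1,
                        "scholarship": 0, "social": 1}))  # 5
```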

Data Collection

Faculty characteristics were collected through biographical information provided by their respective pediatric and human resource departments or the electronic system used to collect rotation and faculty evaluations completed by resident physicians. Faculty members were de-identified before teaching effectiveness ratings and measurable factors were linked, so authors and education office personnel were masked to the overall outcome as factors were collected.

Data Analysis

Multivariable least squares regression analysis was used to measure associations between factors and faculty ratings and to partition the variation in faculty teaching effectiveness ratings among the measured factors. We decided a priori to include in the final model any variable with a statistically significant bivariable relationship, as well as level of faculty exposure, underlying rotation evaluation, age, race/ethnicity, primary clinical practice type, gender, fellowship training, and presence of an additional nonmedical degree, which we hypothesized would be associated with teaching effectiveness ratings by residents.
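The sequential r2 values reported in the Results can be derived from a Type I (sequential) sum-of-squares decomposition of the fitted least squares model. Below is a minimal sketch using statsmodels; the file and variable names are placeholders, not the authors’ actual code. Note that Type I attribution depends on the order in which terms enter the model.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical faculty-level data set: one row per faculty member with
# the mean teaching rating (outcome) and the measured factors.
df = pd.read_csv("faculty_ratings.csv")  # placeholder file name

model = smf.ols(
    "mean_teaching_rating ~ exposure_score + rotation_rating + admin_role "
    "+ gender + age_decade + clinical_setting + fellowship + extra_degree",
    data=df,
).fit()

# Type I (sequential) ANOVA: each term's incremental sum of squares given
# the terms entered before it; sequential r2 = term SS / total SS.
anova = sm.stats.anova_lm(model, typ=1)
total_ss = anova["sum_sq"].sum()
print(anova.assign(seq_r2=anova["sum_sq"] / total_ss))
```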

Exemption From Ethics Approval and Waiver of Informed Consent Granted

The institutional review board of each participating medical center waived the need for ethics approval and the need to obtain consent for data collection, analysis, and publication of this retrospectively obtained and anonymized data for this noninterventional study (Wake Forest Approval No. 00030466, UNC-Chapel Hill Approval No. 14-2706, and Duke Approval No. Pro00058806).

Results

A total of 196 of 321 (61%) pediatrics faculty members met our criteria for inclusion in the study. Inclusion rates were similar across sites; by site, 31, 70, and 95 faculty members were included. The number of teaching evaluations collected per faculty member ranged from 3 to 65. Statistically significant differences among institutions were noted in the proportion of faculty who were generalists (primary care or hospitalist faculty) compared with specialists, the mean rotation rating for the clinical rotations where rated faculty worked, and the average faculty score for degree of resident physician exposure (Table 2). Overall teaching effectiveness ratings ranged from 3.47 to 5.0 with a mean value of 4.5 (standard error 0.01, median 4.57). Two institutions had similar average teaching effectiveness ratings of 4.63 and 4.64, while 1 institution had an average teaching effectiveness rating of 4.36 (P < .001). In bivariable analysis, increased resident physician exposure, working on highly rated rotations, and not having a formal educational administrative title were associated with higher faculty teaching effectiveness scores. These items, along with the other characteristics chosen a priori, were used for multivariable analysis.

Table 2.

Characteristics of Included Faculty From 3 Institutions.

Faculty Characteristics | Overall | Institution A | Institution B | Institution C | P (Difference Between Sites)
Gender (% female) | 53% | 53% | 71% | 46% | .06
Race/ethnicity (% white) | 79% | 83% | 81% | 76% | .8
Percent of faculty <40 years of age | 29% | 24% | 39% | 29% | .34
Percent of faculty >60 years of age | 14% | 19% | 13% | 10% | .31
Practice type (% generalist) | 29% | 17% | 45% | 36% | .009
Mean rating of rotation where faculty primarily taught | | 4.4 | 4.4 | 4.3 | .005
Mean rating of overall exposure to residents (score 0 to 10) | 4.6 (median 4, mode 2) | 3.9 | 7.2 | 4.1 | <.001
Mean overall teaching effectiveness rating of faculty by residents | 4.5 (median 4.57) | 4.6 | 4.6 | 4.4 | <.001

Summary Multivariable Analysis and Teaching Score Variation

One or multiple variables that were not directly measured accounted for 76% of the variation in teaching effectiveness in the multivariable analysis, representing the majority of variation in teaching effectiveness ratings. Twenty-four percent of the variation was attributable to measured factors (Figure 1). Three of these factors had statistically significant associations with teaching effectiveness ratings even after controlling for other variables: resident exposure score, rotation evaluation, and formal education administrative title. Higher resident exposure scores were associated with increased teaching effectiveness ratings (moderate to high level of association, sequential r2 = .10, P < .0001). The 5 components of our overall exposure variable were clinical exposure, mentorship, conference attendance, scholarship, and social exposure. When analyzed individually, only mentorship was significantly associated with teaching ratings (r2 = .16, P < .0001). The second strongest association was rotation evaluation; higher rotation evaluations were associated with higher teaching effectiveness ratings (moderate level of association, sequential r2 = .095, P < .0001). The third strongest association was negative: having a formal education administrative role was associated with lower teaching effectiveness ratings (small level of association, sequential r2 = .03, P < .0001).

Figure 1. Variation in teaching effectiveness scores: the percentage of variation in teaching scores attributed to measured faculty and rotation characteristics.

The remaining characteristics of gender, decade of age, primary clinical setting, fellowship training, and additional degrees were not significantly associated with teaching ratings. The number of faculty from ethnic or racial minority groups and subspecialty groups was too small to accurately evaluate the variation in teaching effectiveness ratings attributable to these factors.

Discussion

This study is the first to evaluate the association of faculty characteristics with teaching effectiveness ratings in pediatrics, and it demonstrated that 3 factors not necessarily related to teaching abilities were associated with clinical teaching effectiveness ratings. Mentoring residents, rotation ratings, and holding an educational administrative title represent an important amount of the teaching rating variation. Seventy-six percent of the variation in teaching effectiveness scores was not explained by our measured factors, which is not surprising. This unaccounted-for variation likely includes random measurement error associated with Likert-type scales,25 unmeasured faculty, resident, or rotation factors, and true teaching effectiveness.

Consistent with previous studies, we found higher teaching ratings were associated with increased resident exposure and working on a highly rated clinical rotation.14,18,19 While previous studies14,19 relied on overall faculty involvement with students or residents, our study is the first to define 5 possible domains of faculty exposure. We initially combined these domains into an overall faculty exposure factor, but subsequent analysis showed mentorship had a stronger association with teaching effectiveness ratings than the other domains of clinical, conference, scholarly, and social exposure. Further defining and perhaps differentially weighting the various areas of faculty exposure may enhance our understanding of how exposure affects teaching effectiveness in future studies.

Unlike prior studies,13,14,18,19 we did not find a significant association between teaching effectiveness ratings and faculty age, gender, or clinical setting. This difference may be related to factors specific to the field of pediatrics. In general, male faculty may receive higher pay, more frequent promotion, and better benefits than female faculty.26-28 Previous studies on teaching effectiveness have demonstrated mixed results on the impact of faculty gender.13,29 Our study suggests that in pediatrics, gender bias may not be an important factor in teaching effectiveness ratings.13 While the reason is unclear, we suspect this difference may be due to the higher percentages of female faculty and residents in pediatrics compared with most other specialties. It would be interesting to determine whether other fields with relatively larger numbers of women, such as obstetrics and gynecology, yield similar results.

Our study is also the first to assess and find a negative association between teaching effectiveness ratings and holding an educational administrative role. It is unclear why educational administrators received lower teaching effectiveness ratings in our analysis. Educational administrators may be held to higher standards by residents. Faculty holding these titles may also be more likely to provide constructive feedback to residents, given their experience in milestone assignments, promotion, and remediation. Although providing constructive feedback is consistent with best practices in teaching, it may lead to decreased satisfaction resulting in lower faculty evaluations.30 Alternatively, education administrators may truly not be as effective teachers as other faculty members. This result should be viewed cautiously, as only a small proportion of faculty qualified as education administrators, but warrants further investigation.

Our results suggest thoughtful consideration is needed when using residents’ evaluations of faculty for academic promotion or incentive payments. For instance, comparing faculty members’ teaching performances within their division holds the rotation rating constant and may provide a better estimate of true teaching effectiveness. Our results also suggest faculty members may be able to modify their scores without actually improving their clinical teaching effectiveness. To maximize their ratings, faculty could potentially increase their mentorship exposure to residents, preferentially work on highly rated clinical rotations, and avoid formal educational administrative roles. A practical application of these results can be demonstrated by applying the significant faculty factors from the multivariable model. For example, an average faculty member with high levels of resident mentorship exposure, a highly rated rotation, and no formal educational administrative title will have a mean teaching effectiveness score of 4.86. Comparatively, an average faculty member with low levels of resident mentorship exposure, but otherwise similar highly rated rotation and no formal educational administrative title will have a mean teaching effectiveness score of 4.45. At the low rating extreme, an average faculty member with low levels of resident exposure, a poorly rated rotation, and a formal educational administrative title will have a mean teaching effectiveness score of 4.01.
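The three scenario scores combine additively, as in any linear model. The sketch below reproduces them with placeholder coefficients; the paper does not report the fitted coefficients, so the baseline of 4.20 and the split between the rotation and administrative-title effects are assumptions chosen only to match the scenario means above.

```python
# Illustrative only: hypothetical coefficients, not the fitted model.
BASELINE = 4.20         # assumed: low mentorship, low-rated rotation, no title
MENTORSHIP_HIGH = 0.41  # implied by 4.86 - 4.45
ROTATION_HIGH = 0.25    # assumed share of the remaining difference
ADMIN_TITLE = -0.19     # assumed share of the remaining difference

def predicted_score(high_mentorship, high_rotation, admin_title):
    score = BASELINE
    score += MENTORSHIP_HIGH if high_mentorship else 0.0
    score += ROTATION_HIGH if high_rotation else 0.0
    score += ADMIN_TITLE if admin_title else 0.0
    return round(score, 2)

print(predicted_score(True, True, False))   # 4.86
print(predicted_score(False, True, False))  # 4.45
print(predicted_score(False, False, True))  # 4.01
```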

Because our results demonstrate association rather than causality, it is unclear if these changes truly affect a faculty member’s teaching effectiveness. It is possible that effective clinical teachers are more likely to mentor residents. Similarly, it is plausible that clinical rotation ratings may in part reflect the teaching effectiveness of faculty members. Conversely, popular rotations may be appreciated because of factors such as favorable schedules, locations, or patient populations, and mentorship does not necessarily translate to more time spent teaching or better teaching during a discrete rotation. Outstanding mentors may be effective in identifying and supporting residents through research projects or career discussions but may not have expertise in imparting clinical skills or providing useful feedback. More work is needed to differentiate which of these possibilities drives the associations of clinical rotation and mentorship with teaching effectiveness.

This study has several limitations. First, we measured associations between factors and teaching ratings, and a causal relationship cannot be inferred from our results. Second, we were unable to determine the impact of race and ethnicity on faculty teaching ratings given the relative homogeneity of the faculty included. Third, the factors contributing to the majority of variation remain unknown. Further studies are needed to define the components of the unmeasured variation and to determine why pediatric residents’ ratings vary with faculty exposure levels, clinical rotation scores, and educational administrative titles. Fourth, among the sites, there were variations in faculty and rotation evaluation forms and in the wording of the faculty rating question, which may alter results. Additionally, 39% of faculty had fewer than 3 resident evaluations during the study period and were excluded, which may have biased our results. The excluded faculty may truly have less clinical exposure to residents, but it is also possible that resident biases toward perceived effective or ineffective teachers drive completion of faculty evaluations. Finally, the faculty exposure scores were determined by expert education administration panels at each site and may not reflect all exposures for each faculty member. The panels were composed of individuals involved in resident promotion and evaluation who attended most resident educational conferences and social events and also approved resident funds for poster creation and travel for scholarly activity. Thus, in general, these panels were knowledgeable regarding faculty exposure to residents across each of the 5 domains. However, we recognize that exposure may not always be public; for instance, mentoring can be an informal process and may have been missed in some cases, which is a limitation of the study. Still, we feel the expert panels provided a more objective assessment of exposure than self-reports from faculty, who may overestimate or underestimate their exposure without efforts at standardization.

One institution did have a higher level of overall exposure to residents compared with the other 2 sites. This difference could reflect variation in resident rotations and schedules. For example, some sites assign residents to a clinic with the same faculty preceptor each week, while at other sites, residents work with different preceptors. Similarly, some subspecialists have their own inpatient teams, while others serve primarily as consultants. However, we cannot exclude that differences in culture among the institutions account for more or less resident-faculty exposure. Importantly, site was controlled for in the analysis, and there was no similar difference in mean teaching effectiveness rating at that site.

Our study is novel in that it explored how faculty characteristics may affect their teaching effectiveness ratings specifically in pediatrics among 3 institutions. It found that effects of certain factors, such as gender, might not be generalizable across all specialties. Additionally, it is the first study to show a negative association between educational administrative titles and faculty teaching effectiveness scores. Importantly, our study highlights the limitations of global teaching effectiveness scores. Because definitions are unclear, these scores may have intra- and inter-resident variability. Additionally, global scores provide no specific or concrete feedback to allow faculty to modify their behaviors for improvement. We feel our findings should be further explored to help faculty and departmental leadership understand the determinants of teaching effectiveness scores.

Conclusion

Although the majority of teaching effectiveness rating variation among pediatric faculty was unmeasured, we found that clinical rotation, mentorship, and holding an educational administrative title were associated with faculty teaching scores. Interestingly, pediatric faculty characteristics such as age, gender, and clinical setting did not appear to influence teaching effectiveness ratings. Proper interpretation of residents’ ratings of faculty clinical teaching effectiveness is necessary because these ratings may be affected by measured factors not necessarily related to teaching effectiveness. Further exploration of the unmeasured variation in our study may enhance our understanding of whether resident teaching ratings validly measure actual teaching effectiveness and improve our ability to interpret such ratings.

Acknowledgments

The authors would like to thank Christopher Weisen of The Odum Institute at University of North Carolina at Chapel Hill for statistical support and analysis.

Appendix

Instructions to Rate Faculty Exposure to Resident Physicians

Working in a group that may include the chief residents, program director, or associate program director (minimum 3 individuals), assign “exposure scores” for each faculty member. You should reach a group consensus when assigning scores. Thus, each faculty member will have 5 scores corresponding to each of the exposure categories (clinical, mentorship, conference, scholarly work, and social). Please enter the scores into the provided spreadsheet, which lists all faculty members from the previous academic year (July 2013 to June 2014). There are 5 columns to the right of each faculty member’s name to enter the various exposure scores.

Note that scores should be based on your group’s perception of a given faculty member over the last 1 to 2 years. Do not review actual conference attendance sheets, scholarly work submitted, clinical time interacting with residents, and so on. If you are unsure of how to score a particular category for a faculty member, please provide your best guess. Please use only whole numbers. Faculty members cannot evaluate themselves.

  • 1. Clinical exposure to resident physicians (inpatient or outpatient settings). Choose the highest category the faculty member meets:

Low (0 points): rarely works with residents in clinical settings

  • 0-4 weeks per year of inpatient service AND/OR

  • 3 or fewer clinics/ED shifts per month with residents

Medium (1 point): occasionally works with residents in clinical settings

  • 5-8 weeks per year of inpatient service (must have primary patients on team, not simply consulting service) AND/OR

  • 1 clinic/ED shift per week with residents

High (2 points): frequently works with residents in clinical settings

  • 9 or more weeks per year of inpatient service (must have primary patients on team, not simply consulting service) AND/OR

  • 2 or more clinics/ED shifts per week with residents

  • 2. Mentorship exposure to resident physicians (actively counsels residents on career development or provides professional guidance; may be formally through the residency program or informally; resident advisor also qualifies)

Low (0 points): rarely mentors resident physicians

  • During most years, 0 residents identify this faculty member as a mentor based on the definition above

Medium (1 point): occasionally mentors resident physicians

  • During most years, 1-2 residents identify this faculty member as a mentor based on the definition above

High (2 points): frequently mentors resident physicians

  • During most years, 3 or more residents identify this faculty member as a mentor based on the definition above

  • 3. Conference exposure to resident physicians. Each program determines the scope of what constitutes resident core conferences (examples include morning report, noon conference, M&M). Choose the highest category the faculty member meets:

Low (0 points): rarely attends conferences for residents

  • Attends morning report or M&M less than once per month on average AND/OR

  • Gives 1 or fewer core conferences per year

Medium (1 point): occasionally attends conferences for residents

  • Attends morning report or M&M at least once per month on average AND/OR

  • Gives 2 core conferences per year

High (2 points): frequently attends conferences for residents

  • Attends morning report or M&M at least once per week on average AND/OR

  • Gives 3 or more core conferences per year

  • 4. Scholarly work exposure to resident physicians (partners with or recruits residents for scholarly work such as advocacy, research, or QI projects; could include national/local presentations, posters, book chapters, clinical research, case reports, CATCH grants, etc)

Low (0 points): rarely partners with residents for scholarly activities

  • During most years, 0 scholarly work activities involving a resident

Medium (1 point): occasionally partners with residents for scholarly activities

  • During most years, 1-3 scholarly work activities involving a resident

High (2 points): frequently partners with residents for scholarly activities

  • During most years, 4 or more scholarly work activities involving a resident

  • 5. Social exposure to resident physicians (time outside the hospital—may include retreats, orientation events, graduation events, residency program sponsored events, or less formal gatherings)

Low (0 points): rarely has social interaction outside the hospital with residents

  • During most years, participates in 0 social outings with residents

Medium (1 point): occasionally has social interaction outside the hospital with residents

  • During most years, participates in 1-2 social outings with residents

High (2 points): frequently has social interaction outside the hospital with residents

  • During most years, participates in 3 or more social outings with residents
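As a sketch of how this rubric could be applied programmatically, the functions below map raw activity counts to the 0-2 point levels for two of the domains; the thresholds follow the rubric above (choose the highest category met), while the function and parameter names are illustrative.

```python
def clinical_points(inpatient_weeks_per_year, resident_clinics_per_week):
    """Clinical exposure: highest category the faculty member meets."""
    if inpatient_weeks_per_year >= 9 or resident_clinics_per_week >= 2:
        return 2  # High: frequently works with residents
    if inpatient_weeks_per_year >= 5 or resident_clinics_per_week >= 1:
        return 1  # Medium: occasionally works with residents
    return 0      # Low: rarely works with residents

def mentorship_points(mentees_in_typical_year):
    """Mentorship exposure: residents identifying the faculty member as a mentor."""
    if mentees_in_typical_year >= 3:
        return 2  # High
    if mentees_in_typical_year >= 1:
        return 1  # Medium
    return 0      # Low

# Hypothetical faculty member: 10 inpatient weeks, no weekly resident
# clinic, 2 resident mentees in a typical year.
print(clinical_points(10, 0) + mentorship_points(2))  # 2 + 1 = 3
```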

Footnotes

Author Contributions: NMP and MJS: Contributed to conceptualization of idea; contributed to data collection; worked with statistician and are the primary writers.

LP, AN, and KMNT: Contributed to conceptualization of idea; contributed to data collection; contributed to writing of paper.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

  • 1. Sutkin G, Wagner E, Harris I, Schiffer R. What makes a good clinical teacher in medicine? A review of the literature. Acad Med. 2008;83:452-466.
  • 2. Atasoylu AA, Wright SM, Beasley BW, et al. Promotion criteria for clinician-educators. J Gen Intern Med. 2003;18:711-716.
  • 3. Curran DS, Stalburg CM, Xu X, Dewald SR, Quint EH. Effect of resident evaluations of obstetrics and gynecology faculty on promotion. J Grad Med Educ. 2013;5:620-624.
  • 4. Barone MA, Dudas RA, Stewart RW, McMillan JA, Dover GJ, Serwint JR. Improving teaching on an inpatient pediatrics service: a retrospective analysis of a program change. BMC Med Educ. 2012;12:92.
  • 5. Bannister SL, Raszka WV Jr, Maloney CG. What makes a great clinical teacher in pediatrics? Lessons learned from the literature. Pediatrics. 2010;125:863-865.
  • 6. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355:2217-2225.
  • 7. Kirkpatrick DL. Evaluation of training. In: Craig RL, Bittel LR, eds. Training and Development Handbook. New York, NY: McGraw-Hill; 1967:87-112.
  • 8. Stalmeijer RE, Dolmans DH, Wolfhagen IH, Muijtjens AM, Scherpbier AJ. The development of an instrument for evaluating clinical teachers: involving stakeholders to determine content validity. Med Teach. 2008;30:e272-e277.
  • 9. Stalmeijer RE, Dolmans DH, Wolfhagen IH, Muijtjens AM, Scherpbier AJ. The Maastricht Clinical Teaching Questionnaire (MCTQ) as a valid and reliable instrument for the evaluation of clinical teachers. Acad Med. 2010;85:1732-1738.
  • 10. Williams BC, Litzelman DK, Babbott SF, Lubitz RM, Hofer TP. Validation of a global measure of faculty’s clinical teaching performance. Acad Med. 2002;77:177-180.
  • 11. Buchel TL, Edwards FD. Characteristics of effective clinical teachers. Fam Med. 2005;37:30-35.
  • 12. Irby DM. Clinical teacher effectiveness in medicine. J Med Educ. 1978;53:808-815.
  • 13. Arah OA, Heineman MJ, Lombarts KM. Factors influencing residents’ evaluations of clinical faculty member teaching qualities and role model status. Med Educ. 2012;46:381-389.
  • 14. Irby DM, Gillmore GM, Ramsey PG. Factors affecting ratings of clinical teachers by medical students and residents. J Med Educ. 1987;62:1-7.
  • 15. Irby DM, Ramsey PG, Gillmore GM, Schaad D. Characteristics of effective clinical teachers of ambulatory care medicine. Acad Med. 1991;66:54-55.
  • 16. Lombarts KM, Heineman MJ, Scherpbier AJ, Arah OA. Effect of the learning climate of residency programs on faculty’s teaching performance as evaluated by residents. PLoS One. 2014;9:e86512.
  • 17. McOwen KS, Bellini LM, Guerra CE, Shea JA. Evaluation of clinical faculty: gender and minority implications. Acad Med. 2007;82(10 suppl):S94-S96.
  • 18. Myers KA. Evaluating clinical teachers: does the learning environment matter? Acad Med. 2001;76:286.
  • 19. Ramsey PG, Gillmore GM, Irby DM. Evaluating clinical teaching in the medicine clerkship: relationship of instructor experience and training setting to ratings of teaching effectiveness. J Gen Intern Med. 1988;3:351-355.
  • 20. Association of American Medical Colleges. Core entrustable professional activities for entering residency (updated). https://www.mededportal.org/icollaborative/resource/887. Accessed March 20, 2016.
  • 21. Lautenberger DM, Dandar VM, Raezer CL, Sloane RA. The State of Women in Academic Medicine: The Pipeline and Pathways to Leadership. Washington, DC: Association of American Medical Colleges; 2013-2014. https://members.aamc.org/eweb/upload/The%20State%20of%20Women%20in%20Academic%20Medicine%202013-2014%20FINAL.pdf
  • 22. Markert RJ, Rodenhauser P, El-Baghdadi MM, Juskaite K, Hillel AT, Maron BA. Personality as a prognostic factor for specialty choice: a prospective study of 4 medical school classes. Medscape J Med. 2008;10:49.
  • 23. Macneily AE, Alden L, Webber E, Afshar K. The surgical personality: comparisons between urologists, non-urologists and non-surgeons. Can Urol Assoc J. 2011;5:182-185.
  • 24. McKinley SK, Petrusa ER, Fiedeldey-Van Dijk C, et al. A multi-institutional study of the emotional intelligence of resident physicians. Am J Surg. 2015;209:26-33.
  • 25. Stangor C. Research Methods for the Behavioral Sciences. 5th ed. Belmont, CA: Cengage Learning; 2014.
  • 26. Ash AS, Carr PL, Goldstein R, Friedman RH. Compensation and advancement of women in academic medicine: is there equity? Ann Intern Med. 2004;141:205-212.
  • 27. Wright AL, Schwindt LA, Bassford TL, et al. Gender differences in academic advancement: patterns, causes, and potential solutions in one US College of Medicine. Acad Med. 2003;78:500-508.
  • 28. Rotbart HA, McMillen D, Taussig H, Daniels SR. Assessing gender equity in a large academic department of pediatrics. Acad Med. 2012;87:98-104.
  • 29. Fluit CR, Feskens R, Bolhuis S, Grol R, Wensing M, Laan R. Understanding resident ratings of teaching in the workplace: a multi-centre study. Adv Health Sci Educ. 2015;20:691-707.
  • 30. Boehler ML, Rogers DA, Schwind CJ, et al. An investigation of medical student reactions to feedback: a randomised controlled trial. Med Educ. 2006;40:746-749.
