Abstract
Physician faculty typically learn teaching skills informally while juggling competing professional obligations. One proven but underutilized technique for improving teaching skills is peer observation with feedback. We aimed to understand the benefits and challenges of a physician faculty development program based on peer observation of teaching and to develop best-practice recommendations for future program development. The authors developed a peer observation-based physician faculty development program from 2015 to 2017. Two interviewers conducted and analyzed qualitative interviews with 13 faculty participants and four non-participants, using content analysis in NVivo to identify themes and subthemes. Participant-identified program benefits included conveyed institutional support for teaching, the opportunity for peer observation with direct and timely feedback, the opportunity for community building, and overall program feasibility. Program challenges included competing scheduling demands, variability in feedback quality, and difficulty maintaining engagement for the program duration. Potential areas for improvement included participation incentives, external faculty involvement, assistance with program logistics and administration, and greater consistency in the feedback experience. While peer observation is a valued approach to developing physician faculty teaching skills, competing demands on physicians may still limit program effectiveness. Program sustainability depends on optimizing feedback quality, boosting motivation for participation, and providing administrative support.
Keywords: peer observation, faculty development, feedback, community of educators
Introduction
Faculty “development” remains a broad term in relation to physician professional development. Faculty must be simultaneously effective clinicians, scholars, and educators,1 and faculty development programs have grown over the years to cover the range of roles faculty are expected to fill. Training in medical education typically takes the form of formal workshops or traditional classroom instruction.2 Clinical faculty are expected to teach in numerous settings but receive minimal formal “hands-on” training in education. Many clinician educators feel unprepared for teaching and do not feel they have the required expertise.3 Much of what is learned is learned “on the job” through trial and error, potentially at the risk of learners and faculty alike. Additionally, when formal faculty development programs are offered, faculty struggle to find institutional support, including the time or resources to attend.4
Direct observation with feedback, where feedback is typically provided by the learners themselves, is one powerful way for educators to improve their skills. Critiques of this approach include that learner evaluations of teaching may lack validity across settings,5 perpetuate biases regarding faculty gender,6 and be less useful for formative feedback to educators.7 Evaluations completed by learners may also lack specific feedback and can be delayed in reaching faculty. Peer observation of teaching, by contrast, can provide practical feedback to faculty in real time, with benefits to both the observer and the observed, including increased confidence in teaching and providing feedback, self-awareness of teaching practices, and a sense of community in education with peers.8–14 These benefits have been demonstrated in medicine8–14 and other health professions fields.15,16 Furthermore, long-established adult learning principles such as self-regulation,17 contiguous feedback,18 and grounding in day-to-day work can be integrated into peer observation programs, but the use of both established teaching theory and peer observation feedback continues to be limited by competing demands on faculty in academic medicine. While some have explored the acceptability of such professional development programs,19 most studies have involved small numbers of participants,20 and issues with participation21 and retention22 have been reported. A previous study of hospitalists showed that 40% of faculty chose not to participate in a similar faculty development program, and participation rates dropped over the course of the program.10 The long-term sustainability of, and rationale for participation in, peer observation programs remain unknown, and further evaluation is needed.
We developed a robust peer observation program for faculty in the Division of Gerontology, Geriatrics, and Palliative Care at the University of Alabama at Birmingham School of Medicine (UAB). While the program was initially met with great enthusiasm, participation waned over its 18-month run, and it was eventually discontinued due to lack of participation, mirroring previously reported difficulties with the long-term sustainability of such programs.19–21 This study therefore aimed to understand and explore the benefits and challenges of a faculty development program based on peer observation of teaching. We use the lessons learned to make recommendations for best practices in future program development.
Materials and Methods
Study Design
This program evaluation study used qualitative semi-structured interviews to explore the benefits and challenges of a faculty development program based on peer observation, called the Teaching Academy.
Participants
The study was conducted at UAB, an 1100-bed academic medical center. Participants were recruited from the Division of Gerontology, Geriatrics, and Palliative Care during the 2015–2017 academic years. All division clinical faculty, composed of geriatricians and palliative medicine physicians, were invited to voluntarily participate in a faculty development program called the Teaching Academy. Prior participants were recruited and consented to discuss their experience with the program. Of the 28 clinical faculty in the division, 15 (53%) completed at least one cycle of the peer observation program: seven (47%) from palliative medicine, seven (47%) from geriatrics, and one (7%) who practiced both specialties. Thirteen of the 15 program participants completed interviews for this study (87% response rate).
Teaching Academy Program
The program began with a kickoff session in which the program leaders (C.H. and M.D.B.) covered the qualities of effective teachers,23,24 program logistics, and feedback. Faculty who enrolled were paired across subspecialty and academic rank to increase exposure to varied teaching styles and experience. Each pair was directed to conduct a set of reciprocal teaching observations in a setting of their choosing (i.e., bedside teaching, small group discussion, or didactic teaching) during the following three months, then a second set of observations in months four and five. Throughout the six-month cycle, participants were encouraged to communicate with each other via postings on BaseCamp, an online platform for team collaboration. At the end of the cycle, participants came together for a debriefing session during which they reflected on lessons learned.
Participants provided feedback to one another using a standardized peer observation form based on the educational categories of the Stanford Faculty Development Program,25,26 a structure that has been used in prior peer observation programs.10,27 The form included behavioral anchors for teaching skills, with an overall rating and space for comments in each category (learning climate, control of session, communication of goals, promotion of understanding and retention, evaluation, feedback, and promotion of self-directed learning). In addition, the form prompted faculty to identify specific skills to focus on in each observation.
Analysis
Two investigators (M.S. and B.H.) conducted semi-structured interviews with participants to encourage honest and open answers from the interviewees (interview guide presented in Table 1). Interviews were conducted in person or by phone, digitally audio-recorded, and transcribed. Faculty within the division who chose not to participate in the program were also recruited to further explore potential barriers to participation. Transcripts were coded by M.S. and B.H. using content analysis in NVivo 12 software.29 Inductive line-by-line coding guided by the research questions (benefits, challenges, areas for improvement) was conducted, and a codebook was generated after the two coders reached consensus on codes. Codes were further organized into themes and subthemes as they emerged from the data. Any coding conflicts not resolved by consensus between the two coders were brought before the other investigators for discussion. Data trustworthiness was supported by the use of two coders who reviewed and reached consensus on the qualifying themes, prolonged engagement with the data, peer debriefing, and purposive sampling of non-participants to further understand faculty reasons for non-participation. The UAB IRB approved the study protocol.
Table 1.
Interview Guide Categories and Questions
I. Overall Awareness and Level of Participation
1. Were you aware of the Teaching Academy, and if so, how did you find out about it? There were three cycles, beginning in May 2015, January 2016, and July 2016. Did you attend one or more of these cycles? If so, which did you attend?
2. Tell me about your level of participation in the Teaching Academy.

II. Experience in the Academy
3. Kickoff: What did you think about the general kickoff session? Tell me about the topics that were covered. What was helpful? Not helpful? Missing? At the end of the kickoff you were matched with a partner. What did you think about having your partner chosen for you? How was the match between you and your partner?
4. Observations: After the kickoff you were charged with completing two sets of observations; in each set you were to observe your partner teaching and then have them observe you teaching. How many of these did you do? (a) How did you decide where to be observed, and how did you decide on that setting? What was it like for you to be observed and to observe others? (b) What were the challenges of the observation component of the Teaching Academy? What did you like about the observations?
5. Providing feedback: Shortly after the observation, you were to provide feedback to each other. How was it to provide feedback? How was it to receive feedback? You were provided with a feedback form. How useful was the feedback form?
6. Debriefing: At the end of the six-month cycle there was an in-person group debriefing celebration breakfast. Were you able to attend that session? What was your experience of that session? What was helpful? Not helpful?

III. Academy Logistics
7. You were asked to schedule your own observations. How did you feel about having to do that versus having others schedule this for you?
8. How was the communication with other members of the Teaching Academy using the BaseCamp site?
9. There was a request for planned interaction at the midpoint of the cycle; a journal club was planned. Were you able to attend, and what was that like?
10. How was the program leadership? Can you describe the optimal leadership for this type of program?

IV. Overall Evaluation
11. If you were to describe this program to a fellow colleague, what would you tell them? What were some of the aspects that you liked, and what didn't you like? How engaged did you feel with the program? If you were to redo this academy, what would you change? What would you keep the same?
Results
During the Teaching Academy program, most faculty participants elected to receive feedback centered on observation of small group or bedside teaching. Table 2 describes characteristics of the division faculty, the qualitative study participants, and the participants who never completed the Teaching Academy program. Only clinicians within the division could participate in the Teaching Academy, as the program was designed to facilitate giving and receiving feedback around their teaching opportunities. Additionally, four faculty members who did not participate in the program completed brief interviews describing their reasons, and two more provided their reasons by email.
Table 2.
Characteristics of Participants in a Qualitative Study of a Peer Observation of Teaching Program, 2015–16
| Characteristic | Total Division Faculty, n (%) | Study Participants Completing at Least One Cycle of the Teaching Academy, n (%) | Study Participants Who Never Completed the Teaching Academy, n (%) |
|---|---|---|---|
| Total | 28 | 13 | 6 |
| Sex | | | |
| Male | 9 (32) | 6 (46) | 1 (17) |
| Female | 19 (68) | 7 (54) | 5 (83) |
| Specialty | | | |
| Geriatrics | 14 (50) | 6 (46) | 4 (67) |
| Palliative medicine | 13 (46) | 6 (46) | 2 (33) |
| Both | 1 (4) | 1 (8) | 0 (0) |
| Years in specialty | | | |
| 0–5 | 11 (39) | 7 (54) | 0 (0) |
| 6–15 | 10 (36) | 4 (31) | 3 (50) |
| >15 | 7 (25) | 2 (15) | 3 (50) |
| Rank | | | |
| Assistant professor | 17 (61) | 7 (54) | 2 (33) |
| Associate professor | 9 (32) | 5 (38) | 3 (50) |
| Professor | 2 (7) | 1 (8) | 1 (17) |
| Pairings | | | |
| Same rank | | 4 (33) | |
| Differing ranks | | 8 (67) | |
Over half of division faculty participated in the Teaching Academy (15 of 28; 53%). Of the 13 faculty members who completed a qualitative interview, slightly more were female than male (54% vs. 46%), and the geriatrics and palliative medicine specialties were equally represented. Program participants reported that their experience was positive overall and that they benefited from participation. Table 3 summarizes the key themes and sub-themes gleaned from the post-experience qualitative interviews. Themes emerged around the benefits of the program, its challenges, and ways to improve future iterations.
Table 3.
Major Themes and Sub-themes

Program Benefits
1. Conveyed institutional support for teaching
2. Peer observation with direct and timely feedback is valued by faculty
3. Opportunity for community-building with colleagues
4. Feasible due to balance of structure and flexibility

Program Challenges
1. Scheduling was challenging given the competing demands for faculty time
2. Variability in the feedback experience
3. Maintaining participant engagement

Recommendations
1. Increase incentives for participation
2. Improve accountability through involvement of external faculty
3. Assist with program logistics
4. Improve quality of the feedback experience
Program Benefits
One overarching benefit cited was that simply offering the program sent a powerful message about the importance of teaching. Many participants mentioned their perception that teaching can be undervalued in the academic medical center and that this program was an important and encouraging signal that their efforts to teach well were valued by division leadership. One participant noted:
I liked just the focus on teaching. We don’t necessarily do that very often. We have learners with us all the time, but all of us are expected to be teachers. We have med students. We have nurses. We have residents, fellows with us all the time. To have a very intentional—not only should you teach them, we want you to be teaching at a certain quality and we’re gonna help you get there.
Second, most participants discussed the fact that direct observation and feedback on teaching is rarely offered to faculty but incredibly valuable. They felt it improved their self-awareness and confidence in their teaching skills while also allowing them to customize their learning by addressing their specific questions or concerns about teaching. One participant noted about the observation and feedback:
Well, it helped me actually have more confidence about my bedside teaching, in particular. I was always pretty confident about my classroom teaching, but I wasn’t sure that …anything I did on the wards was right. It was very helpful to hear what he had to say about that, and that he could see things I was doing what he thought [was] useful and helpful.
Another reported gaining “better self-awareness of some of my areas of opportunity for teaching, even if it’s tone of—well, subtle things like tone of voice or pace of talking, ways that I can frame things or tips on—one session that he observed for a didactic session that I do on an annual basis. I got some suggestions on how to make that maybe more interactive.”
A third, unexpected benefit was the opportunity for community-building within the division. Faculty reported enjoying getting to know colleagues they rarely worked with and felt they built considerable trust with their observation partners. Some also highlighted that they valued the opportunity to get to know someone of a different rank or background from their own:
It was a great opportunity to meet people, to meet people in our division and to get—to spend time with people that you wouldn’t normally spend time with…. You don’t ever get to see other faculty members practice, much less teach, to actually see them teach.
One participant stated, “I enjoyed learning from people who have more experience and training…. getting to know your colleagues who you might not work with as much.”
One frequently cited reason for the success of the program was its feasibility, fueled by a good balance between flexibility and structure. Many mentioned that the program required a time commitment, but they also appreciated its flexibility:
I was doing the teaching anyway, so it would allow for me, personally, to get real-time feedback from teaching that was already on my calendar. It wasn’t too onerous or too structured. We were given several months to observe somebody else just a couple of time[s]. That gave me flexibility to be able to really return the favor for a colleague in a way that I could still get that in my schedule…It was both structured, but it wasn’t structured on such a tight timeline that it was burdensome.
Apart from the training and debriefing sessions, the bulk of the time commitment was in the observation and feedback activities, which could be scheduled to maximize convenience for participants. Additionally, respondents valued the structure and guidance provided within the program. Highly valued structural elements included the requirement to identify a few key skills or issues to work on in advance of each observation, a concrete observation checklist to guide feedback, the emphasis on timely feedback, and reminders to keep participants on track.
Program Challenges
Unsurprisingly, several respondents commented that, although the total time commitment was low, scheduling the observation sessions was a challenge given the many demands on the time of clinician educators. One participant explained, “The major issue that I had in continuing was arranging times that we mutually could be together. I felt very pressured, too, with my clinical responsibilities and found it very difficult to arrange time or make time—” Participants did not always have teaching sessions at times compatible with their partner’s schedule, and participants taught in widely differing settings, sometimes several miles apart. Because teaching observations often happened as part of a longer activity, others found it challenging to give feedback immediately after the observation session. For example, if teaching rounds were observed, the faculty member who was teaching usually needed to engage in patient care activities immediately afterward, forcing the partners to schedule the feedback discussion at another time. A few other faculty members, both participants and non-participants, commented that the lack of funding for protected time meant program activities were less of a priority for them. As one shared, “There are no flexible dollars supporting any of us, and so you must basically spend all your time chasing salary, and so if it isn’t going to generate salary, it goes WAY down on your list of priorities.”
A second challenge was that the quality of the feedback experience itself varied considerably; while some had very positive growth experiences, others did not. Some cited the difference in rank between partners as an impediment, making observations “nerve-wracking” for more junior participants and leaving more senior participants wishing their observer had been more critical. One participant said, “I find it harder to give honest feedback to a much more experienced person that doesn’t feel like it’s coming across as I’m telling you what to do, and I’m some young whippersnapper. Conversely, it doesn’t feel good to have someone who’s experienced in the field say things that I’m gonna perceive as negative, even if they’re meant to be constructive or positive.” Others felt their partner valued different elements of teaching than they did and, as a result, found their partner’s feedback and suggestions less useful than in pairings where values were more concordant:
The only thing that I would say was—my partner and I maybe had different perspective and ways that we would prioritize certain things. What may have been a priority in terms of a teaching style for him is not necessarily what I would have been prioritizing or thought about prioritizing. It’s still feedback that’s helpful, I guess, but maybe not a suggestion that I would incorporate in any way. It’s a different framework, a different approach.
A third challenge was maintaining participant engagement, both during a cycle and into the next. There were many comments about BaseCamp, the online platform used to maintain communication among participants during the cycle. Due in part to its novelty, many viewed BaseCamp as time-consuming to learn or as “just one more thing” to deal with in their email inbox: “I wasn’t able to get familiar enough with [BaseCamp Site] so that I was using it seamlessly. I would say I minimally utilized that as a resource during—I liked the idea of it, but practically, it didn’t work out to integrate into that whole process very well for me.” Additionally, some participants never completed their second observation, most attributing this to the difficulties in scheduling observation times. Others said they did not sign up for the next cycle because they did not have time to commit, did not understand that they could, or did not perceive additional benefits from continued participation.
Recommendations for Improvement
Several respondents felt there was insufficient incentive for participation in the program. In addition to salary support providing protected time to participate, respondents suggested that greater recognition and visibility at the division and department level would have encouraged higher levels of engagement: “There wasn’t—didn’t seem to be enough recognition, or would have been nice to have had a little more recognition of the folks going through it by the division leadership.” Another respondent suggested making the program part of a “master teacher” certification program as a way of improving participants’ commitment. A second recommendation, aimed at incentivizing participation and motivating participants to meet program expectations, was to involve “outsiders,” external faculty educators, in program leadership roles. Two participants shared similar views on the differing perspective brought by external faculty: “Honestly, it might carry more weight if it were external faculty. I enjoyed it because I enjoyed working with both of them [program leaders]. There might be more weight to it if it were someone they brought in from the outside,” one shared. Another stated, “I hate to say this, but I think external faculty. I think it feels like you’re just doing more divisional activities sometimes when it’s with your own division. Me, personally, I think I would have more accountability if I were doing it with people outside of our division.” While respondents thought it was fun and collegial to limit the program to people in the same division, they felt there was less accountability in this model.
A third recommendation was to provide more assistance to participants with program logistics, though strategies for operationalizing this were controversial. Some respondents wanted more reminders via email, while others felt reminders were a distraction and preferred minimal communication from the program. Respondents also disagreed about whether it would have been helpful to have a third party set up the observation sessions between partners. Some felt only they could know which type of teaching session would be most productive for the observation, so a third party would create unneeded complications and decrease the effectiveness of the experience: “I don’t think anybody else would’ve been able to schedule it for me. My schedule is too complicated. We never know when we’re gonna have learners, or if they’re out…They’re post-call. They’re in clinic. I don’t think anybody else would’ve been able to put all that information together and figure out a time.” Others would have been grateful to have someone manage logistics for them, as it would have saved time and helped ensure accountability.
A fourth set of recommendations concerned ways to improve the quality of the feedback experience. All agreed that it was important to provide training in giving feedback during the kickoff session before the observations began. This helped set expectations and made peer feedback feel more “safe”; however, some respondents still felt challenged by the task of giving feedback to a peer. One participant stated, “[Feedback] was actually harder than I thought it would be. You want it to be constructive and not a really, I guess, picky or nitpicky. I felt like it—it was easy to talk about the good, but much more difficult to talk about areas of improvement.” Respondents made many conflicting suggestions about the process of assigning partners to improve the quality of the feedback. Several valued having someone more senior as their partner; others found it too intimidating, with one participant expressing, “I think if I had been matched with someone who was senior to me or even someone who did something different than I did, so that I could learn from someone who was not someone [who] had just finished training, I may have found it [feedback] more useful.” Some wanted a partner whose clinical and teaching duties were more similar to their own, while others valued feedback from someone with a different perspective. A few respondents even suggested expanding outside the division to gain new perspectives and involve a greater number of people in the program. Some would have liked to choose their partner to get the kind of feedback they needed, while others liked having their partner assigned. One suggestion to overcome the challenges of the peer observation partner system was to supplement the observation sessions with occasional group sessions where participants could present their personal teaching challenges and get feedback. This would expose everyone to the senior, most respected teachers in the division, even if they were not partnered with them.
Reasons for Non-participation
The six faculty members who did not participate gave a variety of reasons. Three cited upcoming career transitions: two were planning moves to new jobs not involving teaching, and one was preparing for retirement. The other three reported that the primary drivers were a lack of protected time to participate and the program’s limited visibility; they also perceived that division leadership did not prioritize participation.
Discussion
Overall, the 13 participating faculty members felt that this peer observation program elevated teaching in a unique way, provided much-desired feedback on teaching skills, and helped build community among participants. The program required few resources apart from a few hours of faculty time over several months and the cost of refreshments for the opening and closing sessions. Yet, as expected, there were considerable challenges around scheduling and competing clinical demands.
Implications for Future Peer Observation Faculty Development Programs
Our qualitative evaluation of the Teaching Academy program highlights the need for high-quality feedback. Faculty in a professional development program must be assured that participation is worth their time and effort. Guiding feedback toward specific teaching skills using a standardized observation form and allowing faculty to self-identify areas of need were key elements of this program that have been used in other model programs.25,26,30 Still, observing partners must themselves have the confidence and experience to provide meaningful insight and direction. Pairing a peer observation program with more formal and extensive observation and feedback training might be beneficial.10,26 Shifting to a “master teacher” approach with a few skilled coaches could also address this issue but might risk losing the community-building benefits of the current program. A final option would be to allow participants to choose the person who performs their observation and feedback; this would create logistical challenges but might improve participants’ perception of the quality of feedback.
The other main concern uncovered in this study was the lack of incentive for participation and engagement. While some felt that the mere existence of the peer observation program elevated the role of education in the division, many commented that teaching, and consequently professional development in teaching, may still be an afterthought in academic medicine. Faculty participants in development programs endorse the idea that a successful teaching skills program should be supported by their division,31 but given the competing demands for their time, incentives from institutional leadership are needed. One possible solution is to integrate faculty development in teaching skills into an institution’s promotion process by requiring a minimum number of peer observations at each level of faculty rank. Similarly, peer observation could become an expected part of the annual review process for clinician-educators, either as a competency to master32 or as a means of gaining special recognition tied to salary bonus plans or professional development funds for the following academic year.
Finally, there is a clear need for personnel support for educational programs, both in program administration and in helping to manage faculty calendars. Academic medicine faculty face increasing demands regarding clinical productivity, documentation, supervision, and scholarship. Concerns about program promotion, visibility, and communication with participants may be addressed by leadership visibly supporting such programs and providing protected time for faculty to develop their teaching.
Study Limitations
This study was limited to a single academic institution, though it included faculty across specialties, and the response rate among participants in follow-up interviews was high. While qualitative interviews added to our understanding of the benefits and challenges of the program, these semi-structured interviews were not conducted immediately after peer observation cycles, so there is a risk of recall bias among participants. While some struggled to remember specifics of the training or debrief sessions, most had no trouble recalling the barriers to motivation and engagement, overarching concerns about lack of protected time, and the degree of respect for teaching activities.
Conclusion
This study confirms previous findings related to the challenges of recruitment and retention in peer observation programs.21,22 Many such programs have had small numbers of participants and have failed to demonstrate sustainability. Whereas many previous evaluations of peer faculty development programs have provided quantitative feedback,1 we add to the understanding of the problem of lack of participation and sustained engagement by providing a more robust qualitative view of program participants and non-participants. We were able to explore more deeply the benefits and challenges busy academic clinicians face in their professional development as educators. While faculty development programs that use peer observation are well received by faculty and consistent with principles of sound andragogy, they continue to be limited by academic faculty’s competing demands. High-quality peer feedback, clear incentives for participation, and administrative support are essential elements of success for these programs.
Acknowledgments:
We would like to acknowledge and thank Cynthia J. Brown, MD, MSPH, FACP for supporting the faculty development program and for providing manuscript edits.
Funding/Support:
Macy Stockdill is supported by the NIH/NINR under Grant 1F31NR018782. Bailey A. Hendricks is supported by the Robert Wood Johnson Foundation Future of Nursing Scholarship. Content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Contributor Information
Macy Stockdill, School of Nursing, University of Alabama at Birmingham, Birmingham, AL.
Bailey Hendricks, College of Nursing, University of Nebraska Medical Center, Omaha, NE.
Michael D. Barnett, School of Medicine, University of Alabama at Birmingham, Birmingham, AL.
Marie Bakitas, School of Nursing, and Department of Medicine, Division of Gerontology, Geriatrics, and Palliative Care, University of Alabama at Birmingham, Birmingham, AL.
Caroline N. Harada, School of Medicine, University of Alabama at Birmingham, Birmingham, AL.
References
1. Leslie K, Baker L, Egan-Lee E, Esdaile M, Reeves S. Advancing faculty development in medical education: a systematic review. Academic Medicine. 2013;88(7):1038–1045. doi:10.1097/ACM.0b013e318294fd29
2. Steinert Y, Mann K, Anderson B, et al. A systematic review of faculty development initiatives designed to enhance teaching effectiveness: a 10-year update: BEME Guide No. 40. Medical Teacher. 2016;38(8):769–786. doi:10.1080/0142159X.2016.1181851
3. McSparron JI, Huang GC, Miloslavsky EM. Developing internal medicine subspecialty fellows’ teaching skills: a needs assessment. BMC Medical Education. 2018;18(1):1–6. doi:10.1186/s12909-018-1283-2
4. McLean M, Cilliers F, Van Wyk JM. Faculty development: yesterday, today and tomorrow. Medical Teacher. 2008;30(6):555–584. doi:10.1080/01421590802109834
5. Fluit CR, Bolhuis S, Grol R, Laan R, Wensing M. Assessing the quality of clinical teachers: a systematic review of content and quality of questionnaires for assessing clinical teachers. J Gen Intern Med. 2010;25(12):1337–1345. doi:10.1007/s11606-010-1458-y
6. Morgan HK, Purkiss JA, Porter AC, et al. Student evaluation of faculty physicians: gender differences in teaching evaluations. Journal of Women’s Health. 2016;25(5):453–456. doi:10.1089/jwh.2015.5475
7. Elzubeir M, Rizk D. Evaluating the quality of teaching in medical education: are we using the evidence for both formative and summative purposes? Medical Teacher. 2002;24(3):313–319. doi:10.1080/01421590220134169
8. Beckman TJ. Lessons learned from a peer review of bedside teaching. Academic Medicine. 2004;79(4):343–346. doi:10.1097/00001888-200404000-00011
9. Fry H, Morris C. Peer observation of clinical teaching. Med Educ. 2004;38(5):560–561. doi:10.1111/j.1365-2929.2004.01869.x
10. Mookherjee S, Monash B, Wentworth KL, Sharpe BA. Faculty development for hospitalists: structured peer observation of teaching. Journal of Hospital Medicine. 2014;9(4):244–250. doi:10.1002/jhm.2151
11. Borus J, Pitts S, Gooding H. Acceptability of peer clinical observation by faculty members. Clin Teach. 2018;15(4):309–313. doi:10.1111/tct.12681
12. Sullivan PB, Buckle A, Nicky G, Atkinson SH. Peer observation of teaching as a faculty development tool. BMC Med Educ. 2012;12:26. doi:10.1186/1472-6920-12-26
13. Pattison AT, Sherwood M, Lumsden CJ, Gale A, Markides M. Foundation observation of teaching project: a developmental model of peer observation of teaching. Med Teach. 2012;34(2):e136–142. doi:10.3109/0142159X.2012.644827
14. Finn K, Chiappa V, Puig A, Hunt DP. How to become a better clinical teacher: a collaborative peer observation process. Med Teach. 2011;33(2):151–155. doi:10.3109/0142159X.2010.541534
15. Trujillo JM, DiVall MV, Barr J, et al. Development of a peer teaching-assessment program and a peer observation and evaluation tool. American Journal of Pharmaceutical Education. 2008;72(6):147. doi:10.5688/aj7206147
16. Cairns AM, Bissell V, Bovill C. Evaluation of a pilot peer observation of teaching scheme for chair-side tutors at Glasgow University Dental School. British Dental Journal. 2013;214(11):573–576. doi:10.1038/sj.bdj.2013.527
17. Bandura A, Cervone D. Differential engagement of self-reactive influences in cognitive motivation. Organizational Behavior and Human Decision Processes. 1986;38(1):92–113.
18. Guthrie ER. Association by contiguity. Psychology: A Study of a Science. 1959;2:158–195.
19. Shapiro N, Janjigian M, Schaye V, et al. Peer to peer observation: real-world faculty development. Med Educ. 2019;53(5):513–514. doi:10.1111/medu.13865
20. Schofield L, Page M. ‘It has a lot more to contribute than just the teaching’: perceptions of participants in a peer-observation programme to develop clinical teaching. Future Healthcare Journal. 2019;6(Suppl 1):170. doi:10.7861/futurehosp.6-1-s170
21. Mortaz Hejri S, Mirzazadeh A, Jalili M. Peer observation of teaching for formative evaluation of faculty members. Med Educ. 2018;52(5):567–568. doi:10.1111/medu.13566
22. DiVall M, Barr J, Gonyeau M, et al. Follow-up assessment of a faculty peer observation and evaluation program. American Journal of Pharmaceutical Education. 2012;76(4):61. doi:10.5688/ajpe76461
23. Sutkin G, Wagner E, Harris I, Schiffer R. What makes a good clinical teacher in medicine? A review of the literature. Academic Medicine. 2008;83(5):452–466. doi:10.1097/ACM.0b013e31816bee61
24. Haws J, Rannelli L, Schaefer JP, et al. The attributes of an effective teacher differ between the classroom and the clinical setting. Advances in Health Sciences Education. 2016;21(4):833–840. doi:10.1007/s10459-016-9669-6
25. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching: evaluation of a national dissemination program. Arch Intern Med. 1992;152(6):1156–1161. doi:10.1001/archinte.152.6.1156
26. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Academic Medicine. 1998;73(6):688–695. doi:10.1097/00001888-199806000-00016
27. Beckman TJ, Lee MC, Rohren CH, Pankratz VS. Evaluating an instrument for the peer review of inpatient teaching. Med Teach. 2003;25(2):131–135. doi:10.1080/0142159031000092508
28. Skeff KM, Stratos GA, Bergen MR. Evaluation of a medical faculty development program: a comparison of traditional pre/post and retrospective pre/post self-assessment ratings. Evaluation & the Health Professions. 1992;15(3):350–366. doi:10.1177/016327879201500307
29. QSR International (1999). NVivo Qualitative Data Analysis Software [NVivo 12].