Journal of Education and Health Promotion. 2024 Jan 22;12:430. doi: 10.4103/jehp.jehp_1795_22

Peer-review of teaching materials in Canadian and Australian universities: A content analysis

Roghayeh Gandomkar 1, Azadeh Rooholamini 1,2
PMCID: PMC10920684  PMID: 38464635

Abstract

BACKGROUND:

Peer-review of teaching materials (PRTM) has been considered a rigorous method for evaluating teaching performance, one that can overcome the psychometric limitations of student evaluations and capture the complexity and multidimensionality of teaching. The current study aims to analyze PRTM practices within the faculty evaluation systems of Canadian and Australian universities.

MATERIALS AND METHODS:

This is a qualitative content analysis study in which the websites of Canadian and Australian universities (n = 46), identified based on expert opinion, were searched. Data related to PRTM were extracted and analyzed using an integrative content analysis that incorporated inductive and deductive elements iteratively. Data were coded and then organized into subcategories and categories using a predetermined framework comprising the major design elements of a PRTM system. The number of universities for each subcategory was calculated.

RESULTS:

A total of 21 universities provided information on PRTM on their websites. The main features of the PRTM programs were organized under seven major design elements. Universities applied PRTM mostly (n = 11) as a summative evaluation. Between half and two-thirds of the universities did not provide information regarding the identification of reviewers and candidates, the preparation of reviewers, and the logistics (how often and when) of the PRTM. Almost all universities (n = 20) defined the criteria for review in terms of teaching philosophy (n = 20), teaching activities (n = 20), teaching effectiveness (n = 19), educational leadership (n = 18), teaching scholarship (n = 17), and professional development (n = 14).

CONCLUSION:

The major design elements of PRTM, together with the categories and subcategories offered in the current study, provide a practical framework to design and implement a comprehensive and detailed PRTM system in the academic setting.

Keywords: Australian, Canadian, content analysis, faculty evaluation, peer-review, teaching

Introduction

Teaching effectiveness is a major criterion used for formative or summative decision-making when evaluating faculty performance.[1] Student evaluation of teaching has remained the primary and, in many cases, the only tool for measuring teaching effectiveness over the last decades, notwithstanding questions about its validity as a legitimate measure.[2] Other measures of teaching effectiveness, such as peer ratings, are also fallible in one or more ways.[3] Recently, there has been a trend toward triangulating evidence to measure teaching effectiveness in order to overcome psychometric limitations and capture the complexity and multidimensionality of teaching.[4,5,6,7]

There has been some progress toward the recognition of teaching since Boyer (1990) redefined scholarship to include teaching.[6] If teaching performance is to be rewarded as scholarship, it should be subjected to the same rigorous peer-review process applied to other forms of scholarship, such as research.[8]

Peer-review of teaching (PRT) is defined as informed peer judgment about faculty teaching performance and is used to foster improvement or make personnel decisions.[9] Typically, the peer is a colleague qualified by expertise or training to serve as a knowledgeable judge of the accuracy of the evidence. The notion of informed judgment implies a systematic evaluation based on appropriate criteria and thoughtful processes.[10]

PRT has two forms: peer observation of teaching (POT) and peer review of teaching materials (PRTM). POT covers those aspects of teaching performance that peers are better qualified to evaluate than students.[11,12,13] PRTM involves rating the quality of teaching materials such as the course syllabus, instructional plans, texts, reading assignments, handouts, homework, and tests or projects, generally through teaching or course portfolios containing teachers' own self-reflections.[14,15,16] Together, the two forms of PRT are the most complementary sources of evidence to student evaluation of teaching. Although POT is conducted more commonly, PRTM is less subjective and more efficient and reliable than observation.[17]

The literature on PRT has mainly focused on POT practices.[12,18,19,20,21,22,23,24] The PRTM format has been operationalized through teaching portfolios, first in Canadian and then in Australian higher education, and was afterward expanded to the United States and United Kingdom academic contexts.[25] Although PRTM is advocated and its methods and effects have been tested,[26,27,28,29,30] little attention has been paid to how it has been used systematically in academic settings.[31] To help address this gap in knowledge, this study aimed to identify the extent to which Canadian and Australian universities applied elements of PRTM in their faculty evaluation systems. We focused on these two countries because they are pioneers in designing and using PRTM programs in the academic setting. In pursuit of better accountability and improved quality in teaching, Canadian and Australian higher education institutions have promoted and supported excellence in teaching and the scholarship of teaching.[31,32,33,34,35]

To evaluate the data related to the PRTM programs of the selected universities, we used Chism's (2007) systematic approach, which specifies the major design elements of a PRTM program, as a predetermined framework: the purpose of PRT, identifying the faculty members involved, logistics of the review, criteria and evidence for peer-review, standards to be used in judgment, instruments for performing the reviews, and preparation of reviewers.[9]

Although her framework contains all the elements of PRTM, it does not propose the details of each element. Therefore, another aim of this study was to analyze and categorize the details of PRTM programs in the selected universities in order to expand Chism's elements. This study takes the next step by providing empirical evidence of the use of those elements and of how they interact as a framework to help guide the practical implementation and evaluation of PRTM in health professions education.

Materials and Methods

Study design and setting

We followed two steps to address the two aims of the study. First, a qualitative content analysis methodology was used to analyze the data related to PRTM programs in Canadian and Australian universities, identified based on expert opinion, and to produce a practical framework. Second, the frequency of the framework's components was identified.

Data collection

Website data were used for this study. We performed a content search on all websites of Canadian and Australian universities through the URL links available on the national/regional rank listing of the Academic Ranking of World Universities website. The search was conducted between June 2020 and September 2020 and updated in June 2022 using the following terms: peer-review of teaching, PRT, peer evaluation, peer assessment, peer development, teaching portfolio, academic portfolio, educator portfolio, faculty portfolio, and teaching dossier. If the initial search failed to yield any results, a supplementary search was carried out with broader terms: promotion, rank, and tenure. Finally, we searched for the "teaching and learning center" and its equivalent terms on the website but found no additional records. We limited the information retrieval process to the university home page and all pages within drill-downs, that is, moving from general to detailed data such as policies and guidelines. Universities that provided information on PRTM on their websites were included, and universities that used only the POT component were excluded from further analysis.
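The term-based screening described above was performed manually on each university's website. As a rough illustration only, the following Python sketch shows how a comparable screen could be automated; the URL, function name, and term lists here are illustrative assumptions, not the study's actual procedure.

```python
# Illustrative sketch: screen a page for the primary search terms, falling back
# to the broader supplementary terms when no primary term is found.
import requests

PRIMARY_TERMS = [
    "peer-review of teaching", "peer review of teaching", "peer evaluation",
    "peer assessment", "peer development", "teaching portfolio",
    "academic portfolio", "educator portfolio", "faculty portfolio",
    "teaching dossier",
]
SUPPLEMENTARY_TERMS = ["promotion", "rank", "tenure"]


def screen_page(url: str) -> dict:
    """Fetch a page and report which search terms appear in its text."""
    text = requests.get(url, timeout=30).text.lower()
    hits = [term for term in PRIMARY_TERMS if term in text]
    if not hits:  # supplementary search with broader terms
        hits = [term for term in SUPPLEMENTARY_TERMS if term in text]
    return {"url": url, "hits": hits}


if __name__ == "__main__":
    # Hypothetical example URL; the study worked from the ARWU national/regional listings.
    print(screen_page("https://www.example.edu/teaching-and-learning"))
```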

Data analysis

Data from the websites were extracted and entered into Microsoft OneNote 2016, with a separate sheet for each university. We developed a coding scheme based on the major design elements of a PRTM system outlined by Chism (2007), with modifications, as a guide for data categorization. For step 1 of the study, we employed an integrative approach to qualitative content analysis, incorporating both inductive and deductive elements iteratively.[36] To gain a deeper understanding of the data, all retrieved data were read several times, and meaning units were recognized inductively and assigned appropriate codes. The codes were abstracted and allocated to subcategories based on similarities and differences. Subcategories were then assigned deductively under the elements of Chism's framework. The inductive and deductive processes were performed iteratively, and elements were revised when necessary. Data analysis was conducted by one of the authors; however, the authors met frequently throughout the analysis to review codes and refine the coding framework, and disagreements were resolved through ongoing discussion. For step 2, the number of universities for each subcategory was calculated using descriptive analysis of frequency and percentage.
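As an illustration of how inductively derived codes can be organized deductively under Chism's seven design elements, the following is a minimal Python sketch; the example records and university names are hypothetical and do not reproduce the study's coding output.

```python
# Minimal sketch: group (university, element, subcategory) records into a
# framework tree keyed by Chism's seven major design elements.
from collections import defaultdict

CHISM_ELEMENTS = [
    "purpose",
    "identifying the faculty members involved",
    "logistics",
    "criteria and evidence",
    "standards",
    "instruments",
    "preparation of reviewers",
]


def organize(coded_units):
    """Return {element: {subcategory: set of universities}} for counting later."""
    framework = {element: defaultdict(set) for element in CHISM_ELEMENTS}
    for university, element, subcategory in coded_units:
        framework[element][subcategory].add(university)
    return framework


# Hypothetical coded meaning units, one tuple per extracted statement.
coded_units = [
    ("University A", "purpose", "just summative"),
    ("University B", "purpose", "both summative and formative"),
    ("University A", "instruments", "rubrics"),
]
tree = organize(coded_units)
print({e: {s: sorted(u) for s, u in subs.items()} for e, subs in tree.items() if subs})
```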

Results

Websites of 46 universities (27 Canadian and 19 Australian) were searched. A total of 21 universities provided information on PRTM on their websites and were included in the results; 14 were Canadian and 7 were Australian. The main features of the PRTM programs were organized under the seven major design elements: purpose, identifying the faculty members involved, logistics, criteria and evidence, standards, instruments, and preparation of reviewers.

Table 1 shows the major design elements and their categories and subcategories with frequencies.

Table 1.

Number of universities for each subcategory of major design elements of a PRTM

Major design elements, with their categories and subcategories; values are the number (%) of the 21 included universities.

Purpose
  Just summative: 11 (52.38%)
  Both summative and formative: 10 (47.62%)

Identifying the faculty members involved
  Who will the reviewers be? (NM=13)
    Only external to the university: 3 (14.28%)
    Both external and internal to the university: 5 (23.81%)
  Who selects the reviewers? (NM=13)
    Candidate: 2 (9.52%)
    Administrators/teaching and learning center: 6 (28.57%)
  Criteria for selecting reviewers (NM=13)
    Experienced in the peer review of teaching: 4 (19.04%)
    Qualified to evaluate the candidate's academic duties: 3 (14.28%)
    Disciplinary background: 6 (28.57%)
    Teaching and learning expertise: 5 (23.81%)
    Status (teaching award/excellence in teaching): 2 (9.52%)
    Level/rank (a certain level of seniority): 4 (19.04%)
  Number of reviewers (NM=13)
    Two or more: 8 (38.09%)
  Candidate (NM=11)
    Only continuing academic staff members: 4 (19.04%)
    All faculty members: 6 (28.57%)

Logistics
  Review intervals (NM=12)
    Annually or each semester: 4 (19.04%)
    Between 1 and 3 years: 3 (14.28%)
    More than 3 years: 2 (9.52%)
  Participation (NM=16)
    Mandatory: 2 (9.52%)
    Voluntary: 1 (4.76%)
    Both: 2 (9.52%)

Criteria and evidence (NM=1)
  Teaching philosophy (n=20)
    Describing teaching goals, strategies, and evaluation methods: 20 (95.23%)
  Teaching activities (n=20)
    Number of courses and course outline: 9 (42.85%)
    Course, curriculum, or teaching materials: 9 (42.85%)
    Teaching methods: 6 (28.57%)
    Grading and assessment: 5 (23.81%)
    Guest lecturing and invited teaching activities: 3 (14.28%)
    Supervision, mentorship, and student advising: 14 (66.66%)
  Teaching effectiveness (n=19)
    Teacher evaluation: 17 (80.95%)
    Course evaluation: 10 (47.62%)
    Student development, success, and progress: 7 (33.33%)
    Formal recognition of teaching accomplishment: 12 (57.14%)
  Educational leadership (n=18)
    Committee services: 12 (57.14%)
    Peer mentoring, coaching, and consultation: 5 (23.81%)
    Coordinating or running workshops and seminars: 6 (28.57%)
  Educational scholarship (n=17)
    Publications or presentations: 13 (61.90%)
    Research grants: 5 (23.81%)
    Peer-review or editorial activities: 3 (14.28%)
    Course and curriculum development: 9 (42.85%)
    Innovations in teaching and learning: 9 (42.85%)
  Professional development (n=14)
    Efforts made to improve teaching activities (scholarly teaching): 13 (61.90%)
    Participation in POT as an observer: 6 (28.57%)
    Participating in seminars and workshops: 9 (42.85%)
    Participating in educational scholarship: 1 (4.76%)

Standards (NM=19)
  Completeness of documentation: 1 (4.76%)
  Quantity and quality of evidence: 1 (4.76%)

Instruments (NM=4)
  Rubrics: 5 (23.81%)
  Rating scale: 2 (9.52%)
  Checklists: 5 (23.81%)
  Comments: 5 (23.81%)

Preparation of reviewers (NM=10)
  Workshop: 7 (33.33%)
  Guidelines: 4 (19.04%)

NM=Not Mentioned
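For readers who wish to verify the percentage column, each subcategory count is expressed as a share of the 21 included universities. The short sketch below (illustrative only, not the study's analysis code) reproduces the calculation.

```python
# Reproduce the "No. Universities (%)" column: count as a share of the 21 included universities.
TOTAL_UNIVERSITIES = 21


def share(count: int) -> str:
    return f"{count} ({count / TOTAL_UNIVERSITIES * 100:.2f}%)"


print(share(11))  # 11 (52.38%), e.g., purpose: just summative
print(share(6))   # 6 (28.57%)
```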

All universities provided information about the purpose of the PRTM. Universities applied PRTM either as a summative evaluation only (n = 11) or for both formative and summative purposes (n = 10). Over half of the universities did not publish details regarding the faculty members involved: reviewers' characteristics and their selection (n = 13) and candidates (n = 11). Of those that provided information, five recruited a mixture of reviewers external and internal to the university, six had reviewers assigned by a responsible body, and all used more than one criterion for selecting the reviewers and employed two or more reviewers. Selection based on disciplinary background was the most frequently applied criterion (n = 6). More than half of the universities that provided information implemented the PRTM for all faculty members. Most universities did not report on the logistics of the PRTM (review intervals, 12; participation, 16) [Table 1].

Almost all universities (n = 20) defined the criteria for review in terms of teaching philosophy (n = 20), teaching activities (n = 20), teaching effectiveness (n = 19), educational leadership (n = 18), teaching scholarship (n = 17), and professional development (n = 14). An analysis of the types of evidence used within each criterion revealed various data sources and types of information. Interestingly, supervision and advising were the most frequently used evidence for teaching activities (n = 14). Seventeen out of 19 universities reported evidence related to teacher evaluation for teaching effectiveness, including student surveys, POT, and feedback from alumni. Providing education committee services, publications and presentations, and scholarly teaching were among the common evidence for educational leadership (n = 12), educational scholarship (n = 13), and professional development (n = 13), respectively [Table 1].

Only two universities proposed standards for judgment, namely completeness of documentation and the quantity and quality of evidence, one standard per university. Universities used a variety of instruments for performing reviews, such as rubrics, checklists, rating scales, and written comments. Finally, universities provided workshops (n = 7) and guidelines (n = 4) for the preparation of reviewers [Table 1].

Discussion

Although the POT component of PRT has been addressed adequately in the higher education literature, there has been less emphasis on the PRTM constituent. We conducted a content analysis of PRTM practices presented on the websites of 21 Canadian and Australian universities. Applying content analysis to web-based data is a practical and feasible research endeavor[37] that has rarely been used in health professions education.[38]

To our knowledge, this is the first study to utilize a well-documented framework to depict the major design elements of PRTM. We found that universities adequately documented the purpose of the PRTM program and the criteria, evidence, and instruments applied for review. Meanwhile, other design elements of the program, referring to its process, procedure, logistics, and support aspects, were not sufficiently addressed: between half and two-thirds of the universities did not provide information regarding the identification of reviewers and candidates, the preparation of reviewers, and the logistics (how often and when) of the PRTM. Lastly, while universities utilized indicators for performance measurement in their evaluation instruments, only two universities explicitly reported standards for judgment.

All universities in our study had specified their PRTM goals, presumably because the purpose affects the other elements, such as what information should be collected and how that information should be used. Interestingly, none of the universities applied PRTM for formative purposes only. The reason may be that universities with purely formative evaluations relied on the teaching observation (POT) component to provide information for developing teaching practice and were therefore excluded from our study.[39] In our framework, the faculty members involved were categorized into these elements: who will serve as reviewers, who selects reviewers and on what basis, how many reviewers are needed, and who will be reviewed. Of those universities that provided information, all used more than one criterion for selecting the reviewers and employed two or more reviewers. Also, most of them used a mixture of reviewers internal and external to the university, perhaps to assemble a panel of reviewers that can collectively reflect different perspectives.[40]

In our framework, the criteria were categorized as teaching philosophy, teaching activities, teaching effectiveness, educational leadership, educational scholarship, and professional development. Our analysis showed that most universities reported these criteria in full and subcategorized them as evidence drawing on various data sources and types of information. These findings confirm PRT as a comprehensive approach to teaching evaluation that uses multiple sources of evidence.[4,7,41]

For education as scholarship, most universities reported evidence based on the criteria for scholarship. Furthermore, as suggested by Shinkai et al. (2018),[42] Glassick's criteria, including clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique, can serve as a helpful framework for defining teaching criteria.

We are not aware of the methods that universities used to identify their criteria. The literature discusses the value of consensus approaches, including nominal group and Delphi techniques, for developing and validating criteria.[43,44]

Criteria, evidence, and standards are three essential building blocks at the heart of every PRT system, because they are the basis for developing tools to evaluate teaching.[9,45] Standards as the basis for evaluating good teaching are not explicitly reported in most university documentation. Arreola (2007) cautions that standards of performance should be identified (e.g., "the syllabus contains the following items") so that reviewers rate the same thing, and that the labels on the rating scale should relate to the criteria being evaluated.[46]

Most universities reported various tools for evaluating evidence and rating teaching portfolios against the criteria. However, universities did not describe the methodology used to design these tools, either for evaluating evidence or for rating portfolios.[45] This could be a line of inquiry for further research. For instance, Van der Schaaf and Stokking (2008) suggested using a chain model of the assessment process to develop and validate a design for teacher portfolio assessment. In their study, two links were verified: the link between content standards and portfolio format, and the link between content standards and raters' scoring.[47]

These findings mirror the eight-step model for the design of faculty evaluation systems proposed by Arreola (1999) and applied in academic settings, which considers the definition of performance aspects as its main focus.[48,49] Our findings are also aligned with the results of a systematic scoping review of papers reporting the use of portfolios by medical educators.[50] Hong et al. (2021) found that the components of medical educators' portfolios were one of the main focuses of the reviewed papers. They categorized the components as teaching and scholarship, educational research products, leadership and administration, curriculum development, assessment of learners, formal recognition, professional development, and general, which is comparable with the criteria and evidence category and its subcategories in our study.[50]

The major design elements of PRTM with associated categories and subcategories offered in the current study provide a practical framework to design and implement a comprehensive and detailed PRTM system in the academic setting. Further studies are recommended to investigate the applicability of the framework.

Limitations

We used data provided on the websites of the universities, which may not depict the full picture of their PRTM practices. Triangulating these findings with other sources of data, such as surveying the universities' responsible bodies, may broaden our understanding of PRTM practices. We investigated the PRTM practices of two (pioneering) countries, which may limit the generalizability of our findings.

Implications for practice

Based on the findings from our analysis of PRTM practices in Canadian and Australian universities, we offer a practical framework to assist and guide institutions in effectively designing, developing, and evaluating peer-review programs. In addition, documentation against this framework could be an essential part of a teaching portfolio design, constituting the evidence in the portfolio.

Conclusions

Despite the robustness of PRTM for evaluating teaching performance, the investigation of its implementation in academic settings has received little attention. We conducted a content analysis of PRTM practices presented on the websites of 21 Canadian and Australian universities, utilizing a well-documented framework. Although some major design elements of PRTM were not adequately addressed on the websites, the categories and subcategories offered in the current study provide a framework to design and implement a comprehensive and detailed PRTM system in the academic setting. Further studies are suggested to investigate the applicability of the framework.

Financial support and sponsorship

This work was funded by the National Agency for Strategic Research in Medical Education, Tehran, Iran, under grant number 970559. Please contact the corresponding author for detailed information on the major elements of PRTM applied by the universities.

Conflicts of interest

There are no conflicts of interest.

References

1. Berk RA. Survey of 12 strategies to measure teaching effectiveness. Int J Teach Learn High Educ. 2005;17:48–62.
2. Hornstein HA. Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Educ. 2017;4:1304016.
3. Berk RA. Top five flashpoints in the assessment of teaching effectiveness. Med Teach. 2013;35:15–26. doi: 10.3109/0142159X.2012.732247.
4. Jahangiri L, Mucciolo TW, Choi M, Spielman AI. Assessment of teaching effectiveness in US dental schools and the value of triangulation. J Dent Educ. 2008;72:707–18.
5. Knapper C. Broadening our approach to teaching evaluation. New Dir Teach Learn. 2001;88:3–9.
6. Boyer EL. Scholarship Reconsidered: Priorities of the Professoriate. San Francisco, CA: Jossey-Bass; 1990.
7. Berk RA. Start spreading the news: Use multiple sources of evidence to evaluate teaching. J Fac Dev. 2018;32:73–81.
8. Berk RA, Theall M. Thirteen Strategies to Measure College Teaching: A Consumer's Guide to Rating Scale Construction, Assessment, and Decision Making for Faculty, Administrators, and Clinicians. Stylus Publishing, LLC; 2006.
9. Chism NVN. Peer Review of Teaching: A Sourcebook. 2nd ed. Bolton, MA: Anker; 2007.
10. Fernandez CE, Yu J. Peer review of teaching. J Chiropr Educ. 2007;21:154–61. doi: 10.7899/1042-5055-21.2.154.
11. Siddiqui ZS, Jonas-Dwyer D, Carr SE. Twelve tips for peer observation of teaching. Med Teach. 2007;29:297–300. doi: 10.1080/01421590701291451.
12. Pattison AT, Sherwood M, Lumsden CJ, Gale A, Markides M. Foundation observation of teaching project: A developmental model of peer observation of teaching. Med Teach. 2012;34:e136–42. doi: 10.3109/0142159X.2012.644827.
13. Berk RA, Naumann PL, Appling SE. Beyond student ratings: Peer observation of classroom and clinical teaching. Int J Nurs Educ Scholarsh. 2004;1:Article 10. doi: 10.2202/1548-923x.1024.
14. Lamki N, Marchand M. The medical educator teaching portfolio: Its compilation and potential utility. Sultan Qaboos Univ Med J. 2006;6:7–12.
15. Dalton CL, Wilson A, Agius S. Twelve tips on how to compile a medical educator's portfolio. Med Teach. 2018;40:140–5. doi: 10.1080/0142159X.2017.1369502.
16. Little-Wienert K, Mazziotti M. Twelve tips for creating an academic teaching portfolio. Med Teach. 2018;40:26–30. doi: 10.1080/0142159X.2017.1364356.
17. Lopa JM. A scholarly approach to a peer review of teaching. J Culin Sci Technol. 2012;10:352–64.
18. Bell M, Cooper P. Peer observation of teaching in university departments: A framework for implementation. Int J Acad Dev. 2013;18:60–73.
19. Georgiou H, Sharma M, Ling A. Peer review of teaching: What features matter? A case study within STEM faculties. Innov Educ Teach Int. 2018;55:190–200.
20. Finn K, Chiappa V, Puig A, Hunt DP. How to become a better clinical teacher: A collaborative peer observation process. Med Teach. 2011;33:151–5. doi: 10.3109/0142159X.2010.541534.
21. Sullivan PB, Buckle A, Nicky G, Atkinson SH. Peer observation of teaching as a faculty development tool. BMC Med Educ. 2012;12:26. doi: 10.1186/1472-6920-12-26.
22. Mortaz Hejri S, Mirzazadeh A, Jalili M. Peer observation of teaching for formative evaluation of faculty members. Med Educ. 2018;52:567–8. doi: 10.1111/medu.13566.
23. Esterhazy R, de Lange T, Bastiansen S, Wittek AL. Moving beyond peer review of teaching: A conceptual framework for collegial faculty development. Rev Educ Res. 2021;91:237–71.
24. Teoh SL, Ming LC, Khan TM. Faculty perceived barriers and attitudes toward peer review of classroom teaching in higher education settings: A meta-synthesis. Sage Open. 2016;6:2158244016658085.
25. Knapper C, Wright WA. Using portfolios to document good teaching: Premises, purposes, practices. New Dir Teach Learn. 2001;2001:19–29.
26. Zeichner K, Wray S. The teaching portfolio in US teacher education programs: What we know and what we need to know. Teach Teach Educ. 2001;17:613–21.
27. Quinlan KM. Inside the peer review process: How academics review a colleague's teaching portfolio. Teach Teach Educ. 2002;18:1035–49.
28. Chye S, Zhou M, Koh C, Liu WC. Using e-portfolios to facilitate reflection: Insights from an activity theoretical analysis. Teach Teach Educ. 2019;85:24–35.
29. De Rijdt C, Tiquet E, Dochy F, Devolder M. Teaching portfolios in higher education and their effects: An explorative study. Teach Teach Educ. 2006;22:1084–93.
30. Ramsey JL. Peer Review of Teaching and the Pursuit of Excellent Teaching in Higher Education. Department of Instructional Psychology and Technology, Brigham Young University; 2022.
31. Miller-Young J, Sinclair M, Forgie S. Teaching excellence and how it is awarded: A Canadian case study. Can J High Educ. 2020;50:40–52.
32. Cummings R, Elliott S, Stoney S, Tucker MB, Wicking R, de St Jorre TJ. Australian University Teaching Criteria and Standards Project (Ref: SP12-2335). Australian Government Office for Learning and Teaching; 2014.
33. Cooper T. Rethinking teaching excellence in Australian higher education. Int J Comp Educ Dev. 2019;21:83–98.
34. Vardi I, Quin R. Promotion and the scholarship of teaching and learning. High Educ Res Dev. 2011;30:39–49.
35. Cunsolo J. The scholarship of teaching: A Canadian perspective with examples. Can J High Educ. 1996;26:35–56.
36. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–88. doi: 10.1177/1049732305276687.
37. Kim I, Kuljis J. Applying content analysis to web-based content. J Comput Inform Technol. 2010;18:369–75.
38. Macarthur J, Eaton M, Mattick K. Every picture tells a story: Content analysis of medical school website and prospectus images in the United Kingdom. Perspect Med Educ. 2019;8:246–52. doi: 10.1007/s40037-019-00528-5.
39. Fletcher JA. Peer observation of teaching: A practical tool in higher education. J Fac Dev. 2018;32:51–64.
40. Cosser M. Towards the design of a system of peer review of teaching for the advancement of the individual within the university. High Educ. 1998;35:143–62.
41. Villanueva KA, Brown SA, Pitterson NP, Hurwitz DS, Sitomer A. Teaching evaluation practices in engineering programs: Current approaches and usefulness. Int J Eng Educ. 2017;33:1317–34.
42. Shinkai K, Chen CA, Schwartz BS, Loeser H, Ashe C, Irby DM. Rethinking the educator portfolio: An innovative criteria-based model. Acad Med. 2018;93:1024–8. doi: 10.1097/ACM.0000000000002005.
43. Newman LR, Lown BA, Jones RN, Johansson A, Schwartzstein RM. Developing a peer assessment of lecturing instrument: Lessons learned. Acad Med. 2009;84:1104–10. doi: 10.1097/ACM.0b013e3181ad18f9.
44. Van der Schaaf MF, Stokking KM. Construct validation of content standards for teaching. Scand J Educ Res. 2011;55:273–89.
45. Smith K, Tillema H. Use of criteria in assessing teaching portfolios: Judgemental practices in summative evaluation. Scand J Educ Res. 2007;51:103–17.
46. Arreola RA. Developing a Comprehensive Faculty Evaluation System: A Guide to Designing, Building, and Operating Large-Scale Faculty Evaluation Systems. Jossey-Bass; 2007.
47. Van der Schaaf MF, Stokking KM. Developing and validating a design for teacher portfolio assessment. Assess Eval High Educ. 2008;33:245–62.
48. Arreola RA. Issues in developing a faculty evaluation system. Am J Occup Ther. 1999;53:56–63. doi: 10.5014/ajot.53.1.56.
49. Bland CJ, Wersal L, VanLoy W, Jacott W. Evaluating faculty performance: A systematically designed and assessed approach. Acad Med. 2002;77:15–30. doi: 10.1097/00001888-200201000-00006.
50. Hong DZ, Lim AJ, Tan R, Ong YT, Pisupati A, Chong EJ, et al. A systematic scoping review on portfolios of medical educators. J Med Educ Curric Dev. 2021;8:23821205211000356. doi: 10.1177/23821205211000356.
