The Malaysian Journal of Medical Sciences (MJMS). 2015 Nov;22(6):5–10.

Workplace-based Assessment: Applications and Educational Impact

Salman Yousuf Guraya
PMCID: PMC5295751  PMID: 28223879

Abstract

Workplace-based assessment (WPBA) refers to a group of assessment modalities that evaluate trainees’ performance in clinical settings. The hallmark of WPBA is the observation of the trainee’s performance in the real workplace environment, coupled with relevant feedback, thus fostering reflective practice. WPBA consists of observation of clinical performance (mini-clinical evaluation exercise, direct observation of procedural skills), discussion of clinical cases (case-based discussion), and feedback from peers, coworkers, and patients (multisource feedback). This literature review was conducted in the MEDLINE, EMBASE, CINAHL, and Cochrane Library databases. Data were retrieved by combining the Medical Subject Headings (MeSH) keywords ‘workplace based assessment’ AND ‘mini-clinical evaluation exercise’ AND ‘direct observation of procedural skills’ AND ‘case based discussion’ AND ‘multi-source feedback’. Additional studies were identified from the reference lists of all included articles. As WPBA gains popularity, there is a growing need for continuing faculty development and for stronger evidence of the validity and reliability of these instruments, which would allow academia to embed this strategy in existing curricula. To date, there are conflicting reports about the educational impact of WPBA in terms of its validity and reliability. This review draws upon the spectrum of WPBA tools, their designs and applications, and the existing evidence of the educational impact of this emerging assessment strategy in medical education. Finally, the review reports some educational impact of WPBA on learning and emphasises the need for more empirical research before its application can be endorsed worldwide.

Keywords: workplace based assessment, mini clinical evaluation exercise, direct observation of procedural skills, case based assessment

Introduction

In traditional apprenticeship training models, trainees confront a wide range of health-care problems and, like practising physicians, are required to apply their professional capabilities in a competent and skilful manner (1). It is essential that assessment models safeguard patient safety as well as offer the trainee the opportunity of contextual feedback. As plausible solutions, a wealth of assessment tools has been described, many of which are modifications of the conventional clinical long-case examination. WPBA entails the evaluation of day-to-day clinical practice in the working situation (2). Simply, it is an “assessment of what doctors actually do in practice” (3). WPBA encompasses a wide range of assessment strategies that evaluate trainees in clinical settings and provide feedback. WPBA has allowed a transition away from numbers-based experience towards a more structured format of assessment (4).

WPBA has been adopted by the UK General Medical Council (GMC) and the Academy of Medical Royal Colleges (AoMRC) for the assessment of performance in postgraduate medical education (5). Likewise, WPBA is also being used, and is gaining popularity, in undergraduate medical education (6). The GMC describes WPBA as assessment for learning (formative) rather than assessment of learning (summative) (7). Despite this, WPBA gathers objective evaluations of a trainee’s competence and performance, providing a generic blueprint for summative functions. This review explores multiple dimensions of WPBA with a view to examining its educational impact on medical trainees and physicians.

Study Design

The literature search was conducted in the MEDLINE, EMBASE, CINAHL, and Cochrane Library databases. Data were retrieved by combining the Medical Subject Headings (MeSH) keywords ‘workplace based assessment’ AND ‘mini-clinical evaluation exercise’ AND ‘direct observation of procedural skills’ AND ‘case based discussion’ AND ‘multi-source feedback’. Additional studies were identified from the reference lists of all included articles.

The retrieved body of literature revealed a range of categories of WPBA.

Categories of WPBA

A number of WPBA strategies exist, all aiming to assess various facets of trainees’ performance. Their categories and specific indications are summarised in Table 1.

Table 1. Categories of workplace-based assessment and their objectives

No. | Tasks | Tools
1 | Observation of clinical performance | Mini-clinical evaluation exercise; Direct observation of procedural skills
2 | Discussion of clinical cases | Case-based discussion
3 | Feedback from peers, coworkers, and patients | Multisource feedback (360° assessment); Mini-peer assessment tool; Team assessment of behaviors; Patient satisfaction questionnaire

Observation of clinical performance

Mini-clinical evaluation exercise (mCEX)

In the mCEX, an assessor evaluates a trainee–patient interaction in any healthcare institution. Such clinical encounters are expected to last about 15 minutes, during which the trainee conducts a focused history and/or physical examination (8). At the end, the trainee suggests a diagnosis and management plan, the performance is graded using a structured evaluation form, and constructive feedback is provided. Assessors use a nine-point Likert scale ranging from ‘unsatisfactory’ to ‘superior’ (9). This yields six domain-specific ratings and one final rating of clinical competence. The trainee undertakes around six such assessments during the year, with a different assessor for each session.
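To make the mCEX scoring structure concrete, the sketch below models a single encounter form and a yearly summary in Python. It is a minimal illustration only: the six domain labels and all class and function names are assumptions, since the article specifies just a nine-point scale, six domain-specific ratings plus one overall rating, and roughly six encounters per year with different assessors.

```python
from dataclasses import dataclass
from statistics import mean

# Nine-point scale: 1 = 'unsatisfactory' .. 9 = 'superior'
SCALE_MIN, SCALE_MAX = 1, 9

# Hypothetical domain labels; the article states only that the form
# yields six domain-specific ratings plus one final rating.
DOMAINS = (
    "history_taking",
    "physical_examination",
    "professionalism",
    "clinical_judgement",
    "communication",
    "organisation_efficiency",
)

@dataclass
class MiniCexEncounter:
    assessor: str
    domain_ratings: dict[str, int]  # one rating per domain, 1..9
    overall_rating: int             # single final rating of competence

    def __post_init__(self):
        for domain in DOMAINS:
            rating = self.domain_ratings.get(domain)
            if rating is None or not SCALE_MIN <= rating <= SCALE_MAX:
                raise ValueError(f"{domain}: rating must be {SCALE_MIN}-{SCALE_MAX}")
        if not SCALE_MIN <= self.overall_rating <= SCALE_MAX:
            raise ValueError("overall rating out of range")

def yearly_summary(encounters: list) -> dict:
    """Summarise ~six encounters; flag whether each used a new assessor."""
    assessors = [e.assessor for e in encounters]
    return {
        "n_encounters": len(encounters),
        "all_assessors_distinct": len(set(assessors)) == len(assessors),
        "mean_overall": mean(e.overall_rating for e in encounters),
        "mean_by_domain": {
            d: mean(e.domain_ratings[d] for e in encounters) for d in DOMAINS
        },
    }
```

Under these assumptions, a programme administrator would construct one MiniCexEncounter per observed session and call yearly_summary at the end of the year to review domain-level trends and assessor spread.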

Nair et al. surveyed a group of international medical graduates about the acceptability, reliability, and feasibility of the mCEX and reported that about 50% of graduates were either satisfied or very satisfied with this assessment strategy for learning (10). Other studies examined the educational impact of the mCEX in anaesthesia training and showed that the majority of trainees (and their evaluators) were satisfied with the schedule of assessments and the quality of feedback offered (11).

Direct observation of procedural skills (DOPS)

DOPS was introduced by the Royal College of Physicians and now forms an integral component of WPBA for doctors in the foundation years and those in specialist training (12). It was specifically designed to assess procedural skills involving real patients in a single encounter. During DOPS, an assessor evaluates a trainee conducting a procedure as part of his or her routine practical training against set criteria, followed by a face-to-face feedback session (13). DOPS yields scored evaluations of practical procedures and clinical examinations. This method of assessment has been shown to be valid, reliable, and feasible in evaluating postgraduate medical registrars in the UK (14).

DOPS is considered a valuable learning opportunity for trainees to enhance their performance of a skill. Close working between trainee and assessor is required for its timely and effective functioning. DOPS assessments are designed to integrate conveniently into trainees’ and assessors’ daily routines and are hence considered highly feasible (15). A feedback survey completed by 25 of the 27 pre-registration house officers undertaking the assessments showed that the majority (70%) felt that direct observation helped them improve their clinical skills (5). However, a study conducted in 2009 showed that a number of interns agreed that DOPS might enhance their clinical acumen, although this evidence was not reported scientifically and the number of respondents was very small (16).

Discussion of clinical cases

Case-based discussion (CbD)

A CbD focuses entirely on the doctor’s real work and consistently explores exactly what was done, why, and how any decision, investigation, or intervention was decided upon (17). The trainee selects the timing, the records, and the assessor. A few days before the assessment, a case is chosen with particular curriculum objectives in mind and then discussed using focused questions designed to elicit responses that indicate knowledge, skills, attitudes, and behaviours relevant to those domains. After the discussion, the assessor rates the quality of the performance and provides constructive feedback. On average, trainees are assessed six times during the year. A working plan for a typical CbD is as follows:

  1. Planning

    • Trainee selects two medical records

    • Assessor selects a medical record for discussion and assessment

    • Trainee and assessor map out potential curriculum domains and specific competencies

    • Assessor prepares questions for discussion

  2. Discussion

    • Assessor ensures that medical records are available during the discussion

    • Assessor opens the discussion by recapping the patient’s medical record

    • Assessor explores the trainee’s clinical reasoning and professional judgment

    • Discussion is focused on the case, determining the trainee’s diagnostic and management skills

  3. Feedback

    • Assessor provides effective and constructive feedback to the trainee

CbD evaluates what trainees actually did rather than what they think they might do. This is the most striking difference between CbD and the objective structured clinical examination (OSCE), which assesses physician performance under examination conditions (18). CbD has been demonstrated to have significant face and content validity (19). In addition, it has been demonstrated that, with sufficient sampling, good levels of reliability (20) and validity (21) can be achieved with assessor training. The innate nature of CbD demands that the doctor’s own patients (cases) are used for a conversation or discussion, which provides the main impetus to assess the trainee’s applied knowledge, clinical reasoning, and decision-making. CbD can explore the full range of holistic, balanced, and justifiable decisions in complex situations, such as recognising dilemmas, managing a complex case within the given range of options, deciding on a course of action, explaining that course of action, and reflecting on the final outcomes.

Multisource feedback (360° assessment)

Mini-peer assessment tool (mPAT)

mPAT encompasses the integration of opinions about a trainee’s performance in a range of competence domains from his or her colleagues. This assessment strategy gathers confidential feedback from eight peers evaluating 16 aspects from the following domains (22):

  • Diagnosis and appropriate application of available investigative tools

  • Management of time

  • Management of stress, fatigue, and workload

  • Effective communication

  • Knowledge of one’s own limitations

Foundation doctors in the UK are required to complete at least two mPATs per year. Archer et al. explored the validity of mPAT through a mapping exercise against the UK Foundation Curriculum (23). They administered a 16-item questionnaire scored against a six-point scale on two separate occasions during the pilot study. The participants’ responses were evaluated to identify the internal structure, potential points of leniency, and various measurement variables. The analysis of these variables generated two main factors: clinical competence and humanistic values. The research illustrated that, as a component of an assessment programme, mPAT has the potential to provide an effective and reliable means of collating colleagues’ opinions for comprehensively assessing Foundation trainees.
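As an illustration of how such returns might be collated, the following is a minimal Python sketch assuming a 16-item questionnaire on a six-point scale answered by eight peers. The split of items between the two factors reported by Archer et al. (clinical competence and humanistic values) is a hypothetical assumption here; the article does not specify which items load on which factor.

```python
from statistics import mean

N_ITEMS = 16
SCALE = range(1, 7)      # six-point scale: 1..6
MIN_RATERS = 8           # confidential feedback from eight peers

# Hypothetical item-to-factor mapping (not specified in the article)
CLINICAL_ITEMS = range(0, 10)
HUMANISTIC_ITEMS = range(10, 16)

def collate_mpat(returns):
    """returns: one 16-item list of ratings (1-6) per peer rater."""
    if len(returns) < MIN_RATERS:
        raise ValueError(f"need at least {MIN_RATERS} peer returns")
    for r in returns:
        if len(r) != N_ITEMS or any(x not in SCALE for x in r):
            raise ValueError("each return must hold 16 ratings of 1-6")
    # Average each item across raters, then average items within factors
    item_means = [mean(r[i] for r in returns) for i in range(N_ITEMS)]
    return {
        "clinical_competence": mean(item_means[i] for i in CLINICAL_ITEMS),
        "humanistic_values": mean(item_means[i] for i in HUMANISTIC_ITEMS),
    }
```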

Team assessment of behaviors (TAB)

TAB is a form of multisource feedback assessment for trainee doctors in the UK Foundation Curriculum (24). TAB has the following four domains, based on the GMC’s guidance on professional behaviour:

  • Developing and maintaining professional rapport and relationships with patients

  • Communicating effectively using verbal skills

  • Working in a team and as a team leader

  • Ensuring accessibility and availability

TAB is used as a formative as well as a summative tool to help trainees improve their performance. This assessment tool requires a minimum of 10 returns for a valid and reliable evaluation. The recommended mix of raters is specified, since ratings vary significantly by staff group (25). A sketch of how these requirements could be checked is given below.
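In the minimal sketch that follows, the specific staff groups and minimum counts in RECOMMENDED_MIX are hypothetical: the article states only that at least 10 returns are needed and that a recommended mix of raters is specified because ratings vary by staff group.

```python
from collections import Counter

MIN_RETURNS = 10  # minimum returns for a valid, reliable evaluation

# Hypothetical recommended minimums per staff group (illustrative only;
# the article says a recommended mix exists but not what it is)
RECOMMENDED_MIX = {"doctor": 3, "nurse": 3, "allied_health": 2}

def validate_tab_round(returns):
    """returns: list of (staff_group, ratings) tuples, one per rater."""
    if len(returns) < MIN_RETURNS:
        return False, f"only {len(returns)} returns; {MIN_RETURNS} required"
    counts = Counter(group for group, _ in returns)
    shortfall = {g: need - counts.get(g, 0)
                 for g, need in RECOMMENDED_MIX.items()
                 if counts.get(g, 0) < need}
    if shortfall:
        return False, f"rater mix below recommendation: {shortfall}"
    return True, "round meets return-count and mix requirements"
```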

Patient satisfaction questionnaire (PSQ)

PSQ can provide formative feedback on a doctor’s professional performance within a process of appraisal (26). A structured questionnaire is used to obtain patients’ feedback. Physicians are expected to obtain feedback at least once every five years, to reflect on the feedback they receive, and to use it to inform their further professional development, where appropriate (27). When patients assess physicians or larger health-care systems, patient demographics and the questionnaire administration methods (e.g., postal, telephone, or use of proxy respondents) can potentially influence the final ratings (28–30). When colleagues judge the performance of other physicians, the rater’s personal impression, the duration and nature of the rater’s relationship with the examinee, and the rater’s familiarity with the doctor’s practice can jeopardise the entire assessment process (31).

Many reports in the existing literature suggest that multisource feedback can objectively assess key competencies such as communication skills, interpersonal skills, collegiality, professional expertise, and the ability to progress in the medical field (21, 32, 33). Multisource feedback, however, has its own limitations. A number of studies have shown that responses tend to be skewed towards positive assessments of doctor performance (34–36). Others have expressed dissatisfaction with the ability of multisource feedback, patient feedback in particular, to identify underperforming doctors (37).

Acute Care Assessment Tool (ACAT)

ACAT assesses the performance of a trainee after a period of acute clinical duty, for instance a night shift in acute medicine.

Strengths and weaknesses of WPBA

A study of medical students indicated that WPBA was useful for increasing contact time with supervisors (38). In another study, students reported that allocating a tutor for WPBA was the most effective means of judging competence (39). Through WPBA, learners establish professional relationships with their tutors, who become authentic sources of effective feedback. The outstanding strength of WPBA is its formative potential. The essential impetus needed to achieve ‘assessment for learning’ is the provision of feedback by the assessor, enabling the trainee to steer his or her learning towards the intended learning achievements (40). Evidence-based research has shown that systematic feedback delivered by a credible source can enhance clinical performance (41). The most striking evidence of performance improvement through WPBA is derived from studies exploring the feasibility and validity of multisource feedback (5). Research from the psychology literature has shown that multisource feedback can lead to gradual enhancements in professional competence (42). Another study in medical education showed that doctors receiving specific feedback from peers, colleagues, and patients can use the data to implement modifications in their clinical practice (43). The direct observation of a trainee’s performance at the workplace is made useful only by the associated feedback, which triggers reflection (44). WPBA is also more feasible than conventional assessments because it is applied and conducted in the course of the day-to-day routine. Although WPBA demands prior faculty training, additional time, and student awareness and sensitisation, institutions do not need any dedicated infrastructure to establish this assessment strategy.

However, a survey reporting the views of 539 surgical trainees on the Intercollegiate Surgical Curriculum Programme (16) showed that 60% of respondents felt that the programme adversely affected training opportunities because of the long time required to complete the assessments. More than 90% stated that the programme had a neutral or negative impact on their training overall.

WPBA, per se, cannot replace traditional methods of assessment, but it has potential as an add-on method, especially for in-training or formative assessment. Trainees who perform well in initial encounters may become overconfident, which may be a hurdle in motivating them towards future improvement (44). Because WPBA demands a great deal of time, trainees tend to seek less senior assessors; there is evidence to suggest that more senior and expert staff may provide lower but more accurate ratings of performance (45). At the same time, WPBA requires assessor training, particularly in making objective evaluations and providing effective feedback. Sensitisation and introductory seminars about WPBA may be the first step in preparing staff for the appropriate use of this assessment tool.

Conclusion

WPBA involves the evaluation of doctors’ performance in their everyday activities and the provision of feedback. These assessment tools purport to provide valuable insight to trainees, assessors, and academics, and have been found to have some educational impact on learning. However, further evidence-based interventional and experimental studies are needed to firmly establish the educational impact of WPBA in medical education.

Acknowledgement

None.

Footnotes

Conflict of interests

None.

Funds

None.

References

1. Norcini JJ, McKinley DW. Assessment methods in medical education. Teach Teach Educ. 2007;23(3):239–250.
2. Swanwick T, Chana N. Workplace assessment for licensing in general practice. Br J Gen Pract. 2005;55(515):461–467.
3. Swanwick T, Chana N. Workplace-based assessment. Br J Hosp Med. 2009;70(5):290–293. doi:10.12968/hmed.2009.70.5.42235.
4. Nesbitt A, Baird F, Canning B, Griffin A, Sturrock A. Student perception of workplace-based assessment. Clin Teach. 2013;10(6):399–404. doi:10.1111/tct.12057.
5. Miller A, Archer J. Impact of workplace based assessment on doctors’ education and performance: a systematic review. BMJ. 2010;341:c5064. doi:10.1136/bmj.c5064.
6. Boursicot K, Etheridge L, Setna Z, Sturrock A, Ker J, Smee S, et al. Performance in assessment: consensus statement and recommendations from the Ottawa conference. Med Teach. 2011;33(5):370–383. doi:10.3109/0142159X.2011.565831.
7. Franche R-L, Cullen K, Clarke J, Irvin E, Sinclair S, Frank J. Workplace-based return-to-work interventions: a systematic review of the quantitative literature. J Occup Rehabil. 2005;15(4):607–631. doi:10.1007/s10926-005-8038-8.
8. Augustine K, McCoubrie P, Wilkinson J, McKnight L. Workplace-based assessment in radiology—where to now? Clin Radiol. 2010;65(4):325–332. doi:10.1016/j.crad.2009.
9. Hawkins RE, Margolis MJ, Durning SJ, Norcini JJ. Constructing a validity argument for the mini-clinical evaluation exercise: a review of the research. Acad Med. 2010;85(9):1453–1461. doi:10.1097/ACM.0b013e3181eac3e6.
10. Nair BR, Alexander HG, McGrath BP, Parvathy MS, Kilsby EC, Wenzel J, et al. The mini clinical evaluation exercise (mini-CEX) for assessing clinical performance of international medical graduates. Med J Aust. 2008;189(3):159–161. doi:10.5694/j.1326-5377.2008.tb01951.x.
11. Weller J, Jolly B, Misur M, Merry A, Jones A, Crossley JM, et al. Mini-clinical evaluation exercise in anaesthesia training. Br J Anaesth. 2009;102(5):633–641. doi:10.1093/bja/aep055.
12. Bindal N, Goodyear H, Bindal T, Wall D. DOPS assessment: a study to evaluate the experience and opinions of trainees and assessors. Med Teach. 2013;35(6):e1230–e1234. doi:10.3109/0142159X.2012.746447.
13. Pelgrim E, Kramer A, Mokkink H, Van den Elsen L, Grol R, Van der Vleuten C. In-training assessment using direct observation of single-patient encounters: a literature review. Adv Health Sci Educ. 2011;16(1):131–142. doi:10.1007/s10459-010-9235-6.
14. Wilkinson JR, Crossley JG, Wragg A, Mills P, Cowan G, Wade W. Implementing workplace-based assessment across the medical specialties in the United Kingdom. Med Educ. 2008;42(4):364–373. doi:10.1111/j.1365-2923.2008.03010.x.
15. Morris A, Hewitt J, Roberts C. Practical experience of using directly observed procedures, mini clinical evaluation examinations, and peer observation in pre-registration house officer (FY1) trainees. Postgrad Med J. 2006;82(966):285–288. doi:10.1136/pgmj.2005.040477.
16. Pereira EA, Dean BJ. British surgeons’ experiences of mandatory online workplace-based assessment. J R Soc Med. 2009;102(7):287–293. doi:10.1258/jrsm.2009.080398.
17. Brown N, Holsgrove G, Teeluckdharry S. Case-based discussion. Adv Psychiatr Treat. 2011;17(2):85–90.
18. Guraya S, Alzobydi A, Salman S. Objective structured clinical examination: examiners’ bias and recommendations to improve its reliability. J Med Med Sci. 2010;1:269–272.
19. Jennett PA, Scott SM, Atkinson MA, Crutcher RA, Hogan DB, Elford RW, et al. Patient charts and physician office management decisions: chart audit and chart stimulated recall. J Contin Educ Health Prof. 1995;15(1):31–39.
20. Norman GR, Davis DA, Lamb S, Hanna E, Caulford P, Kaigas T. Competency assessment of primary care physicians as part of a peer review program. JAMA. 1993;270(9):1046–1051. doi:10.1001/jama.1993.03510090030007.
21. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226–235. doi:10.1001/jama.287.2.226.
22. Burkill G. Work-based assessment for trainees—more than just a few new tools? Clin Radiol. 2008;63(1):12–14. doi:10.1016/j.crad.2007.07.014.
23. Archer J, Norcini J, Southgate L, Heard S, Davies H. mini-PAT (Peer Assessment Tool): a valid component of a national assessment programme in the UK? Adv Health Sci Educ Theory Pract. 2008;13(2):181–192. doi:10.1007/s10459-006-9033-3.
24. Wall D, Singh D, Whitehouse A, Hassell A, Howes J. Self-assessment by trainees using self-TAB as part of the team assessment of behaviour multisource feedback tool. Med Teach. 2012;34(2):165–167. doi:10.3109/0142159X.2012.644840.
25. Bullock AD, Hassell A, Markham WA, Wall DW, Whitehouse AB. How ratings vary by staff group in multi-source feedback assessment of junior doctors. Med Educ. 2009;43(6):516–520. doi:10.1111/j.1365-2923.2009.03333.x.
26. Campbell J, Richards S, Dickens A, Greco M, Narayanan A, Brearley S. Assessing the professional performance of UK doctors: an evaluation of the utility of the General Medical Council patient and colleague questionnaires. Qual Saf Health Care. 2008;17(3):187–193. doi:10.1136/qshc.2007.024679.
27. Wright C, Richards SH, Hill JJ, Roberts MJ, Norman GR, Greco M, et al. Multisource feedback in evaluating the performance of doctors: the example of the UK General Medical Council patient and colleague questionnaires. Acad Med. 2012;87(12):1668–1678. doi:10.1097/ACM.0b013e3182724cc0.
28. Woods SE, Bivins R, Oteng K, Engel A. The influence of ethnicity on patient satisfaction. Ethn Health. 2005;10(3):235–242. doi:10.1080/13557850500086721.
29. Campbell JL, Roberts M, Wright C, Hill J, Greco M, Taylor M, et al. Factors associated with variability in the assessment of UK doctors’ professionalism: analysis of survey results. BMJ. 2011;343:d6212. doi:10.1136/bmj.d6212.
30. Lipner RS, Blank LL, Leas BF, Fortna GS. The value of patient and peer ratings in recertification. Acad Med. 2002;77(10 Suppl):S64–S66. doi:10.1097/00001888-200210001-00021.
31. Mackillop LH, Crossley J, Vivekananda-Schmidt P, Wade W, Armitage M. A single generic multi-source feedback tool for revalidation of all UK career-grade doctors: does one size fit all? Med Teach. 2011;33(2):e75–e83. doi:10.3109/0142159X.2010.535870.
32. Violato C, Lockyer J, Fidler H. Multisource feedback: a method of assessing surgical practice. BMJ. 2003;326(7388):546–548. doi:10.1136/bmj.326.7388.546.
33. Brown C. Multi-source feedback. In: Malik A, Bhugra D, Brittlebank A, editors. Workplace-Based Assessments in Psychiatry. London: The Royal College of Psychiatrists; 2011. pp. 68–75.
34. Archer J, McGraw M, Davies H. Assuring validity of multisource feedback in a national programme. Arch Dis Child. 2010;95(5):330–335. doi:10.1136/adc.2008.146209.
35. Campbell J, Narayanan A, Burford B, Greco M. Validation of a multi-source feedback tool for use in general practice. Educ Prim Care. 2010;21(3):165–179. doi:10.1080/14739879.2010.11493902.
36. Richards SH, Campbell JL, Walshaw E, Dickens A, Greco M. A multi-method analysis of free-text comments from the UK General Medical Council Colleague Questionnaires. Med Educ. 2009;43(8):757–766. doi:10.1111/j.1365-2923.2009.03416.x.
37. Archer JC, McAvoy P. Factors that might undermine the validity of patient and multi-source feedback. Med Educ. 2011;45(9):886–893. doi:10.1111/j.1365-2923.2011.04023.x.
38. Sabey A, Harris M. Training in hospitals: what do GP specialist trainees think of workplace-based assessments? Educ Prim Care. 2011;22(2):90–99. doi:10.1080/14739879.2011.11493974.
39. Archer JC. State of the science in health professional education: effective feedback. Med Educ. 2010;44(1):101–108. doi:10.1111/j.1365-2923.2009.03546.x.
40. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2007;29(9–10):855–871. doi:10.1080/01421590701775453.
41. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME Guide No. 7. Med Teach. 2006;28(2):117–128. doi:10.1080/01421590600622665.
42. Smither JW, London M, Reilly RR. Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Pers Psychol. 2005;58(1):33–66. doi:10.1111/j.1744-6570.2005.514_1.x.
43. Fidler H, Lockyer JM, Toews J, Violato C. Changing physicians’ practices: the effect of individual feedback. Acad Med. 1999;74(6):702–714. doi:10.1097/00001888-199906000-00019.
44. Singh T, Modi JN. Workplace-based assessment: a step to promote competency based postgraduate training. Indian Pediatr. 2013;50(6):553–559. doi:10.1007/s13312-013-0164-3.
45. Hill F, Kendall K, Galbraith K, Crossley J. Implementing the undergraduate mini-CEX: a tailored approach at Southampton University. Med Educ. 2009;43(4):326–334. doi:10.1111/j.1365-2923.2008.03275.x.
