Abstract
With the redesign of general practice training implicit in the Department of Health's programme of reform, Modernising Medical Careers, there is the opportunity to bring summative assessment and the MRCGP examination together into a unified assessment framework for licensing. It is likely that assessment in the workplace will play a central role in such a process. Workplace assessment is of high validity and has the potential to reconnect teaching and testing in general practice. Five principles to underpin the design of a workplace assessment are proposed, namely that it should be: competency-based, developmental, evidential, locally assessed, and triangulated. Successful implementation of workplace assessment will not only serve to reduce the current testing burden on trainees, but will also harness the involvement of medical teachers. In doing so, general practice has the opportunity to create a powerful tool for professional development.
Keywords: competency-based education, educational assessment, professional education
INTRODUCTION
The assessment of the competence of doctors by examinations is inherently problematic. There are real differences between what doctors do in controlled assessment situations and their actual performance in professional practice,1 and degrees of correlation between the two have been shown to be extremely variable.2 Competence is a prerequisite for performance, but no matter whether the assessment takes place in a controlled or uncontrolled setting, competence can only ever really be inferred from performance. In other words, competence indicates what people can do in a contextual vacuum, under perfect conditions, whereas performance is about how people behave in real life, on a day-to-day basis.2
Recent policy developments have led to a fundamental review of the assessment of doctors in training, with a growing emphasis on competency-based training and assessment. Modernising Medical Careers3 recommends that all examinations of medical training now be competency-based and premised on the General Medical Council's (GMC) document Good Medical Practice.4 Overseeing the implementation of this policy is the Postgraduate Medical Education and Training Board (PMETB), which is shortly to assume the functions of the Joint Committee of Postgraduate Training for General Practice and the Specialist Training Authority. PMETB has issued guidance for the development of trainee assessment in all specialties. In its published principles and standards for an assessment system for postgraduate medical training,5 PMETB defines a satisfactory assessment system as:
‘… an integrated set of assessments which is in place for the entire postgraduate training programme, and which supports the curriculum. It may comprise different methods, and be implemented either as national examinations, or as assessments in the workplace. The balance between these two approaches principally relates to the relationship between competence and performance. Competence (can do) is necessary but not sufficient for performance (does do), and as experience increases so performance-based assessment becomes more important.’5
It is clear from statements issued by the Department of Health and PMETB that the Royal Colleges will play a central role in defining the requirements of postgraduate training. In keeping, therefore, with the other medical colleges, the Royal College of General Practitioners (RCGP) set about developing a curriculum for general practice training that is expected to be delivered in 2005. In many ways, the RCGP was ahead of the game and a comprehensive syllabus for the existing MRCGP examination was already in existence, having been drawn up after widespread consultation with all stakeholders in 2002.6 The MRCGP syllabus, Good Medical Practice, and the emergent curriculum statements may usefully be brought together in an assessment matrix — a framework from which to derive competences for general practice and to determine how best these might be tested. Good Medical Practice also forms the basis of evidence required to be collected for the NHS Appraisal for Doctors in Training framework.7 In order to reduce the assessment and evidential burden on trainees, any proposed system of assessment should also satisfy the data collection requirements of the NHS Appraisal framework.
How this fits in
The assessment of doctors training for general practice should be reviewed in the light of growing evidence from the educational and assessment literature. Workplace assessment offers ‘authenticity’, focusing on what doctors actually do in the workplace, and should therefore form an integral part of any framework for assessing doctors in training. The principles underpinning workplace assessment are that it should be: competency-based, developmental, evidential, locally assessed, and triangulated with other assessments.
From both a political and an educational perspective, the time would then appear to be right to rethink the assessment of doctors in training for general practice. In this article we argue for a central role for workplace assessment as part of a global licensing framework for doctors in training. Furthermore, we wish to establish a number of key features that we believe should underpin the development of such an assessment in the workplace.
WORK-BASED OR WORKPLACE?
Work-based assessment can be considered in a number of ways and may include assessments of work undertaken ‘off-site’, such as the current MRCGP video module, surrogate tests of performance rooted in work, simulated surgeries, and assessments of outcomes, process of care and practice volume that are undertaken in the working environment.8 A more specific term, which will be used throughout this paper, is ‘workplace assessment’, that is, assessment of working practices undertaken in the working environment.
WHY USE A WORKPLACE ASSESSMENT?
There are two main arguments for including workplace assessment in a licensing examination for general practice. The first of these is educational and concerns the re-coupling of teaching and testing. Assessment should be an integral part of the curriculum, not a ‘bolt-on’ extra at the end. Trainees (and indeed trainers) should know exactly what is expected of them, and have the opportunity to demonstrate attainment over time and in a variety of contexts.9
A second argument, well supported by a wealth of research material, suggests that an assessment is more valid the closer it gets to the activity one wishes to assess.10,11 If, for example, you want to know how a doctor consults, watch him do it. In the lexicon of test design, this is called ‘authenticity’. Authenticity is particularly important when dealing with the assessment of competence because expertise appears to be domain-specific and contextual.12 That is, good communication skills in the consulting room may not be transferable to, say, an oral examination. Furthermore, some competency areas such as probity, availability and professionalism, simply cannot be disentangled from system (for example, practice facilities) or personal influences (for example, health). As such, assessment of performance, as opposed to competence, provides us with the only route into many of the areas that we might wish to assess. Context, therefore, is highly significant in both the development and expression of competence and, as such, has an important part to play in both learning and assessment.
One way to increase the authenticity of an assessment method is to base it on the simulation of reality, which enhances reliability through standardisation. But this ‘preoccupation with objective testing encourages the substitution of surrogate or indirect measures for the real thing’13 and has dominated thinking in medical education, particularly in relation to high stakes assessment. Interestingly, as Schuwirth14 highlights, test development is coming full circle from patient-based examinations through simulations back to observations of real practice.
LESSONS FROM EXISTING ASSESSMENT METHODS
Summative assessment in the UK already includes a workplace assessment in the form of the Structured Trainer's Report; it could, however, be argued that this is not an assessment tool as such, but rather a portfolio of smaller assessment units. The Structured Trainer's Report has stood the profession in good stead but, as it stands, it raises a number of issues:
▸ The Report specifies failing criteria but provides no developmental markers as to how the learner is progressing. As such, its educational impact is limited.
▸ The content of the Report includes a large section on basic clinical skills, which lend themselves to more rigorous testing in a standardised way, as is done when a doctor's licence is brought into question, such as in the GMC's performance procedures.15 It could also be argued that such skills should be tested before entering basic specialist training rather than at the end of it.
▸ It is rare to fail summative assessment on the Trainer's Report alone16 and, when this does occur, supporting documentation is usually required from the trainer indicating that there is information about the registrar's inadequacies that is not captured in the Report.
▸ Despite recommendations for its use as an open document, anecdotal reports from directors of general practice education suggest that the Trainer's Report often remains untouched until the last few weeks of training.
▸ The Structured Trainer's Report is ‘moderated’ in theory, as it has to be approved and countersigned by a director of postgraduate general practice. This, however, is rarely more than a rubber-stamping exercise and, unlike the other components of summative assessment, there are no national attempts at calibration or standardisation.
OUTLINE PROPOSAL
Our proposal is that any national workplace assessment for licensing should have a number of key features (Box 1). Furthermore, the development of the assessment will need to take into account validity, reliability, feasibility, cost and educational impact. These issues are given individual consideration in the next section of this paper.
Box 1. Key features of the national workplace assessment for licensing.
▸ Competency-based
▸ Developmental
▸ Evidential
▸ Locally assessed
▸ Triangulated
Competency-based
Competency-based training and education has acquired a bad name. Criticisms of the competency movement abound,12,17-19 accusing it of being overly simplistic, atomistic and reductionist. In spite of this, ‘in graduate medical training in the UK a competency model is being promoted with an almost messianic fervour’.20 There is general confusion between competence and competency, and idiosyncratically constructed competency frameworks are springing up everywhere.21-23 Despite these criticisms, Gonczi12 argues that a competency-based approach to education and training can be made to work and is as applicable to the professions as to any other occupation. He cautioned, however, that there are a number of ways of conceptualising competence and that ‘if the inappropriate way is adopted, not only will this potential fail to be realised but serious damage will be done to skill formation policies in the medium term’. Gonczi's three conceptualisations of competency can be summarised as:
task-based
generic attributes
holistically defined in context.
The task-based, or behaviourist, competency model is the one usually adopted when competency-based training and assessment is being proposed. As Rethans states, ‘competency-based assessment measures what doctors can do in controlled representations of professional practice’.1 This brand of competency-based training and assessment focuses on discrete behaviours associated with the completion of atomised tasks. Indeed, this is its appeal: the model is both simple and clear. However, such an approach to education, training, and assessment is generally agreed to be conservative and reductionist, ignoring as it does underlying attributes, group processes, context, the complexity of performance, and the role of professional judgement. Gonczi sums up the view of many other authors before and since in declaring that such a model is ‘clearly inappropriate for conceptualising professional work’.12 Leung and Diwakar, in a recent critique of competency-based training, concur, warning that ‘we should be cautious of applying the competency-based approach universally unless robust higher order competencies are available’.19
An alternative approach — that of treating competencies as general personal attributes — is popular in the management field and can be found, for example, in the leadership literature, such as that of Goleman.24 We know, however, that expertise is context specific and general attributes may not be applicable in certain circumstances. General attributes are therefore unhelpful in the design of training programmes or for the purposes of wider accountability.
Modernising Medical Careers3 recommends that all postgraduate programmes should be ‘competency-based’. Modernising Medical Careers: The Next Steps25 — although ostensibly focusing on the Foundation programme — softens this considerably:
‘MMC[Modernising Medical Careers] signalled a move to competency-based training throughout the medical continuum, which will be reflected in Foundation programmes. Evolving thinking takes this a step further and suggests progress should be outcome-based — that is, not just acquisition of competencies but a demonstration that they can be applied in real situations’.26
This later statement fits with the idea of competencies as general attributes within a context, incorporating both understanding and judgement: that is, competence ‘as a complex structuring of attributes needed for intelligent performance in specific situations’.12 We should also note here the shift in discourse as the concept of ‘intelligent performance’ comes from an integrated approach to constructing competencies and a move towards the more holistic construct of competence based on outcomes defined by a national curriculum.
Good Medical Practice for General Practitioners27 provides us with units of competence, defining the characteristics of the excellent (and unacceptable) GP. From these, meaningful sub-divisions or ‘elements’ can be defined. Performance criteria can then be developed, describing the sort of behaviours that might lead one to infer that competence has been obtained. It will, of course, be essential that the workplace assessment ultimately fits with the RCGP's curriculum which, interestingly, takes the agreed European definition of family practice28 as its point of departure.
Developmental
The Structured Trainer's Report is a portfolio of assessments. Although there is little evidence to support the use of portfolio assessments for summative assessment in medical education,29 they have been widely used in other fields and in other countries.30 Any new workplace assessment for licensing offers the opportunity to link training and assessment more effectively; this proposal recommends the development and testing of a ‘developmental assessment’, that is, a ‘process of monitoring students' progress through an area of learning so that decisions can be made about the best ways to facilitate future learning’.31 This provides the opportunity for ‘scaffolding’,32 a process of support and guidance that can be offered to enable the trainee to achieve at a potentially higher level.
This may appear to mix formative and summative assessment but, as Sadler33 points out, the methods are often the same, the only difference being the timing of application. Successful examples exist: the Royal Australian College of General Practitioners recently considered in-training assessment (ITA) as part of its training programme, and Hays and Wellard34 described the possibility of introducing some summative assessment tasks during training in a way that complements rather than interferes with formative assessment processes. Prescott et al35 also described a formative assessment tool for dental trainees in which information gathered formatively is used qualitatively rather than quantitatively towards a summative decision. The aim of any workplace assessment must be explicit,36 but the use of a developmental portfolio — where students are guided through progressions of well-described criteria towards specified goals — informed by a predefined ‘sufficiency of evidence’ may be one way of reconciling formative with summative purposes.
Miller37 provided us with a simple, and now familiar, model for the development of clinical competence (Figure 1) and it should be possible to build such a progression into a workplace assessment, defining performance criteria at each level of Miller's pyramid. There are, of course, other such developmental progressions in the literature, such as that described by Dreyfus and Dreyfus.38 Interestingly, the latter has already been adopted for use in GP training in a national out-of-hours training workbook.39 By drawing up a continuum of competence, an explicit structure for development is provided, competencies can be demonstrated ‘when ready’, and weaknesses exposed can be worked on early in training.
Figure 1. Miller's Pyramid.37
Evidential
A national workplace assessment for licensing would be dependent on the collection of evidence. Public sector workplace assessments in Australia40 point the way. A national body would be responsible for developing and maintaining the competency framework for general practice training and also issue guidance on what would constitute ‘sufficiency of evidence’ or ‘standards of proof’. This body could also develop, or be the repository for, a range of approved in-service assessment tools. A range of modes of workplace assessments (such as observation, 360-degree appraisal, case analysis) would be encouraged.
An overall holistic assessment of the evidence presented, perhaps by an external examiner, needs also to be considered and might serve to aid comparability across sites. In the UK, external moderation could be carried out by a trained national panel of assessors.
In their analysis of the relationship between summative and formative assessment, Wiliam and Black41 recommended ‘separating the interpretation of evidence from its elicitation, and the consequent actions from the interpretations’. This supports the concept of ongoing evidence collection throughout the training period but with regular, well circumscribed staging reviews at which the developmental framework is reviewed and the learner's progress judged.
Locally assessed
Workplace assessment for licensing would be carried out at local level, predominantly by the GP trainer. This raises the thorny issue of reliability. As Southgate15 points out, ‘establishing the reliability of assessments of performance in the workplace is difficult because they rely on expert judgements of unstandardised material’. In workplace assessment, as with any other form of assessment, there are several potential threats to reliability:42 inter-observer variation (the tendency for one observer to mark consistently higher or lower than another), intra-observer variation (the variation in an observer's performance for no apparent reason — the ‘good/bad day’ phenomenon) and case specificity (the variation in the candidate's performance from one challenge to another, even when each challenge appears to test the same attribute).
Despite these challenges we should not eschew a workplace assessment as, although reliability in performance assessments can be elusive, it can also be maximised through a series of measures outlined by Baker et al:43
specification — of standards, criteria, and scoring guides
calibration — of assessors and moderators
moderation — of results, particularly those on the borderline
training — of assessors, with retraining where necessary
verification and audit — through quality assurance measures and the collection of reliability data.
A similar list in relation to portfolio assessment is suggested by Klenowski.30
Reliability will always be a concern in workplace assessment,44 but it can be enhanced. Gipps45 took this further in arguing for a new paradigm of ‘educational assessment’ and suggested dropping the term ‘reliability’ completely when considering performance assessments and replacing it with ‘comparability’ based on ‘consistency’. Assessment is not an exact science and, despite the inherent challenges, workplace assessment is a paradigm worth pursuing. No other form of testing is as direct, relevant and capable of measuring holistically the higher integrative functions that make up professional competence.
Clearly, the introduction of a national workplace assessment will require a complementary training programme, arrangements for calibration, a procedure for the moderation of results and a raft of quality control and reliability checks. But it will be worth the effort. The more that teachers can be engaged in assessment — in selecting methodologies, generating standards and discussing criteria — the more they will be empowered in the educative process. To ignore this professional need, and to impose another round of externally generated assessments, will serve to disaffect further the teaching community. Of course, this assumes that GP teachers will want to be involved in assessment at all; if workplace assessment is to succeed in its purpose, there are some complex issues to work through concerning the professionalisation of our teachers and the impact of bringing assessment closer to home on their relationship with their learners.
Triangulated
Through triangulation within workplace assessment we can be more confident of the veracity, fairness and reproducibility of our final judgement. Triangulation, over both time and context, is important because of the variability of the situations and contexts in which learners will find themselves. In this way, performance assessment moves us from the quantitative paradigm of psychometrics to the rich description of qualitative research. In view of the complexity of professional performance, there is also the need to triangulate what is found in the workplace with other assessments. This approach is wholeheartedly endorsed by social science researchers and the assessment literature, the latter arguing for a realistic mixture of decontextualised structured exercises and contextualised performance tasks.46 Again, the GMC's performance procedures provide a model whereby workplace assessments are supported by ‘phase two’ tests of ‘knowledge’ and ‘skills for clinical method’.15
In view of the need for triangulation, it is vital that licensing assessments are coordinated and that a national body maintains an overview. Such a body needs both to consider the emergent assessments of the Foundation Programme of Modernising Medical Careers,3 and to ensure that all domains within the competency matrix are assessed, that they are assessed adequately, and that they are assessed at the appropriate stage of training.
DISCUSSION
The implementation of a workplace assessment for GP licensing, although politically driven, is underpinned by a sound rationale supported by the theoretical literature. Assessments conducted in the workplace are of high validity and serve to reconnect teaching and assessment. A competency-based model accords with the overall contemporary emphasis of medical assessment; caution, however, is advised lest defined competencies become over-atomised. In order to enhance educational impact, the use of holistic competencies within a developmental continuum is recommended. Such a continuum has the advantage of illustrating explicitly the direction of travel for trainees rather than merely pointing out the level below which they should not fall. In this we are entirely in agreement with Eraut:47
‘At the very least, we recommend that assessment systems should be capable of recording achievement beyond competence … Such systems should also be coherent, though not necessarily identical with those being developed for recording continuing professional development’.
To strengthen further the link between teaching and assessment, and to deal with the practical expediencies of large-scale implementation, a workplace assessment should be locally assessed and based on the collection of evidence. A sufficiency of evidence would be predefined and triangulation built in as an essential feature in order to enhance the reliability of judgements made.
Clearly, there is much to be done in the development of a workplace assessment for licensing to create a vehicle for assessment that is robust, fair, comparable and consistent; further research in this area is urgently required. To get it right will not only reduce the current assessment burden on trainees, but also harness the involvement of medical teachers. In doing so, we have the opportunity to create a powerful tool for professional development.
Acknowledgments
With thanks to David Sales, Amar Rughani, Julian Page, Chris Robinson, Alison Evans, Sathyia Naidoo, and Tim Norfolk for their helpful comments on earlier drafts of this article.
Competing interests
None
REFERENCES
- 1. Rethans JJ, Norcini JJ, Baron-Maldonado M, et al. The relationship between competence and performance: implications for assessing practice performance. Med Educ. 2002;36(10):901–909. doi: 10.1046/j.1365-2923.2002.01316.x.
- 2. Schuwirth LW, Southgate L, Page GG, et al. When enough is enough: a conceptual basis for fair and defensible practice performance assessment. Med Educ. 2002;36(10):925–930. doi: 10.1046/j.1365-2923.2002.01313.x.
- 3. Department of Health. Modernising Medical Careers: the response of the four UK Health Ministers to the consultation on ‘Unfinished business – proposals for reform of the senior house officer grade’. London: Department of Health; 2003.
- 4. General Medical Council. Good medical practice. London: GMC; 2001.
- 5. Postgraduate Medical Education and Training Board. Principles for an assessment system for postgraduate medical training. London: Postgraduate Medical Education and Training Board; 2004. http://www.pmetb.org.uk/pmetb/publications/principles.pdf (accessed 5 May 2005)
- 6. Royal College of General Practitioners. A syllabus for the MRCGP examination. http://www.rcgp.org.uk/exam/syllabus.asp (accessed 5 May 2005)
- 7. Department of Health. Appraisal for doctors in training. London: Department of Health; 2003.
- 8. Norcini JJ. Work based assessment. BMJ. 2003;326:753–755. doi: 10.1136/bmj.326.7392.753.
- 9. Assessment Reform Group. Assessment for learning: beyond the black box. Cambridge: University of Cambridge School of Education; 1999.
- 10. Wiggins G. A true test: toward more authentic and equitable assessment. Phi Delta Kappan. 1989;70(9):703–713.
- 11. Wiggins G. Assessment, authenticity, context and validity. Phi Delta Kappan. 1993;75(3):200–214.
- 12. Gonczi A. Competency based assessment in the professions in Australia. Assessment in Education. 1994;1(1):27–44.
- 13. Sadler R. Specifying and promulgating achievement standards. Oxford Review of Education. 1987;13:191–209.
- 14. Schuwirth LW, van der Vleuten CP. The use of clinical simulations in assessment. Med Educ. 2003;37(Suppl 1):65–71. doi: 10.1046/j.1365-2923.37.s1.8.x.
- 15. Southgate L, Cox J, David T, et al. The General Medical Council's performance procedures: peer review of performance in the workplace. Med Educ. 2001;35(Suppl 1):9–19. doi: 10.1046/j.1365-2923.2001.0350s1009.x.
- 16. National Office for Summative Assessment. Summative assessment results April 1 2001–March 31 2002. London: National Office for Summative Assessment; 2002.
- 17. Hyland T. Behaviourism and the meaning of competence. In: Hodkinson P, Issitt M, editors. The challenge of competence. London: Cassell; 1995.
- 18. Wolf A. Theoretical issues in a criterion-based system. In: Wolf A, editor. Competence-based assessment. Buckingham: Open University Press; 1995. pp. 53–78.
- 19. Leung W-C, Diwakar V. Competency based medical training: review. Commentary: the baby is thrown out with the bathwater. BMJ. 2002;325:693–696.
- 20. Talbot M. Monkey see, monkey do: a critique of the competency model in graduate medical education. Med Educ. 2004;38(6):587–592. doi: 10.1046/j.1365-2923.2004.01794.x.
- 21. Joint Committee on Postgraduate Training for General Practice. The SHO report and attribute guide. London: Joint Committee on Postgraduate Training for General Practice; 2003. http://www.jcptgp.org.uk/certification/sho-review.asp (accessed 26 Apr 2005)
- 22. Academy of Royal Medical Colleges F2 Sub-committee. Core competencies for the second foundation year. London: Academy of Royal Medical Colleges; 2004.
- 23. Committee of General Practice Education Directors. Out-of-hours workbook. London: Committee of General Practice Education Directors; 2004.
- 24. Goleman D. Leadership that gets results. Harv Bus Rev. 2000;78(2):78–90.
- 25. Department of Health. Modernising medical careers: the next steps – the future shape of foundation, specialist and general practice training programmes. London: Department of Health; 2004.
- 26. Department of Health. Annex 2: A firm foundation — standards for foundation training. In: Department of Health, editor. Modernising medical careers: the next steps – the future shape of foundation, specialist and general practice training programmes. London: Department of Health; 2004.
- 27. General Practitioners Committee of the British Medical Association and the Royal College of General Practitioners. Good medical practice for general practitioners. London: Royal College of General Practitioners; 2002. www.rcgp.org.uk/corporate/position/good_med_prac/GMP06.pdf (accessed 26 Apr 2005)
- 28. WONCA. European definition of general practice/family medicine. 2002. http://euract.org/html/page03a.shtml (accessed 27 Apr 2005)
- 29. Roberts C, Newble DI, O'Rourke AJ. Portfolio-based assessments in medical education: are they valid and reliable for summative purposes? Med Educ. 2002;36(10):899–900. doi: 10.1046/j.1365-2923.2002.01288.x.
- 30. Klenowski V. Developing portfolios for learning and assessment. London: Falmer; 2002.
- 31. Masters G. Developmental assessment: what, why, how? Conference on Advances of Student Learning, Chinese University of Hong Kong; 1997.
- 32. Palincsar AS, Brown AL. Reciprocal teaching of comprehension fostering and comprehension monitoring activities. Cognition and Instruction. 1984;2:117–175.
- 33. Sadler R. Formative assessment and the design of instructional systems. Instructional Science. 1989;18(2):119–144.
- 34. Hays R, Wellard R. In-training assessment in postgraduate training for general practice. Med Educ. 1998;32(5):507–513. doi: 10.1046/j.1365-2923.1998.00231.x.
- 35. Prescott LE, Norcini JJ, McKinlay P, Rennie JS. Facing the challenges of competency-based assessment of postgraduate dental training: longitudinal evaluation of performance (LEP). Med Educ. 2002;36(1):92–97. doi: 10.1046/j.1365-2923.2002.01099.x.
- 36. Swanwick T. Work based assessment in general practice: three dimensions and three challenges. Work Based Learning in Primary Care. 2003;1(2):99–108.
- 37. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):63–67. doi: 10.1097/00001888-199009000-00045.
- 38. Dreyfus H, Dreyfus S. Mind over machine: the power of human intuition and expertise in the era of the computer. Oxford: Basil Blackwell; 1986.
- 39. Committee of General Practice Education Directors. Workbook for out of hours (OOH) training for GP registrars. London: Committee of General Practice Education Directors; 2004.
- 40. Education Department of Western Australia. Preparing a teaching portfolio: guidelines for applicants. Perth: Education Department of Western Australia; 1997.
- 41. Wiliam D, Black P. Meanings and consequences: a basis for distinguishing formative and summative functions of assessment? British Educational Research Journal. 1996;22(5):537–548.
- 42. Crossley J, Davies H, Humphris G, Jolly B. Generalisability: a key to unlock professional assessment. Med Educ. 2002;36(10):972–978. doi: 10.1046/j.1365-2923.2002.01320.x.
- 43. Baker E, O'Neil H, Linn R. Policy and validity prospects for performance-based assessment. Am Psychol. 1993;48(12):1210–1218.
- 44. Wass V, O'Neill P. What the educators are saying. BMJ. 2004;328:210.
- 45. Gipps C. Beyond testing. London: Falmer Press; 1994.
- 46. Messick S. The interplay of evidence and consequences in the validation of performance assessments. Research report. Princeton, NJ: Educational Testing Service; 1992.
- 47. Eraut M. Developing professional knowledge and competence. London: Falmer Press; 1994.
