David Bruce, Katie Phillips, Ross Reid, David Snadden, Ronald Harden
NHS Education for Scotland, Tayside Centre for General Practice, Dundee DD2 4AD
David Bruce
director of postgraduate general practice education
Katie Phillips
project officer
Ross Reid
associate adviser
Northern Medical Program, Universities of Northern British Columbia and British Columbia, Prince George, BC, Canada V2N 4Z9
David Snadden
professor
Ronald Harden
professor of medical education
Correspondence to: D Bruce d.bruce@tcgp.dundee.ac.uk
(Accepted 16 January 2004)
Abstract
Objectives To develop two models of revalidation for clinical general practice:
a minimum criterion based model with revalidation as the primary purpose
an educational outcome model with emphasis on combining revalidation with continuing professional development (CPD)
Setting Tayside, Scotland
Participants Development of models—45 GPs and stakeholder groups (representatives from the RCGP, LHCC, Tayside Local Health Council, non-principals group, secondary care)
Implementation of models—66 Tayside GPs (principals and non-principals)
Methodology Models were developed over six months at four half day workshops, using a consensus methodology, the nominal group technique. Both models were blueprinted to Good Medical Practice. GPs were randomised to either model at the implementation phase.
Outcome Two models for revalidation of general practitioners were developed, with content and face validity. The criterion model was used as the basis of the Scottish revalidation folder.
Introduction
Revalidation for all doctors working in the United Kingdom will be introduced in spring 2005.1 In Tayside, Scotland, between September 2000 and January 2003, 61 general practitioners (GPs) and key stakeholders developed, implemented, and evaluated two models or approaches to revalidation.
When this study began revalidation was proposed as a five yearly assessment of a doctor’s practice,2 demonstrated against the principles of Good Medical Practice.3 It was proposed that revalidation should be linked with annual appraisal, which is a formative process, but that a five yearly assessment would also take place by a local revalidation group. This group comprised a doctor with personal knowledge of the doctor’s practice, a registered doctor who does not know the doctor, and a lay person. Using this proposed methodology, revalidation would become a five yearly summative assessment of a doctor’s practice. Revalidation would therefore achieve the triple aims of:
Secondly, revalidation requires doctors to provide information about their performance over a five year period. A folder or portfolio will be required. For general practice there is a need to define what information would be required to build a robust profile and whether clear criteria and standards can be specified. Sampling of doctors’ folders by either the GMC or an external agency will probably be part of a quality assurance mechanism. Even though the detection of poor performance is no longer a declared objective of revalidation, there remains tension around the formative and summative aims of requiring doctors to demonstrate fitness to practise. As presently conceived, summative judgments will have to be made on the quality of each doctor’s folder that is sampled: is it good enough to demonstrate fitness to practise? If a folder sampled by the GMC is judged "inadequate" then the doctor will be referred to the appropriate fitness to practise procedure. This raises questions as to the validity and reliability of making such judgments on a folder of evidence.6 7 Tensions may arise in any system that strives to promote the continuing development of the majority of competent doctors, while also acting as a sieve to detect those whose performance is a cause for concern.8 9 There may be considerable difficulties in devising a system that achieves both.10
Against this background, and in order to illuminate the issues around the introduction of revalidation to general practice, we developed, piloted, and evaluated two models for revalidation of general practitioners: a criterion model, with revalidation as the primary purpose, and an educational outcome model, combining revalidation with continuing professional development (CPD).
Methods
This study was undertaken between September 2000 and January 2003. The project timeline and phases of the study are shown in appendix 1.
Participants
All GPs, principals and non-principals, registered on the databases of Tayside Primary Health Care Trust, the local faculty of the Royal College of General Practitioners (RCGP), and the GP Postgraduate Unit (340 GPs) were invited by letter to take part in the two year study. Evening meetings were set up to explain the pilot and register interest: 72 GPs attended the meetings and 45 volunteered to take part. Key stakeholder organisations (see box) were asked to participate in development of the models. Patients’ representation was provided by the Local Health Council, which was also asked to give feedback on the completed models and take part in the assessment of doctors’ folders.
Key stakeholders
Patients’ representative—Tayside Health Council
Secondary care
Local Health Care Co-operative (LHCC) representative
RCGP representative
Non-principals’ representative
Development of models
GPs were asked to indicate a preference for which model they would like to develop; those without a preference were randomised. Five of the volunteering GPs who were also members of the stakeholder groups were asked to represent these groups. Thus each model was developed by two groups comprising:
Criterion model
This model was based on fitness to practise standards made explicit in the document Good Medical Practice for General Practitioners.11 For GPs the "attributes of an unacceptable GP" were taken as the level above which GPs must perform. The original version of this document was modified by grouping the "unacceptable attributes" into clinical headings and mapping these back to the seven headings of Good Medical Practice. Using a methodology suggested by the RCGP,12 the grouped attributes were distilled into broad criterion statements. For each criterion statement, choices of evidence were suggested and standards set (appendix 5, example page of criterion model).
Educational outcome model
Based on the Dundee outcome model,13 the 12 outcomes determined for medical practice were modified to reflect the specific tasks and competencies of general practice. Conceptually this model looks at the tasks that a doctor does (technical intelligences), the deeper understanding needed for those tasks (intellectual intelligences), and the professionalism of the doctor (personal intelligences).14 15 Within each outcome, broad statements of required practices were specified as "givens," areas of practice that were changing were mapped as "trends," and unacceptable practices were specified as "red flags" (appendix 6, key features of outcome model).
The basic infrastructure of each of the models was presented to the groups. The tasks of the development groups were therefore to modify the structure of the models if they considered this necessary and to decide the content within each model. Both development groups indicated that they did not wish to make alterations to the structure of the models.
To gain agreement between participants of each group, a consensus method, the nominal group technique, was used.16 This method allowed all participants to create and prioritise the information that doctors would need to collect and to define the criteria and standards without undue influence from other group members.17 It is also recognised to be an efficient use of time.18 During workshops participants brainstormed ideas on content, criterion statements, and the choices of information to be collected, and voted on which should be included in the models. Because of time constraints some voting slips had to be sent to participants between the workshops and were collated and agreed at the next workshop.
The content of both models was completed over two half day workshops, with postal voting and reading taking place between workshops.
Standard setting
The third and fourth half day workshops were taken up with standard setting. Participants were given further reading (included in appendix 2), and a presentation was given on the principles of standard setting. For each piece of evidence the groups were asked to consider exactly what should be included and the pass point.19 As many of the pieces of information to be collected were common to both models, an "Evidence and standards" booklet was compiled and agreed between the two development groups (appendix 3). Notably, for many of the evidence choices (choices of information to be collected) standards could not be set, and the pass point became presentation of the evidence in the agreed format.
Both groups therefore developed structured models that included a list of information choices that could be used to demonstrate fitness to practise. For the educational outcome model these were classified under the 12 outcome headings, and for the criterion model under the seven headings of Good Medical Practice (appendix 4).
Both models were blueprinted to Good Medical Practice by mapping the content and criterion statements and by retrospectively checking that the evidence choices and standards selected covered the spectrum of clinical general practice.6
Completed models or sections from the models were sent to Tayside Health Council and political and educational leaders for comment. Feedback received indicated that the models covered all the competencies required for revalidation. The style of the educational outcome model was questioned and thought to be complex. No changes were made to the models following this consultation.
Implementation phase
A further 24 GPs were recruited by contacting those who had initially indicated interest but not volunteered to develop the models. The study design was formulated to eliminate participant bias and test the acceptability of the models to the wider GP community (table). GPs who developed each model were randomised, with half the GPs implementing the model they developed and half implementing the other model. New recruits were randomised to either model.
Subgroups of general practitioners who implemented the two revalidation models. Values are numbers of participants allocated to study groups (from the initial 66 doctors) and those who completed the study (the 53 doctors who handed in folders)
Monthly evening support meetings were arranged, where worked examples of the information choices were made available and any problems that participants might have were discussed. Standardised forms for submitting evidence were posted on the postgraduate website.20 A validated patient satisfaction questionnaire21 and a modified Ramsay peer review survey22 were distributed. For doctors choosing to use the questionnaires, responses were collected and analysed by computer, and results returned for inclusion in their folders. Doctors received only the results of their own patient and peer surveys and were asked to reflect on their results when including them in their folders. The project officer (KP) also provided telephone advice to participants and discussed any problems with the steering group.
Subgroup of GPs                          Criterion model (n=24 completed)   Outcome model (n=29 completed)
GPs who had developed the same model     10 (5 completed)                   10 (9 completed)
GPs who had developed the other model    11 (9 completed)                   11 (9 completed)
New recruits for implementation          12 (10 completed)                  12 (11 completed)
The implementation phase lasted nine months.
Discussion
This paper describes the method used to develop pragmatic approaches to revalidation in general practice. The process was designed to engage all stakeholders so that the resulting approach would be acceptable to all. This study has shown that it is possible to develop and implement two models or approaches to revalidation for GPs. Basing the models on Good Medical Practice ensures content validity, and consultation with professional and patient groups confirms face validity.
Both models would be suitable for use either within an appraisal system or through an independent route. The criterion model has formed the basis of the Scottish revalidation folder, which has been designed to complement the Scottish appraisal system.
In both models an attempt has been made to define the standards that should apply to information collected by GPs in their folders. There is little research evidence in the current literature on what written information is required to support the revalidation of general practitioners. Norcini has suggested that an ideal recertification programme should consist of three components: demonstration of satisfactory performance in day to day practice; a measure that the doctor has the potential to respond to a range of problems that are important but do not occur routinely in practice; and an assessment of the doctor’s professionalism.23 The models developed in this study allowed doctors to profile their routine practice (reflect on their day to day performance), record their continuing professional development (maintain their skills), and provide information on patient and peer surveys (receive feedback on their professionalism).
The study also included non-principals, an increasing group that is often neglected in educational matters.24
Helping doctors to structure collection of information about their practice through the proformas and examples worked well. As a result of this experience some of the support material from this project has been included in the Scottish revalidation toolkit, distributed to all GP principals in Scotland, and also available on the RCGP Scotland website.25
Conclusion
As described in this and the accompanying paper, this study has shown that models of revalidation can be developed that are valid and feasible for doctors and supported by patient groups. The study has also shown that engaging the user groups is an effective way of developing an acceptable revalidation method. The criterion model has informed development of the Scottish revalidation folder,26 and work from both models is used as educational support material in the Scottish revalidation toolkit.
What is already known on this topic
UK doctors’ professional standards of practice are made explicit in Good Medical Practice
Folders of evidence will be used to show doctors’ competence, but no validated models exist for how to carry out revalidation by this method
Portfolio assessments have been developed for undergraduates, but they have poor reliability for postgraduates because of the varied nature of their content
What this study adds
The study developed two revalidation models: a criterion model, with revalidation as the primary purpose, and an educational outcome model, which combined revalidation with continuing professional development
Both were found to be feasible and acceptable to both general practitioners and patient representatives
Engaging user groups is an effective way of developing an acceptable revalidation method.
We thank Miriam Friedman Ben-David and Jennifer Laidlaw of the Centre for Medical Education, University of Dundee, for methodological support and Peter Donnan of the Tayside Centre for General Practice, University of Dundee, for statistical advice.
Contributors: DB was the principal investigator, who conceived and developed the original idea, led the study, and prepared the manuscript. KP coordinated the study, organised the questionnaires, carried out the interviews, and contributed to analysis. RR contributed to all stages of study, including analysis, and contributed to manuscript preparation. DS helped develop the original idea, advised on the study throughout, sat on the expert group, and prepared the final manuscript. RH helped with initial methodological design and supported the project throughout. DB is guarantor for the study.
Funding: Main funding was from NHS Education for Scotland with supplementary funding from Tayside Primary Care Trust and Tayside Centre for General Practice Postgraduate Funds.
Competing interests: None declared.
Ethical approval: None required.