Revalidation for general practitioners: Development and implementation of two revalidation models

David Bruce, Katie Phillips, Ross Reid, David Snadden, Ronald Harden

NHS Education for Scotland, Tayside Centre for General Practice, Dundee DD2 4AD


 David Bruce
director of postgraduate general practice education

Katie Phillips
project officer

Ross Reid
associate adviser

Northern Medical Program, Universities of Northern British Columbia and British Columbia, Prince George, BC, Canada V2N 4Z9

David Snadden
professor

Centre for Medical Education, University of Dundee, Dundee


Ronald Harden
professor of medical education

Correspondence to: D Bruce d.bruce@tcgp.dundee.ac.uk

(Accepted 16 January 2004)

Abstract

Objectives To develop two models of revalidation for clinical general practice:

a minimum criterion based model with revalidation as the primary purpose

an educational outcome model with emphasis on combining revalidation with continuing professional development (CPD)

Setting Tayside, Scotland

Participants Development of models—45 GPs and stakeholder groups (representatives from the RCGP, LHCC, Tayside Local Health Council, non-principals group, secondary care)

Implementation of models—66 Tayside GPs (principals and non-principals)

Methodology Models were developed over six months at four half day workshops. A consensus methodology, the nominal group technique, was used. Both models were blueprinted to Good Medical Practice. GPs were randomised to either model at the implementation phase.

Outcome Development of two models for revalidation of general practitioners with content and face validity. The criterion model was used as the basis of the Scottish revalidation folder.

Introduction

Revalidation for all doctors working in the United Kingdom will be introduced in spring 2005.1 In Tayside, Scotland, between September 2000 and January 2003, 61 general practitioners (GPs) and key stakeholders developed, implemented, and evaluated two models, or approaches, to revalidation.

When this study began, revalidation was proposed as a five yearly assessment of a doctor’s practice,2 demonstrated against the principles of good medical practice.3 It was proposed that revalidation should be linked with annual appraisal, a formative process, but that a five yearly assessment would also be made by a local revalidation group. This group would comprise a doctor with personal knowledge of the doctor’s practice, a registered doctor who does not know the doctor, and a lay person. Under this proposed methodology, revalidation would become a five yearly summative assessment of a doctor’s practice, intended to achieve three aims at once.

When considering the impact of introducing revalidation to general practice, we felt that three main issues needed consideration: definition of standards, portfolio assessment, and the tension between assuring minimal competence and promoting professional development. The standards against which doctors will be revalidated are the General Medical Council’s fitness to practise standards.4 Summative assessment drives learning,5 6 and measuring doctors’ performance against baseline standards may result in their educational activity being directed towards demonstrating that their practice is not unacceptable, rather than towards taking a more holistic view of their practice and professional development.

Secondly, revalidation requires doctors to provide information about their performance over a five year period, so a folder or portfolio will be required. For general practice there is a need to define what information would be required to build a robust profile and whether clear criteria and standards can be specified. Sampling of doctors’ folders by either the GMC or an external agency will probably form part of a quality assurance mechanism. Even though the detection of poor performance is no longer a declared objective of revalidation, tension remains between the formative and summative aims of requiring doctors to demonstrate fitness to practise. As presently conceived, a summative judgment will have to be made on the quality of each doctor’s folder that is sampled: is it good enough to demonstrate fitness to practise? If a folder sampled by the GMC is judged "inadequate," the doctor will be referred to the appropriate fitness to practise procedure. This raises questions about the validity and reliability of making such judgments on a folder of evidence.6 7 Tensions may arise in any system that strives to promote the continuing development of the majority of competent doctors while also acting as a sieve to detect those whose performance is a cause for concern.8 9 There may be considerable difficulty in devising a system that achieves both.10

Against this background, and in order to illuminate the issues around introducing revalidation to general practice, we developed, piloted, and evaluated two models for revalidation of general practitioners:

a minimum criterion based model with revalidation as the primary purpose

an educational outcome model with emphasis on combining revalidation with continuing professional development (CPD)

This paper reports on the development and implementation of the models. The results of the study are reported in our accompanying paper.

Methods

This study was undertaken between September 2000 and January 2003. The project timeline and phases of the study are shown in appendix 1.

Participants

All GPs, principals and non-principals, registered on the databases of Tayside Primary Health Care Trust, the local faculty of the Royal College of General Practitioners (RCGP), and the GP Postgraduate Unit (340 GPs) were invited by letter to take part in the two year study. Evening meetings were set up to explain the pilot and register interest: 72 GPs attended the meetings and 45 volunteered to take part. Key stakeholder organisations (see box) were asked to participate in development of the models. Patients’ representation was provided by the Local Health Council, which was also asked to give feedback on the completed models and take part in the assessment of doctors’ folders.

Key stakeholders

Patients’ representative—Tayside Health Council

Secondary care

Local Health Care Cooperative (LHCC) representative

RCGP representative

Non-principals’ representative

Development of models

GPs were asked to indicate a preference for which model they would like to develop; those without a preference were randomised. Five of the volunteering GPs who were also members of the stakeholder groups were asked to represent these groups. Each model was therefore developed by groups comprising volunteer GPs and stakeholder representatives.

Each model was developed over six months at four half day workshops. GPs were supported financially to attend day time workshops. All participants were given background reading relative to the model they were to develop (appendix 2). A steering group was formed, comprising the authors and educationalists from the NHS Education for Scotland’s Education Development Unit. Preliminary work by the steering group involved literature searching, communicating with those working in this field, and deciding the basic infrastructure for both models.

Criterion model

This model was based on the fitness to practise standards made explicit in the document Good Medical Practice for General Practitioners.11 For GPs, the "attributes of an unacceptable GP" were taken as the level above which GPs must perform. The original version of this document was modified by grouping the "unacceptable attributes" into clinical headings and mapping these back to the seven headings of Good Medical Practice. Using a methodology suggested by the RCGP,12 the grouped attributes were distilled into broad criterion statements. For each criterion statement, choices of evidence were suggested and standards set (appendix 5, example page of criterion model).

Educational outcome model

Based on the Dundee outcome model,13 the 12 outcomes determined for medical practice were modified to reflect the specific tasks and competencies of general practice. Conceptually, this model looks at the tasks that a doctor does (technical intelligences), the deeper understanding needed for those tasks (intellectual intelligences), and the professionalism of the doctor (personal intelligences).14 15 Within each outcome, broad statements of required practice were specified as "givens," areas of practice that were changing were mapped as "trends," and unacceptable practices were specified as "red flags" (appendix 6, key features of outcome model).

The basic infrastructure of each model was presented to the groups. The task of the development groups was therefore to modify the structure of the models if they considered this necessary and to decide the content within each model. Both development groups indicated that they did not wish to alter the structure of the models.

To reach agreement among the participants of each group, a consensus method, the nominal group technique, was used.16 This method allowed all participants to propose and prioritise the information that doctors would need to collect and to define the criteria and standards without undue influence from other group members.17 It was also recognised to be an efficient use of time.18 During workshops participants brainstormed ideas on content, criterion statements, and the choices of information to be collected, and voted on which should be included in the models. Because of time constraints some voting slips had to be sent to participants between the workshops and were collated and agreed at the next workshop.

The content of both models was completed over two half day workshops, with postal voting and reading taking place between workshops.

Standard setting

The third and fourth half day workshops were taken up with standard setting. Participants were given further reading (included in appendix 2), and a presentation was given on the principles of standard setting. For each piece of evidence the groups were asked to consider exactly what should be included and where the pass point should lie.19 As many of the pieces of information to be collected were common to both models, an "Evidence and standards" booklet was compiled and agreed between the two development groups (appendix 3). Notably, for many of the evidence choices (choices of information to be collected) standards could not be set, and the pass point became presentation of the evidence in the agreed format.

Both groups therefore developed structured models that included a list of information choices that could be used to demonstrate fitness to practise. For the educational outcome model these were classified under the 12 outcome headings, and for the criterion model classified under the seven headings in good medical practice (appendix 4).

Both models were blueprinted to Good Medical Practice by mapping the content and criterion statements and by retrospectively checking that the evidence choices and standards selected covered the spectrum of clinical general practice.6

Completed models, or sections of the models, were sent to Tayside Health Council and to political and educational leaders for comment. The feedback received indicated that the models covered all the competencies required for revalidation. The style of the educational outcome model was questioned and thought to be complex. No changes were made to the models after this consultation.

Implementation phase

A further 24 GPs were recruited by contacting those who had initially indicated interest but not volunteered to develop the models. The study design was formulated to eliminate participant bias and test the acceptability of the models to the wider GP community (table). GPs who developed each model were randomised, with half the GPs implementing the model they developed and half implementing the other model. New recruits were randomised to either model.

Subgroups of general practitioners who implemented the two revalidation models. Values are numbers of participants allocated to study groups (from the initial 66 doctors) and those who completed the study (the 53 doctors who handed in folders)

Subgroups of GPs                         Criterion model (n=24 completed)   Outcome model (n=29 completed)
GPs who had developed the same model     10 (5 completed)                   10 (9 completed)
GPs who had developed the other model    11 (9 completed)                   11 (9 completed)
New recruits for implementation          12 (10 completed)                  12 (11 completed)
Monthly evening support meetings were arranged, at which worked examples of the information choices were made available and any problems experienced by participants were discussed. Standardised forms for submitting evidence were posted on the postgraduate website.20 A validated patient satisfaction questionnaire21 and a modified Ramsey peer review survey22 were distributed. For doctors choosing to use the questionnaires, responses were collected and analysed by computer, and the results were returned for inclusion in their folders. Doctors received only the results of their own patient and peer surveys and were asked to reflect on their results when including them in their folders. The project officer (KP) also provided telephone advice to participants and discussed any problems with the steering group.

The implementation phase lasted nine months.

Discussion

This paper describes the method used to develop pragmatic approaches to revalidation in general practice. The approach was designed to engage all stakeholders so that the resulting method would be acceptable to all. This study has shown that it is possible to develop and implement two models, or approaches, to revalidation for GPs. Basing the models on Good Medical Practice ensures content validity, and consultation with professional and patient groups confirms face validity.

Both models would be suitable for use either within an appraisal system or as an independent route. The criterion model has formed the basis of the Scottish revalidation folder, which has been designed to complement the Scottish appraisal system.

In both models an attempt has been made to define the standards that should apply to the information collected by GPs in their folders. There is little research evidence in the current literature on what written information is required to support the revalidation of general practitioners. Norcini has suggested that an ideal recertification programme should consist of three components: demonstration of satisfactory performance in day to day practice, a measure that the doctor has the potential to respond to a range of problems that are important but do not occur routinely in practice, and an assessment of the doctor’s professionalism.23 The models developed in this study allowed doctors to profile their routine practice (reflect on their day to day performance), record their continuing professional development (maintain their skills), and provide information from patient and peer surveys (receive feedback on their professionalism).

The study also included non-principals, an increasing group often neglected in educational matters.24

The proformas and worked examples helped doctors to structure the collection of information about their practice. As a result of this experience, some of the support material from this project has been included in the Scottish revalidation toolkit, which has been distributed to all GP principals in Scotland and is also available on the RCGP Scotland website.25

Conclusion

As described in this and the accompanying paper, this study has shown that models of revalidation can be developed that are valid, feasible for doctors, and supported by patient groups. The study has also shown that engaging the user groups is an effective way of developing an acceptable revalidation method. The criterion model has informed the development of the Scottish revalidation folder,26 and work from both models is used as educational support material in the Scottish revalidation toolkit.

What is already known on this topic

UK doctors’ professional standards of practice are made explicit in Good Medical Practice

Folders of evidence will be used to show doctors’ competence, but no validated models exist for how to carry out revalidation by this method

Portfolio assessments have been developed for undergraduates, but they have poor reliability for postgraduates because of the varied nature of their content

What this study adds

The study developed two revalidation models: a criterion model, with revalidation as the primary purpose, and an educational outcome model, which combined revalidation with continuing professional development

Both were found to be feasible and acceptable to both general practitioners and patient representatives

Engaging user groups is an effective way of developing an acceptable revalidation method.

We thank Miriam Friedman Ben David and Jennifer Laidlaw of the Centre for Medical Education, University of Dundee, for methodological support and Peter Donnan of Tayside Centre for General Practice, University of Dundee, for statistical advice.

Contributors: DB was the principal investigator, who conceived and developed the original idea, led the study, and prepared the manuscript. KP coordinated the study, organised the questionnaires, carried out the interviews, and contributed to analysis. RR contributed to all stages of study, including analysis, and contributed to manuscript preparation. DS helped develop the original idea, advised on the study throughout, sat on the expert group, and prepared the final manuscript. RH helped with initial methodological design and supported the project throughout. DB is guarantor for the study.

Funding: Main funding was from NHS Education for Scotland with supplementary funding from Tayside Primary Care Trust and Tayside Centre for General Practice Postgraduate Funds.

Competing interests: None declared.

Ethical approval: None required.

  1. General Medical Council. A licence to practise and revalidation. London: GMC, 2003.
  2. General Medical Council. Revalidating doctors, ensuring standards, securing the future. London: GMC, 2000.
  3. General Medical Council. Good medical practice. London: GMC, 1995.
  4. General Medical Council. When your professional performance is questioned. London: GMC, 1997.
  5. Southgate L. Freedom and discipline: clinical practice and the assessment of clinical competence. Br J Gen Pract 1993;44:87-92.
  6. Wass V, van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
  7. Pitts J, Coles C, Thomas P. Educational portfolios in the assessment of general practice trainers: reliability of assessors. Med Educ 1999;33:515-20.
  8. Buckley G. Revalidation is the answer. BMJ 1999;319:1145-6.
  9. Wakeford R. GMC’s proposals for revalidation would not be accurate, economical, or fair. BMJ 2000;321:1220.
  10. Du Boulay C. Revalidation for doctors in the United Kingdom: the end or the beginning? BMJ 2000;320:1490-1.
  11. Royal College of General Practitioners, General Practitioner Committee. Good medical practice for general practitioners. London: RCGP, 2002.
  12. General Practitioner Committee, Royal College of General Practitioners. A methodology for recommending revalidation for the general practitioner. London: RCGP, 2000.
  13. Harden RM, Crosby JR, Davis MH, Friedman M. From competency to meta-competency: a model for the specification of learning outcomes. Med Teach 1999;21:546-52.
  14. Callahan D. Medical education and the goals of medicine. AMEE education guide 14. Dundee: University of Dundee, Centre for Medical Education, 1999.
  15. Harden RM, Crosby JR, Davis MH. An introduction to outcome-based education. AMEE education guide 14. Dundee: University of Dundee, Centre for Medical Education, 1999.
  16. Jones J, Hunter D. Using the Delphi and nominal group technique in health services research. In: Pope C, Mays N, eds. Qualitative research in health care. London: BMJ Books, 2000:40-9.
  17. Lloyd-Jones G, Fowell S, Bligh J. The use of the nominal group technique as an evaluative tool in medical undergraduate education. Med Educ 1999;33:8-13.
  18. Horton JN. Nominal group technique. Anaesthesia 1980;35:811-4.
  19. Dauphinee WD, Case S, Fabb W, McAvoy P, Saunders N, Wakeford R. Standard setting for recertification. In: Newble D, Jolly B, Wakeford R, eds. The certification and recertification of doctors. Cambridge: Cambridge University Press, 1994:201-15.
  20. Tayside revalidation pilot. e-proformas. www.dundee.ac.uk/generalpractice/postgraduate/E-proformas.htm (accessed 3 Mar 2004).
  21. Grogan S, Conner M, Willits D, Norman P. Development of a questionnaire to measure patients’ satisfaction with general practitioners’ services. Br J Gen Pract 1995;45:525-9.
  22. Ramsey PG, Wenrich M, Carline J, Inui T, Larson E, LoGerfo J. Use of peer ratings to evaluate physician performance. JAMA 1993;269:1655-60.
  23. Norcini J. Recertification in the United States. BMJ 1999;319:1183-5.
  24. Chambers R, Fieldhouse R, O’Connell S. GP non-principals’ education: let’s improve access for our flexible friends. Br J Gen Pract 1998;48:1551-2.
  25. RCGP Scotland. www.rcgp-scotland.org.uk/ (accessed 20 Feb 2004).
  26. RCGP Scotland, BMA Scotland, NHS Education for Scotland. Revalidation folder: Doctors working in clinical general practice in Scotland. Edinburgh: RCGP Scotland, 2003.