Editorial

BMJ 1998 Jun 20;316(7148):1847–1848. doi: 10.1136/bmj.316.7148.1847

Making self regulation credible

Through benchmarking, peer review, appraisal—and management 

JN Johnson 1
PMCID: PMC1113359  PMID: 9632402

Professional self regulation has so far been vested in the General Medical Council, which has done much recently to modernise its way of working. The new performance procedures go a long way to plug a major gap in its ability to deal with cases which, though serious, may not be best dealt with by erasure or suspension from the medical register. Each problem dealt with by the GMC, however, represents an issue which has not been adequately addressed locally, and it is locally that major changes are needed if self regulation is to be credible.

Firstly, outcome data for individual treatments are needed to allow doctors to compare their own results with those of colleagues throughout the NHS performing the same procedures. Such benchmarking has proved useful in cardiothoracic surgery1,2 and lends itself to specialties which produce definite and measurable outcomes and complications, but it could in principle be adapted to any specialty. Individual doctors’ results need to be corrected for case difficulty and comorbidity—which is difficult.

For cardiothoracic surgery Keogh et al have described some of the problems of risk stratification, including the necessity for good data collection.3 It is an even more daunting prospect to extend such systems to specialties like general surgery, where surgeons undertake a wide variety of procedures and where outcomes other than mortality need to be investigated. An alternative would be to compare unadjusted results with the range of outcomes obtained by most doctors performing that procedure. This would allow individual doctors—and their hospital’s audit process—to determine when results fell short of what could be expected throughout the NHS. When the adverse result was an excess mortality, the doctor, together with the medical or clinical director, might decide to stop performing the procedure until corrective action could be taken. This approach would allow doctors and the public to know that a particular hospital performed an operation satisfactorily compared with similar institutions, but would avoid the disadvantages of league tables, which might lead to high risk patients being denied treatment if doctors felt that their position in the league table might be jeopardised. The Joint Consultants’ Committee and the Academy of Medical Royal Colleges are currently developing indicators based on everyday clinical practice. Outcomes of some procedures might be capable of being extracted from data already collected and held by specialist societies. In any event resources must be made available for outcome data to be collected as a matter of urgency.

Similarly, for the national service frameworks for cancer, coronary heart disease, and mental health—and others as they are developed—there need to be a small number of indicators which hospitals can use to monitor their adherence to the national framework. Such results could be published and would reassure patients that the whole process of care measured up to what had been determined nationally.

Secondly, a process of appraisal for consultants is being developed which is designed to enhance their professional role and protect patients.4 For this to succeed the clinical work of individual consultants needs to be reviewed in the context of the clinical service provided by their department. It is difficult for clinical work to be appraised by lay managers or doctors from a different specialty. Appraisal must therefore be rooted in peer review, and with increasing subspecialisation genuine peer review will increasingly need to come from outside the hospital—in any event the assessor must be independent and therefore external. The assessor must have sufficient information to comment on individual performance, staffing, bed numbers, equipment, and so on. This type of peer review will provide a formal opportunity at agreed regular intervals (annually or biannually) for senior doctors to discuss issues relating to their individual performance, the facilities provided by the hospital, and their professional and career development. The process should also be valuable to trusts, not only in the interests of good human resources policies but also as part of their responsibilities under clinical governance.

This type of peer review has been pioneered by the British Thoracic Society5 and has been found helpful by thoracic physicians across the NHS—not least because it helps clinicians make a case for better staffing or equipment when support comes from an external assessment. To extend peer review to all specialties, even quinquennially in the first instance, would require a national initiative and financial support. In some circumstances such a scheme could be developed into formal accreditation, as has happened with clinical pathology accreditation.

The work of individual doctors and the performance of the department in which they work are clearly interdependent. Responsibility for the performance of the department, particularly organisational aspects, lies with the clinical director, in conjunction with the trust’s management. The annual review of consultants’ job plans has been a contractual requirement for several years and is the proper mechanism for reviewing all aspects of a consultant’s work programme and service development plans. Though different, the processes of job plan review and appraisal by peer review are closely interlinked, and neither process should be undertaken without the other. Ideally, they should take place together.

Thirdly, we must grasp the nettle of behaviour problems. Whatever the advice of the GMC,6 it is difficult for doctors who have no managerial relationship with a colleague to take action over that colleague’s conduct. Medical and clinical directors do, however, have a responsibility for the behavioural problems of doctors working with them and must act to resolve them. Clear methods need to be developed, and training is required to help medical managers deal with these issues.

Ministers have supported the concept of self regulation—for the time being. We have to show that it can be delivered within a short timescale, and patients need to know that they will be safe when hospital treatment is necessary.

References

  1. English TAH, Bailey AR, Dark JF, Williams WG. The UK cardiac surgical register 1977-82. BMJ 1984;289:1205–1208.
  2. Treasure T. Lessons from the Bristol case. BMJ 1998;316:1685–1686.
  3. Keogh BE, Dussek J, Watson D, Magee P, Wheatley D. Public confidence and cardiac surgical outcome. BMJ 1998;316:1759–1760.
  4. Central Consultants and Specialists Committee. Appraisal and peer review of senior hospital doctors. London: BMA, 1998.
  5. Page RL, Harrison BDW. Setting up interdepartmental peer review—the British Thoracic Society experience. J R Coll Physicians Lond 1995;29:319–324.
  6. General Medical Council. Good medical practice. London: GMC, 1995.
