A rash of external inspection is affecting the delivery of health care around the world. Governments, consumers, professions, managers, and insurers are hurrying to set up new schemes to ensure public accountability, transparency, self regulation, quality improvement, or value for money. But what do we know of such schemes' evidence base, the validity of their standards, the reliability of their assessments, or their ability to bring improvements for patients, staff, or the general population?
In short, not much. The standards, measurements, and results of management systems have not been, and largely cannot be, subjected to the same rigorous scrutiny and meta-analysis as clinical practice. No one has published a controlled trial, and there are too many confounding variables to prove that inspection causes better clinical outcomes, although there is evidence that organisations increase their compliance with standards if these are made explicit. But experience and consensus are gradually being codified into guidelines to make external quality systems as coherent, consistent, and effective as they could be (box B1). Much of this consensus is ignored by those who develop and operate new programmes.
Summary points
External assessment and inspection of health services are becoming more common worldwide, using a combination of models—ISO certification, business excellence, peer review, accreditation, and statutory inspection
There is common concern that voluntary and statutory programmes need to be integrated to ensure valid standards, consistent assessments, transparency, and public accountability
International consensus on the effective organisation and methods of external assessment is growing, but hard evidence of clinical benefit is lacking
The United Kingdom has many independent and statutory programmes but no effective mechanism for coordinating their activity, standards, and methods according to this consensus
The NHS must be willing to support a public-private coalition to bring realism, clarity, consistency, efficiency, and transparency to external assessment
In Britain there has been no consistent central strategy to support or coordinate existing external assessment programmes. The NHS has introduced new statutory bodies and triggered more formal programmes of visiting and assessment. Each brings a burden of inspection and requires resources for development, but responsibility for ensuring the integration, consistency, and value of such programmes has not been defined.
This article describes the growth of external assessment and the issues it raises around the world, particularly in Britain.
Common approaches
Many countries have voluntary and statutory mechanisms for periodic external assessment of healthcare organisations against defined standards, and some have been systematically compared.1–3 They are all meant to assure or improve some elements of quality, but they are usually run by different organisations without national coordination to make them consistent, mutually supportive, economical, and effective. Broadly, these mechanisms include variants on five approaches (box B2).
The International Organization for Standardization provides standards against which organisations or functions may be certificated by accredited auditors. These have been applied in health care, specifically to radiology and laboratory systems, and more generally to quality systems in clinical departments.4
The Baldrige criteria have evolved into national and international assessment programmes such as the Australian Business Excellence Model (www.aqc.org.au/) and the European Foundation for Quality Management (www.efqm.org/).5
Peer review is based on collegiate, usually single discipline, programmes to assess and give formal accreditation to training programmes but is now also extended to clinical services.6
Accreditation relies on independent voluntary programmes developed from a focus on training into multidisciplinary assessments of healthcare functions, organisations, and networks. These have spread from Western countries into Latin America,7 Africa,8 and South East Asia9,10 during the 1990s. Mandatory programmes have recently been adopted in France,11 Italy,12 and Scotland.13
Registration and licensing are statutory programmes to ensure that staff or provider organisations achieve minimum standards of competence. There are also inspectorates for specific functions to ensure public health and safety.
National requirements
Several countries have recently received recommendations on their ability to ensure high standards in health care nationally. The general conclusions on the role of external agencies have been remarkably similar.
The US president's advisory commission on consumer protection and quality in health care recommended in 1998 that public and private programmes of external review should make their standards, survey protocols, decision criteria, and results available to the public at “little or no cost.”14 The organisations themselves should work towards a common set of standards, coordinate their activities to avoid conflict and duplication, and commit themselves to a national quality forum. This forum aims to devise a national strategy for measuring and reporting healthcare quality and in 1999 began to standardise performance measures for the nation's 5000 acute general hospitals.15
In 1999 the inspector general of the US Department of Health and Human Services reviewed the external quality oversight of hospitals that participate in Medicare.16 She concluded that voluntary “collegiate” accreditation by the Joint Commission on Accreditation of Healthcare Organisations and “regulatory” Medicare certification by state agencies had considerable strengths (box B3) but also major deficiencies. She recommended that both systems should harmonise their methods, disclose more details of hospital performance on the internet, and be held more fully accountable at federal level for their performance in reviewing hospitals.
An Australian taskforce recommended in 1996 that the government should formally acknowledge independent assessment programmes that met defined criteria and should enable them to disseminate information about their processes and findings to the public.17 Two years later an expert advisory group recommended “that accreditation or certification of healthcare organisations be strongly encouraged with incentives, or indeed made mandatory, but choice of accreditation/certification/award approaches be allowed.”18
In Scotland the Carter report on acute services recommended a single mandatory system of accreditation for hospitals and primary care.19 This should be patient centred, clinically focused, and complementary to internal quality improvement, and its explicit, measurable standards and reports should be in the public domain. This recommendation led to the Clinical Standards Board for Scotland.
International solutions
Countries have good reasons to show not only that healthcare standards are consistent within their own territory but also that they are comparable with those of their neighbours, suppliers, and competitors. Several recent European and international initiatives are making traditional assessment methods more accessible, convergent, and relevant to health care.
International Organization for Standardization—The ISO 9000 series of standards was designed for manufacturing industries and has been criticised for using language that is difficult to interpret in terms of health services. The 2000 version will be more readily applied, and US and European initiatives are under way to develop ISO guidelines specific to health care.
European Foundation for Quality Management—The original “business excellence” model has given way to “excellence” in the 1999 version and has shifted emphasis from “enabling processes” to results of concern to patients, staff, and society.
Accreditation—The international arm of the US Joint Commission on Accreditation of Healthcare Organisations has developed a set of multinational accreditation standards.20 In addition, the International Society for Quality in Health Care has developed “ALPHA” standards and criteria (available from the society's website, www.isqua.org.au) against which an accreditation programme may apply to have its standards and process assessed and internationally accredited.21 These also offer a template for standardisation and self assessment to any external assessment programme.
Programmes in Britain
The royal commission on the NHS recommended in 1979 that a special health authority be set up as a development agency and guardian of standards.22 In the early 1980s several monitoring agencies were suggested or piloted,23 but, despite a favourable response from national professional bodies to leaked proposals, no such national agency featured in the government's 1989 white paper Working for Patients.24
In the absence of any governmental lead, several small peer review programmes and some larger accreditation programmes emerged as external voluntary mechanisms for organisational development. There are now over 35 such programmes with a wealth of standards and trained assessors but little integration, consistency, or reciprocity between them. Their number could double if each royal college, faculty, and professional association were to establish an independent accreditation programme as a collegiate approach to clinical governance. NHS institutions also have their share of visits from clinical training programmes, inspectors (for fire regulations, environmental health, and the like), and other watchdogs that have begun to publish standards (such as the NHS Information Authority Information Management Centre for data quality and NHS Controls Assurance for risk management and controls assurance).
The Clinical Standards Board for Scotland, the National Institute for Clinical Excellence (NICE), and the Commission for Health Improvement (CHI) for England and Wales have been established to improve standards in the NHS. After years of policy vacuum, an early common task must be to tidy up: they must synthesise the experience of Britain and other countries25; provide public access to their own valid standards, reliable assessments, and fair judgments; and, above all, avoid duplication and inconsistency in defining and measuring standards. In short, they should be open to assessment against international criteria and lead the way to consistency and reciprocity within and between systems for improving patient services, clinical training, and public accountability.
Britain could borrow from the US and Australian recommendations for partnership between state and independent programmes for external assessment and define the terms of collaboration. Independent and statutory programmes could be jointly assessed and harnessed according to general criteria drawn from UK policy and experience overseas and from the more specific ALPHA standards.
We need to catalogue, harmonise, and orchestrate organisational standards and their assessment, not only in the NHS but also in the independent and social care sectors. The National Institute for Clinical Excellence has a clear responsibility for defining clinical standards in England and Wales. The Commission for Health Improvement is concerned with the organisation and delivery of clinical governance and national service frameworks, but it has no mandate to define or orchestrate organisational standards (even for its own reviews), and it is specifically excluded from the independent sector. In Scotland the Clinical Standards Board integrates some key features of these two bodies, particularly the task of defining and measuring standards, both clinical and organisational. With yet broader vision, the Scottish Executive has adopted a charter that sets out principles for public and professional inspectorates whose role includes evaluation of cases in the public interest, including health, education, and social work services (www.scotland.gov.uk). This offers a starting point for coherence and learning within and between sectors, and an example for the rest of Britain.
The UK Accreditation Forum (www.caspe.co.uk) was set up in 1998 to support accreditation and peer review programmes, and the Academy of Medical Royal Colleges (www.aomrc.org.uk/) is working towards more coherent procedures for hospital visiting for recognition of training. Neither body has the resources or the authority to standardise standards or to regulate the regulators across the country.
What we need is a formal means to pool current experience, to drive convergence, and to help new programmes to be efficient, complementary, and effective—a resource centre to do for organisational and management standards what NICE, the Cochrane Centre, and the Scottish Intercollegiate Guidelines Network are doing for clinical practice. Its task should be to ensure that organisational standards, assessments, and general results are in the public domain; that the legitimate interests of the public, professions, providers, and funding bodies are balanced and supported; that lessons from successes and failures are systematically embedded in common core standards for assessment; that assessment methods and reporting are consistent in time, place, and service; and that expenditure on the development and operation of external assessment programmes is demonstrably justified by improvements in patient care.
Conclusions
Schemes for inspection, registration, revalidation, and review are proliferating with little national coordination or regard for the evidence of what has worked or not worked for health care in Britain or overseas. This leads to uncertainty among service providers about which standards to adopt, inefficiency in developing new inspection and development programmes, duplication and inconsistency of external assessments, and an excessive burden on the services under scrutiny. The collegial and statutory mechanisms need a public-private partnership, perhaps similar to the National Quality Forum in the United States, to bring clarity, consistency, and transparency to external assessment in Britain.
Acknowledgments
I am grateful for advice on drafts of this paper from Barbara Donaldson of Quality Health New Zealand, from Elma Heidemann of the Canadian Council on Health Services Accreditation, and from Lee Tregloan of the International Society for Quality in Health Care.
Footnotes
Competing interests: I was formerly leader of the European “ExPeRT” research project (funded by the EC, 1996-99). As former president of the International Society for Quality in Health Care, I seeded the international accreditation project and was a member of the American Joint Commission International standards task force. I am founder chairman of the UK Accreditation Forum and have been paid by the Health Quality Service and the Hospital Accreditation Programme. I currently have a contract with the International Society for Quality in Health Care to provide a research review of accreditation programmes around the world, but I otherwise receive no funding from any of the organisations mentioned.
References
- 1. Klazinga N. Re-engineering trust: adoption and adaptation of four external quality assurance models in Western European health care systems. Int J Quality Health Care. 2000;12:183–189. doi: 10.1093/intqhc/12.3.183.
- 2. Australian Business Excellence Framework Healthcare Advisory Group. A comparison of quality programmes. St Leonards, NSW: Australian Quality Council; 1999.
- 3. Donahue KT, van Ostenberg P. Joint Commission International accreditation: relationship to four models of evaluation. Int J Quality Health Care. 2000;12:243–246. doi: 10.1093/intqhc/12.3.243.
- 4. Sweeney J, Heaton C. Interpretations and variations of ISO 9000 in acute health care. Int J Quality Health Care. 2000;12:203–209. doi: 10.1093/intqhc/12.3.203.
- 5. Nabitz U, Klazinga N, Walburg J. The EFQM excellence model: European and Dutch experience with the EFQM approach in health care. Int J Quality Health Care. 2000;12:191–201. doi: 10.1093/intqhc/12.3.191.
- 6. Van Weert C. Developments in professional quality assurance towards quality improvement. Int J Quality Health Care. 2000;12:239–242. doi: 10.1093/intqhc/12.3.239.
- 7. Arce H. Accreditation: the Argentine experience in the Latin American region. Int J Quality Health Care. 1999;11:425–428. doi: 10.1093/intqhc/11.5.425.
- 8. Whittaker S, Burns D, Doyle V, Fenney Lynam P. Introducing quality assurance to health service delivery—some approaches from South Africa, Ghana and Kenya. Int J Quality Health Care. 1998;10:263–267. doi: 10.1093/intqhc/10.3.263.
- 9. Huang P, Hsu YE, Kai-Yuan T, Hsueh Y-S. Can European external peer review techniques be introduced and adopted into Taiwan's hospital accreditation system? Int J Quality Health Care. 2000;12:251–254. doi: 10.1093/intqhc/12.3.251.
- 10. Ito H, Iwasaki S, Nakano Y, Imanaka Y, Kawakita H, Gunji A. Direction of quality improvement activities of health care organizations in Japan. Int J Quality Health Care. 1998;10:361–363. doi: 10.1093/intqhc/10.4.361.
- 11. Agence Nationale d'Accréditation et d'Evaluation en Santé. Décret en Conseil d'Etat no 97-311 du 7 Avril (Journal Officiel (82)8). Paris: ANAES; 1997. www.anaes.fr/ANAES/anaesparametrage.nsf/HomePage?readform
- 12. Decree of 14 January 1997. Rome: Gazetta Ufficiale della Repubblica Italiana; 1997.
- 13. Steele DR. Promoting public confidence in the NHS: the role of the Clinical Standards Board for Scotland. Health Bull Jan 2000. www.scotland.gov.uk/library2/doc09/hbj0-05.asp
- 14. President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Quality first: better health care for all Americans. Washington DC: US Department of Health and Human Services; 1998. www.hcqualitycommission.gov/final/chap09.html
- 15. Kizer KW. The National Quality Forum enters the game. Int J Quality Health Care. 2000;12:85–87. doi: 10.1093/intqhc/12.2.85.
- 16. Brown JG. The external review of hospital quality: a call for greater accountability. Boston, MA: Office of Inspector General, Department of Health and Human Services; 1999 (OEI-01-97-00050). www.dhhs.gov/progorg/oei/reportindex.html
- 17. Taskforce on Quality in Australian Health Care. The final report of the Taskforce on Quality in Australian Health Care. Canberra: Australian Department of Health and Aged Care; 1996. www.health.gov.au/pubs/hlthcare/toc.htm
- 18. National Expert Advisory Group on Safety and Quality in Australian Health Care. Interim report, April 1998. Canberra: Australian Department of Health and Aged Care; 1998. www.health.gov.au/about/cmo/report.doc
- 19. Scottish Office. Acute services review report: quality assurance and accreditation. Edinburgh: Scottish Office Publications; 1998. www.scotland.gov.uk/library/documents5/acute-06.htm#1
- 20. Joint Commission International accreditation standards for hospitals. Chicago: Joint Commission International; 2000. www.jcrinc.com/internat.htm
- 21. Heidemann EG. The ALPHA program. Int J Quality Health Care. 1999;11:275–277. doi: 10.1093/intqhc/11.4.275.
- 22. Royal Commission on the National Health Service. Report of the Royal Commission on the National Health Service. London: HMSO; 1979 (Cmnd 7615).
- 23. Shaw CD. Monitoring and standards in the NHS. BMJ. 1982;284:217–218. doi: 10.1136/bmj.284.6310.217.
- 24. Department of Health. Working for patients (white paper). London: HMSO; 1989.
- 25. Oldham J. An inspectorate for the health service? BMJ. 1997;315:896–897. doi: 10.1136/bmj.315.7113.896.