International Journal of Emergency Medicine. 2010 Nov 5;3(4):341–349. doi: 10.1007/s12245-010-0240-6

Identification of performance indicators for emergency centres in South Africa: results of a Delphi study

David Maritz, Peter Hodkinson, Lee Wallis
PMCID: PMC3047843  PMID: 21373303

Abstract

Background

Emergency medicine is a rapidly developing field in South Africa (SA) and other developing nations. There is a need to develop performance indicators that are relevant and easy to measure. This will allow identification of areas for improvement, create standards of care and allow inter-institutional comparisons to be made. There is evidence from the international literature that performance measures do lead to performance improvements.

Aims

To develop a broad-based consensus document detailing quality measures for use in SA Emergency Centres (ECs).

Methods

A three-round modified Delphi study was conducted over e-mail. A panel of experts representing the emergency medicine field in SA was formed. Participants were asked to propose potential performance indicators for use in SA under subheadings covering the various disciplines seen in emergency patients. These statements were collated and sent out to the panel for scoring on a 9-point Likert scale. Statements that did not reach a predefined consensus were sent back to the panellists for reconsideration.

Results

Consensus was reached on 99 of the 153 (65%) performance indicators proposed. These were further refined, and a synopsis of the statements is presented, classified according to whether they were thought to be feasible in the current circumstances.

Conclusions

A synopsis of the useful and feasible performance indicators is presented. The majority are structure- and process-based indicators appropriate to the development of the field in SA. Further refinement and research are needed to implement these indicators.

Electronic supplementary material

The online version of this article (doi:10.1007/s12245-010-0240-6) contains supplementary material, which is available to authorized users.

Keywords: Performance, Quality, Indicator, Emergency, South Africa, Developing world

Introduction

Within SA, the speciality of emergency medicine is facing pressures from increasing patient numbers, the burden of diseases (such as HIV, AIDS and TB), the burden of trauma and the inevitable resource constraints. The legacy of Apartheid has left a health system that is unable to provide adequate, reliable universal health coverage. The government is attempting to address this through implementation of the National Health Insurance scheme (NHI) [1], which aims to improve access to high-quality health care for the whole population. In order for this to succeed, it is incumbent upon health planners to define quality of care, and to develop ways to assess and measure the quality of the care that we provide.

Emergency Centres (ECs) are constantly striving to provide a higher level of patient care in a cost-effective manner. The challenge is for ECs to be flexible and able to adapt to changing conditions in the socio-political landscape while providing a constant, safe and reliable service [2].

Health care has been classified into Structure, Process and Outcome: each can be measured or quantified [3]. Traditionally, there has been an emphasis on measuring outcomes. Outcome measures are those events occurring after the patient leaves the EC and typically include mortality, morbidity and quality of life. They are useful in informing patients of the quality of care that they can expect to receive from the local hospital, and also allow purchasers of health care to see that they are getting value for money. Most research on performance indicators (PIs) within the EC has focussed on the process-based measures of quality care (e.g., waiting times, overcrowding trends) [4]. Outcome indicators (e.g., mortality) are less common in emergency medicine and difficult to measure due to the limited time that the patient is in the EC [4, 5].

While much has been written about the development of systems of emergency care in developed countries, little is known about similar processes in developing world settings. Various studies have looked at PIs in the EC for the developed world setting in the UK and Canada [4, 5], but these may not apply in the developing world setting. There is a pressing need in SA to develop quality and performance indicators that are relevant and easy to measure. This will allow health care providers to identify areas where improvement is needed, create standards of care and allow inter-institutional comparisons to be made. There is compelling evidence from the international literature that performance measures do lead to performance improvements [6].

There are few (if any) data on the development of performance indicators for ECs within the SA public health service. Rigorous quality assurance and clinical governance are being introduced into the private sector emergency care system, but this is not the case within most public health institutions. Where such systems are in place, they are not universally applied, which makes direct comparisons between facilities impossible.

Furthermore, the methods for the development of quality indicators within the EC are not well defined [4]. Most of the research into the development of performance indicators within the EC has made use of the Delphi technique [4, 5, 7] or other consensus-based methods [8]. The Delphi method is a structured process for collecting knowledge from a group of experts by means of focussed questionnaires interspersed with controlled opinion feedback. Proponents of the Delphi method recognise human judgement as a legitimate and useful input, and therefore believe that the use of experts, carefully selected, can lead to reliable and valid results [9].

As South Africa enters into a new age of health care, there will be both an opportunity and pressure for health-care providers to be accountable for the quality of care that they provide. As a result, it will be necessary to re-define and measure the quality of care that we provide in ECs, to ultimately improve the quality of emergency care.

The aim of this study is to develop a broad-based consensus document detailing quality measures for use in SA ECs.

Methods

Study design

A modified three-round Delphi study was undertaken. A panel of experts in the field of Emergency Medicine in SA (specialist emergency physicians, trauma surgeons and senior nurses) was invited via e-mail to participate in the study. These SA experts were chosen for their experience in emergency care, and were believed to represent public and private sectors, all geographic regions, and both academic and non-academic institutions; they also represented district, regional and central hospitals.

After they had agreed to participate, an e-mail with information on the Delphi process and instructions on how to proceed was sent to the panel members.

Panel members were contacted via e-mail only and given three reminders to respond at each round.

Delphi process and selection of indicator statements

In round 1, members of the Delphi group were invited to produce a list of statements that they considered important with regard to performance in the EC, under the subheadings given in Table 1. All statements were collated and organised into a set of initial indicators; duplicate statements and those not applicable were removed.

Table 1.

Subheadings for proposed indicators in round 1 (Beattie E, Mackway-Jones K. A Delphi study to identify performance indicators for emergency medicine. Emerg Med J. 2004;21:47–50 [5])

Surgery/orthopaedics/trauma
Paediatrics
Psychiatry
Anaesthesia/analgesia
Obstetrics and gynaecology/ENT/ophthalmology
Primary care
Minor injury
Radiology/imaging/investigations
Cardiac arrest
Bereavement
Major incidents
Other

This document was then sent out as round 2 via e-mail; the Delphi group was asked to rank their agreement with these statements on a 9-point Likert scale (1 = the statement is very poor as a quality indicator; 9 = the statement is very good as a quality indicator) [10]. Positive consensus was defined as 80% or more of replies scoring 7 and above; negative consensus was defined as 80% or more of replies scoring 3 and below. Beattie and Mackway-Jones [5] in their study defined positive consensus as 80% or more of replies scoring 6 and above, and negative consensus as 80% or more of replies scoring 4 and below. We decided to use tighter clusters to ensure that those statements reaching consensus would be strongly agreed on.
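
To make the scoring rule concrete, the short sketch below (written in Python purely for illustration; it is not part of the study's workflow, and the function name is ours) classifies the replies for a single statement against the thresholds described above.

# Minimal sketch of the consensus rule used in rounds 2 and 3 (illustrative only).
# Positive consensus: >= 80% of replies score 7 or above; negative consensus: >= 80% score 3 or below.
from typing import List

def classify_consensus(scores: List[int], threshold: float = 0.8) -> str:
    """Classify one statement's 9-point Likert scores as positive, negative or no consensus."""
    if not scores:
        raise ValueError("no scores supplied for this statement")
    n = len(scores)
    high = sum(1 for s in scores if s >= 7) / n  # proportion scoring 7, 8 or 9
    low = sum(1 for s in scores if s <= 3) / n   # proportion scoring 1, 2 or 3
    if high >= threshold:
        return "positive consensus"
    if low >= threshold:
        return "negative consensus"
    return "no consensus"

# Example: the round 2 scores for the portable CXR statement shown in Appendix 1
# (one reply of 1, two of 3, five of 4, six of 5, one of 6, four of 7, four of 8, one of 9)
example_scores = [1] + [3]*2 + [4]*5 + [5]*6 + [6] + [7]*4 + [8]*4 + [9]
print(classify_consensus(example_scores))  # -> "no consensus" (only 9 of 24 replies scored 7+)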

In round 3, those statements from round 2 that had not reached consensus were returned for reconsideration. The round 2 scores were summarised for each statement, allowing panel members to change their response in light of the group opinion, with the aim of achieving consensus. (Appendix 1 shows the format of the Likert scales used.)

At the end of the process, a list of statements that had reached consensus (as either good or bad indicators of quality of care in the EC) was collated (Supplementary data).

Data analysis

Data from each questionnaire were stored on a password-protected work computer and kept anonymous; data were entered into a Microsoft Excel (Microsoft Corp., Redmond, WA) spreadsheet and tabulated.

Descriptive statistics were calculated for the data, including means and percentages.
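
As an illustration of how such per-statement summaries could be produced (the study itself tabulated the data in Excel; the statement labels and scores below are hypothetical), a small sketch follows.

# Hypothetical sketch of per-statement descriptive statistics (mean score and
# percentages of high and low replies); the actual tabulation was done in Excel.
from statistics import mean

round2_scores = {  # hypothetical example data on the 9-point Likert scale
    "A staffed triage area": [7, 8, 9, 8, 7, 9, 8, 7, 9, 8],
    "Time taken to obtain urgent portable CXR": [1, 3, 3, 4, 4, 4, 5, 5, 7, 8],
}

for statement, scores in round2_scores.items():
    n = len(scores)
    pct_high = 100 * sum(s >= 7 for s in scores) / n  # % of replies scoring 7-9
    pct_low = 100 * sum(s <= 3 for s in scores) / n   # % of replies scoring 1-3
    print(f"{statement}: mean={mean(scores):.1f}, scoring 7-9: {pct_high:.0f}%, scoring 1-3: {pct_low:.0f}%")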

Ethics

Ethical approval for this study was obtained from the University of Cape Town. Panel members consented to participate. Replies from each member were kept anonymous.

Results

Delphi participants

Thirty eligible participants were identified; all 30 agreed to participate, and 19 (63%) responded to round 1.

Round 2 was sent out to all 30 of those originally invited, plus 13 newly qualified specialists in emergency medicine; of these 43, 24 panel members responded.

Round 3 was only sent out to those who had replied to round 2; thus, the same 24-member panel took part in round 3, of whom 21 (88%) responded.

Round 1

At the end of round 1, a total of 559 statements were returned under the given subheadings. These were refined into a list of 153 statements, which was sent out as round 2.

Round 2

At the end of round 2, 30 (20%) of the 153 statements had achieved positive consensus and none had achieved negative consensus. Thus, a total of 123 statements were sent out again in round 3 to the 24 panel members.

Round 3

A further 69 of the 123 statements achieved positive consensus, and again none achieved negative consensus. Thus, at the end of round 3 there were 99 positive consensus statements [99 out of 153 (65%)] and no negative consensus statements. Fifty-four statements did not reach consensus (Supplementary Material). Some statements were still felt to be duplications, and further refinement produced a final list of 77 synopsis statements. These were categorised as follows (Tables 2, 3, 4 and 5):

  • As process-, structure- or outcome-based statements

  • As both useful and feasible to measure, or useful but not currently feasible to measure

Table 2.

Synopsis of feasible and useful structure-based performance indicators

Structure-based performance indicators
a) Existence of these structures in the EC
 • A staffed triage area
 • Dedicated minors area
 • Infectious diseases isolation area
 • Dedicated area for bereaved families
 • A safe area for intoxicated/suicidal overdose patients for observation
 • Adequate stores of essential equipment for disaster management (checked regularly)
 • A central command area for disaster management
b) Availability of the following equipment/services in or to the EC
 • Resuscitation drugs and equipment (checked daily)
 • Warmed fluids for resuscitation
 • A full range of equipment to treat patients of all ages
 • Different categories of analgesics/sedatives/anaesthetic drugs
 • A difficult airway trolley in the EC
 • A delivery pack in the EC
 • Emergency HIV prophylaxis all hours
 • Expert staff to assist with the patient who has a difficult airway
 • Rapid ultrasound (FAST) for blunt abdominal trauma
 • Portable X-rays immediately
 • 24-h on-site availability of X-rays/CT scanning and reporting/ultrasound scanning and reporting
 • Patient information containing updated medical information (e.g., wound care)
 • Trauma/counselling/pastoral and social services
c) Guidelines/protocols for:
 • Referral of patients to other hospitals/institutions (including minor injuries)
 • The administration of blood products/management of massive transfusions
 • Current resuscitation protocols in the EC from the Resuscitation Council of SA
 • Termination of CPR
 • The difficult airway
 • Procedural sedation
 • Nurse-initiated administration of opiate analgesia
 • Dealing with infectious diseases
 • Acutely psychotic/aggressive patients
 • Disaster management (and regular simulations)
 • Dealing with staff conflicts/discipline issues
d) Personnel/training/audit:
 • Supervised training of junior EC staff
 • CPR training program within the EC
 • Regular simulation training of emergencies for EC staff
 • Regular morbidity and mortality meetings amongst EC staff
 • An active/regular research/audit program amongst EC staff
 • Percentage staff with BLS or ALS qualifications; and relevant diplomas/degrees

Table 3.

Synopsis of feasible and useful process-based performance indicators

Process-based performance indicators
a) Time indicators:
 • Total time in the EC
 • Time from arrival to triage/from triage to being seen by a doctor/from arrival in the EC to discharge
 • Time taken to obtain emergency blood
 • Time to administration of adequate analgesia
 • Time to obtain an urgent 12-lead ECG for patients with chest pain
 • Door-to-needle (or catheter laboratory) time for acute STEMI
 • Time to first dose of IV antibiotic in septic meningitis
 • Time to stop active bleeding
 • Time to CT scan in head injured patients
 • Time to immobilize a fracture
 • Adherence to target times of the South African Triage Group
b) Percentage of relevant cases/situations where there is documentation of:
 • The weight of a child
 • Vitals in the recovery area post sedation
 • Neurovascular status of an affected limb
 • SpO2 in patients with respiratory problems
 • Peak flow before nebulisations in patients with bronchospasm
 • Visual acuity in patients with visual complaint
 • Fluorescein staining in all presentations of painful red eye
 • INR in patients with resistant epistaxis
 • BP/urine dipstick in pregnant patients

Table 4.

Synopsis of feasible and useful outcome-based performance indicators

Outcome-based performance indicators
 • Number of missed injuries discovered after leaving the Emergency Centre

Table 5.

Synopsis of indicators assessed not to be feasible at this time

Process-based performance indicators
a) Time indicators:
 • Time to sedate a disruptive/acutely psychotic patient
 • Time to activate the disaster plan
b) Level of adherence to:
 • South African Anaesthesia guidelines for difficult airway management
 • South African Anaesthesia guidelines for procedures performed in the EC
 • Local infection control policies
 • Hospital policy regarding unnatural deaths
 • Radiation exposure standards
 • The NEXUS/Canadian C-spine rules for clearance of the C-spine
 • Burns Society of SA guidelines
 • South African Resuscitation Council guidelines
 • National treatment targets
c) Percentage of relevant cases/situations where there is documentation of:
 • What was done during a resuscitation
 • Informed consent for procedures done in the EC
 • Discharge advice given
 • Urine examination/BHCG testing in females with abdominal pain
Outcome-based performance indicators
 • Number of return visits for management of complications following treatment in the EC
 • Number of patients recalled due to missed injury/pathology on X-ray
 • Incidence of complications related to the patient with the difficult airway

Discussion

This study has produced a set of PIs that are all feasible and easily assessed in the current SA EM systems to evaluate and improve emergency centre quality of care. These indicators may need to be refined specific to local situations, although standardisation of at least some of the indicators will allow for direct comparison, audit and future studies. The list is not exhaustive, but provides a useful starting point for pilot studies and further research.

Historically, physicians have not prioritised quality measurement and improvement, but the last few decades have seen the quality improvement movement shift from an external regulatory requirement into an internally driven operation at the core of ECs in the developing world. This transition to a quality-driven revolution remains one of the greatest challenges facing emergency medicine in both the developed and developing world [11].

In their article on quality improvement in EM, Graff et al. [11] concluded that the definition of medical quality should include those factors that describe medical care important to all the stakeholders involved within the EC, such as doctors, nurses and patients. The most frequently used framework is that of the Institute of Medicine [12], which states that the aims of any quality improvement intervention should include safety, effectiveness, patient-centredness, timeliness, efficiency and fairness. The quality of public services in developing countries has been neglected, with little emphasis having been placed on quality improvement [13].

The Delphi technique has been successfully used elsewhere to develop PIs [4, 5]. This multifaceted and heterogeneous Delphi panel, with all members having considerable experience in the SA EM setting, has provided indicators with good generalisability and validity. Consensus methodology is a means of obtaining expert opinion and turning this into a reliable measure. There are no universally accepted or evidence-based criteria to define consensus, and 80% positive (or negative) response was chosen as a reasonable threshold given the nature of the statements in this study [14, 15].

PIs that assess structural components of ECs are more applicable to the current situation in SA, and the study reflects this. These should provide a good baseline and could be developed alongside national guidelines as to what is applicable for what level of EC.

The process PIs are useful and practical. They consist of time measures of flow and performance of vital clinical tasks, and of processes where documentation should be made that gives evidence that clinical process/protocols have been followed.

Outcome measures are difficult in the emergency environment where we seldom have information on outcome outside of the EC. A single measure of missed injuries is proposed as a feasible PI, but as in other studies it may not be a meaningful global measure of EM outcomes. Patient satisfaction is perhaps a better measure of outcome, and is largely weighted on timeliness and appropriateness of treatment, which should be gauged in the process PIs [5].

Werner and Asch express concerns that although performance measures do improve performance, many are designed to improve compliance with guidelines, which does not necessarily translate into clinical benefits [6]. This needs to be borne in mind, especially for the process-based PIs. Adherence to and improvement against PIs should not take priority over clinical care itself, which is not necessarily reflected by the PIs: for example, the PIs are not prioritised, and clearly not all address life- or even limb-threatening issues. Sheldon notes the importance of having good evidence to back indicators, as well as of integrating PIs with local and national policies on quality initiatives [16]. He also emphasises the importance of considering how the results of PIs will be analysed and what actions will be appropriate to increase performance. Kruk et al. [17] have emphasised that performance indicators need to be relevant, reliable, feasible and evidence-based before they can be implemented locally. Thus, developed and developing countries may use very different indicators based on local conditions and policies.

Graff et al. [11] have identified a number of barriers to the measurement and implementation of quality improvement programmes. Most important is the lack of reliable, accurate data acquisition and analysis in ECs. This is especially challenging in resource-constrained developing world hospitals. For effective and accurate measurement, data need to be entered in a digital format, yet most ECs in developing world settings have limited, if any, electronic records of EC patients, so data must be retrieved manually from medical records, which is time consuming. Most ECs run a paper-based log book of EC admissions and discharges, which limits the usefulness and quality of the data.

Secondly, the lack of senior administrative and clinical commitment to quality improvement within the EC is a major challenge. Traditionally, the EC has never been a priority within the hierarchical structure of the health care institution, and quality improvement has not formed part of its core aims. Furthermore, a lack of understanding of quality measurement and improvement among senior staff and colleagues does not foster a team approach to prioritising the goal of improving patient care within the EC.

The burden of diseases such as HIV/AIDS, malaria and trauma, and the lack of qualified manpower within resource-poor systems, have placed a tremendous strain on already overstretched health care systems. Many critics argue that scarce resources should be directed into solving these problems rather than highlighting further problems through measurement interventions [13]. In order for performance monitoring to be successful, it is essential that sound leadership from emergency physicians be fostered to create a multidisciplinary team approach to improving patient care [11].

PIs need to be clearly defined, and tested for validity, reliability and responsiveness before they can be put into common practice [5]. Further refinement and research are needed to guide this process.

Finally, the Centre for Health Economics of York [18] has highlighted some of the main types of unintended consequences of performance indicators that may be detrimental to patient care. These need to be considered when choosing indicators and analysing the results. Firstly, indicators may promote tunnel vision where managers may concentrate on a set of PIs while ignoring other important unmeasured aspects of health care. Secondly, suboptimisation involves pursuing narrow local goals while ignoring the overall objectives of the health system, while myopia is only concentrating on short-term goals and targets. Probably the most detrimental is the misrepresentation and deliberate manipulation of data to satisfy target requirements. Finally, gaming is the altering of behaviour to obtain a strategic advantage.

Creating a list of proposed indicators is one thing, but rolling them out to the ECs is the most difficult task. The PIs need to be further refined to ensure that all emergency physicians have the same understanding of the definition of each indicator, and hence an acceptable level of compliance. Further research is needed on how to approach and solve this issue.

Triage offers an example. The process of EC triage has always been a controversial issue in South Africa, and the need to prioritise the care of patients within South African ECs in response to long waiting times and overcrowding became obvious [20–23]. A staffed triage area has been identified as an essential structural indicator of quality in this study. The Cape Triage Group was convened in response to the variable level of triage practised within South African ECs, with the goal of developing and validating a new triage tool for use within South Africa. Using this platform, a multidisciplinary panel of experts in the field of emergency care developed the Cape Triage Score, which was rolled out across the Western Cape in January 2006 and will hopefully extend to the rest of South Africa in the near future. Extensive campaigns and training of health care providers in the use of the triage system have taken place under the auspices of the Cape Triage Group, the Division of Emergency Medicine of the University of Cape Town and Stellenbosch University (UCT/US), and the Emergency Medicine Society of South Africa (EMSSA). This campaign has shown positive results in terms of waiting times and mortality in many of the units within the Western Cape.

This is an example of how, using an umbrella body such as EMSSA and the Division of Emergency Medicine UCT/US, the PIs identified in this study can serve as a starting point for further debate and discussion. In this way we can create benchmarking standards of good quality of care within our ECs and ensure that all health-care workers in the ECs have a common understanding of these quality indicators, which will improve compliance with the performance indicators and the quality of care delivered. However, this is easier said than done, and we are still a long way from achieving this goal. A concerted effort will be needed to bring all those involved in emergency care under one roof to clarify and further refine these indicators. Governmental legislation and accreditation standards set out by the Department of Health and the Health Professions Council of South Africa will be needed to drive and enforce the process.

Emergency medicine is a rapidly developing speciality within the developing world, but the systems and processes in place are still largely immature and under development. Clear guidelines are needed for the development of the speciality within the developing world. Recent research in South Africa [24] has identified key consensus areas for Emergency Medicine (EM) development in the developing world with respect to the scope of practice, staffing needs, training and research. The next step in this process is the translation of these principles into clear and practical guidelines, through focus group discussions, to drive policy change, protocols, training and further research into EM development in the developing world.

Limitations

The pool from which the Delphi panel was drawn is small, reflecting the numbers in this recently formed speciality in SA. The panel members invited are all either specialists or have wide experience in the SA setting, but the selection was open to the subjectivity of the author, as well as to the availability of e-mail to the panellists during the study period. There is no universally accepted minimum response rate: the response rate in this study was poor, likely because the panel members are a small number of time-pressured individuals, and perhaps because of some miscomprehension of the meaning and importance of the study [15]. The use of e-mail may have been a factor in the poor and fluctuating response rates [19].

Consensus methodology has its shortcomings, and the most cited of these are that participants are not able to discuss issues and that the process may encourage participants to change their views according to the majority opinion [14]. The panel in this study, with common background training, would mitigate this to some extent. It is important to note that Delphi methodology does not necessarily identify agreement: there is a difference between agreement and consensus, which means that these consensus statements are not a set of PIs ready for implementation—they are rather guidelines (which also, by their nature, identify areas for further debate and research) [14].

Conclusion

A consensus-designed set of EC PIs is presented for use in the SA setting. These represent the first attempt at a locally designed and appropriate set of quality of care indicators in the emergency medicine arena. There is a bias in the indicators presented towards structure-based indicators, which is appropriate for the currently developing field in SA. Further research and tailoring of these statements may be necessary at a local level, with standardisation to allow comparison and audit of facilities. Despite the limitations mentioned, the proposed framework of indicators could be used to guide further research and allow for comparison across different health care systems.

Further research is currently underway to clearly define these performance indicators so that all ECs have the same working understanding of them. Variations in the level of understanding and compliance with these performance indicators will impact on the quality of care delivered. However, currently within the South African public health sector there are no uniformly agreed-upon quality markers and accreditation standards for our ECs. In a measure to address this pressing issue, the Emergency Medicine Society of South Africa (EMSSA) and National Department of Health (NDoH) are currently developing an accreditation process and NDoH regulations, which will lead to internal and external benchmarking for quality of care within the ECs.

The EMSSA has a pivotal role to ensure that the standard of care delivered within our ECs is improved to safeguard the health and well-being of the most vulnerable in our society.

Electronic supplementary material

ESM 1 (DOC, 196 kb)

Acknowledgement

We would like to thank the following Delphi panel members from South Africa who contributed to this research:

Dr Patricia Marie Saffy, Dr Fraser John Dawson Lamond, Dr Darryl Ross Wood, Dr Charl Jacques Van Loggerenberg, Dr Cleeve Chelmsford Robertson, Dr Andreas Engelbrecht, Dr Gerald Eric Dalbock, Dr Wayne Patrick Smith, Dr Jacques Goosen, Dr Andy Nicol, Dr Tim Hardcastle, Dr Dave Muckart, Ms Mandy Taubkin, Mr Theo Lighthelm, Dr Louis Jenkins, Dr Philip Barker, Dr Denis Allard, Dr Glen Staples, Dr Elize Esterhuizen, Dr Pradeep Navsaria, Dr Paul Kapp, Dr Lee Wallis, Dr Annemarie Kropman, Dr Heike Geduld, Dr Melanie Stander, Dr Japie De Jager, Dr Peter Hodkinson, Dr Charl Carstens, Dr Julian Fleming, Dr Monique Muller

Conflicts of interest

None.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Biographies

Dr David Maritz

MBChB, is a specialist resident in Emergency Medicine at the Division of Emergency Medicine, University of Cape Town and Stellenbosch University, South Africa.

Dr Peter Hodkinson

MPhil (EM), is an emergency physician at the Division of Emergency Medicine, University of Cape Town and Stellenbosch University, South Africa.

Prof Lee Wallis

MD FRCS FCEM is the Head of the Division of Emergency Medicine, University of Cape Town and Stellenbosch University, South Africa.

Appendix 1: Examples of the Likert scale as used in rounds 2 and 3

Proposed indicator/statement: 1.1 Time taken to obtain urgent portable CXR
Potential for use as a departmental performance indicator: scored 1 2 3 4 5 6 7 8 9 (V. poor to V. good), with the panel member's chosen score marked "x"
Comments: Should not be done with obvious tension pneumothorax

A: Example of the 9-point Likert scale for the round 2 questionnaire

Proposed indicator/statement: Time taken to obtain urgent portable CXR
Potential for use as a departmental performance indicator: scored 1 2 3 4 5 6 7 8 9 (V. poor to V. good), with the panel member's round 3 score marked "X"
Number of round 2 responses for each score (1–9): 1, 0, 2, 5, 6, 1, 4, 4, 1
Comments:

B: Example of the format for the round 3 questionnaire (statements that had not reached consensus). The counts shown are the actual panel responses from round 2 for each score; the panel member marks their own score for round 3.

Footnotes

The views expressed in this paper are those of the author(s) and not those of the editors, editorial board or publisher.

Funding Sources

None

Competing Interest

This paper was presented in part at the Emergency Medicine in the Developing World Conference, Cape Town, South Africa, November 24-26, 2009.

Contribution statement

Author DM conceived and carried out the study. Author DM analysed and summarised the data findings. Authors LW and PH were part of the Delphi panel, and they assisted with writing up and reviewing the manuscript.

References

1. Ncayiyana DJ. National health insurance on the horizon for South Africa. SAMJ. 2008;98:4.
2. Tregunno D, Baker GR, Barnsley J, et al. Competing values of emergency department performance: balancing multiple stakeholder perspectives. Health Serv Res. 2004;39(4):771–792. doi: 10.1111/j.1475-6773.2004.00257.x.
3. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44:166–200. doi: 10.2307/3348969.
4. Lindsay P, Schull M, Bronskill S, et al. The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach. Acad Emerg Med. 2002;9:1131–1139. doi: 10.1111/j.1553-2712.2002.tb01567.x.
5. Beattie E, Mackway-Jones K. A Delphi study to identify performance indicators for emergency medicine. Emerg Med J. 2004;21:47–50. doi: 10.1136/emj.2003.001123.
6. Werner RM, Asch DA. Clinical concerns about clinical performance measurements. Ann Fam Med. 2007;5:159–163. doi: 10.1370/afm.645.
7. Ospina MB, Bond K, Schull M, et al. Key indicators of overcrowding in Canadian emergency departments: a Delphi study. Can J Emerg Med. 2007;9:339–346. doi: 10.1017/s1481803500015281.
8. Hung G, Chalut D. A consensus-established set of important indicators of pediatric emergency department performance. Pediatr Emerg Care. 2008;24:9–15. doi: 10.1097/pec.0b013e31815f39a5.
9. Thangaratinam S, Redman CWE. The Delphi technique. Obstet Gynaecol. 2005;7:120–125. doi: 10.1576/toag.7.2.120.27071.
10. Likert R. A technique for the measurement of attitudes. Arch Psychol. 1932;22:55.
11. Graff L, Stevens C, Spaite D, et al. Measuring and improving quality in emergency medicine. Acad Emerg Med. 2002;9(11).
12. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
13. Initiative for Sub-District Support: Technical Report No. 3. What really improves the quality of primary health care? A review of local and international experience. Health Systems Trust.
14. Williams PL, Webb C. The Delphi technique: a methodological discussion. J Adv Nurs. 1994;19:180–186. doi: 10.1111/j.1365-2648.1994.tb01066.x.
15. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32:1008–1015.
16. Sheldon T. Promoting health care quality: what role performance indicators? Qual Health Care. 1998;7(Suppl):S45–S50.
17. Kruk ME, Freedman LP. Assessing health system performance in developing countries: a review of the literature. Health Policy. 2008;85:263–276. doi: 10.1016/j.healthpol.2007.09.003.
18. Goddard M, Mannion R, Smith P. The NHS performance framework: taking account of economic behaviour. Centre for Health Economics, The University of York. Discussion Paper 158.
19. Harris DR, Connolly H, Christenson J, et al. Pitfalls of email survey research. Can J Emerg Med. 2003;5. http://caep.ca/template.asp?id=E6946BBBF1804F4AAEF600DAF7F37B63#079. Accessed 27 Oct 2009.
20. Bruijns SR, Wallis LA, Burch VC. A prospective evaluation of the Cape Triage Score in the emergency department of an urban public hospital in South Africa. Emerg Med J. 2008;25:398–402. doi: 10.1136/emj.2007.051177.
21. Bruijns SR, Wallis LA, Burch VC. Effect of introduction of nurse triage on waiting times in a South African emergency department. Emerg Med J. 2008;25:395–397. doi: 10.1136/emj.2007.049411.
22. Wallis LA, Gottschalk SB, Wood D, et al. The Cape Triage Score—a triage system for South Africa. SAMJ. 2006;96(1).
23. Gottschalk SB, Wood D, DeVries S, et al. The Cape Triage Score: a new triage system for South Africa. Proposal from the Cape Triage Group. Emerg Med J. 2006;28:149–153. doi: 10.1136/emj.2005.028332.
24. Hodkinson PW, Wallis LA. Emergency medicine in the developing world: a Delphi study. Acad Emerg Med. 2010;17(7).
