Bulletin of the World Health Organization. 2015 Aug 21;93(10):719–726. doi: 10.2471/BLT.14.151399

A national system for monitoring the performance of hospitals in Ethiopia

Zahirah McNatt a, Erika Linnander a, Abraham Endeshaw b, Dawit Tatek a, David Conteh c, Elizabeth H Bradley a
PMCID: PMC4645435  PMID: 26600614

Abstract

Many countries struggle to develop and implement strategies to monitor hospitals nationally. The challenge is particularly acute in low-income countries where resources for measurement and reporting are scarce. We examined the experience of developing and implementing a national system for monitoring the performance of 130 government hospitals in Ethiopia. Using participatory observation, we found that the monitoring system resulted in more consistent hospital reporting of performance data to regional health bureaus and the federal government, increased transparency about hospital performance and the development of multiple quality-improvement projects. The development and implementation of the system, which required technical and political investment and support, would not have been possible without strong hospital-level management capacity. Thorough assessment of the health sector’s readiness to change and desire to prioritize hospital quality can be helpful in the early stages of design and implementation. This assessment may include interviews with key informants, collection of data about health facilities and human resources and discussion with academic partners. Aligning partners and donors with the government’s vision for quality improvement can enhance acceptability and political support. Such alignment can enable resources to be focused strategically towards one national effort – rather than be diluted across dozens of potentially competing projects. Initial stages benefit from having modest goals and the flexibility for continuous modification and improvement, through active engagement with all stakeholders.

Introduction

Improvement in the quality of hospital care is a fundamental aspect of health system strengthening1–4 that is directly linked to the service delivery dimension of the World Health Organization (WHO) building blocks of a health system.5 While the monitoring of hospital performance is a key ingredient to such improvement,1,3,4 many countries struggle to develop and implement feasible strategies to monitor hospitals nationally. The challenge is particularly acute in low-income countries where resources for measurement and reporting are scarce.

In the field of global health, research on performance monitoring often focuses broadly on health systems6–9 rather than on hospitals. The literature on the development and implementation of systems for monitoring hospital performance is largely dominated by case studies and reports from high-income countries with national health systems – e.g. Canada10 and Denmark,11 the United Kingdom of Great Britain and Northern Ireland12 and other countries in western Europe.13–15 Although there has also been some relevant research in the United States of America,10 it has tended to focus on a narrow set of quality measures in specific populations.16,17 The WHO performance assessment tool for quality improvement in hospitals is a donor-led, externally designed measurement project rather than a country-led, internally developed initiative.14,15 This tool has been applied in only one middle-income country (South Africa).14,15 Most attempts to monitor hospital performance in low-income settings have involved small numbers of facilities and narrowly defined clinical measures of performance.18–24 When creating their accreditation systems for hospitals, both Liberia and Zambia monitored hospital performance for just a year, to collect baseline data.25,26

We could find no peer-reviewed studies done in low-income countries that described the development and sustained implementation of a national system for monitoring hospital performance, based upon a comprehensive set of key performance indicators. We therefore sought to describe the creation and implementation of such a national system in a low-income country. We considered Ethiopia to be a good setting in which to conduct our case study because of recent hospital reform in the country. The reform led to the creation of: (i) the role of hospital chief executive officer – qualified through a master’s degree programme in hospital and health-care administration;27,28 (ii) private wings in hospitals that allowed revenue generation; and (iii) hospital governing boards.28,29

The many new government hospitals that were built during the ongoing reform process led to improved hospital access in both rural and urban settings. We describe the development of key performance indicators, the process of monitoring hospital performance relative to these indicators and the trend in performance since 2010, which marked the implementation of Ethiopia’s national system of hospital monitoring. Findings from this case study may be helpful to other low-income countries seeking to elevate the quality of facility-based health care through performance monitoring and accountability.

Key performance indicators

Development

We developed performance indicators that were relevant for hospitals and consistent throughout the country. The first indicator developed was the most fundamental – adherence to national guidelines on hospital management. In 2009, Ethiopia partnered with the Clinton Health Access Initiative and the Yale Global Health Leadership Institute to develop national guidelines for the management of hospitals: the Ethiopian Hospital Reform Implementation Guidelines.30,31 These guidelines included 124 hospital management standards, each of which was a statement – e.g. “the hospital conducts a physical inventory of all pharmaceuticals in the store and each dispensing unit at a minimum of once per year.” Hospitals were asked to report quarterly whether each standard was met.
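
As a minimal illustration of the arithmetic behind this first indicator, the sketch below shows how a quarterly checklist of standards could be turned into the proportion of standards met. This is an assumption of ours for illustration only; it is not the ministry’s actual scoring tool, and the standard texts shown are abbreviated examples drawn from the article.

```python
# Hypothetical sketch: deriving the "proportion of operational standards met"
# from a quarterly checklist. The scoring tool actually used in Ethiopia is
# not reproduced here; this only illustrates the underlying arithmetic.

checklist = {
    "annual physical inventory of pharmaceuticals": True,
    "functioning hospital quality committee": True,
    "documented drug formulary": False,
    # ... one entry per standard, 124 in total
}

def proportion_of_standards_met(checklist):
    """Percentage of management standards reported as met this quarter."""
    return 100.0 * sum(checklist.values()) / len(checklist)

print(f"{proportion_of_standards_met(checklist):.1f}% of standards met")  # 66.7%
```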

The success of the rollout of Ethiopia’s first attempt to monitor hospital performance, in 2010, was probably the result of its simplicity and its focus on hospital management. The guidelines leveraged the Ethiopian Federal Ministry of Health’s investment in the training of hospital chief executive officers via the master’s degree programme in hospital and health-care administration.28,32 The guidelines, the associated scoring sheet, the promotion of adherence to the guidelines and the building of management capacity were made integral parts of the two-year programme. The students in the programme were selected by regional health bureaus.32 At the time of writing, more than 90% of those who successfully completed the degree programme remain employed in the Ethiopian health-care sector (D Tatek, unpublished observations, 2014).

Given that, in 2009–2010, government hospitals were understaffed, financially limited and often lacked 24-hour access to basic resources such as water and electricity, the ministry of health agreed that, before launching reporting on other aspects of hospital performance such as efficiency, cost, clinical outcomes and patient experience, government hospitals should first be accountable for meeting a set of minimum management standards.

By 2011, 40% of government hospitals were reporting data on their adherence to operational standards – to both the ministry of health and the appropriate regional health bureau. Improvements were already apparent in the establishment of hospital quality committees, drug formularies, pharmacy inventory control systems and a host of other quality-improvement efforts.33 Staff from regional health bureaus and development partners visited hospitals to corroborate the reported information and to provide coaching on the operational standards. The environment was poised for the introduction of a more robust monitoring system based upon multiple key performance indicators.

In 2011, the ministry of health negotiated a standardized set of performance indicators, in partnership with the regional health bureaus and with technical assistance, as before, from the Clinton Health Access Initiative and the Yale Global Health Leadership Institute. The process of selecting such indicators for the nation’s hospitals was rigorous and included reviews of other countries’ experiences, the development of thematic areas and frequent consultations with federal, regional and external stakeholders. Given the need for these indicators to be reliable – and collection of data on them to be feasible6 – the ministry of health held sessions with the regional health bureaus to determine which indicators were most important to the bureaus and what human resource training and infrastructure development were needed to enable collection of the corresponding data.

Six months of research and negotiation resulted in the establishment of 36 national indicators for the assessment of hospital performance. These indicators covered 11 aspects of hospital operations: hospital management, outpatient, emergency, inpatient, maternity, referral and pharmacy services, productivity, human resources, finance and patient experience (Table 1). Together, the indicators encompassed measures of operational functioning, clinical quality, financial activity and patient experience.34 The ministry of health worked with partners to limit the number of indicators that could potentially create perverse incentives (e.g. mortality rates) and to explain, to hospital and ministry of health staff, the potential unintended effects of indicator measurement.

Table 1. Hospital key performance indicators, Ethiopia, 2010.

Indicator code | Indicator

Hospital management
KPI 1 | Proportion of EHRIG operational standards met

Outpatient services
KPI 2 | Outpatient attendees
KPI 3 | Outpatient attendees seen by private-wing service
KPI 4 | Outpatient waiting time to treatment
KPI 5 | Outpatients not seen on same day

Emergency services
KPI 6 | ED attendees
KPI 7 | ED patients triaged within 5 minutes of arrival at ED
KPI 8 | ED attendances with stay longer than 24 hours
KPI 9 | ED mortality

Inpatient services
KPI 10 | Inpatient admissions
KPI 11 | Inpatient admissions to private wing
KPI 12 | Inpatient mortality
KPI 13 | Delay for elective surgical admission
KPI 14 | Bed occupancy
KPI 15 | Mean length of stay
KPI 16 | Incidence of pressure ulcer
KPI 17 | Percentage of surgical sites infected
KPI 18 | Completeness of inpatient medical records

Maternity services
KPI 19 | Deliveries – i.e. live births and stillbirths – attended
KPI 20 | Births by surgical, instrumental or assisted vaginal delivery
KPI 21 | Institutional maternal mortality
KPI 22 | Institutional neonatal deaths within 24 hours of birth

Referral services
KPI 23 | Referrals made
KPI 24 | Rate of referrals
KPI 25 | Emergency referrals, as a proportion of all referrals made

Pharmacy services
KPI 26 | Mean stock-out duration of hospital-specific tracer drugs

Productivity
KPI 27 | Patient-day equivalents per physician
KPI 28 | Patient-day equivalents per nurse or midwife
KPI 29 | Major surgeries per surgeon
KPI 30 | Major surgeries conducted in private wing

Human resources
KPI 31 | Attrition rate among physicians
KPI 32 | Staff experience, as a staff satisfaction rating

Finance
KPI 33 | Cost per patient-day equivalent
KPI 34 | Raised revenue, as a proportion of total operating revenue
KPI 35 | Revenue utilization – i.e. the proportion of budget used

Patient experience
KPI 36 | Patient experience, as a patient satisfaction rating

ED: emergency department; EHRIG: Ethiopian hospital reform implementation guidelines; KPI: key performance indicator.

The performance indicators included process measures that were directly related to patient outcomes. For example, one indicator – the average time during which stocks of basic drugs were unavailable – highlighted how often inpatients and outpatients were unable to purchase medications and therefore had to remain untreated or source medications from private pharmacies. Another indicator – the percentage of patients triaged within five minutes of arrival in the emergency department – was particularly important to all stakeholders as it directly responded to community concerns about mortality and morbidity resulting from delayed treatment.
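
To make the arithmetic behind these two example measures concrete, the sketch below shows one way a hospital might compute them from routine records. The function names and record fields are illustrative assumptions on our part, not the definitions given in the national manual.

```python
from datetime import datetime

# Hypothetical illustration: computing two of the process indicators
# described above from routine hospital records.

def percent_triaged_within_5_minutes(ed_visits):
    """ed_visits: list of dicts with 'arrival' and 'triage' datetimes."""
    eligible = [v for v in ed_visits if v.get("arrival") and v.get("triage")]
    if not eligible:
        return None  # nothing to report this quarter
    within = sum(
        1 for v in eligible
        if (v["triage"] - v["arrival"]).total_seconds() <= 5 * 60
    )
    return 100.0 * within / len(eligible)

def mean_stock_out_days(tracer_drug_stock_outs):
    """tracer_drug_stock_outs: dict mapping tracer drug -> days out of stock."""
    if not tracer_drug_stock_outs:
        return None
    return sum(tracer_drug_stock_outs.values()) / len(tracer_drug_stock_outs)

# Example quarter:
visits = [
    {"arrival": datetime(2013, 1, 5, 9, 0), "triage": datetime(2013, 1, 5, 9, 3)},
    {"arrival": datetime(2013, 1, 5, 10, 0), "triage": datetime(2013, 1, 5, 10, 9)},
]
print(percent_triaged_within_5_minutes(visits))           # 50.0
print(mean_stock_out_days({"amoxicillin": 4, "ORS": 0}))  # 2.0
```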

The success of the development of the indicators was largely due to the simplicity, flexibility and transparency of the process. The number of indicators was kept small and the focus was on measures that could be calculated reasonably easily by hospital staff. The ministry of health required commitment to reporting the 36 national indicators but allowed regional health bureaus to create additional indicators, which stimulated regional ownership. A National Hospital Performance Monitoring and Improvement Manual, which outlined each indicator thoroughly and specified precise definitions and data sources, was disseminated through a series of national workshops funded by the United States Centers for Disease Control and Prevention (CDC) and the ministry of health.

Implementation and monitoring

The implementation of the monitoring system included substantial investments in both human resources and information technology. In terms of human resources, new roles were developed at the hospital level and in the regional health bureaus and ministry of health. Each hospital had several individuals – so-called data owners – who were each dedicated to collecting data on the performance indicators that were relevant to their department. For example, a midwife could be the data owner for neonatal mortality. In addition, each hospital had an indicator collator who worked closely with each data owner and was responsible for the collation of data on all the indicators. Instead of hiring new personnel to undertake these tasks, most hospitals modified the job descriptions of current employees and provided additional short-term, on-site training. Data on the indicators were initially collected on paper forms and then compiled and submitted as spreadsheet computer files. Health and development partners provided technical support for designing data entry and reporting applications.
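
As an illustration of the collation step described above, the following sketch shows how an indicator collator might merge the figures supplied by departmental data owners into a single quarterly file for submission. The indicator values, field names and file layout here are assumptions for illustration, not the actual reporting format used in Ethiopia.

```python
import csv

# Hypothetical sketch of hospital-level collation: one quarterly file with
# one row per key performance indicator, built from departmental returns.

quarterly_returns = {
    "KPI 1": {"value": 74.7, "data_owner": "quality officer"},
    "KPI 7": {"value": 93.6, "data_owner": "ED matron"},
    "KPI 21": {"value": 0.4, "data_owner": "head midwife"},
}

def write_quarterly_report(hospital, year, quarter, returns, path):
    """Write one row per indicator so the regional bureau can collate files."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hospital", "year", "quarter", "kpi", "value"])
        for kpi, record in sorted(returns.items()):
            writer.writerow([hospital, year, quarter, kpi, record["value"]])

write_quarterly_report("Example Hospital", 2013, 1, quarterly_returns, "q1_report.csv")
```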

At bureau and ministry level, the curative and rehabilitation teams and the medical services directorate were dedicated to the performance indicators and hospital operations. These teams were responsible for training hospital data owners and indicator collators, troubleshooting problems with data collection and reporting and synthesizing the hospital-level data into a national database for comparing hospital performance within and across regions (Table 2). The ministry of health used the summary databases during discussions of trends in hospital performance, at quarterly joint sessions of the regional and federal health leadership.

Table 2. National summary data on nine key performance indicators for 121 government hospitals, Ethiopia, 2013.

Indicator | Code | First quarter | Second quarter | Third quarter | Fourth quarter | All

Hospital management
Proportion of EHRIG operational standards met, % | KPI 1 | 70.6 | 74.7 | 75.3 | 77.5 | 74.5

Outpatient services
Outpatient attendees, No. | KPI 2 | 586 337 | 618 442 | 648 910 | 648 125 | 625 453
Outpatient attendees seen by private-wing services, % | KPI 3 | 7.0 | 6.6 | 5.9 | 6.0 | 6.4
Outpatient waiting time to treatment, minutes | KPI 4 | 37.1 | 40.3 | 44.9 | 41.4 | 41.0
Outpatients not seen on same day, % | KPI 5 | 0.5 | 0.5 | 0.2 | 0.2 | 0.3

Emergency services
ED attendees, No. | KPI 6 | 198 078 | 203 496 | 212 982 | 213 570 | 828 126
ED patients triaged within 5 minutes of arrival at ED, % | KPI 7 | 93.6 | 76.3 | 94.9 | NR | 93.0
ED attendees with stay longer than 24 hours, % | KPI 8 | 2.4 | 2.1 | 2.3 | 2.0 | 2.2
ED mortality, % | KPI 9 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2

ED: emergency department; EHRIG: Ethiopian hospital reform implementation guidelines; KPI: key performance indicator; NR: not reported.
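
The roll-up from hospital submissions to a national summary of the kind shown in Table 2 could, in principle, be scripted along the following lines. The file and column names are assumptions carried over from the hypothetical hospital-level sketch above, not the medical services directorate’s actual tooling.

```python
import csv
from collections import defaultdict

# A minimal sketch, under assumed file and field names, of synthesizing
# hospital-level quarterly returns into a national summary: the mean reported
# value per indicator per quarter, across all submitting hospitals.

def national_summary(report_paths):
    values = defaultdict(lambda: defaultdict(list))  # kpi -> quarter -> values
    for path in report_paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                values[row["kpi"]][int(row["quarter"])].append(float(row["value"]))
    summary = {}
    for kpi, quarters in values.items():
        summary[kpi] = {
            q: sum(vals) / len(vals) for q, vals in sorted(quarters.items())
        }
    return summary

# e.g. national_summary(["hospital_a_q1.csv", "hospital_b_q1.csv", ...])
```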

The approach used to establish the Ethiopian system for monitoring hospital performance was designed to fit the Ethiopian context. Many hospital employees were initially unfamiliar with the methods used for reliable and valid data collection and few had adequate experience with computer software. As many of the computers available in hospitals functioned poorly, the system was designed to use relatively simple software programmes.

The main challenges that arose during implementation were errors in data collection and calculation at hospital level and the fear of reprisal for poor performance. For instance, some hospital employees were unsure which denominators or patient populations they should be using. Some hospitals repeatedly failed to report data on particular indicators and some were afraid to report data that highlighted poor performance – especially poor results on clinical indicators. In the first year of the system, rates of surgical site infection and neonatal mortality were often found to be underreported. Hospitals that appeared to be struggling to report reasonably accurate data on the key performance indicators were offered additional on-site training and one-on-one coaching. In their hospital-wide meetings, hospital chief executive officers were encouraged to cultivate an accountable but non-punitive environment. Regional health bureaus reinforced the importance of the data-collection efforts and, by improving the timeliness of their feedback on the summary data to hospitals, helped prompt more immediate exploration and correction of data errors.
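
The kinds of plausibility checks a regional health bureau might run before accepting a quarterly return can be sketched as follows. The thresholds and the choice of percentage-based indicators are illustrative assumptions only, not the checks actually applied in Ethiopia.

```python
# Hypothetical sketch of simple data-quality checks on a hospital's quarterly
# return: flag indicators that were not reported and percentages that fall
# outside a plausible range, so errors can be queried promptly.

EXPECTED_KPIS = {f"KPI {i}" for i in range(1, 37)}      # the 36 national indicators
PERCENTAGE_KPIS = {"KPI 1", "KPI 5", "KPI 7", "KPI 9"}  # examples reported as %

def check_return(returns):
    """returns: dict mapping KPI code -> reported value (or None if missing)."""
    problems = []
    for kpi in sorted(EXPECTED_KPIS):
        value = returns.get(kpi)
        if value is None:
            problems.append(f"{kpi}: not reported")
        elif kpi in PERCENTAGE_KPIS and not 0 <= value <= 100:
            problems.append(f"{kpi}: {value} is outside the 0-100% range")
    return problems

print(check_return({"KPI 1": 74.5, "KPI 7": 120.0}))
```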

The costs of the monitoring system were originally covered by a grant from the United States CDC. Implementing partners were unable to quantify such costs accurately or to separate them from those of other programmatic activities. In addition to the efforts of the nongovernmental organization and university partners, the ministry of health and regional health bureaus made both financial and in-kind contributions to the establishment and maintenance of the monitoring system. Future efforts would benefit from a more explicit analysis of costs.

Impact of monitoring

As the national monitoring system was fully implemented, rates of hospital reporting of performance indicators increased. This trend indicated changes in hospital functioning and encouraged improvements in performance. In September 2011, 40% of the 114 government hospitals then in Ethiopia were regularly reporting their performance in terms of all 36 key indicators; by September 2013 this had risen to 78%, and by September 2014, 84%.33,35,36 The collection and analysis of performance data reportedly motivated hospital-based performance-improvement projects – e.g. the introduction of hourly nurse rounding, distinct staff uniforms, continuous pharmaceutical stock reporting and outpatient appointment systems. Between 2012 and 2013, mean adherence to the operational standards increased from 68.2% to 74.5% while the mean number of deliveries attended each month increased from 12 187 to 16 001.35,36

The national monitoring system also improved evidence-based decision-making at both hospital and government level. Comparative performance results were presented at quarterly meetings with hospitals and regional health bureau staff, which allowed for the open review of performance results, feedback and problem solving. Managers at all levels of the health sector aimed to sustain the enthusiasm for performance monitoring. This required continuous investment in the use of data for tangible improvements, media attention, and team and organizational rewards, and may, in the long term, include institutional accreditation by national bodies. One example was Ethiopia’s recent integration of the 36 performance indicators into a national quality campaign: the Ethiopian Hospital Alliance for Quality. In 2012, the alliance awarded about 55 000 United States dollars each to the 15 hospitals that, according to the relevant performance indicators, offered the most positive patient experiences. In 2014, the ministry of health began the alliance’s second cycle and prioritized institutional maternal mortality.

General observations

Our five-year experience of the development and implementation of a national system for monitoring hospital performance led to several key observations. First, technical investment was critical throughout the process. Many hours of research, writing and development of guidelines were needed to develop a core set of performance indicators that were evidence-based, comprehensive but not overwhelming, and precisely described to allow their consistent calculation and reporting. Ethiopia’s ministry of health led the initiative between 2009 and 2014 and now has full operational responsibility. The ministry has a department exclusively charged with overseeing the country’s management of hospital performance – with support from key champions, including the Minister of Health.

Second, while technical support was critical in the development of the indicators and related documentation, political support was paramount to successful implementation. The ministry of health set a consistent direction and held partners accountable to deliver on its vision for quality improvement. The regional health bureaus also demonstrated strong leadership in advocating for additional performance indicators that fit their regional needs and ensured government and hospital ownership of the monitoring system. Although disagreement emerged, senior government officials continued discussions until a negotiated consensus brought a stable solution that all parties could then support. The process of identifying the best key indicators conferred momentum and helped sustain the monitoring efforts. Although such characteristics may be key to making lasting changes, they can be challenging to embed in any large-scale national efforts.37

Lastly, both the technical and political inputs were accomplished because of the ability to leverage strong management capacity – which was built at hospital level and supported by the executive master’s degree programme. The importance of management capacity has been highlighted by many studies.21,24,32,38–48 The chief-executive-officer model – i.e. the establishment of a dedicated, qualified person in each hospital who is trained in hospital management and supported by a hospital governing board – was pivotal in the successful implementation of the system for monitoring performance. Without the management capacity provided by this model, the ideas and strategies developed in technical and political arenas would not have been translated into practice at the hospital level. Once adequate management capacity has been built, performance management and reporting become achievable – and even desirable for facility-level staff who wish to assess their own progress. The combination of leverage from existing hospital management capacity, technical inputs and political support provided the conditions and tools needed to enable success in this country-led effort to elevate the performance of hospitals in Ethiopia.

Conclusion

Ethiopia’s implementation of a national system for monitoring hospital performance provides an example of how a low-income country can improve health service delivery by creating a culture of accountability. A limitation of our study is that we lacked outcome data and thus were unable to evaluate the impact of the monitoring system on population health. Such an evaluation would require a long and comprehensive follow-up of patients. Despite this limitation, our observations may be helpful to other low-income countries that are seeking to improve the quality of their hospital care. We offer several recommendations. First, a thorough assessment of the health sector’s readiness to change and desire to prioritize hospital quality can be helpful in the early stages of design and implementation. Such an assessment may include interviews with key informants, collection of data about health facilities and human resources and investigation of local university capacity to offer academic programmes in health-care management. Second, partner and donor alignment with the government’s national vision for quality improvement can enhance acceptability and political support. This alignment can enable resources to be focused strategically towards one national effort – rather than be diluted across dozens of potentially competing projects. Finally, early phases of implementation benefit from modest goals and the flexibility for continuous modification and improvement of the performance monitoring system, through active engagement with all stakeholders.

Acknowledgements

We thank the Ethiopian Federal Ministry of Health, the regional health bureaus, the government hospitals and Emily Cherlin.

Funding:

The project received financial support from the United States Centers for Disease Control and Prevention (grant 1U2GPS00284).

Competing interests:

All authors participated in various phases of the national rollout of the Ethiopian system for monitoring hospital performance.

References

1. Berwick DM. Lessons from developing nations on improving health care. BMJ. 2004 May 8;328(7448):1124–9. doi: 10.1136/bmj.328.7448.1124
2. Better hospitals, better health systems, better health. Consultation draft. Washington: Center for Global Development; 2014. Available from: http://www.cgdev.org/sites/default/files/Hospitals%20for%20Health,%20Consultation%20Draft.%209.22.14.pdf [cited 2015 Apr 17].
3. McCannon CJ, Berwick DM, Massoud MR. The science of large-scale change in global health. JAMA. 2007 Oct 24;298(16):1937–9. doi: 10.1001/jama.298.16.1937
4. Smits HL, Leatherman S, Berwick DM. Quality improvement in the developing world. Int J Qual Health Care. 2002 Dec;14(6):439–40. doi: 10.1093/intqhc/14.6.439
5. The WHO health systems framework [Internet]. Manila: World Health Organization Regional Office for the Western Pacific. Available from: http://www.wpro.who.int/health_services/health_systems_framework/en/ [cited 2014 Nov 14].
6. Kruk ME, Freedman LP. Assessing health system performance in developing countries: a review of the literature. Health Policy. 2008 Mar;85(3):263–76. doi: 10.1016/j.healthpol.2007.09.003
7. Arah OA, Westert GP, Hurst J, Klazinga NS. A conceptual framework for the OECD health care quality indicators project. Int J Qual Health Care. 2006 Sep;18 Suppl 1:5–13. doi: 10.1093/intqhc/mzl024
8. Kelley ET, Arispe I, Holmes J. Beyond the initial indicators: lessons from the OECD health care quality indicators project and the US national healthcare quality report. Int J Qual Health Care. 2006 Sep;18 Suppl 1:45–51. doi: 10.1093/intqhc/mzl027
9. Schoen C, Davis K, How SK, Schoenbaum SC. U.S. health system performance: a national scorecard. Health Aff (Millwood). 2006 Nov-Dec;25(6):w457–75. doi: 10.1377/hlthaff.25.w457
10. Kazandjian VA, Matthes N, Wicker KG. Are performance indicators generic? The international experience of the quality indicator project. J Eval Clin Pract. 2003 May;9(2):265–76. doi: 10.1046/j.1365-2753.2003.00374.x
11. Mainz J, Krog BR, Bjørnshave B, Bartels P. Nationwide continuous quality improvement using clinical indicators: the Danish national indicator project. Int J Qual Health Care. 2004 Apr;16 Suppl 1:i45–50. doi: 10.1093/intqhc/mzh031
12. de Vos M, Graafmans W, Kooistra M, Meijboom B, Van Der Voort P, Westert G. Using quality indicators to improve hospital care: a review of the literature. Int J Qual Health Care. 2009 Apr;21(2):119–29. doi: 10.1093/intqhc/mzn059
13. Groene O, Skau JK, Frølich A. An international review of projects on hospital performance assessment. Int J Qual Health Care. 2008 Jun;20(3):162–71. doi: 10.1093/intqhc/mzn008
14. Groene O, Klazinga N, Kazandjian V, Lombrail P, Bartels P. The World Health Organization performance assessment tool for quality improvement in hospitals (PATH): an analysis of the pilot implementation in 37 hospitals. Int J Qual Health Care. 2008 Jun;20(3):155–61. doi: 10.1093/intqhc/mzn010
15. Veillard J, Champagne F, Klazinga N, Kazandjian V, Arah OA, Guisset AL. A performance assessment framework for hospitals: the WHO regional office for Europe PATH project. Int J Qual Health Care. 2005 Dec;17(6):487–96. doi: 10.1093/intqhc/mzi072
16. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood). 2007 Jan-Feb;26(1):75–85. doi: 10.1377/hlthaff.26.1.75
17. Mullen KJ, Bradley EH. Public reporting and pay for performance. N Engl J Med. 2007 Apr 26;356(17):1782–4. doi: 10.1056/NEJMc070578
18. Kotagal M, Lee P, Habiyakare C, Dusabe R, Kanama P, Epino HM, et al. Improving quality in resource poor settings: observational study from rural Rwanda. BMJ. 2009;339:b3488. doi: 10.1136/bmj.b3488
19. Twum-Danso NA, Akanlu GB, Osafo E, Sodzi-Tettey S, Boadu RO, Atinbire S, et al. A nationwide quality improvement project to accelerate Ghana’s progress toward millennium development goal four: design and implementation progress. Int J Qual Health Care. 2012 Dec;24(6):601–11. doi: 10.1093/intqhc/mzs060
20. Ayieko P, Ntoburi S, Wagai J, Opondo C, Opiyo N, Migiro S, et al. A multifaceted intervention to implement guidelines and improve admission paediatric care in Kenyan district hospitals: a cluster randomised trial. PLoS Med. 2011 Apr;8(4):e1001018. doi: 10.1371/journal.pmed.1001018
21. Kwamie A, van Dijk H, Agyepong IA. Advancing the application of systems thinking in health: realist evaluation of the leadership development programme for district manager decision-making in Ghana. Health Res Policy Syst. 2014;12(1):29. doi: 10.1186/1478-4505-12-29
22. Mutale W, Stringer J, Chintu N, Chilengi R, Mwanamwenge MT, Kasese N, et al. Application of balanced scorecard in the evaluation of a complex health system intervention: 12 months post intervention findings from the BHOMA intervention: a cluster randomised trial in Zambia. PLoS ONE. 2014;9(4):e93977. doi: 10.1371/journal.pone.0093977
23. Swanepoel D, Ebrahim S, Joseph A, Friedland PL. Newborn hearing screening in a South African private health care hospital. Int J Pediatr Otorhinolaryngol. 2007 Jun;71(6):881–7. doi: 10.1016/j.ijporl.2007.02.009
24. Wong R, Hathi S, Linnander EL, El Banna A, El Maraghi M, El Din RZ, et al. Building hospital management capacity to improve patient flow for cardiac catheterization at a cardiovascular hospital in Egypt. Jt Comm J Qual Patient Saf. 2012 Apr;38(4):147–53.
25. Cleveland EC, Dahn BT, Lincoln TM, Safer M, Podesta M, Bradley E. Introducing health facility accreditation in Liberia. Glob Public Health. 2011;6(3):271–82. doi: 10.1080/17441692.2010.489052
26. Bukonda N, Tavrow P, Abdallah H, Hoffner K, Tembo J. Implementing a national hospital accreditation program: the Zambian experience. Int J Qual Health Care. 2002 Dec;14 Suppl 1:7–16. doi: 10.1093/intqhc/14.suppl_1.7
27. Hartwig K, Pashman J, Cherlin E, Dale M, Callaway M, Czaplinski C, et al. Hospital management in the context of health sector reform: a planning model in Ethiopia. Int J Health Plann Manage. 2008 Jul-Sep;23(3):203–18. doi: 10.1002/hpm.915
28. Kebede S, Abebe Y, Wolde M, Bekele B, Mantopoulos J, Bradley EH. Educating leaders in hospital management: a new model in Sub-Saharan Africa. Int J Qual Health Care. 2010 Feb;22(1):39–43. doi: 10.1093/intqhc/mzp051
29. McNatt Z, Thompson JW, Mengistu A, Tatek D, Linnander E, Ageze L, et al. Implementation of hospital governing boards: views from the field. BMC Health Serv Res. 2014;14(1):178. doi: 10.1186/1472-6963-14-178
30. Ethiopian Hospital Reform Implementation Guidelines: 1. Addis Ababa: Federal Ministry of Health; 2010.
31. Ethiopian Hospital Reform Implementation Guidelines: 2. Addis Ababa: Federal Ministry of Health; 2010.
32. Kebede S, Mantopoulos J, Ramanadhan S, Cherlin E, Gebeyehu M, Lawson R, et al. Educating leaders in hospital management: a pre-post study in Ethiopian hospitals. Glob Public Health. 2012;7(2):164–74. doi: 10.1080/17441692.2010.542171
33. PEPFAR Ethiopia in-country reporting system (IRS), FY 2011. New York: Clinton Health Access Initiative; 2011.
34. Webster TR, Mantopoulos J, Jackson E, Cole-Lewis H, Kidane L, Kebede S, et al. A brief questionnaire for assessing patient healthcare experiences in low-income settings. Int J Qual Health Care. 2011 Jun;23(3):258–68. doi: 10.1093/intqhc/mzr019
35. PEPFAR Ethiopia in-country reporting system (IRS), FY 2013. New York: Clinton Health Access Initiative; 2013.
36. PEPFAR Ethiopia in-country reporting system (IRS), FY 2014. New York: Clinton Health Access Initiative; 2014.
37. Kickbusch I, Gleicher D. Governance for health in the 21st century. Geneva: World Health Organization; 2012. Available from: http://www.euro.who.int/en/publications/abstracts/governance-for-health-in-the-21st-century [cited 2015 Apr 17].
38. Bradley E, Hartwig KA, Rowe LA, Cherlin EJ, Pashman J, Wong R, et al. Hospital quality improvement in Ethiopia: a partnership-mentoring model. Int J Qual Health Care. 2008 Dec;20(6):392–9. doi: 10.1093/intqhc/mzn042
39. Chimwaza W, Chipeta E, Ngwira A, Kamwendo F, Taulo F, Bradley S, et al. What makes staff consider leaving the health service in Malawi? Hum Resour Health. 2014;12(1):17. doi: 10.1186/1478-4491-12-17
40. Conn CP, Jenkins P, Touray SO. Strengthening health management: experience of district teams in The Gambia. Health Policy Plan. 1996 Mar;11(1):64–71. doi: 10.1093/heapol/11.1.64
41. Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010 Dec 4;376(9756):1923–58. doi: 10.1016/S0140-6736(10)61854-5
42. Lewin S, Lavis JN, Oxman AD, Bastías G, Chopra M, Ciapponi A, et al. Supporting the delivery of cost-effective interventions in primary health-care systems in low-income and middle-income countries: an overview of systematic reviews. Lancet. 2008 Sep 13;372(9642):928–39. doi: 10.1016/S0140-6736(08)61403-8
43. Rowe LA, Brillant SB, Cleveland E, Dahn BT, Ramanadhan S, Podesta M, et al. Building capacity in health facility management: guiding principles for skills transfer in Liberia. Hum Resour Health. 2010;8(1):5. doi: 10.1186/1478-4491-8-5
44. Seims LR, Alegre JC, Murei L, Bragar J, Thatte N, Kibunga P, et al. Strengthening management and leadership practices to increase health-service delivery in Kenya: an evidence-based approach. Hum Resour Health. 2012;10(1):25. doi: 10.1186/1478-4491-10-25
45. Sucaldito NL, Tayag EA, Roces MC, Malison MD, Robie BD, Howze EH. The Philippines field management training program (FMTP): strengthening management capacity in a decentralized public health system. Int J Public Health. 2014 Dec;59(6):897–903. doi: 10.1007/s00038-014-0603-5
46. Swanson RC, Atun R, Best A, Betigeri A, de Campos F, Chunharas S, et al. Strengthening health systems in low-income countries by enhancing organizational capacities and improving institutions. Global Health. 2015;11(1):5. doi: 10.1186/s12992-015-0090-3
47. Umble KE, Brooks J, Lowman A, Malison M, Huong NT, Iademarco M, et al. Management training in Vietnam’s national tuberculosis program: an impact evaluation. Int J Tuberc Lung Dis. 2009 Feb;13(2):238–46.
48. Willis-Shattuck M, Bidwell P, Thomas S, Wyness L, Blaauw D, Ditlopo P. Motivation and retention of health workers in developing countries: a systematic review. BMC Health Serv Res. 2008;8(1):247. doi: 10.1186/1472-6963-8-247
