Abstract
Background: Performance evaluation is essential to quality improvement in healthcare. The current study identified the potential pros and cons of external healthcare evaluation programs and then used them to examine the merits of one such program in a developing country.
Methods: A mixed method design, employing both qualitative and quantitative data collection and analysis techniques, was adopted to achieve the study aim. Subject Matter Experts (SMEs) and professionals were approached in a two-stage process of data collection.
Results: Potential advantages included the greater attractiveness of highly ranked healthcare organizations to their customers/purchasers and the boosted morale of their personnel. Downsides comprised the programs’ over-reliance on the value judgment of surveyors, routinization and the imposition of undue costs on the organizations. In addition, the professionals associated improved, standardized care processes and the judgmental nature of the program survey, as a pro and a con respectively, with the program investigated.
Conclusion: Besides rendering a tentative assessment of the Iranian hospital evaluation program, the study provides those running external performance evaluations with a lens through which to scrutinize the virtues of their own evaluation systems, by identifying the potential advantages and drawbacks of such programs. Moreover, the approach followed could be utilized for the performance assessment of similar evaluation programs.
Keywords: Benefits and Downsides, Healthcare, Performance Evaluation Program, Iran
Introduction
The term ‘performance’ derives from ‘perform’, which denotes fulfilling an obligation or requirement or accomplishing something promised or expected (1). It is defined as the manner in which something functions (2). Robbins and Coulter (3) describe performance as the end result of an activity. Performance can thus be linked to both process and outcome. Performance Measurement (PM) has been defined as “evaluating how well organizations are managed and the value they deliver for customers and other stakeholders” (4). de Bruijn (5) believes that measuring performance can result in more transparency and learning in organizations, while Kwak and colleagues (6) associate PM with promoting accountability, highlighting strengths and weaknesses and guiding the use of organizational resources. Performance Measurement Systems (PMSs) were historically developed for monitoring and maintaining control processes in various organizations (7). A PMS “is a framework (procedure, system, software) to execute PM in a consistent and complete way” (8). PMSs need to be sensitive to changes in the external and internal environment of organizations; review and reprioritize internal objectives in response to those changes; measure performance from multiple, interrelated perspectives; and be easy to use and linked to the organization’s values and strategy (7,9). A host of PMSs have come into practice over recent decades, for instance the performance pyramid system (10), the performance prism (11), the balanced scorecard (12) and Kanji’s Business Excellence Model (13,14).
In healthcare, measurement of performance has become increasingly important for different stakeholders such as policy-makers, providers and patients/purchasers. Growing demands to ensure transparency, accountability and high quality of healthcare services, to control costs, and to reduce variation in the delivery of health services have emerged as the main triggers of this healthcare performance evaluation movement (15). Performance is perceived as a multidimensional concept in healthcare. According to the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), healthcare performance is composed of nine definable, measurable, and improvable dimensions: “efficacy, appropriateness, continuity, safety, efficiency, effectiveness, availability, timeliness, and respect and caring”, and PM is “quantifying processes and outcomes, using one or more of those dimensions” (16).
Most healthcare PMSs originated and developed in the industrial sector and have over time been adopted by Healthcare Organizations (HCOs) (17,18). Health service leaders have in practice tried to measure their performance and search for greater efficiency by adopting successful industrial and commercial models. The arrival of new public management has seemingly accelerated this process of infiltration (19). Healthcare PMSs are classified by their source of control as internal or external, examples being the European Foundation for Quality Management (EFQM) model and accreditation (20). Accreditation, however, has proved the more ubiquitous, given its origin in and compatibility with healthcare (21–23).
Notwithstanding a large number of studies on accreditation programs in healthcare worldwide (24–27), very few have empirically explored and tested the potential advantages and disadvantages of such schemes (28,29). This shortage is particularly obvious in developing countries. A study with a fairly similar title (28) adopted a different methodology and reached rather dissimilar results, while another study looked into the benefits of participating in accreditation surveying (25). The present study aims first to shed light on the potential pros and cons of external evaluation programs, concentrating on accreditation programs, and then to test these advantages and disadvantages empirically in the context of a developing country, Iran.
The Iranian Hospital Evaluation and Accreditation Program (IHEAP) might not be referred to as an accreditation program in the conventional sense, given the generic features (i.e. voluntary and state-independent) of traditional accreditation programs (30,31). However, given growing government involvement in evaluating HCOs and the relevant features of IHEAP (e.g. conducting an external assessment and ranking hospitals against pre-set written standards), it might be considered a quasi-accreditation program. IHEAP follows a national standard-setting and local monitoring (implementation) model (32). Based on the typology proposed by Joint Commission International (JCI) (33), it can be conceptualized overall as a mandated, punitive, quasi-confidential, announced, standard-based, prescriptive and structure-oriented accreditation system with a minimum requirement, absolute (as opposed to comparative) measurement and a multi-level (grade) award.
Methods
The study had a mixed method design, combining qualitative and quantitative components in a sequential manner (34). It was conducted in two stages. At the first stage, two open-ended questions were put to 12 Subject Matter Experts (SMEs), academics and practitioners with relevant background, expertise and/or knowledge in HCOs’ evaluation and accreditation: “What could be the potential advantages of a healthcare evaluation and accreditation program for HCOs?” and “What might be the possible downsides of a healthcare evaluation program for HCOs?” The SMEs were selected on the basis of their publications or work experience, mainly through snowballing, a qualitative sampling technique, and were reached by email, phone or face-to-face interview, as feasible (35). The sufficiency of their number was judged on qualitative grounds such as data saturation, that is, sampling ceased when no new data emerged (36). In addition, relevant literature was drawn upon to complete and enrich (triangulate) the data gathered at the first stage (28,37–39). Respondent validation was further used at this stage to improve the validity of the results (40). Qualitative data were analyzed using the Thematic Analysis (TA) method (36).
At the second stage, which was quantitative in nature, the aim was to determine whether any of the aforementioned pros and cons could be attributed to IHEAP from the perspectives of hospitals, public and private, evaluated by this program. To this end, a researcher-administered questionnaire was developed from the data generated at the first stage and, after testing its validity and reliability, put to senior managerial and clinical members of 12 hospitals in one western Iranian province. It contained 20 questions under the two main headings of ‘advantages’ and ‘disadvantages’, with a Likert-type scale.
The survey involved 60 respondents, comprising all senior managerial and clinical members, such as hospital managers, matrons, housekeeping officers, quality improvement officers and heads of para-clinical departments. The main inclusion criterion was respondents’ full familiarity with and involvement in the hospital accreditation and evaluation processes. An analysis and review of related formal documents (41) and an informal formative evaluation by the researcher identified them as the most knowledgeable about and influential in the evaluation process of the hospitals. A military and a privately owned hospital did not participate.
The researcher or a trained assistant was present while the questionnaires were completed in order to provide further explanation of the questions if required. ‘No effect’ and ‘do not know’ options were also offered in order to cover all possible responses, even though the respondents were all familiar with accreditation programs. In essence, the questionnaire asked whether the respondents associated any of the given advantages or disadvantages with IHEAP.
Descriptive statistical analyses (frequency, percentage) were used to categorize and report the results. Data were gathered in the first half of 2012.
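To illustrate the descriptive analysis described above, the following minimal Python sketch tallies the responses to a single questionnaire item into the frequency and percentage distribution of the kind reported in the tables below. The response list and category labels are hypothetical stand-ins; the study does not report the software actually used.

```python
from collections import Counter

# Response categories of the Likert-type scale used in the questionnaire.
CATEGORIES = ["To a large extent", "To some extent", "To a small extent",
              "No effect", "Do not know"]

def describe_item(responses):
    """Frequency and percentage of each response category for one item."""
    counts = Counter(responses)
    n = len(responses)
    return {c: {"frequency": counts.get(c, 0),
                "percentage": round(100 * counts.get(c, 0) / n, 1)}
            for c in CATEGORIES}

# Hypothetical responses for one advantage item (not actual study data).
example = (["To a large extent"] * 37 + ["To some extent"] * 18
           + ["To a small extent"] * 5)
print(describe_item(example))
```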
Results
All SMEs completed the first stage of the survey, yielding a set of perceived advantages and drawbacks associated with external evaluation programs. Tables 1 and 2 present the benefits and downsides with corresponding explanations.
Table 1. Potential benefits of an external healthcare evaluation program.
Benefits | Explanation |
Resource management | Improved planning, organizing and stewardship of hospital resources (i.e. human, money, information, etc.) |
Outcome improvement | Fewer adverse events, nosocomial infections, post-operative complications and hospital mortality, and shorter length of stay |
Teamwork | Enhanced interdisciplinary communication and coordination in a bid to deal with evaluation requirements |
Patient satisfaction | Higher levels of patient satisfaction in view of quality services delivered |
Staff morale | Greater satisfaction and improved morale of staff because of their higher accreditation rank |
Reliable documentation | Reliable documentation; organized, clear and comprehensive medical and administrative records |
Cost minimization | Tangible cuts in hospitals’ cost of service delivery |
Hospital image | Reputation and prestige of high rank hospitals in society |
Educational benefits | More knowledgeable and well-trained staff |
Structural (physical) preparedness | An investigation into hospitals’ available facilities (e.g. medical equipment, medication etc.) to provide quality services to patients |
Processual improvement | Evidence-based and standardized care processes (procedures) |
Attractiveness to public | Greater attractiveness of ranked hospital to patients and payers such as insurance organizations |
Table 2. Potential downsides of an external healthcare evaluation program.
Downsides | Explanation |
Mission deviation | Deviation of hospitals from their main mission, namely, treating patients |
Resource diversion | Diverting hospital resources away from strategies aimed directly at addressing the quality and safety issues of services |
Workload | Creating extra burden of work for hospitals creating stress and anxiety for their staff |
Costly | Incurring undue cost on hospitals |
Discouragement | Disappointing and discouraging the hospitals from attempts to improve their own functionality following an unsatisfactory score in their prior accreditation |
Routinization and Bureaucratization | Discouraging innovation; remaining stuck in the requirements imposed by PMSs and performing activities in preset ways |
Program Incongruence | Not fitting well with other quality-improvement activities already running in the hospitals such as ISO or EFQM |
Judgmental nature | Over-reliance on value judgment of the surveyors in allocating scores to standards |
PMSs: Performance Measurement Systems; ISO: International Organization for Standardization; EFQM: European Foundation for Quality Management
All but one respondent completed the second stage of the survey. The results represent the respondents’ judgment on the existence and extent of the aforementioned benefits and downsides in IHEAP (Tables 3 and 4). At this stage, given the small sample size, perspectives were not compared by hospital type or respondent demographics. The findings below are reported in order of their effect, starting with the highest.
Table 3. Proportion (%) of the views on the advantages of IHEAP.
Advantage | To a large extent | To some extent | To a small extent | No effect | Do not know |
Improved and standardized care processes | 62 | 30 | 8 | 0 | 0 |
The improvement in the management of hospital resources | 59 | 25 | 16 | 0 | 0 |
Greater levels of patient satisfaction in the hospitals | 57 | 30 | 13 | 0 | 0 |
Hospital reputation/prestige | 57 | 20 | 15 | 8 | 0 |
Ensuring structural (physical) readiness | 57 | 43 | 0 | 0 | 0 |
Reliable documentation | 57 | 24 | 19 | 0 | 0 |
Improvement of outcomes | 51 | 22 | 16 | 8 | 3 |
Educational benefits | 35 | 37 | 22 | 3 | 3 |
Effective teamwork | 35 | 37 | 25 | 3 | 0 |
Greater attractiveness | 30 | 28 | 31 | 8 | 3 |
Staff improved morale | 22 | 49 | 29 | 0 | 0 |
Tangible cost reduction | 17 | 39 | 36 | 8 | 0 |
Average | 44.90 | 32.00 | 19.10 | 3.10 | 0.75 |
IHEAP: Iranian Hospital Evaluation and Accreditation Program
Table 4. Proportion (%) of the views on the downsides of IHEAP.
Disadvantage | To a large extent | To some extent | To a small extent | No effect | Do not know |
Over-reliance on judgment of surveyors | 42 | 22 | 22 | 14 | 0 |
Routinization and bureaucratization | 40 | 26 | 21 | 12 | 1 |
Disappointing and discouraging hospitals | 31 | 25 | 30 | 14 | 0 |
Extra burden of work to hospitals | 14 | 28 | 39 | 19 | 0 |
Deviating hospitals from their mission | 14 | 17 | 39 | 28 | 0 |
Not fitting well with other quality-improvement activities | 14 | 25 | 41 | 17 | 3 |
Incurring undue cost on hospitals | 11 | 17 | 58 | 14 | 0 |
Diverting resources from clinical concerns | 8 | 19 | 48 | 22 | 3 |
Average | 21.75 | 22.37 | 37.25 | 17.50 | 0.87 |
IHEAP: Iranian Hospital Evaluation and Accreditation Program
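The ‘Average’ rows in Tables 3 and 4 appear to be simple unweighted means of each column across the listed items. The short sketch below reproduces that arithmetic from the Table 4 percentages (transcribed by hand; treating the row as an unweighted column mean is an assumption, and the published row seems to truncate 22.375 and 0.875 to 22.37 and 0.87).

```python
# Table 4 percentages per downside, in column order:
# [large extent, some extent, small extent, no effect, do not know]
table4 = [
    [42, 22, 22, 14, 0],  # Over-reliance on judgment of surveyors
    [40, 26, 21, 12, 1],  # Routinization and bureaucratization
    [31, 25, 30, 14, 0],  # Disappointing and discouraging hospitals
    [14, 28, 39, 19, 0],  # Extra burden of work to hospitals
    [14, 17, 39, 28, 0],  # Deviating hospitals from their mission
    [14, 25, 41, 17, 3],  # Not fitting with other quality-improvement activities
    [11, 17, 58, 14, 0],  # Incurring undue cost on hospitals
    [8, 19, 48, 22, 3],   # Diverting resources from clinical concerns
]

# Unweighted mean of each column across the eight items.
averages = [sum(col) / len(table4) for col in zip(*table4)]
print(averages)  # [21.75, 22.375, 37.25, 17.5, 0.875]
```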
Advantages
Approximately two-thirds (62%) of the respondents believed that IHEAP could lead to improved and standardized care processes, while a small percentage (8%) saw little such effect. Fifty-nine percent expressed affirmative views on the role of IHEAP in improving the management of hospital resources. Likewise, only 16% and 13% of the respondents, respectively, doubted IHEAP’s capability to improve the quality of hospital services (viewed through patient care outcomes and patient satisfaction as proxies), while more than half of the respondents confirmed this ability of the program.
Another advantage noted by respondents was hospital reputation, that is, hospitals ranked highly by PMSs could enjoy an enhanced image and prestige with the public. In fact, 57% of respondents indicated that IHEAP had improved their hospital’s reputation, although a small share (8%) perceived no such effect.
Other benefits raised by the SMEs included the greater physical (structural) preparedness and more reliable documentation of the HCOs evaluated, both likely to be fostered by external PMSs. About 57% of respondents associated these features with IHEAP, although one-third perceived the program as ineffective in leading hospitals to create reliable documentation. As to the other advantages, such as the educational benefits of the program for the hospitals, teamwork, and hospital attractiveness to patients and payers as a result of a high accreditation rank, a relatively balanced distribution of views emerged from the data. Approximately one-third of the respondents (35%, 35%, and 30%) fully agreed with these effects of IHEAP for the hospitals, and another third expressed moderate views. A small percentage (8%) did not attribute any attractiveness to their hospital as a result of its evaluation and ranking by IHEAP, and 3% were unaware of this effect.
Moderate views were dominant (49%) among the respondents as to the role of IHEAP in improving the satisfaction and morale of hospital staff, with 22% believing that staff satisfaction and morale might improve if their hospital earned a high rank from IHEAP. The cost reduction effects of IHEAP were largely doubted by respondents, insofar as 83% rejected the notion that the program could substantially lay the groundwork for cost minimization in hospitals.
Disadvantages
According to Table 4, 42% of respondents claimed that surveyors were highly reliant on their own judgment in assessing their hospitals, although 14% rejected this issue outright and 22% believed that IHEAP surveyors might use their judgment only ‘to some extent’. Likewise, 40% of respondents indicated that IHEAP had ‘to a large extent’ kept the hospitals busy with daily routine activities (routinization and bureaucratization), whereas 21% claimed that routinization might be caused by IHEAP only to a small degree.
The distribution of views on the discouragement caused in hospitals by IHEAP was somewhat balanced. Thirty-one percent of respondents stated that IHEAP might largely give rise to dissuasion in the hospitals evaluated, whilst 30% acknowledged this effect only ‘to a small extent’ and 14% disregarded such an effect altogether.
Relatively similar results emerged for downsides of IHEAP such as imposing an extra burden of work on hospitals, deviating hospitals from their main mission, and not fitting well with other quality-improvement initiatives already running in the hospitals. Only around one-seventh (14%) believed this program might ‘to a large extent’ cause these drawbacks, while well over one-third opposed that view and approximately one-fifth discounted such effects entirely.
It was largely argued that IHEAP was not incurring undue costs on hospitals, and only around 10% of the respondents agreed that the program could ‘highly’ give rise to this shortcoming. Similar views emerged with regard to IHEAP’s role in diverting financial resources from clinical concerns, with 22% objecting to this perspective and 48% arguing that it might divert resources from the hospitals’ clinical practices only ‘to a small degree’.
Discussion
A number of advantages and disadvantages were extracted from the data, some of which were corroborated by the literature (25,27,42–45). They were further ordered in terms of the likelihood of their presence in an external PMS, drawing on the perceptions of people whose organizations are evaluated by such a program. These attitudes, whilst subjective in nature, were expected to stem from the respondents’ tenure and familiarity with accreditation and evaluation programs; this was reflected in the small number of respondents choosing the ‘do not know’ option in the questionnaires.
Overall, the views on the effects of IHEAP on the hospitals were optimistic. For example, for seven of the 12 advantages associated with IHEAP, approximately 60% of respondents supported the constructive effects of the program. There was broad agreement on two advantages of IHEAP: the program’s role in ensuring physical and structural readiness and in improving and standardizing care processes. This finding appears to be explained by the program’s standards and orientation, which are argued to relate more to the physical and structural aspects of hospitals (46). In contrast, as to the downsides, the highest agreement was only 42%, relating to the ‘subjective judgment’ of the program’s surveyors.
On average, nearly 50% of respondents believed that the advantages were present ‘to a large degree’ in IHEAP, compared with only 19% arguing that the benefits existed ‘to a small degree’. In contrast, 21.75% claimed that the identified downsides featured ‘to a large extent’ in IHEAP, against 37.25% believing that these effects existed only ‘to a small extent’; 17.50% of respondents associated no such effects with the program.
Moderate views were also dominant among respondents, consistent with the central tendency effect, implying that people by and large tend to stay in the middle (47,48).
For some virtues, such as ‘tangible cost reduction’ and ‘raised staff satisfaction and morale’, a greater percentage of respondents doubted that these advantages were generated by IHEAP, while approximately 10% perceived no such effect. Cost containment is argued to be an important function of evaluation and accreditation (49). Nevertheless, given that cost reduction from such programs might only materialize in the long term for an HCO, considering the peculiar features of healthcare (50), respondents did not emphasize this advantage.
PMSs can affect the reputation of HCOs by awarding high or low ranks (51). Consistent with this, respondents held similar opinions about IHEAP. This perception could arguably drive staff towards self-improvement in order to gain more prestige; likewise, working in a highly ranked hospital may positively influence staff morale (52,53). However, the largest group of respondents (49%) believed IHEAP only ‘moderately’ improved staff morale, and 29% attributed this effect only to a ‘small extent’. It seems they did not envisage a strong motivational effect for this program, which might be because IHEAP has been running for quite a long time and routinization has to some degree set in (54).
Two weaknesses stood out in relation to IHEAP, with ‘over-reliance on the value judgment of the program’s surveyors in rating the hospitals’ and ‘routinization and bureaucratization’ each receiving just over 40% of the views, as opposed to roughly one-fifth denying such shortcomings. The former is said to be the Achilles’ heel of evaluation programs that deploy external surveyors and leave the scoring process entirely to the surveyors’ discretion and judgement (38,55). This may endanger the validity of the accreditation process (56). Referred to in the literature as the ‘ossification effect’ (57), routinization was also indicated by respondents as another strong dysfunction of IHEAP, as noted by other studies (39). Ossification denotes that, because of PMSs, HCOs mostly focus on routine and specified areas and provide services in ordinary and conventional ways, rather than trying new methods, so as not to miss their chance of a higher evaluation score.
The tunnel vision effect, which occurs when HCOs’ and their personnel’s time and concentration are directed towards achieving the measures of PMSs while other, even important, clinical priorities not required by those systems are ignored (58), was found to be less apparent in IHEAP. Thirty-nine percent of respondents disagreed that such drawbacks as ‘diverting resources from clinical concerns’ or ‘deviating hospitals from their main mission’ were caused by IHEAP. Although this is a positive point, a possible explanation for their lower prevalence in IHEAP could be that a large proportion of the program’s standards and requirements relate to the hospitals’ physical and non-clinical elements and processes (41); therefore, less of the hospitals’ clinical time and money might be diverted. Moreover, as IHEAP is a free program, the managerial and clinical professionals showed little concern about resource diversion from clinical matters in their HCOs.
The distribution of views on the disadvantage ‘discouraging hospitals from further efforts following an unsatisfactory rank’ was somewhat balanced, so that no conclusion can be drawn on whether the program gives rise to this downside. Some of the hospitals evaluated by IHEAP were engaged in other evaluation systems such as ISO. Notwithstanding the differing rationales behind these programs (22,59), only 14% of the respondents pointed to a substantial conflict between the procedures and requirements of the differing programs in their hospital, indicating alignment between IHEAP and the other regulatory and improvement initiatives within their hospitals. Such alignment has been identified as a critical enabler of effective implementation of accreditation programs (27). The highest proportion of respondents (58%) believed IHEAP incurs undue costs on hospitals only slightly. This is understandable because, unlike some voluntary programs in which hospitals are required to pay subscription fees, IHEAP is a free, state-run evaluation program for hospitals. Only where hospitals fail to obtain the highest rank does the tariff of their hotel-type services drop, reducing their income (41), which can be seen as an indirect cost incurred by IHEAP. It is noteworthy that a modest percentage (22.3%) of respondents was equivocal towards the program, that is, they thought it may to some extent generate those disadvantages.
On the whole, as far as patient outcomes and satisfaction are concerned (as close proxies of quality), IHEAP was found to do little to promote or improve them, consistent with findings on the performance of similar programs (60,61). Nevertheless, the main value added by IHEAP to the hospitals was improvement in their physical and processual circumstances, which could in a way be justified given the high emphasis (score) placed on these aspects in the evaluation (62). IHEAP was not found to incur undue costs on hospitals, although this is an emerging concern for HCOs aspiring to be accredited (63,64). IHEAP resembled other external assessment programs in terms of its judgmental nature.
Limitations
There are limitations to this study. First, the number of respondents was small, although it included all eligible members in the study area; a future survey could replicate the second phase with a larger, country-wide population. Second, the study focused exclusively on a single local accreditation program, not on hospital evaluation and accreditation as a whole.
Conclusion
Inquiry into the benefits and disadvantages of hospital evaluation programs can provide professionals in the evaluation and accreditation of HCOs with a lens through which to scrutinize the merits of their local program. Managers and decision-makers associated with IHEAP should attend to the advantages rated as insignificant and to the strongly emphasized downsides of the program in order to enhance its performance. This analysis may also be of value to other groups and bodies running external evaluation programs in similar contexts. Future qualitative research could explore the reasons behind the attitudes hospitals expressed towards IHEAP.
There are three main strengths of this research: 1) it identifies the main advantages and disadvantages of an accreditation and evaluation program in generic terms; 2) it offers a perceptual evaluation of IHEAP’s performance, which could be utilised as a method for the performance assessment of similar PMSs; and 3) the pros and cons of IHEAP are ranked according to their perceived occurrence.
Similar studies could first collect comprehensive data on the pros and cons of PMSs and then, ideally, rely on both perceptual and factual (tangible evidence) data to validate the identified benefits and disadvantages.
Acknowledgments
My special thanks go to all SMEs, hospital managers and clinicians for their kind cooperation, and to the anonymous reviewers of this paper for their invaluable comments.
Ethical issues
The study was approved by the ethics committee of Hamedan University of Medical Sciences.
Competing interests
The author declares that he has no competing interests.
Author’s contribution
EJ is the single author of the manuscript.
Key messages
Implications for policy makers
Policy-makers and managers are informed of the strengths and weaknesses of the Iranian Hospital Evaluation and Accreditation Program (IHEAP) and could rework this program accordingly.
The prominent advantages and disadvantages identified deserve greater attention from the authorities.
IHEAP assessment results could also be considered whenever new external evaluation programs for hospitals are to be developed.
The perspectives of other provinces on these advantages and downsides could also be sought to develop a better picture of the program across the country.
IHEAP could be welcomed by hospitals when it is modified based on their feedback.
Implications for public
Patients and the public deserve to receive high quality health services. They are more likely to enjoy such services if a sound performance monitoring and evaluation system assesses healthcare organizations. Indeed, a well-developed and well-functioning evaluation program is expected to promote societal trust in healthcare.
Citation: Jaafaripooyan E. Potential pros and cons of external healthcare performance evaluation systems: real-life perspectives on Iranian hospital evaluation and accreditation program. Int J Health Policy Manag 2014; 3: 191–198. doi: 10.15171/ijhpm.2014.84
References
1. Dianis NL, Cummings C. An interdisciplinary approach to process performance improvement. J Nurs Care Qual. 1998;12:49–59. doi: 10.1097/00001786-199804000-00011.
2. Øvretveit J. Evaluating Health Interventions: An Introduction to Evaluation of Health Treatments, Services, Policies, and Organizational Interventions. Buckingham, Philadelphia: Open University Press; 1998.
3. Robbins SP, Coulter M. Management. 7th edition. New Jersey: Prentice-Hall International, Inc.; 2002.
4. Moullin M. Performance measurement definitions: linking performance measurement and organisational excellence. Int J Health Care Qual Assur. 2007;20:181–3. doi: 10.1108/09526860710743327.
5. de Bruijn JA. Managing Performance in the Public Sector. 2nd edition. Oxon: Routledge; 2007.
6. Kwak NK, McCarthy KJ, Parker GE. A human resource planning model for hospital/medical technologists: an analytic hierarchy process approach. J Med Syst. 1997;21:173–87. doi: 10.1023/a:1022812322966.
7. Purbey S, Mukherjee K, Bhar C. Performance measurement system for healthcare processes. International Journal of Productivity and Performance Management. 2006;56:241–51. doi: 10.1108/17410400710731446.
8. Lohman C, Fortuin L, Wouters M. Designing a performance measurement system: a case study. Eur J Oper Res. 2004;156:267–86. doi: 10.1016/s0377-2217(02)00918-9.
9. Bititci US, Turner T, Begemann C. Dynamics of performance measurement systems. International Journal of Operations & Production Management. 2000;20:692–704. doi: 10.1108/01443570010321676.
10. Lynch R, Cross K. Measure Up! Yardsticks for Continuous Improvement. Oxford: Blackwell; 1991.
11. Neely A, Adams C, Kennerley M. The Performance Prism: The Scorecard for Measuring and Managing Business Success. London: Prentice Hall Financial Times; 2002.
12. Kaplan RS, Norton DP. The balanced scorecard: measures that drive performance. Harv Bus Rev. 1992;70:71–9.
13. Kanji GK. Measurement of business excellence. Total Quality Management & Business Excellence. 1998;9:633–43. doi: 10.1080/0954412988325.
14. Kanji GK. Measuring Business Excellence. London: Routledge; 2002.
15. Hilarion P, Suñol R, Groene O, Vallejo P, Herrera E, Saura RM. Making performance indicators work: the experience of using consensus indicators for external assessment of health and social services at regional level in Spain. Health Policy. 2008;90:94–103. doi: 10.1016/j.healthpol.2008.08.002.
16. Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Tools for Performance Measurement in Health Care: Quick Reference Guide. Washington: Joint Commission Resources; 2002.
17. Ballantine J, Brignall S, Modell S. Performance measurement and management in public health services: a comparison of UK and Swedish practice. Management Accounting Research. 1998;9:71–94. doi: 10.1006/mare.1997.0067.
18. Brignall S, Modell S. An institutional perspective on performance measurement and management in the new public sector. Management Accounting Research. 2000;11:281–306. doi: 10.1006/mare.2000.0136.
19. Hood C. The “new public management” in the 1980s: variations on a theme. Accounting, Organizations and Society. 1995;20:93–109. doi: 10.1016/0361-3682(93)e0001-w.
20. Veillard J, Champagne F, Klazinga N, Kazandjian V, Arah OA, Guisset AL. A performance assessment framework for hospitals: the WHO regional office for Europe PATH project. Int J Qual Health Care. 2005;17:487–96. doi: 10.1093/intqhc/mzi072.
21. Scrivens E, Lodge J. Accreditation: protecting the professional or the consumer? Journal of Management Studies. 1997;34:167–9. doi: 10.1093/intqhc/12.3.243.
22. Donahue KT, Vanostenberg P. Joint Commission International accreditation: relationship to four models of evaluation. Int J Qual Health Care. 2000;12:243–6. doi: 10.1093/intqhc/12.3.243.
23. Heaton C. External peer review in Europe: an overview from the ExPeRT Project. Int J Qual Health Care. 2000;12:177–82. doi: 10.1093/intqhc/12.3.177.
24. Greenfield D, Hinchcliff R, Pawsey M, Westbrook J, Braithwaite J. The public disclosure of accreditation information in Australia: stakeholder perceptions of opportunities and challenges. Health Policy. 2013. doi: 10.1016/j.healthpol.2013.09.002.
25. Lancaster J, Braithwaite J, Greenfield D. Benefits of participating in accreditation surveying. Int J Health Care Qual Assur. 2010;23:141–52. doi: 10.1108/09526861011017076.
26. Braithwaite J, Shaw CD, Moldovan M, Greenfield D, Hinchcliff R, Mumford V, et al. Comparison of health service accreditation programs in low- and middle-income countries with those in higher income countries: a cross-sectional study. Int J Qual Health Care. 2012;24:568–77. doi: 10.1093/intqhc/mzs064.
27. Hinchcliff R, Greenfield D, Westbrook JI, Pawsey M, Mumford V, Braithwaite J. Stakeholder perspectives on implementing accreditation programs: a qualitative study of enabling factors. BMC Health Serv Res. 2013;13:437. doi: 10.1186/1472-6963-13-437.
28. Tabrizi JS, Gharibi F, Wilson AJ. Advantages and disadvantages of health care accreditation models. Health Promot Perspect. 2011;1:1–31. doi: 10.5681/hpp.2011.001.
29. Cerqueira M. A Literature Review on the Benefits, Challenges and Trends in Accreditation as a Quality Assurance System. British Columbia: Ministry of Children and Family Development; 2006.
30. Scrivens E. Accreditation: Protecting the Professional or the Consumer? Buckingham: Open University Press; 1995.
31. Scrivens E. Assessing the value of accreditation systems. Eur J Public Health. 1997;7:4–8. doi: 10.1093/eurpub/7.1.4.
32. Scrivens E. A taxonomy of the dimensions of accreditation systems. Soc Policy Adm. 1996;30:114–24. doi: 10.1111/j.1467-9515.1996.tb00431.x.
33. Van Ostenberg P. Issues in Developing National Accreditation Programs to Improve the Quality and Safety of Patient Care. Joint Commission International; 2005.
34. Creswell JW. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage; 2007.
35. Silverman D. Doing Qualitative Research. 3rd edition. Thousand Oaks, CA: Sage; 2010.
36. Green J, Thorogood N. Qualitative Methods for Health Research. London: Sage; 2004.
37. Barbara B. Factors Influencing Costs and Effectiveness of Accreditation. Oakbrook Terrace: JCAHO; 2006.
38. Jaafaripooyan E, Agrizzi D, Akbari-Haghighi F. Healthcare accreditation systems: further perspectives on performance measures. Int J Qual Health Care. 2011;23:645–56. doi: 10.1093/intqhc/mzr063.
39. Yarmohammadian M, Shokri A, Bahmanziari N, Kordi K. The blind spots on accreditation program (in Persian). Journal of Health System Research. 2013;9:1158–66.
40. Pope C, Mays N. Qualitative Research in Health Care. 3rd edition. Oxford: Blackwell Publishing; 2006.
41. Ministry of Health and Medical Education (MoHME). [The instruction of standards and principles of evaluation of the general hospitals]. Tehran: Centre for healthcare accreditation and supervision, Healthcare organisations evaluation group; 1997.
42. Ng GK, Leung GK, Johnston JM, Cowling BJ. Factors affecting implementation of accreditation programmes and the impact of the accreditation process on quality improvement in hospitals: a SWOT analysis. Hong Kong Med J. 2013;19:434–46. doi: 10.12809/hkmj134063.
43. Øvretveit J, Ham C. Action Evaluation of Health Programmes and Changes: A Handbook for a User-focused Approach. Oxon: Radcliffe Publishing; 2002.
44. Rooney A, van Ostenberg P. International accreditation: what's good practice in Sao Paulo is good practice in Istanbul. J AHIMA. 2004;75:38–9.
45. El-Jardali F, Jamal D, Dimassi H, Ammar W, Tchaghchaghian V. The impact of hospital accreditation on quality of care: perception of Lebanese nurses. Int J Qual Health Care. 2008;20:363–71. doi: 10.1093/intqhc/mzn023.
46. Moghimi A. [Familiarity with evaluation concepts and establishing quality measures]. Tehran: Ministry of Health, Centre for accreditation and supervision, Healthcare organisations evaluation group; 2004.
47. Kim PS. Performance Management and Performance Appraisal in the Public Sector [internet]. 2011. Available from: http://unpan1.un.org/intradoc/groups/public/documents/un-dpadm/unpan045257.pdf
48. Vallabhaneni D. What's Your MBA IQ: A Manager's Career Development Tool. Hoboken: John Wiley & Sons; 2009.
49. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21:103–11. doi: 10.1377/hlthaff.21.5.103.
50. Gauld R. Comparative Health Policy in the Asia-Pacific. Maidenhead, Berkshire: Open University Press; 2005.
51. Mannion R, Davies H, Marshall M. Impact of star performance ratings in English acute hospital trusts. J Health Serv Res Policy. 2005;10:18–24. doi: 10.1258/1355819052801877.
52. Pomey M, Lemieux-Charles L, Champagne F, Angus D, Shabah A, Contandriopoulos A. Does accreditation stimulate change? A study of the impact of the accreditation process on Canadian healthcare organizations. Implement Sci. 2010;5:1–14. doi: 10.1186/1748-5908-5-31.
53. Tandulwadikar A, Chigullapalli R. World-class via Accreditations [internet]. Asian Hospital & Healthcare Management; 2014. Available from: http://www.asianhhm.com/healthcare_management/healthcare_accreditations.htm
54. Jaafaripooyan E. Contextual Approach to the Performance Analysis of Iran's National Accreditation Programme for Healthcare Organisations [PhD thesis]. Southampton: University of Southampton; 2011.
55. Scrivens E. An External Evaluation of the Hospital Accreditation Programme. Bristol: University of Bristol; 1993.
56. McAlary B. The reliability and validity of hospital accreditation in Australia. J Adv Nurs. 1981;6:409–11. doi: 10.1111/j.1365-2648.1981.tb03242.x.
57. Kelman S, Friedman JN. Performance improvement and performance dysfunction: an empirical examination of distortionary impacts of the emergency room wait-time target in the English National Health Service. Journal of Public Administration Research and Theory. 2009;19:917–46. doi: 10.1093/jopart/mun028.
58. Bevan G, Hood C. Have targets improved performance in the English NHS? BMJ. 2006;332:419–22. doi: 10.1136/bmj.332.7538.419.
59. Shaw CD. External quality mechanisms for health care: summary of the ExPeRT project on visitatie, accreditation, EFQM and ISO assessment in European Union countries. Int J Qual Health Care. 2000;12:169–75. doi: 10.1093/intqhc/12.3.169.
60. Greenfield D, Braithwaite J. Health sector accreditation research: a systematic review. Int J Qual Health Care. 2008;20:172–83. doi: 10.1093/intqhc/mzn005.
61. Sack C, Scherag A, Lütkes P, Günther W, Jöckel KH, Holtmann G. Is there an association between hospital accreditation and patient satisfaction with hospital care? A survey of 37 000 patients treated by 73 hospitals. Int J Qual Health Care. 2011;23:278–83. doi: 10.1093/intqhc/mzr011.
62. Ministry of Health and Medical Education (MoHME). [The instruction of standards and principles of evaluation of the general hospitals: Emergency department]. Tehran: Centre for healthcare accreditation and supervision, Healthcare organisations evaluation group; 1997.
63. Fairbrother G, Gleeson M. EQuIP accreditation: feedback from a Sydney teaching hospital. Aust Health Rev. 2000;23:200–3. doi: 10.1071/ah000153.
64. Bukonda N, Tavrow P, Abdallah H, Hoffner K, Tembo J. Implementing a national hospital accreditation program: the Zambian experience. Int J Qual Health Care. 2003;14:7–16. doi: 10.1093/intqhc/14.suppl_1.7.