Abstract
Objective:
This paper summarizes findings of a comprehensive, systematic review of the peer-reviewed and grey literature on performance measurement according to each stage of the performance measurement process – conceptualization, selection and development, data collection, and reporting and use. It also outlines implications for practice.
Methods:
Six hundred sixty-four articles about organizational performance measurement from the health and business literature were reviewed after systematic searches of the literature, multi-rater relevancy ratings, citation checks and expert author nominations. Key themes were extracted and summarized from the most highly rated papers for each performance measurement stage.
Results:
Despite a virtually universal consensus on the potential benefits of performance measurement, little evidence currently exists to guide practice in healthcare. Issues in conceptualizing systems include strategic alignment and scope. There are debates on the criteria for selecting measures and on the types and quality of measures. Implementation of data collection and analysis systems is complex and costly, and challenges persist in reporting results, preventing unintended effects and putting findings for improvement into action.
Conclusion:
There is a need for further development and refinement of performance measures and measurement systems, with a particular focus on strategies to ensure that performance measurement leads to healthcare improvement.
The purpose of our review was to summarize the current business and healthcare literature on performance measurement (PM) systems and to make recommendations for research and practice. Details of methods are provided in Part I (Healthcare Policy, 1.4). This second paper reports in greater depth on themes and issues extracted from the peer-reviewed and grey literature in relation to stages of the PM process.
The PM Process
The PM literature lacks consensus on concepts and definitions. However, the PM process is typically described in terms of four broad stages (Nadzam and Nelson 1997; Nutley and Smith 1998; Bourne et al. 2000; Ibrahim 2001; Smith and Goddard 2002), although many authors caution that the process is more dynamic and less linear than a simple set of stages implies. The stages are (a) conceptualization, (b) selection and/or development of measures, (c) data collection and processing and (d) reporting and using results.
Conceptualization
Two major issues on conceptualization of PM systems are prominent in the literature: aligning with organizational strategic direction and determining the appropriate scope for the system.
Strategy
There is increasing emphasis on aligning PM activities with the strategic direction of the organization, and a general sentiment in both business and health that such alignment is rare in practice. However, maintaining a strategic focus is acknowledged to be more difficult in healthcare than in business for several reasons.
First, organizational goals are often difficult to operationalize in healthcare because of the complexity of treatments, settings and patient groups (Baker and Pink 1995). Public service organizations have broader goals (including societal goals) and “a more complex pattern of accountability than the corporate financial statement” (Smith 1993: 137). The dual management model (professional and administrative) and the interrelationships among multiple internal and external stakeholders (Kleinpell 1997; Lemieux-Charles et al. 2002), each with its particular interest in setting the PM agenda (Nadzam and Nelson 1997; Collopy 1998), create greater complexity. In health services the policy environment is very fluid (Smith and Goddard 2002), perhaps more so than in business environments.
Second, causal links between service and health outcomes are very difficult to specify for both medical and public health interventions, owing to the limits of evidence in medicine and the reality that healthcare is only one of several predictors of health status (Williams et al. 1992; Handler et al. 2001; Leggat et al. 1998).
Third, “customer” dynamics are less straightforward in healthcare than in the purchase of a commercial product or service (Newhouse 2002). People seek care out of necessity, not desire. The provider often has a local monopoly on a given service, limiting both comparators for judgments about performance and opportunities to seek alternatives (Smith 1993). Repeat business is an important commercial goal; in healthcare, it is often viewed as an unfortunate necessity arising when a definitive cure is unattainable. The consumer is also typically less knowledgeable about the service content than in commercial transactions (Jennings and Staggers 1999) and is often vulnerable by virtue of being ill and possibly afraid when seeking care. These realities complicate the patient satisfaction and perceived care quality domains of PM (Jennings and Staggers 1999). The message about the task of strategic conceptualization of a PM system is clear in both sets of literature: “what gets measured gets delivered,” and there are undesirable consequences, from a strategic point of view, for organizations that collect the wrong measures (Voelker et al. 2001).
Scope
The second major issue in conceptualization of PM systems in both literatures is determining the appropriate system scope. Scope decisions apply to three dimensions: vertical (level of the healthcare organization or system), horizontal (breadth across the continuum of care or business units) and longitudinal (temporal) (Collopy 1998). In business there is a trend towards involving all levels of the organization in a common vision that can be reinforced by the PM system itself (Neely et al. 1995; Epstein and Manzoni 1998; Lockamy 1998; Legnini et al. 2000). “One of the major problems with conventional PM is the ease with which organizational wholes are carved up, and their interactions with their environments cease to be of interest as management functions devise measures (and associated targets) for their own territory. This reductionism is associated with some of the problems identified by managers when they seek to improve performance” (Holloway 2001: 173).
Healthcare PM activities are also highly fragmented, as evidenced by the sheer number of single-level or single-service systems described in the literature. A single-level focus creates debates about the value of one level over another: some charge that the patient level is often not addressed in system-level approaches (e.g., Greenhalgh et al. 1996), while others express the opposite concern (e.g., Barrell 2000). Many call for greater consolidation through overarching goals and greater consensus and coordination (Eddy 1998; Kizer 2001), and increasingly multi-level systems are being conceptualized (e.g., Moscovice et al. 1995; Luttman 1998; Evans et al. 2001; Handler et al. 2001). Even so, Nutley and Smith (1998: 53) contend that “calls for a top to bottom PM architecture have largely been ignored.” Others caution that the PM needed for high-level management and accountability differs from that needed for daily operations (McLoughlin et al. 2001; Voelker et al. 2001).
The horizontal scope of systems is also debated. The business literature reports a few companies attempting to establish measures that capture relevant information across company boundaries (such as with supplier networks), but acknowledges this to be very difficult (Fawcett and Cooper 1998). The roots of healthcare PM are clearly in acute care, and hospital-bounded approaches dominate. Separate PM systems are under development and testing for other components, such as public health (Corso et al. 2000; Handler et al. 2001; Kates et al. 2001), but our review found no systems spanning acute and community care. DeRosario (1999: 38) notes that “to catch the next wave of performance change, we need to begin measuring activities that occur between healthcare sectors,” and others concur (Hall 1996; Kizer 2001). A PM system should match the service delivery model, and it is likely that broader PM systems will emerge with the trend towards regionalized, integrated health services in many jurisdictions. With respect to the temporal dimension, a few authors suggest that PM systems need to measure the process of care over time for an individual (Bishop and Pelletier 2001).
Measures selection or development
Many authors stress that, according to measurement theory, measures are only imperfect reflections of the reality they are intended to capture. In addition, the choice of what to measure among the many options is an imprecise process (van Peursem et al. 1995), reflecting a system of values and social goals (Sheldon 1998). Ibrahim (2001: 431) writes that “performance indicators are inherently controversial” because they require a judgment about what constitutes quality.
Frameworks
After general conceptualization, the next task in PM is to select or develop measures. Optimally, a framework ensures balance across strategic improvement areas and guides the measurement process. An ideal framework describes domains (measure groupings) and dimensions (e.g., organizational levels), but most frameworks reviewed are simply a list of indicators and/or domains (e.g., Lied 1999). More complex frameworks also include one or more dimensions such as level of the healthcare system (McEwan and Goldner 2000) or stakeholder perspective (Nadzam and Nelson 1997; Kizer 2001; McIntyre et al. 2001). We found little consistency in the combinations of 21 domains used in 17 major health PM frameworks reviewed (Adair et al. 2003).
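To make the distinction between domains and dimensions concrete, here is a minimal sketch in Python (all domain and level names are hypothetical, not taken from any framework reviewed) of a framework represented as a classification grid that each candidate measure must fit:

```python
from dataclasses import dataclass

# Hypothetical framework grid: four domains crossed with one dimension
# (level of the healthcare system). Names are illustrative only.
DOMAINS = {"effectiveness", "safety", "accessibility", "patient experience"}
LEVELS = {"patient", "program", "organization", "region"}

@dataclass
class Measure:
    name: str
    domain: str   # measure grouping
    level: str    # position on the system-level dimension

def fits_framework(m: Measure) -> bool:
    """A measure belongs in the framework only if both classifications are defined."""
    return m.domain in DOMAINS and m.level in LEVELS

print(fits_framework(Measure("30-day readmission rate", "effectiveness", "organization")))
```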
We identified eight business frameworks that included both non-financial and financial measures (Lebas 1995; Neely et al. 1995; Kaplan and Norton 1996, 2001; Epstein and Manzoni 1998; Kueng and Krahn 1999; Kueng 2000; Kanji and Moura 2002) – called multi-dimensional or portfolio approaches – that are tabulated in the full report (Adair et al. 2003). Neely et al. (2000) and Kueng (2000) provide noteworthy reviews of business approaches. The most popular framework in business is the Balanced Scorecard (BSC), which has also been applied in healthcare. Some other approaches to the management of quality in the business literature are noteworthy because of their recent diffusion into healthcare and their close relationship with PM. First are the quality award programs, including the Malcolm Baldrige National Quality Award, the European Foundation for Quality Management’s Business Excellence Model (Neely et al. 1995; Kueng and Krahn 1999; DeBaylo 1999) and many spin-off quality award programs. Another, Hoshin Kanri, developed in Japan in the 1960s and has since been widely disseminated; it is noteworthy for having extensive coverage in the popular press worldwide but virtually none in the western research literature (Tennant and Roberts 2000). The BSC and other portfolio approaches have evolved towards the selection of more forward-looking, strategy-focused measures, but many criticisms of these early-stage approaches persist (Kueng and Krahn 1999; Mooraj et al. 1999; Kueng 2000; Baughan et al. 2002; Brignall 2002; Morgan and Braganza 2002) that parallel those in the healthcare PM literature.
Issues in choosing measures
Several predominant themes relate to measures selection, including the sheer growth in numbers of measures and systems, as well as issues related to the types of measures and their limitations.
In recent years, measures (both indicators and comprehensive instruments) have become so numerous that it would be nearly impossible to catalogue them completely (Nutley and Smith 1998; Sheldon 1998). The national indicator library of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) is believed to have more than 1,000 measures, and the database of the Agency for Healthcare Research and Quality (AHRQ) contained more than 1,197 in 53 sets by 1995 (AHRQ 2002). Unless indicators are commonly defined, comparative reporting is difficult, if not impossible. The development of measures databases is a welcome sign that this duplication of effort may be waning (e.g., Jennings and Staggers 1999; Hermann et al. 2000). Collaborative efforts to standardize measures are another promising development (Braun and Zibrat 1996; Leggat et al. 1998).
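To illustrate why common definitions matter for comparative reporting, the following sketch shows what a standardized entry in a shared indicator library might look like; the schema and the example entry are hypothetical and do not reproduce any actual JCAHO or AHRQ definition:

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorDefinition:
    """One entry in a shared indicator library (hypothetical schema)."""
    indicator_id: str
    title: str
    numerator: str               # event being counted
    denominator: str             # population at risk
    exclusions: list[str] = field(default_factory=list)

# Hypothetical entry; real library definitions differ in detail.
readmission = IndicatorDefinition(
    indicator_id="HOSP-001",
    title="30-day unplanned readmission rate",
    numerator="unplanned readmissions within 30 days of discharge",
    denominator="all live discharges in the reporting period",
    exclusions=["planned readmissions", "transfers to another acute facility"],
)

# Two organizations reporting against this shared entry compute the same
# quantity; without it, each may count a different numerator or denominator.
print(readmission.title, readmission.exclusions)
```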
Guidelines or criteria for indicator selection are numerous in both literatures and, again, there is little consistency across sets. Table 1 lists criteria catalogued and synthesized conceptually from health literature papers that are cited in the full report but are too numerous to cite here (Adair et al. 2003). They represent suggested, rather than tested, criteria. The more recent literature puts greater emphasis on the importance of choosing indicators that are meaningful, strategic and evidence-based.
TABLE 1. Suggested criteria for selecting performance measures
CRITERION | DESCRIPTION |
---|---|
Evidence-based | There are valid and reliable operational definitions for the measure that have been demonstrated through rigorous research |
Strategic | The measure directs attention towards the ultimate change desired |
Important | The measure addresses an important or serious health or health services problem (usually defined as health burden or cost) such that there will be sufficient impact from collection and service improvement initiatives |
Attributable | Causal links between the measure, service improvements and health outcomes are known |
Actionable | The measure addresses a service area that can benefit from improvement |
Feasible | Data collection, reporting and follow-through are cost-effective (potential benefits outweigh costs) and there is reasonable technical capacity for collection and analysis, including risk adjustment of compared measures |
Relevant and meaningful | The measure is relevant to most stakeholders, including policy makers, managers, clinicians and the public |
Understandable | The measure is understandable to a non-technical audience (often just a communication issue) |
Balanced | The set of measures is balanced across types of treatments, treatment settings, major health problems, age groups, special populations and levels of the healthcare system. The set is balanced across short- and long-term measures, and balance and appropriateness are considered across process- and outcome-type measures
Responsive | The measure is sensitive to change over time |
Robust | Potential adverse effects of the measure can be mitigated, and vulnerability to gaming is minimal
Non-ambiguous | The measure is clear in terms of which direction for service change is desirable |
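Purely as an illustration, the suggested criteria in Table 1 could be operationalized as a simple screening checklist for candidate measures; the yes/no scoring below is a hypothetical device, not a validated method from the literature:

```python
# The twelve criteria of Table 1, each rated yes/no for a candidate measure.
CRITERIA = [
    "evidence-based", "strategic", "important", "attributable", "actionable",
    "feasible", "relevant and meaningful", "understandable", "balanced",
    "responsive", "robust", "non-ambiguous",
]

def screen(candidate: str, ratings: dict[str, bool]) -> tuple[int, list[str]]:
    """Return the number of criteria met and the list of unmet criteria.

    Note that 'balanced' properly applies to the whole measure set rather
    than a single measure, so in practice it would be assessed separately.
    """
    met = sum(ratings.get(c, False) for c in CRITERIA)
    gaps = [c for c in CRITERIA if not ratings.get(c, False)]
    return met, gaps

met, gaps = screen(
    "surgical site infection rate",
    {"evidence-based": True, "important": True, "actionable": True},
)
print(f"{met}/{len(CRITERIA)} criteria met; unmet: {gaps}")
```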
Financial indicators are still used as part of health PM systems (e.g., cost per weighted case), but as in business, non-financial indicators have taken centre stage. In discussing BSC applications in health, Voelker et al. (2001) claim that a primary focus on financial measures may actually hinder organizational growth and success. In healthcare, financial measures are notoriously difficult to action because most costs are not variable and there is little flexibility in hiring and firing staff (Brookfield 1992). Because of the complex and multifaceted purposes of healthcare, focusing too heavily on financial measures may diminish prospects for overall improvement. Most PM systems in health continue to collect traditional input/output measures such as service utilization (e.g., bed occupancy, surgery facility use, length of stay and numbers of discharges and admissions), despite repeated commentary that they are poor indicators of performance (Mark et al. 1997; Nutley and Smith 1998). Mortality remains the predominant traditional outcome measure, with the distinct disadvantage that it reflects a rare and end-stage event relative to the total volume of healthcare provided. In a Canadian study of existing indicators reported in 2000, Lemieux-Charles et al. (2000: 52) observed that “indicators measuring integration, coordination and continuity of care, as well as responding to population health needs, were rarely used. These types of measures are critical as we redesign our service delivery systems to address population needs.” Klazinga et al. (2001) consider the ultimate performance measures to be those reflecting overall population health.
Similarly, others express concern about “opportunistic systems” that emphasize readily available measures at the expense of newer, more important and meaningful measures (West 1996; Elkan and Robinson 1998; Nutley and Smith 1998; Smith and Goddard 2002). Shaw (1997: 217) characterizes this as the “spectre of convenience” and asks, “should measures be based on existing available data as ad hoc criteria for achievement, or should health service policy targets first be identified and data then captured specifically to measure their achievement?” A dynamic tension exists between the need for locally meaningful and strategic measures and the benefits of selecting and using standardized measures that enable meaningful comparison.
The business literature also underscores the point that the choice about what not to measure is as important as what to measure, since “things that are measured are considered important while the things not measured are generally considered of less importance” (Waggoner et al. 1999: 54). This literature also notes that once collected, measures are rarely deleted, even if they are obsolete (Neely et al. 2000). Given limited resources, each measure chosen represents an opportunity cost.
The component literatures reveal an important parallel debate about process versus outcomes measures (e.g., Evans et al. 2001; Rubin et al. 2001; Mannion and Davies 2002). The business literature uses other terms, e.g., “a debate on whether performance indicators should be focused on procedures (activities) or on results (output)” (Kueng 2000: 77), but the concepts are identical. Despite some arguments that process measures are more practical, most writers consider them complementary to outcomes or results (e.g., Baker 1995), and all should be chosen to fulfill the specific measurement objective (Wynia et al. 1996).
There are widespread concerns about the paucity of validation work. Eddy (1998: 7) describes current measures as “blunt, expensive, incomplete, and distorting.” There is strong consensus that measures must be evidence-based. Gross et al. (2000) evaluated coronary bypass mortality-related indicators across 24 hospitals and concluded that indicator definitions significantly affected computed rates and changed relative standings. “There are no generally agreed-on external criteria for validity of indicators” (Gross et al. 2000: 210).
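A toy worked example, with fabricated numbers chosen only to illustrate the arithmetic behind Gross et al.’s finding, shows how two plausible denominator rules applied to the same records yield different mortality rates:

```python
# Fabricated records for one hospital's bypass cases: (died, transferred_in).
cases = ([(False, False)] * 180 + [(True, False)] * 6
         + [(True, True)] * 4 + [(False, True)] * 10)

# Definition A: every bypass case enters the denominator.
rate_a = sum(died for died, _ in cases) / len(cases)

# Definition B: patients transferred in from other hospitals are excluded.
local = [c for c in cases if not c[1]]
rate_b = sum(died for died, _ in local) / len(local)

print(f"Definition A: {rate_a:.1%}   Definition B: {rate_b:.1%}")
# Definition A: 5.0%   Definition B: 3.2% -- same care, different computed
# rate, so a hospital's relative standing can shift with the definition alone.
```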
Data collection and analysis
Both component literatures emphasize the unanticipated cost and complexity of PM systems. The business literature describes data collection and analysis as “complex, frustrating, difficult, challenging, important, abused and misused” (Lebas 1995: 23). Costs rise because of the high level of technical and managerial expertise required, new information technology and ongoing maintenance. Some also attribute costs (monetary and strategic) to measuring too many different things. “Measuring something makes it important and therefore motivates people. Measuring everything means nothing is important and therefore de-motivates” (Johnston and Fitzgerald 2001: 183). Kueng (2000) identifies success factors in the data collection stage as a parsimonious set of generally accepted indicators, automation and the personal involvement of staff and management.
In healthcare, many organizations have lacked the capacity to implement effective systems, and failed attempts are abundant. Organizations generally underestimate the scope and complexity of the infrastructure required to manage healthcare adequately and, by implication, to measure its performance (McIntyre et al. 2001). Voelker et al. (2001) and Braun and Zibrat (1996) attribute system failures at this stage to staff and management turnover, technical problems with information systems, budget constraints and competing priorities. Kates et al. (2001) express concern about mandating PM systems in public service organizations without guidance in their implementation and use. Both literatures express concerns about the cost–benefit relation of PM initiatives.
Other issues related to data collection include data sources and quality. Administrative data have long been considered a rich source for PM if properly “mined,” and researchers in particular have produced notable examples of their creative and rigorous use (e.g., Brownell et al. 2001). But many now suggest that the value of secondary data has been overstated, at least as typically formatted (Bishop and Pelletier 2001; McLoughlin et al. 2001). Problems cited include poor reflection of performance, lack of data elements for sensitive diagnosis and risk adjustment, lack of availability and stability of data at smaller levels of aggregation and generally poor quality (Kelman and Smith 2000; Brown 2002). Many writers bemoan the effort devoted to the analysis of retrospective or secondary data at the expense of the collection of more relevant data (Sheldon 1994; Stryer et al. 2000; Voelker et al. 2001). In the more general context of effectiveness research, after 10 years of experience with secondary data, AHRQ’s Patient Outcome Research Team (PORT) investigators are also calling for more prospective and real-time data (Stryer et al. 2000).
Many advocate for routine prospective data collection, fully integrated with clinical practice, that can be used for the delivery of care as well as rolled up for management use (McLoughlin et al. 2001). Concerns remain about the diversion of clinician time from patient care to data recording tasks (Naylor 1999). Ullman et al. (1996: 361) suggest that research-based, standardized measures are “too unwieldy and time consuming to mesh well with the practice ecology.” Several hybrid approaches are proposed (e.g., Schneider et al. 1999; Brook et al. 2000; Hoelzer et al. 2001), and many commentators still consider the electronic health record, with the appropriate data for PM thoughtfully built in and integrated with more general operational data, to be the best solution in the long run (Aller 1996; Slater 1997).
The literature is replete with concerns about PM data quality. These include issues of missing data, reliability, validity, accuracy, precision, statistical and clinical significance and timeliness (Kleinpell 1997; Mark et al. 1997; Shaw 1997; Collopy 1998; Jencks 2000; Roper and Mays 2000; Pink et al. 2001). McKee and James (1997) provide an excellent review of data quality issues that arise when comparing outcomes data across systems that use different diagnostic and severity adjustment schemes, and report error rates as high as 20% to 40%. Many cite the need for consistent definitions and processes and data quality checks (Shaw 1997; Nutley and Smith 1998) and for the transparent reporting of data collection issues that underlie the reported measures (Pink et al. 2001). Pink et al. (2001) consider expert involvement of both researchers and management as essential.
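As a minimal sketch of the routine data quality checks called for here (field names and plausibility limits are hypothetical), each data element underlying a measure might be profiled for missingness and out-of-range values:

```python
import math

def quality_report(records: list[dict], element: str, lo: float, hi: float) -> dict:
    """Profile one data element for missingness and out-of-range values."""
    def is_missing(v):
        return v is None or (isinstance(v, float) and math.isnan(v))
    values = [r.get(element) for r in records]
    missing = sum(is_missing(v) for v in values)
    present = [v for v in values if not is_missing(v)]
    out_of_range = sum(not (lo <= v <= hi) for v in present)
    return {
        "n": len(records),
        "missing_pct": round(100 * missing / len(records), 1),
        "out_of_range_pct": round(100 * out_of_range / len(present), 1) if present else None,
    }

# Hypothetical check: patient age should lie between 0 and 120 years.
print(quality_report([{"age": 54}, {"age": None}, {"age": 212}], "age", 0, 120))
# {'n': 3, 'missing_pct': 33.3, 'out_of_range_pct': 50.0}
```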
With respect to methods for analysis, sound statistical methods have long been available but many authors suggest that they usually fall by the wayside in practice (Leggat et al. 1998; Nutley and Smith 1998; Roper and Mays 2000; Smith and Goddard 2002). Adjustment methods are many and varied, and consensus is lacking about the best methods for a given analytic problem (Mant and Hicks 1996; Iezzoni 1997; Shahian et al. 2001; Schneider 2002; Smith and Goddard 2002). Several authors stress that the problem is not so much the methods’ mechanics but the lack of understanding of their limitations and inconsistency in application (Ibrahim 2001; Zaslavsky 2001). An obvious solution is to ensure that adequate analytic expertise is brought to the PM task. Organizational comparisons should disclose all analytic methods and reveal potential sources of bias. As well, a “healthy skepticism about ratings or ranking [should] be maintained” (Schneider 2002: 3). Smith and Goddard (2002) suggest that devising better ways to communicate complex results to non-experts could strengthen the link between research and strategic policy.
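One common adjustment approach in this literature is indirect standardization: comparing observed events with the number expected given each patient’s characteristics. The sketch below assumes the per-patient expected risks are supplied by some external risk model, and its caveat echoes the warnings cited above:

```python
def observed_to_expected(outcomes: list[int], expected_risks: list[float]) -> float:
    """Indirectly standardized ratio: observed events / model-expected events.

    A ratio above 1.0 means more events than the case mix predicts, but the
    result inherits every limitation of the external risk model supplying
    `expected_risks` -- the kind of caveat stressed by Iezzoni (1997) and others.
    """
    return sum(outcomes) / sum(expected_risks)

# Hypothetical unit: 3 deaths among 4 patients with model-predicted risks.
ratio = observed_to_expected([1, 0, 1, 1], [0.30, 0.05, 0.40, 0.25])
print(f"O:E ratio = {ratio:.2f}")   # 3 / 1.00 = 3.00
```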
Reporting and use
A first general theme on the topic of reporting PM information is practical advice on effective presentation for various audiences, with the emphasis on evidence-based communications. A more prominent and controversial topic is the growing practice of reporting performance information to external stakeholders via report cards. Several authors provide excellent reviews of the issues and evidence related to public release of performance data (Leatherman and McCarthy 1999; Marshall et al. 2000; Hoey et al. 2002). Barrell (2000: 15) expresses the general sentiment on this matter: “There seem to be basically two schools of thought: those who believe we can’t afford to do it, and those who believe we can’t afford not to.” In a rare and interesting empirical study that examined organizational response to public disclosure of quality data in the United States, McCormick et al. (2002) demonstrated that in a voluntary system, providers with lower-quality scores were four to six times more likely to withdraw from future disclosure than those with higher scores.
We also found a large literature on the issue of using PM to produce improvement. The business literature clearly advocates a strong link between performance measurement and performance management (Lebas 1995), including the development of causal models between measures, actions taken and subsequent improvement (Lebas 1995; Neely et al. 1995; Neely 1999) through an organizational change process (Kueng 2000). With respect to alignment of incentives for change, Epstein and Manzoni (1998) cite Kerr’s folly (rewarding A while hoping for B) as a common practice in many companies, due to an inability to break out of old patterns of reward and recognition, the lack of an overall system view and focusing on the short term.
The health literature addresses three themes on the application of PM information. The first is its use by organizations as a whole, by individual service providers and, externally, by consumers making care choices. The second is how PM produces positive change, as well as its unintended or adverse effects. The third is the organizational culture in which PM is embedded.
First, on the issue of “actioning” results, Goddard et al. (2000: 99) observe that “most schemes appear to rely on a vague hope that providers will ‘do something’ in response to the data.” The importance of organizations learning how to link the PM results to actions, rather than having the PM system simply keep records, is restated in many ways (Camp and Tweet 1994; Baker and Pink 1995; Collopy 1998; Voelker et al. 2001). The few studies on organizational (Turpin et al. 1996; Leggat et al. 1998; Lemieux-Charles et al. 2000) or individual provider behavioural change (Jencks 2000; Marshall et al. 2000) in response to organizationwide PM suggest that impact is minimal (Barrell 2000; Legnini et al. 2000; Marshall et al. 2000; Schneider 2002). It is likely that in some settings individual managers and clinical leaders have found effective ways to use and apply performance measurement information, just as in some settings quality improvement has been applied effectively – many examples are provided by the Institute for Healthcare Improvement (2002) – but virtually no rigorous studies have described effective broader-level PM practice and elucidated its features.
The more recent healthcare literature includes descriptions of new mechanisms involving financial incentives for performance at the organizational or individual level. These mechanisms go by a variety of labels, including value-based purchasing, quality-based purchasing, performance-based contracting and pay-for-performance. With respect to alignment of financial incentives at the organizational level, there were many reported instances in US healthcare and some in the United Kingdom. A straightforward incentive system that simply provides high performers with extra funds and penalizes low performers is criticized for its potential to channel funds towards services in regions with less health need, if the contributors to poorer performance are environmental and socio-economic rather than actual differences in care (Elkan and Robinson 1998). In a fairly innovative concept for incentive alignment, Ward (2000) describes a scheme for improving performance in NHS trusts in which funding is not allocated according to performance ranking; instead, higher-ranking organizations are given greater autonomy and spending latitude. While financial incentives may seem like common sense, they remain controversial and largely unproven to date (e.g., Giuffrida et al. 2000).
With respect to adverse effects, the literature contains many (mostly theoretical) examples, which are summarized in Table 2. Goddard et al. (1998, 2000), Smith (2002) and Smith and Goddard (2002) have drawn from the management control literature and written extensively on unintended effects in the public sector and healthcare. They consider that “some of these dysfunctional consequences are the result of the imperfect or incomplete data on which indicators are based, some are due to how the data are used and interpreted, and some are simply intrinsic to any system of PM” (Goddard et al. 1998: 26).
TABLE 2. Potential unintended and adverse effects of performance measurement
Sources: Smith 2002; Smith and Goddard 2002; Goddard et al. 1998, 1999, 2000; van Peursem et al. 1995; Collopy 1998; Elkan and Robinson 1998; Leggat et al. 1998; Proctor and Campbell 1999; McLoughlin et al. 2001.
A third theme in the health literature is the relatively recent acknowledgment that organizational contextual issues are paramount to effective PM use because of the invariably complex health system environments. Smith (1993: 150) suggests that while PM systems are assumed to be neutral reporting devices, in reality they are “operating in a far messier and less well understood organizational context.” Barnsley et al. (1996), Leggat et al. (1998) and others outline the organizational culture issues in PM. Legnini et al. (2000) provide a very detailed set of recommendations for realigning incentives to encourage positive use of PM information, according to organizational context and stakeholder perspective. Table 3 lists other suggestions. A more comprehensive and holistic approach to PM is being promoted (McKee and Sheldon 1998; Smith 2002), and the emergence of new models may be imminent (Viccars 1998; Campbell et al. 2001).
TABLE 3. Suggestions for encouraging positive use of PM information
Sources: Greenhalgh et al. 1996; Mant and Hicks 1996; Turpin et al. 1996; Ford et al. 1997; Collopy 1998; Goddard et al. 1998; Leggat et al. 1998; Nutley and Smith 1998; Bodenheimer 1999; Proctor and Campbell 1999; Gross et al. 2000; Voelker et al. 2001; Weinberg 2001; Zairi and Jarrar 2001; Inamdar et al. 2002; Jarvi et al. 2002; Mannion and Davies 2002.
Summary and Implications for Practice
The literature reviewed on PM reveals several points of consensus as well as divergence, as summarized in Table 4. Overall, no author advocated abandonment of PM, but most recommended moving forward with more awareness of the pitfalls and making informed choices (Smith 1993; van Peursem et al. 1995; Shaw 1997; Eddy 1998; Sennett 1998). Epstein (1995: 4) urges realistic expectations, reminding us not to “let the perfect be the enemy of the good.” Many recommend using PM to create a shift towards a culture of improvement (Proctor and Campbell 1999; Bishop and Pelletier 2001; McLoughlin et al. 2001). In the United States, Braun et al. (1999) and others suggest a national, staged approach including standardized core measures. Berwick (1998) presents an insightful review that challenges current assumptions about healthcare performance. Finally, Lied and Sheingold (2001: 394) summarize the current state of practice on PM as follows: “There are real concerns that the act of measurement itself has taken on such a symbolic significance over and above the power of such information to promote beneficial and worthwhile change. We do not yet know how to make such systems deliver on the promises made for them.”
TABLE 4. Points of consensus and divergence in the literature on performance measurement (panels: Consensus; Divergence)
Finally, there are some key structural aspects of healthcare that challenge actionability. The long and strong tradition of professional autonomy, particularly among physicians, focuses philosophically on individuals, not systems. In many jurisdictions, healthcare professionals have contractual (not employee) relationships with service organizations. There are ethical obligations, real or perceived, to provide often heroic and expensive care even where the likelihood of a successful outcome is small. Optimizing performance in such an environment is different from eliminating inefficiencies in a manufacturing process. Clinical care frequently involves trial and error, particularly where cases are intractably difficult or where the science is imprecise, and what one observer would describe as wasteful, another might view as creative and responsive. These caveats suggest that we pay particular attention to the literature that counsels a balanced, nuanced and comprehensive approach to PM and its uses.
Conclusion
The research literature on PM is expanding daily and the ideas are advancing, but our team has read nothing since completing the major report that contradicts the overall findings presented here. A number of encouraging policy developments have occurred in Canada since the review: recognition of the need for leadership in the federal/provincial/territorial accords on indicator reporting and the subsequent comparative national reports; the establishment of three more provincial health quality councils (Ontario, Quebec and Alberta), joining Saskatchewan’s; and the creation of the Canadian Patient Safety Institute. At the same time, the controversial Maclean’s Health Report has come and gone. Much of the current energy is focused on wait times and patient safety. We need to address PM more comprehensively, and work remains at the service level – in regions and on the front line. Just as it is no longer acceptable to disseminate clinical treatment without evidence, the stakes are too high to implement healthcare PM without developing the evidence base.
Acknowledgment
The State of the Science Review was funded by the Alberta Heritage Foundation for Medical Research, and significant in-kind support was received from the Alberta Mental Health Board. Thanks are due to K. Omelchuk, H. Gardiner, S. Newman, S. Clelland, A. Beckie, K. Lewis-Ng, I. Frank, J. Osborne, D. Ma, X. Kostaras and O. Berze for their assistance on parts of the broader review. T. Sheldon and C. Baker provided methodologic consultation, and E. Goldner and S. Lewis reviewed the main report. Findings have been presented in part at Academy Health, Nashville, Tennessee, June 2003; World Psychiatric Association, Paris, France, July 2003; International Conference on the Scientific Basis of Health Services, Washington, DC, September 2003; and American Evaluation Association, Reno, Nevada, November 2003.
Contributor Information
Carol E. Adair, Departments of Community Health Sciences and Psychiatry, University of Calgary, Calgary, AB.
Ann L. Casebeer, Department of Community Health Sciences; Centre for Health and Policy Studies, University of Calgary, Calgary, AB.
Judith M. Birdsell, ON Management Ltd; Haskayne School of Business, University of Calgary, Calgary, AB.
Katharine A. Hayden, Information Resources, University of Calgary, Calgary, AB.
Steven Lewis, Access Consulting Ltd.; Department of Community Health Sciences, University of Calgary, Calgary, AB.
References
- Adair C., Simpson L., Birdsell J.M., Omelchuk K., Casebeer A., Gardiner H.P., Newman S., Beckie A., Clelland S., Hayden K.A., Beausejour P. Performance Measurement Systems in Health and Mental Health Services: Models, Practices and Effectiveness. A State of the Science Review. Calgary: University of Calgary; 2003.
- Agency for Healthcare Research and Quality (AHRQ). Understanding Quality Measurement. Child Health Care Quality Toolbox. 2002. Retrieved March 26, 2006. http://www.ahrq.gov/chtoolbx/understsn.htm.
- Aller K. Information Systems for the Outcomes Movement. Healthcare Information Management. 1996;10(1):37–52.
- Baker G., Pink G. A Balanced Scorecard for Canadian Hospitals. Healthcare Management Forum. 1995;8(4):7–21. doi: 10.1016/S0840-4704(10)60926-X.
- Baker S. Use of Performance Indicators for General Practice. British Medical Journal. 1995;311:209–10. doi: 10.1136/bmj.311.6999.209.
- Barnsley J., Lemieux-Charles L., Baker G. Selecting Clinical Outcome Indicators for Monitoring Quality of Care. Healthcare Management Forum. 1996;9(1):5–21. doi: 10.1016/S0840-4704(10)60938-6.
- Barrell J. Apples to Apples: The Complexities of Health Care Outcomes Reporting. Infusion. 2000;6(7):15–24.
- Baughan P., Armistead C., Parker D. Managerial Reflections on the Deployment of Balanced Score Cards. In: Neely A., Walters A., Austin R., editors. Performance Measurement and Management: Research and Action. Boston: Center for Business Performance, Cranfield University; 2002.
- Berwick D. Crossing the Boundary: Changing Mental Models in the Service of Improvement. International Journal for Quality in Health Care. 1998;10(5):435–41. doi: 10.1093/intqhc/10.5.435.
- Bishop W., Pelletier L. Interview with a Quality Leader: Janet Corrigan on the Institute of Medicine and Healthcare Quality. Journal for Healthcare Quality. 2001;23(5):21–24. doi: 10.1111/j.1945-1474.2001.tb00370.x.
- Bodenheimer T. The American Health Care System: The Movement for Improved Quality in Health Care. New England Journal of Medicine. 1999;340(6):488–92. doi: 10.1056/NEJM199902113400621.
- Bourne M., Mills J., Wilcox M., Neely A., Platts K. Designing, Implementing and Updating Performance Measurement Systems. International Journal of Operations and Production Management. 2000;20(7):754–71.
- Braun B., Koss R., Loeb J. Integrating Performance Measure Data into the Joint Commission Accreditation Process. Evaluation and the Health Professions. 1999;22(3):283–97. doi: 10.1177/016327879902200301.
- Braun B., Zibrat F. Developing an Outcomes Measurement System: The Value of Testing. American Journal of Medical Quality. 1996;11(2):57–67. doi: 10.1177/0885713X9601100202.
- Brignall S. The Unbalanced Scorecard: A Social and Environmental Critique. In: Neely A., Walters A., Austin R., editors. Performance Measurement and Management: Research and Action. Boston: Center for Business Performance, Cranfield University; 2002.
- Brook R., McGlynn E., Shekelle P. Defining and Measuring Quality of Care: A Perspective from US Researchers. International Journal for Quality in Health Care. 2000;12(4):281–95. doi: 10.1093/intqhc/12.4.281.
- Brookfield D. Performance Measurement: Focusing on the Key Issue. Journal of Management in Medicine. 1992;6(2):39–45.
- Brown M. Change and Stability in the Canadian Healthcare System. Expert Review of Pharmacoeconomics and Outcomes Research. 2002;2(4):309–12. doi: 10.1586/14737167.2.4.309.
- Brownell M., Roos N., Roos L. Monitoring Health Reform: A Report Card Approach. Social Science and Medicine. 2001;52(5):657–70. doi: 10.1016/s0277-9536(00)00168-4.
- Camp R., Tweet A. Benchmarking Applied to Health Care. Joint Commission Journal on Quality Improvement. 1994;20(5):229–38. doi: 10.1016/s1070-3241(16)30067-0.
- Campbell S., Roland M., Leese B. Progress in Clinical Governance: Findings from the First NPCRDC National Tracker Survey of Primary Care Groups/Trusts. British Journal of Clinical Governance. 2001;6(2):90–93.
- Collopy B. Health-Care Performance Measurement Systems and the ACHS Care Evaluation Program. Journal of Quality in Clinical Practice. 1998;18(3):171–76.
- Corso L., Wiesner P., Halverson P., Brown K. Using the Essential Services as a Foundation for Performance Measurement and Assessment of Local Public Health Systems. Journal of Public Health Management and Practice. 2000;6(5):1–18. doi: 10.1097/00124784-200006050-00003.
- DeBaylo P. Ten Reasons Why the Baldrige Model Works. Journal for Quality and Participation. 1999;22(1):24–28.
- DeRosario J. Healthcare System Performance Indicators: A New Beginning for a Reformed Canadian Healthcare System. Journal for Healthcare Quality. 1999;21(1):37–41. doi: 10.1111/j.1945-1474.1999.tb00937.x.
- Eddy D. Performance Measurement: Problems and Solutions. Health Affairs. 1998;17(4):7–25. doi: 10.1377/hlthaff.17.4.7.
- Elkan R., Robinson J. The Use of Targets to Improve the Performance of Health Care Providers: A Discussion of Government Policy. British Journal of General Practice. 1998;48:1515–18.
- Epstein A. Performance Reports on Quality – Prototypes, Problems and Prospects. New England Journal of Medicine. 1995;331(1):57–61. doi: 10.1056/NEJM199507063330114.
- Epstein M., Manzoni J. Implementing Corporate Strategy: From Tableaux de Bord to Balanced Scorecards. European Management Journal. 1998;16(2):190–203.
- Evans D., Edejer T., Lauer J., Frenk J., Murray C. Measuring Quality: From the System to the Provider. International Journal for Quality in Health Care. 2001;13(6):439–46. doi: 10.1093/intqhc/13.6.439.
- Fawcett S., Cooper M. Logistics Performance Measurement and Customer Success. Industrial Marketing Management. 1998;27(4):341–57.
- Ford R., Bach S., Fottler M. Methods of Measuring Patient Satisfaction in Health Care Organizations. Health Care Management Review. 1997;22(2):74–89.
- Giuffrida A., Gosden T., Forland F., et al. Target Payments in Primary Care: Effects on Professional Practice and Health Care Outcomes. Cochrane Database of Systematic Reviews. 2000;(3):CD000531. doi: 10.1002/14651858.CD000531.
- Goddard M., Mannion R., Smith P. Performance Indicators. All Quiet on the Front Line. Health Service Journal. 1998;108:24–26.
- Goddard M., Mannion R., Smith P. Assessing the Performance of NHS Hospital Trusts: The Role of ‘Hard’ and ‘Soft’ Information. Health Policy. 1999;48(2):119. doi: 10.1016/s0168-8510(99)00035-4.
- Goddard M., Mannion R., Smith P. Enhancing Performance in Health Care: A Theoretical Perspective on Agency and the Role of Information. Health Economics. 2000;9(2):95–107. doi: 10.1002/(sici)1099-1050(200003)9:2<95::aid-hec488>3.0.co;2-a.
- Greenhalgh J., Long A., Brettle A., Grant M. The Value of an Outcomes Information Resource. An Evaluation of the UK Clearing House on Health. Journal of Management in Medicine. 1996;10(5):55–65. doi: 10.1108/02689239610146553.
- Gross P., Braun B., Kritchevsky S., Simmons B. Comparison of Clinical Indicators for Performance Measurement of Health Care Quality: A Cautionary Note. British Journal of Clinical Governance. 2000;5(4):202–11. doi: 10.1108/14664100010361755.
- Hall J. The Challenge of Health Outcomes. Journal of Quality in Clinical Practice. 1996;16(1):5–15.
- Handler A., Issel M., Turnock B. A Conceptual Framework to Measure Performance of the Public Health System. American Journal of Public Health. 2001;91(8):1235–39. doi: 10.2105/ajph.91.8.1235.
- Hermann R., Leff H., Palmer R., Yang D., Teller T., Provost S., Jakubiak C., Chan J. Quality Measures for Mental Health Care: Results from a National Inventory. Medical Care Research and Review. 2000;57(Suppl. 2):136–54. doi: 10.1177/1077558700057002S08.
- Hoelzer S., Waechter W., Stewart A., Raymond L., Schweiger R. Towards Case-Based Performance Measures: Uncovering Deficiencies in Applied Medical Care. Journal of Evaluation in Clinical Practice. 2001;7(4):355–63. doi: 10.1046/j.1365-2753.2001.00297.x.
- Hoey J., Todkill A., Flegel K. What’s in a Name? Reporting Data from Public Institutions. Canadian Medical Association Journal. 2002;166(2):193–94.
- Holloway J. Investigating the Impact of Performance Measurement. International Journal of Business Performance Management. 2001;3(2–4):167–80.
- Ibrahim J. Performance Indicators from All Perspectives. International Journal for Quality in Health Care. 2001;13(6):431–32. doi: 10.1093/intqhc/13.6.431.
- Iezzoni L. The Risks of Risk Adjustment. Journal of the American Medical Association. 1997;278(19):1600–7. doi: 10.1001/jama.278.19.1600.
- Inamdar N., Kaplan R., Bower M., Reynolds K. Applying the Balanced Scorecard in Healthcare Provider Organizations. Journal of Healthcare Management. 2002;47(3):179–96.
- Institute for Healthcare Improvement. 2002. Retrieved March 26, 2006. http://www.ihi.org/ihi.
- Jarvi K., Sultan R., Lee A., Lussing F., Bhat R. Multi-Professional Mortality Review: Supporting a Culture of Teamwork in the Absence of Error Finding and Blame-Placing. Hospital Quarterly. 2002;5(4):58–61. doi: 10.12927/hcq..16625.
- Jencks S. Clinical Performance Measurement – A Hard Sell. Journal of the American Medical Association. 2000;283(15):2015–16. doi: 10.1001/jama.283.15.2015.
- Jennings B., Staggers N. A Provocative Look at Performance Measurement. Nursing Administration Quarterly. 1999;24(1):17–30. doi: 10.1097/00006216-199910000-00004.
- Johnston R., Fitzgerald L. Performance Measurement: Flying in the Face of Fashion. International Journal of Business Performance Management. 2001;3(2–4):181–90.
- Kanji G., Moura P. Kanji’s Business Scorecard. Total Quality Management. 2002;13(1):13–27.
- Kaplan R., Norton D. Linking the Balanced Scorecard to Strategy. California Management Review. 1996;39(1):53–79.
- Kaplan R., Norton D. Transforming the Balanced Scorecard from Performance Measurement to Strategic Management: Part I. Accounting Horizons. 2001;15(1):87–104.
- Kates J., Marconi K., Mannle T., Jr. Developing a Performance Management System for a Federal Public Health Program: The Ryan White CARE Act, Titles I and II. Evaluation and Program Planning. 2001;24(2):145–55.
- Kelman C., Smith L. It’s Time: Record Linkage – The Vision and the Reality. Australian and New Zealand Journal of Public Health. 2000;24(1):100–1. doi: 10.1111/j.1467-842x.2000.tb00734.x.
- Kizer K. Establishing Health Care Performance Standards in an Era of Consumerism. Journal of the American Medical Association. 2001;286(10):1213–17. doi: 10.1001/jama.286.10.1213.
- Klazinga N., Stronks K., Delnoij D., Verhoeff A. Indicators without a Cause. Reflections on the Development and Use of Indicators in Health Care from a Public Health Perspective. International Journal for Quality in Health Care. 2001;13(6):433–38. doi: 10.1093/intqhc/13.6.433.
- Kleinpell R. Whose Outcomes: Patients, Providers, or Payers? Nursing Clinics of North America. 1997;32(3):513–20.
- Kueng P. Process Performance Measurement System: A Tool to Support Process-Based Organizations. Total Quality Management. 2000;11(1):67–85.
- Kueng P., Krahn A. Building a Process Performance Measurement System: Some Early Experiences. Journal of Scientific and Industrial Research. 1999;58(3–4):149–59.
- Leatherman S., McCarthy D. Public Disclosure of Health Care Performance Reports. International Journal for Quality in Health Care. 1999;11(2):93–105. doi: 10.1093/intqhc/11.2.93.
- Lebas M. Performance Measurement and Performance Management. International Journal of Production Economics. 1995;41(1–3):23–35.
- Leggat S., Narine L., Lemieux-Charles L., Barnsley J., Baker G., Sicotte C., Champagne F., Bilodeau H. A Review of Organizational Performance Assessment in Health Care. Health Services Management Research. 1998;11:3–23. doi: 10.1177/095148489801100102.
- Legnini M., Rosenberg L., Perry M., Robertson N. Where Does Performance Measurement Go from Here? Health Affairs. 2000;19(3):173–77. doi: 10.1377/hlthaff.19.3.173.
- Lemieux-Charles L., Gault N., Champagne F., Barnsley J., Trabut I., Sicotte C., Zitner D. Use of Mid-Level Indicators in Determining Organizational Performance. Hospital Quarterly. 2000;3(4):48–52. doi: 10.12927/hcq..16770.
- Lemieux-Charles L., McGuire W., Champagne F., Barnsley J., Cole D., Sicotte C. Multilevel Performance Indicators: Examining Their Use in Managing Performance in Health Care Organizations. In: Neely A., Walters A., Austin R., editors. Performance Measurement and Management: Research and Action. Boston: Center for Business Performance, Cranfield University; 2002.
- Lied T. Performance: A Multi-Disciplinary and Conceptual Model. Journal of Evaluation in Clinical Practice. 1999;5(4):393–400. doi: 10.1046/j.1365-2753.1999.00210.x.
- Lied T., Sheingold S. Relationships among Performance Measures for Medicare Managed Care Plans. Health Care Financing Review. 2001;22(3):23–33.
- Lockamy III A. Quality-Focused Performance Measurement Systems: A Normative Model. International Journal of Operations and Production Management. 1998;18(8):740–66.
- Luttman R. Next Generation Quality, Part 2: Balanced Scorecards and Organizational Improvement. Topics in Health Information Management. 1998;19(2):22–29.
- Mannion R., Davies H. Reporting Health Care Performance: Learning from the Past, Prospects for the Future. Journal of Evaluation in Clinical Practice. 2002;8(2):215–28. doi: 10.1046/j.1365-2753.2002.00331.x.
- Mant J., Hicks N. Assessing Quality of Care: What Are the Implications of the Potential Lack of Sensitivity of Outcome Measures to Differences in Quality? Journal of Evaluation in Clinical Practice. 1996;2(4):243–48. doi: 10.1111/j.1365-2753.1996.tb00054.x.
- Mark B., Salyer J., Geddes N. Outcomes Research. Clues to Quality and Organizational Effectiveness? Nursing Clinics of North America. 1997;32(3):589–601.
- Marshall M., Shekelle P., Leatherman S., Brook R. The Public Release of Performance Data: What Do We Expect to Gain? A Review of the Evidence. Journal of the American Medical Association. 2000;283(14):1866–74. doi: 10.1001/jama.283.14.1866.
- McCormick D., Himmelstein D., Woolhandler S., Wolfe S., Bor D. Relationship between Low Quality-of-Care Scores and HMOs’ Subsequent Public Disclosure of Quality-of-Care Scores. Journal of the American Medical Association. 2002;288(12):1484–90. doi: 10.1001/jama.288.12.1484.
- McEwan K., Goldner E. Accountability and Performance Indicators for Mental Health Services and Supports. Prepared for the Federal/Provincial/Territorial Advisory Network on Mental Health. Ottawa: Health Canada; 2000.
- McIntyre D., Rogers L., Heier E. Overview, History and Objectives of Performance Measurement. Health Care Financing Review. 2001;22(3):7–21.
- McKee M., James P. Using Routine Data to Evaluate Quality of Care in British Hospitals. Medical Care. 1997;35(10):OS102–11. doi: 10.1097/00005650-199710001-00013.
- McKee M., Sheldon T. Measuring Performance in the NHS. British Medical Journal. 1998;316(7128):322. doi: 10.1136/bmj.316.7128.322.
- McLoughlin V., Leatherman S., Fletcher M., Owen J. Improving Performance Using Indicators. Recent Experiences in the United States, the United Kingdom, and Australia. International Journal for Quality in Health Care. 2001;13(6):455–62. doi: 10.1093/intqhc/13.6.455.
- Mooraj S., Oyon D., Hostettler D. The Balanced Scorecard: A Necessary Good or an Unnecessary Evil? European Management Journal. 1999;17(3):481–91.
- Morgan C., Braganza A. Performance Measurement Systems: Knowledge Developer or Destroyer? In: Neely A., Walters A., Austin R., editors. Performance Measurement and Management: Research and Action. Boston: Center for Business Performance, Cranfield University; 2002.
- Moscovice I., Christianson J., Wellever A. Measuring and Evaluating the Performance of Vertically Integrated Rural Health Networks. Journal of Rural Health. 1995;11(1):9–21. doi: 10.1111/j.1748-0361.1995.tb00392.x.
- Nadzam D., Nelson M. The Benefits of Continuous Performance Measurement. Nursing Clinics of North America. 1997;32(3):543–59.
- Naylor G. Using the Business Excellence Model to Develop a Strategy for a Healthcare Organisation. International Journal of Health Care Quality Assurance. 1999;12(2):37–44. doi: 10.1108/09526869910261240.
- Neely A. The Performance Measurement Revolution: Why Now and What Next? International Journal of Operations and Production Management. 1999;19(2):205–28.
- Neely A., Gregory M., Platts K. Performance Measurement System Design – A Literature Review and Research Agenda. International Journal of Operations and Production Management. 1995;15(4):80–116.
- Neely A., Mills J., Platts K., Richards H., Gregory M., Bourne M., Kennerley M. Performance Measurement System Design: Developing and Testing a Process-Based Approach. International Journal of Operations and Production Management. 2000;20(9–10):1119–45.
- Newhouse J. Why Is There a Quality Chasm? Health Affairs. 2002;21(4):13–25. doi: 10.1377/hlthaff.21.4.13.
- Nutley S., Smith P. League Tables for Performance Improvement in Health Care. Journal of Health Services Research and Policy. 1998;3(1):50–57. doi: 10.1177/135581969800300111.
- Pink G., McKillop I., Schraa E., Preyra C., Montgomery C., Baker G. Creating a Balanced Scorecard for a Hospital System. Journal of Health Care Finance. 2001;27(3):1–20.
- Proctor S., Campbell C. A Developmental Performance Framework for Primary Care. International Journal of Health Care Quality Assurance. 1999;12(7):279–86. doi: 10.1108/09526869910287549.
- Roper W., Mays G. Performance Measurement in Public Health: Conceptual and Methodological Issues in Building the Science Base. Journal of Public Health Management and Practice. 2000;6(5):66–77. doi: 10.1097/00124784-200006050-00010.
- Rubin H., Pronovost P., Diette G. The Advantages and Disadvantages of Process-Based Measures of Health Care Quality. International Journal for Quality in Health Care. 2001;13(6):469–74. doi: 10.1093/intqhc/13.6.469.
- Schneider E. Measuring Mortality Outcomes to Improve Health Care: Rational Use of Ratings and Rankings. Medical Care. 2002;40(1):1–3. doi: 10.1097/00005650-200201000-00001.
- Schneider E., Riehl V., Courte-Wienecke S., Eddy D., Sennett C. Enhancing Performance Measurement: NCQA’s Road Map for a Health Information Framework. Journal of the American Medical Association. 1999;282(12):1184–90. doi: 10.1001/jama.282.12.1184.
- Sennett C. Moving Ahead, Measure by Measure. Health Affairs. 1998;17(4):36–38. doi: 10.1377/hlthaff.17.4.36.
- Shahian D., Normand S., Torchiana D., Lewis S., Pastore J., Kuntz R., Dreyer P. Cardiac Surgery Report Cards: Comprehensive Review and Statistical Critique. Annals of Thoracic Surgery. 2001;72(6):2155–68. doi: 10.1016/s0003-4975(01)03222-2.
- Shaw C. Health-Care League Tables in the United Kingdom. Journal of Quality in Clinical Practice. 1997;17(4):215–19.
- Sheldon T. Please Bypass the PORT: Observational Studies of Effectiveness Run a Poor Second to Randomized Controlled Trials. British Medical Journal. 1994;309(6948):142–43. doi: 10.1136/bmj.309.6948.142.
- Sheldon T. Promoting Health Care Quality: What Role Performance Indicators? Quality in Health Care. 1998;7(Suppl.):S45–S50.
- Slater C. What Is Outcomes Research and What Can It Tell Us? Evaluation and the Health Professions. 1997;20(3):243–64. doi: 10.1177/016327879702000301.
- Smith P. Outcome-Related Performance Indicators and Organizational Control in the Public Sector. British Journal of Management. 1993;4(3):135–51.
- Smith P. Performance Management in British Health Care: Will It Deliver? Health Affairs. 2002;21(3):103–15. doi: 10.1377/hlthaff.21.3.103.
- Smith P., Goddard M. Performance Management and Operational Research: A Marriage Made in Heaven? Journal of the Operational Research Society. 2002;53(3):247–55.
- Stryer D., Tunis S., Hubbard H., Clancy C. The Outcomes of Outcomes and Effectiveness Research: Impacts and Lessons from the First Decade. Health Services Research. 2000;35(5, Part 1):977–93.
- Tennant C., Roberts P.A. Hoshin Kanri: A Technique for Strategic Quality Management. Quality Assurance. 2000;8(2):77–90. doi: 10.1080/105294100317173862.
- Turpin R., Darcy L., Koss R., McMahill C., Meyne K., Morton D., Rodriguez J., Schmaltz S., Schyve P., Smith P. A Model to Assess the Usefulness of Performance Indicators. International Journal for Quality in Health Care. 1996;8(4):321–29. doi: 10.1093/intqhc/8.4.321.
- Ullman M., Metzger C., Kuzel T., Bennett C. Performance Measurement in Prostate Cancer Care: Beyond Report Cards. Urology. 1996;47(3):356–65. doi: 10.1016/S0090-4295(99)80453-1.
- van Peursem K., Pratt M., Lawrence S. Health Management Performance: A Review of Measures and Indicators. Accounting, Auditing and Accountability Journal. 1995;8(5):34–70.
- Viccars A. Clinical Governance: Just Another Buzzword of the 90’s? MIDIRS Midwifery Digest. 1998;8(4):409–12.
- Voelker K., Rakich J., French G. The Balanced Scorecard in Healthcare Organizations: A Performance Measurement and Strategic Planning Method. Hospital Topics. 2001;79(3):13–24. doi: 10.1080/00185860109597908.
- Waggoner D., Neely A., Kennerley M. The Forces That Shape Organisational Performance Measurement Systems: An Interdisciplinary Review. International Journal of Production Economics. 1999;60–61:53–60.
- Ward S. Counting on Quality. Nursing Standard. 2000;14(52):16. doi: 10.7748/ns.14.52.16.s31.
- Weinberg N. Using Performance Measures to Identify Plans of Action to Improve Care. Joint Commission Journal on Quality Improvement. 2001;27(12):683–88. doi: 10.1016/s1070-3241(01)27058-8.
- West R. NHS Performance Guides: Raising the Standard – Indirectly? Journal of Public Health Medicine. 1996;19(3):361–63. doi: 10.1093/oxfordjournals.pubmed.a024646.
- Williams I., Naylor D., Cohen M., Goel V., Basinski A., Ferris L., Llewellyn-Thomas H. Outcomes and the Management of Health Care. Canadian Medical Association Journal. 1992;147(12):1775–80.
- Wynia M., Hasnain-Wynia R., McGlynn E., Brook R. Assessing Quality of Care: Process Measures vs. Outcomes Measures. Journal of the American Medical Association. 1996;276(19):1551–52. doi: 10.1001/jama.276.19.1551.
- Zairi M., Jarrar Y. Measuring Organizational Effectiveness in the NHS: Management Style and Structure Best Practices. Total Quality Management. 2001;12(7, 8):882–89.
- Zaslavsky A. Statistical Issues in Reporting Quality Data: Small Samples and Casemix Variation. International Journal for Quality in Health Care. 2001;13(6):481–88. doi: 10.1093/intqhc/13.6.481.