Risk Management and Healthcare Policy. 2022 Apr 21;15:747–764. doi: 10.2147/RMHP.S357561

Selecting Performance Indicators and Targets in Health Care: An International Scoping Review and Standardized Process Framework

Michael A Heenan,1 Glen E Randall,1 Jenna M Evans1
PMCID: PMC9038160  PMID: 35478929

Abstract

Objective

Health care organizations monitor hundreds of performance indicators. It is unclear what processes and criteria organizations use to identify the indicators they use, who is involved in these processes, how performance targets are set, and what the impacts of these processes are. The purpose of this study is to synthesize international approaches to indicator selection and develop a standardized process framework.

Methods

Using the PubMed and Web of Science search engines, a scoping review of peer-reviewed and grey literature following PRISMA-ScR guidelines was conducted to identify documents describing indicator selection processes used by health systems. English-language papers from 11 countries published from 2010 to 2020 were included. Papers were thematically analyzed to develop a standardized process framework.

Results

The review included 33 peer-reviewed papers and 11 grey-literature documents. While there are common practices used in health care to select indicators, no single standardized process framework for indicator selection exists. Arbitrary or incomplete indicator selection processes risk over-measurement, lack of alignment with strategic and operational goals, lack of support by end-users, and paralyzed decision-making ability. By consolidating international practices, we developed the 5-P indicator selection process framework to mitigate process risks and support high-quality indicator selection processes.

Conclusion

The 5-P indicator selection process framework consists of five domains and 17 elements, and offers health care agencies a practical structure they can use to design indicator selection processes. The framework also provides researchers with a basis by which the implementation of these processes may be evaluated.

Keywords: performance indicators, performance measurement, targets, quality, hospitals, process framework

Introduction

Over the past 20 years, governments and health care agencies have mandated the collection and monitoring of hundreds of indicators by health service providers, such as hospitals.1,2 Indicators are defined as “measurable elements of practice performance” that relate to clinical, population health, financial, or organizational performance.3 In the USA, the National Quality Forum (NQF) approved indicator list grew from 200 in 2005 to over 700 in 2011.4 In Canada, over 300 quality indicators have been reported by Ontario hospitals.6 Health system managers in the USA and Canada, as well as the UK and Australia, report that this over-measurement has negative consequences.4–6 Arbitrary, top-down approaches to mandating the collection and monitoring of indicators continue to contribute to over-measurement and to data that do not necessarily reflect local context and stakeholder needs.7–10 A large volume of measures can paralyze decision-making.1,11 The development of indicators without local input creates a lack of trust between providers and political bodies, and invites the gaming of metrics, given that organizations may benefit economically from higher comparative rankings.6,9 The information technology and data infrastructure built to support measurement has amplified the amount of data available, complicated decision-making, and increased the financial cost of data collection to health care organizations.4

These findings have led to calls for a more balanced approach to measurement, focusing on how indicators advance strategic goals and user value.4,11 The World Health Organization urged providers to prioritize measures that align with the specific information needs of those who use indicators for improvement.7 The Institute of Medicine, National Quality Forum, Canadian Institute for Health Information, and Statistics Canada completed indicator review exercises and recommended reducing the number of indicators monitored by health system providers.12–14 Research papers also describe indicator selection processes in areas such as emergency medicine and primary care.9,15,16 These reports describe different methods used to select indicators at the system or clinical service level. Despite these calls, inconsistent and arbitrary approaches to selecting indicators and targets may lead to variable indicator quality and a lack of engagement that could prevent those responsible for improving performance from taking action.1,5,7,9,11

Study Purpose

The following paper describes a scoping review to answer the question, “How and by whom are health care performance indicators and targets selected in Commonwealth Fund countries?” The review synthesizes different approaches used to select health care indicators and targets and proposes a standardized indicator selection process framework.

Methodology

A scoping review was completed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guideline.17,61 The PubMed and Web of Science search engines were used given their focus on biomedicine and health care and their coverage of multiple databases. Inclusion criteria consisted of articles published from 2010 to 2020, written in English, with a focus on acute care hospital services. Articles from the 11 countries in the Commonwealth Fund’s annual comparison of health system outcomes (www.commonwealthfund.org) were included. These countries (Australia, Canada, France, Germany, the Netherlands, New Zealand, Norway, Sweden, Switzerland, the United Kingdom, and the United States) were selected given the comparability of their health systems. Keywords used within the literature search are available as Supplementary Data. Exclusion criteria consisted of articles that were study protocols or systematic reviews; did not describe a selection process; involved non-hospital-based services; were not written in English; or were from non-Commonwealth Fund comparator countries.

A grey-literature search was conducted by identifying publicly available documents on government agency and health policy institutes’ websites from each of the 11 Commonwealth Fund countries. Hand searching of 24 health policy institute websites identified 83 documents for screening, of which 11 were included in this review. A listing of the institutes is available as Supplementary Data.

In total, 44 documents (33 peer-reviewed and 11 grey literature) met the criteria for final review. Figure 1 illustrates the PRISMA-ScR peer-reviewed and grey-literature search decision tree.17

Figure 1.

The flow of study identification and selection according to PRISMA-ScR guidelines.

Note: Adapted from Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med. 2021;18(3):e1003583. Creative Commons license and disclaimer available from: http://creativecommons.org/licenses/by/4.0/legalcode.17

Abbreviation: PRISMA-ScR, preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews.

Data were systematically extracted from each of the included papers and were used to inform the development of a standardized process framework. This process included identifying common themes arising from the literature and arranging them under preliminary categories.18 Initial categories included what is being selected (clinical indicators, business indicators, targets), rationale for the selection process, individuals involved in the process, steps used to prepare for the process, methods and criteria used to select indicators, and post-selection activities. The development of the framework was iterative with changes to categorization and wording as data extraction and thematic analysis progressed.18
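
To illustrate the extraction schema just described, the sketch below casts the initial categories as a structured record; the field names and the sample entry (based loosely on the Bianchi row in Table 3) are hypothetical, not the authors’ actual extraction instrument.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of the data-extraction sheet, mirroring the review's initial categories."""
    first_author: str
    year: int
    jurisdiction: str
    field_of_study: str
    selects: list[str] = field(default_factory=list)        # what is selected: clinical indicators, business indicators, targets
    rationale: str = ""                                     # stated aim of the selection process
    participants: list[str] = field(default_factory=list)   # who was involved, eg physicians, nurses, patients
    preparation: list[str] = field(default_factory=list)    # steps used to prepare for the process
    method: str = ""                                        # selection method and criteria, eg modified-Delphi
    post_selection: list[str] = field(default_factory=list) # post-selection activities, eg validation

# Hypothetical entry, loosely following the Bianchi (2013) row of Table 3
example = ExtractionRecord(
    first_author="Bianchi",
    year=2013,
    jurisdiction="Switzerland",
    field_of_study="Cancer",
    selects=["clinical indicators"],
    participants=["physicians"],
    method="modified-Delphi",
    post_selection=["validation led by an academic researcher"],
)
```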

Results

Tables 1 and 2 summarize the country of origin and field of study of included papers, respectively. Tables 3 and 4 summarize the content of the peer-reviewed and grey-literature, respectively. Five themes emerged from the analysis of peer-reviewed and grey-literature documents: aim; governance; preparation; methodologies; and validation.

Table 1.

Peer Reviewed and Grey Literature by Country

Country Peer Reviewed Literature Grey Literature
Australia 3 0
Canada 8 3
France 2 0
Germany 3 0
Netherlands 3 0
New Zealand 1 1
Norway 0 0
Sweden 0 0
Switzerland 1 0
United Kingdom 1 3
United States 11 4
Total 33 11

Table 2.

Peer Reviewed and Grey Literature by Field of Study

Acute Care Clinical Area Peer Reviewed Literature Grey Literature
Cancer 4 0
Cardiology 4 0
Critical Care 1 0
Emergency Care 2 0
Geriatrics 1 0
Hospital or Health Systems 6 11
Infection Control 4 0
Maternity 2 0
Mental Health 1 0
Patient Safety 2 0
Pediatrics 3 0
Surgery 3 0
Total 33 11

Table 3.

Scoping Review Peer-Reviewed Literature Summary

Columns: First Author; Year; Jurisdiction; Field of Study; Indicators Addressed (Clinical Quality, Business Based, Target Setting); Consensus Method Used; Article Summary
Aktaa19 2020 UK Cardiology Yes No No Not Applicable Paper proposes a 4-step process for KPI selection in cardiology: identification of domains of care by constructing a conceptual framework; construction of candidate QIs via a systematic review of the literature; selection of a final set of QIs by obtaining expert opinions using the modified-Delphi method; and validation. Paper noted that expert panels have inherent bias; therefore, expanding participation is an important mitigation.
Bianchi20 2013 Switzerland Cancer Yes No No Modified-Delphi Colorectal Cancer Quality Indicator (QI) selection process governed by an expert panel identified 27 QIs from an original list of 149. QIs were rated using a Likert scale within clinical categories that followed the care continuum. Validation of the final QI set was led by an academic researcher. Noted the limitation of a physician-only panel. Offers a template for indicator definition sheets.
Bramesfeld21 2015 Germany Infection Prevention and Control Yes No No Modified-Delphi Study identified 32 indicators for measuring the prevention and management of Catheter Related Blood Stream Infections. Process considered relevance and feasibility criteria. Panelists participated in a pre-survey workshop. QIs were classified as process, outcome or structural. Likert scale was used to rate QIs.
Casey22 2013 USA Hospital System Yes No No Modified-Delphi Paper summarizes a panel process that examined the relevance of nationally reportable indicators to rural hospitals. Process included an expert panel that voted on the indicators to give rural hospitals direction on which indicators are best used and how they align with national indicator reporting. Categorized the indicators into clinical categories; voting was noted but the scale was not described.
Chrusch23 2016 Canada Critical Care Yes No Yes Nominal Group Technique Paper describes a multiple case study in which conferences were held to have experts select indicators for comparing ICU performance. Organizations tested indicators and reported back on how they were used and on the resulting data. Results identified 22 ICU indicators. Validation of indicators was conducted.
Elliot24 2018 Australia Hospital System Yes Yes No Modified-Delphi Paper describes a 5-step process used to systematically select 20 indicators to monitor a hospital strategic plan. 725 indicators were narrowed down to 110 by staff. Executives selected 20 clinical and business indicators. Five phases: (1) identification of potential indicators; (2) consolidation into a pragmatic set; (3) analysis of potential indicators against criteria; (4) mapping indicators to the strategic plan; (5) key stakeholder presentation.
Emond25 2015 Netherlands Surgery Yes No No Modified-Delphi Article describes a process that selected patient safety indicators in surgery. Process was governed by steering committee and expert panel of hospital leaders. 11 indicators were selected and validated in 8 hospitals. Patients and managers were on the panel.
Fekri26 2017 Canada Hospital System Yes No No Modified-Delphi Paper describes process used to select a national set of indicators. Technical group narrowed first set of metrics via quantitative survey followed by a consensus conference of end-users. 37 of 56 indicators were selected. Process included clear guiding principles.
Goldfarb27 2018 USA Cardiology Yes No No Modified-Delphi Systematic review of cardiology quality indicators was completed ahead of an international expert panel survey. Fifteen QIs were selected from an original list of 108, using a Likert scale. QIs were categorized as process, outcome or structural. Expert panel consisted of only physicians.
Grace28 2014 Canada Cardiology Yes No No Modified-Delphi Study identified quality indicators in cardiac rehabilitation. Process has three stages including ratings by working groups and validation of final QIs by stakeholders. Process resulted in a final list of 5 QIs from a list of 37. Qualitative and quantitative validation of QIs was completed.
Gurvitz29 2013 USA Cardiology Yes No No Modified-Delphi Paper describes an indicator selection process aimed at monitoring quality improvement for adults with congenital heart disease (ACHD) conditions. Expert panel included only physicians. 55 of 61 indicators were selected based on literature review and clinical guidelines. Indicators were not independently validated.
Guth30 2016 USA Patient Safety Yes No No Kepner-Tregoe Decision Analysis Case study report on the process used to select indicators for a hospital quality scorecard. A governing committee and working groups narrowed 750 indicators to 25. Process included metric collection; harm evaluation; metric viability; ability to implement; categorizing metrics; assessing impact; and risk assessment.
Mangione-Smith31 2011 USA Pediatrics Yes No Yes Modified-Delphi Paper summarizes a process that selected quality indicators for a health insurance program. Voting on a Likert scale resulted in 25 of 199 indicators being chosen. Noted field testing was needed to set targets.
Martinez32 2018 USA Hospital System Yes No No Participatory Design Approach Article describes how a hospital prioritized metrics for an electronic dashboard. Resulted in 10 indicators mapped to the Donabedian framework of process, outcome, and structure. Process asked end-users about barriers to using indicators. Noted that different audiences need different indicators.
Mazzone33 2014 USA Cancer Yes No No Modified-Delphi Panel of physicians selected Quality Indicators (QIs) to evaluate lung cancer processes of care. Narrowed an original list of 18 QIs to 7. Assessed indicators using clearly defined criteria. Validation included testing QIs in 3 organizations. Paper noted the bias of a physician-only panel.
Moehring34 2017 USA Infection Prevention and Control Yes No No Modified-Delphi Study selected indicators to aid decision making in antimicrobial stewardship programs. Process governed by a panel of physicians and pharmacists. Panel rated QIs against 4 questions rather than defined criteria. 14 metrics were selected from an original list of 90 using a Likert scale.
Morris35 2012 Canada Infection Prevention and Control Yes No No Modified-Delphi Paper describes process where expert panel rated potential indicators using a set of criteria. Panelists rated indicators on a Likert scale and could add anonymous comments. 4 indicators from an original list of 14 were selected. No patient or family member participated in process.
Perera36 2012 New Zealand Hospital System Yes No Yes Not Applicable Paper describes an indicator framework. Framework includes prioritization of indicators; delineation of intent; implementation requirements; development of indicator specifications; assessment of indicator purpose; and target development. Paper notes indicators for one purpose may be inappropriate for another. Indicator credibility relies on having a defined purpose. Targets need to be developed based on current performance and an understanding of barriers to attaining targets.
Profit37 2011 USA Pediatrics Yes No No Modified-Delphi Study selected indicators for neonatal intensive care units. Process resulted in 9 of 28 indicators aligned with IOM dimensions of quality using clear assessment criteria and indicator definitions. Expert panel did not include an administrator.
Reiter38 2011 Germany Hospital System Yes No No QUALIFY Instrument Paper describes selecting hospital quality indicators deemed suitable for hospital disclosure. Working groups of clinicians and representatives selected 31 of 55 indicators for disclosure.
Sauvegrain39 2019 France Maternity Yes No No Delphi Survey Paper describes a process to select indicators for obstetrical care. Scientific committee and expert panel selected 13 indicators from a list of 28 derived from a current database and literature review. Noted training ahead of the process was not done but should be in the future. Stated indicator targets should be discussed as an accompanying process. Noted panel participants will have biases.
Schnitker40 2015 Australia Emergency Yes No No Modified-Delphi Study selected process quality indicators (PQIs) to monitor Emergency Department patients with cognitive impairment. Approach included building a list of PQIs based on a literature review. Process resulted in 11 PQIs being selected from an original list of 22. Process field tested indicators for data quality ahead of final selection. Noted that a panel of local experts has biases and recommended involving outside experts.
Schull16 2011 Canada Emergency Yes No No Modified-Delphi Study selected national measures for Emergency Departments. Process resulted in selection of 48 of 170 candidate indicators. Categorized indicators by clinical domain. Noted when a panel is system-based it can underrepresent smaller and rural hospitals.
Science41 2019 Canada Infection Prevention and Control Yes No No Modified-Delphi Study identified metrics for antimicrobial stewardship programs. Process was governed by a steering committee and expert panel. Process resulted in the selection of 4 metrics. Noted that bias in panels can be mitigated by neutral facilitator.
Soohoo42 2010 USA Surgery Yes No Yes Modified-Delphi Study selected indicators for total joint replacement patients. Panel of orthopedic surgeons selected 68 indicators from an original list of 101. Field tested indicators for data quality and to inform the setting of targets.
Stang43 2013 Canada Pediatrics Yes No Yes Modified-Delphi Study identified indicators for high acuity pediatric conditions. An interdisciplinary advisory group selected 62 indicators from a list of 97. Noted that field testing of final indicators can inform potential benchmarks and targets.
Stegbauer44 2017 Germany Mental Health Yes No No Modified-Delphi Study selected indicators for schizophrenia. Expert panel narrowed 847 indicators to a list of 27, with relevance as a main criterion. Each indicator had to be defined in terms of matching an outcome (goal) and be tied to a treatment (process). Patients were on the panel.
Thern45 2014 Germany Infection Prevention and Control Yes No No Modified-Delphi Study selected 42 indicators from a list of 99. Process included surveying experts ahead of the development of an indicator list, a literature search, ranking of indicators using a Likert scale and an in-person conference. Stated that final list of indicators should be validated for data quality.
Tsiamis46 2018 Australia Cancer Yes No No Modified-Delphi Physician panel selected indicators to monitor radiotherapy for men with prostate cancer. Process included literature review and categorizing QIs along the continuum of care. 17 QIs were selected out of an original list of 114. Noted a physician-only panel could have bias. Noted most QIs selected were process metrics.
van der Wees47 2019 Netherlands Patient Safety Yes No No User Based Design Paper proposed a framework to select Patient Reported Outcomes Measures. Framework developed using a design approach based on user needs and was guided by a project team of experts and end-user representatives.
Van Grootven48 2018 USA Geriatrics Yes No No Delphi Study selected indicators to evaluate in-hospital geriatric programs. 31 of 44 indicators were chosen using Likert scale against 2 criteria: appropriateness and feasibility. Panelists had at least 2 years of experience in geriatric medicine. Panel demographics balanced age and gender to ensure equity.
van Heurn49 2015 Netherlands Surgery Yes No Yes Modified-Delphi Panel of surgeons selected 24 neonatal surgical indicators from an original list of 220. Paper emphasized the importance of validating data and having external experts review the final list for links to best practice. Study stated indicators need validation to inform targets.
Wood50 2013 Canada Cancer Yes No Yes Modified-Delphi Study selected indicators in renal cell carcinoma. Panel selected 23 indicators from an original list of 34 that were generated from a literature search and panel input. Categorization of indicators followed continuum of care. Noted physician only panel should include other professions. Noted indicator data should be tested to inform targets.

Table 4.

Scoping Review Grey-Literature Summary

Columns: First Author; Year; Jurisdiction; Field of Study; Indicator Type Addressed (Clinical Quality, Business Based, Target Setting); Consensus Method Used; Article Summary
Health Quality Ontario51 2016 Canada Hospital Yes No No Modified-Delphi Agency aimed to reduce number of patient safety indicators. 11 indicators selected from original inventory of 180. Structured process included clear aim, guiding principles, literature search, voting using a Likert scale, and involved representation from clinical experts, sector representatives and patients.
CIHI13 2015 Canada System Yes No No Conference followed by Working Groups Agency prioritized a national set of indicators. Document explains process of conference, criteria and post conference work that led to a manageable list. Broad representation but no patient or front-line manager. Had clear indicator assessment criteria. Conclusion noted requirement to validate indicators for data quality.
Ontario Hospital Association52 2019 Canada Hospital Yes No No Modified-Delphi Process aimed to reduce the amount of measurement. Criteria used included public accountability, system monitoring, local monitoring, and indicator retirement. Over 500 indicators were reduced to 156, with 144 indicators retired. Expert panel did not include patients or frontline staff but noted they would be required in the future. Noted targets were needed but did not address them directly.
Health Quality and Safety Commission New Zealand53 2012 New Zealand System Yes No No Modified-Delphi Paper summarizes process used to select 17 indicators for public reporting and quality improvement. Process included a steering committee, advisory group, and a use of defined criteria. Panel included managers and patients.
The King’s Fund54 2010 UK System Yes No Yes Not Applicable Paper provides guidance on measuring acute care quality. Key topics include defining measurement; identifying audiences and purposes of indicators; the impact indicators and benchmarks have on staff; and steps to select indicators. Paper emphasizes that indicators and targets can motivate or unintentionally harm users. As such, processes need to ensure data are tailored to the right audience.
National Institute for Health and Care Excellence55 2019 UK System Yes No No Modified-Delphi Document describes how national system indicators were selected and how indicators are to be used. Document shares the principles and aims of indicator selection, committee structures, testing of indicators, and consultation with stakeholders. Validation included qualitative feedback from end-users. Process involved managers and public. Emphasizes regular review required for acceptability.
The Health Foundation56 2019 UK System Yes No No Qualitative Interviews Multiple-case study interviewed unit-level staff on how best to reduce indicators to a manageable number to enable improvement. Categorized indicators into the Donabedian framework and patient-reported outcome and experience measures. Assessment criteria included indicators being easily understood, relevant to the area, and actionable.
Hospital Association of New York State57 2016 USA Hospital Yes No Yes Not Applicable Discussion paper proposes indicator selection process. Processes should aim to have indicators match clinical reality and allow improvement; include assessment criteria; use ranking methodologies; and validate indicators for data quality. Report suggests indicator assessment criteria should include fit with priorities; performance history; relevance; actionability; and financial impact.
National Quality Forum14 2019 USA System Yes No No Modified-Delphi Process Guide explains governance model, process and criteria used to select national indicators. Process included interdisciplinary membership, feedback from stakeholders ahead of and during process and clear assessment criteria. Indicators categorized using Donabedian framework of structure, process, and outcomes.
National Quality Forum58 2020 USA System Yes Yes No Not Applicable Paper discusses work of committee that examined definitions, best practices, data issues and impact of measurement. Paper offers a four-step process to assess and select indicators and noted costs and efficiency indicators should be considered. Paper stated processes should include education on how to use indicators.
Institute of Medicine59 2015 USA System Yes No Yes Modified-Delphi Process Paper proposes 15 indicators that measure health outcomes while reducing burden of measurement on clinicians and enhancing transparency and comparability. Report provides an overview of process followed, including criteria set used. Calls on system to test indicators for both statistical and face validity.

Aim

The first theme addresses the rationale for conducting an indicator and target selection process. Subthemes that formed this theme included describing an aim statement (100% of peer-reviewed and 100% of grey-literature documents); offering a set of principles to guide the work (30.3% of peer-reviewed and 72.7% of grey-literature documents); and identifying the system or organizational unit in which the work is based (100% of peer-reviewed and 100% of grey-literature documents).
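
For readers reconstructing the counts behind these proportions, the short sketch below shows the arithmetic, assuming denominators of 33 peer-reviewed and 11 grey-literature documents; the counts of 10 and 8 are inferred from the reported percentages, not stated in the text.

```python
def prevalence(count: int, total: int) -> str:
    """Express a document count as the percentage style used in this section."""
    return f"{100 * count / total:.1f}%"

# Guiding-principles subtheme: 10 of 33 peer-reviewed papers and
# 8 of 11 grey-literature documents (counts inferred from the percentages)
print(prevalence(10, 33))  # 30.3%
print(prevalence(8, 11))   # 72.7%
```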

Peer-reviewed literature focused on specific organizational units measuring discrete clinical processes or outcomes, whereas grey-literature focused on system-level indicators that address quality and patient safety. As a result, peer-reviewed papers’ aim statements are more narrowly defined than those found in the grey-literature. Values such as openness, transparency, and accountability were frequently cited as being part of a set of guiding principles.26,30,31,38,53,55 Papers that described selection processes within clinical areas stressed that indicators should match the care continuum, so they are representative of the patient journey and clinical practice.23,42,46

All documents noted the system or organizational unit the process was designed to inform.13,14,16,19–59 Indicator selection processes must consider the intended use of the indicator given indicators can be used for a variety of reasons, including accountability, process improvement, and public reporting.32,36,42,47,55,56

Governance

Governance oversight of indicator and target selection processes is the second theme. Subthemes included identifying structures that provide an oversight function (97.0% of peer-reviewed and 100% of grey-literature documents), and the identification and recruitment of process participants (93.9% of peer-reviewed and 72.7% of grey-literature documents).

Documents shared two models of governance. The first model is a single-body governance structure where the process is managed and conducted by one steering committee or expert panel.20–22,24,27,31–35,37,39–57 The second model is a multi-body structure that has a steering committee responsible for managing the process and offering recommendations, but also includes sub-committees or expert panels that assist with literature reviews, data collection, and assessments.13,14,16,19,23,25,26,29,30,38,51–53,58,59

Most documents identified who participated in indicator and target selection processes. Several peer-reviewed papers revealed studies that involved only physicians,20,27,29,33,42,46,49,50 while other studies incorporated broader representation from areas, such as nursing, allied health, research, quality, and administration.13,14,16,19–24,26,28,31–35,37–41,43–56,58 Some indicator selection processes involved patients and family members, noting that their contribution ensured indicators connected with the consumer of services.14,19,21,25,38–40,44,47,51,53,55,58,59 Studies using only physicians and nurses cited their clinical backgrounds as a strength but acknowledged the need to expand participation to mitigate medical biases.19,29,33,46,48,50 Studies that had expert panels with broader memberships believed that broader participation enabled a more inclusive view of the care process.13,16,19,21,25,40,42,51,52,54,55,59 One study required panelists to have at least 2 years of clinical experience, and a balance of gender representation to ensure experience and equity perspectives are considered in the selection of indicators.48

Preparation

Five subthemes emerged to create the third theme: preparation. These subthemes consisted of seeking early input from end-users on their indicator needs (21.2% of peer-reviewed and 36.4% of grey-literature documents); reviewing literature and evidence-based guidelines (87.8% of peer-reviewed and 36.4% of grey-literature documents); compiling an indicator inventory and definition list (100% of peer-reviewed and 100% of grey-literature documents); placing indicators into categorical themes (84.8% of peer-reviewed and 81.8% of grey-literature documents); and developing participant orientation and training materials (33.3% of peer-reviewed and 36.4% of grey-literature documents).

All documents described an indicator selection process that involved consulting data libraries, peer-reviewed literature, and clinical guidelines to create an inventory of potential indicators. Documents stated that a final list of indicators built from comprehensive sources improves their relevancy to end-users while enabling future comparability and benchmarking.13,14,16,19–59

Documents that sought end-user input upfront on indicator knowledge and user requirements13,20,21,32,42,45,47,50,52,55,56 and issued orientation materials13,14,20,21,26,32,37,38,40,44,46,50,54,58 reported increased participant engagement and improved understanding of the process among participants.

Process Methodologies

The fourth theme speaks to the methodologies used to assess and recommend indicators and targets. This theme emerged from documents that described consensus-building methods (97.0% of peer-reviewed and 90.9% of grey-literature documents); facilitation (24.2% of peer-reviewed and 89.7% of grey-literature documents); indicator assessment criteria (100% of peer-reviewed and 90.9% of grey-literature documents); and rating methods by which indicators were assessed (90.9% of peer-reviewed and 54.5% of grey-literature documents).

Studies that utilized consensus-building processes, such as a modified-Delphi approach, issued surveys to seek input on the indicators to be considered, followed by an in-person or online web conference to finalize the selection.12,16,20–22,24–29,31,33–35,37,40–46,49–53,55,59 These consensus-building processes increase validity with participants.12,16,20–22,24–29,31,33–35,37,40–46,49–53,55,59 Several papers reported that processes facilitated by a neutral expert minimized steering committee or expert panel bias.16,20,30,35,38,41,49 Common indicator assessment criteria include relevance, scientific soundness, feasibility, and usability, as per the Appraisal of Indicators through Research and Evaluation (AIRE) tool.13,14,19–60 Analytically, studies generally rated indicators on Likert scales from 1 to 7 or 1 to 9.20,21,25,26,29–31,33–35,37,40,42–46,48,50 Two studies allowed participants to provide qualitative feedback on indicators between modified-Delphi rounds.34,48
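
The consensus mechanics described here can be sketched in code. The example below is a minimal illustration of a median-based retention rule between modified-Delphi rounds, assuming a 1 to 9 Likert scale; the retention threshold, indicator names, and ratings are hypothetical and do not come from any reviewed study.

```python
from statistics import median

def summarize_round(ratings: dict[str, list[int]], keep_at: float = 7.0):
    """Summarize one modified-Delphi rating round.

    ratings maps each candidate indicator to the panel's 1-9 Likert scores.
    Indicators whose median score meets the threshold are retained; the
    rest are returned for re-rating or discussion in the next round.
    """
    retained, revisit = [], []
    for indicator, scores in ratings.items():
        (retained if median(scores) >= keep_at else revisit).append(indicator)
    return retained, revisit

# Hypothetical ratings from a nine-member panel
round_one = {
    "door-to-ECG time": [8, 9, 7, 8, 9, 7, 8, 9, 8],
    "antibiotic timing": [5, 6, 4, 7, 5, 6, 5, 4, 6],
}
kept, back_to_panel = summarize_round(round_one)
print(kept)           # ['door-to-ECG time']
print(back_to_panel)  # ['antibiotic timing']
```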

Validation

The final theme, validation, emerged in two forms: quantitative testing for data quality (39.4% of peer-reviewed and 63.6% of grey-literature documents) and qualitative feedback from end-users on face validity (21.2% of peer-reviewed and 63.6% of grey-literature documents). Processes that statistically tested indicators for data quality emphasized the increased scientific soundness of the indicators14,16,19,23,25,28,30,33,36,39,41,43,47,49,54–56,58,59 and better-informed target setting.43 Processes that validated a final list of indicators with end-users reported improved relevance and usability, especially in cases where the expert panels did not include front-line directors, managers, or patients.13,14,21,23,28,30,36,37,44,54–56,58,59

Target Setting

No document summarized a process that directly addressed the setting of indicator targets or benchmarks. Literature that made suggestions in this area emphasized that targets and benchmarks need to be better defined and understood by end-users.23,31,36,42,43,49,50,54–57 Benchmarks have limitations, as they are generally based on a subset of performance units rather than an agreed-upon best practice. Benchmarks are not necessarily the required target, given that a unit’s indicator performance may already have exceeded the benchmark; in that case, an indicator target may be intended simply to maintain performance.23,31,36,42,43,49,50,54,56 Similarly, where performance on an indicator lags the benchmark, incremental improvement towards the benchmark may be a more appropriate target.54,57 Targets may also distort practice choices or fail to reflect the care needed at the patient level, given that targets generally measure macro-outcomes at the population level rather than operational realities. As such, targets must be set carefully by testing for scientific soundness and relevance to end-users.23,31,36,42,43,49,50,54
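
This benchmark reasoning lends itself to a small decision sketch: maintain performance when ahead of the benchmark, aim at the benchmark when it is within reach, and otherwise set an incremental target. The function below is illustrative only; the reachable_gap parameter and the halfway rule are assumptions, not recommendations from the literature.

```python
def suggest_target(current: float, benchmark: float, reachable_gap: float = 2.0) -> float:
    """Suggest an indicator target (in percentage points) from current
    performance and a benchmark. Assumes higher values are better."""
    gap = benchmark - current
    if gap <= 0:
        return current            # already ahead of the benchmark: maintain
    if gap <= reachable_gap:
        return benchmark          # benchmark judged attainable this period
    return current + gap / 2      # incremental progress toward the benchmark

# Hypothetical: hand-hygiene compliance (%) against a 90% benchmark
print(suggest_target(70.0, 90.0))  # 80.0 -> incremental target
print(suggest_target(88.5, 90.0))  # 90.0 -> benchmark within reach
print(suggest_target(93.0, 90.0))  # 93.0 -> maintain current performance
```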

Discussion

This scoping review identified 44 documents that addressed the research question, “How and by whom are health care key performance indicators and targets selected in Commonwealth Fund countries?” The review demonstrates that structured indicator selection processes are generally governed by steering committees or expert panels, are guided by clear aim statements, involve literature searches on potential indicators, use consensus-seeking methods, categorize indicators as process, outcome, or structure metrics, and align indicators to categories such as strategic themes or clinical care processes. Not all documents described preparation and validation stages. Only a few studies engaged end-users up front about how they use indicators or validated the relevance of the chosen indicators with stakeholders after selection. Similarly, only a few studies tested selected indicators for data quality. No paper directly addressed target setting, but some advocated for testing data to ensure benchmarking could occur.

Most papers focused on clinical access and quality indicators and did not address medical education, system-level, or business-related indicators in areas such as finance, human resources, and supply chain. As such, governors of indicator selection processes should be mindful that health care managers, administrative leaders and other clinical actors have many more indicators to manage than only those related to quality and patient safety.

Indicator selection processes varied in who participated, in particular those included on expert panels. Findings indicate that, given the multidisciplinary nature of health care delivery and the need to ensure indicators match the information needs of end-users, indicator selection processes should be inclusive and equitable.7,48 No study directly addressed how to set performance targets. Moreover, given that indicators are used as an instrument to help advance performance, findings suggest that those responsible for indicator selection and target setting should ensure end-users understand and provide input on the targets they are accountable for achieving.

While all documents described steps of an indicator selection process, no process included every component identified in the thematic analysis. Incomplete indicator selection processes risk over-measurement, lack of alignment with strategic and operational goals, lack of support by end-users, and paralyzed decision-making.2–4,7,11 These gaps present an opportunity to build a standardized framework that can assist organizations in developing a comprehensive indicator and target selection process.

The 5-P Indicator Selection Process Framework

The themes extracted from each of the papers led to the development of a standardized process framework. The 5-P Indicator Selection Process Framework consists of five domains and 17 elements. The framework’s first domain, “Purpose”, sets out the reasons why an indicator selection and target setting process is undertaken. By stating the process aim, the principles used to guide the process, and the organizational level at which the indicators will be used, organizations can facilitate a shared understanding of what the process is trying to achieve. The second domain, “Polity”, identifies the governance structures that manage the selection process, how the process will be resourced, and who will participate. The third domain, “Prepare”, addresses how to plan for selection. Elements include asking potential users about their experience with indicators, researching literature and best practices, developing a defined inventory of potential indicators, categorizing indicators into strategic themes, and delivering training or orientation materials and programs. The fourth domain, “Procedure”, describes the steps used to assess indicators and targets and gain consensus. Elements include consensus-building methods, facilitation, assessment criteria, analytical assessment of potential indicators, and target setting. The final domain of the framework is “Prove”. This domain describes the validation processes used to test any final set of indicators for data quality and relevance with end-users. Table 5 summarizes each domain and element; a checklist-style sketch follows the table.

Table 5.

The 5-P Indicator Selection Process Framework

Domain Elements Element Description
Purpose Clarify Aim Articulate the rationale for conducting an indicator and target selection exercise. By stating the process aim, whether it is to align indicators to an operational process, a strategic plan, a regulatory requirement, or public reporting, the work can be scoped properly.
Develop Guiding Principles Establish principles to ensure participants understand the values by which the process is being conducted. Principles may include openness, transparency, scientific soundness, relevance, accountability, scope, and span of control.
Identify Level of Use Identify the organizational unit that will use the indicators to ensure relevancy to end-users. As an example, indicators used by a board to monitor quality outcomes may be different than indicators selected by a clinical unit focused on process improvement.
Polity Build Governance Structures Identify a structure that will manage indicator and target selection to ensure it is completed. These structures may include a steering committee, a project management team, a data quality advisory group, and an expert panel that will assess potential indicators and targets.
Recruit Participants Select and recruit expert panel members. Panels should be diverse and multi-disciplinary to ensure equity and a broad view of how indicators and targets will be used. Composition of panels should consider the process aim and level of use when selecting participants.
Prepare Seek End-User Input Seek input from end-users to understand their experiences with the potential indicators under consideration and solicit ideas on the draft criteria they may recommend in evaluating indicators.
Research Evidence-Based Literature Identify the range of indicators used in their area or that are required by regulation. A search of literature and evidence-based guidelines, and government mandated indicators will help organizations identify a comprehensive set of indicators to assess.
Build an Inventory of Potential Indicators Compile a comprehensive list of indicators with definitions and data sources, so participants understand each indicator to be evaluated. If the process addresses target selection, the nature of the target (eg, past performance, benchmark, best practice) should be explained.
Categorize Potential Indicators into Strategic Themes Categorize indicators into themes aligned with the organization’s strategy, quadrants of the balanced scorecard, or the Donabedian framework of outcomes, process, and structure. By creating categories, process participants and end-users will better understand the linkage an indicator has with the identified purpose.
Orient and Train Participants Provide participants with orientation materials on the process aim, definition and purpose of each indicator, potential targets, and methods they will use to recommend indicators and targets.
Procedure Utilize a Consensus Building Method Identify and use a recognized consensus building method such as the Delphi, modified-Delphi, or Nominal Group Technique. This is particularly important when indicators are being identified to measure a new strategy compared to a quality improvement project.
Identify a Facilitator Select an independent facilitator so as not to bias the process. The facilitator should be a third party, or a neutral party from an organization’s performance measurement department.
Establish Indicator Selection Criteria Set criteria by which the assessment of indicators will be based. Common criteria include those prescribed by the Appraisal of Indicators through Research and Evaluation (AIRE) tool such as relevance, scientific soundness, feasibility, and validity. Criteria may change based on the aim statement and level of use described in the “Purpose” Domain.
Analytically Assess Indicators Identify a Likert assessment scale participants will use to evaluate indicators against criteria, and how assessments will be completed, either via survey, in person, or both.
Set Indicator Targets Assign a target for each indicator. Considerations may include maintaining performance if the current indicator result is ahead of a benchmark, attempting to reach a benchmark if performance is behind ideal performance, or making progress towards the benchmark should it be deemed unattainable within the period in which the indicator is being measured.
Prove Assess Data Quality Validate the final list of indicators by testing data quality. Processes may wish to defer the setting of specific indicator targets until after this phase to ensure targets are based on valid data trends.
Validate with End-Users Seek feedback from end-users on the relevance of the final set of indicators and targets to their environment and performance requirements, and on whether the identified targets motivate end-users to implement improvement actions.
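
For organizations using the framework as a self-assessment checklist, the sketch below transcribes Table 5’s five domains and 17 elements into a machine-readable form; the audit helper is a hypothetical illustration, not part of the published framework.

```python
# The 5-P framework's domains and elements, transcribed from Table 5
FIVE_P_FRAMEWORK: dict[str, list[str]] = {
    "Purpose": ["Clarify Aim", "Develop Guiding Principles", "Identify Level of Use"],
    "Polity": ["Build Governance Structures", "Recruit Participants"],
    "Prepare": [
        "Seek End-User Input",
        "Research Evidence-Based Literature",
        "Build an Inventory of Potential Indicators",
        "Categorize Potential Indicators into Strategic Themes",
        "Orient and Train Participants",
    ],
    "Procedure": [
        "Utilize a Consensus Building Method",
        "Identify a Facilitator",
        "Establish Indicator Selection Criteria",
        "Analytically Assess Indicators",
        "Set Indicator Targets",
    ],
    "Prove": ["Assess Data Quality", "Validate with End-Users"],
}

def audit(completed: set[str]) -> dict[str, list[str]]:
    """Return the elements of each domain not yet addressed by a process."""
    return {domain: [e for e in elements if e not in completed]
            for domain, elements in FIVE_P_FRAMEWORK.items()}

# Hypothetical self-assessment: a process that has only clarified its aim
print(audit({"Clarify Aim"})["Purpose"])  # ['Develop Guiding Principles', 'Identify Level of Use']
```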

Whereas previously published constructs such as the Appraisal of Indicators through Research and Evaluation (AIRE) Instrument60 and the Quality Indicator Critical Appraisal (QICA) tool8 suggest criteria to guide which individual indicators should be considered, the 5-P Indicator Selection Process Framework offers a standardized structure that governs and guides the overall selection process. Organizations that are mature in their performance measurement capabilities may use the framework to assess their current process and identify targeted opportunities for improvement. Less mature organizations, and organizations undergoing transformations that may influence the number or type of indicators they measure, should consider adopting the framework as a whole. By adopting the framework, organizations will have a clear purpose for selecting indicators; adopt governance models that enhance equitable participation from multiple stakeholder groups, including patients; select indicators based on evidence-based criteria; and ensure indicators match end-users’ needs by validating any final set of indicators.

Limitations

The scoping review focused on clinical services generally found within acute care hospital settings. Future research should include articles on primary care and post-acute care to validate or extend the proposed framework. Only one individual screened and reviewed the papers in this review. To mitigate potential biases, the reviewer regularly debriefed with other members of the research team on inclusion and exclusion decisions. The 5-P Indicator Selection Process Framework is the result of a scoping review and has not been validated in real-world settings. Future research may involve validating the framework by assessing it in practice.

Conclusion

This paper began by describing the proliferation of measurement in health care and the risks associated with inconsistent indicator selection processes. The overabundance of indicators has paralyzed decision-making and eroded trust between those who ask for indicators and those who are expected to use them to make change. Many policy institutes and academics have called for a more appropriate, lower number of indicators. Indicator selection or reduction processes cannot occur by happenstance. The adoption or elimination of indicators should be guided by the 5-P Indicator Selection Process Framework to ensure a systematic, evidence-based, and inclusive approach that engages measurement experts and those who use indicators to monitor and improve performance in both selection and validation.

The 5-P Indicator Selection Process Framework provides a practical, standardized structure that health care agencies, hospitals, and clinical disciplines can use to guide the selection of performance indicators and targets. The 5-P Indicator Selection Process Framework may also act as an implementation framework by which researchers evaluate how health care agencies select indicators and targets.

Disclosure

The authors report no conflict of interest in this work.

References

1. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145–2147. doi: 10.1056/NEJMp1408345
2. Panzer RJ, Gitomer RS, Greene WH, Webster PR, Landry KR, Riccobono CA. Increasing demands for quality measurement. JAMA. 2013;310(18):1971–1980. doi: 10.1001/jama.2013.282047
3. Lawrence M, Olesen F. Indicators of quality in health care. Eur J Gen Pract. 1997;3(3):103–108. doi: 10.3109/13814789709160336
4. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf. 2012;21(11):964–968. doi: 10.1136/bmjqs-2012-001081
5. Greenburg A, Dale A. Measuring what matters in hospitals. Health Quality Ontario (HQO); 2021. Available from: https://www.hqontario.ca/Blog/hospital-care/measuring-what-matters-in-hospitals. Accessed April 5, 2022.
6. Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Intern Med J. 2012;42(5):569–574. doi: 10.1111/j.1445-5994.2012.02766.x
7. Smith PC, Mossialos E, Papanicolas I. Performance measurement for health system improvement: experiences, challenges and prospects: background document 2. World Health Organization Regional Office for Europe; 2008. Available from: https://apps.who.int/iris/handle/10665/350328. Accessed April 5, 2022.
8. Jones P, Shepherd M, Wells S, Le Fevre J, Ameratunga S. What makes a good healthcare quality indicator? A systematic review and validation study. Emerg Med Austral. 2014;26(2):113–124. doi: 10.1111/1742-6723.12195
9. Campbell SM, Braspenning JA, Hutchinson A, Marshall M. Research methods used in developing and applying quality indicators in primary care. BMJ Qual Saf Health Care. 2002;11(4):358–364. doi: 10.1136/qhc.11.4.358
10. Teare GF. Measurement of quality and safety in healthcare: the past decade and the next. Healthc Quart. 2014;17:45–50. doi: 10.12927/hcq.2014.23950
11. Perla R. Commentary: health systems must strive for data maturity. Am J Med Qual. 2013;28(3):263–264. doi: 10.1177/1062860612465000
12. Blumenthal D, McGinnis JM. Measuring vital signs: an IOM report on core metrics for health and health care progress. JAMA. 2015;313(19):1901–1902. doi: 10.1001/jama.2015.4862
13. Canadian Institute for Health Information and Statistics Canada. Rethink, renew, retire: report from the fourth consensus conference on evaluating priorities for Canada’s health indicators; 2015. Available from: https://secure.cihi.ca/free_products/Rethink_Renew_Retire.pdf. Accessed April 5, 2022.
14. National Quality Forum. Committee guidebook for the NQF measure endorsement process; 2019. Available from: https://www.qualityforum.org/Measuring_Performance/Measuring_Performance.aspx. Accessed April 5, 2022.
15. Madsen MM, Eiset AH, Mackenhauer J, et al. Selection of quality indicators for hospital-based emergency care in Denmark, informed by a modified-Delphi process. Scand J Trauma Resusc Emerg Med. 2016;24(1):11. doi: 10.1186/s13049-016-0203-x
16. Schull MJ, Guttmann A, Leaver CA, et al. Prioritizing performance measurement for emergency department care: consensus on evidence-based quality of care indicators. CJEM. 2011;13(5):300–309, E28–E43. doi: 10.2310/8000.2011.110334
17. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med. 2021;18(3):e1003583. doi: 10.1371/journal.pmed.1003583
18. Mays N, Pope C, Popay J. Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. J Health Serv Res Policy. 2005;10(1 Suppl):6–20. doi: 10.1258/1355819054308576
19. Aktaa S, Batra G, Wallentin L, et al. European Society of Cardiology methodology for the development of quality indicators for the quantification of cardiovascular care and outcomes. Eur Heart J Qual Care Clin Outcomes. 2020. doi: 10.1093/ehjqcco/qcaa069
20. Bianchi V, Spitale A, Ortelli L, Mazzucchelli L, Bordoni A. Quality indicators of clinical cancer care (QC 3) in colorectal cancer. BMJ Open. 2013;3(7):e002818. doi: 10.1136/bmjopen-2013-002818
21. Bramesfeld A, Wrede S, Richter K, et al. Development of quality indicators and data assessment strategies for the prevention of central venous catheter-related bloodstream infections (CRBSI). BMC Infect Dis. 2015;15:435. doi: 10.1186/s12879-015-1200-9
22. Casey MM, Moscovice I, Klingner J, Prasad S. Rural relevant quality measures for critical access hospitals. J Rural Health. 2013;29(2):159–171. doi: 10.1111/j.1748-0361.2012.00420.x
23. Chrusch CA, Martin CM; Project TQIICC. Quality improvement in critical care: selection and development of quality indicators. Can Respir J. 2016;2016:2516765. doi: 10.1155/2016/2516765
24. Elliot C, Mcullagh C, Brydon M, Zwi K. Developing key performance indicators for a tertiary children’s hospital network. Aust Health Rev. 2018;42(5):491–500. doi: 10.1071/AH17263
25. Emond YE, Stienen JJ, Wollersheim HC, et al. Development and measurement of perioperative patient safety indicators. Br J Anaesth. 2015;114(6):963–972. doi: 10.1093/bja/aeu561
26. Fekri O, Leeb K, Gurevich Y. Systematic approach to evaluating and confirming the utility of a suite of national health system performance (HSP) indicators in Canada: a modified Delphi study. BMJ Open. 2017;7(4):e014772. doi: 10.1136/bmjopen-2016-014772
27. Goldfarb M, Bibas L, Newby LK, et al. Systematic review and directors survey of quality indicators for the cardiovascular intensive care unit. Int J Cardiol. 2018;260:219–225. doi: 10.1016/j.ijcard.2018.02.113
28. Grace SL, Poirier P, Norris CM, Oakes GH, Somanader DS, Suskin N. Pan-Canadian development of cardiac rehabilitation and secondary prevention quality indicators. Can J Cardiol. 2014;30(8):945–948. doi: 10.1016/j.cjca.2014.04.003
29. Gurvitz M, Marelli A, Mangione-Smith R, Jenkins K. Building quality indicators to improve care for adults with congenital heart disease. J Am Coll Cardiol. 2013;62(23):2244–2253. doi: 10.1016/j.jacc.2013.07.099
30. Guth RM, Storey PE, Vitale M, et al. Decision analysis for metric selection on a clinical quality scorecard. Am J Med Qual. 2016;31(5):400–407. doi: 10.1177/1062860615589117
31. Mangione-Smith R, Schiff J, Dougherty D. Identifying children’s health care quality measures for Medicaid and CHIP: an evidence-informed, publicly transparent expert process. Acad Pediatr. 2011;11(3):S11–S21. doi: 10.1016/j.acap.2010.11.003
32. Martinez DA, Kane EM, Jalalpour M, et al. An electronic dashboard to monitor patient flow at the Johns Hopkins Hospital: communication of key performance indicators using the Donabedian model. J Med Syst. 2018;42(8):133. doi: 10.1007/s10916-018-0988-4
33. Mazzone PJ, Vachani A, Chang A, et al. Quality indicators for the evaluation of patients with lung cancer. Chest. 2014;146(3):659–669. doi: 10.1378/chest.13-2900
34. Moehring RW, Anderson DJ, Cochran RL, et al. Expert consensus on metrics to assess the impact of patient-level antimicrobial stewardship interventions in acute-care settings. Clin Infect Dis. 2017;64(3):377–383. doi: 10.1093/cid/ciw787
35. Morris AM, Brener S, Dresser L, et al. Use of a structured panel process to define quality metrics for antimicrobial stewardship programs. Infect Control Hosp Epidemiol. 2012;33(5):500–506. doi: 10.1086/665324
36. Perera R, Dowell A, Crampton P. Painting by numbers: a guide for systematically developing indicators of performance at any level of health care. Health Policy. 2012;108(1):49–59. doi: 10.1016/j.healthpol.2012.07.008
37. Profit J, Gould JB, Zupancic JAF, et al. Formal selection of measures for a composite index of NICU quality of care: Baby-MONITOR. J Perinatol. 2011;31(11):702–710. doi: 10.1038/jp.2011.12
38. Reiter A, Geraedts M, Jäckel W, Fischer B, Veit C, Döbler K. Selection of hospital quality indicators for public disclosure in Germany. Z Evid Fortbild Qual Gesundhwes. 2011;105(1):44–48. doi: 10.1016/j.zefq.2010.12.024
39. Sauvegrain P, Chantry AA, Chiesa-Dubruille C, Keita H, Goffinet F, Deneux-Tharaux C. Monitoring quality of obstetric care from hospital discharge databases: a Delphi survey to propose a new set of indicators based on maternal health outcomes. PLoS One. 2019;14(2):e0211955. doi: 10.1371/journal.pone.0211955
40. Schnitker LM, Martin-Khan M, Burkett E, Beattie ERA, Jones RN, Gray LC. Process quality indicators targeting cognitive impairment to support quality of care for older people with cognitive impairment in emergency departments. Acad Emerg Med. 2015;22(3):285–298. doi: 10.1111/acem.12616
41. Science M, Timberlake K, Morris A, Read S, Le Saux N. Quality metrics for antimicrobial stewardship programs. Pediatrics. 2019;143(4). doi: 10.1542/peds.2018-2372
42. SooHoo NF, Lieberman JR, Farng E, Park S, Jain S, Ko CY. Development of quality of care indicators for patients undergoing total hip or total knee replacement. BMJ Qual Saf. 2011;20(2):153–157. doi: 10.1136/bmjqs.2009.032524
43. Stang AS, Straus SE, Crotts J, Johnson DW, Guttmann A. Quality indicators for high acuity pediatric conditions. Pediatrics. 2013;132(4):752–762. doi: 10.1542/peds.2013-0854
44. Stegbauer C, Willms G, Kleine-Budde K, Bramesfeld A, Stammann C, Szecsenyi J. Development of indicators for a nationwide cross-sectoral quality assurance procedure for mental health care of patients with schizophrenia, schizotypal and delusional disorders in Germany. Z Evid Fortbild Qual Gesundhwes. 2017;126:13–22. doi: 10.1016/j.zefq.2017.07.006
45. Thern J, de With K, Strauss R, Steib-Bauert M, Weber N, Kern WV. Selection of hospital antimicrobial prescribing quality indicators: a consensus among German antibiotic stewardship (ABS) networkers. Infection. 2014;42(2):351–362. doi: 10.1007/s15010-013-0559-z
46. Tsiamis E, Millar J, Baxi S, et al. Development of quality indicators to monitor radiotherapy care for men with prostate cancer: a modified Delphi method. Radiother Oncol. 2018;128(2):308–314. doi: 10.1016/j.radonc.2018.04.017
47. van der Wees PJ, Verkerk EW, Verbiest MEA, et al. Development of a framework with tools to support the selection and implementation of patient-reported outcome measures. J Patient Rep Outcomes. 2019;3(1):75. doi: 10.1186/s41687-019-0171-9
48. Van Grootven B, McNicoll L, Mendelson DA, et al. Quality indicators for in-hospital geriatric co-management programmes: a systematic literature review and international Delphi study. BMJ Open. 2018;8(3):e020617. doi: 10.1136/bmjopen-2017-020617
49. van Heurn E, de Blaauw I, Heij H, et al. Quality measurement in neonatal surgical disorders: development of clinical indicators. Eur J Pediatr Surg. 2015;25(6):526–531. doi: 10.1055/s-0034-1396416
50. Wood L, Bjarnason GA, Black PC, et al. Using the Delphi technique to improve clinical outcomes through the development of quality indicators in renal cell carcinoma. J Oncol Pract. 2013;9(5):e262–e267. doi: 10.1200/JOP.2012000870
51. Health Quality Ontario (HQO). Hospital sector indicator reduction and management strategy; 2016. Available from: https://www.hqontario.ca/System-Performance/Measuring-System-Performance/Hospital-Sector-Indicator-Reduction-and-Management-Strategy. Accessed April 5, 2022.
52. Ontario Hospital Association and Health Quality Ontario. A sustainable indicator reduction and management strategy for the Ontario hospital sector; 2019. Available from: https://www.oha.com/health-system-transformation/high-performing-health-system/a-sustainable-indicator-reduction-and-management-strategy. Accessed April 5, 2022.
53. Health Quality & Safety Commission. Health quality & safety indicators: summary feedback document; 2012. Available from: https://www.hqsc.govt.nz/assets/Health-Quality-Evaluation/PR/HQSI-Feedback-and-Engagement-document-July12-web.pdf. Accessed April 5, 2022.
54. The King’s Fund. Getting the measure of quality: opportunities and challenges; 2010. Available from: https://www.kingsfund.org.uk/publications/getting-measure-quality. Accessed April 5, 2022.
55. National Institute for Health and Care Excellence. NICE indicator process guide; 2019. Available from: https://www.nice.org.uk/Media/Default/Get-Involved/Meetings-In-Public/Indicator-Advisory-Committee/Ioc-Process-Guide.Pdf. Accessed April 5, 2022.
56. The Health Foundation. The measurement maze; 2019. Available from: https://www.health.org.uk/publications/reports/the-measurement-maze. Accessed April 5, 2022.
57. HANYS. Quality measurement: focus on the measures that matter; 2016. Available from: https://www.hanys.org/communications/pr/2016/2016-04-20_quality_measurements_focus_on_measures_that_matter.cfm. Accessed April 5, 2022.
58. National Quality Forum. Measure sets and measurement systems: multistakeholder guidance for design and evaluation; 2020. Available from: https://www.qualityforum.org/Publications/2020/07/Measure_Sets_and_Measurement_Systems__Multistakeholder_Guidance_for_Design_and_Evaluation.aspx. Accessed April 5, 2022.
59. Institute of Medicine. Vital signs: core metrics for health and health care progress. Washington, DC: The National Academies Press; 2015. Available from: https://www.nap.edu/catalog/19402/vital-signs-core-metrics-for-health-and-health-care-progress. Accessed April 5, 2022.
60. De Koning J, Smulders A, Klazinga N. The Appraisal of Indicators Through Research and Evaluation (AIRE) Instrument. Amsterdam: Academic Medical Center; 2006.
61. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–473. doi: 10.7326/M18-0850
