Abstract
Background
This study explores the meaning of actionable healthcare performance indicators for quality of care-related decisions. To do so, we analyse the constructs of fitness for purpose and fitness for use across healthcare systems and in practice based on the literature, expert opinion and user experience.
Methods
A multiphase qualitative study was undertaken. Phases included a literature review, a first round of one-on-one interviews with a panel of academics and thought leaders in the field (n=16), and a second round of interviews with real-world users of performance indicators (n=16). Thematic analysis was conducted between phases in order to triangulate findings in a stepwise process.
Results
Common uses of healthcare performance indicators were differentiated within micro-meso-macro contexts of healthcare systems. Each purpose of use signals different decision-making tasks and, in effect, different information needs. An indicator’s fitness for use can be appraised by three clusters of considerations: methodological, contextual and managerial. Methodological considerations gauge an indicator’s perceived importance, engagement potential, interpretability, standardisation, feasibility of remedial actions, alignment to care models and sensitivity to change. Information infrastructure, system governance, workforce capacity and learning culture emerged as enabling contextual considerations. Managerial considerations influencing an indicator’s use in practice were found to span the selection of indicators, data collection, analysis, display of results and delivery of information to decision-makers.
Conclusion
The actionability of a healthcare performance indicator should be appraised by its alignment with the intended purpose of use beyond aggregate healthcare system levels, in combination with the extent to which methodological, contextual and managerial fitness for use considerations are met. A better balance between the weight given to an indicator’s statistical merits and the emphasis placed on its fitness for purpose and use is needed for indicators that are ultimately actionable for quality of care-related decision-making.
Keywords: quality measurement, health services research, healthcare quality improvement, management, performance measures
Introduction
Healthcare performance measurement, and its use as performance intelligence, plays an important role in guiding the decisions of healthcare system actors with respect to quality of care.1 Since the early 2000s, the importance of performance measurement in healthcare,2 its institutionalisation as standard practice within3 and across healthcare systems,4–6 and more recently its professionalisation7 have received widespread prioritisation. This attention has increased scientific rigour around criteria for selecting indicators (eg, reliability, validity),8 9 development of indicator sets (eg, parsimony, epidemiological relevance),10 and methods, tools and approaches to guide these processes.11–13
Importantly, adherence to agreed-upon criteria for a statistically sound indicator does not guarantee that it is useful for decision-making. The information needs of decision-makers across healthcare systems, including policy-makers, managers, clinicians and patients, are varied. The type of indicator, data sources, level of precision, timeliness and relevant comparisons are among the key differences.1 14 15 For example, working to improve antibiotic prescribing, a primary care clinician may assess new prescribing and represcribing of antibiotics in their practice quarterly; an insurer, the adherence of practices to prescribing guidelines for issuing payment incentives annually; and a policy-maker, the total volume of antibiotics prescribed per 100 000 population by region, nationally and in comparison with other countries by policy cycle.
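To make these differing information needs concrete, the sketch below summarises a single prescribing dataset three ways, once for each decision-maker in the example. It is purely illustrative: the records, field names and figures are invented, and the code is ours rather than anything used or proposed in the study.

```python
# Illustrative only: one prescribing dataset, three decision-makers' views.
# All records, field names and figures below are invented for this sketch.
from collections import defaultdict

prescriptions = [  # one record per antibiotic prescription
    {"practice": "A", "region": "North", "new": True, "guideline_adherent": True},
    {"practice": "A", "region": "North", "new": False, "guideline_adherent": True},
    {"practice": "B", "region": "South", "new": True, "guideline_adherent": False},
]
region_population = {"North": 250_000, "South": 180_000}

# Micro: a clinician's quarterly view of new vs repeat prescribing in practice "A".
mine = [p for p in prescriptions if p["practice"] == "A"]
print("practice A new:", sum(p["new"] for p in mine),
      "repeat:", sum(not p["new"] for p in mine))

# Meso: an insurer's annual guideline-adherence rate per practice for incentives.
adherence = defaultdict(list)
for p in prescriptions:
    adherence[p["practice"]].append(p["guideline_adherent"])
for practice, flags in adherence.items():
    print("practice", practice, "adherence:", sum(flags) / len(flags))

# Macro: a policy-maker's prescribing volume per 100 000 population by region.
volume = defaultdict(int)
for p in prescriptions:
    volume[p["region"]] += 1
for region, n in volume.items():
    print(region, "per 100 000:", n / region_population[region] * 100_000)
```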
In effect, the ability of an indicator to meet the information needs of decision-makers goes beyond its statistical quality and is rather a measure of its actionability. To be actionable, it is generally agreed an indicator should be both fit for purpose—serving an intended decision-making function—and fit for use—getting the right information into the right hands at the right time.16–18 While there is agreement on the importance of actionability,18–20 and increasing attention paid to its two main constructs of fitness for purpose and use, it remains an elusive concept to define, assess and operationalise. In the absence of a common understanding of the meaning of actionability, the tendency to select indicators on the merit of their potential to be actionable persists.18 21–23 And while there are implicit criteria that appear to influence the actual use of indicators, such as data availability and ease of interpretation,1 15 24–26 how these relate across different healthcare systems remains underexplored.1 14 15
With the advancement of information systems and data analytics, there has been impressive growth in the speed, volume and range of data available for performance measurement.27 28 COVID-19 and the ensuing surge in reported performance data are evidence of this.29 30 The surge also illustrates that an abundance of information does not translate into informed decisions. Our attention is increasingly called to this fact and to the work still needed to advance methods for measuring quality of care31–33 and patient safety34 in order to obtain additional value from our data-rich systems.35–38
In this study, we set out with the aim to gain further insights into the meaning of actionable healthcare performance indicators for quality of care-related decision-making across healthcare systems. To do so, we explore the notions of fitness for purpose and fitness for use derived through the existing literature, expert opinion and the experiences of data users in varied developed country contexts. We pose two questions. The first aims to differentiate an indicator’s purpose of use by micro-meso-macro decision-making level, investigating the uses of healthcare performance indicators across healthcare systems. The second aims to consolidate the determinants of an indicator’s fitness for use, exploring the key considerations that influence an indicator’s use.
Methods
Design
We applied qualitative methods39 in a multiphase approach, comprising a review to examine actionability according to the published literature40 and multiple perspective semistructured interviews39 41 42 to gain insights from two groups (panels) representing the scientific community and data users. Following the literature review, we employed one-on-one interviews rather than a questionnaire or focus groups to allow richer exchanges and to elicit the individual opinions of each participant.43 Our stepwise approach to analysis allowed for the triangulation of findings across phases and the aggregation of individual-level results into panel-wide themes.42 The study adheres to the Consolidated Criteria for Reporting Qualitative Research.39
An indicator refers to a quantifiable variable, typically measured over time,9 45 that provides simplified information about a larger area of interest.44 In the scope of this study, we focus on healthcare performance indicators: indicators for quality of care-driven decision-making to improve performance on one or more of the six dimensions of quality: safe, effective, patient-centred, timely, efficient and equitable care.8 46 As an exploratory study, we prioritised the generalisability of findings and were inclusive of varied types of healthcare (eg, primary, acute, specialist, long-term care), settings (eg, primary care, hospitals), health system types and countries, although limited to developed country contexts.
To explore our first research question, we took as a basis the characterisation of decision-making in healthcare systems by three contexts: patient care (micro-level), organisational (meso-level) and policy (macro-level), as illustrated in figure 1.47 48 Indicators are used to inform decisions in each context, be it quality improvement, services management, population health planning or other strategic and tactical choices.
Figure 1.
Decision-making contexts across healthcare systems.
Data collection and analysis
Phase 1: literature review and content analysis
We reviewed the existing literature with the following aims: to examine the current scientific understanding of actionable healthcare performance indicators; to generate an initial core list of indicator purposes of use and fitness for use considerations; and to identify leading experts in the field. Our search was conducted using PubMed at the outset of the study in early 2019. The search was limited to the past 10 years and to articles published in English, using the following key terms in varied combinations: health care performance indicator, actionability, quality of care, measurement and use. We also reviewed reports of relevant international organisations and networks, namely the WHO and its regional offices, the Organisation for Economic Co-operation and Development (OECD), and the European Commission Expert Group on Health Systems Performance Assessment. Reference lists of the articles and reports identified were reviewed in a snowballing approach.
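As an aside for readers who wish to approximate this search, the sketch below shows one way such a query could be issued programmatically. The query string is a hypothetical reconstruction from the key terms listed above, not the authors' exact strategy, and the email address is a placeholder.

```python
# Hypothetical reconstruction of the phase 1 PubMed search; the authors'
# exact query strings are not reported. Requires: pip install biopython
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI asks for a contact email

# One of several possible combinations of the key terms named in the methods,
# restricted to English; dates cover the 10 years preceding the early 2019 search.
query = ('("health care performance indicator*" OR "performance indicator*") '
         'AND "quality of care" AND (actionab* OR measurement OR use) '
         'AND english[lang]')

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2009", maxdate="2019", retmax=200)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first IDs:", record["IdList"][:5])
```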
The results of the initial literature search were synthesised and used to inform a provisional approach and visualisation of the uses of healthcare performance indicators by micro-meso-macro context. Recurrent fitness for use considerations were also distilled and clustered. These findings were prepared as an expert panel brief for use as a background document in the second phase (online supplemental appendix 1).
Phase 2: interviews with expert panel and thematic analysis
The first panel aimed to engage prominent academics and thought leaders in the field of healthcare performance measurement and quality of care (hereafter, expert panel). Experts were identified based on the authorship of the literature reviewed and with consideration to the following criteria for the panel’s composition: a balance of expertise in areas related to quality of care, performance measurement, governance, data and information systems or management; senior academic or technical roles related to their area of expertise; and affiliation to varied healthcare systems and geographical contexts. A target of 15 experts was pursued for manageability and presumed saturation.49
One pilot interview was conducted to ensure relevance and clarity. Piloting resulted in the addition of illustrative examples of data users and fitness for use considerations. Panellists were invited to participate via email and received a panel brief in advance. The brief provided relevant study details together with the findings of phase 1. All interviews were conducted by the primary researcher (EB, female), who has experience in semistructured interviewing and subject matter expertise. Interviews took place between August and September 2019, in person or at a distance depending on the proximity and preference of panellists, and lasted between 45 and 60 min. Records of the interviews were prepared as detailed summaries rather than verbatim transcripts, following the approach described by Halcomb and Davidson.50 The research adheres to the Dutch Medical Research Involving Human Subjects Act (Wet medisch-wetenschappelijk onderzoek met mensen, WMO),51 under which verbal consent was deemed adequate by the authors as no human data were retained. To ensure informed voluntary participation, participants provided written agreement to participate during recruitment and verbally restated their consent at the start of each interview.
The interview records of this first panel were stored in an Excel-based tool for thematic analysis (EB). The analysis combined deductive and inductive approaches: topics explored in the interviews (online supplemental appendix 1) guided the deductive thematic analysis,52 and new themes that emerged were identified inductively.53 The data were also interpreted by redrawing conceptual diagrams. Two other researchers (DSK, NSK) with complementary expertise in quality of care, performance measurement, health governance and management reviewed the findings to ensure consistency and reach agreement on the themes extracted.
Phase 3: interviews with user panel and thematic analysis
The findings from the expert panel were used to refine the mapping of uses of healthcare performance indicators by micro-meso-macro level and the fitness for use themes. The revisions were summarised in a new brief prepared for a second panel of one-on-one interviews (online supplemental appendix 2). This panel aimed to engage real-world data users for their first-hand experiences using healthcare performance indicators for quality of care-related decision-making (hereafter, user panel).
A target of 15 data users actively contributing to the further development of this field was pursued. The selection drew on existing membership lists of international networks, working groups and projects related to healthcare performance indicators, measurement and quality of care, such as the OECD Health Care Quality Indicators Project54 and initiatives of the European Commission (eg, HealthPros55). The panel composition aimed to capture a range of perspectives, with representation of differing health system types, country affiliations and uses of healthcare performance indicators. Interviews were conducted in the same manner as the first panel and were completed between November 2019 and January 2020.
Interview records were consolidated in the existing Excel-based tool for further thematic analysis. The topics and themes explored were used to refine and/or confirm the classification of uses of healthcare performance indicators and fitness for use considerations resulting from the expert panel. Observing the convergence of themes, data collection and analysis were considered complete with this phase.
Results
Literature review and panel results
Based on the literature synthesis, 19 experts were identified and invited to participate in the first panel. Of these, 16 agreed to participate. Non-participants were either unreachable (n=1), unavailable (n=1) or referred us to an alternative contact (n=1). Together, expert panellists had published more than 50 articles or reports on the use, selection or improvement of healthcare performance indicators at the time of the study. This literature (online supplemental appendix 3) was reviewed in phase 1 together with other relevant works.22 34 44 48 56–64 Expert panellists were predominantly affiliated to academia, held senior or executive roles and spanned eight countries (Australia, Canada, Denmark, Germany, Italy, the Netherlands, UK and USA). A range and balance of areas of expertise was achieved, including performance measurement, quality of care, governance, information systems and management.
The user panel comprised participants spanning the micro-level, meso-level and macro-level of healthcare systems. Participants included representatives of national health authorities, health standards and accreditation agencies, insurers and professional associations, as well as clinicians and patient advocates. In total, 31 potential participants were contacted, of whom 16 agreed to participate (online supplemental appendix 3). Non-participants gave the same types of reasons as for the first panel, with the majority (n=6) referring us to an alternative contact and the remainder being either unreachable (n=5) or unavailable (n=4). User panellists spanned seven countries (Belgium, Canada, Germany, Ireland, the Netherlands, UK and USA). Table 1 summarises the key characteristics of panellists.
Table 1.
Characteristics of panellists
| Expert panel | n (%) | User panel | n (%) |
| Total | 16 (–) | Total | 16 (–) |
| Affiliation* | | Uses | |
| Academia | 10 (63) | Macro | 7 (44) |
| International organisation | 3 (19) | Meso | 4 (25) |
| Think tank | 3 (19) | Micro | 3 (19) |
| Expertise | | Organisation type | |
| Measurement | 5 (31) | Government | 5 (31) |
| Quality of care | 3 (19) | Health services | 4 (25) |
| Governance | 3 (19) | Standards | 3 (19) |
| Information systems | 3 (19) | Research | 2 (13) |
| Management | 2 (13) | Improvement | 2 (13) |
| Region | | Region | |
| Europe | 9 (56) | Europe | 9 (56) |
| North America | 5 (31) | North America | 7 (44) |
| Oceania | 2 (13) | – | – |
| Sex | | Sex | |
| Male | 11 (69) | Male | 9 (56) |
| Female | 5 (31) | Female | 7 (44) |
*Primary affiliations.
From the literature reviewed, 11 clusters of uses of healthcare performance indicators and fitness for use considerations related to the methodological quality of an indicator were identified (figure 2). In the second phase, experts agreed on the relevance and importance of distinguishing purposes of use of healthcare performance indicators beyond the aggregate micro-level, meso-level and macro-level. The panel held strong views that a hierarchy within levels should be avoided, finding it introduced a rigidity that may not translate across contexts. Rather, the framing of identified uses as common or frequent was found more transferable.
Figure 2.
Summary of key findings across study phases. Note: boxes denote key themes emerging by study phase. Broken lines denote a change in level. Solid lines denote agreement between phases with possible adjustments to phrasing. Darker grey shading denotes the introduction of new elements. Ordering within cells is not indicative of importance.
The experts introduced further consistency, refinements and additional purposes of use and fitness for use considerations. Specifically, the uses of indicators for functions such as regulation or strategy development were differentiated from the mechanisms to achieve these functions, such as international comparisons or public reporting. Refinements to the distribution of uses across levels were introduced for consistency, for example, recategorising the improvement of organisations and networks to the meso-level. Additions included emphasis on the use of indicators by patients as decision-makers for informed choice and on the cross-cutting function of research. The clustering of fitness for use considerations was disaggregated, with emphasis on the importance of considering an indicator’s use in a specific setting (where it is used) and as a process (how it is used).
In the third phase, user panellists agreed with the categorisation of uses by micro-level, meso-level and macro-level. Accountability was viewed as an aim rather than a specific use, and external assessments were viewed as a mechanism. There were detailed discussions on fitness for use considerations, with agreement to classify as contextual those considerations that underscored the importance of the setting in which an indicator is used. The case was made to view practical considerations as managerial aspects related to the process of using indicators.
Purposes of use of healthcare performance indicators
Through our stepwise approach to data collection and analysis, common uses of healthcare performance indicators were differentiated beyond the aggregate decision-making contexts of patient care (micro-level), organisations (meso-level) and policy (macro-level). In table 2, we list the uses of healthcare performance indicators identified, each serving different managerial decision-making functions, users and information needs. The purposes of use are not exhaustive and may take varied forms across healthcare systems. Specifically, expert and user panellists noted variation in the degree of patient choice, the role of insurers and the mandate of professional bodies.
Table 2.
Differentiating uses of healthcare performance indicators across healthcare systems
| Context | Purpose of use | Illustrative uses | Illustrative users | Illustrative information need |
| Macro | System performance monitoring | Signalling the performance of the system as a whole; comparing performance internationally; publicly reporting system performance | Public; ministry of health; regional (provincial, state) authorities; health service executive (authority) | How is my healthcare system doing? How does it compare with others? |
| | Strategy development | Setting health policy priorities; identifying emerging health priority areas; monitoring trends in current priority areas | Government and ministries; regional (provincial, state) authorities; accountable care organisations; health maintenance organisations | Have I chosen the right areas to prioritise? What is the impact of strategies that are in place? |
| | System quality assurance | Measuring care processes; reporting of incidents and never events | Quality inspectorate; national quality observatory; health and safety executive | Is care being delivered as intended? Where do problems in the delivery of care lie? |
| Meso | Regulation (professional, facility, pharmaceuticals) | Informing accreditation, certification and/or licensing processes | Medical councils, chambers, colleges of physicians; medicines and healthcare products regulatory agencies | Does the performance of organisations, facilities, medicines, etc, meet established standards? |
| | Professional development | Reporting internally and benchmarking within profession or specialty | Societies of medical professionals; professional associations; training institutions | How do healthcare professionals of a specific specialty perform? |
| | Quality-based financing | Issuing performance-based payment (pay-for-performance); value-based contracting | Healthcare insurers; healthcare providers | Are existing guidelines or standards being adhered to? Does this merit the issuing of incentives? |
| | Organisation/network performance improvement | Improving performance of hospitals, networks and care groups; assessing local needs and geographical differences | Hospital management; integrated care networks/groups; local collaboratives of care | Are affiliated practices/facilities performing optimally? |
| Micro | Practice or team performance improvement | Convening audit and feedback, plan-do-study-act and/or collaborative, team-based improvement cycles; comparing across practices | Primary care practices; specialist departments or units; pathways of care | How is my team performing? How can we improve our performance? How do I perform relative to my team members? |
| | Individual performance improvement | Identifying trends in the management of patients; tailoring services to target groups | Individual physicians; nurse practitioners; other healthcare professionals | How am I managing my practice panel? How can I improve my performance? |
| | Informed choice | Selecting a healthcare provider; participating in care decision-making; self-managing care needs | Patients; family members and carers; public | What treatment options or providers are best for me? |
| Cross-cutting | Research | Exploring the use of indicators across contexts | Academia and academic networks; think tanks, research groups; topic-specific associations | Secondary user-directed |
The detailed differentiation of uses of healthcare performance indicators signals important, yet often overlooked, distinctions in information needs within system levels. To illustrate these differences, we take the macro-level as an example. While uses of healthcare performance indicators in this context share an overall aim of informing policy decisions, distinctions between uses include system performance monitoring—signalling to system stakeholders, often including the public, the performance of the system as a whole, answering ‘How is my healthcare system doing?’; strategy development—signalling to ministries, departments of health or similar bodies with the aim of identifying priority areas, monitoring trends and ultimately answering ‘Have I chosen the right areas to prioritise?’; and system quality assurance—informing decisions of health service executives, quality inspectorates or quality observatories with an overview of care processes and signalling of incidents, answering ‘Is care being delivered as intended?’
Fitness for use of healthcare performance indicators
Three main clusters of considerations influencing the second construct of actionability—fitness for use—were found. These include methodological, contextual and managerial considerations (table 3).
Table 3.
Overview of methodological, contextual and managerial fitness for use considerations
| Clusters | Considerations | Guiding questions for considering an indicator’s use |
| Methodological | | |
| | Measures what matters | Does anybody care? |
| | Wide engagement | What can we do? |
| | Easily interpreted | Does the indicator signal a clear direction? |
| | Clear standardisation | Is the indicator clearly defined and replicable? |
| | Alignment of accountability | Are entry points for taking action feasible? |
| | Measurement matches delivery | Is the indicator a reflection of the system? |
| | Sensitive to meaningful change | Is the indicator sufficiently sensitive to change? |
| Contextual | | |
| Information infrastructure | Interoperability | Can needed data be accessed? |
| | Data quality | Are the data of quality? |
| Governance | Political will and vision | Is there high-level commitment and direction for use? |
| | Regulation for data protection | Does existing legislation facilitate use? |
| | Cross-sector partnerships | Are cross-sector partnerships in place? |
| | Aligned financing structures | Do financing structures encourage the intended use? |
| Workforce capacity | Data and quality expertise | Are the competencies to interpret and use data in place? |
| | Time dedicated to improvement | Is time allocated to encourage use? |
| Culture | Learning orientation | Is an environment for learning cultivated? |
| | Shared responsibility for health | Do users feel accountable for improvement? |
| Managerial | | |
| Selecting healthcare performance indicators | Clear purpose of use | What is the purpose of use? (eg, strategy development) |
| | Target end user is known | Is the target audience known? (eg, clinicians, public) |
| | Conceptual framework | Is the dimension of quality pursued clear? |
| | Indicator quality | Is the indicator scientifically sound? |
| | Source, type and availability of data | What data are needed and are they available? (eg, administrative, clinical, survey data, wearables) |
| | Standards for appraisal | How will improvements in performance be assessed? |
| | Degree of public disclosure | Is the indicator for internal or external (public) use? |
| | Accompanying indicators | Are there relevant accompanying indicators? |
| | Previous use | Has the indicator been used previously? |
| Accessing data | Representativeness of data | Are the data complete? |
| | Data linkages | Can relevant data sources be linked? |
| | Data collection tools | How will data be collected? (eg, paper-based, automated electronic, manual electronic entry) |
| | Unity of language/coding | Is there consistency in coding across data to be used? |
| Applying methods of analysis | Type of analysis | How will the data be analysed? (eg, benchmarking, time trend, case mix correction) |
| | Aggregation of indicators | How can composites/indices be used to simplify data? |
| | Reference group | Who is the reference group? |
| | Breakdowns/cohorts | How will the data be disaggregated? (eg, age, sex, ethnicity, geography) |
| | Calculation of values | How will values be calculated? (eg, mean, median, SD, top 10% mean) |
| | Time interval | Should a time trend be reported and at what interval? |
| | Application of risk adjustments | How will risk adjustments be applied? (eg, variable specification, source, weighting scheme) |
| | Managing missing data | How will missing data points be handled? |
| | Contextualising data | What other data are needed to give the indicator meaning? |
| Displaying findings | Chart options | How will the data be visualised? (eg, chart, map, table) |
| | Simplification techniques | What techniques can be applied to simplify the meaning? (eg, colour, size variation, icons) |
| | Customisation of display | How can users customise the data? (eg, change of display, change of information) |
| | Narrated interpretation | How can the quality and the meaning of data be narrated? |
| | Format of reporting | How will it be reported? (eg, print, mobile, web-based) |
| Reaching decision-makers | Frequency of reporting | What is the relevant reporting cycle? (eg, real time, quarterly, annually, biennially) |
| | Dissemination channels | How will users be reached? (eg, mail, email, champions) |
| | Guidance on use | How can users be supported to make use of findings? |
Methodological considerations
Methodological considerations pertain to the indicator itself, albeit beyond its statistical quality. Seven recurrent considerations were identified. First, an indicator should measure what matters. User panellists emphasised the importance of the target audience caring about the results, explaining that an indicator which ‘moves’ people makes everyone uncomfortable that the right thing is not already being done. Second, the extent to which an indicator resonates with a range of stakeholders was emphasised as a key gauge of its ability to facilitate a ‘what can we do’ approach, rather than limiting action to an individual user.65 Third, an indicator’s inherent ease of interpretation was described by panellists and in the literature18 66 67 as strongly influencing an end user’s confidence in their interpretation of its meaning. Fourth, the extent to which an indicator is clearly defined was described as a key contributor to trust in what it signals, as well as to the likelihood of wide uptake. Fifth, an indicator should be able to be broken down into its constituent parts to make change points clear,8 with panellists finding an indicator remote or disconnected from a user’s performance difficult to act on.59 63 Sixth, an indicator should measure a phenomenon as true to lived experience as possible.27 68 The tendency to focus on specific (siloed) areas of care was described as reducing performance to overly narrow aspects of care and, as one user panellist described, missing the ‘system-ness’ of quality. Lastly, the ability of an indicator to be sufficiently sensitive to change, given its intended use, was described by both panels as intuitive, yet often a challenge for an indicator to meet.
Contextual considerations
Contextual considerations refer to critical factors pertaining to the setting in which an indicator is used. Four main clusters emerged. First, the information infrastructure was met with consensus across panellists as a key predictor of use, determining the ability to collect, store and extract information. Relevant considerations repeatedly raised included the interoperability of information systems (ie, linkages, output formats) and overall data quality (ie, consistency in fields, codes and maintenance). Second, characteristics of governance were emphasised, with panellists citing the importance of political will and vision, regulatory arrangements for data exchanges, as well as cross-sector partnerships and aligned financing structures. Third, workforce capacity considerations were underscored, specifically the data literacy skills of actors across the healthcare system and the availability of protected time for the healthcare workforce to use data. Lastly, pertaining to culture and professional norms, be it in clinical practice, healthcare organisations, professional networks or government agencies, the importance of a learning orientation and a shared sense of responsibility was emphasised as a predictor of the importance placed on measurement and ultimately the use of an indicator.
Managerial considerations
The importance of embedding indicators into performance management systems is well established.60 63 69–72 Based on the literature and insights from the panels, we conceptualised an indicator’s use cycle (figure 3). This cycle was used to consolidate considerations brought forward around embedding indicators into management systems to safeguard an indicator’s use in practice. The considerations reflect key decisions to be managed across the cycle and include selecting an indicator with consideration to define clear parameters of its intended use,18 38 73 gain clarity around its construction,60 assess data needs and define measurement considerations; accessing data to ensure data are available, of quality or can feasibly be collected48; applying methods of analysis for the relevant calculation of values that correspond to the intended purpose63; displaying findings, including decisions around how data are visualised74 and the degree of story-telling to describe and interpret results to support understanding of what is meant and any caveats48 75 76; and actually reaching decision-makers, with decisions needed as to the frequency of dissemination, channel used for delivering information and guidance (if any) to facilitate the use of information provided.63
Figure 3.
Use cycle for managing healthcare performance indicators.
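Read together, figure 3 and table 3 can be treated as a checklist to walk through for each candidate indicator. The sketch below is our illustration of that reading, pairing each use-cycle stage with example guiding questions taken from table 3; it is not a tool produced by the study.

```python
# Illustrative only: the five use-cycle stages from figure 3, each paired with
# example guiding questions drawn from table 3. Not a tool from the study.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    guiding_questions: list[str]

USE_CYCLE = [
    Stage("Selecting indicators", ["What is the purpose of use?",
                                   "Is the target audience known?",
                                   "Is the indicator scientifically sound?"]),
    Stage("Accessing data", ["Are the data complete?",
                             "Can relevant data sources be linked?"]),
    Stage("Applying methods of analysis", ["How will the data be analysed?",
                                           "How will risk adjustments be applied?"]),
    Stage("Displaying findings", ["How will the data be visualised?",
                                  "How will it be reported?"]),
    Stage("Reaching decision-makers", ["What is the relevant reporting cycle?",
                                       "How will users be reached?"]),
]

def review(indicator: str) -> None:
    """Print the checklist to walk through for a candidate indicator."""
    print(f"Use-cycle review for: {indicator}")
    for stage in USE_CYCLE:
        print(f"- {stage.name}")
        for question in stage.guiding_questions:
            print(f"    * {question}")

review("Antibiotic prescriptions per 100 000 population")
```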
Discussion
Principal findings
Healthcare performance indicators share a common aim to provide simplified, readily understood information to facilitate decision-making.9 44 45 An indicator’s ability to do so in practice extends beyond its statistical quality and is rather characterised by its actionability.16–18 67 In this study, we explored actionability through the two constructs of fitness for purpose and fitness for use, and we observe the following main findings towards their further operationalisation.
First, the different uses of an indicator within micro-meso-macro and research contexts stress the importance of clarity and precision regarding an indicator’s intended use. The relevance of such precision has been stressed in the literature15 18–20 23 and previously explored from the perspective of different end users.1 Our findings further differentiate uses of indicators across healthcare systems. While not pursuing a universal, exhaustive listing of purposes of use—recognising that varied healthcare system types and contextual considerations make such a listing unattainable—our findings signal the imperative of clarity regarding an indicator’s intended use and user in order to gauge its potential usefulness. The taxonomy of uses of healthcare performance indicators can be an input to further operationalise the construct of fitness for purpose.
Second, we find an indicator’s fitness for use is captured by three types of considerations: those relating to an indicator’s technical qualities, its intended context of use and its handling across what can be characterised as a use cycle. This means that, to gauge an indicator’s fitness for use, a range of considerations should be assessed, spanning, for example, ‘Does the indicator signal a clear direction?’ to ‘Can needed data be accessed?’ and ‘What is the relevant reporting cycle?’ The considerations listed (table 3), based on the literature and the views of panellists, are a testament to the wide range of variables weighing on an indicator’s use that require thoughtful handling.
Third, an indicator’s fitness for purpose and fitness for use should be taken together to appraise actionability. For example, a policy-maker may identify a target to be measured in the scope of a strategy, yet for this specific purpose fitness for use considerations may not be met due to information system constraints or other contextual limitations. In another instance, an indicator may meet fitness for use considerations yet lack a clear and specific purpose and, in effect, miss a target audience. In both cases, the actionability of the indicator is compromised.
Lastly, as the expertise and lived experience of panellists served to highlight, the actionability of an indicator is no guarantee of impact. Literature on the misuse and manipulation of data and on the unintended consequences of performance measurement depicts this.45 73 This distinction between action and impact underscores that while actionable healthcare performance indicators may be a precursor to better decision-making, the impact of an indicator rests on considerations of its own.
Applications and further research
This study has sought to consolidate the relevant literature and engage informants from differing contexts, areas of expertise and first-hand experiences for diverse insights. Future research should test the findings empirically, investigating purposes of use and fitness for use considerations by specific country contexts, governance structures, services delivery systems or areas of specialisation.
The findings of this study have a range of potential applications. In the context of the COVID-19 pandemic, actionable healthcare performance indicators have proven of paramount importance,29 77 and surges in publicly reported data illustrate the increased demand for information.78 79 The extent to which this information informs decision-making is a reflection of the alignment between an indicator’s intended purpose of use and related fitness for use considerations. The findings could also inform the selection of indicators for measurement frameworks and indicator sets that cascade across healthcare system levels by priority area (eg, tackling the misuse of antibiotic prescribing, strengthening integrated care), where different decision-making functions need to work in combination.
Limitations
These findings may not be generalisable beyond the context of developed countries. The effect of system conditions, such as level of decentralisation, public–private mix and development status, has not been captured or investigated given the targeted sample of informants and, as suggested, should be explored empirically. The initial literature review was limited to English-language materials, which may also affect the generalisability of findings; engaging expert panellists beyond English-speaking countries sought to minimise this. Some nuances may have been lost in choosing to summarise rather than transcribe interviews, although this approach was found better suited to the study aims and design. In exploring performance indicators in the scope of healthcare, the study has not captured the broader use of indicators for public health despite its importance. Distributing panellists between panels was at the discretion of the study team for the purposes of the two-panel design, although many participants held positions or memberships suitable to both; the value of engaging panellists from different perspectives and stages took precedence. The prominence of panellists meant some were known to the authors. To avoid bias, the interviewer with the least previous engagement with panellists was selected and conducted all interviews.
Conclusion
Clarifying the meaning of actionable healthcare performance indicators is a prerequisite to its further operationalisation. This study has explored the body of literature on the actionability of healthcare performance indicators for quality of care-related decision-making together with expert opinion and data user experiences in an effort to unpack the constructs of fitness for purpose and fitness for use. The study aimed to capture these constructs from a system perspective. The findings signal the importance of clarity and precision on an indicator’s purpose of use and context for the handling of methodological, contextual and managerial considerations weighing on its use in practice. A better balance between the weight given to an indicator’s statistical merits and the emphasis placed on its fitness for purpose and use is needed for indicators that are actionable for quality of care-related decision-making.
Acknowledgments
We thank the following individuals for their contribution at varied stages of this work: JohnMarc Alban; Thomas Boeckz; Jeffrey Braithwaite; Emma Cartwright; Louise Clement; Cheryl Damberg; Greg Dempsey; Gail Dobel; Tejal Gandhi; Oliver Groene; Torsten Hecke; Peter Hibbert; John Lavis; Doreen MacNeil; Jan Mainz; Stephanie Medlock; Rob Nelissen; Sabina Nuti; Jillian Oderkirk; Irene Papanicolas; Lotte Ramerman; Ari Robiscek; Alexandru Rotar; Eric Schneider; Peter Smith; Jorien Soethout; Juan Tello; Rob Tollenaar; Michael van den Berg; Jeremy Veillard; Robert Verheij; and Naira Yeitsvan.
Footnotes
Contributors: EB, NSK and DSK conceived and designed the study. EB collected the data. EB, NSK and DSK analysed the data and drafted the manuscript. All authors revised the manuscript.
Funding: This study was funded by H2020 Marie Skłodowska-Curie Actions (765141).
Competing interests: None declared.
Provenance and peer review: Not commissioned; externally peer reviewed.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Data availability statement
Data are available upon reasonable request to the corresponding author.
Ethics statements
Patient consent for publication
Not required.
References
1. Smith P. Part 1: Principles of performance measurement. In: Performance measurement for health system improvement: experiences, challenges and prospects. Copenhagen: WHO Regional Office for Europe, 2008.
2. WHO Regional Office for Europe. The Tallinn charter: health systems for health and wealth. Copenhagen: WHO Regional Office for Europe, 2008.
3. Fekri O, Macarayan ER, Klazinga N. Health system performance assessment in the WHO European Region: which domains and indicators have been used by Member States for its measurement? Health Evidence Network synthesis report 55. Copenhagen: WHO Regional Office for Europe, 2018.
4. World Health Organization. The world health report 2000: health systems: improving performance. Geneva: World Health Organization, 2000.
5. Kelley E, Hurst J. Health care quality indicators project: conceptual framework paper. OECD Health Working Papers No. 23. Paris: OECD, 2006.
6. Perić N, Hofmarcher-Holzhacker MM, Simon J. Health system performance assessment landscape at the EU level: a structured synthesis of actors and actions. Arch Public Health 2017;75:5. 10.1186/s13690-016-0173-5
7. Kringos DS, Groene O, Johnsen SP. Training the first generation of health care performance intelligence professionals in Europe and Canada. Acad Med 2019;94:747–8. 10.1097/ACM.0000000000002694
8. Mainz J. Developing evidence-based clinical indicators: a state of the art methods primer. Int J Qual Health Care 2003;15:i5–11. 10.1093/intqhc/mzg084
9. Mainz J. Defining and classifying clinical indicators for quality improvement. Int J Qual Health Care 2003;15:523–30. 10.1093/intqhc/mzg081
10. Boerma T, AbouZahr C, Evans D, et al. Monitoring intervention coverage in the context of universal health coverage. PLoS Med 2014;11:e1001728. 10.1371/journal.pmed.1001728
11. McGlynn EA. The outcomes utility index: will outcomes data tell us what we want to know? Int J Qual Health Care 1998;10:485–90. 10.1093/intqhc/10.6.485
12. de Koning J, Burgers J, Klazinga N. Appraisal of indicators through research and evaluation (AIRE). Amsterdam: University of Amsterdam, 2008.
13. Fitch K. The RAND/UCLA appropriateness method user's manual. Santa Monica, CA: RAND, 2001.
14. Quentin W. Measuring healthcare quality. In: Busse R, ed. Improving healthcare quality in Europe: characteristics, effectiveness and implementation of different strategies. Copenhagen: WHO and OECD, 2019.
15. Damberg CL, Sorbero ME, Lovejoy SL, et al. An evaluation of the use of performance measures in health care. Rand Health Q 2012;1:3.
16. OECD. Health in the 21st century: putting data to work for stronger health systems. Paris: OECD, 2019.
17. McDowell R. Signs to look for: criteria for developing and selecting fit for purpose indicators. Wellington, New Zealand: PricewaterhouseCoopers, 2017.
18. Smith P, Mossialos E, Papanicolas I. Performance measurement for health system improvement: experiences, challenges and prospects. Copenhagen: WHO Regional Office for Europe, 2008.
19. Carinci F, Van Gool K, Mainz J, et al. Towards actionable international comparisons of health system performance: expert revision of the OECD framework and quality indicators. Int J Qual Health Care 2015;27:137–46. 10.1093/intqhc/mzv004
20. European Commission Expert Group on Health System Performance Assessment. So what? Strategies across Europe to assess quality of care. Brussels: European Commission, 2016.
21. Smith P. Principles of performance measurement. In: Performance measurement for health system improvement. Cambridge: Cambridge University Press, 2009.
22. Klazinga N, Stronks K, Delnoij D, et al. Indicators without a cause: reflections on the development and use of indicators in health care from a public health perspective. Int J Qual Health Care 2001;13:433–8. 10.1093/intqhc/13.6.433
23. Hilarion P, Suñol R, Groene O, et al. Making performance indicators work: the experience of using consensus indicators for external assessment of health and social services at regional level in Spain. Health Policy 2009;90:94–103. 10.1016/j.healthpol.2008.08.002
24. Smith P. Health system performance assessment. Brussels: European Commission, 2014.
25. Secanell M, Groene O, Arah OA, et al. Deepening our understanding of quality improvement in Europe (DUQuE): overview of a study of hospital quality management in seven countries. Int J Qual Health Care 2014;26:5–15. 10.1093/intqhc/mzu025
26. iHD. TRANSFoRm: enriching knowledge and enhancing care through data, 2016. Available: https://www.i-hd.eu/index.cfm/resources/ec-projects-results/transform/
27. Nuti S. Let's play the patients music: a new generation of performance measurement systems in healthcare. Manag Decis 2018;56:2252–72. 10.1108/MD-09-2017-0907
28. Verheij R. Reuse of routine care data for policy and science: how things can be improved [Dutch]. Utrecht: Nivel, 2019.
29. Kringos D, Carinci F, Barbazza E, et al. Managing COVID-19 within and across health systems: why we need performance intelligence to coordinate a global response. Health Res Policy Syst 2020;18:80. 10.1186/s12961-020-00593-x
30. The Lancet Infectious Diseases. The COVID-19 infodemic. Lancet Infect Dis 2020;20:875. 10.1016/S1473-3099(20)30565-X
31. National Academies of Sciences, Engineering, and Medicine. Crossing the global quality chasm: improving health care worldwide. Washington, DC: The National Academies Press, 2018.
32. World Health Organization, World Bank Group, Organisation for Economic Co-operation and Development. Delivering quality health services: a global imperative for universal health coverage. Geneva: World Health Organization, 2018.
33. Kruk ME, Gage AD, Arsenault C, et al. High-quality health systems in the Sustainable Development Goals era: time for a revolution. Lancet Glob Health 2018;6:e1196–252. 10.1016/S2214-109X(18)30386-3
34. Salzburg Global Seminar. The Salzburg statement on moving measurement into action: global principles for measuring patient safety. Institute for Healthcare Improvement and Salzburg Global Seminar, 2019.
35. Panch T, Szolovits P, Atun R. Artificial intelligence, machine learning and health systems. J Glob Health 2018;8:020303. 10.7189/jogh.08.020303
36. Panch T, Pearson-Stuttard J, Greaves F, et al. Artificial intelligence: opportunities and risks for public health. Lancet Digit Health 2019;1:e13–14. 10.1016/S2589-7500(19)30002-0
37. Verheij RA, Curcin V, Delaney BC, et al. Possible sources of bias in primary care electronic health record data use and reuse. J Med Internet Res 2018;20:e185. 10.2196/jmir.9134
38. Braithwaite J, Hibbert P, Blakely B, et al. Health system frameworks and performance indicators in eight countries: a comparative international analysis. SAGE Open Med 2017;5:2050312116686516. 10.1177/2050312116686516
39. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007;19:349–57. 10.1093/intqhc/mzm042
40. Baumeister RF, Leary MR. Writing narrative literature reviews. Rev Gen Psychol 1997;1:311–20. 10.1037/1089-2680.1.3.311
41. Kvale S. Doing interviews. The SAGE qualitative research kit. Los Angeles, CA: Sage, 2009.
42. Vogl S, Schmidt E-M, Zartler U. Triangulating perspectives: ontology and epistemology in the analysis of qualitative multiple perspective interviews. Int J Soc Res Methodol 2019;22:611–24. 10.1080/13645579.2019.1630901
43. Keeney S, Hasson F, McKenna H. The Delphi technique in nursing and health research. United Kingdom: Wiley-Blackwell, 2011.
44. Hammond AL. Environmental indicators: a systematic approach to measuring and reporting on environmental policy performance in the context of sustainable development, 1995.
45. Astleithner F, Hamedinger A, Holman N, et al. Institutions and indicators – the discourse about indicators in the context of sustainability. J Hous Built Environ 2004;19:7–24. 10.1023/B:JOHO.0000017704.49593.00
46. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press, 2001.
47. Plochg T, Klazinga NS. Community-based integrated care: myth or must? Int J Qual Health Care 2002;14:91–101. 10.1093/oxfordjournals.intqhc.a002606
48. Raleigh VS, Foot C. Getting the measure of quality: opportunities and challenges. London: The King's Fund, 2010.
49. Fink A, Kosecoff J, Chassin M, et al. Consensus methods: characteristics and guidelines for use. Am J Public Health 1984;74:979–83. 10.2105/AJPH.74.9.979
50. Halcomb EJ, Davidson PM. Is verbatim transcription of interview data always necessary? Appl Nurs Res 2006;19:38–42. 10.1016/j.apnr.2005.06.001
51. Ministry of Health. Medical Research Involving Human Subjects Act (Wet medisch-wetenschappelijk onderzoek met mensen, WMO) [Dutch], BWBR0009408. The Hague, Netherlands: Ministry of Health, 1998.
52. King N. Using templates in the thematic analysis of text. In: Essential guide to qualitative methods in organizational research. London: Sage, 2004.
53. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3:77–101.
54. OECD. OECD health care quality indicators project – background. Available: http://www.oecd.org/els/health-systems/oecdhealthcarequalityindicatorsproject-background.htm
55. HealthPros. International training network for healthcare performance intelligence professionals, 2020. Available: https://www.healthpros-h2020.eu/
56. Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Serv Manage Res 2002;15:126–37. 10.1258/0951484021912897
57. Grover V, Chiang RHL, Liang T-P, et al. Creating strategic business value from big data analytics: a research framework. J Manag Inf Syst 2018;35:388–423. 10.1080/07421222.2018.1451951
58. Esquivel A, Meric-Bernstam F, Bernstam EV. Accuracy and self correction of information received from an internet breast cancer list: content analysis. BMJ 2006;332:939–42. 10.1136/bmj.38753.524201.7C
59. Compton-Phillips A, Lee TH. The "give a darn" method for outcomes measurement. NEJM Catalyst, 2018.
60. van den Berg MJ, Kringos DS, Marks LK, et al. The Dutch health care performance report: seven years of health care performance assessment in the Netherlands. Health Res Policy Syst 2014;12:1. 10.1186/1478-4505-12-1
61. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care 2003;41:I30–8. 10.1097/00005650-200301001-00004
62. Arnaboldi M, Lapsley I, Steccolini I. Performance management in the public sector: the ultimate challenge. Financial Account Manag 2015;31:1–22. 10.1111/faam.12049
63. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med 2016;164:435–41. 10.7326/M15-2248
64. Saver BG, Martin SA, Adler RN, et al. Care that matters: quality measurement and health care. PLoS Med 2015;12:e1001902. 10.1371/journal.pmed.1001902
65. Nuti S, Vola F, Bonini A, et al. Making governance work in the health care sector: evidence from a 'natural experiment' in Italy. Health Econ Policy Law 2016;11:17–38. 10.1017/S1744133115000067
66. Hibbard JH, Peters E. Supporting informed consumer health care decisions: data presentation approaches that facilitate the use of information in choice. Annu Rev Public Health 2003;24:413–33. 10.1146/annurev.publhealth.24.100901.141005
67. Marshall MN, Romano PS, Davies HTO. How do we maximize the impact of the public reporting of quality of care? Int J Qual Health Care 2004;16:i57–63. 10.1093/intqhc/mzh013
68. Nuti S, De Rosis S, Bonciani M, et al. Rethinking healthcare performance evaluation systems towards the people-centredness approach: their pathways, their experience, their evaluation. Healthc Pap 2017;17:56–64. 10.12927/hcpap.2017.25408
69. CIHI. How is an indicator developed at CIHI? 2019. Available: https://www.cihi.ca/en/how-is-an-indicator-developed-at-cihi [Accessed 3 Feb 2020].
70. Michel P, Fraticelli L, Parneix P, et al. Assessing the performance of indicators during their life cycle: the mixed QUID method. Int J Qual Health Care 2020;32:12–19. 10.1093/intqhc/mzz090
71. Hibbert PD, Wiles LK, Cameron ID, et al. CareTrack Aged: the appropriateness of care delivered to Australians living in residential aged care facilities: a study protocol. BMJ Open 2019;9:e030988. 10.1136/bmjopen-2019-030988
72. Pronovost PJ, Miller MR, Dorman T, et al. Developing and implementing measures of quality of care in the intensive care unit. Curr Opin Crit Care 2001;7:297–303. 10.1097/00075198-200108000-00014
73. Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Intern Med J 2012;42:569–74. 10.1111/j.1445-5994.2012.02766.x
74. Ballard A. Promoting performance information use through data visualization: evidence from an experiment. Public Perform Manag Rev 2020;43:109–28. 10.1080/15309576.2019.1592763
75. Robicsek A. Six modest proposals for health care measurement. NEJM Catalyst, 2019.
76. Canadian Institute for Health Information. Better information for improved health: a vision for health system use of data in Canada. Ottawa: CIHI, 2013.
77. WHO Regional Office for Europe. Strengthening and adjusting public health measures throughout the COVID-19 transition phases. Copenhagen: WHO Regional Office for Europe, 2020.
78. Kennedy H. Simple data visualisations have become key to communicating about the COVID-19 pandemic, but we know little about their impact. LSE Impact Blog. London: London School of Economics and Political Science, 2020.
79. Fisher D, Teo YY, Nabarro D. Assessing national performance in response to COVID-19. Lancet 2020;396:653–5. 10.1016/S0140-6736(20)31601-9