BMJ Open Quality
2021 Jul 9;10(3):e001293. doi: 10.1136/bmjoq-2020-001293

Scoping review of balanced scorecards for use in healthcare settings: development and implementation

Victoria Bohm 1, Diane Lacaille 2,3, Nicole Spencer 1, Claire EH Barber 1,2
PMCID: PMC8273481  PMID: 34244173

Abstract

Objective

Balanced scorecards (BSCs) were developed in the early 1990s in corporate settings as a strategic performance management tool that emphasised measurement from multiple perspectives. Since their introduction, BSCs have been adapted for a variety of industries, including healthcare settings. The aim of this scoping review was to describe the application of BSCs in healthcare.

Methods

Medline, Embase and CINAHL databases were searched using keywords and medical subject headings for ‘balanced scorecard’ and related terms from 1992 to 17 April 2020. Title and abstract screening and full text review were conducted in duplicate by two reviewers. Studies describing the development and/or implementation of a BSC in a healthcare setting were included. Data were abstracted using pilot-tested forms and reviewed for key themes and findings.

Results

8129 records were identified and 841 underwent a full text review. 87 articles were included. Over 26 countries were represented and the majority of BSCs were applied at a local level (54%) in hospital settings (41%). While almost all discussed Kaplan and Norton’s original BSC (97%), only 69% described alignment with a strategic plan. Patients/family members were rarely involved in development teams (3%), which were typically composed of senior healthcare leaders/administrators. Only 21% of BSCs included perspectives using identical formatting to the original BSC description. Lessons learnt during development addressed three main themes: scorecard design, stakeholder engagement and feasibility.

Conclusions

BSC frameworks have been used in various healthcare settings but frequently undergo adaptation from the original description in order to suit a specific healthcare context. Future BSCs should aim to include patients/families to promote patient-centred healthcare systems. Considering the heterogeneity evident in development approaches, methodological guidance in this area is warranted.

Keywords: implementation science, management, quality improvement methodologies, quality improvement

Introduction

Since the early 1990s, balanced scorecards (BSCs) have expanded on traditional approaches to organisational and/or strategic performance measurement by emphasising the utility of a multidomain framework, as opposed to one that is finance-centric.1 Prior to the development of this new approach, an organisation’s financial results were considered the primary indicator of performance and organisational success over time. The BSC concept evolved from a year-long research project led by Kaplan, then at the Harvard Business School, and Norton, CEO of the research institute sponsoring the study.1 The tool began as a novel approach to organisational performance measurement and ultimately evolved into an innovative system for strategic management in a corporate setting.2 It has since been adopted across a wide variety of industries, including those involved in the delivery of healthcare services, both public and private.3

Kaplan and Norton’s approach4 specifies that a BSC be driven by, and aligned with, the organisation’s mission, vision and strategy. This alignment is accomplished by selecting measures, or key performance indicators (KPIs), that fulfil two requirements: first, each KPI must fit into one of the four perspectives, described below; second, the KPIs themselves must measure activities or results that are directly related to steps the organisation is taking to implement its strategy. The result is a powerful system that supports organisations in assessing performance related to their strategy in a holistic and targeted fashion.

The BSC adds multidimensionality to an organisation’s performance measurement framework by allocating measures across four perspectives (sometimes called domains): financial, customer, internal business-process and learning and growth.1 The financial perspective assesses financial performance and is central to the BSC in that all other objectives and measures should be linked, through cause-and-effect, to it; the customer perspective provides information about how the organisation is perceived by its customers; the internal business-process perspective reflects the organisation’s performance on activities most related to achieving financial and customer objectives; and the learning and growth perspective reflects how an organisation is enabling itself to achieve the objectives in the other three perspectives, for example through employee development or process innovation.4

BSCs have been developed and implemented in healthcare since the mid-1990s, shortly after the tool was first published. The tool has been applied to address a variety of challenges, ranging from the imperative to improve the quality and safety of care to guiding the administration of public or private healthcare services and supporting the profitability or competitiveness of healthcare corporations in market systems.3

Because the BSC is a tool developed in a corporate, or ‘business’, setting that has since been adopted across many industries, the present scoping review was undertaken to better understand how BSCs have been applied in healthcare in the nearly 30 years since the tool’s inception. Specifically, we sought to describe the methods used to develop BSCs; whether they are linked to organisational strategic plans; who was involved in their development; the structure of the developed tools in terms of the four BSC domains and the number of KPIs represented; and how the scorecards were implemented.

Methods

The scoping review was developed and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).5 6 A scoping review protocol was developed a priori (available on request). The search strategy was developed in consultation with a medical librarian using all identified keywords and medical subject headings (MeSH) for a ‘balanced scorecard’, including related terms (search strategy shown in online supplemental material). The search was undertaken across three databases, MEDLINE, EMBASE and CINAHL, for articles from 1992 (the year of Kaplan and Norton’s first description of the BSC) to 17 April 2020.

Supplementary data

bmjoq-2020-001293supp001.pdf (305.8KB, pdf)

Title and abstract screening and article full-text review were conducted in duplicate independently by two authors (VB and CB) with any disagreements resolved by discussion. The following inclusion criteria were used: English language studies of any design were included if they described the development and/or implementation of a BSC in a healthcare setting (hospital, clinic, department, disease-specific BSC, health region(s), national health service or programme). Studies were excluded if they were non-English, published in abstract form only, or if they reported only on a ‘scorecard’ or other type of quality framework that the authors did not describe as a BSC. Studies describing BSCs developed to evaluate medical education, hospital laboratory services, research, long-term care facilities/nursing homes or general population-health measures (eg, child health) were excluded.

Data extraction was performed in duplicate by two of three independent reviewers (CB, VB, NS) for 22% (19/87) of the studies, with all studies reviewed by one reviewer (VB) for consistency in abstraction. Any papers where data were challenging to extract were reviewed in duplicate, and all disagreements were discussed and resolved by group consensus. Reviewers used a pretested data extraction form, and definitions for all abstraction terms were finalised prior to independent abstraction. Data extracted addressed the following areas: article identifying information; geographic location; population and setting of study; methods of BSC development; fidelity to the original domain description of the BSC (defined as use of the original perspectives exactly as described by Kaplan and Norton, or with addition or modification); types of performance measures included; data collection and data sources; and mechanism of scorecard display, as well as outcomes of implementation. Selected quotations were identified and abstracted, and an inductive thematic analysis7 was used to identify key themes and subthemes from the ‘lessons learnt’ reported by studies during BSC development and implementation.

Results

A flow diagram of the scoping review results is shown in figure 1. The search identified 8129 records after duplicate removal; 7288 were excluded during title and abstract screening, leaving 841 for full text review. Reasons for exclusion of 755 articles during the full text review are shown in figure 1; 1 additional article was identified during a hand search of references, for a total of 87 included articles. Seven scorecards were each described in two or more papers, while three papers each described between three and five scorecards. In total, 87 individual scorecards were evaluated. Relationships between scorecards and papers are shown in online supplemental table 1.

Figure 1.


Flow diagram of included balanced scorecard studies in healthcare settings. KPI, key performance indicator.

Table 1 summarises the characteristics of included studies (complete references are shown in online supplemental table 1).

Table 1.

Characteristics of included papers on balanced scorecards in healthcare

N (%)
Papers included (n) 87
Balanced Scorecards described (n)* 87
Year of Publication (n=87 papers)
 1995–1999 7 (8)
 2000–2004 21 (24)
 2005–2009 18 (21)
 2010–2014 27 (31)
 2015–2019 13 (15)
 2020 1 (1)
Countries represented (n=87 papers) >26
World regions represented (n=87 papers)
 Africa: Ethiopia, Zambia 2 (2)
 Americas: Brazil, Canada, USA 48 (55)
 Asia: Afghanistan, Bangladesh, China, Indonesia, Iran, Japan, Lebanon, Malaysia, Pakistan, Singapore, Taiwan 20 (23)
 Europe: Austria, Germany, Greece, Italy, Netherlands, Sweden, Switzerland, UK, ‘Europe Region’† 15 (17)
 Oceania: Australia, New Zealand 2 (2)
Geographic scope of scorecard (n=87 scorecards)
 Multinational (2 or more countries) 3 (3)
 National 11 (13)
 Multiple locations/regions within a country 5 (6)
 Regional 21 (24)
 Local (applied within a single hospital/health centre) 47 (54)
Organisation or facility type where scorecard developed/implemented (n=87 scorecards)
 Hospital(s)/health centre(s) 36 (41)
 Inpatient unit/clinical service within a hospital 11 (13)
 Department (academic/specialist group for example, department of anaesthesia or family practice) 10 (11)
 Health Region (publicly administered healthcare within a region of a country, that is, provincial) 8 (9)
 Diagnostic specific healthcare services (defined by a specific patient population) 7 (8)
 Integrated healthcare system (organisation providing comprehensive healthcare services) 6 (7)
 National health system (National/Federal health system) 5 (6)
 Community-based clinics (primary care in a community-based setting) 4 (5)

*Some papers described an initiative that resulted in the development of >1 BSC: 2 resulted in 3 BSCs and 1 resulted in 5 BSCs.

†Scorecard implemented in 15 unnamed countries within Europe.

BSC, balanced scorecard.

While our search strategy began in 1992, no relevant publications were identified between 1992 and 1994. We examined publication date in 5-year increments starting in 1995; the time period with the greatest number of publications was 2010–2014 (n=27, 31%). More than 26 different countries were represented, with over half of studies from the Americas, specifically Canada and the USA. A majority of scorecards were applied at the local level (n=47, 54%) and were most frequently developed for hospital settings, such as acute care hospitals, health centres or inpatient specialty care hospitals (n=36, 41%).

Characteristics of balanced scorecard development

The methods used to develop the scorecard were clearly described in 24 BSCs (28%), minimally so in an additional 46 (53%) and not at all in 17 (19%). Of the included BSCs, almost all discussed and/or cited Kaplan and Norton’s original BSC (n=84, 97%) in relation to their own BSC. Only 60 (69%) described the alignment of their BSC with a strategic plan, while 10 (11%) had no alignment and in 17 (20%), it was unclear.

For over half of the BSCs there was some description of who was involved in their development (n=51, 59%), as shown in table 2. Senior healthcare leaders or administrators (n=37, 43%) were the most common type of role held by those involved followed by managers or management teams (n=24, 28%) and then physicians (n=21, 24%). Patients or family members were included in the development of only 3% (n=3) of BSCs. A variety of other roles and groups were involved in BSC development and these are described further in table 2.

Table 2.

Frequency and type of individuals involved in balanced scorecard (BSC) development and receiving BSC reporting

Types of individuals involved in BSC development or receiving reporting (n=87 BSCs) Involved in BSC development, n (%) Receiving BSC reporting, n (%)
Senior healthcare leaders/administrators 37 (43) 25 (29)
Managers/management teams 24 (28) 27 (31)
Physicians 21 (24) 13 (15)
Researchers (healthcare policy/management or otherwise unspecified) 15 (17) 6 (7)
Physician administrators (chiefs, directors, executives) 14 (16) 4 (5)
Healthcare providers 12 (14) 5 (6)
Nurses 10 (11) 6 (7)
Teams/Committees (ie, Quality Team/Committee/Department) 10 (11) 17 (20)
‘Staff’, ‘personnel’ 9 (10) 6 (7)
Government 9 (10) 8 (9)
Information technology professionals 6 (7) N/A
Non-governmental organisations 4 (5) 2 (2)
Key stakeholders or experts, otherwise unspecified 4 (5) N/A
Nurses in administrative positions 3 (3) 3 (3)
Patients/family 3 (3) 3 (3)
Board 3 (3) 8 (9)
External consultants 2 (2) N/A
Other hospitals or organisations 4 (5) 4 (5)
General public N/A 2 (2)
‘Everyone’ N/A 7 (8)
‘Stakeholders’ N/A 3 (3)
Unclear N/A 4 (5)

BSC, balanced scorecard; N/A, Not Applicable.

Balanced scorecard perspectives

A majority (n=83, 96%) of included BSCs were structured into perspectives, although some articles referred to these as ‘domains’ and appeared to use the terms interchangeably. Only 18 (21%) matched Kaplan and Norton’s BSC exactly, containing only the 4 perspectives labelled using the same terminology; 11 of these (61%) were published between 1998 and 2008. Geographic trends in use of the original perspectives were examined and varied by country. For example, in the Americas, none of the Canadian publications identified used the original description of the perspectives (n=17), while one scorecard from Brazil and eight from the USA did (although seven of these were published before 2003). Seven of the studies identified from Asian countries retained the original scorecard perspectives, two from European countries and none from Oceania or Africa. An additional 18 (21%) BSCs had four perspectives similar to Kaplan and Norton’s original BSC, but one or more perspectives had been adapted to a healthcare context. Fifteen (17%) contained at least four perspectives similar to Kaplan and Norton’s, but ≥1 perspective(s) were added to capture additional concepts. Ten scorecards (11%) bore no resemblance to the original BSC; in these, the perspectives were labelled using language more in keeping with a healthcare quality framework (eg, Institute of Medicine Quality Domains).8 Of the 83 scorecards with perspectives, the median number of perspectives was four (Q1, Q3: 4, 5; range 3–9).

Indicator selection and characteristics

Methods for indicator selection were described for 66 BSCs (76%). For 8 (12%) of these, no further details were provided beyond naming who was responsible for selecting the indicators. A Delphi or modified Delphi approach was used to select the indicators for 8 (12%) of the BSCs. Of all 66 BSCs for which there was some description of the process(es) used to select indicators, 50 (76%) cited one or more of a variety of other approaches. Methods for indicator selection could be categorised into the following types of activities, with many studies employing more than one: prioritisation; consensus; review of published or pre-existing measures; consultation with stakeholders; consideration of feasibility (most often related to data availability for specific indicators); consideration of alignment with organisational strategy; consideration of other constructs (importance, reliability, validity and so on) or consideration of alignment with other included measures (frequencies of approaches shown in table 3).

Table 3.

Frequency of approaches used in indicator selection during balanced scorecard development

Types of approaches used during indicator selection N* (%)
Prioritisation (authors described criteria that were applied to select indicators for inclusion and, typically, there was some discussion of what those criteria were) 23 (35)
Delphi Consensus method 8 (12)
Non-Delphi Consensus method (authors described that the selection of indicators relied on the input of multiple people) 18 (27)
Selection of the indicators involved one or more of the following methods: literature review, consideration of pre-existing surveys, previously published measures/scorecards 16 (24)
Consultation with stakeholders was a part of the process (ie, through surveys, interviews, focus groups) 15 (23)
Selecting the indicators involved consideration of the feasibility of reporting on candidate measures (ie, data availability, ease of data collection and so on) 13 (20)
Selecting the indicators involved consideration of the alignment of candidate measures with the organisational strategy or their operational importance 13 (20)
Selecting the indicators involved consideration of any/all of the following specific constructs: importance, scientific soundness, clinical relevance, alignment with best practice, or validity of the measure 10 (15)
Selecting the indicators involved consideration of the alignment of the measures with the domains of the BSC 4 (6)
It was mentioned that there was a pretest or pilot-test prior to finalising the indicators for inclusion on the BSC 4 (6)
Selecting the indicators involved consideration of the reliability of the indicators 1 (2)

*Multiple methods may be applied in the same BSC development process

BSC, balanced scorecard.

The number of indicators included in the BSC was reported for 63 scorecards (72%). This information was not reported for 14 (16%) BSCs and was unclear in a further 10 (11%). The number of indicators ranged from 4 to 179 (median 21; Q1, Q3: 13.5, 31.5). For 42 (48%) BSCs, all of the indicators represented in the tool were listed or described in the paper such that it was possible to understand what was being measured. In 18 (21%) BSCs, only some of the indicators were described; in 20 (23%), it was unclear if all or only some were represented; and in 7 (8%), none of the indicators were described.

Many BSCs categorised the indicators beyond the BSC perspectives (n=35, 40%), including some which made explicit links to strategic themes and objectives9 or key performance activities,10 while others categorised measures according to Donabedian domains (structure, outcome, process).11 High proportions of the BSCs included at least 1 structure (n=61, 70%), 1 process (n=74, 85%) or 1 outcome indicator (n=81, 93%). Fifty-eight (67%) of BSCs had at least one of each (structure, process and outcome indicator).

For 29 BSCs, performance targets were determined for all indicators (33%), while 17 (20%) described targets for some indicators. Internal targets (based on organisation-specific performance goals) were described in 16 (18%) BSCs, external targets were used in 6 (7%) and a mix of internal and external targets in 4 (5%).

Balanced scorecard data sources and reporting

Information about performance measure data sources was discussed for 54 of the BSCs (62%). While over 12 different data sources were described in total, the 3 most commonly employed were patient feedback through surveys/questionnaires, interviews or feedback cards (n=24, 28%); staff feedback through similar mechanisms (n=18, 21%); and existing IT infrastructure harnessed to access data (n=18, 21%) (table 4).

Table 4.

Data sources used for balanced scorecard development

n (%)
Data sources used (n=87)
No information about data sources* 28 (32)
Patient feedback (survey/questionnaire, interview, feedback cards) 24 (28)
Staff feedback (survey/questionnaire, interview, feedback cards) 18 (21)
Existing IT infrastructure 18 (21)
Accounting/financial data 14 (16)
Chart review 10 (11)
Direct observation of healthcare processes 10 (11)
Other 9 (10)
HR information system 5 (6)
Audit 5 (6)
New IT infrastructure 5 (6)
National data sources 5 (6)
Data from electronic medical records 5 (6)
Reports 3 (3)

*Information about performance measure data sources was discussed for 54 BSCs (62%) and was unclear in an additional 5 (6%)

Sampling strategies for data collection were infrequently discussed. Details were provided for only 15 (17%) of the BSCs and there was significant variability. For example, some used a random sampling of included facilities12 while others stratified sampling by facility13 or by patient characteristic. Alternatively, some reported sampling all staff14 or using all available data.15 Risk adjustment of indicators was infrequently discussed (n=4, 5%).

The frequency with which results were reported was variable with 18 (21%) scorecards reporting quarterly and 9 (10%) annually, while 4 (5%) tailored the frequency of reporting to specific indicators and 2 (2%) did so according to the recipient. Some scorecards reported results only once (n=6, 7%) such as when the tool was used to evaluate programmes or compare between them, while others specified an irregular reporting schedule (n=2, 2%).

The individuals or groups receiving performance measurement results through the BSCs were described for 60 scorecards (69%). The most common BSC recipients included managers/management teams (n=27, 31%) and senior healthcare leaders/administrators (n=25, 29%) (table 2). The general public and patients/families were rarely described as stakeholders who received the results of the BSCs (≤3%) (table 2).

The format used to disseminate results was described for 51% of BSCs (n=44), with dashboards (n=14, 16%) and reports (n=7, 8%) cited most commonly. Results for 5 (6%) of BSCs were communicated via presentation at meetings or rounds, while electronic formats such as the internet or an intranet were used 5% (n=4) of the time. For 3 (3%), hard copies of results were posted physically on facility walls for patients/families and staff to see. Some papers discussed communication strategies further, noting that levels of performance were illustrated via symbols and/or colours (n=11, 13%).

Balanced scorecard implementation

At the time of publication, 62 of 87 (71%) BSCs had been implemented, defined as a clear description of implementation and/or clear reporting of results for one or more included indicators. Twenty (23%) had not been implemented and it was unclear in an additional 5 (6%). Implementation of the scorecard was associated with improved outcomes in one or more domains in 44 (51%) of all studies; 1 (1%) study reported no improvement; 30 (34%) did not discuss results of measurement; and in 4 (5%), it was unclear if any improvements were achieved. An additional 7 (8%) BSCs compared performance between countries/regions or facilities at a single point in time, so no improvement over time could be discerned.

At the time of article publication, 14 (16%) BSCs indicated that reporting was ongoing, 33 (38%) reported an intention for reporting to continue and in 20 (23%), the tool had not been implemented. In 4 (5%) cases the reporting was stopped and for 16 (18%) it was unclear if reporting would continue in the future.

Lessons learned in scorecard development

For a majority (n=70, 80%) of the BSCs included in the review, the paper authors discussed lessons learnt related to the scorecard development initiative. There were three main themes that emerged: (1) scorecard design; (2) stakeholder engagement and (3) feasibility (shown in online supplemental table 2 along with subthemes and selected quotations).

Scorecard design was a major focus for reflection, with four subthemes (indicator selection, adaptation from the original Kaplan and Norton BSC framework, importance of linkage to an organisational strategy and BSC development as an ongoing, iterative process). Indicator selection is a critical part of the process of BSC development. Authors of BSC papers highlighted that it is important to have a limited number of indicators9 16–18 and recommended having predetermined criteria for indicator selection.19 Adaptation from the original Kaplan and Norton framework was described by a number of authors as necessary in order for the scorecard to better suit a healthcare context.10 20–22 While linkage to organisational strategy is integral to BSC development, the importance of making these connections was reiterated.23–25 Finally, many emphasised that the scorecard itself is not a static tool: its initial implementation may bring to light the need for adjustments or redesign as the organisation changes.26–31

Stakeholder engagement was the second main theme with the subthemes of: trust, transparency and buy-in, communication, leadership support, socialisation to a BSC approach and accountability and ownership. Trust, transparency and buy-in were discussed by a number of authors as critical to establish when implementing a BSC to ensure optimal engagement of team members in the initiative.20 23 29 32–34 The tool itself was seen as highly effective as a mechanism of communication between team members/departments.14 23 24 35 Leadership support in BSC efforts was believed to be key to successful efforts,36–39 as was socialising the concept of the BSC to all involved as this was identified as a challenge in some settings.20 40 41 Defining who was accountable for individual metrics or the BSC was also part of effective implementation strategies.24 25 36 Lastly, incentivisation (through monetary and/or non-monetary means) was used to increase engagement in some BSC initiatives.14 23 33 37 42 43

Feasibility was the final theme of ‘lessons learnt’, with subthemes of resource requirements, sustainability and data challenges. Developing a BSC is a major undertaking that requires significant resources, such as dedicated staff and/or budget allocations.13 27 34 40 44 45 Challenges related to data availability, such as accessing the necessary data or the cleanliness of the data that were available, were common; these issues impeded the development and/or implementation of many BSCs and/or hampered sustainability.13 27 28 46

Discussion

Our scoping review highlights the extensive variability in BSC development methods in the healthcare literature. As evidenced by our review, when a BSC is implemented within a healthcare organisation or system, adaptations are often made from the original description by Kaplan and Norton to better suit the organisation or health system’s context and needs. This may involve adapting the perspectives or adding new perspectives not captured in Kaplan and Norton’s framework, among other adjustments. This trend appears to have increased over time, with higher fidelity to the original BSC perspective descriptions in earlier years. Studies included in this review also varied in the extent to which the process of developing the tool was described. Documentation of BSC development is critical, as it is through the development process that measures are linked to strategic performance targets that address a strategic plan; a BSC is not intended to be a list of stand-alone measures.4 This highlights an area in need of future standardisation of approaches and enhanced transparency of reporting.

There have been numerous reviews and commentaries on BSCs in healthcare, including those by Zelman,47 Voelker,2 Tarantino,48 Kocakulah49 and Inamdar,50 among others. In addition, a systematic review of BSC use in Italy, Portugal and Spain51 was published more recently, in 2018. However, to our knowledge, there has been limited systematic evaluation of BSCs at an international level. Our review complements a 2007 systematic review of BSCs by Rabbani et al,52 which focused on the application of the tool in low-income health settings. That review identified 44 articles and similarly described a broad range of healthcare settings in which BSCs had been used. It highlighted a number of main findings, including: (1) frequent adaptations to the traditional four perspectives of the BSC; (2) ‘pitfalls’ in the use of the BSC and (3) perspectives on the use of the BSC in low-income countries. The present review builds on these findings by further quantifying the degree to which the perspectives of the BSC have been adapted over time and by itemising the types of steps employed during development and implementation of BSCs worldwide. While the focus of our review was not to highlight differences in BSCs in resource-limited settings, we did identify a number of additional examples of BSCs implemented since the 2007 review,52 including in Afghanistan,13 Pakistan,53 54 Bangladesh12 and Ethiopia.45 BSCs (or their adaptations) may be feasible tools for quality improvement in resource-limited settings. In Afghanistan,13 the BSC was used to evaluate the entire health system using a stratified random sample and leveraging observation of patient care and interviews. While the authors report the scorecard was highly effective and provided useful information for benchmarking, they note limitations particular to the setting, including a lack of routinely collected data on patient outcomes. They also note that, while the expense of implementing an independent monitoring system was high, it provided additional training benefits, and ongoing monitoring was less expensive. In a hospital setting in Pakistan,53 successful BSC implementation was ascribed to key existing ‘prerequisites’, including a strategic plan, leadership and appropriate information systems, which aligned with the ‘lessons learnt’ presented in our review.

A striking finding of our review, although perhaps not surprising given the origins of BSCs in the business world and from a managerial perspective, was the lack of patient involvement in the development or reporting processes for the healthcare BSCs. There is an increasing focus on patient-centred approaches to healthcare.55 Patient-centred care refers to patient-clinician interactions and encompasses how health systems are designed and function, including alignment of a healthcare system’s mission and values with patient-centred goals.56 57 It follows that quality frameworks, including the BSC, if used in healthcare, should consider including patients in both their development and reporting strategies and not simply limit their representation to a domain that reports on customer experience.

An example of an exception to this general finding of lack of patient involvement in BSC development in our review was a BSC from New Zealand.58 Indicator development in this study involved ‘cultural, family and consumer advisors’ to ensure that ‘no decisions [were] made without genuine input from indigenous peoples (“Mãori/tangata-whenua”), families/carers, and service users’.58 This BSC also captured the percentage of staff with bicultural training, recognising the importance of tailoring health services to meet patients’ and caregivers’ needs in culturally sensitive and specific ways. We recommend that future BSC development efforts in healthcare should consider patient and family engagement throughout the development and reporting process to best support patient and family-centred care.

Measurement feasibility emerged as a major theme of the ‘lessons learnt’ discussed in a number of studies. Our review highlighted the diversity of data collection methods used for scorecards, with most BSCs requiring data from multiple sources in order to report results. Many BSCs relied on leveraging existing data capture systems (ie, clinical, accounting/financial and human resources databases) as well as on implementing new systems and/or processes to collect the necessary data. Paper-based and telephone surveys were surprisingly common methods of data collection and were used most often for gauging patient and staff experience. In some resource-limited settings, data were collected almost exclusively by survey, interview, chart review or direct observation to document clinical processes or outcomes. Of note, the included studies spanned a number of decades during which the methods and technology available for gathering and analysing health data have evolved substantially. Thus, the methods reported in the included studies, and any related lessons learnt or challenges encountered, may reflect the technology available for data analytics, reporting and communication at the time.

Passive electronic collection of data through electronic medical records (EMRs) was used for some BSCs.44 While EMRs can improve the feasibility of measurement strategies, for example by reducing time and labour costs, it was noted that their exclusive use could lead to important concepts being missed, such as patient/family satisfaction and quality of life, which are typically assessed by survey and may not otherwise be captured in EMRs.44

Reporting on the efficacy and results of BSC implementation on health and health system outcomes was not a primary objective of this review. Nonetheless, we identified some studies that reported improvements following BSC implementation for at least one of the included performance measures, as well as strategies/lessons learnt for successful implementation. We did not identify any randomised trials of BSC implementation. Given the significant resource burden involved in BSC development and maintenance, future studies comparing BSC strategies with other quality improvement frameworks may be warranted to ascertain their impact on patient and health system outcomes.

While our scoping review was extensive and provided a great deal of information about BSC development methodology in the healthcare sector, it was not without limitations. Challenges in evaluating the literature were encountered given inconsistent use of terminology. While our search strategy accommodated this by including a large number of search terms, it is possible some BSCs were omitted because they were not clearly defined as such. Conversely, we may have included some scorecards that the authors defined as BSCs but that did not adhere to the BSC development methodology outlined by Kaplan and Norton, although this in itself was an informative finding of our review. It is also possible that BSCs developed for healthcare organisations were omitted if their development methodology was not published in medical journal indexing databases. There is also a lack of standardised methodological approaches for BSC development or evaluation. We itemised key methodological elements of BSC development to increase transparency in our reporting; however, the lack of standard reporting in the literature made it very challenging to compare and contrast development strategies by country or by year of BSC development beyond adherence to Kaplan and Norton’s original description of the ‘perspectives’.

Given our literature review spanned a number of decades, it is possible that some of the scorecards identified are no longer in use or have been modified substantially since the publications we identified. The data abstracted were based on available published information; where that information was incomplete, this could also have affected our results, as we did not contact authors or conduct a grey literature review of websites to obtain additional information.

Conclusion

BSCs are frequently used as a framework for quality improvement in healthcare. However, adaptations from the original BSC description are needed in many healthcare contexts to better address measurement needs. Our review highlights that the flexibility of this framework has led to worldwide application in many different healthcare contexts. Defining features of a BSC framework include linkage of performance metrics to organisational strategy and capture of multiple domains allowing ‘balanced perspectives’ considering consumer, staff, financial and clinical outcomes. These characteristics differentiate the BSC from other approaches to performance measurement that are also applied in healthcare.

In the future, efforts for BSC development in healthcare should consider whether the framework is most appropriate for the healthcare context and report any adaptations made transparently. Composition of healthcare BSC development teams should consider inclusion of patients/families where appropriate and carefully select highly feasible and valid performance measures to address performance goals aligned with their organisational strategy. Future work to develop standardised approaches to BSC development and reporting in healthcare may be warranted to allow a more meaningful comparison of strategies and reflection on results of implementation.

Footnotes

Contributors: VB contributed to conception and design of the work, acquisition, analysis and interpretation of the data and drafting the manuscript. NS contributed to acquisition of data, analysis and interpretation of data and drafting the manuscript. DL contributed to conception of the work, interpretation of data and provided critical intellectual content and assisted in drafting the manuscript. CB contributed to conception and design of the work, acquisition, analysis and interpretation of the data and drafting the manuscript. VB, NS, DL and CB all provided final approval of the version to be published and agree to be accountable for all aspects of the work.

Funding: This study was funded by a Canadian Institutes of Health Research Project Grant PJT 153265. CB is funded by a Stars Career Development Award (SI2-169745) from the Canadian Institutes of Health Research Institute of Musculoskeletal Health and Arthritis and The Arthritis Society. DL holds the Mary Pack Chair in Rheumatology Research from the University of British Columbia and The Arthritis Society.

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

Ethics statements

Patient consent for publication

Not required.

References

  1. Kaplan RS, Norton DP. The balanced scorecard--measures that drive performance. Harv Bus Rev 1992;70:71–9.
  2. Voelker KE, Rakich JS, French GR. The balanced scorecard in healthcare organizations: a performance measurement and strategic planning methodology. Hosp Top 2001;79:13–24. doi:10.1080/00185860109597908
  3. Zelman WN, Pink GH, Matthias CB. Use of the balanced scorecard in health care. J Health Care Finance 2003;29:1–16.
  4. Kaplan RS, Norton DP. Translating strategy into action: the balanced scorecard. Boston, Massachusetts: Harvard Business School Press, 1996.
  5. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169:467–73. doi:10.7326/M18-0850
  6. Peters MDJ, Godfrey CM, McInerney P. Chapter 11: Scoping reviews (2020 version). In: Aromataris E, Munn Z, eds. Joanna Briggs Institute Reviewer's Manual. JBI, 2020.
  7. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3:77–101. doi:10.1191/1478088706qp063oa
  8. Institute of Medicine (IOM). Crossing the quality chasm: a new health system for the 21st century. Washington, DC, 2001. Available: https://www.ahrq.gov/talkingquality/measures/six-domains.html
  9. Samaranayake P, Dadich A, Fitzgerald A, et al. Developing an evaluation framework for clinical redesign programs: lessons learnt. J Health Organ Manag 2016;30:950–70. doi:10.1108/JHOM-07-2015-0109
  10. Catuogno S, Arena C, Saggese S, et al. Balanced performance measurement in research hospitals: the participative case study of a haematology department. BMC Health Serv Res 2017;17:522. doi:10.1186/s12913-017-2479-6
  11. Breton M, Smithman MA, Brousselle A, et al. Assessing the performance of centralized waiting lists for patients without a regular family physician using clinical-administrative data. BMC Fam Pract 2017;18:1. doi:10.1186/s12875-016-0573-1
  12. Khan MM, Hotchkiss DR, Dmytraczenko T, et al. Use of a balanced scorecard in strengthening health systems in developing countries: an analysis based on nationally representative Bangladesh health facility survey. Int J Health Plann Manage 2013;28:202–15. doi:10.1002/hpm.2136
  13. Peters DH, Noor AA, Singh LP, et al. A balanced scorecard for health services in Afghanistan. Bull World Health Organ 2007;85:146–51. doi:10.2471/BLT.06.033746
  14. Werle J, Dobbelsteyn L, Feasel AL, et al. A study of the effectiveness of performance-focused methodology for improved outcomes in Alberta public healthcare. Healthc Manage Forum 2010;23:169–74. doi:10.1016/j.hcmf.2010.08.007
  15. Griffith JR, Alexander JA, Jelinek RC. Measuring comparative hospital performance. J Healthc Manag 2002;47:41–57. doi:10.1097/00115514-200201000-00009
  16. Chan GJ, Parco KB, Sihombing ME. Improving health services to displaced persons in Aceh, Indonesia: a balanced scorecard. Bull World Health Organ 2010;88:709–12. [Erratum in: Bull World Health Organ 2010;88:796]
  17. Gao H, Chen H, Feng J, et al. Balanced scorecard-based performance evaluation of Chinese county hospitals in underdeveloped areas. J Int Med Res 2018;46:1947–62. doi:10.1177/0300060518757606
  18. Totten MK. Using a scorecard for strategic results. Trustee 2013;66:15–18.
  19. Harber BW. The balanced scorecard solution at Peel Memorial Hospital. Hosp Q 1998;1:59–63. doi:10.12927/hcq.6744
  20. Bouland DL, Fink E, Fontanesi J. Introduction of the balanced scorecard into an academic department of medicine: creating a road map to success. J Med Pract Manage 2011;26:331–5.
  21. Chang L-C, Lin SW, Northcott DN. The NHS Performance Assessment Framework: a "balanced scorecard" approach? J Manag Med 2002;16:345–58. doi:10.1108/02689230210446526
  22. Haworth J. Measuring performance. Nurs Manage 2008;15:22–8. doi:10.7748/nm2008.06.15.3.22.c8212
  23. Rimar S. Strategic planning and the balanced scorecard for faculty practice plans. Acad Med 2000;75:1186–8. doi:10.1097/00001888-200012000-00011
  24. Tsasis P, Harber B. Using the balanced scorecard to mobilize human resources in organizational transformation. Health Serv Manage Res 2008;21:71–80. doi:10.1258/hsmr.2007.007008
  25. Veillard J, Huynh T, Ardal S, et al. Making health system performance measurement useful to policy makers: aligning strategies, measurement and local health system accountability in Ontario. Healthc Policy 2010;5:49–65. doi:10.12927/hcpol.2013.21639
  26. Crabtree CW. Balanced nursing report card. Comput Inform Nurs 2011;29:613–8. doi:10.1097/NCN.0b013e31823ba392
  27. Curtright JW, Stolp-Smith SC, Edell ES. Strategic performance management: development of a performance measurement system at the Mayo Clinic. J Healthc Manag 2000;45:58–68. doi:10.1097/00115514-200001000-00014
  28. Devitt R, Klassen W, Martalog J. Strategic management system in a healthcare setting--moving from strategy to results. Healthc Q 2005;8:58–65. doi:10.12927/hcq.2013.17693
  29. Hwa M, Sharpe BA, Wachter RM. Development and implementation of a balanced scorecard in an academic hospitalist group. J Hosp Med 2013;8:148–53. doi:10.1002/jhm.2006
  30. Stopper A, Raddatz A, Grassmann A, et al. Delivering quality of care while managing the interests of all stakeholders. Blood Purif 2011;32:323–30. doi:10.1159/000333829
  31. Sugarman PA, Watkins J. Balancing the scorecard: key performance indicators in a complex healthcare setting. Clinician in Management 2004;12:129–32.
  32. Donnelly LF, Gessner KE, Dickerson JM, et al. Quality initiatives: department scorecard: a tool to help drive imaging care delivery performance. Radiographics 2010;30:2029–38. doi:10.1148/rg.307105017
  33. El-Jardali F, Saleh S, Ataya N, et al. Design, implementation and scaling up of the balanced scorecard for hospitals in Lebanon: policy coherence and application lessons for low and middle income countries. Health Policy 2011;103:305–14. doi:10.1016/j.healthpol.2011.05.006
  34. Fields SA, Cohen D. Performance enhancement using a balanced scorecard in a patient-centered medical home. Fam Med 2011;43:735–9.
  35. Verzola A, Bentivegna R, Carandina G, et al. Multidimensional evaluation of performance: experimental application of the balanced scorecard in Ferrara university hospital. Cost Eff Resour Alloc 2009;7:15. doi:10.1186/1478-7547-7-15
  36. Heenan M, DiEmanuele M, Hayward-Murray K, et al. Hospital on a page: standardizing data presentation to drive quality improvement. Healthc Q 2012;15:41–5. doi:10.12927/hcq.2012.22767
  37. Heenan M, Higgins D. Engaging physician leaders in performance measurement and quality. Healthc Q 2009;12:66–9. doi:10.12927/hcq.2009.20663
  38. Lorden A, Coustasse A, Singh KP. The balanced scorecard framework-a case study of patient and employee satisfaction: what happens when it does not work as planned? Health Care Manage Rev 2008;33:145–55. doi:10.1097/01.HMR.0000304503.27803.aa
  39. McLean SR, Mahaffey SM. Implementing a surgical balanced scorecard. Surgical Services Management 2000;6:43–7.
  40. Koumpouros Y. Balanced scorecard: application in the general Panarcadian hospital of Tripolis, Greece. Int J Health Care Qual Assur 2013;26:286–307. doi:10.1108/09526861311319546
  41. Radnor Z, Lovell B. Success factors for implementation of the balanced scorecard in a NHS multi-agency setting. Int J Health Care Qual Assur Inc Leadersh Health Serv 2003;16:99–108. doi:10.1108/09536860310465618
  42. Meliones J. Saving money, saving lives. Harv Bus Rev 2000;78:57–62.
  43. Smith C, Christiansen T, Dick D, et al. Performance management tools motivate change at the frontlines. Healthc Manage Forum 2014;27:15–19. doi:10.1016/j.hcmf.2013.12.003
  44. Kittelson S, Pierce R, Youngwerth J. Palliative care scorecard. J Palliat Med 2017;20:517–27. doi:10.1089/jpm.2016.0292
  45. Teklehaimanot HD, Abdella M, Teklehaimanot A, Tedella AA, et al. Use of balanced scorecard methodology for performance measurement of the health extension program in Ethiopia. Am J Trop Med Hyg 2016;94:1157–69. doi:10.4269/ajtmh.15-0192
  46. Chen X-yun, Yamauchi K, Kato K, et al. Using the balanced scorecard to measure Chinese and Japanese hospital performance. Int J Health Care Qual Assur Inc Leadersh Health Serv 2006;19:339–50. doi:10.1108/09526860610671391
  47. Zelman WN, Blazer D, Gower JM, et al. Issues for academic health centers to consider before implementing a balanced-scorecard effort. Acad Med 1999;74:1269–77. doi:10.1097/00001888-199912000-00006
  48. Tarantino DP. Using the balanced scorecard as a performance management tool. Physician Exec 2003;29:69–72.
  49. Kocakülâh MC, Austill AD. Balanced scorecard application in the health care industry: a case study. J Health Care Finance 2007;34:72–99.
  50. Inamdar N, Kaplan RS, Bower M. Applying the balanced scorecard in healthcare provider organizations. J Healthc Manag 2002;47:179–95. doi:10.1097/00115514-200205000-00008
  51. Gonzalez-Sanchez MB, Broccardo L, Martins Pires AM. The use and design of the BSC in the health care sector: a systematic literature review for Italy, Spain, and Portugal. Int J Health Plann Manage 2018;33:6–30. doi:10.1002/hpm.2415
  52. Rabbani F, Jafri SMW, Abbas F, et al. Reviewing the application of the balanced scorecard with implications for low-income health settings. J Healthc Qual 2007;29:21–34. doi:10.1111/j.1945-1474.2007.tb00210.x
  53. Rabbani F, Jafri SMW, Abbas F, et al. Designing a balanced scorecard for a tertiary care hospital in Pakistan: a modified Delphi group exercise. Int J Health Plann Manage 2010;25:74–90. doi:10.1002/hpm.1004
  54. Rabbani F, Pradhan NA, Zaidi S, et al. Service quality in contracted facilities. Int J Health Care Qual Assur 2015;28:520–31. doi:10.1108/IJHCQA-05-2014-0066
  55. McMillan SS, Kendall E, Sav A, et al. Patient-centered approaches to health care: a systematic review of randomized controlled trials. Med Care Res Rev 2013;70:567–96. doi:10.1177/1077558713496318
  56. What is patient-centered care? NEJM Catalyst Innovations in Healthcare, 2017. Available: https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559 [Accessed 3 July 2020].
  57. Epstein RM, Street RL. The values and value of patient-centered care. Ann Fam Med 2011;9:100–3. doi:10.1370/afm.1239
  58. Coop CF. Balancing the balanced scorecard for a New Zealand mental health service. Aust Health Rev 2006;30:174–80. doi:10.1071/AH060174
