Healthcare Policy. 2014 Sep;10(SP):36–44.

Hospitals' Internal Accountability

Obligation interne de rendre compte dans les hôpitaux

Nancy Kraetschmer, Janak Jass, Cheryl Woodman, Irene Koo, Seija K. Kromm, Raisa B. Deber
PMCID: PMC4255580  PMID: 25305387

Abstract

This study aimed to enhance understanding of the dimensions of accountability that are captured, and not captured, in acute care hospitals in Ontario, Canada. Based on an Ontario-wide survey and follow-up interviews with three acute care hospitals in the Greater Toronto Area, we found that the two dominant dimensions of hospital accountability being reported are financial and quality performance. These two dimensions drove both internal and external reporting. Hospitals' internal reports typically included performance measures that were required or mandated in external reports. Although respondents saw reporting as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes, they identified multiple challenges with current reporting requirements: 58% of survey respondents indicated that performance-reporting resources were insufficient; manual data capture and performance reporting remained prevalent, with most hospitals lacking sophisticated tools or technology to effectively capture, analyze and report performance data; hospitals tended to focus on processes and outcomes with high measurability; and 53% of respondents indicated that valuable cross-system accountability measures, performance measures or both were not captured by current reporting requirements.


This paper examines internal hospital accountability dimensions, approaches and requirements and considers the impacts of accountability on performance and reporting in hospitals across Ontario. The focus on performance accountability in healthcare organizations continues to increase, with Ontario hospitals using regulated and mandated performance measurement and reporting systems to improve accountability (Health Council of Canada 2012; Smith et al. 2008; Snowdon et al. 2012; Veillard et al. 2010). Published evidence suggests that linking strategy to performance measurement to achieve desired outcomes is critical (Jha et al. 2003; Kaplan and Norton 2004, 2005; Porter and Teisberg 2004).

The dominant dimensions of accountability in Ontario hospitals have traditionally been financial and quality performance. As noted in the introduction to this issue (Deber 2014), accountability means having to be answerable to someone for meeting defined objectives, and can have financial, performance and political/democratic dimensions. The tools used can vary. Accountability may be linked to financial incentives (e.g., pay for performance) that adjust payments to induce hospitals to behave in desired ways. It may be linked to quality performance, as in the requirements of Ontario's Excellent Care for All Act, 2010, such as the Quality Improvement Plan (QIP). Not surprisingly, external reporting accountabilities and funding tied to specific performance measures (e.g., wait times) are prioritized over other indicators that do not have incentives or accountability contracts attached.

Externally, selected measures are required to be reported to a variety of bodies, including the Ontario Ministry of Health and Long-Term Care (MOHLTC), local health integration networks (LHINs), Health Quality Ontario (HQO), the Canadian Institute for Health Information (CIHI) and Cancer Care Ontario (CCO). As noted by Kromm and colleagues (2014), these measures are often not fully aligned.

Internally, hospitals report performance to such groups as senior management, clinical teams, boards of directors and board committees. Hospitals select and monitor outcomes that have high measurability, though this approach may not provide the most valuable performance information or drive accountability through cross-system comparisons. Hospitals measure their performance and use internal data analyses to align decision-making with internal goals and external performance expectations. Having a core set of indicators reported across all hospitals encourages benchmarking and has the potential to drive performance in certain domains (Baker and Pink 1995; Veillard et al. 2005).

For the purpose of understanding internal hospital accountability mechanisms, we conducted a case study of approaches and challenges faced by Ontario hospitals.

Ontario Case Study

Ontario hospitals are private, not-for-profit organizations that receive the vast majority of their funding from the provincial government. In 2006, Ontario, like most other Canadian provinces, regionalized elements of its healthcare system. Ontario created 14 regional LHINs to oversee the planning, funding and management of many of Ontario's healthcare services, but allowed hospitals to retain their independent boards.

Ontario has emphasized creating a culture of accountability and has used legislation as a policy tool towards this goal. The province already had in place an extensive and diverse set of legislative compliance requirements for hospitals intended to drive accountability and performance (including the Public Hospitals Act and the Broader Public Sector Accountability Act). Hospitals are also covered by provisions for responding to adverse events or complaints (e.g., the Drug and Pharmacies Regulation Act, the Accessibility for Ontarians with Disabilities Act and the Occupational Health and Safety Act). The Commitment to the Future of Medicare Act, 2004 mandated the use of accountability agreements between the provincial government and each acute care hospital. After the Local Health System Integration Act, 2006 created the LHINs, these accountability agreements were transferred from the MOHLTC to each LHIN. Hospitals now must sign a hospital service accountability agreement (H-SAA) with their LHIN in order to obtain funding from the MOHLTC. The H-SAA requires hospitals to measure and report on a core set of indicators (see Kromm et al. 2014). In 2010, the Excellent Care for All Act set new standards of accountability for hospitals, outlining a minimum set of core measures with internal and external reporting requirements, a requirement for publicly posted annual QIPs, and a requirement to link executive compensation to achievement of improvements on the quality measures.

In addition, Ontario hospitals participate in a voluntary accreditation process led by Accreditation Canada (www.accreditation.ca), a not-for-profit organization that helps hospitals and other healthcare organizations across Canada drive high-quality care within their organizations (see also Mitchell et al. 2014). Hospitals that participate in Accreditation Canada's programs are evaluated against national standards of excellence, with data collected every three to four years. Accreditation Canada sets accreditation standards related to governance, risk management, leadership, medication management, infection prevention and control, and patient safety. By becoming accredited, hospitals demonstrate to their employees and the public that the institution provides high-quality healthcare.

Methods

All Ontario acute care hospitals (n=116) were mailed the Acute Care Hospital Strategic Priorities Survey 2011 between September and December 2011. The surveys were addressed to each hospital's chief executive officer. In 2011, we also interviewed three senior hospital administrators responsible for hospital performance at three different Ontario teaching hospitals in the Greater Toronto Area. The interviews allowed us to capture these administrators' perceptions of the advantages and disadvantages of current accountability and performance-reporting arrangements.

Results

The overall survey response rate was 45.7%; 71.4% of teaching hospitals responded, compared to 54.4% of large community hospitals and 26.7% of small community hospitals. For interviews, the response rate was 100%. Based on the analyses of the data from the survey and key informant interviews, we focus on seven themes that emerged. (The precise indicators that are referenced reflect practice at the time of the interviews, and may change over time.)

Theme 1: Internal hospital reporting aligns with external reporting requirements

Key informant interviews suggest that acute care hospitals try to align internal reporting requirements with external reporting requirements, particularly those of the MOHLTC and the LHINs, which collectively control hospital funding. Internally, hospitals employ reporting tools such as balanced scorecards, reporting dashboards or both to showcase selected organizational goals and specific related measures. These internal performance reports are routinely monitored by management and other internal stakeholders, such as clinical teams and the board. The internal reporting tools reflect the performance and financial measures outlined in the H-SAA and the annual QIPs, which are thought to drive improvements in quality of care across Ontario's health system.

Theme 2: Organizational foci aligned with external accountabilities

According to interviewees, hospital focus is driven to a certain extent by external reporting accountabilities and funding (e.g., wait times and alternative level of care) rather than by other indicators that do not have incentive funding (e.g., pay for performance) or accountability contracts attached to them. Interviewees indicated that collecting and reporting data are critical to measuring hospital performance and to making internal decisions. The measures most valuable from the perspectives of the organization and senior management include those linked with quality and safety and with efficiency/financial considerations (e.g., cost per weighted volume, total margin, current ratio, wait times, readmission rate, alternative level of care, patient satisfaction and employee satisfaction/engagement).

Theme 3: Performance reporting requirement challenges

Fifty-eight per cent of survey respondents indicated their hospital had insufficient resources dedicated to capturing, analyzing and reporting performance data. As the system moves to increased reporting requirements, this resource constraint may become even more of an issue. The key informant interviewees also noted it was challenging for hospitals to track the total resources used to collect and report on mandatory and voluntary indicators. Unless centralized within a department, these costs are spread across the organization and there is no consistent or effective way to capture them for comparative purposes.

Over 73% of survey respondents said that their hospital did not use an automated monitoring and reporting system (e.g., a business intelligence system) to manage financial and operational performance (e.g., through reporting dashboards and performance scorecards). Being able to capture and report data and provide a timely, integrated view of information at all levels of the organization could and should enhance decision-making. One interviewee recommended that a centralized, accessible reporting system with a single reporting methodology be used across all hospitals to standardize and enhance data quality, collection and efficiency. As well, there may be inefficiencies and confusion when indicator data collected using different methodologies are compared and benchmarked. Some hospitals have recognized the need to capture data in a consistent and centralized manner within their own organizations, and have implemented business intelligence systems that support efficient data collection and reporting, timeliness and internal trending. These systems are expensive, however, and many hospitals have neither the access nor the resources to implement them.

Several quotations from respondents illustrate these themes:

Performance and accountability reporting requirements are increasing rapidly without a corresponding increase in budgets to allow for this.

Smaller hospitals have the same reporting requirement as larger community and teaching hospitals, but given our financial means do not have the same infrastructure/manpower to focus on performance and accountability reporting.

Data extrapolation can be arduous – our systems are not fully integrated – organization size impacts analysis and extrapolation of small IS/IT [information system/information technology].

Funding only permits us to capture and report the data; we don't have staff with the needed time and knowledge to analyze the data.

Theme 4: Data should be used to drive quality improvement

Respondents indicated that data should be used to drive quality improvement. Specifically, they suggested that a key set of indicators that can drive quality improvement should be required to be reported publicly, internally to the governing board or both. One interviewee noted that their institution struggles with “old hospital data” being in the public domain when they know there are more recent/real-time data that demonstrate a different picture and are more meaningful internally.

According to interviewees:

Key is not collecting and reporting on indicators but using the information to drive quality improvement.

Funding structures do not necessarily support a systems quality approach/incentive.

Theme 5: Reporting requirements help drive data collection and reporting

Indicators are only as good as the data quality. Reporting requirements help drive improved data collection, quality and reporting. For example, external indicators that have clear definitions and methodologies for data collection and reporting can drive consistency and permit comparisons across hospitals. One interviewee pointed out that it is important to recognize that sudden improvements in performance with respect to certain indicators may not be a true improvement but just a difference in how the data are collected and reported. Similarly, sudden decreases in performance can be a result of changes to indicator definitions and methodologies.

With the ever-increasing drive towards accountability, it appears that some indicators are susceptible to gaming. Indicators that are difficult to game are pure “counts,” such as the number of hip replacements done by a hospital. However, some indicators that are attached to funding, including computerized tomography (CT) and magnetic resonance imaging (MRI) hours/volumes and wait times for hip/knee joint replacement surgeries, were seen as more susceptible to gaming. According to all interviewees, waiting lists for MRIs are sometimes gamed because increased efficiency leads to decreased financial incentives: incentive funding flows to hospitals with long waiting lists, so a hospital that clears its list efficiently risks losing that funding. Hospitals may therefore allow their waiting lists to grow by shutting down MRI operating hours beyond the base funded volumes or hours. If efficiency stays the same with fewer MRI hours, waiting lists will lengthen, leading to more funding.

Theme 6: Improved coordination with other agents and prioritization of measures

Eighty-five per cent of survey respondents stated that their organization is required to report the same performance measure, often measured slightly differently, to two or more agencies such as the MOHLTC, a LHIN, HQO, CCO and CIHI. For example, at the time of this study, alternative-level-of-care (ALC) data were being reported to different bodies (e.g., the Ontario Hospital Association, the LHINs and the Wait Time Information System) using different methodologies.

Those interviewed suggested that process maps for data collection would be useful in understanding how and where data flowed within the system, to determine whether indicators were being reported to multiple organizations and whether efficiencies in reporting processes could be introduced. All interviewees suggested that similar indicators are often reported differently internally than externally. For example, hospitals internally reported rates of nosocomial infections such as Clostridium difficile (C. diff.) and methicillin-resistant Staphylococcus aureus (MRSA), while the external reporting of bacteraemia included only the number of cases of C. diff. and MRSA. One reason these indicators may be reported differently within hospitals is that, historically, hospitals determined their own internal reporting requirements; when this reporting was translated to the system level, different methodologies were often employed (e.g., number of infections versus infection rates). Interviewees also suggested that these indicators may be reported differently because the hospital's focus needs to be on the “vital few” measures rather than the broader reporting that is currently occurring.

According to respondents:

Measurement and reporting is not well coordinated and handled on an organization-wide basis … needs to be more focused and selective.

Ontario needs to articulate the responsibilities of MOHLTC, CCO, HQO, LHINs in a coherent way … there is excessive structure and no coherent agenda. A consequence is multiple siloed information requests to hospitals.

Theme 7: Lack of system and physician performance accountability measures

Respondents felt that some current indicators did not capture what is important. A particular omission, mentioned by 53% of survey respondents, was that valuable cross-system accountability or performance measures, such as measures of integration across the system, are not captured by current requirements. It was also noted that physician accountability indicators (e.g., conservable hospital-stay days, physician performance) were not reported on a systemwide basis, even though physicians help drive hospital performance. Some hospitals have begun to capture individual physician performance data, but this practice is neither common nor mandatory.

Discussion and Conclusion

In this study, we found that the dominant dimensions of hospital accountability driving both internal and external reporting were financial and quality performance. Hospitals' internal reports usually include the performance measures that are also required in reports to external organizations. Our respondents suggested that internal hospital accountability systems are shaped by external reporting requirements, even when these do not provide the optimal data needed for internal purposes.

Reporting is seen as a valuable mechanism for hospitals and the health system to monitor and track progress against desired outcomes. Within hospitals, many different accountabilities and indicators are tracked, and the volume of required reporting is seen by some as challenging. The study showed that smaller hospitals in particular struggle with reporting because they do not have the necessary resources, whether because of a lack of budget, an inability to retain staff with the required skill sets or internal resource allocation decisions. Indeed, the low survey response rate (26.7%) among smaller hospitals may itself reflect these limited resources, underscoring the challenges of reporting in small and rural hospitals. Given the increased focus on internal and external reporting, it was striking that most hospitals do not have sophisticated reporting tools to capture and report performance data. Manual reporting is still prevalent and may affect data quality.

There is a tendency for hospitals to monitor performance for those processes and outcomes that have high measurability and controllability. In particular, the preference for highly measurable and controllable indicators has resulted in few cross-system measures being reported, despite increased emphasis on improving system integration. There was also a perception among our respondents that organizations report publicly only the information that is required, as there are no incentives or mechanisms to report additional information. With increasing pressures to advance the culture of accountability and quality improvement, hospitals must focus on increasing the quality of their data and improving alignment across the various bodies to which these data must be reported, so that the data can be used to drive quality improvement.

Acknowledgements

This study was funded by a CIHR-PHSI grant (CIHR Grant Number PHE-101967). The authors thank Andrea Thompson for her input into the study.

Contributor Information

Nancy Kraetschmer, Senior Manager, Patient Experience, Cancer Care Ontario, Toronto, ON.

Janak Jass, Vice-President, Operations and Transformation, Bridgepoint Health, Toronto, ON.

Cheryl Woodman, Director, Strategy and Performance, Women's College Hospital, Toronto, ON.

Irene Koo, Quality Lead, Ambulatory Programs, The Hospital for Sick Children, Toronto, ON.

Seija K. Kromm, Postdoctoral Fellow, Health System Performance Research Network, University of Toronto, Toronto, ON.

Raisa B. Deber, Professor, Institute of Health Policy, Management & Evaluation, University of Toronto, Toronto, ON.

References

  1. Baker G.R., Pink G.H. 1995. “A Balanced Scorecard for Canadian Hospitals.” Healthcare Management Forum 8(4): 7–13.
  2. Deber R.B. 2014. “Thinking about Accountability.” Healthcare Policy 10(Special Issue): 12–24.
  3. Health Council of Canada. 2012. Measuring and Reporting on Health System Performance in Canada: Opportunities for Improvement. Retrieved March 23, 2014. <http://www.healthcouncilcanada.ca/rpt_det.php?id=370>.
  4. Jha A.K., Perlin J.B., Kizer K.W., Dudley R.A. 2003. “Effect of the Transformation of the Veterans Affairs Healthcare System on the Quality of Care.” New England Journal of Medicine 348(22): 2218–27. doi:10.1056/NEJMsa021899.
  5. Kaplan R.S., Norton D.P. 2004. Strategy Maps: Converting Intangible Assets into Tangible Outcomes (1st ed.). Boston: Harvard Business School Press.
  6. Kaplan R.S., Norton D.P. 2005. “The Office of Strategy Management.” Harvard Business Review 83(10): 72–80.
  7. Kromm S.K., Baker G.R., Wodchis W.P., Deber R.B. 2014. “Acute Care Hospitals' Accountability to Provincial Funders.” Healthcare Policy 10(Special Issue): 25–35.
  8. Mitchell J.I., Nicklin W., MacDonald B. 2014. “The Accreditation Canada Program: A Complementary Tool to Promote Accountability in Canadian Healthcare.” Healthcare Policy 10(Special Issue): 150–53.
  9. Porter M.E., Teisberg E.O. 2004. “Redefining Competition in Health Care.” Harvard Business Review 82(6): 65–76.
  10. Smith P.C., Mossialos E., Papanicolas I. 2008. Performance Measurement for Health System Improvement: Experiences, Challenges and Prospects. Copenhagen: World Health Organization on behalf of the European Observatory on Health Systems and Policies. Retrieved March 23, 2014. <http://www.euro.who.int/__data/assets/pdf_file/0003/84360/E93697.pdf>.
  11. Snowdon A., Schnarr K., Hussein A., Alessi C. 2012. Measuring What Matters: The Cost vs. Values of Health Care. London, ON: University of Western Ontario. Retrieved March 23, 2014. <http://sites.ivey.ca/healthinnovation/thought-leadership/white-papers/measuring-what-matters-the-cost-vs-values-of-health-care-november-2012/>.
  12. Veillard J., Champagne F., Klazinga N., Kazandjian V., Arah O.A., Guisset A.L. 2005. “A Performance Assessment Framework for Hospitals: The WHO Regional Office for Europe PATH Project.” International Journal for Quality in Health Care 17(6): 487–96. doi:10.1093/intqhc/mzi072.
  13. Veillard J., Huynh T., Ardal S., Kadandale S., Klazinga N.S., Brown A.D. 2010. “Making Health System Performance Measurement Useful to Policy Makers: Aligning Strategies, Measurement and Local Health System Accountability in Ontario.” Healthcare Policy 5(3): 49–65.
