PLOS One. 2020 Aug 19;15(8):e0236480. doi: 10.1371/journal.pone.0236480

Effects of quality-based procedure hospital funding reform in Ontario, Canada: An interrupted time series study

Alvin Ho-ting Li 1,2,*, Karen S Palmer 3,4, Monica Taljaard 1,5, J Michael Paterson 2,6, Adalsteinn Brown 2,6,7,8, Anjie Huang 2, Husayn Marani 3,6, Lauren Lapointe-Shaw 9, Daniel Pincus 2,6,10, Marian S Wettstein 2,6,11, Girish S Kulkarni 2,11,12, David Wasserstein 10,13, Noah Ivers 2,3,14
Editor: Hans-Peter Brunner-La Rocca
PMCID: PMC7437861  PMID: 32813687

Abstract

Background

The Government of Ontario, Canada, announced hospital funding reforms in 2011, including Quality-based Procedures (QBPs) involving pre-set funds for managing patients with specific diagnoses/procedures. A key goal was to improve quality of care across the jurisdiction.

Methods

We used interrupted time series analyses to evaluate the policy change, focusing on four QBPs (congestive heart failure, hip fracture surgery, pneumonia, prostate cancer surgery), in patients hospitalized between 2010 and 2017. Outcomes included return to hospital or death within 30 days, acute length of stay (LOS), volume of admissions, and patient characteristics.

Results

At 2 years post-QBPs, the percentage of hip fracture patients who returned to hospital or died was 3.13% higher in absolute terms (95% CI: 0.37% to 5.89%) than if QBPs had not been introduced. There were no other statistically significant changes for return to hospital or death. For LOS, the only statistically significant change was an increase for prostate cancer surgery of 0.33 days (95% CI: 0.07 to 0.59). Volume increased for congestive heart failure admissions by 80 patients (95% CI: 2 to 159) and decreased for hip fracture surgery by 138 patients (95% CI: -183 to -93) but did not change for pneumonia or prostate cancer surgery. The percentage of patients who lived in the lowest neighborhood income quintile increased slightly for those diagnosed with congestive heart failure (1.89%; 95% CI: 0.51% to 3.27%) and decreased for those who underwent prostate cancer surgery (-2.08%; 95% CI: -3.74% to -0.43%).

Interpretation

This policy initiative involving a change to hospital funding for certain conditions was not associated with substantial, jurisdictional-level changes in access or quality.

Introduction

Policymakers worldwide are experimenting with hospital funding models to improve system performance [1–3]. Although such reforms may contribute to improvements in resource allocation and patient outcomes, they may also invoke unintended consequences [4–6].

In April 2011, the Government of Ontario, Canada, announced a multi-year phased-in implementation of “patient-based” hospital funding [7]. These hospital funding reforms reduced reliance on global hospital budgets (i.e., fixed annual amount based largely on historical spending) by introducing two new components to hospital funding: Health Based Allocation Model (HBAM), organizational-level funding based on service and patient characteristics; and Quality-Based Procedures (QBPs), a novel approach to hospital funding sharing some characteristics with activity-based funding (ABF) [8]. QBPs consist of pre-set reimbursement rates for managing patients with specific diagnoses or those undergoing specific procedures, coupled with best-practice clinical handbooks for each QBP [7]. Between April 2012 and April 2016, 19 priority QBPs were implemented for a range of medical and surgical conditions [9].

The stated goal of QBPs was to “facilitate adoption of best clinical evidence-informed practices” and appropriately reduce “variation in costs and practice across the province while improving outcomes” [10]. The provincial government’s rationale and assumed mechanism of action for QBPs was as follows: "QBPs are specific clusters of patient services that offer opportunities for health care providers to share best practices and will allow the system to provide even better quality care, while increasing system efficiencies. By promoting the adoption of clinical evidence-informed practices, clinical practice variation should be reduced across the province while improving patient outcomes to ensure that patients receive the right care, in the right place, at the right time" [11].

Like all “patient focused” activity-based funding systems, QBPs established a prospective payment rate based on service type and volume. Funding was carved out of hospitals’ global budgets and then reallocated to hospitals at the start of the relevant fiscal year as a fixed fee and fixed volume for each QBP procedure or diagnosis. The fixed volume of QBP-funded cases per hospital was based on historical volume levels at each hospital. The fixed fee was adjusted for each hospital based on its unique case-mix index (CMI) to account for the complexity of its overall patient population.
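To make these allocation mechanics concrete, here is a minimal illustrative sketch in R; the base rate, volumes, and CMI values are invented for illustration and are not actual Ontario figures.

```r
# Illustrative QBP allocation: a fixed fee times a fixed volume, with the
# fee scaled by each hospital's case-mix index (CMI). All values hypothetical.
base_rate <- 7000                                      # $ per QBP case
volumes   <- c(hospital_A = 120, hospital_B = 300)     # historical case volumes
cmi       <- c(hospital_A = 0.95, hospital_B = 1.10)   # complexity adjustment

qbp_allocation <- base_rate * cmi * volumes            # $ carved out per hospital
qbp_allocation
#> hospital_A hospital_B
#>     798000    2310000
```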

However, QBPs differ from most ABF reforms in that funding applies only to a very limited set of diagnoses and procedures, and they rely on the use of handbooks to encourage incorporation of best practices [12–15]. To create these handbooks for each QBP, the Ministry of Health and Long-Term Care, in collaboration with partners such as Health Quality Ontario, Cancer Care Ontario, and the Cardiac Care Network, established expert advisory panels with leading clinicians, scientists, and patients. They defined episodes of care for selected diagnoses or procedures, developed best practice recommendations for patient care, and suggested indicators to monitor for ongoing quality improvement. The resulting QBP Clinical Handbooks serve as a compendium of evidence and clinical consensus [11].

There is no mechanism in place to enforce adherence to the clinical pathways in the handbooks or to measure adherence; hospitals are paid via QBPs whether they follow the pathways or not, but the intent was that following the pathways would enable hospitals to deliver care for the amount paid by QBPs [7, 16].

To date, there has been no peer-reviewed evaluation of the overall effects of QBPs on key indicators of patient care. We took advantage of Ontario’s data infrastructure to evaluate this new hospital payment model, focusing on system-level changes in measures of quality of care, access to care, and hospital coding behaviour for four QBPs selected a priori by our research team, spanning planned and unplanned surgical procedures as well as medical diagnoses: (1) congestive heart failure, (2) hip fracture, (3) pneumonia, and (4) prostate cancer surgery.

Methods

Setting, context, and design

Hospital-based care in Ontario, Canada is publicly funded. Ontario’s 141 publicly funded hospital corporations comprise 262 hospital sites [17], of which a majority receive QBP funding. Small hospitals (n = 55; typically fewer than 2,700 inpatient or day surgery cases per year in two of the last three years) and specialty hospitals—such as those for mental health, children, chronic care, and rehabilitation—primarily receive funding through global budgets and have implemented only select QBPs (e.g., tonsillectomy) depending on their specific patient population (e.g., children). These hospitals are excluded from our analyses because they perform very few, if any, of the diagnoses and/or procedures performed by QBP-funded hospitals [7].

Using population-based interrupted time series (ITS) analyses, and based on a dated, pre-specified protocol and dataset creation plans held at ICES, we evaluated patients admitted to Ontario hospitals for four pre-specified QBPs. We selected these QBPs with input from health system decision makers and hospital leaders to represent a range of acute versus elective and surgical versus medical issues. This was further informed by our prior qualitative work, which identified sources of potential variation in the extent to which, and the ways in which, hospitals responded to QBPs [7, 16]. We chose a priori to incorporate a three-month transition period to allow time for any clinical changes in response to the funding model change to be implemented. We used an ITS design, a robust quasi-experimental design that can be used to evaluate policy changes at the whole-system and population level when randomization is infeasible [18–22]. The study interval depended on data availability and varied by QBP: congestive heart failure (April 2010–February 2017), launched April 2013; hip fracture (April 2012–February 2017), launched April 2014; pneumonia (April 2012–February 2017), launched April 2014; and prostate cancer surgery (April 2010–February 2017), launched April 2015. We chose these intervals to ensure at least 24 monthly data points pre-policy.
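For reference, the segmented regression specification underlying this kind of ITS analysis (consistent with the intercept- and slope-change terms described under Statistical analysis below) can be written as:

$$Y_t = \beta_0 + \beta_1 t + \beta_2 X_t + \beta_3 (t - t_0) X_t + \varepsilon_t,$$

where $Y_t$ is the monthly outcome summary, $t$ indexes months, $X_t$ indicates post-policy months (after the three-month transition), $t_0$ is the first post-policy month, $\beta_0$ and $\beta_1$ capture the pre-policy level and trend, $\beta_2$ the level change, $\beta_3$ the trend change, and $\varepsilon_t$ is an error term that may be serially correlated.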

Ethics approval

The use of data in this project was authorized under section 45 of Ontario’s Personal Health Information Protection Act, which does not require review by a research ethics board.

Study patients

We separately identified patients for each QBP cohort using inclusion and exclusion criteria detailed in each clinical handbook [1215]. In short, cohorts for congestive heart failure and pneumonia were defined using specific qualifying hospital discharge diagnoses; hip fracture, using a combination of discharge diagnoses and procedures; and prostate cancer surgery, using specific procedure codes. We considered only admissions to hospitals that received funding for one of the QBPs under evaluation. Episodes of care had to be separated by at least 30 days (to exclude 30-day hospital readmissions). We excluded patients without a valid Ontario Health Insurance Plan (OHIP) number who could not be accurately followed in our data sets, and patients with missing demographic information (<0.1%).
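As a hedged illustration of the episode-separation rule (at least 30 days between qualifying admissions), a minimal sketch in R; the admissions table and its column names are hypothetical stand-ins, and the actual cohort definitions follow the QBP clinical handbooks.

```r
library(dplyr)

# Greedy rule: keep the first qualifying admission per patient, then keep a
# subsequent admission only if it begins 30 or more days after the most
# recently kept admission (excluding 30-day readmissions from the cohort).
keep_episodes <- function(admit_dates) {
  keep <- logical(length(admit_dates))
  last_kept <- as.Date(NA)
  for (i in seq_along(admit_dates)) {
    if (is.na(last_kept) || as.numeric(admit_dates[i] - last_kept) >= 30) {
      keep[i] <- TRUE
      last_kept <- admit_dates[i]
    }
  }
  keep
}

episodes <- admissions %>%            # admissions: patient_id, admit_date, ...
  arrange(patient_id, admit_date) %>%
  group_by(patient_id) %>%
  filter(keep_episodes(admit_date)) %>%
  ungroup()
```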

Data sources and quality measures

We used multiple linked health administrative databases to describe study patients and ascertain outcomes. Patient demographic information and vital status were obtained from the Registered Persons Database. Hospital diagnoses and procedures were obtained from the Canadian Institute for Health Information (CIHI) Discharge Abstract Database (CIHI-DAD). Emergency department admissions were captured using CIHI’s National Ambulatory Care Reporting System (NACRS).

We described patients according to age at hospital admission, sex, neighbourhood income quintile, rurality of residence, Deyo-Charlson Comorbidity Index [23], and number of emergency department visits and hospitalization days in the year preceding qualifying admission.

Outcomes were assessed for each QBP in three domains:

  1. Quality of care: i) death or return to hospital (i.e., unplanned presentation to emergency department or hospital admission within 30 days, among patients discharged alive and not transferred); ii) mean acute hospital length of stay (LOS); and iii) mean total LOS for entire episode of care including transfers;

  2. Access to care: i) total volume of admissions; ii) proportion of patients aged 65 years or older; and iii) proportion of patients living in lowest neighborhood income quintile;

  3. Coding behaviour: hospital discharge coding behaviour as assessed by mean HBAM Inpatient Group (HIG) resource intensity weight. The HIG is the Ontario-specific acute inpatient grouping methodology used to account for patients’ clinical and resource-utilization characteristics [24].

We selected these measures because policymakers hoped that QBPs would reduce length of stay in settings where it was longer than optimal, without decreasing quality (i.e., outcomes such as deaths, return to hospital, or inappropriate coding). The expectation was also that shorter lengths of stay, as typically seen in other countries implementing ABF-like reforms, would facilitate greater throughput to increase total patient volume across the system, and that access to care would not be compromised by inequity across age and income [8, 25]. Socioeconomic status (SES) may contribute to inequalities in access to care, so we used neighborhood income quintile as an indicator of SES [26–28]. Prior research has shown that financial incentives associated with hospital funding reforms, such as QBPs, may alter coding behaviour to maximize reimbursement [29–32]. HIG weight is a measure of coding behaviour because it incorporates both case mix and the resource intensity of each patient care episode, adjusted for patient characteristics. If upcoding were occurring, we would expect to see changes in HIG weight. Thus, to the extent that changes in HIG weight do not represent true abrupt changes in patient case mix, the HIG weight is one potential measure by which to evaluate effects on coding.
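A hedged sketch of how the primary quality outcome (defined above) could be derived; the discharges table and column names below are hypothetical stand-ins for the linked CIHI-DAD/NACRS records described earlier.

```r
library(dplyr)

# One row per index episode: discharge_date, death_date (NA if alive), and
# next_unplanned_date (earliest unplanned ED visit or readmission, NA if none).
# Outcome: death or unplanned return to hospital within 30 days of discharge,
# among patients discharged alive and not transferred.
monthly_outcome <- discharges %>%
  filter(discharged_alive, !transferred) %>%
  mutate(return_or_death_30d =
           (!is.na(death_date) & death_date <= discharge_date + 30) |
           (!is.na(next_unplanned_date) &
              next_unplanned_date <= discharge_date + 30)) %>%
  group_by(month = format(discharge_date, "%Y-%m")) %>%   # aggregate by month
  summarise(pct = 100 * mean(return_or_death_30d))
```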

Statistical analysis

For each outcome, we calculated monthly summaries aggregated across hospitals (percent, mean, or raw count) and plotted them over time. For each QBP, we excluded three months of data following the start of the funding change to account for a policy “transition” period [33]. We chose a three-month transition a priori, postulating that it would take a fiscal quarter for any policy effects to occur. We accounted for seasonality by decomposing the data into trend, seasonal, and random components, and then removing the seasonal component [34].
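A minimal sketch of this seasonal-adjustment step, assuming y is the vector of monthly summaries for one outcome (classical decomposition into trend, seasonal, and random components, as in [34]):

```r
# Build a monthly time series; the start date varies by QBP cohort.
y_ts <- ts(y, frequency = 12, start = c(2010, 4))

# Classical additive decomposition into trend + seasonal + random,
# then subtract the seasonal component before segmented regression.
dec        <- decompose(y_ts)
y_adjusted <- y_ts - dec$seasonal
```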

We used segmented linear regression analysis of the seasonally-adjusted data. We used the forecast library in the R statistical software package to fit the model, with an automated stepwise selection procedure based on the Akaike Information Criterion (AIC) to include autoregressive terms accounting for the serial correlation [35, 36]. We used visual inspection of the observed and fitted data, as well as residual plots, to verify goodness of fit. Our model included fixed terms for the pre-policy intercept and slope, the intercept change at the time of the policy (immediate difference in level following implementation of QBPs, accounting for the postulated three-month transition period), and the post-policy trend change (difference in slope following implementation of QBPs). For the main results, we expressed the effect of QBPs on each outcome as the counterfactual difference after 2 years, that is, the difference between the observed rate and the rate that would have occurred had QBPs not been implemented. This was estimated as the difference at 2 years post-implementation between the fitted post-implementation rates and the projected rates estimated from the pre-intervention intercept and slope. All analyses were performed at ICES (www.ices.on.ca) using linked, coded data. We used SAS v. 9.3 to prepare the monthly time series data for each outcome measure, and R version 3.4.4 to perform the regression analyses (nlme, car) and to compute and plot the 95% confidence intervals for the counterfactual [37].
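A simplified sketch of the segmented regression with AIC-selected autoregressive terms and the 2-year counterfactual difference, assuming y_adjusted from the previous sketch; the transition-month exclusion and the confidence-interval computation of Zhang et al. [37] are omitted for brevity, and t0 is an illustrative value, not one of the actual policy dates.

```r
library(forecast)

n    <- length(y_adjusted)
t0   <- 37                                # first post-transition month (illustrative)
time <- seq_len(n)                        # pre-policy slope term
lvl  <- as.numeric(time >= t0)            # intercept change at policy
trnd <- pmax(0, time - t0 + 1)            # post-policy trend change

X   <- cbind(time = time, level = lvl, trend = trnd)
fit <- auto.arima(y_adjusted, xreg = X, seasonal = FALSE,
                  allowdrift = FALSE, ic = "aic")  # AR terms selected by AIC

# Counterfactual difference 24 months post-implementation: the fitted value
# minus the projected pre-policy line reduces to beta_level + beta_trend * 24.
b       <- coef(fit)
diff_24 <- unname(b["level"] + b["trend"] * 24)
```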

Results

Patient characteristics

S1–S4 Tables describe the overall characteristics of each cohort. The patient characteristics remained largely unchanged throughout the study period.

Results from segmented regression analysis

The counterfactual estimates from the segmented regression analyses, representing the effect of QBPs on outcomes at 2 years post-implementation, are presented in Table 1. Figs 1–3 and S1–S4 Figs present the observed data and fitted values from the segmented regression analyses. The full results from the segmented regression analyses are presented in S5 and S6 Tables.

Table 1. Estimated effect of implementation of QBPs on outcomes after 2 years, calculated as counterfactual difference from the segmented regression analysis (absolute difference, 95% confidence intervals).

Outcome | Hip Fracture | Congestive Heart Failure | Pneumonia | Prostate Cancer Surgery
Percentage readmitted to hospital or died within 30 days | 3.13% (0.37 to 5.89) | 0.72% (-0.84 to 2.29) | 1.97% (-0.34 to 4.28) | 1.28% (-3.64 to 6.19)
Mean acute length of stay, days | 0.31 (-0.69 to 1.3) | -0.16 (-0.61 to 0.3) | -0.18 (-0.61 to 0.25) | 0.33 (0.07 to 0.59)
Mean total length of stay, days | 0.71 (-1.31 to 2.72) | 0.16 (-0.46 to 0.78) | -0.23 (-0.62 to 0.16) | 0.34 (0.06 to 0.61)
Percentage of patients aged 65 years and older | 0.6% (-1.42 to 2.62) | -0.15% (-1.33 to 1.03) | 4.81% (-4.69 to 14.3) | 2.91% (-2.86 to 8.67)
Percentage of patients living in the lowest neighborhood income quintile | -1.65% (-4.31 to 1.01) | 1.89% (0.51 to 3.27) | 0.85% (-1.89 to 3.59) | -2.08% (-3.74 to -0.43)
Volume, n | -138 (-183 to -93) | 80 (2 to 159) | 258 (-90 to 607) | 19.7 (-42 to 81)
Mean HIG weight | 0.1% (-0.13 to 0.33) | 0.13% (-0.28 to 0.54) | -0.07% (-0.24 to 0.11) | 0.01% (-0.05 to 0.06)

Abbreviations: HIG, Health Based Allocation Model (HBAM) Inpatient Group.

Fig 1. Percent of patients returned to hospital or died.

Fig 1

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

Fig 3. Total volume of admissions.

Fig 3

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

Quality of care

At 2 years post-implementation, the estimated percentage of hip fracture patients who returned to hospital or died within 30 days was higher by an absolute 3.13% (95% CI: 0.37% to 5.89%) than if QBPs had not been introduced. There was no change in LOS for hip fracture patients in comparison to the counterfactual (Table 1). For prostate cancer surgery patients, the increase in mean acute LOS over the counterfactual was 0.33 days (95% CI: 0.07 to 0.59), and the increase in mean total LOS was 0.34 days (95% CI: 0.06 to 0.61). There were no other statistically significant changes observed for the LOS outcome (Fig 2 and S1 Fig).

Fig 2. Mean acute length of stay for the episode of care (days).

Fig 2

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

Access to care

At 2 years post-implementation, the volume of patients admitted with congestive heart failure was higher by 80 patients (95% CI: 2 to 159) than if QBPs had not been introduced (Table 1). The volume of hip fracture admissions was lower by 138 patients (95% CI: -183 to -93). The percentage of admitted patients living in the lowest income quintile was higher (1.89%; 95% CI: 0.51% to 3.27%) for those diagnosed with congestive heart failure and lower (-2.08%; 95% CI: -3.74% to -0.43%) for those who underwent prostate cancer surgery.

Hospital coding behaviour

We observed no statistically significant changes in mean HIG weight for any of the four cohorts (S4 Fig).

Interpretation

Summary of findings

For the seven outcomes across the four diagnoses and/or procedures we studied, we compared observed findings against the expected counterfactual in the domains of quality, access, and coding behaviour, and found an inconsistent and generally weak response to the QBP funding reform at the system level. In general, QBPs did not appear to alter prevailing trends in return to hospital or death. Contrary to expectations, LOS increased slightly for prostate cancer surgery. Counterintuitively, despite no change in LOS, QBP funding was associated with a decrease in the overall volume of admissions for hip fracture surgery and a small absolute increase (of 3%) in the monthly percentage of hip fracture patients who returned to hospital or died within 30 days of discharge. We also observed small (<2%) changes in the percentage of patients residing in the lowest neighborhood income quintile: an increase for congestive heart failure admissions and a decrease for prostate cancer surgery.

Explanation of findings

To our knowledge, there is no published quantitative research on the broad effects of the implementation of the QBP funding reform policy. Prior qualitative analyses of QBPs have revealed challenges associated with implementation of this complex hospital funding reform policy [7, 16, 38].

Previous studies found mixed effects of other types of hospital funding reform [25, 39]. For example, evaluation of a limited experiment with activity-based funding in British Columbia, Canada, showed small decreases in volume, small increases in patients’ length of stay, and no changes in measures of quality (i.e., unplanned readmissions and in-hospital mortality) [25]. Conversely, a systematic review of ABF found that transition to activity-based funding initially decreased length of stay in the US and internationally, and also identified important policy- and clinically-relevant changes, including substantial increases in admissions to post-acute care following hospitalization [8].

Unintended consequences typically associated with ABF-like reforms include, for example, patients being discharged “sicker and quicker” to post-acute care facilities or home, and “upcoding”, which may be appropriate if it represents more accurate coding, or inappropriate [8]. We did not see the decrease in LOS that might have been precipitated by accelerated discharge. Nor did we observe changes in coding behaviour for the QBPs studied.

The slight variation we observed in effects—some positive, others negative—may be partly explained by our prior qualitative work, in which we observed variation in response to the reform related to complexity of changes required, internal capacity for organizational change, and availability and appropriateness of supports to manage change [16]. It may also simply represent noise rather than signal. Our goal in this paper was not to understand quantitatively the variation in responsiveness by hospital, but to evaluate the system-level effects of the jurisdictional policy change, as this is the level at which ‘success’ of the policy reform must ultimately be judged.

The lack of large-scale meaningful changes in association with Ontario’s shift to QBP funding is perhaps not surprising. Funding reforms may not be necessary or effective when desirable changes are already occurring. For example, hospitals were already under long-standing pressures to reduce length of stay and may have reached a floor, which may explain why further financial pressure from QBPs had little effect. Similarly, hospitals may have also lacked effective incentives or supports to address readmissions, since, unlike activity-based funding reforms in other countries, Ontario’s QBP funding reform did not financially disincentivize return to hospital nor link funding to care outcomes.

Limitations

Our study has several important limitations that are common to observational studies of policy changes. Teasing apart the effect of QBPs in the presence of multiple system-level changes is challenging. First, other initiatives to improve patient care and/or control costs may have overlapped with the timing of QBP implementation. Specific initiatives that we are aware of include passage in 2010 of the Excellent Care for All Act [40], the introduction of Health Based Allocation Model hospital funding reforms in April 2012 [24], and the introduction of Community Health Links in December 2012 [41] (S5–S7 Figs, S7 Table). Visual inspection of data points around the timing of introduction of these initiatives, however, suggests that they are unlikely to have had a major impact on the outcomes studied in our analyses. A possible exception is the introduction of Community Health Links in December 2012: because its timing was close to that of the QBP for congestive heart failure (CHF) in April 2013, it is difficult to independently assess the effect of the QBP for this condition. Undetected confounding is always possible in any uncontrolled study. Policies aimed at improving health care are constantly being tinkered with, which may influence any particular intervention, such as QBPs, in ways not easily detected. Second, given the nearly ubiquitous implementation of QBPs in Ontario, we did not identify suitable contemporaneous comparators in this study, which could have strengthened the inferences drawn. Although one way to optimize an ITS is to add negative controls, we did not do so because the goal of the reform was to effect broad change across the entire system, leaving only a small number of unique and, therefore, non-comparable hospitals exempt from implementing QBPs (i.e., very small hospitals with few beds and/or those serving unique targeted populations). Third, because our analyses were restricted to QBP-funded hospitals rather than all hospitals, we cannot be certain that our results are generalizable to the whole system; however, the proportion of QBP procedures occurring outside of QBP-funded hospitals is low (<11%). Fourth, a broader range of outcomes (e.g., the extent to which patient care is aligned with evidence-based care processes described in QBP clinical pathways, or reductions in inter-hospital variation in care, cost, and wait times) might have been more sensitive to, or revealed different, effects of QBPs on patient care and outcomes, care providers, and the health care system as a whole. Fifth, QBPs are unique to Ontario, making generalizability to other jurisdictions (both within and outside Canada) difficult to assess. Although somewhat similar in design to ABF reforms elsewhere, critical differences include the absence of financial disincentives for readmission and a smaller, less ubiquitous funding scope limited to fewer priority diagnoses and procedures than ABF reforms elsewhere. However, this study is relevant to health system funding reforms that attempt to improve quality while also cutting costs, though contextual factors that influence the linkage between quality and cost are difficult to capture in relatively simple funding reforms [42]. Sixth, we did not assess how QBPs affected hospitals’ finances, so we cannot make inferences about whether increases or decreases in hospitals’ budgets affected patient care and/or outcomes for the QBPs we evaluated.
Seventh, there may be benefits or harms of QBPs that we did not measure, or other policy objectives that may have been met, such as those related to total cost per episode-of-care or cost to the system overall. We were careful to limit our conclusions to only the QBPs and outcomes we evaluated to avoid being overbroad.

Conclusion

We found mixed and generally very small effects on quality of care, access to care, and coding behaviour, across the four QBPs we studied. We speculate that challenges with implementing the best practice pathways featured in the QBP handbooks, together with progressive controls on hospital expenditures, and a worsening overall fiscal picture in Ontario coincident with QBP implementation, may have led to inconsistent and weak signals. Further experimentation with funding reform as a potential mechanism to improve outcomes might yield greater impact if focused on specific diagnoses and procedures in which suboptimal process or outcome measures are well-established and for which efforts to improve outcomes by other means have been inadequate.

Supporting information

S1 Fig. Mean length of stay for entire episode (days).

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

(TIFF)

S2 Fig. Percent change in patients over 65 (%).

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

(TIFF)

S3 Fig. Percent change in patients living in the lowest neighborhood income quintile.

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

(TIFF)

S4 Fig. Mean HIG weight.

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

(TIFF)

S5 Fig. Percent of patients returned to hospital or died with competing initiatives.

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing initiatives are outlined in the legend. Data are seasonally adjusted.

(TIFF)

S6 Fig. Mean acute length of stay for the episode of care (days) with competing initiatives.

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing initiatives are outlined in the legend. Data are seasonally adjusted.

(TIFF)

S7 Fig. Total volume of diagnoses/procedures with competing initiatives.

Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing Initiatives are outlined in the legend. Data are seasonally adjusted.

(TIFF)

S1 Table. Cohort characteristics for congestive heart failure patients included in the analysis.

(DOCX)

S2 Table. Cohort characteristics for hip fracture patients included in the analysis.

(DOCX)

S3 Table. Cohort characteristics for pneumonia patients included in the analysis.

(DOCX)

S4 Table. Cohort characteristics for prostate cancer patients included in the analysis.

(DOCX)

S5 Table. Estimated coefficients from the interrupted time series analysis on the impact of quality-based procedure policy (% patients returned to hospital or died, mean acute length of stay, total volume).

(DOCX)

S6 Table. Estimated coefficients from the interrupted time series analysis on the impact of quality-based procedure policy (% change in patients over 65, % change in patients living in lowest neighborhood income quintile, mean HIG weight).

(DOCX)

S7 Table. Overlapping initiatives.

(DOCX)

S1 File

(ZIP)

Acknowledgments

We thank Michael Law, University of British Columbia, for insightful comments on an earlier draft of the manuscript.

Data Availability

The dataset from this study is held securely in coded form at ICES. While legal data sharing agreements between ICES and data providers (e.g., health organizations and government) prohibit ICES from making the dataset publicly available, access may be granted to those who meet pre-specified criteria for confidential access, available at www.ices.on.ca/DAS (email: das@ices.on.ca). The full dataset creation plan and underlying analytic code are available as Supporting Information files, understanding that the computer programs may rely upon coding templates or macros that are unique to ICES and are therefore either inaccessible or may require modification.

Funding Statement

This work was funded through an Ontario Strategy for Patient Oriented Research Support Unit (OSSU) Impact Award, which was, in turn, funded by the Canadian Institutes of Health Research and the Government of Ontario. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. This study was also supported by ICES, which is funded by an annual grant from the Ontario Ministry of Health and Long-Term Care (MOHLTC). The opinions, results and conclusions reported in this paper are those of the authors and are independent from the funding sources. No endorsement by ICES or the Ontario MOHLTC is intended or should be inferred. Parts of this material are based on data and information compiled and provided by the Canadian Institute for Health Information (CIHI). However, the analyses, conclusions, opinions and statements expressed herein are those of the authors, and not necessarily those of CIHI.

References

1. Damberg CL, Sorbero ME, Lovejoy SL, Martsolf G, Raaen L, Mandel D. Measuring Success in Health Care Value-Based Purchasing Programs: Findings from an Environmental Scan, Literature Review, and Expert Panel Discussions.
2. Mattison CA, Wilson MG. Rapid synthesis: Examining the effects of value-based physician payment models. Hamilton, Canada: McMaster Health Forum; 10 October 2017.
3. Sutherland J, Crump RT, Repin N, Hellsten E. Paying for Hospital Services: A Hard Look at the Options. SSRN Electronic Journal. 2013. doi: 10.2139/ssrn.2303809
4. Gu Q, Koenig L, Faerberg J, Steinberg CR, Vaz C, Wheatley MP. The Medicare Hospital Readmissions Reduction Program: Potential Unintended Consequences for Hospitals Serving Vulnerable Populations. Health Services Research. 2014;49:818–837. doi: 10.1111/1475-6773.12150
5. Weeks WB, Rauh SS, Wadsworth EB, Weinstein JN. The Unintended Consequences of Bundled Payments. Annals of Internal Medicine. 2013;158:62. doi: 10.7326/0003-4819-158-1-201301010-00012
6. Lipsitz LA. Understanding Health Care as a Complex System: The Foundation for Unintended Consequences. JAMA. 2012;308:243–244. doi: 10.1001/jama.2012.7551
7. Palmer KS, Brown AD, Evans JM, Marani H, Russell KK, Martin D, et al. Qualitative analysis of the dynamics of policy design and implementation in hospital funding reform. PLOS ONE. 2018;13:e0191996. doi: 10.1371/journal.pone.0191996
8. Palmer KS, Agoritsas T, Martin D, Scott T, Mulla SM, Miller AP, et al. Activity-Based Funding of Hospitals and Its Impact on Mortality, Readmission, Discharge Destination, Severity of Illness, and Volume of Care: A Systematic Review and Meta-Analysis. PLoS ONE. 2014;9:e109975. doi: 10.1371/journal.pone.0109975
9. Government of Ontario, Ministry of Health and Long-Term Care. Health System Funding Reform—Health Care Professionals—MOHLTC. [cited 16 Jul 2018]. http://www.health.gov.on.ca/en/pro/programs/ecfa/funding/hs_funding_qbp.aspx
10. Ontario Ministry of Health and Long-Term Care. Quality-Based Procedures Indicators: An Implementation Guidance Document. 2014. http://health.gov.on.ca/en/pro/programs/ecfa/docs/qbp_indicator_guidance_en.pdf
11. Ontario Hospital Association. Toolkit to Support Implementation of Quality-Based Procedures. ISBN 978-0-88621-353-4. https://www.oha.com/Documents/QBP%20Toolkit.pdf
12. Health Quality Ontario, Ministry of Health and Long-Term Care. Quality-Based Procedures: Clinical Handbook for Community-Acquired Pneumonia. Toronto, ON: Health Quality Ontario; 2014 Feb. p. 67. www.hqontario.ca/evidence/evidence-process/episodes-of-care#community-acquired-pneumonia
13. Health Quality Ontario, Ministry of Health and Long-Term Care. Quality-Based Procedures: Clinical Handbook for Hip Fracture. Toronto, ON: Health Quality Ontario; 2013 May. p. 97. http://www.health.gov.on.ca/en/pro/programs/ecfa/docs/qbp_hipfracture.pdf
14. Health Quality Ontario, Ministry of Health and Long-Term Care. Quality-Based Procedures: Clinical Handbook for Heart Failure (Acute and Postacute). Toronto, ON: Health Quality Ontario; 2015 Feb. p. 78. http://www.health.gov.on.ca/en/pro/programs/ecfa/docs/qbp_heart.pdf
15. Health Quality Ontario, Ministry of Health and Long-Term Care. Quality-Based Procedures: Clinical Handbook for Cancer Surgery. Toronto, ON: Health Quality Ontario; 2016 Jan. p. 92. http://www.health.gov.on.ca/en/pro/programs/ecfa/docs/qbp_cancer_surgery.pdf
16. Palmer KS, Brown AD, Evans JM, Marani H, Russell KK, Martin D, et al. Standardising costs or standardising care? Qualitative evaluation of the implementation and impact of a hospital funding reform in Ontario, Canada. Health Research Policy and Systems. 2018;16:74. doi: 10.1186/s12961-018-0353-6
17. Ontario Hospital Association—Your Hospitals. [cited 23 Jun 2018]. https://www.oha.com/your-hospitals
18. Fine B, Schultz SE, White L, Henry D. Impact of restricting diagnostic imaging reimbursement for uncomplicated low back pain in Ontario: a population-based interrupted time series analysis. CMAJ Open. 2017;5:E760–E767. doi: 10.9778/cmajo.20160151
19. Hardy G, Colas JA, Weiss D, Millar D, Forster A, Walker M, et al. Effect of an innovative community-based care model, the Monarch Centre, on postpartum length of stay: an interrupted time-series study. CMAJ Open. 2018;6:E261–E268. doi: 10.9778/cmajo.20180033
20. Jandoc R, Burden AM, Mamdani M, Lévesque LE, Cadarette SM. Interrupted time series analysis in drug utilization research is increasing: systematic review and recommendations. Journal of Clinical Epidemiology. 2015;68:950–956. doi: 10.1016/j.jclinepi.2014.12.018
21. Kontopantelis E, Doran T, Springate DA, Buchan I, Reeves D. Regression based quasi-experimental approach when randomisation is not an option: interrupted time series analysis. BMJ. 2015;350:h2750. doi: 10.1136/bmj.h2750
22. Rudoler D, de Oliveira C, Cheng J, Kurdyak P. Payment incentives for community-based psychiatric care in Ontario, Canada. CMAJ. 2017;189:E1509–E1516. doi: 10.1503/cmaj.160816
23. Charlson M, Szatrowski TP, Peterson J, Gold J. Validation of a combined comorbidity index. J Clin Epidemiol. 1994;47:1245–1251. doi: 10.1016/0895-4356(94)90129-5
24. Health Based Allocation Model: HBAM Inpatient Group Methodology and Reports at CIHI. https://www.oha.com/Documents/HBAM-What%20You%20Need%20To%20Know.pdf
25. Sutherland JM, Liu G, Crump RT, Law M. Paying for volume: British Columbia’s experiment with funding hospitals based on activity. Health Policy. 2016;120:1322–1328. doi: 10.1016/j.healthpol.2016.09.010
26. Moscelli G, Siciliani L, Gutacker N, Cookson R. Socioeconomic inequality of access to healthcare: Does choice explain the gradient? Journal of Health Economics. 2018;57:290–314. doi: 10.1016/j.jhealeco.2017.06.005
27. Health Quality Ontario. Income and health: opportunities to achieve health equity in Ontario. 2016. https://www.hqontario.ca/Portals/0/documents/system-performance/health-equity-report-en.pdf
28. Canadian Institute for Health Information. Trends in Income-Related Health Inequalities in Canada: Technical Report, Revised July 2016. Ottawa, ON: CIHI; 2016. https://secure.cihi.ca/free_products/trends_in_income_related_inequalities_in_canada_2015_en.pdf
29. Seiber EE. Physician Code Creep: Evidence in Medicaid and State Employee Health Insurance Billing. Health Care Financ Rev. 2007;28:83–93.
30. Abler S, Verde P, Stannigel H, Mayatepek E, Hoehn T. Effect of the introduction of diagnosis related group systems on the distribution of admission weights in very low birthweight infants. Arch Dis Child Fetal Neonatal Ed. 2011;96:F186–189. doi: 10.1136/adc.2010.192500
31. Helms CM. A pseudo-epidemic of septicemia among Medicare patients in Iowa. Am J Public Health. 1987;77:1331–1332. doi: 10.2105/ajph.77.10.1331
32. Medicare Fraud Strike Force Charges 89 Individuals for Approximately $223 Million in False Billing. [cited 23 Jun 2018]. https://www.justice.gov/opa/pr/medicare-fraud-strike-force-charges-89-individuals-approximately-223-million-false-billing
33. Taljaard M, McKenzie JE, Ramsay CR, Grimshaw JM. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care. Implementation Science. 2014;9:77. doi: 10.1186/1748-5908-9-77
34. Jebb AT, Tay L, Wang W, Huang Q. Time series analysis for psychological research: examining and forecasting change. Front Psychol. 2015;6. doi: 10.3389/fpsyg.2015.00727
35. Hyndman RJ, Khandakar Y. Automatic Time Series Forecasting: The forecast Package for R. Journal of Statistical Software. 2008;27. doi: 10.18637/jss.v027.i03
36. auto.arima function | R Documentation. [cited 3 Jul 2019]. https://www.rdocumentation.org/packages/forecast/versions/8.7/topics/auto.arima
37. Zhang F, Wagner AK, Soumerai SB, Ross-Degnan D. Methods for estimating confidence intervals in interrupted time series analyses of health interventions. J Clin Epidemiol. 2009;62. doi: 10.1016/j.jclinepi.2008.08.007
38. Baxter P, Cleghorn L, Alvarado K, Cummings G, Kennedy D, McKey C, et al. Quality-based procedures in Ontario: exploring health-care leaders’ responses. J Nurs Manag. 2016;24:50–58. doi: 10.1111/jonm.12271
39. Moreno-Serra R, Wagstaff A. System-wide impacts of hospital payment reforms: evidence from Central and Eastern Europe and Central Asia. J Health Econ. 2010;29:585–602. doi: 10.1016/j.jhealeco.2010.05.007
40. Government of Ontario, Ministry of Health and Long-Term Care. Excellent Care For All—Health Care Professionals—MOHLTC. [cited 26 Jan 2019]. http://www.health.gov.on.ca/en/pro/programs/ecfa/legislation/act.aspx
41. Government of Ontario, Ministry of Health and Long-Term Care. Healthy Change—Ontario’s Action Plan for Health Care—Public Information—MOHLTC. [cited 26 Jan 2019]. http://www.health.gov.on.ca/en/ms/ecfa/healthy_change/healthlinks.aspx
42. Øvretveit J, Health Foundation (Great Britain). Does improving quality save money? A review of evidence of which improvements to quality reduce costs to health service providers. London: The Health Foundation; 2009.

Decision Letter 0

Hans-Peter Brunner-La Rocca

10 Dec 2019

PONE-D-19-27876

Effects of Quality-based Procedure hospital funding reform in Ontario, Canada: An Interrupted Time Series Study

PLOS ONE

Dear Dr Li,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Both reviewers indicated the potential of the manuscript but also highlighted some points that need to be addressed. In particular, both raised the point that QBP may not be generally known sufficiently to many readers. Providing additional information on QBP is therefore important and needs to be carefully addressed in your revisions. I would also like to ask you to address the concerns about not following the PLOS ONE policy made by one of the reviewers.

We would appreciate receiving your revised manuscript by Jan 24 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Hans-Peter Brunner-La Rocca, M.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In the ethics statement in the manuscript and in the online submission form, please provide additional information about the patient records used in your retrospective study.

Specifically, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent.

If patients provided informed written consent to have data from their medical records used in research, please include this information.

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

4. Thank you for stating the following in the Competing Interests section:

'We have read the journal's policy and the authors of this manuscript have the following competing interests: NI, AB, KP, HM report funding from Ontario Strategy for Patient Oriented Research Support Unit (OSSU) Impact Award during the conduct of the study; OSSU was in turn funded by the Canadian Institutes of Health Research (CIHR) and the Government of Ontario. NI reports support from a CIHR New Investigator Award and from the Department of Family and Community Medicine, University of Toronto, unrelated to this work. AB is a former senior official in the Ontario Government and serves on the current Premier’s Council on Health. AL reports support from a CIHR Fellowship Award.'

a. Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

b. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

5. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The study attempts to estimate the effects of an intervention designed to improve the quality of patient care provided by hospitals in the province of Ontario Canada. The intervention consists of the introduction of Quality-based Procedures (QBPs) for the provision of care for inpatients being treated for congestive heart failure, hip fracture, pneumonia, and prostate cancer. The authors focus on monthly data on various measures of the quality of care of those patients who were admitted to hospital to treat these conditions, both before the introduction of the intervention and afterwards. The authors also examined the total volume of cases and the fraction of cases that were from more compromised patients (those from low income neighbourhoods and those 65+ years of age). This used an interrupted time series analysis with various adjustments for seasonality and AR errors. There was little if any change in the pre policy trend line after the policy was introduced, leading the authors to conclude that these policies were largely ineffective.

This was a good study with potential. I have two substantive comments on the methods and one on the data disclosure (which I am required to address as a referee).

First, the authors disclose that i) the QBP consisted of a fixed dollar remuneration for the entire episode of care, and the provision of information on “best practices” for the treatment of the respective conditions and ii) there were no financial penalties for bad outcomes or non-adherence.

Other features of the QBP are not explained. For instance, the authors speak of there being a “fixed volume” component to the QBP: “fixed fee and fixed volume, for each QBP procedure or diagnosis.” It is unclear what this means. Does it mean for instance that the government would pay only for a certain volume of cases? If so, what would happen to the funds should the number of patients treated fall below the maximum number allowed? Would this money flow back to the hospital?

If a patient whose treatment falls under the QBP rubric is discharged but then re-admitted to the hospital due to complications, how is the cost of the readmission covered? Is this supposed to come from the fixed remuneration for that patient?

What measures were present, if any, to prevent hospitals from cream skimming? Is this even possible? Could hospitals have any influence on the disease severity and volume of patients seeking care? Is the remuneration per patient adjusted for patient characteristics or the complexity of the condition?

It would also be helpful to understand the ability of hospital management to direct the clinical care of patients in their hospitals. How much discretion did they have?

Information on the nature of the intervention and how it would have affected clinical decision making in the hospital might go some ways to explaining the results.

Second, before and after designs, such as the kind used in this study, face the challenge of distinguishing the effect of the intervention from the effects of other interventions that were introduced over the sample period. The authors describe some of these other interventions but are of the opinion that they can be safely ignored because i) none were introduced at the exact same time as the QBP policy and ii) those that were introduced before the policy would have effects that “would have been captured in the secular trend” and iii) those that were introduced after the policy would have effects that “did not affect the entire system in a reliable fashion for the procedures, diagnoses, or outcomes under investigation, or did not have a specific time when changes in the outcomes measured might have been expected.”

I don’t find these arguments persuasive. There are multiple ways that the QBP effect could be obscured by other effects. There is no requirement for instance that any policies introduced prior to the intro of the QBP would have effects that would combine and result in a continuous pre policy linear time trend. The statement that policies introduced after QBP had no impact on trends because one cannot predict when changes in outcomes would have occurred suggests that indeed these other policies could have had some impact.

I would address this issue in two ways. First, provide some more info on the other policies that were introduced over the sample period and provide some justification that they indeed have no material impact on the trends in the outcomes under investigation. This could be contained in an appendix.

Second, it may be helpful as well to run a structural break test on the time series data, allowing the data to determine if there were any structural breaks at any point in the series. The references contained in the following study may be helpful:

https://link.springer.com/article/10.1007/s10614-011-9271-1

In the cases where the authors have detected a break in trend (such as for hip fracture mortality), it would be useful to see if the automated structural break detection algorithm also detects this. This would lend some confidence that the break really is due to the QBP policy.

The referee form also requires that I comment on the steps taken by the researchers to make their data available to other researchers so as to allow for independent replication. The authors have elected to keep their “source data” – both the individual level data and the aggregated data – confidential. They do display graphs of the trends of the deseasonalized data, but do not present the actual values of the aggregated data in either raw or deseasonalized form. The claimed justification for the secrecy is a provision of Ontario’s “Personal Health Information Protection Act” which evidently prohibits disclosure. My sense is that the study authors do not justify why disclosure is not permitted. I am very interested to understand exactly how the disclosure of data with personal identifiers removed could violate the Act. Exactly what is the privacy breach for releasing the individual level data with personal identifiers removed? How would the Act be violated if the aggregated unadjusted data were to be released?

Reviewer #2: General Comments:

1. QBPs and quality measures/clinical handbooks. The content of the QBP quality handbooks will not be familiar to many readers. Can the authors provide basic information on the content, scope, quality/rigor, and specificity of the clinical guidance in the QBP quality handbooks, perhaps in an online supplement? To what extent would the quality of care recommendations differ from the standard of care already in place in Canadian teaching hospitals or large community hospitals? How evidence-based and specific are the recommendations? Was there ever a realistic prospect that providing information in QBP handbooks could measurably improve quality of care, with or without an enforcement mechanism in QBP administration?

2. Is there a reason why QBP implementation would be expected to affect the number of admissions for congestive heart failure, pneumonia, hip fracture, and even prostatectomy? I would think that the number of admissions would be driven primarily by disease incidence, rather than hospital capacity or funding models. More elective procedures, such as non-cancer surgery (e.g., knee and hip joint replacement), might be more sensitive to changes in funding models.

3. While implementation of QBPs did not affect the outcomes the authors evaluated, it is possible that the funding model did achieve other policy objectives. To the extent that QBPs are intended to limit the financial risk of the provincial funder associated with shortfalls in annual hospital budgets, it is possible that QBP funding did achieve some policy objectives. Further, it is possible that episode-of-care funding incentivized hospitals to provide more efficient and less costly care for these episodes. The investigators would not have detected these efficiencies, since they would not likely be reflected in changes in clinical outcome, or even general resource utilization measures such as length of stay.

Specific Comments:

1. Abstract "Patients from the lowest income neighborhoods increased slightly..." will be a confusing statement to people not familiar with SES income quintiles. Perhaps state something like "The proportion of patients admitted for congestive heart failure who lived in the lowest neighborhood income quintile increased.." or something like that which more clearly indicates the direction of any change in access/SES of target population.

2. Introduction 3rd paragraph "Funding was carved out of hospitals' global budgets and then reallocated…as a fixed fee" is not entirely correct, since funding per episode of care is a product of a fixed fee and the CMI of the hospital (a hospital-level attribute based on historical patient case mix for similar admissions). The amount of funding for the same QBP varies significantly from hospital to hospital based on CMI.

3. Page 7 What is "responsiveness" to QBPs? ("prior qualitative work which identified sources of potential variation in responsiveness to QBPs")

4. Page 9 explanation of access to care measures. More explanation about why the "proportion of patients living in lowest neighborhood income quintile" is a measure of access would help average readers understand this concept.

5. Page 9 coding behaviour. Can you explain why the mean HBAM HIG RIW trend is a measure of coding "behaviour"? How can you know if changes in RIWs do not represent true abrupt changes in patient case mix?

6. Page 11 results 2nd paragraph "...the effect of QBPs on outcomes 2 years after implementation..." (don't need to say "after 2 years post-implementation"). Same comment applies throughout eg 1st sentence 3rd paragraph of results.

7. Page 14 discussion. "The slight variation we observed in effects--some results went up, others down...". These changes may still reflect noise rather than signal. I would not necessarily assume any causal association with the exposure. Other explanations do include random chance, bias, other secular trends, etc.

8. Page 15 "...the goal of the reform was to effect broad change..." (not affect)

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: David Urbach

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 19;15(8):e0236480. doi: 10.1371/journal.pone.0236480.r002

Author response to Decision Letter 0


30 Jan 2020

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Authors’ Response to 1:

We have carefully checked that the manuscript now meets PLOS ONE’s style requirements for title page, file naming, and body of manuscript.

2. In the ethics statement in the manuscript and in the online submission form, please provide additional information about the patient records used in your retrospective study. Specifically, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

Authors’ Response to 2:

We have updated our ethics statement to read as follows:

“The use of data in this project was authorized under section 45 of Ontario’s Personal Health Information Protection Act, which does not require review by a research ethics board.”

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Authors’ Response to 3:

We’ve consulted with our organization and we now write:

“The dataset from this study is held securely in coded form at ICES. While data sharing agreements prohibit ICES from making the dataset publicly available, access may be granted to those who meet pre-specified criteria for confidential access, available at www.ices.on.ca/DAS. The full dataset creation plan and underlying analytic code are available from the authors upon request, understanding that the computer programs may rely upon coding templates or macros that are unique to ICES and are therefore either inaccessible or may require modification.”

4. Thank you for stating the following in the Competing Interests section:

'We have read the journal's policy and the authors of this manuscript have the following competing interests: NI, AB, KP, HM report funding from Ontario Strategy for Patient Oriented Research Support Unit (OSSU) Impact Award during the conduct of the study; OSSU was in turn funded by the Canadian Institutes of Health Research (CIHR) and the Government of Ontario. NI reports support from a CIHR New Investigator Award and from the Department of Family and Community Medicine, University of Toronto, unrelated to this work. AB is a former senior official in the Ontario Government and serves on the current Premier’s Council on Health. AL reports support from a CIHR Fellowship Award.'

a. Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Authors’ Response 4a: These competing interests do not alter our adherence to PLOS ONE policies. Our data sharing agreements allow access to any researcher who meets pre-specified criteria for confidential access, available at www.ices.on.ca/DAS.

b. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

Authors’ Response 4b: Our revised competing interests statement now reads as follows, and we now also include the updated competing interests in the body of our cover letter, as requested.

“We have read the journal's policy and the authors of this manuscript have the following competing interests: NI, AB, KP, HM report funding from Ontario Strategy for Patient Oriented Research Support Unit (OSSU) Impact Award during the conduct of the study; OSSU was in turn funded by the Canadian Institutes of Health Research (CIHR) and the Government of Ontario. NI reports support from a CIHR New Investigator Award and from the Department of Family and Community Medicine, University of Toronto, unrelated to this work. AB is a former senior official in the Ontario Government and serves on the current Premier’s Council on Health. AL reports support from a CIHR Fellowship Award. These competing interests do not alter our adherence to PLOS ONE policies. Our data sharing agreements allow access to researchers who meet pre-specified criteria for confidential access, available at www.ices.on.ca/DAS.”

5. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript.

Authors’ Response to 5:

Our ethics statement now appears in the Methods section of the manuscript, and does not appear in any other section.

Reviewers' comments and Authors’ responses:

Reviewer #1:

6. The study attempts to estimate the effects of an intervention designed to improve the quality of patient care provided by hospitals in the province of Ontario, Canada. The intervention consists of the introduction of Quality-based Procedures (QBPs) for the provision of care for inpatients being treated for congestive heart failure, hip fracture, pneumonia, and prostate cancer. The authors focus on monthly data on various measures of the quality of care of those patients who were admitted to hospital to treat these conditions, both before the introduction of the intervention and afterwards. The authors also examined the total volume of cases and the fraction of cases that were from more compromised patients (those from low-income neighbourhoods and those 65+ years of age). The analysis used an interrupted time series design with various adjustments for seasonality and autoregressive (AR) errors. There was little if any change in the pre-policy trend line after the policy was introduced, leading the authors to conclude that these policies were largely ineffective.

This was a good study with potential. I have two substantive comments on the methods and one on the data disclosure (which I am required to address as a referee).

First, the authors disclose that i) the QBP consisted of a fixed dollar remuneration for the entire episode of care, and the provision of information on “best practices” for the treatment of the respective conditions and ii) there were no financial penalties for bad outcomes or non-adherence.

Other features of the QBP are not explained. For instance, the authors speak of there being a “fixed volume” component to the QBP: “fixed fee and fixed volume, for each QBP procedure or diagnosis.” It is unclear what this means. Does it mean for instance that the government would pay only for a certain volume of cases? If so, what would happen to the funds should the number of patients treated fall below the maximum number allowed? Would this money flow back to the hospital?

Authors’ Response to 6:

Yes, government pays for only a certain volume of QBP-funded cases per hospital, based on historical volume levels at each hospital. However, we know from prior qualitative research (URLs below) that hospitals could “borrow” from their own global budget to cover the costs of any diagnoses/procedures in excess of those funded by QBPs. We don't know the extent to which this borrowing happens, only that it does. Conversely, if the number of patients treated falls below the maximum allowed, the funds are ostensibly returned to government, but we have no evidence of the extent to which this happens either.
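To make the funding flow described above concrete, here is a toy illustration in Python with made-up numbers (a sketch of the rules as described in this response, not an official funding formula):

    # Toy settlement under the rules described above: government pays a pre-set
    # fee for up to a pre-set volume of cases; excess cases are absorbed by the
    # hospital's global budget; shortfall funds ostensibly flow back to government.
    def qbp_settlement(fee, funded_volume, actual_volume):
        paid_cases = min(actual_volume, funded_volume)
        excess = max(actual_volume - funded_volume, 0)
        shortfall = max(funded_volume - actual_volume, 0)
        return {
            "qbp_payment": fee * paid_cases,
            "covered_by_global_budget": fee * excess,
            "returned_to_government": fee * shortfall,
        }

    # Example: a $7,000 fee, 100 funded cases, 110 cases actually treated.
    print(qbp_settlement(7000.0, 100, 110))
    # {'qbp_payment': 700000.0, 'covered_by_global_budget': 70000.0, 'returned_to_government': 0.0}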

Our statement in the manuscript remains accurate as written, “Funding was carved out of hospitals’ global budgets and then reallocated to hospitals at the start of the relevant fiscal year as a fixed fee and fixed volume, for each QBP procedure or diagnosis.” Although Reviewer #1 asks good questions, we feel that revising the manuscript to drill down to this level of detail about the flow of funds back and forth between government and hospitals goes beyond what is required in this manuscript on quantitative effects. We discuss this in our prior qualitative research publications, including these two, both of which we cite early in the manuscript at references 7 and 12 for those interested in learning more about the implementation of QBPs:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0191996

https://health-policy-systems.biomedcentral.com/articles/10.1186/s12961-018-0353-6


7. If a patient whose treatment falls under the QBP rubric is discharged but then re-admitted to the hospital due to complications, how is the cost of the readmission covered? Is this supposed to come from the fixed remuneration for that patient?

Authors’ Response to 7:

If readmission occurs within 30 days of discharge due to complications, treatment costs are not funded from the fixed remuneration for the original admission, but rather from the global budget. In this sense, QBPs are not a bundled payment that would include the costs of readmission. If a patient is readmitted after 30 days, even for the same diagnosis/procedure, or for complications arising from the original admission, it is considered a new admission, funded anew through additional QBP funds, separate from the original amount.

8. What measures were present, if any, to prevent hospitals from cream skimming? Is this even possible? Could hospitals have any influence on the disease severity and volume of patients seeking care? Is the remuneration per patient adjusted for patient characteristics or the complexity of the condition?

Authors’ Response to 8:

Q 8a: “What measures were present, if any, to prevent hospitals from cream skimming? Is this even possible?”

A 8a: We know of no specific measures to prevent adverse risk selection, a.k.a. cream skimming (i.e., selection of high-value, lower-severity, low-cost patients to enhance profitability or reputation). To examine the extent to which this might be occurring, we evaluated 3 variables related to adverse risk selection: 1) patients’ income quintile; 2) the proportion of admissions over age 65; and 3) Charlson score (a comorbidity index and indicator of severity of illness). We found no evidence of adverse selection for these indicators, meaning no decline after QBPs in the proportion of poorer, older, or sicker patients being admitted. It does not appear, based on the outcomes we examined, that the hospital system, as a whole, was influencing patient selection, though it is possible that individual hospitals may have done so. We did not report data at the level of individual hospitals because our interest was in effects at the system level.

Q 8b: “Could hospitals have any influence on the disease severity and volume of patients seeking care?”

A 8b: Hospitals could not influence the disease severity or volume of patients “seeking” care (as the reviewer asks), because any patient is free to seek care in any Canadian hospital. Hospitals could, theoretically, influence who is treated or admitted to hospital for care. We did not study, for example, transport diversions, but it seems unlikely that a diversion for unplanned care would occur based only on a suspected diagnosis as communicated by a paramedic en route in an ambulance. Nor did we study wait list manipulation for planned care, by which we mean the theoretical possibility that complex cases might be pushed to the following year if the QBP funds were exhausted, or, conversely, that less complex elective cases might be preferentially selected. This is an interesting question for future research, but beyond the scope of this paper.

Q 8c: “Is the remuneration per patient adjusted for patient characteristics or the complexity of the condition?”

A 8c: There is a per-hospital adjustment to account for the overall complexity of the patient population, but there is no adjustment per patient. The QBP calculation is adjusted for each hospital, but not for each patient. The CMI of the hospital (a hospital-level attribute based on historical patient case mix for similar admissions) is considered in the QBP fee, and the amount of funding for the same QBP varies significantly from hospital to hospital based on CMI. We have added a sentence to clarify that, “The fee is adjusted for each hospital based on its case-mix index (CMI) to account for overall complexity in the patient population.”
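To illustrate with made-up numbers: if the provincial base fee for a given QBP were $7,000 per episode, a hospital with a CMI of 0.95 would receive $6,650 per episode and a hospital with a CMI of 1.20 would receive $8,400, since per-episode funding is the product of the fixed fee and the hospital's CMI.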

9: It would also be helpful to understand the ability of hospital management to direct the clinical care of patients in their hospitals. How much discretion did they have?

Authors’ Response 9:

Hospital management has no discretion to direct the care of individual patients in their hospitals, but it has some discretion over, for example, discharge policies to encourage shorter lengths of stay, including resources on the ward to facilitate discharge. It also has some control over the operating room (OR) time available. Again, the extent to which managerial discretion may influence clinical care is very interesting, but beyond the scope of this paper.

10: Information on the nature of the intervention and how it would have affected clinical decision-making in the hospital might go some ways to explaining the results.

Authors’ Response 10:

With regard to the nature of the intervention, we have discussed this extensively in our prior work cited at references 7 and 12. We did not study clinical decision-making in hospital, per se, to understand the extent to which this might explain our results. This would require qualitative research, interviewing clinicians to understand how QBPs influenced their decision-making, if at all. This is beyond the scope of this particular paper.

11: Second, before and after designs, such as the kind used in this study, face the challenge of distinguishing the effect of the intervention from the effects of other interventions that were introduced over the sample period. The authors describe some of these other interventions but are of the opinion that they can be safely ignored because i) none were introduced at the exact same time as the QBP policy and ii) those that were introduced before the policy would have effects that “would have been captured in the secular trend” and iii) those that were introduced after the policy would have effects that “did not affect the entire system in a reliable fashion for the procedures, diagnoses, or outcomes under investigation, or did not have a specific time when changes in the outcomes measured might have been expected.”

I don’t find these arguments persuasive. There are multiple ways that the QBP effect could be obscured by other effects. There is no requirement for instance that any policies introduced prior to the intro of the QBP would have effects that would combine and result in a continuous pre policy linear time trend. The statement that policies introduced after QBP had no impact on trends because one cannot predict when changes in outcomes would have occurred suggests that indeed these other policies could have had some impact.

I would address this issue in two ways. First, provide some more info on the other policies that were introduced over the sample period and provide some justification that they indeed have no material impact on the trends in the outcomes under investigation. This could be contained in an appendix.

Authors’ Response 11:

We can think of 3 possible options to address Reviewer #1’s request for more information on the other policies and further justification for why we believe they had no material impact. Our preference would be Option 1, but we defer to the editor to advise us on which of these is preferred:

Option 1: No further change required

In Limitations, we already identify the initiatives that occurred near the implementation of QBPs, each with corresponding citations for readers interested in deeper examination. We accept that the reviewer is not convinced, but we have now re-examined each initiative, and we remain confident in what we already say in the manuscript. That is, “…we did not consider them to be potential temporal confounders nor a source of bias or threat to the internal validity of our ITS. We did not, therefore, evaluate any influence they may have had on our outcomes during the time of the funding change.”

Importantly, the nature of these initiatives is such that they were mainly high level, implemented over a long time period – or not even fully implemented at all – across the entire system, and were very much indirectly related – if at all – to the QBPs and outcomes we evaluated. We did not evaluate the extent to which, or when, each of these initiatives was implemented across the system, as that would have required an entirely different study.

For example, the 2010 Excellent Care for All Act (enacted 2 years prior to implementation of the first QBPs) simply sets out a number of requirements for hospitals, including that they establish committees on quality-related issues, implement quality improvement plans, link executive compensation to achievement of targets, implement patient satisfaction and staff surveys, declare their values following public consultation, and establish a patient relations process. We do not believe that these high-level and long-term efforts to put “patients first” and strengthen organizational focus on accountability would have affected the four QBPs we studied, or the specific clinical outcomes we measured.

Similarly, Health Links, introduced in 2014 (2 years after QBPs were implemented), focuses on coordinating care for patients living with multiple chronic conditions and complex needs, by ensuring that each patient has a Coordinated Care Plan (CCP). Again, it is unlikely that implementing a CCP would obscure any effects of the QBPs we studied (congestive heart failure, hip fracture, pneumonia, and prostate cancer surgery), for the outcomes we measured (return to hospital or death within 30 days, acute LOS, volume of admissions, and patient characteristics).

Similarly, pressure for supply-side controls existed during a period of austerity that began in 2009, and hospitals contributed to getting the province back on track financially by accepting “years of zero percent funding increases at a time when inflation, patient volumes, labour costs, energy, and regulatory requirements grew significantly” (ref 36 in manuscript). Again, we don’t believe these economic ups and downs, pervasive in these times, would have obscured the effect for the QBPs and outcomes we studied. We have modified one sentence in the Limitations to clarify one of the initiatives, further explaining that there was “pressure to implement” supply-side controls on hospital expenditures “during a period of austerity” from “2009” (not 2011, per reference 36).

Similarly, the Health Based Allocation Model (HBAM), introduced in 2012, was another new way to partially fund hospitals. Instead of just counting each patient that a hospital treats, HBAM weights cases to account for the fact that some hospitals treat more high-resource patients than others. HBAM funding is based on data that lag by two years; therefore, service changes in fiscal year 2012/13 would not affect funding until 2014/15. Our study period ran from 2010-2017. Any potential influence of HBAM could not even have occurred until some indeterminate time after 2014, and even then, HBAM and QBPs are completely separate funding streams that don't depend on, or compete with, each other.

We do already say in the manuscript that, “However, these initiatives were mostly implemented well before the QBP funding reform, meaning that in this study their effects would have been captured in the secular trend. Those that occurred afterwards did not affect the entire system in a reliable fashion for the procedures, diagnoses, or outcomes under investigation, or did not have a specific time when changes in the outcomes measured might have been expected.”

Option 2: Add more to Limitations

We could add more details in Limitations including more citations and explanation about why these other high level policies were unlikely to have had any material impact on the trends we investigated. For example, similar to the explanations of these initiatives that we have provided in Option 1 above, we could say more about why we believe that any effects of policies introduced prior to QBPs would combine and result in a continuous pre-policy linear time trend. In this case, we seek advice from the Editor about what additional information is preferred, given that we do already provide citations to each of these initiatives for curious readers.

Option 3: Add Appendix

In an Appendix, we could separately describe in detail each of the other initiatives that were implemented during the study period, similar to what we have written in Option 1 above. However, we already cite source materials in the manuscript for each initiative, and readers could simply review the cited materials as is typical when a reader is curious to know more.

12. Second, it may be helpful as well to run a structural break test on the time series data, allowing the data to determine if there were any structural breaks at any point in the series. The references contained in the following study may be helpful:

https://link.springer.com/article/10.1007/s10614-011-9271-1

In the cases where the authors have detected a break in trend (such as for hip fracture mortality), it would be useful to see if the automated structural break detection algorithm also detects this. This would lend some confidence that the break really is due to the QBP policy.

Authors’ Response 12: Change-point detection methods are a popular way to detect whether any abrupt changes occurred at any point in a time series. However, given the number of outcomes and QBPs we were studying, and the slow or uncertain implementation of any other changes during the study period (as explained above in Response 11, Option 1), we chose to pre-specify the interruption point, incorporating an implementation period, to estimate the magnitude of change at the time the policy was implemented, rather than taking a more data-driven approach. Across the variety of outcomes and QBPs we chose, none showed any major effects, and we cautiously interpret this as QBPs having small effects, if any.
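For context, a minimal sketch of the kind of automated structural-break check the reviewer proposes, using the PELT algorithm from the Python ruptures library (one of several possible tools; the series and penalty below are simulated and illustrative, not the study data):

    import numpy as np
    import ruptures as rpt

    rng = np.random.default_rng(1)
    # 84 simulated months (2010-2017) with a deliberate level shift at month 30.
    signal = np.r_[rng.normal(10, 1, 30), rng.normal(12, 1, 54)]

    # PELT searches for an unknown number of change points at unknown locations.
    breakpoints = rpt.Pelt(model="l2").fit(signal).predict(pen=10)
    print(breakpoints)  # e.g., [30, 84]; the final index marks the end of the series

A break detected at the QBP implementation month would lend (though not prove) support for attributing the change to the policy, which is the cross-check the reviewer has in mind.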

13. The referee form also requires that I comment on the steps taken by the researchers to make their data available to other researchers so as to allow for independent replication. The authors have elected to keep their “source data” – both the individual level data and the aggregated data – confidential. They do display graphs of the trends of the deseasonalized data, but do not present the actual values of the aggregated data in either raw or deseasonalized form. The claimed justification for the secrecy is a provision of Ontario’s “Personal Health Information Protection Act” which evidently prohibits disclosure. My sense is that the study authors do not justify why disclosure is not permitted. I am very interested to understand exactly how the disclosure of data with personal identifiers removed could violate the Act. Exactly what is the privacy breach for releasing the individual level data with personal identifiers removed? How would the Act be violated if the aggregated unadjusted data were to be released?

Authors’ Response 13: We consulted with ICES’ Privacy Office and the Chief Science Officer. Our inability to make the aggregate unadjusted data sets publicly available has nothing to do with Ontario’s privacy laws, per se. Rather, it is currently prohibited under our data sharing agreements with data partners. While we renegotiate data sharing agreements to make more open access possible, study-specific data sets can be made available to individuals who meet criteria for confidential access through ICES’ Data and Analytical Services Unit. We described ICES policy on data sharing above, and repeat it here:

The dataset from this study is held securely in coded form at ICES. While data sharing agreements prohibit ICES from making the dataset publicly available, access may be granted to those who meet pre-specified criteria for confidential access.

For more details, please also see: https://www.ices.on.ca/DAS/Public-Sector

Reviewer #2: General Comments:

14. QBPs and quality measures/clinical handbooks. The content of the QBP quality handbooks will not be familiar to many readers. Can the authors provide basic information on the content, scope, quality/rigor, and specificity of the clinical guidance in the QBP quality handbooks, perhaps in an online supplement? To what extent would the quality of care recommendations differ from the standard of care already in place in Canadian teaching hospitals or large community hospitals? How evidence-based and specific are the recommendations? Was there ever a realistic prospect that providing information in QBP handbooks could measurably improve quality of care, with or without an enforcement mechanism in QBP administration?

Authors’ Response 14:

In the manuscript, we already cite each of the four relevant QBP handbooks at references 18-21. We have now cited the handbooks a second time, at their first mention in the Introduction.

The handbooks extensively describe the methodology by which they were created, including “content, scope, quality/rigor, and specificity of the clinical guidance”. We have added new text providing basic information about the handbooks, along with a new citation, as follows: “To create these handbooks for each QBP, the Ministry of Health and Long-Term Care, in collaboration with partners such as Health Quality Ontario, Cancer Care Ontario, and the Cardiac Care Network, established expert advisory panels with leading clinicians, scientists, and patients. They defined episodes of care for selected diagnoses or procedures, developed best practice recommendations for patient care, and suggested indicators to monitor for ongoing quality improvement. The resulting QBP Clinical Handbooks serve as a compendium of evidence and clinical consensus.”

We did not assess the extent to which the handbook recommendations differed from existing standards of care in place in Canadian teaching/community hospitals. This is a different study question, beyond the scope of ours.

In our previously published qualitative work we examined whether “there was ever a realistic prospect that the QBP handbooks could measurably improve quality of care, with or without an enforcement mechanism”. We have now added a citation to this work as indicated in the Introduction.

15. Is there a reason why QBP implementation would be expected to affect the number of admissions for congestive heart failure, pneumonia, hip fracture, and even prostatectomy? I would think that the number of admissions would be driven primarily by disease incidence, rather than hospital capacity or funding models. More elective procedures, such as non-cancer surgery (e.g., knee and hip joint replacement), might be more sensitive to changes in funding models.

Authors’ Response 15:

QBPs were originally selected in an attempt to standardize care for costly high volume diagnoses and procedures. Recall that the goals of QBPs were two-fold, as we explain in the Introduction: improve quality and reduce cost. We examined the tension between standardizing costs vs. standardizing quality in both of our published qualitative research papers (https://health-policy-systems.biomedcentral.com/articles/10.1186/s12961-018-0353-6 and https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0191996), which we reference in this manuscript.

We selected a range of QBP diagnoses and procedures – some planned, some unplanned, some surgical, some medical – in an attempt to assess whether the effects varied with the nature of the QBP. We assessed volume of admissions because we were interested in knowing whether QBPs might incentivize more admissions for some procedures and fewer for others, and whether that affected equitable access to care via adverse risk selection for patients > 65 years old, those in lower income quintiles, or those with greater severity of illness. As we say in our Conclusion, going forward, it might be more useful to focus on “specific diagnoses and procedures”. This is partly what our research revealed.

16. While implementation of QBPs did not affect the outcomes the authors evaluated, it is possible that the funding model did achieve other policy objectives. To the extent that QBPs are intended to limit the financial risk of the provincial funder associated with shortfalls in annual hospital budgets, it is possible that QBP funding did achieve some policy objectives. Further, it is possible that episode-of-care funding incentivized hospitals to provide more efficient and less costly care for these episodes. The investigators would not have detected these efficiencies, since they would not likely be reflected in changes in clinical outcome, or even general resource utilization measures such as length of stay.

Authors’ Response 16:

We agree. There may be other benefits or harms of QBPs that we did not measure, or other policy objectives that may have been met, such as those related to total cost per episode of care or to the system overall. As we state in the first sentence of the “Summary of Findings”, we were careful to limit our conclusions to only the QBPs and outcomes we evaluated, to avoid being overbroad.

Specific Comments:

17. Abstract "Patients from the lowest income neighborhoods increased slightly..." will be a confusing statement to people not familiar with SES income quintiles. Perhaps state something like "The proportion of patients admitted for congestive heart failure who lived in the lowest neighborhood income quintile increased.." or something like that which more clearly indicates the direction of any change in access/SES of target population.

Authors’ Response 17:

We have revised accordingly. The sentence now reads, “The percentage of patients who lived in the lowest neighborhood income quintile increased slightly for those diagnosed with congestive heart failure (1.89%; 95% CI: 0.51% to 3.27%) and decreased for those who underwent prostate cancer surgery (-2.08%; 95% CI: -3.74% to -0.43%).”

18. Introduction 3rd paragraph "Funding was carved out of hospitals' global budgets and then reallocated…as a fixed fee" is not entirely correct, since funding per episode of care is a product of a fixed fee and the CMI of the hospital (a hospital-level attribute based on historical patient case mix for similar admissions). The amount of funding for the same QBP varies significantly from hospital to hospital based on CMI.

Authors’ Response 18:

We have revised the Introduction as follows, “The fee is adjusted for each hospital based on its unique case-mix index (CMI) to account for the complexity in its overall patient population.”

19. Page 7 What is "responsiveness" to QBPs? ("prior qualitative work which identified sources of potential variation in responsiveness to QBPs")

Authors’ Response 19:

We have revised the Methods as follows, “This was further informed by our prior qualitative work which identified sources of potential variation in the extent to which, and the ways in which, hospitals responded to QBPs.”

20. Page 9 explanation of access to care measures. More explanation about why the "proportion of patients living in lowest neighborhood income quintile" is a measure of access would help average readers understand this concept.

Authors’ Response 20:

We have revised the Methods as follows:

“Socioeconomic status (SES) may contribute to inequalities in access to care, so we used neighborhood income quintile as an indicator of SES.”

21. Page 9 coding behaviour. Can you explain why the mean HBAM HIG RIW trend is a measure of coding "behaviour"? How can you know if changes in RIWs do not represent true abrupt changes in patient case mix?

Authors’ Response 21:

As we say in Methods, “…to the extent changes in HIG weight do not represent true abrupt changes in patient case mix, the weight is one potential measure by which to evaluate effects on coding.”

HIG weight is a measure of coding “behaviour” because if upcoding is occurring—whether due to legitimately better coding or to questionable or inappropriate coding—we would expect to see a change in HIG weight, since HIG weights are based on case mix groups (CMGs), and upcoding would be reflected in case mix.

RIW is a “relative cost weight value assigned to each patient care episode. It reflects the resource intensity of each patient care episode and is adjusted for a number of factors (including age, comorbidity level and selected interventions).” (https://www.cihi.ca/en/pce_methodology_notes_en.pdf)

Although we observed no statistically significant changes in mean HIG weight for any of the four cohorts, any changes we had seen would not have represented true abrupt changes in patient case mix, because there were no changes in the patient characteristics used to calculate RIW (i.e., age, Charlson Comorbidity Index). In other words, if changes in HIG weight had been driven by changes in case mix, we would likely have seen corresponding changes in trends for these important patient characteristics. We saw neither changes in HIG weight nor changes in those characteristics.


22. Page 11 results 2nd paragraph "...the effect of QBPs on outcomes 2 years after implementation..." (don't need to say "after 2 years post-implementation"). Same comment applies throughout eg 1st sentence 3rd paragraph of results.

Authors’ Response 22:

Prior to submitting the manuscript, we had considerable discussion about this language, including the precise meaning of “at” vs “after”, and the fact that the 25th data point was in April of the relevant year. We now suggest “at 2 years post-implementation”, and have revised accordingly.

23. Page 14 discussion. "The slight variation we observed in effects--some results went up, others down...". These changes may still reflect noise rather than signal. I would not necessarily assume any causal association with the exposure. Other explanations do include random chance, bias, other secular trends, etc.

Authors’ Response 23:

We agree and have added, “They may also represent noise rather than signal”.

24. Page 15 "...the goal of the reform was to effect broad change..." (not affect)

Authors’ Response 24:

We agree, and have revised accordingly.

Decision Letter 1

Hans-Peter Brunner-La Rocca

19 Feb 2020

PONE-D-19-27876R1

Effects of Quality-Based Procedure Hospital Funding Reform in Ontario, Canada: An Interrupted Time Series Study

PLOS ONE

Dear Dr Li,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

One of the reviewers has raised major concerns about the manuscript, particularly that his/her suggestions have not been appropriately addressed. You suggested different options for how to proceed, and the option of not including the suggestions is not appropriate. I would therefore like to ask you to reconsider your revisions. In its present form, the manuscript is not acceptable for publication.

We would appreciate receiving your revised manuscript by Apr 04 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note that, while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Hans-Peter Brunner-La Rocca, M.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The initial version of the paper provided little information on the nature of the QBP policies and the impacts of these policies on how patients would be cared for and managed, or the types of patients that are admitted to hospital. Thus, I asked the authors to expand their discussion of the impact of the QBP policies on the hospitals' finances – which would be of obvious concern to hospital management – and also to discuss the influence, if any, hospital management had on the clinical decision making of healthcare providers working in the hospital. Obviously, if the QBP policy has no impact on how patients are managed and treated in hospital, then one does not need to conduct any empirical analysis. The policy, by design, will have no impact.

The authors seem to dismiss these concerns. When asked to provide more information on the nature of the financial arrangements between the government and hospitals the authors respond: “Although Reviewer #1 asks good questions, we feel that revising the manuscript to drill down to this level of detail about the flow of funds back and forth between government and hospitals goes beyond what is required in this manuscript on quantitative effects.”

When asked to provide more information on the control that hospital management has on healthcare providers, the authors respond: “... the extent to which managerial discretion may influence clinical care is very interesting, but beyond the scope of this paper.”

I disagree. To my mind, these are very much within the scope of the paper.

The initial version of the paper provided what I deemed to be an unconvincing explanation that the QBP effect was not confounded by other changes to hospital financing in the province of Ontario. To be clear, I am not suggesting here that the QBP effect was confounded. I was merely asking that the authors enumerate the other major policy changes that occurred over the sample period and provide some assurance that the effects of these policies, if any, could be relegated to the pre-policy linear time trend. This material could appear in an appendix.

The authors again elected to not make the change. They instead appealed to the journal editor to allow them to choose “Option 1: No further change required”. They are of the opinion that it is sufficient to state that “…we did not consider them to be potential temporal confounders nor a source of bias or threat to the internal validity of our ITS.” The authors take the view that the onus is on the reader to track down source material, contained in their reference list, and then make an independent determination. I disagree. My view is that the onus is on the authors to provide some evidence that the policy effects are identified. The authors have gone some way in the direction that I recommended. In their reply letter, the authors have enumerated each of the other policies and provided some discussion. This could easily be expanded on to form materials for a supplementary appendix.

My final major comment on the initial version of the paper concerned the inability of the reader to access even the highly aggregated data for independent replication of their results and the adequacy of their regression model specification. The authors do graph the deseasonalized data but the data points are faint, making it hard to check their model fit and specification. The authors also refused my request to conduct tests for structural breaks at other time points.

Part of the stated rationale for keeping even the highly aggregated data secret was that Ontario’s privacy laws precluded their disclosure. I challenged the authors on this. The authors have now removed this rationale from the paper and now stress that agreements with “data partners” are the limiting factor. This explanation, too, seems questionable. What aspect of the data sharing agreements permits exposition of graphs of the deseasonalized time series data (albeit graphs rendered in a way that makes it difficult to ascertain the data values) but prohibits actually providing the unadjusted aggregated data?

In summary, then, the authors, in the revised paper, have rejected, without sufficient justification, my primary suggestions:

• expanded discussion of how the QBPs would be expected to affect patient health outcomes through their impact on hospital finances, and the attendant impact of these changes in managerial incentives on clinical care

• exposition of the potential for concurrent policies to obscure the apparent impact of QBPs on outcomes

• evaluation of their ITS model by allowing for different break points in the time series

• disclosure of even the aggregated de-seasonalized time series data to permit independent verification

Reviewer #2: The manuscript contains changes responsive to the reviewer comments. I do note that the authors have provided answers to several author queries in their response to reviewers but have not included all of this information in the manuscript. The authors should consider including information summarizing their responses in the revised manuscript, where appropriate. (Readers are likely to have questions similar to those of the manuscript reviewers.)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: David Urbach


PLoS One. 2020 Aug 19;15(8):e0236480. doi: 10.1371/journal.pone.0236480.r004

Author response to Decision Letter 1


5 Apr 2020

Dear Dr. Brunner-La Rocca,

Thank you for the opportunity to revise and resubmit our manuscript.

Please find attached our reply, which we hope you will find satisfactory.

Best wishes,

Alvin Li, Karen Palmer, and Noah Ivers

Authors’ response to Reviewers

Reviewer #1:

The initial version of the paper provided little information on the nature of the QBP policies and their impacts on how patients would be cared for and managed, or on the types of patients admitted to hospital. Thus, I asked the authors to expand their discussion of the impact of the QBP policies on the hospitals’ finances – which would be of obvious concern to hospital management – and also to discuss the influence, if any, that hospital management had on the clinical decision-making of healthcare providers working in the hospital. Obviously, if the QBP policy has no impact on how patients are managed and treated in hospital, then one does not need to conduct any empirical analysis. The policy, by design, will have no impact.

The authors seem to dismiss these concerns. When asked to provide more information on the nature of the financial arrangements between the government and hospitals the authors respond: “Although Reviewer #1 asks good questions, we feel that revising the manuscript to drill down to this level of detail about the flow of funds back and forth between government and hospitals goes beyond what is required in this manuscript on quantitative effects.”

When asked to provide more information on the control that hospital management has on healthcare providers, the authors respond: “... the extent to which managerial discretion may influence clinical care is very interesting, but beyond the scope of this paper.”

I disagree. To my mind, these are very much within the scope of the paper.

The initial version of the paper provided what I deemed to be an unconvincing explanation that the QBP effect was not confounded by other changes to hospital financing in the province of Ontario. To be clear, I am not suggesting here that the QBP effect was confounded. I was merely asking that the authors enumerate the other major policy changes that occurred over the sample period and provide some assurance that the effects of these policies, if any, could be relegated to the pre-policy linear time trend. This material could appear in an appendix.

The authors again elected not to make the change. They instead appealed to the journal editor to allow them to choose “Option 1: No further change required”. They are of the opinion that it is sufficient to state that “…we did not consider them to be potential temporal confounders nor a source of bias or threat to the internal validity of our ITS.” The authors take the view that the onus is on the reader to track down source material, contained in their reference list, and then make an independent determination. I disagree. My view is that the onus is on the authors to provide some evidence that the policy effects are identified. The authors have gone some way in the direction that I recommended: in their reply letter, they have enumerated each of the other policies and provided some discussion. This could easily be expanded to form material for a supplementary appendix.

My final major comment on the initial version of the paper concerned the reader's inability to access even the highly aggregated data, both for independent replication of the results and for checking the adequacy of the regression model specification. The authors do graph the deseasonalized data, but the data points are faint, making it hard to check their model fit and specification. The authors also refused my request to conduct tests for structural breaks at other time points.

Part of the stated rationale for keeping even the highly aggregated data secret was that Ontario’s privacy laws precluded their disclosure. I challenged the authors on this. The authors have now removed this rationale from the paper and instead stress that agreements with “data partners” are the limiting factor. This explanation, too, seems questionable. What aspect of the data sharing agreements permits the exposition of graphs of the deseasonalized time series data (albeit graphs rendered in a way that makes it difficult to ascertain the data values) but prohibits actually providing the unadjusted aggregated data?

In summary, then, the authors, in the revised paper, have rejected, without sufficient justification, my primary suggestions:

Point #1. exposition of discussion of how the QBP would be expected to affect patient health outcomes due to their impact on hospital finances and the attendant impact of these changes in managerial incentives on clinical care

Point #2. exposition of the role of concurrent policies to obscure the apparent impact of QBP on outcomes

Point #3. evaluation of their ITS model by allowing for different break points in the time series

Point #4. disclosure of even the aggregated de-seasonalized time series data to permit independent verification

Response to Reviewer #1:

We offer the following responses to each of the Reviewer’s four primary suggestions:

Point #1. With regard to “exposition of discussion of how the QBP would be expected to affect patient health outcomes due to their impact on hospital finances and the attendant impact of these changes in managerial incentives on clinical care”:

The Ontario Ministry of Health and Long-Term Care (MoHLTC) hypothesized that by providing a standard price per diagnosis/procedure, along with a handbook of best-practice clinical care pathways to guide clinical care and reduce variation in care, the types of outcomes we measured would improve. The point of QBPs was simply to set the fee paid per diagnosis/procedure (because there was presumed to be too much variation in cost) and incentivize best practice (because the variation in cost was presumed to be associated with too much variation in care). The MoHLTC’s rationale and assumed mechanism of action for QBPs was as follows:

"QBPs are specific clusters of patient services that offer opportunities for health care providers to share best practices and will allow the system to provide even better quality care, while increasing system efficiencies. By promoting the adoption of clinical evidence-informed practices, clinical practice variation should be reduced across the province while improving patient outcomes to ensure that patients receive the right care, in the right place, at the right time."At page 4, Introduction, we have added this quote to the manuscript to further explain QBPs.

The broader impetus for Health System Funding Reform, of which QBPs were a component, was:

“to promote quality and improved outcomes and create a more equitable allocation of resources. Many countries around the world, including Australia, Germany, Denmark and the United Kingdom have used funding as a lever for change. Over the past two decades, these models have been associated with successes in decreasing wait times/ improving access to care, reducing unit costs per admission, reducing variation in both costs and clinical practice and, most importantly, improving quality.”

Our intent was to test the MoHLTC’s hypothesis that QBPs would facilitate best practice, which they believed would improve these patient outcomes. To date, there has been no peer-reviewed evaluation of the overall effects of QBPs on key indicators of patient care. We simply took advantage of Ontario’s data infrastructure to evaluate whether this new hospital payment model aimed at system-level change had any impact on measures of quality of care, access to care, and hospital coding behaviour for four QBPs.

The Reviewer asks that we explain the “impact of the QBP policies on the hospitals’ finances”. We did not assess how QBPs affected hospitals’ finances or budgets. We do not know whether or how QBPs changed hospitals’ budgets, so we cannot make any inferences about whether increases or decreases in hospitals’ budgets were associated with improvement or worsening of the indicators we evaluated. Similarly, since we did not assess the impact of QBPs on hospitals’ finances, we do not know whether there were budget-induced changes in managerial incentives that might have affected clinical care. To answer the reviewer’s question about the “nature” of those arrangements would require qualitative interviews with government and hospitals’ CFOs. Lastly, we did not study the extent to which hospital management controls, or does not control, health care providers in hospitals. To assess this would require qualitative interviews with hospital clinicians and hospital management. In general, though, in Canadian hospitals physicians function with autonomy in their decision-making about patient care.

At Page 6, Limitations, we have now added a sixth point to explain this:

“Sixth, we did not assess how QBPs impacted hospitals’ finances, so we cannot make any inferences about whether increases or decreases in hospitals’ budgets affected patient care and/or outcomes for the QBPs we evaluated.”

Point #2: With regard to “exposition of the role of concurrent policies to obscure the apparent impact of QBP on outcomes”:

We have added a Supplemental Table with an overview of concurrent policies and now reference this table in the Limitations section.

S7 Table: Overlapping Policies

Concurrent initiatives and descriptions:

Excellent Care for All Act (2010): This Act was an important policy change, passed in 2010.

“The Act requires health care organizations, currently defined as hospitals, to:

• Develop and post annual quality improvement plans.

• Implement patient and employee satisfaction surveys and a patient relations process.

• Link executive compensation to achievement of quality plan performance improvement targets.

• Develop declarations of values after public consultation.

• Create quality committees to report to each hospital board on quality related issues.”

Source: https://www.ontariocanada.com/registry/view.do?postingId=4544&language=en

Community Health Links (2014): This program provides individualized, coordinated care plans for patients living with multiple chronic conditions and complex needs.

Source: http://www.health.gov.on.ca/en/pro/programs/transformation/community.aspx

Health Based Allocation Model (2012): The Health Based Allocation Model (HBAM) was a new funding methodology used to partially fund hospitals, based on expected weighted cases and expected unit costs.

Source: https://www.oha.com/Documents/HBAM-What%20You%20Need%20To%20Know.pdf

Austerity measures (since 2009): “During a period of significant austerity beginning in 2009, Ontario’s hospitals contributed to getting the province back on track financially by accepting years of zero percent funding increases at a time when inflation, patient volumes, labour costs, energy, and regulatory requirements grew significantly.”

Source: https://www.oha.com/Bulletins/2558_OHA_A%20Sector%20on%20the%20Brink_rev.pdf

Point #3: With regard to “evaluation of their ITS model by allowing for different break points in the time series”:

At page 6, Methods, we have revised the text to explain our rationale for using a quasi-experimental ITS approach, as follows:

“Rather than a data-driven approach, we chose to pre-specify time periods simultaneous with when each QBP was introduced. A priori, we decided to incorporate a 3-month transition period to allow time for any clinical changes in response to the funding model change to be implemented. Had we used a data-driven approach, it is likely that, given the number of outcomes and QBPs, we would have detected changes during the time series and would not have been able to infer that these changes were a result of effects from QBPs. Instead, we used an ITS design because, in the absence of randomized experiments, the ITS design is an effective method to evaluate policy changes at the whole system- and population-level [18–22].”

Point #4: With regard to “disclosure of even the aggregated de-seasonalized time series data to permit independent verification”:

Below we now include a letter from ICES’ Privacy & Legal Office that explains the basis for ICES’ restriction on the release of datasets for public use. PLOS ONE has challenged ICES’ justification in the past, and we understand that much time has been consumed by ICES and other authors trying to explain the nuances to different personnel. Thus, we have also sent this letter to the Editor-in-Chief, Dr. Joerg Heber, for consideration (with a copy to Kelley Ross, ICES’ Privacy & Legal Office). We understand the frustration of not disclosing even the aggregated de-seasonalized data, given the ongoing open science initiative, which we support wholeheartedly. On this matter, however, we, as scientists, have no choice.

Reviewer #2:

The manuscript contains changes responsive to the reviewer comments. I do note that the authors have provided answers to several queries in their response to reviewers but have not included all of this information in the manuscript. The authors should consider summarizing their responses in the revised manuscript, where appropriate. (Readers are likely to have questions similar to those of the manuscript reviewers.)

Response to Reviewer #2: Thank you for your comment. We have now added selected information from our previous responses into the revised manuscript, as indicated in tracked changes and below:

At page 9, Data Sources and Quality, we have further explained why HIG weight is a measure of upcoding, as follows:

“HIG weight is a measure of coding behaviour because it incorporates both case mix and the resource intensity of each patient care episode adjusted for patient characteristics. If upcoding is occurring we would expect to see changes in HIG weight. Thus, to the extent changes in HIG weight do not represent true abrupt changes in patient case mix, the HIG weight is one potential measure by which to evaluate effects on coding.”

At page 6, Limitations, we have added this:

“Seventh, there may be benefits or harms of QBPs that we did not measure, or other policy objectives that may have been met, such as those related to total cost per episode-of-care or cost to the system overall. We were careful to limit our conclusions to only the QBPs and outcomes we evaluated to avoid being overbroad.”

Decision Letter 2

Hans-Peter Brunner-La Rocca

30 Apr 2020

PONE-D-19-27876R2

Effects of Quality-Based Procedure Hospital Funding Reform in Ontario, Canada: An Interrupted Time Series Study

PLOS ONE

Dear Dr Li,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

One of the reviewers continues to have significant concerns about your manuscript. I would like you to address these issues adequately. Even if you disagree with the reviewer, you must clearly state why the reviewer's points are not correct and why you did not change the revised manuscript accordingly.

We would appreciate receiving your revised manuscript by Jun 14 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Hans-Peter Brunner-La Rocca, M.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this revised manuscript the authors write, in the methods section:

“Had we used a data-driven approach, it is likely that, given the number of outcomes and QBPs, we would have detected changes during the time series and would not have been able to infer that these changes were a result of effects from QBPs. Instead, we used an ITS design because, in the absence of randomized experiments, ITS design is an effective method to evaluate policy changes at the whole system- and population-level [18–22].”

I don't find this justification for not conducting the structural break tests convincing. The ITS design per se is not "an effective method to evaluate policy changes at the whole system- and population-level.” It depends critically on the extent to which the pre-policy trend, extrapolated into the post-policy period, reflects the counterfactual. It also depends on the pre-policy trends being linear.

In the limitations section, the authors address the potential for confounding from the policy changes that occurred during the sample period. They write:

"However, these initiatives were mostly implemented well before the QBP funding reform, meaning that in this

study their effects would have been captured in the secular trend. Those that occurred afterwards

did not affect the entire system in a reliable fashion for the procedures, diagnoses, or outcomes

under investigation, or did not have a specific time when changes in the outcomes measured

might have been expected. Therefore, our findings regarding the (lack of) achievement of desired

system-level changes with the QBP funding reform in Ontario are robust."

As I have stated in my reviews of the earlier versions of the manuscript, my view is that you can't draw this conclusion. It simply is not supported by the results that you present. You are unable or unwilling to render your graphs in a way that one can clearly see the data points, or to display the values of the aggregated data in the graphs. You are unwilling to perform structural break tests even though you admit, earlier in the methods section, the possibility that you might encounter breaks different from the ones that you pre-specify.

We do seem to be going in circles here. I would like to see these results in print but I am not satisfied with your identification strategy.

To break the impasse, I propose that you render the graphs so that the data points are more legible (this may involve shrinking the y scale and increasing marker size); the data points in the graphs in the paper I am reviewing are very faint. If you are able to do this, then you can perform a visual inspection of the data points around the policy change. If the data points appear to be clustered around a linear trend over the sample period, then this is good evidence that there were no effects from QBP. If there were discontinuities in the data series, then you can comment on the role of QBP and the role of the other policies that were introduced around the same period of time, as well as the other factors that could have affected outcomes (such as provincial fiscal conditions, tax revenues, and hospital budgets). If there is uncertainty over what caused a break in the time series after the introduction of the QBP policies, then you can estimate the size of the break and bound the size of the QBP effect. You have already stated that the QBP policy effect was not confounded, but if the QBP policy effect was realized only after a lag, and there were other factors changing around the same time, would these factors exert an increase or decrease on the outcome variable? What effect would these have on the apparent QBP policy effect?

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: David Urbach

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 19;15(8):e0236480. doi: 10.1371/journal.pone.0236480.r006

Author response to Decision Letter 2


21 Jun 2020

Dear Dr. Brunner-La Rocca,

Thank you for the opportunity to revise and resubmit our manuscript.

Please find attached our reply. We’ve now addressed the reviewer’s concerns and include our analytic code and data creation plan as supplemental information. We’ve also made a number of edits throughout the manuscript. We sincerely hope you will now find our manuscript acceptable for publication.

Best wishes,

Alvin Li, Karen Palmer and Noah Ivers on behalf of co-authors

Reviewer #1:

Reviewer Comment #1:

In this revised manuscript the authors write, in the methods section:

“Had we used a data-driven approach, it is likely that, given the number of outcomes and QBPs, we would have detected changes during the time series and would not have been able to infer that these changes were a result of effects from QBPs. Instead, we used an ITS design because, in the absence of randomized experiments, ITS design is an effective method to evaluate policy changes at the whole system- and population-level [18–22].”

I don't find this justification for not conducting the structural break tests convincing. The ITS design per se is not "an effective method to evaluate policy changes at the whole system- and population-level.” It depends critically on the extent to which the pre-policy trend, extrapolated into the post-policy period, reflects the counterfactual. It also depends on the pre-policy trends being linear.

RESPONSE #1: We have deleted that text from the revised version.

Reviewer Comment #2: In the limitations section, the authors address the potential for confounding from the policy changes that occurred during the sample period. The authors write:

"However, these initiatives were mostly implemented well before the QBP funding reform, meaning that in this study their effects would have been captured in the secular trend. Those that occurred afterwards did not affect the entire system in a reliable fashion for the procedures, diagnoses, or outcomes under investigation, or did not have a specific time when changes in the outcomes measured might have been expected. Therefore, our findings regarding the (lack of) achievement of desired system-level changes with the QBP funding reform in Ontario are robust."

As I have stated in my reviews of the earlier versions of the manuscript my view is that you can't draw this conclusion. It simply is not supported by the results that you present.

RESPONSE #2: We have deleted this text from the revised version.

Reviewer Comment #3: You are unable or unwilling to render your graphs in a way that one can clearly see the data points, or to display the values of the aggregated data in the graphs. You are unwilling to perform structural break tests even though you admit, earlier in the methods section, the possibility that you might encounter breaks different from the ones that you pre-specify.

We do seem to be going in circles here. I would like to see these results in print but I am not satisfied with your identification strategy.

To break the impasse, I propose that you render the graphs so that the data points are more legible (this may involve shrinking the y scale and increasing marker size); the data points in the graphs in the paper I am reviewing are very faint. If you are able to do this, then you can perform a visual inspection of the data points around the policy change.

If the data points appear to be clustered around a linear trend over the sample period, then this is good evidence that there were no effects from QBP. If there were discontinuities in the data series, then you can comment on the role of QBP and the role of the other policies that were introduced around the same period of time, as well as the other factors that could have affected outcomes (such as provincial fiscal conditions, tax revenues, and hospital budgets). If there is uncertainty over what caused a break in the time series after the introduction of the QBP policies, then you can estimate the size of the break and bound the size of the QBP effect. You have already stated that the QBP policy effect was not confounded, but if the QBP policy effect was realized only after a lag, and there were other factors changing around the same time, would these factors exert an increase or decrease on the outcome variable? What effect would these have on the apparent QBP policy effect?

RESPONSE #3: As the reviewer suggested, we have reproduced the graphs so that the data points are more visible. We have also carried out the requested visual inspection of the data points around the various policy changes and added additional ITS plots with vertical lines indicating the timing of these other policy changes to allow the reader to inspect the data. We have also made a number of cosmetic improvements to the figures.

We see no obvious discontinuities associated with these other initiatives, with the possible exception of Health Links. We now state in the Limitations section:

“Our study has several important limitations that are common to observational studies of policy changes. Teasing apart the effect of QBPs in the presence of multiple system-level changes is challenging. First, other initiatives to improve patient care and/or control costs may have overlapped with the timing of QBP implementation. Specific initiatives that we are aware of included passage in 2010 of the Excellent Care for All Act (ECFA) [40], the introduction of Health Based Allocation Model hospital funding reforms in April 2012 [24], and the introduction of Community Health Links in December 2012 [42] (see S5–S7 Figs). Visual inspection of data points around the timing of introduction of these initiatives, however, suggests that they are unlikely to have had a major impact on the outcomes studied in our analyses. A possible exception is the introduction of Community Health Links in December 2012: due to its timing close to that of the QBPs for congestive heart failure (CHF) in April 2013, it is difficult to independently assess the effect of the QBPs for this condition. Undetected confounding is always possible in any uncontrolled study. Policies aimed at improving health care are constantly being tinkered with, which may influence any particular intervention, such as QBPs, in ways not easily detected.”

We have included these new figures as Supplemental Figures S5–S7 and revised the Limitations text as above.

The descriptions for the three figures are as follows; they are attached as a separate image file (per PLOS ONE policy).

S5 Fig. Percent of patients returned to hospital or died with Competing Initiatives. Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing initiatives are outlined in the legend. Data are seasonally adjusted.

S6 Fig. Mean acute length of stay for the episode of care (days) with Competing Initiatives. Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing initiatives are outlined in the legend. Data are seasonally adjusted.

S7 Fig. Total volume of admissions with Competing Initiatives. Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing Initiatives are outlined in the legend. Data are seasonally adjusted.

Data Sharing

We have now included the code used in the analyses as a supplementary file.

Decision Letter 3

Hans-Peter Brunner-La Rocca

9 Jul 2020

Effects of Quality-Based Procedure Hospital Funding Reform in Ontario, Canada: An Interrupted Time Series Study

PONE-D-19-27876R3

Dear Dr. Li,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Hans-Peter Brunner-La Rocca, M.D.

Academic Editor

PLOS ONE


Acceptance letter

Hans-Peter Brunner-La Rocca

7 Aug 2020

PONE-D-19-27876R3

Effects of Quality-Based Procedure Hospital Funding Reform in Ontario, Canada: An Interrupted Time Series Study

Dear Dr. Li:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Hans-Peter Brunner-La Rocca

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Mean length of stay for entire episode (days).

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

    (TIFF)

    S2 Fig. Percent change in patients over 65 (%).

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

    (TIFF)

    S3 Fig. Percent change in patients living in the lowest neighborhood income quintile.

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

    (TIFF)

    S4 Fig. Mean HIG weight.

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Data are seasonally adjusted.

    (TIFF)

    S5 Fig. Percent of patients returned to hospital or died with competing initiatives.

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing initiatives are outlined in the legend. Data are seasonally adjusted.

    (TIFF)

    S6 Fig. Mean acute length of stay for the episode of care (days) with competing initiatives.

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing initiatives are outlined in the legend. Data are seasonally adjusted.

    (TIFF)

    S7 Fig. Total volume of diagnoses/procedures with competing initiatives.

    Red solid line represents the fitted model. The red dashed line represents the counterfactual (i.e. if no policy change occurred). The vertical dashed line represents the date of policy change. The grey shaded area represents the three months of “transition” period. Competing Initiatives are outlined in the legend. Data are seasonally adjusted.

    (TIFF)

    S1 Table. Cohort characteristics for congestive heart failure patients included in the analysis.

    (DOCX)

    S2 Table. Cohort characteristics for hip fracture patients included in the analysis.

    (DOCX)

    S3 Table. Cohort characteristics for pneumonia patients included in the analysis.

    (DOCX)

    S4 Table. Cohort characteristics for prostate cancer patients included in the analysis.

    (DOCX)

    S5 Table. Estimated coefficients from the interrupted time series analysis on the impact of quality-based procedure policy (% patients returned to hospital or died, mean acute length of stay, total volume).

    (DOCX)

    S6 Table. Estimated coefficients from the interrupted time series analysis on the impact of quality-based procedure policy (% change in patients over 65, % change in patients living in lowest neighborhood income quintile, mean HIG weight).

    (DOCX)

    S7 Table. Overlapping initiatives.

    (DOCX)

    S1 File

    (ZIP)

    Data Availability Statement

    The dataset from this study is held securely in coded form at ICES. While legal data sharing agreements between ICES and data providers (e.g., health organizations and government) prohibit ICES from making the dataset publicly available, access may be granted to those who meet pre-specified criteria for confidential access, available at www.ices.on.ca/DAS (email: das@ices.on.ca). The full dataset creation plan and underlying analytic code are available as Supporting Information files, understanding that the computer programs may rely upon coding templates or macros that are unique to ICES and are therefore either inaccessible or may require modification.


    Articles from PLoS ONE are provided here courtesy of PLOS

    RESOURCES