PLoS One. 2022 May 25;17(5):e0268948. doi: 10.1371/journal.pone.0268948

Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: A within-trial cost-utility analysis

Yong Yi Lee 1,2,3,*, Cathrine Mihalopoulos 1,4,#, Mary Lou Chatterton 1,#, Susan L Fletcher 5,#, Patty Chondros 5,#, Konstancja Densley 5,#, Elizabeth Murray 5,6,#, Christopher Dowrick 5,7,#, Amy Coe 5,#, Kelsey L Hegarty 5,8,#, Sandra K Davidson 5,#, Caroline Wachtler 5,9,#, Victoria J Palmer 5,#, Jane M Gunn 5,#
Editor: Isabelle Durand-Zaleski
PMCID: PMC9132336  PMID: 35613149

Abstract

Background

Target-D, a new person-centred e-health platform matching depression care to symptom severity prognosis (minimal/mild, moderate or severe), has demonstrated greater improvement in depressive symptoms than usual care plus attention control. The aim of this study was to evaluate the cost-effectiveness of Target-D compared to usual care from a health sector and partial societal perspective at 3-month and 12-month follow-up.

Methods and findings

A cost-utility analysis was conducted alongside the Target-D randomised controlled trial, which involved 1,868 participants attending 14 general practices in metropolitan Melbourne, Australia. Data on costs were collected using a resource use questionnaire administered concurrently with all other outcome measures at baseline, 3-month and 12-month follow-up. Intervention costs were assessed using financial records compiled during the trial. All costs were expressed in Australian dollars (A$) for the 2018–19 financial year. QALY outcomes were derived using the Assessment of Quality of Life-8D (AQoL-8D) questionnaire. On a per person basis, the Target-D intervention cost between $14 (minimal/mild prognostic group) and $676 (severe group). Health sector and societal costs were not significantly different between trial arms at both 3 and 12 months. Relative to a A$50,000 per QALY willingness-to-pay threshold, the probability of Target-D being cost-effective under a health sector perspective was 81% at 3 months and 96% at 12 months. From a societal perspective, the probability of cost-effectiveness was 30% at 3 months and 80% at 12 months.

Conclusions

Target-D is likely to represent good value for money for health care decision makers. Further evaluation of QALY outcomes should accompany any routine roll-out to assess comparability of results to those observed in the trial. This trial is registered with the Australian New Zealand Clinical Trials Registry (ACTRN12616000537459).

Introduction

Depression is a leading cause of disease burden globally and is associated with widespread economic impacts [1, 2]. Several effective treatments for depression exist, including psychotherapy and pharmacotherapy [3–5]. Stepped care approaches, which begin with a low-intensity intervention before transitioning to increasing levels of treatment, are recommended by several depression treatment guidelines as optimal clinical practice [3–5]. While evidence exists on the effectiveness and cost-effectiveness of stepped care approaches for anxiety disorders, there is comparatively little evidence for these approaches when treating depression [6]. Target-D is a new approach comprising a clinical prediction tool (CPT) embedded within a person-centred e-health platform that provides symptom feedback, priority setting and management options matched to the predicted severity of an individual's depressive symptoms in three months' time (i.e., minimal/mild, moderate or severe) [7]. A randomised controlled trial (RCT) recently showed that, relative to usual care, Target-D improves depressive symptoms at 3 months, as measured by the 9-item Patient Health Questionnaire (PHQ-9), with a standardised mean reduction of -0.16 (95% confidence interval: -0.26 to -0.05) [8]. This study describes the economic evaluation conducted within the Target-D RCT. It aims to determine whether systematically matching depression care to symptom severity prognosis was cost-effective when compared to usual care.

Methods

Intervention trial

A within-trial economic evaluation was conducted across 14 general practices in metropolitan Melbourne, Australia. Study participants were aged 18–65 years and self-reported: current depressive symptoms (scoring ≥2 on the PHQ-2); stable antidepressant medication use in the past month; no antipsychotic medication use or regular psychotherapy; internet access; and English proficiency. The trial compared the Target-D intervention (feedback, priority setting and prognosis-matched management options) to usual care plus attention control (a telephone call from the research team). Management options for participants in the intervention arm included: (1) unguided internet-based cognitive behavioural therapy (iCBT) (minimal/mild prognostic group) [9]; (2) clinician-guided iCBT (moderate group) [10]; and (3) nurse-led collaborative care (severe group) [11]. S1 Text (see S1 Appendix) outlines treatments received in each trial arm. The study protocol and analysis of clinical outcomes are described elsewhere [7, 8]. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. All procedures involving human subjects/patients were approved by The University of Melbourne Health Sciences Human Ethics Sub-Committee (approval number 1543648).

Cost analysis

The economic evaluation primarily adopted an Australian health sector perspective. An additional analysis was also conducted from a partial societal perspective, as recommended by recent economic evaluation guidelines [12]. Health sector costs included the cost of delivering Target-D alongside the cost of other health care services incurred by study participants over the trial period. Societal costs included all health sector costs plus the cost of self-reported productivity losses. S2 Table (see S1 Appendix) presents an impact inventory with details of each cost component by study perspective. All costs are presented in Australian dollars (A$) for the 2018–19 financial year. Discounting was not required because the time horizon did not exceed 12 months.

A micro-costing approach was used to estimate the cost of the Target-D intervention based on trial data (see S3 Table in S1 Appendix). Intervention costs were divided into: (1) the cost of screening using the Target-D CPT; and (2) the cost of treating participants within each assigned prognostic group. Screening costs, sourced from the study team, comprised the cost of hardware (iPads and Wi-Fi dongles) and the cost of CPT maintenance. Personnel time to approach individuals in the GP waiting room (one minute per encounter) was costed using the relevant wage rate (plus overheads) for a research assistant. The average cost per screened person was computed and applied to everyone in the intervention arm.

Intervention costs for participants in the minimal/mild prognostic group comprised the annual registration cost of an Australian unguided iCBT program [9] and the cost of research assistants providing an initial check-in phone call. The average cost between two Australian clinician-guided iCBT programs was applied to participants in the moderate prognostic group [10, 13], alongside the cost of research assistants providing periodic check-in phone calls to monitor participant progress. In this instance, the two programs represent a low-to-high range of possible unit cost values for clinician-guided iCBT in Australia. A subsequent sensitivity analysis tested the impact of using the highest unit cost rather than the average. Intervention costs for participants in the severe prognostic group encompassed the time and resources incurred by nurses to administer collaborative care (laptops, phones, etc.). All costed activities described above that involve research assistants would likely require similarly qualified staff to deliver the intervention in routine practice.

Participants were asked to complete a resource use questionnaire (RUQ) at baseline, 3-month and 12-month follow-up. The RUQ measured: health care professional visits (GPs, psychologists, psychiatrists, etc.); diagnostic tests; medication usage; hospitalisations; emergency department visits; community mental health contacts; and time taken off from paid and unpaid work (see S4 Text in S1 Appendix).

Health outcomes

The Assessment of Quality of Life-8D (AQoL-8D) was used to measure participants’ health-related quality of life at baseline, 3-month and 12-month follow-up [14]. Australian general population preference weights were applied to calculate utility weights at each time point [15]. Utility weights were then used to estimate quality-adjusted life years (QALYs) based on the area-under-the-curve (AUC) method [16].
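
To make the AUC method concrete, the sketch below shows how 12-month QALYs could be derived in Stata from the utility weights at the three measurement points. This is a minimal illustration only: the variable names (u0, u3, u12) are hypothetical, and the study's actual code is provided in S2 Appendix.

    * Minimal sketch of the area-under-the-curve QALY calculation.
    * u0, u3 and u12 are hypothetical AQoL-8D utility weights at
    * baseline, 3 months and 12 months, respectively.
    gen qaly_3m  = (u0 + u3)/2 * (3/12)              // QALYs accrued over months 0-3
    gen qaly_12m = qaly_3m + (u3 + u12)/2 * (9/12)   // QALYs accrued over months 0-12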

Statistical analysis

Data management was implemented using Excel 2016 and R version 3.6.0. Statistical analyses were conducted using Stata 15 (StataCorp, College Station, Texas, USA). Researchers undertaking the cost analysis were not blinded to trial arm allocation. Base case analyses were conducted on an intention-to-treat basis from the health sector and societal perspectives. All enrolled participants who completed a baseline assessment were included. However, 66% of participants were missing RUQ data at 3 months (46%) or 12 months (60%), while 68% were missing AQoL-8D data at 3 months (49%) or 12 months (66%). Multiple imputation methods were implemented in Stata to account for missing data, which were deemed missing at random following several exploratory analyses presented in S5 Text in S1 Appendix. Missing cost and outcome data were imputed 100 times using multiple imputation by chained equations (MICE), with predictive mean matching and adjustment for baseline covariates associated with data missingness (i.e., trial arm, clinic, age, gender, highest level of education and having visited a psychologist/counsellor in the past 12 months).
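
As an illustration of this set-up, the fragment below sketches how such an imputation model could be specified using Stata's mi suite. The variable names and the number of nearest neighbours for predictive mean matching are hypothetical assumptions; the study's actual specification is in the do-file in S2 Appendix.

    * Sketch of MICE with predictive mean matching (hypothetical
    * variable names; knn(5) is an assumed setting).
    mi set wide
    mi register imputed cost_3m cost_12m qaly_3m qaly_12m
    mi impute chained (pmm, knn(5)) cost_3m cost_12m qaly_3m qaly_12m ///
        = i.arm i.clinic age i.gender i.educ i.psych_12m, ///
        add(100) rseed(12345)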

The difference in mean QALYs between trial arms was estimated using a generalised linear model (GLM) with a Gaussian family and identity link. Ratios of mean costs under the health sector and societal perspectives were estimated using GLMs with a gamma family and log link. All GLMs were estimated with and without adjustment for several baseline covariates specified in the study protocol (i.e., baseline PHQ-9 score, general practice and prognostic group) [7]. Baseline AQoL-8D utility weights were also included as an additional baseline covariate in GLMs involving QALY outcomes. Subgroup analyses were conducted across the three prognostic groups.
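
The two model families could be specified as follows. This is a simplified sketch with hypothetical variable names; it omits the random intercepts for individuals noted in the table footnotes and would run on the multiply imputed data from the previous step.

    * Difference in mean QALYs between arms (Gaussian family, identity
    * link), adjusted for baseline utility, baseline PHQ-9 and practice:
    mi estimate: glm qaly_12m i.arm u0 phq9_0 i.clinic, ///
        family(gaussian) link(identity)

    * Ratio of mean costs between arms (gamma family, log link); eform
    * reports the exponentiated coefficient on i.arm as a cost ratio:
    mi estimate, eform: glm cost_12m i.arm phq9_0 i.clinic i.prog_group, ///
        family(gamma) link(log)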

Incremental cost-effectiveness ratios (ICERs) were calculated as the difference in mean costs between the intervention and control arms divided by the difference in mean QALYs. ICERs were calculated by study perspective (health sector and societal), follow-up period (3 and 12 months) and, for the subgroup analysis, by prognostic group (total, minimal/mild, moderate and severe). A resampling method comprising single imputation nested within bootstrapping [17] was used to quantify input parameter uncertainty around the resulting differences in mean costs/QALYs and the mean ICERs. This method makes a single call to the MICE procedure within each bootstrap resample, producing a complete dataset on which the GLMs of costs/QALYs are estimated. Following the generation of 1,000 bootstrap resamples, the bootstrap percentile method was used to estimate 95% confidence intervals (95% CIs) around the differences in mean costs/QALYs and the mean ICERs [18]. The intervention was considered cost-effective if the resulting ICER was less than the Australian willingness-to-pay threshold of A$50,000 per QALY [19–21].
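
Once the 1,000 bootstrap estimates of incremental costs and QALYs are saved (one row per resample), the percentile intervals and the mean ICER reduce to a few lines, as sketched below. The file name (bootstrap_draws) and variables (d_cost, d_qaly) are hypothetical.

    * Percentile 95% CIs and mean ICER from saved bootstrap draws
    * (hypothetical file with one d_cost/d_qaly pair per resample).
    use bootstrap_draws, clear
    centile d_cost, centile(2.5 97.5)    // 95% CI for incremental costs
    centile d_qaly, centile(2.5 97.5)    // 95% CI for incremental QALYs
    summarize d_cost, meanonly
    local dc = r(mean)
    summarize d_qaly, meanonly
    local dq = r(mean)
    display "ICER = A$" `dc'/`dq' " per QALY (cost-effective if below A$50,000)"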

A summary list of all variables included in the statistical analysis is provided in S2 Appendix, alongside the Stata do-file used to implement the statistical analysis.

Sensitivity analysis

Sensitivity analysis was conducted to examine the impact of including additional baseline covariates associated with non-response at 3 and 12 months that were identified in the primary outcomes analysis (i.e., age, gender, highest level of education, current employment status, health care card status, long-term illness, living alone, number of times visited a psychiatrist or counsellor in the past 12 months and current use of antidepressants) [8]. Two additional sensitivity analyses investigated the impact of changes to the intervention costing. The first increased the unit cost for the clinician-guided iCBT program delivered to participants in the moderate prognostic group to the higher of the two alternative programs' unit costs (average cost changed from $132 to $222 per person). The second incorporated sunk costs relating to Target-D CPT research and development (average cost of screening changed from $0.96 to $2.30 per person). Additionally, a complete case analysis was conducted for QALYs and costs to examine the impact of not using multiple imputation for missing data.

Results

Intervention trial

Most of the 1,868 trial participants were allocated to the minimal/mild prognostic group (72.6%, n = 1,357; intervention = 679, control = 678), with 288 (15.4%; intervention = 143, control = 145) allocated to the moderate group and 223 (11.9%; intervention = 111, control = 112) to the severe group. Participants across both trial arms were similar overall and within prognostic groups [8].

Cost analysis

The cost of screening was estimated to be $0.96 per person after excluding sunk costs related to Target-D CPT research and development (see S3 Table in S1 Appendix). The cost of Target-D treatment in the intervention arm was: $14 per person for the minimal/mild prognostic group; $132 per person for the moderate group; and $676 per person in the severe group.

Table 1 shows the estimated ratio of mean health sector costs between the intervention and control arms. At 3-month follow-up, health sector costs were not significantly different between trial arms across all participants or within the minimal/mild and moderate prognostic groups. However, mean health sector costs were 1.9 times higher (95% CI: 1.24 to 2.93) in the intervention arm relative to the control arm for the severe prognostic group, likely reflecting the high-cost nature of the collaborative care delivered to participants in this group (see S3 Table in S1 Appendix for detailed costs). At 12 months, no statistically significant differences were observed overall or within any of the prognostic groups. Health sector costs were 9% lower (not significant) in the intervention arm relative to the control arm (ratio of 0.91, 95% CI: 0.76 to 1.09), with lower costs also observed in the minimal/mild and moderate prognostic groups (ratios of 0.89 and 0.87, respectively). In the severe prognostic group, health sector costs were 34% higher (not significant) in the intervention arm at 12-month follow-up (95% CI: -6% to +90%). The estimated ratios of mean societal costs between the intervention and control arms are presented in Table 2. Mean societal costs were higher (but not significantly so) in the intervention arm at 3-month follow-up across all participants and within each of the three prognostic groups.

Table 1. Comparison of health sector costs by trial arm, across all participants and stratified by prognostic group (multiple imputed data).

All participants (n = 1,868), p-value; Minimal/mild (n = 1,357), p-value; Moderate (n = 288), p-value; Severe (n = 223), p-value
3 months
Mean costs (SE) 1
Intervention arm 625 (54) 487 (52) 812 (138) 1,842 (265)
Control arm 630 (52) 509 (50) 947 (148) 966 (148)
Ratio of mean costs between arms (95% CI)1 0.99 (0.78 to 1.25) 0.94 0.96 (0.72 to 1.27) 0.76 0.86 (0.55 to 1.33) 0.49 1.91 (1.24 to 2.93) 0.003
Sensitivity analysis2 1.04 (0.83 to 1.31) 0.72 1.02 (0.77 to 1.35) 0.88 0.96 (0.65 to 1.42) 0.85 1.93 (1.34 to 2.79) 0.001
Sensitivity analysis3 1.01 (0.80 to 1.27) 0.95 0.96 (0.72 to 1.27) 0.76 0.97 (0.63 to 1.48) 0.87 1.91 (1.24 to 2.92) 0.003
Sensitivity analysis4 0.99 (0.79 to 1.25) 0.96 0.96 (0.72 to 1.27) 0.77 0.86 (0.55 to 1.33) 0.50 1.91 (1.24 to 2.93) 0.003
Sensitivity analysis5 0.96 (0.75 to 1.24) 0.77 0.96 (0.71 to 1.29) 0.77 0.74 (0.45 to 1.24) 0.26 1.72 (1.01 to 2.92) 0.046
12 months
Mean costs (SE) 1
Intervention arm 1,643 (112) 1,389 (122) 1,864 (237) 3,517 (422)
Control arm 1,798 (115) 1,565 (127) 2,134 (246) 2,632 (346)
Ratio of mean costs between arms (95% CI)1 0.91 (0.76 to 1.09) 0.32 0.89 (0.70 to 1.12) 0.31 0.87 (0.63 to 1.21) 0.42 1.34 (0.94 to 1.90) 0.11
Sensitivity analysis2 0.94 (0.78 to 1.13) 0.50 0.91 (0.72 to 1.15) 0.45 0.89 (0.66 to 1.21) 0.47 1.33 (0.95 to 1.86) 0.09
Sensitivity analysis3 0.92 (0.77 to 1.10) 0.36 0.89 (0.70 to 1.12) 0.31 0.92 (0.67 to 1.27) 0.61 1.34 (0.94 to 1.90) 0.11
Sensitivity analysis4 0.91 (0.76 to 1.09) 0.33 0.89 (0.70 to 1.12) 0.31 0.87 (0.63 to 1.21) 0.42 1.34 (0.94 to 1.90) 0.11
Sensitivity analysis5 0.87 (0.69 to 1.10) 0.25 0.89 (0.67 to 1.18) 0.42 0.66 (0.42 to 1.06) 0.09 1.16 (0.62 to 2.15) 0.65

Abbreviations: SE = standard error; CI = confidence interval

1 Mean costs for the intervention and control arms and the ratio of means (intervention relative to control) estimated using a generalised linear model (family = gamma, link = log) with random intercepts for individuals and adjusted for baseline PHQ-9 score, general practice and prognostic group (the final covariate only applied to the analysis involving all participants);

2 Same as 1, adjusted for factors associated with non-response to the primary outcome measure, the PHQ-9 score, at 3 and 12 months (age, gender, highest level of education, current employment status, hold a health care card, long-term illness, live alone, number of times visited a psychiatrist or counsellor in past 12 months and current use of antidepressants);

3 Same as 1, but using a higher unit cost for the clinician-guided iCBT course delivered to the moderate prognostic group (unit cost changed from $132 per person to $222 per person);

4 Same as 1, but with the inclusion of sunk costs for the development of the Target-D CPT (cost of screening changed from $0.96 per person to $2.30 per person);

5 Same as 1, for complete cases only (i.e., no multiple imputation of missing data)

Table 2. Comparison of societal costs by trial arm, across all participants and stratified by prognostic group (multiple imputed data).

All participants (n = 1,868), p-value; Minimal/mild (n = 1,357), p-value; Moderate (n = 288), p-value; Severe (n = 223), p-value
3 months
Mean costs (SE) 1
Intervention arm 5,326 (244) 5,093 (277) 5,462 (704) 6,133 (794)
Control arm 4,966 (225) 4,856 (258) 5,296 (614) 4,537 (663)
Ratio of mean costs between arms (95% CI)1 1.07 (0.95 to 1.21) 0.27 1.05 (0.90 to 1.22) 0.53 1.03 (0.73 to 1.46) 0.86 1.35 (0.90 to 2.04) 0.15
Sensitivity analysis2 1.07 (0.94 to 1.23) 0.30 1.05 (0.89 to 1.24) 0.53 1.01 (0.69 to 1.48) 0.94 1.31 (0.87 to 2.02) 0.23
Sensitivity analysis3 1.07 (0.95 to 1.22) 0.25 1.05 (0.90 to 1.22) 0.53 1.05 (0.75 to 1.48) 0.78 1.35 (0.90 to 2.04) 0.15
Sensitivity analysis4 1.07 (0.95 to 1.21) 0.27 1.05 (0.90 to 1.22) 0.53 1.03 (0.73 to 1.46) 0.86 1.35 (0.90 to 2.04) 0.15
Sensitivity analysis5 1.08 (0.94 to 1.24) 0.31 1.05 (0.90 to 1.23) 0.54 1.11 (0.74 to 1.67) 0.60 1.28 (0.84 to 1.96) 0.25
12 months
Mean costs (SE) 1
Intervention arm 17,159 (662) 17,053 (796) 16,104 (1,542) 17,978 (2,161)
Control arm 17,538 (632) 17,316 (769) 18,246 (1,511) 16,731 (2,013)
Ratio of mean costs between arms (95% CI)1 0.98 (0.88 to 1.08) 0.67 0.98 (0.87 to 1.11) 0.81 0.88 (0.68 to 1.14) 0.34 1.07 (0.76 to 1.52) 0.67
Sensitivity analysis2 0.98 (0.88 to 1.09) 0.71 0.97 (0.85 to 1.10) 0.62 0.85 (0.64 to 1.12) 0.25 1.03 (0.70 to 1.52) 0.86
Sensitivity analysis3 0.98 (0.89 to 1.08) 0.68 0.98 (0.87 to 1.11) 0.81 0.89 (0.69 to 1.15) 0.36 1.07 (0.76 to 1.52) 0.69
Sensitivity analysis4 0.98 (0.88 to 1.08) 0.67 0.98 (0.87 to 1.11) 0.81 0.88 (0.68 to 1.14) 0.34 1.07 (0.76 to 1.52) 0.69
Sensitivity analysis5 0.94 (0.81 to 1.08) 0.36 1.00 (0.85 to 1.18) 0.96 0.79 (0.51 to 1.20) 0.27 0.75 (0.46 to 1.20) 0.23

Abbreviations: SE = standard error; CI = confidence interval

1 Mean costs for the intervention and control arms and the ratio of means (intervention relative to control) estimated using a generalised linear model (family = gamma, link = log) with random intercepts for individuals and adjusted for baseline PHQ-9 score, general practice and prognostic group (the final covariate only applied to the analysis involving all participants);

2 Same as 1, adjusted for factors associated with non-response to the primary outcome measure, the PHQ-9 score, at 3 and 12 months (age, gender, highest level of education, current employment status, hold a health care card, long-term illness, live alone, number of times visited a psychiatrist or counsellor in past 12 months and current use of antidepressants);

3 Same as 1, but using a higher unit cost for the clinician-guided iCBT course delivered to the moderate prognostic group (unit cost changed from $132 per person to $222 per person);

4 Same as 1, but with the inclusion of sunk costs for the development of the Target-D CPT (cost of screening changed from $0.96 per person to $2.30 per person)

5 Same as 1, for complete cases only (i.e., no multiple imputation of missing data)

Health outcomes

There were no significant differences in mean QALYs between trial arms at 3 months (Table 3), although there was a trend towards marginally higher QALYs in the intervention arm across all participants (difference of 0.002, 95% CI: -0.0002 to 0.003) and in the moderate prognostic group (0.004, 95% CI: -0.001 to 0.008). At 12 months, the difference in mean QALYs was significant across all participants (0.011, 95% CI: 0.001 to 0.022) but not within prognostic groups, possibly reflecting insufficient power in the subgroup analyses.

Table 3. Comparison of quality-adjusted life years by trial arm, across all participants and stratified by prognostic group (multiple imputed data).

All participants (n = 1,868) p-value Minimal/mild (n = 1,357) p-value Moderate (n = 288) p-value Severe (n = 223) p-value
3 months
Mean QALYs (SE) 1
Intervention arm 0.147 (0.0006) 0.164 (0.0008) 0.112 (0.002) 0.089 (0.002)
Control arm 0.145 (0.0006) 0.163 (0.0008) 0.108 (0.001) 0.085 (0.002)
Difference in mean QALYs between arms (95% CI)1 0.002 (-0.0002 to 0.003) 0.09 0.0008 (-0.001 to 0.003) 0.47 0.004 (-0.001 to 0.008) 0.13 0.003 (-0.002 to 0.008) 0.22
Sensitivity analysis2 0.001 (-0.0004 to 0.003) 0.15 0.0007 (-0.001 to 0.003) 0.52 0.003 (-0.001 to 0.008) 0.16 0.003 (-0.002 to 0.008) 0.19
Sensitivity analysis3 0.001 (-0.0009 to 0.003) 0.27 -0.0002 (-0.003 to 0.002) 0.85 0.004 (-0.0008 to 0.009) 0.10 0.004 (-0.001 to 0.010) 0.17
12 months
Mean QALYs (SE) 1
Intervention arm 0.607 (0.004) 0.669 (0.005) 0.482 (0.011) 0.392 (0.012)
Control arm 0.596 (0.004) 0.661 (0.004) 0.465 (0.010) 0.367 (0.011)
Difference in mean QALYs between arms (95% CI)1 0.011 (0.001 to 0.022) 0.049 0.008 (-0.005 to 0.021) 0.23 0.016 (-0.014 to 0.047) 0.29 0.024 (-0.011 to 0.059) 0.18
Sensitivity analysis2 0.010 (-0.001 to 0.021) 0.09 0.00 (-0.006 to 0.020) 0.29 0.016 (-0.015 to 0.046) 0.32 0.024 (-0.010 to 0.059) 0.16
Sensitivity analysis3 0.015 (0.00004 to 0.030) 0.049 0.009 (-0.008 to 0.027) 0.32 0.048 (0.002 to 0.095) 0.04 0.013 (-0.034 to 0.059) 0.59

Abbreviations: QALYs = quality-adjusted life years; SE = standard error; CI = confidence interval

1 Mean QALYs for the intervention and control arms and the difference in means (intervention minus control) estimated using a generalised linear model (family = Gaussian, link = identity) with random intercepts for individuals and adjusted for baseline AQoL-8D utility weight, baseline PHQ-9 score, general practice and prognostic group (the final covariate only applied to the analysis involving all participants);

2 Same as 1, adjusted for factors associated with non-response to the primary outcome measure, the PHQ-9 score, at 3 and 12 months (age, gender, highest level of education, current employment status, hold a health care card, long-term illness, live alone, number of times visited a psychiatrist or counsellor in past 12 months and current use of antidepressants)

3 Same as 1, for complete cases only (i.e., no multiple imputation of missing data)

Cost-effectiveness results

Cost-effectiveness results are presented in Table 4. When adopting a health sector perspective, the intervention was dominant across all participants at both 3 and 12 months; that is, the intervention jointly improved health and achieved net cost savings. Under the societal perspective, the intervention was dominant across all participants only at 12 months; the ICER at 3 months exceeded the Australian willingness-to-pay threshold of A$50,000 per QALY ($237,128, 95% CI: dominant to dominated). Cost-effectiveness planes and cost-effectiveness acceptability curves are presented in S6 and S7 Figs (see S1 Appendix) for 3-month follow-up and in Figs 1 and 2 for 12-month follow-up. The probability of the intervention being cost-effective under the health sector perspective was 81% at 3 months (S6 Fig in S1 Appendix) and 96% at 12 months (Fig 1). From the societal perspective, the probability of the intervention being cost-effective was 30% at 3 months (S7 Fig in S1 Appendix) and 80% at 12 months (Fig 2).
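
Each probability on an acceptability curve is simply the share of bootstrap resamples yielding a positive incremental net monetary benefit at the given willingness-to-pay threshold. A minimal sketch in Stata, reusing the hypothetical bootstrap output from the Methods section:

    * Probability of cost-effectiveness at a A$50,000/QALY threshold,
    * computed from saved bootstrap draws (hypothetical names as before).
    use bootstrap_draws, clear
    gen nmb = 50000*d_qaly - d_cost      // incremental net monetary benefit
    quietly count if nmb > 0
    display "P(cost-effective) = " r(N)/_N

Repeating this count over a grid of threshold values traces out the full cost-effectiveness acceptability curve.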

Table 4. Incremental cost-effectiveness ratios under the health sector and societal perspectives, across all participants and stratified by prognostic group.

All participants (n = 1,868) Minimal/mild (n = 1,357) Moderate (n = 288) Severe (n = 223)
Health sector perspective
ICER (95% CI)1
3 months Dominant2 (Dominant2 to Dominated3) Dominant2 (Dominant2 to Dominated3) Dominant2 (Dominant2 to Dominated3) 364,966 (91,062 to Dominated3)
12 months Dominant2 (Dominant2 to 87,636) Dominant2 (Dominant2 to Dominated3) Dominant2 (Dominant2 to Dominated3) 45,424 (Dominant2 to Dominated3)
Societal perspective
ICER (95% CI)1
3 months 237,128 (Dominant2 to Dominated3) 254,155 (Dominant2 to Dominated3) 52,762 (Dominant2 to Dominated3) 721,711 (Dominant2 to Dominated3)
12 months Dominant2 (Dominant2 to 140,191) Dominant2 (Dominant2 to Dominated3) Dominant2 (Dominant2 to Dominated3) 57,039 (Dominant2 to Dominated3)

Abbreviations: CI = confidence interval; ICER = incremental cost-effectiveness ratio

1 Mean incremental costs and mean incremental QALYs estimated using generalised linear models (costs: family = gamma, link = log; QALYs: family = Gaussian, link = identity) with random intercepts for individuals and adjusted for baseline AQoL-8D utility weight (mean incremental QALYs only), baseline PHQ-9 score, general practice and prognostic group (the final covariate only applied to the analysis involving all participants). Confidence intervals were estimated for the mean ICER based on 1,000 bootstrap resamples. The mean difference in QALYs between trial arms approached zero across all base case and subgroup analyses. This can lead to the lower and upper bounds of a 95% confidence interval, derived using the bootstrap percentile method, encompassing marginally higher coverage than the target 95% confidence region (e.g., 97% coverage of the mean ICER) [22]. Even so, any resulting imprecision in the estimation of the 95% confidence bounds is likely inconsequential to the interpretation of study findings, given the wide range of ICER values consistently observed between the lower and upper confidence bounds (e.g., bounds ranging between 'dominant' and 'dominated'). This reflects the high degree of uncertainty observed across mean ICER values for all base case and subgroup analyses, with bootstrap resamples consistently spanning all four quadrants of the cost-effectiveness plane.

2 A ‘dominant’ ICER indicates that the intervention costs less and is more effective than the control.

3 A ‘dominated’ ICER indicates that the intervention costs more and is less effective than the control.

Fig 1. Cost-effectiveness results for the health sector perspective across all participants at 12 months.

Fig 1

Abbreviations: A$ = Australian dollars; CI = confidence interval; ICER = incremental cost-effectiveness ratio; QALYs = quality-adjusted life years; WTP = willingness to pay.

Fig 2. Cost-effectiveness results for the societal perspective across all participants at 12 months.

Fig 2

Abbreviations: A$ = Australian dollars; CI = confidence interval; ICER = incremental cost-effectiveness ratio; QALYs = quality-adjusted life years; WTP = willingness to pay.

In the subgroup analysis, the intervention was dominant for the minimal/mild and moderate prognostic groups at both 3 and 12 months under the health sector perspective (see Table 4). The ICER for the severe group exceeded the willingness-to-pay threshold at 3 months ($364,966, 95% CI: $91,062 to dominated), then fell below this threshold at 12 months ($45,424, 95% CI: dominant to dominated). When adopting the societal perspective, at 3 months the ICERs for the minimal/mild, moderate and severe groups all exceeded the willingness-to-pay threshold. At 12 months, however, the ICERs under the societal perspective were dominant for the minimal/mild and moderate groups and just above the willingness-to-pay threshold for the severe group.

Sensitivity analysis

Differences in mean health sector and societal costs between trial arms were robust to sensitivity analyses (Tables 1 and 2). Adding covariates associated with non-response (Table 3) reduced the 12-month difference in mean QALYs across all participants, which ceased to be significant (0.010, 95% CI: -0.001 to 0.021). In the complete case analysis (Table 3), the 12-month difference in mean QALYs across all participants remained significant (0.015, 95% CI: 0.00004 to 0.030), and the 12-month difference in the moderate prognostic group became significant (0.048, 95% CI: 0.002 to 0.095). The latter could be a chance finding or an indication that the missing-at-random assumption underlying multiple imputation does not hold, although it is highly unlikely that data are missing completely at random (as complete case analysis assumes).

Discussion

Summary

This is the first economic evaluation of an e-health platform designed to personally tailor depression management in primary care. The results suggest that over 12 months, the screening and intervention program is likely to be a more effective and less costly approach than usual GP care. This finding is driven by an observed difference in quality of life between the two trial arms and no significant differences in costs at 12-month follow-up.

Strengths and limitations

Major strengths of this study include the large sample size and the incorporation of economic data collection within the RCT design, adding greater certainty to results. The study also used a quality of life instrument that is sensitive to health states among people with mental health problems, increasing the likelihood that any quality of life impacts resulting from the intervention would be detected [14, 23]. Use of a self-reported measure of health care utilisation may be a limitation due to recall bias, though previous studies have demonstrated the validity of self-reported resource use questionnaires [24]. While more frequent data collection (e.g., every three months) may minimise recall bias, the additional burden it places on participants and trial staff alike was not considered appropriate for this trial. The resource use questionnaire was thoroughly piloted prior to the commencement of the study, and the study authors are confident that all main resource use categories relevant to this population were included. Sensitivity analyses demonstrated that QALY gains became non-significant after incorporating covariates associated with missing data for the primary outcome (i.e., PHQ-9 scores). This suggests that while significant differences in QALYs were observed at 12 months in the base case analysis, these differences may be considered uncertain, particularly given that differences were not observed for the primary outcome of the main RCT [8]. The study sample involved an urban population in Melbourne, which may limit the generalisability of study findings to other settings (e.g., other countries or rural regions of Australia). Study findings are also limited by high levels of missing RUQ and AQoL-8D data, a common issue in trial-based economic evaluation [25]. Finally, the study was powered for the primary outcome only and may be underpowered to detect differences in costs or QALYs [7, 8].

Comparison with existing literature

Interventions directly comparable to the Target-D approach have not been evaluated in the literature to date. However, Grochtdreis and colleagues [26] reviewed several cost-effectiveness studies of collaborative care interventions for depression that share common features with the Target-D intervention. This review identified ICERs ranging from dominant to US$874,562 per QALY and concluded that future research should incorporate a time horizon of one year or more and QALYs as an outcome measure, both of which were adopted in the current study. Another recent review of the cost-effectiveness of stepped care interventions for depression and anxiety identified one cost-effectiveness study of stepped care targeting visually impaired older people with both depression and anxiety [27]. The results of that study suggest that stepped care may be dominant when compared to usual care. The current study's findings thus appear to support the existing literature by suggesting that stepped care for depression can deliver improved clinical outcomes without increasing costs.

Conclusion

The Target-D intervention is likely to represent good value for money and provides indicative support for further development of digitally supported mental health care. Refinement and further testing of the approach is warranted to determine whether the size and extent of QALY gains at 12 months can be replicated. It remains to be seen whether results observed under tightly controlled trial conditions will still occur under routine service delivery conditions. In particular, the routine roll-out of the intervention needs to consider how screening will occur (e.g., by staff at general practices or online).

Supporting information

S1 Appendix. Supplementary materials.

(PDF)

S2 Appendix. Variable list and Stata do-file.

(PDF)

Acknowledgments

The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D; and the many research assistants who assisted with data collection. The data used to develop the clinical prediction tool were collected as a part of the diamond project (NHMRC project ID: 299869, 454463, 566511 and 1002908). We acknowledge the 30 dedicated family physicians, their patients, and clinic staff for making the diamond study possible. We also acknowledge staff and students at the School of Computing and Information Systems at the University of Melbourne for early work that informed the presentation of the e-health platform as well as the focus group participants that provided feedback on early versions of the Target-D materials. Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who built the Target-D website.

Data Availability

Ethical restrictions prevent the sharing of potentially sensitive data provided by study participants over the course of the Target-D clinical trial (Australian New Zealand Clinical Trials Registry ACTRN12616000537459). No contingency was included in the original plain language statement for study participants to provide informed consent to any prospective sharing of their personal data, whether as part of a minimum dataset or another form. Data requests can be sent to the Office of Research Ethics and Integrity (OREI) at The University of Melbourne (HumanEthics-Enquiries@unimelb.edu.au). For all other general enquiries regarding the Target-D clinical trial, please contact the chief investigator, Prof Jane Gunn (j.gunn@unimelb.edu.au).

Funding Statement

Target-D was funded by a grant from the National Health and Medical Research Council (NHMRC project ID: 1059863). The funding organisation had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.

References

1. Lee YC, Chatterton ML, Magnus A, Mohebbi M, Le LK, Mihalopoulos C. Cost of high prevalence mental disorders: Findings from the 2007 Australian National Survey of Mental Health and Wellbeing. Aust N Z J Psychiatry. 2017;51(12):1198–211. doi: 10.1177/0004867417710730
2. Ferrari AJ, Charlson FJ, Norman RE, Patten SB, Freedman G, Murray CJ, et al. Burden of depressive disorders by country, sex, age, and year: findings from the global burden of disease study 2010. PLoS Med. 2013;10(11):e1001547. doi: 10.1371/journal.pmed.1001547
3. Malhi GS, Bassett D, Boyce P, Bryant R, Fitzgerald PB, Fritz K, et al. Royal Australian and New Zealand College of Psychiatrists clinical practice guidelines for mood disorders. Aust N Z J Psychiatry. 2015;49(12):1087–206. doi: 10.1177/0004867415617657
4. National Institute for Health and Care Excellence. Depression in adults: recognition and management. Clinical guideline (CG90) [Online]. NICE; 2009 [updated April 2018; cited 21 October 2018]. Available from: https://www.nice.org.uk/guidance/cg90
5. Van Weel-Baumgarten EM, Van Gelderen MG, Grundmeijer HGLM, Licht-Strunk E, Van Marwijk HWJ, van Rijswijk HCAM, et al. NHG-Standaard Depressie (tweede herziening) [NHG standard on depression (second revision)]. Huisarts Wet. 2012;55(6):252–9.
6. Ho FY, Yeung WF, Ng TH, Chan CS. The Efficacy and Cost-Effectiveness of Stepped Care Prevention and Treatment for Depressive and/or Anxiety Disorders: A Systematic Review and Meta-Analysis. Sci Rep. 2016;6:29281. doi: 10.1038/srep29281
7. Gunn J, Wachtler C, Fletcher S, Davidson S, Mihalopoulos C, Palmer V, et al. Target-D: a stratified individually randomized controlled trial of the diamond clinical prediction tool to triage and target treatment for depressive symptoms in general practice: study protocol for a randomized controlled trial. Trials. 2017;18(1):342. doi: 10.1186/s13063-017-2089-y
8. Fletcher S, Chondros P, Densley K, Murray E, Dowrick C, Coe A, et al. Matching depression management to severity prognosis in primary care: results of the Target-D randomised controlled trial. Br J Gen Pract. 2021;71(703):e85–e94. doi: 10.3399/BJGP.2020.0783
9. Solomon D, Proudfoot J, Clarke J, Christensen H. e-CBT (myCompass), Antidepressant Medication, and Face-to-Face Psychological Treatment for Depression in Australia: A Cost-Effectiveness Comparison. J Med Internet Res. 2015;17(11):e255. doi: 10.2196/jmir.4207
10. Clinical Research Unit for Anxiety and Depression. THIS WAY UP [Online]. Sydney: St Vincent's Hospital Sydney; 2020 [cited 5 July 2020]. Available from: https://thiswayup.org.au/
11. Archer J, Bower P, Gilbody S, Lovell K, Richards D, Gask L, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev. 2012;10:CD006525. doi: 10.1002/14651858.CD006525.pub2
12. Sanders GD, Neumann PJ, Basu A, Brock DW, Feeny D, Krahn M, et al. Recommendations for Conduct, Methodological Practices, and Reporting of Cost-effectiveness Analyses: Second Panel on Cost-Effectiveness in Health and Medicine. JAMA. 2016;316(10):1093–103. doi: 10.1001/jama.2016.12195
13. Lee YC, Gao L, Dear BF, Titov N, Mihalopoulos C. The cost-effectiveness of the online Mindspot Clinic for the treatment of depression and anxiety in Australia. J Ment Health Policy Econ. 2017;20(4):155–66.
14. Richardson J, Iezzi A, Khan MA, Maxwell A. Validity and reliability of the Assessment of Quality of Life (AQoL)-8D multi-attribute utility instrument. Patient. 2014;7(1):85–96. doi: 10.1007/s40271-013-0036-x
15. Richardson J, Sinha K, Iezzi A, Khan MA. Modelling utility weights for the Assessment of Quality of Life (AQoL)-8D. Qual Life Res. 2014;23(8):2395–404. doi: 10.1007/s11136-014-0686-8
16. Glick HA, Doshi JA, Sonnad SS, Polsky D. Economic Evaluation in Clinical Trials. Oxford: Oxford University Press; 2014.
17. Brand J, van Buuren S, le Cessie S, van den Hout W. Combining multiple imputation and bootstrap in the analysis of cost-effectiveness trial data. Stat Med. 2019;38(2):210–20. doi: 10.1002/sim.7956
18. Briggs A, Fenn P. Confidence intervals or surfaces? Uncertainty on the cost-effectiveness plane. Health Econ. 1998;7(8):723–40.
19. Carter R, Vos T, Moodie M, Haby M, Magnus A, Mihalopoulos C. Priority setting in health: origins, description and application of the Australian Assessing Cost-Effectiveness initiative. Expert Rev Pharmacoecon Outcomes Res. 2008;8(6):593–617. doi: 10.1586/14737167.8.6.593
20. George B, Harris A, Mitchell A. Cost-effectiveness analysis and the consistency of decision making: evidence from pharmaceutical reimbursement in Australia (1991 to 1996). Pharmacoeconomics. 2001;19(11):1103–9. doi: 10.2165/00019053-200119110-00004
21. Harris AH, Hill SR, Chin G, Li JJ, Walkom E. The role of value for money in public insurance coverage decisions for drugs in Australia: a retrospective analysis 1994–2004. Med Decis Making. 2008;28(5):713–22. doi: 10.1177/0272989X08315247
22. Siani C, de Peretti C, Moatti JP. Revisiting methods for calculating confidence region for ICERs: are Fieller's and bootstrap methods really equivalent? [Online]. Institut pour la Recherche en Santé Publique (IReSP); 2003 [cited 9 Mar 2022]. Available from: https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf
23. Richardson J, Elsworth G, Iezzi A, Khan M, Mihalopoulos C, Schweitzer I, et al. Increasing the sensitivity of the AQoL inventory for the evaluation of interventions affecting mental health. Melbourne: Centre for Health Economics; 2011.
24. Leggett LE, Khadaroo RG, Holroyd-Leduc J, Lorenzetti DL, Hanson H, Wagg A, et al. Measuring Resource Utilization: A Systematic Review of Validated Self-Reported Questionnaires. Medicine (Baltimore). 2016;95(10):e2759. doi: 10.1097/MD.0000000000002759
25. Leurent B, Gomes M, Carpenter JR. Missing data in trial-based cost-effectiveness analysis: An incomplete journey. Health Econ. 2018;27(6):1024–40. doi: 10.1002/hec.3654
26. Grochtdreis T, Brettschneider C, Wegener A, Watzke B, Riedel-Heller S, Harter M, et al. Cost-effectiveness of collaborative care for the treatment of depressive disorders in primary care: a systematic review. PLoS One. 2015;10(5):e0123078. doi: 10.1371/journal.pone.0123078
27. Reeves P, Szewczyk Z, Proudfoot J, Gale N, Nicholas J, Anderson J. Economic Evaluations of Stepped Models of Care for Depression and Anxiety and Associated Implementation Strategies: A Review of Empiric Studies. Int J Integr Care. 2019;19(2):8. doi: 10.5334/ijic.4157

Decision Letter 0

Isabelle Durand-Zaleski

Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

22 Oct 2021

PONE-D-21-25251: Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: A within-trial cost-utility analysis. PLOS ONE.

Dear Dr. Lee,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses all the points raised during the review process, with particular attention given to the economic comments of Reviewer 1.

Please submit your revised manuscript by Dec 06 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

“The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D; and the many research assistants who assisted with data collection. The data used to develop the clinical prediction tool were collected as a part of the diamond project which was funded by the National Health and Medical Research Council (NHMRC project ID: 299869, 454463, 566511 and 1002908). We acknowledge the 30 dedicated family physicians, their patients, and clinic staff for making the diamond study possible. We also acknowledge staff and students at the School of Computing and Information Systems at the University of Melbourne for early work that informed the presentation of the e-health platform as well as the focus group participants that provided feedback on early versions of the Target-D materials. Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who were funded to build the Target-D website.”

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

“Target-D was funded by a grant from the National Health and Medical Research Council (NHMRC project ID: 1059863). The funding organisation had no role in the design and conduct of the study; collection, management analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper aims to assess the cost-effectiveness of the Target-D intervention against usual care. Mental health and primary care are both very important subjects for public health, and the evaluation of e-health is also of prime importance. The work is original, but more methodological precision is needed.

Main concerns:

1) A micro-costing approach was used to estimate the cost of the Target-D intervention, except for the clinician-guided iCBT program (moderate prognostic group) (page 5, lines 112-113: “The average cost between two Australian clinician-guided iCBT programs was applied to participants in the moderate prognostic group”). Why?

2) Some costs appeared to be research-induced (e.g., page 5, lines 105-106: “personnel time to approach individuals in the GP waiting room involved one minute per encounter”). Please clarify.

3) The drop-out rate was very high for the RUQ data and the AQoL-8D data. Not enough information is given to the reader on how the missing data were treated. Bootstrapped data need to be stratified by treatment arm. The missing data mechanism needs to be tested, not assumed (page 6, line 140: “assuming data were missing at random”). Tests should be presented and discussed. If data were not MAR, extensive sensitivity analyses (scenarios) on costs and quality-of-life utilities need to be conducted (not just a complete case analysis, which is valid if and only if missing data are MCAR). Please give a reference if this issue was treated in a previous paper.

4) Two GLMs were estimated: one for costs (link = log; family = gamma) and another for utility scores (link = identity; family = Gaussian). In the paper, it is not clear whether these models were re-estimated for each bootstrapped sample or only once. If the ICER is computed from estimated coefficients, how was the ratio converted into a difference for costs? How was the potential correlation between the error terms of the QALY equation and the cost equation taken into account in the analysis?

5) Concerning the QALY equation, was the value at baseline systematically included among the covariates? (Not clear on page 6, lines 147-149: “All GLM models were estimated with and without adjustment for several baseline covariates specified in the study protocol – i.e., baseline PHQ-9 score (not QALY, as requested in guidelines), general practice and prognostic group”.)

6) It would be interesting to present the detailed micro-costing results for each level of the intervention. On page 8, lines 191-194, please be more affirmative: “This was likely due to the high-cost nature of collaborative care delivered to participants in the severe group”.

7) Acceptability curves could be estimated for the three prognostic groups.

8) The conclusion (page 19, lines 338-341) was very strong and not really in line with the methodological issues mentioned on page 18, lines 318-319 and 325-327.

Reviewer #2: The authors present an economic evaluation of the Target-D intervention, based on resource utilization information collected during a clinical trial of Target-D versus usual care in Melbourne, Australia. Results are presented both from a health sector perspective and a societal perspective. Authors conclude that Target-D likely has good value for health care decision makers. The manuscript is well written. I only have a few minor recommendations for the authors.

1. line 43: authors state that health sector and societal costs were "comparable" between trial arms at 3 and 12 months. Authors should replace "comparable" with "not significantly different" since authors did not do a specific test for equality of the costs.

2. Authors should provide the number of control and intervention participants in each of the prognostic groups (minimal/mild, moderate, severe), rather than relying on readers to go to the published paper on trial results to get this information. This can likely just be put in the text in lines 177-178.

3. lines 208 (note under Table 1), 217 (note under Table 2), and 235 (note under Table 3): "partcipants" should be "participants"

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 25;17(5):e0268948. doi: 10.1371/journal.pone.0268948.r002

Author response to Decision Letter 0


7 Feb 2022

Editor #1:

E1.1 – 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have sought to comply with the PLOS ONE style templates as best we can.

E1.2 – 2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at

http://journals.plos.org/plosone/s/latex.

For now, we have opted to submit our manuscript using Word document files. We will be more than willing to provide a submission in the PLOS LaTeX template following the acceptance of our paper.

E1.3 – 3. Thank you for stating the following in the Acknowledgments Section of your manuscript: “The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D ... Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who were funded to build the Target-D website.” We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

We have modified the acknowledgements to remove all references to funding information. Please see the amended text below.

The authors would like to thank all the patients, family physicians, and clinics who took part in Target-D; and the many research assistants who assisted with data collection. The data used to develop the clinical prediction tool were collected as a part of the diamond project (NHMRC project ID: 299869, 454463, 566511 and 1002908). We acknowledge the 30 dedicated family physicians, their patients, and clinic staff for making the diamond study possible. We also acknowledge staff and students at the School of Computing and Information Systems at the University of Melbourne for early work that informed the presentation of the e-health platform as well as the focus group participants that provided feedback on early versions of the Target-D materials. Finally, we thank staff at the former Melbourne Networked Society Institute (MNSI) who built the Target-D website.

E1.4 – 4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

E1.5 – 5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Ethics approval for the clinical trial underlying the submitted manuscript was obtained through the University of Melbourne. We have contacted Ms Hilary Young, Secretary for the Medicine and Dentistry Human Ethics Sub-Committee (HESC) at the University of Melbourne, to provide us with guidance on this matter (Phone: +61 3 8344 8595, Email: hilary.young@unimelb.edu.au). We have attached a copy of the resulting correspondence. To summarise, we have been notified that the plain language statement of the original study does not provide a contingency for study participants to provide informed consent for any prospective data sharing, particularly given that collected data is potentially identifying and involves sensitive personal information. As such, we are unable to release the data as part of any minimum dataset. We have amended our Data Availability Statement to read:

Ethical restrictions prevent the sharing of potentially sensitive data provided by study participants over the course of the Target-D clinical trial (Australian New Zealand Clinical Trials Registry ACTRN12616000537459). No contingency was included in the original plain language statement for study participants to provide informed consent to any prospective sharing of their personal data, whether as part of a minimum dataset or another form. For all enquiries regarding the Target-D clinical trial and the underlying dataset, please contact the chief investigator, Prof Jane Gunn (j.gunn@unimelb.edu.au). For general enquiries regarding the ethics approval of the trial, please contact the Office of Research Ethics and Integrity (OREI) at The University of Melbourne (HumanEthics-Enquiries@unimelb.edu.au).

To compensate for our inability to provide a minimum dataset, we have included an additional supplementary appendix (S2 Appendix) that contains summary metadata on all in-scope data variables, alongside a copy of the Stata do-file. The following sentence has been added to the end of the ‘Statistical analysis’ section of the Methods in lines 175-176:

A summary list of all variables included in the statistical analysis is provided in S2 Appendix, alongside the Stata do-file used to implement the statistical analysis.

Reviewer #1:

The paper aims to assess the cost-effectiveness of the Target-D intervention against usual care. Mental health and primary care are both very important subjects for public health. Evaluation of e-health is also of prime importance. The work is original, but more methodological precision was needed.

Main concerns:

R1.1 – 1) A micro-costing approach was used to estimate the cost of the Target-D intervention except for the clinician-guided iCBT program (moderate prognostic group) (page 5, lines 112-113: “The average cost between two Australian clinician-guided iCBT programs was applied to participants in the moderate prognostic group”). Why?

There is no definitive, gold-standard unit cost for clinician-guided iCBT in Australia. Instead, two alternative unit costs were available to us, providing a low-to-high range of possible values. The point value used in the base case analysis was the average of the low and high values, which is why we performed a subsequent sensitivity analysis examining the impact of adopting the highest unit cost ($222 per person). In response to this comment, we have added the following sentences to improve clarity on lines 116-117:

In this instance, the two programs represent a low-to-high range of possible unit cost values for clinician-guided iCBT in Australia. A subsequent sensitivity analysis was done to test the impact of using the highest unit cost, rather than the average.

R1.2 – 2) Some costs appeared to be research-induced (e.g., p5, lines 105-106: “personnel time to approach individuals in the GP waiting room involved one minute per encounter”). Please clarify.

We made sure to include costs that would occur in routine practice and to exclude research-related costs. Resource use involving the research assistants (e.g., approaching individuals in the GP waiting room or periodic check-in phone calls) will need to be performed by similarly qualified staff if the intervention were to be implemented as part of routine practice. We have added a sentence to lines 120-122 to clarify this point:

It is anticipated that all costed activities described above that involve research assistants would require similarly qualified staff if the intervention were implemented as part of routine practice.

R1.3 – 3) The drop-out rate was very high for the RUQ data and the AQoL-8D data. Not enough information is given to the reader on how the missing data were treated. Bootstrapped data need to be stratified by treatment arm. The missing data mechanism needs to be tested, not assumed (page 6, line 140: “assuming data were missing at random”). Tests should be presented and discussed. If data were not MAR, extensive sensitivity analyses (scenarios) on costs and quality-of-life utilities need to be conducted (not only complete case analysis, which is valid if and only if missing data were MCAR). Please give a reference if this issue was addressed in a previous paper.

We thank the reviewer for encouraging us to be clear about how we have addressed the problem of missing data. In response to this comment, we have added a new section to the supplementary materials, ‘Supplementary Text S5. Analysis of missing data mechanisms’. In this supplement, we provide a detailed exploration of missing data patterns and an empirical rationale for concluding that there is sufficient evidence to treat missing utility/cost data as missing at random (as opposed to missing not at random). Based on these analyses, we identified several baseline sociodemographic variables that were associated with the likelihood of missing utility/cost values. We have consequently included these variables as adjustment covariates in the multiple imputation analysis, which was also updated. All results based on these methodological refinements have been amended accordingly.

We have modified text on lines 144-150 to read:

Multiple imputation methods were implemented in Stata to account for missing data that were deemed missing at random following several exploratory analyses presented in Supplementary Text 5 in S1 Appendix. Missing cost and outcomes data were imputed 100 times using multiple imputation by chained equations (MICE), with predictive mean matching and adjustment for baseline covariates associated with data missingness – i.e., trial arm, clinic, age, gender, highest level of education and having visited a psychologist/counsellor in the past 12 months.
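To make the imputation step concrete, the following sketch (in Python, using the MICE implementation in statsmodels, which imputes by predictive mean matching) shows a single chained-equations pass of the kind described above. It is a minimal illustration on synthetic data: the variable names, distributions and 30% missingness rate are assumptions for demonstration only, not the Target-D data or the Stata do-file provided in S2 Appendix.

    import numpy as np
    import pandas as pd
    from statsmodels.imputation.mice import MICEData

    # Synthetic stand-in for the trial extract (illustrative only).
    rng = np.random.default_rng(42)
    n = 500
    df = pd.DataFrame({
        "arm": rng.integers(0, 2, n).astype(float),  # 0 = usual care, 1 = intervention
        "age": rng.normal(45.0, 12.0, n),
        "cost": rng.gamma(shape=2.0, scale=800.0, size=n),  # right-skewed costs
        "qaly": np.clip(rng.normal(0.75, 0.15, n), 0.0, 1.0),
    })
    # Impose ~30% missingness on costs/QALYs to mimic follow-up drop-out.
    for col in ("cost", "qaly"):
        df.loc[rng.random(n) < 0.3, col] = np.nan

    # MICEData cycles through the incomplete variables, imputing each by
    # predictive mean matching conditional on the other columns (arm, age, ...).
    imp = MICEData(df)
    imp.update_all(10)       # ten chained-equation cycles
    completed = imp.data     # one completed dataset (a single imputation)
    print(completed.isna().sum())  # all zeros: no missing values remain

In the analysis itself, the covariates associated with missingness (clinic, gender, education, prior psychologist/counsellor visits) would simply be added as further columns so that the chained equations condition on them.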

R1.4 – 4) Two GLMs were estimated, one for costs (link=log; family=gamma) and another for utility scores (link=identity; family=Gaussian). It is not clear in the paper whether these models were re-estimated for each bootstrapped sample or only once. If the ICER is computed from estimated coefficients, how was the ratio implied by the log link converted into a difference for costs? How was the potential correlation between the error terms of the QALY equation and the cost equation taken into account in the analysis?

The reviewer is justified in calling for further detail on the methods used to implement bootstrapping. We confirm that the ICER was computed from GLM-based estimates of the difference in mean costs and the difference in mean QALYs. In the original analysis, we adopted a resampling method that encompassed ‘bootstrapping nested in multiple imputation’. Since then, we have encountered recommendations by Brand et al., 2019 (doi: 10.1002/sim.7956) and by Prof Andy Briggs, which collectively advocate ‘single imputation nested in bootstrapping’. Based on these recommendations, we have revised our analytic approach and modified our description of the methods/results accordingly.

The text on lines 161-173 has now been amended to read:

Incremental cost-effectiveness ratios (ICERs) were calculated as the difference in mean costs between the intervention and control arms divided by the difference in mean QALYs. ICERs were calculated by study perspective (health sector and societal), follow-up period (3 and 12 months) and, for the subgroup analysis, by prognostic group (total, minimal/mild, moderate and severe). A resampling method comprising single imputation nested in bootstrapping [17] was used to quantify the input parameter uncertainty around the resulting differences in mean costs/QALYs and the mean ICERs. Under this method, a single call to the MICE procedure is made within each bootstrap resample to produce a complete dataset on which the GLMs of costs/QALYs are estimated. Following the generation of 1,000 bootstrap resamples, the bootstrap percentile method was used to estimate 95% confidence intervals (95% CI) around the differences in mean costs/QALYs and the mean ICERs [18]. The intervention was considered cost-effective if the resulting ICER was less than the Australian willingness-to-pay threshold of A$50,000 per QALY [19-21].
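A minimal sketch of this resampling scheme follows, continuing the synthetic data (df) and MICEData import from the sketch under R1.3 above. Within each bootstrap resample (stratified by trial arm, per the reviewer's earlier comment), the data are completed by a single chained-equations imputation, the two GLMs are fitted (gamma family with log link for costs; Gaussian family with identity link for QALYs), and the multiplicative log-link cost model is converted to a difference in mean costs via recycled predictions. The covariate set, replicate count and the helper name incremental_effects are illustrative assumptions, not the published do-file.

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def incremental_effects(data):
        """Fit both GLMs on one completed dataset; return (d_cost, d_qaly)."""
        # Costs: gamma family, log link (a multiplicative model).
        m_cost = smf.glm(
            "cost ~ arm + age", data=data,
            family=sm.families.Gamma(link=sm.families.links.Log())).fit()
        # Recycled predictions: predict every participant as treated and as
        # untreated, then difference the mean predictions on the dollar scale.
        d_cost = (m_cost.predict(data.assign(arm=1.0)).mean()
                  - m_cost.predict(data.assign(arm=0.0)).mean())
        # QALYs: Gaussian family, identity link, so the arm coefficient is
        # itself the adjusted difference in mean QALYs.
        m_qaly = smf.glm("qaly ~ arm + age", data=data,
                         family=sm.families.Gaussian()).fit()
        return d_cost, m_qaly.params["arm"]

    # Single imputation nested in bootstrapping: resample first, impute once
    # inside each resample, then estimate (200 replicates here for speed;
    # the paper used 1,000).
    results = []
    for b in range(200):
        boot = pd.concat(
            [g.sample(frac=1.0, replace=True, random_state=b)  # stratified by arm
             for _, g in df.groupby("arm")]).reset_index(drop=True)
        imp_b = MICEData(boot)
        imp_b.update_all(5)  # one completed dataset per resample
        results.append(incremental_effects(imp_b.data))

    d_costs, d_qalys = map(np.asarray, zip(*results))
    # Bootstrap percentile 95% CIs for the incremental costs and QALYs.
    print("d_cost 95% CI:", np.percentile(d_costs, [2.5, 97.5]))
    print("d_qaly 95% CI:", np.percentile(d_qalys, [2.5, 97.5]))
    print("ICER (A$/QALY):", d_costs.mean() / d_qalys.mean())

Because the stored draws can land in all four quadrants of the cost-effectiveness plane when the QALY difference is near zero, the ICER is best summarised alongside the full scatter of resampled increments rather than in isolation.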

Furthermore, we have now included an additional supplementary appendix (S2 Appendix) that contains both the Stata do-file and a summary list of data variables. The following sentence has been added to the end of the ‘Statistical analysis’ section of the Methods in lines 175-176:

A summary list of all variables included in the statistical analysis is provided in S2 Appendix, alongside the Stata do-file used to implement the statistical analysis.

R1.5 – 5) Concerning the QALY equation, was the value at baseline systematically included among the covariates? (Not clear on page 6, lines 147-149: “All GLM models were estimated with and without adjustment for several baseline covariates specified in the study protocol – i.e., baseline PHQ-9 score (not QALY, as requested in guidelines), general practice and prognostic group”.)

The reviewer has made an important critique of our methods. Our initial analysis did not adjust for baseline utility scores derived using the AQoL-8D measure, as we were narrowly focussed on reproducing the primary outcomes analysis, which adjusted for the baseline PHQ-9 score. Moreover, we had postulated a priori that baseline PHQ-9 scores would, in theory, be highly correlated with baseline AQoL-8D scores. Following the reviewer’s comment, we have decided to re-analyse QALY outcomes after making an additional adjustment for baseline AQoL-8D scores.

In the methods, we have amended lines 155-159:

All GLMs were estimated with and without adjustment for several baseline covariates specified in the study protocol ‒ i.e., baseline PHQ-9 score, general practice and prognostic group [7]. Baseline AQoL-8D scores were also included as an additional baseline covariate for GLMs involving QALY outcomes.
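In notation, the re-specified QALY regression takes the following form (an illustrative rendering of the covariate set named above, not a quotation from the manuscript):

    \text{QALY}_i = \beta_0 + \beta_1\,\text{arm}_i + \beta_2\,u^{(0)}_i + \beta_3\,\text{PHQ9}^{(0)}_i + \gamma_{g(i)} + \delta_{p(i)} + \varepsilon_i

where u^{(0)}_i is the baseline AQoL-8D utility weight, PHQ9^{(0)}_i is the baseline PHQ-9 score, and \gamma_{g(i)} and \delta_{p(i)} are general practice and prognostic group effects. Because the QALY model uses a Gaussian family with an identity link, \beta_1 is read directly as the adjusted difference in mean QALYs between arms; including u^{(0)}_i guards against chance baseline utility imbalance influencing that difference.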

The results that are reported in Table 3 now reflect these changes. Additionally, the first footnote to the results presented in Table 3 on line 253 has also been amended to reflect the addition of the baseline AQoL-8D score as a baseline covariate.

R1.6 – 6) It would be interesting to present the detailed micro-costing results for each level of the intervention. On page 8, lines 191-194, please be more affirmative: “This was likely due to the high-cost nature of collaborative care delivered to participants in the severe group”.

Detailed costs from the micro-costing approach are presented for each intervention level in Supplementary Table S3. We have amended the sentence on lines 212-214 to read:

This was likely due to the high-cost nature of collaborative care delivered to participants in the severe group (see Supplementary Table 3 in S1 Appendix for detailed costs).

R1.7 – 7) Acceptability curves could be estimated for the three prognostic groups.

We appreciate the reviewer’s suggestion here. However, we have opted not to present in-depth results for the three prognostic groups (i.e., cost-effectiveness planes or cost-effectiveness acceptability curves), given that these subgroup analyses are underpowered to detect statistically significant differences, particularly when compared to the aggregate findings. We have amended the text in lines 162-165 to emphasise that the analysis of prognostic groups is a subgroup analysis:

ICERs were calculated by study perspective (health sector and societal), follow-up period (3 and 12 months) and, for the subgroup analysis, by prognostic group (total, minimal/mild, moderate and severe).
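For completeness, a cost-effectiveness acceptability curve (CEAC) falls directly out of the bootstrap draws produced in the sketch under R1.4 above: at each willingness-to-pay (WTP) value, the probability of cost-effectiveness is the share of resamples with positive net monetary benefit. The grid below is an illustrative assumption; subgroup curves would repeat the same calculation on increments resampled within each prognostic group.

    # NMB = WTP * d_qaly - d_cost; CEAC(WTP) = Pr(NMB > 0) across resamples.
    wtp_grid = np.linspace(0, 100_000, 201)
    ceac = np.array([(wtp * d_qalys - d_costs > 0).mean() for wtp in wtp_grid])
    print("Pr(cost-effective) at A$50,000/QALY:", ceac[100])  # grid midpoint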

R1.8 – 8) The conclusion (page 19, lines 338-341) was very strong and not really in line with the methodological issues mentioned on page 18, lines 318-319 and 325-327.

We have amended the relevant text on lines 359-362 and lines 365-366 to soften the conclusions drawn from our study findings. The text now reads as follows:

The results of this study suggest that stepped care may be dominant when compared to usual care. Additionally, study findings appear to support the existing literature by suggesting that stepped care for depression can deliver improved clinical outcomes without increasing costs.

The Target-D intervention is likely to represent good value for money and provides indicative support for further development of digitally supported mental health care.

Reviewer #2:

The authors present an economic evaluation of the Target-D intervention, based on resource utilization information collected during a clinical trial of Target-D versus usual care in Melbourne, Australia. Results are presented both from a health sector perspective and a societal perspective. Authors conclude that Target-D likely has good value for health care decision makers. The manuscript is well written. I only have a few minor recommendations for the authors.

R2.1 – 1. line 43: authors state that health sector and societal costs were "comparable" between trial arms at 3 and 12 months. Authors should replace "comparable" with "not significantly different" since authors did not do a specific test for equality of the costs.

We thank the reviewer for this suggestion and have amended the Abstract text accordingly (see line 43).

R2.2 – 2. Authors should provide the number of control and intervention participants in each of the prognostic groups (minimal/mild, moderate, severe), rather than relying on readers to go to the published paper on trial results to get this information. This can likely just be put in the text in lines 177-178.

We have added this information to lines 194-198 at the beginning of the Results section, as requested by the reviewer.

R2.3 – 3. lines 208 (note under Table 1), 217 (note under Table 2), and 235 (note under Table 3): "partcipants" should be "participants"

We thank the reviewer for spotting this mistake. The spelling of this word has now been corrected in all relevant locations.

Attachment

Submitted filename: 00b_Target_D_Response_to_Reviewers.pdf

Decision Letter 1

Isabelle Durand-Zaleski

7 Mar 2022

PONE-D-21-25251R1

Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysis

PLOS ONE

Dear Dr. Lee,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the 2 minor points raised during the review process:

1) As the effectiveness difference between arms is very small (non-significant), it would be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]

Please submit your revised manuscript by Apr 21 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

You have addressed most of the reviewers' comments; I recommend that you take into account the two suggestions of Reviewer 1 in your final version.

1) As the effectiveness difference between arms is very small (non-significant), it would be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you to the authors for addressing all my comments on the previous version of the paper.

I have two (marginal) comments left:

1) As the effectiveness difference between arms is very small (non-significant), it would be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 25;17(5):e0268948. doi: 10.1371/journal.pone.0268948.r004

Author response to Decision Letter 1


21 Mar 2022

Reviewer #1:

Thank you to the authors for addressing all my comments on the previous version of the paper. I have two (marginal) comments left:

R1.1 – 1) As the effectiveness difference between arms is very small (non-significant), it would be preferable to estimate the 95% CI for the ICER using Fieller's method instead of the bootstrap method. The former is less sensitive to misinterpretation of the CI bounds than the latter [see https://www.iresp.net/wp-content/uploads/2018/12/Siani-article-3.pdf]

We thank the reviewer for suggesting Fieller's theorem as a potentially more accurate method for estimating the 95% confidence interval of the mean ICER when the expected difference in effectiveness outcomes between trial arms approaches zero (as occurs in our study). We have read the article by Siani et al. (2003) and note that: (1) Fieller's theorem has the potential to produce more accurate 95% confidence interval bounds with excellent coverage of the 95% confidence region, based largely on simulations analysed in the quoted study; and (2) 95% CIs produced using the non-parametric bootstrap percentile method can lead to confidence bounds with marginally higher coverage than the target 95% confidence region (i.e., 97% coverage of the mean ICER). A quick search of the literature found no studies that have replicated the findings of Siani et al. (2003), which makes it difficult to confirm the veracity of the phenomena they identified. Even so, we concede that the potential for the bias identified by Siani et al. (2003) remains.

If the aforementioned bias were to transpire, we contend that such imprecision in the estimation of the 95% confidence bounds would not have a material impact on the interpretation of our study findings. This is due largely to the bootstrap resampling results, which indicated a high degree of uncertainty around the expected value of the mean ICERs estimated across all base case and subgroup analyses (i.e., bootstrap resamples for mean ICERs consistently covered all four quadrants of the cost-effectiveness plane). As such, the 95% confidence bounds produced by the bootstrap percentile method did not approach the nominated WTP threshold of A$50,000 per QALY. This was especially true when the lower and upper 95% confidence bounds spanned the South-East ('dominant') and North-West ('dominated') quadrants of the cost-effectiveness plane. In summary, any (comparatively small) imprecision around the 95% confidence bounds presented in Table 4 is expected to be inconsequential relative to the extreme range between the lower and upper 95% confidence bounds.
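For reference, the standard Fieller construction (notation ours) makes the issue explicit. Writing \Delta_C and \Delta_E for the estimated mean differences in costs and QALYs, v_{CC} and v_{EE} for their variances, v_{CE} for their covariance, and z = z_{1-\alpha/2} for the critical value, the Fieller confidence limits for the ICER R = \Delta_C / \Delta_E are the roots of:

    (\Delta_E^2 - z^2 v_{EE})\,R^2 - 2\,(\Delta_C \Delta_E - z^2 v_{CE})\,R + (\Delta_C^2 - z^2 v_{CC}) = 0

Finite limits exist only when \Delta_E^2 - z^2 v_{EE} > 0, that is, when the QALY difference is itself statistically significant. When it is not, as in our study, the Fieller region is unbounded or disjoint, which is consistent with our bootstrap resamples spanning all four quadrants of the cost-effectiveness plane.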

In response to this comment, we have added a note to Table 4 stating that:

The mean difference in QALYs between trial arms was observed to approach zero across all base case and subgroup analyses. This can potentially lead to the lower and upper bounds of a 95% confidence interval, derived using the bootstrap percentile method, encompassing a marginally higher coverage than the target 95% confidence region (e.g., 97% coverage of the mean ICER) [22]. Even so, any resulting imprecision in the estimation of the 95% confidence bounds will likely be inconsequential to the interpretation of study findings given the wide range of ICER values that were consistently observed between the lower and upper confidence bounds (e.g., confidence bounds ranging between 'dominant' and 'dominated'). This reflects the high degree of uncertainty observed across mean ICER values for all base case and subgroup analyses; with bootstrap resamples consistently spanning all four quadrants of the cost-effectiveness plane.

R1.2 – 2) In the QALY equation, the utility score at inclusion should be included as a covariate (not the baseline AQoL-8D score) [see Willan and Briggs, Statistical analysis of cost-effectiveness data, Statistics in Practice, Wiley, pages 24-25]

We apologise to the reviewer for the imprecise terminology, which led to this confusion. When we used the term 'baseline AQoL-8D score', our intended meaning was 'baseline AQoL-8D utility weight' – i.e., the utility weight estimated by scoring the AQoL-8D multi-attribute utility instrument, which is the utility score at inclusion. In response to this comment, we have changed all instances of 'AQoL-8D score(s)' to 'AQoL-8D utility weight(s)'.

Attachment

Submitted filename: 00b_Target_D_Response_to_Reviewers.pdf

Decision Letter 2

Isabelle Durand-Zaleski

12 May 2022

Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysis

PONE-D-21-25251R2

Dear Dr. Lee,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Isabelle Durand-Zaleski

16 May 2022

PONE-D-21-25251R2

Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: a within-trial cost-utility analysis

Dear Dr. Lee:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Isabelle Durand-Zaleski

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Supplementary materials.

    (PDF)

    S2 Appendix. Variable list and Stata do-file.

    (PDF)

    Attachment

    Submitted filename: 00b_Target_D_Response_to_Reviewers.pdf

    Attachment

    Submitted filename: 00b_Target_D_Response_to_Reviewers.pdf

    Data Availability Statement

    Ethical restrictions prevent the sharing of potentially sensitive data provided by study participants over the course of the Target-D clinical trial (Australian New Zealand Clinical Trials Registry ACTRN12616000537459). No contingency was included in the original plain language statement for study participants to provide informed consent to any prospective sharing of their personal data, whether as part of a minimum dataset or another form. Data requests can be sent to the Office of Research Ethics and Integrity (OREI) at The University of Melbourne (HumanEthics-Enquiries@unimelb.edu.au). For all other general enquiries regarding the Target-D clinical trial, please contact the chief investigator, Prof Jane Gunn (j.gunn@unimelb.edu.au).

