PLOS One. 2023 Oct 17;18(10):e0293077. doi: 10.1371/journal.pone.0293077

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: A decision analytic model

Antoine Oloma Tshomba 1,2,*, Daniel Mukadi-Bamuleka 2,3, Anja De Weggheleire 4, Olivier M Tshiani 2,3, Charles T Kayembe 5, Placide Mbala-Kingebeni 2,3, Jean-Jacques Muyembe-Tamfum 2,3, Steve Ahuka-Mundeke 2,3, Faustin M Chenge 1,6, Bart Karl M Jacobs 7, Dieudonné N Mumba 2,4, Désiré D Tshala-Katumbay 2,8,9, Sabue Mulangu 2,3
Editor: Jan Rychtář
PMCID: PMC10581462  PMID: 37847703

Abstract

Background

The absence of distinctive clinical signs of Ebola virus disease (EVD) has prompted the development of rapid screening tools and calls for a new approach to screening suspected Ebola cases. New screening approaches require evidence of clinical benefit and economic efficiency. As of now, no such evidence or defined algorithm exists.

Objective

To evaluate, from a healthcare perspective, the efficiency of incorporating Ebola prediction scores and rapid diagnostic tests into the EVD screening algorithm during an outbreak.

Methods

We collected data on rapid diagnostic tests (RDTs) and prediction scores’ accuracy measurements, e.g., sensitivity and specificity, and the cost of case management and RDT screening in EVD suspect cases. The overall cost of healthcare services (PPE, procedure time, and standard-of-care (SOC) costs) per suspected patient and diagnostic confirmation of EVD were calculated. We also collected the EVD prevalence among suspects from the literature. We created an analytical decision model to assess the efficiency of eight screening strategies: 1) Screening suspect cases with the WHO case definition for Ebola suspects, 2) Screening suspect cases with the ECPS at -3 points of cut-off, 3) Screening suspect cases with the ECPS as a joint test, 4) Screening suspect cases with the ECPS as a conditional test, 5) Screening suspect cases with the WHO case definition, then QuickNavi™-Ebola RDT, 6) Screening suspect cases with the ECPS at -3 points of cut-off and QuickNavi™-Ebola RDT, 7) Screening suspect cases with the ECPS as a conditional test and QuickNavi™-Ebola RDT, and 8) Screening suspect cases with the ECPS as a joint test and QuickNavi™-Ebola RDT. We performed a cost-effectiveness analysis to identify an algorithm that minimizes the cost per patient correctly classified. We performed a one-way and probabilistic sensitivity analysis to test the robustness of our findings.

Results

Our analysis found the dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm to be the most cost-effective screening algorithm for EVD, with an effectiveness of 0.86 and a cost-effectiveness ratio of 106.7 USD per patient correctly classified. It was followed by the ECPS as a conditional test, with an effectiveness of 0.80 and an efficiency of 111.5 USD per patient correctly classified, and by the ECPS as a joint test with the QuickNavi™-Ebola RDT algorithm, with an effectiveness of 0.81 and a cost-effectiveness ratio of 131.5 USD per patient correctly classified. These findings were sensitive to variations in the prevalence of EVD in the suspected population and in the sensitivity of the QuickNavi™-Ebola RDT.

Conclusions

Findings from this study showed that prediction scores and RDTs could improve Ebola screening. The ECPS as a conditional test algorithm and the dual ECPS as a conditional test followed by the QuickNavi™-Ebola RDT algorithm are the best screening choices because they are more efficient and lower the number of confirmation tests and the overall care costs during an EBOV epidemic.

Introduction

Since the mid-nineties, Ebola virus disease (EVD) outbreaks have emerged in sub-Saharan tropical Africa, where about 130 million people live with filovirus exposure risk [1–5]. The infection is plagued by a high case fatality rate (CFR), which ranges between 50% and 90%.

The control of EVD outbreaks relies on the early and accurate implementation of public health measures such as 1) surveillance and detection of suspect cases; 2) ring vaccination and tracing of contacts; 3) prompt isolation of cases and care management; and 4) infection prevention and control measures (household decontamination and safe and dignified burials). The care management of patients with EVD includes standard-of-care (SOC), nutrition, and specific therapeutics [6]. Recently, two monoclonal antibody-based therapies received FDA approval for treating EVD, markedly improving the prognosis of the infection by reducing lethality to below 35% [7].

During EVD outbreaks, surveillance relies on the World Health Organization (WHO) clinical case definition to admit suspect cases at the point-of-care and transfer them to the care unit while waiting for EVD laboratory confirmation. After EVD confirmation, specific treatment is administered to confirmed cases in addition to the SOC. EVD diagnosis relies on the GeneXpert® Ebola Assay (Cepheid, Sunnyvale, CA, USA) [8], an automated, sensitive, and specific reverse transcriptase polymerase chain reaction (RT-PCR) technology.

However, GeneXpert® is still expensive and requires trained staff, a power supply, minimal infrastructure, and a reliable supply chain to be able to use it at the peripheral levels where most EVD outbreaks occur in poor resource settings [9–11]. While in theory the turnaround time is 2–4 hours, in reality delays of 2–5 days have been noted [12, 13]. In addition, the WHO clinical case definition used in the field to screen for EVD is insufficiently accurate to discriminate EVD cases from non-cases at the point-of-care [14–16].

As most common tropical diseases such as malaria, typhoid fever, and meningitis can display EVD-like symptoms, non-EVD cases may also reach the care units for clinical management and further interventions. In this context, false positives are likely to be isolated, raising the workload and straining scarce resources.

For the above reasons (i.e., the cost, technical demand, and long turnaround time for the EVD results, coupled with the poor discriminating performance of the WHO clinical case definition used), the WHO issued a target product profile (TPP) for EBOV tests, including rapid diagnostic tests (RDTs), to shorten the turnaround time and provide high accuracy (desired level: sensitivity >98%, specificity >99%; acceptable level: sensitivity >95%, specificity >99%) [11, 17, 18]. Subsequently, researchers have developed many Ebola (EBOV) antigen-based rapid diagnostic tests (RDTs) and RT-PCR assays [19, 20]. Some of these screening tools are under evaluation or have already been evaluated and showed better performance [20–22].

Simultaneously, some researchers have developed clinical prediction scores as rapid diagnostic tools to assess the disease probability in suspected EVD cases [23–25]. One such screening tool is the extended clinical prediction score (ECPS), which includes clinical and epidemiological predictors. This prediction score showed good diagnostic accuracy (AUROC of 0.88, 95% CI: 0.86–0.89) and a cross-validated area under the ROC curve (AUCCV) of 0.87 [26]. The authors proposed three scenarios for the implementation of the ECPS: 1) the score with an operational cut-off for action; 2) the ECPS as a joint test; and 3) the score as a conditional test to target additional diagnostic testing.

The WHO has recommended the use of rapid, sensitive, safe, and simple Ebola diagnostic tests to optimize EVD screening, underscoring the need for innovative approaches to screening Ebola suspect cases [17, 27]. Developing novel screening methods therefore requires evidence of clinical benefit and economic efficiency: identifying cost-effective strategies for screening EVD suspect cases can help policymakers determine whether RDTs and diagnostic prediction scores could efficiently replace the WHO case definition for Ebola suspect cases at the point-of-care. Thus far, there is no evidence of the cost-effectiveness and efficiency of these rapid diagnostic tools.

This study assesses, from a healthcare perspective, the cost-effectiveness of combining Ebola rapid diagnostic tests (RDTs) and prediction tools in the screening of EVD cases based on a decision analytic model.

Methods

Decision analysis is a quantitative approach to evaluate the consequences of alternative strategies and to guide the choice of the most effective or cost-effective course of action under uncertainty [28]. Decision analysis requires a decision tree that identifies every possible decision and the consequence of each decision and then assigns a probability and a payoff to each consequence [29].

We considered the healthcare system perspective for this analysis, and all analyses were performed using TreeAge Pro software version 2021 (TreeAge, Williamstown, Massachusetts, USA).

Screening algorithms

The surveillance used the WHO case definition for EVD to recruit suspect cases ("algorithm 1"). As per the WHO case definition, a suspect case is any person with sudden fever and hemorrhage, or sudden fever with at least three other general symptoms such as severe headache, muscle and joint pain, and fatigue [30]. We compared algorithm 1 with 1) the ECPS at -3 points of cut-off, 2) the ECPS as a joint test, 3) the ECPS as a conditional test, and 4) each of these approaches followed by the QuickNavi™-Ebola RDT (i.e., the WHO case definition, the ECPS at -3 points of cut-off, the ECPS as a joint test, or the ECPS as a conditional test, each followed by the QuickNavi™-Ebola RDT).

For algorithms combining two screening tests, a single positive result on either test (e.g., RDT, clinical prediction score, or WHO case definition) is sufficient to isolate an EVD suspect case. Table 1 describes the different screening algorithms compared in this analysis.

Table 1. Description of screening algorithms compared in the model.

Screening algorithm Algorithm description
Algorithm 1 Screening EVD suspect cases with WHO case definition
Algorithm 2 Screening EVD suspect cases with the ECPS at -3 points of cut-off
Algorithm 3 Screening EVD suspect cases using ECPS as a joint test or approach
Algorithm 4 Screening EVD suspect cases using ECPS as a conditional test or approach
Algorithm 5 Screening EVD suspect cases by combining/sequencing the WHO case definition for suspect cases first and the QuickNavi™-Ebola RDT
Algorithm 6 Screening EVD suspect cases by combining/sequencing the ECPS at -3 points of cut-off first and the QuickNavi™-Ebola RDT
Algorithm 7 Screening EVD suspect cases by combining/sequencing the ECPS as a conditional test or approach first and the QuickNavi™-Ebola RDT.
Algorithm 8 Screening EVD suspect cases by combining/sequencing the ECPS as a joint test or approach first and the QuickNavi™-Ebola RDT.

Footnotes: ECPS: Extended clinical prediction score, RDT: rapid diagnostic test.

As described by Tshomba et al. [26], the two ECPS-based screening methods (the joint and conditional tests) assume that suspects with no reported risk of exposure are free of the disease, so the clinical team takes no further action for them. In the joint approach, all suspects with low-, intermediate-, or high-risk reported exposure are clinically assessed, and only those with a predicted likelihood of EVD greater than 5% are referred for isolation. In the conditional approach, all suspects with high-risk reported exposure are isolated regardless of their estimated probability of disease; suspects with low or intermediate reported exposure are isolated only if their EVD-predicted probability exceeds 5%. A sketch of both rules follows.
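The snippet below is a minimal, illustrative rendering of the two decision rules as described above. The mapping from an ECPS score to a predicted EVD probability comes from the published model [26] and is not reproduced here, so the functions take a precomputed probability and a reported exposure level as inputs; the function names and exposure labels are ours, not the paper's.

```python
# Illustrative sketch of the joint and conditional ECPS screening rules
# described above. The 5% action threshold and the exposure categories follow
# the text; the ECPS-score-to-probability mapping is taken as a precomputed
# input (the published model is not reproduced here).

THRESHOLD = 0.05  # predicted EVD probability above which isolation is advised

def joint_test(exposure: str, predicted_prob: float) -> bool:
    """Joint approach: no reported exposure is assumed disease-free; all
    other suspects are isolated only if their predicted probability > 5%."""
    if exposure == "none":
        return False
    return predicted_prob > THRESHOLD

def conditional_test(exposure: str, predicted_prob: float) -> bool:
    """Conditional approach: high-risk exposure is isolated regardless of the
    predicted probability; low/intermediate exposure is isolated only above
    the 5% threshold; no reported exposure is ruled out."""
    if exposure == "none":
        return False
    if exposure == "high":
        return True
    return predicted_prob > THRESHOLD  # low or intermediate exposure

# A suspect with high-risk exposure but a low predicted probability is
# isolated under the conditional rule but not under the joint rule.
print(joint_test("high", 0.03))        # False
print(conditional_test("high", 0.03))  # True
```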

Decision models and outcomes

Fig 1 depicts a decisional tree comparing the related screening-action strategies. From the decision root node, each resulting branch represents the strategy chosen to screen the EVD suspect cases. S1 Fig draws the complete decision tree model, and S1 File describes and defines each algorithm tested in the model (S1 Fig and S1 File).

Fig 1. Decision tree for eight competing algorithms for the screening of Ebola virus disease (EVD).

Fig 1

The figure shows a reduced tree; not all branch sequences are displayed. The omitted branches follow the same structure as the two examples shown, i.e., a single test to screen Ebola suspects or two tests combined in sequence. Algorithms 1, 2, and 3 use a single screening test; their visual representation is similar to the branch shown for algorithm 4. Algorithms 6, 7, and 8 use two sequential screening tests; their visual representation is similar to the branch shown for algorithm 5. In algorithms with two screening tests, the QuickNavi™-Ebola RDT is used after the first screening test. EVD = Ebola virus disease; ECPS = extended clinical prediction score; WHO = World Health Organization.

Decision branches may provide the following outcomes: 1) "EVD case isolated" (a true EVD case (true positive) isolated and clinically managed in temporary healthcare with the SOC); 2) "Non-EVD case erroneously isolated" (a non-Ebola case (false positive) isolated and cared for in temporary healthcare with the SOC); 3) "EVD case erroneously ruled out" (a true Ebola case sent back into the community (false negative)); 4) "Non-EVD case correctly ruled out" (a non-Ebola case (true negative) sent back to the community).

Probability estimates

Table 2 shows the probabilities included in our cost-effectiveness model. The table includes baseline estimates and plausible intervals to be used for the sensitivity analysis. We retrieved probabilities from the literature review or computed them from the DRC’s 2018–2020 EVD outbreak’s surveillance data. The probabilities included the prevalence of EVD among the suspected population, the sensitivity, and the specificity of the RDT or prediction scores included in the decision model (Table 2).

Table 2. Probability parameters included in the decision tree model.

Parameter Baseline value Plausible range or 95%CI Reference source
Prevalence of EVD infection in suspected population (%) 6.2 2.0–10.0 [26]
Sensitivity following the WHO criteria (%) 81.5 74.1–87.2 [15]
Specificity following the WHO criteria (%) 35.7 28.5–43.6 [15]
Sensitivity of ECPS at -3 points of cut-off (%) 98.0 96.5–98.9 [26]
Specificity of ECPS at -3 points of cut-off (%) 37.0 36.0–37.9 [26]
Sensitivity of ECPS as a joint test (%) 80.0 76.8–83.0 Computed using data published in [26]
Specificity of ECPS as a joint test (%) 81.9 81.1–82.7 Computed using data published in [26]
Sensitivity of ECPS as a conditional test (%) 65.4 61.6–69.1 Computed using data published in [26]
Specificity of ECPS as a conditional test (%) 87.1 86.4–87.7 Computed using data published in [26]
Sensitivity of QuickNavi-Ebola RDT (%) 87.4 63.6–96.8 [20]
Specificity of QuickNavi-Ebola RDT (%) 99.6 99.3–99.8 [20]

Effectiveness

Table 3 presents the effectiveness values included as payoffs for the outcomes in the decision tree model. We estimated the effectiveness of each EVD screening algorithm by considering all the steps of EVD suspect case management, and quantified it in terms of the number of EVD cases correctly isolated (true positives). The result of the GeneXpert® Ebola test was considered the reference standard: a positive result confirms an EVD infection and calls for specific care. We considered any "case correctly isolated" and "non-case correctly ruled out" as benefits and assigned an effectiveness value of one to each true positive or true negative.

Table 3. Effectiveness payoff assigned to outcomes of the decision tree model.

Disease status Action taken Disease outcome Baseline value Plausible range Source
EVD-positive case Correctly isolated True-positive 1 - by assumption
EVD-negative case Correctly ruled out True-negative 1 - by assumption
EVD-negative case Erroneously isolated False-positive -0.077 (-0.124)- (-0.037) See S2 File.
EVD-positive case Not isolated False-negative -2.49 (-2.60)-(-2.38) [33]

For each non-case erroneously isolated, we assigned as payoff the iatrogenic harm of isolation: minus the probability of infection given random contact with an Ebola patient, as computed by Gilbert [31]. Because frontline vaccination of healthcare workers would be implemented, we computed this probability of infection using the secondary attack rate (SAR) of 22.9% (95% CI: 11.6%–34.2%) for direct physical contact without nursing care in the hospital [32].

We assigned this probability, scaled by the number of non-EVD contacts exposed through the classification error, as a negative payoff; negative because it represents the harm caused by isolation, i.e., iatrogenic harm. For each erroneously isolated false positive, we assumed that the suspect and two family caregivers were non-EVD (i.e., three non-EVD persons exposed in the isolation ward). Thus, a value of -0.077 was assigned to each isolated non-EVD case.

We hypothesized that the community as a whole would be exposed to Ebola virus infection by false negatives returned to the community. Therefore, we assigned a score equal to minus the expected number of Ebola cases that a ruled-out false-negative case would produce in the entirely susceptible population, i.e., minus the basic reproduction number R0, which accounts for transmissibility and the typical number of community contacts such a false negative would harm. In a fully susceptible population, the basic reproduction number is the number of secondary cases that one case would generate.

For each EVD case erroneously ruled out, we assigned a value of -2.49, i.e., minus the R0 as estimated by Lewnard [33], as the effectiveness payoff (Table 3). S2 File gives the details of the iatrogenic probability computation.
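To make the payoff logic concrete, the sketch below combines the accuracy parameters in Table 2 with the payoffs in Table 3 to compute the expected effectiveness per suspect. The serial combination rule for two-test algorithms (a suspect is isolated if either test is positive, with tests assumed independent) is our reading of the algorithm descriptions, not a formula stated in the paper. Applied to the baseline values, the sketch reproduces the effectiveness figures later reported in Table 4 to within rounding: 0.31 for the WHO case definition, about 0.79 versus the reported 0.80 for the ECPS as a conditional test, and 0.86 for Algorithm 7.

```python
# Expected effectiveness per suspect: Table 2 probabilities combined with
# the Table 3 payoffs (TP = 1, TN = 1, FP = -0.077, FN = -2.49).

PAYOFF = {"TP": 1.0, "TN": 1.0, "FP": -0.077, "FN": -2.49}

def effectiveness(prev, se, sp, payoff=PAYOFF):
    """Expected payoff per suspect screened at a given EVD prevalence."""
    return (prev * se * payoff["TP"]                 # EVD case isolated
            + (1 - prev) * sp * payoff["TN"]         # non-case ruled out
            + (1 - prev) * (1 - sp) * payoff["FP"]   # iatrogenic isolation
            + prev * (1 - se) * payoff["FN"])        # case sent back (R0 harm)

def serial(se1, sp1, se2, sp2):
    """Two-test algorithm: isolate if either test is positive. Sensitivity
    rises, specificity is the product (independence assumed)."""
    return se1 + (1 - se1) * se2, sp1 * sp2

p = 0.062                                            # baseline EVD prevalence
print(effectiveness(p, 0.815, 0.357))                # WHO definition: ~0.31
print(effectiveness(p, 0.654, 0.871))                # ECPS conditional: ~0.79
se7, sp7 = serial(0.654, 0.871, 0.874, 0.996)        # + QuickNavi-Ebola RDT
print(effectiveness(p, se7, sp7))                    # Algorithm 7: ~0.86
```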

Costs

We used a micro-costing approach to estimate the operational direct costs. Micro-costing is a technique relying on a detailed listing of every resource consumed, separately for each step of an individual action [34]. For laboratory workers, we applied the DRC Ministry of Health (MoH) salary scale in force during the 2018–2020 outbreak period, and we calculated the time spent on sample collection and analysis (GeneXpert® test). We assumed that all surviving suspects gave a blood sample for testing and received extensive supportive care while waiting for the results.

We assigned each suspect in the care unit a cost of USD 342, covering 1) supportive systematic treatment, 2) personal protective equipment, and 3) personnel costs, as estimated by Bartsch et al. [35], plus 4) the cost of surveillance, estimated at USD 1.8 [36].

Running a single GeneXpert® test took an average of 107 min. This time excluded: 1) the sample collection process at the care unit; 2) the pre-analytical phase in the laboratory (material preparation, labeling, and notification form); 3) the sample reception and unpacking; 4) the sample inactivation and aliquoting within the glovebox; and 5) results reading and delivery. We fixed the cost of a GeneXpert® test at USD 20, corresponding to the pricing given to subsidized partners [37]. We assigned USD 10 to the QuickNavi™-Ebola RDT [38].

We assumed negligible costs for additional supplies (cryotubes, pipet tips, and other supplies), the cost of Cepheid GeneXpert® platform depreciation over time, and capital costs (costs incurred in the same year).

Analysis

Efficiency analysis

The efficiency of each screening strategy was assessed on the basis of the cost-effectiveness ratio in terms of USD per EVD case isolated. The cost-effectiveness ratio for a given algorithm was calculated using the following formula:

$$\text{Cost-effectiveness ratio} = \frac{\text{Cost of a given Ebola screening algorithm}}{\text{Effectiveness of that Ebola screening algorithm}} \tag{1}$$

Our primary outcomes were 1) the expected costs per suspected case, 2) the number of confirmed EVD cases isolated, and 3) the cost-effectiveness of the proposed screening algorithms. S2 File describes in detail the technical approach used to compute the total cost of screening suspects, the number of EVD isolated for each screening algorithm, and each probability used in the formula.

We computed the incremental cost-effectiveness ratio (ICER) of isolating one additional EVD case by comparing each alternative algorithm to the best screening algorithm after ranking their effectiveness. The ICER was the incremental cost divided by incremental effectiveness, weighted by the EVD prevalence among the suspects. The resulting cost-effectiveness ratio for each algorithm represents the magnitude of additional health gained (e.g., EVD isolated here) per additional unit of resources spent.

The ICER was calculated using the formula as follows:

$$\mathrm{ICER} = \frac{\text{Cost of a given algorithm} - \text{Cost of the comparator algorithm}}{\text{Effectiveness of a given algorithm} - \text{Effectiveness of the comparator algorithm}} \tag{2}$$

The numerator represents the incremental cost: in the case of Ebola disease, the additional expense incurred throughout the screening process, such as supplies used, to obtain one additional health effect (here, an isolated EVD case). The denominator represents the incremental effectiveness, i.e., the gain in the effectiveness of Ebola screening over the screening process.
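As a worked example, the snippet below applies Eqs (1) and (2) to the baseline Table 4 costs together with the effectiveness values computed in the earlier sketch (0.7948 and 0.8567, before rounding). The results match the reported efficiency figures (111.5 and 106.7 USD per patient correctly classified), and the ICER lands near the reported 44.6 USD; the small deviation is attributable to rounding of the published inputs.

```python
# Worked example of Eqs (1) and (2) using Table 4 baseline costs and the
# effectiveness values from the previous sketch.

def cer(cost, effect):
    """Eq (1): USD per patient correctly classified."""
    return cost / effect

def icer(cost, effect, cost_ref, effect_ref):
    """Eq (2): incremental USD per additional patient correctly classified."""
    return (cost - cost_ref) / (effect - effect_ref)

cost4, eff4 = 88.6, 0.7948   # Algorithm 4: ECPS as a conditional test
cost7, eff7 = 91.4, 0.8567   # Algorithm 7: ECPS conditional + QuickNavi RDT

print(round(cer(cost4, eff4), 1))                # 111.5
print(round(cer(cost7, eff7), 1))                # 106.7
print(round(icer(cost7, eff7, cost4, eff4), 1))  # ~45.2 (reported: 44.6)
```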

Sensitivity analysis

We performed one-way sensitivity analyses, i.e., deterministic analyses, and a probabilistic sensitivity analysis. The exact values of each parameter used in the model are uncertain. We therefore performed a series of one-way sensitivity analyses to evaluate the effect of changes in parameter values over their plausible ranges on the efficiency ranking of the algorithms, i.e., to test the robustness of our ranking conclusion.

The parameters included in the model and considered for this sensitivity analysis were: 1) the prior Ebola virus disease probability (i.e., the disease prevalence in the suspected population); 2) the sensitivities and specificities of the RDT and scores; 3) the cost of standard-of-care; 4) the cost of the QuickNavi™-Ebola RDT; and 5) the cost of the GeneXpert® test. Additionally, as there are currently no marketed RDTs or therapies for Ebola (prices are still under negotiation), we performed a two-way sensitivity analysis exploring the effects of jointly varying the price of the QuickNavi™-Ebola RDT and the price of SOC on the algorithm ranking. Three levels of the annual per capita 2021 DRC gross domestic product (2021-DRC GDP) were used as willingness-to-pay thresholds in this analysis (one, two, and three times the 2021-DRC GDP).

To evaluate the overall sensitivity of the model's cost-effectiveness results, we performed a probabilistic sensitivity analysis (PSA) using Monte Carlo simulation. The PSA quantifies the degree of confidence in the cost-effectiveness outputs given the uncertainty in the model inputs [39]. We plotted the cost-effectiveness acceptability curve (CEAC) to summarize the impact of parameter uncertainty on the cost-effectiveness outcome, the incremental cost-effectiveness ratio. The CEAC plots a range of cost-effectiveness thresholds on the horizontal axis against the probability that the screening algorithm is cost-effective at that threshold on the vertical axis.

For the simulation, we replaced the parameters' point estimates with probability distributions for selected decision model parameters. We assumed a beta distribution for all probabilities and a gamma distribution for all nonnegative numeric parameters. We set the willingness-to-pay (WTP) threshold at USD 50,000. Lastly, as suggested by the World Health Organization Choosing Interventions that are Cost-Effective (WHO-CHOICE) group, we also used the country-specific WTP threshold to identify the cost-effective algorithm [40].

For the DRC, we used the annual per capita 2021-DRC gross domestic product, which was USD 584.1 [41]. The best decision is to choose the algorithm that has the highest ICER and falls just at or below the WTP threshold [40].
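A minimal sketch of the PSA loop described here follows, assuming beta draws for probabilities (parameterized from their baseline means with an assumed concentration) and gamma draws for costs; the paper's fitted distribution parameters live in the TreeAge model and are not reproduced, so the spreads below are illustrative. The net-monetary-benefit comparison is a standard way to read one CEAC point and may differ in detail from the TreeAge implementation.

```python
# Illustrative probabilistic sensitivity analysis (PSA): beta draws for
# probabilities, gamma draws for costs, net monetary benefit at the DRC WTP.
# Distribution spreads are assumptions, not the paper's fitted parameters.
import numpy as np

rng = np.random.default_rng(2021)
N = 1000  # Monte Carlo iterations, as in the reported PSA

def beta_draws(mean, k=200):
    """Beta draws with the given mean; k = alpha + beta is an assumed
    concentration controlling the spread."""
    return rng.beta(mean * k, (1 - mean) * k, size=N)

def gamma_draws(mean, sd):
    """Gamma draws moment-matched to an assumed mean and standard deviation."""
    shape = (mean / sd) ** 2
    return rng.gamma(shape, scale=mean / shape, size=N)

def effectiveness(prev, se, sp):
    """Expected payoff per suspect, as in the earlier sketch."""
    return (prev * se + (1 - prev) * sp
            - 0.077 * (1 - prev) * (1 - sp) - 2.49 * prev * (1 - se))

prev = beta_draws(0.062)                          # EVD prevalence in suspects
se4, sp4 = beta_draws(0.654), beta_draws(0.871)   # ECPS as a conditional test
se7 = se4 + (1 - se4) * beta_draws(0.874)         # + QuickNavi RDT (serial)
sp7 = sp4 * beta_draws(0.996, k=2000)
cost4 = gamma_draws(88.6, sd=15.0)                # assumed cost spreads (USD)
cost7 = gamma_draws(91.4, sd=15.0)

wtp = 584.1                                       # 2021 DRC GDP per capita
nmb4 = wtp * effectiveness(prev, se4, sp4) - cost4
nmb7 = wtp * effectiveness(prev, se7, sp7) - cost7
print((nmb7 > nmb4).mean())  # share of iterations favoring Algorithm 7
```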

Ethics statement

This study was part of the Ebola outbreak response and disease surveillance in the North-Kivu Ebola outbreak in the Democratic Republic of the Congo and did not constitute human research. This economic evaluation study used published results from the literature to build the decision model. Thus, it did not require ethical approval.

Results

The model output related to the efficiency of the application of each screening algorithm on a suspect case of EVD is presented in Table 4, which reports the 1) cost and effectiveness of the person screened for the complete screening-action process, 2) incremental cost, 3) incremental effectiveness, 4) incremental cost-effectiveness ratio, and 5) efficiency of each algorithm compared with the most effective screening algorithm studied.

Table 4. Cost, incremental cost, effectiveness, incremental effectiveness, cost-effectiveness ratio, and incremental cost-effectiveness ratio of Ebola screening-action algorithms based on baseline value.

Screening algorithm Cost (USD) Incremental cost (USD) Effectiveness (patients correctly classified) Incremental effectiveness (patients correctly classified) Efficiency (USD per patient correctly classified) Incremental cost-effectiveness (USD per additional patient correctly classified) Algorithm Ranking
Algorithm 4 88.6   0.80   111.5   1
Algorithm 7 91.4 2.8 0.86 0.06 106.7 44.6 2
Algorithm 8 106.3 14.9 0.81 -0.05 131.5 -308.9* 3
Algorithm 3 118.9 27.5 0.77 -0.08 153.6 -331.7* 4
Algorithm 2 250.2 158.9 0.36 -0.50 696.6 -319.3* 5
Algorithm 6 252.1 160.7 0.36 -0.50 697.3 -324.5* 6
Algorithm 5 259.0 167.6 0.34 -0.51 752.9 -326.8* 7
Algorithm 1 274.9 183.5 0.31 -0.55 885.5 -335.8* 8

*: absolutely dominated

At the baseline values, i.e., using the point estimates of each input parameter, the cost of screening with the WHO case definition was USD 274.9 per patient screened ("Algorithm 1") and USD 250.2 with the ECPS at the -3 cut-off point of the score ("Algorithm 2"). The cost was USD 118.9 for screening with the ECPS as a joint test ("Algorithm 3") and USD 88.6 for screening with the ECPS as a conditional test ("Algorithm 4"). The screening costs per case isolated increased 1) from USD 250.2 to 252.1 when adding selective QuickNavi™-Ebola RDT testing after a negative ECPS at -3 points of cut-off ("Algorithm 6"), and 2) from USD 88.6 to 91.4 after a negative ECPS as a conditional test ("Algorithm 7"). The screening costs per isolated case decreased 1) from USD 274.9 to 259.0 (an incremental USD 15.9 [5.8%] decrease) when adding selective QuickNavi™-Ebola RDT testing after a negative WHO case definition ("Algorithm 5"), and 2) from USD 118.9 to 106.3 after a negative ECPS as a joint test ("Algorithm 8").

The ECPS as a conditional test ("Algorithm 4") was the cheapest option, decreasing the screening costs from USD 274.9 to USD 88.6 per patient screened compared with the traditional WHO case definition algorithm (an incremental USD 186.3 [67.8%] decrease) (Table 4).

We found fewer EVD cases (true positives) with the two algorithms without RDT testing (algorithms 1 and 2) and the two dual screening algorithms with RDT testing (algorithms 5 and 6). In contrast, the highest numbers of patients correctly classified were obtained with dual screening with selective QuickNavi™-Ebola RDT testing after a negative ECPS as a conditional test ("Algorithm 7") or as a joint test ("Algorithm 8"), and with the ECPS alone as a joint or conditional test ("Algorithm 3" or "Algorithm 4"). However, the six other screening algorithms were absolutely dominated by the algorithm using the ECPS as a conditional test ("Algorithm 4") and the algorithm sequencing the ECPS as a conditional test with QuickNavi™-Ebola RDT testing ("Algorithm 7").

The traditional algorithm using the WHO case definition for suspects ("Algorithm 1") had an effectiveness of 0.31. This fraction reflects the number of EVD suspects correctly classified after accounting for the harm caused by incorrect classifications; it can be read as the proportion of patients correctly categorized per patient screened. It cost USD 274.9 per patient screened, for an efficiency of USD 885.5 per patient correctly classified. The algorithm using the ECPS as a conditional test and the dual algorithm pairing the ECPS as a conditional test with the QuickNavi™-Ebola RDT were the most cost-effective for EVD suspect screening, with efficiencies of USD 111.5 and USD 106.7 per patient correctly classified, respectively.

Fig 2 shows the results of the one-way sensitivity analysis. Variations in input parameters, including the prevalence of EVD in the suspected population and the sensitivity of the QuickNavi™-Ebola RDT, changed the ranking and thus the conclusion of the analysis. For instance, the efficiency of screening with the ECPS as a conditional test and with selective QuickNavi™-Ebola RDT testing after a negative ECPS was about USD 80.0 and 84.3 per patient correctly classified, respectively, at a prevalence under 4%, and about USD 146.7 and 124.2 per patient correctly classified at an EVD prevalence of 10% (Fig 2 and S3 Fig). In addition, the variation in disease prevalence among suspected populations changed the effectiveness and cost of the dominant screening algorithms (S4 Fig).

Fig 2. Variations in cost-effectiveness ratios of eight Ebola screening algorithms as a function of prevalence of Ebola virus disease in suspected population and sensitivities of the ECPS as a joint or conditional test, and the QuickNavi™-Ebola RDT.

Fig 2

A is the effect of variation in the prevalence of Ebola virus disease on the efficiency of the algorithms. B is the effect of variation in the sensitivity of the ECPS as a joint test on the efficiency of the algorithms. C is the effect of variation in the sensitivity of the ECPS as a conditional test on the efficiency of the algorithms. D is the effect of variation in the sensitivity of the QuickNavi™-Ebola RDT on the efficiency of the algorithms.

Therefore, the ECPS as a joint or conditional test algorithm had the lowest cost at a prevalence greater than 10% (S4 Fig). Our one-way sensitivity analysis also indicates that the prevalence of EVD in the suspected population, the cost of the QuickNavi™–Ebola RDT and the cost of SOC are the most crucial variables that influence the ICER for the dual ECPS as a conditional test with QuickNavi™-Ebola RDT (Fig 3).

Fig 3. Tornado diagram presenting One-way sensitivity analysis of ICER comparing the combining ECPS as a conditional test with QuickNavi™–Ebola RDT algorithm (Algorithm 7) to WHO case definition for the suspect algorithm (Algorithm 1) and the ECPS as a joint test algorithm (Algorithm 3).

Fig 3

Vertical line represents incremental effects when using baseline estimates of all parameters. Not all the parameters tested in the sensitivity analysis are visible on the plot. All key variables were included in the sensitivity analysis. Alg. = algorithm; ECPS = extended clinical prediction score; ICER = incremental cost-effectiveness ratio; RDT = rapid diagnostic test; blue: decrease; red: increase.

Ninety-five percent of the total uncertainty in the cost-effectiveness outcome was attributable to the following parameters: 1) the cost of SOC (81%), 2) the prevalence of EVD in the suspected population (10%), and 3) the cost of the QuickNavi™-Ebola RDT (4%).

Fig 4 depicts the two-way sensitivity analysis on the cost of the QuickNavi™-Ebola RDT and the cost of the SOC. S1 Table shows the cost-effectiveness ratios for the tiered costs of the QuickNavi™-Ebola RDT and the levels of SOC under the different algorithms. When the costs of SOC and the QuickNavi™-Ebola RDT were varied together, the most cost-effective screening algorithms remained those yielding the highest number of true positives and the lowest number of false positives. When the cost of the QuickNavi™-Ebola RDT was low (USD 10), the ranking of the screening algorithms did not change even when the cost of SOC was > USD 150. However, the efficiency estimates were altered in low-cost SOC contexts (e.g., USD 150 per course) when the cost of the QuickNavi™-Ebola RDT was > USD 10 (Fig 4 and S1 Table).

Fig 4. Two-way sensitivity analysis comparing the net health benefit of EVD screening algorithms by varying both the cost of the QuickNavi™-Ebola RDT and the cost of the standard of care.

Fig 4

The figure shows the two-way sensitivity analysis based on variations in the cost of the QuickNavi™-Ebola RDT and the cost of the SOC at a willingness-to-pay of USD 584.1. Results at willingness-to-pay values of USD 1168.2 and USD 1752.3 are not shown, as they display the same pattern as at USD 584.1.

In probabilistic sensitivity analysis (PSA), the results showed that the dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm displayed the highest probability of being cost-effective among the evaluated algorithms, as shown in Fig 5.

Fig 5. Cost-effectiveness scatterplot depicting the probabilistic sensitivity analysis (PSA) for 1000 iterations of simulated cost-effectiveness ratio of 8 algorithms for screening Ebola virus disease suspects.

Fig 5

Fig 6 shows the cost-effectiveness acceptability curves and the probability of being cost-effective for each screening algorithm. The screening algorithm with the ECPS as a conditional test ("Algorithm 4") was cost-effective in about 31% of simulations at WTP values below USD 200 and in 0% of simulations at WTP values of USD 300 and higher. The probability that the dual ECPS as a conditional test with QuickNavi™-Ebola RDT algorithm was the most cost-effective increased from a WTP threshold of USD 300 and reached 100% at WTP values of USD 500 and higher, while this probability was zero or nearly zero for all other algorithms across the WTP threshold spectrum.

Fig 6. Cost-effectiveness acceptability curve comparing Algorithm 1 (screening with the WHO case definition) to seven Ebola screening algorithms.

Fig 6

The curves depict the probability that each screening algorithm is cost-effective at varying WTP thresholds, comparing the dual ECPS as a conditional test with QuickNavi™-Ebola RDT ("Algorithm 7") against the other screening algorithms. "Algorithm 4" was cost-effective in about 31% of simulations at WTP below USD 200 and in 0% of simulations at WTP USD 300; "Algorithm 7" was cost-effective in 68.4% of simulations at WTP USD 100, in 97.2% of simulations at WTP USD 350, and in 100% of simulations at WTP of USD 500 and higher. Abbreviations: Alg. = algorithm; EVD = Ebola virus disease; ECPS = extended clinical prediction score; WTP = willingness-to-pay.

At the USD 50,000 WTP threshold, the PSA showed that single-test screening with the ECPS as a joint or conditional test ("Algorithm 3" or "Algorithm 4") and dual screening with selective QuickNavi™-Ebola RDT testing after a negative ECPS as a conditional test ("Algorithm 7") or as a joint test ("Algorithm 8") were cost-effective: they were inexpensive and highly effective in 100% of simulations compared with the traditional screening algorithm based on the WHO case definition. At this WTP, the dual screening algorithm with the ECPS at the -3 point cut-off followed by the QuickNavi™-Ebola RDT ("Algorithm 6") was cost-effective in 90.3% of simulations; the ECPS at the -3 point cut-off alone ("Algorithm 2") in 89.7% of simulations; and selective QuickNavi™-Ebola RDT testing after a negative WHO case definition ("Algorithm 5") in about 100% of simulations (Fig 7). Additionally, the dual screening algorithm with the ECPS as a conditional test and the QuickNavi™-Ebola RDT was cost-effective in 100% of simulations compared with any other screening algorithm.

Fig 7. Incremental cost-effectiveness of each algorithm compared to the WHO case definition-screening algorithm (Algorithm 1) during iterations of Monte Carlo simulation.

Fig 7

The ellipse represents 95% confidence points. The diagonal dashed line represents ICERs at a WTP threshold of USD 50,000. Points to the right of this dashed line are considered cost-effective. The dotted horizontal line shows an incremental cost of USD 0; points below this line represent iterations in which the given algorithm was cost-saving compared with Algorithm 1. This figure does not present all algorithm comparisons against Algorithm 1; those not presented were cost-saving in 100% of simulations at this WTP threshold. Green points: ICERs below the WTP line (the maximum acceptable ICER) in Monte Carlo simulations; the algorithm is considered cost-effective. Red points: ICERs above the WTP line; the algorithm is considered more costly and less effective. Abbreviations: Alg. = algorithm; EVD = Ebola virus disease; ECPS = extended clinical prediction score; WTP = willingness-to-pay; ICER = incremental cost-effectiveness ratio.

Regarding the "WHO case definition" algorithm, at the WTP threshold of GDP per capita (USD 584.1) per additional EVD isolated, we found it cost-effective in 100% simulations while using ECPS as a joint test algorithm, ECPS as a conditional test algorithm, dual ECPS as a joint test and QuickNavi™-Ebola RDT algorithm, and ECPS as a conditional test and QuickNavi™-Ebola RDT algorithm. We found cost-effectiveness in 88.2% of simulations for ECPS at -3 points of cut-off, 99.6% of simulations for the dual WHO case definition/QuickNavi™-Ebola RDT algorithm, and 88.9% of simulations for EPCS at -3 points of cut-off/QuickNavi™-Ebola RDT algorithm (Fig 8).

Fig 8. Incremental cost-effectiveness of each algorithm compared to WHO case definition algorithm (Algorithm 1) during 1000 iterations of Monte Carlo simulation at a WTP threshold of USD 584.1.

Fig 8

The ellipse represents 95% confidence points. The diagonal dashed line represents ICERs at a WTP threshold of USD 584.1. Points to the right of this dashed line are considered cost-effective. The dotted horizontal line shows an incremental cost of USD 0; points below this line represent iterations in which the given algorithm was cost-saving compared with Algorithm 1. This figure does not present all algorithm comparisons against Algorithm 1; those not presented were cost-saving in 100% of simulations at this WTP threshold. Green points: ICERs below the WTP line (the maximum acceptable ICER) in Monte Carlo simulations; the algorithm is considered cost-effective. Red points: ICERs above the WTP line; the algorithm is considered more costly and less effective. Abbreviations: Alg. = algorithm; WTP = willingness-to-pay; ICER = incremental cost-effectiveness ratio.

The dual screening algorithm with the ECPS as a conditional test and the QuickNavi™-Ebola RDT was cost-effective compared with any other screening algorithm at the country-specific WTP threshold. All algorithms except algorithms 2, 5, and 6 were cost-effective in 100% of simulations when a WTP threshold of three times the GDP per capita per isolated EVD case was used (S5 Fig).

Discussion

Current observations demonstrate that the accuracy of the WHO case definition for EVD suspect cases is inadequate due to its low sensitivity and specificity. Therefore, its use in screening EVD-suspect cases leads to suboptimal effectiveness in the isolation process during outbreaks [14–16].

However, during EVD outbreaks, health professionals use the clinical criteria to isolate suspected Ebola cases while they await confirmation by the GeneXpert® test. Our study findings show that incorporating scoring and RDT tools into the screening algorithms for suspect cases improves the efficiency of isolating EVD suspect cases. The WHO case definition algorithm is both less effective and more costly than the other screening algorithms evaluated.

The ECPS as a joint or conditional test algorithm and the dual screening algorithms (combining the ECPS as a joint or conditional test with the QuickNavi™-Ebola RDT) provided the highest number of EVD cases (true positives) isolated per unit cost in our findings. From a health system perspective, our analysis shows that incorporating the ECPS as a conditional test, alone or followed by the QuickNavi™-Ebola RDT, into EVD case finding was highly cost-effective. These algorithms were inexpensive, more effective, and cost-saving compared with the current WHO case definition algorithm or any other competing algorithm.

Moreover, our analysis showed that, in a context of low SOC cost, a high cost of the QuickNavi™-Ebola RDT changed the ranking of algorithm efficiencies, and the ECPS as a conditional test became the most cost-effective. Therefore, the choice of algorithm will depend on the cost of both the SOC per course and the QuickNavi™-Ebola RDT. The prevalence of EVD among suspected cases in outbreaks ranges between 2% and 10% [1, 26]. Our conclusions, i.e., the ranking of algorithms, changed with variation in prevalence over this range in the one-way sensitivity analysis. The findings also showed that variations in the sensitivity of the QuickNavi™-Ebola RDT changed the algorithms' efficiency ranking, whereas variations in the other sensitivities and specificities of the tests in the model did not. However, variation in the SOC cost changed the algorithms' ranking at the lower estimates, i.e., USD 150 per course. No marketed price for the QuickNavi™-Ebola RDT exists as of now [38]. If the cost of the SOC is less than USD 150 and the price of the RDT is more than USD 10 per test, using the ECPS as a joint or conditional test for screening will provide better value for money for the overall health gained.

According to the results of our model, the ECPS as a conditional test algorithm and the dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm cost USD 88.6 and USD 91.4 per suspect screened, respectively, and both could be considered cost-effective when a USD 50,000 WTP threshold was applied. From the perspective of the DRC public health system, the dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm can be considered suitable for screening EVD suspect cases in outbreak settings: when the DRC annual per capita gross domestic product is applied as the threshold, the cost of screening with this algorithm remains below the WTP threshold (USD 584.1 per additional patient correctly classified).

The WHO recommendations consider interventions costing more than three times the per capita gross domestic product not cost-effective [42]. In our findings, the costs of the ECPS as a joint test, the ECPS as a conditional test, and the dual ECPS as a joint test with QuickNavi™-Ebola RDT algorithms did not exceed the WHO-CHOICE threshold. Therefore, although these three algorithms were not the most cost-effective alternatives, they remained acceptable from the DRC public health system's perspective.

At baseline estimates, our results showed that the difference in cost-effectiveness between the dual algorithm (i.e., the ECPS as a conditional test associated with the QuickNavi™-Ebola RDT) and the ECPS as a joint or conditional test algorithm was marginal. In a context of scarce Ebola RDTs on the market, or difficulty scaling them up, introducing the ECPS as a joint or conditional test for screening could be the better choice. Indeed, none of our algorithm rankings changed in 100% of the simulations at the 2021-DRC GDP per capita willingness-to-pay threshold compared with the WHO case definition.

All other factors being equal, the cost-effectiveness threshold is the amount a decision-maker is willing to pay for a unit of health effect. A cost-effectiveness analysis should therefore be based on the specific health effect targets to achieve, the budget constraints to respect, and the willingness-to-pay ceiling provided by the primary user of the analysis results. The objective of the analysis could thus be to minimize the cost of reaching a health effect target, to maximize the health effect under a budget constraint, or to determine which algorithm to consider cost-effective. A cost-effectiveness threshold, such as the per capita gross domestic product, is usually chosen to identify the screening algorithms that provide the best value for the cost.

Whether that threshold is a good proxy for a country's willingness-to-pay remains unclear, however, as no linear relationship between the two has been established. The per capita gross domestic product usually does not capture the social willingness-to-pay, which includes not only the market willingness-to-pay but also nonmarket values, i.e., social preferences. Thus, the choice of threshold depends on how decision-makers, health managers, and patients weigh the value of health benefits. Patients or healthcare managers could apply other preferences, overestimating the value of the health benefit and leading to a very stringent threshold that rules out some efficient algorithms. Conversely, decision-makers not directly concerned by the given health problem (e.g., those living far from the epicenter of the outbreaks) could value the health benefit differently, with a lax threshold that admits some inefficient options. From the healthcare perspective, the choice of the cost-effectiveness threshold could impose an important opportunity cost on providers, e.g., healthcare workers directly concerned with managing scarce resources to gain health.

Therefore, choosing a threshold to identify the cost-effective algorithms to implement requires a real consensus that places thresholds (ICERs) in the context of their application, a choice that considers local policies and managerial options such as funding resources, ethics, feasibility, and local participation [43–45]. Moreover, findings from this study support the idea that it is worth using some of these algorithms to screen EVD suspects in outbreak contexts where emergency funding is available during the epidemic period. However, integrating these cost-effective algorithms into the Ebola surveillance system requires additional analysis, including a budget impact and feasibility assessment. The budget impact analysis would assess whether the adoption of a new EVD screening strategy is affordable, quantifying the financial impact of adoption given the resource and budget constraints in low- or middle-income countries and the number of unmet needs facing the budget holder (i.e., the health system, government, etc.) [46].

This study explicitly responds to the World Health Organization’s call for an innovative EVD screening strategy, as it assesses the cost-effectiveness aspects and provides valuable data for decision-makers in the context of increased EVD outbreaks in countries in Central and West Africa. However, no commercial RDT is available, and our study used only one RDT in its analyses. Including more than one RDT would give better insight into which RDT to use in screening. No algorithm built into the model was evaluated prospectively. Thus, different algorithms should be evaluated in future outbreaks to assess their real impact.

Conclusion

This study demonstrates that in screening EVD suspects, Ebola clinical prediction scores as rapid diagnostic tools and QuickNavi™-Ebola RDT can be highly cost-effective compared with the traditional WHO clinical case definition.

If prediction scores and RDT are adopted, using dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm is the best screening option as it lowers the costs of confirmation testing and overall care during an EBOV epidemic. In some circumstances, such as those with a low cost of SOC, using the ECPS as a joint or conditional test to screen EVD suspects could be cost-effective in the DRC context. However, additional analyses that investigate the affordability and feasibility and account for all stakeholders’ preferences are required to support their extended use in the surveillance system for the Ebola virus disease in the concerned countries.

Supporting information

S1 File. The detailed description of algorithms tested in the decision tree model.

(DOCX)

S2 File. The supplementary appendix.

(DOCX)

S3 File. CHEERS checklist.

(DOCX)

S1 Table. Cost-effectiveness ratios (USD per EVD isolated) in relation to variation in the cost of QuickNavi™-Ebola RDT and cost of standard-of-care (SOC).

(DOCX)

S1 Fig. Complete decision tree model.

(TIF)

S2 Fig. Variations in cost-effectiveness ratios of eight Ebola screening algorithms as a function of sensitivities of the WHO case definition for the suspect and ECPS at -3 points of cut-off.

A is the effect of variation in the sensitivity of the WHO case definition for the suspect on the efficiency of algorithms. B is the effect of variation in the sensitivity of the ECPS at -3 points of cut-off on the efficiency of algorithms.

(TIF)

S3 Fig. Variations in cost effectiveness ratios of the eight Ebola screening algorithms as a function of the cost of standard-of-care and QuickNavi™-Ebola RDT.

A presents the effect of variation in the cost of standard-of-care on the efficiency of the eight Ebola screening algorithms. B presents the effect of variation in the QuickNavi™-Ebola RDT cost on the efficiency of the 8 Ebola screening algorithms.

(TIF)

S4 Fig. Variations in the effectiveness and cost of the eight Ebola screening algorithms as a function of the prevalence of Ebola virus disease in the suspected population.

A depicts the effect of variation in the prevalence of Ebola virus disease on the effectiveness of the screening algorithms. B depicts the effect of variation in the prevalence of Ebola virus disease on the cost of the screening algorithms. The dotted line shows the threshold value of the prevalence over which the cost of the algorithm changes; above this threshold of 10% disease prevalence, the cost of the ECPS as a joint or conditional test becomes low. Abbreviations: Alg. = algorithm; ECPS = extended clinical prediction score; EVD = Ebola virus disease.

(TIF)

S5 Fig. Incremental cost-effectiveness of each algorithm compared to the WHO case definition algorithm (Algorithm 1) during 1000 iterations of Monte Carlo simulation at a WTP threshold of USD 1,752.3.

The ellipse represents 95% confidence points. The diagonal dashed line represents ICERs at a WTP threshold of USD 1,752.3. Points to the right of this dashed line are considered cost-effective. The dotted horizontal line shows an incremental cost of USD 0; points below this line represent iterations in which an algorithm was cost-saving compared with algorithm 1. This figure does not present all algorithm comparisons against algorithm 1; those not presented were cost-saving in 100% of simulations at this WTP threshold. Green points: ICERs below the WTP line (the maximum acceptable ICER) in Monte Carlo simulations; the algorithm is considered cost-effective. Red points: ICERs above the WTP line; the algorithm is considered more costly and less effective. Abbreviations: Alg. = algorithm; WTP = willingness to pay; ICER = incremental cost-effectiveness ratio.

(TIF)

S1 Data

(ZIP)

Acknowledgments

We are grateful to Professor Lutgarde Lynen from the Institute of Tropical Medicine in Antwerp, Belgium, for her helpful comments on the manuscript.

Data Availability

All relevant data are within the paper.

Funding Statement

This study has received partial support from NIH (Grant reference NIH FIC/R01EY031894). The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  • 1. Rosello A, Mossoko M, Flasche S, Van Hoek AJ, Mbala P, Camacho A, et al. Ebola virus disease in the Democratic Republic of the Congo, 1976–2014. Elife. 2015;4. doi: 10.7554/eLife.09015
  • 2. WHO Ebola Response Team, Aylward B, Barboza P, Bawo L, Bertherat E, Bilivogui P, et al. Ebola virus disease in West Africa—the first 9 months of the epidemic and forward projections. N Engl J Med. 2014;371(16):1481–95. doi: 10.1056/NEJMoa1411100
  • 3. Aruna A, Mbala P, Minikulu L, Mukadi D, Bulemfu D, Edidi F, et al. Ebola Virus Disease Outbreak—Democratic Republic of the Congo, August 2018–November 2019. MMWR Morb Mortal Wkly Rep. 2019;68(50):1162–5. doi: 10.15585/mmwr.mm6850a3
  • 4. Pigott DM, Golding N, Mylne A, Huang Z, Henry AJ, Weiss DJ, et al. Mapping the zoonotic niche of Ebola virus disease in Africa. Elife. 2014;3:e04395. doi: 10.7554/eLife.04395
  • 5. Pigott DM, Golding N, Mylne A, Huang Z, Weiss DJ, Brady OJ, et al. Mapping the zoonotic niche of Marburg virus disease in Africa. Trans R Soc Trop Med Hyg. 2015;109(6):366–78. doi: 10.1093/trstmh/trv024
  • 6. Lamontagne F, Fowler RA, Adhikari NK, Murthy S, Brett-Major DM, Jacobs M, et al. Evidence-based guidelines for supportive care of patients with Ebola virus disease. Lancet. 2018;391(10121):700–8. doi: 10.1016/S0140-6736(17)31795-6
  • 7. Mulangu S, Dodd LE, Davey RT Jr., Tshiani Mbaya O, Proschan M, Mukadi D, et al. A Randomized, Controlled Trial of Ebola Virus Disease Therapeutics. N Engl J Med. 2019;381(24):2293–303. doi: 10.1056/NEJMoa1910993
  • 8. Semper AE, Broadhurst MJ, Richards J, Foster GM, Simpson AJ, Logue CH, et al. Performance of the GeneXpert Ebola Assay for Diagnosis of Ebola Virus Disease in Sierra Leone: A Field Evaluation Study. PLoS Med. 2016;13(3):e1001980. doi: 10.1371/journal.pmed.1001980
  • 9. Dhillon RS, Srikrishna D, Kelly JD. Deploying RDTs in the DRC Ebola outbreak. Lancet. 2018;391(10139):2499–500. doi: 10.1016/S0140-6736(18)31315-1
  • 10. Katawera V, Kohar H, Mahmoud N, Raftery P, Wasunna C, Humrighouse B, et al. Enhancing laboratory capacity during Ebola virus disease (EVD) heightened surveillance in Liberia: lessons learned and recommendations. Pan Afr Med J. 2019;33(Suppl 2):8. doi: 10.11604/pamj.supp.2019.33.2.17366
  • 11. Emperador DM, Mazzola LT, Wonderly Trainor B, Chua A, Kelly-Cirino C. Diagnostics for filovirus detection: impact of recent outbreaks on the diagnostic landscape. BMJ Glob Health. 2019;4(Suppl 2):e001112. doi: 10.1136/bmjgh-2018-001112
  • 12. Nouvellet P, Garske T, Mills HL, Nedjati-Gilani G, Hinsley W, Blake IM, et al. The role of rapid diagnostics in managing Ebola epidemics. Nature. 2015;528(7580):S109–16. doi: 10.1038/nature16041
  • 13. Chua AC, Cunningham J, Moussy F, Perkins MD, Formenty P. The Case for Improved Diagnostic Tools to Control Ebola Virus Disease in West Africa and How to Get There. PLoS Negl Trop Dis. 2015;9(6):e0003734. doi: 10.1371/journal.pntd.0003734
  • 14. Zachariah R, Harries AD. The WHO clinical case definition for suspected cases of Ebola virus disease arriving at Ebola holding units: reason to worry? Lancet Infect Dis. 2015;15(9):989–90. doi: 10.1016/S1473-3099(15)00160-7
  • 15. Lado M, Walker NF, Baker P, Haroon S, Brown CS, Youkee D, et al. Clinical features of patients isolated for suspected Ebola virus disease at Connaught Hospital, Freetown, Sierra Leone: a retrospective cohort study. Lancet Infect Dis. 2015;15(9):1024–33. doi: 10.1016/S1473-3099(15)00137-1
  • 16. Caleo G, Theocharaki F, Lokuge K, Weiss HA, Inamdar L, Grandesso F, et al. Clinical and epidemiological performance of WHO Ebola case definitions: a systematic review and meta-analysis. Lancet Infect Dis. 2020;20(11):1324–38. doi: 10.1016/S1473-3099(20)30193-6
  • 17. World Health Organization. Urgently Needed: Rapid, Sensitive, Safe and Simple Ebola Diagnostic Tests. 2014. Available from: http://www.who.int/mediacentre/news/ebola/18-november-2014-diagnostics/en/ (accessed October 31, 2022).
  • 18. Wonderly B, Jones S, Gatton ML, Barber J, Killip M, Hudson C, et al. Comparative performance of four rapid Ebola antigen-detection lateral flow immunoassays during the 2014–2016 Ebola epidemic in West Africa. PLoS One. 2019;14(3):e0212113. doi: 10.1371/journal.pone.0212113
  • 19. Kaushik A, Tiwari S, Dev Jayant R, Marty A, Nair M. Towards detection and diagnosis of Ebola virus disease at point-of-care. Biosens Bioelectron. 2016;75:254–72. doi: 10.1016/j.bios.2015.08.040
  • 20. Mukadi-Bamuleka D, Bulabula-Penge J, De Weggheleire A, Jacobs BKM, Edidi-Atani F, Mambu-Mbika F, et al. Field performance of three Ebola rapid diagnostic tests used during the 2018–20 outbreak in the eastern Democratic Republic of the Congo: a retrospective, multicentre observational study. Lancet Infect Dis. 2022;22(6):891–900. doi: 10.1016/S1473-3099(21)00675-7
  • 21. Broadhurst MJ, Kelly JD, Miller A, Semper A, Bailey D, Groppelli E, et al. ReEBOV Antigen Rapid Test kit for point-of-care and laboratory-based testing for Ebola virus disease: a field validation study. Lancet. 2015;386(9996):867–74. doi: 10.1016/S0140-6736(15)61042-X
  • 22. Walker NF, Brown CS, Youkee D, Baker P, Williams N, Kalawa A, et al. Evaluation of a point-of-care blood test for identification of Ebola virus disease at Ebola holding units, Western Area, Sierra Leone, January to February 2015. Euro Surveill. 2015;20(12). doi: 10.2807/1560-7917.es2015.20.12.21073
  • 23. Fitzgerald F, Wing K, Naveed A, Gbessay M, Ross JCG, Checchi F, et al. Development of a Pediatric Ebola Predictive Score, Sierra Leone. Emerg Infect Dis. 2018;24(2):311–9. doi: 10.3201/eid2402.171018
  • 24.Genisca AE, Chu TC, Huang L, Gainey M, Adeniji M, Mbong EN, et al. Risk Prediction Score for Pediatric Patients with Suspected Ebola Virus Disease. Emerg Infect Dis. 2022;28(6):1189–97. doi: 10.3201/eid2806.212265 ; PubMed Central PMCID: PMC9155869. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Ingelbeen B, De Weggheleire A, Van Herp M, van Griensven J. Symptom-Based Ebola Risk Score for Ebola Virus Disease, Conakry, Guinea. Emerg Infect Dis. 2018;24(6):1162. doi: 10.3201/eid2406.171812 ; PubMed Central PMCID: PMC6004844. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Tshomba AO, Mukadi-Bamuleka DR, De Weggheleire A, Tshiani OM, Kitenge RO, Kayembe CT, et al. Development of Ebola virus disease prediction scores: Screening tools for Ebola suspects at the triage-point during an outbreak. PLoS One. 2022;17(12):e0278678. Epub 20221216. doi: 10.1371/journal.pone.0278678 ; PubMed Central PMCID: PMC9757576. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.World health Organization. Target Product Profile for Zaïre ebolavirus rapid, simple test to be used in the control of the Ebola outbreak in West Africa 2014. Available from: http://www.finddx.org/wp-content/uploads/2016/02/WHO-TPP-ebola-2014.pdf Accessed on March 8, 2023. [Google Scholar]
  • 28.Pauker SG, Kassirer JP. Decision analysis. N Engl J Med. 1987;316(5):250–8. doi: 10.1056/NEJM198701293160505 . [DOI] [PubMed] [Google Scholar]
  • 29.Kassirer JP. The principles of clinical decision making: an introduction to decision analysis. Yale J Biol Med. 1976;49(2):149–64. PubMed Central PMCID: PMC2595272. [PMC free article] [PubMed] [Google Scholar]
  • 30.World Health Organization. Case definition recommendations for Ebola or Marburg Virus Diseases 2014. Available from: https://www.who.int/csr/resources/publications/ebola/ebola-case-definition-contact-en.pdf Accessed on March 10, 2023. [Google Scholar]
  • 31.Gilbert JA, Meyers LA, Galvani AP, Townsend JP. Probabilistic uncertainty analysis of epidemiological modeling to guide public health intervention policy. Epidemics. 2014;6:37–45. Epub 20131119. doi: 10.1016/j.epidem.2013.11.002 ; PubMed Central PMCID: PMC4316830. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Dean NE, Halloran ME, Yang Y, Longini IM. Transmissibility and Pathogenicity of Ebola Virus: A Systematic Review and Meta-analysis of Household Secondary Attack Rate and Asymptomatic Infection. Clin Infect Dis. 2016;62(10):1277–86. Epub 20160229. doi: 10.1093/cid/ciw114 ; PubMed Central PMCID: PMC4845791. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Lewnard JA, Ndeffo Mbah ML, Alfaro-Murillo JA, Altice FL, Bawo L, Nyenswah TG, et al. Dynamics and control of Ebola virus transmission in Montserrado, Liberia: a mathematical modelling analysis. Lancet Infect Dis. 2014;14(12):1189–95. Epub 20141023. doi: 10.1016/S1473-3099(14)70995-8 ; PubMed Central PMCID: PMC4316822. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Xu X, Lazar CM, Ruger JP. Micro-costing in health and medicine: a critical appraisal. Health Econ Rev. 2021;11(1):1. Epub 20210106. doi: 10.1186/s13561-020-00298-5 ; PubMed Central PMCID: PMC7789519. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Bartsch SM, Gorham K, Lee BY. The cost of an Ebola case. Pathog Glob Health. 2015;109(1):4–9. Epub 20150111. doi: 10.1179/2047773214Y.0000000169 ; PubMed Central PMCID: PMC4445295. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Keita M, Lucaccioni H, Ilumbulumbu MK, Polonsky J, Nsio-Mbeta J, Panda GT, et al. Evaluation of Early Warning, Alert and Response System for Ebola Virus Disease, Democratic Republic of the Congo, 2018–2020. Emerg Infect Dis. 2021;27(12):2988–98. doi: 10.3201/eid2712.210290 ; PubMed Central PMCID: PMC8632192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.FIND. Negotiated prices. Available from: https://www.finddx.org/pricing/genexpert/ Accessed on April 20, 2021.
  • 38.Makiala S, Mukadi D, De Weggheleire A, Muramatsu S, Kato D, Inano K, et al. Clinical Evaluation of QuickNavi(TM)-Ebola in the 2018 Outbreak of Ebola Virus Disease in the Democratic Republic of the Congo. Viruses. 2019;11(7). Epub 20190628. doi: 10.3390/v11070589 ; PubMed Central PMCID: PMC6669708. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Laker LF, Torabi E, France DJ, Froehle CM, Goldlust EJ, Hoot NR, et al. Understanding Emergency Care Delivery Through Computer Simulation Modeling. Acad Emerg Med. 2018;25(2):116–27. Epub 20170921. doi: 10.1111/acem.13272 ; PubMed Central PMCID: PMC5805575. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Sanders GD, Neumann PJ, Basu A, Brock DW, Feeny D, Krahn M, et al. Recommendations for Conduct, Methodological Practices, and Reporting of Cost-effectiveness Analyses: Second Panel on Cost-Effectiveness in Health and Medicine. JAMA. 2016;316(10):1093–103. doi: 10.1001/jama.2016.12195 . [DOI] [PubMed] [Google Scholar]
  • 41.The World Bank. GDP per capita (current US$): World Bank national accounts data, and OECD National Accounts data 2021. Available from: https://data.worldbank.org/indicator/NY.GDP.PCAP.CD Accessed on Jun 15, 2022. [Google Scholar]
  • 42.Schwarzer R, Rochau U, Saverno K, Jahn B, Bornschein B, Muehlberger N, et al. Systematic overview of cost-effectiveness thresholds in ten countries across four continents. J Comp Eff Res. 2015;4(5):485–504. doi: 10.2217/cer.15.38 . [DOI] [PubMed] [Google Scholar]
  • 43.Hope T. Rationing and life-saving treatments: should identifiable patients have higher priority? J Med Ethics. 2001;27(3):179–85. doi: 10.1136/jme.27.3.179 ; PubMed Central PMCID: PMC1733406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Boujaoude MA, Mirelman AJ, Dalziel K, Carvalho N. Accounting for equity considerations in cost-effectiveness analysis: a systematic review of rotavirus vaccine in low- and middle-income countries. Cost Eff Resour Alloc. 2018;16:18. Epub 20180518. doi: 10.1186/s12962-018-0102-2 ; PubMed Central PMCID: PMC5960127. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Rumbold B, Weale A, Rid A, Wilson J, Littlejohns P. Public Reasoning and Health-Care Priority Setting: The Case of NICE. Kennedy Inst Ethics J. 2017;27(1):107–34. doi: 10.1353/ken.2017.0005 ; PubMed Central PMCID: PMC6728154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Garattini L, van de Vooren K. Budget impact analysis in economic evaluation: a proposal for a clearer definition. Eur J Health Econ. 2011;12(6):499–502. doi: 10.1007/s10198-011-0348-5 . [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Jan Rychtář

23 May 2023

PONE-D-23-12291

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

PLOS ONE

Dear Dr. Tshomba,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 07 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jan Rychtář

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Additional Editor Comments:

The reviewer makes several suggestions for revisions and improvements and I urge the authors to take all of these suggestions into considerations and to make appropriate changes to their manuscript


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Tshomba et al performed a computational decision tree analysis to compute the cost-effectiveness of different Ebola screening algorithms. They determined that combining the WHO case definition with the ECPS screening procedure and additionally screening negative suspect cases with the QuickNavi molecular rapid test is the most cost-effective strategy. Additional sensitivity analyses indicate that this recommendation holds across the range of plausible parameter measurement error. The approach used is sensible and is applied reasonably. However, I have a few technical questions/comments that are relevant to the overall conclusions; in particular, I wonder whether the cost of missing an infected individual during screening is appropriately estimated here. The manuscript’s findings are relevant to healthcare workers and public-health officials involved in Ebola outbreak response and address a very important issue with real-world implications. Therefore, I recommend publication after the technical issues have been addressed.

MAJOR COMMENTS:

- Should the potential for additional infections in the community be incorporated into the cost of missing a case (false negative error)? The payoff for this outcome is set to 0 “by assumption” in Table 3, which I’m guessing was chosen to simply represent the lack of benefit acquired from correctly isolating a case. However, missing a true Ebola infection may have additional public-health consequences, as the infected individual may infect several others in the community, which is not considered here. In the current analysis, the cost of erroneous isolation (false positive) is greater than the cost of missing a true case, which would not make sense if an infected individual allowed to remain in the community is expected to infect 2 or more others. Please consider incorporating these secondary effects into your analysis, as the risk of infecting others due to a false negative screening result may outweigh the risk of erroneous isolation, which is quite costly in the current analysis.

- Can the cost of erroneously isolating an uninfected individual (given in Table 3) be reduced by improving isolation practices to reduce the probability of becoming infected as a result of isolation?

- It is not clear to me what the precise definitions of screening algorithms 3 and 4 are. They are simply described as “2) ECPS as a join [sic] test, 3) ECPS as a conditional test” (lines 144-145). The tree in Fig. 1 does not clarify this point, but rather simply labels a branch as “ECPS as a conditional test”. From that description, I’m not sure what other test(s) are performed in addition to the ECPS. Since the molecular RDT is described separately and the only other screening algorithm mentioned is the WHO criteria, I assume the ECPS is being used joint/conditionally with the WHO criteria. Also, in terms of the conditional version of the test, I’m not sure which screening protocol is being applied first. Please clarify further in the text.

- The results in Tables 4 and 5 for algorithms 3 and 4 are exactly the same, which seems unlikely given their different sensitivities/specificities in Table 2. Is this expected?

- I’m confused as to why the sensitivity analyses in Fig. 2B-C show no change in cost-effectiveness for any condition tested. For example, a back-of-the-envelope calculation suggests that in 2C, if the sensitivity of the conditional test increases by ~12% (as it does over the range of the x-axis, 0.61 to 0.69), then the cost-effectiveness ratio of algorithm 4 should decrease substantially, from ~$106/case to ~$95/case, since the number of cases identified should go up by 12% as well (from Equation 4 in S1 Text). This change should be visible in Fig. 2C but is not. Please explain.
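The arithmetic behind this back-of-envelope check, restated for clarity using the reviewer's own figures (a roughly constant total cost divided by ~12% more correctly classified patients):

$$\mathrm{CER}' \approx \frac{\$106\ \text{per patient}}{1.12} \approx \$95\ \text{per patient correctly classified}.$$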

- The sensitivities of the ECPS joint and conditional tests (algorithms 3 and 4) are presumably not independent variables, since both are functions of the sensitivities of the ECPS and the other screening method with which the ECPS was combined (again, possibly the WHO definition?). Therefore, if the joint sensitivity changes, the conditional sensitivity will also change, and treating these two values as separate quantities in the sensitivity analysis doesn’t make much sense. A better choice might be to perform the sensitivity analysis in Fig. 2 on the sensitivities of the individual tests making up the joint/conditional screening protocols (i.e., the ECPS and WHO protocol sensitivities).

- Is the WTP value of $50,000 used in Fig. 6 and specified as the “default” value in the Methods appropriate in any circumstance? It seems to me that the country specific WTP threshold (used in Fig. 7, for example) is much more relevant and that the $50,000 value was chosen arbitrarily and is far too high for the likely use contexts of these screening algorithms.

MINOR COMMENTS:

- The description of algorithm 4 in Table 1 should indicate that this is a conditional test, not a joint test.

- “Joint test” is sometimes written erroneously as “join test”.

- The resolution of some of the original figure files (e.g., Fig. 2) are quite low, making them difficult to read.

- A couple of clauses end with “or so” (e.g., lines 536, 545), which I don’t believe is standard English; perhaps they can be replaced with “etc.”?

- Perhaps defining the term “incremental” (e.g., cost vs. incremental cost) in the text would help readers who are unfamiliar with formal cost-effectiveness analysis.

- Table 5 includes a lot of numerical data and is difficult to interpret quickly. Perhaps plotting these data as a heatmap, for example, would make them more interpretable? The raw tabular data can be included in the supplement.

- Raw numerical outputs from key parts of the cost-effectiveness analysis are included in the main text (Tables 4 and 5). However, raw data from most of the sensitivity analyses (Figs. 3-7) are not available. These data are probably reproducible using the commercial software used by the authors, but this would not be accessible to many readers.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Debra Van Egeren

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 17;18(10):e0293077. doi: 10.1371/journal.pone.0293077.r002

Author response to Decision Letter 0


20 Jun 2023

Kinshasa, Jun 20, 2023

Tshomba Oloma Antoine

Institut National de Recherche Biomédicale (INRB)

Kinshasa, Dem. Rep. of Congo,

antotshomba@yahoo.fr

+243 815602451


Jan Rychtář

Academic Editor

PLOS ONE Journal

plosone@plos.org

Dear Editor,

We are resubmitting our manuscript entitled “Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model” as a Research Article for consideration of publication in the PLOS ONE Journal.

First, we want to express our gratitude to the Editor and Reviewers for their overall very positive comments on our work and their suggestions for improvement. In this letter, we have answered, to the best of our knowledge, the questions, suggestions, and remarks provided by the Editor and Reviewers.

None of the authors has a competing interest to declare, and our manuscript has not been submitted or accepted elsewhere. All authors have contributed to, seen, and approved the final, submitted version of the manuscript.

We have uploaded the following documents:

- The clean version of the Manuscript, file labeled "Manuscript"

- The track changes version of the manuscript, file labeled “Revised Manuscript with Track Changes”.

- The cover letter addressing the editorial and referees’ comments, file labeled “Response to Reviewers”.

- We made available, as raw data, outputs from our main analyses, e.g., one-way sensitivity analysis outputs and outputs from probabilistic sensitivity analysis.

- This economic study did not constitute human research. Thus, participant consent was not applicable, and the study did not require ethical approval.

- Finally, after creating and adding new figures and supplementary files, we re-ordered them in the text.

Again, thank you for the thorough and helpful remarks and recommendations on our manuscript.

We look forward to hearing whether this manuscript can be considered of interest for publication in PLOS ONE and remain at your disposal for any required clarifications.

Yours sincerely,

Antoine Tshomba


Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Thank you very much for providing these PLOS ONE formatting guidelines. We are using them to correct the manuscript and adapt it to PLOS ONE's requirements.

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

Thank you very much for your advice

This economic study did not constitute human research. Thus, participant consent was not applicable, and the study did not require ethical approval.

We provided this statement in the manuscript, on lines 308 to 311, in the ethics statement of the Methods section:

Our study did not constitute human research. It was part of the Ebola outbreak response and disease surveillance during the North Kivu Ebola outbreak in the DRC. This economic evaluation study used published results from the literature to build the decision model. Thus, it did not require ethical approval.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Thank you Dear Editor for your advice.

We provided the minimal analysis outputs and added this statement to the Data Availability statement (lines 629 to 632):

This modeling study was an economic evaluation that built its model using publicly accessible information from the literature. We made available, for transparency purposes, the output reports of our analyses, e.g., the one-way and probabilistic sensitivity analysis outputs.

Additional Editor Comments:

The reviewer makes several suggestions for revisions and improvements and I urge the authors to take all of these suggestions into considerations and to make appropriate changes to their manuscript

Thank you Dear Editor.

Indeed, all these suggestions were very helpful and fundamental to improving our manuscript. We have done our best to take all of them into consideration.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author


Reviewer #1: Tshomba et al performed a computational decision tree analysis to compute the cost-effectiveness of different Ebola screening algorithms. They determined that combining the WHO case definition with the ECPS screening procedure and additionally screening negative suspect cases with the QuickNavi molecular rapid test is the most cost-effective strategy. Additional sensitivity analyses indicate that this recommendation holds across the range of plausible parameter measurement error. The approach used is sensible and is applied reasonably. However, I have a few technical questions/comments that are relevant to the overall conclusions; in particular, I wonder whether the cost of missing an infected individual during screening is appropriately estimated here. The manuscript’s findings are relevant to healthcare workers and public-health officials involved in Ebola outbreak response and address a very important issue with real-world implications. Therefore, I recommend publication after the technical issues have been addressed.

We appreciate you making this observation.

We updated the model and incorporated the cost of missing a real Ebola case. We also modified our conclusion to reflect that the cost of incorrectly ruling out an Ebola case includes both the cost of managing the patient who was incorrectly ruled out and the cost of any potential Ebola cases that this false-negative case could generate in the community.

Our supplemental appendix contains a corrected version of the cost formula.

MAJOR COMMENTS:

- Should the potential for additional infections in the community be incorporated into the cost of missing a case (false negative error)? The payoff for this outcome is set to 0 “by assumption” in Table 3, which I’m guessing was chosen to simply represent the lack of benefit acquired from correctly isolating a case. However, missing a true Ebola infection may have additional public-health consequences, as the infected individual may infect several others in the community, which is not considered here. In the current analysis, the cost of erroneous isolation (false positive) is greater than the cost of missing a true case, which would not make sense if an infected individual allowed to remain in the community is expected to infect 2 or more others. Please consider incorporating these secondary effects into your analysis, as the risk of infecting others due to a false negative screening result may outweigh the risk of erroneous isolation, which is quite costly in the current analysis.

Thank you for bringing this to our attention. You are entirely right.

Indeed, zero risk does not exist for a deadly infection like Ebola. Thus, we agree that not incorporating this into our analysis would have been a major limitation of the study.

The findings of the studies by Leroy et al. and Mbala et al. suggest that the role of these false-negative cases (composed of asymptomatic and mildly symptomatic patients) in human-to-human disease transmission in the community is poorly understood and, if it exists, is very small. In light of this, we assumed a reduction of the disease secondary attack rate of up to 50%, assigned each false-negative patient erroneously ruled out a penalty reflecting the likelihood that such cases generate Ebola cases in the community, and computed its payoff as described in the supplementary appendix.

(References to include: Leroy et al., Early immune responses accompanying human asymptomatic Ebola infections. Clin Exp Immunol. 2001;124(3):453–60; and Mbala et al., Evaluating the frequency of asymptomatic Ebola virus infection. Philos Trans R Soc Lond B Biol Sci. 2017;372(1721).)

Therefore, we added these statements:

In the Methods section of the manuscript (lines 214 to 224):

The effectiveness reward for erroneous negatives was calculated in the same manner as for false positives. We used the method described in the supplementary appendix and referenced the findings of Leroy et al. and Mbala et al. [ref. to include] to make our assumptions. We assumed a 50% reduction in the disease secondary attack rate for community transmission by these false-negative patients, and assigned each false-negative patient erroneously ruled out a payoff of minus the probability that such cases generate Ebola cases in the community. Therefore, for each EVD case that was erroneously ruled out, we assigned an effectiveness payoff of -0.00125.
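To make the payoff arithmetic above easier to trace, a minimal sketch follows. The constant names are illustrative; the exact mapping from the reduced attack rate to the -0.00125 payoff is defined in the paper's supplementary appendix and is not reproduced here.

```python
# Minimal sketch of the false-negative effectiveness penalty described above.
# Parameter values follow the text; the exact mapping to -0.00125 is defined
# in the paper's supplementary appendix (not reproduced here).

BASELINE_SAR = 0.025   # community secondary attack rate of 2.5% (from the text)
SAR_REDUCTION = 0.50   # assumed 50% reduction for false-negative cases

# Transmission probability attributed to a missed (false-negative) case.
reduced_sar = BASELINE_SAR * (1.0 - SAR_REDUCTION)  # = 0.0125

# The model assigns each erroneously ruled-out EVD case this payoff:
FN_EFFECTIVENESS_PAYOFF = -0.00125

print(f"reduced SAR: {reduced_sar:.4f}, FN payoff: {FN_EFFECTIVENESS_PAYOFF}")
```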

In the results part of the manuscript:

We updated Tables 4 and 5, Figures 4 to 7, and Supplementary Figures.

In the text, we updated where needed accordingly.

- Can the cost of erroneously isolating an uninfected individual (given in Table 3) be reduced by improving isolation practices to reduce the probability of becoming infected as a result of isolation?

We appreciate you sharing this, and we think the concept is appealing.

At the screening stage, while awaiting the laboratory confirmation results, all isolated suspects receive the same interventions, such as standard of care, confirmation testing, and other associated interventions. Therefore, at this point, a change in the secondary attack rate (SAR) no longer has an impact on the cost of a false positive (FP); the specificity of the screening test being performed largely determines how the FP cost changes at this step. A SAR reduction could, however, lessen the number of iatrogenic EVD infections created in isolation wards after confirmation, thereby lowering the cost of FP care involving expensive EVD-specific treatment.

For this study, we concentrated on recording all costs incurred during the screening procedure; hence, we simply recorded the overall costs.

To understand how a drop in the disease SAR would affect the cost of FP isolation, we computed this cost manually, assuming an arbitrary cost for EVD-specific treatment.

Costs due to FP at the SAR of 2.5% would be reduced by 20% at the SAR of 2% and by 60% at the SAR of 1%.
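These figures are consistent with the iatrogenic-infection cost component scaling linearly with the SAR (an assumption stated here for clarity, implied by the numbers above):

$$1 - \frac{2.0\%}{2.5\%} = 20\%, \qquad 1 - \frac{1.0\%}{2.5\%} = 60\%.$$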

However, this was not the purpose of our analysis.

- It is not clear to me what the precise definitions of screening algorithms 3 and 4 are. They are simply described as “2) ECPS as a join [sic] test, 3) ECPS as a conditional test” (lines 144-145). The tree in Fig. 1 does not clarify this point, but rather simply labels a branch as “ECPS as a conditional test”. From that description, I’m not sure what other test(s) are performed in addition to the ECPS. Since the molecular RDT is described separately and the only other screening algorithm mentioned is the WHO criteria, I assume the ECPS is being used joint/conditionally with the WHO criteria. Also, in terms of the conditional version of the test, I’m not sure which screening protocol is being applied first. Please clarify further in the text.

We sincerely appreciate it.

In addition to fixing the error in Table 1, we also provided a supplementary file that details each algorithm's definition (S1 File) and a plot depicting the entire decision tree model (S1 Fig.).

We added this statement:

On lines 160 to 162:

Supplementary figure S1 shows the complete decision tree model, and supplementary file S1 describes and defines each algorithm tested in the model (S1 Fig. and S1 File).

In the legend of Fig. 1 (lines 170 to 175), we added the following statement:

Algorithms 1, 2, and 3 use a single screening test; their visual representation is similar to the branch shown for algorithm 4. Algorithms 6, 7, and 8 use two sequential screening tests; their visual representation follows the format shown on algorithm 5's branch. The QuickNavi™-Ebola RDT is used after the first screening test in algorithms with two screening tests.

We also updated the numbering of the supplementary figures and files.
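As background for the joint and conditional terminology, a sketch of the textbook formulas for combining two screening tests is given below. Whether the paper's "joint" and "conditional" definitions correspond exactly to the parallel ("positive if either test is positive") and serial ("second test only after a first positive") schemes shown here is an assumption; S1 File gives the authors' precise definitions, and the accuracy values used are hypothetical.

```python
# Textbook combination of two screening tests (illustrative; see S1 File for
# the paper's exact "joint" and "conditional" definitions).

def parallel(se1, sp1, se2, sp2):
    """Classify positive if EITHER test is positive
    (sensitivity rises, specificity falls)."""
    se = 1 - (1 - se1) * (1 - se2)
    sp = sp1 * sp2
    return se, sp

def serial(se1, sp1, se2, sp2):
    """Run test 2 only on test-1 positives; classify positive if BOTH are
    positive (specificity rises, sensitivity falls)."""
    se = se1 * se2
    sp = 1 - (1 - sp1) * (1 - sp2)
    return se, sp

# Hypothetical accuracies for a clinical score and an RDT:
print(parallel(0.80, 0.70, 0.85, 0.95))
print(serial(0.80, 0.70, 0.85, 0.95))
```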

- The results in Tables 4 and 5 for algorithms 3 and 4 are exactly the same, which seems unlikely given their different sensitivities/specificities in Table 2. Is this expected?

We are grateful for your observation.

We thoroughly rechecked our model and its inputs, and we found a problem in the two branches because they had the same sensitivity and specificity inputs.

We corrected them and revised Tables 4 and 5 and the text with the required modifications.

In addition, we added the cost of an EVD case that was mistakenly ruled out (a false negative) and updated our manuscript where needed.

- I’m confused as to why the sensitivity analyses in Fig. 2B-C show no change in cost-effectiveness for any condition tested. For example, a back-of-the-envelope calculation suggests that in 2C, if the sensitivity of the conditional test increases by ~12% (as it does over the range of the x-axis, 0.61 to 0.69), then the cost-effectiveness ratio of algorithm 4 should decrease substantially, from ~$106/case to ~$95/case, since the number of cases identified should go up by 12% as well (from Equation 4 in S1 Text). This change should be visible in Fig. 2C but is not. Please explain.

We are very grateful to you.

After double-checking our model and its inputs, we revised the corresponding graph; however, the outcomes under examination exhibited no change over the interval on which we applied our one-way sensitivity analysis.

We were aware that variation of these parameters had little impact on the cost-effectiveness ratio within the plausible interval over which we performed our sensitivity analysis.

We also note a nonlinear relationship between the prevalence of disease and outcomes such as the cost-effectiveness ratio.

The calculation is performed on the expected value at the root of each algorithm. This expected value accounts for every possible outcome in the decision branch and depends on several other probabilities within the branch; as a result, the change is not noticeable.

However, where needed, we attempted to explain the observed trends using specific data from the analyses' raw outputs.
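The rollback logic described above can be made concrete with a generic sketch. This is not the authors' TreeAge model; the node structure, probabilities, and payoffs below are illustrative assumptions. It shows why a change in one downstream parameter is diluted by every other probability on the path to the root.

```python
# Generic decision-tree rollback: the expected value at a chance node is the
# probability-weighted sum of its branches, so a change in one downstream
# parameter is diluted by every other probability on the path to the root.
# Illustrative sketch only; not the authors' model.

def expected_value(node):
    """node is either a terminal payoff (float) or a list of
    (probability, subtree) pairs for a chance node."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(child) for p, child in node)

# Hypothetical branch: prevalence -> test result -> cost payoff (USD).
prevalence, sensitivity, specificity = 0.10, 0.85, 0.90
tree = [
    (prevalence,     [(sensitivity, 100.0),        # true positive
                      (1 - sensitivity, 250.0)]),  # false negative
    (1 - prevalence, [(1 - specificity, 180.0),    # false positive
                      (specificity, 60.0)]),       # true negative
]
print(expected_value(tree))  # probability-weighted cost at the root
```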

- The sensitivities of the ECPS joint and conditional tests (algorithms 3 and 4) are presumably not independent variables, since both are functions of the sensitivities of the ECPS and the other screening method with which the ECPS was combined (again, possibly the WHO definition?). Therefore, if the joint sensitivity changes, the conditional sensitivity will also change, and treating these two values as separate quantities in the sensitivity analysis doesn’t make much sense. A better choice might be to perform the sensitivity analysis in Fig. 2 on the sensitivities of the individual tests making up the joint/conditional screening protocols (i.e., the ECPS and WHO protocol sensitivities).

Again thank you for your advice.

We also performed sensitivity analyses on the ECPS and WHO case definition sensitivities.

We also provided, as supplementary figure S2, a plot of the one-way sensitivity analysis investigating the uncertainty of outcomes due to variation in the ECPS and WHO case definition sensitivities.

- Is the WTP value of $50,000 used in Fig. 6 and specified as the “default” value in the Methods appropriate in any circumstance? It seems to me that the country specific WTP threshold (used in Fig. 7, for example) is much more relevant and that the $50,000 value was chosen arbitrarily and is far too high for the likely use contexts of these screening algorithms.

Thank you very much for this observation.

In fact, after re-checking our model inputs and including the cost of erroneously ruling out false-negative cases, this WTP appeared to be relevant. As recommended, using the $50,000 WTP allows the results to be interpreted in various global contexts.

Thus, we updated the current Fig. 6 and its numbering after adding one more figure to the manuscript.

MINOR COMMENTS:

- The description of algorithm 4 in Table 1 should indicate that this is a conditional test, not a joint test.

Thank you very much. We corrected the error in Table 1.

- “Joint test” is sometimes written erroneously as “join test”.

Thank you very much. We corrected this in the table.

- The resolution of some of the original figure files (e.g., Fig. 2) are quite low, making them difficult to read.

Thank you very much.

We corrected this by improving the figure resolution.

- A couple of clauses end with “or so” (e.g., lines 536, 545), which I don’t believe is standard English; perhaps they can be replaced with “etc.”?

Thank you very much.

We corrected them. Thank you.

- Perhaps defining the term “incremental” (e.g., cost vs. incremental cost) in the text would help readers who are unfamiliar with formal cost-effectiveness analysis.

Thank you for this recommendation

We added this on lines 266 to 271:

Where the numerator, in the case of Ebola disease, represents the incremental cost: the total additional expense incurred to obtain one additional health effect, e.g., one more isolated EVD case. It is calculated from the additional expenses made throughout the screening process, such as supplies used, per extra health effect. The denominator represents the incremental effectiveness: the increase in the effectiveness of Ebola screening achieved throughout the screening process.
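For readers unfamiliar with the notation, the standard textbook form of the incremental cost-effectiveness ratio (a generic formulation, not quoted from the manuscript) comparing a strategy $A$ with the next-best non-dominated strategy $B$ is:

$$\mathrm{ICER}_{A\ \mathrm{vs}\ B} \;=\; \frac{C_A - C_B}{E_A - E_B},$$

where $C$ is the expected cost per suspected patient and $E$ the expected effectiveness (here, the probability of correctly classifying a patient).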

- Table 5 includes a lot of numerical data and is difficult to interpret quickly. Perhaps plotting these data as a heatmap, for example, would make them more interpretable? The raw tabular data can be included in the supplement.

You are welcome for pointing this out.

As no marketed Ebola RDTs or treatments are available, we performed a two-way analysis with both the cost of QuickNavi-Ebola and the cost of SOC to capture these variations on algorithm ranking, thus the decision. We plotted a two-way sensitivity analysis and moved the relative Table 5 in supporting information as S1 Table

We added these statements to the Methods section of the manuscript (lines 283 to 288):

Additionally, as there are currently no marketed RDTs or therapies for Ebola (prices are still under negotiation), we performed a two-way sensitivity analysis exploring the effects of changing the price of the QuickNavi™-Ebola RDT and the price of SOC on algorithm ranking. Three levels of the 2017 DRC GDP per capita were used as willingness-to-pay thresholds in this analysis (one, two, and three times the GDP per capita).

In the Results section, we updated the text on lines 406 to 409 and added these statements to present the two-way figure:

Fig 4. Two-way sensitivity analysis comparing the net health benefit of EVD screening algorithms.

Fig 4 legend:

A: at a willingness to pay of USD 584.1; B: at a willingness to pay of USD 1168.2; C: at a willingness to pay of USD 1752.3

We added in supporting information

S1 Table. Cost-effectiveness ratios (USD per EVD case isolated) in relation to variation in the cost of the QuickNavi™-Ebola RDT and the cost of standard of care (SOC).
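As background for the net-health-benefit ranking used in Fig 4, the standard definitions at a willingness-to-pay threshold $\lambda$ are (textbook forms; the manuscript's exact implementation is in its model files):

$$\mathrm{NMB}_i = \lambda\,E_i - C_i, \qquad \mathrm{NHB}_i = E_i - \frac{C_i}{\lambda}, \qquad \lambda \in \{584.1,\ 1168.2,\ 1752.3\}\ \mathrm{USD},$$

so the preferred algorithm at each $\lambda$ is the one with the highest net benefit.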

- Raw numerical outputs from key parts of the cost-effectiveness analysis are included in the main text (Tables 4 and 5). However, raw data from most of the sensitivity analyses (Figs. 3-7) are not available. These data are probably reproducible using the commercial software used by the authors, but this would not be accessible to many readers.

Yes. We made all analysis outputs available as supporting data.


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Thank you. Yes, we uploaded them here.

Decision Letter 1

Jan Rychtář

2 Jul 2023

PONE-D-23-12291R1

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

PLOS ONE

Dear Dr. Tshomba,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

While the reviewer appreciates that most of the previous concerns were addressed, there are still several issues that need to be improved or addressed. Please revise your manuscript accordingly, addressing all of the reviewer's comments.

==============================

Please submit your revised manuscript by Aug 16 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jan Rychtář

Academic Editor

PLOS ONE

Additional Editor Comments:

While the reviewer appreciates that most of the previous concerns were addressed, there are still several issues that need to be improved/addressed.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for addressing the majority of my questions and comments. However, I still have concerns about how the costs of false positives and false negatives are being estimated, which were further exposed while I was reviewing the changes made during this revision. I think additional clarification and/or justification is required for the term of the form (1-(1-θ)^δ) in equations 12-15 in S2 File, which estimates the “probability of infection given random contact with an EVD patient”. The secondary attack rate θ is estimated in the relevant reference (Okware et al) as the fraction of contacts of an EVD case that get infected. Given that definition, I don’t understand why the duration of infectiousness (1/δ) is being used here in this way. I suppose if you’re assuming that the risk to each suspected case entering isolation is equivalent to the risk of coming into contact with a single EVD patient for one day, and you are assuming the published secondary attack rate is being estimated from household contacts who are exposed continuously for the entire duration of infectiousness, there is some possible justification for this equation, which would need to be further explained in the manuscript. Unfortunately I believe that this risk estimate cannot be appropriately justified using this reasoning- it’s not clear from the reference for the secondary attack rate value exactly how a contact is defined, and there are published estimates for the household attack rate that are much higher than 2.5% (see the meta-analysis in Dean et al. Clin Infect Dis 2016 https://doi.org/10.1093/cid/ciw114 which includes the reference cited here). A better way to estimate infection risk for false positives would probably be to use results from a study specifically designed to measure the risk of nosocomial infection in patients erroneously isolated in EVD units (e.g., Arkell et al Tropical Medicine and International Health 2016, https://doi.org/10.1111/tmi.12802) which seem to estimate a higher risk of infection than calculated in this manuscript (though the absolute risk of exposure is still reasonably low).

The estimate of risk of community transmission from a false negative (which was calculated using the same reasoning) is even more problematic. Again, the assumption being made seems to be that the expected number of cases resulting from a true EVD case being allowed to remain in the community is equivalent to the risk of a single person coming into contact with that EVD patient for one day, except now the secondary attack rate is 50% lower than was estimated previously (presumably to account for lower risk from “asymptomatic” EVD patients). This seems to me to be quite an underestimation of the expected number of cases. First, I think these patients are not truly “asymptomatic” and instead may have nonspecific symptoms or be presymptomatic, so I don’t think there is good justification for the assumption that their infectiousness is lower (especially by an arbitrary 50%, which the authors do not justify). Also, I’m guessing that many of these patients will return to their homes after a negative screening, potentially exposing multiple people (possibly in higher-risk caregiving roles) over multiple days, making the expected number of new cases resulting from a single false negative to be much higher than the 0.00125 calculated here. Using the R0 value for EVD community transmission (while perhaps a bit of an overestimate) for the effect of a false negative is likely more correct, and more easily justified, and would result in a false negative penalty that is orders of magnitude higher than the value currently used here.

Since these false negative/positive risks are very important factors in the cost-effectiveness analysis (and thus the conclusions of the whole manuscript), they require suitable justification before I can recommend publication. I apologize for not noticing these issues in my first review.

I have a couple additional minor comments:

- I think Eq. 13 in S2 File is missing a 1-Prev term?

- Please explicitly state in the data availability statement that the raw data are available in the Supporting Outputs Data so readers know exactly where to find them.

- Figs. 7 and 8 legends: define red/green colors of points

- It’s still a little unclear in the text what “joint” and “conditional” mean when referring to the ECPS test. The authors define these terms very clearly in their previous publication (ref. [26] in this manuscript, relevant passage copied below), and I’d suggest adding a similar description here or at least referencing this publication at the point in the text where these algorithms are defined.

“Finally, we evaluated our prediction models according to two additional clinical practice approaches in healthcare settings: joint and conditional tests or approaches. In both approaches, the suspects with no reported risk of exposure would be considered to not have the disease and the clinical team would act accordingly. No additional action, e.g., isolation, would be required. In the joint approach, the clinical team should clinically examine all suspects at low-, intermediate-, and high-risk reported exposure and recommend for isolation only those with a predicted probability of EVD greater than 5% (the cut-off chosen to maximize sensitivity, about 90 percent, in disease adverse context). In the conditional approach, the clinical team should isolate all suspects with high-risk reported exposure irrespective of their predicted probability of the disease and then suspects at low and intermediate reported exposure having an EVD-predicted probability greater than 5%.”

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Debra Van Egeren

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 17;18(10):e0293077. doi: 10.1371/journal.pone.0293077.r004

Author response to Decision Letter 1


25 Jul 2023

Kinshasa, July 25, 2023

Tshomba Oloma Antoine

Institut National de Recherche Biomédicale (INRB)

Kinshasa, Dem. Rep. of Congo,

antotshomba@yahoo.fr

+243 815602451


Jan Rychtář

Academic Editor

PLOS ONE Journal

plosone@plos.org

Dear Editor,

We are resubmitting our manuscript entitled “Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model” as a Research Article for consideration of publication in the PLOS ONE Journal.

First, we want to express our gratitude to the Editor and Reviewers for their overall very positive comments on our work and their suggestions for improvement. In this letter, we have tried to answer, to the best of our knowledge, the questions, suggestions, and remarks provided by the Editor and Reviewers.

None of the authors has a competing interest to declare, and our manuscript has not been submitted or accepted elsewhere. All authors have contributed to, seen, and approved the final, submitted version of the manuscript.

We have uploaded the following documents:

- The clean version of the Manuscript, file labeled "Manuscript"

- The track changes version of the manuscript, file labeled “Revised Manuscript with Track Changes”.

- The covering letter addressing the editorial and referees’ comments, file labeled “Response to Reviewers”.

- We used secondary attack rates from the meta-analysis by Dean et al. and made changes where needed in the manuscript text, figures, and supporting materials and data.

Again, thank you very much for the thorough and helpful remarks and recommendations you have provided on our manuscript.

We look forward to hearing whether this manuscript can be considered of interest for publication in PLOS ONE and remain at your disposal for any required clarifications.

Yours sincerely,

Antoine Tshomba


Reviewer #1: Thank you for addressing the majority of my questions and comments. However, I still have concerns about how the costs of false positives and false negatives are being estimated, which were further exposed while I was reviewing the changes made during this revision. I think additional clarification and/or justification is required for the term of the form (1-(1-θ)^δ) in equations 12-15 in S2 File, which estimates the “probability of infection given random contact with an EVD patient”. The secondary attack rate θ is estimated in the relevant reference (Okware et al) as the fraction of contacts of an EVD case that get infected. Given that definition, I don’t understand why the duration of infectiousness (1/δ) is being used here in this way. I suppose if you’re assuming that the risk to each suspected case entering isolation is equivalent to the risk of coming into contact with a single EVD patient for one day, and you are assuming the published secondary attack rate is being estimated from household contacts who are exposed continuously for the entire duration of infectiousness, there is some possible justification for this equation, which would need to be further explained in the manuscript. Unfortunately I believe that this risk estimate cannot be appropriately justified using this reasoning- it’s not clear from the reference for the secondary attack rate value exactly how a contact is defined, and there are published estimates for the household attack rate that are much higher than 2.5% (see the meta-analysis in Dean et al. Clin Infect Dis 2016 https://doi.org/10.1093/cid/ciw114 which includes the reference cited here). A better way to estimate infection risk for false positives would probably be to use results from a study specifically designed to measure the risk of nosocomial infection in patients erroneously isolated in EVD units (e.g., Arkell et al Tropical Medicine and International Health 2016, https://doi.org/10.1111/tmi.12802) which seem to estimate a higher risk of infection than calculated in this manuscript (though the absolute risk of exposure is still reasonably low).

The estimate of risk of community transmission from a false negative (which was calculated using the same reasoning) is even more problematic. Again, the assumption being made seems to be that the expected number of cases resulting from a true EVD case being allowed to remain in the community is equivalent to the risk of a single person coming into contact with that EVD patient for one day, except now the secondary attack rate is 50% lower than was estimated previously (presumably to account for lower risk from “asymptomatic” EVD patients). This seems to me to be quite an underestimation of the expected number of cases. First, I think these patients are not truly “asymptomatic” and instead may have nonspecific symptoms or be presymptomatic, so I don’t think there is good justification for the assumption that their infectiousness is lower (especially by an arbitrary 50%, which the authors do not justify). Also, I’m guessing that many of these patients will return to their homes after a negative screening, potentially exposing multiple people (possibly in higher-risk caregiving roles) over multiple days, making the expected number of new cases resulting from a single false negative to be much higher than the 0.00125 calculated here. Using the R0 value for EVD community transmission (while perhaps a bit of an overestimate) for the effect of a false negative is likely more correct, and more easily justified, and would result in a false negative penalty that is orders of magnitude higher than the value currently used here.

Since these false negative/positive risks are very important factors in the cost-effectiveness analysis (and thus the conclusions of the whole manuscript), they require suitable justification before I can recommend publication. I apologize for not noticing these issues in my first review.

We very much appreciate your observation, as well as the two informative documents you sent our way. We found further information in these articles that offers trustworthy estimates of the cost and harm associated with false positives and false negatives.

According to the paper by Gilbert et al., this random probability (1-(1-θ)^δ) measures the risk of transmission during an infectious patient's entire period of time. We used this probability formula as such to compute the harm of errors in Ebola suspect-case classification. This probability is a function of both the secondary attack rate and the period or duration of infectiousness in an EVD patient.

Once again, thank you; we agree that the SAR values presented in Dean et al.'s work are the ones we should employ. As a result, we used SAR estimates from the study by Dean et al. in place of the SAR from Okware et al.

For the false-positive isolated case, we did not employ the SAR for contacts providing nursing care, because we assumed that the frontline vaccination for healthcare professionals now implemented in Ebola outbreak control in healthcare settings would protect them from occupational transmission. Therefore, since only isolated suspects who are not immunized would be exposed to this danger through physical contact, we applied the direct-physical-contact SAR. As a result, we used the mean household SAR of 22.9% (11.6%–34.2%) for those with direct contact but no nursing.

Similarly, for the false-negative case, we assumed that no nursing care is given to mild or asymptomatic Ebola cases ruled out and returned to the community, and that, as concluded by Mbala et al. and Leroy et al., mild or non-specifically symptomatic Ebola plays a reduced role in person-to-person transmission. Therefore, we used the overall household SAR, which is 12.5% (8.6%–16.3%).

Using these estimates, we were able to re-estimate the harm related to errors of classification, i.e., the false-positive and false-negative harm.

Regarding the random probability formula's parameter: in (1-(1-θ)^δ), δ is defined as 1 divided by the length of time that an EVD patient is infectious. This detail was not clear in the supplemental appendix S2. Therefore, we have made it explicit in the S2 File by clarifying each term in Eqs. 12 to 15.
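For illustration, the following minimal Python sketch computes this per-contact probability; the 9-day infectious period is an assumed placeholder (the manuscript's parameter table gives the value actually used), while the SAR values are those adopted from Dean et al. above:

def per_contact_infection_probability(sar: float, infectious_days: float) -> float:
    """Probability of infection given one contact with an infectious EVD patient."""
    delta = 1.0 / infectious_days          # delta = 1 / duration of infectiousness
    return 1.0 - (1.0 - sar) ** delta

# Direct physical contact without nursing (false-positive, isolation ward).
p_fp = per_contact_infection_probability(sar=0.229, infectious_days=9.0)

# Overall household transmission without nursing (false-negative, community).
p_fn = per_contact_infection_probability(sar=0.125, infectious_days=9.0)

print(f"P(infection | one contact), isolation ward: {p_fp:.5f}")
print(f"P(infection | one contact), community:      {p_fn:.5f}")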

Moreover, we did not include or use R0 in our economic model. R0 is the number of secondary cases that one case would cause in a completely susceptible population, and it is estimated using biological, socio-behavioral, and environmental parameters involved in the transmission of the pathogen. As a result, R0 is calculated using ever more complicated mathematical models, which can lead to incorrect interpretations and representations of R0, because each modeler develops models for specific objectives. Therefore, the R0 values frequently given in the literature for historical epidemics may not be applicable to all Ebola outbreaks.

Additionally, R0 is one of the measures most frequently used to analyze the dynamics of infectious diseases, which was not the goal of this work.

We thus found the random probability of contamination/transmission intuitive, simple to understand, and simple to generalize for false negatives (in the community) and false positives (in the healthcare setting), as a proportion of iatrogenic cases that could be generated, in order to compute the harm of classification errors.

Therefore, we added these statements:

In the Methods section of the manuscript, on lines 221 to 224:

As a frontline vaccination for healthcare workers would be implemented, we computed this probability of infection using the secondary attack rate (SAR) for direct physical contact of 22.9% (95% CI: 11.6%–34.2%) for those with direct contact but no nursing in the hospital (Ref. Dean et al.).

On lines 230 to 233

Therefore, we calculated the random probability that these false-negative cases generate Ebola cases in the community using the overall household SAR of 12.5% (95% CI: 8.6%–16.3%) for human-to-human transmission in this setting without nursing (Ref. Dean et al.).

We corrected the harm associated with false positives and false negatives in Table 2 and in the text.

In the Results section of the manuscript:

We updated these accordingly where needed (Tables 4 and 5, Figs 4 to 8, the Supplementary Figures, and the text of the manuscript).

I have a couple additional minor comments:

- I think Eq. 13 in S2 File is missing a 1-Prev term?

Thank you very much. Yes, this term was missing from the formula. We have corrected it as follows:

Probability of iatrogenic EVD = (1 - Prev_EVD) × ((1 - Spec_t_i) + Spec_t_i × (1 - Spec_t_j)) × (1 - (1 - θ_h)^δ)
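As a numerical illustration of the corrected equation, this sketch plugs in placeholder values; only the roughly 6% prevalence (Table 2) and the 22.9% SAR come from this correspondence, while the two specificities and the infectious duration are assumptions made here for illustration:

prev_evd = 0.06      # EVD prevalence among suspects (Table 2, per the review)
spec_ti = 0.80       # specificity of the first test t_i (illustrative)
spec_tj = 0.97       # specificity of the second test t_j (illustrative)
theta_h = 0.229      # direct-physical-contact household SAR (Dean et al.)
delta = 1.0 / 9.0    # 1 / duration of infectiousness (9 days assumed)

# A non-EVD suspect ends up isolated if test i is falsely positive, or if
# test i is negative and test j is falsely positive (mirroring the equation).
p_misclassified = (1 - spec_ti) + spec_ti * (1 - spec_tj)

p_iatrogenic = (1 - prev_evd) * p_misclassified * (1 - (1 - theta_h) ** delta)
print(f"Probability of iatrogenic EVD: {p_iatrogenic:.5f}")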

- Please explicitly state in the data availability statement that the raw data are available in the Supporting Outputs Data so readers know exactly where to find them.

Thank you very much.

We added the following on line 657 of the data availability statement:

in the supporting information compressed file (the Supporting Outputs Data).

- Figs. 7 and 8 legends: define red/green colors of points

Thank you very much for this observation.

We added the following statements to the legends of Figs 7 and 8 and the Supplementary Figures:

Green points: ICERs that fall below the WTP line in Monte Carlo simulations, the maximum acceptable ICER (the algorithm is considered cost-effective); Red points: ICERs that fall above the WTP line, the maximum acceptable ICER (the algorithm is considered costly and less effective).
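For illustration, a minimal sketch of this green/red classification, with an assumed willingness-to-pay threshold and hypothetical ICER draws standing in for the Monte Carlo outputs (neither value is from the manuscript):

import random

random.seed(1)
wtp = 150.0  # assumed maximum acceptable ICER (USD per patient correctly classified)

# Hypothetical ICER draws standing in for probabilistic sensitivity analysis outputs.
icers = [random.gauss(110.0, 30.0) for _ in range(10)]

for icer in icers:
    colour = "green (cost-effective)" if icer <= wtp else "red (costly, less effective)"
    print(f"ICER {icer:7.2f} USD -> {colour}")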

- It’s still a little unclear in the text what “joint” and “conditional” mean when referring to the ECPS test. The authors define these terms very clearly in their previous publication (ref. [26] in this manuscript, relevant passage copied below), and I’d suggest adding a similar description here or at least referencing this publication at the point in the text where these algorithms are defined.

“Finally, we evaluated our prediction models according to two additional clinical practice approaches in healthcare settings: joint and conditional tests or approaches. In both approaches, the suspects with no reported risk of exposure would be considered to not have the disease and the clinical team would act accordingly. No additional action, e.g., isolation, would be required. In the joint approach, the clinical team should clinically examine all suspects at low-, intermediate-, and high-risk reported exposure and recommend for isolation only those with a predicted probability of EVD greater than 5% (the cut-off chosen to maximize sensitivity, about 90 percent, in disease adverse context). In the conditional approach, the clinical team should isolate all suspects with high-risk reported exposure irrespective of their predicted probability of the disease and then suspects at low and intermediate reported exposure having an EVD-predicted probability greater than 5%.”

Thank you for this observation; this should make their use clearer.

We added these statements:

In the Methods section of the manuscript, on lines 155 to 164:

As described by Tshomba et al., the two screening methods—joint and conditional tests with ECPS—are methods in which suspects with no reported risk of exposure would be assumed to be free of the disease, and the clinical team would act appropriately (e.g., no further action is taken). In using the joint approach, all suspects at low-, intermediate-, and high-risk reported exposure are clinically assessed, and only those with a predicted likelihood of EVD greater than 5% are suggested for isolation. In the conditional test, regardless of their estimated probability of contracting the illness, all suspects with high-risk reported exposure should be isolated. Next, suspects with low and intermediate reported exposure who have an EVD-predicted probability of more than 5% should be isolated.
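For illustration, a minimal Python sketch of these two rules; the exposure categories and the 5% cut-off follow the description above, while the function names and category labels are assumptions made here:

def joint_test(exposure: str, p_evd: float) -> bool:
    """Joint approach: isolate only if some exposure is reported AND P(EVD) > 5%."""
    if exposure == "none":
        return False                  # no reported exposure: assumed disease-free
    return p_evd > 0.05               # low/intermediate/high risk: use the score

def conditional_test(exposure: str, p_evd: float) -> bool:
    """Conditional approach: isolate all high-risk suspects, else use the score."""
    if exposure == "none":
        return False
    if exposure == "high":
        return True                   # isolated regardless of predicted probability
    return p_evd > 0.05               # low/intermediate risk: use the score

# Example: a high-risk suspect with a low predicted probability of EVD.
print(joint_test("high", 0.02))        # False under the joint rule
print(conditional_test("high", 0.02))  # True under the conditional rule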


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Thank you. We analyzed them using PACE.

Attachment

Submitted filename: Response to Reviewers1.docx

Decision Letter 2

Jan Rychtář

31 Jul 2023

PONE-D-23-12291R2

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

PLOS ONE

Dear Dr. Tshomba,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewer continues to raise a number of substantial issues. Given this is already a second revision, you will have only one more chance to revise the manuscript.

The reviewer is willing to communicate with you directly to go over the methodology rather than continue back and forth with the revisions.

If you are agreeable to this, please reach out to me directly via email (rychtarj@vcu.edu) and I will connect you with the reviewer.

Please submit your revised manuscript by Sep 14 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jan Rychtář

Academic Editor

PLOS ONE

Additional Editor Comments:

The reviewer continues to raise a number of substantial issues. Given this is already a second revision, you will have only one more chance to revise the manuscript.

The reviewer is willing to communicate with you directly to go over the methodology rather than continue back and forth with the revisions.

If you are agreeable to this, please reach out to me directly via email (rychtarj@vcu.edu) and I will connect you with the reviewer.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Unfortunately, I think there are still two major issues with the core methodology of the study that have not been addressed.

First, I don’t think my concerns from the previous revision have been adequately addressed, so please allow me to clarify. I appreciate your response and your update to the SAR values used. However, I still believe that the formulae used to estimate the false positive and false negative costs do not accurately reflect the conceptual description of these costs given in the text or author response, do not correctly use the SAR values as consistently defined in both Gilbert et al and Dean et al, and may underestimate the expected number of cases resulting from a false positive/negative. The authors state that “[a]ccording to the paper by Gilbert et al., this random probability (1-(1-θ)^δ) measures the risk of transmission during an infectious patient's entire period of time” in their response. This is not consistent with the passage describing that expression in the Gilbert et al reference, which describes it as “the probability of infection given contact with an infectious individual” and uses it as a part of a differential equations SIR model. There are two differences between this definition and the authors’ interpretation. First, this expression represents risk of infection per contact, rather than the absolute risk for each infected individual regardless of contact rate. This is why, in this reference, this expression is multiplied by the contact rate. Second, the expression represents the probability of infection within one unit of time (here, 1 day or one contact), not the entire period of time (hence its inclusion in a system of differential equations). Therefore, by using this expression (without accounting for the number of contacts or amount of time each contact lasts) to represent the expected number of new infections that an infected individual will subsequently infect, the authors are implicitly assuming that each uninfected individual erroneously isolated in an EVD ward only experiences a single instance of direct contact with an infected patient (in the case of a FP) and infected individual erroneously screened as negative only has a single, transient contact after returning to the community (in the case of a FN). This assumption should be at least explicitly stated in the manuscript, and likely should be reevaluated, particularly for FNs. I can see how there would be little contact with infected individuals for uninfected patients in an EVD ward, but I have a hard time believing that an infected individual returning to a community after a false negative screening would have the same total risk over the entire period they are in the community as an infected individual with a single transient contact. That seems to be an underestimate, as I’m guessing most of these individuals will return home for multiple days and be in contact with multiple family members. However, the authors’ definition reproduced above describing the overall risk of transmission does not match with the expression they used but better matches either the SAR alone (defined in Gilbert et al as “the proportion of individuals who will become infected upon contact with an infectious individual during the total infectious period”, which agrees with the definition employed in Dean et al, the reference from which the parameter values were taken), which still doesn’t take into account the contact rate, or R0, which does take into account the contact rate and seems to be the closest metric to what the authors intend this cost to represent. 
As the authors correctly state in their response, “R0 is the number of secondary cases that one case would cause in a population that is completely susceptible”, which seems to be exactly what is intended with this cost: the expected number of new cases resulting from a single infected individual returning to the community without isolation. I agree that there are issues with the estimation of R0, but the assumptions underlying the strategy the authors are currently using to estimate the FN risk are at the very least not justified in the manuscript and are likely underestimating the risk by multiple orders of magnitude. Admittedly, it seems that with the current approach the FN risk hardly affects the results at all (during the last revision this risk increased by about an order of magnitude and the results in Table 4 are almost the same), but I think this may reflect an issue with the overall approach as well, as outlined below.

Second, I have very serious concerns about how the effectiveness/payoff of a screening strategy is being defined more generally here. The authors repeatedly state that the effectiveness is defined as the number or fraction of true EVD cases isolated (e.g., in the abstract line 54, header row in Table 4, Methods lines 210-211); however, this is not the definition implemented in the payoff matrix (Table 3) or described later in the Methods (lines 213-215), where true positive and true negative outcomes are in fact weighted equally. The effectiveness is therefore nearly equal to the accuracy of the algorithm (i.e., (TP+TN)/(total screened)), with a negligible contribution from the penalties from FPs and FNs (which are 2 orders of magnitude smaller than the payoffs given to TN and TP, “by assumption”). This can lead to very problematic conclusions since the overwhelming majority of subjects being screened don’t have EVD (prevalence ~6% as given in Table 2), causing the specificity of the test to dominate the effectiveness metric. For example, consider the trivial screening algorithm where all individuals being screened are given a negative test result and sent home. This procedure has a sensitivity of 0 and specificity of 1, leading to an effectiveness payout of approximately 0.94 (calculated as (1-Prev)*(payoff of TN) + Prev*(payoff of FN)). This effectiveness value is higher than the best real algorithm tested in the manuscript! In fact, since the cost of each of these “screenings” would be very low (presumably $0), the methodology used in this manuscript would identify this dummy procedure as the best possible screening test in terms of cost, effectiveness, and the cost-effectiveness ratio. Obviously, this is undesirable behavior (just shutting down EVD isolation units is not a good strategy!). Therefore, it is necessary to come up with a different payoff matrix that doesn’t just mostly optimize the test specificity.
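For illustration, a minimal sketch reproducing the reviewer's worked example under the stated assumptions (prevalence 6%, TP and TN payoffs of 1, and the FN penalty of -0.00125 cited in the first-round review):

prev = 0.06
payoff_tp = 1.0
payoff_tn = 1.0
payoff_fn = -0.00125   # expected new cases per false negative (earlier estimate)
payoff_fp = 0.0        # irrelevant here: specificity 1 means no false positives

sensitivity = 0.0      # every suspect is screened negative and sent home
specificity = 1.0

effectiveness = (prev * sensitivity * payoff_tp
                 + prev * (1 - sensitivity) * payoff_fn
                 + (1 - prev) * specificity * payoff_tn
                 + (1 - prev) * (1 - specificity) * payoff_fp)

print(f"Effectiveness of the trivial strategy: {effectiveness:.4f}")  # ~0.9399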

What would be a better payoff matrix? One option would be to implement the payoff matrix that is repeatedly described in the text but not actually used (“the number of isolated EVD cases (true positives)” lines 210-211), which would be assigning a value of 1 to TP and 0 to all other outcomes. Of course, this would just be proportional to the sensitivity of the screening algorithm and would not account for the specificity at all, so the strategy of isolating everyone who comes in would have the highest value (though also the highest cost). A better strategy, however, might be to quantify all payoffs in terms of monetary cost or benefit. This actually solves two serious issues with the current approach. First, it naturally solves the problem with deciding on how to weight TP and TN benefits that is explained above. TP events will have a benefit defined as the financial benefit to society for their isolation, which I’d suggest defining as the value generated by the effects of the patient receiving supportive care in the unit and extending their lifespan (e.g., see Bartsch et al Pathog Glob Health 2015 https://doi.org/10.1179%2F2047773214Y.0000000169 for estimates of the financial costs of EVD). TN events can be defined as having a payoff of $0. The second problem that this proposed redefinition into monetary units solves is the current mismatch in units between the TP or TN benefits and the FP and FN costs. TP and TN payoffs represent the number of correct screening calls made (assigning them each a value of 1) while FP and FN costs represent the number of additional expected EVD cases generated by the outcome (assigning a value of -1 per expected new EVD case). Therefore, the approach gives a correct negative screening result the same weight as generating a new EVD case that wouldn’t have occurred without an FP/FN result. These two events don’t seem like they should be on the same scale or be equivalent- causing a new infection that wouldn’t have otherwise occurred should be quite a bit more costly, I think! Instead, this current approach makes the FP and FN costs essentially negligible. Assigning FP and FN events monetary costs instead (perhaps by estimating the financial cost of each new EVD case) would put them on the same scale as TP/TN payoffs.
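For illustration, a sketch of this proposed monetary payoff matrix; every dollar figure and expected-case count below is an assumption made here, not an estimate from the manuscript or from Bartsch et al.:

cost_per_evd_case = 15_000.0   # assumed societal cost of one new EVD case (USD)
benefit_tp = 5_000.0           # assumed value of isolating and treating a true case
payoff_tn = 0.0                # correct rule-out: no benefit, no harm

expected_cases_per_fp = 0.03   # expected iatrogenic infections per FP (assumed)
expected_cases_per_fn = 1.8    # e.g., an R0-like value for a missed case (assumed)

payoff_fp = -expected_cases_per_fp * cost_per_evd_case
payoff_fn = -expected_cases_per_fn * cost_per_evd_case

def expected_payoff(prev: float, sens: float, spec: float) -> float:
    """Expected monetary payoff per suspect screened."""
    return (prev * sens * benefit_tp
            + prev * (1 - sens) * payoff_fn
            + (1 - prev) * spec * payoff_tn
            + (1 - prev) * (1 - spec) * payoff_fp)

# The trivial "send everyone home" strategy now scores poorly, as intended:
print(expected_payoff(prev=0.06, sens=0.0, spec=1.0))   # negative
print(expected_payoff(prev=0.06, sens=0.9, spec=0.85))  # a real test compares favorably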

Overall, I think the conceptual issues and problematic limiting behavior of the current approach are serious and warrant major changes before publication. However, it’s possible that even after these changes, the conclusions of the manuscript will remain the same. Currently, the general conclusion seems to be that algorithms with higher specificity (i.e., those that use the ECPS as a joint or conditional test) perform better than lower specificity algorithms, not only because they have higher “effectiveness” as currently defined, but also because they have significantly lower cost since they isolate fewer people. The differences in specificity between these two groups of algorithms is so great (~35% vs. >80%) that the conclusion that these tests are more cost effective will almost certainly continue to hold.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Debra Van Egeren

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 17;18(10):e0293077. doi: 10.1371/journal.pone.0293077.r006

Author response to Decision Letter 2


9 Sep 2023

Kinshasa, September 9, 2023

Tshomba Oloma Antoine

Institut National de Recherche Biomédicale (INRB)

Kinshasa, Dem. Rep. of Congo,

antotshomba@yahoo.fr

+243 815602451


Jan Rychtář

Academic Editor

PLOS ONE Journal

plosone@plos.org

Dear Editor,

We are resubmitting our manuscript entitled “Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model” as a Research Article for consideration of publication in the PLOS ONE Journal.

First, we want to express our gratitude to the Editor and Reviewers for their overall very positive comments on our work and their suggestions for improvement. In this letter, we have tried to answer, to the best of our knowledge, the questions, suggestions, and remarks provided by the Editor and Reviewers.

None of the authors has a competing interest to declare, and our manuscript has not been submitted or accepted elsewhere. All authors have contributed to, seen, and approved the final, submitted version of the manuscript.

We have uploaded the following documents:

- The clean version of the manuscript is in the file labeled “Manuscript.”

- The track changes version of the manuscript is in the file labeled "Revised Manuscript with Track Changes."

- The covering letter addressing the editorial and referees’ comments is in the file labeled "Response to Reviewers."

- We used secondary attack rates from the meta-analysis by Dean et al. and computed the FP payoff to include in our model, and we used the basic reproductive number to express the FN penalty.

- We made changes where needed in the manuscript text, figures, supporting materials, and data.

Again, thank you very much for the thorough and helpful remarks and recommendations you have provided on our manuscript.

We look forward to hearing whether this manuscript can be considered of interest for publication in PLOS ONE and remain at your disposal for any required clarifications.

Yours sincerely,

Antoine Tshomba


Reviewer #1: Unfortunately, I think there are still two major issues with the core methodology of the study that have not been addressed.

First, I don’t think my concerns from the previous revision have been adequately addressed, so please allow me to clarify. I appreciate your response and your update to the SAR values used. However, I still believe that the formulae used to estimate the false positive and false negative costs do not accurately reflect the conceptual description of these costs given in the text or author response, do not correctly use the SAR values as consistently defined in both Gilbert et al and Dean et al, and may underestimate the expected number of cases resulting from a false positive/negative. The authors state that “[a]ccording to the paper by Gilbert et al., this random probability (1-(1-θ)^δ) measures the risk of transmission during an infectious patient's entire period of time” in their response. This is not consistent with the passage describing that expression in the Gilbert et al reference, which describes it as “the probability of infection given contact with an infectious individual” and uses it as a part of a differential equations SIR model. There are two differences between this definition and the authors’ interpretation. First, this expression represents risk of infection per contact, rather than the absolute risk for each infected individual regardless of contact rate. This is why, in this reference, this expression is multiplied by the contact rate. Second, the expression represents the probability of infection within one unit of time (here, 1 day or one contact), not the entire period of time (hence its inclusion in a system of differential equations). Therefore, by using this expression (without accounting for the number of contacts or amount of time each contact lasts) to represent the expected number of new infections that an infected individual will subsequently infect, the authors are implicitly assuming that each uninfected individual erroneously isolated in an EVD ward only experiences a single instance of direct contact with an infected patient (in the case of a FP) and infected individual erroneously screened as negative only has a single, transient contact after returning to the community (in the case of a FN). This assumption should be at least explicitly stated in the manuscript, and likely should be reevaluated, particularly for FNs. I can see how there would be little contact with infected individuals for uninfected patients in an EVD ward, but I have a hard time believing that an infected individual returning to a community after a false negative screening would have the same total risk over the entire period they are in the community as an infected individual with a single transient contact. That seems to be an underestimate, as I’m guessing most of these individuals will return home for multiple days and be in contact with multiple family members. However, the authors’ definition reproduced above describing the overall risk of transmission does not match with the expression they used but better matches either the SAR alone (defined in Gilbert et al as “the proportion of individuals who will become infected upon contact with an infectious individual during the total infectious period”, which agrees with the definition employed in Dean et al, the reference from which the parameter values were taken), which still doesn’t take into account the contact rate, or R0, which does take into account the contact rate and seems to be the closest metric to what the authors intend this cost to represent. 
As the authors correctly state in their response, “R0 is the number of secondary cases that one case would cause in a population that is completely susceptible”, which seems to be exactly what is intended with this cost: the expected number of new cases resulting from a single infected individual returning to the community without isolation. I agree that there are issues with the estimation of R0, but the assumptions underlying the strategy the authors are currently using to estimate the FN risk are at the very least not justified in the manuscript and likely underestimate the risk by multiple orders of magnitude. Admittedly, it seems that with the current approach the FN risk hardly affects the results at all (during the last revision this risk increased by about an order of magnitude and the results in Table 4 are almost the same), but I think this may reflect an issue with the overall approach as well, as outlined below.

Thank you very much; we really appreciate this comment. We agree that failing to account for the average number of contacts that these false negatives have after mistakenly returning to the community does not correctly reflect the number of cases they could generate there.

We also think that our model favors the specificity over the sensitivity of the screening tests employed in the tested algorithms because it underestimates the harm associated with FNs.

In conclusion, you are right about this fundamental observation, and we sincerely thank you for it.

Therefore, to correct the problem with our model, we will keep the estimated value of the FP penalty, as both the transmissibility and the average number of healthy people exposed in isolation due to this misclassification are straightforward to estimate. With the information available, we will multiply the force of infection by the estimated average number of possible contacts (here, the number of people exposed) per isolated false positive to score this classification error (FP).

However, estimating the number of contacts that a false negative could have in the community remains difficult to pin down. Thus, using Ro, a composite measure that captures both transmissibility and the average number of contacts, offers a way to correct the underestimation of the FN penalty and to rebalance our model's current bias toward specificity.

In short, to correct the model, we will keep the estimated FP penalty as described above; for the FN penalty, we intend to use the Ro value in the absence of any intervention, as reported in the literature.

Thus, in the Methods section of the manuscript, in the paragraph on computing the harm of classification errors, we corrected and rewrote the following sentences:

• On lines 127 to 128, as follows:

We assigned this probability, multiplied by the number of non-EVD contacts exposed in isolation because of this classification error, as a negative payoff; negative because it represents the harm caused by isolation, i.e., iatrogenic harm. For each erroneously isolated false positive, we assumed that the patient and his two family caregivers were non-EVD (i.e., three non-EVD persons exposed in the isolation ward).

• On lines 236 to 238, as follows:

Therefore, we assigned a score equal to minus the anticipated number of Ebola cases that this ruled-out false-negative case would produce in the entire susceptible population (i.e., minus the basic reproduction number, Ro, which accounts for the transmissibility and the typical number of community contacts that this false negative would have).

• We added these statements on lines 239 to 246:

For each erroneously isolated false positive, we assumed that the patient and his two family caregivers were non-EVD (thus, three non-EVD persons in the isolation ward); a value of -0.077 was therefore assigned to each isolated non-EVD case. We hypothesized that the community as a whole would be exposed to Ebola virus infection by false negatives returning to the community. Therefore, we assigned a score equal to minus the anticipated number of Ebola cases that this ruled-out false-negative case would produce in the entire susceptible population, i.e., minus the basic reproduction number, Ro, which accounts for the transmissibility and the typical number of community contacts of this false negative. In an entirely susceptible population, the basic reproduction number is the number of secondary cases that one case would generate.

For each EVD case ruled out, we assigned a value of -2.49, i.e., minus the Ro as estimated by Lewnard et al. [http://dx.doi.org/10.1016/S1473-3099(14)70995-8], as the effectiveness payoff.
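For concreteness, with π the EVD prevalence among suspects and Se/Sp an algorithm's sensitivity and specificity, the expected effectiveness payoff per suspect screened under these corrected values (TP and TN weighted at 1, as in the payoff matrix) can be sketched as:

$$
E = \pi\left[Se\cdot(+1) + (1-Se)\cdot(-2.49)\right] + (1-\pi)\left[Sp\cdot(+1) + (1-Sp)\cdot(-0.077)\right]
$$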

Second, I have very serious concerns about how the effectiveness/payoff of a screening strategy is being defined more generally here. The authors repeatedly state that the effectiveness is defined as the number or fraction of true EVD cases isolated (e.g., in the abstract line 54, header row in Table 4, Methods lines 210-211); however, this is not the definition implemented in the payoff matrix (Table 3) or described later in the Methods (lines 213-215), where true positive and true negative outcomes are in fact weighted equally. The effectiveness is therefore nearly equal to the accuracy of the algorithm (i.e., (TP+TN)/(total screened)), with a negligible contribution from the penalties from FPs and FNs (which are 2 orders of magnitude smaller than the payoffs given to TN and TP, “by assumption”). This can lead to very problematic conclusions since the overwhelming majority of subjects being screened don’t have EVD (prevalence ~6% as given in Table 2), causing the specificity of the test to dominate the effectiveness metric. For example, consider the trivial screening algorithm where all individuals being screened are given a negative test result and sent home. This procedure has a sensitivity of 0 and specificity of 1, leading to an effectiveness payout of approximately 0.94 (calculated as (1-Prev)*(payoff of TN) + Prev*(payoff of FN)). This effectiveness value is higher than the best real algorithm tested in the manuscript! In fact, since the cost of each of these “screenings” would be very low (presumably $0), the methodology used in this manuscript would identify this dummy procedure as the best possible screening test in terms of cost, effectiveness, and the cost-effectiveness ratio. Obviously, this is undesirable behavior (just shutting down EVD isolation units is not a good strategy!). Therefore, it is necessary to come up with a different payoff matrix that doesn’t just mostly optimize the test specificity.
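To make the trivial-algorithm example concrete, here is a minimal sketch of the calculation, assuming the ~6% prevalence from Table 2 and an FN penalty on the pre-revision scale (the exact value is illustrative):

```python
# Reviewer's trivial "send everyone home" strategy: sensitivity 0,
# specificity 1, so every suspect screened is either a TN or an FN.
prev = 0.06          # EVD prevalence among suspects (Table 2)
payoff_tn = 1.0      # payoff assigned to a true negative
payoff_fn = -0.077   # illustrative pre-revision FN penalty (~2 orders below 1)

effectiveness = (1 - prev) * payoff_tn + prev * payoff_fn
print(f"{effectiveness:.2f}")  # ~0.94, above the best real algorithm's 0.86
```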

What would be a better payoff matrix? One option would be to implement the payoff matrix that is repeatedly described in the text but not actually used (“the number of isolated EVD cases (true positives)” lines 210-211), which would be assigning a value of 1 to TP and 0 to all other outcomes. Of course, this would just be proportional to the sensitivity of the screening algorithm and would not account for the specificity at all, so the strategy of isolating everyone who comes in would have the highest value (though also the highest cost). A better strategy, however, might be to quantify all payoffs in terms of monetary cost or benefit. This actually solves two serious issues with the current approach. First, it naturally solves the problem with deciding on how to weight TP and TN benefits that is explained above. TP events will have a benefit defined as the financial benefit to society for their isolation, which I’d suggest defining as the value generated by the effects of the patient receiving supportive care in the unit and extending their lifespan (e.g., see Bartsch et al Pathog Glob Health 2015 https://doi.org/10.1179%2F2047773214Y.0000000169 for estimates of the financial costs of EVD). TN events can be defined as having a payoff of $0. The second problem that this proposed redefinition into monetary units solves is the current mismatch in units between the TP or TN benefits and the FP and FN costs. TP and TN payoffs represent the number of correct screening calls made (assigning them each a value of 1) while FP and FN costs represent the number of additional expected EVD cases generated by the outcome (assigning a value of -1 per expected new EVD case). Therefore, the approach gives a correct negative screening result the same weight as generating a new EVD case that wouldn’t have occurred without an FP/FN result. These two events don’t seem like they should be on the same scale or be equivalent- causing a new infection that wouldn’t have otherwise occurred should be quite a bit more costly, I think! Instead, this current approach makes the FP and FN costs essentially negligible. Assigning FP and FN events monetary costs instead (perhaps by estimating the financial cost of each new EVD case) would put them on the same scale as TP/TN payoffs.
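A minimal sketch of this proposed monetary payoff matrix follows; every dollar value below is a hypothetical placeholder (not an estimate from Bartsch et al or the manuscript), and the expected-case multipliers reuse the penalty values discussed above:

```python
# Hypothetical monetary payoff matrix (all dollar values are placeholders).
cost_per_new_case = 10_000.0           # hypothetical societal cost of one new EVD case
payoffs_usd = {
    "TP": 5_000.0,                     # hypothetical benefit of isolating/treating a true case
    "TN": 0.0,                         # correct rule-out: no benefit, no harm
    "FP": -0.077 * cost_per_new_case,  # expected new cases caused x cost per case
    "FN": -2.49 * cost_per_new_case,
}

prev, se, sp = 0.06, 0.90, 0.80        # hypothetical algorithm characteristics
expected_benefit = (
    prev * (se * payoffs_usd["TP"] + (1 - se) * payoffs_usd["FN"])
    + (1 - prev) * (sp * payoffs_usd["TN"] + (1 - sp) * payoffs_usd["FP"])
)
print(f"expected net benefit per suspect screened: USD {expected_benefit:.2f}")
```

This puts misclassification harms on the same dollar scale as the benefit of a correct isolation, which is the unit mismatch the comment describes.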

Overall, I think the conceptual issues and problematic limiting behavior of the current approach are serious and warrant major changes before publication. However, it’s possible that even after these changes, the conclusions of the manuscript will remain the same. Currently, the general conclusion seems to be that algorithms with higher specificity (i.e., those that use the ECPS as a joint or conditional test) perform better than lower specificity algorithms, not only because they have higher “effectiveness” as currently defined, but also because they have significantly lower cost since they isolate fewer people. The differences in specificity between these two groups of algorithms is so great (~35% vs. >80%) that the conclusion that these tests are more cost effective will almost certainly continue to hold.

Thank you very much for this comment.

As we pointed out above, we recognize that our model underestimated the penalties, especially for FNs, leading to a model that favors only the specificity of the tested screening algorithms. We corrected this by taking into account the average number of non-EVD persons who would be exposed to the disease because of these classification errors (in the isolation ward or in the community).

In addition, valuing effectiveness in monetary terms would mean performing a cost-benefit evaluation.

Yes, we know! Cost-benefit analysis (CBA) is the reference method for economic studies, as it is the most comprehensive and theoretically sound form of economic evaluation, and it has been used as an aid to decision-making in many areas of economic and social policy in the public sector. In this manuscript, however, our objective was to evaluate the cost-effectiveness of the screening algorithms we tested; thus, we did not use CBA, which estimates and totals the equivalent monetary value of the benefits and costs of each screening algorithm. CBA seeks to place monetary values on both the inputs (costs) and the outcomes (benefits) of health care to establish whether they are worthwhile. It requires program consequences to be valued in monetary units, enabling the analyst to compare a program's incremental cost directly with its incremental consequences in commensurate units of measurement, e.g., dollars or pounds.

We chose CEA because, in this study, we only evaluated one pillar of the Ebola control strategies: the screening of Ebola suspects. We did not analyze all the other pillars of the response to the Ebola epidemic, e.g., Ebola-specific treatments for positive patients.

Again, not much is known about Ebola disease yet, and information about the monetary value of each outcome is not yet available. We do not think that performing a CBA would be easy or even plausible, as no extensive analysis of the monetary value of these outcomes currently exists.

In short, we are aware of the problem with CEA and CUA, namely that the numerator and denominator do not have the same units, while performing a CBA has the problem of a lack of monetary data to date.

Therefore, we chose CEA, which can readily be performed with the data sets available.

When more information on the monetary value of effectiveness becomes available, a CBA will be a worthwhile economic evaluation; that will be another manuscript to write.

We nevertheless tried running the model with inputs from the other scenarios you proposed (e.g., assigning a value of 1 to TP and 0 to all other outcomes, or assigning -1 per expected new EVD case), but this produced implausible effectiveness values for all algorithms (mostly near-zero or negative).

To sum up:

Sincerely, we are grateful for all the comments you have made on this manuscript. All of them were fundamental and really improved our manuscript's quality.

________________________________________

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Debra Van Egeren

________________________________________

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Thank you.

We uploaded and analyzed them using PACE before submission.

Decision Letter 3

Jan Rychtář

27 Sep 2023

PONE-D-23-12291R3

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

PLOS ONE

Dear Dr. Tshomba,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

The reviewer is happy with the revision and recommends only minor changes before the manuscript can be accepted for publication.

==============================

Please submit your revised manuscript by Nov 11 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jan Rychtář

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

The reviewer is happy with the revision and recommends only minor changes before the manuscript can be accepted for publication.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: At this point I think the manuscript can be published if the language describing the definition of effectiveness is corrected, and I do not think I need to see this manuscript again. In the abstract (and in many other places in the text, all of which need to be changed), the effectiveness is described as follows:

Our analysis found dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm to be the most cost-effective screening algorithm for EVD, with an effectiveness of 0.86 (e.g., isolating 86% of EVD cases).

The last part ("isolating 86% of EVD cases") is incorrect. The provided definition (isolating X% of true cases) I quoted above is actually the sensitivity of the test, not the effectiveness. The effectiveness formula now being used still has no real-world meaning, but I believe it is mathematically equivalent to using the expected number of EVD cases prevented per individual screened (do NOT use this as the definition either though, as stated this is not the actual definition of that number as it stands). I would instead not provide any real-world definition and simply indicate that it is a metric of test effectiveness.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Debra Van Egeren

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 17;18(10):e0293077. doi: 10.1371/journal.pone.0293077.r008

Author response to Decision Letter 3


4 Oct 2023

Kinshasa, October 3, 2023

Tshomba Oloma Antoine

Institut National de Recherche Biomédicale (INRB)

Kinshasa, Dem. Rep. of Congo,

antotshomba@yahoo.fr

+243 815602451


Jan Rychtář

Academic Editor

PLOS ONE Journal

plosone@plos.org

Dear Editor,

We are resubmitting our manuscript entitled “Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model” as a Research Article for consideration of publication in the PLOS ONE Journal.

First, we want to express our gratefulness to the Editor and Reviewers for their overall very positive comments on our work and their suggestions for improvement. Through this letter, we have tried answering to the best of our knowledge the questions, suggestions and remarks provided by the Editors and Reviewers.

None of the authors has a competing interest to declare, and our manuscript has not been submitted or accepted elsewhere. All authors have contributed to, seen, and approved the final, submitted version of the manuscript.

We have uploaded the following documents:

- The clean version of the Manuscript, file labeled "Manuscript"

- The track changes version of the manuscript, file labeled “Revised Manuscript with Track Changes”.

- The covering letter addressing the editorial and referees’ comments, file labeled “Response to Reviewers”.

- We corrected the definition of effectiveness and made changes where needed in the manuscript text and figure labels.

We would like to thank all of you for the fundamental and helpful remarks and recommendations you have provided on our manuscript.

We look forward to hearing whether this manuscript can be considered of interest for publication in PLOS ONE and remain at your disposal for any required clarifications.

Yours sincerely,

Antoine Tshomba

From: PLOS ONE <em@editorialmanager.com>

To: Antoine Oloma Tshomba

Wed., Sep. 27 at 11:46

PONE-D-23-12291R3

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

PLOS ONE

Dear Dr. Tshomba,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.


Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Thank you very much. We reviewed all our references for retracted papers, but we did not find any.


6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: At this point I think the manuscript can be published if the language describing the definition of effectiveness is corrected, and I do not think I need to see this manuscript again. In the abstract (and in many other places in the text, all of which need to be changed), the effectiveness is described as follows:

Our analysis found dual ECPS as a conditional test with the QuickNavi™-Ebola RDT algorithm to be the most cost-effective screening algorithm for EVD, with an effectiveness of 0.86 (e.g., isolating 86% of EVD cases).

The last part ("isolating 86% of EVD cases") is incorrect. The provided definition (isolating X% of true cases) I quoted above is actually the sensitivity of the test, not the effectiveness. The effectiveness formula now being used still has no real-world meaning, but I believe it is mathematically equivalent to using the expected number of EVD cases prevented per individual screened (do NOT use this as the definition either though, as stated this is not the actual definition of that number as it stands). I would instead not provide any real-world definition and simply indicate that it is a metric of test effectiveness.

Thank you very much for all these fundamental observations.

We agree with you! Mathematically, the effectiveness metric as designed does not have a real-world meaning.

Indeed, because we assigned equal weights to true-positive and true-negative outcomes, the effectiveness fraction can represent the number of suspects correctly classified out of the total number screened (TP plus TN over the total number of suspects screened). However, because we also assigned the consequences of the corresponding misclassification errors (FP and FN) negatively in the formulas as harm, the effectiveness fraction closely approximates the accuracy, but not exactly.

In short, the effectiveness number reported in the manuscript can be interpreted as the “net number of patients correctly classified” (i.e., the accuracy minus the penalty).
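Written out with the equal TP/TN weights and the penalty values from our previous response, this net quantity can be sketched as:

$$
E = \underbrace{\pi\,Se + (1-\pi)\,Sp}_{\text{accuracy}} - \underbrace{\left[\pi(1-Se)\cdot 2.49 + (1-\pi)(1-Sp)\cdot 0.077\right]}_{\text{penalty}}
$$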

So, we added these statements to define the effectiveness on lines 375 to 378 as follows:

This effectiveness fraction reflects the number of EVD suspects correctly classified after taking into consideration the harm brought on by incorrect classifications. It can be seen as the net proportion of patients correctly classified per patient screened.

Additionally, we made changes where needed in the manuscript text and figure labels.


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

We uploaded and checked them with PACE.

Decision Letter 4

Jan Rychtář

5 Oct 2023

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

PONE-D-23-12291R4

Dear Dr. Tshomba,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jan Rychtář

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Jan Rychtář

9 Oct 2023

PONE-D-23-12291R4

Cost-effectiveness of incorporating Ebola prediction score tools and rapid diagnostic tests into a screening algorithm: a decision analytic model

Dear Dr. Tshomba:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Jan Rychtář

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. The detailed description of algorithms tested in the decision tree model.

    (DOCX)

    S2 File. The supplementary appendix.

    (DOCX)

    S3 File. CHEERS checklist.

    (DOCX)

    S1 Table. Cost-effectiveness ratios (USD per EVD case isolated) in relation to variation in the cost of QuickNavi™-Ebola RDT and cost of standard-of-care (SOC).

    (DOCX)

    S1 Fig. Complete decision tree model.

    (TIF)

    S2 Fig. Variations in cost-effectiveness ratios of eight Ebola screening algorithms as a function of sensitivities of the WHO case definition for the suspect and ECPS at -3 points of cut-off.

    A is the effect of variation in the sensitivity of the WHO case definition for the suspect on the efficiency of algorithms. B is the effect of variation in the sensitivity of the ECPS at -3 points of cut-off on the efficiency of algorithms.

    (TIF)

    S3 Fig. Variations in cost effectiveness ratios of the eight Ebola screening algorithms as a function of the cost of standard-of-care and QuickNavi™-Ebola RDT.

    A presents the effect of variation in the cost of standard-of-care on the efficiency of the eight Ebola screening algorithms. B presents the effect of variation in the QuickNavi™-Ebola RDT cost on the efficiency of the eight Ebola screening algorithms.

    (TIF)

    S4 Fig. Variations in the effectiveness and cost of the eight Ebola screening algorithms as a function of the prevalence of Ebola virus disease in the suspected population.

    A depicts the effect of variation in the prevalence of Ebola virus disease on the effectiveness of screening algorithms. B, the effect of variation in the prevalence of Ebola virus on the cost of screening algorithms. The dotted horizontal line shows the threshold value of the prevalence over which the cost of the algorithm changes. Over this threshold of 10% of disease prevalence, the cost of ECPS as a joint or conditional test becomes low. Abbreviations: Alg. = algorithm; ECPS = extended clinical prediction score; EVD = Ebola virus disease.

    (TIF)

    S5 Fig. Incremental cost-effectiveness of each algorithm compared to the WHO case definition algorithm (Algorithm 1) during 1000 iterations of Monte Carlo simulation at a WTP threshold of USD 1,752.3.

    The ellipse represents 95% confidence points. The diagonal dashed line represents ICERs at a WTP threshold of USD 1,752.3. Points to the right of this dashed line are considered cost-effective. The dotted horizontal line shows an incremental cost of USD 0; points below this line represent iterations in which an algorithm was cost saving compared with Algorithm 1. This figure does not present all simulations of algorithms compared to Algorithm 1; those not presented here were cost saving in 100% of simulations compared to Algorithm 1 at this WTP threshold. Green points: ICERs that fall below the WTP line (the maximum acceptable ICER) in Monte Carlo simulations; the algorithm is considered cost-effective. Red points: ICERs that fall above the WTP line; the algorithm is considered costly and less effective. Abbreviations: Alg. = algorithm; WTP = willingness to pay; ICER = incremental cost-effectiveness ratio.

    (TIF)

    S1 Data

    (ZIP)

    Attachment

    Submitted filename: Response to Reviewers1.docx

    Data Availability Statement

    All relevant data are within the paper.


    Articles from PLOS ONE are provided here courtesy of PLOS
