PLOS Neglected Tropical Diseases. 2024 Apr 10;18(4):e0012077. doi: 10.1371/journal.pntd.0012077

Diagnostic accuracy of DPP Fever Panel II Asia tests for tropical fever diagnosis

Sandhya Dhawan 1, Sabine Dittrich 2,3,¤, Sonia Arafah 2, Stefano Ongarello 2, Aurelian Mace 4, Siribun Panapruksachat 4, Latsaniphone Boutthasavong 4, Aphaphone Adsamouth 4, Soulignasak Thongpaseuth 4, Viengmon Davong 4, Manivanh Vongsouvath 4, Elizabeth A Ashley 3,4, Matthew T Robinson 3,4, Stuart D Blacksell 1,3,4,*
Editor: Thomas C Darton
PMCID: PMC11034646  PMID: 38598549

Abstract

Background

Fever is the most frequent symptom in patients seeking care in South and Southeast Asia. The introduction of rapid diagnostic tests (RDTs) for malaria continues to drive patient management and care. Malaria-negative cases are commonly treated with antibiotics without confirmation of bacteraemia. Conventional laboratory tests for differential diagnosis require skilled staff and appropriate access to healthcare facilities. In addition, introducing single-disease RDTs instead of conventional laboratory tests remains costly. To overcome some of the delivery challenges of multiple separate tests, a multiplexed RDT with the capacity to diagnose a diverse range of tropical fevers would be a cost-effective solution. In this study, a multiplex lateral flow immunoassay (DPP Fever Panel II Assay) that can detect serum immunoglobulin M (IgM) and specific microbial antigens of common fever agents in Asia (Orientia tsutsugamushi, Rickettsia typhi, Leptospira spp., Burkholderia pseudomallei, Dengue virus, Chikungunya virus, and Zika virus) was evaluated.

Methodology/Principal findings

Whole blood (WB) and serum samples from 300 patients with undefined febrile illness (UFI) recruited in Vientiane, Lao PDR were tested using the DPP Fever Panel II, which consists of an Antibody panel and an Antigen panel. To compare reader performance, results were recorded using two DPP readers, the DPP Micro Reader (Micro Reader 1) and the DPP Micro Reader Next Generation (Micro Reader 2). WB and serum samples were run on the same fever panel and read on both micro readers to compare results. ROC analysis and equal variance analysis were performed to inform the diagnostic validity of the test compared against the respective reference standards of each fever agent (S1 Table). Overall, better AUC values were observed in whole blood results. No significant difference in AUC performance was observed when comparing whole blood and serum sample testing, except when testing for R. typhi IgM (p = 0.04), Leptospira IgM (p = 0.02), and Dengue IgG (p = 0.03). Linear regression showed R2 values of ~0.7 (approximately 70% agreement) across WB and serum samples, except when testing for leptospirosis and Zika, where the R2 values were 0.37 and 0.47, respectively. No significant difference was observed between the performance of Micro Reader 1 and Micro Reader 2, except when testing for the following targets: Zika IgM, Zika IgG, and B. pseudomallei CPS Ag.

Conclusions/Significance

These results demonstrate that the diagnostic accuracy of the DPP Fever Panel II is comparable to that of commonly used RDTs. The optimal cut-off depends on the intended use of the test and the desired sensitivity and specificity. Further studies are required to validate the use of these cut-offs in other endemic regions. This multiplex RDT offers diagnostic benefits in areas with limited access to healthcare and has the potential to improve field testing capacities. This could improve tropical fever management and reduce the public health burden in endemic low-resource areas.

Author summary

Tropical fevers, specifically those caused by non-malarial infectious agents, contribute to considerable morbidity and mortality in the Asia-Pacific region. Diagnosis of these pathogens is challenging since the clinical signs are often indistinguishable. Conventional laboratory tests to differentiate between tropical diseases require substantial infrastructure and experienced staff, limiting access to accurate tests in low-resource endemic regions. Rapid diagnostic tests (RDTs) offer an affordable solution for disease management and patient care. Although RDTs are also available for detecting non-malarial pathogens, there are financial and accessibility issues in establishing multiple separate tests in resource-constrained regions. To overcome these challenges, a multi-detection diagnostic platform with the capacity to diagnose a diverse range of tropical fevers would be a solution. This study aimed to evaluate the accuracy of an easier-to-use multiplex lateral flow immunoassay (DPP Fever Panel II Assay) that can detect IgM antibodies and specific antigens of common tropical diseases in Asia (scrub typhus, murine typhus, leptospirosis, melioidosis, dengue fever, chikungunya, and Zika virus). The test offers diagnostic accuracy comparable to commercially available tests, as well as some reference tests, and performs with equivalent accuracy on both whole blood and serum samples. If the fever panel were used as a stand-alone test for acute febrile illness diagnosis, cut-offs would need to be adjusted depending on the use of the test and the desired sensitivity and specificity. There is a need to investigate the use of these cut-offs in other endemic regions, which could improve the rate of tropical fever diagnosis in low-resource settings.

Introduction

Tropical fever diagnosis has long perplexed healthcare professionals [1,2]. It is well-established that infectious agents are the primary cause of fever-related illness worldwide. In addition to globally prevalent agents, various pathogens are restricted to specific geographical regions and largely contribute to fever epidemiology in resource-limited settings [3]. In South and Southeast Asia, most of the population lives in rural areas, where poverty rates are high, and healthcare access is limited [4]. Diagnosing and treating diseases in these areas can be challenging due to the limited data available on the causes, resulting in incorrect treatment, including the unnecessary use of antimicrobials. However, it is well documented that febrile illnesses account for substantial morbidity and mortality in these regions [5,6].

While fever is the most frequent and debilitating clinical symptom in the tropics, measures to identify the spectrum of tropical fever aetiology and implement appropriate management measures have been limited [2]. This is especially true for non-malarial febrile illnesses. Clinically differentiating between common tropical diseases is challenging because the clinical presentation of fever-causing pathogens is similar. The lack of specific early presentation confounds diagnosis and subsequent treatment [2,7].

The use of rapid diagnostic tests (RDTs) for the early detection of malaria parasites has become common practice over the last decade and aided in improving malaria point-of-care testing globally [8]. As a result, improved case management and control measures significantly decreased the incidence of malarial fever [9], whereas other fever aetiologies proportionally increased [10]. Although single-plex qualitative RDTs for detecting non-malarial fevers are available, there are significant financial and access issues in establishing RDTs for numerous tropical pathogens, both at the patient management and healthcare system level [7]. Once malaria is ruled out, healthcare practitioners are unable to provide further testing and treatment because they receive insufficient training, support, and compensation [2,4,11,12]. As such, curable bacterial infections are often missed during diagnosis [4,13,14], and empiric antibiotic treatments are routinely administered [10,14]. Unnecessary antibiotic use acts as a driver for antimicrobial resistance across communities [15,16]. In low-resource settings where access to laboratory and human resource capacity is constrained, RDTs are preferred for diagnosis because of their affordability and ease of use.

However, RDT kits are designed with set cut-off values that often compromise sensitivity for specificity; in fact, this is a challenge of many serological tests [17]. Thresholds are often selected based on limited samples from one or two regions and often do not take into account varying background seropositivity across different countries, resulting in suboptimal test performance when used outside of the regions tested [7]. The use of RDTs of unknown quality is also a common problem. While highly sensitive RDTs are vital, tests with low specificity have limited utility in clinical and public health decision-making. Low specificity can lead to high misdiagnosis rates, inappropriate use of antibiotics, and undertreatment of bacterial infections [18–21]. In addition, tests with low specificity can also distort the accuracy of disease estimates, which further hinders the effectiveness of public health response measures [7,18–21].

To overcome some of the delivery challenges of multiple separate tests, a multiplexed RDT with the capacity to diagnose a diverse range of tropical fevers would be a solution. A multiplex assay could deliver significant advantages over current single-plex qualitative RDTs, as it would enable the simultaneous detection and differentiation of numerous infections with comparable clinical manifestations. Additionally, if such a tool were quantitative, unlike current qualitative RDTs, region-specific cut-offs could be used to accomplish defined objectives. Quantitative readings for specific antigens can also serve as indices of severity, as has been shown for histidine-rich protein 2 (HRP2) in malaria [22], capsular polysaccharide (CPS) in melioidosis [23], and non-structural protein 1 (NS1) in dengue [24].

In this study, a multiplex lateral flow immunoassay (DPP Fever Panel II Assay Asia, Chembio, Inc.) that can detect serum immunoglobulin M (IgM) and specific microbial antigens of common fever agents in Asia (Orientia tsutsugamushi, Rickettsia typhi, Leptospira spp., Burkholderia pseudomallei, Dengue virus, Chikungunya virus, and Zika virus) was evaluated. The objectives were to (i) assess the diagnostic accuracy of the test in a clinical setting representative of the intended use setting, (ii) compare test performance across whole blood and serum samples, and (iii) assess reader performance variability between two types of micro readers, the DPP Fever Panel II Asia Micro Reader (Micro Reader 1) and the DPP Fever Panel II Asia Micro Reader Next Generation (Micro Reader 2).

Methods

Ethics statement

The UI-study was approved by the Oxford Tropical Research Ethics Committee (OxTREC, 006–07), and the National Ethics Committee for Health Research in Lao PDR (049/NECHR and 046/NECHR), with approval to use leftover specimens for further research. All patients provided written consent for use of leftover specimens.

Study population

Specimens were obtained from adult patients (>15 years old) enrolled in the “Prospective study of the causes of fever amongst patients admitted to Mahosot Hospital, Vientiane, Lao PDR” (UI-study) between November 2019 and October 2020. Mahosot Hospital is a main primary-tertiary public hospital in Vientiane (the capital of Laos) and receives referrals from across the country. Patients who had fever (≥38°C) within 24 hours of admission or at enrolment, an illness duration of <1 week, a request for blood culture, and leftover paired whole blood and serum volumes of >250 μl (following standard diagnostic testing) were enrolled in this study. Samples were collected from leftover specimens on the day they were received and were stored at 4°C for a maximum of 24 hours prior to testing in this current study.

DPP fever panel investigation

Whole blood (WB) and serum samples from 300 patients recruited in Vientiane, Lao PDR were tested using the DPP Fever Panel II test, consisting of an Antibody panel and an Antigen panel. DPP tests were repeated on samples if they failed. For each patient, the testing procedure followed the manufacturer’s instructions and was done with both paired blood and serum specimens (to compare specimen suitability), using 50 μl of sample for the antigen panel and 10 μl for the antibody panel. To compare reader performance, results were recorded using two DPP readers, the DPP Micro Reader (Micro Reader 1) and the DPP Micro Reader Next Generation (Micro Reader 2). Whilst diagnostic staff were not blinded to the results of the comparator (S1 Table) and DPP tests, review bias was minimized: the DPP test results do not require interpretation by an operator, as only numerical values are displayed by the reader; result interpretation was done during data analysis and was not given to the operator; and pre-specified thresholds for positivity were used for ELISA tests. Targets tested included O. tsutsugamushi IgM, R. typhi IgM, Leptospira spp. IgM, B. pseudomallei CPS Ag, Dengue IgM, Dengue IgG, Dengue NS1, Chikungunya IgM, Zika IgM, and Zika IgG (Table 1). The test is not yet commercially available; the cutoff values have not been finalised.

Table 1. Diagnostic performance of the DPP II Fever Panel Asia on serum versus whole blood.

Summary statistics (cut off, sensitivity, specificity, and AUC values) for the diagnostic performance of whole blood and serum samples run on the panel are depicted. True positives, referring to positives by reference test, have been included as well. Results from both micro readers are shown.

Pathogen Whole Blood Serum
Micro Reader 1 Total True positives Cut-off Sensitivity (%) Specificity (%) AUC (95% CI) Total True positives Cut-off Sensitivity (%) Specificity (%) AUC (95% CI)
O. tsutsugamushi IgM 291 21 ≥4 57.1 59.6 0.61 (0.48–0.74) 291 21 ≥4 42.9 55.2 0.49 (0.34–0.65)
R. typhi IgM 291 59 ≥16 76.3 73.3 0.79 (0.72–0.86) 291 59 ≥19 69.5 70.7 0.76 (0.69–0.84)
Leptospira spp. IgM 291 52 ≥21 55.8 63.6 0.60 (0.51–0.70) 291 52 ≥19 50.0 54.0 0.53 (0.44–0.62)
Dengue IgM 295 36 ≥7 83.3 74.1 0.85 (0.78–0.92) 295 36 ≥9 75.0 76.8 0.81 (0.73–0.90)
Dengue IgG 295 89 ≥5.6 60.7 67.0 0.66 (0.60–0.73) 295 89 ≥6 60.7 61.7 0.64 (0.57–0.71)
Chikungunya IgM 293 14 ≥5.3 71.4 85.0 0.82 (0.67–0.95) 293 14 ≥6.1 78.6 88.2 0.86 (0.72–0.99)
Zika IgM 291 8 ≥3.6 100.0 84.5 0.97 (0.93–1.00) 291 8 ≥4.5 87.5 90.8 0.94 (0.89–1.00)
Zika IgG 285 66 ≥2.4 59.1 57.5 0.64 (0.56–0.71) 285 66 ≥1.9 50.0 52.1 0.53 (0.45–0.60)
Dengue NS1 294 36 ≥25 83.3 93.4 0.88 (0.80–0.97)
B. pseudomallei CPS Ag 283 8 ≥5 25.0 52.7 0.65 (0.13–0.56)
Micro Reader 2
O. tsutsugamushi IgM 291 21 ≥2.9 66.7 63.0 0.71 (0.62–0.79) 290 21 ≥2.7 57.1 59.1 0.59 (0.45–0.72)
R. typhi IgM 291 59 ≥22 72.9 77.6 0.79 (0.72–0.86) 291 59 ≥22 72.9 67.5 0.75 (0.68–0.83)
Leptospira spp. IgM 291 52 ≥21 57.7 50.2 0.59 (0.49–0.69) 290 52 ≥22 51.9 52.9 0.53 (0.44–0.62)
Dengue IgM 295 36 ≥8 77.8 78.0 0.84 (0.77–0.92) 295 36 ≥8.5 80.6 74.5 0.84 (0.77–0.91)
Dengue IgG 295 89 ≥3.8 70.8 54.9 0.68 (0.61–0.75) 295 89 ≥5.4 60.7 58.3 0.64 (0.57–0.71)
Chikungunya IgM 293 8 ≥4.1 78.6 78.1 0.82 (0.70–0.95) 293 14 ≥4.5 71.4 72.4 0.82 (0.69–0.94)
Zika IgM 291 8 ≥3.7 100.0 76.7 0.94 (0.87–1.00) 290 8 ≥8.3 75.0 98.6 0.91 (0.79–1.00)
Zika IgG 285 66 ≥2.5 50.0 49.3 0.51 (0.42–0.59) 284 66 ≥3 53.9 53.4 0.53 (0.44–0.61)
Dengue NS1 294 36 ≥35 86.1 93.4 0.93 (0.44–0.61)
B. pseudomallei CPS Ag 294 8 ≥6 75.0 72.4 0.71 (0.29–0.67)

Reference diagnostics

True positives were determined as positives by reference diagnostic tests. The reference diagnostic methods for each pathogen are outlined in S1 Table. For O. tsutsugamushi IgM and R. typhi IgM detection, an in-house ELISA was used as the reference assay, while for Leptospira spp. IgM detection, the SERION ELISA classic Leptospira IgM test was used. The reference test for B. pseudomallei CPS detection was blood culture. For both Dengue IgM and NS1 detection, the SD Bioline Dengue Duo IgM/IgG/NS1 (CE-marked) assay was used as the reference. For Chikungunya IgM, Zika IgM, and Dengue IgG, the DPP Zika/Chikungunya/Dengue multiplex test (CE-marked) was used as the reference assay. Where testing was carried out retrospectively, diagnostic staff were blinded to DPP results.

Statistical analysis

The data and statistical analyses for this study were performed using Stata/BE 17.0 and R (version 4.1.0). The diagnostic performance of the assays was assessed via sensitivity and specificity. Receiver operating characteristic (ROC) curves were created using the pROC and ROCR packages in R. The area under the curve (AUC) was examined to inform the diagnostic validity of the test and to advise an appropriate region-specific diagnostic cut-off. An optimal cut-off was selected by maximising both sensitivity and specificity indices from the ROC analysis [25]. A test of equal variance on the AUCs was performed using the ‘roccomp’ and ‘rocgold’ commands in Stata to inform performance variability between WB and serum samples. Chi-squared hypothesis testing and linear regressions were also conducted to assess the statistical significance of sample type variability and reader variability.
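The cut-off selection and AUC estimation described above can be sketched in a few lines. The study used the pROC/ROCR packages in R and Stata's 'roccomp'; the pure-Python sketch below is only an illustration of the same logic, using the Youden index (sensitivity + specificity - 1) as one common way to maximise both indices. The reader values and labels are made up for illustration and are not study data.

```python
def roc_points(scores, labels):
    """Empirical ROC: for each candidate threshold t, classify score >= t
    as positive and compute sensitivity and specificity."""
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((t, tp / (tp + fn), tn / (tn + fp)))
    return points

def auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a random
    positive scores higher than a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(scores, labels):
    """Cut-off maximising Youden's J = sensitivity + specificity - 1."""
    return max(roc_points(scores, labels), key=lambda p: p[1] + p[2] - 1)

# Hypothetical reader values (arbitrary illustration, not study data)
scores = [2, 3, 4, 5, 7, 9, 12, 15, 20, 25]
labels = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]
cut, sens, spec = youden_cutoff(scores, labels)
# On this toy data: AUC is 0.84; cut-off 5 gives sensitivity 1.0, specificity 0.6
```

In practice, pROC's `coords(roc, "best")` performs the same Youden-style selection; the sketch only makes the arithmetic behind the reported cut-offs, sensitivities, specificities, and AUCs explicit.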

Results

Diagnostic accuracy of the DPP Fever Panel in whole blood samples

At an optimal cut-off where sensitivity and specificity are at a suitable compromise, the DPP II O. tsutsugamushi IgM test sensitivity was between 57.1–66.7%, while the specificity was approximately 59.6–63.0% (Table 1). R. typhi IgM sensitivity was 72.9–76.3%, and specificity was between 73.3–77.6% at the optimal cut-off value. Leptospira IgM sensitivity at the optimal cut-off was 55.8–57.7%, and specificity was low at 50.2–63.6% (Table 1). Dengue IgM sensitivity at its optimal range was between 77.8–83.3%, while the specificity was 74.1–78.1%. In comparison, sensitivity for Dengue IgG detection was between 60.7–70.8% and specificity 54.9–67.0% at the optimal cut-off. At the optimal cut-off, Chikungunya IgM detection provided a sensitivity and specificity of 71.4–78.6% and 78.1–85.0%, respectively. Zika IgM detection had a sensitivity of 100%, at 76.7–84.5% specificity. Sensitivity and specificity for Zika IgG detection were compromised: at the optimal cut-off, sensitivity was 50.0–59.1% and specificity 49.3–57.5% (Table 1).

Diagnostic accuracy of the DPP Fever Panel in serum samples

At similar cut-off ranges (Table 1), O. tsutsugamushi IgM detection in serum samples had an optimal sensitivity of 42.9–57.1% and a specificity of 55.2–59.1%. R. typhi IgM sensitivity and specificity were similar, with a sensitivity of 69.5–72.9% and a specificity of 67.5–70.7%. Leptospira IgM detection sensitivity at the optimal cut-off was 50.0–51.9%, and specificity was 52.9–54.0%. However, test performance for B. pseudomallei CPS Ag displayed greater variance, with sensitivity ranging from 25–75% and specificity ranging from 53–72% (Table 1). Dengue IgM sensitivity and specificity at its optimal range were between 75.0–80.6% and 74.5–76.8%, respectively. Dengue IgG sensitivity at the optimal cut-off was 60.7%, and specificity was between 58.3–61.7%. Dengue NS1, on the other hand, provided a sensitivity of 83–86% and a specificity of 93.4%. Chikungunya IgM detection had a sensitivity of 71.4–78.6% and a specificity of 72.4–88.2%. At the optimal cut-off, Zika IgM detection was 75.0–78.6% sensitive and 90.8–98.6% specific. For Zika IgG detection, sensitivity and specificity were compromised: sensitivity was optimal at 50.0–53.9%, with a specificity of 52.1–53.4% (Table 1).

AUROC analysis

ROC analysis was performed to assess the diagnostic accuracy of the DPP test performance in whole blood samples against serum samples (Fig 1). O. tsutsugamushi IgM detection via the DPP Fever Panel II provided AUC values of 0.61 and 0.71 when run using WB samples and 0.49 and 0.59 with serum samples (Fig 1A). R. typhi IgM detection had an AUC of 0.79 across both readers for WB, with AUCs of 0.76 and 0.75 for serum detection (Fig 1B). The AUC for Leptospira IgM detection was 0.60 and 0.59 in WB, while in serum it was 0.53 across both readers (Fig 1C). Dengue IgM detection in WB resulted in AUCs of 0.85 and 0.84 and was 0.81 and 0.84 using serum samples (Fig 1E). In comparison, Dengue IgG had an AUC of 0.66 and 0.68 in WB, while IgG detection in serum provided an AUC value of 0.64 (Fig 1F). Chikungunya IgM performed adequately, with an AUC of 0.82 across both readers for WB detection and 0.86 and 0.82 for serum detection (Fig 1H). Zika IgM detection in WB provided AUCs of 0.97 and 0.94, and AUC values of 0.91 and 0.94 in serum detection (Fig 1I). On the other hand, Zika IgG had AUC values of 0.64 and 0.51 using WB samples and 0.53 when conducted on serum samples (Fig 1J).

Fig 1. Receiver Operating Characteristic (ROC) analysis for WB and serum samples.


Area under the curve (AUC) values for WB and serum samples across both readers are shown. No WB samples were available for Dengue NS1 assay and B pseudomallei CPS Ag assay. Legend: embedded in the graph.

B. pseudomallei CPS antigen detection resulted in an AUC of 0.65 and 0.71 on readers 1 and 2, respectively (Fig 1D). The serum samples were also tested for Dengue NS1 detection via the DPP Fever Panel, which provided an AUC value of 0.88 with Micro Reader 1 and 0.93 using Micro Reader 2 (Fig 1G).

Pairwise comparison of whole blood and serum test performance

Overall, better AUC values were observed when WB samples were tested (Table 1). AUC values for whole blood and serum were compared against the gold standard reference results, and a test of equality of ROC areas was performed. AUC values for whole blood and serum ranged from 0.51 to 0.95, with WB–serum differences within ±0.1 units for each pathogen (Table 2). No significant difference in AUC performance was observed when comparing whole blood and serum sample testing, except when testing for R. typhi IgM (p = 0.04), Leptospira IgM (p = 0.02), and Dengue IgG (p = 0.03) (Table 2). The AUC for R. typhi IgM in WB samples was 0.79, versus 0.75 in serum. The AUC for Leptospira IgM in WB samples was 0.59, versus 0.53 in serum. Dengue IgG WB samples had an AUC of 0.67, versus 0.64 for serum. Linear regression analysis was also conducted to compare WB and serum sample result variance; all outputs were significant. The R2 values generally indicated ~70% agreement across WB and serum samples, except when testing for leptospirosis and Zika, where the R2 values were 0.37 and 0.47, respectively (Table 3).

Table 2. Analysis of Equal Variance of WB and serum AUC values.

Pairwise comparison of area under the curve values for whole blood and serum was performed via a chi-square test to deduce variance in performances. Results from both readers (Micro Reader 1 and 2) were compiled to inform robust results.

Pathogen Total AUC
(WB)
AUC
(Serum)
χ2 value p-value
O. tsutsugamushi IgM 581 0.62 0.54 3.91 0.05
R. typhi IgM 581 0.79 0.75 4.31 0.04
Leptospira spp. IgM 581 0.59 0.53 5.64 0.02
Dengue IgM 590 0.85 0.83 2.62 0.11
Dengue IgG 590 0.67 0.64 4.76 0.03
Chikungunya IgM 586 0.81 0.82 0.09 0.77
Zika IgM 581 0.95 0.92 1.26 0.26
Zika IgG 569 0.58 0.51 3.75 0.05

Table 3. Linear regression analysis of WB and serum diagnostic performance.

WB and serum sample results were directly compared via linear regression to deduce test performance variance across both sample types. Results from both micro readers (1 and 2) were compiled to inform robust results.

Pathogen Total Standard Error R2 (p-value) 95% CI
O. tsutsugamushi IgM 581 0.025 0.72 (0.00) 0.913–1.010
R. typhi IgM 581 0.021 0.74 (0.00) 0.796–0.878
Leptospira spp. IgM 581 0.035 0.37 (0.00) 0.589–0.727
Dengue IgM 590 0.016 0.77 (0.00) 0.680–0.744
Dengue IgG 590 0.016 0.77 (0.00) 0.680–0.744
Chikungunya IgM 586 0.015 0.76 (0.00) 0.600–0.658
Zika IgM 581 0.024 0.47 (0.00) 0.486–0.579
Zika IgG 581 0.024 0.47 (0.00) 0.486–0.579
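The agreement analyses in Tables 3 and 4 amount to ordinary least-squares regression of one set of readings on the other, reporting the slope's standard error and R2. A minimal pure-Python sketch of that computation follows; the input values are illustrative, not study data.

```python
import math

def linreg(x, y):
    """Ordinary least squares y = a + b*x; returns the slope, intercept,
    R-squared, and the standard error of the slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                     # slope
    a = my - b * mx                   # intercept
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    ss_res = sum(r ** 2 for r in resid)
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot          # proportion of variance explained
    se_b = math.sqrt(ss_res / (n - 2) / sxx)  # standard error of slope
    return b, a, r2, se_b

# Toy paired readings (e.g. Micro Reader 1 vs Micro Reader 2), not study data
slope, intercept, r2, se = linreg([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
# slope 0.6, intercept 2.2, R^2 0.6 on this toy data
```

An R2 near 1 with a slope near 1 indicates the two readers (or sample types) track each other closely; a low R2, as seen for Zika and Leptospira, means the paired readings diverge even when neither method is significantly better on average.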

Pairwise comparison of reader performance

Whole blood

Linear regression analysis and a ROC test of equal variance were performed to compare performance across both readers. There was no significant difference between reader performances for O. tsutsugamushi IgM (p = 0.046), R. typhi IgM (p = 0.872), Leptospira IgM (p = 0.317), Dengue IgM (p = 0.466), Dengue IgG (p = 0.209), Chikungunya IgM (p = 0.930), and Zika IgM (p = 0.200). There was a significant difference in reader performance for Zika IgG detection (p = 0.004), although linear regression of the reader results suggests similar R2 values for Zika IgM and IgG detection, at 0.662 and 0.664, respectively (Table 4).

Table 4. Linear regression analysis of reader performance.

Reader results from both Micro Reader 1 and Micro Reader 2 were directly compared via linear regression to deduce test performance variance.

Pathogen Total Standard Error R2 (p-value) 95% CI
WB
O. tsutsugamushi IgM 291 0.005 0.97 (0.00) 0.767–0.788
R. typhi IgM 291 0.009 0.96 (0.00) 0.741–0.776
Leptospira spp. IgM 291 0.006 0.98 (0.00) 0.781–0.805
Dengue Ab 295 0.011 0.94 (0.00) 0.756–0.800
Chikungunya IgM 293 0.019 0.82 (0.00) 0.628–0.700
Zika IgM 291 0.026 0.66 (0.00) 0.567–0.669
Zika IgG 285 0.026 0.67 (0.00) 0.569–0.672
Serum
O. tsutsugamushi IgM 290 0.012 0.94 (0.00) 0.760–0.805
R. typhi IgM 290 0.011 0.95 (0.00) 0.738–0.780
Leptospira spp. IgM 290 0.008 0.97 (0.00) 0.747–0.779
B. pseudomallei CPS Ag* 281 0.062 0.00 (0.81) -0.137–0.107
Dengue NS1* 291 0.011 0.93 (0.00) 0.684–0.728
Dengue Ab 295 0.011 0.93 (0.00) 0.672–0.716
Chikungunya IgM 293 0.010 0.95 (0.00) 0.728–0.766
Zika IgM 290 0.032 0.44 (0.00) 0.410–0.534
Zika IgG 284 0.032 0.44 (0.00) 0.411–0.536

Serum

There was no significant difference between reader performances for R. typhi IgM (p = 0.114), Leptospira IgM (p = 0.910), Dengue IgM (p = 0.08), Dengue IgG (p = 0.904), Dengue NS1 (p = 0.124), Chikungunya IgM (p = 0.525), Zika IgM (p = 0.550), and Zika IgG (p = 0.944) (Table 4). There was 94.1% agreement between reader results (R2 = 0.941) for O. tsutsugamushi IgM detection; however, a significant difference between reader performances was detected (p = 0.05). B. pseudomallei CPS antigen showed no significant difference across reader performance (p = 0.411); linear regression revealed an R2 value of 0.0002, but this output was not significant (p = 0.811). While there was no statistical difference between reader performance for Zika IgM and IgG detection, the agreement between readers was limited: linear regression displayed R2 values of 0.435 and 0.439 for Zika IgM and IgG detection, respectively (Table 4).

Discussion

This study evaluated the DPP Fever Panel II for the multi-analyte detection of scrub typhus, murine typhus, leptospirosis, melioidosis, dengue fever, chikungunya, and Zika virus. The two micro readers (Micro Reader 1 and 2) were screened for performance variability, and the diagnostic platform was assessed using both whole blood and serum samples. Here, test performance was assessed using cutoffs recommended by the manufacturer and region-specific cutoffs calibrated for an optimal level of sensitivity and specificity in endemic settings.

The DPP assay performed poorly when compared to established O. tsutsugamushi RDTs, which had greater overall sensitivity (66–84%) and specificity (93–99%) [26–28]. Since it remains unclear how long IgM and IgG antibodies persist in human scrub typhus, samples taken early after symptom onset may not have detectable levels of IgM antibodies [28–30]. Due to the antigenic diversity of O. tsutsugamushi strains, cutoffs should be re-evaluated regionally, and local strains included in the antigen pool should be continually updated for accurate clinical diagnosis [31,32].

The DPP assay component for R. typhi performed comparably to other RDTs (sensitivity: ~51–60%, specificity: ~94–100%) [33–36], though on the lower end of specificity (67–78%). Few advances have been made in rapid tests for murine typhus diagnosis [34,37], and it is speculated that the cause of low sensitivity could be the geographic antigenic diversity of R. typhi strains, as is the case for O. tsutsugamushi [38].

The DPP Leptospira spp. IgM assay performed similarly to other RDTs available for leptospirosis diagnosis (sensitivity: 17.9–75%, specificity: 62.1–97.7%) [39–43], albeit at the lower end of specificity. Despite this, the DPP assay obtained consistent sensitivity (~50–58%) and specificity (~50–63%) across sample types, and the diagnostic performance was comparable to diagnostic tools previously used among healthy slum populations to detect leptospirosis on admission [44]. Commercially available RDTs for the detection of Leptospira spp. remain limited in their diagnostic accuracy, with none reliably delivering a sensitivity or specificity of >80% on admission [39]. According to published studies, the circulation of location-specific leptospiral serovars contributes to regional variances in background antibody levels [41,45,46], and some serovars may impact the diagnostic accuracy of RDTs [47]. However, the reason region-specific serovars cause more severe illness remains unknown. It is also important to note that anti-Leptospira IgM antibodies are not detectable until 4–5 days after symptom onset (S2 and S3 Tables) [48,49], and IgM can persist in the blood for years after infection [50,51]. Assays need to be adjusted to local settings, and samples should be collected after a period of seroconversion to avoid false positive results and ensure higher diagnostic accuracy.

The sensitivity of the DPP B. pseudomallei CPS Ag (25%) was comparable to commercially used RDTs for melioidosis (31%) [52], although Micro Reader 2 provided a higher sensitivity (75%) using the regional cutoff. It is well described that antigen test accuracy in unamplified blood is limited compared to blood culture [53], and only serum samples were tested for B. pseudomallei CPS Ag in this study. However, as demonstrated by the DPP test performance, the CPS antigen is not recommended for melioidosis serodiagnosis, as its sensitivity remains lower than that of culture, the current gold standard (60%) [54,55].

It should be noted that previous studies demonstrate clear associations between CPS positivity and fatality among melioidosis patients [7,23]. By examining the relationship between CPS positives and disease severity/mortality, the biomarker capacity of the DPP CPS antigen test can be investigated further. The Chembio-recommended cutoffs provide a CPS test of higher specificity (95–96%), which could have high utility in clinical settings to distinguish mild self-limiting illness from severe disease if validated and studied further.

The sensitivity and specificity of the DPP dengue NS1 antigen and IgM antibody assays were equivalent to those of other RDTs. The Dengue NS1 assay provided greater sensitivity (83–90%) in diagnosis compared to commercially available tests (~45–85%) [56–62]. Commercially available Dengue IgM RDTs provide a diverse range of sensitivity (~20–82%), but generally, studies demonstrate diagnostic sensitivity to be on the lower end of the spectrum [56–59,61]. The DPP Dengue IgM assay specificity is comparable to other IgM RDTs; however, specificity is reduced to ~70% if sensitivity is prioritised. The variability in diagnostic accuracy of the DPP Dengue IgM target across WB and serum samples was, however, inconclusive (Table 4).

Further validation studies must be done to confirm the disparities between whole blood and serum samples. Cutoffs should be adjusted appropriately to represent the region’s background seropositivity to achieve desired clinical outcomes. The DPP Dengue IgG assay does not perform as well as the IgM assay and is not comparable to the sensitivity and specificity of readily available Dengue IgG RDTs. There was also a significant difference in assay performance across WB and serum samples. This may be attributed to the average duration of illness, which was observed to be ~6.4 days (S2 and S3 Tables). IgM antibodies are detectable in only ~50% of patients 3–5 days after symptom onset [63], and IgG develops later and may not be detectable until up to 2 weeks after onset of symptoms. A combination of the NS1, IgM, and IgG tests could provide a higher level of accuracy for dengue fever diagnosis. Consistent with previous research, pooling all three analytes, or a combination of two of them, bestowed optimal diagnostic performance (sensitivity ~90%, specificity ~89%) and proved to be of great clinical utility in many low-technology settings [57,61,64–66].
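The effect of pooling analytes can be illustrated with the standard parallel- and serial-testing formulas, under the simplifying (and in practice unrealistic) assumption that the tests err independently; real analytes are correlated, so actual combined performance, such as the pooled figures cited above, will differ from this idealisation. The figures used below are illustrative, not study estimates.

```python
def combine_parallel(sens, spec):
    """'Positive if any test is positive' rule under an independence
    assumption: sensitivity rises, specificity falls."""
    miss = 1.0
    for s in sens:
        miss *= (1.0 - s)        # probability every test misses a true case
    true_neg = 1.0
    for sp in spec:
        true_neg *= sp           # probability every test clears a true negative
    return 1.0 - miss, true_neg

def combine_serial(sens, spec):
    """'Positive only if all tests are positive' rule under independence:
    specificity rises, sensitivity falls."""
    hit = 1.0
    for s in sens:
        hit *= s                 # probability every test flags a true case
    false_pos = 1.0
    for sp in spec:
        false_pos *= (1.0 - sp)  # probability every test flags a true negative
    return hit, 1.0 - false_pos

# Illustrative three-analyte example (hypothetical figures, not study data)
sens, spec = combine_parallel([0.85, 0.80, 0.60], [0.93, 0.75, 0.62])
# roughly 0.99 sensitivity, 0.43 specificity under independence
```

The parallel rule shows why naive "any positive" pooling of NS1, IgM, and IgG boosts sensitivity at a cost in specificity; the empirically observed combined performance is better than the independence formula predicts, reflecting correlation between the analytes and the more nuanced combination rules used in practice.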

The DPP Chikungunya IgM assay performed above average, broadly in line with the sensitivity (20–100%) and specificity (73–100%) of commercially available RDTs for Chikungunya IgM detection [60,67]. Previous studies document that Chikungunya IgM detection sensitivity increases in the second week after symptom onset [68,69]. Sample collection time is therefore paramount to ensuring valid test performance and should be considered in future studies.

The DPP Zika IgM assay performed as well as other Chembio Zika IgM RDT assays, showing similar levels of sensitivity (~79–86%) and specificity (~87–100%) [70,71]. The DPP Zika IgG test did not offer high diagnostic accuracy even when cutoffs were optimised to suit regional settings, although IgG detection by other RDTs is effective (~90–99%) [71]. IgG in Zika infection is often used as a marker of exposure, since it develops weeks after onset and can persist in the body for 5–6 months [72,73]. Samples in this study were collected <24 h after hospitalisation, and this early sampling may have contributed to the lower sensitivity and specificity. Further investigation into coinfection rates and cross-reactivity between ZIKV and DENV antigens and antibodies [74,75] is required to diagnose Zika infections with higher confidence.

The main limitations of the study are its restricted sample size and the absence of true positives for some pathogens (Table 1), which can bias sensitivity and specificity estimates and does not parallel real-life settings. The overall diagnostic accuracy of the DPP Fever Panel II Asia was limited; the panel's sensitivity was lower than its specificity, likely because antibody levels are low during the acute phase of infection [51]. Repeating the test after a period sufficient to allow for seroconversion is recommended to provide greater confidence in pathogen detection [40]. Further validation should also explore cross-reactivity rates [76].
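The effect of a small number of reference-standard positives on accuracy estimates can be made concrete with a confidence interval. The sketch below uses the standard Wilson score interval on hypothetical counts (not counts from this study) to show how wide the uncertainty on a sensitivity estimate becomes with only a handful of true positives.

```python
# Sketch: 95% Wilson score interval for a proportion such as sensitivity,
# illustrating the uncertainty from few reference-standard positives.
from math import sqrt


def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for successes/n at confidence level z."""
    if n == 0:
        return 0.0, 1.0
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half


# Hypothetical: 4 of 5 reference-standard positives detected.
lo, hi = wilson_ci(4, 5)
print(f"sensitivity 0.80, 95% CI ({lo:.2f}, {hi:.2f})")
```

With only 5 positives, a point sensitivity of 0.80 is compatible with anything from roughly 0.38 to 0.96, which is why estimates from pathogens with few true positives in Table 1 should be interpreted cautiously.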

The two readers performed inconsistently with one another for specific pathogens (Zika IgM, Zika IgG, and B. pseudomallei CPS Ag), and further validation studies are recommended to ensure their equivalent performance. Further research exploring why WB samples provided better diagnostic accuracy than serum samples overall may also be of interest; such findings would be particularly informative for the manufacturer in deciding the most appropriate sample type to list in the IFU.

The DPP Fever Panel II Asia offers the opportunity for highly specific rapid multiplex diagnosis of bacterial and arboviral infections. In many low-resource settings, where access to diagnostic infrastructure is limited, introducing an adequately sensitive and specific tool would afford immense benefits for point-of-care clinical management and outbreak surveillance. The DPP Fever Panel II Asia provides quick results without requiring specialised equipment. Given the ease with which the test can be performed, it offers both clinical and field utility, especially where health workers may have limited training [40]. Point-of-care diagnostic tools, particularly biomarker-based and multi-pathogen detection assays, must be prioritised to help guide treatment decisions in decentralised settings [77].

Supporting information

S1 Table. Reference diagnostic tests.

(DOCX)

S2 Table. Summary statistics for WB assay.

(DOCX)

S3 Table. Summary statistics for serum assay.

(DOCX)

S1 Fig. Estimated accuracies and 95%-confidence intervals for reader performance.

Error bars are shown. Legend: x-axis, micro readers 1 and 2 (Micro Reader 1 and 2); light pink circles, WB; green circles, serum.

(DOCX)

S1 Data. Supporting information (raw data).

(XLSX)


Acknowledgments

The authors would like to thank the patients and staff of Mahosot Hospital, Vientiane. We thank the staff of the Microbiology Laboratory, Mahosot Hospital, and Dr Susath Vongphachanh Director of Mahosot Hospital and the directors team. We further thank the team from Chembio, particularly Dr Angelo Gunasekera for their technical support and onsite training of staff. We also thank Dr Danoy Chammanam for helping enrol patients.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

FIND - the Global Alliance for Diagnostics received funding from the Australian and UK governments to provide study coordination and management. This research was funded in whole, or in part, by the Wellcome Trust [220211]. The funders played no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Oldach DW, Richard RE, Borza EN, Benitez RM. A mysterious death. N Engl J Med. 1998;338(24):1764–9. doi: 10.1056/NEJM199806113382411 [DOI] [PubMed] [Google Scholar]
  • 2.Bottieau E, Yansouni CP. Fever in the tropics: the ultimate clinical challenge? Clin Microbiol Infect. 2018;24(8):806–7. doi: 10.1016/j.cmi.2018.06.018 [DOI] [PubMed] [Google Scholar]
  • 3.Prasad N, Murdoch DR, Reyburn H, Crump JA. Etiology of Severe Febrile Illness in Low- and Middle-Income Countries: A Systematic Review. PLoS One. 2015;10(6):e0127962. doi: 10.1371/journal.pone.0127962 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Chandna A, Chew R, Shwe Nwe Htun N, Peto TJ, Zhang M, Liverani M, et al. Defining the burden of febrile illness in rural South and Southeast Asia: an open letter to announce the launch of the Rural Febrile Illness project. Wellcome Open Res. 2021;6:64. doi: 10.12688/wellcomeopenres.16393.2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Crump JA, Kirk MD. Estimating the Burden of Febrile Illnesses. PLoS Negl Trop Dis. 2015;9(12):e0004040. doi: 10.1371/journal.pntd.0004040 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Shrestha P, Dahal P, Ogbonnaa-Njoku C, Das D, Stepniewska K, Thomas NV, et al. Non-malarial febrile illness: a systematic review of published aetiological studies and case reports from Southern Asia and South-eastern Asia, 1980–2015. BMC Med. 2020;18(1):299. doi: 10.1186/s12916-020-01745-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Amornchai P, Hantrakun V, Wongsuvan G, Boonsri C, Yoosuk S, Nilsakul J, et al. Sensitivity and specificity of DPP(R) Fever Panel II Asia in the diagnosis of malaria, dengue and melioidosis. J Med Microbiol. 2022;71(8). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.WHO. World Malaria Report 2020. WHO; 2020.
  • 9.Landier J, Parker DM, Thu AM, Lwin KM, Delmas G, Nosten FH, et al. Effect of generalised access to early diagnosis and treatment and targeted mass drug administration on Plasmodium falciparum malaria in Eastern Myanmar: an observational study of a regional elimination programme. Lancet. 2018;391(10133):1916–26. doi: 10.1016/S0140-6736(18)30792-X [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Hopkins H, Bruxvoort KJ, Cairns ME, Chandler CI, Leurent B, Ansah EK, et al. Impact of introduction of rapid diagnostic tests for malaria on antibiotic prescribing: analysis of observational and randomised studies in public and private healthcare settings. BMJ. 2017;356:j1054. doi: 10.1136/bmj.j1054 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.McLean ARD, Wai HP, Thu AM, Khant ZS, Indrasuta C, Ashley EA, et al. Malaria elimination in remote communities requires integration of malaria control activities into general health care: an observational study and interrupted time series analysis in Myanmar. BMC Med. 2018;16(1):183. doi: 10.1186/s12916-018-1172-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Lubell Y, Chandna A, Smithuis F, White L, Wertheim HFL, Redard-Jacot M, et al. Economic considerations support C-reactive protein testing alongside malaria rapid diagnostic tests to guide antimicrobial therapy for patients with febrile illness in settings with low malaria endemicity. Malar J. 2019;18(1):442. doi: 10.1186/s12936-019-3059-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Mayxay M, Castonguay-Vanier J, Chansamouth V, Dubot-Peres A, Paris DH, Phetsouvanh R, et al. Causes of non-malarial fever in Laos: a prospective study. Lancet Glob Health. 2013;1(1):e46–54. doi: 10.1016/S2214-109X(13)70008-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Lubell Y, Blacksell SD, Dunachie S, Tanganuchitcharnchai A, Althaus T, Watthanaworawit W, et al. Performance of C-reactive protein and procalcitonin to distinguish viral from bacterial and malarial causes of fever in Southeast Asia. BMC Infect Dis. 2015;15:511. doi: 10.1186/s12879-015-1272-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Ashley EA, Dhorda M, Fairhurst RM, Amaratunga C, Lim P, Suon S, et al. Spread of artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2014;371(5):411–23. doi: 10.1056/NEJMoa1314981 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Hamilton WL, Amato R, van der Pluijm RW, Jacob CG, Quang HH, Thuy-Nhien NT, et al. Evolution and expansion of multidrug-resistant malaria in southeast Asia: a genomic epidemiology study. Lancet Infect Dis. 2019;19(9):943–51. doi: 10.1016/S1473-3099(19)30392-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Blacksell SD, Bryant NJ, Paris DH, Doust JA, Sakoda Y, Day NP. Scrub typhus serologic testing with the indirect immunofluorescence method as a diagnostic gold standard: a lack of consensus leads to a lot of confusion. Clin Infect Dis. 2007;44(3):391–401. doi: 10.1086/510585 [DOI] [PubMed] [Google Scholar]
  • 18.Hinjoy S, Hantrakun V, Kongyu S, Kaewrakmuk J, Wangrangsimakul T, Jitsuronk S, et al. Melioidosis in Thailand: Present and Future. Trop Med Infect Dis. 2018;3(2):38. doi: 10.3390/tropicalmed3020038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Tiono AB, Diarra A, Sanon S, Nebie I, Konate AT, Pagnoni F, et al. Low specificity of a malaria rapid diagnostic test during an integrated community case management trial. Infect Dis Ther. 2013;2(1):27–36. doi: 10.1007/s40121-013-0006-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Ishengoma DS, Francis F, Mmbando BP, Lusingu JP, Magistrado P, Alifrangis M, et al. Accuracy of malaria rapid diagnostic tests in community studies and their impact on treatment of malaria in an area with declining malaria burden in north-eastern Tanzania. Malar J. 2011;10:176. doi: 10.1186/1475-2875-10-176 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Chinkhumba J, Skarbinski J, Chilima B, Campbell C, Ewing V, San Joaquin M, et al. Comparative field performance and adherence to test results of four malaria rapid diagnostic tests among febrile patients more than five years of age in Blantyre, Malawi. Malar J. 2010;9:209. doi: 10.1186/1475-2875-9-209 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Hendriksen IC, Mwanga-Amumpaire J, von Seidlein L, Mtove G, White LJ, Olaosebikan R, et al. Diagnosing severe falciparum malaria in parasitaemic African children: a prospective evaluation of plasma PfHRP2 measurement. PLoS Med. 2012;9(8):e1001297. doi: 10.1371/journal.pmed.1001297 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Amornchai P, Hantrakun V, Wongsuvan G, Wuthiekanun V, Wongratanacheewin S, Teparrakkul P, et al. Evaluation of antigen-detecting and antibody-detecting diagnostic test combinations for diagnosing melioidosis. PLoS Negl Trop Dis. 2021;15(11):e0009840. doi: 10.1371/journal.pntd.0009840 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Paranavitane SA, Gomes L, Kamaladasa A, Adikari TN, Wickramasinghe N, Jeewandara C, et al. Dengue NS1 antigen as a marker of severe clinical disease. BMC Infect Dis. 2014;14:570. doi: 10.1186/s12879-014-0570-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Habibzadeh F, Habibzadeh P, Yadollahie M. On determining the most appropriate test cut-off value: the case of tests with continuous results. Biochem Med (Zagreb). 2016;26(3):297–307. doi: 10.11613/BM.2016.034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Kannan K, John R, Kundu D, Dayanand D, Abhilash KPP, Mathuram AJ, et al. Performance of molecular and serologic tests for the diagnosis of scrub typhus. PLoS Negl Trop Dis. 2020;14(11):e0008747. doi: 10.1371/journal.pntd.0008747 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Kim YJ, Park S, Premaratna R, Selvaraj S, Park SJ, Kim S, et al. Clinical Evaluation of Rapid Diagnostic Test Kit for Scrub Typhus with Improved Performance. J Korean Med Sci. 2016;31(8):1190–6. doi: 10.3346/jkms.2016.31.8.1190 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Saraswati K, Day NPJ, Mukaka M, Blacksell SD. Scrub typhus point-of-care testing: A systematic review and meta-analysis. PLoS Negl Trop Dis. 2018;12(3):e0006330. doi: 10.1371/journal.pntd.0006330 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Paris DH, Chattopadhyay S, Jiang J, Nawtaisong P, Lee JS, Tan E, et al. A nonhuman primate scrub typhus model: protective immune responses induced by pKarp47 DNA vaccination in cynomolgus macaques. J Immunol. 2015;194(4):1702–16. doi: 10.4049/jimmunol.1402244 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Chattopadhyay S, Jiang J, Chan TC, Manetz TS, Chao CC, Ching WM, et al. Scrub typhus vaccine candidate Kp r56 induces humoral and cellular immune responses in cynomolgus monkeys. Infect Immun. 2005;73(8):5039–47. doi: 10.1128/IAI.73.8.5039-5047.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Batra HV. Spotted fevers & typhus fever in Tamil Nadu. Indian J Med Res. 2007;126(2):101–3. [PubMed] [Google Scholar]
  • 32.Blacksell SD, Jenjaroen K, Phetsouvanh R, Wuthiekanun V, Day NP, Newton PN, et al. Accuracy of AccessBio Immunoglobulin M and Total Antibody Rapid Immunochromatographic Assays for the Diagnosis of Acute Scrub Typhus Infection. Clin Vaccine Immunol. 2010;17(2):263–6. doi: 10.1128/CVI.00448-08 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Blacksell SD, Jenjaroen K, Phetsouvanh R, Tanganuchitcharnchai A, Phouminh P, Phongmany S, et al. Accuracy of rapid IgM-based immunochromatographic and immunoblot assays for diagnosis of acute scrub typhus and murine typhus infections in Laos. Am J Trop Med Hyg. 2010;83(2):365–9. doi: 10.4269/ajtmh.2010.09-0534 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Kelly DJ, Chan CT, Paxton H, Thompson K, Howard R, Dasch GA. Comparative evaluation of a commercial enzyme immunoassay for the detection of human antibody to Rickettsia typhi. Clin Diagn Lab Immunol. 1995;2(3):356–60. doi: 10.1128/cdli.2.3.356-360.1995 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Tay ST, Kamalanathan M, Rohani MY. Antibody prevalence of Orientia tsutsugamushi, Rickettsia typhi and TT118 spotted fever group rickettsiae among Malaysian blood donors and febrile patients in the urban areas. Southeast Asian J Trop Med Public Health. 2003;34(1):165–70. [PubMed] [Google Scholar]
  • 36.Saunders JP, Brown GW, Shirai A, Huxsoll DL. The longevity of antibody to Rickettsia tsutsugamushi in patients with confirmed scrub typhus. Trans R Soc Trop Med Hyg. 1980;74(2):253–7. doi: 10.1016/0035-9203(80)90254-0 [DOI] [PubMed] [Google Scholar]
  • 37.Yuhana Y, Tanganuchitcharnchai A, Sujariyakul P, Sonthayanon P, Chotivanich K, Paris DH, et al. Diagnosis of Murine Typhus by Serology in Peninsular Malaysia: A Case Report Where Rickettsial Illnesses, Leptospirosis and Dengue Co-Circulate. Trop Med Infect Dis. 2019;4(1). doi: 10.3390/tropicalmed4010023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Parola P, Blacksell SD, Phetsouvanh R, Phongmany S, Rolain JM, Day NP, et al. Genotyping of Orientia tsutsugamushi from humans with scrub typhus, Laos. Emerg Infect Dis. 2008;14(9):1483–5. doi: 10.3201/eid1409.071259 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Dittrich S, Boutthasavong L, Keokhamhoung D, Phuklia W, Craig SB, Tulsiani SM, et al. A Prospective Hospital Study to Evaluate the Diagnostic Accuracy of Rapid Diagnostic Tests for the Early Detection of Leptospirosis in Laos. Am J Trop Med Hyg. 2018;98(4):1056–60. doi: 10.4269/ajtmh.17-0702 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Rao M, Amran F, Aqilla N. Evaluation of a Rapid Kit for Detection of IgM against Leptospira in Human. Can J Infect Dis Med Microbiol. 2019;2019:5763595. doi: 10.1155/2019/5763595 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Dinhuzen J, Limothai U, Tachaboon S, Krairojananan P, Laosatiankit B, Boonprasong S, et al. A prospective study to evaluate the accuracy of rapid diagnostic tests for diagnosis of human leptospirosis: Result from THAI-LEPTO AKI study. PLoS Negl Trop Dis. 2021;15(2):e0009159. doi: 10.1371/journal.pntd.0009159 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Amran F, Liow YL, Halim NAN. Evaluation of a Commercial Immuno-Chromatographic Assay Kit for Rapid Detection of IgM Antibodies against Leptospira Antigen in Human Serum. J Korean Med Sci. 2018;33(17):e131. doi: 10.3346/jkms.2018.33.e131 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Alia SN, Joseph N, Philip N, Azhari NN, Garba B, Masri SN, et al. Diagnostic accuracy of rapid diagnostic tests for the early detection of leptospirosis. J Infect Public Health. 2019;12(2):263–9. doi: 10.1016/j.jiph.2018.10.137 [DOI] [PubMed] [Google Scholar]
  • 44.Nabity SA, Ribeiro GS, Aquino CL, Takahashi D, Damiao AO, Goncalves AH, et al. Accuracy of a dual path platform (DPP) assay for the rapid point-of-care diagnosis of human leptospirosis. PLoS Negl Trop Dis. 2012;6(11):e1878. doi: 10.1371/journal.pntd.0001878 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Chadsuthi S, Bicout DJ, Wiratsudakul A, Suwancharoen D, Petkanchanapong W, Modchang C, et al. Investigation on predominant Leptospira serovars and its distribution in humans and livestock in Thailand, 2010–2015. PLoS Negl Trop Dis. 2017;11(2):e0005228. doi: 10.1371/journal.pntd.0005228 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Blacksell SD, Smythe L, Phetsouvanh R, Dohnt M, Hartskeerl R, Symonds M, et al. Limited diagnostic capacities of two commercial assays for the detection of Leptospira immunoglobulin M antibodies in Laos. Clin Vaccine Immunol. 2006;13(10):1166–9. doi: 10.1128/CVI.00219-06 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Goris MG, Leeflang MM, Loden M, Wagenaar JF, Klatser PR, Hartskeerl RA, et al. Prospective evaluation of three rapid diagnostic tests for diagnosis of human leptospirosis. PLoS Negl Trop Dis. 2013;7(7):e2290. doi: 10.1371/journal.pntd.0002290 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Picardeau M, Bertherat E, Jancloes M, Skouloudis AN, Durski K, Hartskeerl RA. Rapid tests for diagnosis of leptospirosis: current tools and emerging technologies. Diagn Microbiol Infect Dis. 2014;78(1):1–8. doi: 10.1016/j.diagmicrobio.2013.09.012 [DOI] [PubMed] [Google Scholar]
  • 49.Silva MV, Camargo ED, Batista L, Vaz AJ, Brandao AP, Nakamura PM, et al. Behaviour of specific IgM, IgG and IgA class antibodies in human leptospirosis during the acute phase of the disease and during convalescence. J Trop Med Hyg. 1995;98(4):268–72. [PubMed] [Google Scholar]
  • 50.Budihal SV, Perwez K. Leptospirosis diagnosis: competancy of various laboratory tests. J Clin Diagn Res. 2014;8(1):199–202. doi: 10.7860/JCDR/2014/6593.3950 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Haake DA, Levett PN. Leptospirosis in humans. Curr Top Microbiol Immunol. 2015;387:65–97. doi: 10.1007/978-3-662-45059-8_5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Wongsuvan G, Hantrakun V, Teparrukkul P, Imwong M, West TE, Wuthiekanun V, et al. Sensitivity and specificity of a lateral flow immunoassay (LFI) in serum samples for diagnosis of melioidosis. Trans R Soc Trop Med Hyg. 2018;112(12):568–70. doi: 10.1093/trstmh/try099 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Hoffmaster AR, AuCoin D, Baccam P, Baggett HC, Baird R, Bhengsri S, et al. Melioidosis diagnostic workshop, 2013. Emerg Infect Dis. 2015;21(2). doi: 10.3201/eid2102.141045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Limmathurotsakul D, Jamsen K, Arayawichanont A, Simpson JA, White LJ, Lee SJ, et al. Defining the true sensitivity of culture for the diagnosis of melioidosis using Bayesian latent class models. PLoS One. 2010;5(8):e12485. doi: 10.1371/journal.pone.0012485 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Suttisunhakul V, Wuthiekanun V, Brett PJ, Khusmith S, Day NP, Burtnick MN, et al. Development of Rapid Enzyme-Linked Immunosorbent Assays for Detection of Antibodies to Burkholderia pseudomallei. J Clin Microbiol. 2016;54(5):1259–68. doi: 10.1128/JCM.02856-15 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Pan-ngum W, Blacksell SD, Lubell Y, Pukrittayakamee S, Bailey MS, de Silva HJ, et al. Estimating the true accuracy of diagnostic tests for dengue infection using bayesian latent class models. PLoS One. 2013;8(1):e50765. doi: 10.1371/journal.pone.0050765 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Blacksell SD, Jarman RG, Bailey MS, Tanganuchitcharnchai A, Jenjaroen K, Gibbons RV, et al. Evaluation of six commercial point-of-care tests for diagnosis of acute dengue infections: the need for combining NS1 antigen and IgM/IgG antibody detection to achieve acceptable levels of accuracy. Clin Vaccine Immunol. 2011;18(12):2095–101. doi: 10.1128/CVI.05285-11 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Blacksell SD, Jarman RG, Gibbons RV, Tanganuchitcharnchai A, Mammen MP Jr., Nisalak A, et al. Comparison of seven commercial antigen and antibody enzyme-linked immunosorbent assays for detection of acute dengue infection. Clin Vaccine Immunol. 2012;19(5):804–10. doi: 10.1128/CVI.05717-11 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Raafat N, Blacksell SD, Maude RJ. A review of dengue diagnostics and implications for surveillance and control. Trans R Soc Trop Med Hyg. 2019;113(11):653–60. doi: 10.1093/trstmh/trz068 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Moreira J, Brasil P, Dittrich S, Siqueira AM. Mapping the global landscape of chikungunya rapid diagnostic tests: A scoping review. PLoS Negl Trop Dis. 2022;16(7):e0010067. doi: 10.1371/journal.pntd.0010067 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Mahajan R, Nair M, Saldanha AM, Harshana A, Pereira AL, Basu N, et al. Diagnostic accuracy of commercially available immunochromatographic rapid tests for diagnosis of dengue in India. J Vector Borne Dis. 2021;58(2):159–64. doi: 10.4103/0972-9062.321747 [DOI] [PubMed] [Google Scholar]
  • 62.Wiwanitkit S, Wiwanitkit V. Rapid diagnosis of dengue infection in acute phase. J Vector Borne Dis. 2015;52(1):110. [PubMed] [Google Scholar]
  • 63.WHO. Dengue: Guidelines for Diagnosis, Treatment, Prevention and Control. 2009. [PubMed] [Google Scholar]
  • 64.Macedo JVL, Frias IAM, Oliveira MDL, Zanghelini F, Andrade CAS. A systematic review and meta-analysis on the accuracy of rapid immunochromatographic tests for dengue diagnosis. Eur J Clin Microbiol Infect Dis. 2022;41(9):1191–201. doi: 10.1007/s10096-022-04485-6 [DOI] [PubMed] [Google Scholar]
  • 65.Fry SR, Meyer M, Semple MG, Simmons CP, Sekaran SD, Huang JX, et al. The diagnostic sensitivity of dengue rapid test assays is significantly enhanced by using a combined antigen and antibody testing approach. PLoS Negl Trop Dis. 2011;5(6):e1199. doi: 10.1371/journal.pntd.0001199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Wang SM, Sekaran SD. Evaluation of a commercial SD dengue virus NS1 antigen capture enzyme-linked immunosorbent assay kit for early diagnosis of dengue virus infection. J Clin Microbiol. 2010;48(8):2793–7. doi: 10.1128/JCM.02142-09 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Andrew A, Navien TN, Yeoh TS, Citartan M, Mangantig E, Sum MSH, et al. Diagnostic accuracy of serological tests for the diagnosis of Chikungunya virus infection: A systematic review and meta-analysis. PLoS Negl Trop Dis. 2022;16(2):e0010152. doi: 10.1371/journal.pntd.0010152 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Kosasih H, Widjaja S, Surya E, Hadiwijaya SH, Butarbutar DP, Jaya UA, et al. Evaluation of two IgM rapid immunochromatographic tests during circulation of Asian lineage Chikungunya virus. Southeast Asian J Trop Med Public Health. 2012;43(1):55–61. [PubMed] [Google Scholar]
  • 69.Rianthavorn P, Wuttirattanakowit N, Prianantathavorn K, Limpaphayom N, Theamboonlers A, Poovorawan Y. Evaluation of a rapid assay for detection of IgM antibodies to chikungunya. Southeast Asian J Trop Med Public Health. 2010;41(1):92–6. [PubMed] [Google Scholar]
  • 70.Boeras D, Diagne CT, Pelegrino JL, Grandadam M, Duong V, Dussart P, et al. Evaluation of Zika rapid tests as aids for clinical diagnosis and epidemic preparedness. Lancet. 2022;49. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Kim YH, Lee J, Kim YE, Chong CK, Pinchemel Y, Reisdorfer F, et al. Development of a Rapid Diagnostic Test Kit to Detect IgG/IgM Antibody against Zika Virus Using Monoclonal Antibodies to the Envelope and Non-structural Protein 1 of the Virus. Korean J Parasitol. 2018;56(1):61–70. doi: 10.3347/kjp.2018.56.1.61 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Munoz-Jordan JL. Diagnosis of Zika Virus Infections: Challenges and Opportunities. J Infect Dis. 2017;216(suppl_10):S951-S6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Petersen LR, Jamieson DJ, Honein MA. Zika Virus. N Engl J Med. 2016;375(3):294–5. [DOI] [PubMed] [Google Scholar]
  • 74.Endale A, Medhin G, Darfiro K, Kebede N, Legesse M. Magnitude of Antibody Cross-Reactivity in Medically Important Mosquito-Borne Flaviviruses: A Systematic Review. Infect Drug Resist. 2021;14:4291–9. doi: 10.2147/IDR.S336351 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Waggoner JJ, Pinsky BA. Zika Virus: Diagnostics for an Emerging Pandemic Threat. J Clin Microbiol. 2016;54(4):860–7. doi: 10.1128/JCM.00279-16 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Blacksell SD, Lim C, Tanganuchitcharnchai A, Jintaworn S, Kantipong P, Richards AL, et al. Optimal Cutoff and Accuracy of an IgM Enzyme-Linked Immunosorbent Assay for Diagnosis of Acute Scrub Typhus in Northern Thailand: an Alternative Reference Method to the IgM Immunofluorescence Assay. J Clin Microbiol. 2016;54(6):1472–8. doi: 10.1128/JCM.02744-15 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Hanson KE, Couturier MR. Multiplexed Molecular Diagnostics for Respiratory, Gastrointestinal, and Central Nervous System Infections. Clin Infect Dis. 2016;63(10):1361–7. doi: 10.1093/cid/ciw494 [DOI] [PMC free article] [PubMed] [Google Scholar]
PLoS Negl Trop Dis. doi: 10.1371/journal.pntd.0012077.r001

Decision Letter 0

Shaden Kamhawi, Thomas C Darton

30 Jan 2024

Dear Dr. Blacksell,

Thank you very much for submitting your manuscript "Diagnostic accuracy of DPP® Fever Panel II Asia tests for tropical fever diagnosis" for consideration at PLOS Neglected Tropical Diseases. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

Please include a paragraph about limitations of the assay in the discussion based on reviewer 1 comments.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Thomas C Darton

Academic Editor

PLOS Neglected Tropical Diseases

Shaden Kamhawi

Editor-in-Chief

PLOS Neglected Tropical Diseases

***********************

Reviewer's Responses to Questions

Key Review Criteria Required for Acceptance?

As you describe the new analyses required for acceptance, please consider the following:

Methods

-Are the objectives of the study clearly articulated with a clear testable hypothesis stated?

-Is the study design appropriate to address the stated objectives?

-Is the population clearly described and appropriate for the hypothesis being tested?

-Is the sample size sufficient to ensure adequate power to address the hypothesis being tested?

-Were correct statistical analysis used to support conclusions?

-Are there concerns about ethical or regulatory requirements being met?

Reviewer #1: The authors should state how the samples used as "true positives" were confirmed to be so. Was it based on clinical presentation, some other diagnostic test, or a combination of both? When developing and testing a new assay it is crucial that the "known positives" are indeed "genuine true positives".

Reviewer #2: -Are the objectives of the study clearly articulated with a clear testable hypothesis stated? - Yes

-Is the study design appropriate to address the stated objectives? - Yes

-Is the population clearly described and appropriate for the hypothesis being tested? - Yes

-Is the sample size sufficient to ensure adequate power to address the hypothesis being tested? - Authors addressed in limitation

-Were correct statistical analysis used to support conclusions? - Yes

-Are there concerns about ethical or regulatory requirements being met? - Yes

--------------------

Results

-Does the analysis presented match the analysis plan?

-Are the results clearly and completely presented?

-Are the figures (Tables, Images) of sufficient quality for clarity?

Reviewer #1: Yes

What is the "CPS" antigen of B. pseudomallei? Some readers (including me) may not know what it is!

Reviewer #2: -Does the analysis presented match the analysis plan? - Suggested for Minor revision

-Are the results clearly and completely presented? - Yes

-Are the figures (Tables, Images) of sufficient quality for clarity? - Yes

--------------------

Conclusions

-Are the conclusions supported by the data presented?

-Are the limitations of analysis clearly described?

-Do the authors discuss how these data can be helpful to advance our understanding of the topic under study?

-Is public health relevance addressed?

Reviewer #1: This new assay is not all that impressive and I think this needs to be stated in the Conclusions. It's a bit misleading to just say "it is comparable to that of commonly used RDTs", although admittedly these aren't that good either!

Reviewer #2: Suggested for Minor revision

--------------------

Editorial and Data Presentation Modifications?

Use this section for editorial suggestions as well as relatively minor modifications of existing data that would enhance clarity. If the only modifications needed are minor and/or editorial, you may wish to recommend “Minor Revision” or “Accept”.

Reviewer #1: The comparison between the 2 readers is not that interesting to the general reader, although it may well be so to the sponsoring company. This could be summarised in the paper rather than gone into in detail.

"WB" as an abbreviation usually means "Western Blot", not "whole blood". Was it really "whole blood" or was it plasma ? Could another abbreviation be used instead of "WB" ?

Reviewer #2: (No Response)

--------------------

Summary and General Comments

Use this section to provide overall comments, discuss strengths/weaknesses of the study, novelty, significance, general execution and scholarship. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. If requesting major revision, please articulate the new experiments that are needed.

Reviewer #1: Interesting paper, but apart from a few assays it's not that impressive, with only modest sensitivities and specificities for most of the tropical infections tested for.

line 103. "lack of non-specificity". Don't you mean "lack of specificity"?

line 144 "will be evaluated" should be "was evaluated"

line 342 "...from severe disease if validated with further study" reads better.

line 360. "igG develops latently"? Maybe "IgG develops slowly".

line 379 suggest change to "hospitalisation and this early sampling may have contributed to lower sensitivity and specificity for detecting antibody levels."

lines 387-8 suggest change to "Repeating the assay after a period sufficient to allow for seroconversion is recommended to provide greater confidence in the result."

line 414. What is "FIND"?

line 649. change "compiled" to "combined"

Reviewer #2: The screening and diagnosis of many infectious syndromes using a single sample, whether at the time of a patient visit to a hospital or during a field visit by healthcare workers in the community, is very important for timely diagnosis and starting appropriate therapy without delay. The study addresses this need in febrile illness patients by multiplexing point-of-care tests. I have added a few comments for revision and further recommendations.

--------------------

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Sivanantham Krishnamoorthi

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Attachment

Submitted filename: PNTD-D-23-00978 Reviewer Comment.docx

pntd.0012077.s006.docx (14.1KB, docx)
Attachment

Submitted filename: PNTD-D-23-00978_reviewer (1).pdf

PLoS Negl Trop Dis. doi: 10.1371/journal.pntd.0012077.r003

Decision Letter 1

Shaden Kamhawi, Thomas C Darton

18 Mar 2024

Dear Dr. Blacksell,

We are pleased to inform you that your manuscript 'Diagnostic accuracy of DPP® Fever Panel II Asia tests for tropical fever diagnosis Subtitle: Validation of DPP® Fever Panel II' has been provisionally accepted for publication in PLOS Neglected Tropical Diseases.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Neglected Tropical Diseases.

Best regards,

Thomas C Darton

Academic Editor

PLOS Neglected Tropical Diseases

Shaden Kamhawi

Editor-in-Chief

PLOS Neglected Tropical Diseases

***********************************************************

Many thanks for addressing the reviewer comments, which you have done in a clear and thorough way.

PLoS Negl Trop Dis. doi: 10.1371/journal.pntd.0012077.r004

Acceptance letter

Shaden Kamhawi, Thomas C Darton

28 Mar 2024

Dear Dr. Blacksell,

We are delighted to inform you that your manuscript, "Diagnostic accuracy of DPP Fever Panel II Asia tests for tropical fever diagnosis Subtitle: Validation of DPP Fever Panel II," has been formally accepted for publication in PLOS Neglected Tropical Diseases.

We have now passed your article onto the PLOS Production Department who will complete the rest of the publication process. All authors will receive a confirmation email upon publication.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any scientific or type-setting errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Note: Proofs for Front Matter articles (Editorial, Viewpoint, Symposium, Review, etc.) are generated on a different schedule and may not be made available as quickly.

Soon after your final files are uploaded, the early version of your manuscript will be published online unless you opted out of this process. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting open-access publishing; we are looking forward to publishing your work in PLOS Neglected Tropical Diseases.

Best regards,

Shaden Kamhawi

co-Editor-in-Chief

PLOS Neglected Tropical Diseases

Paul Brindley

co-Editor-in-Chief

PLOS Neglected Tropical Diseases

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Reference diagnostic tests.

    (DOCX)

    pntd.0012077.s001.docx (15KB, docx)
    S2 Table. Summary statistics for WB assay.

    (DOCX)

    pntd.0012077.s002.docx (18.6KB, docx)
    S3 Table. Summary statistics for serum assay.

    (DOCX)

    pntd.0012077.s003.docx (20.1KB, docx)
    S1 Fig. Estimated accuracies and 95%-confidence intervals for reader performance.

    Error bars are shown. Legend: x-axis, micro readers 1 and 2 (Micro Reader 1 and 2); light pink circles, WB; green circles, serum.

    (DOCX)

    pntd.0012077.s004.docx (76KB, docx)
    S1 Data. Supporting information (raw data).

    (XLSX)

    pntd.0012077.s005.xlsx (190.9KB, xlsx)
    Attachment

    Submitted filename: PNTD-D-23-00978 Reviewer Comment.docx

    pntd.0012077.s006.docx (14.1KB, docx)
    Attachment

    Submitted filename: PNTD-D-23-00978_reviewer (1).pdf

    Attachment

    Submitted filename: PNTD-D-23-00978 Reviewer Comment_SD.docx

    pntd.0012077.s008.docx (21KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.

