Author manuscript; available in PMC: 2022 Oct 1.
Published in final edited form as: Crit Care Med. 2021 Oct 1;49(10):1651–1663. doi: 10.1097/CCM.0000000000005085

Discriminating Bacterial and Viral Infection Using a Rapid Host Gene Expression Test

Ephraim L Tsalik 1,2,3,*, Ricardo Henao 2,4,5,*, Jesse L Montgomery 6, Jeff W Nawrocki 6, Mert Aydin 2, Emily C Lydon 2, Emily R Ko 2,7, Elizabeth Petzold 2, Bradly P Nicholson 8, Charles B Cairns 9,10, Seth W Glickman 9, Eugenia Quackenbush 9, Stephen F Kingsmore 11, Anja K Jaehne 12, Emanuel P Rivers 12, Raymond J Langley 13, Vance G Fowler 3,5, Micah T McClain 1,2,3, Robert J Crisp 6, Geoffrey S Ginsburg 2, Thomas W Burke 2, Andrew C Hemmert 6, Christopher W Woods 1,2,3; The Antibacterial Resistance Leadership Group
PMCID: PMC8448917  NIHMSID: NIHMS1685979  PMID: 33938716

Abstract

Objective:

Host gene expression signatures discriminate bacterial and viral infection but have not been translated to a clinical test platform. This study enrolled an independent cohort of patients to describe and validate a first-in-class host gene expression test for bacterial vs. viral discrimination (HR-B/V).

Design:

Subjects were recruited from 2006–2016. Enrollment blood samples were collected in an RNA preservative and banked for later testing. The reference standard was an expert panel clinical adjudication, which was blinded to gene expression and procalcitonin results.

Setting:

Four U.S. Emergency Departments.

Patients:

623 subjects with acute respiratory illness or suspected sepsis.

Interventions:

45-transcript signature measured on the BioFire® FilmArray® System in ~45 minutes.

Measurements and Main Results:

HR-B/V test performance characteristics were evaluated in 623 participants (mean age 46 years; 45% male) with bacterial infection, viral infection, co-infection, or non-infectious illness. Performance of the HR-B/V test was compared to procalcitonin. The test provided independent probabilities of bacterial and viral infection in ~45 minutes. In the 213-subject training cohort, the HR-B/V test had an area under the curve (AUC) for bacterial infection of 0.90 (95% CI 0.84-0.94) and 0.92 (95% CI 0.87-0.95) for viral infection. Independent validation in 209 subjects revealed similar performance with an AUC of 0.85 (95% CI 0.78-0.90) for bacterial infection and 0.91 (95% CI 0.85-0.94) for viral infection. The test had 80.1% (95% CI 73.7%-85.4%) average weighted accuracy for bacterial infection and 86.8% (95% CI 81.8%-90.8%) for viral infection in this validation cohort. This was significantly better than 68.7% (95% CI 62.4%-75.4%) observed for procalcitonin (P<0.001). An additional cohort of 201 subjects with indeterminate phenotypes (co-infection or microbiology-negative infections) revealed similar performance.

Conclusions:

The HR-B/V host gene expression test measured using the BioFire® System rapidly and accurately discriminated bacterial and viral infection better than procalcitonin, which can help support more appropriate antibiotic use.

Keywords: Gene Expression Signatures, Point-of-Care Testing, Bacterial Infections, Viral Infections, Pneumonia, Sepsis

Introduction

Acute respiratory illness (ARI) is the most common reason for acute healthcare visits (1, 2). Patients with ARI are inappropriately treated with antibacterials at high rates due to challenges in discriminating viral, bacterial, or non-infectious etiologies (3, 4). Diagnostics that reliably discriminate bacterial and viral etiologies in patients with ARI could improve clinical management.

Currently available diagnostic strategies for ARI largely focus on pathogen identification, which identifies a pathogen in only a minority of cases (5). Depending on the type of pathogen assay, additional limitations include long time to result, inability to discriminate infection from colonization, and the need for a priori suspicion for the specific pathogen. In contrast, measuring the host response to infection offers a complementary, unbiased strategy that overcomes many of these limitations. Procalcitonin, a commonly used host response marker, has exhibited mixed results in discriminating bacterial from viral ARI etiologies (6-8) and in guiding antibacterial use (9, 10). Recent studies, however, have shown that peripheral blood host gene expression accurately discriminates bacterial, viral, and non-infectious etiologies of ARI (11-17).

Host gene expression tests are commercially available for non-infectious conditions such as cancer and transplant rejection (18, 19). Due to their complexity, these tests are typically performed in referral laboratories and require days to return results. Consequently, host gene expression tests are not currently available for infectious diseases, which require results at the point-of-need for real-time clinical decision-making. Our study objectives were to demonstrate the feasibility of developing a host gene expression test that could be utilized at the point-of-need and to demonstrate its ability to discriminate bacterial, viral, and non-infectious illness in a clinical cohort. To achieve these objectives, we developed a first-in-class research-use-only (RUO) test for the BioFire® System to quantify host gene expression in patients with suspected infection. This host response bacterial/viral test (HR-B/V) was evaluated in a multisite study of patients presenting to the Emergency Department (ED) with suspected infection of up to 28 days' duration, using clinical adjudication as the reference standard. The findings presented here demonstrate the feasibility of a rapid, point-of-need, host response test that differentiates bacterial and viral illness.

Materials and Methods

Study Design

Studies were approved by the relevant Institutional Review Boards and conducted in accordance with the Declaration of Helsinki. All subjects or their legally authorized representatives provided written informed consent. Patients were enrolled by convenience sampling in four Emergency Departments (EDs) from 2006-2016: Duke University Medical Center (Durham, NC), Durham VA Health Care System (Durham, NC), UNC Health Care (Chapel Hill, NC), and Henry Ford Hospital (Detroit, MI). This was done as part of three consecutively executed observational studies: CAPSOD (ClinicalTrials.gov NCT00258869) (20-22), the Community Acquired Pneumonia and Sepsis Study (CAPSS), and the Rapid Diagnostics in Categorizing Acute Lung Infection (RADICAL). Patients were eligible for CAPSOD and CAPSS if they were ≥6 years of age with a known or suspected infection of <28 days' duration and exhibited two or more systemic inflammatory response syndrome criteria (23). RADICAL enrolled patients ≥2 years of age with ARI of <28 days' duration. Prior antimicrobial exposure was not exclusionary. ARI was defined as having at least two qualifying symptoms or one qualifying symptom and at least one qualifying vital sign abnormality. Qualifying symptoms included headache, rhinorrhea, nasal congestion, sneezing, sore throat, itchy/watery eyes, conjunctivitis, cough, shortness of breath, sputum production, chest pain, and wheezing. Qualifying vital sign abnormalities included heart rate ≥90 (or ≥110 for children aged 2-6 years), respiratory rate ≥20, and temperature ≥38.0ºC or ≤36.0ºC. There were 1,274 subjects enrolled in CAPSOD, 1,320 in CAPSS, and 944 in RADICAL. Subjects were selected from this larger pool based on the availability of a PAXgene Blood RNA sample and confirmatory microbiology (with the exception of suspected bacterial and suspected viral cases). In suspected cases where no microbiological etiology was identified, consecutive subjects were selected for inclusion in this study.

Diagnostic Reference Standard

In the absence of a gold standard for bacterial/viral discrimination, we performed retrospective adjudications as previously described (20, 24). Clinician adjudicators had experience managing patients with ARI defined by subspecialty training in hospital medicine, emergency medicine, infectious diseases, or pulmonary/critical care medicine or by >2 years of post-graduate clinical experience in that field. Two independent adjudications were performed >28-days after enrollment using the full medical record, supplemental etiology testing, and case report forms but not host-response test nor procalcitonin results. This avoided incorporation bias and allowed procalcitonin to be used as an independent comparator. Supplemental testing included the BinaxNOW S. pneumoniae urinary antigen test (Alere) and a multiplex viral respiratory pathogen panel (ResPlex V2.0, Qiagen; Respiratory Viral Panel, Luminex; or Respiratory Pathogen Panel, Luminex). For discordant adjudications, a consensus panel of at least three adjudicators entered the final adjudication by consensus or majority vote.

BioFire Testing

A custom, RUO BioFire test was designed to measure 45 host mRNA transcripts (Supplementary Table 1) that were previously shown to be differentially expressed in viral, bacterial, or non-infectious causes of ARI (Supplementary Methods) (16, 25). A larger pool of targets was initially selected. The assays included in the final HR-B/V test were selected through iterative evaluations for robust performance (strong linearity and low variability in quantitative reporting) with the BioFire nested, multiplex PCR chemistry. The test included internal and endogenous normalization controls, selected for their low coefficients of variation (<0.1). Upon loading 100 µL of PAXgene-preserved blood (~27 µL whole blood volume) into the disposable pouch, automated sample extraction, nucleic acid purification, reverse transcription, and two-stage (multiplexed, nested) real-time PCR were performed by the BioFire® FilmArray® Instrument (Supplementary Figure 1). All assays were tested in duplicate within each pouch in case of assay failure, although failure rates were low in both the discovery (0.18%) and validation (0.13%) cohorts. The real-time PCR curve quantification results, expressed as Cq values, were collected for each assay for each sample. The Cq value is a semi-quantitative measure of target abundance defined by the PCR cycle number at which a target is detected. Therefore, lower values (i.e., fewer PCR cycles) indicate a greater abundance of the target in a sample.
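The relationship between Cq and target abundance can be illustrated with the standard delta-Cq calculation. This is a generic sketch of that arithmetic, not the test's actual normalization (which uses the endogenous controls and proprietary processing described above); the function name is illustrative.

```python
def relative_expression(cq_target: float, cq_control: float) -> float:
    """Relative target abundance by the generic delta-Cq method.

    Each PCR cycle roughly doubles the amplicon, so abundance scales
    as 2^-Cq. Normalizing against an endogenous control gives
    2^(Cq_control - Cq_target): values > 1 mean the target is more
    abundant than the control, and lower Cq means more starting material.
    """
    return 2.0 ** (cq_control - cq_target)

# A target detected 3 cycles earlier than the control is ~8x more abundant.
print(relative_expression(22.0, 25.0))  # 8.0
```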

Statistical Analysis

Normalized target expression values were used to build two independent sparse logistic regression models: viral vs. non-viral infection and bacterial vs. non-bacterial infection (26). The two probabilities are independent, which allows for the identification of coinfection (i.e., both positive) or no infection (i.e., both negative). To generate these probabilities, the BioFire® Cq values were used to build a logistic regression model trained on subjects with known phenotype. The regularization parameter of the model and performance metrics were estimated using nested Leave-One-Out Cross-Validation (LOOCV), where the internal LOOCV was used for the regularization parameter and the outer LOOCV for performance estimates including area under the receiver operating characteristic curve (AUC), Positive Percent Agreement (PPA), and Negative Percent Agreement (NPA) (27). PPA is calculated in the same manner as sensitivity whereas NPA is calculated in the same manner as specificity. The terms PPA and NPA were used instead of sensitivity and specificity given the use of a reference standard rather than a gold standard, as recommended for the clinical performance evaluation of molecular diagnostic tests (28). After training the model, parameters were fixed and applied to independent cohorts. Thresholds for the bacterial and viral tests (27.5% and 41.7%, respectively) were calculated to optimize the average weighted accuracy (AWA) (29). AWA is a pragmatic metric of the diagnostic yield or global utility of a diagnostic test that integrates sensitivity and specificity, accounts for disease prevalence within the population, and accounts for the clinical implications of false-positive and false-negative results. The AWA was calculated assuming a 10-30% bacterial infection prevalence, 50-80% viral infection prevalence, r=0.25 for bacterial classification, and r=2 for viral classification.
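The nested LOOCV procedure can be sketched with scikit-learn on synthetic data. This is a minimal illustration of the scheme, not the study's actual 45-transcript model: the feature count, regularization grid, and data are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, LeaveOneOut

# Synthetic stand-in for normalized Cq values: 40 subjects x 5 transcripts.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=40) > 0).astype(int)

# Inner LOOCV selects the regularization strength of the sparse (L1)
# logistic regression; the outer LOOCV yields one out-of-sample
# probability per subject, from which AUC/PPA/NPA can be estimated.
inner = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear"),
    param_grid={"C": [0.1, 1.0, 10.0]},  # illustrative grid
    cv=LeaveOneOut(),
)
probs = np.empty(len(y))
for i, (train, test) in enumerate(LeaveOneOut().split(X)):
    inner.fit(X[train], y[train])
    probs[i] = inner.predict_proba(X[test])[0, 1]

print(round(roc_auc_score(y, probs), 2))
```

Once the outer loop completes, every subject has a probability produced by a model that never saw that subject, which is what makes the resulting performance estimates honest.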
Details regarding the development of the AWA metric and how it specifically applies to this test are described elsewhere (29). Comparison of HR-B/V to procalcitonin was performed using the chi-square test. Procalcitonin concentrations ≥0.25ng/ml indicated bacterial infection (10).
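One plausible formalization of the AWA calculation is sketched below. The weighted-accuracy formula is an assumption inferred from the description above (reference 29 gives the authoritative definition); r encodes the relative clinical importance of false-negative vs. false-positive errors, and the average is taken over the stated prevalence range.

```python
import numpy as np

def weighted_accuracy(se, sp, prevalence, r):
    """Accuracy weighted by prevalence and by r, the relative cost of a
    false negative vs. a false positive (assumed form; see ref. 29)."""
    return (r * prevalence * se + (1 - prevalence) * sp) / (
        r * prevalence + (1 - prevalence)
    )

def average_weighted_accuracy(se, sp, prev_lo, prev_hi, r, n=1001):
    """Average the weighted accuracy over a uniform prevalence range."""
    prevs = np.linspace(prev_lo, prev_hi, n)
    return weighted_accuracy(se, sp, prevs, r).mean()

# Bacterial model settings from the paper: 10-30% prevalence, r=0.25.
print(average_weighted_accuracy(0.85, 0.81, 0.10, 0.30, 0.25))
```

Under this form, r=0.25 for the bacterial model down-weights specificity errors relative to sensitivity errors, which is consistent with the paper's statement that false-negative bacterial results carry the greatest risk of harm.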

Additional details regarding study design, case definitions, procalcitonin measurement, transcript selection process, and statistical analysis are included in the Supplementary Materials.

Results

Clinical Cohort

We enrolled 623 subjects at four EDs presenting with suspected sepsis or ARI (Figure 1). Of these, 422 had microbiologically confirmed phenotypes (Supplementary Table 2) and were randomly assigned to training (n=213) or validation (n=209) cohorts so the numbers of bacterial, viral, and non-infectious illness cases were balanced (Table 1). The remaining 201 subjects (82 suspected bacterial infections, 83 suspected viral infections, and 36 coinfections) were tested but not included in calculations of performance characteristics due to the absence of a reliable reference standard.

Figure 1: Experimental Flow.


The Indeterminate Phenotypes group was not used to calculate performance characteristics but used to demonstrate the distribution of HR-B/V test results in these groups. ARI = Acute Respiratory Illness.

Table 1:

Subject Characteristics

Demographic and Clinical Variables | Total Cohort (N=623) | Training Cohort (N=213) | Validation Cohort (N=209) | Coinfection (N=36) | Suspected Bacterial Infection (N=82) | Suspected Viral Infection (N=83)
Age – mean years (SD) | 46.1 (17.9) | 48.6 (17.9) | 47.4 (18.6) | 46.1 (16.4) | 47.2 (18.9) | 35.3 (14.1)
Male sex – no. (%) | 280 (44.9) | 100 (46.9) | 96 (45.9) | 12 (33.3) | 49 (59.8) | 23 (27.7)
Race – no. (%)a
 White | 317 (50.9) | 110 (51.6) | 97 (46.4) | 25 (69.4) | 53 (64.6) | 32 (38.6)
 Black | 284 (45.6) | 96 (45.1) | 104 (49.8) | 11 (30.6) | 25 (30.5) | 48 (57.8)
 Other | 22 (3.5) | 7 (3.3) | 8 (3.8) | 0 | 4 (4.9) | 3 (3.6)
Etiology – no. (%)b
 Bacterial | 135 (32.0) | 68 (31.9) | 67 (32.1) | – | – | –
 Viral | 183 (43.4) | 92 (43.2) | 91 (43.5) | – | – | –
 Non-infection | 104 (24.6) | 53 (24.9) | 51 (24.4) | – | – | –
Abnormal temperature – no. (%)c | 236 (37.9) | 84 (39.4) | 79 (37.8) | 19 (52.8) | 41 (50.0) | 13 (15.7)
Comorbidities – no. (%)
 Chronic Lung Disease | 166 (26.6) | 57 (26.8) | 63 (30.1) | 7 (19.4) | 24 (29.3) | 15 (18.1)
 Chronic Liver Disease | 14 (2.2) | 5 (2.3) | 6 (2.9) | 2 (5.6) | 0 | 1 (1.2)
 Coronary Artery Disease | 76 (12.2) | 29 (13.6) | 26 (12.4) | 3 (8.3) | 12 (14.6) | 6 (7.2)
 Diabetes | 135 (21.7) | 53 (24.9) | 47 (22.5) | 9 (25.0) | 14 (17.1) | 12 (14.5)
 Dialysis | 12 (1.9) | 8 (3.8) | 4 (1.9) | 0 | 0 | 0
 Heart Failure | 42 (6.7) | 23 (10.8) | 11 (5.3) | 1 (2.8) | 6 (7.3) | 1 (1.2)
 HIV Infection | 11 (1.8) | 6 (2.8) | 5 (2.4) | 4 (11.1) | 8 (9.8) | 2 (2.4)
 Hypertension | 280 (44.9) | 112 (52.6) | 90 (43.1) | 14 (38.9) | 36 (43.9) | 28 (33.7)
 Immunosuppressive Therapy | 62 (10.0) | 24 (11.3) | 25 (12.0) | 2 (5.6) | 7 (8.5) | 4 (4.8)
 Malignancy | 59 (9.5) | 23 (10.8) | 25 (12.0) | 2 (5.6) | 7 (8.5) | 2 (2.4)
Hospitalized – no. (%) | 304 (48.8) | 118 (55.4) | 107 (51.2) | 23 (63.9) | 51 (62.2) | 5 (6.0)
a Race was reported by participants.
b Etiology is only defined in subjects who had a microbiologically confirmed pathogen. Those with coinfection, suspected bacterial, or suspected viral etiologies were excluded from this calculation.
c Abnormal temperature is defined as ≤35.5ºC or ≥38.0ºC.

Classification

Training Cohort:

Thresholds for positive and negative test results were selected to maximize the AWA, which incorporates the clinical significance of false positive and false negative errors (Supplementary Figure 2). Using nested LOOCV in the training cohort, the HR-B/V test had an AWA of 83.3% (95% CI 77.4%-88.2%) for the identification of bacterial infection and 85.9% (95% CI 80.4%-90.1%) for viral infection (Figure 2A and Table 2). The corresponding AUCs were 0.90 (95% CI 0.84-0.94) and 0.92 (95% CI 0.87-0.95), respectively. Precision-recall curves for this and subsequent comparisons are shown in Supplementary Figure 3. The model trained in these 213 subjects was then fixed and used for all subsequent tests. A heatmap highlighting the contribution of each transcript in the signature to discriminating bacterial, viral, and non-infectious illness is shown in Supplementary Figure 4.

Figure 2: Classification performance.


The test assigns each subject a probability of viral infection (y-axis) and bacterial infection (x-axis). The vertical line is the threshold for bacterial infection (0.275) while the horizontal line is the threshold for viral infection (0.417). The top left region is indicative of viral infection, bottom left indicates no infection, top right suggests bacterial/viral coinfection, and the bottom right indicates bacterial infection. For panels A and B, colors represent the adjudicated phenotype: blue=bacterial, yellow=non-infectious illness, red=viral. A) Classification of 213 training cohort subjects. B) Classification of 209 validation cohort subjects. (C) The probabilities of bacterial infection (y-axis) as measured by the HR-B/V test vs. procalcitonin (x-axis) are plotted for each subject. The vertical line represents a procalcitonin threshold of 0.25ng/mL. The horizontal line is the threshold for a positive bacterial HR-B/V test. Subjects in the training cohort (n=213) are represented by circles whereas validation cohort subjects (n=209) are represented by a plus. Blue represents cases adjudicated as bacterial. Red represents cases adjudicated as non-bacterial (viral or non-infectious illness). The top right region was identified as bacterial by both tests. The bottom left region represents a non-bacterial classification by both tests. (D) Receiver operating characteristic plot for bacterial vs. non-bacterial infection using the HR-B/V test vs. procalcitonin in the training (discovery) and validation cohorts. E) Classification of 36 subjects with bacterial/viral coinfection (superinfection). Red circles represent clinically suspected cases of superinfection without microbiological confirmation. Blue circles represent microbiologically confirmed superinfection. F) 83 suspected viral infections (red circles) and 82 suspected bacterial infections (blue circles) are shown.
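The quadrant logic described in this legend can be sketched as follows. The thresholds are those reported in the paper; the function name and the use of ≥ at the exact boundary are illustrative assumptions.

```python
def classify(p_bacterial: float, p_viral: float,
             bact_threshold: float = 0.275,
             viral_threshold: float = 0.417) -> str:
    """Map the two independent HR-B/V probabilities onto the four
    quadrants described for Figure 2. Boundary handling (>= vs >) is
    an assumption, not specified in the text."""
    bacterial = p_bacterial >= bact_threshold
    viral = p_viral >= viral_threshold
    if bacterial and viral:
        return "coinfection"
    if bacterial:
        return "bacterial"
    if viral:
        return "viral"
    return "no infection"

print(classify(0.80, 0.10))  # bacterial
```

Because the two probabilities come from independent models, a subject can legitimately exceed both thresholds (coinfection) or neither (no infection), which a single bacterial-vs-viral classifier could not express.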

Table 2:

Performance characteristics of the HR-B/V test in the training and validation cohorts.

Test Group | AWA | AUC | PPA | NPA | Overall Accuracy | LR+ | LR− | F1 | PTP
Training Cohort
Bacterial Infection | 83.3 (77.4-88.2) | 0.90 (0.84-0.94) | 85.3 (75.8-92.9) | 81.4 (75.0-86.8) | 82.6 (77.5-86.9) | 4.6 (3.2-6.8) | 0.18 (0.09-0.3) | 0.758 (0.676-0.829) | 52.2 (44.0-61.7)
Viral Infection | 85.9 (80.4-90.1) | 0.92 (0.87-0.95) | 85.9 (78.1-92.1) | 86.0 (78.9-90.9) | 85.9 (80.8-90.1) | 6.1 (4.1-10.0) | 0.16 (0.09-0.26) | 0.840 (0.772-0.892) | 91.6 (87.3-94.4)
Procalcitonin | 77.1 (70.4-83.0) | 0.84 (0.76-0.89) | 67.6 (54.3-77.6) | 86.2 (79.0-90.9) | 80.3 (74.6-85.4) | 4.9 (3.1-8.1) | 0.38 (0.25-0.52) | 0.687 (0.579-0.770) | 53.9 (43.1-67.4)
Validation Cohort
Bacterial Infection | 80.1 (73.7-85.4) | 0.85 (0.78-0.90) | 79.1 (68.9-87.9) | 81.0 (73.6-86.9) | 80.4 (75.0-85.6) | 4.2 (2.9-5.9) | 0.26 (0.15-0.41) | 0.721 (0.626-0.794) | 49.9 (41.5-58.9)
Viral Infection | 86.8 (81.8-90.8) | 0.91 (0.85-0.94) | 89.0 (81.4-94.7) | 84.7 (77.1-90.5) | 86.6 (81.8-90.9) | 5.8 (3.8-9.4) | 0.13 (0.07-0.22) | 0.853 (0.796-0.901) | 91.2 (87.4-94.4)
Procalcitonin | 68.7 (62.4-75.4) | 0.72 (0.62-0.79) | 53.7 (41.0-65.3) | 83.1 (76.5-88.5) | 73.7 (66.7-79.4) | 3.2 (2.1-4.99) | 0.56 (0.43-0.71) | 0.567 (0.463-0.667) | 43.5 (33.7-54.6)

All values are presented with a 95% confidence interval. AWA = Average Weighted Accuracy. PPA = Positive Percent Agreement. NPA = Negative Percent Agreement. LR+ = Likelihood Ratio Positive. LR- = Likelihood Ratio Negative. PTP = Bayesian post-test probability, representing the expected post-test probability for a pre-test probability with a uniform prevalence distribution in the range of 0.1-0.3 for bacterial infection and 0.5-0.8 for viral infection.
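The agreement metrics reported in Table 2 can all be derived from a 2×2 confusion matrix against the adjudicated reference. A minimal sketch, using the definitions given in the Statistical Analysis section (the PTP column additionally averages over a prevalence range; only the single-prevalence form is shown here):

```python
def performance_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Agreement metrics as defined in the Statistical Analysis section:
    PPA is computed like sensitivity, NPA like specificity."""
    ppa = tp / (tp + fn)
    npa = tn / (tn + fp)
    return {
        "PPA": ppa,
        "NPA": npa,
        "Accuracy": (tp + tn) / (tp + fp + fn + tn),
        "LR+": ppa / (1 - npa),
        "LR-": (1 - ppa) / npa,
        "F1": 2 * tp / (2 * tp + fp + fn),
    }

def post_test_probability(pretest: float, lr: float) -> float:
    """Bayesian post-test probability for a single pre-test prevalence;
    the table's PTP column averages this over a prevalence range."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Illustrative counts, not data from the study.
m = performance_metrics(tp=8, fp=1, fn=2, tn=9)
print(m["PPA"], m["NPA"], round(m["LR+"], 2))  # 0.8 0.9 8.0
```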

Validation Cohort:

In the 209-subject validation cohort, the test had an AWA of 80.1% (95% CI 73.7%-85.4%) for bacterial infection and 86.8% (95% CI 81.8%-90.8%) for viral infection (Figure 2B and Table 2). The corresponding AUCs were 0.85 (95% CI 0.78-0.90) and 0.91 (95% CI 0.85-0.94), respectively.

Infection Site:

To evaluate the test in specific clinical subgroups, we combined the training and validation groups to increase the evaluable sample size. Whereas all viral infections were respiratory in nature, the bacterial infections included a variety of anatomic sites. The PPA for bacterial infection was 84% for respiratory tract (n=50), 71% for urinary tract (n=35), 86% for vascular device (n=14), 75% for skin/soft tissue (n=12), 100% for intra-abdominal (n=12), and 92% for other sites (n=12) (Supplementary Table 3).

Procalcitonin:

In the combined cohort of 422 subjects, the HR-B/V test was significantly better at discriminating bacterial from non-bacterial etiologies compared to procalcitonin (Figures 2C and 2D). This was driven by a higher PPA for the HR-B/V test (82.2% vs. 60.7% for procalcitonin; P<0.001). NPA for bacterial infection was similar (81.2% for HR-B/V vs. 84.7% for procalcitonin; P=0.27). Whereas the HR-B/V test distinguishes viral from non-infectious etiologies, procalcitonin does not, which precluded a comparison of the tests for this purpose.

Confounders:

We evaluated the impact of age, sex, ethnicity, and illness severity (as defined by the need for hospitalization). This was done in the combined 422-subject cohort to improve the ability to detect such differences, although the study was not powered for these analyses. There were no statistically significant differences due to age, sex, or ethnicity, but accuracy was higher among non-hospitalized subjects than hospitalized subjects (83.8% vs. 74.2%; P=0.02) (Supplementary Table 4).

Alternative reporting schemes

There is no standardized approach to reporting results of composite biomarkers. Thus far, we applied a single threshold to determine the presence or absence of disease so every tested subject receives a potentially actionable result. However, values close to the thresholds may have greater uncertainty. To account for this uncertainty, we evaluated the impact of two alternative reporting schemes: probability quartiles and inclusion of an equivocal zone.

Infection Score Quartiles:

This scheme provides greater ability to rule in or rule out bacterial and viral infection for subjects in the highest and lowest quartiles, respectively. Probability thresholds defined by training cohort quartiles were applied to the validation group. Those thresholds were well-calibrated in both cohorts (Supplementary Figure 5). For bacterial infection diagnosis, the lowest quartile had a PPA of 100% in the training cohort and 94.0% (LR− 0.09) in the validation cohort (Supplementary Table 5, Supplementary Figure 6). The highest quartile for bacterial infection had an NPA of 92.4% (LR+ 5.04) in the training cohort and 90.1% (LR+ 4.39) in the validation cohort. The HR-B/V test performed better with respect to viral infection, with a PPA of 96.7% in both the training and validation cohorts as well as an NPA of 98.3% and 94.9%, respectively. Additional results are shown in Supplementary Table 5 and Supplementary Figure 6.
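The quartile scheme can be sketched as binning each new subject's probability into bands whose cut points come from the training-cohort probability distribution. The helper below is illustrative; boundary handling at the cut points is an assumption.

```python
import numpy as np

def quartile_band(p: float, train_probs) -> int:
    """Band 1 (lowest) through 4 (highest), with cut points taken from
    the training-cohort probability quartiles. Subjects in band 1 are
    best ruled out; subjects in band 4 are best ruled in."""
    cuts = np.quantile(train_probs, [0.25, 0.5, 0.75])
    return int(np.searchsorted(cuts, p, side="right")) + 1

# Illustrative training-cohort probabilities, not study data.
train = [0.05, 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.9]
print(quartile_band(0.95, train))  # 4
```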

Equivocal zone:

An equivocal zone decreases the number of subjects with an actionable result, but the diagnostic confidence is higher for subjects above or below the zone’s thresholds. We defined probability thresholds for both bacterial (0.18-0.37) and viral (0.26-0.47) infection that maximized AWA in the training cohort while assigning <15% of subjects to the equivocal zone. Scatter plots of subjects in the training and validation cohorts for both the bacterial and viral models are presented along with a graphical representation of the scheme in Supplementary Figure 7. Incorporating an equivocal zone, the HR-B/V test had an AUC of 0.92 and an AWA of 87% for bacterial infection diagnosis in the training cohort, as compared to an AUC of 0.86 and an AWA of 83% in the validation cohort. For viral infection diagnosis, the training cohort had an AUC of 0.94 and an AWA of 91%, as compared to an AUC of 0.91 and an AWA of 87% in the validation cohort. Confusion matrices are presented in Supplementary Table 6. A comparison of results for the three schemes (single threshold, quartiles, and equivocal zone) is shown in Table 3.
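The equivocal-zone scheme for a single model can be sketched as a three-way call. The bands are those reported above; the function name and the treatment of values exactly on a boundary are illustrative assumptions.

```python
def call_with_equivocal_zone(p: float, lo: float, hi: float) -> str:
    """Single-model call with an equivocal band: probabilities inside
    [lo, hi] are reported as equivocal rather than forced to a call.
    Boundary handling is an assumption."""
    if p < lo:
        return "negative"
    if p > hi:
        return "positive"
    return "equivocal"

# Bands from the paper: bacterial 0.18-0.37, viral 0.26-0.47.
print(call_with_equivocal_zone(0.30, 0.18, 0.37))  # equivocal
```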

Coinfection

We identified 36 cases of respiratory superinfection, defined as a bacterial infection arising during or after an antecedent viral infection. In 12 cases, both bacterial and viral pathogens were microbiologically confirmed. In the remaining 24 subjects, a bacterial superinfection was clinically suspected but not microbiologically confirmed. Using the first reporting scheme, where a single threshold determined the presence or absence of bacterial and viral infection, the HR-B/V test identified a bacterial infection in all 12 microbiologically confirmed cases (100%) as compared to 75% for procalcitonin (P=0.07) (Figure 2E). Among the 24 cases of suspected superinfection, 11 (45.8%) were identified as having a viral infection, 5 (20.8%) had a bacterial host response, 5 (20.8%) had both bacterial and viral responses, and 3 (12.5%) were negative for infection. Procalcitonin was ≥0.25 ng/mL in 10/24 (41.7%) suspected superinfection cases.

We then evaluated how many subjects in the training and validation cohorts adjudicated as having a monomicrobial infection would have been diagnosed with coinfection using the HR-B/V test. Among the 135 bacterial infections, eight (5.9%) also demonstrated a host response to viral infection. Among the 183 viral infections, twelve (6.6%) also demonstrated a host response to bacterial infection.

Suspected infection

A host-based approach might offer the greatest benefit to patients with ARI but no positive microbiology. In this study, these subjects had clinical syndromes compatible with bacterial or viral infection based on expert panel adjudication but no identified pathogen. Of the 82 suspected bacterial cases, 61 (74.4%) were classified as bacterial or bacterial/viral coinfection. An additional 10 (12.2%) were classified as viral while 11 (13.4%) cases were classified as neither (Figure 2F). Procalcitonin was ≥0.25 ng/mL in 38 cases (46.3%). In the group of 83 suspected viral cases, 30 (36.1%) had a viral host response, 31 (37.3%) had a bacterial host response, 3 (3.6%) had both bacterial and viral responses, and 19 (22.9%) were negative for both. Procalcitonin was <0.25 ng/mL in 77 cases (92.8%).

Discussion

The overlap in symptoms due to bacterial, viral, and non-infectious disease leads to diagnostic uncertainty and inappropriate antimicrobial use. Pathogen detection tests play an important clinical role but are insufficient to make a diagnosis in the majority of ARI cases. In the current study, we built upon our previous findings that host gene expression accurately discriminates bacterial, viral, and non-infectious disease (16, 25). Beyond simply validating the signature, this study provides proof of principle that a complex host gene expression signature based on machine learning algorithms can be translated to a clinical platform and validated in an independent test cohort. The HR-B/V test was superior to procalcitonin both with respect to the identification of bacterial infection and the ability to discriminate viral from non-infectious disease. We also showed that blood serves as an accurate biosensor for bacterial infection at multiple different anatomic sites of infection. Furthermore, the HR-B/V test provided clear results even in complex or ambiguous cases such as coinfection or suspected infection.

Host gene expression signatures have been identified for multiple conditions including coronary artery disease, cancer, transplant rejection, and sepsis (18, 19, 30, 31). However, the utility of these tests is limited by a turnaround time of many hours to days due to their high complexity. In this study, we utilized the widely available BioFire system to measure host gene expression of 45 host mRNA biomarkers, with results available in about 45 minutes. The biological roles and associated pathways for these targets have previously been described (16, 25). HR-B/V overall accuracy was similar to that previously reported despite using a much smaller signature and translation to an integrated, sample-to-answer platform: 87% using microarray (25), 88% using TaqMan Low Density Array RT-PCR (24), and 80-87% in this study (depending on the subgroup).

Multiple studies have described host response signatures to discriminate viral and bacterial infection (14, 16, 3242). In most cases, these signatures focus only on subjects with bacterial or viral infection without adequately accounting for the possibility of non-infectious illness. To address this limitation, we utilized a composite of two signatures: bacterial vs. non-bacterial (i.e., viral or non-infectious) illness and viral vs. non-viral (i.e., bacterial or non-infectious) illness. The possible outputs of this composite signature are therefore bacterial infection, viral infection, coinfection, or no infection. Although this scheme increases generalizability, it comes at the expense of a lower overall test accuracy. First, the test must distinguish three categories rather than just two, increasing the opportunities for classification errors. Second, there is a high degree of overlap in the host’s response to bacterial infection and non-infectious illness. Third, the AWA statistical approach minimizes false negative bacterial errors, which carry the greatest risk of patient harm. In so doing, it maximizes the test’s sensitivity for bacterial infection at the expense of specificity. These factors may explain the lower overall accuracy among hospitalized subjects, which were more likely to have either bacterial or non-infectious etiologies.

Most biomarker tests measure a single analyte and typically report the value as a concentration (e.g., procalcitonin in ng/ml). However, multi-analyte host response assays convert raw measurements (e.g., Cq for mRNA) into a probability function or composite score. Presently, there are no standardized ways to report such results. Previously described schemes include the use of single thresholds to provide results for all tested patients, quartiles/bands, and equivocal zones (24, 25, 4345). In this study, we compared results for all three schemes. Our findings do not specify which approach is best but highlight the challenges in reporting results of composite biomarker tests. Moreover, different clinical scenarios (e.g., screening vs. diagnosis) might warrant different approaches.

The HR-B/V test identified all cases of microbiologically confirmed bacterial superinfection. However, some patients with superinfection but no confirmed bacterial pathogen demonstrated a viral host response. This suggests that secondary or persistent viral infections may be responsible for some suspected superinfections. Along these lines, we observed a significant number of patients with suspected (microbiology-negative) bacterial infections who instead had a viral host response. Without microbiological confirmation, these could be adjudication errors, test errors, or perhaps infections due to atypical bacterial pathogens such as Mycoplasma.

The best currently available clinical laboratory test for bacterial vs. viral discrimination is procalcitonin. A meta-analysis demonstrated that procalcitonin-guided algorithms reduced antibiotic use and improved patient outcomes (46). However, this result was not reproduced in the US-based ProACT randomized controlled trial (9). Furthermore, the ability of procalcitonin to discriminate bacterial from viral infection has been limited: 55% sensitivity and 76% specificity in patients with community-acquired pneumonia (8). Procalcitonin performed better in this study (60.7% sensitivity and 84.7% specificity) although not as well as host gene expression, consistent with prior observations (25, 35, 38, 47, 48). Despite these low performance characteristics, procalcitonin is widely used to guide antibacterial use. This underscores that a diagnostic test need not have perfect or even exceptional accuracy to be clinically useful, desirable as that may be. If a biomarker is to be the sole determinant of treatment in the absence of additional clinical data, then performance characteristics should be sufficiently high after accounting for the clinical consequences of false positives and false negatives. However, when used as an adjunct to other clinical information, biomarkers can be clinically useful and actionable even with lower accuracies (e.g., procalcitonin, white blood cell counts, fever). It is noteworthy that our reference standard in this study was clinical adjudication, which is known to be inaccurate. As such, we would not expect (nor desire) performance metrics that are too good to be true. In such a situation, the test would have done little more than perfectly matched an imperfect comparator.

Among this study’s limitations are that it was not powered to detect differences due to demographics such as age, race, and ethnicity. A peripheral blood host gene expression test may not perform as expected in patients with profound abnormalities in their peripheral leukocyte counts or distributions such as neutropenia. A recent evaluation of host gene expression in subjects with immunocompromising conditions revealed slightly lower, but still clinically useful performance (49). We did not assess the kinetics of the host response and are therefore unable to assess response to treatment. Perhaps the greatest limitation is the absence of a gold standard to define the presence of bacterial or viral infection. We therefore relied on expert adjudication, which is imperfect despite being the best available standard. Lastly, a clinical utility study will be necessary to demonstrate that such a test actually mitigates antibiotic overuse without compromising (and perhaps improving) patient outcomes.

Conclusions

This study demonstrates the first translation of a host gene expression signature for the diagnosis of bacterial and viral infection. In doing so, we demonstrate the feasibility of quantifying the host transcriptional response for real-time clinical decision-making, opening a new pathway for test development in multiple clinical domains. The HR-B/V test was superior to procalcitonin.

Supplementary Material

Table 3. Comparison of result reporting schemes.

We evaluated three different reporting schemes. The Single Threshold scheme uses one numerical threshold to determine whether a subject has a bacterial infection in the bacterial vs. non-bacterial model or a viral infection in the viral vs. non-viral model; this allows all subjects to be classified. The Quartiles scheme uses multiple thresholds to assign subjects into probability bands; the reported PPA and NPA values focus on the top and bottom quartiles, therefore representing 50% of the cohort. The Equivocal Zone scheme allows for a probability band in which no call is made; its thresholds for bacterial vs. non-bacterial and viral vs. non-viral classification were selected to exclude no more than 15% of the cohort.

| Model | Single Threshold: PPA | NPA | % Cohort | Quartiles: PPA | NPA | % Cohort | Equivocal Zone: PPA | NPA | % Cohort |
|---|---|---|---|---|---|---|---|---|---|
| Training Cohort, Bacterial vs. Non-Bacterial | 85.3 | 81.4 | 100 | 100 | 92.4 | 50 | 85.4 | 86.2 | 85 |
| Validation Cohort, Bacterial vs. Non-Bacterial | 79.1 | 81.0 | 100 | 94.0 | 90.1 | 50 | 83.2 | 83.1 | 88 |
| Training Cohort, Viral vs. Non-Viral | 85.9 | 86.0 | 100 | 96.7 | 98.3 | 50 | 84.5 | 92.3 | 85 |
| Validation Cohort, Viral vs. Non-Viral | 89.0 | 84.7 | 100 | 96.7 | 94.9 | 50 | 81.5 | 91.8 | 85 |
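The three reporting schemes in Table 3 amount to simple decision rules applied to the model's probability output. The sketch below illustrates the logic in Python; the numeric thresholds are hypothetical placeholders chosen for illustration, not the study's actual cutoffs, which are not reported in this text.

```python
# Illustration of the three result-reporting schemes described in Table 3.
# All threshold values below are hypothetical, not the study's actual cutoffs.

def single_threshold(prob: float, threshold: float = 0.5) -> str:
    """Single Threshold: every subject receives a positive or negative call."""
    return "positive" if prob >= threshold else "negative"

def quartile_band(prob: float, q1: float = 0.25, q3: float = 0.75) -> str:
    """Quartiles: assign probability bands; PPA/NPA are reported only for
    the top and bottom quartiles (50% of the cohort)."""
    if prob >= q3:
        return "high probability"
    if prob < q1:
        return "low probability"
    return "intermediate"

def equivocal_zone(prob: float, low: float = 0.4, high: float = 0.6) -> str:
    """Equivocal Zone: no call is made inside the band [low, high)."""
    if prob >= high:
        return "positive"
    if prob < low:
        return "negative"
    return "equivocal"
```

In practice, the equivocal-zone bounds would be tuned on the training cohort so that no more than 15% of subjects fall in the no-call band, per the table legend.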

Acknowledgments

Conflicts of Interest and Source of Funding: This work was supported in part by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health [grant numbers U01AI066569 and UM1AI104681] and the U.S. DARPA [contract number N66001-09-C2082]. ECL was supported by the Eugene A. Stead Scholarship from Duke University School of Medicine and the Infectious Diseases Society of America Medical Scholars Program. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health. These funding sources had no role in the writing of the manuscript or the decision to submit it for publication. BioFire, Inc. provided in-kind support for test development reagents used in this study. ELT, RH, MTM, GSG, TWB, and CWW have filed for a patent pertaining to the signatures discussed in this study (WO 2017/004390 A1). ELT, GSG, and CWW are co-founders of Predigen, Inc. TWB is a consultant for and holds equity in Predigen, Inc. ACH and JWN are employees of BioFire Diagnostics, LLC. CBC is a consultant for bioMérieux, Inc. JLM and RJC are former employees of BioFire Diagnostics, LLC. RJC is currently an employee of bioMérieux, Inc. VGF reports personal fees from Novartis, Novadigm, Durata, Debiopharm, Genentech, Achaogen, Affinium, Medicines Co., Cerexa, Tetraphase, Trius, MedImmune, Bayer, Theravance, Basilea, Affinergy, Janssen, xBiotech, Contrafect, Regeneron, Destiny, Amphliphi Biosciences, Integrated Biotherapeutics, and C3J; grants from NIH, MedImmune, Pfizer, Advanced Liquid Logic, Cerexa/Forest/Actavis/Allergan, Theravance, Novartis, Cubist/Merck, Medical Biosurfaces, Locus, Affinergy, Contrafect, Karius, Genentech, Regeneron, Basilea, Janssen, Green Cross, Cubist, Cerexa, Durata, Theravance, Debiopharm; royalties from UpToDate; and a patent for sepsis diagnosis (US9850539B2).

Copyright Form Disclosure: Dr. Tsalik received support from BioFire Diagnostics by way of consumables and test instruments; received funding from Predigen, Inc.; disclosed the off-label product use of in vitro diagnostics. Drs. Tsalik, Henao, McClain, Ginsburg, and Woods disclosed filing for a patent pertaining to the signatures discussed in this study (WO 2017/004390 A1). Drs. Tsalik, Ginsburg, and Woods disclosed that they are co-founders of Predigen, Inc. Drs. Tsalik, Ko, Petzold, Cairns, Kingsmore, Fowler, Ginsburg, Burke, and Woods received support for article research from the National Institutes of Health (NIH). Drs. Nawrocki, Crisp, and Hemmert received funding from BioFire Diagnostics, LLC; disclosed that they are employees of BioFire Diagnostics, LLC. Dr. Nawrocki disclosed he has shares in bioMérieux. Dr. Aydin disclosed work for hire. Dr. Ko's institution received funding from the Antibiotic Resistance Leadership Group; disclosed the off-label product use of diagnostic tests. Dr. Petzold received support for article research from the Defense Advanced Research Projects Agency (DARPA) (NIH NIAID U01AI066569 & UM1AI104681; US DARPA contract N66001-09-C2082). Dr. Cairns' institution received funding from the NIH (NIAID) and DARPA; received funding from bioMérieux. Dr. Kingsmore's institution received funding from the NIH. Dr. Fowler's institution received funding from the NIH, MedImmune, Allergan, Pfizer, Advanced Liquid Logics, Theravance, Novartis, Merck, Medical Biosurfaces, Locus, Affinergy, Contrafect, Karius, Genentech, Regeneron, Basilea, and Janssen; received funding from Basilea, Affinergy, Janssen, Integrated Biotherapeutics, C3J, Armata, Valanbio, Akagera, Aridis, Novartis, Novadigm, Durata, Debiopharm, Genentech, Achaogen, Affinium, Medicines Co., Cerexa, Tetraphase, Trius, MedImmune, Bayer, Theravance, xBiotech, Contrafect, Regeneron, Destiny, and UpToDate; holds stock options in Valanbio; and has a sepsis diagnostics patent pending. Dr. McClain disclosed he has patents pending on diagnostic signatures for respiratory infections. Dr. Ginsburg's institution received funding from DARPA; received support for article research from the Bill & Melinda Gates Foundation. Dr. Burke's institution received funding from the NIH; received funding from Predigen, Inc.; disclosed he is a co-inventor on patents pending on Molecular Methods to Diagnose and Treat Respiratory Infections. Dr. Hemmert disclosed work for hire; disclosed the off-label product use of the BioFire FilmArray System. The remaining authors have disclosed that they do not have any potential conflicts of interest.

Acknowledgements:

We are grateful to Marshall Nichols, Christina Nix, and Carolyne Whiting for their data management support. We also acknowledge the contributions made by Olga Better, Anna Mazur, Brad Nicholson, Jack Anderson, Charles Bullard, and Pamela Isner in the laboratory. This study would not have been possible without the support of the many clinical staff responsible for enrollment at all participating sites. We acknowledge bioMérieux Inc. for providing the reagents used to measure procalcitonin concentrations.

References

1. Centers for Disease Control and Prevention: National Ambulatory Medical Care Survey: 2015 State and National Summary Tables. U.S. Department of Health and Human Services, 2015
2. GBD 2015 Disease and Injury Incidence and Prevalence Collaborators: Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990-2015: a systematic analysis for the Global Burden of Disease Study 2015. Lancet 2016; 388(10053):1545–1602
3. Shapiro DJ, Hicks LA, Pavia AT, et al.: Antibiotic prescribing for adults in ambulatory care in the USA, 2007-09. J Antimicrob Chemother 2014; 69(1):234–240
4. Lee GC, Reveles KR, Attridge RT, et al.: Outpatient antibiotic prescribing in the United States: 2000 to 2010. BMC Med 2014; 12:96
5. Jain S, Self WH, Wunderink RG, et al.: Community-Acquired Pneumonia Requiring Hospitalization among U.S. Adults. N Engl J Med 2015; 373(5):415–427
6. Self WH, Balk RA, Grijalva CG, et al.: Procalcitonin as a Marker of Etiology in Adults Hospitalized With Community-Acquired Pneumonia. Clin Infect Dis 2017; 65(2):183–190
7. Musher DM, Bebko SP, Roig IL: Serum procalcitonin level, viral polymerase chain reaction analysis, and lower respiratory tract infection. J Infect Dis 2014; 209(4):631–633
8. Kamat IS, Ramachandran V, Eswaran H, et al.: Procalcitonin to Distinguish Viral From Bacterial Pneumonia: A Systematic Review and Meta-analysis. Clin Infect Dis 2020; 70(3):538–542
9. Huang DT, Yealy DM, Filbin MR, et al.: Procalcitonin-Guided Use of Antibiotics for Lower Respiratory Tract Infection. N Engl J Med 2018; 379(3):236–249
10. Schuetz P, Wirz Y, Sager R, et al.: Procalcitonin to initiate or discontinue antibiotics in acute respiratory tract infections. Cochrane Database Syst Rev 2017; 10(10):CD007498
11. Zaas AK, Chen M, Varkey J, et al.: Gene expression signatures diagnose influenza and other symptomatic respiratory viral infections in humans. Cell Host Microbe 2009; 6(3):207–217
12. Woods CW, McClain MT, Chen M, et al.: A Host Transcriptional Signature for Presymptomatic Detection of Infection in Humans Exposed to Influenza H1N1 or H3N2. PLoS One 2013; 8(1):e52198
13. Mejias A, Dimo B, Suarez NM, et al.: Whole blood gene expression profiles to assess pathogenesis and disease severity in infants with respiratory syncytial virus infection. PLoS Med 2013; 10(11):e1001549
14. Ramilo O, Allman W, Chung W, et al.: Gene expression patterns in blood leukocytes discriminate patients with acute infections. Blood 2007; 109(5):2066–2077
15. Parnell GP, McLean AS, Booth DR, et al.: A distinct influenza infection signature in the blood transcriptome of patients with severe community-acquired pneumonia. Crit Care 2012; 16(4):R157
16. Zaas AK, Burke T, Chen M, et al.: A host-based RT-PCR gene expression signature to identify acute respiratory viral infection. Sci Transl Med 2013; 5(203):203ra126
17. Hu X, Yu J, Crosby SD, et al.: Gene expression profiles in febrile children with defined viral and bacterial infection. Proc Natl Acad Sci U S A 2013; 110(31):12792–12797
18. Cardoso F, van't Veer LJ, Bogaerts J, et al.: 70-Gene Signature as an Aid to Treatment Decisions in Early-Stage Breast Cancer. N Engl J Med 2016; 375(8):717–729
19. Crespo-Leiro MG, Stypmann J, Schulz U, et al.: Performance of gene-expression profiling test score variability to predict future clinical events in heart transplant recipients. BMC Cardiovasc Disord 2015; 15:120
20. Langley RJ, Tsalik EL, van Velkinburgh JC, et al.: An integrated clinico-metabolomic model improves prediction of death in sepsis. Sci Transl Med 2013; 5(195):195ra95
21. Tsalik EL, Jaggers LB, Glickman SW, et al.: Discriminative value of inflammatory biomarkers for suspected sepsis. J Emerg Med 2012; 43(1):97–106
22. Glickman SW, Cairns CB, Otero RM, et al.: Disease progression in hemodynamically stable patients presenting to the emergency department with sepsis. Acad Emerg Med 2010; 17(4):383–390
23. Bone RC, Balk RA, Cerra FB, et al.: Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest 1992; 101(6):1644–1655
24. Lydon EC, Henao R, Burke TW, et al.: Validation of a host response test to distinguish bacterial and viral respiratory infection. EBioMedicine 2019; 48:453–461
25. Tsalik EL, Henao R, Nichols M, et al.: Host gene expression classifiers diagnose acute respiratory illness etiology. Sci Transl Med 2016; 8(322):322ra11
26. Hastie T, Tibshirani R, Friedman JH: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, Springer, 2001
27. Fawcett T: An introduction to ROC analysis. Pattern Recognit Lett 2006; 27(8):861–874
28. Biswas B: Clinical Performance Evaluation of Molecular Diagnostic Tests. J Mol Diagn 2016; 18(6):803–812
29. Liu Y, Tsalik EL, Jiang Y, et al.: Average Weighted Accuracy (AWA): Pragmatic Analysis for a RADICAL Study. Clin Infect Dis 2019
30. Rosenberg S, Elashoff MR, Beineke P, et al.: Multicenter validation of the diagnostic accuracy of a blood-based gene expression test for assessing obstructive coronary artery disease in nondiabetic patients. Ann Intern Med 2010; 153(7):425–434
31. Zimmerman JJ, Sullivan E, Yager TD, et al.: Diagnostic Accuracy of a Host Gene Expression Signature That Discriminates Clinical Severe Sepsis Syndrome and Infection-Negative Systemic Inflammation Among Critically Ill Children. Crit Care Med 2017; 45(4):e418–e425
32. Heinonen S, Jartti T, Garcia C, et al.: Rhinovirus Detection in Symptomatic and Asymptomatic Children: Value of Host Transcriptome Analysis. Am J Respir Crit Care Med 2016; 193(7):772–782
33. Mahajan P, Kuppermann N, Mejias A, et al.: Association of RNA Biosignatures With Bacterial Infections in Febrile Infants Aged 60 Days or Younger. JAMA 2016; 316(8):846–857
34. Mahajan P, Kuppermann N, Suarez N, et al.: RNA transcriptional biosignature analysis for identifying febrile infants with serious bacterial infections in the emergency department: a feasibility study. Pediatr Emerg Care 2015; 31(1):1–5
35. Suarez NM, Bunsow E, Falsey AR, et al.: Superiority of transcriptional profiling over procalcitonin for distinguishing bacterial from viral lower respiratory tract infections in hospitalized adults. J Infect Dis 2015; 212(2):213–222
36. Herberg JA, Kaforou M, Wright VJ, et al.: Diagnostic Test Accuracy of a 2-Transcript Host RNA Signature for Discriminating Bacterial vs Viral Infection in Febrile Children. JAMA 2016; 316(8):835–845
37. Kaforou M, Herberg JA, Wright VJ, et al.: Diagnosis of Bacterial Infection Using a 2-Transcript Host RNA Signature in Febrile Infants 60 Days or Younger. JAMA 2017; 317(15):1577–1578
38. Wallihan RG, Suarez NM, Cohen DM, et al.: Molecular Distance to Health Transcriptional Score and Disease Severity in Children Hospitalized With Community-Acquired Pneumonia. Front Cell Infect Microbiol 2018; 8:382
39. Sweeney TE, Wong HR, Khatri P: Robust classification of bacterial and viral infections via integrated host gene expression diagnostics. Sci Transl Med 2016; 8(346):346ra91
40. Sampson DL, Fox BA, Yager TD, et al.: A Four-Biomarker Blood Signature Discriminates Systemic Inflammation Due to Viral Infection Versus Other Etiologies. Sci Rep 2017; 7(1):2914
41. McClain MT, Constantine FJ, Henao R, et al.: Dysregulated transcriptional responses to SARS-CoV-2 in the periphery support novel diagnostic approaches. medRxiv 2020; 2020.07.20.20155507
42. McClain M, Constantine F, Nicholson B, et al.: A blood-based host gene expression assay allows for early detection of respiratory viral infection: an index-cluster cohort study. Lancet Infect Dis 2020; In Press
43. Mayhew MB, Buturovic L, Luethy R, et al.: A generalizable 29-mRNA neural-network classifier for acute bacterial and viral infections. Nat Commun 2020; 11(1):1177
44. Oved K, Cohen A, Boico O, et al.: A novel host-proteome signature for distinguishing between acute bacterial and viral infections. PLoS One 2015; 10(3):e0120012
45. McHugh L, Seldon TA, Brandon RA, et al.: A Molecular Host Response Assay to Discriminate Between Sepsis and Infection-Negative Systemic Inflammation in Critically Ill Patients: Discovery and Validation in Independent Cohorts. PLoS Med 2015; 12(12):e1001916
46. Odermatt J, Friedli N, Kutz A, et al.: Effects of procalcitonin testing on antibiotic use and clinical outcomes in patients with upper respiratory tract infections. An individual patient data meta-analysis. Clin Chem Lab Med 2017; 56(1):170–177
47. Scicluna BP, Klein Klouwenberg PM, van Vught LA, et al.: A molecular biomarker to diagnose community-acquired pneumonia on intensive care unit admission. Am J Respir Crit Care Med 2015; 192(7):826–835
48. Humphries R, Giamarellos-Bourboulis E, Wright DW, et al.: A 29 messenger RNA host response signature identifies bacterial and viral infections among emergency department patients. Acad Emerg Med 2020; 27:S195
49. Mahle RE, Suchindran S, Henao R, et al.: Validation of a host gene expression test for bacterial/viral discrimination in immunocompromised hosts. Clin Infect Dis 2021
