Author manuscript; available in PMC: 2023 May 17.
Published in final edited form as: Ann Surg. 2020 Oct;272(4):604–610. doi: 10.1097/SLA.0000000000004379

Multiplexed Plasma Immune Mediator Signatures Can Differentiate Sepsis From NonInfective SIRS

American Surgical Association 2020 Annual Meeting Paper

Laura A Cahill *, Brian A Joughin , Woon Yong Kwon , Kiyoshi Itagaki §, Charlotte H Kirk , Nathan I Shapiro **, Leo E Otterbein , Michael B Yaffe ††,‡‡, James A Lederer §§, Carl J Hauser ¶¶
PMCID: PMC10190157  NIHMSID: NIHMS1893865  PMID: 32932316

Abstract

Objectives:

Sepsis and sterile injury both release "danger signals" that induce the systemic inflammatory response syndrome (SIRS), so differentiating infection from noninfective SIRS can be challenging. Precision diagnostic assays could limit unnecessary antibiotic use and improve outcomes.

Methods:

After surveying human leukocyte cytokine production responses to sterile damage-associated molecular patterns (DAMPs), bacterial pathogen-associated molecular patterns, and bacteria, we created a multiplex assay for 31 cytokines. We then studied plasma from patients with bacteremia, septic shock, "severe sepsis," or trauma (ISS ≥15 with circulating DAMPs) as well as controls. Infections were adjudicated based on post-hospitalization review. Plasma was studied in infection and injury using univariate and multivariate means to determine how such multiplex assays could best distinguish infective from noninfective SIRS.

Results:

Infected patients had high plasma interleukin (IL)-6, IL-1α, and triggering receptor expressed on myeloid cells-1 (TREM-1) compared to controls [false discovery rates (FDR) <0.01, <0.01, <0.0001]. Conversely, injury suppressed many mediators including MDC (FDR <0.0001), TREM-1 (FDR <0.001), IP-10 (FDR <0.01), MCP-3 (FDR <0.05), FLT3L (FDR <0.05), Tweak (FDR <0.05), GRO-α (FDR <0.05), and ENA-78 (FDR <0.05). In univariate analyses, overlap of analyte values between clinical groups limited their clinical utility. Multivariate models discriminated injury and infection much better, with the 2-group random-forest model classifying 11/11 injury and 28/29 infection patients correctly in out-of-bag validation.

Conclusions:

Circulating cytokines in traumatic SIRS differ markedly from those in health or sepsis. Variability limits the accuracy of single-mediator assays but machine learning based on multiplexed plasma assays revealed distinct patterns in sepsis- and injury-related SIRS. Defining biomarker release patterns that distinguish specific SIRS populations might allow decreased antibiotic use in those clinical situations. Large prospective studies are needed to validate and operationalize this approach.

Keywords: bioinformatics, biomarkers, inflammation, sepsis, SIRS, sterile injury


After source control, management of surgical sepsis requires initiation of antibiotics. Antibiotic overuse, however, generates resistant infections, making accurate diagnosis critical. The systemic inflammatory response syndrome (SIRS) can reflect infective or sterile processes,1 and overly aggressive antibiotic use yields no benefits.2 Since the differential diagnosis of SIRS is difficult in surgical patients, there is an unmet need for rapid tests differentiating patients who need antibiotics from those who do not.3

SIRS diagnosis is also complex because infection and sterile injuries both initiate inflammation by releasing "danger signals." These molecules activate innate immunity via "pattern-recognition receptors" (PRR) that stimulate cytokine production.4 Danger signals come from many sources: "pathogen-associated molecular patterns" (PAMPs) arise from microbes,5 whereas "damage-associated molecular patterns" (DAMPs) arise from injured tissue6 as well as infection.7 Critically, PAMPs and DAMPs can engage PRR that converge on common downstream signaling pathways.

These relationships make SIRS diagnosis via individual biomarkers imprecise in surgical patients.8,9 Adding clinical parameters yields only slightly better results.10 Multiplexed plasma assays can predict trauma outcomes,11 post-traumatic infection,12 and sepsis severity,13 whereas tissue biomarkers can differentiate rejection from infection.14 We hypothesized that studying multiplexed plasma mediators using machine learning15 might discriminate, at presentation, between patients who would and would not benefit from antibiotics.

Secondarily, cytokine responses to SIRS are incompletely defined. We sought to identify new biomarkers for SIRS based on previous work showing that DAMPs and PAMPs both elicit inflammation,6 and combine these biomarkers with computational techniques differentiating sterile from infective inflammation. We reasoned that developing a computational framework that identifies phenotypic signatures in this population might have important clinical applications.

METHODS

Research Compliance

Approval was obtained from the Institutional Review Board, Beth Israel Deaconess Medical Center, Boston, MA. Consent was obtained from blood donors. All samples were deidentified.

Patient Populations

Blood samples were obtained prospectively from patients at Emergency Department (ED)/Trauma Center presentation. Heparinized plasma was frozen immediately (−80°C). Samples (n = 12/group) were analyzed from patients who presented with: bacteremia (positive ED blood cultures); septic shock (proven infection, SBP <90 mm Hg); "severe sepsis" (infection with ≥1 organ dysfunction); or injury (ISS ≥15 with elevated circulating DAMPs at presentation16,17). One injured patient lacked sufficient sample and was not studied. Twelve ED patients without acute pathology (eg, presenting for suture removal) were enrolled as "controls." One control was excluded when review revealed active recovery from pneumonia. Patients with presumed infections were subsequently adjudicated as having had infection or not at sampling via blinded chart review. Five "severe sepsis" patients did not have infection on adjudication and were not studied. Samples from 2 patients were erroneously included in both the bacteremia and shock groups; one sample per patient was randomly selected and the other discarded. Ultimately 11 control, 11 injury, and 29 infection samples were analyzed. Preliminary analysis indicated that no individual analyte stratified infection samples by severity, so this distinction was dropped.

Analyte Identification

To identify novel analytes, peripheral blood mononuclear cells (PBMCs) from healthy volunteers (n = 3) were purified using published methods17 and incubated overnight (37°C, 5% CO2) with LPS (100 ng/mL), f-MLF (100 nmol/L), NADH dehydrogenase 6 (100 nmol/L), and E coli or S aureus (each at MOI = 1:1 and 2:1). Controls included PBMC and bacteria incubated in media alone. Samples were analyzed in triplicate using a 66-analyte "cytokine discovery assay" (Eve Technologies, Calgary, Canada; complete listing in Suppl. Table 1, http://links.lww.com/SLA/C507). Analytes showing any significant change after any PBMC stimulus (P < 0.05 vs unstimulated PBMC) were included in the assay; invariant species were not studied.
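
The screening step above can be illustrated with a brief R sketch. This is a hypothetical reconstruction: the exact statistical test is not specified beyond "P < 0.05 vs unstimulated PBMC," so a two-sample t-test per analyte and stimulus is assumed here, and the data frame `disc` and its column names are illustrative.

```r
# Hypothetical screening sketch; test choice and data layout are assumptions.
# `disc`: one row per replicate well, with columns analyte, stimulus, value (pg/mL);
# stimulus == "unstim" marks the media-only PBMC control wells.
keep <- sapply(unique(disc$analyte), function(a) {
  d    <- disc[disc$analyte == a, ]
  base <- d$value[d$stimulus == "unstim"]
  # retain the analyte if ANY stimulus changes it significantly vs unstimulated PBMC
  any(sapply(setdiff(unique(d$stimulus), "unstim"), function(s)
    t.test(d$value[d$stimulus == s], base)$p.value < 0.05))
})
selected_analytes <- names(keep)[keep]   # candidates for the 31-plex panel
```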

Luminex Assays

Blood was collected and spun (2000g, 15 minutes), and plasma was frozen at −80°C. Assays were conducted using 20 μL of each plasma sample. Luminex bead-based ELISA assays were custom-developed per the manufacturer's instructions using capture/detection antibody pairs for previously published cytokines of interest as well as the novel analytes identified above. These were incorporated into a 31-plex assay including interleukin (IL)-1α, IL-1β, IL-6, IL-8, IL-10, IL-15, IL-17A, IL-18, IL-21, IL-22, IL-23, IL-33, interferon-γ, tumor necrosis factor (TNF), IL-1RA, MDC/CCL22, IP-10, GROα, MCP-3, ENA-78, FLT3L, Tweak, G-CSF, MIG, transforming growth factor (TGF) β, MCP-1, MIP-1α, PDGF-AA, PDGF-BB, triggering receptor expressed on myeloid cells-1 (TREM-1), and GM-CSF. Antibodies and standards were purchased from Biolegend (San Diego, CA) or R&D Systems (Minneapolis, MN). Cytokine concentrations were estimated using serially diluted standards (15,000 to 14.6 pg/mL); analyte values above or below the range of the standard curves were set to the corresponding limit. Independent triplicate assays were done in 96-well V-bottom plates (Corning, Corning, NY) using standard incubation and wash buffers and a magnetic plate for bead washes. Assays employed 30-minute incubation periods, with 2 buffer washes between, using Luminex beads, biotin-labeled detection antibodies, and streptavidin-phycoerythrin for detection. Results were generated using a FLEXMAP 3D instrument (Luminex, Austin, TX). Weekly calibration and daily verification were performed using FLEXMAP 3D Performance Verification Kits. Standard concentrations were fit to a weighted, 5-parameter logistic regression curve.
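
The standard-curve step is performed by the instrument software, but a comparable weighted 5-parameter logistic fit can be approximated in R. The sketch below is illustrative only; the drc package, the 1/y weighting, and the object names (`std`, `sample_mfi`) are assumptions, not the vendor's algorithm.

```r
# Illustrative standard-curve fit; the FLEXMAP software performs the actual fit.
# `std` is assumed to hold the serial dilutions (conc, pg/mL) and their
# median fluorescence intensities (mfi); `sample_mfi` holds unknown-sample MFIs.
library(drc)
fit <- drm(mfi ~ conc, data = std,
           fct = LL.5(),            # 5-parameter log-logistic curve
           weights = 1 / std$mfi)   # simple 1/y weighting (assumption)
# back-calculate unknown sample concentrations from their MFI values;
# values outside the standard range would be clamped to the curve limits
sample_conc <- ED(fit, respLev = sample_mfi, type = "absolute")
```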

Statistical Methods

Computation and model-building were performed in R version 3.6.3.18 Data plots were generated using version 3.3.0 of the R package ggplot2.19 Kruskal-Wallis tests with Benjamini-Hochberg false discovery rate (FDR) corrections20 were used to identify analytes whose intergroup distributions differed significantly. For analytes with FDR ≤0.05, pairwise Wilcoxon rank-sum tests (Benjamini-Hochberg corrected) were then used to identify which pairs of the 3 groups differed significantly.
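
A minimal sketch of this univariate workflow in base R follows; the data frame `dat` (one row per sample, a `group` factor, and one numeric column per analyte) and the object names are placeholders.

```r
# Univariate screen: Kruskal-Wallis per analyte with Benjamini-Hochberg FDR,
# then BH-corrected pairwise Wilcoxon rank-sum tests for significant analytes.
analytes <- setdiff(names(dat), "group")

kw_p   <- sapply(analytes, function(a) kruskal.test(dat[[a]] ~ dat$group)$p.value)
kw_fdr <- p.adjust(kw_p, method = "BH")

hits <- analytes[kw_fdr <= 0.05]
pairwise <- lapply(hits, function(a)
  pairwise.wilcox.test(dat[[a]], dat$group, p.adjust.method = "BH"))
names(pairwise) <- hits
```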

Partial least squares discriminant analysis (PLSDA) was performed using version 6.8.5 of the mixOmics21 package. Analyte values were scaled to have mean of zero and variance of one, before computation of the first 10 latent variables. All other default parameters were used. Performance of PLSDA models was assessed by performing 50 instances of 5-fold cross-validation using the “max.dist” distance metric.
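
A minimal sketch of this step with mixOmics is shown below; the analyte matrix `X` and group-label factor `Y` are placeholders.

```r
library(mixOmics)  # version 6.8.5 used in the study
# X: samples x 31 analyte matrix; Y: factor of group labels (placeholders)
plsda_fit <- plsda(X, Y, ncomp = 10, scale = TRUE)   # first 10 latent variables

# 50 repeats of 5-fold cross-validation, classifying with the "max.dist" metric
plsda_perf <- perf(plsda_fit, validation = "Mfold", folds = 5,
                   nrepeat = 50, dist = "max.dist", progressBar = FALSE)
plot(plsda_perf)                   # error rate vs number of latent variables
head(plsda_fit$loadings$X[, 1])    # analyte loadings on latent variable 1 (cf. Fig. 2F)
```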

Random forest (RF) models classifying patient samples using all 31 measured analytes were generated using version 4.6–14 of the R package randomForest22 with "keep.forest" set to true, "importance" set to true, and all other parameters as default. Performance was assessed by classifying each sample using only the trees not trained on that sample (ie, "out-of-bag" classification22), and additionally by performing 1000 rounds of 8-fold cross-validation on our 2-group model. Analyte importance was calculated as the decrease in the Gini index (a measure of class impurity at a tree node) gained by splitting on the analyte, averaged across trees.
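
A corresponding sketch with randomForest follows, again with placeholder objects `X2` and `Y2` holding the analyte matrix and the injury/infection labels for the 2-group model; the random seed is chosen here only for reproducibility of the sketch.

```r
library(randomForest)  # version 4.6-14 used in the study
set.seed(1)            # seed is an assumption for reproducibility of this sketch
rf_fit <- randomForest(x = X2, y = Y2,
                       keep.forest = TRUE, importance = TRUE)

rf_fit$confusion              # out-of-bag confusion matrix
rf_fit$predicted              # per-sample out-of-bag class calls
importance(rf_fit, type = 2)  # mean decrease in Gini index per analyte (cf. Fig. 3C)
```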

RESULTS

Ten Analytes Vary Significantly Among Control, Injury, and Infection Patient Groups

Typically, proinflammatory species like TNF, IL-1β, and IL-6 dominate early host responses to sepsis and may be markers for mortality.23 Later, anti-inflammatory cytokines regulate inflammation and promote healing.24 Comparative cytokine studies in early sterile versus infective SIRS are lacking, although they could inform accurate antibiotic use.

Our multiplex assay profiled known and newly identified cytokine signatures. Plasma samples from injured patients, infected patients, and controls were compared in 3 independent experiments. FDR-corrected Kruskal-Wallis tests identified analytes with different intergroup distributions. Group medians (log-transformed) for the analytes (Fig. 1A) demonstrate that of the 31 proteins analyzed, 10 analytes differed significantly among controls, injured, and infected patients. Pairwise Wilcoxon tests for analytes with FDR <0.05 showed that levels of IL-6 and TREM-1 were significantly elevated in infected patients compared to either trauma patients or controls (Fig. 1B). In contrast, MDC/CCL22, TREM-1, IP-10, GROα, MCP-3, ENA-78, FLT3L, and TWEAK were lower in injury (but not infection) compared to controls (Fig. 1B). Although these biomarkers showed clear intergroup differences, however, their distributions overlapped considerably and none showed a clear cutoff that differentiated the groups. The remaining 21 analytes showed no intergroup differences (Supp. Fig. 1, http://links.lww.com/SLA/C505).

FIGURE 1.

Measurement of cytokines and chemokines in plasma reveals a set of 10 that differed among controls, patients with injury and patients with infection. (A) Heat map of log-transformed group medians for each analyte (log10 pg/mL). Asterisks indicate analytes for which a Kruskal-Wallis test with Benjamini-Hochberg false discovery rate (FDR) correction indicates that all 3 groups are not likely to be drawn from the same distribution. (*FDR <0.05; **FDR <0.01; ***FDR <0.001; ****FDR <0.0001). (B) Box-and-whiskers plots are presented indicating the distributions within each group for analytes with FDR <0.05 by Benjamini-Hochberg-corrected Kruskal-Wallis test. Median, 25th, and 75th percentile values for each analyte are indicated by heavy horizontal black bar, and box bottoms and tops, respectively. Whiskers extend to the furthest value from the median on each side that is no more than 1.5 times the interquartile range to the hinge. All individual analyte values are plotted on a log-transformed scale. Brackets indicate the FDR of a Wilcoxon test between pairs of groups, with Benjamini-Hochberg correction across the three intergroup comparisons for each analyte (*FDR <0.05; **FDR <0.01; ***FDR <0.001; ****FDR <0.0001).

PLSDA Identifies Cytokine Combinations That Covary With Clinical Status

Since many analytes had distributions that varied significantly, but no single analyte differentiated the 3 populations or even discriminated the 2 populations of special interest (injury and infection) with high confidence, we applied PLSDA to assess whether combinations of analytes could better predict the status of patients. PLSDA is a classification method for describing a multivariable analyte space as a sequence of orthogonal linear combinations of the original analytes (latent variables), each of which covaries as much as possible with a set of group labels.25

First, we generated a PLSDA model attempting to separate the 3 sample groups. The patient samples’ scores on the first 5 latent variables (Fig. 2A) show that latent variables 1 and 2 (Fig. 2A, horizontal and vertical axes respectively) act primarily to separate injured patients from controls and patients with infections. In contrast, latent variables 3 and 4 (Fig. 2B, horizontal and vertical axes respectively) separate controls from injured or infected patients. A plot showing the scores for each of the first 4 latent variables by group, treated separately, is shown as Figure 2C. All intergroup differences were significant by Kruskal-Wallis test followed by pairwise intergroup Wilcoxon tests controlled for FDR (Fig. 2C). To quantify the performance of this model, and to identify the optimum number of latent variables to use in classification, we performed 50 rounds of 5-fold cross validation. Within each round, the data were split into 5 parts at random, and the classification of samples in each part was predicted using a PLSDA model trained only on the other 4 parts. Although a model built on all of the data is effective for differentiating samples in the infection group from other groups using just 1 latent variable (Fig. 2C), the same is not true when making predictions during cross-validation (ie, splitting the data into a prediction and a validation set). Although most infection samples could be identified correctly using 1 latent variable in cross-validation, the majority of control and injury samples were incorrectly predicted (Supp. Fig. 2A, http://links.lww.com/SLA/C506). Moreover, these erroneous predictions occurred overwhelmingly because samples from both classes were incorrectly classified as being from the infection group (Supp. Fig. 2B, http://links.lww.com/SLA/C506). The lowest overall error rate (~28%) was obtained in cross-validation using a model that contained 6 latent variables (Supp. Fig 2A, http://links.lww.com/SLA/C506). This error was somewhat balanced across sample classes, with infection samples most commonly misclassified as controls, and control and injury samples most commonly misclassified as infection (Supp. Fig 2C, http://links.lww.com/SLA/C506).

FIGURE 2.

Partial least squares discriminant analysis reveals linear combinations of measured cytokines and chemokines (latent variables) that covary maximally with group identity. (A, B) The scores of individual patient samples along latent variables 1 and 2 (A) and 3 and 4 (B) are shown for a PLSDA model intended to find latent variables separating 3 patient groups: control, injury, and infection. (C) Box-and-whiskers plot (as in Fig. 1B) indicating the distributions within each group for the first 4 latent variables of a 3-population PLSDA model. (D) The scores of individual patient samples along latent variables 1 and 2 are shown for a 2-population PLSDA model intended to find latent variables separating patients with injury from patients with infection. (E) Box-and-whiskers plot (as in Fig. 1B) indicating the distributions within each group for the first 4 latent variables of a 2-population PLSDA model. (F) Loadings plot detailing the contribution of each analyte to the first latent variable of a 2-population PLSDA model built to distinguish patients with injury-related SIRS from patients with infection. Each analyte is colored based on whether a high value contributes to a prediction of noninfective (green) or infective (yellow) SIRS.

Since this model was generated to differentiate sterile SIRS from infection in early surgical patients, we next wondered whether a simpler PLSDA model differentiating samples only from patients with injury or infection might perform better than a model also required to identify controls. When trained using all the data, this more specific 2-group model did an excellent job of separating infection from injury using only 1 latent variable (Fig. 2D), whereas none of the other latent variables significantly differentiated these 2 classes (Fig. 2E). The coefficients of the various analytes on the correctly discriminating latent variable represent the relative importance of these analytes in distinguishing these 2 classes (Fig. 2F). High levels of MDC/CCL22, IP-10, and TREM-1 point particularly strongly toward infection, whereas low values of these analytes and/or high levels of IL-10, TGFβ, IL-22, and IL-21 contributed to prediction of injury. Cross-validation demonstrated that this single latent variable model had an overall error rate of 12.5% (Supp. Fig. 2D, http://links.lww.com/SLA/C506) with most mistakes being misclassification of injury as infection (Supp. Fig. 2E, http://links.lww.com/SLA/C506). In cross-validation, inclusion of 2 more latent variables, neither of which was itself strongly discriminatory, further increased the overall accuracy of classification to 92%, with more mistaken classifications of injury as infection than vice versa (Supp. Figs. 2D, 2F, http://links.lww.com/SLA/C506).

RF Models Discriminate Samples From Patients With Injury or Infection With High Accuracy

Using cross-validation in our modestly sized dataset, a PLSDA model distinguished the 3 patient sample groups with only 72% accuracy. However, the more specialized 2-category model distinguished injury from infection with 92% accuracy. This exceeds the accuracy of currently available “sepsis tests," particularly the widely used procalcitonin assay (76% sensitivity and 69% specificity for identifying bacteremia at a cutoff value of 0.5 ng/mL26).

However, PLSDA still treats the contributions of each analyte to the final classification as independent of one another. We wondered whether a decision-tree-based modeling framework, in which the value of 1 analyte can influence the contribution of another analyte to classification, might be still more effective. We therefore turned to RF models,22 which use multiple decision trees trained on random subsets of the data to vote on the classification of new samples. Each tree sequentially queries a random subset of the measured analytes to decide into which group a new sample should be classified. Although each tree alone might be relatively inaccurate, the final RF model assigns a sample to whichever class the most trees choose, and is generally much more accurate. The accuracy of RF models can be estimated by classifying each sample using only trees not trained on that sample.

When an RF model was built to classify all 3 patient groups—control, injury, and infection—an overall accuracy of approximately 80% was achieved as assessed by out-of-bag classification. Nearly 55% of control samples were misclassified, primarily as infection (Fig. 3A), whereas all injury samples were correctly predicted and 86% of infection samples were predicted correctly, with most misclassified samples predicted to be controls.

FIGURE 3.

Random forest models use voting by a large number of decision trees to predict the group to which individual patient samples belong. (A, B) Distributions of predictions made on patient samples from each class by random forest models are plotted as stacked bars, for an RF model built to predict all three patient groups (A) or only to differentiate injury from infection (B). Only the subset of decision trees not trained on each sample is used to make the prediction for that sample, yielding an out-of-bag estimate of the performance expected on new samples; the resulting out-of-bag accuracy of 97.5% is consistent with the cross-validation accuracy of 96.3%. (C) The relative importance of each analyte to a 2-group random forest model for distinguishing samples from patients with injury and patients with infection. For each analyte, importance is calculated as the decrease in Gini index gained by splitting on the analyte, averaged across trees. (D) Three (of 500) decision trees created as part of a random forest model for distinguishing samples from patients with injury and patients with infection. Within each tree, a subset of analyte levels is queried sequentially, with the result leading either to a new subquery or to a prediction of the class the sample is from. CON, control; INJ, injury; INF, infection.

As with the PLSDA models, we therefore wondered whether 2-group models limited to differentiating injury from infection (as might be useful in postoperative SIRS) would be more effective. This RF model was able to correctly classify every injury sample (11/11) and 96% (28/29) of infection samples (97.5% overall), using only out-of-bag classification by the subset of trees not trained on each sample (Fig. 3B). To further validate this surprisingly high accuracy, we performed 1000 rounds of true cross-validation of the 2-group random forest model (splitting the data into 8 parts and using each set of 7 to make predictions on the eighth), and found that the average overall accuracy was 96.3% (10.6/11 injury, 27.9/29 infection), consistent with the previous estimate. The relative importance of each analyte to the model's success was also assessed, revealing standout roles for MDC/CCL22 and IP-10, followed by TREM-1, IL-6, GROα, IL-1β, and MCP-3 (Fig. 3C). A representative sample of 3 trees from this forest is shown (Fig. 3D).
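
The 8-fold cross-validation described here can be written as an explicit loop; the sketch below uses the placeholder objects introduced earlier, and the resampling details not stated in the text (random fold assignment, seed) are assumptions.

```r
# 1000 rounds of 8-fold cross-validation of the 2-group random forest model.
set.seed(1)
n <- nrow(X2)
acc <- replicate(1000, {
  folds <- sample(rep(1:8, length.out = n))     # random assignment to 8 folds
  pred  <- factor(rep(NA, n), levels = levels(Y2))
  for (k in 1:8) {
    fit <- randomForest(X2[folds != k, , drop = FALSE], Y2[folds != k])
    pred[folds == k] <- predict(fit, X2[folds == k, , drop = FALSE])
  }
  mean(pred == Y2)                              # overall accuracy for this round
})
mean(acc)   # average accuracy across the 1000 rounds
```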

DISCUSSION

Sepsis is often thought of as causing uncontrolled cytokine production, or a "cytokine storm,"27 but sterile SIRS can present with similar clinical signs. Thus, well-defined biomarkers discriminating between patients with infection and patients with tissue injury but no infection could limit antibiotic use and improve outcomes.

Of the 31 cytokines produced by activated PBMC, 10 showed distinctive plasma profiles in SIRS patients (Fig. 1A). Consistent with other studies,28,29 we found that infection increased IL-6 compared to controls. We also saw slight increases in IL-6 after injury; these increases were nonsignificant at this hyperacute time point but generally agree with findings at later times in burns30,31 and surgery.32,33 Because of the overlap in plasma IL-6 levels between injury and infection, however, IL-6 is not a reliable marker for sepsis in injured or postoperative patients.

TREM-1 is a receptor of the immunoglobulin superfamily that amplifies inflammation.34 Previous reports found TREM-1 reduced by sterile inflammation35 (eg, psoriasis or vasculitis), and TREM-1 elevation has recently been suggested as a biomarker for infection.36,37 Our findings confirm these reports but, again, showed significant overlap between plasma TREM-1 levels in health and infection. Furthermore, elevated TREM-1 levels at the conclusion of cardiovascular surgery38 suggest limited utility in that postoperative setting. Thus, TREM-1 values alone probably have limited diagnostic value.

We can also report, for the first time, that plasma concentrations of a broad group of chemokines are reduced, rather than increased, in sterile injury compared to other settings. MDC/CCL22, IP-10, GRO-α, MCP-3, and ENA-78 all promote recruitment of immune cells to sites where they initiate inflammation and wound healing.39 MDC/CCL22 suppression was the single best marker identified for injury (Fig. 1B). As a group, the reduced plasma chemokine concentrations after injury suggest that compelling immune phenotypic distinctions exist between patients with injury-related SIRS and all other patients. We also report unexpected reductions of the mediators FLT3L, IL-1α, and TWEAK in injured patients; these mediators were not increased in infections. The reductions in FLT3L in injured patients contrast with studies of sterile conditions other than trauma.40-42 Thus, FLT3L may prove useful for differentiating categories of sterile SIRS.

Single cytokine measurements have been used to prognosticate the severity of sepsis.43 Our emerging understanding of shared danger-signal responses between sepsis and injury suggested that similar approaches using new analytes might be fruitful in distinguishing such populations. Nonetheless, no successful univariate metric emerged from our analysis of 31 cytokines. Multivariate and/or machine learning methods have been successful in categorizing patient outcomes in sepsis44 and trauma,45 but the variables used are often physiologic rather than molecular. Combinations of physiologic measurements with molecular markers made over time have also had some success in predicting outcomes in sepsis.46 Multivariate methods based on cytokines and metabolites have likewise been used to prognosticate sepsis outcomes in ICU cohorts47 and to predict organ dysfunction after trauma.48 To our knowledge, however, using such tools to distinguish between infection and sterile SIRS has not been attempted before.

Three-group models using PLSDA or RF to distinguish controls, injured patients, and infected patients were only moderately effective owing to pairwise overlaps in the markers differentiating control from infection, or control from injury. We found, however, that this overlap was minimized by removing the control group altogether, thus tuning the models more directly toward separating tissue injury from infection; in so doing, we achieved markedly greater accuracy.

The univariate and multivariate methods used to differentiate sterile from infective SIRS were quite consistent. The plasma analytes most significantly different between these conditions were MDC, TREM-1, and IP-10, and these same analytes were the most strongly associated with the most discriminatory latent variable of the PLSDA models separating injury from infection. Moreover, using PLSDA, tissue injury was most easily differentiated from infections and controls, which were easily confused with each other (Fig. 2A, Supp. Fig. 2B, http://links.lww.com/SLA/C506). This is consistent with the individual analyte values, where most significant intergroup differences involved suppression of analytes in the injured patients relative to controls and infection. By combining information from all the analytes linearly to classify samples, PLSDA achieved a maximum accuracy of 92% in cross-validation. Likewise, an RF model distinguishing injury from infection relies primarily upon MDC and IP-10. In an RF made up of decision trees, the cutoff value of one of these analytes for diagnosing infection or injury can depend on the level of the other, or of another analyte entirely. This refinement over the univariate and linear PLSDA models leads to an accuracy of 96.3% in distinguishing sterile, injury-related SIRS from infection using cross-validation within our dataset. Thus, combining machine learning with a specific set of relevant cytokine biomarkers yielded unprecedented accuracy in distinguishing between sterile SIRS and infection.

CONCLUSIONS

Our approach's strength comes from multiple sources: basing our study on the understanding that SIRS can reflect sterile injury or infection; identifying novel analytes that reflect immune responses to sterile and infective danger signals; studying an inclusive analyte panel in tightly defined, clinically relevant groups; and using machine learning to assess multivariate patterns of simultaneous analyte variation.

Critically, by stimulating human PBMC ex vivo, we identified MDC/CCL22 as a previously unreported human SIRS biomarker that seems an excellent marker for sepsis in surgical patients by virtue of being suppressed by simple tissue injury. We confirmed elevations of plasma IL-6 and TREM-1 (Fig. 1B) that had previously been noted in sepsis, but we found overlap between infectious SIRS and "normal" values that would complicate their use in clinical decision-making.49,50 We also found several analytes (MDC, TREM-1, and IP-10) that were suppressed by sterile injuries (Fig. 1B) and are probably robust markers for discriminating sterile SIRS in trauma or postoperative patients from intercurrent infection. We saw that patients with SIRS from tissue injury (who will not benefit from antibiotics) are identified more readily by suppression of these negative-responding analytes than by elevation of a biomarker. In contrast, infected patients needing antibiotics may be identified by elevation of previously defined markers. These findings suggest that searches for individual "magic-bullet" markers discriminating septic from healthy patients are inherently flawed, and that multivariate, systems biology approaches are more fruitful. Acutely ill patients always have an underlying process driving their SIRS. We suggest it may be safer and more clinically appropriate to create tests capable of distinguishing acutely ill patients with sterile SIRS from those with sepsis than to create tests proving patients are septic rather than healthy.

Finally, looking at acutely ill and injured patients and evaluating analytes using the RF approach, we found we could discriminate infective from sterile SIRS with an overall cross-validation accuracy of 96.3% (Fig. 3). Although these results are encouraging, there are numerous study limitations. First, this was a relatively small study group. Second, infections can contribute to tissue injury, and injuries predispose to infection, so patients may change category over time. Third, we have yet to determine how long these phenotypic states persist; it is therefore unclear how long a single assay performed at presentation should inform clinical judgment. Large, prospective, and longitudinal studies of unselected populations will be required to validate this approach. Nonetheless, our findings suggest that a rapid "sepsis/SIRS diagnostic" may be achievable by cautious application of these methods to the patient populations of concern. More important still, the general ability to define clinically relevant immune phenotypes by computational analysis of plasma cytokine panels suggests multiple other possible applications.

Supplementary Material

Suppl-3
Suppl-1
Suppl-2

Grant support:

US Army CDMRP Focused Program Award W81XWH-16-1-0464 (The Harvard-Longwood ‘HALO’ collaborative) (CJH-P.I., L.E.O., J.A.L., M.B.Y.).

NIH R43GM125430 (L.E.O., C.J.H.).

NIAID/NIH 1R03AI13534-01 (K.I.).

The Charles and Marjorie Holloway Foundation (M.B.Y.).

Footnotes

The authors report no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.annalsofsurgery.com).

REFERENCES

1. Bone RC. Toward an epidemiology and natural history of SIRS (systemic inflammatory response syndrome). JAMA. 1992;268:3452–3455.
2. Singer M, Deutschman CS, Seymour C, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315:801–810.
3. Denny KJ, De Waele J, Laupland KB, et al. When not to start antibiotics: avoiding antibiotic overuse in the intensive care unit. Clin Microbiol Infect. 2020;26:35–40.
4. Matzinger P. Tolerance, danger, and the extended family. Annu Rev Immunol. 1994;12:991–1045.
5. Janeway CA. Approaching the asymptote? Evolution and revolution in immunology. Cold Spring Harbor Symposia on Quantitative Biology. 1989:1–13.
6. Zhang Q, Raoof M, Chen Y, et al. Circulating mitochondrial DAMPs cause inflammatory responses to injury. Nature. 2010;464:104–107.
7. Li H, Itagaki K, Sandler N, et al. Mitochondrial damage-associated molecular patterns from fractures suppress pulmonary immune responses via formyl peptide receptors 1 and 2. J Trauma Acute Care Surg. 2015;78:272–279; discussion 279–281.
8. Wussler D, Kozhuharov N, Oliveira MT, et al. Clinical utility of procalcitonin in the diagnosis of pneumonia. Clin Chem. 2019;65:1532–1542.
9. Chengfen Y, Tong L, Xinjing G, et al. Accuracy of procalcitonin for diagnosis of sepsis in adults: a meta-analysis. Zhonghua Wei Zhong Bing Ji Jiu Yi Xue. 2015;27:743–749.
10. Lamping F, Jack T, Rübsamen N, et al. Development and validation of a diagnostic model for early differentiation of sepsis and non-infectious SIRS in critically ill children—a data-driven approach using machine-learning algorithms. BMC Pediatr. 2018;18:112.
11. Namas RA, Vodovotz Y, Almahmoud K, et al. Temporal patterns of circulating inflammation biomarker networks differentiate susceptibility to nosocomial infection following blunt trauma in humans. Ann Surg. 2016;263:191–198.
12. Dente CJ, Bradley M, Schobel S, et al. Towards precision medicine: accurate predictive modeling of infectious complications in combat casualties. J Trauma Acute Care Surg. 2017;83:609–616.
13. Leligdowicz A, Conroy AL, Hawkes M, et al. Validation of two multiplex platforms to quantify circulating markers of inflammation and endothelial injury in severe infection. PLoS One. 2017;12:e0175130.
14. Morgun A, Shulzhenko N, Perez-Diez A, et al. Molecular profiling improves diagnoses of rejection and infection in transplanted organs. Circ Res. 2006;98:e74–e83.
15. Vodovotz Y, An G. Agent-based models of inflammation in translational systems biology: a decade later. Wiley Interdiscip Rev Syst Biol Med. 2019;11:e1460.
16. Itagaki K, Kaczmarek E, Kwon WY, et al. Formyl peptide receptor-1 blockade prevents receptor regulation by mitochondrial danger-associated molecular patterns and preserves neutrophil function after trauma. Crit Care Med. 2019;48:e123–e132.
17. Hauser CJ, Zhou X, Joshi P, et al. The immune microenvironment of human fracture/soft-tissue hematomas and its relationship to systemic immunity. J Trauma. 1997;42:895–903; discussion 903–904.
18. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria; 2020. Available from: https://www.r-project.org.
19. Wickham H. ggplot2: Elegant Graphics for Data Analysis. New York: Springer-Verlag; 2016. Available from: https://ggplot2.tidyverse.org.
20. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B. 1995;57:289–300.
21. Rohart F, Gautier B, Singh A, et al. mixOmics: an R package for 'omics feature selection and multiple data integration. PLoS Comput Biol. 2017;13:e1005752.
22. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;2/3:18–21.
23. Tschaikowsky K, Hedwig-Geissing M, Braun GG, et al. Predictive value of procalcitonin, interleukin-6, and C-reactive protein for survival in postoperative patients with severe sepsis. J Crit Care. 2011;26:54–64.
24. Barrientos S, Stojadinovic O, Golinko MS, et al. Growth factors and cytokines in wound healing. Wound Repair Regen. 2008;16:585–601.
25. Barker M, Rayens W. Partial least squares for discrimination. J Chemom. 2003;17:166–173.
26. Hoeboer SH, van der Geest PJ, Nieboer D, et al. The diagnostic accuracy of procalcitonin for bacteraemia: a systematic review and meta-analysis. Clin Microbiol Infect. 2015;21:474–481.
27. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. Chest. 1992:1644–1655.
28. Song M, Kellum JA. Interleukin-6. Crit Care Med. 2005;33:S463–S465.
29. Waage A, Brandtzaeg P, Halstensen A, et al. The complex pattern of cytokines in serum from patients with meningococcal septic shock. Association between interleukin 6, interleukin 1, and fatal outcome. J Exp Med. 1989;169:333–338.
30. Finnerty CC, Herndon DN, Jeschke MG. Inhalation injury in severely burned children does not augment the systemic inflammatory response. Crit Care. 2007;11:1–7.
31. Ueyama M, Maruyama I, Osame M, et al. Marked increase in plasma interleukin-6 in burn patients. J Lab Clin Med. 1992;120:693–698.
32. Baigrie RJ, Lamont PM, Kwiatkowski D, et al. Systemic cytokine response after major surgery. Br J Surg. 1992;79:757–760.
33. Nishimoto N, Yoshizaki K, Tagoh H, et al. Elevation of serum interleukin 6 prior to acute phase proteins on the inflammation by surgical operation. Clin Immunol Immunopathol. 1989;50:399–401.
34. Colonna M, Facchetti F. TREM-1 (triggering receptor expressed on myeloid cells): a new player in acute inflammatory responses. J Infect Dis. 2003;187:S397–S401.
35. Bouchon A, Facchetti F, Weigand MA, et al. TREM-1 amplifies inflammation and is a crucial mediator of septic shock. Nature. 2001;410:1103–1107.
36. Brenner T, Uhle F, Fleming T, et al. Soluble TREM-1 as a diagnostic and prognostic biomarker in patients with septic shock: an observational clinical study. Biomarkers. 2017;22:63–69.
37. Gibot S, Cravoisy A. Soluble form of the triggering receptor expressed on myeloid cells-1 as a marker of microbial infection. Clin Med Res. 2004;2:181–187.
38. Stoppelkamp S, Veseli K, Stang K, et al. Identification of predictive early biomarkers for sterile-SIRS after cardiovascular surgery. PLoS One. 2015;10:e0135527.
39. Charo IF, Ransohoff RM. The many roles of chemokines and chemokine receptors in inflammation. N Engl J Med. 2006;354:610–621.
40. Voronov I, Manolson MF. Editorial: Flt3 ligand—friend or foe? J Leukoc Biol. 2016;99:401–403.
41. Ramos MI, Perez SG, Aarrass S, et al. FMS-related tyrosine kinase 3 ligand (Flt3L)/CD135 axis in rheumatoid arthritis. Arthritis Res Ther. 2013;15:R209.
42. Wodnar-Filipowicz A, Lyman SD, Gratwohl A, et al. Flt3 ligand level reflects hematopoietic progenitor cell function in aplastic anemia and chemotherapy-induced bone marrow aplasia. Blood. 1996;88:4493–4499.
43. Reinhart K, Bauer M, Riedemann NC, et al. New approaches to sepsis: molecular diagnostics and biomarkers. Clin Microbiol Rev. 2012;25:609–634.
44. Kamaleswaran R, Akbilgic O, Hallman MA, et al. Applying artificial intelligence to identify physiomarkers predicting severe sepsis in the PICU. Pediatr Crit Care Med. 2018;19:e495–e503.
45. Bradley M, Dente C, Khatri V, et al. Advanced modeling to predict pneumonia in combat trauma patients. World J Surg. 2015;44:2255–2262.
46. Almahmoud K, Namas RA, Abdul-Malak O, et al. Impact of injury severity on dynamic inflammation networks following blunt trauma. Shock. 2015;44:101–109.
47. Mickiewicz B, Tam P, Jenne CN, et al. Integration of metabolic and inflammatory mediator profiles as a potential prognostic approach for septic shock in the intensive care unit. Crit Care. 2015;19:11.
48. Namas RA, Almahmoud K, Mi Q, et al. Individual-specific principal component analysis of circulating inflammatory mediators predicts early organ dysfunction in trauma patients. J Crit Care. 2016;36:146–153.
49. Molano Franco D, Arevalo-Rodriguez I, Roqué I, et al. Plasma interleukin-6 concentration for the diagnosis of sepsis in critically ill adults. Cochrane Database Syst Rev. 2019;4:CD011811.
50. Oku R, Oda S, Nakada TA, et al. Differential pattern of cell-surface and soluble TREM-1 between sepsis and SIRS. Cytokine. 2013;61:112–117.
