PLOS ONE
. 2021 Jul 16;16(7):e0254580. doi: 10.1371/journal.pone.0254580

Development of a brief scoring system to predict any-cause mortality in patients hospitalized with COVID-19 infection

Nasheena Jiwa 1,2,*, Rahul Mutneja 2, Lucie Henry 3, Garrett Fiscus 3, Richard Zu Wallack 2
Editor: Aleksandar R Zivkovic
PMCID: PMC8284608  PMID: 34270604

Abstract

Patients hospitalized with COVID-19 infection are at high general risk for in-hospital mortality. A simple and easy-to-use model for predicting mortality based on data readily available to clinicians in the first 24 hours of hospital admission might be useful in directing scarce medical and personnel resources toward those patients at greater risk of dying. With this goal in mind, we evaluated factors predictive of in-hospital mortality in a random sample of 100 patients (derivation cohort) hospitalized for COVID-19 at our institution in April and May, 2020 and created potential models to test in a second random sample of 148 patients (validation cohort) hospitalized for the same disease over the same time period in the same institution. Two models were selected and tested in the validation cohort: Model A (two variables: presence of pneumonia and ischemia) and Model B (three variables: age > 65 years, supplemental oxygen ≥ 4 L/min, and C-reactive protein (CRP) > 10 mg/L). Model B appeared the better of the two, with an AUC in receiver operating characteristic curve analysis of 0.74 versus 0.65 for Model A, but the AUC difference was not significant (p = 0.24). Model B also appeared to have a more robust separation of mortality between the lowest (none of the three variables present) and highest (all three variables present) scores, at 0% and 71%, respectively. These brief scoring systems may prove useful to clinicians in assigning mortality risk in hospitalized patients.

Introduction

The COVID-19 pandemic peaked in Connecticut and other states in the Northeast in April and May, 2020 [1], resulting in dramatic increases in hospitalizations and mortality, and putting considerable stress on health care systems. One striking feature of this novel coronavirus disease is its variability in clinical outcome, ranging from inapparent infection to prolonged hospitalization and death. However, for those individuals with sufficient disease burden and co-morbid conditions to warrant hospitalization, mortality risk is high, although this too varies widely among health care systems [2–5]. The availability of a simple and brief method of mortality risk prognostication early in the hospitalization could focus specific COVID-19 therapies and direct scarce personnel and therapeutic resources to those at greatest risk. Our prediction model is intended to be used as a tool for quick interpretation of patient data within the first 24 hours, to predict mortality outcomes. Accordingly, we sought to develop and test a simple scoring system based on clinical factors and laboratory tests frequently ordered within the first 24 hours of admission that could reasonably predict mortality in those hospitalized with COVID-19 infection.

Methods

Our retrospective study had two components: 1) An initial review of records from adults hospitalized with a clinical diagnosis of COVID-19 infection, evaluating demographic, socioeconomic, and clinical factors as predictors of in-hospital mortality, with a goal of positing one or more brief prognostic scoring systems (derivation cohort); and 2) Testing the proposed scoring system(s) by using data from a second review of hospitalized patients (validation cohort). Our intent was to create a prognostic tool that was brief and simple to administer, yet sufficiently predictive of mortality to be useful to health care professionals. The Trinity Health of New England Institutional Review Board granted approval prior to study initiation. Patient data was collected in a coded and encrypted fashion. The IRB waived the requirement for informed consent as this was a retrospective study.

For the first component of the study, we reviewed in-hospital records from a random sample of 100 COVID-19 patients (out of approximately 1600 records), who had been admitted to our tertiary-care center in Hartford, CT, during the months of April and May, 2020. These patients constitute the derivation cohort. All patients had been admitted and hospitalized with a clinical diagnosis and serological confirmation of COVID-19 infection.

The choice of variables abstracted from electronic hospital records in this initial review was based on negative prognostic factors that were available in peer-reviewed medical literature at the time of the review [6, 7] and on our clinical judgment. These included:

  1. Demographics: age, gender, race-ethnicity. For the latter analysis, self-reported Hispanic was categorized separately from self-reported Asian, Black and Caucasian groups in the medical record

  2. Socioeconomic status (SES): Our surrogate marker for low SES was Medicaid, dual Medicare-Medicaid, or no-insurance status. (Medicaid refers to health coverage for those with very low income; dual Medicare-Medicaid refers to health coverage for those above the age of 65, or under the age of 65 with a disability, who also have low income status.)

  3. Referral source: home versus extended care facility (ECF)

  4. Clinical abnormalities within the first 24 hours: fever; hypoxemia, including oxygen therapy and supplemental oxygen requirement in liters/minute

  5. High supplemental oxygen requirement in the first 24 hours, defined as a flow rate ≥ 4 L/min or the need for high-flow oxygen therapy

  6. Recorded presence of co-morbidity, as documented by the admission history: hypertension, diabetes, obesity, COPD, asthma, chronic liver disease, deep venous thrombosis, atrial fibrillation, coronary artery disease, chronic kidney disease, history of cerebrovascular disease, history of congestive heart failure, history of malignancy

  7. Treatment with angiotensin converting enzyme inhibitors or blockers (ACE or ARBs)

  8. Initial laboratory tests: CBC, general chemistries (including creatinine), troponin, b-type natriuretic protein, C-reactive protein

  9. Radiographic abnormality: consolidation on chest x-ray or CT [8].

  10. The presence of ischemia in the first 24 hours of admission, defined as an elevated troponin level (> 0.04 pg/L)

In-patient pharmacologic treatments and results from laboratory tests that were administered or performed pre-hospitalization (such as Vitamin D levels or radiographs) were not analyzed.

A second sample of different patients, the validation cohort, was then reviewed. These records were randomly selected from the same population of hospitalized patients in April and May, 2020 (excluding those in the original sample), until an arbitrary number of at least 40 in-hospital deaths from COVID-19 had been reviewed. Those variables found predictive of mortality in the univariate analyses in the original sample were abstracted.

Statistical analysis

We then used univariate analyses (proc Logistic, SAS version 9.4) to determine which potential explanatory variables significantly predicted any-cause, in-hospital mortality from COVID-19. For these analyses we created predictive models using dichotomous adaptations of these variables (0 = absent, 1 = present) and then, using an iterative approach, tested the most robust models as predictors of mortality in a second, independent sample of 148 hospitalized COVID patients over the same time period in the same institution. The goal was to identify a brief and easy-to-use model that had the highest area under the curve (AUC) in Receiver Operating Characteristic (ROC) curve analyses [9, 10] and had good separation in mortality between highest (i.e., worst) and lowest scores. For reference, an AUC between 0.7 and 0.8 is considered acceptable, while an AUC between 0.8 and 0.9 is considered excellent [11]. ROC curves were compared using the SAS logistic procedure and the ROCCONTRAST statement. Our study was not designed to analyze post-hospitalization mortality. All living patients had been discharged from the hospital by the time of the analyses.
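The AUC reported throughout this study equals the concordance (c) statistic: the probability that a randomly chosen patient who died has a higher model score than a randomly chosen survivor, with ties counted as one half. The original analyses used SAS proc Logistic; the following pure-Python sketch, with hypothetical data, only illustrates the computation.

```python
# Illustrative AUC (c-statistic) computation; the paper's analyses used
# SAS proc Logistic, so this is a sketch, not the authors' code.
# AUC = probability that a randomly chosen death has a higher score than
# a randomly chosen survivor, with ties contributing 0.5.

def auc(scores, died):
    """scores: model scores; died: 1 = in-hospital death, 0 = survived."""
    pos = [s for s, d in zip(scores, died) if d == 1]  # scores of deaths
    neg = [s for s, d in zip(scores, died) if d == 0]  # scores of survivors
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: composite scores 0-3, as in a model like Model B.
scores = [0, 0, 1, 1, 2, 2, 3, 3]
died   = [0, 0, 0, 1, 0, 1, 1, 1]
print(auc(scores, died))  # prints 0.875
```

For a categorical score with only a few levels, this rank-based formulation is equivalent to the area under the empirical ROC curve computed by SAS.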

Results

Derivation cohort

In the initial random sample of 100 patients, 44% were female, mean (± standard deviation, SD) age was 68 ± 17 years; 52% had low SES, and race/ethnicity was as follows: Asian 5, Black 28, Caucasian 42, and Hispanic 25. Sixty-one percent were older than 65 years; 48% were treated with oxygen ≥ 4 Liters/minute within the first 24 hours of their hospital stay; radiographic pneumonia was present in 61%; ischemia in 37%; CRP > 10 mg/L was present in 47%; hospital length of stay was 12 ± 16 days; and in-hospital mortality was 36%. The mean number of co-morbid conditions was 2.5 ± 1.7; percentages (in parentheses) were as follows: hypertension (72), insulin-dependent diabetes (23), obesity (52), COPD (12), asthma (14), chronic liver disease (1), history of deep venous thrombosis (7), atrial fibrillation (13), coronary artery disease (23), chronic kidney disease (18), history of cerebrovascular disease (15), history of heart failure (23), history of malignancy (4). For some variables fewer than 100 data values were available; among these were (available numbers for analysis in parentheses): race/ethnicity (81), SES (98), ischemia (99), pneumonia (98), CRP (95), BNP (42).

Validation cohort

In this second sample of 148 subjects, 44% were female; age was 69 ± 14 years, 64% had low SES, 2% were Asian, 43% Black, 41% Caucasian, and 14% Hispanic. Fifty-nine percent were older than 65 years; 47% were treated with supplemental oxygen ≥ 4 Liters/minute; radiographic pneumonia was present in 64%; ischemia in 47%; CRP was elevated > 10 mg/L in 49%; hospital length of stay was 9.0 ± 9.8 days; 45 patients died, giving an in-hospital mortality of 30%. For some variables fewer than 148 data values were available: among these were (available numbers for analysis in parentheses): race/ethnicity (145), SES (147), ischemia (110), pneumonia (142), CRP (130), BNP (61).

Table 1 shows selected patient characteristics for the derivation and validation cohorts.

Table 1. Patient characteristics.

Variable Derivation Cohort (n = 100) Validation Cohort (n = 148)
Female (%) 44 44
Age > 65 years (%) 61 59
Race A/B/C/H (%) 5/28/42/25 2/43/41/14
Low SES (%) 52 64
From ECF (%) 33 35
Hospital LOS (days ± SD) 12 ± 16 9 ± 10
Mortality (%) 36 30
Pneumonia (%) 62 64
Ischemia (%) 37 47
CRP > 10 (%) 47 49
High O2 requirement 44 39

Race: A: Asian, B: Black, C: Caucasian; H: Hispanic; ECF: extended care facility; LOS: length of stay; CRP = C-reactive protein; O2 = oxygen.

Selection of predictors and univariate and multivariate logistic regression analysis

The following variables were not related to mortality in univariate analyses in the derivation cohort: gender; fever upon admission; respiratory rate; any of the following preexisting conditions: hypertension, CAD, liver disease, non-insulin-dependent diabetes, DVT, history of CVA, chronic kidney disease, history of CHF, history of malignancy, COPD, asthma, interstitial lung disease; prescribed ACE or ARB; or any of the following laboratory data: lactic acid, hemoglobin, total WBC, lymphocyte count, platelet count, BNP, procalcitonin, ferritin, d-dimer, albumin, bilirubin, or creatinine.

Caucasians were older and tended to have a higher mortality than Blacks (42% versus 30%), but the difference was not statistically significant (p = 0.36); Hispanic mortality was 40% and also not significantly different from Caucasians or Blacks. Low SES was not a significant predictor of mortality, and residence prior to hospitalization (home versus ECF) was also not significantly predictive.

The following variables were predictive of mortality in the univariate analyses: age, atrial fibrillation, insulin-dependent diabetes (IDDM), C-reactive protein (CRP), supplemental oxygen requirement, pneumonia on initial radiology, and ischemia (Table 2).

Table 2. Variables predictive of in-hospital, any-cause mortality in univariate testing in the Derivation Cohort of patients.
Variable OR (95% CI) p.
Age (per year) 1.04 (1.01 to 1.07) 0.009
Atrial fibrillation (N vs. Y) 0.20 (0.06 to 0.71) 0.01
Ischemia (N vs. Y) 0.36 (0.15 to 0.84) 0.02
IDDM (N vs. Y) 0.33 (0.13 to 0.85) 0.02
CRP (per Δ 1 mg/L) 1.05 (1.00 to 1.09) 0.04
Pneumonia (N vs. Y) 0.15 (0.05 to 0.44) 0.0005
High O2 req. (N vs. Y) 0.33 (0.14 to 0.78) 0.01

OR: Odds ratio; CI: 95% confidence interval; N vs. Y: not present versus present; IDDM: insulin-dependent diabetes mellitus; CRP: c-reactive protein; L = liter; O2: oxygen; req.: requirement.

Each of the above predictive variables present in the first 24 hours of hospitalization was assigned a 1 or 0 value for the purpose of creating a useable scoring system based on categorical values. These assignments were: age (≤ 65 years = 0, > 65 years = 1); atrial fibrillation (absent = 0, present = 1); IDDM (absent = 0, present = 1); CRP (≤ 10 mg/L = 0, > 10 mg/L = 1); supplemental oxygen (< 4 L/min = 0, ≥ 4 L/min or high-flow O2 therapy = 1); pneumonia (absent = 0, present = 1); ischemia, reported on the clinical record or with an elevated troponin (> 0.4 ng/mL) (absent = 0, present = 1).
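The composite score is simply the count of negative prognostic factors present. As a concrete illustration, a minimal Python sketch of the Model B score (the function and parameter names are ours, not the authors'; the cutoffs are those stated above):

```python
# Illustrative implementation of the Model B composite score described
# above (one point each for age > 65 y, supplemental O2 >= 4 L/min or
# high-flow therapy, and CRP > 10 mg/L). A sketch only; the study's
# scoring was applied to abstracted chart data, not computed by code.

def model_b_score(age_years, oxygen_lpm, high_flow, crp_mg_per_l):
    """Return the Model B composite score (0-3)."""
    score = 0
    score += age_years > 65                  # age criterion
    score += oxygen_lpm >= 4 or high_flow    # high oxygen requirement
    score += crp_mg_per_l > 10               # elevated CRP
    return score

# Hypothetical patient: 72 years old, 2 L/min nasal cannula, CRP 14 mg/L.
print(model_b_score(age_years=72, oxygen_lpm=2,
                    high_flow=False, crp_mg_per_l=14))  # prints 2
```

Model A works the same way with two indicators (pneumonia, ischemia), giving scores of 0 to 2.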

The above categorical variables were entered into a multivariate logistic regression (SAS) in an iterative fashion with the goal of creating a robust mortality prognostic scoring model that was brief, had a high AUC in logistic regression, and provided significant separation of the highest score (highest risk, all negative predictor variables present) from lesser scores.

One iteration, Model A, combined two variables, pneumonia and ischemia, yielding potential composite scores of 0 (neither variable positive, 25% of patients), 1 (one of the two variables positive, 51%), or 2 (both variables positive, 24%); data were incomplete for 2 patients. This model had the highest AUC, 0.74 (95% CI 0.65 to 0.82). Odds ratios for this analysis are in Table 3, while mortality in each score category is given in Table 4.

Table 3. Mortality risk by model.
Derivation Cohort Validation Cohort
Model A
Score OR, 95% CI OR, 95% CI
0 vs. 2 0.03 (0.00 to 0.23) 0.21 (0.58 to 0.75)
1 vs. 2 0.40 (0.15 to 1.09) 0.36 (0.15 to 0.89)
Model B
Score Derivation Cohort Validation Cohort
0 vs. 3 0.11 (0.03 to 0.50) *
1 vs. 3 0.35 (0.11 to 1.06) 0.10 (0.03 to 0.37)
2 vs. 3 0.23 (0.06 to 0.79) 0.29 (0.09 to 0.95)

OR: odds ratio; CI: confidence interval

* = no patient in Sample 2 Model B with a score of 0 died.

Model A: pneumonia, ischemia: one point each if present; scores can range from 0–2.

Model B: age > 65, high supplemental oxygen requirement, CRP > 10 mg/L: one point each if present; scores can range from 0–3.

Table 4. Mortality by score.
Model A
Derivation Cohort Validation Cohort
Score Mortality (%) Mortality (%)
0 4 19
1 40 29
2 63 53
Model B
Derivation Cohort Validation Cohort
Score Mortality (%) Mortality (%)
0 16 0
1 37 20
2 27 41
3 62 71

Differences between means (p.) of category scores.

Model A, Derivation Cohort: 0 vs. 1: 0.04; 0 vs. 2: < 0.0001; 1 vs. 2: 0.04.

Model A, Validation Cohort: 0 vs. 1: 0.42; 0 vs. 2: 0.01; 1 vs. 2: 0.02.

Model B, Derivation Cohort: 0 vs. 1: 0.13; 0 vs. 2: 0.43; 0 vs. 3: 0.001; 1 vs. 2: 0.47; 1 vs. 3: 0.04; 2 vs. 3: 0.01.

Model B, Validation Cohort: 0 vs. 1: 0.09; 0 vs. 2: 0.0002; 0 vs 3: < 0.0001; 1 vs. 2: 0.02; 1 vs. 3: < 0.0001; 2 vs. 3: 0.01.
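The mortality-by-score entries in Table 4 amount to a simple grouped percentage: within each composite score category, deaths divided by patients. A minimal sketch with hypothetical records (the data below are illustrative, not the cohorts'):

```python
from collections import defaultdict

# Sketch of the Table 4 computation: percent in-hospital mortality within
# each composite score category. Records are (score, died) pairs, with
# died = 1 for in-hospital death. Data are hypothetical.

def mortality_by_score(records):
    """Return {score: percent mortality} for (score, died) pairs."""
    deaths, totals = defaultdict(int), defaultdict(int)
    for score, died in records:
        totals[score] += 1
        deaths[score] += died
    return {s: 100.0 * deaths[s] / totals[s] for s in sorted(totals)}

records = [(0, 0), (0, 0), (1, 0), (1, 1), (2, 1), (2, 1), (3, 1)]
print(mortality_by_score(records))  # prints {0: 0.0, 1: 50.0, 2: 100.0, 3: 100.0}
```

The between-category p values listed above came from SAS logistic regression contrasts, not from this tabulation.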

A second iteration, Model B, combined three variables: age > 65 years, high supplemental oxygen requirement over the first 24 hours of hospitalization, and CRP > 10 mg/L (range of scores 0 to 3). It had the second-highest AUC: 0.66 (95% CI 0.56 to 0.77). The distribution of Model B scores was: score 0 (20%), score 1 (32%), score 2 (23%), and score 3 (25%). Five patients without CRP measurements could not be included in the multivariate analysis. Odds ratios for this analysis are in Table 3 and mortality for each score category is given in Table 4.

The ROC curves for the two models in the derivation cohort were not significantly different: p = 0.24.

Validation of the scoring systems

Both models were tested as predictors of in-hospital mortality in the validation cohort. In this analysis, Model A (two variables: ischemia and consolidation) had a lower AUC than Model B (three variables: older age, high supplemental oxygen requirement, and elevated CRP): 0.65 (95% CI 0.55 to 0.76) versus 0.74 (95% CI 0.65 to 0.83), respectively, but their difference was not significant: p = 0.18. ROC curves for both cohorts are given in Fig 1. Of note, only 107 patients could be analyzed in the logistic regression for Model A, mainly due to missing troponin data and the absence of a clinical diagnosis of ischemia over the first 24 hours of hospitalization. Odds ratios for each score category for both models are in Table 3 and mortality in each score category is given in Table 4.

Fig 1. ROC curves in the derivation and validation cohorts.


Model A: Scoring based on two variables: presence of pneumonia, ischemia; score can range from 0–2. Model B: Scoring based on three variables: age over 65, high supplemental oxygen requirement, C-reactive protein over 10. Presence of each given a score of 1; score can range from 0–3.

Discussion

Our study provides two simple and easy-to-complete prognostic scoring models based on data readily available in the first 24 hours of hospitalization to predict in-hospital mortality of COVID-19 patients. Model A was based on the presence of ischemia and pneumonia; Model B was based on age > 65 years, high supplemental oxygen requirement, and elevated CRP values. Based on performance in the validation cohort, Model B had a slightly higher AUC (0.74 vs. 0.65), although the difference between the two models was not statistically significant. Model B also tended to perform better with respect to mortality separation in logistic regression. Given the small numbers of subjects in our study, the lack of a statistically significant difference in AUC, and the fact that CRP (a component of model B) was not obtained in all patients upon admission, a strong inference on the comparative performance of the two scoring systems would be problematic.

Model B scoring was especially powerful in separating score category 0 from category 3: those patients without any of the three negative factors had no in-hospital mortality while those with all three had 71% mortality. Intermediate scores had intermediate risk, although the categories were not statistically different in some instances. However, mortality in score category 3 was significantly greater than any of the lesser scores, attesting to the potential usefulness of this category.

While the clinical consequences of using either of these two predictive scoring systems are not yet determined, information from a simple model such as either of these may provide useful prognostic information to Emergency Department and admitting clinicians, thereby potentially directing scarce personnel and medical resources toward those hospitalized individuals at greatest risk of dying.

As in our study, advanced age, elevated levels of CRP, and oxygenation status (estimated from pulse oximetry in other studies, or from supplemental oxygen requirements in ours) were predictive of in-hospital mortality in other analyses [8, 12–15]. However, in contrast, we were not able to demonstrate that sex, obesity [14] or D-dimer levels [15, 16] obtained on admission predicted mortality. Accurate body weights to determine morbid obesity were often not recorded in the hospital records of our patients, making this analysis problematic. The non-effect of D-dimer may be explained by the small size of our sample, especially since D-dimer tests were ordered by admitting physicians upon admission in only a minority of our study patients.

In other observational studies, and in contrast to our study, male sex and the presence of certain co-morbid factors, namely coronary artery disease, heart failure, cardiac arrhythmia, COPD, and current smoking status, morbid obesity, and a history of cancer predicted mortality [12, 13, 15]. Other than for atrial fibrillation, we were not able to replicate these findings, possibly because of our small sample size.

Of note, we were not able to demonstrate that race or ethnicity was a significant predictor of in-hospital mortality. This is somewhat surprising, as data from the Department of Health in our state in general, and in Hartford County in particular, indicate that Blacks have consistently had higher mortality than Caucasians when expressed as a rate per 100,000 individuals, with Latinos falling somewhere in between. Part of this inconsistency may be explained by the fact that the average age of Caucasians in our study was about 10 years higher than that of Blacks, and age is an important predictor of a bad outcome with COVID-19. Additionally, our one marker for lower SES, Medicaid or no insurance, did not predict mortality. Thus, once patients are hospitalized, race/ethnicity and SES do not appear to be important predictors of mortality outcome.

Our results show both similarities and differences when compared with those from a recent prospective, observational study of hospitalized COVID-19 patients in England, Scotland and Wales, which was also designed to create a pragmatic risk score for in-hospital, any-cause mortality [17]. Using a very large data-set with 30.1% mortality, the investigators identified 8 variables available at initial assessment and used them to create a prognostic score: age, sex, number of co-morbidities, respiratory rate, oxygen saturation, level of consciousness, blood urea nitrogen (BUN) level, and CRP.

Of note, the variables sex, co-morbidities (other than atrial fibrillation in the preliminary analysis), respiratory rate, level of consciousness, and BUN did not prove statistically significant in our study. Age and oxygenation status were similar in the two scoring systems, since in all likelihood the highest level of supplemental oxygen (chosen by clinicians in the first 24 hours of hospitalization) reflected oxygen saturation. Level of consciousness was not included in our analysis; the reason or reasons for the non-significance of the other variables are not clear, although the reduced power from our considerably smaller sample is probably an important factor. Nevertheless, the AUC, representing a trade-off between sensitivity and specificity, from our 4-point scoring system (0.74) was only slightly less robust than that from the 22-point scoring system (0.79). Arguably, the reduction in predictive power from using fewer independent variables might be offset by simplicity and ease of use.

A retrospective study of 403 adult patients seen in the Emergency Department of a combined secondary/tertiary care center in the Netherlands during the first wave of the pandemic (March through May, 2020) tested 11 prediction models with 30-day mortality as the primary outcome [18]. The investigators identified two prediction models that performed best: 1) the RISE-UP (Risk Stratification in the Emergency Department in Acutely Ill Older Patients) score, which included age, heart rate, mean arterial pressure, respiratory rate, oxygen saturation, Glasgow Coma Scale (GCS), BUN, bilirubin, albumin, and lactate dehydrogenase; and 2) the 4C (Coronavirus Clinical Characterisation Consortium) score, which had been tested previously in the United Kingdom [19] and included age, sex, co-morbidity, respiratory rate, GCS, oxygen saturation, BUN, and CRP. With AUCs of 0.83 and 0.84, respectively, both performed better than ours, but both were more complicated in that they required entering more variables.

Comparing our results with those from other studies of in-hospital mortality is problematic for several reasons, including potential selection biases among the studies, expected regional differences in patient demographics and treatment approaches, and changes over time in therapeutic modalities for this disease.

Our study has several limitations. First is the relatively small number of hospitalized individuals studied in one medical center over the two months coinciding with peak incidence, hospitalizations and mortality in our geographical area. This obviously limits the generalizability of our results. An analysis including a larger sample (or population) of hospitalized patients across other health care systems, especially now that mortality from this disease appears to be changing, would be necessary to confirm these results. Since the demographics of hospitalized COVID patients and in-hospital treatments are changing over time, and may be considerably different than in April and May, 2020, the period over which we selected our study patients, the utility of our predictive models should be tested in more contemporary patients hospitalized with COVID-19.

Another limitation of our study is the fact that some laboratory tests that proved to be predictive of mortality risk, such as CRP and troponin, were not uniformly ordered by clinicians, thereby lowering the power of our logistic regression analyses and potentially introducing a selection bias.

Finally, and similar to the study performed in the United Kingdom described above, the analysis was only of in-hospital mortality, and does not pick up mortality in patients discharged to home, extended care facilities or hospice units. Additionally, a system that would predict in-hospital resources, not just mortality, would prove useful to hospital staff.

In summary, our results suggest that two prognostic models for hospitalized COVID-19 patients predict in-hospital mortality. Model B, consisting of three dichotomous variables present in the first 24 hours of hospitalization (age > 65 years, supplemental oxygen requirement ≥ 4 L/min, and CRP > 10 mg/L), may be the better of the two models tested, yielding a moderate AUC and a more robust separation of mortality between the highest and lowest scores. While the multi-component scoring models described earlier out-performed ours with respect to AUC, the simplicity of our model may prove attractive for busy emergency department staff. Our study should be viewed as presenting preliminary data, which would need to be followed by external validation in a larger dataset across different institutions and over a longer time range, not only to attempt to replicate the findings of the predictive models, but also to determine whether the scoring results in meaningful clinical consequences.

Data Availability

All available data are incorporated in the manuscript and attached figures; there are no other supporting data.

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Anonymous. CT data: COVID-19 data resources. 2020 [cited 17 August 2020]. Available from: https://data.ct.gov/stories/s/COVID-19-data/wa3g-tfvc/.
  • 2. RECOVERY Collaborative Group, Horby P, Lim WS, Emberson JR, Mafham M, Bell JL, et al. Dexamethasone in hospitalized patients with COVID-19: preliminary report. The New England Journal of Medicine 2020.
  • 3. Tang X, Du RH, Wang R, Cao TZ, Guan LL, Yang CQ, et al. Comparison of hospitalized patients with ARDS caused by COVID-19 and H1N1. Chest 2020;158:195–205. doi: 10.1016/j.chest.2020.03.032
  • 4. Richardson S, Hirsch JS, Narasimhan M, Crawford JM, McGinn T, Davidson KW, et al. Presenting characteristics, comorbidities, and outcomes among 5700 patients hospitalized with COVID-19 in the New York City area. JAMA 2020.
  • 5. Potere N, Valeriani E, Candeloro M, Tana M, Porreca E, Abbate A, et al. Acute complications and mortality in hospitalized patients with coronavirus disease 2019: a systematic review and meta-analysis. Critical Care 2020;24:389. doi: 10.1186/s13054-020-03022-1
  • 6. Figliozzi S, Masci PG, Ahmadi N, Tondi L, Koutli E, Aimo A, et al. Predictors of adverse prognosis in COVID-19: a systematic review and meta-analysis. European Journal of Clinical Investigation 2020;50:e13362. doi: 10.1111/eci.13362
  • 7. Wynants L, Van Calster B, Collins GS, Riley RD, Heinze G, Schuit E, et al. Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal. BMJ 2020;369:m1328. doi: 10.1136/bmj.m1328
  • 8. Rivera-Izquierdo M, Del Carmen Valero-Ubierna M, Rd JL, Fernandez-Garcia MA, Martinez-Diz S, Tahery-Mahmoud A, et al. Sociodemographic, clinical and laboratory factors on admission associated with COVID-19 mortality in hospitalized patients: a retrospective observational study. PLoS ONE 2020;15:e0235107. doi: 10.1371/journal.pone.0235107
  • 9. Gönen M, SAS Institute. Analyzing Receiver Operating Characteristic Curves with SAS. Cary, NC: SAS Publishing; 2007.
  • 10. Hajian-Tilaki K. Receiver operating characteristic (ROC) curve analysis for medical diagnostic test evaluation. Caspian Journal of Internal Medicine 2013;4:627–635.
  • 11. Mandrekar JN. Receiver operating characteristic curve in diagnostic test assessment. Journal of Thoracic Oncology 2010;5:1315–1316. doi: 10.1097/JTO.0b013e3181ec173d
  • 12. Gupta S, Hayek SS, Wang W, Chan L, Mathews KS, Melamed ML, et al. Factors associated with death in critically ill patients with coronavirus disease 2019 in the US. JAMA Internal Medicine 2020. doi: 10.1001/jamainternmed.2020.3596
  • 13. Di Castelnuovo A, Bonaccio M, Costanzo S, Gialluisi A, Antinori A, Berselli N, et al. Common cardiovascular risk factors and in-hospital mortality in 3,894 patients with COVID-19: survival analysis and machine learning-based findings from the multicentre Italian CORIST study. Nutrition, Metabolism, and Cardiovascular Diseases 2020. doi: 10.1016/j.numecd.2020.07.031
  • 14. Palaiodimos L, Kokkinidis DG, Li W, Karamanis D, Ognibene J, Arora S, et al. Severe obesity, increasing age and male sex are independently associated with worse in-hospital outcomes, and higher in-hospital mortality, in a cohort of patients with COVID-19 in the Bronx, New York. Metabolism: Clinical and Experimental 2020;108:154262.
  • 15. Zhang L, Yan X, Fan Q, Liu H, Liu X, Liu Z, et al. D-dimer levels on admission to predict in-hospital mortality in patients with COVID-19. Journal of Thrombosis and Haemostasis 2020;18:1324–1329.
  • 16. Simadibrata DM, Lubis AM. D-dimer levels on admission and all-cause mortality risk in COVID-19 patients: a meta-analysis. Epidemiology and Infection 2020;148:e202. doi: 10.1017/S0950268820002022
  • 17. Knight SR, Ho A, Pius R, Buchan I, Carson G, Drake TM, et al. Risk stratification of patients admitted to hospital with COVID-19 using the ISARIC WHO Clinical Characterisation Protocol: development and validation of the 4C Mortality Score. BMJ 2020;370:m3339. doi: 10.1136/bmj.m3339
  • 18. van Dam P, Zelis N, van Kuijk SMJ, Linkens A, Bruggemann RAG, Spaetgens B, et al. Performance of prediction models for short-term outcome in COVID-19 patients in the emergency department: a retrospective study. Annals of Medicine 2021;53:402–409. doi: 10.1080/07853890.2021.1891453
  • 19. Risk stratification of patients admitted to hospital with COVID-19 using the ISARIC WHO Clinical Characterisation Protocol: development and validation of the 4C Mortality Score. BMJ 2020;371:m4334. doi: 10.1136/bmj.m4334

Decision Letter 0

Aleksandar R Zivkovic

5 May 2021

PONE-D-21-06850

Development of a Brief Scoring System to Predict Any-Cause Mortality in Patients Hospitalized with COVID-19 Infection

PLOS ONE

Dear Dr. Jiwa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 19 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Aleksandar R. Zivkovic

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1) Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2) Thank you for including your ethics statement:   "Institutional Board approval was obtained prior to study initiation."

Please amend your current ethics statement to include the full name of the ethics committee/institutional review board(s) that approved your specific study.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

3) PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

4)  Thank you for stating the following financial disclosure:

 [NO].

At this time, please address the following queries:

  1. Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution.

  2. State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

  3. If any authors received a salary from any of your funders, please state which authors and which funders.

  4. If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

5) Thank you for stating the following in your Competing Interests section: 

[NO].

Please complete your Competing Interests on the online submission form to state any Competing Interests. If you have no competing interests, please state "The authors have declared that no competing interests exist.", as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now

 This information should be included in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

6) Please ensure that you refer to Figure 1 in your text as, if accepted, production will need this reference to link the reader to the figure.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer #1: Review “Development of a brief scoring system to predict any-cause mortality in patients hospitalized with COVID-19 infection”

First of all, I would like to thank you for the opportunity to read and review your manuscript. I have recently also been involved in research into prediction of short-term mortality in patients with COVID-19. I find the subject of your manuscript very interesting, but there are some major aspects that still deserve attention. Please address my comments and suggestions I have listed below.

General remarks

• As for the methodological aspects of the manuscript, I have some major concerns regarding the patient selection (see below).

• Abbreviations are not always sufficiently explained throughout the manuscript. For example, in the abstract, the abbreviation AUC is not spelled out. In a general sense, any abbreviation must be written out the first time it is used.

Abstract

I believe it would be better to present the abstract in the fixed structure of Introduction (or Background), Methods, Results, and Conclusion.

Introduction

• In line 3, the word “striking” is used twice in one sentence. I would suggest that it is removed the first time.

• The recent systematic review by Wynants et al (Reference 7) describes a relatively large number of (diagnostic) and prognostic prediction models for patients with COVID-19. In a recent publication (Performance of prediction models for short-term outcome in COVID-19 patients in the emergency department: a retrospective study. Annals of Medicine 2021;53(1):402–409), we analyzed and externally validated 11 prediction models. In the introduction of your manuscript it should be made clear why there is a need for another prediction model (i.e. what does this model add?).

Methods

• I have some major concerns about the selection of patients. Out of approximately 1600 patients, you selected 100 patients for the derivation cohort and 148 patients for the validation cohort (i.e. 1450 patients were not analyzed). In what way were the patients selected? How can you rule out selection bias? I think this is a very important aspect of your manuscript. Why only develop the scoring systems in 100 patients (and not more), and why validate the scores in only 148 patients (and not more)?

• You performed a retrospective study with two components. For the first component, a sample of 100 patients (out of approximately 1600 patients) was used to develop two scoring systems. In the second component, those two scoring systems were tested. Instead of naming these samples “sample 1” and “sample 2”, I believe it would be better to refer to these samples as derivation cohort and validation cohort.

• In line 5, you state that the scoring systems were tested prospectively. I understand the scoring systems are tested (validated) on retrospective data. The word “prospectively” is confusing and should be avoided here.

• In what setting were the patients included? Emergency department or other?

• Page 4: for readers not living in the USA (like myself), the surrogate marker for low SES should be explained in more detail (Medicaid, dual Medicare-Medicaid).

• Page 6: the abbreviation ABR should be changed to ARB (Angiotensin receptor blocker).

• The section on statistical analysis should have its own subheading.

Results

• As mentioned above, I would suggest the two study samples are referred to as derivation cohort and validation cohort.

• The text belonging to subheadings “Sample 1” and “Sample 2” is actually a summary of the data from Table 1. Consider merging the two subheadings into one (e.g. “Study samples”) and shortening the text to just the most important data.

• Page 6 (Subheading “Sample 2”): in this section you described why you ended up with 148 patients in the validation cohort. I believe this explanation should be moved to the Methods section. In my opinion, you should also clarify whether different patients were analyzed in both cohorts, or whether the two cohorts show an overlap. The reason for inclusion or exclusion of patients in the cohorts could also be explained in further detail.

• It would greatly improve your manuscript if you could show the patient characteristics of the total population of 1600 patients. This allows the reader to estimate any selection bias. I would suggest adding an extra Table with this information.

• Page 6: The subheading “Creating in-hospital mortality prognostic scoring models from Sample 1 cohort” is too long. It could be changed to “Selection of predictors and univariate and multivariate logistic regression analysis”.

• Page 7: In line 4, “Hispanic mortality” is misspelled as “Hispanic morality”. The point after 40% should be removed.

• Page 8: The subheading “Testing the models using Sample 2 data” could be changed to “Validation of the scoring systems”.

• Page 8: How are the two ROC curves compared? This statistical analysis should be explained in more detail in the statistical analysis section.

• Page 8: Model A could be used in only 107 of 148 patients, because of missing data regarding troponin (27%). Since troponin is not routinely measured, is model A feasible in clinical practice?

Discussion

• Page 9: You recommend the use of Model B in patients with COVID-19 for predicting in-hospital mortality. However, your analysis shows no statistically significant difference between model A and B (again: the method of comparing the two models is unclear). So, why not use model A?

• Page 9: What are the clinical consequences of the use of your scoring systems? In other words, should a low or a high score guide clinical decision-making?

• Page 9: The sentence “For reference, an AUC between … considered excellent” belongs to the Methods section and should not be part of the Discussion section.

• Page 10: Before being able to compare your results with those from other studies, it is important to estimate the degree of selection bias. See previous comments.

• Page 10-11: In addition to the British 4C mortality score (Knight and colleagues), the RISE UP score is also very useful for predicting short term adverse outcome in ED patients with COVID-19 (Annals of Medicine 2021;53(1):402–409 and BMJ Open 2021;11:e045141).

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jul 16;16(7):e0254580. doi: 10.1371/journal.pone.0254580.r002

Author response to Decision Letter 0


27 Jun 2021

Dear PLOS ONE Editorial Staff,

We want to thank you and your reviewers for your thorough review. The text below outlines the reviewers’ comments/questions and our responses.

We wish to re-submit the revised manuscript for consideration for publication.

Sincerely,

Nasheena Jiwa, MD

Journal Requirements:

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

The Trinity Health of New England Institutional Review Board granted approval prior to study initiation

Corresponding Author’s ORCID iD: https://orcid.org/0000-0003-0261-2425

The authors received no specific funding for this work.

The authors have declared that no competing interests exist.

Please ensure that you refer to Figure 1 in your text as, if accepted, production will need this reference to link the reader to the figure.

Figure 1 is now referenced in the text in the last paragraph of the Discussion.

Reviewer Comments:

General Remarks

- First occurrence of area under the curve (AUC) is now spelled out in the Methods section

Abstract: This was revised to reflect the comments by the reviewer:

Abstract

Patients hospitalized with COVID-19 infection are at a high general risk for in-hospital mortality. A simple and easy-to-use model for predicting mortality based on data readily available to clinicians in the first 24 hours of hospital admission might be useful in directing scarce medical and personnel resources toward those patients at greater risk of dying. With this goal in mind, we evaluated factors predictive of in-hospital mortality in a random sample of 100 patients (derivation cohort) hospitalized for COVID-19 at our institution in April and May, 2020 and created potential models to test in a second random sample of 148 patients (validation cohort) hospitalized for the same disease over the same time period in the same institution. Two models (Model A: two variables, presence of pneumonia and ischemia; Model B: three variables, age > 65 years, supplemental oxygen ≥ 4 L/min, and C-reactive protein (CRP) > 10 mg/L) were selected and tested in the validation cohort. Model B appeared the better of the two, with an AUC in receiver operating characteristic curve analysis of 0.74 versus 0.65 in Model A, but the AUC differences were not significant (p = 0.24). Model B also appeared to have a more robust separation of mortality between the lowest (none of the three variables present) and highest (all three variables present) scores at 0% and 71%, respectively. These brief scoring systems may prove to be useful to clinicians in assigning mortality risk in hospitalized patients.
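As an illustration of how compact the proposed scoring is, Model B reduces to counting which of three binary risk factors are present. The sketch below assumes a simple unweighted sum of indicator variables, which matches the lowest-score/highest-score description in the abstract; the function and parameter names are hypothetical, not taken from the manuscript:

```python
def model_b_score(age_years, oxygen_lpm, crp_mg_per_l):
    """Count of Model B risk factors present (0-3).

    Thresholds taken from the abstract: age > 65 years,
    supplemental oxygen >= 4 L/min, CRP > 10 mg/L.
    """
    return (int(age_years > 65)
            + int(oxygen_lpm >= 4)
            + int(crp_mg_per_l > 10))

# A 72-year-old on 5 L/min with CRP 15 mg/L has all three factors.
print(model_b_score(72, 5, 15))  # -> 3
# A 50-year-old on 2 L/min with CRP 4 mg/L has none.
print(model_b_score(50, 2, 4))   # -> 0
```

Under this reading, a score of 0 corresponded to 0% observed mortality in the validation cohort and a score of 3 to 71%.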

Introduction:

- Deleted “striking”

- Our prediction model is intended to be used as a tool for quick interpretation of patient data within the first 24 hours to predict mortality outcomes

Methods:

In what way were the patients selected? How can you rule out selection bias? I think this is a very important aspect of your manuscript. Why only develop the scoring systems in 100 patients (and not more), and why validate the scores in only 148 patients (and not more)?

Patients in both samples were randomly selected from the pool of patients hospitalized with COVID-19.

This is stated in Methods as follows:

“For the first component of the study, we reviewed in-hospital records from a randomized sample of 100 COVID-19 patients (out of approximately 1600 records), who had been admitted to our tertiary-care center in Hartford, CT, during the months of April and May, 2020. These patients are known as the derivation cohort. All patients had been admitted and hospitalized with a clinical diagnosis and serological confirmation of COVID-19 infection.

“A second sample of different patients called the validation cohort of patients was then reviewed. These records were randomly selected from the same population of hospitalized patients in April and May, 2020 (excluding those in the original sample), until an arbitrary number of at least 40 in-hospital deaths from COVID-19 were reviewed.”

Because of limitations in research staff (this study was not externally funded) we could not study a larger sample. However, it should be noted that we did achieve statistical significance in several predictor variables that also appear mechanistically sound.
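The stopping rule described in the quoted Methods text (reviewing randomly selected records until at least 40 in-hospital deaths are included) can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' actual procedure; the record format and function name are hypothetical:

```python
import random

def sample_until_deaths(records, min_deaths=40, seed=0):
    """Review candidate records in a random order, accumulating them
    into the cohort until at least `min_deaths` in-hospital deaths
    have been included (assumes each record has a 'died' flag)."""
    rng = random.Random(seed)
    order = records[:]          # copy so the source list is untouched
    rng.shuffle(order)
    cohort, deaths = [], 0
    for rec in order:
        cohort.append(rec)
        deaths += int(rec["died"])
        if deaths >= min_deaths:
            break
    return cohort
```

One consequence of this design, visible in the sketch, is that the validation cohort's size (148 here) is a random variable driven by the death rate in the sampled population rather than a prespecified number.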

Corrected sample 1 = derivation cohort, and sample 2 = validation cohort

Removed the word “prospectively”

Patients included were admitted to the hospital

Explained in detail: low SES (Medicaid, dual Medicare-Medicaid).

abbreviation ABR was changed to ARB (Angiotensin receptor blocker).

Added a statistical analysis subheading

Results

Corrected sample 1 = derivation cohort, and sample 2 = validation cohort

Description of sample 2/validation cohort of patients was moved to methods section

Clarified that both cohorts contained different patients

It would greatly improve your manuscript if you could show the patient characteristics of the total population of 1600 patients. This allows the reader to estimate any selection bias. I would suggest adding an extra Table with this information.

As we stated earlier, we did not have the research personnel to do a chart review on all 1600 patients. However, we could note that the two random samples were similar with respect to patient characteristics, as given in Table 1.

The subheading “Creating in-hospital mortality prognostic scoring models from Sample 1 cohort” was changed to “Selection of predictors and univariate and multivariate logistic regression analysis”

Corrected the misspelling: “Hispanic morality” is now “Hispanic mortality”.

“Testing the models using Sample 2 data” was changed to “Validation of the scoring systems”.

How are the two ROC curves compared? This statistical analysis should be explained in more detail in the statistical analysis section.

We clarified this with the following added text: “ROC curves were compared using the SAS logistic procedure and the ROCCONTRAST statement.”
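For readers without access to SAS, the AUC itself (though not the ROCCONTRAST significance test, which is based on a DeLong-type nonparametric comparison) can be computed directly from its rank interpretation: the probability that a randomly chosen patient who died has a higher score than a randomly chosen survivor. A minimal pure-Python sketch, offered only to illustrate the quantity being compared:

```python
def auc(scores, died):
    """Empirical AUC of a risk score for predicting death, via the
    Mann-Whitney U relation: fraction of (death, survivor) pairs in
    which the death has the higher score (ties count as 0.5)."""
    pos = [s for s, d in zip(scores, died) if d]       # patients who died
    neg = [s for s, d in zip(scores, died) if not d]   # survivors
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A score that perfectly separates deaths from survivors gives AUC 1.0.
print(auc([3, 2, 1, 0], [1, 1, 0, 0]))  # -> 1.0
```

Comparing two such AUCs for significance additionally requires their correlated sampling variances (since both models score the same patients), which is what the ROCCONTRAST statement supplies.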


Model A could be used in only 107 of 148 patients, because of missing data regarding troponin (27%). Since troponin is not routinely measured, is model A feasible in clinical practice?

We changed the text in the Discussion to reflect this appropriate comment as follows: “Based on performance in the validation cohort, Model B had a slightly higher AUC (0.74 vs. 0.65), although the difference between the two models was not statistically significant. Model B also tended to perform better with respect to mortality separation in logistic regression. Given the small numbers of subjects in our study, the lack of a statistically significant difference in AUC, and the fact that CRP (a component of model B) was not obtained in all patients upon admission, a strong inference on the comparative performance of the two scoring systems would be problematic.”

Discussion:

You recommend the use of Model B in patients with COVID-19 for predicting in-hospital mortality. However, your analysis shows no statistically significant difference between model A and B (again: the method of comparing the two models is unclear). So, why not use model A?

This was addressed with the changes in text above in the Discussion:

What are the clinical consequences of the use of your scoring systems? In other words, should a low or a high score guide clinical decision-making?

The text in the Discussion was changed to reflect the uncertainty of the clinical consequences:

“Our study should be viewed as presenting preliminary data that would need to be followed by external validation from analysis of a larger dataset across different institutions and over a longer time range, not only to attempt to replicate the findings of the predictive models, but also to determine if the scoring results in meaningful clinical consequences.”

Removed “For reference, an AUC between … considered excellent” from the discussion section and moved it to the Methods section

Page 10: Before being able to compare your results with those from other studies, it is important to estimate the degree of selection bias. See previous comments.

We added the following disclaimer in the Discussion:

“Comparing our results with those from other studies of in-hospital mortality is problematic for several reasons, including potential selection biases among the studies, expected regional differences in patient demographics and treatment approaches, and changes over time in therapeutic modalities for this disease.”

Page 10-11: In addition to the British 4C mortality score (Knight and colleagues), the RISE UP score is also very useful for predicting short term adverse outcome in ED patients with COVID-19 (Annals of Medicine 2021;53(1):402–409 and BMJ Open 2021;11:e045141).

These studies were added to the Discussion and referenced. Thank you.

“A retrospective study of 403 adult patients seen in the Emergency Department in a combined secondary/tertiary care center in the Netherlands during the first wave of the pandemic (March through May, 2020) tested 11 prediction models of 30-day mortality as the primary outcome. (18) The investigators identified two prediction models that performed best: 1) RISE-UP (acronym for Risk Stratification in the Emergency Department in Acutely Ill Older Patients) score, which included age, heart rate, mean arterial pressure, respiratory rate, oxygen saturation, Glasgow Coma Scale (GCS), BUN, bilirubin, albumin, and lactate dehydrogenase; and 2) 4-C (Coronavirus Clinical Characterisation Consortium) score, which had been tested previously in the United Kingdom, (19) and included age, sex, co-morbidity, RR, GCS, O2 saturation, BUN, and CRP. With an AUC of 0.83 and 0.84, respectively, both performed better than ours, but were obviously more complicated in that they required entering more variables.”

Attachment

Submitted filename: Response to Reviewers 6-7-21 (1).docx

Decision Letter 1

Aleksandar R Zivkovic

30 Jun 2021

Development of a Brief Scoring System to Predict Any-Cause Mortality in Patients Hospitalized with COVID-19 Infection

PONE-D-21-06850R1

Dear Dr. Jiwa,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Aleksandar R. Zivkovic

Academic Editor

PLOS ONE

Acceptance letter

Aleksandar R Zivkovic

8 Jul 2021

PONE-D-21-06850R1

Development of a Brief Scoring System to Predict Any-Cause Mortality in Patients Hospitalized with COVID-19 Infection

Dear Dr. Jiwa:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Aleksandar R. Zivkovic

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response to Reviewers 6-7-21 (1).docx

    Data Availability Statement

    All available data are incorporated in the manuscript and attached figures. There are no other supporting data documents available.
